Compute Clusters

PAOC's compute clusters provide students and researchers with enviable computing resources.


PAOC currently supports a number of systems, principal among which are the ACES, Darwin, eLorenz and Svante compute clusters.


ACES

ACES is a platform for cross-disciplinary collaboration between Earth science researchers and computer science researchers at MIT. For more information, visit the cluster website.


Darwin

Named for the Darwin Project, this facility was created as a resource for modeling upper-ocean biological diversity and plankton population changes. The computational core is a 512-core cluster capable of 6 TFLOPS, with a terabyte of memory and half a petabyte of storage. The facility comprises:

  • a 128-node compute cluster (512 cores) with a high-speed Myrinet interconnect
  • a roughly 500TB high-performance GPFS parallel filesystem
  • a 60-panel display system, 2560x1600 pixels per panel (roughly 240 megapixels in total)
  • 10-gigabit networking infrastructure linking locations around campus and external project partners
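The display figure in the list above can be checked with a little arithmetic (panel count and resolution taken from the list; "megapixel" here meaning 10^6 pixels):

```python
# Aggregate resolution of the Darwin display wall:
# 60 panels, each 2560 x 1600 pixels.
panels = 60
width, height = 2560, 1600

total_pixels = panels * width * height
print(total_pixels)              # 245760000 pixels
print(total_pixels / 1_000_000)  # ~245.8 megapixels, quoted as "240 Megapixel"
```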

For more information, visit the Darwin Computational Facility wiki.

eLorenz

The eLorenz cluster comprises:

  • 4 x dual X5550 (quad-core) @ 2.67GHz, 24GB/node
  • 4 x dual X5650 (hex-core)  @ 2.67GHz, 24GB/node
  • low-latency InfiniBand network
  • 13TB glusterfs filesystem (nufa)
  • 80 real cores = 160 hyperthreaded cores

To request a username on elorenz, send an e-mail request. For more information, visit the eLorenz Cluster wiki.


Svante

Named after Svante Arrhenius, the Swedish scientist who first speculated about the link between fossil fuel emissions and the greenhouse effect, the Svante cluster is available to students, post-docs, and researchers affiliated with the Joint Program on the Science and Policy of Global Change or the Center for Global Change Science.

  • 64 compute nodes, using either the Intel "Nehalem" or "Sandy Bridge" microarchitecture, running at 2.60-3.47 GHz and equipped with 12-64 GB RAM per node (approximately 750 physical cores in total)
  • Six dedicated high-capacity file server nodes; total disk storage capacity is over 400 TB.
  • Compute and file server nodes interconnected by a low-latency InfiniBand fabric.

For more information or to request an account, please send an e-mail inquiry.


Viz-Walls

To provide a means of visualizing large single-image fields, or of simultaneously comparing multiple synchronous fields, researchers in PAOC have been developing and installing a growing number of "viz-walls".

The biggest installation, Viz-Wall-1, is a 10 x 6 matrix of 2560 x 1600 pixel LCD panels in the Stata Center. Funded as part of the Darwin Project, it provides an unparalleled tool for displaying the wealth of high-resolution images and movies generated in the course of PAOC's research activities. It also serves as a compelling outreach tool, housed as it is in a non-Earth-Sciences-centric part of campus. Under the hood (or rather, behind the monitors), a head node manages processing and synchronous distribution of the component image fragments to the compute nodes, with each pair of screens sharing a single compute node.
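The head node's job of carving a frame into per-screen fragments can be sketched as follows. This is an illustrative sketch, not the wall's actual software; the function names and the column-wise screen pairing are assumptions for illustration:

```python
# Sketch of how a head node might tile a frame for a 10 x 6 wall of
# 2560 x 1600 panels, with each pair of adjacent screens served by
# one compute node (30 nodes in total). Hypothetical, not the real code.
COLS, ROWS = 10, 6          # screen grid
W, H = 2560, 1600           # pixels per screen

def fragment(col, row):
    """Pixel rectangle (x, y, w, h) of the full frame shown on one screen."""
    return (col * W, row * H, W, H)

def node_for(col, row):
    """Compute-node index: screens paired column-wise, two per node (assumed)."""
    return row * (COLS // 2) + col // 2

# Distribution table the head node could use: node index -> its two fragments.
table = {}
for row in range(ROWS):
    for col in range(COLS):
        table.setdefault(node_for(col, row), []).append(fragment(col, row))
```

With this layout the bottom-right screen (column 9, row 5) maps to node 29 and displays the rectangle starting at pixel (23040, 8000).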

The initial 4 x 4 screen prototype, Viz-Wall-0, can be found on the 16th floor of the Green Building, where it is used as a teaching tool in the Synoptic Lab.

Viz-Wall-2 (Viz-Wall-1's 2 x 2 screen younger sibling) is located on the 15th floor of the Green Building. Field tests of a touch-sensitive control screen are ongoing, so feel free to come and play...

The LCD walls are designed as a community facility. If you are interested in displaying information on them please contact Chris Hill or Oliver Jahn.