New cluster computer goes into operation at the Albert Einstein Institute

DAMIANA ranks 192nd in the Top500 list of the world's most powerful high-performance computers published today

June 27, 2007

With 336 Intel Xeon (Woodcrest) processors driving 168 compute nodes and delivering a measured computing power of 6.52 TFlop/s (Linpack), the cluster computer "DAMIANA" at the Max Planck Institute for Gravitational Physics (Albert Einstein Institute/AEI) will now compute highly complex simulations of black holes and collapsing neutron stars. "With the new cluster computer, we can accelerate our calculations by a factor of four to five, depending on the problem," says Professor Luciano Rezzolla, head of the 'Numerical Relativity Theory' group at the AEI. "This means we need only one and a half days for a calculation instead of a week. In addition, we can carry out more systematic studies of the sources of gravitational waves, which were out of reach until now." These calculations will make it possible to draw much clearer pictures of these high-energy phenomena in the universe than ever before. The international community of gravitational-wave researchers will benefit, as the calculations performed at the AEI will allow them to search for gravitational-wave signals much more accurately.

The Top500 list of the world's most powerful supercomputers, published today at the International Supercomputer Conference 2007 (ISC2007) in Dresden, ranks DAMIANA 192nd, making it the fifth fastest research cluster computer in Germany. The new cluster complements the supercomputers PEYOTE and BELLADONNA already in operation at the institute.


From numerical simulations of merging black holes and collapsing neutron stars, scientists can derive important information about the gravitational waves generated by these major cosmic events. Gravitational waves are the last prediction of Einstein's general theory of relativity that has not yet been directly confirmed. Currently, five interferometric gravitational-wave detectors are in operation worldwide: the German-British GEO600 project near Hannover, the three LIGO detectors in the U.S. (in Louisiana and Washington state), and the French-Italian Virgo project near Pisa, Italy. The joint ESA/NASA space mission LISA (Laser Interferometer Space Antenna) is also planned. AEI scientists are leading the GEO600 and LISA projects and are working closely with their colleagues from the LIGO project.


The cluster computer was named DAMIANA by the Numerical Relativity group led by Prof. Rezzolla - after a plant native to Latin America that is said to have euphoric and potency-enhancing effects. An increase in potency - at least in computational terms - is indeed an important goal of this new acquisition.

The cluster is particularly well suited to problems that can be readily parallelized, such as the large matrix operations that arise in numerical simulations. For this, the individual nodes of the cluster must be able to communicate with one another quickly and efficiently. The computation of Einstein's equations for astrophysically interesting events, such as mergers of black holes or neutron stars, is the main research area of the Numerical Relativity group.
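As an illustration of why such problems parallelize well, here is a minimal toy sketch in Python (not the AEI's actual simulation code, which is built around the CACTUS framework): a matrix-vector product split row-wise across worker processes, with each worker computing its block of rows independently before the results are reassembled.

```python
# Toy illustration of an easily parallelized matrix operation:
# a matrix-vector product split row-wise across worker processes.
from multiprocessing import Pool


def rows_times_vector(args):
    rows, vec = args
    # Each worker computes the dot products for its block of rows.
    return [sum(r * v for r, v in zip(row, vec)) for row in rows]


def parallel_matvec(matrix, vec, n_workers=4):
    # Split the matrix into one contiguous block of rows per worker.
    chunk = (len(matrix) + n_workers - 1) // n_workers
    blocks = [matrix[i:i + chunk] for i in range(0, len(matrix), chunk)]
    with Pool(n_workers) as pool:
        results = pool.map(rows_times_vector, [(b, vec) for b in blocks])
    # Reassemble the partial results in their original order.
    return [x for block in results for x in block]


if __name__ == "__main__":
    m = [[1, 2], [3, 4], [5, 6]]
    print(parallel_matvec(m, [1, 1], n_workers=2))  # [3, 7, 11]
```

In a real cluster code the blocks live on different nodes and exchange boundary data over the interconnect (e.g. via MPI) rather than through a shared-memory pool, which is why fast node-to-node communication matters; the decomposition idea, however, is the same.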


The new cluster at the AEI was supplied, set up and optimized by MEGWARE Computer GmbH, founded in 1990 in Chemnitz (Saxony). The company focuses on the development, sales and service of integrated IT system solutions for industry, trade and public authorities. In 2000, MEGWARE began to make a name for itself in high-performance computing, in particular with compute clusters. Several hundred such clusters have since been delivered to renowned teaching and research institutions as well as industrial companies in Germany and many other European countries. MEGWARE develops hardware components and management tools for clusters and contributes to the technological development of these supercomputers in close partnership with international manufacturers.

Numerical simulations as a virtual laboratory

The Numerical Relativity research group at the Max Planck Institute for Gravitational Physics investigates the modelling of sources of gravitational waves and simulates relativistic astrophysical events, such as the coalescence of two black holes. During these catastrophes in the universe, enormous amounts of energy are emitted within a very short time, both as electromagnetic radiation and as gravitational waves. These events can only be described using Einstein's theory of general relativity and are simulated using modern supercomputers.

Over the years, the research group has developed numerical codes, such as the CACTUS code and the WHISKY code, for the simulation of such astrophysical processes, including orbiting black holes and neutron stars, as well as accretion disks (matter disks around black holes or neutron stars). The researchers use these codes as virtual astrophysical laboratories with which they can create the extreme astrophysical conditions that cannot be achieved in normal laboratories on Earth.

The cluster computer: technical data

DAMIANA is a high-performance Linux computing cluster. It consists of 168 compute nodes, each with two Intel Xeon 5160 (Woodcrest) processors clocked at 3.0 GHz, 8 GB of RAM and 250 GB of local disk storage. Five storage nodes with a total capacity of approximately 50 TB hold the enormous amounts of data produced by the numerical simulations. A head node lets users communicate with the cluster and serves as the management basis for the entire system. Three networks ensure that the individual computers can communicate with one another, each network with its own specific task.

The heart of the high-performance cluster is its network and, with it, the switch that provides interprocess communication: an InfiniBand switch with an aggregate bandwidth of up to 11.5 Tbit/s. The other two networks are used for system administration and for integrating the storage nodes into the cluster. Since typical numerical simulations run for several days or even weeks, jobs are managed by a batch system. A user logs into the head node to compile program code or to display results graphically. The CACTUS code, a flexible toolkit developed at the AEI, plays a key role in all computational tasks of the AEI scientists.

Technical data

168 compute nodes, each with
2 Intel Xeon 5160 (Woodcrest) processors at 3.0 GHz / 1333 MHz FSB (front-side bus)
8 GB RAM
250 GB storage capacity
3 network connections (2 x Gigabit Ethernet, 1 x InfiniBand)
IPMI 2.0 card

5 storage nodes, each with
2 Intel Xeon 5160 (Woodcrest) processors at 3.0 GHz / 1333 MHz FSB (front-side bus)
8 GB RAM
2 x 250 GB internal disks
RAID controller providing a gross storage capacity of 10.5 TB (14 x 750 GB disks)
3 network connections (2 x Gigabit Ethernet, 1 x InfiniBand)
IPMI 2.0 card
Redundant power supply units

1 head node (also called the login, access or management node) with
2 Intel Xeon 5160 (Woodcrest) processors at 3.0 GHz / 1333 MHz FSB (front-side bus)
8 GB RAM
3 x 500 GB internal disks
3 network connections (2 x Gigabit Ethernet, 1 x InfiniBand)
IPMI 2.0 card
Redundant power supply units

The Scientific Linux 4.4 operating system is installed on all computers.

Compilers: GNU C++, Intel C++, Intel Fortran
Programming tool: Intel Cluster Toolkit
Batch system: OpenPBS
Monitoring: ClusterWare Appliance (Megware)
Management software: Clusterware (Megware)

Peak performance of the cluster

The theoretical peak performance is 8.3 TFlop/s. How much of this is achieved in practice is revealed by the benchmarks; we expect an efficiency of over 80%.
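The quoted figure can be checked with back-of-envelope arithmetic. A sketch, assuming each Woodcrest core retires four double-precision floating-point operations per cycle (an assumption about the SSE units that the article does not state): the estimate lands near, though slightly below, the quoted 8.3 TFlop/s, and reproduces a Linpack efficiency of roughly 80%.

```python
# Back-of-envelope peak-performance estimate for DAMIANA.
# Assumption (not stated in the article): each Woodcrest core retires
# 4 double-precision flops per cycle via its SSE units.
nodes = 168
cores_per_node = 2 * 2      # two dual-core Xeon 5160 CPUs per node
clock_hz = 3.0e9            # 3.0 GHz
flops_per_cycle = 4         # assumed per-core throughput

peak_tflops = nodes * cores_per_node * clock_hz * flops_per_cycle / 1e12
print(f"theoretical peak: {peak_tflops:.2f} TFlop/s")

# Measured Linpack performance quoted in the article:
linpack_tflops = 6.52
print(f"efficiency: {linpack_tflops / peak_tflops:.0%}")
```

This estimate gives about 8.06 TFlop/s and an efficiency of about 81%, consistent with the "over 80%" stated above; the exact quoted peak of 8.3 TFlop/s may reflect different rounding assumptions.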
