Scientists from the Institute of Accelerating Systems & Applications (IASA) have used the HellasGrid infrastructure and the EGI Grid to prepare the equilibrated systems to be used in the PRACE European project (Work Package 7.4 "Benchmarking" framework).
A good estimate of the performance and scaling of these applications on up to several thousand cores requires well-equilibrated initial configurations. For each package a number of physical systems were prepared, covering various simulation system sizes and methods, and each configuration was equilibrated on the grid infrastructure. A minimal run of 10³ steps for each Gromacs and NAMD case (needed for reproducible performance and scaling figures) takes about 10-30 minutes on 1024-8192 cores of a Tier-0 system (170-4096 core hours). A typical equilibration needs, depending on the system, more than 10⁵ steps. For Cp2k, much more time is necessary to obtain an equilibrated initial configuration: each step for the large cases takes ~2 hours on 2048 cores of the BG/P machine at Juelich.
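The core-hour figures quoted above follow directly from wall-clock time and core count; a minimal sketch of the arithmetic (function name is our own, not from the report):

```python
# Core-hour cost of a run: wall-clock minutes x cores, converted to hours.
def core_hours(wall_minutes, cores):
    return wall_minutes * cores / 60

# The two extremes quoted for a minimal 10^3-step benchmark run:
print(core_hours(10, 1024))   # ~170 core hours (10 min on 1024 cores)
print(core_hours(30, 8192))   # 4096 core hours (30 min on 8192 cores)
```

Scaling the same arithmetic to the >10⁵ equilibration steps shows why these runs were moved to the grid rather than consuming Tier-0 allocations.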
In all the aforementioned cases, the equilibration runs were performed as chains of save/restart jobs on the HellasGrid infrastructure, using 8 to 32 cores per job. These applications, and a few more (Quantum Espresso, Towhee), were ported and optimized to run on the existing HellasGrid architectures. Gromacs has an internal CPU-detection mechanism, so its executable optimizes itself for the different architectures. For NAMD and Cp2k, different executables were produced to match the existing hardware; currently two major versions exist: one that runs on all HellasGrid sites, optimized for CPUs with SSE2 instructions, and one for CPUs with SSE4.1 instructions (HG-06). In all cases openmpi-1.4.3 was used as the parallel environment. For NAMD and Cp2k, the free unsupported version of the Intel compilers was used, giving an additional ~2x performance boost. To avoid installing the Intel compiler suite on the HellasGrid clusters, the Intel libraries were linked statically. The additional math libraries required by these packages (fftw2, fftw3, gsl, lapack, Atlas, libint, Blacs, Scalapack) were compiled and installed on the UI (User Interface service) machine(s) and likewise linked statically, so installation on the WNs (Worker Nodes) was not necessary.
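The two executable flavours (SSE2 for all sites, SSE4.1 for HG-06) could be selected at build or dispatch time from the CPU's feature flags. A minimal sketch, assuming GCC-style `-msse2`/`-msse4.1` options; the helper function and the flag strings are illustrative, not taken from the actual build scripts:

```shell
#!/bin/sh
# pick_simd_flags: given a CPU flags string (as found in /proc/cpuinfo),
# echo the compiler option matching the best supported SIMD level.
pick_simd_flags() {
    case "$1" in
        *sse4_1*) echo "-msse4.1" ;;   # HG-06 style nodes
        *sse2*)   echo "-msse2"   ;;   # all other HellasGrid sites
        *)        echo ""         ;;   # no SIMD-specific optimization
    esac
}

# Example: a node advertising SSE4.1 support
pick_simd_flags "fpu sse sse2 sse4_1"   # prints -msse4.1
```

In practice the flags string would come from `grep flags /proc/cpuinfo` on the worker node, and the chosen option would be passed to the compiler invocation for NAMD or Cp2k.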
The HellasGrid infrastructure provided the CPU resources needed to prepare the equilibrated initial configurations. These packages will soon be available on the HellasGrid clusters on a per-Virtual-Organization basis (this is work in progress).
- Marios Chatziangelou, IASA, mhaggel (at) iasa.gr
- Dimitris Dellis, IASA, ntell (at) iasa.gr
- HellasGrid Application Support Team, IASA, application-support (at) hellasgrid.gr