Infrastructure

Grid infrastructure consists of middleware services that integrate computational and storage resources, sensors, scientific instruments and databases, creating a distributed environment in which they can be shared. This distributed environment of networks and software is also called an electronic infrastructure, or e-Infrastructure, while the technologies and services it provides enable new methods of research collaboration known as electronic science, or eScience. The combination of high-speed networks and broadband access, grid middleware services and advanced virtual collaboration technologies is expected to lead to the creation of a World Wide Grid. The creation of such an infrastructure, which will provide integrated communication and information processing services, is the main objective of the European Research Area.

HellasGrid is the largest grid infrastructure in South-Eastern Europe and one of the most sustainable grid infrastructures at the European level. The main purpose of HellasGrid is to provide High Performance Computing and High Throughput Computing services to the Greek academic and research community. The resources of the infrastructure are therefore used by Greek researchers and by researchers involved in various European projects. In recent years, a steadily growing number of researchers from various scientific fields (high energy physics, computational chemistry, biomedicine, informatics, meteorology, seismology, etc.) have used the HellasGrid infrastructure to cover their needs for computational and storage resources. Access to the HellasGrid infrastructure is free for the Greek academic and research community, following a simple registration procedure which is described here.

Figure 1 – HellasGrid infrastructure

The HellasGrid infrastructure is composed of six (6) clusters of computational and storage resources located in Athens (HG-01-GRNET, HG-02-IASA, HG-06-EKT), Thessaloniki (HG-03-AUTH), Patras (HG-04-CTI-CEID) and Heraklion (HG-05-FORTH). In addition, the HellasGrid Certification Authority has been established to issue digital certificates for users and servers, and two teams have been set up to support Greek users: the User-Support Team and the Applications-Support Team.
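
A certificate issued by the HellasGrid Certification Authority is a standard X.509 certificate. Purely as an illustration (this is not part of any official HellasGrid tool or procedure), the following C sketch uses the OpenSSL library to print the subject and expiry date of such a certificate; the path ~/.globus/usercert.pem used in the comments is only an assumption based on common grid conventions.

/* cert_info.c - print the subject and expiry of a PEM-encoded X.509 certificate.
 * Illustrative sketch only; not an official HellasGrid utility.
 * Build: gcc cert_info.c -lcrypto -o cert_info
 * Run:   ./cert_info ~/.globus/usercert.pem   (assumed certificate location)
 */
#include <stdio.h>
#include <openssl/asn1.h>
#include <openssl/bio.h>
#include <openssl/pem.h>
#include <openssl/x509.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <certificate.pem>\n", argv[0]);
        return 1;
    }

    FILE *fp = fopen(argv[1], "r");
    if (!fp) {
        perror("fopen");
        return 1;
    }

    /* Parse the PEM file into an X509 structure. */
    X509 *cert = PEM_read_X509(fp, NULL, NULL, NULL);
    fclose(fp);
    if (!cert) {
        fprintf(stderr, "could not parse %s as a PEM certificate\n", argv[1]);
        return 1;
    }

    BIO *out = BIO_new_fp(stdout, BIO_NOCLOSE);

    /* Subject distinguished name: the identity that the CA certified. */
    BIO_printf(out, "Subject: ");
    X509_NAME_print_ex(out, X509_get_subject_name(cert), 0, XN_FLAG_ONELINE);

    /* Expiry date ("not after"): the certificate must be renewed before this date. */
    BIO_printf(out, "\nExpires: ");
    ASN1_TIME_print(out, X509_get_notAfter(cert));
    BIO_printf(out, "\n");

    BIO_free(out);
    X509_free(cert);
    return 0;
}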

The HellasGrid infrastructure is continuously adapted and upgraded according to the needs of its users, and the expansion of its resources is regularly examined in order to cover future needs.

The following overview provides hardware details for each site of the HellasGrid infrastructure.

All users of our infrastructure are kindly requested to include the following acknowledgement in their publications:

“This work used the European Grid Infrastructure (EGI) through the National Grid Infrastructures NGI_GRNET – HellasGRID.”

 

 

HG-01-GRNET
Location: Demokritos, National Center of Scientific Research, Agia Paraskevi Attikis, GR-15310, Athens
Resources Provided: 4 Intel Xeon @ 2.8 GHz
Memory per Core: 500 MB / core
Cores per Machine: 1
Uplink: Dedicated 1 Gigabit
Interconnect: 2 GBit Ethernet (separate for MPI and shared storage)
Storage: Total scratch space shared between nodes: 800 GB / Disk storage on DPM SE: 5 TB
IPv6 Support: Yes
Operating System: Scientific Linux 4.5
Middleware: UMD-3
Maximum Job Execution Time: 1 week wallclock for SEE and complex VOs, 2 days for the rest of the VOs
MPI Support: openmpi-1.3, mpich2-1.0
Batch System: torque / maui

HG-02-IASA
Location: Institute of Accelerating Systems and Applications (IASA) of the University of Athens, Panepistimiopolis Zografou, Physics Building 9
Resources Provided: 118 Intel Xeon @ 3.4 GHz for computation
Memory per Core: 1 GB / core, 2 GB / node
Cores per Machine: 2
Uplink: Dedicated 1 Gigabit
Interconnect: 2 GBit Ethernet (separate for MPI and shared storage)
Storage: Total scratch space shared between nodes: 900 GB / Permanent storage on DPM SE: 10 TB
IPv6 Support: Yes
Operating System: Scientific Linux 5.8
Middleware: UMD-3
Maximum Job Execution Time: 1 week wallclock for SEE and complex VOs, 2 days for the rest of the VOs
MPI Support: openmpi-1.4, mpich2-1.2
Batch System: torque / maui

HG-03-AUTH
Location: Aristotle University of Thessaloniki, University Campus, Thessaloniki, Greece
Resources Provided: 118 Intel Xeon @ 3.4 GHz for computation
Memory per Core: 1 GB / core, 2 GB / node
Cores per Machine: 2
Uplink: Dedicated 1 Gigabit
Interconnect: GBit Ethernet
Storage: Total scratch space shared between nodes: 1 TB / Permanent storage on DPM SE: 8 TB
IPv6 Support: No
Operating System: CentOS 6.4
Middleware: UMD-3
Maximum Job Execution Time: 1 week wallclock for SEE and complex VOs, 3 days for the rest of the VOs
MPI Support: mpich2-1.2.1
Batch System: torque / maui

 

HG-04-CTI-CEID
Location: Department of Computer Engineering & Informatics, University of Patras, Rio, Greece
Resources Provided: 116 Xeon @ 3.4 GHz for computation
Memory per Core: 1 GB / core, 2 GB / node
Cores per Machine: 2
Uplink: Dedicated 1 Gigabit
Interconnect: 2 GBit Ethernet (separate for MPI and shared storage)
Storage: Permanent storage on DPM SE: 4.2 TB
IPv6 Support: Yes
Operating System: Scientific Linux 5.5
Middleware: UMD-3
Maximum Job Execution Time: 1 week wallclock for SEE and complex VOs, 3 days for the rest of the VOs
MPI Support: openmpi-1.4, mpich2-1.2
Batch System: torque / maui

HG-05-FORTH
Location: Foundation for Research and Technology Hellas (FORTH), Institute of Computer Science, N. Plastira 100, Vassilika Vouton, GR-700 13, Heraklion, Crete, Greece
Resources Provided: 120 Xeon @ 3.4 GHz for computation
Memory per Core: 1 GB / core, 2 GB / node
Cores per Machine: 2
Uplink: Dedicated 1 Gigabit
Interconnect: 2 GBit Ethernet (separate for MPI and shared storage)
Storage: Total scratch space shared between nodes: 900 GB / Permanent storage on DPM SE: 3.3 TB
IPv6 Support: Yes
Operating System: Scientific Linux 5.7
Middleware: UMD-3
Maximum Job Execution Time: 1 week wallclock for SEE and complex VOs, 2 days for the rest of the VOs
MPI Support: openmpi-1.4, mpich2-1.2
Batch System: torque / maui

HG-06-EKT
Location: National Documentation Centre (EKT), 48 Vassileos Constantinou Av., GR-11635, Athens
Resources Provided: 12 single-core Xeon @ 3.40 GHz, 100 quad-core Xeon E5405 @ 2.00 GHz
Memory per Core: 1 GB / core for the single-core Xeons, 2 GB / core for the quad-core Xeons
Cores per Machine: 2 on the single-core machines, 8 on the quad-core machines
Uplink: Dedicated 2 Gigabit
Interconnect: 2 GBit Ethernet (separate for MPI and shared storage)
Storage: Total scratch space shared between nodes: 1 TB / Disk storage on dCache SE: 8 TB / Tertiary storage on dCache SE: 40 TB
IPv6 Support: Yes
Operating System: Scientific Linux 5.7
Middleware: UMD-3

 

 

GR-09-UOA
Location: Department of Informatics and Telecommunications / National and Kapodistrian University of Athens, Panepistimiopolis, Ilissia, Athens 15784
Resources Provided: 3 x Intel Core 2 Duo E6600 (4M cache, 2.40 GHz, 2 cores) / 1 x Intel Xeon E5620 (12M cache, 2.40 GHz, 4 cores, 8 threads)
Memory per Core: 1 GB / core, 2 GB / node
Cores per Machine: 2
Uplink: Dedicated 2 Gigabit
Interconnect: 2 GBit Ethernet (separate for MPI and shared storage)
Storage: Total scratch space shared between nodes: 500 GB / Permanent storage on DPM SE: 300 GB
IPv6 Support: No
Operating System: Scientific Linux 5.7
Middleware: UMD-1
Maximum Job Execution Time: 3 days
MPI Support: No
Batch System: torque / maui

GR-10-UOI
Location: EKEP Laboratory, University of Ioannina, Panepistimoupoli Ioanninon, Ioannina, Greece
Resources Provided: 120 cores Opteron 248 @ 2.2 GHz (118 for computation)
Memory per Core: 2 GB / core, 4 GB / node
Cores per Machine: 4
Uplink: Dedicated 1 Gigabit
Interconnect: 1 GBit Ethernet
Storage: 876 GB total scratch space / 876 GB permanent storage on DPM
IPv6 Support: No
Operating System: Scientific Linux 5.4
Middleware: UMD-2
Maximum Job Execution Time: 1 week wallclock for SEE and complex VOs, 3 days for the rest of the VOs
MPI Support: openmpi-1.4 / mpich2-1.2
Batch System: torque / maui

GR-11-UPATRAS
Location: University of Patras, Department of Electrical and Computer Engineering, Rio, Patras, Greece
Resources Provided: 4 Intel Xeon E5530 @ 2.40 GHz for computation
Memory per Core: 1.5 GB / core, 6 GB / node
Storage: Permanent storage on DPM SE: 1 TB
IPv6 Support: No
Operating System: Scientific Linux 5.5
Middleware: UMD-1, UMD-2
MPI Support: No
Batch System: torque / maui

 

GR-01-AUTH
Location: Aristotle University of Thessaloniki, University Campus, Thessaloniki, Greece
Resources Provided: 500 AMD Opteron, 100 Intel Xeon for computation
Memory per Core: 1 GB / core minimum, 4 GB / core maximum
Cores per Machine: minimum 4, maximum 64
Uplink: AUTHNET, shared 1 Gigabit
Interconnect: GBit Ethernet
Storage: Total scratch space shared between nodes: 1 TB / Permanent storage on DPM SE: 9 TB
IPv6 Support: No
Operating System: CentOS 5.9
Middleware: UMD-2
Maximum Job Execution Time: 1 week wallclock for SEE and complex VOs, 3 days for the rest of the VOs
MPI Support: mpich2-1.2.1
Batch System: torque / maui
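
Several of the sites above list openmpi or mpich2 versions under MPI Support. As a rough illustration of the kind of program those entries refer to (a hypothetical example, not an official HellasGrid sample), the following minimal MPI program in C can be compiled with the mpicc wrapper provided by either MPI flavour and launched with mpirun:

/* mpi_hello.c - minimal MPI example, illustrative only.
 * Build: mpicc mpi_hello.c -o mpi_hello
 * Run on 4 processes: mpirun -np 4 ./mpi_hello
 * (exact submission details depend on the site's batch system, e.g. torque / maui)
 */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size, name_len;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */
    MPI_Get_processor_name(name, &name_len);

    printf("rank %d of %d running on %s\n", rank, size, name);

    MPI_Finalize();
    return 0;
}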