Designing a common European digital service catalogue for research: take part!

The eInfraCentral project (European E-Infrastructure Services Gateway) opened an online survey to gather information about all digital services for research from independent providers, with the aim of creating a single entry point to European e-infrastructure services. eInfraCentral's mission is to ensure that by 2020 a broader set of users benefits from European infrastructures.

EGI's contribution to the project focuses on requirements gathering and on harmonising the classification of services. EGI will also provide advice on the portal specification and is one of the advocates for the common service catalogue, working to convince other e-infrastructures to share their service catalogues.

This survey is designed for digital service providers, their users, customers and interested stakeholders, to help identify the requirements and needs for the development of a common e-infrastructure service catalogue in Europe.

The survey is open until 31 May 2017.

Take part now!

All FitSM Training Levels are now available

We are happy to announce that registration is open for all four FitSM training and certification levels, including the expert courses. The trainings are organised by the EGI Foundation and will take place throughout June 2017 at the EGI Foundation headquarters in Amsterdam.

The available courses and dates are the following:

6-9 June 2017
19-23 June 2017

More details on the courses and prices are accessible on the EGI website.

For specific interests and questions, feel free to send an e-mail to

We are looking forward to greeting you in Amsterdam!

Register now to the EGI Conference and the INDIGO Summit

The EGI Conference 2017 will take place in Catania, Italy (9-12 May) in partnership with the INDIGO Summit, organised by the INDIGO DataCloud project. The events are hosted by INFN Catania, part of the Italian National Institute for Nuclear Physics.

The EGI Conference 2017 will be the EGI Community’s main event of 2017 and will be the last meeting organised in the context of the EGI-Engage project. The INDIGO Summit 2017 will be the flagship event of the INDIGO-DataCloud project, with a focus on user engagement and the INDIGO service catalogue.

The online registration for the events will close on April 28. To register, please fill in the online registration form.

The EGI Conference and INDIGO Summit 2017 will take place at the Le Ciminiere congress centre in Catania.

Participants will also be able to register on site, during the events.

Register now!

Creation of a High Performance Computing (HPC) system

The Greek Research and Technology Network (GRNET) is taking the lead in the field of supercomputing infrastructures with the creation of the first national High Performance Computing (HPC) system for the support of large-scale scientific applications. The procurement and installation of the new system was undertaken by COSMOS Business Systems S.A. in cooperation with IBM, following an open international tender conducted by GRNET S.A. The new infrastructure is expected to play an important role in the development and advancement of scientific research in Greece and in South-Eastern Europe.


Going beyond grid to enable life science data analysis

Life sciences are rapidly transitioning towards the new era of Big Data, where data, algorithms and knowledge are becoming increasingly available for all. Ever since 2007, when sequencers began producing flurries of data, life sciences have been steadily moving towards this 'Fourth Paradigm', i.e. the analysis of massive data sets. EGI has been a key factor in this transition, and will continue to play a critical role in shaping research in the life sciences.

However, it is evident that, besides developing new applications and improving existing ones, the life sciences and ICT communities need to capitalize on their synergies to strengthen the disciplines at their border. To this end, we will host a thorough discussion of the expected trends, applications and needs of the communities at the upcoming EGI Community Forum in Helsinki. This will take the form of a dedicated workshop and networking session, aptly named "Going beyond grid to enable life science data analysis". Going beyond the traditional concept of a workshop and showcasing the role of e-infrastructures and networking platforms, our aim is to promote a discussion and set up a brainstorming environment between the two communities, working towards concrete collaborations and sustainable synergies.

The workshop will include invited speakers both from key life science infrastructures (such as ELIXIR, BioMedBridges and LifeWatch, among others) and from high-profile research groups (such as EBI, CNRS and CSC, among others). This will provide participants with a clear overview of the current state of the art in life science infrastructures, and help identify future paths and outline critical collaborations.

This discussion phase will expand from the workshop to a dedicated networking session. The aim is to provide clear proposals for possible collaborations and suggestions for future EGI actions.


Workshop URL:

Networking URL:

IT Engineer – Support Specialist (code SS101)

The Scientific Computing Center of the Aristotle University of Thessaloniki (AUTH) wishes to add to its staff a new IT engineer to support the users, scientific applications and services of the Center (Scientific Support Unit).

The user & application support team is responsible for:

  • providing and keeping up to date the documentation of the offered services
  • the smooth operation of the user helpdesk
  • the support of scientific applications from all scientific fields of AUTH (application porting)
  • planning and delivering user training seminars
  • the Center's dissemination and outreach activities within and beyond AUTH
  • the study and optimisation of scientific applications on supercomputing infrastructures

The ideal candidate should have the following qualifications:

Required qualifications:

  • good knowledge of programming in C/C++ and/or FORTRAN
  • good knowledge of shell scripting (e.g. bash, python, ruby)
  • basic knowledge of Linux (preferably RHEL/CentOS/Scientific/Fedora Linux)
  • the ability to identify, troubleshoot and resolve problems
  • very good oral and written communication skills in both Greek and English
  • willingness to learn new technologies & pursue continuous training
  • the ability to take initiative, set priorities, and organise and carry out multiple tasks

Desired qualifications:

  • basic knowledge of numerical analysis
  • knowledge of parallel programming (MPI, OpenMP)
  • knowledge of programming for GPU infrastructures (CUDA, OpenCL, OpenACC)
  • participation in open-source software development teams
  • knowledge of application profiling & benchmarking
  • experience in submitting and managing batch-type jobs
  • teaching experience
  • completed military service obligations

Interested candidates may submit their CV to the address jobs at by Friday 7 September, quoting the position code in the subject of the message.

Computer Systems Administrator – Systems Engineer (code SE102)

The Scientific Computing Center of the Aristotle University of Thessaloniki (AUTH) wishes to add a computer systems engineer/administrator to its Infrastructure Operations Team.

The members of the Infrastructure Operations Team:

  • look after the smooth operation of the infrastructure as well as its continuous improvement and growth
  • have as their primary concern the uninterrupted operation of the offered services (24×7) and their continuous improvement
  • participate in Greek and international collaboration groups
  • contribute to the improvement of research and education at the institutional, national and international level through the development and operation of reliable, pioneering e-infrastructures
  • work in a pleasant environment that supports and promotes creativity and initiative

The ideal candidate should have the following qualifications:

Required qualifications:

  • at least 3 years of experience in server administration at both the hardware and software level
  • willingness to learn new technologies & pursue continuous specialisation
  • very good knowledge of LINUX/UNIX operating systems, particularly the administration of RHEL/CentOS/Scientific Linux distributions
  • very good knowledge of shell scripting (bash, perl, ruby)
  • very good knowledge of the TCP/IP network protocols
  • effective oral and written communication in both Greek and English
  • the ability to define & implement policies, procedures and goals
  • the ability to troubleshoot and resolve problems
  • the ability to take initiative, set priorities, and organise and carry out multiple tasks

Desired qualifications:

  • experience in administering parallel storage systems (e.g. GPFS/Lustre/Gluster)
  • experience in managing virtual machines using libVirt/XEN/KVM
  • experience in using and developing central tools for automating the administration of computing infrastructures
  • experience in using and extending central monitoring tools (e.g. Nagios/Monit/Ganglia/Cacti)
  • active participation in open-source software development teams

Interested candidates may submit their CV to the address jobs at by Friday 7 September, quoting the position code in the subject of the message.

Investigating the nature of explosive percolation transition

The Laboratory of Computational Physics is actively involved in the investigation of phase transitions in various natural and artificial systems. Currently, much effort is concentrated on determining the type of phase transition of a new competitive model named "explosive" percolation: when sequentially filling an empty lattice with occupied sites, instead of randomly occupying a site or bond (as in the classical paradigm), we choose two candidates and investigate which of them leads to the smaller cluster. That candidate is kept as a new occupied site on the lattice, while the second one is discarded (Figure 1). This procedure considerably slows down the emergence of the giant component, which now forms abruptly, hence the term "explosive".
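The selection rule described above can be sketched in a few lines of Python. This is a minimal, illustrative implementation (the union-find bookkeeping and the small lattice driver are our own choices for the sketch, not the Laboratory's production code):

```python
import random

def find(parent, x):
    # Path-compressing find for the union-find structure tracking clusters.
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def would_be_size(parent, size, site, neighbours, occupied):
    # Cluster size that would result from occupying `site`: the site itself
    # plus all distinct clusters currently adjacent to it.
    roots = {find(parent, n) for n in neighbours(site) if n in occupied}
    return 1 + sum(size[r] for r in roots)

def achlioptas_step(parent, size, neighbours, occupied, empty):
    # Sum-rule step: draw two trial sites, occupy the one that would produce
    # the smaller cluster, and leave the other one empty for now.
    a, b = random.sample(sorted(empty), 2)
    sa = would_be_size(parent, size, a, neighbours, occupied)
    sb = would_be_size(parent, size, b, neighbours, occupied)
    chosen = a if sa <= sb else b
    parent[chosen], size[chosen] = chosen, 1
    occupied.add(chosen)
    empty.discard(chosen)
    for n in neighbours(chosen):          # merge with adjacent clusters
        if n in occupied:
            ra, rb = find(parent, chosen), find(parent, n)
            if ra != rb:
                parent[rb] = ra
                size[ra] += size[rb]

# Fill a small L x L lattice until fewer than two empty sites remain.
L = 16
def neighbours(site):
    x, y = site
    return [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= x + dx < L and 0 <= y + dy < L]

parent, size, occupied = {}, {}, set()
empty = {(x, y) for x in range(L) for y in range(L)}
while len(empty) >= 2:
    achlioptas_step(parent, size, neighbours, occupied, empty)
```

Tracking the size of the largest cluster over these steps is what reveals the delayed, abrupt emergence of the giant component.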

Achlioptas Process

Figure 1: Achlioptas Process according to the sum rule (APSR) for site percolation. White cells correspond to unoccupied sites, while colored cells correspond to occupied sites. Different colors (red, green, gray, blue) indicate different clusters. (a) We randomly select two trial unoccupied sites (yellow), denoted A and B, one at a time. We evaluate the sizes of the clusters that would form and contain sites A and B, \(s_A\) and \(s_B\) respectively. In this example \(s_A = 10\) and \(s_B = 14\). (b) According to the Achlioptas Process, we keep site A, which leads to the smaller cluster, and discard site B.


Following the first publication of Achlioptas et al., a debate was initiated among various teams on whether the transition is continuous or discontinuous. Contributing to these considerations, we have investigated explosive site percolation using both the product and the sum rules. It was found that the exponent \(\beta/\nu\) is vanishingly small in both cases, pointing towards the continuity of the transition. We also performed a numerical analysis for the case of a reverse Achlioptas process (Figure 2). It was shown that for finite systems there is a hysteresis loop between the reverse and forward procedures (Figure 2). This loop vanishes in the thermodynamic limit, giving strong evidence for the continuity of the "explosive" site percolation transition (Figure 3). Moreover, "explosive" site and bond percolation seem to belong to different universality classes.


Figure 2: Reverse Achlioptas Process (AP1) for site percolation according to the sum rule. Blue cells correspond to occupied sites and white cells to unoccupied sites. Initially, the lattice is fully occupied. (a) An instance of the process. We randomly choose two trial sites (yellow), denoted A and B, and remove them from the lattice. (b) The clusters formed after the removal. (c) We place site A back in the lattice and calculate the size of the cluster to which it belongs, \(s_A = 16\). (d) We do the same for site B and calculate \(s_B = 26\). We remove site A, which leads to the formation of the smaller cluster, and keep site B.

Reverse Achlioptas Process (1)
Reverse Achlioptas Process (2)


Hysteresis Loop

Figure 3: (a) Hysteresis loop between a reverse (red dots) and the forward (black squares) Achlioptas process for a \(700 \times 700\) system. (b) The loop vanishes in the thermodynamic limit.


Simulations were performed on the EGI. A diagram of the number of jobs and CPU hours consumed per month is shown in Figure 4. We made extensive use of the gLite parametric job submission mechanism, using the different realizations of the system as the parameter. On average, more than 1000 jobs were submitted per simulation for each lattice size. For a typical \(1000 \times 1000\) lattice, the average time consumed by one run approached 172 minutes. Had we performed the calculations on a single CPU, it would have taken about 120 days to obtain complete results for just one lattice size. Using the EGI thus reduced this time to approximately 172 minutes, a time gain of the order of \(10^3\). Moreover, given the availability of more resources, this gain may be even higher. This is a very important feature, because it lets us numerically analyze systems of the order of \(10^6\) sites in a tolerable amount of time.
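The figures quoted above can be checked with a line of arithmetic (1000 jobs of roughly 172 minutes each):

```python
jobs_per_size = 1000      # parametric jobs submitted per lattice size
minutes_per_run = 172     # average wall time of a single realization

sequential_minutes = jobs_per_size * minutes_per_run   # one CPU, one job at a time
sequential_days = sequential_minutes / (60 * 24)
speedup = sequential_minutes / minutes_per_run         # all jobs run concurrently
print(round(sequential_days), int(speedup))            # -> 119 1000
```

The roughly 119 days of sequential compute matches the "120 days" quoted in the text, and the concurrent execution of all parametric jobs is what yields the \(10^3\) speedup.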

Achlioptas Jobs

Figure 4: Number of jobs and CPU hours per month consumed for the simulations



  1. D. Achlioptas, R. M. D'Souza and J. Spencer, Explosive Percolation in Random Networks, Science 323, p. 1453 (2009)
  2. R. A. da Costa, S. N. Dorogovtsev, A. V. Goltsev and J. F. F. Mendes, "Explosive Percolation" Transition is Actually Continuous, Physical Review Letters 105(25), 255701 (2010)
  3. P. Grassberger, C. Christensen, G. Bizhani, S.-W. Son and M. Paczuski, Explosive Percolation is Continuous, but with Unusual Finite Size Behavior, Physical Review Letters 106(22) (2011)
  4. O. Riordan and L. Warnke, Explosive percolation is continuous, Science 333 (2011)
  5. R. M. Ziff, Explosive growth in biased dynamic percolation on two-dimensional regular lattice networks, Physical Review Letters 103(4), 045701 (2009)
  6. F. Radicchi and S. Fortunato, Explosive Percolation: A numerical analysis, Physical Review E 81(3), 036110 (2010)
  7. N. A. M. Araújo and H. J. Herrmann, Explosive Percolation via Control of the Largest Cluster, Physical Review Letters 105(3), 035701 (2010)

Protein classification algorithms over a distributed computing environment

One of the most important challenges in modern Bioinformatics is the accurate prediction of the functional behavior of proteins. To this end, researchers from the Intelligent Systems and Software Engineering Lab (Dept. of Electrical and Computer Engineering) have been working successfully for several years on the design and implementation of novel data mining algorithms [1-3].

The strong correlation that exists between the properties of a protein and its motif sequence (Figure 1) makes the prediction of protein function possible. The core concept of any approach is to employ data mining techniques in order to construct models, based on data generated from already annotated protein sequences. A major issue in such approaches is the complexity of the problem in terms of data size and computational cost. However, the utilization of the HellasGrid Infrastructure and the EGI Grid, coupled with the close support of the Scientific Computing Center at A.U.Th., helped overcome the computational difficulties often encountered in protein classification problems.

Figure 1: [a] P00747 (Plasminogen precursor – PLMN_HUMAN) protein chain, and [b] an amino-acid pattern expressed as a regular expression
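As an illustration of how such amino-acid patterns can be expressed as regular expressions, here is a sketch of a converter for a simplified PROSITE-style syntax. The patterns used below are generic examples, not the actual P00747 motif, and the converter deliberately ignores PROSITE features such as the `<`/`>` anchors:

```python
import re

def prosite_to_regex(pattern):
    # Convert a simplified PROSITE-style pattern into a Python regular
    # expression. Handles literal residues, x (any residue), [..]
    # (alternatives), {..} (exclusions) and (n) / (n,m) repetitions.
    parts = []
    for element in pattern.strip('.').split('-'):
        # Optional repetition suffix, e.g. x(2) or x(2,4).
        m = re.fullmatch(r'(.+?)\((\d+(?:,\d+)?)\)', element)
        core, rep = (m.group(1), '{%s}' % m.group(2)) if m else (element, '')
        if core == 'x':
            parts.append('.' + rep)
        elif core.startswith('{'):          # {AB} = any residue except A, B
            parts.append('[^' + core[1:-1] + ']' + rep)
        else:                               # literal residue or [..] choice
            parts.append(core + rep)
    return ''.join(parts)

print(prosite_to_regex('C-x(2)-C'))      # -> C.{2}C
print(prosite_to_regex('[DE]-x-K'))      # -> [DE].K
```

Once a motif is in regex form, scanning a protein chain for it is a single `re.search` call.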


G-Class was the first data-mining algorithm successfully ported to the EGI Grid infrastructure [4]. The G-Class methodology follows a "divide and conquer" approach comprising three steps (Figure 2).

Figure 2: First, protein data from PROSITE, an expert-based database, are divided into multiple disjoint sets, each one preserving the original data distribution. The new sets are used as training sets, and multiple models are derived by means of standard data mining algorithms. Finally, the models are combined to produce the final classification rules, which can be used to classify a given instance and evaluate the methodology.
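The three steps in the caption can be sketched as follows. This is a stand-in, not the actual G-Class code: a simple nearest-centroid learner replaces the standard data mining algorithms, and majority voting stands in for the rule-combination step:

```python
from collections import Counter

def stratified_splits(data, k):
    # Step 1: divide labelled data into k disjoint sets, each preserving the
    # original class distribution (round-robin within each class).
    splits = [[] for _ in range(k)]
    by_class = {}
    for item in data:
        by_class.setdefault(item[1], []).append(item)
    for items in by_class.values():
        for i, item in enumerate(items):
            splits[i % k].append(item)
    return splits

def train_centroid_model(split):
    # Step 2 (stand-in learner): one mean feature vector per class.
    acc = {}
    for features, label in split:
        sums, count = acc.get(label, ([0.0] * len(features), 0))
        acc[label] = ([s + f for s, f in zip(sums, features)], count + 1)
    return {lab: [s / n for s in sums] for lab, (sums, n) in acc.items()}

def classify(models, features):
    # Step 3: each model votes for its nearest class centroid; majority wins.
    def nearest(model):
        return min(model, key=lambda lab: sum(
            (f - c) ** 2 for f, c in zip(features, model[lab])))
    return Counter(nearest(m) for m in models).most_common(1)[0][0]

# Toy dataset: two well-separated classes, 12 instances in total.
data = [([0.0, 0.1], 'A'), ([0.1, 0.0], 'A'),
        ([1.0, 0.9], 'B'), ([0.9, 1.0], 'B')] * 3
models = [train_centroid_model(s) for s in stratified_splits(data, 3)]
print(classify(models, [0.05, 0.05]))    # -> A
```

Because the splits are disjoint, the models can be trained as independent Grid jobs, which is the source of the speed-up reported below.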


G-Class was a fairly simple approach to the protein classification problem, using generic data mining algorithms to construct several models simultaneously. However, the results were impressive, both in terms of the speed-up ratio (ranging from 10 to 60) and the amount of data that could be processed (ranging from 662 proteins over 27 different classes to 7027 proteins over 96 classes) (Figure 3).

Figure 3: The processing time in all cases follows the \(e^{-\alpha x}\) model, where \(\alpha\) depends on the size of the original dataset and \(x\) is the number of splits. The accuracy of the methodology is fairly constant over the number of splits, with minor fluctuations owing to the distribution of the instances of the overlapping protein classes over the different dataset splits.


A second approach aimed at the automatic annotation of protein sequences. Although there are many tools for protein annotation, such as the Gene Ontology Project, ProDom, Pfam, and SCOP, assigning annotation terms to new, non-annotated protein sequences requires that they be either processed directly in a lab or characterized through similarity to already annotated sequences. At the moment, the amino acid sequence of more than 1,000,000 proteins has been obtained, yet the properties and functions of only 4% of these proteins are known. The need for a systematic way to derive clues about the properties of a protein by inspecting its amino acid sequence is therefore obvious. PROTEAS is a novel parallel methodology for protein function prediction, which predicts the annotation of an unknown protein by running its motif sequence through each model, producing similarity scores [5-6]. This methodology has been implemented so that it can effectively utilize various classification schemata, such as Gene Ontology, SCOP families, etc. (Figure 4).
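The scoring idea can be sketched as follows, with hypothetical GO terms, motif identifiers and weights. PROTEAS itself matches motif sequences with finite state automata; a simple weighted motif overlap stands in for that here:

```python
def score_against_model(motifs, model):
    # Similarity between a protein's motif set and one annotation model:
    # a weighted overlap of shared motifs (a stand-in for the finite state
    # automaton matching PROTEAS actually uses).
    return sum(w for motif, w in model.items() if motif in motifs)

def predict_annotation(motifs, models, top_n=1):
    # Run the motif sequence through every model and keep the top scorers.
    motifs = set(motifs)
    scores = {term: score_against_model(motifs, m) for term, m in models.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# Hypothetical annotation models: GO term -> {motif id: weight}.
models = {
    'GO:0005515': {'PS00001': 2.0, 'PS00004': 1.0},
    'GO:0003824': {'PS00007': 3.0},
}
print(predict_annotation(['PS00001', 'PS00004'], models))   # -> ['GO:0005515']
```

Since each model is scored independently, the per-term scoring naturally maps onto parallel processes, which is what the MPI-based Grid implementation described below exploits.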

Figure 4: PROTEAS workflow diagram


The main drawback of this methodology is that it requires a substantial amount of computational time to complete. It has been shown experimentally that the execution time needed to process the entire dataset on a single processor is prohibitively long. In order to address this issue, PROTEAS has been implemented both as a standalone and as a grid-based application. The grid-based application utilizes the MPI library for communication between distinct processes and uses the EGI Grid infrastructure in order to minimize the execution times (Figure 5).


Figure 5: Execution times for model training


Moreover, the Grid provides for the seamless integration of the training process and the actual model evaluation by allowing the concurrent retraining of Gene Ontology models from different input sources or experts and the use of the existing ones (Figure 6).

Figure 6: Execution times for specific Train/Test set ratio and different number of input files (left column), and for different ratios but specific number of input files (right column)


The application was executed on available clusters using from 4 to 16 processors in various experiment configurations (Figure 7). In all cases the accuracy of the results was very high and the overall execution time was satisfactory.


Figure 7: Total processing times for the classification of a single protein sequence, based on the number of CPUs used and the number of input files used as the model construction base.


Contact details:

  • Pericles A. Mitkas, Professor, AUTH, mitkas (at)
  • Fotis E. Psomopoulos, Research Associate, CERTH, fpsom (at)
  • Scientific Computing Center, AUTH, contact (at)



  1. Fotis E. Psomopoulos and Pericles A. Mitkas, “Bioinformatics Algorithm Development for Grid Environments”, Journal of Systems & Software, vol. 83, No 7. (2010), pp. 1249-1257.
  2. Fotis E. Psomopoulos and Pericles A. Mitkas, “Data Mining in Proteomics using Grid Computing”, Handbook of Research on Computational Grid Technologies for Life Sciences, Biomedicine and Healthcare, Editor: Mario Cannataro, Laboratory of Bioinformatics, University Magna Graecia of Catanzaro, 88100 Catanzaro, Italy, 2009, (chapter 13, pp. 245-267), UK: IGI Global.
  3. Fotis E. Psomopoulos and Pericles A. Mitkas: “Sizing Up: Bioinformatics in a Grid Context”, 3rd Conference of the Hellenic Society For Computational Biology and Bioinformatics – HSCBB ’08, 30-31 October 2008, Thessaloniki, Greece.
  4. Helen Polychroniadou, Fotis E. Psomopoulos and Pericles A. Mitkas: “g-Class: A Divide and Conquer Application for Grid Protein Classification”, Proceedings of the 2nd ADMKD 2006: Workshop on Data Mining and Knowledge Discovery (in conjunction with ADBIS’2006: The 10th East-European Conference on Advances in Databases and Information Systems), 3-7 September 2006, Thessaloniki, Greece, pp. 121-132.
  5. Christos N. Gkekas, Fotis E. Psomopoulos and Pericles A. Mitkas, “A parallel data mining application for Gene Ontology term prediction”, 3d EGEE User Forum, Polydome Conference Centre, 11-14 February 2008, Clermont-Ferrand, France.
  6. Christos N. Gkekas, Fotis E. Psomopoulos and Pericles A. Mitkas, “A parallel data mining methodology for protein function prediction utilizing finite state automata”, presented at the 2nd Electrical and Computer Engineering Student Conference, April 2008, Athens, Greece.

Spatial distribution of site-effects and wave propagation properties in Thessaloniki (N. Greece) using a 3D finite difference method

Scientists from the Geophysical Laboratory (Department of Geophysics, School of Geology of the Aristotle Univ. of Thessaloniki) have studied the site effects of seismic motion in the metropolitan area of the city of Thessaloniki (Northern Greece) for various seismic earthquake scenarios with a 3D finite-difference modeling approach, using the HellasGrid Infrastructure and the EGI with the support of the Scientific Computing Center at A.U.Th.

The city of Thessaloniki (Northern Greece) was selected since it is located in a moderate-to-high seismicity region (Papazachos et al., 1983), with the Servomacedonian massif and Northern Aegean Trough areas exhibiting the highest seismicity (Figure 1). The city has suffered several large earthquakes throughout its history, many of them causing significant damage and human losses (Papazachos and Papazachou, 2002).

Thessaloniki Earthquake Map

Figure 1: Map of known earthquakes with M≥3.0 which occurred in the broader area of central–northern Greece from historical times (550 BC) until 2007 (Figure after Skarlatoudis et al., 2011a).


An explicit 3D 4th-order velocity-stress finite-difference scheme with discontinuous spatial grid was used to produce synthetic waveforms with numerical simulations. The scheme solves the equation of motion and Hooke’s law for viscoelastic medium with rheology described by the generalized Maxwell body model. Details on the scheme, its grid and material parameterization are provided by Moczo et al. (2002), Kristek & Moczo (2003), Moczo & Kristek (2005), Moczo et al. (2007) and Kristek et al. (2009b).
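The full 3D viscoelastic scheme is far too large for a short example, but its basic building block, a staggered-grid velocity-stress update with 4th-order spatial differences, can be illustrated in 1D for a purely elastic medium. All parameters below are illustrative choices for the sketch, not values from the study:

```python
import math

def staggered_step(v, s, rho, mu, dt, dx):
    # One time step of a 1D velocity-stress scheme on a staggered grid with
    # 4th-order spatial differences: v[i] sits at x = i*dx and s[i] at
    # x = (i + 1/2)*dx; the boundary nodes are simply left frozen here.
    c1, c2 = 9.0 / 8.0, -1.0 / 24.0
    n = len(v)
    for i in range(2, n - 2):            # velocities from stress gradients
        ds = c1 * (s[i] - s[i - 1]) + c2 * (s[i + 1] - s[i - 2])
        v[i] += dt / (rho * dx) * ds
    for i in range(2, n - 3):            # stresses from velocity gradients
        dv = c1 * (v[i + 1] - v[i]) + c2 * (v[i + 2] - v[i - 1])
        s[i] += dt * mu / dx * dv

# Propagate a Gaussian velocity pulse through a homogeneous elastic medium.
n, dx, rho, mu = 200, 1.0, 1.0, 1.0        # wave speed c = sqrt(mu/rho) = 1
dt = 0.5 * dx / math.sqrt(mu / rho)        # comfortably below the CFL limit
v = [math.exp(-(((i - n // 2) / 5.0) ** 2)) for i in range(n)]
s = [0.0] * n
for _ in range(100):
    staggered_step(v, s, rho, mu, dt, dx)
```

The 3D scheme of the study adds the remaining stress components, the viscoelastic (generalized Maxwell body) memory variables and the discontinuous spatial grid on top of this same staggered update.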

The computational model used for the simulations is based on the geophysical-geotechnical model and the dynamic characteristics of the soil formations proposed by Anastasiadis et al. (2001) and covers an area of 22 × 16 km² (dotted rectangle in Figure 1) (Skarlatoudis et al., 2007; 2008b; Skarlatoudis et al., 2010).

Numerical simulations were performed for six seismic scenarios, corresponding to three different hypocentral locations and two different focal mechanisms for each one. Seismic scenarios with E-W trending normal faults are referred to as scenarios (a), while those with NW-SE trending normal faults as scenarios (b) (Figure 2). Both types of normal faults (E-W and NW-SE) are the dominant types of faults in the vicinity of the broader Thessaloniki area (e.g. Vamvakaris et al., 2006). Synthetic waveforms were produced for a coarse grid of receivers, in order to study the spatial variation of site-effects on seismic motion in the broader metropolitan area of Thessaloniki (Figure 2).

Earthquake Simulation Scenarios          

Figure 2: Earthquake locations used for the examined seismic scenarios (red stars) and the focal mechanisms used for each scenario. The coarser grid of receivers used for studying the spatial variation of various waveform and site-effect parameters for the six earthquake scenarios is also shown (black diamonds). The location of site OBS, used as a reference station in computations, is denoted with a yellow triangle (Figure after Skarlatoudis et al., 2011a).


The application that implements the 3DFD method uses the MPI libraries for inter-process communication, namely the mpich2 implementation. The compilation and execution of the code were tested on different types of machines and with different Fortran90 compilers (commercial and free). The most accurate results and the minimum execution time in each system were achieved with the commercial Pathscale compiler (version 3.0) (Skarlatoudis et al., 2008a). The execution of the 3DFD code is demanding in terms of both CPU power and computer memory. For the aforementioned computational model the memory demands reached 20 GB, and the computation time (per model) was approximately 15 on the HellasGrid Infrastructure with the simultaneous use of 40 Intel Xeon processors.

The implemented workflow relies mainly on the gLite middleware (Figure 3). A large number of test runs were also performed to check the compatibility of the results obtained on the Grid with those from other computational infrastructures. Moreover, the scaling of the code's execution on the HellasGrid Infrastructure was examined (Skarlatoudis et al., 2008a).

3D FDTD Application Workflow          

Figure 3: Schematic representation of the workflow in HellasGrid infrastructure (Figure after Skarlatoudis et al., 2008a)


Various measures estimated from the 3D synthetic waveforms that can provide a more detailed evaluation of site-effects, such as spectral ratios, Peak Ground Velocity (PGV), cumulative kinetic energy and Housner Intensity, were used to probe the spatial distribution of site-effects and the ground motion variability. In Figure 4 the ratio of the PGV for the 3D model over the corresponding 1D bedrock reference model, \(PGV_{3D}/PGV_{1D}\), is shown, estimated for the coarser grid of receivers and for the two horizontal components of ground motion, for all scenarios studied (Skarlatoudis et al., 2011a). The observed relative PGV distribution from the six scenarios exhibits high values along the coastal zone, with the highest value (~4) found in the area near the city harbor for the E-W component. High values of relative PGV are also observed in the western parts of the model for the E-W component.
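Computing the relative PGV at a receiver is straightforward once the synthetic velocity traces are available; a minimal sketch, using toy traces rather than actual simulation output:

```python
def peak_ground_velocity(trace):
    # PGV: maximum absolute amplitude of a velocity time series.
    return max(abs(sample) for sample in trace)

def pgv_ratio(trace_3d, trace_1d):
    # Relative amplification at a receiver: PGV from the 3D model over the
    # PGV from the 1D bedrock reference model.
    return peak_ground_velocity(trace_3d) / peak_ground_velocity(trace_1d)

# Toy traces standing in for the synthetic waveforms at one receiver.
print(pgv_ratio([0.0, 2.0, -4.0, 1.0], [0.0, 1.0, -1.0, 0.5]))   # -> 4.0
```

Repeating this per receiver and per component, and averaging over the six scenarios, yields maps like the one in Figure 4.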


Figure 4: Spatial variation of the average ratio \(PGV_{3D}/PGV_{1D}\) from the six seismic scenarios, for the horizontal components of ground motion (Figure after Skarlatoudis et al., 2011a)


The 3D wave propagation characteristics of the 4 July 1978 aftershock (M5.1) of the 20 June 1978 strong mainshock (M6.5) that struck the city of Thessaloniki were also studied using the 3D finite-difference approach. In Figure 5 the spatial distribution of damage in the metropolitan area of Thessaloniki after the 1978 mainshock is presented (left figure) (Leventakis, 2003), together with the corresponding distribution of the RotD50 ground motion measure of the \(PGV_{3D}/PGV_{1D}\) ratio for the frequency band 0.2-3 Hz (Skarlatoudis et al., 2011b). According to Leventakis (2003), the largest damage was recorded in the city harbor area and parts of the eastern area of Thessaloniki. Despite the various limitations of the comparison, a quite good correlation is observed between the damage distribution and the PGV spatial variation, suggesting that the role of the local site amplification studied here is much more important than other factors (e.g. differences in source radiation pattern, non-linearity, etc.).

Thessaloniki Damage Distribution

Figure 5: (Left) Spatial distribution of the damage in Thessaloniki caused by the mainshock of July 1978, according to Leventakis (2003). (Right) Spatial distribution of the RotD50 measure of relative PGV values (amplifications) from filtered (0.2-3 Hz) horizontal components (Figure after Skarlatoudis et al., 2011b).


This work has been partly performed in the framework of PENED-2003 (measure 8.3, action 8.3.4 of the 3rd EU Support Programme) and the Greek-Slovak Cooperation Agreement (EPAN 2004-2006). Most of the computations were carried out on the EGI and HellasGrid infrastructures with the support of the Scientific Computing Center at the Aristotle University of Thessaloniki (AUTH). A significant part of the results presented here has been published in peer-reviewed journals (see inline references) and/or presented at national and international conferences (see the references at the end of this document).


Contact details:

  • Papazachos C.B., Professor, AUTH, kpapaza (at)
  • Skarlatoudis A.A, Dr. Seismologist, AUTH, askarlat (at)
  • Scientific Computing Center, AUTH, contact (at)



  1. Papazachos, B. C., Tsapanos, T. M. and Panagiotopoulos, D. (1983). The time, magnitude and space distribution of the 1978 Thessaloniki seismic sequence. In: The Thessaloniki, northern Greece, earthquake of June 20, 1978 and its seismic sequence, Technical Chamber of Greece, Section of Central Macedonia, 117-131.
  2. Skarlatoudis, A. A., C. B. Papazachos, P. Moczo, J. Kristek, N. Theodoulidis and P. Apostolidis (2007). Evaluation of ground motion simulations for the city of Thessaloniki, Greece, using the FD method: the role of site effects and focal mechanism at short epicentral distances, European Geosciences Union (EGU) General Assembly, Vienna, Austria.
  3. Skarlatoudis, A. A., P. Korosoglou, C. Kanellopoulos and C. B. Papazachos (2008a). Interaction of a 3D finite-difference application for computing synthetic waveforms with the HellasGrid infrastructure, 1st HellasGrid User Forum, Athens, Greece, and 3rd EGEE User Forum, Clermont-Ferrand, France.
  4. Skarlatoudis, A. A., C. B. Papazachos, P. Moczo, J. Kristek and N. Theodoulidis (2008b). Ground motion simulations for the city of Thessaloniki, Greece, using a 3-D finite-difference wave propagation method, European Geosciences Union (EGU) General Assembly, Vienna, Austria, and 31st General Assembly of the European Seismological Commission, Chania, Greece.
  5. Skarlatoudis, A. A., C. B. Papazachos and N. Theodoulidis (2011b). Site response study of the city of Thessaloniki (N. Greece) for the 04/07/1978 (M5.1) aftershock, using a 3D finite-difference wave propagation method, accepted for publication in Bull. Seism. Soc. Am.