Category Archives: News

Creation of a high-performance computing system (High Performance Computer – HPC)

The Greek Research and Technology Network (GRNET / ΕΔΕΤ) is leading the way in supercomputing infrastructures with the creation of the first national high-performance computing system (High Performance Computer – HPC) for the support of large-scale scientific applications. The procurement and installation of the new system was undertaken by COSMOS Business Systems A.E.B.E., in cooperation with IBM, following an open international tender conducted by GRNET S.A. The new infrastructure is expected to play an important role in the development and advancement of scientific research in Greece and in South-East Europe.


Going beyond grid to enable life science data analysis

Life sciences are rapidly transitioning into the new era of Big Data, where data, algorithms and knowledge are becoming increasingly available to all. Ever since 2007, when sequencers began producing floods of data, life sciences have been steadily moving towards this 'Fourth Paradigm', i.e. the analysis of massive data sets. EGI has been a key enabler of this transition, and it still has a critical role to play in shaping research in the life sciences.

However, it is evident that, besides developing new applications and improving existing ones, the life sciences and ICT communities need to capitalize on their synergies to strengthen the disciplines at their boundary. To this end, we will host a thorough discussion of the expected trends, applications and needs of the communities at the upcoming EGI Community Forum in Helsinki. This will take the form of a dedicated workshop and networking session, aptly named "Going beyond grid to enable life science data analysis". Going beyond the traditional concept of a workshop and showcasing the role of e-infrastructures and networking platforms, our aim is to promote discussion and set up a brainstorming environment between the two communities, leading towards concrete collaborations and sustainable synergies.

The workshop will feature invited speakers both from key life-science infrastructures (such as ELIXIR, BioMedBridges and LifeWatch, among others) and from high-profile research organisations (such as EBI, CNRS and CSC, among others). This will give participants a clear overview of the current state of the art in life-science infrastructures, and help identify future paths and outline critical collaborations.

The discussion will then extend from the workshop into a dedicated networking session. The aim is to produce clear proposals for possible collaborations and suggestions for future EGI actions.


Workshop URL: https://indico.egi.eu/indico/sessionDisplay.py?sessionId=10&confId=1994#20140520

Networking URL: https://indico.egi.eu/indico/contributionDisplay.py?contribId=4&confId=1994

IT Engineer – Support Specialist (code SS101)

The Scientific Computing Center of the Aristotle University of Thessaloniki (AUTH) wishes to add to its staff an IT engineer to support the Center's users, scientific applications and services (Scientific Support Unit).

The user & application support team is responsible for:

  • providing and keeping up to date the documentation of the offered services
  • the smooth operation of the user helpdesk
  • supporting scientific applications from all scientific fields of AUTH (application porting)
  • planning and running user training seminars
  • the Center's dissemination and outreach activities within and beyond AUTH
  • researching and optimizing scientific applications on supercomputing infrastructures

The ideal candidate should have the following profile:

Required qualifications:

  • good knowledge of programming in C/C++ and/or FORTRAN
  • good knowledge of scripting languages (e.g. bash, Python, Ruby)
  • basic knowledge of Linux usage (preferably RHEL/CentOS/Scientific/Fedora Linux)
  • ability to identify, troubleshoot and resolve problems
  • very good oral and written communication skills in both Greek and English
  • willingness to learn new technologies and to train continuously
  • ability to take initiative, set priorities, and organize and carry out multiple tasks

Desired qualifications:

  • Basic knowledge of numerical analysis
  • Knowledge of parallel programming (MPI, OpenMP)
  • Knowledge of programming for GPU infrastructures (CUDA, OpenCL, OpenACC)
  • Participation in open-source software development teams
  • Knowledge of application profiling & benchmarking
  • Knowledge of submitting and managing batch-type jobs
  • Teaching experience
  • Military service obligations fulfilled

Interested candidates may submit their CV to jobs at grid.auth.gr by Friday, September 7, quoting the position code in the subject of the message.

Computer Systems Administrator – Systems Engineer (code SE102)

The Scientific Computing Center of AUTH wishes to add a computer systems engineer/administrator to its Infrastructure Operations Team.

The members of the Infrastructure Operations Team:

  • look after the smooth operation of the infrastructure, as well as its continuous improvement and growth
  • have as their primary concern the uninterrupted (24×7) operation of the offered services and their continuous improvement
  • participate in Greek and international collaboration groups
  • contribute to the improvement of research and education at the institutional, national and international level, through the development and operation of reliable, pioneering e-infrastructures
  • work in a pleasant environment that supports and promotes creativity and initiative

The ideal candidate should have the following profile:

Required qualifications:

  • At least 3 years of experience in server administration, at both the hardware and the software level
  • Willingness to learn new technologies and to specialize continuously
  • Very good knowledge of Linux/UNIX operating systems, in particular administration of RHEL/CentOS/Scientific Linux distributions
  • Very good knowledge of scripting languages (bash, Perl, Ruby)
  • Very good knowledge of the TCP/IP network protocols
  • Effective oral and written communication in both Greek and English
  • Ability to establish and implement policies, procedures and goals
  • Ability to troubleshoot and resolve problems
  • Ability to take initiative, set priorities, and organize and carry out multiple tasks

Desired qualifications:

  • Experience administering parallel storage systems (e.g. GPFS/Lustre/Gluster)
  • Experience administering virtual machines (Virtual Machines) using libvirt/Xen/KVM
  • Experience in using and developing centralized tools for automating the administration of computing infrastructures
  • Experience in using and extending centralized monitoring tools (e.g. Nagios/Monit/Ganglia/Cacti)
  • Active participation in open-source software development teams

Interested candidates may submit their CV to jobs at grid.auth.gr by Friday, September 7, quoting the position code in the subject of the message.

Investigating the nature of the explosive percolation transition

The Laboratory of Computational Physics is actively involved in investigating the phase transitions of various natural and artificial systems. Currently, much effort is concentrated on determining the type of phase transition for a new competitive model, named "explosive" percolation: when sequentially filling an empty lattice with occupied sites, instead of randomly occupying a site or bond (the classical paradigm), we pick two candidates and determine which of them leads to the smaller cluster. That candidate is kept as a new occupied site on the lattice, while the other is discarded (Figure 1). This procedure considerably slows down the emergence of the giant component, which is then formed abruptly, hence the term "explosive".
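To make the rule concrete, the following minimal C sketch implements one possible version of the sum-rule selection on a square lattice, tracking cluster sizes with a union-find structure. It is only an illustration of the process described above, not the code used for the study; the lattice size and the random site selection are placeholders.

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define L 64                 /* lattice side (illustrative) */
    #define N (L * L)

    static int parent[N], csize[N], occupied[N];

    static int find(int x)       /* union-find root with path halving */
    {
        while (parent[x] != x) { parent[x] = parent[parent[x]]; x = parent[x]; }
        return x;
    }

    static void unite(int a, int b)
    {
        a = find(a); b = find(b);
        if (a == b) return;
        if (csize[a] < csize[b]) { int t = a; a = b; b = t; }
        parent[b] = a;
        csize[a] += csize[b];
    }

    /* size of the cluster that would contain site s if it were occupied */
    static int trial_size(int s)
    {
        int roots[4], nroots = 0, total = 1;
        int x = s % L, y = s / L;
        int nb[4] = { x > 0 ? s - 1 : -1, x < L - 1 ? s + 1 : -1,
                      y > 0 ? s - L : -1, y < L - 1 ? s + L : -1 };
        for (int i = 0; i < 4; i++) {
            if (nb[i] < 0 || !occupied[nb[i]]) continue;
            int r = find(nb[i]), seen = 0;
            for (int j = 0; j < nroots; j++) if (roots[j] == r) seen = 1;
            if (!seen) { roots[nroots++] = r; total += csize[r]; }
        }
        return total;
    }

    static void occupy(int s)    /* occupy s and merge with occupied neighbours */
    {
        int x = s % L, y = s / L;
        occupied[s] = 1;
        if (x > 0     && occupied[s - 1]) unite(s, s - 1);
        if (x < L - 1 && occupied[s + 1]) unite(s, s + 1);
        if (y > 0     && occupied[s - L]) unite(s, s - L);
        if (y < L - 1 && occupied[s + L]) unite(s, s + L);
    }

    int main(void)
    {
        srand((unsigned) time(NULL));
        for (int i = 0; i < N; i++) { parent[i] = i; csize[i] = 1; }
        /* fill until one site is left, so two distinct candidates always exist */
        for (int filled = 0; filled < N - 1; filled++) {
            int a, b;
            do { a = rand() % N; } while (occupied[a]);           /* trial site A */
            do { b = rand() % N; } while (occupied[b] || b == a); /* trial site B */
            /* sum rule: keep the candidate yielding the smaller cluster */
            occupy(trial_size(a) <= trial_size(b) ? a : b);
        }
        int max = 0;             /* report the largest cluster */
        for (int i = 0; i < N; i++)
            if (occupied[i] && find(i) == i && csize[i] > max) max = csize[i];
        printf("largest cluster: %d of %d sites\n", max, N);
        return 0;
    }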


Achlioptas Process

Figure 1: Achlioptas Process according to the sum rule (APSR) for site percolation. White cells correspond to unoccupied sites, while colored cells correspond to occupied sites. Different colors (red, green, gray, blue) indicate different clusters. (a) We randomly select two trial unoccupied sites (yellow), denoted A and B, one at a time. We evaluate the sizes of the clusters that would form containing sites A and B, \(s_A\) and \(s_B\) respectively. In this example \(s_A = 10\) and \(s_B = 14\). (b) According to the Achlioptas Process, we keep site A, which leads to the smaller cluster, and discard site B.


Following the first publication of Achlioptas et al., a debate was initiated among various teams as to whether the transition is continuous or discontinuous. Contributing to these considerations, we have investigated explosive site percolation using both the product and the sum rule. The exponent \(\beta/\nu\) was found to be vanishingly small in both cases, pointing towards a continuous transition. We also performed a numerical analysis of a reverse Achlioptas process (Figure 2), which showed that for finite systems there is a hysteresis loop between the reverse and the forward procedure (Figure 3a). This loop vanishes in the thermodynamic limit, giving strong evidence for the continuity of the "explosive" site percolation transition (Figure 3b). Moreover, "explosive" site and bond percolation appear to belong to different universality classes.

Figure 2: Reverse Achlioptas Process (AP1) for site percolation according to the sum rule. Blue denotes occupied sites, white unoccupied sites. Initially, the lattice is fully occupied. (a) An instance of the process: we randomly choose two trial sites (yellow), denoted A and B, and remove them from the lattice. (b) The clusters formed after the removal. (c) We place site A back in the lattice and calculate the size of the cluster to which it belongs, \(s_A = 16\). (d) We do the same for site B and calculate \(s_B = 26\). We remove site A, which leads to the formation of the smaller cluster, and keep site B.

Reverse Achlioptas Process (1)
Reverse Achlioptas Process (2)


Hysteresis Loop

Figure 3: (a) Hysteresis loop between a reverse (red dots) and the forward (black squares) Achlioptas process for a \(700 \times 700\) system. (b) The loop vanishes in the thermodynamic limit.


Simulations were performed on the EGI. A diagram of the number of jobs and CPU hours consumed per month is shown in Figure 4. We made extensive use of the gLite parametric job submission mechanism, using the different realizations of the system as the parameter. On average, more than 1000 jobs per simulation were submitted for each lattice size. For a typical \(1000 \times 1000\) lattice, the average time consumed by one run approached 172 minutes; performing the same calculations on a single CPU would instead take about 120 days to obtain complete results for just one lattice size. Using the EGI thus reduced this time to approximately 172 minutes, a time gain of the order of \(10^3\). Moreover, given the availability of more resources, this gain may be even higher. This is a very important feature, because it lets us numerically analyze systems of the order of \(10^6\) sites in a tolerable amount of time.

Achlioptas Jobs

Figure 4: Number of jobs and CPU hours consumed per month by the simulations
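For readers unfamiliar with the mechanism, a gLite parametric job is described in a single JDL file from which the workload management system generates one sub-job per parameter value. The sketch below is illustrative only; the executable name and sandbox contents are placeholders, not the actual submission files used for this study.

    // illustrative gLite JDL for a parametric job: one sub-job per realization
    JobType        = "Parametric";
    Executable     = "percolation";   // placeholder binary name
    Arguments      = "_PARAM_";       // each sub-job receives its own value
    Parameters     = 1000;            // number of realizations (values 0..999)
    ParameterStart = 0;
    ParameterStep  = 1;
    StdOutput      = "std.out";
    StdError       = "std.err";
    InputSandbox   = {"percolation"};
    OutputSandbox  = {"std.out", "std.err"};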


References:

  1. D. Achlioptas, R. M. D'Souza and J. Spencer, Explosive Percolation in Random Networks, Science 323, 1453 (2009)
  2. R. A. da Costa, S. N. Dorogovtsev, A. V. Goltsev and J. F. F. Mendes, "Explosive Percolation" Transition is Actually Continuous, Physical Review Letters 105(25), 255701 (2010)
  3. P. Grassberger, C. Christensen, G. Bizhani, S.-W. Son and M. Paczuski, Explosive Percolation is Continuous, but with Unusual Finite Size Behavior, Physical Review Letters 106(22), 225701 (2011)
  4. O. Riordan and L. Warnke, Explosive percolation is continuous, Science 333 (2011)
  5. R. M. Ziff, Explosive growth in biased dynamic percolation on two-dimensional regular lattice networks, Physical Review Letters 103(4), 045701 (2009)
  6. F. Radicchi and S. Fortunato, Explosive Percolation: A numerical analysis, Physical Review E 81(3), 036110 (2010)
  7. N. A. M. Araújo and H. J. Herrmann, Explosive Percolation via Control of the Largest Cluster, Physical Review Letters 105(3), 035701 (2010)

Protein classification algorithms over a distributed computing environment

One of the most important challenges in modern Bioinformatics is the accurate prediction of the functional behavior of proteins. To this end, researchers from the Intelligent Systems and Software Engineering Lab (Dept. of Electrical and Computer Engineering) have been working successfully for several years on the design and implementation of novel data mining algorithms [1-3].

The strong correlation that exists between the properties of a protein and its motif sequence (Figure 1) makes the prediction of protein function possible. The core concept of any approach is to employ data mining techniques in order to construct models, based on data generated from already annotated protein sequences. A major issue in such approaches is the complexity of the problem in terms of data size and computational cost. However, the utilization of the HellasGrid Infrastructure and the EGI Grid, coupled with the close support of the Scientific Computing Center at A.U.Th., helped overcome the computational difficulties often encountered in protein classification problems.

Figure 1: [a] P00747 (Plasminogen precursor – PLMN_HUMAN) protein chain, and [b] an amino-acid pattern expressed as a regular expression


G-Class was the first data-mining algorithm successfully ported to the EGI Grid infrastructure [4]. The G-Class methodology follows a "divide and conquer" approach consisting of three steps (Figure 2).

Figure 2: First, protein data from PROSITE, an expert-based database, are divided into multiple disjoint sets, each one preserving the original data distribution. The new sets are used as training sets, and multiple models are derived by means of standard data mining algorithms. Finally, the models are combined to produce the final classification rules, which can be used to classify a given instance and evaluate the methodology.


G-Class was a fairly simple approach to the protein classification problem, using generic data mining algorithms to construct several models simultaneously. The results, however, were impressive, both in terms of the speed-up ratio (ranging from 10 to 60) and in the amount of data that could be processed (from 662 proteins over 27 different classes to 7,027 proteins over 96 classes) (Figure 3).

Figure 3: The processing time in all cases follows the \(e^{-\alpha x}\) model, where \(\alpha\) depends on the size of the original dataset and \(x\) is the number of splits. The accuracy of the methodology is fairly constant over the number of splits, with minor fluctuations owing to the distribution of the instances of the overlapping protein classes over the different dataset splits.


A second approach aimed at the automatic annotation of protein sequences. Although there are many tools for protein annotation, such as the Gene Ontology Project, ProDom, Pfam, and SCOP, in order to assign annotation terms to new, non-annotated protein sequences, the proteins have to be either processed directly in a lab or characterized through similarity to already annotated sequences. At the moment, the amino acid sequence of more than 1,000,000 proteins has been obtained; by contrast, the properties and functions of only 4% of these proteins are known. The need for a systematic way to derive clues about the properties of a protein by inspecting its amino acid sequence is therefore obvious. PROTEAS is a novel parallel methodology for protein function prediction, which predicts the annotation of an unknown protein by running its motif sequence through each model and producing similarity scores [5-6]. The methodology has been implemented so that it can effectively utilize various classification schemes, such as Gene Ontology, SCOP families, etc. (Figure 4).

Figure 4: PROTEAS workflow diagram


The main drawback of this methodology is that it requires a substantial amount of computational time to complete: it has been shown experimentally that the execution time needed to process the entire dataset on a single processor is prohibitively long. To address this issue, PROTEAS has been implemented both as a standalone and as a grid-based application. The grid-based version uses the MPI library for communication between distinct processes and runs on the EGI Grid infrastructure in order to minimize execution times (Figure 5).
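As an indication of how such an MPI distribution can be organized, the sketch below spreads per-model scoring across ranks and gathers the scores on rank 0. It is a generic illustration under stated assumptions, not the actual PROTEAS code: score_model() and NMODELS are hypothetical placeholders.

    #include <mpi.h>
    #include <stdio.h>

    #define NMODELS 96   /* illustrative number of annotation models */

    /* placeholder: score one annotation model against the query sequence */
    static double score_model(int model_id) { return (double) model_id; }

    int main(int argc, char **argv)
    {
        int rank, nprocs;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        double local[NMODELS] = { 0 }, global[NMODELS] = { 0 };

        /* round-robin assignment: rank r scores models r, r+nprocs, ... */
        for (int m = rank; m < NMODELS; m += nprocs)
            local[m] = score_model(m);

        /* each entry is written by exactly one rank, so a sum gathers them */
        MPI_Reduce(local, global, NMODELS, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            for (int m = 0; m < NMODELS; m++)
                printf("model %3d: similarity score %.3f\n", m, global[m]);

        MPI_Finalize();
        return 0;
    }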


Figure 5: Execution times for model training


Moreover, the Grid provides for the seamless integration of the training process and the actual model evaluation, by allowing Gene Ontology models to be retrained concurrently from different input sources or experts while the existing models remain in use (Figure 6).

Figure 6: Execution times for specific Train/Test set ratio and different number of input files (left column), and for different ratios but specific number of input files (right column)


The application was executed on the available clusters using 4 to 16 processors in various experiment configurations (Figure 7). In all cases the accuracy of the results was very high and the overall execution time was satisfactory.


Figure 7: Total processing times for the classification of a single protein sequence, based on the number of CPUs used and the number of input files used as the model construction base.


Contact details:

  • Pericles A. Mitkas, Professor, AUTH, mitkas (at) eng.auth.gr
  • Fotis E. Psomopoulos, Research Associate, CERTH, fpsom (at) issel.ee.auth.gr
  • Scientific Computing Center, AUTH, contact (at) grid.auth.gr


References:

  1. Fotis E. Psomopoulos and Pericles A. Mitkas, "Bioinformatics Algorithm Development for Grid Environments", Journal of Systems & Software, vol. 83, no. 7 (2010), pp. 1249-1257.
  2. Fotis E. Psomopoulos and Pericles A. Mitkas, "Data Mining in Proteomics using Grid Computing", in: Handbook of Research on Computational Grid Technologies for Life Sciences, Biomedicine and Healthcare (ed. Mario Cannataro, Laboratory of Bioinformatics, University Magna Graecia of Catanzaro, Italy), chapter 13, pp. 245-267, UK: IGI Global, 2009.
  3. Fotis E. Psomopoulos and Pericles A. Mitkas, "Sizing Up: Bioinformatics in a Grid Context", 3rd Conference of the Hellenic Society for Computational Biology and Bioinformatics (HSCBB '08), 30-31 October 2008, Thessaloniki, Greece.
  4. Helen Polychroniadou, Fotis E. Psomopoulos and Pericles A. Mitkas, "g-Class: A Divide and Conquer Application for Grid Protein Classification", Proceedings of the 2nd Workshop on Data Mining and Knowledge Discovery (ADMKD 2006, in conjunction with ADBIS 2006: The 10th East-European Conference on Advances in Databases and Information Systems), 3-7 September 2006, Thessaloniki, Greece, pp. 121-132.
  5. Christos N. Gkekas, Fotis E. Psomopoulos and Pericles A. Mitkas, "A parallel data mining application for Gene Ontology term prediction", 3rd EGEE User Forum, Polydome Conference Centre, 11-14 February 2008, Clermont-Ferrand, France.
  6. Christos N. Gkekas, Fotis E. Psomopoulos and Pericles A. Mitkas, "A parallel data mining methodology for protein function prediction utilizing finite state automata", 2nd Electrical and Computer Engineering Student Conference, April 2008, Athens, Greece.

Spatial distribution of site-effects and wave propagation properties in Thessaloniki (N. Greece) using a 3D finite difference method

Scientists from the Geophysical Laboratory (Department of Geophysics, School of Geology, Aristotle University of Thessaloniki) have studied the site effects of seismic motion in the metropolitan area of Thessaloniki (Northern Greece) for various earthquake scenarios with a 3D finite-difference modeling approach, using the HellasGrid Infrastructure and the EGI with the support of the Scientific Computing Center at A.U.Th.

The city of Thessaloniki was selected because it is located in a moderate-to-high seismicity region (Papazachos et al., 1983), with the Servomacedonian massif and the Northern Aegean trough exhibiting the highest seismicity (Figure 1). The city has suffered several large earthquakes throughout its history, many of them causing significant damage and human losses (Papazachos and Papazachou, 2002).

Thessaloniki Earthquake Map

Figure 1: Map of known earthquakes with M≥3.0 which occurred in the broader area of central-northern Greece from historical times (550 BC) until 2007 (Figure after Skarlatoudis et al., 2011a).


An explicit 3D 4th-order velocity-stress finite-difference scheme with a discontinuous spatial grid was used to produce synthetic waveforms through numerical simulations. The scheme solves the equation of motion and Hooke's law for a viscoelastic medium whose rheology is described by the generalized Maxwell body model. Details on the scheme and its grid and material parameterization are provided by Moczo et al. (2002), Kristek & Moczo (2003), Moczo & Kristek (2005), Moczo et al. (2007) and Kristek et al. (2009b).
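For orientation, in one dimension and in the elastic limit, the velocity-stress system that such staggered-grid schemes discretize reads

\[
\rho \frac{\partial v}{\partial t} = \frac{\partial \sigma}{\partial x} + f,
\qquad
\frac{\partial \sigma}{\partial t} = \mu \frac{\partial v}{\partial x},
\]

where \(v\) is the particle velocity, \(\sigma\) the stress, \(\rho\) the density, \(\mu\) the shear modulus and \(f\) a body-force term. This is only a sketch: the actual scheme is three-dimensional, 4th-order in space, and viscoelastic, with the generalized Maxwell body introducing additional memory variables.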

The computational model used for the simulations is based on the geophysical-geotechnical model and the dynamic characteristics of the soil formations proposed by Anastasiadis et al. (2001), and covers an area of 22 × 16 km² (dotted rectangle in Figure 1) (Skarlatoudis et al., 2007; 2008b; 2010).

Numerical simulations were performed for six seismic scenarios, corresponding to three different hypocentral locations and two different focal mechanisms for each one. Seismic scenarios with E-W trending normal faults are referred to as scenarios (a), while those with NW-SE trending normal faults are referred to as scenarios (b) (Figure 2). Both types of normal faults (E-W and NW-SE) are the dominant fault types in the vicinity of the broader Thessaloniki area (e.g. Vamvakaris et al., 2006). Synthetic waveforms were produced for a coarse grid of receivers, in order to study the spatial variation of site effects on seismic motion in the broader metropolitan area of Thessaloniki (Figure 2).

Earthquake Simulation Scenarios          

Figure 2: Earthquake locations used for the examined seismic scenarios (red stars) and the focal mechanisms used for each scenario. The coarser grid of receivers used for studying the spatial variation of various waveform and site-effect parameters for the six earthquake scenarios is also shown (black diamonds). The location of site OBS, used as a reference station in computations, is denoted with a yellow triangle (Figure after Skarlatoudis et al., 2011a).


The application implementing the 3DFD method uses the MPI library for inter-process communication, namely the MPICH2 implementation. The compilation and execution of the code were tested on different types of machines and with different Fortran 90 compilers (commercial and free). The most accurate results and the minimum execution time on each system were achieved with the commercial PathScale compiler (version 3.0) (Skarlatoudis et al., 2008a). The execution of the 3DFD code is demanding in terms of both CPU power and memory: for the aforementioned computational model the memory demands reached 20 GB, and the computation time per model was approximately 15 on the HellasGrid Infrastructure with the simultaneous use of 40 Intel Xeon processors.
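For readers unfamiliar with MPICH2, building and launching such a Fortran 90 MPI code typically looks like the following sketch; the file names are placeholders, not the actual 3DFD setup.

    # compile with the MPI Fortran 90 wrapper provided by MPICH2
    mpif90 -O3 -o 3dfd src/*.f90
    # launch on 40 processors (matching the run described above)
    mpiexec -n 40 ./3dfd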

The implemented workflow relies mainly on the gLite middleware (Figure 3). A large number of test runs were also performed to check that the results obtained on the Grid are consistent with those from other computational infrastructures. Moreover, the scaling of the code on the HellasGrid Infrastructure was examined (Skarlatoudis et al., 2008a).

3D FDTD Application Workflow          

Figure 3: Schematic representation of the workflow in HellasGrid infrastructure (Figure after Skarlatoudis et al., 2008a)


Various measures estimated from the 3D synthetic waveforms that can provide a more detailed evaluation of site effects, such as spectral ratios, Peak Ground Velocity (PGV), cumulative kinetic energy and Housner Intensity, were used to probe the spatial distribution of site effects and the ground motion variability. Figure 4 shows the ratio of the Peak Ground Velocity for the 3D model over the corresponding 1D bedrock reference model, \(PGV_{3D}/PGV_{1D}\), estimated on the coarser grid of receivers for the two horizontal components of ground motion, for all scenarios studied (Skarlatoudis et al., 2011a). The relative PGV distribution observed for the six scenarios exhibits high values along the coastal zone, with the highest value (~4) found in the area near the city harbor for the E-W component. High values of relative PGV are also observed in the western parts of the model for the E-W component.

3DFD_Thessaloniki

Figure 4: Spatial variation of the ratio \(PGV_{3D}/PGV_{1D}\), averaged over the six seismic scenarios, for the horizontal components of ground motion (Figure after Skarlatoudis et al., 2011a).


The 3D wave propagation characteristics of the July 4, 1978 (M5.1) aftershock of the strong June 20, 1978 (M6.5) mainshock that struck the city of Thessaloniki were also studied using the 3D finite-difference approach. Figure 5 presents the spatial distribution of damage in the metropolitan area of Thessaloniki after the 1978 mainshock (left) (Leventakis, 2003), together with the corresponding distribution of the RotD50 ground motion measure of the \(PGV_{3D}/PGV_{1D}\) ratio for the 0.2-3 Hz frequency band (Skarlatoudis et al., 2011b). According to Leventakis (2003), the largest damage was recorded in the city harbor area and in parts of the eastern area of Thessaloniki. Despite the various limitations of the comparison, quite a good correlation is observed between the damage distribution and the PGV spatial variation, suggesting that the role of the local site amplification studied here is much more important than other factors (e.g. differences in source radiation pattern, non-linearity, etc.).

Thessaloniki Damage Distribution

Figure 5: (Left) Spatial distribution of damage in Thessaloniki caused by the mainshock of June 1978, according to Leventakis (2003). (Right) Spatial distribution of the RotD50 measure of relative PGV values (amplifications) from filtered (0.2 Hz-3 Hz) horizontal components (Figure after Skarlatoudis et al., 2011b).


This work has been partly performed in the framework of PENED-2003 (measure 8.3, action 8.3.4 of the 3rd EU Support Programme) and the Greek-Slovak Cooperation Agreement (EPAN 2004-2006). Most of the computations were carried out on the EGI and HellasGrid infrastructures with the support of the Scientific Computing Center at the Aristotle University of Thessaloniki (AUTH). A significant part of the results presented here has been published in peer-reviewed journals (see the inline references) and/or presented at national and international conferences (see the references at the end of this section).


Contact details:

  • Papazachos C.B., Professor, AUTH, kpapaza (at) geo.auth.gr
  • Skarlatoudis A.A, Dr. Seismologist, AUTH, askarlat (at) geo.auth.gr
  • Scientific Computing Center, AUTH, contact (at) grid.auth.gr


References:

  1. Papazachos, B. C., Tsapanos, T. M. and Panagiotopoulos, D. (1983). The time, magnitude and space distribution of the 1978 Thessaloniki seismic sequence. In: The Thessaloniki, northern Greece, earthquake of June 20, 1978 and its seismic sequence, Technical Chamber of Greece, section of central Macedonia, 117-131.
  2. Skarlatoudis, A. A., Papazachos, C. B., Moczo, P., Kristek, J., Theodoulidis, N. and Apostolidis, P. (2007). Evaluation of ground motion simulations for the city of Thessaloniki, Greece, using the FD method: the role of site effects and focal mechanism at short epicentral distances. European Geosciences Union (EGU) General Assembly, Vienna, Austria.
  3. Skarlatoudis, A. A., Korosoglou, P., Kanellopoulos, C. and Papazachos, C. B. (2008a). Interaction of a 3D finite-difference application for computing synthetic waveforms with the HellasGrid infrastructure. 1st HellasGrid User Forum, Athens, Greece, and 3rd EGEE User Forum, Clermont-Ferrand, France.
  4. Skarlatoudis, A. A., Papazachos, C. B., Moczo, P., Kristek, J. and Theodoulidis, N. (2008b). Ground motion simulations for the city of Thessaloniki, Greece, using a 3D finite-difference wave propagation method. European Geosciences Union (EGU) General Assembly, Vienna, Austria, and 31st General Assembly of the European Seismological Commission, Chania, Greece.
  5. Skarlatoudis, A. A., Papazachos, C. B. and Theodoulidis, N. (2011b). Site response study of the city of Thessaloniki (N. Greece) for the 04/07/1978 (M5.1) aftershock, using a 3D finite-difference wave propagation method. Accepted for publication in Bull. Seism. Soc. Am.

Evaluating the impact of climate change on European air quality

Scientists from the Laboratory of Atmospheric Physics (School of Physics) and the Department of Meteorology and Climatology (School of Geology) have simulated the regional climate and air quality over Europe for two future decades (2041-2050 and 2091-2100), using the HellasGrid Infrastructure and the EGI Grid with the support of the Scientific Computing Center at A.U.Th.

The computational models used for these simulations are the regional climate model RegCM3 and the air quality model CAMx, which were offline-coupled in this case (Figure 1). The control simulation for the decade 1991-2000 was performed twice, once externally forced by the ERA40 reanalysis and once by the global circulation model ECHAM5, in order to investigate the importance of external meteorological forcing for air quality (Katragkou et al., 2010). The RegCM3 model was forced by ECHAM5 under the A1B emission scenario for two future time slices, namely 2041-2050 and 2091-2100. These simulations served as a theoretical experiment evaluating the impact of climate change on air pollution (Katragkou et al., 2011).

Each decadal simulation consumed approximately 1000 CPU hours. The CAMx simulations were performed on SMP machines, as the model is intrinsically parallelized with the OpenMP library; using this feature of the CAMx model, a significant reduction in the overall computation time was achieved.
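For illustration, the loop-level OpenMP parallelism that such models rely on looks roughly as follows. This is a generic sketch, not CAMx code; the grid dimensions and the per-cell update are placeholders. Compile with an OpenMP flag (e.g. gcc -fopenmp) and set OMP_NUM_THREADS to the number of cores on the SMP node.

    #include <omp.h>
    #include <stdio.h>

    #define NX 100               /* illustrative grid dimensions */
    #define NY 100

    static double conc[NX][NY];  /* pollutant concentration field */

    /* placeholder for a per-cell chemistry/transport update */
    static double update_cell(double c) { return 0.99 * c + 0.01; }

    int main(void)
    {
        /* the two loops are collapsed and shared among the available threads;
           each grid cell is updated independently in this toy example */
        #pragma omp parallel for collapse(2)
        for (int i = 0; i < NX; i++)
            for (int j = 0; j < NY; j++)
                conc[i][j] = update_cell(conc[i][j]);

        printf("updated %d cells using up to %d threads\n",
               NX * NY, omp_get_max_threads());
        return 0;
    }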

In terms of storage, the resources required for archiving the CAMx output files are estimated at slightly more than 5 TB.

Off-line coupling of RegCM3 and CAMx computational models

Figure 1: A schematic outline of the RegCM3/CAMx modelling system applied in this study (from Zanis et al., 2011)

Surface ozone simulated by RegCM3/CAMx was evaluated against ground-based measurements from the European database EMEP (Zanis et al., 2011). The air quality simulations available at AUTH for the three time slices (1991-2000, 2041-2050 and 2091-2100) over Europe, with a resolution of 50 km, have been provided as air quality boundary conditions for higher-resolution air quality simulations over sub-European grids (Huszar et al., 2011).

The results suggest that the changes imposed by climate change on summer surface ozone concentrations until the 2040s will be below 1 ppbv (parts per billion by volume). By the 2090s, however, the changes are foreseen to be more significant, especially over south-west Europe, where the median of near-surface ozone has been found to increase by 6.2 ppbv.

Near surface ozone concentrations over Europe

Figure 2: Average summer surface ozone for the control simulation 1991-2000 (left). Differences in simulated average summer ozone between 2091-2100 and the control simulation (right). The grey color corresponds to statistically non-significant differences (from Katragkou et al., 2011)

The median summer near-surface temperature for Europe at the end of the 21st century was calculated to be 2.7 K higher than at the end of the 20th century, with a more intense temperature increase simulated for southern Europe. A prominent outcome was the decrease of cloudiness, mostly over western Europe, at the end of the 21st century, associated with an anticyclonic anomaly which favours more stagnant conditions and a weakening of the westerly winds (Katragkou et al., 2011).

Mean temperature differences

Figure 3: Mean differences between the second future decade (2091-2100) and the present decade (1991-2000) for summer, in the fields of surface temperature (left) and geopotential height at 500 hPa (right). The red contours correspond to the geopotential height at 500 hPa during the control decade (from Katragkou et al., 2011).

This work has been accomplished in the framework of the FP6 European project CECILIA (Central and Eastern Europe Climate Change Impact and Vulnerability Assessment, Contract No. 037005). The results were produced on the EGI and HellasGrid infrastructure with the support of the Scientific Computing Center at the Aristotle University of Thessaloniki (AUTH). The results of this work have been published in peer-reviewed journals (see references), presented at several national and international conferences, and received awards from the Hellenic Meteorological Society (2008), the European Association for the Science of Air Pollution (2009) and the Research Committee of the Aristotle University of Thessaloniki (2010).

Contact details:

  • Dimitris Melas (PI), Associate Professor, AUTH, melas (at) auth.gr
  • Prodromos Zanis, Assistant Professor, AUTH, zanis (at) geo.auth.gr
  • Eleni Katragkou, Lecturer, AUTH, katragou (at) auth.gr
  • Scientific Computing Center, AUTH, contact (at) grid.auth.gr

References:

  1. Huszar, P., K. Juda-Rezler, T. Halenka, H. Chervenkov, D. Syrakov, B. C. Krueger, P. Zanis, D. Melas, E. Katragkou, M. Reizer, W. Trapp and M. Belda, Potential climate change impacts on ozone and PM levels over Central and Eastern Europe from high resolution simulations, Climate Research (in press), 2011.
  2. Katragkou, E., P. Zanis, I. Tegoulias, D. Melas, I. Kioutsioukis, B. C. Krüger, P. Huszar, T. Halenka and S. Rauscher, Decadal regional air quality simulations over Europe in present climate: near surface ozone sensitivity to external meteorological forcing, Atmospheric Chemistry and Physics, 10, 11805-11821, 2010.
  3. Katragkou, E., P. Zanis, I. Kioutsioukis, I. Tegoulias, D. Melas, B. C. Krüger and E. Coppola, Future climate change impacts on summer surface ozone from regional climate-air quality simulations over Europe, J. Geophys. Res. (in press), 2011.
  4. Zanis, P., E. Katragkou, I. Tegoulias, A. Poupkou and D. Melas, Evaluation of near surface ozone in air quality simulations forced by a regional climate model over Europe for the period 1991-2000, Atmospheric Environment, 45, 6489-6500, 2011.

Improving your applications: Profiling

Among the main tools needed when developing or upgrading a program, besides a debugger, there should also be a profiler.

Profiling collects information during the execution of a program, with the ultimate goal of determining the execution time and the memory requirements of the program's individual parts. Through this process, the contribution of each function or program section to the total execution time can be determined.

These results can be used for targeted optimization, so that effort is focused on the parts that actually consume significant time, rather than on less time-consuming sections. Beyond optimization, this process can also be used for the incremental parallelization of a program, starting from the most time- and memory-demanding parts and continuing with the less demanding ones. In this way, even a single parallelized function can make a noticeable difference in execution time.

Profiling tools are generally provided by the compiler collections and suites that the programmer uses. The de facto baseline is the open-source gprof from the GNU compiler suite, with most compilers producing output compatible with this profiler.

For the compiler to insert the necessary instrumentation into the program and produce the information described above, the program must be compiled and linked with a specific flag, usually "-pg" (depending on the compiler used). Naturally, a program that produces profiling data runs noticeably slower than normal, and in general an executable that emits profiling data is used only for this purpose.

Gprof2Dot

A fairly simple and attractive tool that presents the information produced by profiling an application in a visual way is Gprof2Dot. Gprof2Dot, which is installed on the server, renders the dependencies and the percentages of execution time of each function as a call graph, and can save the result as an image, e.g. in PNG format.

A typical workflow for analyzing a program is the following (a sketch; the program and file names are placeholders):
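    # build with profiling instrumentation and run once to produce gmon.out
    gcc -pg -O2 -o myprog myprog.c
    ./myprog
    # turn the gprof output into a call-graph image via Gprof2Dot and Graphviz
    gprof ./myprog gmon.out | gprof2dot.py | dot -Tpng -o output.png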

You can then view the result directly from the console, for example with ImageMagick's display (the viewer is an assumption; any image viewer will do):
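    display output.png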

A typical result for an application is a call graph like the one generated above.

Because profiling works by sampling, different runs of the same application may produce different results. This is one reason why it is recommended to collect and analyze results from multiple runs.

Parallel Profilers

Due to the different interconnect architectures and implementations of distributed computing resources, serial profilers cannot provide a valid picture of how resource consumption is distributed across the parts of a program that uses parallelism. Specialized profilers for parallel applications have been created for this purpose. Typical examples are the open-source Scalasca toolkit, developed at the Jülich research centre, and the commercial Allinea OPT suite. Usage instructions for the Scalasca parallel profiler can be found here.
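As an indication of what using such a tool looks like, the following Scalasca invocations are a sketch; the program name, rank count and experiment directory name are placeholders, and the exact commands depend on the installed version, so consult the local documentation:

    # build an MPI program with Scalasca instrumentation
    scalasca -instrument mpicc -O2 -o app app.c
    # run it under measurement; this creates an experiment directory
    scalasca -analyze mpiexec -n 16 ./app
    # inspect the collected measurement (directory name varies per run)
    scalasca -examine epik_app_16_sum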

Useful links

  • Sourceware.org
  • CS Utah
  • Linuxtopia