Protein classification algorithms over a distributed computing environment

One of the most important challenges in modern Bioinformatics is the accurate prediction of the functional behavior of proteins. To this end, researchers from the Intelligent Systems and Software Engineering Lab (Dept. of Electrical and Computer Engineering) have for several years been successfully designing and implementing novel data mining algorithms [1-3].

The strong correlation that exists between the properties of a protein and its motif sequence (Figure 1) makes the prediction of protein function possible. The core concept of any approach is to employ data mining techniques in order to construct models, based on data generated from already annotated protein sequences. A major issue in such approaches is the complexity of the problem in terms of data size and computational cost. However, the utilization of the HellasGrid Infrastructure and the EGI Grid, coupled with the close support of the Scientific Computing Center at A.U.Th., helped overcome the computational difficulties often encountered in protein classification problems.

Figure 1: [a] P00747 (Plasminogen precursor – PLMN_HUMAN) protein chain, and [b] an amino-acid pattern expressed as a regular expression

 

G-Class was the first data-mining algorithm successfully ported to the EGI Grid infrastructure [4]. The G-Class methodology follows a “divide and conquer” approach comprising three steps (Figure 2).

Figure 2: First, protein data from PROSITE, an expert-based database, are divided into multiple disjoint sets, each one preserving the original data distribution. The new sets are used as training sets, and multiple models are derived by means of standard data mining algorithms. Finally, the models are combined to produce the final classification rules, which can be used to classify a given instance and evaluate the methodology.

 

G-Class was a fairly simple approach to the protein classification problem, constructing several models simultaneously with generic data mining algorithms. Nevertheless, the results were impressive, both in terms of the speed-up ratio (ranging from 10 to 60) and the amount of data that could be processed (from 662 proteins over 27 different classes to 7027 proteins over 96 classes) (Figure 3).

Figure 3: The processing time in all cases follows an e^{-αx} model, where α depends on the size of the original dataset and x is the number of splits. The accuracy of the methodology remains fairly constant over the number of splits, with minor fluctuations owing to how the instances of the overlapping protein classes are distributed over the different dataset splits.
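Read as a worked example (the closed form and the symbol T_0 below are illustrative, not taken from the cited publications): if the processing time is modelled as T(x) \approx T_0 e^{-\alpha x}, then the expected speed-up when moving from a single split to x splits is T(1)/T(x) = e^{\alpha (x-1)}, so a measured speed-up of 10 corresponds to \alpha (x-1) = \ln 10 \approx 2.3.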

 

A second approach aimed at the automatic annotation of protein sequences. Although many tools and resources for protein annotation exist, such as the Gene Ontology project, ProDom, Pfam, and SCOP, assigning annotation terms to new, non-annotated protein sequences still requires either direct characterization in the lab or inference through similarity to already annotated sequences. At the moment, the amino acid sequence of more than 1,000,000 proteins has been obtained, yet the properties and functions of only 4% of these proteins are known. The need for a systematic way to derive clues about the properties of a protein by inspecting its amino acid sequence is therefore obvious. PROTEAS is a novel parallel methodology for protein function prediction, which predicts the annotation of an unknown protein by running its motif sequence through each model and producing similarity scores [5-6]. The methodology has been implemented so that it can effectively utilize various classification schemata, such as Gene Ontology, SCOP families, etc. (Figure 4).

Figure 4: PROTEAS workflow diagram

 

The main drawback of this methodology is that it requires a substantial amount of computational time to complete. It has been shown experimentally that the execution time needed to process the entire dataset on a single processor is prohibitively long. In order to address this issue, PROTEAS has been implemented both as a standalone and as a grid-based application. The grid-based application utilizes the MPI library for communication between distinct processes and uses the EGI Grid infrastructure in order to minimize the execution times (Figure 5).
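For illustration only, such an MPI job is typically described with a gLite JDL file and submitted from a User Interface. The attributes below are standard gLite JDL ones, but the job type, CPU count, wrapper script and file names are assumptions rather than the actual PROTEAS configuration:

# write an illustrative job description (all values are placeholders)
cat > proteas-mpi.jdl <<'EOF'
JobType       = "MPICH";
CpuNumber     = 16;
Executable    = "proteas_wrapper.sh";
InputSandbox  = {"proteas_wrapper.sh", "train.dat"};
OutputSandbox = {"std.out", "std.err"};
StdOutput     = "std.out";
StdError      = "std.err";
EOF

glite-wms-job-submit -a -o job.id proteas-mpi.jdl   # submit the job and store its identifier in job.id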

 

Figure 5: Execution times for model training

 

Moreover, the Grid provides for the seamless integration of the training process with the actual model evaluation, by allowing Gene Ontology models to be retrained concurrently from different input sources or experts while the existing models remain in use (Figure 6).

Figure 6: Execution times for a fixed Train/Test set ratio and different numbers of input files (left column), and for different ratios with a fixed number of input files (right column)

 

The application was executed on available clusters using from 4 to 16 processors in various experiment configurations (Figure 7). In all cases the accuracy of the results was very high and the overall execution time was satisfactory.

 

Figure 7: Total processing times for the classification of a single protein sequence, based on the number of CPUs used and the number of input files used as the model construction base.

 

Contact details:

  • Pericles A. Mitkas, Professor, AUTH, mitkas (at) eng.auth.gr
  • Fotis E. Psomopoulos, Research Associate, CERTH, fpsom (at) issel.ee.auth.gr
  • Scientific Computing Center, AUTH, contact (at) grid.auth.gr

 

References:

  1. Fotis E. Psomopoulos and Pericles A. Mitkas, “Bioinformatics Algorithm Development for Grid Environments”, Journal of Systems & Software, vol. 83, no. 7, pp. 1249-1257, 2010.
  2. Fotis E. Psomopoulos and Pericles A. Mitkas, “Data Mining in Proteomics using Grid Computing”, Handbook of Research on Computational Grid Technologies for Life Sciences, Biomedicine and Healthcare, Editor: Mario Cannataro, Laboratory of Bioinformatics, University Magna Graecia of Catanzaro, 88100 Catanzaro, Italy, 2009, (chapter 13, pp. 245-267), UK: IGI Global.
  3. Fotis E. Psomopoulos and Pericles A. Mitkas: “Sizing Up: Bioinformatics in a Grid Context”, 3rd Conference of the Hellenic Society For Computational Biology and Bioinformatics – HSCBB ’08, 30-31 October 2008, Thessaloniki, Greece.
  4. Helen Polychroniadou, Fotis E. Psomopoulos and Pericles A. Mitkas: “g-Class: A Divide and Conquer Application for Grid Protein Classification”, Proceedings of the 2nd ADMKD 2006: Workshop on Data Mining and Knowledge Discovery (in conjunction with ADBIS’2006: The 10th East-European Conference on Advances in Databases and Information Systems), 3-7 September 2006, Thessaloniki, Greece, pp. 121-132.
  5. Christos N. Gkekas, Fotis E. Psomopoulos and Pericles A. Mitkas, “A parallel data mining application for Gene Ontology term prediction”, 3rd EGEE User Forum, Polydome Conference Centre, 11-14 February 2008, Clermont-Ferrand, France.
  6. Christos N. Gkekas, Fotis E. Psomopoulos and Pericles A. Mitkas, “A parallel data mining methodology for protein function prediction utilizing finite state automata”, presented at the 2nd Electrical and Computer Engineering Student Conference, April 2008, Athens, Greece.

Spatial distribution of site-effects and wave propagation properties in Thessaloniki (N. Greece) using a 3D finite difference method

Scientists from the Geophysical Laboratory (Department of Geophysics, School of Geology of the Aristotle University of Thessaloniki) have studied the site effects of seismic motion in the metropolitan area of the city of Thessaloniki (Northern Greece) for various earthquake scenarios, using a 3D finite-difference modeling approach on the HellasGrid Infrastructure and the EGI, with the support of the Scientific Computing Center at A.U.Th.

The city of Thessaloniki (Northern Greece) was selected because it is located in a moderate-to-high seismicity region (Papazachos et al., 1983), with the Servomacedonian massif and Northern Aegean Trough areas exhibiting the highest seismicity (Figure 1). The city has suffered several large earthquakes throughout its history, many of them causing significant damage and human losses (Papazachos and Papazachou, 2002).


Figure 1: Map of known earthquakes with M≥3.0 that occurred in the broader area of central–northern Greece from historical times (550 BC) until 2007 (Figure after Skarlatoudis et al., 2011a).

 

An explicit 3D 4th-order velocity-stress finite-difference scheme with discontinuous spatial grid was used to produce synthetic waveforms with numerical simulations. The scheme solves the equation of motion and Hooke’s law for viscoelastic medium with rheology described by the generalized Maxwell body model. Details on the scheme, its grid and material parameterization are provided by Moczo et al. (2002), Kristek & Moczo (2003), Moczo & Kristek (2005), Moczo et al. (2007) and Kristek et al. (2009b).
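For reference, in the purely elastic limit such a scheme integrates the equation of motion and Hooke's law in the standard forms below; the viscoelastic case treated here generalizes the stress-strain relation with memory variables for the generalized Maxwell body (the notation is generic, not taken from the cited papers):

\rho \, \frac{\partial^2 u_i}{\partial t^2} = \frac{\partial \sigma_{ij}}{\partial x_j} + f_i, \qquad \sigma_{ij} = \lambda \, \delta_{ij} \, \varepsilon_{kk} + 2\mu \, \varepsilon_{ij}, \qquad \varepsilon_{ij} = \tfrac{1}{2}\left(\frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i}\right)

where u is the displacement, σ the stress, ε the strain, ρ the density, f the body force, and λ, μ the Lamé parameters.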

The computational model used for the simulations is based on the geophysical-geotechnical model and the dynamic characteristics of the soil formations proposed by Anastasiadis et al. (2001) and covers an area of 22 × 16 km² (dotted rectangle in Figure 1) (Skarlatoudis et al., 2007; 2008b; 2010).

Numerical simulations were performed for six seismic scenarios, corresponding to three different hypocentral locations and two different focal mechanisms for each one. Seismic scenarios with E-W trending normal faults are referred to as scenarios (a), while those with NW-SE trending normal faults are referred to as scenarios (b) (Figure 2). Both types of normal faults (E-W and NW-SE) are the dominant fault types in the vicinity of the broader Thessaloniki area (e.g. Vamvakaris et al., 2006). Synthetic waveforms were produced for a coarse grid of receivers, in order to study the spatial variation of site-effects on seismic motion in the broader metropolitan area of Thessaloniki (Figure 2).


Figure 2: Earthquake locations used for the examined seismic scenarios (red stars) and the focal mechanisms used for each scenario. The coarser grid of receivers used for studying the spatial variation of various waveform and site-effect parameters for the six earthquake scenarios is also shown (black diamonds). The location of site OBS, used as a reference station in computations, is denoted with a yellow triangle (Figure after Skarlatoudis et al., 2011a).

 

The application that implements the 3DFD method uses the MPI library for inter-process communication, specifically the mpich2 implementation. The compilation and execution of the code was tested on different types of machines and with different Fortran90 compilers (commercial and free). The most accurate results and the minimum execution time on each system were achieved with the commercial PathScale compiler (version 3.0) (Skarlatoudis et al., 2008a). The execution of the 3DFD code is demanding in terms of both CPU power and memory. For the aforementioned computational model the memory demands reached 20 GB, and the computation time (per model) was approximately 15 on the HellasGrid Infrastructure with the simultaneous use of 40 Intel Xeon processors.

The implemented workflow relies mainly on the gLite middleware (Figure 3). A large number of test runs were also performed to verify that the results obtained on the Grid are consistent with those obtained from other computational infrastructures. Moreover, the scaling of the code on the HellasGrid Infrastructure was examined (Skarlatoudis et al., 2008a).
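As a rough sketch of what the gLite part of this workflow looks like from a User Interface (the commands are standard gLite WMS tools; the VO name and the JDL file name are placeholders):

voms-proxy-init --voms <vo_name>               # create a short-lived VOMS proxy for the chosen VO
glite-wms-job-submit -a -o job.id 3dfd.jdl     # submit the job described in 3dfd.jdl
glite-wms-job-status -i job.id                 # monitor the job while it runs
glite-wms-job-output -i job.id                 # retrieve the output sandbox once the job has finished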


Figure 3: Schematic representation of the workflow on the HellasGrid infrastructure (Figure after Skarlatoudis et al., 2008a)

 

Various measures estimated from the 3D synthetic waveforms, such as spectral ratios, Peak Ground Velocity (PGV), cumulative kinetic energy and Housner Intensity, can provide a more detailed evaluation of site-effects and were used to probe the spatial distribution of site-effects and the ground motion variability. Figure 4 shows the ratio of the PGV of the 3D model over that of the corresponding 1D bedrock reference model [PGV3D/PGV1D], estimated for the coarser grid of receivers and for the two horizontal components of ground motion, for all scenarios studied (Skarlatoudis et al., 2011a). The relative PGV distribution observed from the six scenarios exhibits high values along the coastal zone, with the highest value (~4) found in the area near the city harbor for the E-W component. High values of relative PGV are also observed in the western parts of the model for the E-W component.


Figure 4: Spatial variation of the ratio PGV3D/PGV1D, averaged over the six seismic scenarios, for the horizontal components of ground motion (Figure after Skarlatoudis et al., 2011a)

 

The 3D wave propagation characteristics of the 4 July 1978 aftershock (M5.1) of the 20 June 1978 strong mainshock (M6.5) that struck the city of Thessaloniki were also studied using the 3D finite-difference approach. Figure 5 presents the spatial distribution of damage in the metropolitan area of Thessaloniki after the 1978 mainshock (left) (Leventakis, 2003), together with the corresponding distribution of the RotD50 ground motion measure of the PGV3D/PGV1D ratio for the frequency band 0.2 Hz-3 Hz (right) (Skarlatoudis et al., 2011b). According to Leventakis (2003), the largest damage was recorded in the city harbor area and in parts of the eastern area of Thessaloniki. Despite the various limitations of the comparison, a quite good correlation is observed between the damage distribution and the PGV spatial variation, suggesting that the role of the local site amplifications studied here is much more important than other factors (e.g. differences in source radiation pattern, non-linearity, etc.).


Figure 5: (Left) Spatial distribution of damage in Thessaloniki caused by the mainshock of June 1978 according to Leventakis (2003). (Right) Spatial distribution of the RotD50 measure of relative PGV values (amplifications) from filtered (0.2 Hz-3 Hz) horizontal components (Figure after Skarlatoudis et al., 2011b).

 

This work has been partly performed in the framework of PENED-2003 (measure 8.3, action 8.3.4 of the 3rd EU Support Programme) and the Greek-Slovak Cooperation Agreement (EPAN 2004-2006). Most of the computations were carried out on the EGI and HellasGrid infrastructures with the support of the Scientific Computing Center at the Aristotle University of Thessaloniki (AUTH). A significant part of the results presented here has been published in peer-reviewed journals (see inline references) and/or presented at national and international conferences (see references at the end of this document).

 

Contact details:

  • Papazachos C.B., Professor, AUTH, kpapaza (at) geo.auth.gr
  • Skarlatoudis A.A, Dr. Seismologist, AUTH, askarlat (at) geo.auth.gr
  • Scientific Computing Center, AUTH, contact (at) grid.auth.gr

 

References:

  1. Papazachos, B. C., Tsapanos, T. M. and Panagiotopoulos, D., (1983). The time, magnitude and space distribution of the 1978 Thessaloniki seismic sequence. The Thessaloniki northern Greece earthquake of June 20, 1978 and its seismic sequence. Technical chamber of Greece, section of central Macedonia, 117-131, 1983.
  2. Skarlatoudis A.A., C.B. Papazachos, P. Moczo, J. Kristek, N. Theodoulidis and P. Apostolidis, (2007). Evaluation of ground motions simulations for the city of Thessaloniki, Greece using the FD method: the role of site effects and focal mechanism at short epicentral distances, European Geosciences Union (EGU) General Assembly, Vienna, Austria.
  3. Skarlatoudis A.A., Korosoglou, P., Kanellopoulos, C. and Papazachos C.B, (2008a). Interaction of a 3D finite-difference application for computing synthetic waveforms with the HellasGrid infrastructure, 1st HellasGrid User Forum, Athens, Greece, 3rd EGEE User Forum, Clermont-Ferrand, France.
  4. Skarlatoudis A.A., C.B. Papazachos, P. Moczo, J. Kristek and N. Theodoulidis, (2008b). Ground motions simulations for the city of Thessaloniki, Greece, using a 3-D Finite-Difference wave propagation method, European Geosciences Union (EGU) General Assembly, Vienna, Austria and 31st European General Assembly of the European Seismological Commission, Chania, Greece.
  5. Skarlatoudis A.A., C.B. Papazachos and N. Theodoulidis, (2011b). Site response study of the city of Thessaloniki (N. Greece), for the 04/07/1978 (M5.1) aftershock, using a 3D Finite-Difference wave propagation method, accepted for publication in Bull. Seism. Soc. Am.

Evaluating the impact of climate change on European air quality

Scientists from the Laboratory of Atmospheric Physics (School of Physics) and the Department of Meteorology and Climatology (School of Geology) have simulated the regional climate and air quality over Europe for two future decades (2041-2050 and 2091-2100) using the HellasGrid Infrastructure and the EGI Grid, with the support of the Scientific Computing Center at A.U.Th.

The computational models used for these simulations are the regional climate model RegCM3 and the air quality model CAMx, which were off-line coupled in this case (Figure 1). The control simulation for the decade 1991-2000 was performed twice, once externally forced by the ERA40 reanalysis and once using the global circulation model ECHAM5, in order to investigate the importance of external meteorological forcing on air quality (Katragkou et al., 2010). The RegCM3 model was forced by ECHAM5 under the A1B emission scenario for two future time slices, namely 2041-2050 and 2091-2100. These simulations served as a theoretical experiment for evaluating the impact of climate change on air pollution (Katragkou et al., 2011).

Each decadal simulation consumed approximately 1000 CPU hours. CAMx simulations were performed on SMP machines, as the model is intrinsically parallelized with the OpenMP library. Using this feature of the CAMx model, a significant reduction in the overall computation time was achieved.
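A minimal sketch of how such an OpenMP-threaded run is typically launched on an SMP node; OMP_NUM_THREADS is the standard OpenMP control variable, while the executable and log file names below are placeholders, not the actual CAMx run scripts:

export OMP_NUM_THREADS=8          # number of OpenMP threads to use on the node
ulimit -s unlimited               # threaded codes with large private arrays often need a large stack
./camx_run > camx_2041.log 2>&1   # illustrative executable and log file names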

In terms of storage, the resources required for archiving the CAMx output files are estimated at slightly more than 5 TB.


Figure 1: A schematic outline of the RegCM3/CAMx modelling system applied in this study (from Zanis et al., 2011)

Surface ozone simulated by RegCM3/CAMx was evaluated against ground-based measurements from the European database EMEP (Zanis et al., 2011). The air quality simulations available at AUTH for the three time slices (1991-2000, 2041-2050 and 2091-2100) over Europe, with a resolution of 50 km, have been provided as air quality boundary conditions for higher-resolution air quality simulations over sub-European grids (Huszar et al., 2011).

The results suggest that the changes in summer surface ozone concentration imposed by climate change until the 2040s will be below 1 ppbv (parts per billion by volume). By the 2090s, however, changes are foreseen to be more significant, especially over south-west Europe, where the median of near-surface ozone has been found to increase by 6.2 ppbv.


Figure 2: Average summer surface ozone for the control simulation 1991-2000 (left). Differences in simulated average summer ozone between 2091-2100 and the control simulation (right). The grey color corresponds to statistically non-significant differences (from Katragkou et al., 2011)

The median summer near-surface temperature for Europe at the end of the 21st century was calculated to be 2.7 K higher than at the end of the 20th century, with a more intense temperature increase simulated for southern Europe. A prominent outcome was the decrease in cloudiness, mostly over western Europe, at the end of the 21st century, associated with an anticyclonic anomaly that favours more stagnant conditions and a weakening of the westerly winds (Katragkou et al., 2011).


Figure 3: Mean differences between the second future decade (2091-2100) and the present decade (1991-2000) for summer, in the fields of surface temperature (left) and geopotential height at 500 hPa (right). The red contours correspond to the geopotential height at 500 hPa during the control decade (from Katragkou et al., 2011).

This work has been accomplished in the framework of the FP6 European Project CECILIA (Central and Eastern Europe Climate Change Impact and Vulnerability Assessment, Contract No. 037005). The results were produced on the EGI and HellasGrid infrastructure with the support of the Scientific Computing Center at the Aristotle University of Thessaloniki (AUTH). The results of this work have been published in peer-reviewed journals (see references), presented at several national and international conferences, and have received awards from the Hellenic Meteorological Society (2008), the European Association for the Science of Air Pollution (2009) and the Research Committee of the Aristotle University of Thessaloniki (2010).

Contact details:

  • Dimitris Melas (PI), Associate Professor, AUTH, melas (at) auth.gr
  • Prodromos Zanis, Assistant Professor, AUTH, zanis (at) geo.auth.gr
  • Eleni Katragkou, Lecturer, AUTH, katragou (at) auth.gr
  • Scientific Computing Center, AUTH, contact (at) grid.auth.gr

References:

  1. Huszar P., K. Juda-Rezler, T. Halenka, H. Chervenkov, D. Syrakov, B. C. Krueger, P. Zanis, D. Melas, E. Katragkou, M. Reizer, W. Trapp, M. Belda, Potential climate change impacts on ozone and PM levels over Central and Eastern Europe from high resolution simulations, Climate Research (in press), 2011
  2. Katragkou E., P. Zanis, I. Tegoulias, D. Melas, I. Kioutsioukis, B. C. Krüger, P. Huszar, T. Halenka, S. Rauscher, Decadal regional air quality simulations over Europe in present climate: near surface ozone sensitivity to external meteorological forcing, Atmospheric Chemistry and Physics, 10, 11805-11821, 2010
  3. Katragkou E., P. Zanis, I. Kioutsioukis, I. Tegoulias, D. Melas, B.C. Krüger, E. Coppola, Future climate change impacts on summer surface ozone from regional climate-air quality simulations over Europe, J. Geophys. Res. (in press), 2011
  4. Zanis P., E. Katragkou, I. Tegoulias, A. Poupkou, D. Melas, Evaluation of near surface ozone in air quality simulations forced by a regional climate model over Europe for the period 1991-2000, Atmospheric Environment, 45, 6489-6500, 2011

Improving applications: Profiling

Among the main tools needed when developing or upgrading a program, besides a debugger, there should also be a profiler.

Profiling collects information during the execution of a program, with the ultimate aim of determining the execution time and memory requirements of its individual parts. In this way, the contribution of each function or section of the program to the total execution time can be attributed.

These results can be used for targeted optimization, so that effort is focused on the parts that actually consume a significant share of the time rather than on less time-consuming sections. Beyond optimization, this process can of course also be used for the incremental parallelization of a program, starting from the most time- and memory-demanding parts and continuing with the less demanding ones. In this way, even a single parallelized function can make a noticeable difference in execution time.

Profiling tools are generally provided by the compiler collections and suites that the programmer uses. The open-source gprof from the GNU compiler suite is used as the baseline, with most compilers producing output compatible with this particular profiler.

For the compiler to insert the necessary instrumentation into the program and generate the information described above, the program must be compiled and linked with a specific flag, usually “-pg” (depending on the compiler used). Naturally, a program that produces profiling data runs noticeably slower than normal, and in general an executable that produces profiling data is used only for this purpose.

Gprof2Dot

A fairly simple and attractive tool that presents the information produced by profiling an application in a visual way is Gprof2Dot. Gprof2Dot, which is installed on the server, represents the dependencies and the percentage of execution time of each function as a call graph, which can be saved as an image, e.g. PNG.

A typical workflow for analyzing a program is the following:
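(An indicative sequence; the program, source and output file names are placeholders, and the exact flag may differ between compilers.)

gcc -pg -o myprog myprog.c                               # compile and link with profiling instrumentation
./myprog                                                 # run as usual; a gmon.out file is written on exit
gprof ./myprog gmon.out > profile.txt                    # produce the textual flat profile and call graph
gprof2dot.py profile.txt | dot -Tpng -o callgraph.png    # render the call graph as a PNG image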

You can view the result directly from the console with the command:
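(Assuming an X-forwarded session and ImageMagick's display viewer; the viewer actually available on the server may differ.)

display callgraph.png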

A typical result for an application is a call graph in which each node corresponds to a function, annotated with the percentage of the total execution time attributed to it.

Because profiling works by sampling, different runs of the same application may yield different results. This is one reason why it is recommended to collect and analyze results from multiple runs.

Parallel Profilers

Due to the differing interconnection architectures and implementations of distributed computing resources, serial profilers cannot provide a valid picture of how resource consumption is distributed across the parts of a program that uses parallelism. Dedicated profilers for parallel applications have therefore been developed. Typical examples are the open-source Scalasca toolkit, developed at the Jülich research centre, and the commercial Allinea OPT suite. Instructions for using the Scalasca parallel profiler can be found on the TWiki.
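As an indication of how the tool is used (the wrappers below correspond to the Scalasca 1.x command-line interface, and the application name, process count and experiment directory are placeholders):

scalasca -instrument mpif90 -O2 -o app app.f90    # build the application with measurement instrumentation
scalasca -analyze mpiexec -np 16 ./app            # run the instrumented code and collect measurements
scalasca -examine epik_app_16_sum                 # inspect the generated experiment archive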

Useful links

* Sourceware.org
* CS Utah
* Linuxtopia

Installation of the x86 Open64 compilers

The x86 Open64 compiler suite from AMD, version 4.2.4, has recently been added to the A.U.Th. User Interface. The suite includes compilers for C/C++ and FORTRAN applications.

To use the compilers, simply add the relevant modulefile to your environment as follows:

module load open64

You can download the compiler documentation here (pdf).
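Once the module is loaded, the compilers can be invoked like any other toolchain. An indicative example (source file names and optimization flags are placeholders):

opencc  -O3 -o myapp myapp.c      # Open64 C compiler
openf90 -O3 -o mysim mysim.f90    # Open64 Fortran 90/95 compiler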

Installation of vbrowser on the A.U.Th. User Interface

vbrowser has been developed by the NIKHEF and SARA institutes in the Netherlands and is essentially a browser for managing users' data files stored on Grid Storage Elements.

To use it from ui.afroditi.hellasgrid.gr, you need to log in with X-forwarding enabled (more information can be found here), load the modulefile with

module load vbrowser

and then run the following command

vbrowser.sh

In the window that opens, the user can transfer data files between the local file system on ui.afroditi.hellasgrid.gr and the remote Storage Elements, and can even edit files stored on Storage Elements directly.
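For reference, a complete session might look like the following (the -X flag assumes an OpenSSH client; replace the username with your own):

ssh -X username@ui.afroditi.hellasgrid.gr   # log in with X-forwarding enabled
module load vbrowser                        # make vbrowser available in the environment
vbrowser.sh                                 # launch the graphical browser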

New build of the Scalasca profiler

The Scalasca profiler, developed at the Jülich Supercomputing Centre in Germany, is used for profiling parallel (MPI, OpenMP, etc.) applications.

A build of the profiler that works with the mpich2 library has now been installed on the A.U.Th. User Interface, so it is now possible to profile parallel applications that use this particular parallelization library.

Usage information for the Scalasca profiler can be found on the TWiki.