User training workshops: Heavy User Communities leading the way

The EGI User Forum 2011 in Vilnius has a packed programme offering a diverse range of workshops, demonstrations, presentations and tutorials with a focus on end-user training. With more than twenty-three workshops and tutorials on offer over four days, the EGI-InSPIRE Heavy User Communities (HUCs) will be playing their part.

Here is an overview of the HUC training workshops:

SHIWA platform

The Life Sciences community will introduce the SHIWA platform, a multi-system workflow execution platform and interoperability solution, supporting Askalon, MOTEUR, P-GRADE and Triana workflows. Workflow environments shield the end-users from the details of the grid infrastructure. The examples used at this training event are taken from the Life Sciences domain, but the tools presented here have much wider applications and should be of interest to all user communities.

  • Location: Theta; Time: 11-Apr-2011 @14:00; Duration: 03h30'

Using a StratusLab cloud infrastructure

The StratusLab open-source cloud distribution allows resource centres to expose their computing resources as an ‘Infrastructure as a Service’ (IaaS) type cloud.

This tutorial presents the main StratusLab features and how they can be used by system administrators and scientists alike. Participants will learn how StratusLab-based infrastructures can be integrated with EGI, and how the cloud services complement grid services. Practical exercises will teach the participants how to launch virtual machines, customise their computing environment, share those environments, manage virtual disks, and define complete services.

Participants will be provided with credentials to access a StratusLab cloud infrastructure and must bring a laptop with Python (2.6+), Java (1.6+), and an SSH client installed.
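Attendees can verify the laptop prerequisites in advance. A minimal sketch (written for Python 3 for convenience, although the tutorial itself only requires 2.6+; the check names are illustrative and not part of StratusLab):

```python
import shutil
import subprocess
import sys

def check_prerequisites():
    """Return a dict mapping each tutorial prerequisite to True/False."""
    results = {}
    # Python 2.6+ (any Python 3 interpreter also satisfies this).
    results["python"] = sys.version_info >= (2, 6)
    # An ssh client somewhere on the PATH.
    results["ssh"] = shutil.which("ssh") is not None
    # A Java runtime on the PATH; 'java -version' exits 0 when present.
    java = shutil.which("java")
    if java is None:
        results["java"] = False
    else:
        results["java"] = subprocess.call(
            [java, "-version"],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL) == 0
    return results

if __name__ == "__main__":
    for name, ok in sorted(check_prerequisites().items()):
        print("%-6s %s" % (name, "OK" if ok else "MISSING"))
```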

  • Location: Gamma; Time: 12-Apr-2011 @11:00; Duration: 01h30'

Earth Science Data Processing Tools and Applications

The Earth Science community has a rich and extensive repository of data stored outside EGI, and access to this data during job execution is a mandatory requirement. The community is also interested in using the OPeNDAP protocol and the Hyrax Data Server. Hyrax offers many features beyond high-performance access to distributed datasets, such as an extensible component-based architecture, multiple data representations, and static or dynamic THREDDS catalogues. Given the many different technologies, data centres, standards and pseudo-standards involved, however, no single general solution seems attainable. This talk should appeal to anyone facing these or similar issues.
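The appeal of OPeNDAP is that subsetting happens server-side: the client appends a response suffix and a constraint expression to the dataset URL, so only the requested slice crosses the network. A small sketch of building such a request URL (the server address and dataset name are hypothetical, for illustration only):

```python
def opendap_url(base, dataset, variable, slices, response="dods"):
    """Build an OPeNDAP DAP2 request URL with a constraint expression.

    The suffix selects the response type (.dods for binary data,
    .das/.dds for metadata); the constraint expression selects a
    variable and index ranges written as [start:stride:stop].
    """
    constraint = variable + "".join(
        "[%d:%d:%d]" % (start, stride, stop) for start, stride, stop in slices)
    return "%s/%s.%s?%s" % (base.rstrip("/"), dataset, response, constraint)

# Hypothetical Hyrax server and dataset, for illustration only.
url = opendap_url("http://example.org/opendap", "sst_monthly.nc",
                  "sst", [(0, 1, 11), (10, 1, 20)])
print(url)
# http://example.org/opendap/sst_monthly.nc.dods?sst[0:1:11][10:1:20]
```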

  • Location: Zeta; Time: 12-Apr-2011 @16:00; Duration: 00h30'

Shared Services Tools based on the Ganga job definition

Ganga is a user-targeted job management tool designed to provide a homogeneous environment for processing data on a variety of technology ‘back-ends’. Initially developed within the high-energy physics (HEP) domain, Ganga has been adopted by a wide variety of other user communities as their default analysis and task-management system. The modular nature of Ganga means that communities can easily, if desired, develop their own suite of tools independent of both the core code and those of other communities. This presentation will use case studies to illustrate the ease with which non-LHC communities (for example, medical research) have adopted Ganga as their chief job-submission tool.
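The pluggable back-end idea at the heart of Ganga can be sketched in a few lines. This is a toy simplification, not Ganga's real API: the class names echo Ganga's (`Job`, `Local`, `LCG`), but the behaviour here is purely illustrative.

```python
class Backend:
    """Toy stand-in for Ganga's pluggable execution back-ends."""
    def submit(self, job):
        raise NotImplementedError

class Local(Backend):
    """Runs the job on the local machine (toy version)."""
    def submit(self, job):
        return "ran %r on the local machine" % job.executable

class LCG(Backend):
    """Sends the job to the grid (toy version)."""
    def submit(self, job):
        return "submitted %r to the grid via LCG" % job.executable

class Job:
    """One job definition, runnable on any back-end: the Ganga idea."""
    def __init__(self, executable, backend):
        self.executable = executable
        self.backend = backend
    def submit(self):
        return self.backend.submit(self)

# The same job description targets different technologies by swapping
# only the back-end object; the rest of the user's script is unchanged.
for backend in (Local(), LCG()):
    print(Job("analysis.sh", backend).submit())
```

This separation is what lets a community add its own back-end (or application plugin) without touching the core code or other communities' tools.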

  • Location: Zeta; Time: 13-Apr-2011 @12:00; Duration: 30'

Experiment Dashboard

The Experiment Dashboard applications for infrastructure monitoring are widely used by the LHC virtual organisations for computing shifts and site-commissioning activities. The LHC Experiment Dashboard consists of:

  • Site Usability Dashboard, which uses tailored VO tests within the existing Site Availability Monitoring (SAM) system;

  • Site Status Board, which allows VOs to construct customised monitoring views;

  • SiteView, a single point of entry that lets site administrators understand how their site is used by the LHC VOs, detect potential problems, and ensure effective site performance.

The Dashboard applications are essential LHC computing operations tools. However, they are generic and can be adapted to other communities' needs. The talk will give an overview of the Dashboard applications, highlighting the possibility of exploiting them outside the LHC domain.

  • Location: Lambda; Time: 13-Apr-2011 @16:00; Duration: 01h30'

MPI Hands on training

The Message Passing Interface (MPI) standards and their implementations are currently the most prevalent frameworks on which parallel applications are built. A significant obstacle to exploiting MPI applications on the grid is the inherently heterogeneous nature of its environments: different MPI implementations, system interconnects or job managers can be found at different resource centres, so the end-user needs some ‘a priori’ knowledge about the resources in order to run an application. MPI-Start offers a unique and stable interface for executing parallel applications at gLite-based grid sites. It aims to hide the differences and complexities of the heterogeneous systems that compose a grid infrastructure by providing a high-level abstraction layer. This presentation introduces the basic concepts of MPI, together with a detailed description of MPI-Start and how to use it.
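To give a flavour of what the hands-on covers: a gLite job description for an MPI job typically requests multiple CPUs and selects sites that advertise MPI-Start support via runtime-environment tags. A hedged sketch along the lines of the common EGEE/EGI MPI job pattern (the wrapper script and application file names are illustrative):

```
JobType       = "Normal";
CpuNumber     = 16;
Executable    = "mpi-start-wrapper.sh";
Arguments     = "my-mpi-app OPENMPI";
InputSandbox  = {"mpi-start-wrapper.sh", "my-mpi-app.c"};
StdOutput     = "mpi.out";
StdError      = "mpi.err";
OutputSandbox = {"mpi.out", "mpi.err"};
Requirements  = Member("MPI-START", other.GlueHostApplicationSoftwareRunTimeEnvironment)
             && Member("OPENMPI",  other.GlueHostApplicationSoftwareRunTimeEnvironment);
```

The wrapper invokes MPI-Start, which then works out the site's local MPI implementation and scheduler details on the user's behalf.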

The tutorial is suitable for all users who wish to adapt their MPI-based applications for use on the grid with gLite.

  • Location: Iota; Time: 14-Apr-2011 @11:00; Duration: 01h30'

Kepler workflow engine

Kepler is a free and open-source workflow engine used extensively by the FUSION community. It is designed to help scientists and developers easily create, execute, share and reuse their models across the scientific and engineering domains. In particular, Kepler includes components that integrate with different middleware stacks (e.g. gLite or UNICORE). Kepler workflows can be decomposed into smaller parts, allowing complex tasks to be divided into much simpler ones. This gives workflow designers the ability to build reusable, modular sub-workflows, which can be saved and applied to other workflows. This introductory tutorial should be of interest to all users keen to explore Kepler's powerful capabilities. It will begin by showing how to use Kepler to build basic workflows; use relation paths and synchronisation; and create control structures such as "if-else" and loops. Finally, job submission, monitoring, and data management will complete the tutorial.
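The sub-workflow idea can be illustrated without Kepler itself (which is a graphical tool). In this toy sketch, a "workflow" is just a named sequence of steps, and a sub-workflow is reused as an ordinary step inside a larger one; all names are illustrative:

```python
def make_workflow(name, steps):
    """A toy composable workflow: a named sequence of callables.

    Mimics only the composition idea: a saved sub-workflow can be
    dropped into a larger workflow as a single reusable component.
    """
    def run(value):
        for step in steps:
            value = step(value)
        return value
    run.name = name
    return run

# A reusable sub-workflow...
preprocess = make_workflow("preprocess", [str.strip, str.lower])

# ...composed into a larger pipeline alongside ordinary steps.
pipeline = make_workflow("pipeline", [preprocess, lambda s: s.split()])

print(pipeline("  Plasma Heating Run  "))
# ['plasma', 'heating', 'run']
```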

  • Location: Zeta; Time: 14-Apr-2011 @14:00; Duration: 03h30'

John Walsh, Grid-Ireland Operations Centre, Trinity College Dublin