
PTP/designs/2.x

Revision as of 14:31, 21 September 2007 by Unnamed Poltroon (Talk) (Implementation Details)

Overview

The Parallel Tools Platform (PTP) is a portable, scalable, standards-based integrated development environment specifically suited for application development for parallel computer architectures. The PTP combines existing functionality in the Eclipse Platform, the C/C++ Development Tools, and new services specifically designed to interface with parallel computing systems, to enable the development of parallel programs suitable for a range of scientific, engineering and commercial applications.

This document provides a detailed design description of the major elements of the Parallel Tools Platform version 2.x.

Architecture

The Parallel Tools Platform provides an Eclipse-based environment for supporting the integration of development tools that interact with parallel computer systems. PTP provides pre-installed tools for launching, controlling, monitoring, and debugging parallel applications. A number of services and extension points are also provided to enable other tools to be integrated with Eclipse and fully utilize the PTP functionality.

Unlike launching a program on a traditional computer system, launching a parallel program is a complicated process. Although there is some standardization in the way parallel codes are written (such as MPI), there is little standardization in how to launch, control, and interact with a parallel program. To further complicate matters, many parallel systems employ some form of resource allocation system, such as a job scheduler, and in many cases execution of the parallel program must be managed by the resource allocation system, rather than by direct invocation by the user.

In most parallel computing environments, the parallel computer system is remote from the user's location. This necessitates that the parallel runtime environment be able to communicate with the parallel computer system remotely.

The PTP architecture has been designed to address these requirements. The following diagram provides an overview of the overall architecture.

Ptp20 arch.png

The architecture can be roughly divided into three major components: the runtime platform, the debug platform, and tool integration services. These components are defined in more detail in the following sections.

Runtime Platform

The runtime platform comprises those elements relating to the launching, controlling, and monitoring of parallel applications. It consists of the following elements in the architecture diagram:

  • runtime model
  • runtime views
  • runtime controller
  • proxy server
  • launch

PTP follows a model-view-controller (MVC) design pattern. The heart of the architecture is the runtime model, which provides an abstract representation of the parallel computer system and the running applications. Runtime views provide the user with visual feedback of the state of the model, and provide the user interface elements that allow jobs to be launched and controlled. The runtime controller is responsible for communicating with the underlying parallel computer system, translating user commands into actions, and keeping the model updated. All of these elements are Eclipse plugins.

The proxy server is a small program that typically runs on the remote system front-end, and is responsible for performing local actions in response to commands sent by the proxy client. The results of the actions are returned to the client in the form of events. The proxy server is usually written in C.

The final element is launch, which is responsible for managing Eclipse launch configurations, and translating these into the appropriate actions required to initiate the launch of an application on the remote parallel machine.

Each of these elements is described in more detail below.

Runtime Model

Overview

The PTP runtime model is a hierarchical attributed parallel system model that represents the state of a remote parallel system at any particular time. It is the model part of the MVC design pattern. The model is attributed because each model element can contain an arbitrary list of attributes that represent characteristics of the real object. For example, an element representing a compute node could contain attributes describing the hardware configuration of the node. The structure of the runtime model is shown in the diagram below.

Model20.png

Each of the model elements in the hierarchy is a logical representation of a system component. Particular operations and views provided by PTP are associated with each type of model element. Since machine hardware and architectures can vary widely, the model does not attempt to define any particular physical arrangement. It is left to the implementor to decide how the model elements map to the physical machine characteristics.

universe 
This is the top-level model element, and does not correspond to a physical object. It is used as the entry point into the model.
resource manager 
This model element corresponds to an instance of a resource management system on a remote computer system. Since a computer system may provide more than one resource management system, there may be multiple resource managers associated with a particular physical computer system. For example, host A may allow interactive jobs to be run directly using the MPICH2 runtime system, or batched via the LSF job scheduler. In this case, there would be two logical resource managers: an MPICH2 resource manager running on host A, and an LSF resource manager running on host A.
machine 
This model element provides a grouping of computing resources, and will typically be where the resource management system is accessible from, such as the place a user would normally log in. Many configurations provide such a front-end machine.
queue 
This model element represents a logical queue of jobs waiting to be executed on a machine. There will typically be a one-to-one mapping between a queue model element and a resource management system queue. Systems that don't have the notion of queues should map all jobs onto a single default queue.
node 
This model element represents some form of computational resource that is responsible for executing an application program, but where it is not necessary to provide any finer level of granularity. For example, a cluster node with multiple processors would normally be represented as a single node element, while an SMP machine might represent each physical processor as a node element.
job 
This model element represents an instance of an application that is to be executed.
process 
This model element represents an execution unit using some computational resource. There is an implicit one-to-many relationship between a node and a process. For example, a process would be used to represent a Unix process. Finer granularity (i.e. threads of execution) is managed by the debug model.

The only model element that is provided when PTP is first initialized is the universe. Users must manually create resource manager elements and associate each with a real host and resource management system. The remainder of the model hierarchy is then populated and updated by the resource manager implementation.
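The hierarchy described above can be illustrated with a minimal, self-contained sketch. The class and method names here are hypothetical, chosen for illustration only; they are not the actual PTP model interfaces:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the PTP runtime model hierarchy (not the real API).
class ModelElement {
    final String id;    // mandatory unique ID attribute
    final String name;  // mandatory display name attribute
    final List<ModelElement> children = new ArrayList<>();

    ModelElement(String id, String name) {
        this.id = id;
        this.name = name;
    }

    ModelElement add(ModelElement child) {
        children.add(child);
        return child;
    }

    // Count this element and everything below it.
    int size() {
        int n = 1;
        for (ModelElement c : children) n += c.size();
        return n;
    }
}

public class ModelSketch {
    public static ModelElement buildSampleModel() {
        ModelElement universe = new ModelElement("0", "universe");
        // Two logical resource managers on the same host (cf. the MPICH2/LSF example).
        ModelElement mpich2 = universe.add(new ModelElement("1", "MPICH2@hostA"));
        ModelElement lsf = universe.add(new ModelElement("2", "LSF@hostA"));
        ModelElement machine = mpich2.add(new ModelElement("3", "hostA"));
        machine.add(new ModelElement("4", "node0"));
        machine.add(new ModelElement("5", "node1"));
        lsf.add(new ModelElement("6", "batch-queue"));
        return universe;
    }

    public static void main(String[] args) {
        ModelElement universe = buildSampleModel();
        System.out.println("elements: " + universe.size());
    }
}
```

Note how the same physical host appears under two resource managers, matching the host A example above: the model captures logical, not physical, structure.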

Attributes

Each element in the model can contain an arbitrary number of attributes. An attribute is used to provide additional information about the model element. An attribute consists of a key and a value. The key is a unique ID that refers to an attribute definition, which can be thought of as the type of the attribute. The value is the value of the attribute.

An attribute definition contains meta-data about the attribute, and is primarily used for data validation and displaying attributes in the user interface. This meta-data includes:

ID 
The attribute definition ID.
type 
The type of the attribute. Currently supported types are ARRAY, BOOLEAN, DATE, DOUBLE, ENUMERATED, INTEGER, and STRING.
name 
The short name of the attribute. This is the name that is displayed in UI property views.
description 
A description of the attribute that is displayed when more information about the attribute is requested (e.g. tooltip popups).
default 
The default value of the attribute. This is the value assigned to the attribute when it is first created.
display 
A flag indicating if this attribute is for display purposes. This provides a hint to the UI in order to determine if the attribute should be displayed.
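The attribute definition meta-data above can be sketched as a small, self-contained class. The names below are hypothetical illustrations, not the real PTP attribute API:

```java
import java.util.regex.Pattern;

// Illustrative sketch of an attribute definition used for validation
// (hypothetical classes; the real PTP attribute API differs in detail).
enum AttrType { ARRAY, BOOLEAN, DATE, DOUBLE, ENUMERATED, INTEGER, STRING }

class AttributeDefinition {
    final String id;           // unique attribute definition ID
    final AttrType type;       // type used for validation
    final String name;         // short name shown in UI property views
    final String description;  // longer text for tooltips
    final String defaultValue; // value assigned when the attribute is created
    final boolean display;     // hint: should the UI show this attribute?

    AttributeDefinition(String id, AttrType type, String name,
                        String description, String defaultValue, boolean display) {
        this.id = id; this.type = type; this.name = name;
        this.description = description; this.defaultValue = defaultValue;
        this.display = display;
    }

    // Validate a candidate value against the definition's type.
    boolean isValid(String value) {
        switch (type) {
            case BOOLEAN: return value.equals("true") || value.equals("false");
            case INTEGER: return Pattern.matches("-?\\d+", value);
            case DOUBLE:  return Pattern.matches("-?\\d+(\\.\\d+)?", value);
            default:      return value != null;
        }
    }
}

public class AttrSketch {
    public static void main(String[] args) {
        AttributeDefinition numNodes = new AttributeDefinition(
            "numNodes", AttrType.INTEGER, "Nodes",
            "Number of nodes known by this machine", "0", true);
        System.out.println(numNodes.isValid("128"));
        System.out.println(numNodes.isValid("many"));
    }
}
```

The key/value pair stored on a model element would then reference such a definition by its ID, with the definition supplying validation and display hints.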

Pre-defined Attributes

All model elements have at least two mandatory attributes. These attributes are:

id 
This is a unique ID for the model element.
name 
This is a distinguishing name for the model element. It is primarily used for display purposes and does not need to be unique.

In addition, model elements have different sets of optional attributes. These attributes are shown in the following table:

resource manager 
  • rmState (enum): Enumeration representing the state of the resource manager. Possible values: STARTING, STARTED, STOPPING, STOPPED, SUSPENDED, ERROR
  • rmDescription (string): Text describing this resource manager
  • rmType (string): Resource manager class
  • rmID (string): Unique identifier for this resource manager. Used for persistence
machine 
  • machineState (enum): Enumeration representing the state of the machine. Possible values: UP, DOWN, ALERT, ERROR, UNKNOWN
  • numNodes (int): Number of nodes known by this machine
queue 
  • queueState (enum): Enumeration representing the state of the queue. Possible values: NORMAL, COLLECTING, DRAINING, STOPPED
node 
  • nodeState (enum): Enumeration representing the state of the node. Possible values: UP, DOWN, ERROR, UNKNOWN
  • nodeExtraState (enum): Enumeration representing additional state of the node, used to reflect control of the node by a resource management system. Possible values: USER_ALLOC_EXCL, USER_ALLOC_SHARED, OTHER_ALLOC_EXCL, OTHER_ALLOC_SHARED, RUNNING_PROCESS, EXITED_PROCESS, NONE
  • nodeNumber (int): Zero-based index of the node
job 
  • jobState (enum): Enumeration representing the state of the job. Possible values: PENDING, STARTED, RUNNING, TERMINATED, SUSPENDED, ERROR, UNKNOWN
  • jobSubId (string): Job submission ID
  • queueId (string): Queue ID
  • jobNumProcs (int): Number of processes
  • execName (string): Name of executable
  • execPath (string): Path to executable
  • workingDir (string): Working directory
  • progArgs (array): Array of program arguments
  • env (array): Array containing environment variables
  • debugExecName (string): Debugger executable name
  • debugExecPath (string): Debugger executable path
  • debugArgs (array): Array containing debugger arguments
  • debug (bool): Debug flag
process 
  • processState (enum): Enumeration representing the state of the process. Possible values: STARTING, RUNNING, EXITED, EXITED_SIGNALLED, STOPPED, ERROR, UNKNOWN
  • processPID (int): Process ID of the process
  • processExitCode (int): Exit code of the process on termination
  • processSignalName (string): Name of the signal that caused process termination
  • processIndex (int): Zero-based index of the process
  • processStdout (string): Standard output from the process
  • processStderr (string): Error output from the process
  • processNodeId (string): ID of the node that this process is running on

Events

The runtime model provides an event notification mechanism to allow other components or tools to receive notification whenever changes occur to the model. Each model element provides two sets of event interfaces:

element events 
These events relate to the model element itself. Two main types of events are provided: a change event, which is triggered when an attribute on the element is added, removed, or changed; and on some elements, an error event that indicates some kind of error has occurred.
child events 
These events relate to children of the model element. Three main types of events are provided for each child: a change event, which mirrors the element change event; a new event that is triggered when a new child element is added; and a remove event that is triggered when a child element is removed. Elements that have more than one type of child (e.g. a resource manager has machine and queue children) will provide separate interfaces for each type of child.

Event notification is activated by registering a listener on the model element. Once the listener has been registered, events will begin to be received immediately.
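The listener mechanism can be sketched as follows. The interface and class names are invented for illustration; the real PTP listener interfaces differ:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the element event pattern (hypothetical API).
interface ElementListener {
    void attributeChanged(String elementId, String attrId, String newValue);
}

class ObservableElement {
    final String id;
    private final List<ElementListener> listeners = new ArrayList<>();

    ObservableElement(String id) { this.id = id; }

    // Registering a listener activates event notification immediately.
    void addElementListener(ElementListener l) { listeners.add(l); }

    void setAttribute(String attrId, String value) {
        // A change event fires whenever an attribute is added or changed.
        for (ElementListener l : listeners) {
            l.attributeChanged(id, attrId, value);
        }
    }
}

public class EventSketch {
    static final List<String> log = new ArrayList<>();

    public static void main(String[] args) {
        ObservableElement node = new ObservableElement("node0");
        node.addElementListener((el, attr, val) ->
            log.add(el + "." + attr + "=" + val));
        node.setAttribute("nodeState", "UP");
        System.out.println(log);
    }
}
```

A tool integrating with PTP would register such a listener on, say, a machine element to be notified when node children change state.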

Implementation Details

Runtime Views

Overview

The runtime views serve two functions: they provide a means of observing the state of the runtime model, and they allow the user to interact with the parallel environment. Access to the views is managed using an Eclipse perspective. A perspective is a means of grouping views together so that they can provide a cohesive user interface. The perspective used by the runtime platform is called the PTP Runtime perspective.

There are currently four main views provided by the runtime platform:

resource manager view 
This view is used to manage the lifecycle of resource managers. Each resource manager model element is displayed in this view. A user can add, delete, start, stop, and edit resource managers using this view. Different colored icons are used to indicate the different states of a resource manager.
machines view 
This view shows information about the machine and node elements in the model. The left part of the view displays all the machine elements in the model (regardless of resource manager). When a machine element is selected, the right part of the view displays the node elements associated with that machine. Different machine and node element icons are used to reflect the different state attributes of the elements.
jobs view 
This view shows information about the job and process elements in the model. The left part of the view displays all the job elements in the model (regardless of queue or resource manager). When a job element is selected, the right part of the view displays the process elements associated with that job. Different job and process element icons are used to reflect the different state attributes of the elements.
process detail view 
This view shows more detailed information about individual processes in the model. Various parts of the view are used to display attributes that are obtained from the process model element.

Implementation Details

Runtime Controller

The runtime controller is the controller part of the MVC design pattern, and is implemented using a layered architecture, where each layer communicates with the layers above and below using a set of defined APIs. The following diagram shows the layers of the runtime controller in more detail.

The resource manager is an abstraction of a resource management system, such as a job scheduler, which is typically responsible for controlling user access to compute resources via queues. The resource manager is responsible for updating the runtime model. The resource manager communicates with the runtime system, which is an abstraction of a parallel computer system's runtime system (that part of the software stack that is responsible for process allocation and mapping, application launch, and other communication services.) The runtime system, in turn, communicates with the remote proxy runtime, which manages proxy communication via some form of remote service. The proxy runtime client is used to map the runtime system interface onto a set of commands and events that are used to communicate with a physically remote system. The proxy runtime client makes use of the proxy client to provide client-side communication services for managing proxy communication.
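The layering described above can be sketched as a chain of pass-through interfaces. These names are hypothetical illustrations of the structure, not the actual PTP classes:

```java
// Illustrative sketch of the runtime controller's layering (hypothetical interfaces).
interface RuntimeSystem {
    void submitJob(String jobSubId);
}

// The resource manager sits on top, driving the runtime system and
// updating the runtime model in response to events (model update elided here).
class ResourceManagerLayer {
    private final RuntimeSystem rts;

    ResourceManagerLayer(RuntimeSystem rts) { this.rts = rts; }

    void submit(String jobSubId) { rts.submitJob(jobSubId); }
}

// The proxy runtime client maps the runtime system interface onto
// wire commands destined for the remote proxy server.
class ProxyRuntimeClient implements RuntimeSystem {
    final java.util.List<String> sent = new java.util.ArrayList<>();

    @Override public void submitJob(String jobSubId) {
        sent.add("SUBMITJOB " + jobSubId); // forwarded via the proxy client
    }
}

public class LayerSketch {
    public static void main(String[] args) {
        ProxyRuntimeClient proxy = new ProxyRuntimeClient();
        new ResourceManagerLayer(proxy).submit("job_1");
        System.out.println(proxy.sent);
    }
}
```

The value of the layering is that each layer can be replaced independently: a resource manager that talks to a local runtime can substitute a different RuntimeSystem implementation without touching the layers above.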

Each of these layers is described in more detail in the following sections.

Resource Manager

Overview

A resource manager is the main point of control between the runtime model, runtime views, and the underlying parallel computer system. The resource manager is responsible for maintaining the state of the model, and allowing the user to interact with the parallel system.

There are a number of pre-defined types of resource managers that are known to PTP. These resource manager types are provided as plugins that understand how to communicate with a particular kind of resource management system. When a user creates a new resource manager (through the resource manager view), an instance of one of these types is created. There is a one-to-one correspondence between a resource manager and a resource manager model element. When a new resource manager is created, a new resource manager model element is also created.

A resource manager only maintains the part of the runtime model below its corresponding resource manager model element. It has no knowledge of other parts of the model.

Implementation Details

Runtime System

Overview

A runtime system is an abstraction layer that is designed to allow implementers more flexibility in how the plugin communicates with the underlying hardware. When using the implementation that is provided with PTP, this layer does little more than pass commands and events between the layers above and below.

Implementation Details

Remote Proxy Runtime

Overview

The remote proxy runtime is used when communicating to a proxy server that is remote from the host running Eclipse. This component provides a number of services that relate to defining and establishing remote connections, and launching the proxy server on a remote computer system. It does not participate in actual communication between the proxy client and proxy server.

Implementation Details

Proxy Runtime Client

Overview

The proxy runtime client layer is used to define the commands and events that are specific to runtime communication with the proxy server (such as job submission, termination, etc.). The implementation details of this protocol are described in the Resource Manager Proxy Protocol document.

Implementation Details

Proxy Client

Overview

The proxy client provides a range of low level proxy communication services that can be used to implement higher level proxy protocols. These services include:

  • An abstract command that can be extended to provide additional commands.
  • An abstract event that can be extended to provide additional events.
  • Connection lifecycle management.
  • Concrete commands and events for protocol initialization and finalization.
  • Command/event serialization and de-serialization.

The proxy client is currently used by both the proxy runtime client and the proxy debug client.
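The abstract command service can be sketched as follows. The class names and the wire format here are invented for illustration; the actual proxy protocol encoding differs:

```java
// Illustrative sketch of extending an abstract proxy command and serializing it
// (hypothetical classes; the real proxy protocol wire format differs).
abstract class ProxyCommand {
    final int transactionId;

    ProxyCommand(int transactionId) { this.transactionId = transactionId; }

    abstract String name();
    abstract String[] arguments();

    // Serialize as: <name>:<transaction id>:<arg>:<arg>...
    String serialize() {
        StringBuilder sb = new StringBuilder(name()).append(':').append(transactionId);
        for (String arg : arguments()) sb.append(':').append(arg);
        return sb.toString();
    }
}

// A concrete command such as a runtime proxy protocol might define.
class SubmitJobCommand extends ProxyCommand {
    final String execPath;
    final int numProcs;

    SubmitJobCommand(int tid, String execPath, int numProcs) {
        super(tid);
        this.execPath = execPath;
        this.numProcs = numProcs;
    }

    @Override String name() { return "SUBMITJOB"; }

    @Override String[] arguments() {
        return new String[] { "execPath=" + execPath, "jobNumProcs=" + numProcs };
    }
}

public class ProxySketch {
    public static void main(String[] args) {
        ProxyCommand cmd = new SubmitJobCommand(42, "/home/user/a.out", 4);
        System.out.println(cmd.serialize());
    }
}
```

Both the proxy runtime client and the proxy debug client build their higher-level protocols by extending abstract command and event classes in this manner.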

Implementation Details

Proxy Server

Overview

The proxy server is the only part of the PTP runtime architecture that runs outside the Eclipse JVM. It has two main jobs: to receive commands from the runtime controller and perform actions in response to those commands, and to keep the runtime controller informed of activities that are taking place on the parallel computer system.

The commands supported by the proxy server are the same as those defined in the Resource Manager Proxy Protocol. Once an action has been performed, the results (if any) are collected into one or more events, and these are transmitted back to the runtime controller. The proxy server must also monitor the status of the system (hardware, configuration, queues, running jobs, etc.) and report "interesting events" back to the runtime controller.

Implementation Details

Known Implementations

The following table lists the known implementations of proxy servers.

PTP Name    | Description                                 | Batch | Language | Operating Systems | Architectures
ORTE        | Open Runtime Environment (part of Open MPI) | No    | C99      | Linux, MacOS X    | x86, x86_64, ppc
MPICH2      | MPICH2 mpd runtime                          | No    | Python   | Linux, MacOS X    | x86, x86_64, ppc
PE          | IBM Parallel Environment                    | No    | C99      | AIX               | ppc
LoadLeveler | IBM LoadLeveler job-management system       | Yes   | C99      | AIX               | ppc

Launch

Overview

The launch component is responsible for collecting the information required to launch an application, and passing this to the runtime controller to initiate the launch. Both normal and debug launches are handled by this component. Since the parallel computer system may be employing a batch job system, the launch may not result in immediate execution of the application. Instead, the application may be placed in a queue and only executed when the requisite resources become available.

The Eclipse launch configuration system provides a user interface that allows the user to create launch configurations that encapsulate the parameters and environment necessary to run or debug a program. The PTP launch component extends this functionality to allow specification of the resources necessary for job submission. The available resources depend on the particular resource management system in place on the parallel computer system. A customizable launch page allows arbitrary types of resources to be selected by the user. These resources are passed to the runtime controller in the form of attributes, which are in turn passed on to the resource management system.

Normal Launch

Launch.png

Debug Launch

Launch debug.png

Implementation Details

The launch configuration is implemented in the org.eclipse.ptp.launch package, which is built on top of the Eclipse launch configuration framework. The same launch logic is used whether the user starts from "Run As..." or "Debug...": a debugger launch is distinguished from a normal launch by the launch mode flag.

if (mode.equals(ILaunchManager.DEBUG_MODE)) {
    // debugger launch
} else {
    // normal launch
}


The steps of the launch operation are sketched below.

First, a set of parameters/attributes is collected. For example, debugger-related parameters such as the host, port, and debugger path are collected into the dbgArgs array.

Then, the resource manager is called upon to submit the job:

final IResourceManager rm = getResourceManager(configuration);
IPJob job = rm.submitJob(attrMgr, monitor);

Please refer to AbstractParallelLaunchConfigurationDelegate for more details.


Debug Platform

The debug platform comprises those elements relating to the debugging of parallel applications. It consists of the following elements in the architecture diagram:

  • debug model
  • debug views
  • PDI
  • SDM proxy
  • proxy debug
  • proxy client
  • scalable debug manager

The debug model provides a representation of a job and its associated processes being debugged. The debug views allow the user to interact with the debug model, and control the operation of the debugger. The parallel debug interface (PDI) is an abstract interface that defines how to interact with the job being debugged. Implementations of this interface provide the concrete classes necessary to interact with a real application. The SDM proxy provides an implementation of PDI that communicates via a proxy running on a remote machine. The proxy debug layer implements a set of commands and events that are used for this communication, and that use the proxy client communication services (the same proxy client used by the runtime platform). All of these elements are implemented as Eclipse plugins.

The scalable debug manager is an external program that runs on the remote system. It manages the debugger communication between the proxy client and low-level debug engines that control the debug operations on the application processes.

Each of these elements is described in more detail below.

Debug Model

Debug User Interface

Debug Controller

Debug API

Scalable Debug Manager

Tool Integration Services
