
About ICE

Revision as of 10:10, 19 October 2015 by Billingsjj.ornl.gov (Project History)

Introduction

Many new, high-performance nuclear energy modeling and simulation tools are under development in the United States, and many existing projects are moving to high-performance computing. This shift to state-of-the-art modeling and simulation will not only consume computing resources at an unprecedented scale but will also generate extremely large amounts of data in extraordinary detail. Domain scientists will find themselves managing two kinds of complexity at once: the complexity of their own problem domain and the complexity of the high-performance computational tools themselves.

We believe that the computational science community should develop integrated tools for coupling, working with and analyzing data from simulation codes instead of leaving domain scientists and students to fend for themselves and cobble together a “working” solution. While many projects offer some set of tools to work with their codes, to launch jobs remotely or to set up input files, these tools are rarely integrated in a holistic way that provides an easy, common and high-productivity environment for users. For that matter, most remote jobs are launched on “the machine in the other room,” but that certainly will not work for larger problems that require the Leadership Class computing resources found only at U.S. National Laboratories or similar facilities. Overcoming the challenges of launching on these machines, and even on public or private “clouds,” is left to the user.

We envision an “integrated computational environment” that provides an integrated set of capabilities for working with physics simulators: creating input files, managing and analyzing data, launching jobs and coupling codes through data mapping. We are currently developing this system – the Eclipse Integrated Computational Environment (ICE) – at the Oak Ridge National Laboratory.

The Eclipse Integrated Computational Environment

The initial designs for ICE were born from a large effort early in the Nuclear Energy Advanced Modeling and Simulation program (NEAMS, DOE NE-71) to develop a single software framework for all of the NEAMS codes. The design team noticed that there was a large set of required functionality missing from the software framework that also needed to be developed: tools to make the framework usable! These tools included, amongst others, the ability to launch the framework remotely, construct its input data and analyze its output. Eventually the focus of NEAMS changed and it was determined that many frameworks would be developed, but the need for a single, integrated computational environment remained.

The value that integration and coordination provides to analysts, students and everyday users cannot be overstated. For example, ICE will assist in code development and scientific understanding and discovery in the nuclear energy research community, which is currently working very hard to develop modeling and simulation capabilities. On the other hand, for users in industry (nuclear or otherwise) who deploy large systems that are wholly or partially designed with computational techniques, ICE will decrease training times and re-analysis turn-around times (re-licensing), and greatly improve initial evaluation and analysis of these systems. Regulators, whose primary concern is the safety of the public, would now have access to the same simulation codes as researchers and industry analysts, and vice-versa! Experimentalists or facility operators can compare stored data against new data or even against data from simulations.

Features and Use Cases

The design of ICE is based on requirements and features gathered from stakeholders. The requirements elicitation process for ICE started in February 2009 and all of the requirements are re-evaluated at the beginning of each development iteration. Interviews, surveys, informal discussions and literature reviews were all used to develop the initial set of functional and non-functional requirements for ICE. The interviews and surveys were very carefully planned and executed to remove as much bias as possible from the questions. A high level view of the features and use cases of ICE is provided below.

Features

The feature set of ICE is broad and covers both domain and computer science concerns. The features were initially developed within the NEAMS community and the requests of CASL were added later. The NEAMS and CASL communities have specified that ICE should:

  • Execute simulations on a wide variety of platforms, including HPC machines
  • Provide tooling to setup models for input files, 3D geometries, meshes, materials or other data for simulations
  • Provide a suite of analysis tools, including visualization and analytics for large amounts of data
  • Provide utilities for performing uncertainty quantification and parameter studies
  • Provide Web, Android and Eclipse-based clients that connect to the same server to facilitate “universal access”
  • Support the composition of new applications and workflows from existing codes and tools
  • Support the development of new compute modules in nuclear energy codes
  • Promote interoperability and loose coupling between different software packages
  • Provide extensive documentation for users and developers
  • Work across multiple platforms, including Linux, Windows and Mac.

This feature set and the results from requirements gathering interviews were used to develop eight unique use cases for ICE. These use cases are formally defined and cover many types of interactions and scenarios between users and the system. These use cases almost completely trace to the list of features. Executing simulations and setting up models are the only two use cases that have been implemented and delivered to date.

The primary challenges that face ICE are the large data requirements and the scale of the codes with which it will interact. Petascale simulators that generate petabytes of data and run on hundreds of thousands of cores are a reality and unlocking the potential of these massively parallel codes requires an extremely well designed and optimized computational environment! At the same time, ICE must also support those customers who do not run on the largest of machines or who want to branch out to public or private clouds. (The latter set in fact represents the lion's share of potential users.)

Use Cases

Our requirements elicitation efforts have discovered the following set of use cases. There are many definitions of the phrase “use case”; here we define a use case as a description of the interactions between actors and the system, the information that passes between the actors and the system, and the before and after states of the system.

The ICE use cases were developed based on the results of the requirements interviews and surveys. The formal specifications of the use cases are managed digitally since use cases are a form of requirements, and the documents are treated as “living documents” that are updated as needed in response to new information from stakeholders. The following sections provide brief descriptions of our use cases and the scope of work they cover.

Execute a simulation
Executing and monitoring simulations is often very difficult for users who are newly acquainted with a particular code and, in some cases, even for long-time experts. This use case has been identified by members of the NEAMS and CASL communities as the number one barrier to code adoption and was prioritized as the most important use case. ICE defines a simulation as the combination of a particular application, such as a reactor simulator, with an input file that defines the model. The current realization of this use case in ICE uses a Secure Shell (SSH) client to connect to remote clusters.
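The idea behind SSH-based remote launching can be sketched in a few lines. This is a minimal illustration, not the actual ICE launcher: the class and method names are hypothetical, and a real launcher would hand the assembled command to a ProcessBuilder and monitor the resulting process.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * A minimal sketch of SSH-based remote job launching in the spirit of the
 * use case above. Names are illustrative, not the actual ICE API.
 */
public class RemoteLauncher {

    /** Builds the argv for launching an executable on a remote host over ssh. */
    public static List<String> buildCommand(String host, String user,
            String executable, String inputFile) {
        List<String> cmd = new ArrayList<String>();
        cmd.add("ssh");
        cmd.add(user + "@" + host);
        // Run the simulator with its input file on the remote side.
        cmd.add(executable + " " + inputFile);
        return cmd;
    }

    public static void main(String[] args) {
        // Hypothetical host, user and code paths for illustration only.
        List<String> cmd = buildCommand("cluster.example.gov", "analyst",
                "/opt/codes/reactor_sim", "core_model.inp");
        // A real launcher would pass cmd to new ProcessBuilder(cmd).start()
        // and monitor the job; here we just print the assembled command.
        System.out.println(String.join(" ", cmd));
    }
}
```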
Setup a model
Specifying the input parameters that are necessary to simulate a physical system is a natural partner to executing simulations and can be a very challenging task. Many software systems in the nuclear engineering space use file formats that evolved over the years and were never fully specified or documented, which makes it challenging for users. ICE helps alleviate this problem by providing a way to graphically specify the input parameters. This masks the complexity of file formats and allows ICE to provide error checking for the model while it is configured.
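The error checking described above amounts to validating each parameter against its allowed values before an input file is ever written. The following sketch shows the idea with hypothetical names; it is not the actual ICE data model.

```java
import java.util.Arrays;
import java.util.List;

/**
 * A minimal sketch of graphical-input-style error checking: a parameter
 * ("entry") that knows its allowed values and rejects anything else before
 * the input file is generated. Names are illustrative, not the ICE API.
 */
public class InputEntry {

    private final String name;
    private final List<String> allowedValues;
    private String value;

    public InputEntry(String name, List<String> allowedValues) {
        this.name = name;
        this.allowedValues = allowedValues;
    }

    /** Returns true and stores the value only if it is allowed. */
    public boolean setValue(String candidate) {
        if (allowedValues.contains(candidate)) {
            value = candidate;
            return true;
        }
        return false; // A UI would flag this entry as invalid.
    }

    public String getValue() { return value; }

    public static void main(String[] args) {
        InputEntry solver = new InputEntry("solverType",
                Arrays.asList("direct", "iterative"));
        System.out.println(solver.setValue("iterative")); // accepted
        System.out.println(solver.setValue("banana"));    // rejected
    }
}
```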
Perform data analysis
Many HPC codes do not provide results that are of immediate use to an analyst. Furthermore, the simulation data that is provided often exceeds the amount of information required by the analyst and is only available in a custom format that requires specialized tools for extraction. ICE provides tools to make data analysis easier and has integrated 3D visualization.
Create an application
Since applications and tools do not always exist to address the exact questions of an analyst, it is often necessary to create new applications, tools or workflows by combining existing tools or authoring new tools altogether. ICE is designed to help users with these tasks.
Create a module
ICE will provide tools for extending select code systems, such as VERA from CASL or SHARP and MOOSE from NEAMS, by creating new modules that can be added to their pre-existing framework. ICE considers a module to be any reusable element of a simulation code, such as a new component in component frameworks. This allows more advanced users of ICE to extend the functionality of products in a way that does not require significant experience with the codes but provides customized capability. This also allows ICE to provide a “safer” coding experience through, for example, syntax highlighting or static analysis.
Submit an asset to the catalog
ICE is designed to work with software and data repositories, generically defined as catalogs, to provide a way both to share assets between users and to store simulations as needed to meet regulatory requirements. These activities are rarely considered by standalone users over the course of their day, whereas larger organizations and corporations need to store data or share it across sites. Experimental data can be stored right beside data produced by simulations, and the two can be easily manipulated and reused. The back-end repository software can be any one of a number of popular version control systems, including SVN, Git or Mercurial, and data that should not be version controlled is stored in a standard SQL database.
Search the catalog
Since searching for a particular asset is a different task from storing it and often includes retrieval, ICE defines a second data management use case specifically for searching the catalog. Separating searching from submission allows the ICE Development Team to focus on the unique issues associated with viewing and retrieving data.

Stakeholder Descriptions

There are several entities that hold a stake in ICE, including two programs from the Office of Nuclear Energy within the Department of Energy: NEAMS and the Consortium for Advanced Simulation of Light Water Reactors (CASL). Both programs feel that a critical component of their success will be the development of an integrated computational environment that provides a rich set of tools and capabilities to the users of their codes. The full set of ICE stakeholders includes these programs, the development team, the customers and others, as outlined in Table 1. (Table to be added)

User and Development Environments

ICE users and developers find themselves in significantly different environments compared to their counterparts on other systems: users are greeted by an easy-to-use system, and developers by an easy-to-develop one!

ICE is under development in the context of existing modeling and simulation capabilities in the nuclear energy field, as well as high-performance computational science and engineering in general. The NEAMS program is currently the primary stakeholder of ICE, but several other projects are also ICE stakeholders and the code is freely available. ICE is used on a wide range of computer platforms, from individual workstations (desktops and laptops) and small clusters to large-scale clusters and Leadership Class systems. Today, these target platforms offer thousands to tens of thousands of processors, and in some cases a hundred thousand processors or more, and both processor counts and cores-per-processor are expected to increase. Offering users a chance to work seamlessly and easily across these many platforms with a simple to use, rich client is the greatest strength of ICE.

The development process and environment of ICE is significantly different from that of many DOE projects.

Development Process

The development process of ICE is discussed in detail at Development Process and Guidelines. In short, the development team works very hard to develop and maintain close relationships with our customers and practice those pieces of software engineering that help us develop extremely high quality software while still being responsive to change.

Design and Implementation

ICE has a client-server architecture and is RESTful. The primary components of ICE and their relationships are depicted in Fig. 1. (Figure to be added)

The ICECore is the server component of ICE and is responsible for anything and everything related to data management. The ICECore accepts requests from an ICEClient to create or modify an Item or to perform an Action for an Item. The ICECore is also responsible for managing jobs on local or remote hardware and launching sub-processes, since some activities, like visualization and meshing, may be too computationally expensive for a single node. The server does not maintain client state. The ICEItem component defines the Item class and the ItemBuilder interface. This Item class and its subclasses are responsible for performing activities in ICE that realize the use cases for a particular piece of software.

For example, a LAMMPSModelItem subclass of Item could be responsible for specifying all of the necessary parameters to run LAMMPS for a molecular dynamics simulation and a SomeOtherCodeModelItem would specify all of the necessary parameters to setup some other code. Each Item must also have an implementation of the ItemBuilder interface, which is registered with the ICECore. Each Item publishes a set of available Actions that it can perform on behalf of users.

The relationship between the ICECore, Items and ItemBuilders are a realization of the Builder pattern. The ICEClient is responsible for interacting with users and acting as a broker to ICECore on a user’s behalf. The primary client for ICE is built with the Eclipse RCP, and an Android and rich web client are currently in development. The ICEClient realizes the Mediator pattern so that the logic necessary to communicate between user interface pieces and the ICECore can be maintained separately from the code that defines the actual user interface. This also allows developers to create their own user interfaces—textual, graphical, etc.—without having to re-implement all of the communications logic.
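The Item/ItemBuilder relationship described above can be sketched as a registry of builders held by a core. This is a simplified stand-in with hypothetical names, not the actual ICE classes:

```java
import java.util.HashMap;
import java.util.Map;

/** A unit of work: sets up a model or runs a job for one code. */
interface Item {
    String getName();
}

/** Knows how to construct one kind of Item; registered with the core. */
interface ItemBuilder {
    String getItemName();
    Item build();
}

/** A toy "core" that keeps a registry of builders, as ICECore does. */
class Core {
    private final Map<String, ItemBuilder> builders =
            new HashMap<String, ItemBuilder>();

    void register(ItemBuilder builder) {
        builders.put(builder.getItemName(), builder);
    }

    /** Clients ask the core to create Items by name. */
    Item create(String name) {
        ItemBuilder b = builders.get(name);
        return (b == null) ? null : b.build();
    }
}

public class BuilderDemo {
    public static void main(String[] args) {
        Core core = new Core();
        // Register a builder for a hypothetical LAMMPS model Item.
        core.register(new ItemBuilder() {
            public String getItemName() { return "LAMMPS Model"; }
            public Item build() {
                return new Item() {
                    public String getName() { return "LAMMPS Model"; }
                };
            }
        });
        Item item = core.create("LAMMPS Model");
        System.out.println(item.getName());
    }
}
```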

The ICEDataStructures component publishes a small set of data structures that may be of use in both the client and server. These data structures include an ICEObject base class, which defines operations for managing the unique identification number, name and description of an Item or other asset distributed by the ICECore, and a RESTful Form class that represents the collection of information contained in an Item. Each component is currently implemented in Java 1.6, although the web client is in AJAX. Java was chosen for its stability, tooling and ubiquity.
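The base-class-plus-Form idea above can be sketched as follows. The names are illustrative stand-ins for ICEObject and Form, not the actual ICE classes:

```java
import java.util.ArrayList;
import java.util.List;

/**
 * A sketch of the ICEObject idea: every asset the core distributes carries
 * a unique id, a name and a description. Names are illustrative only.
 */
class BaseObject {
    private int id;
    private String name;
    private String description;

    public int getId() { return id; }
    public void setId(int id) { this.id = id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public String getDescription() { return description; }
    public void setDescription(String d) { this.description = d; }
}

/** A Form is itself an identifiable object plus the data it collects. */
public class SimpleForm extends BaseObject {
    private final List<String> entries = new ArrayList<String>();

    public void addEntry(String entry) { entries.add(entry); }
    public List<String> getEntries() { return entries; }

    public static void main(String[] args) {
        SimpleForm form = new SimpleForm();
        form.setId(1);
        form.setName("Reactor Model");
        form.addEntry("coolantType");
        form.addEntry("powerLevel");
        System.out.println(form.getName() + " has "
                + form.getEntries().size() + " entries");
    }
}
```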

Leveraging Existing Technologies

The ICE Development team leverages existing technologies where possible to accelerate development and avoid re-inventing the wheel. The most important piece of technology adopted for ICE is the Open Services Gateway initiative (OSGi) framework. The OSGi is a component framework that dynamically manages components, handles many of the details that Java developers would rather avoid (like class paths), and offers a very large number of utility components to developers, free of charge. Each component in ICE is a collection of one or more OSGi bundles.

The OSGi is also the primary means by which the dynamic registration of ItemBuilders with the ICECore occurs. Each component that subclasses Item must also supply an implementation of the ItemBuilder interface that can be registered with the ICECore. This is called a service in the OSGi parlance and is accomplished in ICE using OSGi Declarative Services. Handling plug-ins for ICE with the OSGi allows individual capabilities to be turned on or off based only on the presence or absence of the jar file and associated files that define the component.
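A Declarative Services registration of this kind is driven by a small component description XML file inside the bundle. The sketch below shows the general shape; the class and interface names are hypothetical, not ICE's actual ones:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- OSGI-INF/lammpsModelBuilder.xml: declares a builder as a service so the
     core can discover it. Class and interface names are hypothetical. -->
<scr:component xmlns:scr="http://www.osgi.org/xmlns/scr/v1.1.0"
               name="LAMMPSModelBuilder">
   <implementation class="org.example.lammps.LAMMPSModelBuilder"/>
   <service>
      <provide interface="org.example.core.ItemBuilder"/>
   </service>
</scr:component>
```

Removing the bundle's jar removes this declaration with it, which is what lets capabilities be switched off simply by deleting the file.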

Another technology used in ICE is JAX-RS. JAX-RS is a Java API that uses annotations to enable RESTful features on top of Plain Old Java Objects (POJOs). Using the API allows developers to focus on the domain concerns of the POJOs and spend less time worrying about the details of HTTP.
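The annotation-on-POJO style looks roughly like the sketch below. The annotations are shown in comments so the snippet compiles without the JAX-RS API jar; with javax.ws.rs on the classpath they would be uncommented. The resource path and names are hypothetical, not ICE's actual resources:

```java
/**
 * A sketch of how JAX-RS annotations turn a POJO into a RESTful resource.
 * Annotations appear in comments so this compiles standalone; the domain
 * logic itself stays plain Java, which is the point of the approach.
 */
// @Path("/items")
public class ItemResource {

    // @GET
    // @Path("/{id}")
    // @Produces("text/plain")
    public String getItem(/* @PathParam("id") */ int id) {
        // JAX-RS would handle HTTP dispatch; the method only knows the domain.
        return "Item " + id;
    }

    public static void main(String[] args) {
        System.out.println(new ItemResource().getItem(5));
    }
}
```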

Project History

Requirements gathering interviews, reviews and exercises for ICE started in late January 2009. The requirements were gathered from the members of the Nuclear Energy Advanced Modeling and Simulation (NEAMS) community as part of a project to develop both a run-time simulator framework and an integrated computational environment. Other communities were also reviewed for relevant requirements to "round out" the requirement set of ICE—that is, to keep the requirements as general as possible—so that changes in the modeling and simulation activities of NEAMS would not greatly affect the design of ICE.

Requirements from the Consortium for Advanced Simulation of Light Water Reactors were incorporated into the development plan of ICE in October and December 2010.

Other sponsors and communities have adopted ICE since then and the ICE development team continues to evaluate requirements on a regular basis with input from stakeholders and customers from all of its communities.

Development on the first version of ICE started in February 2010 as a part-time project for Jay Jay Billings with some assistance from others at ORNL and a couple of summer students. The first version of ICE was released on January 24th 2011.

The U.S. budget woes that resulted in the Continuing Resolution (C.R.) of 2011 forced development on ICE to a halt for the six months immediately following the release of ICE 1.0, which was itself delayed because of the C.R. No new code development was performed during the C.R. period.

ICE 2.0 was designed during the C.R. period. The C.R. turned out to have a very positive effect on ICE because it gave the team the opportunity to re-design ICE from scratch and to focus on making development activities more sustainable on lower budgets in the future. Code development on ICE 2.0 started in July 2011.

ICE has been the user environment for the Crystal Cove adiabatic quantum computer simulator at ORNL since May 2012. In Fiscal Year 2013, ICE joined the Computer Aided Engineering for Batteries (CAEBAT) project to serve as the workflow environment for that project.

ICE officially became an Eclipse project in the summer of 2014 and changed its name from the NEAMS Integrated Computational Environment (NiCE) to ICE.

Acknowledgments

The authors are grateful to members of the NEAMS and CASL communities, others that we have interviewed and our program managers. The authors are particularly grateful to David Bernholdt, Kevin Clarno, Lori Diachin, Mark Miller, David Pointer, Randy Summers, Tim Tautges, John Turner and our colleagues at IBM Rational. Finally, the authors would like to acknowledge two summer students, Allison Koenecke and Adrian Sanchez, for their work on development prototypes.

This work has been supported by the U. S. Department of Energy, Offices of Nuclear Energy and by the ORNL Post-graduate Research Participation Program which is sponsored by ORNL and administered jointly by ORNL and by the Oak Ridge Institute for Science and Education (ORISE). ORNL is managed by UT-Battelle, LLC for the U. S. Department of Energy under Contract No. DE- AC05-00OR22725. ORISE is managed by Oak Ridge Associated Universities for the U. S. Department of Energy under Contract No. DE-AC05-00OR22750.
