



Welcome to AICE

The place where the AI, Cloud & Edge ecosystems meet to innovate and grow with open source

The Eclipse AI, Cloud & Edge (AICE) OpenLab Working Group accelerates the adoption of AI, Cloud & Edge technologies and standards, through the provision and operation of a collaborative work and test environment for its participants, the engagement with research and innovation initiatives and through the promotion of open source projects to AI, Cloud & Edge developers.

AICE Working Group Participants

23 Technologies
Castalia Solutions
Engineering Group
Fraunhofer FOKUS
University of Macedonia
University of Skövde


The WHY of this initiative

In progress



The Eclipse AI, Cloud & Edge (AICE) OpenLab Working Group manages and operates an open lab that provides a set of resources, including test and development environments, to promote the advancement, implementation, and verification of open source software for AI, Cloud, and Edge computing.

The Eclipse AICE OpenLab Working Group drives the development, evolution and broad adoption of best practices for AI, Cloud and Edge.

The AICE OpenLab Working Group's focus is to assemble, test and validate AI, Cloud & Edge solutions using calibrated test tools and datasets. The working group also oversees the development of the necessary blueprints and reference architectures that collaboratively combine open source projects, datasets, configurations, and test bed definitions. Together, these blueprints and reference architectures aim to deliver end-to-end use cases that fulfill best practices for privacy, ethics, security, standardization and interoperability.


The AICE OpenLab Working Group does this by:

  • Fostering open and neutral collaboration amongst members for the adoption of open source technologies
  • Defining and publishing reference architectures, blueprints and distributions of open source software that have been verified for industry AI, Cloud, and Edge standards, requirements, and use cases
  • Developing and providing open source verification test suites, test tools, calibrated datasets and test infrastructure for industry AI, Cloud, and Edge standards, requirements, and use cases
  • Ensuring that key requirements regarding privacy, security and ethics are integrated into all the OpenLab activities
  • Partnering with industry organizations to assemble and verify open source software for their standards, requirements, and use cases
  • Promoting the AICE OpenLab in the marketplace and engaging with the larger open source community
  • Managing the lab infrastructure resources to support this work


Testbeds are set up to ...... To realize a testbed, the idea is to prepare a use case/topic in a series of workshops and then to execute so-called Hack-Fests, which assemble developers from the collaborating partners for a defined period of time, e.g. 3-4 days, in which they realize a demonstrator or prototype.

To identify testbed candidates, everybody is invited to propose ideas here. These proposals form the starting point for developing an idea towards the requirements for the execution of a Hack-Fest, and for winning further interested parties.

To execute a Hack-Fest, we have identified some minimum requirements to make this a fruitful event:

  • A minimum of two partners collaborating on the testbed, be it companies, universities or research organizations
  • A minimum of 5 committed participants (to be discussed)

Workflow for testbed candidates

  1. Verbalization of a task in use case form, which has potential for further work
  2. Contribution of the use cases by interested partners as testbed candidates
  3. Preparation of the testbed candidates in one or a series of workshops to a state that is sufficient for the participants of a Hack-Fest to produce results
  4. Execution of the Hack-Fest
  5. Evaluation of the Hack-Fest results to decide on whether to pursue further or not
  6. Reworking the Hack-Fest results to build a contribution either to an existing project or as an initial contribution of a new open source project

Who We Are and How to Join

Our unique proposition

What makes us stand out from other AI collaboration platforms? (from the perspective of open source, participants, anticipated results, etc.)

The AICE OpenLab is an environment where participants:

  • test and develop their use cases, components, solutions, etc., hands-on
  • collaborate in a truly open and transparent fashion
  • work towards joint solutions
  • raise the profile of AI, Cloud & Edge in the marketplace

How to participate in AICE?

The AICE OpenLab WG is still at an early stage: we are in the early phase of creating an Eclipse Working Group. By joining now, organizations have the opportunity to participate in defining the Working Group's governance.

Feel free to use our mailing list. This is where we announce meetings and progress in the definition of the Working Group and its activities.

Your participation is welcome!

If you would like to engage more, please contact marc dot vloemans at eclipse-foundation dot org and gael dot blondelle at eclipse-foundation dot org

Interested Parties

Please add the name of your organization if you are interested in AICE, or ask us to add it for you.

  • Bosch

Roadmap towards the AICE OpenLab Working Group




  • Improved compatibility and interoperability of different technologies
  • Validated tests of new AI and cloud projects
  • Coordination of component life cycles
  • Reproducibility for researchers and developers
  • Federation of complementary technologies
  • Access to free AI & Cloud computing capability platform for developers
  • Definition of open specifications and examples of open implementations
  • Meeting point for AI, Cloud (and robotics, IoT, among others) initiatives in Europe

To OEMs/Domains

In Progress

User insight (OpenADx example): "Developing automated driving functions is extremely complicated and requires the use of many complex software tools which do not work efficiently with one another. What I need is a set of tools which work with each other seamlessly so that my teams can move through the development process more quickly and efficiently."

Benefit (OpenADx example): The automated driving tool chain allows your team to work together more efficiently with a suite of highly integrated tools by enabling seamless transfer of data and code through each step of the automated driving development process.

To tools & technology providers

In Progress

User insight (OpenADx example): "Currently, tools used to create automated driving applications do not work efficiently with one another. If our tool/technology is compatible with other widely used technologies and tools, it will ease the development process for our customers and make our products even more attractive to them."

Benefit (OpenADx example): The seamless integration of your technology in the automated driving tool chain makes it more attractive to organizations developing automated driving applications by increasing their development efficiency.

To research institutes

see format above

To international/European initiatives

see format above

To universities

see format above

Program & Projects

We are currently in the process of drafting the program for the future Working Group.


The scope of the OpenLab program is largely determined by the AI, Cloud & Edge domain requirements as well as the specific European context.

In general terms, the development and implementation of new (open) standards is usually very complex, time-consuming to organise and test, and costly to execute. With the expected level of AI standardisation coming from European initiatives, e.g. AI4EU, Gaia-X, and EBRAINS, activities such as training, testing and development can best be shared. Because open standards have proven to boost new business opportunities and technological innovation, it is logical for leading industry players, knowledge institutes and European initiatives to collaborate. As the required comprehensive, safe and open ecosystem/environment does not yet exist, the Eclipse Foundation has taken the first step towards such an AI, Cloud & Edge OpenLab.

This means the OpenLab does not:

  • Set standards
  • Provide AI products

This means the OpenLab can and must:

  • Provide, but not necessarily be restricted to, specifications, testbeds, reference implementations, open APIs, Cloud infrastructure and calibrated datasets
  • Enable member participation through contribution of technology and/or projects and/or funding and/or in-kind services, e.g. manpower or cloud capacity
  • Safeguard sustainability of operations through member contributions and proper allocation of resources
  • Promote and build the ecosystem of organisations and community of developers

As a starting point, the scope of the OpenLab program - in order to be relevant to all stakeholders - will address three principal areas of interest:

Policy adoption

  • Improve adoption of European standards for interoperability and portability
    • The OpenLab intends to collaborate with existing initiatives, e.g. Gaia-X, AI4EU and EBRAINS, as guiding programs on behalf of the European Commission digital strategy, thus enabling incorporation of new European standards by the AI sector.
    • The OpenLab provides its members with an open collaborative platform where stakeholders test new standards and develop new approaches and open source (reference) implementations to comply with them. The OpenLab will serve as the European competency centre for open source AI, facilitated by the Eclipse Foundation with its long-standing track record in the field.
  • Address privacy, ethics and security concerns
    • These are major items on the EC digital agenda to be addressed industry wide
    • The open source approach and the collaborative environment provided by the OpenLab will enable specific audit services to be developed and guaranteed in the context of the OpenLab

Business improvement

  • Avoidance of vendor lock-in and enabling of technology lock-out
    • This necessitates open source and open standards, which can best be provided through a vendor-neutral environment where all players can meet, collaborate, share information and contribute technology in a safe and open setting (in terms of antitrust, legal matters and code of conduct)
    • The OpenLab can guarantee such an environment through proven open principles, governance and processes (for specifications, among others)
  • Fragmentation within the industry
    • The emerging AI sector is still heavily siloed in terms of technologies, organisations and solutions
    • A vendor-neutral common ground, as provided by the OpenLab, will facilitate and promote synchronous releases, federation of technologies and complementary solutions.

Technology development

  • Speed of innovation
    • This requires permissionless open innovation principles applied to the development, test and implementation cycle
    • The open source nature of the OpenLab practices and principles enable shorter time to market, as a central place where the ecosystem can freely access platforms and testbeds to develop, test and implement AI & Cloud solutions
  • Lack of collaboration between education, research and industry
    • Many new innovations in the AI, Cloud & Edge field derive from universities and research institutes. On the one hand, these organisations require validation through industry players, e.g. start-ups, SMEs and large industry players. On the other hand, they contribute heavily to the capacity building of the AI workforce.
    • The OpenLab will provide a comprehensive ecosystem and mix of companies and research institutes to enable the transfer of technologies and expertise. Furthermore, items such as the provision of performance benchmarks and reproducibility can be addressed.

Potential projects

Candidate 1: Automotive - Tool Ecosystem for testing and increasing the robustness of AI

addressed areas of interest

( X ) Artificial Intelligence

( X ) Cloud

( ) Edge


Define and implement a tool ecosystem for testing and increasing the robustness of automotive AI applications. The tool ecosystem shall help establish an automotive industry-wide accepted way of validating AI modules in automotive products. The tool ecosystem shall be available for use by any interested party.

Use Case Step 1

Testing approaches include, depending on the individual problem:

  • white box testing, i.e., testing with full insight into the internals of an AI module
  • black box testing, i.e., testing of an AI module without any knowledge of its internals

Testing levels include:

  • unit testing, i.e., testing of an AI module on its own, for example within its deep learning training framework's context
  • integration testing, i.e., testing of the AI model after it has been transformed to be used in the final product. This includes among other things…

Some of these methods can be regarded as the transfer of best practices from Software Engineering to AI Development.
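As an illustration of the distinction between the testing approaches and levels above, the sketch below uses a toy stand-in for an AI module. All class and function names are hypothetical, chosen only to demonstrate a white-box unit test of a module "in its framework" versus a black-box integration test of the transformed, product-ready artifact; they are not part of any AICE deliverable.

```python
# Toy sketch: white-box unit testing vs. black-box integration testing
# of an "AI module". Names here are hypothetical illustrations.

class ToyClassifier:
    """Stands in for a trained AI module inside its training framework."""
    def __init__(self, threshold=0.5):
        self.threshold = threshold

    def predict(self, score):
        return 1 if score >= self.threshold else 0


def export_for_product(model):
    """Stands in for the transformation step (e.g. quantization or
    compilation) that produces the artifact used in the final product."""
    t = round(model.threshold, 1)  # toy stand-in for precision loss
    return lambda score: 1 if score >= t else 0


# Unit test, white-box: we know the module's internals (its threshold)
# and test it on its own, inside its "framework".
model = ToyClassifier(threshold=0.53)
assert model.predict(0.6) == 1
assert model.predict(0.4) == 0

# Integration test, black-box: only inputs and outputs of the transformed
# module are observed, checking it still agrees with the reference model
# on a calibrated dataset away from the decision boundary.
deployed = export_for_product(model)
calibrated_inputs = [0.1, 0.3, 0.7, 0.9]
assert all(deployed(x) == model.predict(x) for x in calibrated_inputs)
```

In a real tool ecosystem the calibrated inputs would come from shared, calibrated datasets, and the export step would be the actual deployment toolchain under test.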

Success Criteria:
  • For well-specified problems, implementations shall be developed (e.g., running tests with different permutations of subtests and data)
  • For potentially new suitable evaluation methods, research shall be done
  • Requirements for suppliers shall be created (e.g., specification of guidelines for compiler suppliers or HW accelerator suppliers)
Additional information

Link to detailed proposal

Candidate 2: Health - Federated Learning

Challenge: Healthcare is a complex ecosystem composed of different partners, each with a very specific role but all highly interdependent. Based on

Use Case Step 1

Success Criteria:

  • Development of an open-source framework that gives any third party an easily deployable federated learning framework.
  • Deployment of the open-source framework in real use-case research projects with hospitals, pharmaceutical companies and research institution partners.
  • Deployment in production of the federated learning framework by industrial partners.
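As a minimal sketch of the core aggregation step such a framework would provide, the following implements federated averaging (FedAvg) over toy data. All names, the one-parameter model, and the two "hospital" datasets are hypothetical illustrations; the point is that clients train locally and share only model weights, never raw patient data.

```python
# Minimal federated averaging (FedAvg) sketch. Hypothetical names and data.

def local_update(weights, data, lr=0.1):
    """Toy local training step: one pass of gradient descent on a 1-D
    linear model y = w * x with mean squared error, on private data."""
    w = weights[0]
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return [w - lr * grad]

def fed_avg(client_weights, client_sizes):
    """Server-side aggregation: average the clients' weights, weighted
    by the number of local samples each client trained on."""
    total = sum(client_sizes)
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(len(client_weights[0]))
    ]

# Two "hospitals" hold private datasets for the same underlying task y = 2x.
hospital_a = [(1.0, 2.0), (2.0, 4.0)]
hospital_b = [(3.0, 6.0)]

global_weights = [0.0]
for _ in range(50):  # federated rounds: local training, then aggregation
    updates = [local_update(global_weights, d) for d in (hospital_a, hospital_b)]
    global_weights = fed_avg(updates, [len(hospital_a), len(hospital_b)])

assert abs(global_weights[0] - 2.0) < 0.05  # converges toward the true slope
```

A deployable framework would add the parts this sketch omits: secure transport of the weight updates, client orchestration, and privacy protections such as secure aggregation or differential privacy.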

Additional information (link and/or download)

Candidate 3: This can be your project


Use Case Step 1

Success Criteria:

Additional information (link and/or download)

Follow our progress




  • We have a mailing list: subscribe for news and discussions: Mailing list
  • We have workshops with introduction sessions, and we have just started to work on concrete testbeds to identify topics that we agree to collaborate on. Currently these workshops are weekly telecons. Please check the mailing list for invitations, or ask questions regarding content or participation.



  • Towards an open source AI initiative at the Eclipse Foundation: [1]
  • Political challenges and opportunities in making open source AI mainstream: [2]
  • Eclipse Deeplearning4j: How to run AI workloads on Jakarta EE compliant servers: [3]
  • Meet MindSpore, the new open source AI framework!: [4]
  • Q & A: [5]
  • Welcome Message | Gaël Blondelle | Open Source AI Workshop S1E2: [6]
  • Trustworthy AI & Open Source | Eclipse Open Source AI Workshop S1E2: [7]
  • Introduction to Pixano: an Open Source Tool to Assist Annotation of Image Databases | Open Source AI: [8]

Upcoming Presentations

Press releases

  • XXX published in Magazine XXX: [10]

Blog posts

A logo design contest is to be scheduled. Please check the mailing list for further details.

Copyright Eclipse Foundation

Meeting Minutes

Link to the Meeting Minutes


Pitch deck to be used within the respective organizations
