
Difference between revisions of "COSMOSInitalPrototype"

Revision as of 18:56, 28 December 2006

COSMOS Main Page

Proposal

The purpose of this prototype is to flesh out technical details and issues, as well as to provide something usable that moves toward the kind of capabilities COSMOS intends to provide. The desire is also to do this quickly and iteratively, so there is continuous value.

TPTP provides some similar basic use cases but does not have a robust and scalable data storage system or a web-oriented user interface. This is why TPTP has committed to resolving its scalability issues in the 4.4 release (part of Europa in June 2007) and to doing so in collaboration with COSMOS. The end goal is to make the data and data collectors sharable between the projects in order to support life-cycle use cases that span the target user types of each project.

The proposal for the first step is to leverage the TPTP data collectors for statistical, trace, and log data (in that order) and to store the data via a COSMOS isolation API layer. Storage can in fact initially be in the existing TPTP EMF models, although clearly this is not the end game for COSMOS. BIRT and web-based UIs can access this data via a new COSMOS API, thus maintaining storage-system isolation and driving the COSMOS architectural intent.
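The isolation layer described above could be sketched as an ordinary Java interface that the UI and BIRT reports code against. All names below are invented for illustration (nothing here is a real COSMOS or TPTP API); the in-memory implementation stands in for the TPTP EMF models and could later be swapped for a relational store without touching callers.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical COSMOS isolation API: callers see only this interface,
// never the backing storage system.
interface StatisticalDataStore {
    // Store one observation for a named indicator on a named node.
    void record(String nodeId, String indicator, long timestamp, double value);

    // Query observations in a time window for rendering.
    List<Double> query(String nodeId, String indicator, long from, long to);
}

// Throwaway in-memory implementation standing in for the TPTP EMF models.
class InMemoryDataStore implements StatisticalDataStore {
    private static final class Sample {
        final long timestamp;
        final double value;
        Sample(long t, double v) { timestamp = t; value = v; }
    }

    private final Map<String, List<Sample>> samples = new HashMap<>();

    private String key(String nodeId, String indicator) {
        return nodeId + "/" + indicator;
    }

    @Override
    public void record(String nodeId, String indicator, long timestamp, double value) {
        samples.computeIfAbsent(key(nodeId, indicator), k -> new ArrayList<>())
               .add(new Sample(timestamp, value));
    }

    @Override
    public List<Double> query(String nodeId, String indicator, long from, long to) {
        List<Double> result = new ArrayList<>();
        for (Sample s : samples.getOrDefault(key(nodeId, indicator), List.of())) {
            if (s.timestamp >= from && s.timestamp <= to) {
                result.add(s.value);
            }
        }
        return result;
    }
}
```

Because the interface carries no EMF or RDB types, replacing `InMemoryDataStore` with an RDB-backed implementation later would not ripple into BIRT or the web UI.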

One key part of COSMOS is being able to observe the "Data Center" and, initially via user interaction, select nodes in the system for observation. In TPTP this would be done by acting on the "hierarchy" model, which captures a group of machines along with the processes and data collectors on them. In this prototype the hierarchy model could be replaced by a data center model that would eventually be SML based. Initially this could simply reuse the SML-IF sample we have seen.

This prototype would let COSMOS quickly string together all of its sub-components along with TPTP use cases and begin to show its unique value add, as well as places for collaboration with TPTP and others. Once this proof of concept is working, each of the architectural areas of COSMOS can begin to evolve in parallel.

In order to protect the user community, COSMOS would not provide any "API" in Eclipse terms at this time, so that everything can be replaced as needed going forward.

A few iterations of this prototype could be demo ready by the EclipseCon time frame.


User flow(s)

Initially the user has a browser-based visualization of a network topology. One logical view provides a tree-style decomposition of an enterprise system, perhaps known as the sales system. The sales system can be viewed as a network of large-grained software components that are contained/serviced by a system unit. Each system unit and each software unit can be selected, and a number of possible statistical or health indicators/measurements can be selected for viewing. Each indicator can be turned on or off, and if data is available it is shown.

Initially this prototype will require the user to have manually entered and described each node that is visible, along with any connection information needed to control the data collection. It will have to be determined how many units can be viewed and whether the views are aggregated.

This is all the user can do:

- describe a network of software and hardware components
- look at some specific logical views of an application stack
- turn data collection on and off
- view the data based on a selectable time scale other than the current one
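The manually described topology above could be modeled as a simple tree of nodes, each carrying the connection details needed to reach its data collector plus a set of indicators that can be switched on and off independently. Every class, field, and indicator name below is hypothetical, not a COSMOS or TPTP type.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative model of one manually entered topology node.
class TopologyNode {
    final String name;
    final String agentHost;  // connection info needed to control data collection
    final int agentPort;
    final List<TopologyNode> children = new ArrayList<>(); // tree-style decomposition
    private final Map<String, Boolean> indicators = new HashMap<>();

    TopologyNode(String name, String agentHost, int agentPort) {
        this.name = name;
        this.agentHost = agentHost;
        this.agentPort = agentPort;
    }

    // Register an indicator/measurement; indicators start switched off.
    void addIndicator(String indicator) {
        indicators.put(indicator, Boolean.FALSE);
    }

    // Turn an indicator on or off for viewing.
    void setIndicatorEnabled(String indicator, boolean enabled) {
        if (!indicators.containsKey(indicator)) {
            throw new IllegalArgumentException("unknown indicator: " + indicator);
        }
        indicators.put(indicator, enabled);
    }

    boolean isIndicatorEnabled(String indicator) {
        return indicators.getOrDefault(indicator, Boolean.FALSE);
    }
}
```

The browser view would render this tree and, for each enabled indicator with data available, pull values through the data-access layer.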


How it works

Data collectors are controlled, and data is collected, via TPTP-based agents and the XML event formats they support. Initially, turning collection on and off is simply a matter of controlling the agents as they are today. In future iterations, additional data collectors could perhaps be deployed on demand for more specific or detailed collection.

The agents are actually managed by a server process that establishes and maintains connections as well as streams any data to a data store. Initially this can be the TPTP EMF models, but this should be replaced by a relational database (RDB) as soon as possible. This server process is the point of control for managing the state of the collectors. This state should also be persisted in an RDB data store for more flexible control in future iterations.
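The collector state the control server tracks can be sketched as a small state machine; all names below are invented for illustration. A real server would additionally send the start/stop command to the agent on each transition and persist the new state to the RDB so it survives a server restart.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of the control server's collector state tracking.
class CollectorController {
    enum State { STOPPED, RUNNING }

    private final Map<String, State> states = new HashMap<>();

    // Collectors begin in the stopped state when registered.
    void register(String collectorId) {
        states.put(collectorId, State.STOPPED);
    }

    void start(String collectorId) { transition(collectorId, State.RUNNING); }

    void stop(String collectorId) { transition(collectorId, State.STOPPED); }

    State status(String collectorId) {
        State s = states.get(collectorId);
        if (s == null) {
            throw new IllegalArgumentException("unknown collector: " + collectorId);
        }
        return s;
    }

    private void transition(String collectorId, State target) {
        status(collectorId); // validate the collector exists
        // A full implementation would command the agent here, then
        // persist the transition to the RDB before acknowledging it.
        states.put(collectorId, target);
    }
}
```

Keeping all transitions behind one server-side controller is what makes the UI storage- and agent-agnostic: the browser only ever asks the server to change or report collector state.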

The user interface has two components. Naturally, one is to simply query and render data, and this can be done using BIRT. However, when the information being rendered is the actual network or the data collectors, the user interface must also interact with the control server. Initially this can be a simple command-structure interface, as is done with TPTP today, but more regular interaction APIs similar to WSDM or JMX need to be investigated, since the commands are very similar (start, stop, execute, query status, ...).
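The command set above (start, stop, query status) maps naturally onto a standard JMX MBean. The sketch below is purely illustrative, not an actual COSMOS or TPTP interface; only the `*MBean` naming convention is standard JMX. In a real deployment the interface and its implementation would each be a public type in its own file, as JMX requires; they are shown together (package-private) here only to keep the sketch self-contained.

```java
// Hypothetical management interface for one data collector. JMX derives
// the management operations (start, stop) and the read-only "Status"
// attribute from this interface's method names.
interface DataCollectorMBean {
    void start();
    void stop();
    String getStatus();
}

// Hypothetical implementation the control server would register.
class DataCollector implements DataCollectorMBean {
    private volatile boolean running;

    @Override public void start() { running = true; }

    @Override public void stop() { running = false; }

    @Override public String getStatus() { return running ? "RUNNING" : "STOPPED"; }
}
```

Registered with the platform MBean server (`ManagementFactory.getPlatformMBeanServer()`) under an `ObjectName` such as `cosmos.prototype:type=DataCollector`, the same start/stop/status commands would then be reachable from any JMX console, which is the kind of regular interaction API the paragraph above suggests investigating.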
