Talk:COSMOS Design 214576

Talk page for COSMOS QA Criteria.

--Popescu.ca.ibm.com 09:36, 8 January 2008 (EST)

  1. Assess localization support; validate that the code can run on non-English locales.
    1. This type of testing will not happen on every iteration, most likely not even on every release. This is a placeholder to keep track of translatability testing once COSMOS offers localization support.

Tmakins.us.ibm.com 16:33, 10 January 2008 (EST)

  • We should clarify which activities will be executed for each iteration and which will only be executed for milestones.
  • General Question: I assume that specifics on test cases, platforms, metrics, etc. will be documented in a QA plan which will be based on this document?

Paul Stratton, CA, 11-Jan-2008

Unit testing - what framework will be used to run the JUnit tests - Eclipse? Will QA perform regression testing using JUnits from previous iterations? Will manual tests be run within TPTP? Are there any plans for QA to supplement the current unit tests where they are considered inadequate?

End-to-end testing - what is the strategy to automate this testing and provide more comprehensive coverage?

start --Marty 08:32, 16 January 2008 (EST)

Quality Expectations:
Bug-Free Implementation

  1. The JUnits should contain the ER number; they should have Javadoc that shows the Bugzilla URL.
  2. The JUnits should have Javadoc added. The CMDBf JUnits are a good example of this (org.eclipse.cosmos.dc.cmdbf.services.tests).
  3. The JUnits should state what is required in order for them to be executed, e.g. the Management Domain should already be started. (A sketch combining points 1-3 follows this list.)
  4. It would be nice to say that 100% of the code is covered, but how do you ensure that? We are using JUnit version 3. Will 4 be allowed? Any reason why we are using 3 and not 4?
  5. I think the JUnits should state what they are testing, rather than leaving it up to the reader to figure out.
  6. What about JUnits that test error handling? If my code is supposed to write a log record somewhere when a certain condition is found, there should be a JUnit that forces that condition to occur and checks that the log record is created. So far I have not seen these kinds of tests, but they should be there; if they are not, then you can clearly state that the JUnits do not cover 100% of the code.
  7. Does release engineering produce Javadoc for JUnits and put it somewhere? (It should.)
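
A minimal JUnit 3 sketch of the conventions in points 1-3 above; the ER number, the Bugzilla URL, and the MdrLookupTest class are hypothetical, purely for illustration:

  import junit.framework.TestCase;

  /**
   * Tests for ER 123456 (hypothetical number, for illustration only).
   * Bug report: https://bugs.eclipse.org/bugs/show_bug.cgi?id=123456
   *
   * Prerequisite: the Management Domain must already be started.
   *
   * What is tested: that a registered MDR can be looked up by the client.
   */
  public class MdrLookupTest extends TestCase {

      public void testRegisteredMdrCanBeLookedUp() {
          // exercise the code under test here and assert on the result;
          // a placeholder assertion keeps the sketch compilable
          assertTrue("placeholder for the real check", true);
      }
  }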



Easy-to-use deployment package.
As well as having an easy-to-use deployment package, we should have an easy-to-execute 'self-test' program that runs out of the box and determines whether the test environment is correctly set up and working. This should not require the user to start x number of things, execute x number of commands, etc. It will save a great deal of time in future.
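
A minimal sketch of such a self-test, assuming hypothetical host/port values for the Management Domain and Data Broker; the real endpoints would come from the deployment package's configuration:

  import java.io.IOException;
  import java.net.Socket;

  public class CosmosSelfTest {

      // Checks that a component is listening on the given host and port.
      private static boolean isListening(String name, String host, int port) {
          try {
              new Socket(host, port).close();
              System.out.println("OK    " + name + " at " + host + ":" + port);
              return true;
          } catch (IOException e) {
              System.out.println("FAIL  " + name + " at " + host + ":" + port
                      + " (" + e.getMessage() + ")");
              return false;
          }
      }

      public static void main(String[] args) {
          // Hypothetical default ports, for illustration only.
          boolean ok = isListening("Management Domain", "localhost", 8080);
          ok = isListening("Data Broker", "localhost", 8081) && ok;
          System.out.println(ok ? "Environment looks usable."
                                : "Environment is not ready.");
          System.exit(ok ? 0 : 1);
      }
  }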



Unit-level testing - code review

  1. How do I know I wrote the code in the right way? Could I have done this better? More efficiently? You are never going to know the answers to these kinds of questions without reviewing the code.
  2. Template programs... If, for example, we have an MDR that we would like all MDRs to be based upon, one that has good documentation, JUnit tests, etc., let's flag those programs and JUnit tests as templates for how we do things. At least if we don't have the time to code review, we can state that the code written was based upon an agreed template.

Operational Efficiency
Quality Characteristics

  1. What you expect of the quality characteristics depends on the use case. It's not one size fits all.
  2. If we are querying a data source with the current systems today, that has to be a guide to how long things should take. In general, with the added COSMOS layers, you would expect some degradation. But what degradation is acceptable? If we are saying 15%, then someone somewhere has to test the query with the current way of doing it, in order to determine x+15% (see the sketch after this list). That is necessary if you want to test degradation, but the time needed to determine x should not be underestimated.
  3. If a client retrieves data from more than one source, performance perception could be based on whichever DataManager/MDR is actually the slowest. Which brings up an interesting point: are client queries threaded or not?
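
A minimal sketch of the x+15% check in point 2, assuming hypothetical directQuery()/cosmosQuery() stand-ins for whatever harness actually issues the queries; single runs are shown, though in practice x should be averaged over several runs:

  public class DegradationCheck {

      private static void directQuery() { /* query the data source the current way */ }
      private static void cosmosQuery() { /* the same query through the COSMOS layers */ }

      public static void main(String[] args) {
          long start = System.currentTimeMillis();
          directQuery();
          long baseline = System.currentTimeMillis() - start;   // this is x

          start = System.currentTimeMillis();
          cosmosQuery();
          long layered = System.currentTimeMillis() - start;

          long allowed = baseline + (baseline * 15) / 100;      // x + 15%
          System.out.println("baseline=" + baseline + "ms, layered=" + layered
                  + "ms, allowed=" + allowed + "ms");
          if (layered > allowed) {
              System.out.println("Degradation exceeds the agreed 15% threshold.");
          }
      }
  }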



end --Marty 08:32, 16 January 2008 (EST)

--sleeloy.ca.ibm.com 03:27, 24 January 2008 (EST)

  1. When talking about the COSMOS team specifying the supported platforms, we should also talk about supported browsers, i.e. Firefox and IE.
  2. We should also provide a strategy or best-practice document for recording test results. The current test result structure can be improved. For example, the test results should follow some form of naming convention to indicate whether a test case is associated with data collection, data visualization, or resource modelling.
  3. We should also determine if the test reports are sufficient for our needs or whether there needs to be additional test reports.


Review Meeting 24th Jan - QA Criteria Document Section 2

COSMOS meeting to discuss the QA Criteria document - Quality Perspective 2: Is COSMOS a consumable entity?

Attendees: Jimmy, Martin, JT, Tania, David, Shivi, Kishore, Leonard, Paul.

Successful integration of COSMOS components

  1. Modify the quality expectation to “Successful Application integration of COSMOS components”, as we will not be providing hardware/network etc. integration tools.
  2. How will QA measure this – QA will check for the existence and correct operation of the tools.
  3. The Beta process will also be important for gauging the success of these tools.

COSMOS stability during production deployments

  1. This expectation subsumed into the Operational efficiency section.

COSMOS team must state the kinds of MDRs that can be integrated and provide samples

  1. Remove ERs 201302, 201317. These are bug fixes to the Data Centre examples.
  2. Modify the QA role to “Test Sample MDRs”.

User documentation

  1. Agreed that API Documentation should remain a separate quality expectation – need to raise an ER against the COSMOS build to generate Javadoc.
  2. Richard Vasconi is producing a template against which developers can start creating user doc material.

Samples / skeleton MDR implementation / any collateral

  1. Subsumed into previous integration quality expectation. Paul to follow up with Shivi to check that all points are covered.

Additional platforms

  1. Add link to M2 Dependencies and ER 216210

Dependencies on other open source software

  1. Add link to M2 Dependencies
  2. Discussion around whether QA should test at later versions of dependencies than those specified. Agreed that QA should perform full testing at the minimum versions and sanity testing at the latest versions.

Future enhancements / bug reporting mechanism

  1. Bugzilla is the standard mechanism. Question from QA about the post-1.0 mechanism – support? This still needs to be defined.
    Like any other Eclipse.org project, I would expect us to continue to accept bug reports and enhancement requests post-1.0. Obviously the "Version" field is going to become much more important at that point. David whiteman.us.ibm.com 12:30, 25 January 2008 (EST)

Discussion on the 3rd section of this Doc deferred to Summit meeting 28th Jan, 2008.


David whiteman.us.ibm.com 14:01, 25 January 2008 (EST) The design document implies scalability is a build/doc problem. We should also have QA define some stress/scalability tests. These can be done via JUnit or other harness, but the idea is that we can test things like having an MDR contain thousands of records, and ensure that the response time of a query against it meets certain criteria.
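
A minimal JUnit 3 sketch of the kind of stress test David describes; the record count, the 5-second ceiling, and the query helper are hypothetical illustrations, not agreed criteria:

  import junit.framework.TestCase;

  public class MdrScalabilityTest extends TestCase {

      // Hypothetical stand-in: populate the MDR with the given number of
      // records, then issue a CMDBf query against it and read the response.
      private void queryMdrWithRecords(int recordCount) {
          // populate the repository, run the query, read the response
      }

      public void testQueryAgainstLargeMdrMeetsResponseTimeCriteria() {
          long start = System.currentTimeMillis();
          queryMdrWithRecords(10000);
          long elapsed = System.currentTimeMillis() - start;
          assertTrue("query took " + elapsed + "ms, over the 5000ms ceiling",
                  elapsed <= 5000);
      }
  }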

Jimmy "Daddy" Mohsin 14:01, 25 January 2008 (EST) David, you are absolutely correct. This is really a QA issue. However, since Component cosmos.qa is yet to be created in Bugzilla, we have been (ab)using cosmos.build as the Component. This has also seeped into the design.

David whiteman.us.ibm.com 17:13, 25 January 2008 (EST) I see you copied and pasted the date of my comment, Jimmy :-) The way to get your name and the date to appear is to simply insert 4 tildes without spaces where you want them to appear, like this: ~~~~.

Review Meeting 30th Jan - QA Criteria Document Section 3

Strpa05.ca.com 09:18, 31 January 2008 (EST)

During the Architecture call on 30th Jan there was a discussion concerning QA Criteria – Quality Perspective 3: COSMOS Operational Efficiency.

Attendees: Mark Weitzel, Hubert Leung, Ali Mehregani, Bill Muldoon, Tania Makins, David Whiteman, Martin Simmonds, Jack Devine, Don Ebright, Jimmy Mohsin, Dominica Gjelaj, Paul Stratton, John Todd, Sheldon Lee Loy, Kishore Adivishnu

  1. Opening remarks about the requirement to state whether there are any quality statements that could/should be made in the area of operational efficiency for COSMOS 1.0. Quantifying parameters and characteristics will require a test harness applied to a reference system to provide baseline information. This would then lead to testing to establish acceptable limits and thresholds.
  2. General discussion about what characteristics it would be possible and beneficial to provide to adopters up front, and what would need to be derived from adopter testing.
  3. Agreement that whatever load is applied to the system, it should not crash. If load such as data volumes is unacceptably high then the user should be warned, but the integrity of the system should not be affected.
  4. Don suggested that characteristics such as footprint and security compliance would be useful additional factors. The task to define h/w and s/w operational guidelines is documented under ER 216210.
  5. Mark suggested that Data Volume testing might be performed with selected adopters, although this would likely remain internal unless the tests/data could be externalized. Kishore (QA) commented that it should be possible for QA to set up an environment to perform some basic testing around Data Volumes. All agreed that this would then provide more information to assess what acceptance criteria could be set.

Agreed changes to the document:

Availability, Capacity – remove.

Concurrency – limit to testing with 2 sessions, to ensure that components are thread safe (see the sketch after this list of changes).

Data Volumes, Performance – defer pending initial testing with volume data by QA.

Scalability, Stability – testing limited to ensuring that COSMOS components do not crash under load.
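
A minimal sketch of the agreed 2-session concurrency check, assuming a hypothetical runSession() stand-in for whatever client session the component under test accepts:

  public class TwoSessionTest {

      // Hypothetical stand-in: one client session issuing a query against
      // the component under test and validating the response.
      private static void runSession(String name) {
          // open a session, issue the query, validate the response
          System.out.println(name + " completed");
      }

      public static void main(String[] args) throws InterruptedException {
          Thread first = new Thread(new Runnable() {
              public void run() { runSession("session-1"); }
          });
          Thread second = new Thread(new Runnable() {
              public void run() { runSession("session-2"); }
          });
          first.start();
          second.start();
          first.join();
          second.join();
          // If both sessions complete with valid responses and no exception,
          // the component has passed this basic thread-safety check.
          System.out.println("both sessions completed");
      }
  }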


Scalability and Performance Testing

--Marty 02:33, 1 February 2008 (EST)

Here are some things that I think need to be noted with regard to the kind of testing we really need to do in COSMOS. I am not making any implication as to when we do this, only that it is necessary to do at some point.

Firstly, I am going to split up the different areas:

  1. Client queries the MDR
  2. MDR returns the query response to the Client
  3. Client queries the Management Domain to determine where the broker is
  4. Client queries the Data Broker for Data Managers
  5. MDR queries the Management Domain to determine where the broker is
  6. MDR registers with the broker
  7. The CMDBf graph response goes through an outputter to create JSON
  8. The DOJO widgets are populated from the JSON

There may be some I have missed, but I think this gives good coverage. The following should all be documented, and the response times and results recorded. We should also note whether COSMOS handles each case ‘gracefully’!



  1. Client queries the MDR
    1. Use the Example MDR, as its repository is an XML file that can be easily manipulated.
    2. Start with 10 records in the repository, query it with the CMDBf query from the COSMOS UI, note the time taken to get a response, and whether the query worked
    3. Next 100 records, and so on until it breaks. It will break, because the outputter produces JSON, and the more JSON there is, the more memory it takes.
    4. 1,000, then 10,000, then 100,000, then 1,000,000, and keep going till it breaks, keeping note of response times.
    5. When it breaks, binary chop back towards the last successful execution and try with that. Do the binary chopping a few times so we get a more accurate idea of when it may break.
  2. MDR Returns query to client
    1. Try the client on the same machine (this overlaps with test 1).
    2. Try the client on a different machine
    3. Try multiple clients accessing the MDR at the same time
    4. This next bit is going to be used in a lot of places... Create an application that reads an XML file in which each record is a CMDBf query. When the full file has been read in, thread the calls to the MDR, i.e. simulate multiple clients calling the MDR. Adjust the XML input file to increase the volume. Run the same ‘threader’ program at the same time on another machine. (A sketch of the threader follows this list.)
  3. Client queries the Management Domain to determine where the broker is
    1. Use the ‘threader’ program with an xml input file to bombard the management domain.
    2. Same as the previous step, but from multiple machines.
  4. Same as 3 (for DM).
  5. Same as 3 (for MDR to MD)
  6. Same as 3 (for MDR registration).
  7. This is a subset of 1. However, I am sure it will be very valuable to know how long things take at this stage, and to compare the percentage of time spent in this stage against the overall response time.
    1. Use the ‘threader’ program to read in graph responses and put them through each outputter that we have
  8. Same as 7.
    1. The Dojo toolkit allows you to write the response time for loading the widget to the web page.
    2. Take the JSON that will be produced at each stage of 1.
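
A minimal sketch of the ‘threader’ program from 2.4, assuming the input XML has one <query> element per CMDBf query and a hypothetical sendToMdr() stand-in for the real MDR call; error handling is elided:

  import java.io.File;
  import java.util.ArrayList;
  import java.util.List;
  import java.util.concurrent.ExecutorService;
  import java.util.concurrent.Executors;
  import java.util.concurrent.TimeUnit;
  import javax.xml.parsers.DocumentBuilderFactory;
  import org.w3c.dom.Document;
  import org.w3c.dom.NodeList;

  public class Threader {

      // Hypothetical stand-in: send one CMDBf query to the MDR endpoint.
      private static void sendToMdr(String query) {
          // open a connection to the MDR, post the query, read the response
      }

      public static void main(String[] args) throws Exception {
          // Read the whole input file first: one <query> element per record.
          Document doc = DocumentBuilderFactory.newInstance()
                  .newDocumentBuilder().parse(new File(args[0]));
          NodeList nodes = doc.getElementsByTagName("query");
          List<String> queries = new ArrayList<String>();
          for (int i = 0; i < nodes.getLength(); i++) {
              queries.add(nodes.item(i).getTextContent());
          }

          // Then fire every query concurrently to simulate multiple clients.
          ExecutorService pool = Executors.newFixedThreadPool(10);
          long start = System.currentTimeMillis();
          for (int i = 0; i < queries.size(); i++) {
              final String query = queries.get(i);
              pool.execute(new Runnable() {
                  public void run() {
                      sendToMdr(query);
                  }
              });
          }
          pool.shutdown();
          pool.awaitTermination(10, TimeUnit.MINUTES);
          System.out.println(queries.size() + " queries in "
                  + (System.currentTimeMillis() - start) + "ms");
      }
  }

Run it with the query file as its argument on one machine, then start the same program on a second machine to double the load; growing the XML file covers the volume ramp described in test 1.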



End --Marty 02:33, 1 February 2008 (EST)
