
Talk:COSMOS Design 214576

Revision as of 16:31, 25 January 2008

Talk page for COSMOS QA Criteria.

--Popescu.ca.ibm.com 09:36, 8 January 2008 (EST)

  1. Assess localization support; validate that the code can run on non-English locales.
    1. This type of testing will not happen in every iteration, most likely not even in every release. This is a placeholder to keep track of translatability testing once COSMOS offers localization support. (A sketch of such a check follows this list.)
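
For reference, a translatability smoke test could be as small as the sketch below. It is only a sketch: the bundle name org.eclipse.cosmos.messages and the key query.timeout are invented placeholders, to be replaced with real COSMOS message bundles once localization support exists.

  import java.util.Locale;
  import java.util.ResourceBundle;
  import junit.framework.TestCase;

  /**
   * Translatability smoke test: verifies that messages resolve in a
   * non-English locale. Bundle and key names below are placeholders.
   */
  public class LocaleSmokeTest extends TestCase {

      public void testMessagesResolveInGermanLocale() {
          Locale saved = Locale.getDefault();
          try {
              Locale.setDefault(Locale.GERMANY);
              // A missing bundle or key throws MissingResourceException,
              // which JUnit reports as a test error.
              ResourceBundle bundle =
                  ResourceBundle.getBundle("org.eclipse.cosmos.messages");
              assertNotNull(bundle.getString("query.timeout"));
          } finally {
              Locale.setDefault(saved); // never leak the locale change
          }
      }
  }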

Tmakins.us.ibm.com 16:33, 10 January 2008 (EST)

  • We should clarify which activities will be executed for each iteration and which will only be executed for milestones.
  • General Question: I assume that specifics on test cases, platforms, metrics, etc. will be documented in a QA plan which will be based on this document?

Paul Stratton. CA. 11-Jan-2008

Unit testing - what framework will be used to run the JUnit tests - Eclipse? Will QA perform regression testing using JUnits from previous iterations? Will manual tests be run within TPTP? Are there any plans for QA to supplement the current unit tests where they are considered inadequate?

End to end testing - what is the strategy to automate this testing and provide more comprehensive coverage?

start --Marty 08:32, 16 January 2008 (EST)

Quality Expectations:
Bug-Free Implementation

  1. The JUnits should contain the ER number; they should have javadoc that shows the Bugzilla URL.
  2. The JUnits should have javadoc added. The CMDBf JUnits are a good example of this (org.eclipse.cosmos.dc.cmdbf.services.tests).
  3. The JUnits should state what is required in order for them to be executed, e.g. the management domain should already be started.
  4. It would be nice to say that 100% of the code is covered, but how do you ensure that? We are using JUnit version 3. Will 4 be allowed? Is there any reason why we are using 3 and not 4?
  5. I think the JUnits should state what they are testing, rather than leaving it up to the reader to figure out.
  6. What about JUnits that test error handling? If my code is supposed to write a log record somewhere when a certain condition is found, there should be a JUnit that forces that condition to occur and checks that the log record is created. So far I have not seen these kinds of tests, but they should be there, and if they are not, then you can clearly state that the JUnits do not cover 100% of the code. (A sketch of such a test follows this list.)
  7. Does release engineering produce javadoc for the JUnits and publish it somewhere? (It should.)
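
To make items 1, 3, 5 and 6 concrete, here is a minimal sketch in JUnit 3 style (the version we currently use). The class name, logger name and ER link are illustrative placeholders rather than real COSMOS identifiers, and the failing code path is simulated with a direct log call:

  import java.util.ArrayList;
  import java.util.List;
  import java.util.logging.Handler;
  import java.util.logging.Level;
  import java.util.logging.LogRecord;
  import java.util.logging.Logger;
  import junit.framework.TestCase;

  /**
   * Tests: that a failed query writes a log record (item 6).
   * ER: https://bugs.eclipse.org/bugs/show_bug.cgi?id=<ER number> (item 1)
   * Prerequisites: none; the logger is captured in-process (item 3).
   */
  public class ErrorLoggingTest extends TestCase {

      private static final Logger LOG =
          Logger.getLogger("org.eclipse.cosmos.sample"); // placeholder name

      /** Collects published records so the test can assert on them. */
      private static final class CapturingHandler extends Handler {
          final List records = new ArrayList();
          public void publish(LogRecord record) { records.add(record); }
          public void flush() { }
          public void close() { }
      }

      public void testFailureWritesLogRecord() {
          CapturingHandler handler = new CapturingHandler();
          LOG.addHandler(handler);
          try {
              // Stand-in for forcing the error condition in the component
              // under test; a real test would drive the failing code path.
              LOG.log(Level.SEVERE, "query failed: data manager unreachable");
              assertEquals(1, handler.records.size());
              assertEquals(Level.SEVERE,
                           ((LogRecord) handler.records.get(0)).getLevel());
          } finally {
              LOG.removeHandler(handler);
          }
      }
  }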



Easy to use deployment package.
As well as having an easy-to-use deployment package, we should have an easy-to-execute 'self-test' program that runs out of the box and determines whether the test environment is correctly set up and working. This should be possible without the user having to start x number of things, execute x number of commands, etc. It will save a great deal of time in future (a minimal sketch follows).
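
One possible shape for that self-test, assuming the environment exposes an HTTP endpoint we can probe; the default URL is an invented placeholder for the real management domain address:

  import java.net.HttpURLConnection;
  import java.net.URL;

  /**
   * Minimal out-of-the-box self-test: probes one endpoint and prints
   * PASS or FAIL. The default URL is a placeholder.
   */
  public class SelfTest {

      public static void main(String[] args) {
          String endpoint = args.length > 0 ? args[0]
              : "http://localhost:8080/cosmos/ping"; // placeholder URL
          try {
              HttpURLConnection conn =
                  (HttpURLConnection) new URL(endpoint).openConnection();
              conn.setConnectTimeout(5000);
              conn.setReadTimeout(5000);
              int code = conn.getResponseCode();
              System.out.println(code == 200
                  ? "PASS: environment responded at " + endpoint
                  : "FAIL: unexpected HTTP " + code + " from " + endpoint);
          } catch (Exception e) {
              System.out.println("FAIL: cannot reach " + endpoint + ": " + e);
          }
      }
  }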



Unit-Level Testing: Code Review

  1. How do I know I wrote the code in the right way? Could I have done this better? More efficiently? You are never going to know the answers to these kinds of questions without reviewing the code.
  2. Template programs: if, for example, we have an MDR that we would like all MDRs to be based upon, one that has good documentation, JUnit tests, etc., let's flag those programs and JUnit tests as templates for how we do things. At least if we don't have the time to code review, we can state that the code written was based upon an agreed template.

Operational Efficiency
Quality Characteristics

  1. What you expect of the quality characteristics depends on the use case. It is not one size fits all.
  2. If we are already querying a data source with the current systems, that has to be a guide as to how long things may take. In general, with the added COSMOS layers, you expect some degradation. But what degradation is acceptable? If we are saying 15%, then someone somewhere has to run the query the current way in order to determine X+15%. I think that is necessary if you want to test degradation, but how much time is needed to determine X should not be underestimated. (A sketch of such a check follows this list.)
  3. If a client retrieves data from more than one source, perceived performance could be governed by whichever DataManager/MDR is actually the slowest. Which brings up an interesting point: are client queries threaded or not?
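
A sketch of how the X+15% check in point 2 could be automated. Everything here is an assumption to be agreed: the 15% budget, the run count, and the two query methods, which stand in for the existing access path (establishing X) and the same query through the COSMOS layers:

  import junit.framework.TestCase;

  /**
   * Degradation check: the COSMOS path must stay within an agreed
   * overhead budget relative to the existing access path.
   */
  public class DegradationTest extends TestCase {

      private static final int RUNS = 20;            // assumed sample size
      private static final double MAX_OVERHEAD = 0.15; // assumed 15% budget

      public void testCosmosOverheadWithinBudget() {
          long direct = medianNanos(new Runnable() {
              public void run() { queryDirect(); }
          });
          long cosmos = medianNanos(new Runnable() {
              public void run() { queryViaCosmos(); }
          });
          assertTrue("COSMOS path took " + cosmos + "ns vs baseline "
                         + direct + "ns",
                     cosmos <= direct * (1 + MAX_OVERHEAD));
      }

      /** Runs the task RUNS times and returns the median elapsed time. */
      private long medianNanos(Runnable task) {
          long[] samples = new long[RUNS];
          for (int i = 0; i < RUNS; i++) {
              long start = System.nanoTime();
              task.run();
              samples[i] = System.nanoTime() - start;
          }
          java.util.Arrays.sort(samples);
          return samples[RUNS / 2];
      }

      // Placeholders for the real query paths.
      private void queryDirect()    { /* existing, pre-COSMOS access */ }
      private void queryViaCosmos() { /* same query through COSMOS layers */ }
  }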



end --Marty 08:32, 16 January 2008 (EST)

--sleeloy.ca.ibm.com 03:27, 24 January 2008 (EST)

  1. When the COSMOS team specifies the supported platforms, we should also talk about supported browsers, i.e. Firefox and IE.
  2. We should also provide a strategy or best-practice document for recording test results. The current test result structure can be improved. For example, the test results should follow some form of naming convention to indicate whether the test case is associated with data collection, data visualization or resource modelling.
  3. We should also determine whether the test reports are sufficient for our needs or whether additional test reports are needed.


Review Meeting 24th Jan - QA Criteria Document Section 2

COSMOS meeting to discuss the QA Criteria document - QA Quality Perspective 2: Is COSMOS a consumable entity? Attendees: Jimmy, Martin, JT, Tania, David, Shivi, Kishore, Leonard, Paul.

Successful integration of COSMOS components

  1. Modify the quality expectation to “Successful Application integration of COSMOS components”, as we will not be providing hardware/network etc. integration tools.
  2. How will QA measure this? QA will check for the existence and correct operation of the tools.
  3. The beta process will also be important in gauging the success of these tools.

COSMOS stability during production deployments

  1. This expectation is subsumed into the Operational Efficiency section.

COSMOS team must state the kinds of MDRs that can be integrated and provide samples

  1. Remove ERs 201302 and 201317. These are bug fixes to the Data Centre examples.
  2. Modify the QA role to “Test Sample MDRs”.

User documentation

  1. Agreed that API documentation should remain a separate quality expectation – need to raise an ER against the COSMOS build to generate Javadoc.
  2. Richard Vasconi is producing a template against which developers can start creating user doc material.

Samples / skeleton MDR implementation / any collateral

  1. Subsumed into previous integration quality expectation. Paul to follow up with Shivi to check that all points are covered.

Additional platforms

  1. Add link to M2 Dependencies and ER 216210

Dependencies on other open source software

  1. Add link to M2 Dependencies
  2. Discussion around whether QA should test at later versions of dependencies than those specified. Agreed that QA should perform full testing at the minimum versions and sanity testing at the latest versions.

Future enhancements / bug reporting mechanism

  1. Bugzilla is the standard mechanism. Question from QA about the post-1.0 mechanism and support; still needs to be defined.
    Like any other Eclipse.org project, I would expect us to continue to accept bug reports and enhancement requests post-1.0. Obviously the "Version" field is going to become much more important at that point. David whiteman.us.ibm.com 12:30, 25 January 2008 (EST)

Discussion on the 3rd section of this Doc deferred to Summit meeting 28th Jan, 2008.


David whiteman.us.ibm.com 14:01, 25 January 2008 (EST) The design document implies scalability is a build/doc problem. We should also have QA define some stress/scalability tests. These can be done via JUnit or other harness, but the idea is that we can test things like having an MDR contain thousands of records, and ensure that the response time of a query against it meets certain criteria.
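
Along those lines, a stress test of this kind could be sketched as below; the in-memory list stands in for a real MDR, and the record count and two-second budget are assumptions pending agreed criteria:

  import java.util.ArrayList;
  import java.util.List;
  import junit.framework.TestCase;

  /**
   * Scalability sketch: load the repository with thousands of records
   * and assert a response-time ceiling on a query against it.
   */
  public class MdrScalabilityTest extends TestCase {

      private static final int RECORD_COUNT = 10000;     // assumed load
      private static final long MAX_QUERY_MILLIS = 2000; // assumed budget

      public void testQueryUnderLoadMeetsResponseTime() {
          // Populate the stand-in repository.
          List records = new ArrayList(RECORD_COUNT);
          for (int i = 0; i < RECORD_COUNT; i++) {
              records.add("record-" + i);
          }

          long start = System.currentTimeMillis();
          // Stand-in query; a real test would query the MDR.
          boolean found = records.contains("record-" + (RECORD_COUNT - 1));
          long elapsed = System.currentTimeMillis() - start;

          assertTrue(found);
          assertTrue("query took " + elapsed + "ms, budget is "
                         + MAX_QUERY_MILLIS + "ms",
                     elapsed <= MAX_QUERY_MILLIS);
      }
  }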

Jimmy "Daddy" Mohsin 14:01, 25 January 2008 (EST) David, you are absolutely correct. This is really a QA issue. However, since Component cosmos.qa is yet to be created in Bugzilla, we have been (ab)using cosmos.build as the Component. This has also seeped into the design.
