Talk:COSMOS QA i9 Activities
This talk page is tied to the URL for this ER.
Paul Stratton 29th Jan, 2008.
Jimmy, some updates and comments
1) Cosmos Dev process link included.
2) Is QA responsible for moving the ER from FIXED to VERIFIED? If so, this is an update to the Cosmos Dev process and this doc. If not, is there some sign-off from QA?
Jimmy says> This is a process question for Tania Makins. The COSMOS development process must reflect this process clearly.
3) The platform/operating system versions (and all other dependencies) to be tested should be stated explicitly or provided via a link to the COSMOS M2 Dependencies. These should be the minimum versions for full testing. Sanity testing can then be performed for current GA versions of the dependencies - I added a couple of points to the relevant section.
Jimmy says> Defer to Kishore / Srinivas to specify the exact platform coverage and document it here.
4) Should QA be responsible for checking the test results into CVS? That would need committer status. This doc needs to specify how and what test results are submitted.
Jimmy says> Since JT is a committer, perhaps he can help QA for i8? Please note that more committers are expected in i9.
5) There should be some formal record that QA is done with the iteration, to avoid any misunderstandings over email.
Jimmy says> Defer to Tania Makins; there should be some sort of a "QA phase complete" at the end of each iteration. The COSMOS development process must reflect this process clearly.
David whiteman.us.ibm.com 14:29, 31 January 2008 (EST)
Paul, re: #2 above, I don't know if this is documented anywhere, but in i8 we certainly followed the practice where QA is supposed to move ERs from FIXED to VERIFIED and in the case of regular defects it is whoever opened the defect that must verify it. We did have the problem in i8 where QA was only able to change to VERIFIED status if she opened the ER/defect, so I need to check with Mark to see if this restriction can be relaxed.
Re: #4, agreed. If there is no committer status, then QA should probably give test results to subproject leads for checkin.
David whiteman.us.ibm.com 14:49, 31 January 2008 (EST)
Jimmy, some comments on the document:
1. The weekly integration build will NOT be run on ALL platforms. They will run only on Windows. How do we address the lack of ongoing testing on additional platforms, i.e. Linux in i9?
I think you mean "weekly integration build testing", right?
Perhaps this is one place we might want to engage QA, to smoke test an integration build on Linux, since it doesn't appear that any developers are setup on that platform. We could define as minimal of a smoke test as makes sense.
QA will ***not*** run the JUnits; they will simply verify that the JUnits have been run and this is documented in Bugzilla
I thought the process was: QA will not run the full JUnit suite. QA might run individual JUnit tests if these have been identified as verification steps for a completed ER.
You should also explicitly state that QA does not have any responsibility for testing weekly integration builds.
All ERs should have JUnits in place.
Maybe that should say "All ERs should have JUnits or a manual TPTP test in place."
Jimmy Mohsin 17:49, 31 January 2008 (EST) David, I have addressed all your comments from this section.
--Domsr01.ca.com 04:54, 7 February 2008 (EST)
Jimmy, I have some comments on this document:
In scope platforms, OS's & configurations
I have added the platform details that QA will be working with for i9. However, I am not clear on which configuration QA should work with for i9. It's given as "TBD MDRs running on TBD machines and TBD Data Manager(s) running on TBD machines".
Jimmy says> Please propose the configurations that you intend to test / setup. Replace the TBD's by what you think is logical / minimal. I say minimal since at this juncture, performance testing is not a priority. The main source of this clause is to prevent the situation that occurred in i8, i.e. testing of multiple Brokers with one Domain, which was deemed out of scope.
Currently we have all 4 Data Managers bundled with a single OSGi container. I hope the i9 build will have MDRs deployed on Tomcat as individual bundles. If so, QA can attempt to configure MDRs running on different machines and test this.
Iteration QA Entrance and Exit Criteria
As part of the Exit Criteria, is QA responsible for checking test results into CVS? As QA has no committer status, we will only pass the results to sub-project leads to check in.
Jimmy says> This is fine for now. Since JT is a committer, perhaps he can help QA for i8? Please note that more committers are expected in i9.
Also, QA will not be able to move an ER to the verified state; instead QA can post a comment on the ER with test results.
Jimmy says> This is a process question for Tania Makins. The COSMOS development process must reflect this process clearly. I have added this as an agenda item for the Feb 13 call.
i9 Test Cases
Which test cases should be covered here? I assume it should cover all possible i9 end-to-end testing scenarios with different configurations, plus i9 ER-specific tests.
Jimmy says> Any test cases you intend to cover as part of your testing must be captured here. Again, depending on the scope of your testing, this may be a very small task. The main source of this clause is to prevent the situation that occurred in i8, i.e. testing of multiple Brokers with one Domain, which was deemed out of scope.
Also, where should we place these test cases? Should we embed them on the same page, or would you like us to create a new wiki page?
Jimmy says> This is a QA decision.
Jack Devine 10:32, 11 February 2008 (EST)
IMO test cases should be recorded in the design page associated with the ER.
Sheldon firstname.lastname@example.org 16:01, 12 Feb 2008 (EST) Under the "In scope platforms, OS's & configurations" section we should test the following browsers:
- Firefox 1.5+/Mozilla
- Internet Explorer 6.0+
Shouldn't the results from the test cases be published and persisted? How is this going to be accomplished? So there are going to be two types of tests: tests that are run by the developer and tests that are run by QA. Is there going to be an overall test report, or are we going to have two separate reports?
Maybe QA can attach the Word document to a TPTP manual test and check the manual tests into CVS. The test results can then be recorded and reported on. However, I think there's still a question of how the tests should be organized and named. We should follow some naming convention so that one can associate a test with a particular ER. This also assumes that QA has CVS access.
Suggested Naming Convention
<ER #>_<Sub Project>.testsuite (note: the Sub Project part is optional)