Testing Process Part 1 Draft 1 Comments
Revision as of 12:54, 14 November 2007 by Jgwest.ca.ibm.com
- Alan Haggarty:
- The goal for this document is a set of straightforward instructions for executing TPTP tests, along with the requirements for creating them.
- It is very much taken from Paul's strategy and represents reorganization far more than original content.
- I could not get the AGR execution part to work myself (only the first case in a suite passes), and I do not understand the steps as written. They are currently a direct copy of the original; I will rewrite them, but anyone with experience running AGR tests is welcome to comment on what a generic, straightforward set of steps for executing our GUI test suites should contain.
- For the execution section I am torn. In this document I tried to list only the steps that are physically required in all cases (i.e., comments, project structure, and references to allTests); items that are general "best practices," such as keeping cases short and reducing redundancy, will go in Part 2. However, I wonder whether this document should also include step-by-step instructions for creating the tests (New -> TPTP JUnit Test -> ....), similar to the step-by-step instructions for execution.
- There are a few [red] sections of text where I have my own questions or comments on the current draft.
- Part 2 will contain the expanded information that was in the original strategy, plus any new information we want to add. This way Part 1 can remain short, direct, and hopefully easy to use for quickly getting tests running and contributing to the project.
- Topics for inclusion in Part 2:
- TPTP Test Naming Conventions
- Test Creation Best Practices
- Test Execution Best Practices
- TPTP Test Project Exceptions
- The Common Test Infrastructure
- Jonathan West:
- Would it be possible to glue together some of the tests into a single .testsuite file? For instance, aggregate all of the various Platform.Communication.Request_Peer_Monitoring.(platform).testsuite files into a single Platform.Communication.Request_Peer_Monitoring.testsuite. That is, keep the existing files, but run the aggregate and check in the aggregate .execution file. This would save a significant amount of time on the results-generation side, since we would not have to go through the rigmarole of execution generation for each individual test. Is this feasible on the report-generation side? -- Jgwest.ca.ibm.com 11:54, 14 November 2007 (EST)