Testing Process Part 1 Draft 1 Comments

Alan Haggarty

  • The goal of this document is a set of straightforward instructions for executing TPTP tests, together with the requirements to follow when creating them.
  • It is very much taken from Paul's strategy and represents reorganization far more than original content.
  • I could not get the AGR execution part to work myself (only the first case in a suite passes), and I do not understand the steps as written. They are currently a direct copy of the original, and I will rewrite them, although anyone with experience running AGR tests can comment on what belongs in a generic, straightforward set of steps for executing our GUI test suites.
  • For the execution section I am torn. In this document I tried to list only the steps that are physically required in all cases (i.e., comments, project structure, and references to allTests); the items that are general "best practices", such as short cases, reducing redundancy, etc., will go in Part 2. However, I am wondering whether we need, or should have, step-by-step instructions for creating the tests in this document (new->TPTP JUnit Test-> ....), similar to the step-by-step instructions for execution?
  • There are a few [red] sections of text on which I have questions or comments of my own about the current draft.
  • Part 2 will contain the expanded information that was in the original strategy, plus any new information we want to add. This way, Part 1 can remain short, direct, and hopefully easy to use for quickly getting tests running and contributing to the project.
    • Topics for inclusion in Part 2:
      • Introduction
      • TPTP Test Naming Conventions
      • Test Creation Best Practices
      • Test Execution Best Practices
      • TPTP Test Project Exceptions
      • The Common Test Infrastructure

Jonathan West

  • Would it be possible to glue together some of the tests into one single .testsuite file? For instance, aggregate all of the various Platform.Communication.Request_Peer_Monitoring.(platform).testsuite files into a single Platform.Communication.Request_Peer_Monitoring.testsuite. That is, keep the existing files, but run the aggregate and check in the aggregate .execution file (see the sketch after this list). This would save a significant amount of time on the results-generation side, as we would not have to go through the rigmarole of execution generation for each individual test. Is this feasible on the report-generation side? -- Jgwest.ca.ibm.com 11:54, 14 November 2007 (EST)
  • As per Alan's suggestion that any questions I answered for myself upon review of the document would be beneficial, here is the comment I was going to post: under '2.2 Extracting the Test Resources', the second step is, "2. To ensure that the local workspace contains the latest test projects, re-extract at the beginning of each unit of work (for example, test pass, report generation, etc.)." Based on the context of this section, the term "extraction" implies a check-out from CVS. So does re-extraction at the beginning of each "unit of work" mean selecting, in Eclipse, the results project -> Team -> Update, or does it mean a straight wipe of the directory contents followed by a fresh check-out? My answer was: yes, technically it could be either, though erasing and re-checking out has the advantage of removing any extraneous executions that may have run. -- Jgwest.ca.ibm.com 09:41, 19 November 2007 (EST)
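
For illustration only, here is a minimal sketch of the aggregation idea in plain JUnit terms, assuming each per-platform .testsuite is backed by a JUnit suite class (all class and suite names below are hypothetical, and TPTP's generated suites may differ). Running this one aggregate would yield a single .execution file while keeping the per-platform files as-is:

    // Hypothetical aggregate of the per-platform Request_Peer_Monitoring suites.
    import junit.framework.Test;
    import junit.framework.TestSuite;

    public class RequestPeerMonitoringAllPlatforms {
        public static Test suite() {
            TestSuite suite = new TestSuite("Platform.Communication.Request_Peer_Monitoring");
            // Each addTest() call pulls in one existing per-platform suite,
            // so the individual .testsuite files stay untouched.
            suite.addTest(RequestPeerMonitoringWin32.suite());
            suite.addTest(RequestPeerMonitoringLinux.suite());
            return suite;
        }
    }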

Paul Slauenwhite

  • Naming conventions for root-level test suites and execution results should include the test plug-in ID.
  • Execution results need to be saved in a directory with a meaningful name (e.g., encoding the OS, JRE, and type (full, smoke, BVT)); see the illustrative layout after this list.
  • Test suites for specific JREs and OSes (e.g. JVMPI versus JVMTI) will not always run on the reference platform(s).
  • The Agent Controller and profiler test suites need to be migrated to use the TPTP Test Tools instead of requiring a custom configuration.
  • Define a root-level test suite for BVTs (possibly replacing the smoke tests).
  • We do not need to check in the execution results to CVS, due to disk space limitations and to avoid polluting our test pass results.
    • Each developer can rerun the automated tests to reproduce a failure.
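
As a purely illustrative example of the two naming points above (the names are placeholders, not an agreed convention), a root-level suite and its saved results might be laid out as:

    org.eclipse.tptp.platform.models.tests/AllTests.testsuite
    test.results/win32_sun1.5_full/org.eclipse.tptp.platform.models.tests.AllTests.execution

Here the suite and execution-result names carry the test plug-in ID, and the results directory name encodes the OS, JRE, and run type.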
