


Testing Process Part 1 Draft 1 Comments

Alan Haggarty

  • The goal of this document is a set of straightforward instructions for executing TPTP tests, along with the requirements for creating them.
  • It is very much taken from Paul's strategy and represents reorganization far more than original content.
  • I could not get the AGR execution part to work myself (only the first case in a suite passes), and I do not understand the steps as written. They are currently a direct copy of the original, and I will rewrite them, but anyone with experience running AGR tests is welcome to comment on what a generic, straightforward set of steps for executing our GUI test suites should contain.
  • For the execution section I am torn. In this document I tried to list only the steps that are physically required in all cases (i.e., comments, project structure, and references to allTests); items that are general "best practices", such as short cases and reduced redundancy, will go in Part 2. However, I wonder whether this document needs step-by-step instructions for creating the tests (New -> TPTP JUnit Test -> ...), similar to the step-by-step instructions for execution.
  • There are a few [red] sections of text on which I have my own questions or comments about the current draft.
  • Part 2 will contain the expanded information that was in the original strategy and any new information we want to add. This way Part 1 can remain short, direct and hopefully easy to use to quickly get running tests and contributing to the project.
    • Topics for inclusion in Part 2:
    • Introduction
    • TPTP Test Naming Conventions
    • Test Creation Best Practices
    • Test Execution Best Practices
    • TPTP Test Project Exceptions
    • The Common Test Infrastructure

Jonathan West

  • Would it be possible to glue together some of the tests into one single .testsuite file? So for instance, aggregate all of the various Platform.Communication.Request_Peer_Monitoring.(platform).testsuite files into a single Platform.Communication.Request_Peer_Monitoring.testsuite. That is, keep the existing files, but run the aggregate and check in the aggregate .execution file. This would save a significant amount of time on the results generation side, as we would not have to go through the rigmarole of execution generation for each individual test. Is this feasible on the report generation side? -- 11:54, 14 November 2007 (EST)
  • As per Alan's suggestion that any questions I answered for myself upon review of the document would be beneficial, here is the comment I was going to post: Under '2.2 Extracting the Test Resources', the second step is, "2. To ensure that the local workspace contains the latest test projects, re-extract at the beginning of each unit of work (for example, test pass, report generation, etc.)." Based on the context of this section, the extraction term implies a check-out from CVS. So presumably re-extraction at the beginning of each "unit of work" would be selecting, in Eclipse, the results project -> Team -> Update, or does it mean a straight wipe of the directory contents and then a check-out? My answer to this was: yes, technically it could be either, though erasing and re-checking out has the advantage of removing any extraneous executions that may have run. -- 09:41, 19 November 2007 (EST)

I assumed the opposite: that the main point is to get the latest projects, so I will change the term re-extract to update. 20:56, 28 November 2007 (EST)
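The erase-and-re-check-out approach discussed above can be sketched generically. This is a minimal sketch, assuming a placeholder workspace path; the fetch step is simulated with `mkdir` (in real use it would be a CVS checkout of the test projects):

```shell
# Sketch of the "wipe and re-check out" refresh. The checkout itself is
# simulated with mkdir; real use would run a CVS checkout of the test
# projects. WORKDIR is a hypothetical placeholder path.
WORKDIR="${TMPDIR:-/tmp}/test-results-demo"

# Simulate a stale workspace containing an extraneous execution result.
mkdir -p "$WORKDIR"
touch "$WORKDIR/stale.execution"

# Wipe the workspace contents...
rm -rf "$WORKDIR"
# ...then re-fetch (stand-in for a fresh checkout of the test projects).
mkdir -p "$WORKDIR"

# The extraneous execution result is gone after the refresh.
[ ! -e "$WORKDIR/stale.execution" ] && echo "workspace is clean"
```

A plain Team -> Update, by contrast, leaves such extraneous files in place, which is why erasing and re-checking out is the safer refresh.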

Paul Slauenwhite

  • Need a section to define TP1 and TP2 per iteration and explain what we test for each TP.
  • The \test-results\platform\org.eclipse.hyades.tests\All<test type>Tests.testsuite will be renamed to \test-results\platform\org.eclipse.hyades.tests\All<test type>TP1Tests.testsuite to reflect that these tests are run in TP1.
  • The \test-results\platform\org.eclipse.hyades.tests\All<test type>SmokeTests.testsuite will be renamed to \test-results\platform\org.eclipse.hyades.tests\All<test type>TP2Tests.testsuite to reflect that these tests are run in TP2.
  • Need a section explaining BVTs and how they relate to TP1 and TP2 testing (e.g. only automated tests executed on the reference platform).
  • Need a section (or second document) for hooking test suites into the BVT.
  • No need for the 1.0.
  • Section 2.1 should just reference the TPTP download site, Installation Guide, and Release Notes, with an additional note on the AC (e.g. remote execution) and AGR (a reminder to restart Eclipse with the -clean option is needed). These should not be version-specific links. Point the user to the section describing where to find the references for their release.
  • Add a separate Help section for the TPTP (e.g. Help >> Help Contents >> ...) and AGR documentation. Remove references from other sections.
  • In section 2.2:
    • The projects are plug-in projects, not Java projects.
    • Stress what needs to be extracted from CVS (may be a screen capture).
    • Discuss the structures of the test resources (e.g. top level test suites linking to root test suites in the test plugins) and how the test plugins should be structured.
  • For 2.3 remove step 4b.
  • For 2.3 and 2.4, remove the Examples list and step 1.
  • For 2.4 and 2.5, add the steps to execute the test suite on a remote machine, move the verdict table to the end of the section, remove inconclusive from the verdict table, and explain that the fail/error verdict event in the test log will contain a stack trace from which the tester can jump to source.
  • For 2.5 step 5, there is no inconclusive verdict.
  • For 2.5 step 5, add the exact steps.
  • For 2.5 step 6, change recorder to runner.
  • For 2.6 step 1, explain the structure.
  • For 2.6, steps 2 and 3 apply only if the tester is not a committer.
  • For 3.1, there are two step 1s; remove 'structured manual test case' from step 1, move step 3 (and section 3.2, step 1) to a common section, and remove step 4.
  • In 3.2, replace step 1 with a reference to the common section.
  • For 3.3, the AGR guide should not be a version-specific link. Point the user to the section describing where to find the reference for their release.
  • There is some additional white space around the test verdicts tables.
  • For 3.3, move steps 1 - 3 to a common section, since they are the same for JUnit/JUnit Plug-in, and replace step 4 with a reference to the common section.
  • For 3.3, remove the figure in step 4.
  • For 4.0, these should not be version-specific links. Point the user to the section describing where to find the references for their release.
  • Add a common section, referenced from each test type, on adding defect numbers to a test log (see the Defects section on the Events tab when selecting a verdict event).
  • Each test type should reference a common section on the directory structure and naming convention for test suites and test results.
  • Remove the first sentence in section 2.1.
  • Naming conventions for root level test suites and execution results should include test plug-in ID.
  • Execution results need to be saved in a directory with a meaningful name (e.g. OS, JRE, type (full, smoke, BVT)).
  • Test suites for specific JREs and OSes (e.g. JVMPI versus JVMTI) will not always run on the reference platform(s).
  • The Agent Controller and profiler test suites need to be migrated to use the TPTP Test Tools instead of requiring a custom configuration.
  • Define a root-level test suite for BVTs (possibly replacing the smoke tests).
  • We do not need to check in the execution results to CVS, due to disk space limitations and pollution of our test pass results.
    • Each developer can rerun the automated tests to reproduce a failure.
  • Provide a glossary for defining common terms, which can be referenced in the document. For example:
    • Test resources
    • TPTP Test Tools
  • For common tasks (e.g. extract test resources from CVS), create a common section and reference it using a link each time it is required.
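As one illustration of the naming points above, a results directory name could combine the test plug-in ID with the OS, JRE, and run type. This is a sketch only; the ordering, separator, and example values here are assumptions, not an agreed convention:

```shell
# Hypothetical results-directory naming scheme:
# <test plug-in ID>_<OS>_<JRE>_<run type>. All values are examples.
PLUGIN_ID="org.eclipse.hyades.tests"
OS="linux"
JRE="sun142"
RUN_TYPE="smoke"   # e.g. full | smoke | BVT

RESULTS_DIR="${PLUGIN_ID}_${OS}_${JRE}_${RUN_TYPE}"
echo "$RESULTS_DIR"   # → org.eclipse.hyades.tests_linux_sun142_smoke
```

A scheme like this keeps results for different platforms and run types side by side without collisions, which also makes it easy to spot which configurations are missing from a test pass.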
