Tuesday, July 31, 2007

Sample Test Plan

1. TEST PLAN: Provides an overview of the testing effort. Sections for inclusion:



  1. Test Plan identifier: A unique name or number, useful if you store all documents in a database.
  2. Introduction: Include references to all relevant policy and standards documents, and high-level product plans.
  3. Test items: A test item is a software item (function, module, feature, etc.) that is to be tested. List them all, or refer to a document that lists them all. Include references to specifications (e.g. requirements and design) and manuals.
  4. Features to be tested: Cross-reference them to test design specifications.
  5. Features not to be tested: Which ones and why not.
  6. Approach: Describe the overall approach to testing: who does it, main activities, techniques, and tools used for each major group of features. How will you decide that a group of features is adequately tested? The Standard also says that this section, not the schedule section, is the place to identify constraints, including deadlines and the availability of people and test items.
  7. Item pass/fail criteria: How does a tester decide whether the program passed or failed a given test?
  8. Suspension criteria and resumption criteria: List anything that would cause you to stop testing until it’s fixed. What would have to be done to get you to restart testing? What tests should be redone at this point?
  9. Test deliverables: List of all the testing documents that will be written for this product.
  10. Testing tasks: List all tasks necessary to prepare for and do testing. Show dependencies between tasks, special skills (or people) needed to do them, who does each, how much effort is involved, and when each will be done.
  11. Environmental needs: Describe the necessary hardware, software, testing tools, lab facilities, etc.
  12. Responsibilities: Name the groups (or people) responsible for managing, designing, preparing, executing, witnessing, checking, fixing, resolving, getting you the equipment, etc.
  13. Staffing and training needs: How many people you need at each skill level, and what training they need.
  14. Schedule: List all milestones with dates, and when all resources (people, machines, tools, and facilities) will be needed.
  15. Risks and contingencies: What are the highest risk assumptions in the test plan? What can go sufficiently wrong to delay the schedule, and what will you do about it?
  16. Approvals: Who has to approve this plan? Provide space for their signatures.
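
To make the sixteen sections above concrete, here is a minimal, purely hypothetical sketch of a test plan kept as structured data in Python (for example, so it can live in the document database mentioned in item 1). Every name and value is a placeholder, not a prescribed format.

# A hypothetical test-plan skeleton as structured data.
# Field names mirror the sections above; all values are placeholders.
test_plan = {
    "identifier": "TP-2007-001",
    "introduction": "References: QA policy v2, product plan rev B.",
    "test_items": ["login module", "report generator"],
    "features_to_be_tested": {"login": "TDS-01", "reports": "TDS-02"},
    "features_not_to_be_tested": {"admin console": "ships in a later release"},
    "approach": "Manual functional tests plus an automated regression suite.",
    "item_pass_fail_criteria": "All planned cases pass; no severity-1 defects open.",
    "suspension_criteria": "The build fails the smoke test.",
    "resumption_criteria": "A new build passes the smoke test; rerun affected cases.",
    "test_deliverables": ["test design specs", "test case specs", "test logs"],
    "environmental_needs": ["two test machines", "capture/replay tool"],
    "schedule": {"test design complete": "2007-08-15", "execution ends": "2007-09-01"},
    "risks_and_contingencies": ["late feature freeze", "only one tester knows reports"],
    "approvals": ["QA lead", "project manager"],
}
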

2. TEST DESIGN SPECIFICATION: Specifies how a feature or group of features will be tested.



  • Test design specification identifier: This is a unique name or number.
  • Features to be tested: Describe the scope of this specification.
  • Approach refinements: Expand on the approach section of the test plan.
    Describe the specific test techniques. How will you analyze results (e.g. visually or with a comparison program)? Describe boundary or other conditions that lead to selection of specific test cases. Describe any constraints or requirements common to all (most) tests. A brief sketch of boundary-based case selection and automated comparison appears after this list.

  • Test identification: List and briefly describe each test associated with this design. You may list a test case under many different designs if it tests many different types of features.
  • Feature pass/fail criteria: How can the tester decide whether the feature or combination of features has passed the test?
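
Two of the points under approach refinements, boundary conditions driving test-case selection and analyzing results with a comparison program rather than visually, lend themselves to a small sketch. The Python below is illustrative only; the function names and the assumed 1-100 input range are not part of any template.

def boundary_values(lo, hi):
    """Classic boundary-value picks for an inclusive integer range [lo, hi]."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

def compare(actual_lines, expected_lines):
    """Return the (1-based) line numbers that differ, instead of eyeballing output."""
    return [i for i, (a, e) in enumerate(zip(actual_lines, expected_lines), 1)
            if a != e]

# A field documented as accepting 1-100 yields six candidate test inputs.
print(boundary_values(1, 100))                            # [0, 1, 2, 99, 100, 101]
print(compare(["OK", "total=10"], ["OK", "total=12"]))    # [2]
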

3. TEST CASE SPECIFICATION: Defines a test case.

  1. Test case specification identifier: A unique name or number.
  2. Test items: What features, modules, etc. are being tested? References to specifications and manuals are in order.
  3. Input specifications: List all inputs by value, by range of values, or by name if they are files. Identify anything else that’s relevant, including memory-resident areas, values passed by the operating system, supporting programs or databases, prompt messages displayed, and relationships between inputs.
    Describe any timing considerations. For example, if the tester should enter data while the disk light is flashing, or within a half second after a certain message, say so. For very short intervals, describing the rhythm can be more effective than describing the exact times involved.
  4. Output specifications: List all output values and messages. Consider including response times.
  5. Environmental needs: List special requirements, including hardware, software, facilities, and staff.
  6. Special procedural requirements: List anything unusual in the setup, tester’s actions, or analysis to be done of the output.
  7. Inter-case dependencies: What tests have to be executed before this one, why, and what if the program fails them?
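
For illustration, the seven items above map naturally onto a small data structure. The Python class and field names below are assumptions made for this sketch, not a required schema; the point is that each part of the specification has an explicit home.

from dataclasses import dataclass, field

@dataclass
class TestCaseSpec:
    identifier: str                 # item 1: unique name or number
    test_items: list                # item 2: features/modules under test
    input_spec: dict                # item 3: values, ranges, or file names
    output_spec: dict               # item 4: expected values, messages, response times
    environmental_needs: list = field(default_factory=list)   # item 5
    special_procedures: str = ""                               # item 6
    depends_on: list = field(default_factory=list)             # item 7

tc_login_003 = TestCaseSpec(
    identifier="TC-LOGIN-003",
    test_items=["login module"],
    input_spec={"username": "valid user", "password": "wrong password, 3 times"},
    output_spec={"message": "Account locked", "max_response_time_s": 2.0},
    environmental_needs=["test database seeded with user accounts"],
    special_procedures="Clear the lockout table before running.",
    depends_on=["TC-LOGIN-001"],    # the account must have been created first
)
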


4. TEST PROCEDURE SPECIFICATION: Describes the steps for executing a set of test cases and analyzing their results.

  1. Test procedure specification identifier: A unique name or number.
  2. Purpose: What is this procedure for? Cross-reference all test cases that use this procedure.
  3. Special requirements: List any prerequisite procedures, special tester skills, and special environmental needs.
  4. Procedure steps: Include the following steps as applicable (a scripted sketch follows the list):
    • Log: any special methods or formats for logging results or observations.
    • Setup: preparations for execution of the procedure.
    • Start: how to begin execution of the procedure.
    • Proceed: any actions necessary during procedure execution.
    • Measure: how test measurements are made (e.g. response times).
    • Shut down: how to suspend testing in the face of unscheduled events (or when the tester goes home for the night).
    • Restart: where to restart and how, after a shut down.
    • Stop: how to bring execution to an orderly halt.
    • Wrap up: how to restore the environment to its original state.
    • Contingencies: what to do when it all goes wrong.
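
As a final sketch, the procedure steps can be mirrored one-for-one in an automated harness. Everything below, the log file name, the case identifiers, and the placeholder bodies, is assumed purely for illustration.

import logging
import time

logging.basicConfig(filename="tp-001-procedure.log", level=logging.INFO)   # Log

def setup():
    """Setup: prepare the environment and test data."""
    logging.info("setup: restoring baseline database")

def measure(case, elapsed):
    """Measure: record response times or other measurements."""
    logging.info("%s finished in %.2f s", case, elapsed)

def run_cases(cases):
    """Start / Proceed: execute each referenced test case in order."""
    for case in cases:
        started = time.monotonic()
        logging.info("running %s", case)
        # ... drive the application under test here ...
        measure(case, time.monotonic() - started)

def wrap_up():
    """Stop / Wrap up: restore the environment to its original state."""
    logging.info("wrap up: removing temporary data")

if __name__ == "__main__":
    try:
        setup()
        run_cases(["TC-LOGIN-001", "TC-LOGIN-003"])
    except KeyboardInterrupt:
        # Shut down / Restart: suspend now, resume later from the last logged case.
        logging.info("shut down: suspending; restart from the last case in the log")
    except Exception:
        # Contingencies: what to do when it all goes wrong.
        logging.exception("contingency: unexpected failure")
    finally:
        wrap_up()
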
