Test and evaluation master plan

A test and evaluation master plan (TEMP) is a critical element of project management for complex systems that must satisfy specification requirements. The TEMP is used to support the programmatic events, called milestone decisions, that separate the individual phases of a project. For military systems, the level of funding determines the Acquisition Category and the organization responsible for the milestone decision.[1][2]

A traceability matrix is generally used to link items within the TEMP to items within specifications.
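
A traceability matrix can be pictured as a mapping from TEMP test identifiers to the specification requirements each test verifies. The following Python sketch is illustrative only; the identifiers, field names, and coverage check are assumptions rather than a standard format.

  # Minimal sketch of a traceability matrix: each TEMP test is mapped to the
  # specification requirements it verifies. All identifiers are hypothetical.
  traceability_matrix = {
      "TEMP-TEST-001": ["SPEC-REQ-3.2.1", "SPEC-REQ-3.2.4"],  # cold weather operation
      "TEMP-TEST-002": ["SPEC-REQ-3.5.2"],                    # vehicle vibration compatibility
  }

  def requirements_without_tests(spec_requirements, matrix):
      """Return specification requirements not covered by any TEMP test."""
      covered = {req for reqs in matrix.values() for req in reqs}
      return [req for req in spec_requirements if req not in covered]

  print(requirements_without_tests(
      ["SPEC-REQ-3.2.1", "SPEC-REQ-3.2.4", "SPEC-REQ-3.5.2", "SPEC-REQ-3.6.1"],
      traceability_matrix,
  ))  # -> ['SPEC-REQ-3.6.1']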

Definition

The Test and Evaluation Master Plan documents the overall structure and objectives of the Test & Evaluation for a program.[3] It covers activities over a program’s life-cycle and identifies evaluation criteria for the testers.[4]

The test and evaluation master plan consists of individual tests. Each test contains the following; a minimal sketch of this structure appears after the list.

  • Test Scenario
  • Data Collection
  • Performance Evaluation
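
The grouping above can be represented as a simple data structure. The Python sketch below uses assumed field names and example content; it is not a prescribed TEMP format.

  # Minimal sketch of one TEMP test with its three parts.
  # Field names and example content are assumptions for illustration only.
  from dataclasses import dataclass

  @dataclass
  class TempTest:
      name: str
      test_scenario: str           # conditions, setup, and actions for the test
      data_collection: list[str]   # data to be recorded while the test runs
      performance_evaluation: str  # how the recorded data will be judged afterwards

  example_test = TempTest(
      name="Cold weather operation",
      test_scenario="Operate the system in an environmental chamber at low temperature",
      data_collection=["chamber temperature", "system status log"],
      performance_evaluation="System completes its mission profile without faults",
  )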

Test scenario

The test scenario establishes the test conditions. It is typically associated with a specific mission profile. For military systems, this would be a combat scenario, and it may involve Live Fire Test and Evaluation (LFT&E). For commercial systems, this would be a specific situation in which the item being developed is used.

For example, cold weather operation may require operation to be evaluated at temperatures below a specified low-temperature limit using an environmental chamber. Evaluating operation with a vehicle interface may require a vibration test to confirm compatibility. Evaluation of an Internet store would require the system to take the user through a product purchase while the system is loaded with other traffic.

The test scenario identifies the following; an illustrative example appears after the list.

  • Items required for testing
  • Instructions to set up the items that will be used during the test
  • General description for how to operate the system under test
  • Specific actions and events that will take place during the test
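
As an illustration, the Internet store example mentioned earlier could be captured along these four points. The structure, names, and steps below are assumptions, not a standard template.

  # Hypothetical test scenario for the Internet store example, organized by the
  # four points listed above. All names and steps are illustrative assumptions.
  internet_store_scenario = {
      "items_required": ["web store under test", "load-generation client", "test user account"],
      "setup_instructions": [
          "Deploy the store build to the test environment",
          "Start the load-generation client to simulate background traffic",
      ],
      "operation_description": "A tester uses a browser to shop as a normal customer",
      "actions_and_events": [
          "Log in with the test user account",
          "Add a product to the cart",
          "Complete checkout while background traffic is running",
          "Record page response times for each step",
      ],
  }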

Data collection

Data collection identifies the information that must be collected during the test. This requires preliminary setup before the test begins and may involve preparation for any of the following.

  • Settings for the system under test
  • Separate instrumentation
  • Written notes from direct observation
  • Sample collection

Systems that incorporate a computer typically require the ability to extract and record specific kinds of data from the system while it is operating normally.

Electronic data collection may be started and stopped as one of the actions described in the test scenario.
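
A minimal sketch of electronic data collection that a scenario can start and stop is shown below. The recorder class, the recorded fields, and the CSV output are assumptions for illustration.

  # Minimal sketch of a data recorder that a test scenario can start and stop.
  # The recorded fields and CSV output are illustrative assumptions.
  import csv
  import time

  class DataRecorder:
      def __init__(self, path):
          self.path = path
          self.rows = []
          self.recording = False

      def start(self):
          self.recording = True

      def record(self, name, value):
          if self.recording:
              self.rows.append((time.time(), name, value))

      def stop(self):
          self.recording = False
          with open(self.path, "w", newline="") as f:
              writer = csv.writer(f)
              writer.writerow(["timestamp", "measurement", "value"])
              writer.writerows(self.rows)

  recorder = DataRecorder("test_run_001.csv")
  recorder.start()                         # started as a scenario action
  recorder.record("chamber_temp_C", -5.0)
  recorder.stop()                          # stopped as a scenario action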

When data access is restricted, the transfer of data between organizations may require a Data Collection Plan. This can occur with classified military systems.

Data is analyzed after testing is complete. This analysis is called performance evaluation.

Performance evaluation

Measures of effectiveness (MOEs) are specific metrics used to measure results in the overall mission and in the execution of assigned tasks.

These may have flexible performance limits associated with the outcome of a specific event. For example, the first round fired from a gun aimed using a radar is not expected to impact a specific location, but its impact position can be measured by the radar, so the system should be able to deliver a round within a specific radius after several rounds have been fired. The number of rounds required to land one inside that radius is the MOE; the radius itself is a measure of performance (MOP).
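
A small worked example of this MOE is sketched below. The allowed radius and the measured impact positions are made-up values.

  # Worked example of the MOE described above: count how many rounds are fired
  # before one impacts within the allowed radius. All numbers are made up.
  import math

  allowed_radius_m = 50.0  # the radius (an MOP); value is an assumption

  # Radar-measured impact positions (x, y) in meters relative to the aim point.
  impacts = [(120.0, -80.0), (60.0, 45.0), (30.0, -20.0)]

  rounds_to_hit = None
  for count, (x, y) in enumerate(impacts, start=1):
      if math.hypot(x, y) <= allowed_radius_m:
          rounds_to_hit = count
          break

  print(rounds_to_hit)  # -> 3, the MOE for this example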

Measures of performance are specific metrics that have a pass or fail limit that must be satisfied. These are generally identified with the words shall or must in the specification.

One type of MOP is the distance that a vehicle with a specific load must travel at a specific speed before running out of fuel.

Another type of MOP is the distance at which a radar can detect a 1 square meter reflector.
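
Because an MOP has a pass or fail limit, it can be evaluated with a direct comparison. The thresholds and measured values below are made-up illustrations of the two examples above.

  # Simple pass/fail checks for the two MOP examples above.
  # Required limits and measured values are illustrative assumptions.
  required_range_km = 480.0     # vehicle shall travel at least this far on one load of fuel
  measured_range_km = 502.3

  required_detection_km = 30.0  # radar shall detect a 1 square meter reflector at this distance
  measured_detection_km = 28.5

  print("Fuel range MOP:", "pass" if measured_range_km >= required_range_km else "fail")
  print("Radar detection MOP:", "pass" if measured_detection_km >= required_detection_km else "fail")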

Measures of suitability (MOSs) evaluate the ability of a system to be supported in its intended operational environment.

For example, this may be an evaluation of the mean time between failures (MTBF) observed during other testing. A system with excessive failures may satisfy all other requirements and still not be suitable for use. A gun that jams when dirty is not suitable for military use.
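
MTBF is commonly estimated as total operating time divided by the number of failures observed. The hours, failure count, and required minimum below are made-up values.

  # Estimate MTBF from data gathered during other testing.
  # Operating hours, failure count, and the required minimum are made-up values.
  total_operating_hours = 1200.0
  failures_observed = 4

  mtbf_hours = total_operating_hours / failures_observed
  print(mtbf_hours)  # -> 300.0 hours

  # A suitability check could compare the estimate against a required minimum.
  required_mtbf_hours = 250.0
  print("suitable" if mtbf_hours >= required_mtbf_hours else "not suitable")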

These requirements are associated with the 'ilities'.

  • Reliability
  • Availability
  • Maintainability
  • Supportability
  • Usability

Purpose

The results of the TEMP evaluation are used for multiple purposes.

  • Rejection: funding termination decision
  • Redesign: modification and re-test funding decision
  • Full rate production: acceptance funding decision
  • Mission planning

Mission planning involves translation of MOEs, MOPs, and MOSs into the following.

  • Required Operational Capability (ROC)
  • Projected Operational Environment (POE)

ROC and POE are specific expectations evaluated by the TEMP that are used to determine how to deploy assets to satisfy a specific mission requirement. Diagnostic testing ensures these expectations remain satisfied for the duration of the mission.

Other usages

The term 'test and evaluation master plan', denoting a distinct overall guide to the test and evaluation functions of a development program, has also been used by the Australian Department of Defence.[5] Other organizations use the same term or similar terms such as 'Master Test and Evaluation Plan'.

References

  1. "Incorporating Test and Evaluation into DoD Acquisition Contracts" (PDF). Defense Acquisition University.
  2. "Test and Evaluation Master Plan Procedures and Guidelines" (PDF). University of Idaho, Idaho Falls.
  3. "Test & Evaluation Master Plan (TEMP)". Retrieved 6 Apr 2016.
  4. "Test & Evaluation, Test and Evaluation Master Plan". Retrieved 6 Apr 2016.
  5. "Test and Evaluation of Major Defence Equipment Acquisitions" (PDF). Australian National Audit Office. 24 Jan 2002. ISBN 0642-80612-8. Retrieved 6 Apr 2016.