Defect Management in Performance Testing vs. Functional Testing

Defect Management in Performance Testing

Although performance testing falls under the broader umbrella of testing, several factors distinguish it from other types of testing. One of the major differences is the defect management process. Unlike functional testing, performance testing does not ask the tester to find deviations between actual and expected results for test cases derived from the product specification; instead, it evaluates how the system behaves against its non-functional requirements.

Among other differences, attitude is another determinant that must be adjusted for performance testing to achieve optimal results:

  • To validate the non-functional requirements and finalize the workload to be tested, you need the attitude of a business analyst
  • To decide on the types of tests and identify violations in the application, you need the attitude of a tester
  • To identify the root cause of problems based on the test observations, you need the attitude of a developer and an architect
  • To evaluate the hardware footprint and its projections against the target user loads, you need the attitude of an infrastructure capacity planner

Defect management, in simple words, is the process of gauging and ensuring the quality of a software application by reporting the defects found during testing. This process differs between Agile and Waterfall environments.

Defect management is implemented differently in performance testing than in functional testing. Best practices for functional testing rely on metrics such as % test efficiency, % test cases passed/failed, defect discovery rate, first-run failure rate, and % test coverage, tracked in defect tracking software. These metrics do not apply to performance testing, so using a defect tracker to log and close bugs may not be appropriate there. Performance testing instead calls for a different set of metrics, reported and measured through performance investigation of the various layers of the system. Rather than the term ‘defect’, it is more appropriate to call a violation of a non-functional requirement a test observation or finding.

Following are some of the metrics that should be used in performance testing:

  • Transaction response time (seconds)
  • Component-wise/layer-wise response time breakdown (seconds)
  • System throughput (transactions per second)
  • Server load (hits per unit time or pageviews per unit time or users per unit time)
  • Server resource utilization (memory, CPU, disk & network)
  • Scalability level (# peak users supported)

In some instances, tests might need to be rerun to confirm an observation or behavior. The depth of analysis performed on the test results depends entirely on the scope of work. Every performance tester should be able to analyze results using techniques such as scatter-plot analysis, drill-down analysis, correlation, and trend analysis to provide more insight into bottlenecks and problems.
