Quality Assurance (QA) testing should be performed for every software development project, whether or not the performing organization has a Quality Assurance department. Organizations that create QA departments do so because they recognize that the skills required to perform thorough QA testing are unique and different from those required to create the software, not because the organization suddenly recognizes the need for QA testing. QA testing should deliver a product that meets all the stated requirements and all quality goals and objectives.
Your organization may view quality and QA testing as the domain of the QA department, but remember that, as project manager, you have overall responsibility for quality. You must ensure that the QA resources devoted to testing your project are sufficient to deliver its quality goals and objectives. The QA department may take responsibility for planning its own activities, but you must review the plan for its ability to deliver on those goals. Project managers who don’t have a QA department must acquire testing resources and plan the testing activities themselves.
The Test Plan
QA activities, including the acquisition of resources, should be set out in a Quality Management Plan. This can be something as simple as an e-mail, or it can be a full-blown project document. The Quality Management Plan should capture the project’s quality goals and objectives and describe the approach the project will take to meeting them. It may also include details such as roles and responsibilities, reports to be produced, and so on. The activities called for by the plan should be captured in the WBS and scheduled. The first activity to be performed is the creation of test cases.
Test cases should be created from the Business Requirements Document, Commercial Specification, or whatever other document is used to capture business requirements for the software system. It is important for the testers to work from a business document to create their test cases, particularly if the testers are not part of a QA department. The purpose of QA testing is to test the system against the business requirements, not the design. Creating test cases based on the system design could cause design errors to be missed: the code may faithfully implement the design, yet the design may not satisfy the business requirements. Test cases may be grouped into test suites by sub-system, functional area, tester, or some other means. Test cases that have been grouped together should be treated as a single entity.
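To make the traceability from business requirement to test case concrete, here is a minimal sketch in Python, assuming a hypothetical requirement BR-4.2 ("an order may contain up to 25 items") and a stand-in OrderSystem class; neither comes from any real specification.

```python
# A minimal sketch of test cases traced to a business requirement, using
# Python's unittest. BR-4.2, OrderSystem, and its methods are hypothetical.
import unittest

class OrderSystem:
    """Stand-in for the system under test."""
    MAX_ITEMS = 25

    def create_order(self, items):
        if len(items) > self.MAX_ITEMS:
            raise ValueError("order exceeds maximum of 25 items")
        return {"items": list(items), "status": "created"}

class TestOrderCreation(unittest.TestCase):
    """Test suite for the order sub-system; each test cites the business
    requirement it exercises, not the design document."""

    def test_br_4_2_accepts_up_to_25_items(self):
        # BR-4.2: a customer may place an order of up to 25 items.
        order = OrderSystem().create_order(["widget"] * 25)
        self.assertEqual(order["status"], "created")

    def test_br_4_2_rejects_26th_item(self):
        # BR-4.2 (boundary): the 26th item must be rejected cleanly.
        with self.assertRaises(ValueError):
            OrderSystem().create_order(["widget"] * 26)

if __name__ == "__main__":
    unittest.main()
```

Note that the test names reference the requirement, not any design element, so a reviewer can confirm coverage directly against the business document.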
Test cases should be reviewed by the Business Analyst to ensure they exercise the system the way the business intended. Test cases should be completed before the coding phase begins, and developers should have access to them so there is less risk of the system design or code failing to deliver the business requirements. Taking this approach requires testers to be on board during the planning phase of the project, but the return on this investment is the elimination of bugs caused by requirements misinterpretation. The test cases become components of the software system and should be controlled by the organization’s configuration management system.
If your project doesn’t have a QA department to write test cases, try to acquire team members who have experience with QA testing (not just development testing). Making these people responsible for writing test cases is the closest you can come to having a QA department. Keep test case writing and QA testing activities separate from development activities if you can. If you must mix testing and development activities, at least have a developer other than the one responsible for the code write the test cases. Having the developer who is responsible for the design and code write the test cases risks propagating errors caused by misinterpretation of the requirements.
QA testing should start with a fresh environment with all required software applications installed and licensed. A database instance should also be installed and populated with test data. Data can be sourced from several areas: the common or global data used in the development environment can be ported to the QA environment, data created by developers to perform tests similar to the QA test cases can be ported, and testers can supply their own data. QA testing should make as efficient use of data as possible; for example, the order created by one test case can be the order that is processed by the next, as in the sketch below.
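Here is a minimal sketch of that data chaining, using Python's unittest; the OrderLifecycleSuite class and the order fields are hypothetical.

```python
# A sketch of the data-chaining idea: the order created by one test case
# becomes the input to the next. All names and fields are hypothetical.
import unittest

class OrderLifecycleSuite(unittest.TestCase):
    """Tests run in name order; each stage reuses the data the previous
    stage produced instead of fabricating fresh test data."""
    order = None  # shared across the suite

    def test_1_create_order(self):
        type(self).order = {"id": 1001, "items": ["widget"], "status": "created"}
        self.assertEqual(self.order["status"], "created")

    def test_2_process_order(self):
        # Reuse the order created above rather than seeding a new one.
        self.assertIsNotNone(self.order, "create step must run first")
        self.order["status"] = "processed"
        self.assertEqual(self.order["status"], "processed")

if __name__ == "__main__":
    unittest.main()
```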
Monitor the bug reports in the queue used to report QA testing results. Rapid growth of the queue indicates a deluge of re-work, which may have a negative impact on the development phase of an iterative project, or overload the remaining developers on a waterfall project. Bug reports should also be monitored to ensure rapid closure of severity 1 bugs. I usually reserve the severity 1 designation for bugs that prevent further testing of at least one test suite; these bugs have the potential to delay the test schedule and therefore the final delivery of the project. You should also monitor bug reports for bugs that should have been detected in development testing, such as code that crashes instead of displaying an error message, or code that doesn’t handle inputs outside the acceptable range cleanly. A high ratio of development-related bugs may indicate inadequate development testing. Be especially suspicious if these bugs are related to one developer.
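A sketch of how this monitoring might be scripted over an exported bug queue; the field names and sample records are illustrative, not from any real bug tracker.

```python
# Hypothetical bug-queue records; a real tracker export would supply these.
from collections import Counter

bug_queue = [
    {"id": 1, "severity": 1, "status": "open",   "dev_related": True,  "developer": "alice"},
    {"id": 2, "severity": 3, "status": "closed", "dev_related": False, "developer": "bob"},
    {"id": 3, "severity": 2, "status": "open",   "dev_related": True,  "developer": "alice"},
]

# Severity 1 bugs block at least one test suite, so flag any still open.
open_sev1 = [b["id"] for b in bug_queue if b["severity"] == 1 and b["status"] == "open"]

# A high ratio of development-related bugs suggests inadequate development testing.
dev_ratio = sum(b["dev_related"] for b in bug_queue) / len(bug_queue)

# Be suspicious if development-related bugs cluster around one developer.
by_developer = Counter(b["developer"] for b in bug_queue if b["dev_related"])

print(f"Open severity 1 bugs (schedule risk): {open_sev1}")
print(f"Development-related bug ratio: {dev_ratio:.0%}")
print(f"Development-related bugs by developer: {dict(by_developer)}")
```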
You should also monitor reports for bugs that are caused by a difference between tester and developer expectations of how the system should behave. An example of this type of bug is a system that displays an error message complaining of an invalid character in a login userid and then exits, when the tester expects the system to collect the password before it displays the error message. These bugs should be rare when the developer has access to the test cases before coding begins, but will occur when they don’t.
Report Quality Assurance results to the project stakeholders. Your reports should communicate the information those stakeholders require, but you can help set their expectations by informing them of the number of test cases executed, the number remaining to be executed, the number of bugs by severity, and the number of bugs opened and closed this period. Don’t try to hide bad news such as a high bug count, but don’t be alarmist either. There may be legitimate reasons for a high volume of bug reports in a given period: a larger volume of code being tested, or a particularly complex piece of code, for example. The execution of a higher volume of test cases may also be an explanation; if it is, put the metric in context by reporting the increased volume of test cases executed alongside the increase in bug reports.
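A minimal sketch of putting the bug count in context, assuming made-up period figures:

```python
# Period-over-period QA status figures; all numbers here are invented.
this_period = {"tests_executed": 120, "bugs_opened": 18, "bugs_closed": 11}
last_period = {"tests_executed": 60,  "bugs_opened": 10, "bugs_closed": 9}

def pct_change(new, old):
    """Percentage change versus the prior period."""
    return (new - old) / old * 100

print(f"Test cases executed: {this_period['tests_executed']} "
      f"({pct_change(this_period['tests_executed'], last_period['tests_executed']):+.0f}% vs last period)")
print(f"Bugs opened: {this_period['bugs_opened']} "
      f"({pct_change(this_period['bugs_opened'], last_period['bugs_opened']):+.0f}% vs last period)")
print(f"Bugs closed: {this_period['bugs_closed']}")
# Reporting both deltas together shows whether more bugs simply
# reflect more testing rather than declining quality.
```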
Every time code is touched there is a risk of breaking it, including when the code is updated during re-work to fix a bug. Regression testing is the only way to mitigate this risk: not just the re-execution of the test that failed, but the re-execution of all the tests. Regression testing may be costly, particularly if the system has a GUI. The most effective way of reducing this cost is to automate regression testing. Automation will significantly reduce the cost of each regression run, although creating the tests initially is expensive and maintaining the test suite during the project has a cost of its own. Failing the acquisition of automated regression testing tools, you can try to eliminate duplication by regression testing by sub-system, for example.
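A minimal sketch of the "re-run everything" rule using Python's built-in test discovery; the tests/ directory layout is an assumption.

```python
# After each fix, discover and execute the full test suite rather than
# re-running only the test that failed.
import sys
import unittest

suite = unittest.defaultTestLoader.discover("tests")  # all suites, not one
result = unittest.TextTestRunner(verbosity=1).run(suite)
# Fail the build if any regression is detected.
sys.exit(0 if result.wasSuccessful() else 1)
```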
Excessive failures during regression testing should be analyzed for possible root causes. A change in one piece of code that causes an unexpected failure in another area not visibly related to it is cause for concern. One cause of this phenomenon is the improper handling of global variables. When you find this type of problem, check the entire source library for more instances of the error.
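A contrived Python illustration of how one mishandled global can make an apparently unrelated function fail:

```python
# Contrived example: a change in one function silently breaks another.
tax_rate = 0.05  # global shared by several sub-systems

def apply_discount(price):
    global tax_rate
    tax_rate = 0.0        # bug: "temporarily" clobbers the shared global
    return price * 0.9

def invoice_total(price):
    return price * (1 + tax_rate)  # silently wrong after any discount

apply_discount(100.0)
print(invoice_total(100.0))  # prints 100.0, not the expected 105.0
```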
Performance, Load, and Stress Testing
These tests should be performed against benchmarks established for the system in the Project Charter or SOW. Performance testing should measure key system functions against those benchmarks. Examples would be a maximum of 1 second for every page on the site to load, a maximum of 2 seconds to log in after the userid and password are supplied, or a maximum of 5 seconds to check out an order of up to 25 items. Performance testing can be accomplished manually or with an automated test tool such as XRunner or LoadRunner.
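A minimal sketch of measuring one key function against its benchmark; the login() stub and its timing are stand-ins for the real system call.

```python
# Measure a key function against its benchmark from the Charter / SOW.
import time

def login(userid, password):
    # Stand-in for the real login round trip being measured.
    time.sleep(0.5)
    return True

start = time.perf_counter()
assert login("tester1", "secret")
elapsed = time.perf_counter() - start
# Benchmark: log in within 2 seconds of the credentials being supplied.
assert elapsed <= 2.0, f"login took {elapsed:.2f}s, benchmark is 2s"
print(f"login completed in {elapsed:.2f}s (benchmark 2.00s)")
```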
Load testing is almost impossible to perform without an automated tool to replicate the load. Load testing verifies the system’s capability to handle conditions resulting from user demand. The system may have a requirement to support a peak of 1,000 concurrently logged-in users, for instance. No one should attempt to replicate this condition manually; if capacity handling is critical, buy an automated test tool. Load testing and performance testing often go hand in hand. For example, you may be required to deliver a system which can handle 1,000 logged-in users and log the 1,000th user in within 2 seconds, or process 50 orders of 25 items simultaneously and complete each in a maximum of 5 seconds.
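A rough sketch of replicating such a load with a thread pool; a dedicated load tool distributes the load properly across machines, but the shape of the test is the same. The login() stub is a placeholder.

```python
# Simulate 1,000 logins and time the last one to finish.
import time
from concurrent.futures import ThreadPoolExecutor

def login(userid):
    time.sleep(0.1)           # stand-in for the real login round trip
    return time.perf_counter()

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=100) as pool:
    finish_times = list(pool.map(login, range(1_000)))
last_login = max(finish_times) - start
# Benchmark: handle 1,000 users and log the last one in within 2 seconds.
print(f"last of 1,000 logins completed after {last_login:.2f}s")
```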
Stress testing simply tests the system’s response to demand outside the limits set for it. Continuing the previous example, your stress test would be to log 1,000 users into the system and then observe system behavior when the 1,001st user attempts to log in. Normally the system should fail “gracefully”, that is, display an error message notifying the 1,001st user that the maximum number of users has been exceeded, rather than “crash”.
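A minimal sketch of that stress case; the limit and names are illustrative.

```python
# The 1,001st login should be refused with a clear error, not crash the system.
MAX_USERS = 1_000
logged_in = set(range(MAX_USERS))  # system already at its limit

def login(userid):
    if len(logged_in) >= MAX_USERS:
        # Graceful failure: a controlled, informative error.
        raise RuntimeError("maximum number of users exceeded, try again later")
    logged_in.add(userid)

try:
    login(1_001)
except RuntimeError as err:
    print(f"graceful failure as expected: {err}")  # test passes
else:
    print("FAIL: system accepted the 1,001st user")
```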
Performance, load, and stress testing require a platform which duplicates the production platform in terms of hardware capacity, processing capability, network composition, and so on. Performing this type of testing on a platform that has less capacity, or does not share the constraints of the production platform, will render the test results unusable. Regression, performance, load, and stress test results should all form part of the quality reports you communicate to the project’s stakeholders.
About the Author
Dave is a principal with three O Project Solutions, the vendors of AceIt©, and was the key architect responsible for the creation of the product. AceIt© has prepared Project Managers from around the world to pass their PMP® exams. You can find endorsements from some of his customers on three O’s web site (http://threeo.ca/testimonialsc48.php).