Why do we test software/system implementations? Testing is about ensuring quality and managing risk. When software or systems are delivered, the QA (Quality Assurance) role determines whether the implementation meets the stated requirements and delivers the expected quality to the user community. At SMITH, our QA focuses on both completely custom global commerce architectures and platform deployments with systems like SAP C/4HANA (formerly SAP Hybris) and Adobe Experience Manager.
There is always a risk when implementing a new system:
- Will it function? The worst end-state of any system implementation is a system that crashes or cannot meet the requirements it was built to satisfy.
- Will it perform? A system must be able to keep up with the speed of the business and should improve productivity. Many factors can impact performance – design, load, capacity, and more.
- Will it be used? Will the user community accept the new system and be able to meet their business requirements with it? A badly implemented system that does not match what users do and how they do it will be met with poor productivity and/or unusable information.
The QA process, as part of a well-planned system lifecycle, can assist in reducing these risks.
Having worked on many system implementations over my 20 years, I’ve seen both success and failure. The majority of the failures were due to not having a well-thought-out, well-implemented QA process to identify and reduce the risks. While other factors can also lead to project failure, it is the role of QA to surface these issues, working with the responsible teams, through thorough testing as early in the lifecycle as possible.
QA should be involved in any project as early as possible. Starting with a review of the requirements, QA can begin to identify potential risks before design. A review of the design specification can then surface further risks, and so on through the lifecycle. The later in a project that risks and issues are found, the more they cost to fix.
For a good overview of the ROI of testing, QualityCrush provides an excellent bit of thinking here.
To ensure good system quality, the QA team should be involved from the requirements gathering stage. At this stage, the QA team can test the requirements to make sure they are understood and testable, confirm that the business process workflow is sound, and identify any possible shortcomings before design and development begin.
A review of the design is also recommended, to verify before any code is written that the design meets the requirements. This prevents misunderstandings and, together with well-presented requirements, puts the project on the path to success.
Reviewing both the requirements and the design allows the QA team to produce a thorough Test Strategy document: a solid statement of what will be delivered and how it can be tested.
The Test Strategy document should provide the following information:
- Scope and Objectives
- Business Issues and Concerns
- Roles and Responsibilities
- Deliverables and Reporting of Results
- Testing Methods and Tools
- Defect Reporting and Tracking
- Link to Configuration Management Plan and Change Management Process
An important item coming out of this strategy is Test Case Management: how test cases will be recorded and managed throughout the process. It matters for two reasons. It gives testers the details needed to test each feature, and, as test execution is recorded, it gives the project management team metrics on testing progress. Test cases should be linked to the requirements so that each feature is tested as designed. It is also highly recommended that test cases be versioned, so that any changes made along the way are recorded and updated as part of the Change Management process.
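As a rough sketch of what requirement linkage and versioning can look like in practice, the Python snippet below models a test case that carries a requirement ID for traceability and keeps a revision history. The class, field names, and IDs are all illustrative assumptions, not any specific tool's data model:

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    """A minimal test case record: linked to a requirement and versioned."""
    case_id: str
    requirement_id: str            # traceability back to the requirement under test
    steps: list
    version: int = 1
    history: list = field(default_factory=list)

    def revise(self, new_steps):
        """Keep the old steps and bump the version, leaving a Change Management trail."""
        self.history.append((self.version, self.steps))
        self.version += 1
        self.steps = new_steps

# Hypothetical example: a requirement changes, so the test case is revised.
tc = TestCase("TC-101", "REQ-42", ["open cart", "apply promo", "verify total"])
tc.revise(["open cart", "apply promo code", "verify discounted total"])
print(tc.version)       # 2
print(len(tc.history))  # 1 prior revision is retained
```

Commercial test case management tools handle this for you; the point is that every case knows which requirement it verifies and which revision of the steps was executed.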
A big pet peeve of mine is Configuration Management. For the best quality and accuracy of testing, a good Configuration Management plan should be created and followed. It ensures testers are working in an environment with a known state, so that every passed or failed test can be traced to a specific release or configuration.
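The traceability described above can be sketched in a few lines: every recorded result carries the build and environment it ran against. This is an illustrative assumption of one way to structure it, not a prescribed schema; the identifiers are made up:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class TestResult:
    """A test outcome traceable to the exact release and environment under test."""
    case_id: str
    passed: bool
    build_id: str       # release/configuration identifier from Configuration Management
    environment: str    # which environment the run happened in
    recorded_at: str

def record_result(case_id, passed, build_id, environment="staging"):
    # Stamp the result with the build and a UTC timestamp so it can never
    # be confused with a run against a different configuration.
    return TestResult(case_id, passed, build_id, environment,
                      datetime.now(timezone.utc).isoformat())

r = record_result("TC-101", False, "release-2.4.1-rc3")
print(r.build_id)  # release-2.4.1-rc3
```

With this in place, a failure reported against `release-2.4.1-rc3` in staging is unambiguous, even after the environment has moved on to a newer build.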
With the Test Strategy completed and approved by the project team, the next step is to create the Test Plan. The Test Plan gives the testing team operational direction on exactly what to test and how to test it.
A Test Plan should include the following:
- Test Items, i.e. features to be tested and not tested
- Specifics of Testing Techniques
- Testing Suspension Criteria
- Feature Pass/Fail Criteria
- Test Environment, including environment acceptance criteria
- Tasks and Schedule
Two important outputs of the Test Plan are the Testing Suspension Criteria and the Feature Pass/Fail Criteria. The Testing Suspension Criteria define the required status of the delivery and the readiness of the implementation to be tested. In short, they state what must be true before testing can begin, ensuring all checkpoints have been met.
As for the Feature Pass/Fail Criteria, these define what constitutes a failed feature or test. They should also define the different levels of failure: which failures block any further testing, and which are major or minor. These definitions matter, because properly classifying bugs lets project management and stakeholders know how ready the project is to complete and go live.
If a project is to be successful, QA plays an important role. QA’s involvement starts at project conception and runs through go-live, ensuring that what is delivered is a quality system, ready for use, that will not be an aggravation for users.
By integrating QA more deeply into the project, earlier in the lifecycle, risk can be considerably decreased and small issues can be resolved early, while the cost to fix them is still negligible.