Testing is a nonnegotiable part of the software development cycle. But what does it entail, exactly? Let’s take a look at how we do it at Syberry, discuss some of the most popular types of testing that our team implements, and show how all these activities contribute to the success of projects and benefit our clients.
Where Do We Start?
Before testing begins, it is important to align on the key goals and scope of the planned testing with the other team members, including the business analyst, project manager, and development team. Here are some of the main talking points for that discussion:
- Goal(s) of testing, from the respective parties’ perspectives
- Types of testing required
- Distinguishing characteristics of the product and its target audience
- Types of devices that testing needs to be carried out on
- Browsers and operating systems (OS) that should be tested, and at which screen resolutions
- Requirements for various types of forms and style guides, as well as software requirements specification (SRS)
- Any required documentation on the results of testing, such as reports, checklists, test cases, etc.
By gathering as much information as possible in advance, the team becomes well prepared for the job ahead and can plan their work thoroughly.
Let’s now take a look at some of the most commonly used types of testing, which are relevant on most projects.
Load Testing
What would happen if a large number of requests hit the server all at once? That depends on how stable the product is, and this is precisely what is tested during backend testing. Using specialized tools, QA engineers simulate a scenario in which the number of users rises sharply, which leads to a sharp increase in requests sent to the server. Testers then monitor at what point the server crashes—meaning it stops responding to requests. If the critical threshold is too low, developers suggest ways of optimizing the server. The process described is called load testing.
The process of load testing involves more than just the efforts of the QA team, however. To start with, the team must establish exactly what to test and how to test it. The “what to test” part is usually covered by the development team, as they are able to establish which kinds of requests are sent to the server most frequently, as well as which requests transmit the most data and generate the highest loads. The “how to test” part usually comes from the ordering client or the product owner, as these stakeholders know the business aspect best. The team needs to know the target audience and user metrics, such as the most frequent behavior scenarios in the system, time periods of increased and decreased activity, types of activities, and the timing and nature of peak traffic. Without this input, load testing might not produce the real picture.
The results of load testing depict the size of the load the system can withstand. It is possible to emulate a scenario where the system crashes, and then study its behavior afterwards. Can it recover on its own, or is manual intervention needed? This work will help to prepare for similar situations in production.
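In practice, this kind of load simulation is usually done with dedicated tools (JMeter, Locust, and similar), but the core idea can be sketched in a few lines. The snippet below is a minimal illustration, not a production load tester: `send_request` is a hypothetical stand-in for a real HTTP call to the server under test, and the thread pool plays the role of many simultaneous users.

```python
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

def send_request(payload):
    """Hypothetical stand-in for a real HTTP call to the server under test."""
    time.sleep(0.01)  # simulate network and processing latency
    return {"status": 200, "payload": payload}

def run_load_test(num_users, requests_per_user):
    """Fire requests from many simulated users at once and summarize results."""
    started = time.perf_counter()
    results = []
    with ThreadPoolExecutor(max_workers=num_users) as pool:
        futures = [
            pool.submit(send_request, {"user": u, "seq": i})
            for u in range(num_users)
            for i in range(requests_per_user)
        ]
        for future in as_completed(futures):
            results.append(future.result())
    elapsed = time.perf_counter() - started
    ok = sum(1 for r in results if r["status"] == 200)
    return {"total": len(results), "ok": ok, "seconds": round(elapsed, 2)}
```

A real test would ramp `num_users` up step by step while watching error rates and response times, looking for the critical threshold at which the server stops responding.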
Let us take the example of a large commercial product that works with different types of contracts. One day the company released a highly anticipated new type of contract, but the following day the system ceased working completely, and their contact center was overwhelmed with the number of incoming inquiries. The reason was a miscalculation during testing: the team was testing simultaneous work with a certain number of contracts, but in reality, the load was 100 times that number. This case goes to show just how important testing—and all the planning that goes into it—is.
Functional Testing
Functional testing is designed to check the functionality of the product as defined by the software requirements specification. The task of the developer is to bring the outlined functionality to life, and the task of the QA engineer is to prepare test documentation (e.g., test cases and checklists) to test the functionality.
After developers complete their tasks, they pass the product components on to the QA engineer and mark them available for testing in the task tracker. QA engineers then inspect the components for operability and adherence to specifications. If the components pass inspection, the task is marked “done” and closed. If they do not, the bugs found are entered into the bug tracking system and the task is declined. The task is not accepted as complete until the bugs are fixed.
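To make the workflow concrete, here is a small sketch of how test cases map onto a requirement. The business rule and the function `apply_discount` are invented for illustration (they are not from any specific project): suppose the specification says that the code SAVE10 gives 10% off, all other codes leave the price unchanged, and the price must be positive. Each test case then checks one clause of the specification.

```python
def apply_discount(price, code):
    """Hypothetical business rule: SAVE10 gives 10% off, any other code
    leaves the price unchanged, and the price must be positive."""
    if price <= 0:
        raise ValueError("price must be positive")
    if code == "SAVE10":
        return round(price * 0.9, 2)
    return price

# Each test case corresponds to one clause of the requirements:
def test_valid_code():
    assert apply_discount(100.0, "SAVE10") == 90.0

def test_unknown_code():
    assert apply_discount(100.0, "NOPE") == 100.0

def test_invalid_price():
    try:
        apply_discount(0, "SAVE10")
        assert False, "expected ValueError"
    except ValueError:
        pass
```

If a test fails, the mismatch between the specification and the implementation is exactly what gets written up in the bug tracker.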
Usability Testing
Before a product is released, it’s important to conduct usability testing to reveal any weak points in the interface or the overall idea of the product. To do this, designers pass on the interactive prototype to a group of users and collect feedback—most importantly, how easy they find it to reach their goal using the product. QA engineers compose a questionnaire for the users that helps understand their experience, and the responses are used to guide any changes that may be required.
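Questionnaire responses are easier to act on when they are reduced to a single comparable number. One widely used instrument for this is the System Usability Scale (SUS): ten statements rated 1–5, with odd-numbered items phrased positively and even-numbered items negatively, combined into a 0–100 score. The sketch below implements the standard SUS formula; it is offered as one common option, not as the specific questionnaire any given team uses.

```python
def sus_score(answers):
    """Compute a System Usability Scale score from ten 1-5 responses,
    given in standard SUS question order (odd items positive, even negative)."""
    if len(answers) != 10:
        raise ValueError("SUS uses exactly 10 questions")
    total = 0
    for i, answer in enumerate(answers):
        # Odd-numbered questions (index 0, 2, ...) score (answer - 1);
        # even-numbered questions score (5 - answer).
        total += (answer - 1) if i % 2 == 0 else (5 - answer)
    return total * 2.5  # scale the 0-40 raw total to 0-100
```

A respondent who strongly agrees with every positive statement and strongly disagrees with every negative one scores a perfect 100.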
Regression Testing
This is one of the most important and labor-intensive parts of QA engineers’ work. Regression testing is similar to functional testing, but in addition to checking new functionality, it verifies that new features do not interfere with previously tested and completed components.
Regression testing also includes testing of product versions. If a product is already being used by real users, it is essential to make sure that version upgrades work, without breaking the current version.
The further along the development is, the more functionality needs to be tested, and the more time-consuming regression testing becomes.
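The mechanical core of regression testing is simple: every test written for a past feature stays in the suite and is re-run on every build. The sketch below illustrates that idea with a minimal test registry; the feature functions (`authenticate`, `total_price`) are hypothetical placeholders for previously shipped functionality.

```python
REGRESSION_SUITE = []

def regression_test(fn):
    """Register a test so it is re-run on every build,
    not only when the feature it covers is new."""
    REGRESSION_SUITE.append(fn)
    return fn

# --- hypothetical features shipped in earlier iterations ---
def authenticate(user, password):
    return (user, password) == ("alice", "secret")

def total_price(items):
    return sum(items)

@regression_test
def test_login_still_works():
    assert authenticate("alice", "secret")
    assert not authenticate("alice", "wrong")

@regression_test
def test_cart_total_still_works():
    assert total_price([10, 20]) == 30

def run_suite():
    """Run every registered test; return the names of any failures."""
    failures = []
    for test in REGRESSION_SUITE:
        try:
            test()
        except AssertionError:
            failures.append(test.__name__)
    return failures
```

Because the suite only ever grows, this is also why regression testing gets more time-consuming the further along development is, and why teams often automate it.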
Integration Testing
Integration testing is needed when the development team finishes and connects two big parts of the system, such as the admin panel and the web service. After such a substantial integration, it is imperative to be sure the two parts work together as they should.
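An integration test exercises the seam between components rather than either component in isolation. Continuing the admin-panel-plus-web-service example, the sketch below is a toy model (the classes and the shared `store` dictionary standing in for a database are invented for illustration): the admin side publishes a record, and the test asserts the customer-facing side can read it back.

```python
class AdminPanel:
    """Hypothetical admin component that publishes product records."""
    def __init__(self, store):
        self.store = store  # shared database stand-in

    def publish(self, product_id, name):
        self.store[product_id] = {"name": name, "active": True}

class WebService:
    """Hypothetical customer-facing component reading the same store."""
    def __init__(self, store):
        self.store = store

    def get_product(self, product_id):
        item = self.store.get(product_id)
        return item if item and item["active"] else None

def test_admin_to_web_integration():
    """Data published through the admin panel must be visible to customers."""
    store = {}
    AdminPanel(store).publish(42, "Coffee mug")
    assert WebService(store).get_product(42)["name"] == "Coffee mug"
    assert WebService(store).get_product(99) is None  # unpublished id
```

Unit tests for each class could pass while this test fails, which is exactly the class of defect integration testing exists to catch.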
Finally, it is important to emphasize that, as the number of new technologies grows, so does the number of potential bugs that can occur in any software product. Development without testing along the way can be compared to painting while blindfolded, as different elements of the project may not behave according to the initial expectations. If you wait until development is “complete” to find that out — or worse, let users find out for you after the software is live — you’re creating much more trouble and expense for yourself than if you conduct this complex and diverse series of tests at key intervals throughout the development process.