8 November 2021

Software Development
Quality Assurance

What is software testing and why is it important for the SDLC

16-minute read

Developing any application is a long-term process engaging the whole team. After months of intensive work, everyone eagerly awaits the results—both your team and your client. You’re planning to release the product by a deadline—that’s obvious. Keeping errors to the bare minimum throughout the application development process helps you meet that deadline. There will definitely be errors—we are all human after all, and mistakes are inevitable—but effective use of software testing helps to limit such unpleasant surprises.

In this article, you will learn about the testing process in software engineering, popular types of tests, different approaches, and how testing influences the software development life cycle (SDLC). 

Software testing—what is it? 

Software testing is a part of the software development life cycle (SDLC). Its main aim is to compare results with expectations. It is also a reliable source of information on code quality and the product itself. 

What does “quality” really mean in this context? According to ISO 9000, “Quality is the totality of features and characteristics of a product or service that bear on its ability to satisfy given needs.” 

Below are a few tips that make software testing high quality and efficient. 

How to ensure testing process clarity? 

Clarity is a principle worth remembering when you’re planning to include testing in your software development life cycle:

  1. Testing should be as straightforward and understandable as possible for other teams involved in the software development process. For example, you can consider using Gherkin, a domain-specific language that makes project documentation understandable for business stakeholders (a short sketch follows this list).
  2. Prepared reports should be readable for other participants in the software testing process. 
  3. Testers have to be sure what, why, and how they want to test. What’s more, they should also be able to explain to their teammates why a particular test is necessary, and what its goal is. 
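
For illustration, here is a minimal sketch of what a Gherkin-backed test can look like, paired with Python step definitions using the behave library. The login scenario, step wording, and values are hypothetical examples, not taken from any particular project.

    # Hypothetical feature file (login.feature), written in Gherkin:
    #
    #   Scenario: Successful login
    #     Given a registered user "alice"
    #     When she logs in with a valid password
    #     Then she sees her dashboard

    # Python step definitions backing the scenario, using behave:
    from behave import given, when, then

    @given('a registered user "{username}"')
    def step_registered_user(context, username):
        # In a real project this would create or look up a test user.
        context.username = username

    @when('she logs in with a valid password')
    def step_log_in(context):
        # Here the test would drive the real login flow; we fake the outcome.
        context.page_title = "Dashboard" if context.username == "alice" else "Login"

    @then('she sees her dashboard')
    def step_sees_dashboard(context):
        assert context.page_title == "Dashboard"

Because the scenario reads as plain English, business stakeholders can review what is being tested without reading the Python code underneath.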

The “software” in the acronym SDLC would suggest that only software developers are engaged in the process. But that’s not true—if you want to make the testing process efficient, different parties have to be involved. Which ones? Keep reading to get the answer. 

Who is involved in the software testing process? 

Depending on the software and the process of development, the answer will vary. However, common industry standards are available. In this article, the adopted standard comes from the ISTQB (International Software Testing Qualifications Board). But keep in mind that this is not the only solution and might not suit every project.

What are the basic roles for software testing?

The key roles are test manager, test automation engineer, tester, and test analyst, all of which are necessary for successful testing. Test managers are responsible for supervising the creation of a test strategy, testing activities, resources, and evaluation of the tested project. Test automation engineers are responsible for the design, technical development, implementation, and maintenance of automated test architecture. Testers carry out the actual activity of testing, while test analysts analyze application requirements and design documents, develop test plans and a comprehensive testing structure, and analyze test results.

Sometimes, especially in relation to smaller projects, some or all of the above-mentioned roles are covered by a single person in the company. 

Product owners also have a voice in the testing process. Their fundamental role is to manage and monitor every stage of the development process to ensure the project goes in the right direction, including defining the acceptance criteria. This makes the product owner an influential stakeholder in the process of testing. The success of the software testing process often depends on the quality of collaboration between the testing team and the PO. 

The type of test used has to be adjusted to the application to provide the team with reliable results. But what are the different types of tests, how are they different from each other, and what are their specific applications? 

Different test types—an overview 

One type of test is not enough. Each test has a different purpose and use. Tests can be used in combination to complement each other, and they tend to be more effective when used together. The variety of test types can initially be overwhelming, but it’s simpler than it looks.

To better understand the purpose of particular test types, let's look at the software testing process from the test managers' and testers' perspectives—that is, what we can test, how we can test it, and which elements of the application we can cover.

Which software elements could you test?

As mentioned above, there is no single test to meet all needs. One way to categorize tests is by the software aspect or element that the test is checking, as follows: 

  • Unit tests—a particular section of code is isolated and tested (see the sketch after this list).
  • Module tests—this testing focuses on checking modules, program procedures, classes, subroutines, etc. (It’s worth bearing in mind that unit tests are sometimes categorized as a type of module test.) 
  • Integration tests—as a next step, particular modules are now checked for how they interact together. 
  • System tests—following integration testing, the whole system is now tested. 
  • End-to-end tests—end-to-end testing is effectively a simulation of actual usage of the application in a production-like environment, with all external dependencies. 
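
To make the first level concrete, below is a minimal pytest-style unit test sketch. The calculate_discount function is a hypothetical example introduced only for illustration, not a reference implementation.

    import pytest

    def calculate_discount(price: float, percent: float) -> float:
        """Return the price after applying a percentage discount."""
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return price * (1 - percent / 100)

    def test_discount_is_applied():
        # A single, isolated piece of code is exercised with known inputs.
        assert calculate_discount(200.0, 25.0) == 150.0

    def test_invalid_percentage_is_rejected():
        with pytest.raises(ValueError):
            calculate_discount(100.0, 150.0)

Integration, system, and end-to-end tests follow the same idea of comparing actual results with expectations, only at a larger scale and with more of the real environment in place.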

You can see from this quick introduction that there is a natural sequence to the different types of test, from a focus on small pieces of code to checking how the whole application works from a user perspective. 

None of these test types is used in isolation; they interact and complement each other. The possibilities for mixing test types are almost unlimited.

Which application aspects can you test? Functional and non-functional tests 

Besides testing the software to check the system as a whole, it is also necessary to check if individual planned functionalities operate as expected. However, the functionalities aren’t the only thing on the agenda—checking how the application will perform under specific conditions is another significant part of the testing process. You can choose from test types such as: 

  • Functional tests—these tests aim to check the application's functionality and features; whether it works properly and produces the correct outputs. 

  • Non-functional tests—used when we want to test everything besides the functionality: 

    • Performance tests check the speed and reliability of the system, and how it performs under a specific workload. Basically, does it stay responsive and stable? (A small sketch follows this list.)
    • Long run/stability tests focus on the quality and behavior of the software, checking whether the application performs appropriately over a more extended period. 
    • Stress testing, or torture testing as some prefer to call it, puts the software under extreme conditions, pushing it beyond its regular and predictable capabilities to check if and when the software crashes. 
    • Usability testing brings the UX to the fore. This kind of test focuses on the user perspective and checks if your software is straightforward, usable, and user-friendly. 
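
As a rough illustration of how a functional expectation and a non-functional one can sit side by side, here is a small sketch; the search_catalog function and the 200 ms budget are hypothetical.

    import time

    def search_catalog(query: str) -> list:
        # Placeholder for the real search implementation under test.
        catalog = ["apple", "apricot", "banana"]
        return [item for item in catalog if query in item]

    def test_search_is_correct_and_fast_enough():
        start = time.perf_counter()
        results = search_catalog("ap")
        elapsed = time.perf_counter() - start
        assert results == ["apple", "apricot"]  # functional: the output is correct
        assert elapsed < 0.2                    # non-functional: stays within the latency budget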

The UX is a critical part of any application and an essential part of any software testing strategy. For more information, check out our article on the UX perspective and its function. 

How to round out the testing process?

Despite differences between companies and applications, one thing stays the same: every test aims to identify bugs and therefore ensure the whole SDLC process runs more efficiently. Below, you can find some more common test types, which complement those mentioned above:

  • Regression tests—checking the impact of any modifications or added code to the existing functionality. 
  • Sanity tests (a subset of regression testing)—this kind of testing focuses on changed pieces of code and whether they are working properly. The main aim is to check if the changed functionality is operating correctly. 
  • Acceptance tests—often performed by the customer or end users. The application is tested in different usage scenarios; in acceptance testing, both functionality and UX matter.
  • Smoke testing (a subset of acceptance testing) verifies whether the crucial functionalities of the application work; see the sketch after this list.
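
A smoke test can be as small as a single request proving that the application is up at all. The sketch below assumes a hypothetical staging URL and a /health endpoint; neither comes from any specific project.

    import requests

    BASE_URL = "https://staging.example.com"  # hypothetical environment

    def test_application_responds():
        # Smoke test: only the most crucial entry point is checked.
        response = requests.get(f"{BASE_URL}/health", timeout=5)
        assert response.status_code == 200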

Always keep in mind that any effective testing strategy requires the use of multiple tests. One more aspect that influences the whole testing process is the choice between proactive and reactive approaches. 

Types of testing approach

The testing approach or strategy determines how the testing process is set up and carried out during software development. The most basic division of strategies is to differentiate between proactive and reactive. The proactive approach focuses on the early implementation of testing to catch and fix potential bugs without delay. A reactive testing approach begins when the design and coding stages are finished. 

Which other methods of testing could you consider? 

  • Black Box testing—this software testing method checks the application's functionality, but it doesn't inspect its internal structures (see the contrast sketched after this list).
  • White Box testing—this testing technique assumes the tester is analyzing the internal state of the system and is focused on testing the software code itself.
  • Manual testing—tests are executed manually, without any automated methods. Although this method is more time-consuming, it’s also necessary, given that it’s almost impossible to achieve full automation in testing. 
  • Automated testing—dedicated testing software tools enable test execution and also aid manual testing. See a full comparison of manual testing vs automated testing to better understand these two methods.
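
The difference between the first two methods is easiest to see side by side. In the sketch below, apply_shipping is a hypothetical function: the black-box test looks only at inputs and outputs, while the white-box test deliberately targets a branch the tester found by reading the code.

    def apply_shipping(total: float) -> float:
        if total >= 100:      # free-shipping threshold
            return total
        return total + 5.0    # flat shipping fee (hypothetical values)

    def test_black_box_typical_order():
        # Only the observable behaviour matters; internals are not inspected.
        assert apply_shipping(50.0) == 55.0

    def test_white_box_free_shipping_boundary():
        # Chosen because the tester has read the code and knows 100 is a boundary.
        assert apply_shipping(100.0) == 100.0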

There are many less common types of test, such as mutation testing, fuzz testing, contract testing, and more. The examples in the lists above are some of the most popular types of test, but your project needs will indicate the right tests and testing approach for each application.

To make the tests used in a project easier to manage, you can divide them into smaller groups and place each group at the appropriate stage of the product testing pipeline.

One thing remains: the test coverage. What does this term mean, and why is it significant? 

Why is test coverage important? 

Test coverage tells us to what extent the tests cover the application code and whether the required tests have been executed. 100% test coverage is a myth—it is not possible to test all features and aspects of an application.
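
As an illustration, line coverage can be measured with a tool such as coverage.py. In the sketch below, run_tests is a hypothetical stand-in for however your project actually executes its suite; in day-to-day use the same measurement is usually taken from the command line (coverage run followed by coverage report).

    import coverage

    def run_tests():
        # Hypothetical stand-in: in practice this would invoke the real test
        # suite (for example via pytest); here it just exercises some code.
        assert sum(range(5)) == 10

    cov = coverage.Coverage()
    cov.start()
    run_tests()
    cov.stop()
    cov.save()
    cov.report()  # prints the percentage of executed lines per file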

Full coverage is also impractical: nowadays, software is too complex and multifaceted, and attempting to test every aspect of an application is inefficient and ill-suited to a market that requires fast and regular updates. The answer is to choose wisely what needs to be tested and what can be skipped or checked in a different way. How do you select the right testing areas? This is where ROI (return on investment) comes in. We choose to test the aspects most directly connected to the application’s ROI (i.e. the elements that are most profitable) and carry out those tests first. ROI can also be used to decide which tests should be run manually and which should be automated; the decision in this case is based on a cost comparison between executing a manual test and creating and maintaining the automated test. Return on investment can also serve as a criterion for selecting only the most valuable and significant features for testing with end users.

A recent trend is to use risk analysis to prioritize test cases that cover the most likely and severe risks—otherwise known as risk-based testing. This technique carries advantages, including reduction of risk probability, lessened impact of negative risks, increased customer focus, and the opportunity to make better decisions for the project as a whole, based on risk. You should also bear in mind that the number of test cases doesn't indicate test quality or coverage. More does not always mean better—the number of tests is not a suitable metric to evaluate testing.

Conclusion 

If you want to provide your clients with a top-notch software product, you should consider software testing as a standard element of your development process. The right testing strategy saves both time and money. By using the right combination of tests and test types, you can create a customized testing approach to fit each specific project and its needs.

Stanisław Madaliński-Piętka

Senior Software Engineer