Glossary of Software Testing Terms: A to Z

This software testing glossary aims to provide a comprehensive overview of the essential definitions used in software testing. It breaks down technical terms into plain language and serves as a handy reference for finding a quick explanation.
The glossary of software testing terms is valuable for testing teams, developers, quality assurance (QA) experts, analysts, and project managers.
Below, you will find terms and their definitions sorted alphabetically for better and quicker access to the needed information.
A to Z Terms
A
- A/B testing: Also known as split testing, it relates to testing two variants (A and B) and comparing the results with the set requirements. The winner is the variant that better meets the set metrics.
- Acceptance testing: A form of product quality validation that relies on the end user's judgment on whether to accept the result. For instance, a bank develops new software for its employees, who test it and decide whether it meets their needs.
- Accessibility testing: Verifies that the product is usable by people with disabilities, for example, that it works with screen readers and keyboard-only navigation.
- Ad hoc testing: Informal testing performed without any preliminary criteria or documented plan.
- Agile testing: An approach used in agile software development where testing and development start together and complement each other throughout the development cycle.
- Alpha testing: Testing performed early, typically by internal teams, before the product is released to external users; it precedes beta testing.
- API testing: The procedure verifies if two systems properly communicate with each other through an application programming interface (API) without data loss, invalid inputs, or errors.
- Automation testing: The usage of automation tools that run pre-scripted tests. A practice of reducing human error and increasing efficiency and test coverage.
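As a minimal sketch of the automation idea above, the pre-scripted checks below exercise a hypothetical `apply_discount` function; a runner such as pytest would collect and execute functions named `test_*` automatically, with no human input per run:

```python
# Hypothetical unit under test -- stands in for real application code.
def apply_discount(price, percent):
    """Return the price reduced by the given percentage, rounded to cents."""
    return round(price * (1 - percent / 100), 2)

# Pre-scripted checks: a test runner executes these the same way every time,
# which is what reduces human error and raises coverage.
def test_full_price():
    assert apply_discount(100.0, 0) == 100.0

def test_half_price():
    assert apply_discount(100.0, 50) == 50.0
```

The same two checks run identically on every commit, which is the core benefit the definition describes.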
B
- Back-to-back testing: Also referred to as comparison testing, this technique is similar to A/B testing but compares several variants against the set requirements.
- Baseline testing: The process of validation of product performance compared to its previous state or a set criterion.
- Beta testing: This type of acceptance testing is conducted in the final stage of development, before the product release. An example is a newly created social media platform sent to a group of real users, who examine the beta version for crashes and errors and report their findings to QA teams.
- Big data testing: This process refers to evaluating big data systems applied in healthcare, finance, or retail for data reliability and processing.
- Big Bang testing: An integration approach in which all units are combined and tested together at once rather than incrementally. As a real example, an engineer tests the accounting, customer support, and billing modules of a banking system as a whole project without first testing each part on its own.
- Black box testing: Testing functional and non-functional platform parts without considering the internal code.
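Black box testing can be illustrated with a tiny sketch: the checks below treat a hypothetical `is_valid_username` validator purely through its inputs and outputs, never referring to its internal code (the 3-12 lowercase-letters-or-digits rule is an assumed contract for the example):

```python
import re

# Hypothetical validator. From the black-box perspective, only its
# published contract matters: 3-12 lowercase letters or digits.
def is_valid_username(name):
    return bool(re.fullmatch(r"[a-z0-9]{3,12}", name))

# Black-box checks: inputs and expected outputs only.
assert is_valid_username("alice42") is True
assert is_valid_username("ab") is False       # too short
assert is_valid_username("ALICE99") is False  # uppercase not allowed
```

The tests would stay valid even if the regex were replaced by hand-written loops, since they never inspect the implementation.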
C
- Canary testing: A method of testing a new feature or software update on a small group of users first. For instance, when multiple changes are rolled out, only a limited pool of users experiences them initially, so any bugs have limited impact.
- CAST: Computer-Aided Software Testing is the approach to using tools, frameworks, and techniques to automate testing procedures, reducing time and money.
- Code review: A synonym for peer review, this ensures code quality throughout the development and maintenance phases.
- Code coverage: A metric determining what part of the code is being tested. It is used in white box testing to analyze the internal code base.
- Compatibility testing: A method of confirming software compatibility with other devices, browsers, operating systems, and networks.
- Component testing: Testing different software modules separately from each other. It is usually conducted by developers to validate each unit's performance.
- Concurrency testing: A technique of examining platform stability and reliability when multiple people use it.
- Content testing: A form of content-checking procedure where content is tested for accuracy, accessibility, consistency, and localization.
- Cross-browser testing: Verifying that a website performs correctly when run on different browsers.
- Cucumber testing: This method is used in behavior-driven development (BDD) and refers to writing test cases in plain English for better understanding among developers, testers, and stakeholders.
D
- Database testing: Testing of databases to ensure data accuracy, integrity, and security. This approach tests backend elements such as schemes, triggers, tables, models, and procedures.
- Data-driven testing: A software testing technique where test cases are driven by data held in tables. For example, each row supplies an input value and the expected output the system should produce for it.
- Debugging: The process of locating and fixing the errors.
- Defect: A flaw in the software that prevents it from performing as expected.
- Dependency testing: A testing method that verifies smooth interaction between different system modules and components. It aims to validate a solid ground for integrating external dependencies like third-party services.
- Distributed testing: Often used for large systems, distributed software testing is responsible for testing a platform on different machines. Testers can imitate diverse environments and analyze software performance under various configurations.
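The data-driven approach described above can be sketched as a table of input/expected-output rows driven through one generic check; the temperature converter below is a hypothetical unit under test:

```python
# Hypothetical function under test.
def celsius_to_fahrenheit(c):
    return c * 9 / 5 + 32

# The test "table": each row pairs an input with its expected output.
TEST_TABLE = [
    # (input_celsius, expected_fahrenheit)
    (0, 32.0),
    (100, 212.0),
    (-40, -40.0),
]

# One generic test loops over the whole table, so adding a case
# means adding a row, not writing new test code.
def run_table(table):
    for celsius, expected in table:
        assert celsius_to_fahrenheit(celsius) == expected

run_table(TEST_TABLE)
```

In practice the table often lives in a CSV or spreadsheet maintained by non-programmers, which is a key selling point of the technique.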
E
- End-to-end testing: A process of verification of software performance from the beginning to the end of the workflow. For instance, from entering the web application to processing payment and successfully ordering a product.
- Emulator: Software or hardware that mimics the environment of another system, letting a host machine run programs built for the emulated (guest) system.
- Exhaustive testing: A technique that exercises all possible inputs and use cases to verify the system will not fail; in practice it is rarely feasible for non-trivial software.
- Exploratory testing: A flexible approach to testing that aims to discover bugs without any pre-written test cases or a strictly defined plan. It is a part of a simultaneous learning and testing concept.
F
- Failure testing: Testing the application under intensive overload to verify it withstands severe conditions and continues to perform as expected.
- Field testing: Testing a product in a real “field” where it will be used rather than a testing environment.
- Flow testing: Usually applied in software development, it ensures smooth data transfer and processing.
- Front-end testing: A procedure that aims to examine the user interface (UI) and how it communicates with other software parts.
- Functional testing: Being a type of black-box testing, functional testing verifies if a system provides correct output. However, it doesn’t consider digging into the source code. If the output is accurate, the testing outcome is considered successful.
- Future-proof testing: Assessing whether the application will remain compatible with future advances. It requires researching technology trends and anticipating updates that are likely to happen.
G
- Generation testing: Comparing and testing two versions of the product, usually the old and the updated one. It ensures that implemented changes didn’t influence previously existing functionality.
- Glass box testing: Also referred to as white box testing or clear box testing, it analyzes a system’s logic and structure. For example, for a calculator, glass box testing would verify the addition logic as well as edge cases such as very large numbers or null input.
- Gray box testing: Derived from black box testing and white box testing, a software testing technique where a tester is given partial access to the internal knowledge base while examining the system from the end user’s point of view.
- GUI testing: Graphical user interface testing is the process of user interface analysis in terms of its functionality and ability to provide a smooth user experience.
H
- Happy path testing: Testing the most common paths users take. The procedure is conducted under ideal conditions to validate that the application performs as expected.
- Hardware testing: The method refers to testing physical product components like central processing unit (CPU), memory, or storage. For instance, before new smartphones are released, testers check if their battery can stand intensive usage.
- Headless browser testing: A process of testing web browsers without a user interface. It is like examining a background that has no visual representation.
I
- Integration testing: When unit testing is completed, QA specialists go with integration testing to check how separate modules communicate.
- Incremental testing: In contrast to Big Bang testing, incremental testing checks modules as they are added one by one rather than as a whole mechanism.
- Inspection: A process of documentation peer review by highly trained specialists. It consists of product and process improvement steps.
- Interface testing: Testing of communication between interfaces: APIs, web services, etc.
- Isolation testing: Testing separate modules as they are without considering the surroundings.
J
- Jest testing: A JavaScript testing framework created by Facebook (Meta), widely used for testing JavaScript and React projects.
- JUnit testing: A Java testing framework suited for creating automated unit tests.
- JIT testing: Just-in-time testing is an approach in Agile and Lean methodologies that dictates applying testing when it is needed rather than as a pre-planned activity.
K
- Keyword-driven testing: An approach to script writing in automated functional testing where tests are composed of keywords that represent user actions. Keyword-driven testing doesn’t require advanced programming knowledge from testers.
- KPI: Key performance indicators are metrics used to determine successful task completion or worker performance.
- Knockout testing: The concept of knockout testing is to turn off some functions and observe how the system works without them.
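The keyword-driven idea can be sketched as a tiny interpreter that maps plain keywords to actions, so a test reads as a list of keywords rather than code; all names below are illustrative, not a real framework:

```python
# Hypothetical application state the keywords act on.
state = {"logged_in": False, "cart": []}

def login(user):
    state["logged_in"] = True

def add_to_cart(item):
    state["cart"].append(item)

# The keyword table: each keyword names one user action.
KEYWORDS = {"Login": login, "AddToCart": add_to_cart}

# The interpreter: walks a test written as (keyword, argument) steps.
def run_steps(steps):
    for keyword, arg in steps:
        KEYWORDS[keyword](arg)

# A test case expressed as keywords rather than code.
run_steps([("Login", "alice"), ("AddToCart", "book")])
assert state["logged_in"] and state["cart"] == ["book"]
```

Real keyword-driven tools (e.g., Robot Framework) follow this same pattern at scale, with the keyword table maintained separately from the test scripts.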
L
- Load testing: A process of evaluating system capability to handle multiple requests simultaneously.
- Localization testing: Testing an application’s adaptation to a specific region’s language, culture, and formatting requirements.
- Logic-coverage testing: Similar to path testing, this white-box technique verifies that test cases exercise the logical conditions and decision paths in the code.
- Loop testing: Testing all types of loops (simple, nested, concatenated, and unstructured) within the application.
- Limit testing: A process of confirmation of the system’s capability to handle maximum and minimum existing inputs.
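Limit testing can be sketched by probing the boundaries of an assumed valid range, including the values just outside them; the age range below is an assumption for illustration:

```python
# Assumed supported input range for the example.
MIN_AGE, MAX_AGE = 0, 130

# Hypothetical validator under test.
def is_valid_age(age):
    return MIN_AGE <= age <= MAX_AGE

# Check both limits, and the first values beyond each limit,
# since off-by-one errors cluster exactly at these boundaries.
assert is_valid_age(MIN_AGE) is True
assert is_valid_age(MAX_AGE) is True
assert is_valid_age(MIN_AGE - 1) is False
assert is_valid_age(MAX_AGE + 1) is False
```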
M
- Maintainability: A parameter identifying the system’s ability to be modified and updated.
- Manual testing: Testing performed by human testers without automated tools.
- Monkey testing: An ad-hoc testing method by which testers make random inputs without expecting specific results. The purpose is to see if the system crashes unexpectedly.
- Mutation testing: A process of purposely adding bugs to the system to check the test case quality: whether they recognize errors correctly.
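The monkey testing idea above can be sketched by throwing random input at a hypothetical parser and checking only that nothing crashes; there is no expected output, just a no-exception pass criterion:

```python
import random
import string

# Hypothetical command parser -- the unit we bombard with junk input.
def parse_command(text):
    parts = text.split()
    return parts[0] if parts else None

random.seed(0)  # reproducible randomness for the example
for _ in range(1000):
    junk = "".join(
        random.choices(string.printable, k=random.randint(0, 30))
    )
    # Pass criterion: no unexpected exception, whatever the input.
    parse_command(junk)
```

Fuzzing tools industrialize this idea by generating inputs that are biased toward code paths likely to crash.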
N
- Negative testing: Testing software by intentionally entering invalid data to verify it handles the input gracefully. For instance, entering a six-digit password when a nine-digit one is required.
- Network testing: Examining network performance in terms of security, reliability, and efficiency.
- Non-functional testing: The method doesn’t target the software’s functional behavior; instead, it tests aspects such as performance, efficiency, scalability, and usability.
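Negative testing can be sketched as follows: a deliberately invalid input should be rejected with an error rather than silently accepted. The nine-character password rule below is an assumption for illustration:

```python
# Hypothetical function under test: enforces a minimum password length.
def set_password(password):
    if len(password) < 9:
        raise ValueError("password must be at least 9 characters")
    return True

# Negative test: the check passes only if the invalid input
# raises the expected error.
try:
    set_password("short6")
except ValueError:
    rejected = True
else:
    rejected = False

assert rejected is True
# Companion positive case: valid input is accepted.
assert set_password("longenough1") is True
```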
O
- Operational testing: Testing conducted in an operational environment by the end user.
- Output comparison testing: Checking system outputs for conformity with expected results or with the outputs of another system.
- Open box testing: Similar to white box testing, where testers have full access to internal data.
P
- Peer testing: An approach to testing where several parties are included, typically a tester and a developer or business analyst.
- Performance testing: Evaluating system stability and quality under different workloads.
- Penetration testing: Also called ethical hacking, it is the process of simulating attacks on software to uncover vulnerabilities and security weak points.
Q
- QA: Quality assurance is a set of practices that ensures that the customer receives the best product.
- Quantitative testing: A type of testing that measures results against metrics such as response time and defect density.
R
- Regression testing: Running a test to determine if the application performs as intended after implemented changes.
- Release testing: A broad-scope testing activity that aims to verify the product is ready to meet its end users.
- Recovery testing: Testing the system’s ability to recover from crashes and failures.
S
- Sanity testing: A narrow part of regression testing that determines whether code changes introduce new issues. Sanity testing has a limited scope and acts as a gate: if errors are spotted, further testing is halted until they are fixed.
- Scalability testing: Testing software’s ability to scale up or scale down.
- Security testing: A process of identifying system vulnerabilities that can lead to data, revenue, and reputational damages.
T
- Test case: A specific set of conditions or variables determining whether the system function works as intended.
- Test suite: A set of test cases that intends to check a certain application functionality.
- Test plan: A document that outlines all important testing details like objectives, deliverables, instruments, schedule, etc.
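The test case and test suite terms above map directly onto Python's standard `unittest` module: a `TestCase` groups checks for one piece of functionality, and a `TestSuite` bundles cases for a run. The `Calculator` class is a hypothetical unit under test:

```python
import unittest

# Hypothetical unit under test.
class Calculator:
    def add(self, a, b):
        return a + b

# A test case: a set of conditions verifying one function works as intended.
class CalculatorTests(unittest.TestCase):
    def test_add_integers(self):
        self.assertEqual(Calculator().add(2, 2), 4)

    def test_add_negatives(self):
        self.assertEqual(Calculator().add(-2, -3), -5)

# A test suite: an explicit bundle of test cases checking one
# area of application functionality.
suite = unittest.TestSuite()
suite.addTest(CalculatorTests("test_add_integers"))
suite.addTest(CalculatorTests("test_add_negatives"))
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

In day-to-day work, suites are usually assembled automatically by test discovery rather than by hand, but the explicit form shows the relationship between the two terms.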
U
- Usability testing: A procedure of testing a platform for its user-friendliness, intuitiveness, and easy navigation.
- UAT: User acceptance testing relates to system validation by the end user.
V
- Visual testing: Ensuring the interface is correctly presented to users across the website and application.
- Vulnerability testing: Analyzing system reliability in terms of security and spotting its weak points that can lead to data loss, etc.
W
- White box testing: Examining the platform’s performance by knowing internal code structures and observing code operations.
- Worst case testing: Putting an application in extreme conditions to determine its limits.
Conclusion

Knowing QA terms is essential to becoming a professional in software testing. To deepen your knowledge, you can also read other articles on our blog or contact us directly for information on testing your specific project!



