Black-box Testing
1. Functional testing based on requirements, with no knowledge of the internal program structure or data. Also known as closed-box testing.
2. Black-box testing indicates whether a program meets its required specifications by spotting faults of omission, that is, places where the specification is not fulfilled.
3. Black-box testing relies on the specification of the system or the component under test to derive test cases. The system is a black box whose behavior can be determined only by studying its inputs and the related outputs.
4. Black-box testing is functional testing, not based on any knowledge of internal software design or code. Test cases are based on requirements and functionality.
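As a minimal sketch of this idea, the test cases below are derived solely from a stated requirement, never from the implementation. The `shipping_cost` function and its requirement are hypothetical, invented for illustration:

```python
import unittest

def shipping_cost(order_total):
    """Hypothetical implementation under test; the tests below were
    derived from the requirement alone, not from this code."""
    if order_total < 0:
        raise ValueError("order total must be non-negative")
    return 0.0 if order_total >= 50.0 else 5.0

class BlackBoxShippingTests(unittest.TestCase):
    # Assumed requirement: orders of $50 or more ship free; otherwise
    # a flat $5 fee applies; negative totals are rejected.
    def test_free_shipping_at_threshold(self):
        self.assertEqual(shipping_cost(50.0), 0.0)

    def test_flat_fee_below_threshold(self):
        self.assertEqual(shipping_cost(49.99), 5.0)

    def test_negative_total_rejected(self):
        with self.assertRaises(ValueError):
            shipping_cost(-1.0)

if __name__ == "__main__":
    unittest.main()
```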
Clear box testing?
Clear box testing is the same as white box testing. It is a testing approach that examines the application's program structure and derives test cases from the application's program logic.
Another term for white-box testing. Structural testing is sometimes referred to as clear-box rather than white-box testing, since a literal white box would be opaque and would not permit visibility into the code. This approach is also known as glass-box or open-box testing.
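A minimal sketch of the clear-box approach, using a hypothetical `classify_triangle` function: one test case is derived for each branch in the program logic, so every path through the code is exercised:

```python
import unittest

def classify_triangle(a, b, c):
    """Hypothetical function whose internal branches drive test selection."""
    if a <= 0 or b <= 0 or c <= 0:
        return "invalid"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

class ClearBoxTriangleTests(unittest.TestCase):
    # One test per branch in the code above, chosen by reading the logic.
    def test_invalid_branch(self):
        self.assertEqual(classify_triangle(0, 1, 1), "invalid")

    def test_equilateral_branch(self):
        self.assertEqual(classify_triangle(2, 2, 2), "equilateral")

    def test_isosceles_branch(self):
        self.assertEqual(classify_triangle(2, 2, 3), "isosceles")

    def test_scalene_branch(self):
        self.assertEqual(classify_triangle(2, 3, 4), "scalene")

if __name__ == "__main__":
    unittest.main()
```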
Glass box testing?
Glass box testing is the same as white box testing. It's a testing approach that examines the application's program structure, and derives test cases from the application's program logic.
Open box testing?
Open box testing is the same as white box testing. It's a testing approach that examines the application's program structure, and derives test cases from the application's program logic.
Grey box testing?
Grey box testing is a software testing technique that combines black box testing and white box testing. Grey box testing is not black box testing, because the tester does know some of the internal workings of the software under test. In grey box testing, the tester applies a limited number of test cases to the internal workings of the software under test. For the remainder, the tester takes a black box approach, applying inputs to the software under test and observing the outputs.
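A minimal sketch of the grey box idea, assuming a hypothetical `PriceService` whose only internal detail known to the tester is that it memoizes lookups in a `_cache` dictionary:

```python
import unittest

class PriceService:
    """Hypothetical component: the tester knows it memoizes lookups
    in `_cache`, but treats the lookup logic itself as opaque."""
    def __init__(self):
        self._cache = {}

    def price(self, sku):
        if sku not in self._cache:
            self._cache[sku] = 9.99  # stands in for an expensive lookup
        return self._cache[sku]

class GreyBoxPriceTests(unittest.TestCase):
    def test_output_only(self):
        # Black-box part: apply an input and observe the output.
        self.assertEqual(PriceService().price("ABC"), 9.99)

    def test_cache_is_populated(self):
        # White-box part: exploit limited knowledge of the internals.
        svc = PriceService()
        svc.price("ABC")
        self.assertIn("ABC", svc._cache)

if __name__ == "__main__":
    unittest.main()
```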
Forced-Error Test
The forced-error test (FET) consists of negative test cases that are designed to force a program into error conditions. A list of all error messages that the program issues should be generated; this list is used as a baseline for developing test cases, and an attempt is made to generate each error message in the list. Obviously, tests to validate error-handling schemes cannot be performed until all the error handling and error messages have been coded. However, FETs should be thought through as early as possible. Sometimes the error messages are not available. The error cases can still be considered by walking through the program and deciding how it might fail in a given user interface, such as a dialog, or in the course of executing a given task or printing a given report. A test case should then be created for each condition to determine what error message is generated.
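A minimal sketch of a forced-error test, assuming a hypothetical `open_report` function and an invented baseline list of its error messages; each negative test case attempts to force one message from the list:

```python
import unittest

# Hypothetical baseline: every error message the program can issue.
ERROR_MESSAGES = [
    "filename must not be empty",
    "unsupported file extension",
]

def open_report(filename):
    """Hypothetical function whose error handling is under test."""
    if not filename:
        raise ValueError("filename must not be empty")
    if not filename.endswith(".csv"):
        raise ValueError("unsupported file extension")
    return f"opened {filename}"

class ForcedErrorTests(unittest.TestCase):
    # One negative test case per message in the baseline list.
    def test_empty_filename_message(self):
        with self.assertRaises(ValueError) as ctx:
            open_report("")
        self.assertEqual(str(ctx.exception), ERROR_MESSAGES[0])

    def test_bad_extension_message(self):
        with self.assertRaises(ValueError) as ctx:
            open_report("report.txt")
        self.assertEqual(str(ctx.exception), ERROR_MESSAGES[1])

if __name__ == "__main__":
    unittest.main()
```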
Real-world User-level Test
These tests simulate the actions customers may take with a program. Real-world user-level testing often detects errors that are otherwise missed by formal test types.
Exploratory Test
Exploratory tests do not involve a test plan, checklist, or assigned tasks. The strategy is to use past testing experience to make educated guesses about places and functionality that may be problematic, and then focus testing on those areas. Exploratory testing can be scheduled, or it can be reserved for unforeseen downtime that presents itself during the testing process.
Interface Testing
Testing conducted to evaluate whether systems or components pass data and control correctly to one another. Contrast with unit testing and system testing.
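A minimal sketch of an interface test, using a hypothetical parser and formatter: the test exercises the hand-off between the two components rather than either component alone:

```python
import unittest

def parse_record(line):
    """Hypothetical producer: turns a CSV line into a dict."""
    name, qty = line.split(",")
    return {"name": name.strip(), "qty": int(qty)}

def format_record(record):
    """Hypothetical consumer: expects the dict shape produced above."""
    return f"{record['name']} x{record['qty']}"

class InterfaceTests(unittest.TestCase):
    def test_parser_output_feeds_formatter(self):
        # Verify data passes correctly from one component to the other.
        record = parse_record("widget, 3")
        self.assertEqual(format_record(record), "widget x3")

if __name__ == "__main__":
    unittest.main()
```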
Mutation Testing
A testing methodology in which two or more program mutations are executed using the same test cases to evaluate the ability of the test cases to detect differences in the mutations.
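A minimal sketch of this idea in code, using a hypothetical `is_adult` function and a hand-written mutant (in practice, mutation tools generate mutants automatically). The same test cases are run against both versions; a suite that covers the boundary detects, or "kills", the mutant:

```python
def is_adult(age):
    """Original program: 18 and over counts as adult."""
    return age >= 18

def is_adult_mutant(age):
    """Mutant: `>=` replaced with `>` (an off-by-one mutation)."""
    return age > 18

def run_suite(fn):
    """Run the same test cases against either version; True if all pass."""
    cases = [(17, False), (18, True), (19, True)]
    return all(fn(age) == expected for age, expected in cases)

# The boundary case (18, True) distinguishes the two versions:
assert run_suite(is_adult) is True        # original passes
assert run_suite(is_adult_mutant) is False  # mutant is "killed"
```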
Operational Testing
Testing conducted to evaluate a system or component in its operational environment. Contrast with development testing and acceptance testing.
Parallel Testing
Testing a new or an altered data processing system with the same source data that is used in another system, where the other system serves as the standard of comparison.
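A minimal sketch of parallel testing, with hypothetical `legacy_payroll` and `new_payroll` functions standing in for the two systems; the same source data is run through both and the outputs are compared:

```python
def legacy_payroll(hours, rate):
    """Hypothetical existing system, treated as the standard of comparison."""
    return round(hours * rate, 2)

def new_payroll(hours, rate):
    """Hypothetical replacement system under test."""
    return round(rate * hours, 2)

# The same source data is run through both systems; any mismatch
# flags a record for investigation.
source_data = [(40, 15.50), (37.5, 22.00), (0, 30.00)]

mismatches = [
    (hours, rate)
    for hours, rate in source_data
    if legacy_payroll(hours, rate) != new_payroll(hours, rate)
]
assert mismatches == [], f"outputs diverged for: {mismatches}"
```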