The following definitions are taken from accepted and identified sources.
Ad Hoc Testing: Testing carried out using no recognised test case design technique. [BCS]
Assertion Testing: (NBS) A dynamic analysis technique which inserts assertions about the relationship between program variables into the program code. The truth of the assertions is determined as the program executes.
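A minimal sketch of the idea (the transfer function and its invariant are made up for illustration): assertions about the relationship between program variables are embedded in the code and checked as it executes.

```python
# Assertion testing sketch: invariants about program variables are inserted
# into the code and verified while the program runs.

def transfer(balance_a, balance_b, amount):
    total_before = balance_a + balance_b
    assert amount >= 0, "amount must be non-negative"

    balance_a -= amount
    balance_b += amount

    # Invariant: a transfer must not create or destroy money.
    assert balance_a + balance_b == total_before, "money was created or lost"
    return balance_a, balance_b


if __name__ == "__main__":
    print(transfer(100, 50, 30))   # assertions hold -> (70, 80)
```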
Background Testing: The execution of normal functional testing while the SUT is exercised by a realistic workload. This workload is processed "in the background" as far as the functional testing is concerned.
Benchmarks: Programs that provide performance comparison for software, hardware, and systems.
Benchmarking: A specific type of performance test whose purpose is to determine performance baselines for comparison.
Big-bang testing: Integration testing where no incremental testing takes place prior to all the system's components being combined to form the system.
Breadth test: A test suite that exercises the full scope of a system from a top-down perspective, but does not test any aspect in detail.
Clean test: A test whose primary purpose is validation; that is, tests designed to demonstrate the software's correct working. (syn. positive test)
Code Inspection: A manual [formal] testing [error detection] technique where the programmer reads source code, statement by statement, to a group who ask questions analyzing the program logic, analyzing the code with respect to a checklist of historically common programming errors, and analyzing its compliance with coding standards. Contrast with code audit, code review, code walkthrough. This technique can also be applied to other software and configuration items. Syn: Fagan Inspection
Code Walkthrough: A manual testing [error detection] technique where program [source code] logic [structure] is traced manually [mentally] by a group with a small set of test cases, while the state of program variables is manually monitored, to analyze the programmer's logic and assumptions. Contrast with code audit, code inspection, code review.
Coexistence Testing: Testing whether applications can run together on the same system. Coexistence isn't enough: behaviour also depends on load order, how virtual space is mapped at the moment, hardware and software configurations, and the history of what took place hours or days before. It's probably an exponentially hard problem rather than a square-law problem.
Compatibility bug: A revision to the framework breaks a previously working feature: a new feature is inconsistent with an old feature, or a new feature breaks an unchanged application rebuilt with the new framework code.
Compatibility Testing: The process of determining the ability of two or more systems to exchange information. In a situation where the developed software replaces an already working program, an investigation should be conducted to assess possible compatibility problems between the new software and other programs or systems.
Composability testing: Testing the ability of the interface to let users do more complex tasks by combining different sequences of simpler, easy-to-learn tasks.
Condition Coverage: A test coverage criteria requiring enough test cases such that each condition in a decision takes on all possible outcomes at least once, and each point of entry to a program or subroutine is invoked at least once. Contrast with branch coverage, decision coverage, multiple condition coverage, path coverage, statement coverage
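As a hedged illustration (the function and tests below are hypothetical), the decision in may_enter contains two conditions; the three test cases make each individual condition evaluate to both True and False at least once.

```python
# Condition coverage sketch: each condition in the decision takes on every
# possible outcome at least once across the test cases.

def may_enter(age, has_id):
    if age >= 18 and has_id:
        return True
    return False


def test_condition_coverage():
    assert may_enter(21, True) is True     # age>=18: True,  has_id: True
    assert may_enter(16, True) is False    # age>=18: False, has_id: True
    assert may_enter(21, False) is False   # age>=18: True,  has_id: False
```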
Conformance directed testing: Testing that seeks to establish conformance to requirements or specification.
CRUD Testing. Build a CRUD matrix and test all object create, read, update, and delete operations.
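A minimal sketch, assuming a hypothetical in-memory UserStore; a real CRUD test would exercise the application's own objects or database layer.

```python
# CRUD testing sketch: exercise create, read, update, and delete on one object.

class UserStore:
    def __init__(self):
        self._rows = {}

    def create(self, user_id, name):
        self._rows[user_id] = name

    def read(self, user_id):
        return self._rows.get(user_id)

    def update(self, user_id, name):
        if user_id in self._rows:
            self._rows[user_id] = name

    def delete(self, user_id):
        self._rows.pop(user_id, None)


def test_crud_cycle():
    store = UserStore()
    store.create(1, "alice")
    assert store.read(1) == "alice"      # Create + Read
    store.update(1, "alicia")
    assert store.read(1) == "alicia"     # Update
    store.delete(1)
    assert store.read(1) is None         # Delete
```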
Data-Driven testing: An automation approach in which the navigation and functionality of the test script is directed through external data; this approach separates test and control data from the test script.
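A small sketch of the idea, assuming a hypothetical login_cases.csv with columns username, password, expected: the test logic stays fixed while the external data drives the cases.

```python
# Data-driven testing sketch: inputs and expected results come from an
# external file instead of being hard-coded in the script.

import csv

def login(username, password):
    # Stand-in for the system under test.
    return username == "admin" and password == "secret"

def run_data_driven_tests(path="login_cases.csv"):
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            expected = row["expected"].strip().lower() == "true"
            actual = login(row["username"], row["password"])
            status = "PASS" if actual == expected else "FAIL"
            print(f"{status}: {row['username']}/{row['password']}")
```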
Data flow testing: Testing in which test cases are designed based on variable usage within the code.
Database testing. Check the integrity of database field values.
Defect Age. A measurement that describes the period of time from the introduction of a defect until its discovery.
Defect Discovery Rate. A metric describing the number of defects discovered over a specified period of time, usually displayed in graphical form.
Defect Removal Efficiency (DRE). A measure of the number of defects discovered in an activity versus the number that could have been found. Often used as a measure of test effectiveness.
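As a hedged illustration (this is the commonly used formulation, not quoted from the source), DRE can be computed as the defects found by an activity divided by those defects plus the ones that escaped and were found later.

```python
# DRE sketch: defects removed by the activity over the total that were present.

def defect_removal_efficiency(found_in_activity, found_later):
    return found_in_activity / (found_in_activity + found_later) * 100

# Example: 90 defects found in system test and 10 more found in production
# gives a DRE of 90%.
print(defect_removal_efficiency(90, 10))  # 90.0
```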
Defect Seeding. The process of intentionally adding known defects to those already in a computer program for the purpose of monitoring the rate of detection and removal, and estimating the number of defects still remaining. Also called Error Seeding.
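A sketch of the usual seeding estimate, with made-up numbers: if testing finds a given fraction of the seeded defects, assume it finds roughly the same fraction of the real ones.

```python
# Defect seeding sketch: estimate how many real defects remain after testing.

def estimate_remaining(seeded, seeded_found, real_found):
    if seeded_found == 0:
        raise ValueError("no seeded defects found; estimate undefined")
    estimated_total_real = real_found * seeded / seeded_found
    return estimated_total_real - real_found

# 20 defects seeded, 16 of them found, 40 real defects found:
# estimated real defects = 40 * 20/16 = 50, so about 10 still remain.
print(estimate_remaining(seeded=20, seeded_found=16, real_found=40))  # 10.0
```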
Defect Masked. An existing defect that hasn't yet caused a failure because another defect has prevented that part of the code from being executed.
Depth test. A test case that exercises some part of a system to a significant level of detail.
Decision Coverage. A test coverage criteria requiring enough test cases such that each decision has a true and false result at least once, and that each statement is executed at least once. Syn: branch coverage. Contrast with condition coverage, multiple condition coverage, path coverage, statement coverage
Dirty testing: Negative testing.
End-to-End testing. Similar to system testing; the 'macro' end of the test scale; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.
Equivalence Partitioning: An approach where classes of inputs are categorized for product or function validation. This usually does not include combinations of input, but rather a single representative value per class. For example, for a given function there may be several classes of input that may be used for positive testing. If a function expects an integer and receives an integer as input, this would be considered a positive test assertion. On the other hand, if a character or any input class other than integer is provided, this would be considered a negative test assertion or condition.
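A minimal sketch of the integer example above, using a hypothetical parse_quantity function: one test exercises the valid class (integers) and one the invalid class (everything else).

```python
# Equivalence partitioning sketch: one representative per input class.

def parse_quantity(value):
    if not isinstance(value, int):
        raise TypeError("an integer is required")
    return value

def test_valid_class():          # positive test assertion
    assert parse_quantity(7) == 7

def test_invalid_class():        # negative test assertion
    try:
        parse_quantity("x")
    except TypeError:
        pass
    else:
        raise AssertionError("non-integer input should be rejected")
```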
Error Guessing: Another common approach to black-box validation. Black-box testing is when everything other than the source code may be used for testing. This is the most common approach to testing. Error guessing is when random inputs or conditions are used for testing. Random in this case includes a value produced either by a computerized random number generator, or an ad hoc value or test condition provided by an engineer.
Error guessing. A test case design technique where the experience of the tester is used to postulate what faults exist, and to design tests specially to expose them
Error seeding. The purposeful introduction of faults into a program to test effectiveness of a test suite or other quality assurance program.
Exception Testing. Identify error messages and exception handling processes, and the conditions that trigger them.
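A small sketch, using a hypothetical withdraw function: the test deliberately triggers the error condition and checks both the exception type and its message.

```python
# Exception testing sketch: provoke the error path and verify its behaviour.

def withdraw(balance, amount):
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

def test_insufficient_funds_message():
    try:
        withdraw(50, 100)
    except ValueError as exc:
        assert str(exc) == "insufficient funds"
    else:
        raise AssertionError("expected ValueError was not raised")
```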
Exhaustive Testing.(NBS) Executing the program with all possible combinations of values for program variables. Feasible only for small, simple programs.
Exploratory Testing: An interactive process of concurrent product exploration, test design, and test execution. The heart of exploratory testing can be stated simply: The outcome of this test influences the design of the next test.
Software Testing Dictionary
Bug Reporting
1). Classical Bug Reporting Process:
1. Test Lead
2. Project Lead
2). Common Repository Oriented Bug Reporting Process:
3). Bug Tracking Tool Oriented Bug Reporting Process:
Bug Tracking Tool:
It is a software application that can be accessed only by authorized persons and is used for managing the complete bug tracking process by providing all the required facilities along with a defect profile template.
Note:
At the end of the testing process, the test lead will usually prepare the test summary report, which is also called the test closure report.
Test Design Techniques
While developing the test cases, if the test engineer finds some areas complex, he or she will usually use test design techniques to overcome that complexity.
Generally, two types of techniques are used in most companies:
1. Boundary Value Analysis (BVA)
2. Equivalence Class Partitioning (ECP)
1). Boundary Value Analysis (BVA):
Whenever engineers need to develop test cases for a range kind of input, they go for boundary value analysis, which says to concentrate on the boundaries of the range.
Usually they test with the following values (LB = lower boundary, MV = middle value, UB = upper boundary):
LB-1, LB, LB+1, MV, UB-1, UB, UB+1
2). Equivalence Class Partitioning (ECP):
Whenever the test engineer needs to develop test cases for a feature that has a large number of validations, one goes for equivalence class partitioning, which says to first divide the inputs into classes and then prepare the test cases.
Ex: Develop the test cases for an E-Mail text box whose validations are as follows.
Requirements:
1. It should accept a minimum of 4 and a maximum of 20 characters.
2. It should accept only lowercase characters.
3. It should accept only the @ and _ special symbols.
Boundary Value Analysis:
LB-1, LB, LB+1, MV, UB-1, UB, UB+1
3ch, 4ch, 5ch, 12ch, 19ch, 20ch, 21ch
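A sketch of these boundary cases in code, assuming a hypothetical is_valid_email_field() that enforces only the 4-20 character rule:

```python
# Boundary value analysis sketch: each test value sits on or next to a
# boundary of the 4-20 character length range.

def is_valid_email_field(text):
    return 4 <= len(text) <= 20

BVA_CASES = [
    ("abc",     False),  # 3 ch  (LB-1)
    ("abcd",    True),   # 4 ch  (LB)
    ("abcde",   True),   # 5 ch  (LB+1)
    ("a" * 12,  True),   # 12 ch (MV)
    ("a" * 19,  True),   # 19 ch (UB-1)
    ("a" * 20,  True),   # 20 ch (UB)
    ("a" * 21,  False),  # 21 ch (UB+1)
]

for value, expected in BVA_CASES:
    assert is_valid_email_field(value) == expected, value
```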
Equivalence Class Partitioning (ECP):
Valid: 4 char, 5 char, 12 char, 19 char, 20 char, a – z, @, _
Invalid: 3 char, 21 char, A – Z, 0 – 9, all special symbols apart from @ and _, alphanumeric, blank space, decimal numbers
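A sketch of these classes in code, assuming a hypothetical validator that accepts 4-20 characters drawn only from lowercase letters, @ and _:

```python
# Equivalence class partitioning sketch: one representative value per class,
# valid classes must be accepted and invalid classes rejected.

import re

def is_valid_email_field(text):
    return bool(re.fullmatch(r"[a-z@_]{4,20}", text))

VALID_CLASS   = ["abcd", "abcde", "a" * 12, "a" * 19, "a" * 20, "ab@_"]
INVALID_CLASS = ["abc", "a" * 21, "ABCD", "1234", "ab#d", "ab d"]

for value in VALID_CLASS:
    assert is_valid_email_field(value), value
for value in INVALID_CLASS:
    assert not is_valid_email_field(value), value
```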
Test Case Document:
Columns: Test Case ID, Test Case Type, Description, Expected Value
Valid Input Table (VIT):
Sl No | Input
1     | abc
2     | ABCD
3     | ABCD123