Software Testing Dictionary

The following definitions are taken from accepted and identified sources.

Ad Hoc Testing: Testing carried out using no recognised test case design technique. [BCS]

Assertion Testing: (NBS) A dynamic analysis technique which inserts assertions about the relationship between program variables into the program code. The truth of the assertions is determined as the program executes.
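
For illustration (this sketch is mine, not part of the NBS definition), the idea in Python: an assertion about the relationship between program variables is embedded in the code and evaluated as the program executes.

    def transfer(balance_a, balance_b, amount):
        total_before = balance_a + balance_b
        balance_a -= amount
        balance_b += amount
        # Assertion about a relationship between program variables:
        # a transfer must conserve the combined balance.
        assert balance_a + balance_b == total_before, "total changed during transfer"
        return balance_a, balance_b

    transfer(100, 50, 30)  # passes silently; a violation would raise AssertionError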

Background testing: The execution of normal functional testing while the SUT is exercised by a realistic workload. This workload is processed "in the background" as far as the functional testing is concerned.

Benchmarks: Programs that provide performance comparison for software, hardware, and systems.

Benchmarking: A specific type of performance test with the purpose of determining performance baselines for comparison.

Big-bang testing: Integration testing where no incremental testing takes place prior to all the system's components being combined to form the system.

Breadth test: A test suite that exercises the full scope of a system from a top-down perspective, but does not test any aspect in detail.

Clean test: A test whose primary purpose is validation; that is, a test designed to demonstrate the software's correct working. (syn. positive test)

Code Inspection: A manual [formal] testing [error detection] technique where the programmer reads source code, statement by statement, to a group who ask questions analyzing the program logic, analyzing the code with respect to a checklist of historically common programming errors, and analyzing its compliance with coding standards. Contrast with code audit, code review, code walkthrough. This technique can also be applied to other software and configuration items. Syn: Fagan Inspection

Code Walkthrough: A manual testing [error detection] technique where program [source code] logic [structure] is traced manually [mentally] by a group with a small set of test cases, while the state of program variables is manually monitored, to analyze the programmer's logic and assumptions. Contrast with code audit, code inspection, code review.

Coexistence Testing: Coexistence isn’t enough. It also depends on load order, how virtual space is mapped at the moment, hardware and software configurations, and the history of what took place hours or days before. It’s probably an exponentially hard problem rather than a square-law problem.

Compatibility bug: A revision to the framework breaks a previously working feature: a new feature is inconsistent with an old feature, or a new feature breaks an unchanged application rebuilt with the new framework code.

Compatibility Testing: The process of determining the ability of two or more systems to exchange information. In a situation where the developed software replaces an already working program, an investigation should be conducted to assess possible compatibility problems between the new software and other programs or systems.

Composability testing: Testing the ability of the interface to let users do more complex tasks by combining different sequences of simpler, easy-to-learn tasks.

Condition Coverage: A test coverage criteria requiring enough test cases such that each condition in a decision takes on all possible outcomes at least once, and each point of entry to a program or subroutine is invoked at least once. Contrast with branch coverage, decision coverage, multiple condition coverage, path coverage, statement coverage
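
For illustration (the function and the inputs below are invented), a decision with two conditions and a pair of test cases that give each condition both outcomes:

    def approve(age, income):
        if age >= 18 and income > 30000:  # one decision, two conditions
            return True
        return False

    # Condition coverage: each condition takes each outcome at least once.
    approve(20, 40000)  # age >= 18 is True,  income > 30000 is True
    approve(15, 10000)  # age >= 18 is False, income > 30000 is False
    # Contrast with decision coverage: the pair (20, 10000) and (15, 40000)
    # also gives every condition both outcomes, yet the decision itself would
    # then only ever evaluate to False.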

Conformance directed testing: Testing that seeks to establish conformance to requirements or specification.
CRUD Testing. Build a CRUD matrix and test all object creation, read, update, and deletion operations.
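
A small sketch of the idea (the object names are invented): record which create/read/update/delete operations have been exercised for each object, and flag the gaps.

    crud_matrix = {
        "Customer": {"Create": True, "Read": True, "Update": False, "Delete": True},
        "Order":    {"Create": True, "Read": True, "Update": True,  "Delete": False},
    }
    for obj, ops in crud_matrix.items():
        missing = [op for op, tested in ops.items() if not tested]
        if missing:
            print(f"{obj}: no test yet for {', '.join(missing)}")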

Data-Driven testing: An automation approach in which the navigation and functionality of the test script are directed through external data; this approach separates test and control data from the test script.
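
A minimal sketch of the approach in Python (the file name and column names are invented): the test logic is fixed, while inputs and expected results come from an external CSV file.

    import csv

    def run_data_driven(function_under_test, data_file):
        with open(data_file, newline="") as f:
            for row in csv.DictReader(f):  # expects columns: input, expected
                actual = function_under_test(row["input"])
                result = "PASS" if str(actual) == row["expected"] else "FAIL"
                print(row["input"], result)

    # Example: run_data_driven(str.upper, "login_test_data.csv")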

Data flow testing: Testing in which test cases are designed based on variable usage within the code.

Database testing. Check the integrity of database field values.

Defect Age. A measurement that describes the period of time from the introduction of a defect until its discovery.

Defect Discovery Rate. A metric describing the number of defects discovered over a specified period of time, usually displayed in graphical form.

Defect Removal Efficiency (DRE). A measure of the number of defects discovered in an activity versus the number that could have been found. Often used as a measure of test effectiveness.
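
A worked example of the usual ratio (the counts are invented): DRE is the number of defects removed during the activity divided by the total that could have been found, i.e. those found during the activity plus those that escaped it.

    found_in_testing = 90     # defects discovered during the test activity
    found_after_release = 10  # defects that escaped to the field
    dre = found_in_testing / (found_in_testing + found_after_release)
    print(f"DRE = {dre:.0%}")  # DRE = 90%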

Defect Seeding. The process of intentionally adding known defects to those already in a computer program for the purpose of monitoring the rate of detection and removal, and estimating the number of defects still remaining. Also called Error Seeding.
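
A sketch of the usual estimate (the counts are invented): if the tests found 15 of 20 seeded defects, the same detection rate is assumed to hold for the indigenous defects.

    seeded_total = 20   # defects intentionally inserted
    seeded_found = 15   # seeded defects the tests discovered
    real_found = 60     # indigenous (non-seeded) defects discovered

    estimated_real_total = real_found * seeded_total / seeded_found
    print(f"Estimated indigenous defects: {estimated_real_total:.0f}")      # 80
    print(f"Estimated still remaining: {estimated_real_total - real_found:.0f}")  # 20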

Defect Masked. An existing defect that hasn't yet caused a failure because another defect has prevented that part of the code from being executed.

Depth test. A test case that exercises some part of a system to a significant level of detail.

Decision Coverage. A test coverage criteria requiring enough test cases such that each decision has a true and a false result at least once, and that each statement is executed at least once. Syn: branch coverage. Contrast with condition coverage, multiple condition coverage, path coverage, statement coverage.

Dirty testing. Negative testing.

End-to-End testing. Similar to system testing; the 'macro' end of the test scale; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

Equivalence Partitioning: An approach where classes of inputs are categorized for product or function validation. This usually does not include combinations of input, but rather a single value chosen to represent each class. For example, a given function may have several classes of input that can be used for positive testing. If a function expects an integer and receives an integer as input, this is considered a positive test assertion. On the other hand, if a character or any input class other than integer is provided, this is considered a negative test assertion or condition.

Error Guessing: Another common approach to black-box validation. Black-box testing is when everything other than the source code may be used for testing; it is the most common approach to testing. Error guessing is when random inputs or conditions are used for testing. Random in this case includes a value produced by a computerized random number generator, or an ad hoc value or test condition provided by an engineer.

Error guessing. A test case design technique where the experience of the tester is used to postulate what faults exist, and to design tests specifically to expose them.

Error seeding. The purposeful introduction of faults into a program to test the effectiveness of a test suite or other quality assurance program.

Exception Testing. Identify error messages and exception handling processes, and the conditions that trigger them.

Exhaustive Testing. (NBS) Executing the program with all possible combinations of values for program variables. Feasible only for small, simple programs.
Exploratory Testing: An interactive process of concurrent product exploration, test design, and test execution. The heart of exploratory testing can be stated simply: The outcome of this test influences the design of the next test.

Bug Reporting

1). Classical Bug Reporting Process:

(Diagram not reproduced: the defect report is passed from the test engineer to (1) the test lead and then to (2) the project lead.)

2). Common Repository Oriented Bug Reporting Process:


3). Bug Tracking Tool Oriented Bug Reporting Process:

Bug Tracking Tool:

It is a software application that can be accessed only by authorized persons, and it is used for managing the complete bug tracking process by providing all the facilities along with a defect profile template.

Note:

At the end of the testing process, usually the test lead will prepare the test summary report, which is also called the test closure report.

Test Design Techniques

While developing the test cases, if the test engineer finds some areas complex, then to overcome that complexity he or she will usually use test design techniques.

Generally two types of techniques are used in most of the companies.

1. Boundary Value Analysis (BVA).

2.Equivalence Class Partitioning (ECP).

1). Boundary Value Analysis (BVA):

Whenever the engineers need to develop test cases for a range kind of input, they will go for boundary value analysis, which says to concentrate on the boundaries of the range.

Usually they test with the following values, where LB is the lower bound, UB is the upper bound, and MV is a middle value.

LB-1, LB, LB+1, MV, UB-1, UB, UB+1
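
A small helper written for this guide (not a standard routine) that produces those seven values for a numeric range:

    def boundary_values(lb, ub):
        mv = (lb + ub) // 2  # a representative middle value
        return [lb - 1, lb, lb + 1, mv, ub - 1, ub, ub + 1]

    print(boundary_values(4, 20))  # [3, 4, 5, 12, 19, 20, 21]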

2). Equivalence Class Partitioning (ECP):

Whenever the test engineer needs to develop test cases for a feature which has a large number of validations, one will go for equivalence class partitioning, which says to first divide the inputs into classes and then prepare the test cases.

Ex: Develop the test cases for an E-Mail text box whose validations are as follows.
Requirements:

1.It should accept a minimum of 4 characters and a maximum of 20 characters.

2.It should accept only lowercase characters.

3.It should accept only the @ and _ special symbols.

Boundary Value Analysis:
LB-1, LB, LB+1, MV, UB-1, UB, UB+1

3ch, 4ch, 5ch, 12ch, 19ch, 20ch, 21ch

Equivalence Class Partitioning (ECP):

Valid: 4 char, 5 char, 12 char, 19 char, 20 char, a - z, @, _

Invalid: 3 char, 21 char, A - Z, 0 - 9, all special symbols apart from @ and _, alphanumeric, blank space, decimal numbers.
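
For illustration, the three requirements above can be coded as a validator and checked with one representative value per equivalence class (the regular expression is my rendering of the rules, not part of the original example):

    import re

    def is_valid_email_field(value):
        # 4-20 characters, lowercase letters, with @ and _ as the only symbols
        return re.fullmatch(r"[a-z@_]{4,20}", value) is not None

    assert is_valid_email_field("abcd")        # valid: 4 characters
    assert is_valid_email_field("ab@cd_efgh")  # valid: @ and _ allowed
    assert not is_valid_email_field("abc")     # invalid: 3 characters
    assert not is_valid_email_field("a" * 21)  # invalid: 21 characters
    assert not is_valid_email_field("ABCD")    # invalid: uppercase
    assert not is_valid_email_field("abc1")    # invalid: digits
    assert not is_valid_email_field("ab cd")   # invalid: blank space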

Test Case Document:

It contains the following columns: Test Case ID, Test Case Type, Description, Expected Value.

Valid Input Table (VIT):

Sl No   Input
1       abc
2       ABCD
3       ABCD123

Bug Tracking

Bug tracking is a process in which the defects are identified, isolated, and managed.

Defect Profile Document
Defect ID:
The sequence number of the defect is listed out here in this section.

Steps of Reproducibility:
The list of all the steps that were followed by a test engineer to identify the defect is given here in this section.

Submitter:
The name of the test engineer who submitted the defect is mentioned here in this section.

Date of Submission:
The date on which the defect was submitted is mentioned here in this section.

Version Number:
The corresponding version number is mentioned here in this section.

Build Number:
The corresponding build number is mentioned here in this section.

Assigned to:
The project lead or development lead will mention the name of the developer to whom the defect is assigned.

Severity:
How serious the defect is, is described in terms of severity. It is classified into 4 types.

1. FATAL      : Sev1 / S1 / 1
2. MAJOR      : Sev2 / S2 / 2
3. MINOR      : Sev3 / S3 / 3
4. SUGGESTION : Sev4 / S4 / 4

FATAL:

If at all the problems are related to navigational blocks or unavailability of functionality, then such types of problems are treated as FATAL defects.

Note: These are also called show-stopper defects.

MAJOR:

If at all the problems are related to the working of the features, then such types of problems are treated as MAJOR defects.

MINOR:

If at all the problems are related to the look and feel of the application, then such types of problems are treated as MINOR defects.

SUGGESTIONS:

If at all the problems are related to the value of the application, then such types of problems are treated as suggestions.

Priority:

The sequence in which the defects have to be rectified is described in terms of priority. It is classified into 4 types.

1.CRITICAL
2.HIGH
3.MEDIUM
4.LOW

Usually the FATAL defects are given CRITICAL priority, MAJOR defects are given HIGH priority, MINOR defects are given MEDIUM priority, and SUGGESTION defects are given LOW priority, but depending upon the situation the priority may be changed by the project lead or development lead.

Ex: -

Low Severity, High Priority Case:
In the case of a customer visit, all the look and feel defects, which are usually less severe, are given the highest priority.

High Severity, Low Priority Case:
If at all some part of the application is not available because it is under development, the test engineer will still treat it as a FATAL defect, but the development lead will give less priority to those defects.

BUG LIFE CYCLE:

1.New / Open:

Whenever the defect is found for the first time, the test engineer will set the status as New / Open. But some companies will say to set the status as only New at this stage, and once the developers accept the defect they will set the status as Open.

2.Reopen and Closed:

Once the defects are rectified by the developer and the next build is released to the testing department, the testers will check whether the defects are rectified properly or not.

If they find them properly rectified, they will set the status as Closed; otherwise they will set the status as Reopen.

3.Fixed for Verification / Fixed / Rectified:

Whenever the test engineer raises a defect and it is accepted by the developers, then once it is rectified they will set the status as Fixed.

Hold:
Whenever the developer is confused about whether to accept or reject the defect, he will set the status as Hold.

Testers Mistake / Testers Error / Rejected:
Whenever the developer confirms it is not at all a defect, he will set the status as Rejected.

As Per Design (this is a rare case):
Whenever some new changes are incorporated by the developers, the test engineers will raise them as defects, but the developers will set the status as 'As Per Design'.
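
The status flow described above can be sketched as a transition table (a rough model; the exact statuses and allowed moves vary from company to company):

    ALLOWED = {
        "New":    {"Open", "Rejected", "Hold", "As Per Design"},
        "Open":   {"Fixed"},
        "Fixed":  {"Closed", "Reopen"},
        "Reopen": {"Fixed"},
        "Hold":   {"Open", "Rejected"},
    }

    def set_status(current, new):
        if new not in ALLOWED.get(current, set()):
            raise ValueError(f"cannot move a defect from {current} to {new}")
        return new

    status = set_status("New", "Open")     # developer accepts the defect
    status = set_status(status, "Fixed")   # developer rectifies it
    status = set_status(status, "Closed")  # tester verifies the fix
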
Error:
It is a problem related to the program.

Defect:
If the test engineer identifies a problem with respect to the functionality, then it is called a defect.

Bug:
If the developer accepts the defect, it is called a bug.

Fault / Failure:
If the customer identifies the problem after delivery, it is called a fault / failure.

Result Analysis

In this phase the test engineer will compare the expected value with the actual value and mention the result as Pass if both match; otherwise the result is mentioned as Fail.
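
A minimal sketch of that comparison in Python:

    def analyze_result(expected, actual):
        return "Pass" if expected == actual else "Fail"

    print(analyze_result("Login successful", "Login successful"))  # Pass
    print(analyze_result("Login successful", "Error 500"))         # Fail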

Test Development

Types of Test Cases
Test cases are broadly divided into two types.

G.U.I Test Cases.
Functional test cases.
Functional test cases are further divided into two types.

Positive Test Cases.
Negative Test Cases.
Guidelines to prepare GUI Test Cases:

1.Check for the availability of all the objects.
2.Check for the alignment of the objects, if at all the customer has specified the requirements.
3.Check for the consistency of all the objects.
4.Check for the spelling and grammar.
5.Apart from these guidelines, anything we test without performing any action will fall under GUI test cases.

Guidelines for developing Positive Test Cases:

1.A test engineer must have a positive mindset.
2.A test engineer should consider the positive flow of the application.
3.A test engineer should use valid inputs from the point of view of the functionality.

Guidelines for developing the Negative Test Cases:

1.A test engineer must have a negative mindset.
2.He should consider the negative flow of the application.
3.He should use at least one invalid input for a set of data.

Test Case Template:

Test Objective :
Test Scenario :
Test Procedure :
Test Data :
Test Cases :


1.Test Objective:
The purpose of the document is clearly described here in this section.

2.Test Scenarios:
The list of all the situations that are to be tested is listed out here in this section.

3.Test Procedure:
Test procedure is a functional-level term which describes how to test the functionality. So in this section one will describe the plan for testing the functionality.

4.Test Data:
The data that is required for testing is made available here in this section.

5.Test Cases:
The list of all the detailed test cases is listed out here in this section.

Note:

Some companies even maintain all the above five fields individually for each and every scenario, while other companies maintain them commonly for all the scenarios.

Test Execution

During the test execution phase the test engineer will do the following:

1.He will perform the action that is described in the description column.
2.He will observe the actual behavior of the application.
3.He will document the observed value under the actual value column.

Test Planning

Plan:

Plan is a strategic document, which describes how to perform a task in an effective, efficient and optimized way.

Optimization:

Optimization is the process of reducing the input resources, or utilizing them to the maximum, while getting the maximum possible output.

Test Plan:

It is a strategic document which describes how to perform testing on an application in an effective, efficient, and optimized way. The test lead prepares the test plan.

CONTENTS OF THE TEST PLAN

1.0 INTRODUCTION.
1.1 Objective.
The main purpose of the document is clearly described here in this section.

1.2 Reference Document.
The list of all the documents that are referred to prepare the test plan will be listed out here in this section.

2.0 COVERAGE OF TESTING.
2.1 Features To Be Tested
The list of all the features within the scope is mentioned here in this section.

2.2 Features Not To Be Tested
The list of all the features that are not planned for testing, based on the following criteria, is mentioned here in this section.

Out of scope features
Low risk areas
Future functionalities.
The features that are skipped based on the time constraints.

3.0 TEST STRATEGY
It is defined as an organization level term, which is used for testing all the projects in the organization.

TEST PLAN
It is defined as a project-level term which describes how to test a particular project in an organization.
Note:
Test strategy is common for all the projects, but the test plan varies from project to project.

3.1 Levels of Testing
The list of all the levels of testing that are maintained in that company is listed out here in this section.

3.2 Types of Testing
The list of all the types of testing that are followed by that company is listed out here in this section.

3.3 Test Design Techniques
The list of all the techniques that are followed by that company during test case development is listed out here in this section.
Ex: BVA (Boundary Value Analysis)
ECP (Equivalence Class Partitioning)

3.4 Configuration Management

3.5 Test Metrics
The list of all the tasks that are measured and maintained in terms of metrics is clearly mentioned here in this section.

3.6 Terminologies
The list of all the terms and their corresponding meanings is listed out here in this section.

3.7 Automation plan
The list of all the areas that are planned for automation in that company is listed out here in this section.

3.8 List of Automated Tools
The list of all the automated tools that are used in that company is listed out here in this section.



4.0 BASE CRITERIA
4.1 Acceptance Criteria.
When to stop testing in a full-fledged manner, thinking that enough testing has been done on the application, is clearly described here in this section.

4.2 Suspension Criteria.
When to stop testing suddenly and suspend the build is clearly mentioned here in this section.

5.0 TEST DELIVERABLES.
The list of all the documents that are to be prepared and delivered in the testing phase is listed out here in this section.

6.0 TEST ENVIRONMENT.
The customer-specified environment that is to be used for testing is clearly described here in this section.

7.0 RESOURCE PLANNING.
Who has to do what is clearly described here in this section.

8.0 SCHEDULING.
The starting and ending dates of each and every task are clearly described here in this section.

9.0 STAFFING AND TRAINING.
How much staff is to be recruited and what kind of training is to be provided are clearly planned and mentioned here in this section.

10.0 RISKS AND CONTINGENCIES.
The list of all the potential risks and the corresponding solution plans is listed out here in this section.
Risks

Unable to deliver the software within the deadlines.
Employees may leave the organization in the middle of the project development.
The customer may impose new deadlines.
Unable to test all the features within the time.
Lack of expertise.
Contingences

Ensure proper planning.
People need to be maintained on the bench.
What is not to be tested has to be planned properly.
Severity- and priority-based execution.
Proper training needs to be provided.
11.0 ASSUMPTIONS.
The list of all the assumptions that are to be assumed by a test engineer will be listed out here in this section.

12.0 APPROVAL INFORMATION.
Who will approve what is clearly mentioned here in this section.

Types Of Testing

There are 18 types of testing.

1.Build Verification Testing.

2.Regression Testing.

3.Re-Testing.

4.Alpha Testing.

5.Beta Testing.

6.Static Testing.

7.Dynamic Testing.

8.Installation Testing.

9.Compatibility Testing.

10.Monkey Testing.

11.Exploratory Testing.

12.Usability Testing.

13.End-To-End Testing.

14.Port Testing.

15.Reliability Testing.

16.Mutation Testing.

17.Security Testing.

18.Ad-hoc Testing.

1) Sanitary Testing / Build Verification Testing / Build Acceptance Testing:

It is a type of testing in which one will conduct overall testing on the released build in order to check whether it is fit for further detailed testing or not.

Some companies even call it Sanitary Testing or Smoke Testing. But some companies will say that just before the release of the build the developers will conduct overall testing in order to check whether the build is fit for detailed testing or not, which is known as Smoke Testing; and once the build is released, the testers will once again conduct overall testing in order to check whether the build is fit for further detailed testing or not, which is known as Sanitary Testing.

2) Regression Testing:

It is a type of testing in which one will perform testing on already tested functionality again and again. This is usually done in the following scenarios (situations).

Scenario 1:

Whenever the defects raised by the test engineer are rectified by the developer and the next build is released to the testing department, the test engineer will test the defect's functionality and its related functionalities once again.

Scenario 2:

Whenever some new changes are requested by the customer, those new features are incorporated by the developers, and the next build is released to the testing department, then the test engineers will test the related functionalities of the new features once again, even though they were already tested. That is also known as regression testing.

Note:

Testing the new features for the first time is new testing but not the regression testing.

3) Re-Testing:

It is a type of testing in which one will perform testing on the same function again and again with multiple sets of data in order to come to a conclusion about whether the functionality is working fine or not.

4) Alpha Testing:

It is a type of testing in which one (i.e., our test engineers) will perform user acceptance testing in our company in the presence of the customer.

Advantages:

If at all any defects are found, there is a chance of rectifying them immediately.

5) Beta Testing:

It is a type of testing in which either third-party testers or end users will perform user acceptance testing at the client's place before the actual implementation.

6) Static Testing:

It is a type of testing in which one will perform testing on an application or its related factors without performing any actions.

Ex: GUI testing, document testing, code reviewing, etc.

7) Dynamic Testing:

It is a type of testing in which one will perform testing on the application by performing some actions.

Ex: Functional testing.

8) Installation Testing:

It is a type of testing in which one will install the application into the environment by following the guidelines given in the deployment document. If the installation is successful, then one will come to the conclusion that the guidelines are correct; otherwise, the guidelines are not correct.

9) Compatibility Testing:

It is a type of testing in which one may have to install the application into multiple environments prepared with different combinations of environmental components, in order to check whether the application is suitable for those environments or not. This is usually done for products.

10) Monkey Testing:

It is a type of testing in which one will intentionally perform some abnormal actions on the application in order to check its stability.

11) Exploratory Testing:

It is a type of testing in which usually a domain expert will perform testing on the application by exploring the functionality in parallel, without having knowledge of the requirements.

12) Usability Testing:

It is a type of testing in which one will concentrate on the user friendliness of the application.

13) End-To-End Testing:

It is a type of testing in which one will perform testing on a complete transaction from one end to another end.

14) Port Testing:

It is a type of testing in which one will check whether the application is compatible or not after deploying it into the original client's environment.

15) Reliability Testing (or) Soak Testing:

It is a type of testing in which one will perform testing on the application continuously for a long period of time in order to check its stability.

16) Mutation Testing:

It is a type of testing in which one will perform testing by making some changes to the program.

For example, usually the developers will make many small changes to the program and check whether the existing tests catch those changes; this is known as mutation testing.

17) Security Testing:

It is a type of testing in which one will usually concentrate on the following areas.

i) Authentication.
ii) Direct URL Testing.
iii) Firewall Leakage Testing.

i) Authentication:

It is a type of testing in which a test engineer will enter different combinations of user names and passwords in order to check whether only the authorized persons are able to access the application or not.
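
A sketch of such a check (login() here is a stand-in for the real application call, and the credentials are invented):

    def login(user, pwd):
        # stand-in for the application's real authentication call
        return (user, pwd) == ("admin", "secret123")

    combinations = [
        ("admin", "secret123", True),    # valid user, valid password
        ("admin", "wrong", False),       # valid user, invalid password
        ("nouser", "secret123", False),  # invalid user
        ("", "", False),                 # blank credentials
    ]
    for user, pwd, should_succeed in combinations:
        assert login(user, pwd) == should_succeed, f"auth check failed for {user!r}"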

ii) Direct URL Testing:

It is a type of testing in which a test engineer will specify the direct URLs of secured pages and check whether they are accessible or not.
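
A sketch using the requests library (the URLs are invented): fetch secured pages without logging in and expect anything except a 200 OK.

    import requests

    secured_urls = [
        "https://example.com/admin/reports",
        "https://example.com/account/settings",
    ]
    for url in secured_urls:
        r = requests.get(url, allow_redirects=False)
        # Expect 401/403 or a redirect to the login page, never 200 OK.
        assert r.status_code != 200, f"{url} is reachable without authentication"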

iii) Firewall Leakage Testing:

It is a type of testing in which one will log in as one level of user and try to access pages authorized only for other levels of users, in order to check whether the firewall is working properly or not.

18) Ad-hoc Testing:

It is a type of testing in which one will perform testing on the application in his own style after understanding the requirements clearly.

Levels Of Testing

There are 5 levels of testing:
1.Unit level testing
2.Module level testing
3.Integration level testing
4.System level testing
5.User acceptance level testing

1). Unit level testing:

If one performs testing on a unit, then that level of testing is known as unit level testing. It is white box testing; usually developers perform it.

Unit: It is defined as the smallest part of an application.



2). Module level testing:

If one performs testing on a module, that is known as module level testing. It is black box testing; usually test engineers perform it.



3). Integration level testing:

Once the modules are developed, the developers will develop some interfaces and integrate the modules with the help of those interfaces. While integrating, they will check whether the interfaces are working fine or not. It is white box testing, and usually developers or white box testers perform it.

The developers will be integrating the modules in any one of the following approaches.

A). Top Down Approach (TDA).

In this approach the parent modules are developed first and then integrated with child modules.

B). Bottom Up Approach (BUA).

In this approach the child modules are developed first and then integrated with the corresponding parent modules.

C). Hybrid Approach.

This approach is a mixed approach of both Top down and Bottom up approaches.

D). Big Bang Approach.

Once all the modules are ready, integrating them all at once is known as the big bang approach.

STUB: While integrating the modules in the top-down approach, if at all any mandatory module is missing, then that module is replaced with a temporary program known as a STUB.

DRIVER: While integrating the modules in the bottom-up approach, if at all any mandatory module is missing, then that module is replaced with a temporary program known as a DRIVER.
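
A sketch of both ideas in Python (the module names are invented for illustration):

    # Top-down: the parent module "billing" is ready but the child "tax"
    # module is not, so a STUB with a canned answer stands in for it.
    def tax_stub(amount):
        return amount * 0.10

    def billing(amount):          # parent module under test
        return amount + tax_stub(amount)

    # Bottom-up: the child "tax" module is ready but its parent is not,
    # so a temporary DRIVER calls it with test inputs.
    def tax(amount):
        return amount * 0.10

    def tax_driver():
        assert tax(100) == 10.0

    tax_driver()
    print(billing(100))  # 110.0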

4). System level testing:

Once the application is deployed into the environment, if one performs testing on the system it is known as system level testing. It is black box testing and is usually done by the test engineers.

At this level of testing, many types of testing are done.

Some of those are


System Integration Testing
Load Testing
Performance Testing
Stress Testing etc….

5). User acceptance level testing:

The same system testing done in the presence of the user is known as user acceptance testing. It is black box testing usually done by the test engineers.

Testing Methodology(OR) Testing Techniques

There are 3 methods:

1.Black Box Testing.
2.White Box Testing.
3.Gray Box Testing.

Black Box Testing:

It is a method of testing in which one will perform testing only on the functional part of an application, without having any structural knowledge. Usually test engineers perform it.



White Box Testing (or) Glass Box Testing (or) Clear Box Testing:

It is a method of testing in which one will perform testing on the structural part of an application. Usually developers or white box testers perform it.


Gray Box Testing:

It is a method of testing in which one will perform testing on both the functional part as well as the structural part of an application.


Note: The test engineer with structural knowledge will perform gray box testing.

SDLC (Software Development Life Cycle)

It contains 6 phases.
1.Initial phase / Requirement phase.
2.Analysis phase.
3.Design phase.
4.Coding phase.
5.Testing phase.
6.Delivery and maintenance phase.

Initial Phase:
Task : Interacting with the customer and gathering the requirements.

Roles : BA (Business Analyst), EM (Engagement Manager)

Process:
First of all, the business analyst will take an appointment from the customer, collect the templates from the company, meet the customer on the appointed date, gather the requirements with the support of the templates, and come back to the company with a requirements document. Then the engagement manager will check for extra requirements; if at all he finds any extra requirements, he is responsible for the excess cost of the project. The engagement manager is also responsible for the prototype demonstration in case of confusing requirements.

Template: It is defined as a pre-defined format with pre-defined fields used for preparing a document perfectly.

Prototype: It is a rough and rapidly developed model, used for demonstration to the client in order to gather clear requirements and to win the confidence of the customer.


Proof: The proof of this phase is the requirements document, which is also called by the following names:

FRS - (Functional Requirement Specification)
BRS - (Business Requirement Specification)
CRS - (Client/Customer Requirement Specification)
URS - (User Requirement Specification)
BDD - (Business Design Document)
BD - (Business Document)

Note:
Some company’s may the over all information in one document called as ‘BRS’ and the detailed information in other document called ‘FRS’. But most of the company’s will maintain both of information in a single document.

Analysis Phase:

Task: Feasibility study, tentative planning, technology selection, requirement analysis.

Roles: System Analyst (SA), Project Manager (PM), Team Manager (TM)

Process:
(I) Feasibility study: It is a detailed study of the requirements in order to check whether all the requirements are possible or not.
(II) Tentative planning: The resource planning and the time planning are temporarily done in this section.
(III) Technology selection: The list of all the technologies that are to be used to accomplish the project successfully will be analyzed and listed out here in this section.
(IV) Requirement analysis: The list of all the requirements, like human resources, hardware, and software, required to accomplish this project successfully will be clearly analyzed and listed out here in this section.

Proof : The proof of this phase is the SRS (Software Requirement Specification).

Design phase:

Tasks: HLD (High Level Designing), LLD (Low Level Designing)

Roles: HLD is done by the CA (Chief Architect); LLD is done by the TL (Technical Lead).

Process:
The chief architect will divide the whole project into modules by drawing some diagrams, and the technical lead will divide each module into sub-modules by drawing some diagrams using UML (Unified Modeling Language).
The technical lead will also prepare the PSEUDO Code.

Proof: The proof of this phase is technical design document (TDD).

Pseudo Code: It is a set of English instructions used for guiding the developer to develop the actual code easily.

Module: Module is defined as a group of related functionalities to perform a major task.

Coding Phase:

Task : Programming / Coding.

Roles : Developers / Programmers.

Process:
Developers will develop the actual source code by using the pseudo code and following the coding standards, like proper indentation, color coding, proper commenting, etc.

Proof: The proof of this phase is the SCD (Source Code Document).

Testing Phase:

Task: Testing.

Roles: Test Engineer.

Process:

First of all, the test engineer will receive the requirement documents and review them to understand the requirements.

If at all they get any doubts while understanding the requirements, they will prepare a Review Report (RR) with the list of all the doubts.

Once the clarifications are given and after understanding the requirements clearly they will take the test case template and write the test cases.

Once the build is released they will execute the test cases.

After execution, if at all they find any defects, then they will list them out in a defect profile document.

Then they will send the defect profile to the developers and wait for the next build.

Once the next build is released, they will once again execute the test cases.

If they find any defects, they will follow the above procedure again and again till the product is defect-free.

Once they feel the product is defect-free, they will stop the process.

Proof : The proof of this phase is a quality product.

Test case : A test case is an idea of a test engineer, based on the requirements, to test a particular feature.

Delivery and Maintenance phase:

Delivery:

Task : Installing application in the client environment.

Roles: Senior Test Engineers / Deployment Engineer.

Process:

The senior test engineers or deployment engineers will go to the client's place and install the application into the client environment with the help of the guidelines provided in the deployment document.

Maintenance:
After the delivery, if at all any problem occurs, then that will become a task. Based on the problem, the corresponding role will be appointed, and that role will define the process and solve the problem.

Where exactly does testing come into the picture?
How many sorts of testing are there?


There are two sorts of testing.

1.Unconventional testing
2.Conventional testing

Unconventional Testing:
It is a sort of testing in which the quality assurance people will check each and every outcome document, right from the initial phase of the SDLC.

Conventional Testing:
It is a sort of testing in which the test engineer will test the application in the testing phase of SDLC.

Software Testing Concepts (Manual Testing)

Definitions:

Project : It is something developed based on a particular customer requirement and used by that particular customer only.

Product: A product is something that is developed based on the company's specifications and used by multiple customers.

Quality: Quality is defined as not only the justification of the requirements but also the presence of value (user-friendliness).

Defect: Defect is defined as deviation from the requirements.

Testing:
Testing is a process in which the defects are identified, isolated (separated), and subjected (sent) for rectification, and in which it is ensured that the product is defect-free, in order to produce a quality product in the end and hence customer satisfaction.
(Or)
Testing is the process of executing a program with the intent of finding errors.
(Or)
Verifying and validating the application with respect to customer requirements.
(Or)
Finding the differences between customer expected and actual values.
(Or)
Testing should also ensure that a quality product is delivered to the customer.

Process of Developing a Project in a Software Company.


BIDDING FOR THE PROJECT : Bidding for the project is defined as the request for proposal, estimation, and sign-off.

KICK-OFF MEETING:
It is an initial meeting conducted in the software company soon after the project is signed off, in order to discuss the overview of the project and to select a project manager for the project.

Usually High Level Manager, Project Manager, Technical Manager, Quality Managers, Test leads and Project leads will be involved in this meeting.

PIN (Project Initiation Note)
PIN is a mail prepared by the project manager and sent to the CEO of the software company in order to get the permission to start the project development.

Beginners Guide To Software Testing

Beginners Guide To Software Testing introduces a practical approach to testing software. It bridges the gap between theoretical knowledge and real-world implementation. This site helps you gain an insight into software testing: understand the technical aspects and the processes followed in a real working environment.

Who will benefit?

Beginners. For those of you who wish to mould your theoretical software engineering knowledge into a practical approach to working in the real world.


Those who wish to take up Software Testing as a profession.


Developers! This is an era where you need to be an "all-rounder". It is advantageous for developers to possess testing capabilities to test the application beforehand. This will help reduce the overhead on the testing team.


Already a Tester! You can refresh all your testing basics and techniques and gear up for Certifications in Software Testing.


An earnest suggestion: No matter which profession you choose, it is advisable that you possess the following skills:


  • Good communication skills – oratory and writing
  • Fluency in English
  • Good Typing skills