Monthly Archives: June 2009

Use Cases !!!

What is a Use Case?

A use case is a description of a system's behavior in response to a request from a stakeholder known as an actor. The actor can be a person or an external system that interacts with the system being described.

The actor initiates an action with the system with the purpose of accomplishing a goal. The system responds to the actor’s action in a way that fulfills the interests of all of its stakeholders. A use case summarizes a complete series of related scenarios that may unfold.

Use cases are, by definition, written in text using natural, non-technical language. However, they can also be expressed as use case diagrams in UML notation, or as a combination of text and diagrams.

Parts of a Use Case

There is no set template for use cases, but some core sections are considered the most useful. Use as many or as few of these sections as needed to successfully document your system's requirements. The core sections for use cases are:

– Use case ID
– Use case name
– Version
– Goal
– Summary / Description
– Primary Actor
– Secondary Actor
– Pre-conditions
– Trigger
– Basic course of events
– Exceptions
– Alternative paths
– Post-conditions
– Business rules
– Notes
– Author and date

Why Use Cases?

Use cases can be used to document any type of system. They can be used to document a software system or to document a company’s business processes.

Use cases are useful because they clarify, quickly and early, how the system will behave when users interact with it. Use cases are also easy for users to understand.

In addition, use cases are helpful for brainstorming conditions under which the system may fail and working out solutions to the problems that fulfill the stakeholders’ interests.

How to Write Use Cases

– Develop a list of usage goals from your stakeholders. This is your initial list of use cases.
– Develop a short paragraph describing each use case; this will be your summary or description.
– Develop the header section of the use case. The header for each use case should include:
– The Use Case ID Number
– The Use Case Name
– Pre-conditions
– Post-conditions
– Primary Actor
– Secondary Actor
– Trigger

Verify that your use case headers are correct, then iterate through them again to add more sections and more detail. Further detail sections are:

– Basic Course of Events – steps that the actor and system go through to accomplish a goal
– Exceptions – steps for handling errors and exceptions.
– Alternative Paths – steps for handling variations.

Check your use case for failure points and missing requirements.
As you write your use cases, make them easy to read. Use natural language. Keep your statements simple and concise.
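The core sections listed above can be captured as a simple data structure, which is a handy way to keep use cases consistent. A minimal sketch in Python (the field names and the sample "Withdraw Cash" use case are illustrative, not from any particular template):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class UseCase:
    # Header fields taken from the core-sections list above
    use_case_id: str
    name: str
    goal: str
    primary_actor: str
    preconditions: List[str] = field(default_factory=list)
    trigger: str = ""
    basic_course: List[str] = field(default_factory=list)   # actor/system steps
    exceptions: List[str] = field(default_factory=list)
    alternative_paths: List[str] = field(default_factory=list)
    postconditions: List[str] = field(default_factory=list)

# A classic example: withdrawing cash from an ATM
withdraw = UseCase(
    use_case_id="UC-01",
    name="Withdraw Cash",
    goal="Customer withdraws cash from an ATM",
    primary_actor="Bank Customer",
    preconditions=["Customer has a valid card and PIN"],
    trigger="Customer inserts card",
    basic_course=[
        "Customer inserts card and enters PIN",
        "System validates PIN",
        "Customer selects amount",
        "System dispenses cash and returns card",
    ],
    postconditions=["Account balance is reduced by the withdrawn amount"],
)
print(withdraw.use_case_id, "-", withdraw.name)
```

Keeping each step of the basic course as one short sentence mirrors the "simple and concise" advice above.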

What is a Cookie? What are the Advantages and Disadvantages of Cookies?

Cookies are messages that web servers pass to your web browser when you visit Internet sites. Your browser stores each message in a small file. When you request another page from the server, your browser sends the cookie back to the server. These files typically contain information about your visit to the web page, as well as any information you’ve volunteered, such as your name and interests.

Advantage:
You can get back to the session, page, or thread you were looking at quickly.

Disadvantage:
In automated or manual testing, if a new feature or a bug fix is deployed, you won't see the latest changes until you delete the cookies from the browser cache.
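The server-to-browser message described above is just a Set-Cookie header on the HTTP response; the browser echoes the value back on later requests. A minimal sketch using Python's standard-library http.cookies module (the session_id name and value are illustrative):

```python
from http.cookies import SimpleCookie

# Server side: build the Set-Cookie header sent with a response
cookie = SimpleCookie()
cookie["session_id"] = "abc123"
cookie["session_id"]["max-age"] = 3600   # cookie expires after one hour
header = cookie.output()                 # e.g. "Set-Cookie: session_id=abc123; Max-Age=3600"
print(header)

# Browser side: parse the stored cookie and read the value sent back
received = SimpleCookie()
received.load("session_id=abc123; Max-Age=3600")
print(received["session_id"].value)
```

Deleting the browser's cookie store simply removes these saved name/value pairs, which is why a fresh session then sees the newly deployed behavior.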

What is the difference between Load Testing and Performance Testing?

You can't really separate the two, because Load Testing is a part of Performance Testing.

Performance Testing:

Performance testing is done "to demonstrate that the system functions to specification, with acceptable response times, while processing the required transaction volume on a production-sized database". The objective, in other words, is performance enhancement.

It determines which modules execute most often or use the most computer time. These modules are then re-examined and recoded to run more quickly.
Performance can be tested using both black box and white box techniques, but white box testing yields a finer analysis.
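Finding which modules execute most often or use the most time is exactly what a profiler does. A minimal sketch with Python's built-in cProfile (slow_sum is an illustrative stand-in for a module under test):

```python
import cProfile
import io
import pstats

def slow_sum(n):
    # Deliberately naive loop that a profiler would flag as a hot spot
    total = 0
    for i in range(n):
        total += i
    return total

profiler = cProfile.Profile()
profiler.enable()
result = slow_sum(100_000)
profiler.disable()

# Sort by cumulative time and print the top five entries
stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream).sort_stats("cumulative")
stats.print_stats(5)
report = stream.getvalue()
print(report)
```

The report lists call counts and cumulative times per function, which is the raw data for deciding which modules to recode.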

Load Testing:

Load testing is a subset of performance testing. It studies the behavior of the program when it is working at its limits.

Performance Testing is a combination of three types of testing:

1. Load Testing
2. Stress Testing
3. Volume Testing

1) Load Testing:

In this we test the number of users on the application as per the client's requirements, applying them either by gradual increase or all at once. Load testing is mainly based on supplying data only for the mandatory (required) fields in an application.

2) Volume testing:

In this we test an application by providing more and more data for all the fields. The following example shows the main difference between load and volume testing:

         Fields   Mandatory fields   MF * People
Load     100      10                 10 * 25
Volume   100      100                100 * 25

3) Stress testing:

Finding the breaking point of an application is stress testing: the whole load is applied to the application at one time.

E.g.: the client mentions the number of users; let us assume 25.
In this context, load and stress are as follows.
Load: users hit the application one by one, up to 25 users.
Stress: all 25 users hit the application at the same time.
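The load-versus-stress distinction above (one-by-one ramp-up versus all users at once) can be sketched with threads; hit_application is an illustrative stand-in for a real HTTP request against the application under test:

```python
import threading
import time

results = []
lock = threading.Lock()

def hit_application(user_id):
    # Stand-in for a real request to the application under test
    time.sleep(0.01)
    with lock:
        results.append(user_id)

def load_test(users=25):
    # Load: users arrive one by one, up to the required number
    for user_id in range(users):
        hit_application(user_id)

def stress_test(users=25):
    # Stress: all users hit the application at the same time
    threads = [threading.Thread(target=hit_application, args=(u,))
               for u in range(users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

load_test()
sequential = len(results)
results.clear()
stress_test()
concurrent = len(results)
print(sequential, concurrent)
```

Real load tools (JMeter, LoadRunner, etc.) do the same thing at scale, adding timing measurements and result reporting on top.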

What are User Acceptance Test Cases for any Application?

Acceptance testing is done to determine whether or not to accept the application.

This can take the form of:
– Alpha Testing – where tests are conducted at the development site by the end users. The environment can be controlled somewhat in this case.
– Beta Testing – where tests are conducted at the customer site and the development team has no control over the test environment.

User Acceptance Test Cases are derived from the requirements of the application. They basically demonstrate that "the user got what he asked for".

Difference between QC & TestDirector

There are a lot of differences between the two:

QC is the next version of TD.

QC uses two repositories, whereas TD uses only one.
I don't remember them all; one simple difference is that QC has a dashboard, which TD does not.

We can integrate QTP scripts to be executed from QC, but not from TD.

Test Director

1. What is TestDirector used for?

TestDirector is a test management tool. The completely web-enabled TestDirector supports a high level of communication and association among various testing teams, driving a more effective and efficient global application-testing process. One can also create reports and graphs to help review the progress of planning tests, executing tests, and tracking defects before a software release.

2. Why are the requirements linked to the test cases?

TestDirector connects requirements directly to test cases, ensuring that all the requirements have been covered by the test cases.

3. What are the benefits and features of TestDirector?

TestDirector incorporates all aspects of the testing process, i.e. requirement management, test planning, test case management, scheduling, test execution and defect management, into a single browser-based application. It maps requirements directly to test cases, ensuring that all the requirements are covered by the test cases. It can import requirements and test plans from Excel sheets, accelerating the testing process. It executes both manual and automated tests.

4. What is the use of filters in TD?

Filters in TestDirector are mainly used to filter for the required results. They help to customize and categorize the results. For example, filters can be used to quickly view the passed and failed tests separately.

5. What is Test Lab?

In the Test Lab the test cases are executed. Test Lab will always be linked to the test plan. Usually both are given the same name for easy recognition.

6. How to customize the defect management cycle in Quality Center?

Firstly, one should collect all the attributes that have to be part of defect management, like version, defect origin, defect details, etc. Then, using the modify options in QC, one can change the defect module accordingly.

7. What is the advantage of writing test cases in Quality Center than writing in excel sheet?

Although creating test cases in an Excel sheet is faster than doing it in QC (Excel is more user friendly by comparison), one is then required to upload them to QC, and this process may cause some delay for various reasons. Also, QC provides links to other tests, which in turn are mapped to the requirements.

8. What is the difference between TestDirector and Quality Center?

The main difference is that QC is more secure than TestDirector. In Quality Center the login page shows only the projects associated with the logged-in user, unlike TestDirector, where one can see all the available projects.

Also, test management is much improved in QC over TD, and the defect-linkage functionality in QC is more flexible.

9. What is meant by Instance?

A test instance is an instance of a test case in the Test Lab. Basically, it is the test case which you have imported into the Test Lab for execution.

10. What is the use of requirement option in TestDirector?

The Requirements module in TD is used for writing the requirements and preparing the traceability matrix.

11. Is it possible to maintain test data in TestDirector?

Yes, one can attach the test data to the corresponding test cases, or create a separate folder in the test plan to store it.

12. If one tries to upgrade from TestDirector 7.2 to QC 8.0 then is there risk of losing any data?

No, there is no risk of losing data during the migration process, as long as one follows the proper steps for a successful migration.

13. How is a bug closed in TestDirector?

Once the test cases are executed in the Test Lab and bugs are detected, each bug is logged as a defect using the Defect Report tab and sent to the developer. A bug can have five different statuses, namely New, Open, Rejected, Deferred and Closed. Once the bug has been fixed and verified, its status is changed to Closed. This way the bug life cycle ends.

14. In TD how are the test cases divided into different groups?

In the test plan of TestDirector one can create separate folders for the various modules, depending on the project. A main module can be created in the test plan and sub-modules added under it.

15. What is the difference between TD and Bugzilla?

TestDirector is a test management tool. In TD one can write manual and automated test cases, add requirements, map requirements to test cases and log defects. Bugzilla is used only for logging and tracking defects.

16. Are TestDirector and QC one and the same?

Yes, TestDirector and Quality Center are the same. From version 8.2 onwards, TD was known as Quality Center. The latest version of Quality Center is 9.2. QC is much more advanced when compared to TD.

17. What is the instance of the test case inside the Test Set?

A test set is a place containing sets of test cases; we can store many test cases inside a test set. An instance of a test case is the test case which you have imported into the test tab. If another test case shares the same steps as this test case part of the way through, you can create an instance of this test case.

18. What are the various types of reports in TestDirector?

In TD, reports are available for requirements, test cases, test execution, defects, etc. The reports give various details like summary, progress, coverage, etc. Reports can be generated from each TestDirector module using the default settings, or they can be customized. When customizing a report, filters and sort conditions can be applied, and the required layout of the fields in the report can be specified. Sub-reports can also be added to the main report. The settings of a report can be saved as a favorite view and reloaded as required.

19. How can one map a single defect to more than one test script?

Using the ‘associate defect’ option in TestDirector one can map the same defect to a number of test cases.

20. Is it possible to create custom defect template in TestDirector?

It is not possible to create one's own template for defect reporting in TestDirector, but one can customize the template that is already available in TestDirector as required.

21. Can a script in TD be created before recording script in Winrunner or QTP?

Any automation script can be created directly in TD. You need to open the tool (WinRunner or QTP) and then connect to TD by specifying the URL, project, domain, user ID and password. Then you record the script as you always do. When you save the script, you can save it in TD instead of on your local system.

22. How to ensure that there is no duplication of bugs in TestDirector?

In the defect tracking window of TD there is a "find similar defect" icon. When this icon is clicked after writing a defect, it points out any matching defect that has already been entered by someone else.

23. How is the Defect ID generated in TestDirector?

The Defect ID is automatically generated once the defect is submitted in TD.

24. What does the test grid contain?

The test grid displays all the relevant tests related to a project in TD. It contains some key elements: the test grid toolbar, with buttons for commands commonly used when creating and modifying tests; the grid filter, which displays the filter currently applied to a column; the Description tab, which displays a description of the test selected in the grid; and the History tab, which displays the changes made to a test.

25. What are the 3 views in TD?

The three views in TD are Plan Test, used to prepare a set of test cases as per the requirements; Run Test, used for executing the prepared test scripts with respect to the test cases; and Track Defects, used by the test engineers for logging defects.

26. How to upload data from an excel sheet to TestDirector?

In order to upload data from an Excel sheet to TD, the Excel add-in must first be installed; then the rows in the Excel sheet to be imported into TD should be selected, and finally the Export to TD option in Excel's Tools menu should be chosen.

27. How many types of tabs are available in TestDirector?

There are 4 tabs available in TestDirector: Requirements, Test Plan, Test Lab and Defects. It is possible to customize the names of these tabs as desired.

28. Is ‘Not covered’ and ‘Not run’ status the same?

No. 'Not Covered' status refers to requirements for which test cases have not been written, whereas 'Not Run' status refers to requirements for which test cases have been written but not yet run.

29. How does TestDirector store data?

In TD data is stored on the server.

30. Why should we create an Instance?

A test instance is used to run the test case in the Test Lab. It is the test instance that you run, since you can't run the test case itself inside a test set.

Quality Centre

1. What is meant by test lab in Quality Centre?
Test lab is the part of Quality Centre where we execute our tests over different cycles, creating a test tree for each one. We add tests to these test trees from the tests placed under the test plan in the project. Internally, Quality Centre refers back to those tests when running them in the test lab.

2. Can you map the defects directly to the requirements (not through the test cases) in Quality Centre?
The following method is most likely to be used in this case:
Create your requirements structure.
Create the test case structure and the test cases.
Map the test cases to the application requirements.
Run and report bugs from your test cases in the test lab module.

The database structure in Quality Centre maps test cases to defects only if you have created the bug from a test case run. We may be able to update the mapping using some code in the bug script module (from the customize project function), but as far as I know it is not possible to map defects directly to requirements.

3. How do you run reports from Quality Centre?
This is how you do it:
1. Open the Quality Centre project.
2. Display the requirements module.
3. Choose the report:
Analysis > Reports > Standard Requirements Report

4. Can we upload test cases from an excel sheet into Quality Centre?
Yes. Go to the Add-In menu in Quality Centre, find the Excel add-in, and install it on your machine.
Now open Excel; you will find the new menu option Export to Quality Centre. The rest of the procedure is self-explanatory.

5. Can we export files from Quality Centre to an Excel sheet? If yes, then how?
Requirements tab: right-click on the main requirement, click Export, and save as Word, Excel or another template. This saves all the child requirements as well.

Test Plan tab: only individual tests can be exported; no parent-child export is possible. Select a test script, click on the Design Steps tab, right-click anywhere on the open window, click Export and save.

Test Lab tab: select a child group. Click on the execution grid if it is not selected. Right-click anywhere. The default save option is Excel, but it can also be saved in document and other formats. Choose the All or Selected option.

Defects tab: right-click anywhere on the window, export all or selected defects, and save as an Excel sheet or document.

6. How many types of tabs are there in Quality Centre? Explain.
There are four tabs available:

1. Requirements: to track the customer requirements.
2. Test Plan: to design the test cases and to store the test scripts.
3. Test Lab: to execute the test cases and track the results.
4. Defects: to log a defect and to track the logged defects.

7. How do you map the requirements to test cases in Quality Centre?
1. In the Requirements tab, select Coverage View.
2. Select a requirement by clicking on a parent, child or grandchild.
3. On the right-hand side (in the Coverage View window) another window will appear. It has two tabs:
a) Tests Coverage
b) Details
The Tests Coverage tab is selected by default; otherwise, click on it.
4. Click on the Select Tests button; a new window will appear on the right-hand side showing a list of all tests. You can select any test case you want to map to your requirement.

8. How do you use Quality Centre in a real-time project?
Once the preparation of test cases is complete:
1. Export the test cases into Quality Centre (this involves a total of 8 steps).
2. The test cases are loaded into the Test Plan module.
3. Once execution starts, we move the test cases from the Test Plan tab to the Test Lab module.
4. In the Test Lab we execute the test cases and mark each as pass, fail or incomplete. We generate graphs in the Test Lab for the daily report and send them onsite (or wherever they need to be delivered).
5. If we find any defects, we raise them in the Defects module, attaching a screenshot to each defect.

9. What is the difference between WebInspect and QAInspect?
QAInspect finds and prioritizes security vulnerabilities in an entire web application, or in specific usage scenarios during testing, and presents detailed information and remediation advice about each vulnerability.
WebInspect ensures the security of your most critical information by identifying known and unknown vulnerabilities within the web application. With WebInspect, auditors, compliance officers and security experts can perform security assessments on a web-enabled application. WebInspect enables users to perform security assessments for any web application or web service, including the industry-leading application platforms.

10. How can we add requirements to test cases in Quality Centre?
Just use the Add Requirements option.
Two kinds of requirements are available in TD:
1. Parent requirements
2. Child requirements

A parent requirement is nothing but the title of a requirement; it covers the high-level functions of the requirements.

Testing Software Licenses

What is Software License?

A software license is a file that verifies the user's permission to use a software program. Usually, a license is used to activate the software for the user and is valid for a period of time, after which the user has to renew it by connecting to the internet.

What is License Management?

License Management feature covers the acquisition, storage, usage (binding), backup/restore and destruction (deletion) of licenses.

What needs to be tested?

· 1 X 1 Licenses: These licenses should be usable on only one machine by one user. If we try to use the same license on another machine or for another user, the product should give an appropriate error message.

· n X n Licenses: These are licenses for multiple machines and multiple users. For example, a 1X2 license should allow activating the product on a single machine for two different users, and a 2X1 license should allow activating the product on 2 machines for the same user.

· License specific to a User: Here we need to make sure that a license file created for one user while activating the product cannot be used by another user, on the same machine or a different one, by copying it to the other machine or user's folder.

· License specific to a Machine: Here we need to make sure that a license file created for one machine while activating the product cannot be used on another machine, whether for the same user or a different one.

· Checking validity of the license every time the product is used (Online/Offline): When the user logs in to the product, the product should check the validity of the license. The product may be used while online to the license server or while offline, so this validity check should be done in both cases.

· Checking license expiration: We need to make sure that the license expires after the period specified by the license provider.

· Renewal of license files from the server: In some products, the license may be renewed at frequent intervals while the user is online. The functionality of the product should remain intact after license renewal.

· Checking the license validation process: We need to know how the license is validated while the product is in use. Many products validate the license only when the user logs in to the application; if the user doesn't log out, he or she may continue using the product after license expiration. We may need to control this process. In our project we used tickers to check the license at frequent intervals even when the user was not logged out.

· Checking the license after changing the system time (forward/backward): Some license servers detect changes to the system time settings. We need to check the product's functionality when the system time is changed, e.g. rolled forward or backward.

· Checking the license while installing after changing the system time: We also need to check installation of the product after the system time has been changed. If the product is installed at a different system time, it should not allow the product to be used.

· Checking time zone dependency of the license file: Licenses created for the product should be usable in any time zone. The license validity or expiration time should not change even if the time zone setting on the machine is changed.
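Several of the checks above (expiration, clock rollback, time zone independence) come down to comparing timestamps in UTC. A minimal sketch of the idea (the License class is illustrative; real products use signed license files and server-side checks):

```python
from datetime import datetime, timedelta, timezone

class License:
    """Illustrative license record, not a real licensing API."""
    def __init__(self, issued_at, valid_days):
        self.issued_at = issued_at
        self.expires_at = issued_at + timedelta(days=valid_days)

    def is_valid(self, now=None):
        # Compare in UTC so changing the machine's time zone has no effect;
        # a clock rolled back before the issue date is also rejected.
        now = now or datetime.now(timezone.utc)
        return self.issued_at <= now < self.expires_at

issued = datetime(2009, 6, 1, tzinfo=timezone.utc)
lic = License(issued, valid_days=30)

within = lic.is_valid(now=issued + timedelta(days=10))   # within the period
expired = lic.is_valid(now=issued + timedelta(days=40))  # past expiration
rolled_back = lic.is_valid(now=issued - timedelta(days=1))  # clock rolled backward
print(within, expired, rolled_back)
```

Test cases for the expiration and system-time checks above would assert exactly these three outcomes at the period boundaries.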

V(for Vidya) Model :)

I have tried to compile the testing stages involved in the V-model. These are basically my notes from the book Foundations of Software Testing by Rex Black.

Four levels of testing are involved, namely:
1) Component Testing
2) Integration Testing
3) System Testing
4) Acceptance Testing

COMPONENT TESTING(CT)

– CT is also known as unit, module and program testing. It searches for defects in, and verifies the functionality of, software (e.g. modules, programs, objects) that is separately testable.

– May be done in isolation from the rest of the system.

– Stubs and drivers are used to replace missing software and simulate the interface between software components.

– A stub is called from the software component to be tested.

– A driver calls a component to be tested.

– CT includes testing of functionality and of specific non-functional characteristics, such as resource behavior (e.g. memory leaks), performance or robustness, as well as structural testing.

– Test cases are derived from work products such as the software design or data model.

– A module can be tested by a different programmer from the one who wrote the code.

– One approach used in Extreme Programming (XP) is to prepare and automate test cases before coding. This is called the test-first approach or test-driven development. It is highly iterative and based on cycles of developing test cases.
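The stub/driver distinction above can be shown in miniature: the stub is called *from* the component under test, while the driver *calls* the component. A sketch in Python (place_order, GatewayStub and the gateway interface are illustrative names, not from the book):

```python
# Component under test: depends on a payment gateway that may not exist yet
def place_order(amount, gateway):
    if amount <= 0:
        raise ValueError("amount must be positive")
    if gateway.charge(amount):     # call into the (possibly stubbed) component
        return "confirmed"
    return "declined"

# Stub: stands in for the missing gateway; it is CALLED FROM the component
class GatewayStub:
    def __init__(self, succeed=True):
        self.succeed = succeed
    def charge(self, amount):
        return self.succeed

# Driver: CALLS the component under test, supplying the stub
def run_component_tests():
    assert place_order(10, GatewayStub(succeed=True)) == "confirmed"
    assert place_order(10, GatewayStub(succeed=False)) == "declined"
    return "ok"

print(run_component_tests())
```

In the test-first approach mentioned above, the driver's assertions would be written before place_order itself is implemented.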

INTEGRATION TESTING(IT)

– Tests the interfaces between components, interactions with different parts of the system (such as the operating system and hardware), and interfaces between systems.

– Carried out by an integrator.

– Can be carried out after component testing (known as component integration testing) or after system testing (known as system integration testing).

– The greater the scope of integration, the more difficult it is to isolate failures to a specific interface, which increases risk.

– Big-bang approach: all the components or systems are integrated simultaneously, after which everything is tested as a whole.
Advantage: everything is finished before integration testing starts.
Disadvantage: time-consuming, and it is difficult to trace the cause of failures with such late integration.

– Incremental approach: all programs are integrated one by one, and a test is carried out after each step.
Advantage: defects are found early.
Disadvantage: time-consuming, since stubs and drivers have to be used in the test.

Types of Incremental Testing

1) Top-down: testing follows the architectural structure. Components or systems are substituted by stubs.

2) Bottom-up: control flows upwards. Components or systems are substituted by drivers.

3) Functional incremental: integration and testing take place on the basis of the functions or functionality, as documented in the functional specification.

– Incremental testing is preferred over Big Bang.

– Testing of specific non-functional characteristics (e.g. performance) may also be included in integration testing.

– May be done by developers or by an independent testing team.

SYSTEM TESTING
– Concerned with the behavior of the whole system/product, as defined by the scope of a development project or product.

– Includes test cases based on risks, business requirements, business processes, use cases and interaction with the operating system.

– It is most often the final test on behalf of development, to verify that the system to be delivered meets the specification; its purpose is to find as many defects as possible.

– Carried out by an independent testing team or by specialist testers.

– Should investigate both the functional and the non-functional requirements of the system.

– Non-functional requirements include performance and reliability. System testing often has to deal with incomplete or undocumented requirements.

– System testing of functional requirements is done by black box testing.

– Requires a controlled test environment with regard to control of software versions, testware and test data.

ACCEPTANCE TESTING(AT)

– After system testing, when almost all defects have been corrected, the system is delivered to the user or customer.

– Testing is the responsibility of the user or customer; other stakeholders may get involved.

– The goal is to establish confidence in the system, a part of the system, or specific non-functional characteristics, e.g. usability of the system.

– Focused on validation-type testing.

– A COTS (Commercial Off The Shelf) software product may be acceptance tested when it is installed or integrated.

– Acceptance testing of the usability of a component may be done during component testing.

– Acceptance testing of a new functional enhancement may come before system testing.

– A user acceptance test focuses mainly on functionality, validating the fitness-for-use of the system by business users.

– An operational acceptance test validates whether the system meets the requirements for operation; the system administrator performs it.

– Two types of AT:
1) Contract AT
2) Compliance AT

1) Contract AT
– Performed against a contract's acceptance criteria.
– The acceptance criteria are defined when the contract is agreed.

2) Compliance AT
– Performed against the regulations which must be adhered to, such as governmental, legal or safety regulations.
– Also known as regulation AT.

For a COTS mass-market product there are two types of testing:

1) Alpha Testing
– Takes place at the developer's site.
– Developers observe the users and note problems.

2) Beta Testing or Field Testing
– The system is sent to users who install it and use it under real-world conditions.
– Users send in observation notes.

Cost Of Quality (COQ) !!!

Cost Of Quality (COQ):

The “cost of quality” isn’t the price of creating a quality product or service. It’s the cost of NOT creating a quality product or service.

Every time work is redone, the cost of quality increases. Obvious examples include:

* The reworking of a manufactured item.
* The retesting of an assembly.
* The rebuilding of a tool.
* The correction of a bank statement.
* The reworking of a service, such as the reprocessing of a loan operation or the replacement of a food order in a restaurant.

In short, any cost that would not have been expended if quality were perfect contributes to the cost of quality.

Total Quality Costs

Quality costs are the total of the cost incurred by:

* Investing in the prevention of nonconformance to requirements.
* Appraising a product or service for conformance to requirements.
* Failing to meet requirements.

Quality Costs—general description

Prevention Costs:

The costs of all activities specifically designed to prevent poor quality in products or services.

Examples are the costs of:

· New product review

· Quality planning

· Supplier capability surveys

· Process capability evaluations

· Quality improvement team meetings

· Quality improvement projects

· Quality education and training

Appraisal Costs:

The costs associated with measuring, evaluating or auditing products or services to assure conformance to quality standards and performance requirements. These include the costs of:

* Incoming and source inspection/test of purchased material
* In-process and final inspection/test
* Product, process or service audits
* Calibration of measuring and test equipment
* Associated supplies and materials

Failure Costs:

The costs resulting from products or services not conforming to requirements or customer/user needs. Failure costs are divided into internal and external failure categories.

Internal Failure Costs

Failure costs occurring prior to delivery or shipment of the product, or the furnishing of a service, to the customer.

Examples are the costs of:

* Scrap
* Rework
* Re-inspection
* Re-testing
* Material review
* Downgrading

External Failure Costs

Failure costs occurring after delivery or shipment of the product and during or after furnishing of a service to the customer.

Examples are the costs of:

* Processing customer complaints
* Customer returns
* Warranty claims
* Product recalls

Total Quality Costs: the sum of the above costs. This represents the difference between the actual cost of a product or service and what the reduced cost would be if there were no possibility of substandard service, failure of products or defects in their manufacture.

Bug Life Cycles..!!!

What is a Bug Life Cycle?
The duration or time span between the first time a bug is found (status: 'New') and when it is closed successfully (status: 'Closed'), rejected, postponed or deferred is called the 'Bug/Error Life Cycle'.

(Right from the first time any bug is detected till the point when the bug is fixed and closed, it is assigned various statuses which are New, Open, Postpone, Pending Retest, Retest, Pending Reject, Reject, Deferred, and Closed. For more information about various statuses used for a bug during a bug life cycle, you can refer to article ‘Software Testing – Bug & Statuses Used During A Bug Life Cycle’)

There are seven different life cycles that a bug can pass through:

Cycle I:
1) A tester finds a bug and reports it to the Test Lead.
2) The Test Lead verifies whether the bug is valid.
3) The Test Lead finds that the bug is not valid, and the bug is ‘Rejected’.

Cycle II:
1) A tester finds a bug and reports it to the Test Lead.
2) The Test Lead verifies whether the bug is valid.
3) The bug is verified and reported to the development team with the status ‘New’.
4) The development leader and team verify whether it is a valid bug. The bug is invalid and is marked ‘Pending Reject’ before being passed back to the testing team.
5) After getting a satisfactory reply from the development side, the test leader marks the bug as ‘Rejected’.

Cycle III:
1) A tester finds a bug and reports it to the Test Lead.
2) The Test Lead verifies whether the bug is valid.
3) The bug is verified and reported to the development team with the status ‘New’.
4) The development leader and team verify whether it is a valid bug. The bug is valid, and the development leader assigns a developer to it, marking the status as ‘Assigned’.
5) The developer fixes the problem, marks the bug as ‘Fixed’, and passes it back to the development leader.
6) The development leader changes the status of the bug to ‘Pending Retest’ and passes it on to the testing team for retest.
7) The test leader changes the status of the bug to ‘Retest’ and passes it to a tester for retest.
8) The tester retests the bug and finds it working fine, so the tester closes the bug, marking it ‘Closed’.

Cycle IV:
1) A tester finds a bug and reports it to the Test Lead.
2) The Test Lead verifies whether the bug is valid.
3) The bug is verified and reported to the development team with the status ‘New’.
4) The development leader and team verify whether it is a valid bug. The bug is valid, and the development leader assigns a developer to it, marking the status as ‘Assigned’.
5) The developer fixes the problem, marks the bug as ‘Fixed’, and passes it back to the development leader.
6) The development leader changes the status of the bug to ‘Pending Retest’ and passes it on to the testing team for retest.
7) The test leader changes the status of the bug to ‘Retest’ and passes it to a tester for retest.
8) The tester retests the bug and the same problem persists, so after confirmation from the test leader the tester reopens the bug, marking it ‘Reopen’. The bug is then passed back to the development team for fixing.

Cycle V:
1) A tester finds a bug and reports it to the Test Lead.
2) The Test Lead verifies whether the bug is valid.
3) The bug is verified and reported to the development team with the status ‘New’.
4) The developer tries to verify the bug but fails to replicate the scenario observed at the time of testing, and asks the testing team for help.
5) The tester also fails to reproduce the scenario in which the bug was found, and the developer rejects the bug, marking it ‘Rejected’.

Cycle VI:
1) After confirmation that the data or certain functionality is unavailable, fixing and retesting the bug is postponed indefinitely, and the bug is marked ‘Postponed’.

Cycle VII:
1) If the bug is of low importance and can be postponed to a later point, it is given the status ‘Deferred’.

This way, any bug that is found ends up with a status of Closed, Rejected, Deferred or Postponed.
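The seven cycles above can be read as paths through a small state machine over the bug statuses. The sketch below encodes one illustrative reading of the transitions described in this article; the transition table is an interpretation for demonstration, not a standard, and real trackers differ in their status names.

```python
# Bug statuses and the transitions implied by Cycles I-VII above.
# This transition table is an illustrative reading of the article, not a standard.
TRANSITIONS = {
    "New": {"Rejected", "Pending Reject", "Assigned", "Postponed", "Deferred"},
    "Pending Reject": {"Rejected"},
    "Assigned": {"Fixed"},
    "Fixed": {"Pending Retest"},
    "Pending Retest": {"Retest"},
    "Retest": {"Closed", "Reopen"},
    "Reopen": {"Assigned"},
}
# The four ways a bug life cycle can end.
TERMINAL = {"Closed", "Rejected", "Postponed", "Deferred"}

def advance(status, new_status):
    """Move a bug to new_status, enforcing the allowed transitions."""
    if new_status not in TRANSITIONS.get(status, set()):
        raise ValueError(f"illegal transition: {status} -> {new_status}")
    return new_status

# Cycle III walked through the machine:
# New -> Assigned -> Fixed -> Pending Retest -> Retest -> Closed
status = "New"
for nxt in ["Assigned", "Fixed", "Pending Retest", "Retest", "Closed"]:
    status = advance(status, nxt)
print(status)               # -> Closed
print(status in TERMINAL)   # -> True
```

Encoding the transitions explicitly makes the rule at the end of this section checkable: every path through the table ends in one of the four terminal statuses.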

What is Heuristic or Intuitive Testing?

In the Heuristic testing (also known as Intuitive Testing) approach, the tester separately reviews a program, categorizing and justifying problems based on a short set of heuristics.

During heuristic testing the tester goes through the program numerous times carrying out a variety of tasks and inspecting how the program scores against a list of identified heuristics.

The defects in software can, in general, be classified as Omissions, Surprises and Wrong Implementations. Omissions are requirements that are missing from the implementation, Surprises are implemented behaviors that are not found in the requirements, and a Wrong Implementation is an incorrect implementation of a requirement. Heuristic or intuitive testing techniques help catch all three types of defects.
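Assuming requirements and implemented behaviors can each be tagged with identifiers, the omission/surprise split above falls out of a simple set comparison. The feature names below are hypothetical, invented for illustration.

```python
# Hypothetical feature identifiers: what the spec requires
# versus what the delivered build actually implements.
required = {"login", "logout", "reset-password", "export-report"}
implemented = {"login", "logout", "export-report", "admin-backdoor"}

omissions = required - implemented   # required but never implemented
surprises = implemented - required   # implemented but never required

print(sorted(omissions))   # behaviors missing from the build
print(sorted(surprises))   # undocumented behaviors to investigate
```

Note that wrong implementations cannot be found this way: a feature can appear in both sets yet still behave incorrectly, which is exactly where exploratory, heuristic-driven testing of the running program earns its keep.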