V (for Vidya) Model :)

I have tried to compile the testing stages involved in the V-model. These are basically my notes from the book Foundations of Software Testing by Rex Black.

Four types of testing, namely:
1) Component Testing
2) Integration Testing
3) System Testing
4) Acceptance Testing

COMPONENT TESTING(CT)

– CT is also known as unit, module, or program testing. It searches for defects in, and verifies the functionality of, software items (e.g. modules, programs, objects) that are separately testable.

– May be done in isolation from the rest of the system.

– Stubs and drivers are used to replace missing software and simulate the interface between software components.

– A stub is called from the software component to be tested.

– A driver calls a component to be tested.
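
To make the stub/driver distinction concrete, here is a minimal Python sketch (the currency-conversion component and all names are hypothetical, not from the book): the stub replaces a service that the component under test calls, while the driver calls the component itself.

```python
# Component under test: converts an amount using a rate supplied by a
# rate service that has not been built yet.
def convert(amount, rate_service):
    return amount * rate_service.get_rate("USD", "EUR")

# Stub: stands in for the missing rate service. It is CALLED FROM the
# component under test and returns a canned answer.
class RateServiceStub:
    def get_rate(self, src, dst):
        return 0.5  # fixed value, just enough to exercise convert()

# Driver: CALLS the component under test and checks the result.
def driver():
    result = convert(100, RateServiceStub())
    assert result == 50, f"expected 50, got {result}"
    print("component test passed")

if __name__ == "__main__":
    driver()
```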

– CT includes testing of functionality and of specific non-functional characteristics such as resource behavior (e.g. memory leaks), performance, or robustness, as well as structural testing.

– Test cases are derived from work products such as the software design or the data model.

– A module can be tested by a different programmer from the one who wrote the code.

– One approach used in CT, notably in Extreme Programming (XP), is to prepare and automate test cases before coding. This is called the test-first approach or test-driven development. It is highly iterative and based on cycles of developing test cases, then building and integrating small pieces of code.
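
A minimal sketch of that test-first cycle, assuming a hypothetical is_leap_year function: the test is written and automated first, it fails, and then just enough code is written to make it pass.

```python
import unittest

# Step 1: write and automate the test cases BEFORE the code exists.
class TestLeapYear(unittest.TestCase):
    def test_ordinary_leap_year(self):
        self.assertTrue(is_leap_year(2024))

    def test_century_is_not_leap(self):
        self.assertFalse(is_leap_year(1900))

    def test_every_400_years_is_leap(self):
        self.assertTrue(is_leap_year(2000))

# Step 2: run the tests (they fail with a NameError), then write just
# enough code to make them pass, and repeat the cycle.
def is_leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

if __name__ == "__main__":
    unittest.main()
```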

INTEGRATION TESTING(IT)

– Tests interfaces between components, interactions with different parts of a system (such as the operating system or hardware), and interfaces between systems.

– Carried out by an integrator.

– Can be carried out after component testing (known as component integration testing) or after system testing (known as system integration testing).

– The greater the scope of integration, the more difficult it becomes to isolate failures to a specific interface, which increases risk.

– Big-bang approach: all the components or systems are integrated simultaneously, after which everything is tested as a whole.
Advantage: everything is finished before integration testing starts.
Disadvantages: time-consuming, and it is difficult to trace the cause of failures with such late integration.

– Incremental testing approach: all programs are integrated one by one, and a test is carried out after each step.
Advantage: defects are found early.
Disadvantage: time-consuming, since stubs and drivers have to be developed and used in the test.

Types of Incremental Testing

1) Top-down: testing follows the architectural structure from the top downwards. Missing components or systems are substituted by stubs.

2) Bottom-up: testing starts at the bottom and the control flow moves upwards. Missing components or systems are substituted by drivers.

3) Functional incremental: integration and testing take place on the basis of functions or functionality, as documented in the functional specification.

– The incremental approach is preferred over big bang.

– Testing of specific non-functional characteristics(e.g. performance) may also be included in integration testing.

– May be done by developers or an independent testing team.
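
As a small illustration of component integration testing (the two components are hypothetical), the test below exercises the interface between them, i.e. that one component can consume exactly what the other produces:

```python
import unittest

# Component A (already component-tested): parses a raw order line.
def parse_order(raw):
    item, qty = raw.split(",")
    return {"item": item.strip(), "qty": int(qty)}

# Component B (already component-tested): prices a parsed order.
def price_order(order, unit_price):
    return order["qty"] * unit_price

# Integration test: targets the interface between A and B rather than
# either component's internals.
class TestOrderIntegration(unittest.TestCase):
    def test_parse_then_price(self):
        order = parse_order("widget, 3")
        self.assertEqual(price_order(order, unit_price=2.0), 6.0)

if __name__ == "__main__":
    unittest.main()
```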

SYSTEM TESTING

– Concerned with the behavior of the whole system/product as defined by the scope of a development project or product.

– Includes test cases based on risks, business requirements, business processes, use cases, and interaction with the operating system.

– It is most often the final test carried out on behalf of development, to verify that the system to be delivered meets the specification; its purpose is to find as many defects as possible.

– Carried out by an independent testing team or by specialist testers.

– Should investigate both functional and non-functional requirements of the system.

– Non-functional requirements include performance and reliability. System testing often has to deal with incomplete or undocumented requirements.

– System testing of functional requirements is done using black-box testing techniques.

– Requires a controlled test environment with regard to control of software versions, testware, and test data.

ACCEPTANCE TESTING(AT)

– After system testing, once almost all defects have been corrected, the system is delivered to the user or customer.

– Testing is the responsibility of the user or customer, though other stakeholders may get involved as well.

– The goal is to establish confidence in the system, a part of the system, or specific non-functional characteristics, e.g. the usability of the system.

– Focused on validation-type testing.

– A COTS (Commercial Off-The-Shelf) software product may be acceptance tested when it is installed or integrated.

– Acceptance testing of the usability of a component may be done during component testing.

– Acceptance testing of a new functional enhancement may come before system testing.

– User acceptance testing focuses mainly on functionality, thereby validating the system's fitness for use by the business user.

– Operational acceptance testing validates whether the system meets the requirements for operation; it is performed by the system administrator.

– Two types of AT:
1) Contract AT
2) Compliance AT

1) Contract AT
– Performed against a contract's acceptance criteria.
– The acceptance criteria are defined when the contract is agreed.

2) Compliance AT
– Performed against any regulations that must be adhered to, such as governmental, legal, or safety regulations.
– Also known as Regulation AT.

For a COTS (mass-market) product there are two further types of acceptance testing:

1) Alpha Testing
– Takes place at the developer's site.
– Developers observe the users and note problems.

2) Beta Testing or Field Testing
– The system is sent to users, who install it and use it under real-world conditions.
– Users send back their observations and notes.

Cost Of Quality (COQ) !!!

The “cost of quality” isn’t the price of creating a quality product or service. It’s the cost of NOT creating a quality product or service.

Every time work is redone, the cost of quality increases. Obvious examples include:

* The reworking of a manufactured item.
* The retesting of an assembly.
* The rebuilding of a tool.
* The correction of a bank statement.
* The reworking of a service, such as the reprocessing of a loan operation or the replacement of a food order in a restaurant.

In short, any cost that would not have been expended if quality were perfect contributes to the cost of quality.

Total Quality Costs

Quality costs are the total of the costs incurred by:

* Investing in the prevention of nonconformance to requirements.
* Appraising a product or service for conformance to requirements.
* Failing to meet requirements.

Quality Costs—general description

Prevention Costs:

The costs of all activities specifically designed to prevent poor quality in products or services.

Examples are the costs of:

* New product review
* Quality planning
* Supplier capability surveys
* Process capability evaluations
* Quality improvement team meetings
* Quality improvement projects
* Quality education and training

Appraisal Costs:

The costs associated with measuring, evaluating or auditing products or services to assure conformance to quality standards and performance requirements. These include the costs of:

* Incoming and source inspection/test of purchased material
* In-process and final inspection/test
* Product, process or service audits
* Calibration of measuring and test equipment
* Associated supplies and materials

Failure Costs:

The costs resulting from products or services not conforming to requirements or customer/user needs. Failure costs are divided into internal and external failure categories.

Internal Failure Costs

Failure costs occurring prior to delivery or shipment of the product, or the furnishing of a service, to the customer.

Examples are the costs of:

* Scrap
* Rework
* Re-inspection
* Re-testing
* Material review
* Downgrading

External Failure Costs

Failure costs occurring after delivery or shipment of the product and during or after furnishing of a service to the customer.

Examples are the costs of:

* Processing customer complaints
* Customer returns
* Warranty claims
* Product recalls

Total Quality Costs: The sum of the above costs. This represents the difference between the actual cost of a product or service and what the reduced cost would be if there were no possibility of substandard service, failure of products, or defects in their manufacture.
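
As a worked illustration with made-up numbers, the total is simply the sum of the four categories:

```python
# Hypothetical monthly figures, purely for illustration.
prevention       = 10_000   # training, quality planning, reviews
appraisal        = 15_000   # inspection, testing, calibration
internal_failure = 25_000   # scrap, rework, re-testing
external_failure = 40_000   # complaints, returns, warranty, recalls

total_quality_cost = prevention + appraisal + internal_failure + external_failure
print(f"Total cost of quality: {total_quality_cost}")  # 90000
```

The usual argument is that spending more on prevention and appraisal reduces the (typically much larger) failure costs.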

Bug Life Cycles..!!!

What is a Bug Life Cycle?
The duration or time span between the first time a bug is found (status: ‘New’) and the time it is closed successfully (status: ‘Closed’), rejected, postponed, or deferred is called the ‘Bug/Error Life Cycle’.

(Right from the first time any bug is detected till the point when it is fixed and closed, it is assigned various statuses: New, Open, Postpone, Pending Retest, Retest, Pending Reject, Reject, Deferred, and Closed. For more information about the statuses used during a bug life cycle, refer to the article ‘Software Testing – Bug & Statuses Used During A Bug Life Cycle’.)

There are seven different life cycles that a bug can pass through:

Cycle I:
1) A tester finds a bug and reports it to Test Lead.
2) The Test lead verifies if the bug is valid or not.
3) Test lead finds that the bug is not valid and the bug is ‘Rejected’.

Cycle II:
1) A tester finds a bug and reports it to Test Lead.
2) The Test lead verifies if the bug is valid or not.
3) The bug is verified and reported to development team with status as ‘New’.
4) The development leader and team verify if it is a valid bug. The bug is invalid and is marked with a status of ‘Pending Reject’ before passing it back to the testing team.
5) After getting a satisfactory reply from the development side, the test leader marks the bug as ‘Rejected’.

Cycle III:
1) A tester finds a bug and reports it to Test Lead.
2) The Test lead verifies if the bug is valid or not.
3) The bug is verified and reported to development team with status as ‘New’.
4) The development leader and team verify if it is a valid bug. The bug is valid and the development leader assigns a developer to it, marking the status as ‘Assigned’.
5) The developer solves the problem, marks the bug as ‘Fixed’ and passes it back to the Development leader.
6) The development leader changes the status of the bug to ‘Pending Retest’ and passes it on to the testing team for retest.
7) The test leader changes the status of the bug to ‘Retest’ and passes it to a tester for retest.
8) The tester retests the bug and it is working fine, so the tester closes the bug and marks it as ‘Closed’.

Cycle IV:
1) A tester finds a bug and reports it to Test Lead.
2) The Test lead verifies if the bug is valid or not.
3) The bug is verified and reported to development team with status as ‘New’.
4) The development leader and team verify if it is a valid bug. The bug is valid and the development leader assigns a developer to it, marking the status as ‘Assigned’.
5) The developer solves the problem, marks the bug as ‘Fixed’ and passes it back to the Development leader.
6) The development leader changes the status of the bug to ‘Pending Retest’ and passes it on to the testing team for retest.
7) The test leader changes the status of the bug to ‘Retest’ and passes it to a tester for retest.
8) The tester retests the bug and the same problem persists, so after confirmation from the test leader the tester reopens the bug, marking it with ‘Reopen’ status. The bug is then passed back to the development team for fixing.

Cycle V:
1) A tester finds a bug and reports it to Test Lead.
2) The Test lead verifies if the bug is valid or not.
3) The bug is verified and reported to development team with status as ‘New’.
4) The developer tries to verify the bug but fails to replicate the scenario that occurred at the time of testing, and asks the testing team for help.
5) The tester also fails to regenerate the scenario in which the bug was found, so the developer rejects the bug, marking it ‘Rejected’.

Cycle VI:
1) After confirmation that certain data or functionality is unavailable, the fix and retest of the bug are postponed indefinitely, and the bug is marked as ‘Postponed’.

Cycle VII:
1) If the bug is of low importance and can be, or needs to be, postponed to a later release, it is given the status ‘Deferred’.

This way, any bug that is found ends up with a status of Closed, Rejected, Deferred or Postponed.
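
The cycles above can be summarized as a small state machine over the statuses. The transition table below is a sketch reconstructed from the seven cycles; real defect trackers differ in their exact states and transitions:

```python
# Allowed status transitions, reconstructed from the cycles above.
TRANSITIONS = {
    "New":            {"Assigned", "Pending Reject", "Rejected", "Postponed", "Deferred"},
    "Pending Reject": {"Rejected"},
    "Assigned":       {"Fixed"},
    "Fixed":          {"Pending Retest"},
    "Pending Retest": {"Retest"},
    "Retest":         {"Closed", "Reopen"},
    "Reopen":         {"Assigned"},
    # Terminal or parked states:
    "Rejected": set(), "Closed": set(), "Postponed": set(), "Deferred": set(),
}

def move(status, new_status):
    """Validate a status change against the life cycle."""
    if new_status not in TRANSITIONS[status]:
        raise ValueError(f"illegal transition: {status} -> {new_status}")
    return new_status

# Walk Cycle III: New -> Assigned -> Fixed -> Pending Retest -> Retest -> Closed.
status = "New"
for step in ["Assigned", "Fixed", "Pending Retest", "Retest", "Closed"]:
    status = move(status, step)
print("final status:", status)  # Closed
```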

What is Heuristic Or Intuitive Testing ?

In the heuristic testing (also known as intuitive testing) approach, the tester independently reviews a program, categorizing and justifying problems based on a short set of heuristics.

During heuristic testing the tester goes through the program numerous times carrying out a variety of tasks and inspecting how the program scores against a list of identified heuristics.

The defects in software can in general be classified as Omissions, Surprises, and Wrong Implementations. Omissions are requirements that were missed in the implementation, Surprises are implemented behavior not found in the requirements, and Wrong Implementations are incorrect implementations of a requirement. Heuristic or intuitive testing techniques help catch all three types of defects.

What types of bugs can we not find in white-box testing?

The bugs we cannot find in white-box testing are:

1) GUI-related bugs (textual bugs, text-field issues, cosmetic bugs, etc.)
2) Performance bugs
3) Sometimes integration bugs
4) Some functional bugs, depending on the unit testing done
5) Memory leaks

Which is the most difficult part of Testing?

The integration testing level is more difficult for testers than the other levels, because at this level the tester has to depend on developers.

and

While testing a web application, the most difficult parts are performance testing and security testing: the site will be accessed by many users at once, so performance comes into the picture, and on the security side it must not be possible to hack the application.

What is Red Box testing?

Grey box testing is the combination of white box testing & black box testing.

Yellow box testing is testing the warning messages.

Red box testing is testing the error messages.

In another informal usage: User Acceptance Testing is red box testing, System Testing is orange box testing, and Integration Testing is yellow box testing.

Does QA come under Verification or Validation..?

QA comes under VERIFICATION because:

1. It is a process-based activity.
2. QA helps in the prevention of defects with the help of STATIC testing (reviews, walkthroughs, inspections).
3. QA asks, “Are we building the product right?” (validation, by contrast, asks “Are we building the right product?”).
4. QA is a planned and systematic activity.

Quality Assurance: Activities and Responsibilities:
1. Release of qualification and validation protocols
2. Release of documents: e.g. specifications; Master Batch Records, SOPs
3. Batch review and release, archiving
4. Release of batch records
5. Change control, deviation control, investigations
6. Approval of validation protocols
7. Training
8. Internal audits, compliance
9. Supplier qualification and supplier audits
10. Claims, recalls, etc.

Quality Control: Activities and Responsibilities:
1. Development and approval of specifications
2. Sampling, analytical check and release of raw materials, intermediates and cleaning samples
3. Sampling, analytical check and approval of APIs and finished products
4. Release of APIs and final products
5. Qualification and maintenance of equipment
6. Method transfer and validation
7. Approval of documents: e.g. analytical procedures, SOPs
8. Stability tests
9. Stress test

QA – Nothing but Verification

QC – Nothing but Validation

Example of “High Priority & Low Severity” and “High Severity & Low Priority”..

Severity: the impact of the bug/defect/issue on the application/software.
Priority: the importance of fixing the bug/defect/issue before release.

Severity is decided by checking how much the bug impacts the functionality of the system.
Priority is decided by checking the importance of the bug: an issue may barely affect functionality and yet have high priority, because it must be fixed before release.

Here is a good example of high priority & low severity:

On a login screen, the “OK” button carries the text “KO”.

The button works fine, meaning no functionality is affected, so this is a low-severity bug.
But the user will not understand what “KO” means; because of this the application is unusable for them, and the product cannot be released without fixing the bug. That makes it a high-priority bug.

Now an example of high severity & low priority:

Suppose your application has a feature for exporting to an Excel file, and that functionality is totally broken. In this case the severity is very high. But for the current release this functionality is not needed, meaning users will not use the export function yet, so the bug has low priority.
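
Since severity and priority are independent attributes, fix order is usually driven by priority first, with severity as a tie-breaker. A tiny sketch using the two example bugs above (the ranking convention is an assumption, not a standard):

```python
# The two example bugs, rated on independent axes.
bugs = [
    {"id": 1, "desc": "OK button reads 'KO' on login", "severity": "low",  "priority": "high"},
    {"id": 2, "desc": "Export to Excel is broken",     "severity": "high", "priority": "low"},
]

RANK = {"low": 1, "medium": 2, "high": 3}

# Priority decides fix order before release; severity breaks ties.
for bug in sorted(bugs, key=lambda b: (-RANK[b["priority"]], -RANK[b["severity"]])):
    print(f"fix #{bug['id']}: {bug['desc']} "
          f"(severity={bug['severity']}, priority={bug['priority']})")
```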

Important interview questions for QA?

1. What types of documents would you need for QA, QC, and Testing?
2. What did you include in a test plan?
3. Describe any bug you remember.
4. What is the purpose of the testing?
5. What do you like (not like) in this job?
6. What is quality assurance?
7. What is the difference between QA and testing?
8. How do you scope, organize, and execute a test project?
9. What is the role of QA in a development project?
10. What is the role of QA in a company that produces software?
11. Define quality for me as you understand it
12. Describe to me the difference between validation and verification.
13. Describe to me what you see as a process. Not a particular process, just the basics of having a process.
14. Describe to me when you would consider employing a failure mode and effect analysis.
15. Describe to me the Software Development Life Cycle as you would define it.
16. What are the properties of a good requirement?
17. How do you differentiate the roles of Quality Assurance Manager and Project Manager?
18. Tell me about any quality efforts you have overseen or implemented. Describe some of the challenges you faced and how you overcame them.
19. How do you deal with environments that are hostile to quality change efforts?
20. In general, how do you see automation fitting into the overall process of testing?
21. How do you promote the concept of phase containment and defect prevention?
22. If you come onboard, give me a general idea of what your first overall tasks will be as far as starting a quality effort.
23. What kinds of testing have you done?
24. Have you ever created a test plan?
25. Have you ever written test cases or did you just execute those written by others?
26. What did you base your test cases on?
27. How do you determine what to test?
28. How do you decide when you have ‘tested enough?’
29. How do you test if you have minimal or no documentation about the product?
30. Describe the basic elements you put in a defect report.
31. How do you perform regression testing?
32. At what stage of the life cycle does testing begin in your opinion?
33. How do you analyze your test results? What metrics do you try to provide?
34. Realising you won’t be able to test everything – how do you decide what to test first?
35. Where do you get your expected results?
36. If automating – what is your process for determining what to automate and in what order?
37. In the past, I have been asked to verbally start mapping out a test plan for a common situation, such as an ATM. The interviewer might say, “Just thinking out loud, if you were tasked to test an ATM, what items might your test plan include?” These types of questions are not meant to be answered conclusively, but they are a good way for the interviewer to see how you approach the task.
38. If you’re given a program that will average student grades, what kinds of inputs would you use?
39. Tell me about the best bug you ever found.
40. What made you pick testing over another career?
41. What is the exact difference between Integration & System testing, give me examples with your project.
42. How did you go about testing a project?
43. When should testing start in a project? Why?
44. How do you go about testing a web application?
45. Difference between Black & White box testing
46. What is Configuration management? Tools used?
47. What do you plan to become after, say, 2-5 years (e.g. QA Manager)? Why?
48. Would you like to work in a team or alone, why?
49. Give me 5 strong & weak points of yours
50. Why do you want to join our company?
51. When should testing be stopped?
52. What sort of things would you put down in a bug report?
53. Who in the company is responsible for Quality?
54. Who defines quality?
55. What is an equivalence class?
56. Is “a fast database retrieval rate” a testable requirement?
57. Should we test every possible combination / scenario for a program?
58. What criteria do you use when determining when to automate a test or leave it manual?
59. When do you start developing your automation tests?
60. Discuss what test metrics you feel are important to publish in an organization.
62. Describe the role that QA plays in the software lifecycle.
63. What should Development require of QA?
64. What should QA require of Development?
65. How would you define a “bug?”
66. Give me an example of the best and worst experiences you’ve had with QA.
67. How does unit testing play a role in the development / software lifecycle?
68. Explain some techniques for developing software components with respect to testability.
69. Describe a past experience with implementing a test harness in the development of software.
70. Have you ever worked with QA in developing test tools? Explain the participation Development should have with QA in leveraging such test tools for QA use.
71. Give me some examples of how you have participated in Integration Testing.
72. How would you describe the involvement you have had with the bug-fix cycle between Development and QA?
73. What is unit testing?
74. Describe your personal software development process.
75. How do you know when your code has met specifications?
76. How do you know your code has met specifications when there are no specifications?
77. Describe your experiences with code analyzers.
78. How do you feel about cyclomatic complexity?
79. Who should test your code?
80. How do you survive chaos?
81. What processes / methodologies are you familiar with?
82. What type of documents would you need for QA / QC / Testing?
83. How can you use technology to solve a problem?
84. What type of metrics would you use?
85. How do you find out whether tools work well with your existing system?
86. What automated tools are you familiar with?
87. How well do you work with a team?
88. How would you ensure 100% coverage of testing?
89. How would you build a test team?
90. What problems do you have right now or have you had in the past? How did you solve them?
91. What will you do during your first day on the job?
92. What would you like to do five years from now?
93. Tell me about the worst boss you’ve ever had.
94. What are your greatest weaknesses?
95. What are your strengths?
96. What is a successful product?
97. What do you like about Windows?
98. What is good code?
99. Who are Kent Beck, Dr Grace Hopper, and Dennis Ritchie?
100. What are the basic, core practices for a QA specialist?
101. What do you like about QA?
102. What has not worked well in your previous QA experience and what would you change?
103. How will you begin to improve the QA process?
104. What is the difference between QA and QC?
105. What is UML and how to use it for testing?
106. What is CMM and CMMI? What is the difference?
107. What do you like about computers?
108. Do you have a favourite QA book? More than one? Which ones? And why.
109. What is the responsibility of programmers vs QA?
110. What are the properties of a good requirement?
111. How do you test if you have minimal or no documentation about the product?
112. What are all the basic elements in a defect report?