In a software testing interview, whether for a manual or automation role, the interviewer will typically cover the areas below.
- Requirements Analysis
- Test Plan Preparation
- Test Case Preparation
- Test Execution Efficiency
- Defect Report Preparation
- Defect Management and Tools
- Ability to think outside the box
- Business understanding
- Technical Understanding
- Agile Methodology
- Participation in stand-ups
- Participation in Demo/Retrospectives
- Knowledge of/Experience with Automation Tools
- QA Process
Q: What is black box testing?
A: Black box testing is functional testing that is not based on any knowledge of the internal software design or code. Black box tests are based on requirements and functionality.
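The idea can be sketched in code: the checks below exercise a hypothetical `apply_discount` function purely through its inputs and expected outputs, with no reference to how it is implemented.

```python
# Black-box testing sketch: apply_discount is a hypothetical function
# under test; the checks below come only from its stated requirements,
# never from its internal code.

def apply_discount(price, percent):
    """Example implementation under test; its internals are irrelevant
    to the black-box checks below."""
    return round(price * (1 - percent / 100), 2)

# Requirement: a 10% discount on 200.00 yields 180.00
assert apply_discount(200.00, 10) == 180.00
# Requirement: a 0% discount leaves the price unchanged
assert apply_discount(99.99, 0) == 99.99
```

The same checks would remain valid even if the implementation were completely rewritten, which is the defining property of a black-box test.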
Q: What is white box testing?
A: White box testing is based on knowledge of the internal logic of an application’s code. Tests are based on coverage of code statements, branches, paths and conditions.
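As a sketch, the tests below are derived by reading the code of a hypothetical `classify_age` function and choosing one input per branch, which is the white-box (branch coverage) approach.

```python
# White-box testing sketch: classify_age is a hypothetical function;
# each test input below is chosen to drive execution through a
# specific branch of its code.

def classify_age(age):
    if age < 0:
        raise ValueError("age cannot be negative")  # branch 1
    if age < 18:
        return "minor"                              # branch 2
    return "adult"                                  # branch 3

assert classify_age(10) == "minor"   # covers branch 2
assert classify_age(30) == "adult"   # covers branch 3
try:
    classify_age(-1)                 # covers branch 1
except ValueError:
    pass  # expected for negative input
```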
Q: What is unit testing?
A: Unit testing is the first level of dynamic testing and is the responsibility first of developers and then of test engineers.
Unit testing is considered complete once the expected test results are met or any differences are explainable/acceptable.
Q: What is functional testing?
A: Functional testing is a black-box type of testing geared to the functional requirements of an application. Test engineers should perform functional testing.
Q: What is usability testing?
A: Usability testing is testing for ‘user-friendliness’. Clearly this is subjective and depends on the targeted end-user or customer. User interviews, surveys, video recording of user sessions and other techniques can be used. Programmers and developers are usually not appropriate as usability testers.
Q: What is incremental integration testing?
A: Incremental integration testing is continuous testing of an application as new functionality is added. It may require that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed.
Incremental testing may be performed by programmers, software engineers, or test engineers.
Q: Explain the process followed in your project?
A: In my organization, whenever we get a new project there is an initial project kick-off meeting. In this meeting we discuss who the client is, the tentative project duration and delivery dates, and identify resources such as the project manager, tech leads, QA leads, developers, testers, etc.
A project plan is developed from the SRS (software requirement specification). The tester's (Manager/Lead/Sr. Software Engineer) role is to create the software test plan. The dev team starts coding from the design; in the meantime, we testers create test scenarios and write test cases for our assigned modules and save them in a repository/test management tool.
When developers finish their modules, those modules are assigned to testers. Smoke testing is performed on them, and any module that fails is reassigned to the respective developer for a fix. For modules that pass, manual testing is carried out from the written test cases. Any bug found is logged in the bug-tracking tool and assigned to the module's developer. Once the bug is fixed, the tester performs bug verification and regression testing of all related modules. If the bug passes verification it is marked as verified and closed; otherwise the bug cycle above repeats.
Different tests are performed on individual modules, and integration testing is performed once the modules are integrated. These tests include compatibility testing, i.e. testing the application on different hardware, OS versions, software platforms, browsers, etc. Load and stress testing are also carried out according to the SRS. Finally, system testing is performed by creating a virtual client environment. Once all test cases pass, a test report is prepared and the decision is taken to release the product.
See my post "QA Process" for a scenario-testing example.
Q: How do you raise a defect?
A: When my test case fails, I analyse the cause of the failure and then raise a defect, which is assigned to the developer. I provide the title of the bug, a short description, and the steps to reproduce it, along with screenshots, log files, and other supporting files if needed. The bug's priority and severity, as well as the OS version, software platform, browser, and browser version number, should also be mentioned.
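The fields a defect report typically carries can be sketched as a structured payload; the field names below are illustrative and not tied to any specific tracker's API.

```python
# Sketch of a defect report as the kind of JSON payload many bug
# trackers accept; every field name here is illustrative, not a
# specific tool's schema.
import json

defect = {
    "title": "Login button unresponsive on checkout page",
    "description": "Clicking 'Login' does nothing after adding an item to the cart.",
    "steps_to_reproduce": [
        "1. Add any item to the cart",
        "2. Click 'Login' in the header",
        "3. Observe: no response and no error message",
    ],
    "severity": "major",
    "priority": "high",
    "environment": {"os": "Windows 11", "browser": "Chrome 126"},
    "attachments": ["screenshot.png", "console.log"],
}

print(json.dumps(defect, indent=2))
```

A report structured this way gives the developer everything needed to reproduce the bug without a follow-up conversation.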
Q: What if the software is so buggy it can’t really be tested at all?
The best way to handle this situation is to:
- Report the blocking issues
- Report the critical bugs.
- Notify your Managers with some documentation as evidence of the problem.
Q: How can it be known when to stop testing?
This can be difficult to determine. Many modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be done. Common factors in deciding when to stop are:
- Deadlines (release deadlines, testing deadlines, etc.)
- Test cases completed with certain percentage passed
- Test budget depleted
- Coverage of code/functionality/requirements reaches a specified point
- Bug rate falls below a certain level
- Beta or alpha testing period ends
Q: What if there isn’t enough time for thorough testing?
Use risk analysis to determine where testing should be focused. Since it’s rarely possible to test every possible aspect of an application, every possible combination of events, every dependency, or everything that could go wrong, risk analysis is appropriate to most software development projects. This requires judgement skills, common sense, and experience. (If warranted, formal methods are also available.) Considerations can include:
- Which functionality is most important to the project’s intended purpose?
- Which functionality is most visible to the user?
- Which functionality has the largest safety impact?
- Which functionality has the largest financial impact on users?
- Which aspects of the application are most important to the customer?
- Which aspects of the application can be tested early in the development cycle?
- Which parts of the code are most complex, and thus most subject to errors?
- Which parts of the application were developed in rush or panic mode?
- Which aspects of similar/related previous projects caused problems?
- Which aspects of similar/related previous projects had large maintenance expenses?
- Which parts of the requirements and design are unclear or poorly thought out?
- What do the developers think are the highest-risk aspects of the application?
- What kinds of problems would cause the worst publicity?
- What kinds of problems would cause the most customer service complaints?
- What kinds of tests could easily cover multiple functionalities?
- Which tests will have the best high-risk-coverage to time-required ratio?
Q: When do you feel automation is better?
- When the product is stable enough to automate
- For regression tests, where automation saves time
- For release acceptance testing, so regression can be run quickly on every release
However, skilled resources are needed to develop, maintain, and upgrade the automation suites, tools, etc.
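The regression payoff can be sketched as a table of cases that re-runs in seconds on every release; `validate_email` is a hypothetical unit and the cases below are illustrative.

```python
# Regression-automation sketch: validate_email is a hypothetical unit;
# the regression table accumulates cases across releases (including one
# for a previously fixed bug) and re-runs unattended on every build.
import re

def validate_email(address):
    """Unit under regression test (simplified pattern for illustration)."""
    return re.fullmatch(r"[^@\s]+@[^@\s]+\.[a-zA-Z]{2,}", address) is not None

# (input, expected) pairs accumulated across releases
REGRESSION_CASES = [
    ("user@example.com", True),
    ("user@example", False),        # guards a hypothetical past bug fix
    ("@example.com", False),
    ("user name@example.com", False),
]

for address, expected in REGRESSION_CASES:
    assert validate_email(address) is expected, f"regression failure: {address}"
print("all regression cases passed")
```

Each fixed bug adds one row to the table, so the cost of re-testing old behaviour stays near zero while the safety net keeps growing.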
Q: What if the developer says the defect raised is not a defect? Or: the defect can be seen on the QA machine but not on the developer's machine; how do you handle this situation?
- Check the requirement/functionality to make sure your test case is correct
- Analyse the root cause of the bug
- Take the help of a senior (manager/lead) to reproduce the bug more convincingly
- Provide screenshots, log files, and system information
- If the developer is in your office, show them directly how the bug is reproduced
- Reproduce the bug on another machine
- Include the Product owner/Business owner in the email thread related to the bug.
Q: What is the difference between waterfall model and Agile model?
Both are usable, mature methodologies that have been applied to software development projects for a long time.
The Waterfall Methodology
Waterfall is a linear approach to software development. In this methodology, the sequence of events is roughly: information gathering, design, coding and testing, UAT, and maintenance.
In Waterfall development, each of these is a distinct stage of software development, and each stage generally finishes before the next one can begin. There is also typically a stage gate between them; for example, requirements must be reviewed and approved by the customer before design can begin.
Developers and customers agree on what will be delivered early in the development lifecycle. This makes planning and designing more straightforward.
Progress is more easily measured, as the full scope of the work is known in advance.
Throughout the development effort, it's possible for various members of the team to be involved or to continue with other work.
The Agile Methodology
Agile is an iterative, team-based approach to development. This approach emphasizes the rapid delivery of an application in complete functional components. Rather than creating tasks and schedules, all time is “time-boxed” into phases called “sprints.” Each sprint has a defined duration (usually in weeks) with a running list of deliverables, planned at the start of the sprint. Deliverables are prioritized by business value as determined by the customer. If all planned work for the sprint cannot be completed, work is re-prioritized and the information is used for future sprint planning.
As work is completed, it can be reviewed and evaluated by the project team and customer, through daily builds and end-of-sprint demos. Agile relies on a very high level of customer involvement throughout the project, but especially during these reviews.
Q: What is the difference between priority and severity of bug?
A: Severity describes the level of impact a bug has on the system, whereas priority describes the order in which bugs should be fixed. The greater the impact on system functionality, the higher the severity assigned to the bug. For example, a crash in a rarely used feature is high severity but may be low priority, while a spelling mistake in the company name on the home page is low severity but high priority.