❔Interview Questions
1. What is manual testing?
Manual testing is the process of manually testing software for defects without using any automated tools. The testing is done manually by a human sitting in front of a computer carefully executing the test steps.
2. What are the different types of manual testing?
The main types of manual testing are:
Functional Testing
Usability Testing
Integration Testing
System Testing
Regression Testing
Acceptance Testing
3. What is functional testing?
Functional testing is a type of manual testing where the features and functionality of the software application are tested. The functional requirements are validated against the actual behavior of the application.
4. What is usability testing?
Usability testing is a manual testing technique where the ease of use and convenience of an application is tested from an end user perspective. Factors like navigation, workflow, design and accessibility are validated.
5. What are the steps involved in manual testing?
The main steps are:
Understanding requirements, specifications and business needs
Preparing test plans and test cases
Setting up test environment and test data
Executing test cases and recording results
Reporting bugs and issues to developers
Regression testing after fixes
Retesting until all issues are resolved
6. What is integration testing?
Integration testing is a type of testing where individual software modules/components are combined and tested as a group. The interactions and communication between the integrated modules are validated.
7. What is system testing?
System testing is the testing of a completely integrated system as a whole. Predominantly black-box techniques are used to validate that the integrated system meets both business and technical requirements.
8. What is regression testing?
Regression testing is a type of software testing that verifies recent fixes, updates or code changes have not introduced new defects into previously working features and functionality. It is performed after any change to the software code.
9. What is acceptance testing?
Acceptance testing is formal manual testing done by end users to validate if the delivered system meets business requirements and is acceptable for use. User acceptance, operational acceptance, alpha and beta testing are types of acceptance testing.
10. How important is manual testing in projects?
Manual testing is very important. Even with test automation, manual testing cannot be eliminated as some types of testing can only be done manually. Human observation, creative thinking and experience are needed.
11. What are some benefits of manual testing?
Benefits include identifying complex issues that automation may not detect, simulating real user scenarios, measuring non-functional aspects like UI, ease of use, accessibility, and having control over test execution.
12. What are some limitations of manual testing?
Limitations include being time consuming, labor intensive, prone to human errors, inconsistent results between testers, not being feasible for large test suites, being repetitive and only as good as the tester's skills.
13. What is exploratory testing?
Exploratory testing is an informal manual testing technique where the tester does not follow predefined test cases but instead explores the application freely, guided by intuition and experience, to discover bugs and issues.
14. How can you be an effective tester?
Being detail oriented, having strong domain knowledge, technical knowledge of the system, analytical thinking, creativity to design test cases, communication skills, ability to understand requirements and patience are some characteristics of an effective tester.
15. What are test scenarios, test cases and test steps?
A test scenario is a short description of the functionality to be tested. A test case includes preconditions, test steps and expected results. Test steps are the precise actions taken to execute the test.
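To make the distinction concrete, a test case for a hypothetical login feature might be captured as structured data (the field names and values here are illustrative, not a standard):

```python
# Illustrative structure of one test case for a hypothetical login feature.
test_case = {
    "id": "TC-001",
    "scenario": "Verify login with valid credentials",   # the test scenario
    "preconditions": ["User account exists", "Login page is reachable"],
    "steps": [                                           # the test steps
        "Open the login page",
        "Enter a valid user ID and password",
        "Click the Login button",
    ],
    "expected_result": "User is redirected to the dashboard",
}

print(test_case["id"], "-", test_case["scenario"])
```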
16. What are test suites?
Test suites are a collection of related test scenarios and test cases grouped together to test specific functionality, component or system. Executing a test suite validates the system as a whole under different conditions.
17. What are test scripts?
Test scripts are automated scripts created to execute a sequence of test cases and perform testing. In manual testing, test scripts refer to documented step-by-step guidelines to manually execute a test case.
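As a sketch of the automated flavor, a test script codifies the steps of a test case as executable checks. The `login` function and its data below are invented purely for illustration:

```python
# Hypothetical system under test: a login function (invented for illustration).
def login(user_id, password):
    valid_users = {"alice": "s3cret"}
    return valid_users.get(user_id) == password

# Automated test script: each function executes one documented test case.
def test_valid_credentials():
    assert login("alice", "s3cret") is True

def test_invalid_password():
    assert login("alice", "wrong") is False

if __name__ == "__main__":
    test_valid_credentials()
    test_invalid_password()
    print("all checks passed")
```

A manual test script would document the same steps and expected results in prose for a human to follow.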
18. How do you report bugs in manual testing?
Bugs are reported with details like system environment, steps to reproduce, actual result, expected result, screenshots and any additional details. This is usually reported via email or bug tracking systems like JIRA.
19. What is severity vs priority?
Severity refers to the impact or criticality of the bug. Priority refers to the urgency to fix the bug. High severity means the bug has a huge impact. High priority means it needs immediate attention.
20. How do you perform regression testing?
Execute existing test cases to ensure bugs are not introduced. Automate regression test suite since executing all test cases manually is not feasible every time. Regression tests have high priority.
21. What are verification and validation?
Verification ensures the product is built according to its design and requirements specification ("are we building the product right?"). Validation ensures the product meets user needs in the real world ("are we building the right product?"). Verification comes before validation.
22. How do you decide which tests to automate?
Tests that need to be run repeatedly without any variation are good candidates for automation. Critical business functionality that is used frequently is automated. Regression suites are also automated.
23. Can manual testing be a career option?
Yes, manual testing can definitely be a career option. There are specialized roles like QA lead, test manager, QA analyst and more that focus on manual testing. Good manual testing skills are still in demand.
24. What are testing artifacts?
Testing artifacts are the documents, reports and work products generated during tests. Some examples are requirements documents, test plans, traceability matrix, test cases, bug reports, test summary reports.
25. How do you perform usability testing?
Usability testing can be done by recruiting a small set of real users, asking them to perform tasks on the application, and observing whether they can complete the tasks easily and quickly. Feedback is then collected.
26. What are test design techniques?
Test design techniques are methods used to create effective test cases and scenarios from requirements. Some common techniques are boundary value analysis, equivalence partitioning, use case testing, state transition testing.
27. What metrics are used to measure testing effectiveness?
Defect rejection percentage, defects found by user vs tester, automated vs manual tests, test coverage, time to execute, test costs, severity of defects, defects over time are examples of testing metrics.
28. How do you prepare for a manual testing interview?
Understand commonly asked questions, refresh fundamental concepts, review your experience and projects, test your skills with online tests, think of good scenarios and examples to interview questions.
29. What challenges are faced in manual testing?
Lack of automated tools, time and effort required, testing complex scenarios, repetitive tasks, staying up to date with system changes, maintaining concentration levels, pressure to deliver quality systems are some challenges.
30. What are good habits for a manual tester?
Some good habits include being detail oriented, tracking time spent on tests, logging results methodically, staying patient and motivated, learning continuously, collaborating with team members, owning your work, asking questions.
31. What are test basis and test conditions?
The test basis is the set of documents that provide the inputs for testing: requirements, design documents, business rules, etc. Test conditions are the variables under which testing needs to be done, such as languages, browsers and devices.
32. What are test stubs and drivers?
Test stubs simulate the behavior of modules that are unavailable. Test drivers invoke the components being tested and monitor results. Both are used for component testing.
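A minimal sketch of both roles, assuming a hypothetical `checkout` component whose payment module is not yet available:

```python
# Component under test: charges the cart total via a payment gateway.
def checkout(cart_total, payment_gateway):
    if payment_gateway.charge(cart_total):
        return "order confirmed"
    return "payment failed"

# Stub: simulates the unavailable payment module with canned behavior.
class PaymentGatewayStub:
    def __init__(self, will_succeed):
        self.will_succeed = will_succeed

    def charge(self, amount):
        return self.will_succeed

# Driver: invokes the component under test and checks the results.
def run_checkout_tests():
    assert checkout(99.0, PaymentGatewayStub(True)) == "order confirmed"
    assert checkout(99.0, PaymentGatewayStub(False)) == "payment failed"
    print("component tests passed")

run_checkout_tests()
```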
33. How do you test user interfaces?
Check alignment, size, color, spelling and behavior of UI elements. Validate consistency across UI. Check navigation and workflow. Test on multiple devices, browsers and resolutions. Do usability testing.
34. How is testing affected for distributed systems?
Testing needs to cover components across multiple systems, end to end workflow, data exchange, system interactions, network latency, load/stress testing, infrastructure dependency.
35. What are test completion criteria?
Test completion criteria are defined to identify when a test cycle is complete. Some examples are all planned tests executed, all high priority bugs fixed, no high severity defects outstanding, test coverage thresholds met.
36. What is component testing?
Testing individual software components in isolation before integrating them is called component testing. Stubs and drivers are used to simulate dependencies. It identifies defects early in the dev cycle.
37. How do you evaluate the quality of your tests?
By metrics like percentage of planned tests that passed, high priority test cases completion, severity of unresolved defects, automated tests vs manual tests executed, number of escaped defects in production.
38. What information do you include in a bug report?
Bug ID, summary, description, environment details, steps to reproduce, actual result, expected result, test case reference, screenshots, severity, priority, attachments, reporter details.
39. What makes a good test report?
Good test reports are clear, concise, accurate, complete, include summary, metrics, environment, results and resolution of failed tests. Charts, graphs and visual elements can help reports.
40. Explain equivalence partitioning with an example.
Equivalence partitioning divides input data into classes (partitions) that the system is expected to treat the same, so one representative value per class suffices. For example, if an age field accepts values from 18 to 60, the partitions are: less than 18 (invalid), 18 to 60 (valid), and greater than 60 (invalid). Testing one value from each partition, e.g. 10, 35 and 70, covers all three classes without redundant tests.
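The same idea as a sketch, using a hypothetical age field that accepts 18 to 60:

```python
# Equivalence partitions for a hypothetical age field accepting 18-60.
def classify_age(age):
    """Return which partition an input value falls into."""
    if age < 18:
        return "invalid-low"
    if age <= 60:
        return "valid"
    return "invalid-high"

# One representative value per partition is enough to cover each class.
representatives = {10: "invalid-low", 35: "valid", 70: "invalid-high"}
for value, expected in representatives.items():
    assert classify_age(value) == expected
print("one test per partition covers all classes")
```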
41. How do you handle a conflict with your team members?
Understand the other perspective calmly, highlight mutual goals, focus discussions on facts and solutions, use non-accusatory language, involve a manager if needed, follow up with constructive feedback and learning.
42. How do you determine which pieces of code need more testing?
Complex algorithms, conditional statements, interfaces between components, error handling logic, security controls, business critical features, public interfaces and newly developed code need more testing.
43. Explain boundary value analysis with examples.
Boundary value analysis tests values at and just around the edges of valid ranges, where defects tend to cluster. If a field accepts 6-20 characters, test lengths 5, 6 and 7 at the lower boundary and 19, 20 and 21 at the upper boundary. Testing boundary values has a higher chance of identifying defects than testing values in the middle of the range.
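The classic three-value variant can be generated mechanically; this small helper assumes an inclusive integer range:

```python
def boundary_values(minimum, maximum):
    """Three-value BVA: each boundary plus the values just inside and outside it."""
    return sorted({minimum - 1, minimum, minimum + 1,
                   maximum - 1, maximum, maximum + 1})

# For a field accepting 6-20 characters:
print(boundary_values(6, 20))   # [5, 6, 7, 19, 20, 21]
```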
44. How do you improve your testing skills?
Improve testing skills by learning new techniques, keeping updated with testing tools, technologies and trends, doing certifications, participating in peer discussions and reviews, shadowing experienced testers and doing hands on testing.
45. What makes a good tester?
A good tester has strong analytical, problem solving, communication, collaboration and technical skills. They also have good business and product knowledge, ability to work independently, creativity, meticulous attention to detail and passion for quality.
46. How do you test database functionality?
Validate schema, data integrity constraints, procedures, triggers and transactions. Test CRUD operations through front end and directly on database. Use SQL queries to test. Verify correct data persistence and mappings.
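A minimal sketch of CRUD and constraint checks, using an in-memory SQLite database as a stand-in for the application database:

```python
import sqlite3

# In-memory SQLite database stands in for the application database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")

# Create, then Read: verify the row persisted correctly.
conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))
assert conn.execute("SELECT name FROM users WHERE id = 1").fetchone() == ("alice",)

# Update, then verify.
conn.execute("UPDATE users SET name = ? WHERE id = 1", ("bob",))
assert conn.execute("SELECT name FROM users WHERE id = 1").fetchone() == ("bob",)

# Delete, then verify.
conn.execute("DELETE FROM users WHERE id = 1")
assert conn.execute("SELECT COUNT(*) FROM users").fetchone() == (0,)

# Integrity constraint: inserting NULL into a NOT NULL column must fail.
try:
    conn.execute("INSERT INTO users (name) VALUES (NULL)")
    raise AssertionError("NOT NULL constraint not enforced")
except sqlite3.IntegrityError:
    pass
print("CRUD and constraint checks passed")
```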
47. How do you evaluate if your test cases covered the testing scope adequately?
Ensure test cases link back to requirements. Review traceability matrix coverage of features. Check complexity, risk and size of items not covered by test cases. Get feedback from dev and business teams on coverage.
48. What are some best practices for manual testing?
Have well defined processes, right team skills mix, focus on priority scenarios and cases, apply risk based techniques, minimize duplicate tests, automate when feasible, collaborate with teams, track key metrics.
49. How do you handle last minute changes?
Assess impact, identify parts affected, re-prioritize testing if needed, execute related regression tests, communicate timeline clearly, get customer signoff on adjusted scope if needed, add team resources if possible.
50. What makes you a good fit for this manual testing role?
[Mention your key strengths aligned with role requirements, years of diverse testing experience, domain knowledge, technical expertise, analytical and communication skills, expertise in related tools and processes, problem-solving ability and passion for quality].
📌 When should we start testing our project?
Software testing should start early in the Software Development Life Cycle. This helps to capture and eliminate defects in the early stages of SDLC i.e., requirement gathering and design phases. An early start to testing helps to reduce the number of defects and ultimately the rework cost in the end.
What are some of the key benefits of starting testing early in the SDLC?
Ans: Finds defects earlier when cheaper to fix, validates requirements upfront, influences better design, establishes testing processes and environments proactively.
If testing starts too late in the cycle, what risks does that introduce?
Ans: More defects slip through to production, harder to fix defects later on, testing gets rushed and poor coverage, not enough time to execute full regression cycles.
How can testers and developers collaborate effectively in early stages?
Ans: Include testers in requirement reviews to assess testability, focus on component-level testing while code is being developed, emphasize a shift left mindset.
What test planning activities should occur early?
Ans: Define scope, priorities, timeline, test strategy, type of testing to be done, environment needs, automation plans, risks to mitigate.
📝 If we don’t have clear written user requirements, how can we test the software?
Work with whatever little documentation you can get your hands on.
Use the older/current version of the application as a reference to test the future release of a software product.
Talk to the project team members.
Use exploratory testing to test the application when it is ready.
What techniques can help elicit requirements when documentation is poor?
Ans: User interviews, surveys, use case workshops, competitor analysis, prototyping, feature walkthroughs with dev team.
💡 What is exploratory testing, and why do we use it?
Exploratory testing is an approach to software testing that is concisely described as simultaneous learning, test design and test execution.
In exploratory testing, test cases are not created in advance, but testers check the system on the fly.
Exploratory testing is used for two reasons:
When we don’t have time to design test cases
When there are poor or no requirements
What are some key benefits of exploratory testing?
Ans: Adaptable, insight driven, takes advantage of tester skills, useful for unknown or risky areas.
What are some good scenarios to use exploratory testing?
Ans: New or complex features, areas with little documentation, usability testing, investigating odd behavior.
📈 A defect that could have been removed during the initial stage is removed in a later stage. How does this affect the cost?
The cost of fixing a defect rises sharply the later it is found. The earlier a defect is identified, the easier and cheaper it is to fix.
If a defect is found in the design phase, the design is simply corrected and re-issued. However, if the same defect is missed by testers and only surfaces during the user acceptance phase, it becomes far more expensive to fix, since code, tests and documentation all have to be reworked.
What metrics help quantify the cost of fixing defects late?
Ans: Root cause analysis, defect age, defect containment metrics, escape to production figures.
How can testers demonstrate the value of early testing?
Ans: Track defect find rates by phase, calculate costs of late defect fixes, show reduction in escaped defects over time.
What steps can be taken to encourage early defect discovery?
Ans: Shift left testing, unit test coverage goals, automated regression suites, independent peer code reviews.
🔀 What is change-related testing? And why do we use it?
What is the difference between confirmation testing and regression testing?
Confirmation testing (re-testing): when a test fails because of a defect, that defect is reported, and a new version of the software with the fix is expected. The test is then executed again to confirm whether the defect was actually fixed.
Regression testing is defined as a type of software testing to confirm that a recent program or code change has not adversely affected existing features.
Impact analysis is used to know how much regression testing will be required.
What types of changes should trigger regression testing?
Ans: Code, database, configuration, hardware/environmental changes.
🤝 What does it mean if a QA has customer-facing responsibilities?
It typically means:
Acting as a liaison between the customers/users and the development/QA team. This involves gathering requirements, feedback and bug reports from customers.
Understanding customer pain points, typical usage patterns, and priorities in order to guide testing and prioritization. QA helps ensure the product is meeting customer needs.
Communicating testing progress, releases and known issues to customers. This helps set expectations and coordinates user sign-offs when needed.
Validating customer documentation, help resources and support portals to ensure they provide accurate and helpful information for users.
Participating in customer demos, discussions, and workshops to incorporate direct feedback into testing.
Ownership of pre-sales/post-sales testing if the QA is involved in product trials or evaluations with prospects.
Management of the customer bug/issues database and working with customers on fixes and workarounds.
Ensuring customer contracts, SLAs and support commitments are tested and met from a quality standpoint.
Representing the customer perspective internally to advocate for their testing needs and priorities.