What's the difference between static and dynamic software testing?
Can you explain what phantom means in software testing?
What does the software testing process consist of?
Can you compare top-down testing with bottom-up testing?
What types of software testing would you automate?
Can you explain what you would include in a test report?
What is end-to-end testing and why is it important?
Here are some common interview questions and sample answers related to software testing:
When the interviewer asks this question, they want to know about your background knowledge regarding software testing. You might consider listing the software tests you have performed before as well as other software tests you know about. There are dozens of types of software tests you can perform, and knowing about a variety of them can help an interviewer gauge your competence for the job. Example: "I've performed all kinds of software tests.
Unit testing, system testing, performance, load and stress testing, usability testing, and interface testing, to name a few. I also understand what most tests are used for and can adapt to the different processes used in each. I've performed more black box tests than white box tests but understand the benefits of both." This interview question tests your knowledge about end-to-end testing and allows you to speak about the importance of this type of testing.
Even beginner software testers understand what end-to-end testing entails, so it can be beneficial to prepare an answer about its importance. Example: "End-to-end testing is performed after the functional testing stage and tests the entire software in a browser or real environment. Testing in a real environment ensures that the software works both in a test setting and in the environment that the end-user would be using it in."
Because every type of software test differs, your answer will likely differ from other applicants'. The interviewer asks this question to gauge your understanding of the basics of the software testing process. In your answer, be sure to briefly talk the interviewer through the process.
You can also provide reasoning for each step. Example: 'At first, I make sure to analyse the project and map it out in a visual form to divide it into smaller chunks. That way, I know what individual tasks we need to take care of, and this also allows me to allocate them to team members. Once testers have their tasks assigned, a group challenge is to estimate how much time and effort we need to dedicate to the project to complete it. Lastly, we validate the estimation.'
Interviewers may want to ask questions that help them predict how you'd react in a situation when your project fails. The way you deal with failure can significantly influence the company's overall success. In your answer, be sure to give some examples of past projects and why they failed but consider keeping the rhetoric optimistic by providing an example solution.
Assuring a potential employer that you're prepared for any issue and know how to deal with it can position you as an effective and highly qualified candidate.
Example: 'In my previous roles, I've encountered plenty of issues with testing projects. Many times the cause of those failures was a limited budget or insufficient time to test the software.
Rarely has the testing environment been set up improperly, but that has also happened within my team. Luckily, an improperly set-up testing environment can be easily fixed by someone with excellent time-management and problem-solving skills.'
Understanding your flaws and knowing how to work on improving them shows that you're a self-aware candidate who's ambitious about your work and career. Interviewers may ask about weaknesses directly related to the role you're interviewing for because understanding them can help them better predict how much training and time you may need to adapt to the new work environment.
In your answer, be sure to mention a weakness that's not essential to successfully completing software tests and show that you're ambitious about professional growth.
Example: 'I think that my greatest weakness is that I sometimes need more time to understand new software testing processes. It may take me a bit longer to implement them because I'm the type of person who likes to do further research at home to be sure I understand everything. Naturally, I've been working on improving this by making my research more strategic and organised, because I strongly believe that once I've got this research method figured out, my co-workers and the company could also benefit from it, as it helps anticipate problems earlier.'
This is a question that tests your basic knowledge about best practices in software testing. A tester should be able to make a good case to management if they are uncomfortable releasing software that contains unresolved issues. Functional testing is a form of black-box testing.
As the name suggests, it focuses on the software's functional requirements rather than its internal implementation. A functional requirement refers to required behavior in the system, in terms of its input and output. It validates the software against the functional requirements or the specification, ignoring the non-functional attributes such as performance, usability, and reliability.
During testing, a tester records their observations, findings, and other information useful to the developers or the management. All this data belongs to a test record, also called a bug report. A detailed bug report is an important artifact produced during testing; it helps the team members understand, reproduce, and prioritise the problem. A good bug report typically contains a descriptive title, the steps to reproduce the issue, the expected and actual results, the severity, and details of the environment in which the issue occurred. Non-functional testing tests the system's non-functional requirements, which refer to an attribute or quality of the system explicitly requested by the client.
These include performance, security, scalability, and usability. Non-functional testing comes after functional testing. It tests the general characteristics unrelated to the functional requirements of the software.
Non-functional testing ensures that the software is secure, scalable, high-performance, and won't crash under heavy load. Testing metrics provide a high-level overview to the management or the developers on how the project is going and the next action steps.
Test-Driven Development (TDD) is a popular software development technique, first introduced by Kent Beck in his book of the same name, published in 2002. In TDD, a developer working on a feature first writes a failing test, then writes just enough code to make that test pass. Once they have a passing test, they add another failing test and then write just enough code to pass the failing test.
This cycle repeats until the developer has the fully working feature. If the code under test has external dependencies such as a database, files, or the network, you can mock them to isolate the code.
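As an illustrative sketch (not part of the original answer) of one red-green cycle plus mocking an external dependency, using Python's built-in unittest and unittest.mock; the function names and data are hypothetical:

```python
import unittest
from unittest.mock import Mock


def add(a, b):
    # Written *after* test_add below was first seen failing: just enough
    # code to make the failing test pass.
    return a + b


def get_username(user_id, db):
    # Code under test that depends on an external database.
    record = db.fetch_user(user_id)
    return record["name"].title()


class TddSketch(unittest.TestCase):
    def test_add(self):
        # Step 1 of the TDD cycle: this test existed before `add` did.
        self.assertEqual(add(2, 3), 5)

    def test_get_username_with_mocked_db(self):
        # The real database is replaced with a mock to isolate the code.
        fake_db = Mock()
        fake_db.fetch_user.return_value = {"name": "ada lovelace"}
        self.assertEqual(get_username(42, fake_db), "Ada Lovelace")
        fake_db.fetch_user.assert_called_once_with(42)


if __name__ == "__main__":
    unittest.main()
```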
In manual testing, a tester manually verifies the functionality of the software. The tester has a comprehensive list of all the test cases they should test, along with the test data. They go through each case, one by one. They launch the software as an end-user would, enter the input, and manually verify the output. It may seem that manual testing is inefficient when compared to automated testing.
It is slow, not repeatable in a consistent manner, and prone to human misjudgment. However, manual testing allows the tester to realistically test the software, using actual user data in a natural user environment, subject to similar external conditions.
Only a human, not a computer, can evaluate the usability and accessibility of the application and how it looks and feels to the end-user. It also gives a broader perspective of the system. Finally, some test scenarios just can't be automated and need to be manually tested. Though all browsers work largely the same in implementing web standards, there are subtle differences among them.
In cross-browser testing, a software tester launches the web application in all the supported browsers and tries to test the same functionality on all of them. This helps the programmer to fix the behavior in all the browsers where it doesn't work as intended. As the name suggests, automated testing, which is also called test automation, is the programmatic execution of the tests.
The tester uses an automation tool or software like Selenium to write code that performs the same tasks a manual tester would: launching the application, entering the input, and comparing the actual output with the expected output. Once a test is automated, you can run it as often as you want, to check if any new code has broken it. It enables you to spend your time on other high-value tests, such as exploratory testing, that help find bugs an automated test would miss.
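A minimal, hedged sketch of what such an automated test might look like with Selenium's Python bindings; the URL, element IDs, and expected title are hypothetical placeholders, and a locally installed browser and driver are assumed:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

# Launch the browser (assumes Chrome and its driver are available locally).
driver = webdriver.Chrome()
try:
    # Navigate to the application and enter the input...
    driver.get("https://example.com/login")                       # hypothetical URL
    driver.find_element(By.ID, "username").send_keys("test_user")
    driver.find_element(By.ID, "password").send_keys("secret")
    driver.find_element(By.ID, "submit").click()

    # ...then verify the actual output against the expected output.
    assert "Dashboard" in driver.title
finally:
    driver.quit()
```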
Humans get tired and bored from conducting the same tests repeatedly and seeing the same results. Software is much better at doing repetitive tasks without getting tired or making mistakes than a human operator is. QA stands for Quality Assurance. In a software development team, a QA ensures that the software is thoroughly tested before releasing it to the end-users. QA activities are generally performed while the product is being developed and focus on improving the software development process.
In many software organizations, a tester and a QA can be the same person, but they can be different depending on the organization's size.
The goal of the QA is to ensure quality in the shipped software. Its main aim is to ensure that the developed products meet the required standards. QC is a process in software engineering that is used to ensure software product quality by testing and reviewing its functional and non-functional requirements.
QC activities are generally performed after the product is developed as it examines the quality of the end products and the final outcome. A software bug is an error in the software that produces wrong results. A software tester tests the software to find bugs in it.
There are many causes for the bugs—for example, poor design, sloppy programming, lack of version control, or miscommunication. Throughout development, developers introduce hundreds or thousands of bugs in the system. The goal of the tester is to uncover those bugs. You can find a bug in many different ways, regardless of your role. When building the software, the software developer might notice the bug in another module, written by another developer or by themselves.
The tester actively tries to find the bugs as part of a routine testing process. Finally, the users could see the bugs when the software is in production. All bugs, no matter how they are found, are recorded in a bug-tracking system. A triage team reviews each bug, assigns it a priority, and assigns it to a software developer to fix.
Once the developer resolves the problem, they check in the code and mark that bug as ready for testing. The tester then verifies that the fix actually works and closes the bug. If not, they assign it back to the same developer with a description of the exact steps to reproduce the bug.
Some examples of popular bug-tracking systems include Bugzilla and FogBugz. The term "bug" itself is said to date back to an incident in which engineers, after opening a malfunctioning piece of hardware, found an insect stuck in a relay. All software has a target user. A user story describes the user's motivations and what they are trying to accomplish by using the software.
Finally, it shows how the user uses the application. It ignores the design and implementation details. A user story aims to focus on the value provided to the end-user instead of the exact inputs they might enter and the expected output.
In a user story, the tester creates user personas with real names and characteristics and tries to simulate a real-life interaction with the software. A user story often helps fish out hidden problems that are often not revealed by more formal testing processes.
Whenever a new build of the software is released, the tester updates the test environment with the latest build and runs the regression test suite.
Once it passes, the tester moves on to testing new functionality. Though it varies depending on the size and structure of the software development teams, typically a bug can be assigned one of the following severities, going from low to high: low, medium, high, and critical.
In black-box testing, the tester views the software as a black box, ignoring all the internal structure and behavior. Their only concern is the input provided to the system and the generated output.
White-box testing is an alternative strategy to black-box testing, in which a tester views the system as a transparent box. They are allowed to observe the internal implementation of the system, which guides the test.
Typically, the software developers perform the white-box testing during the development phase. In white-box testing, we assume that the tester has some programming knowledge. They try to test each possible branch a program could take in a running system. Before you ship the software to the customers, the internal testing team performs alpha testing. Alpha testing is part of the user acceptance testing.
Its goal is to identify bugs before the customers start using the software. Once you ship the software to the customers after alpha testing, the software's actual users perform the beta testing in a real production environment. It is one of the final components of user acceptance testing.
Beta testing is helpful for getting feedback from real people using your software in real environments. A useful way to think about testing styles is to imagine a tourist visiting a new city, who can either follow a fixed itinerary or wander freely. With the first approach, the tourist follows a predetermined plan and executes it. Though they may visit famous spots, they might miss out on hidden, more exciting places in the city.
With the second approach, the tourist wanders around the city and might encounter strange and exotic places that the itinerary would have missed. A tester is similar to a tourist when they are testing software. They can follow a strict set of test cases and test the software according to them, with the provided inputs and outputs, or they can explore the software. When a tester doesn't use the test scripts or a predefined test plan and randomly tests the software, it is called exploratory testing.
As the name suggests, the tester is exploring the software as an end-user would. It's a form of black-box testing. In exploratory testing, the tester interacts with the software in whatever manner they want and follows the software's instructions to navigate various paths and functionality.
They don't have a strict plan at hand. Exploratory testing primarily focuses on behavioral testing. It is effective for getting familiar with new software features. It also provides a high-level overview of the system that helps evaluate and quickly learn the software. Though it seems random, exploratory testing can be powerful in an experienced and skilled tester's hands. As it's performed without any preconceived notions of what software should and shouldn't do, it allows greater flexibility to the tester to discover hidden paths and problems along those paths.
End to End testing is the process of testing a software system from start to finish. The tester tests the software just like an end-user would. For example, to test a desktop software, the tester would install the software as the user would, open it, use the application as intended, and verify the behavior.
The same applies to a web application. There is an important difference between end-to-end testing and other forms of testing: in end-to-end testing, the software is tested along with all its dependencies and integrations, such as databases, networks, file systems, and other external services.
Static testing is a technique in which you test the software without actually executing it. It involves doing code walkthroughs, code reviews, and peer reviews, or using sophisticated tools such as ESLint or StyleCop to perform static analysis of the source code.
Static testing is typically performed during software development. Dynamic testing, in contrast, requires executing the software: the tester runs the software in a test environment and goes through all the steps involved, entering the inputs and verifying the actual output against the expected result.
The tester writes code that makes an API request to the server that provides the API, provides the required inputs, collects the output from the response, and matches the actual output with the expected output. API testing does not involve the look and feel, accessibility, or usability of the software. It can be automated to make it repeatable and reproducible each time the tests run.
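A hedged sketch of such an API test using the widely used `requests` library; the endpoint, payload, and response fields are hypothetical:

```python
import requests


def test_create_user():
    # Provide the required input to the API under test (hypothetical endpoint).
    payload = {"name": "Ada", "email": "ada@example.com"}
    response = requests.post("https://api.example.com/users", json=payload, timeout=10)

    # Match the actual output with the expected output.
    assert response.status_code == 201
    body = response.json()
    assert body["name"] == "Ada"
    assert "id" in body
```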
Code coverage is one of the important testing metrics. It indicates the ratio of the codebase exercised by unit tests to the entire codebase. A 100% figure doesn't guarantee bug-free software; it just means that the unit tests cover all the code. A latent defect, as the name suggests, is a type of defect or bug which has been in the software system for a long time but is only discovered later.
A latent defect is an existing defect that can be found effectively with inspections. It usually remains hidden or dormant and is a low-priority defect. Validation: it is defined as a process that involves dynamic testing of software products by running them. This process validates whether we are building the right software, i.e. software that meets the customer's requirements. It involves various activities like system testing, integration testing, user acceptance testing, and unit testing.
Verification: it is defined as a process that involves analyzing the documents. This process verifies whether the software conforms to the specifications. Its ultimate goal is to ensure the quality of the software products, design, architecture, etc. A testbed is the platform, or environment, used for testing an application. It includes the operating system, hardware, network configuration, database, the software application under test, and any other related software.
Some of the commonly applied documentation artifacts associated with software testing include test plans, test scenarios, test cases, and test reports. A test case is basically a document that includes a set of test data, preconditions, expected results, and postconditions. This document is developed for a specific test scenario to verify whether the software product meets a specific requirement.
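To make this concrete, here is a hedged sketch of how a single test case might be captured in structured form; every field name and value below is illustrative rather than standardized:

```python
# One test case expressed as a simple data structure (illustrative only).
test_case = {
    "id": "TC-101",
    "title": "Valid user can log in",
    "preconditions": ["User account exists", "Application is reachable"],
    "test_data": {"username": "test_user", "password": "secret"},
    "steps": [
        "Open the login page",
        "Enter the username and password",
        "Click the login button",
    ],
    "expected_result": "The user is redirected to the dashboard",
    "postconditions": ["A user session is created"],
}
```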
In manual testing, test cases are executed manually by a tester without using any of the automated tools.
One can easily identify loopholes in the specifications while developing test cases. There are various attributes that make test cases more reliable, clear, and concise, and that help avoid any sort of redundancy.
A test plan is basically a dynamic document monitored and controlled by the testing manager. The success of a testing project depends largely upon a well-written test plan document that describes the software testing scope and activities.
It basically serves as a blueprint that outlines the what, when, and how of the entire test process. A test report is basically a document that includes a total summary of testing objectives, activities, and results.
It is required to reflect the testing results and gives stakeholders an opportunity to evaluate them quickly. It helps us decide whether the product is ready for release or not.
It also helps us determine the current status of the project and the quality of the product. A test report typically includes the project information and test objectives, a test summary (for example, the number of test cases executed, passed, and failed), and details of the defects found. Test deliverables, also known as test artifacts, are basically a list of all of the documents, tools, and other components that are given to the stakeholders of a software project during the SDLC.
Test deliverables are maintained and developed in support of testing. There are different deliverables at every phase of the SDLC: for example, the test plan and test cases before testing begins, test scripts, test data, and logs during test execution, and test reports and closure notes once testing ends. Testing as a whole generally involves both verification activities and validation activities, with different activities executed in a specific order throughout the software testing process. Error: it is defined as a programming mistake in the code because of which we can't compile or run a program.
Defect: it is defined as the variation or difference between the actual result and the expected result, found by a tester or developer. A defect is typically detected after the product goes into production and is resolved during the development phase. Bug: it is defined as a fault or mismatch in a software system that is detected during the testing phase. It has an impact on software functionality and performance.
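As an illustrative contrast (the `average` function below is hypothetical), an error prevents the code from running at all, while a defect is wrong behavior that a test exposes; the final assertion intentionally fails to show how a defect surfaces during testing:

```python
# An *error* is a programming mistake that stops the code from compiling or
# running at all, e.g. a misspelled keyword or name:
#     retunr total    # SyntaxError: the program cannot even start


def average(numbers):
    # A *defect*: the code runs, but the actual result differs from the
    # expected result because the divisor is hard-coded as 2.
    return sum(numbers) / 2


def test_average():
    # A *bug* would be reported when this check fails during testing:
    # expected 2, actual 3.0.
    assert average([1, 2, 3]) == 2
```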
Use case testing is basically defined as a technique that helps developers and testers identify test cases that exercise the whole system, transaction by transaction, from start to finish. It is a black-box technique widely used in developing acceptance-level tests. Test Matrix: it is a testing tool used to capture the actual quality, effort, resources, plan, and time required across all the phases of software testing.
It only covers the testing phase of the life cycle. Requirement Traceability Matrix (RTM): it is a document, usually in the form of a table, used to trace and demonstrate the relationship between the requirements and the other artifacts of the project, right from start to end.
In simple words, it maps between test cases and customer requirements. Positive Testing: a testing process in which the software application is validated against valid data sets as input. It is simply used to check whether the application does what it is supposed to do. Negative Testing: a testing process in which the software application is validated against invalid data sets as input.
It is simply used to check whether the system shows an error when it is supposed to, as illustrated in the sketch below. In test case execution, negative testing is considered a very crucial factor. A critical bug is a bug that affects the majority of the functionality of the given application.
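A minimal sketch of the positive/negative distinction, assuming pytest and a hypothetical `set_age` function that accepts ages from 0 to 150:

```python
import pytest


def set_age(age):
    # Hypothetical function under test: rejects out-of-range ages.
    if not 0 <= age <= 150:
        raise ValueError("age out of range")
    return age


def test_set_age_positive():
    # Positive testing: valid input, the application does what it should.
    assert set_age(30) == 30


def test_set_age_negative():
    # Negative testing: invalid input, the system should raise an error.
    with pytest.raises(ValueError):
        set_age(-5)
```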
Test Strategy is a high-level, static document, usually developed by the project manager. It captures the approach to how we go about testing the product and achieving the goals. Documents like the Test Plan are prepared by keeping this document as a base. The test plan document contains the plan for all the testing activities to be done to deliver a quality product.
It is usually prepared by the Test Lead or Test Manager. A Test Suite is a collection of test cases that are intended to test an application. A Test Scenario gives the idea of what we have to test.
A Test Scenario is like a high-level test case. Test cases are the sets of positive and negative executable steps of a test scenario, each with a set of preconditions, test data, an expected result, postconditions, and the actual result.
A test bed is an environment configured for testing. It consists of hardware, software, network configuration, the application under test, and other related software. A Test Environment is the combination of hardware and software on which the test team performs testing. Test data is the data used by the testers to run the test cases.
While running the test cases, testers need to enter some input data, so they prepare test data in advance; it can be prepared manually or by using tools. For example, to test a basic login functionality with user id and password fields, we need to enter data into those fields, so we collect suitable test data, as sketched below. A test harness is the collection of software and test data configured to test a program unit by running it under varying conditions, which involves comparing the actual output with the expected output.
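Building on the login example above, a hedged sketch of feeding prepared test data into a parametrized test; the `login` function and the credentials are hypothetical:

```python
import pytest


def login(user_id, password):
    # Stand-in for the real login routine under test.
    return user_id == "test_user" and password == "secret"


# Each tuple is one row of prepared test data: the inputs plus the expected result.
@pytest.mark.parametrize(
    "user_id, password, expected",
    [
        ("test_user", "secret", True),   # valid credentials
        ("test_user", "wrong", False),   # invalid password
        ("", "", False),                 # empty input
    ],
)
def test_login(user_id, password, expected):
    assert login(user_id, password) == expected
```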
Test Closure is the note prepared before the test team formally completes the testing process. This note typically contains the total number of test cases executed, the defects found, and how many of those defects were fixed, deferred, or rejected. Test Completion Check: to ensure that all tests have either been run or deliberately skipped, and that all known defects have either been fixed, deferred to a future release, or accepted as a permanent restriction.
Test Artifacts handover: Tests and test environments should be handed over to those responsible for maintenance testing. Known defects accepted or deferred should be documented and communicated to those who will use and support the use of the system.
Lessons learned: analyzing lessons learned to determine changes needed for future releases and projects. In retrospective meetings, plans are established to ensure that good practices can be repeated and poor practices are not. Archiving: results, logs, reports, and other documents and work products are archived in the CMS (configuration management system). Test coverage helps in measuring the amount of testing performed by a set of tests. Test coverage can be applied to both functional and non-functional activities.
It assists testers in creating tests that cover areas which are missing. Code coverage is different from test coverage: code coverage is about unit-testing practices that must target all areas of the code at least once, and it is usually measured by developers or unit testers. Unit testing is done to check whether an individual unit or module of the source code is working properly. Integration Testing is the process of testing the interface between two software units.
Integration testing is done in three ways: the Big Bang approach, the Top-Down approach, and the Bottom-Up approach. System testing verifies the completed system to ensure that the application works as intended. The Big Bang approach combines all the modules at once and verifies the functionality after completion of individual module testing.
Top-down and bottom-up are carried out by using dummy modules known as Stubs and Drivers. These Stubs and Drivers are used to stand in for missing components to simulate data communication between modules.
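A hedged sketch of how a stub can stand in for a missing module during integration testing; `PaymentServiceStub` and `CheckoutModule` are illustrative names, not part of the original answer:

```python
class PaymentServiceStub:
    """Temporary stand-in for a low-level payment module that isn't ready yet."""

    def charge(self, amount):
        # Returns a canned response so the high-level module can be exercised.
        return {"status": "success", "amount": amount}


class CheckoutModule:
    """High-level module under test; it depends on a payment service."""

    def __init__(self, payment_service):
        self.payment_service = payment_service

    def place_order(self, amount):
        result = self.payment_service.charge(amount)
        return result["status"] == "success"


def test_checkout_with_stub():
    # The stub simulates data communication with the missing module.
    checkout = CheckoutModule(PaymentServiceStub())
    assert checkout.place_order(49.99) is True
```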
Top-Down Approach: testing takes place from top to bottom. High-level modules are tested first, then low-level modules, and finally the low-level modules are integrated with the high-level ones to ensure the system works as intended. Stubs are used as temporary modules when a module is not ready for integration testing. Bottom-Up Approach: the reverse of the Top-Down Approach; testing takes place from the bottom up.
The lowest-level modules are tested first, then the high-level modules, and finally the high-level modules are integrated with the low-level ones to ensure the system works as intended. Drivers are used as temporary modules for integration testing. In simple words, end-to-end testing is the process of testing software from start to end. In simple words, what the system actually does is functional testing.
Functional testing verifies that each function of the software application behaves as specified in the requirement document. It tests all the functionalities by providing appropriate input and verifying whether the actual output matches the expected output. It falls within the scope of black-box testing, so the testers need not concern themselves with the source code of the application. In simple words, how well the system performs is non-functional testing. Non-functional testing refers to various aspects of the software such as performance, load, stress, scalability, security, and compatibility.
Acceptance testing is also known as pre-production testing. It is done by the end-users along with the testers to validate the functionality of the application. It is formal testing conducted to determine whether an application is developed as per the requirements, and it allows the customer to accept or reject the application. After successful acceptance testing, the product is released to the customers. Alpha testing is done by the in-house developers who developed the software and the testers before we ship the software to the customers.
Sometimes alpha testing is done by the client or an outsourcing team in the presence of the developers or testers. It is a part of User Acceptance Testing.
The purpose of doing this is to find bugs before the customers start using the software. Beta testing is done by a limited number of end-users before delivery; it is done after alpha testing.
Gamma testing is done when the software is ready for release with the specified requirements. It is done at the client's site, directly, skipping all the in-house testing activities. Smoke Testing is done to make sure that the build received from the development team is testable. Sanity Testing is done during the release phase to check the main functionalities of the application without going deeper.
It is also considered a subset of regression testing. Retesting ensures that the defects which were found and reported in an earlier build have actually been fixed in the current build: say the test team found and reported some defects in one build; when the next build arrives, they retest those specific defects to confirm the fixes. Regression testing, by contrast, is the repeated testing of an already tested program, after modification, to discover any defects introduced or uncovered as a result of the changes in the software being tested or in other related or unrelated software components.
Regression Testing: the testing team re-executes the tests against the modified application to make sure the modified code doesn't break anything that was working earlier. Confirmation Testing: usually testers report a bug when a test fails; the dev team releases a new version of the software after the defect is fixed, and the testing team then retests to make sure the reported bug is actually fixed. Graphical User Interface (GUI) testing tests the interface between the application and the end-user.
Recovery testing is performed to determine how quickly the system can recover after a system crash or hardware failure.