Software Testing


What is Software Testing?

Software testing is the process of examining a program or application with the intent of finding bugs (errors or other defects) and providing information about the quality of the product or service under test.

Software testing methods are traditionally divided into white-box and black-box testing. These two approaches are used to describe the point of view that a test engineer takes when designing test cases.

Why is software testing necessary?

Software testing is required to point out the defects and errors that were made during the development phases. It helps ensure the reliability of the application and the customer’s satisfaction with it, and it is very important for ensuring the quality of the product. A quality product delivered to customers helps in gaining their confidence. Testing is necessary to deliver a high-quality product or software application that requires lower maintenance cost and therefore gives more accurate, consistent and reliable results.

Testing methods:

1. Black-box testing: It is a way of testing software without much knowledge of the internal workings of the software itself. Black-box testing is often referred to as behavioral testing, in the sense that you test how the software behaves as a whole. It is usually done with the actual users of the software in mind, who typically have no knowledge of the code itself.

2. White-box testing: Also known as clear box testing, glass box testing, transparent box testing and structural testing. It tests the structural internals of the code, getting down to the level of for loops, if statements, etc.; it allows one to peek inside the ‘box’. Tasks typical of white-box testing include boundary tests, use of assertions, and logging.
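
To make the contrast between the two methods concrete, here is a minimal Python sketch built around a hypothetical discount(price, quantity) function; the function and all test names are illustrative, not from any particular project. The black-box tests check observable behavior against the specification alone, while the white-box tests are chosen by looking at the internal quantity >= 10 branch and probing its boundary.

import unittest


def discount(price: float, quantity: int) -> float:
    """Hypothetical function under test: 10% off for orders of 10 items or more."""
    if quantity >= 10:
        return price * quantity * 0.9
    return price * quantity


class BlackBoxTests(unittest.TestCase):
    # Behavioral checks derived from the specification only.
    def test_small_order_pays_full_price(self):
        self.assertEqual(discount(2.0, 3), 6.0)

    def test_bulk_order_gets_discount(self):
        self.assertEqual(discount(2.0, 20), 36.0)


class WhiteBoxTests(unittest.TestCase):
    # Boundary tests chosen by reading the `quantity >= 10` branch in the code.
    def test_just_below_threshold_pays_full_price(self):
        self.assertEqual(discount(1.0, 9), 9.0)

    def test_at_threshold_gets_discount(self):
        self.assertEqual(discount(1.0, 10), 9.0)


if __name__ == "__main__":
    unittest.main()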

Testing levels:

There are generally four recognized levels of tests: unit testing, integration testing, component interface testing, and system testing.

Unit testing, also known as component testing, refers to tests that verify the functionality of a specific section of code, usually at the function level. It is a software development process in which the smallest testable parts of an application, called units, are individually and independently scrutinized for proper operation. Unit testing is often automated but it can also be done manually.
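
As a minimal sketch of a unit test, assuming a hypothetical format_greeting function that depends on an external user service, the dependency can be replaced by a stub so that the smallest testable part is scrutinized individually and independently:

import unittest
from unittest.mock import Mock


def format_greeting(user_service, user_id):
    """Hypothetical unit under test: formats a greeting for a user."""
    name = user_service.get_name(user_id)
    return f"Hello, {name}!"


class FormatGreetingTest(unittest.TestCase):
    def test_greets_user_by_name(self):
        # Replace the real service with a stub so the unit is tested in isolation.
        service = Mock()
        service.get_name.return_value = "Ada"
        self.assertEqual(format_greeting(service, 42), "Hello, Ada!")
        service.get_name.assert_called_once_with(42)


if __name__ == "__main__":
    unittest.main()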

Integration testing is a type of software testing that seeks to verify the interfaces between components against a software design. It is performed to expose defects in the interfaces and in the interactions between integrated components or systems.
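
By contrast, a minimal integration test sketch exercises two components together rather than in isolation. The UserRepository below is hypothetical; the point is that the test runs it against a real (in-memory) SQLite database, so defects in the interface between the two can surface:

import sqlite3
import unittest


class UserRepository:
    """Hypothetical component: stores users in a SQLite database."""
    def __init__(self, connection):
        self.connection = connection
        self.connection.execute(
            "CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)"
        )

    def add(self, name):
        cursor = self.connection.execute("INSERT INTO users (name) VALUES (?)", (name,))
        return cursor.lastrowid

    def find_name(self, user_id):
        row = self.connection.execute(
            "SELECT name FROM users WHERE id = ?", (user_id,)
        ).fetchone()
        return row[0] if row else None


class UserRepositoryIntegrationTest(unittest.TestCase):
    def test_components_work_together(self):
        # Exercises the repository against a real (in-memory) database,
        # checking the interaction between the two, not either one alone.
        repository = UserRepository(sqlite3.connect(":memory:"))
        user_id = repository.add("Ada")
        self.assertEqual(repository.find_name(user_id), "Ada")


if __name__ == "__main__":
    unittest.main()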

Component interface testing is an integration test type that is concerned with testing the interfaces between components or systems. The practice of component interface testing can be used to check the handling of data passed between various units, or subsystem components.

System testing, or end-to-end testing, tests a completely integrated system to verify that it meets its specified requirements. For example, a system test might involve testing a logon interface, then creating and editing an entry, plus sending or printing results, followed by summary processing or deletion (or archiving) of entries, then logoff.

Testing types:

1. Installation testing: An installation test assures that the system is installed correctly and works on the customer’s actual hardware.

2. Compatibility testing: It is a type of software testing used to ensure compatibility of the system/application/website with various other elements such as web browsers, hardware platforms, operating systems and users (where there is a very specific requirement, such as a user who speaks and reads only a particular language). This type of testing helps find out how well a system performs in a particular environment, including its hardware, network, operating system and other software.

3. Smoke and sanity testing: A smoke test is a subset of all defined/planned test cases that covers the main functionality of a component or system, aiming to ascertain that the most crucial functions of a program work without bothering with finer details (a minimal sketch appears after this list). A daily build and smoke test is among industry best practices.

Sanity testing determines whether it is reasonable to proceed with further testing.

4. Regression testing: Testing of a previously tested program following modification, to ensure that defects have not been introduced or uncovered in unchanged areas of the software as a result of the changes made. It is performed when the software or its environment is changed.

5. Acceptance testing: Formal testing with respect to user needs, requirements, and business processes conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or other authorized entity to determine whether or not to accept the system.

6. Alpha testing: Simulated or actual operational testing by potential users/customers or an independent test team at the developers’ site, but outside the development organization. Alpha testing is often employed as a form of internal acceptance testing.

7. Beta testing: Operational testing by potential and/or existing users/customers at an external site not otherwise involved with the developers, to determine whether or not a component or system satisfies the user/customer needs and fits within the business processes. Beta testing is often employed as a form of external acceptance testing in order to acquire feedback from the market.

8. Functional testing: Testing based on an analysis of the specification of the functionality of a component or system.

9. Non-functional testing: Testing the attributes of a component or system that do not relate to functionality, e.g. reliability, efficiency, usability, maintainability and portability.

10. Performance testing: The process of testing to determine the performance of a software product.

11. Usability testing: Testing to determine the extent to which the software product is understood, easy to learn, easy to operate and attractive to the users under specified conditions.

12. Accessibility testing: Testing to determine the ease by which users with disabilities can use a component or system.

13. Security testing: Testing to determine the security of the software product. Security testing is essential for software that processes confidential data to prevent system intrusion by hackers.

14. Internationalization and localization testing: Internationalization is the process of designing a software application so that it can be adapted to various languages and regions without engineering changes. Localization is the process of adapting internationalized software for a specific region or language by adding locale-specific components and translating text.

15. Load testing: A test type concerned with measuring the behavior of a component or system under increasing load, e.g. the number of parallel users and/or number of transactions, to determine what load can be handled by the component or system.

16. Stress testing: Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements.

17. Ad hoc testing: Testing carried out informally; no formal test preparation takes place, no recognized test design technique is used, there are no expectations for results and randomness guides the test execution activity.
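
Returning to smoke testing above, one minimal way to keep a “crucial functions only” subset, sketched here with Python’s unittest and entirely illustrative names, is to collect those tests into a small suite that a daily build runs before the full test run:

import unittest


def service_is_up():
    """Hypothetical stand-in for a check that the application starts at all."""
    return True


def report_footer():
    """Hypothetical stand-in for a minor reporting feature."""
    return "Generated by Example App"


class CrucialTests(unittest.TestCase):
    def test_service_is_up(self):
        self.assertTrue(service_is_up())


class DetailTests(unittest.TestCase):
    def test_report_footer_wording(self):
        self.assertIn("Example App", report_footer())


def smoke_suite():
    # The smoke run covers only the crucial functionality, not the finer details.
    suite = unittest.TestSuite()
    suite.addTests(unittest.defaultTestLoader.loadTestsFromTestCase(CrucialTests))
    return suite


if __name__ == "__main__":
    # A daily build would run just the smoke suite; the full run loads everything.
    unittest.TextTestRunner().run(smoke_suite())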

Testing artifacts:

The software testing process can produce several artifacts.

1. Test plan: A document describing the scope, approach, resources and schedule of intended test activities. It identifies, amongst other things, the test items, the features to be tested, the testing tasks, who will do each task, the degree of tester independence, the test environment, the test design and measurement techniques to be used and the rationale for their choice, and any risks requiring contingency planning. It is a record of the test planning process.

2. Traceability matrix: A table that correlates requirements or design documents to test documents, making it possible to identify related items in documentation and software, such as requirements with their associated tests. It is used to update tests when related source documents change and to select test cases for execution when planning regression tests, based on requirement coverage.

3. Test case: A set of input values, execution preconditions, expected results and execution postconditions, developed for a particular objective or test condition, such as to exercise a particular program path or to verify compliance with a specific requirement (a minimal sketch appears after this list).

4. Test script: A procedure, or programming code, that replicates user actions. The term is commonly used to refer to a test procedure specification, especially an automated one.

5. Test suite: A set of several test cases for a component or system under test, where the postcondition of one test is often used as the precondition for the next one.

6. Test data: Data that exists (for example, in a database) before a test is executed, and that affects or is affected by the component or system under test.
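
To make the test case, test suite and test data artifacts concrete, here is a minimal Python sketch in which the Account class and all names are hypothetical. Each test case combines a precondition (the test data set up before execution), input values and an expected result, and the suite groups several cases for the same component:

import unittest


class Account:
    """Hypothetical system under test: a simple bank account."""
    def __init__(self, balance=0):
        self.balance = balance

    def deposit(self, amount):
        self.balance += amount

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount


class WithdrawalTest(unittest.TestCase):
    def setUp(self):
        # Precondition / test data: an account that already holds 100 before each test runs.
        self.account = Account(balance=100)

    def test_withdraw_within_balance(self):
        # Input value and expected result for one specific test condition.
        self.account.withdraw(30)
        self.assertEqual(self.account.balance, 70)

    def test_withdraw_more_than_balance_is_rejected(self):
        with self.assertRaises(ValueError):
            self.account.withdraw(200)


def account_suite():
    # A test suite: several test cases for the same component, run together.
    return unittest.defaultTestLoader.loadTestsFromTestCase(WithdrawalTest)


if __name__ == "__main__":
    unittest.TextTestRunner().run(account_suite())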