Introduction and Principles of Software Testing

24 February 2017

Introduction to Software Testing

Software rarely runs exactly as we intend. Creating software is an attempt at formally specifying a solution to a problem. Even assuming the correct solution has been chosen, the software very often implements that solution incorrectly. And even if it does implement the solution correctly, the solution itself may not be what the customer wants, or may have unforeseen consequences in the software's behaviour.

In all of these situations the software may produce unexpected and unintended behaviour. The error that causes this behaviour is called a software bug.

Software testing is the process of analysing a piece of software to find defects, to detect differences between existing and required conditions, and to evaluate its features. It is an activity that should take place throughout the whole development process.
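
As a minimal illustration, the hypothetical function and test below show how a test detects a difference between the required and the actual behaviour; the function name and the defect are invented for this example.

    import unittest

    # A hypothetical function with a defect: it is required to return the
    # absolute value of x, but it mishandles negative inputs.
    def absolute_value(x):
        return x  # bug: should return -x when x is negative

    class AbsoluteValueTest(unittest.TestCase):
        def test_negative_input(self):
            # Required condition: absolute_value(-3) must equal 3.
            # The buggy implementation returns -3, so this test fails
            # and exposes the defect.
            self.assertEqual(absolute_value(-3), 3)

    if __name__ == "__main__":
        unittest.main()

Running the test reports a failure, which is exactly the evidence of a defect that principle 1 below describes: the test shows that a defect is present, but says nothing about the paths it does not exercise.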

Finding errors in the software solution itself is an important aspect of software testing, but it is not the only one. Testing can also be used to find out how well the software conforms to its requirements, including performance requirements, usability requirements and so on. Understanding how to test software in a methodical manner is a fundamental skill required to produce software of acceptable quality.
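
Non-functional requirements can be tested in the same style. The sketch below assumes a hypothetical requirement that a search operation completes within 200 milliseconds; the function name, its body and the time budget are all assumptions made up for this example.

    import time
    import unittest

    def search_catalogue(query):
        # Stand-in for the real operation under test (hypothetical).
        return [item for item in range(100000) if item % 7 == 0]

    class SearchPerformanceTest(unittest.TestCase):
        def test_search_meets_time_budget(self):
            start = time.perf_counter()
            search_catalogue("widgets")
            elapsed = time.perf_counter() - start
            # Assumed performance requirement: respond within 0.2 seconds.
            self.assertLess(elapsed, 0.2)

    if __name__ == "__main__":
        unittest.main()
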

Principles of Software Testing

1. Testing shows presence of defects: Testing can show that defects are present in the software, but it cannot show that there are none. Even after testing the application or product thoroughly, we cannot say that it is completely defect free. Testing reduces the number of undiscovered defects remaining in the software, but even if no defects are found, that is not a proof that the software is correct.

2. Exhaustive testing is impossible: Testing everything in the software, including all combinations of inputs and preconditions, is not possible. Instead of attempting exhaustive testing, we use risks and priorities to focus the testing effort (a small illustration follows this list).

3. Early testing: In the software development life cycle, testing activities should start as early as possible and should be focused on defined objectives.

4. Defect clustering: A small number of modules usually contains most of the defects discovered during pre-release testing, or is responsible for most of the operational failures.

5. Pesticide paradox: If the same kinds of tests are repeated again and again, eventually the same set of test cases will no longer find any new bugs. To overcome this “pesticide paradox”, it is very important to review the test cases regularly and to write new and different tests that exercise other parts of the software, so that more defects can potentially be found.

6. Testing is context dependent: Testing is done differently in different contexts. For example, safety-critical software is tested differently from an e-commerce site.

7. Absence of errors fallacy: If the system that is built is unusable, or does not fulfil the user's requirements and expectations, then finding and fixing defects does not help.
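
To make principle 2 concrete, the sketch below counts the input space of a hypothetical function that takes two 32-bit integers, then runs a small, prioritised set of boundary-value cases instead of the impossible exhaustive run. The function and the chosen values are assumptions for this example, not a prescribed technique.

    # A hypothetical function under test: saturating addition of two
    # 32-bit signed integers.
    INT32_MIN, INT32_MAX = -2**31, 2**31 - 1

    def saturating_add(a, b):
        return max(INT32_MIN, min(INT32_MAX, a + b))

    # Exhaustive testing would need every pair of 32-bit inputs:
    exhaustive_cases = (2**32) ** 2          # 18,446,744,073,709,551,616 cases
    print(f"exhaustive cases: {exhaustive_cases:,}")

    # Risk-based alternative: a handful of boundary and representative values.
    boundary_values = [INT32_MIN, -1, 0, 1, INT32_MAX]
    for a in boundary_values:
        for b in boundary_values:
            result = saturating_add(a, b)
            assert INT32_MIN <= result <= INT32_MAX  # result stays in range

Twenty-five prioritised cases stand in for roughly 1.8 × 10^19 exhaustive ones. The point is not that twenty-five cases are sufficient, only that risks and priorities, not exhaustiveness, decide where the testing effort goes.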

Software testing never completes. It is an ongoing process that begins with the project's inception and continues until the project is no longer supported. During the software's lifetime the burden of testing slowly shifts from the developers during design and programming, to independent testing teams, and finally to the customers. Every action that the customer performs with the software can be considered a test of the software itself.

Time and money constraints often intervene, and it may not be worth the developer's time or money to fix particular software errors. This trade-off between resources spent and potential benefit arises most often for small errors and for errors that users rarely encounter.

It is also possible, though difficult, to be statistically rigorous when discussing software errors. For instance, a statistical model of the expected number of software failures with respect to execution time can be developed. The probability of a failure occurring over a given period can then be estimated, and when that probability is low enough the testing of the software can be considered complete.
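
As a sketch of such a model, the code below assumes a simple exponential reliability model in which failures occur at a constant rate, estimates that rate from observed failure data, and checks whether the probability of a failure in the next operating period falls below a chosen threshold. The failure counts, the time window and the threshold are all assumptions made up for this example.

    import math

    # Assumed observations: total failures seen over total execution hours.
    observed_failures = 4
    execution_hours = 2000.0

    # Simple exponential model: constant failure rate (failures per hour).
    failure_rate = observed_failures / execution_hours

    # Probability of at least one failure during the next `window` hours
    # under the exponential model: 1 - e^(-rate * window).
    window = 10.0
    p_failure = 1.0 - math.exp(-failure_rate * window)

    # Assumed acceptance threshold: stop testing when the chance of a
    # failure in the window drops below 5%.
    threshold = 0.05
    print(f"estimated failure rate: {failure_rate:.4f} failures/hour")
    print(f"P(failure within {window:.0f} h) = {p_failure:.3f}")
    print("testing could be considered complete" if p_failure < threshold
          else "more testing (or fixing) is needed")

Real reliability growth models are more elaborate, but the decision structure is the same: estimate a rate, compute a probability, and compare it against an agreed threshold.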

Categories: Software Testing
