Date:
3 December 2019
Author:
Sami Ullah

Our new series

Testing is becoming more and more critical for validation, confidence and agility in today’s software and web development market. Over the coming months we’ll be blogging about a variety of testing subjects. We launch the series with today’s blog, which gives you a brief overview of the importance of software testing before focusing on the evolution of testing from the early 1950s to now.

The importance of software testing

Everyone who’s worked in, or is connected with, the software/web industry has heard the term “software testing”, or simply “testing”. From disease diagnosis to rocket and satellite launches, testing is an essential part of the software and web development process.

In fact, incomplete or missed software testing has contributed to disasters such as the crash of an Airbus A400M in 2015 and the loss of NASA's Mars Climate Orbiter in 1999 (a loss of around US$125 million). As it has evolved, the software industry has come to understand the need for more process-oriented testing, carried out in a phased manner.

Many current-day processes are only possible because of testing. Online shops can deploy millions of lines of code because of the testing they have in place. Facebook and Instagram developers push code to live sites without any downtime because of the testing mechanisms they’ve set up to guard against failures.

The history

Software testing didn’t evolve in a single day; it took time and sweat to get it to where it is today. Testing gurus David Gelperin and Bill Hetzel divide testing history into five significant eras:

  1. Debugging-oriented era: This phase was during the early 1950s, when there was no distinction between testing and debugging. The focus was on fixing bugs. Developers used to write code, and when faced with an error would analyse and debug the issues. There was no concept of testing or testers. (However, in 1957, Charles L Baker distinguished program testing from debugging in his review of the book Digital Computer Programming by Dan McCracken.)

  2. Demonstration-oriented era: From 1957 to 1978, debugging and testing were treated as separate activities. During this era, the major goal of software testing was to show that the software satisfied its requirements. For example, if the requirement was ‘We need a web application that displays a list of 10 products only’, testers would confirm that exactly 10 products were displayed. This approach fell short because demonstrating that software meets its stated requirements doesn’t prove it’s free of faults; the more you test, the more likely you are to find a bug. The concept of negative testing (or breaking the application) was not practised in this era.

  3. Destruction-oriented era: From 1979 to 1982, the focus was on breaking the code and finding the errors in it. In 1979, Glenford J. Myers formalised the separation of debugging from testing, although his attention was on breakage testing (‘A successful test case is one that detects an as-yet-undiscovered error.’). This reflected the software engineering community’s desire to separate fundamental development activities, such as debugging, from verification. For example, a tester would deliberately try to break the software, say by entering letters in a field that should only accept numbers (see the sketch after this list). There was no defect-prevention approach during this phase, and the destruction-oriented approach also failed: software would never be released because testers could keep finding one bug after another, and fixing one bug could introduce another.

  4. Evaluation-oriented era: From 1983 to 1987, the focus was on evaluating and measuring the quality of software. Testing gave a measure of confidence in how well the software was working. Testers tested until they reached an acceptable point, where the number of newly detected bugs had dropped off. This was mainly applicable to large software systems.

  5. Prevention-oriented era: 1988 to 2000 saw a new approach, with tests focused on demonstrating that software met its specification, detecting faults and preventing defects. Code was divided into testable and non-testable: testable code had fewer bugs than code that was hard to test. In this era, identifying the right testing techniques was key. The last decade of the 20th century also saw the rise of exploratory testing, where a tester explored and deeply understood the software in an attempt to find more bugs.
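
To make the destruction-oriented mindset from era 3 concrete, here’s a minimal sketch of a negative test in Python. Both the parse_quantity() validator and the pytest-based tests are illustrative assumptions for this blog, not code from any real project: the first test demonstrates the stated requirement, while the second deliberately tries to break the field with letters.

```python
# A minimal negative-testing sketch. parse_quantity() is a hypothetical
# validator invented for illustration; the tests use the pytest framework.

import pytest


def parse_quantity(value: str) -> int:
    """Accept only whole numbers for a quantity field."""
    if not value.isdigit():
        raise ValueError(f"Quantity must be a whole number, got {value!r}")
    return int(value)


def test_quantity_accepts_digits():
    # Demonstration-era style: confirm the requirement is met.
    assert parse_quantity("10") == 10


def test_quantity_rejects_letters():
    # Destruction-era style: a 'successful' test exposes a weakness.
    with pytest.raises(ValueError):
        parse_quantity("ten")
```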

The early 2000s saw the rise of new testing concepts like test-driven development (TDD) and behaviour-driven development (BDD). We’ll be highlighting these in upcoming articles.
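
As a small taste of TDD, here’s a minimal sketch in Python using pytest. The cart_total() helper is a hypothetical example invented for this post; in TDD the test below would be written first, watched to fail, and only then would the simplest implementation be added to make it pass.

```python
# A minimal test-driven development (TDD) sketch. cart_total() is a
# hypothetical helper; in TDD the test is written before the implementation.

import pytest


def cart_total(prices, discount=0.0):
    """Simplest implementation that makes the test below pass."""
    return sum(prices) * (1 - discount)


def test_cart_total_applies_discount():
    # Written first (red), then cart_total() is implemented to pass (green).
    assert cart_total([10.0, 20.0], discount=0.1) == pytest.approx(27.0)
```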

The year 2004 saw a major revolution in testing with the advent of test automation tools like Selenium. Likewise, API testing using tools like SoapUI marked another turning point in the history of testing. These will be examined in detail in upcoming blogs.
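
For readers who haven’t seen browser automation before, here’s a minimal Selenium WebDriver sketch using its Python bindings. The URL, the CSS selector and the expected product count are placeholder assumptions, not a real test; the point is simply the shape of an automated browser check, which also needs a local browser and driver installed to run.

```python
# A minimal browser-automation sketch with Selenium WebDriver (Python
# bindings). URL, selector and expected count are placeholders only.

from selenium import webdriver
from selenium.webdriver.common.by import By


def test_product_listing_shows_ten_items():
    driver = webdriver.Chrome()  # assumes Chrome and a matching driver are installed
    try:
        driver.get("https://example.com/products")  # placeholder URL
        products = driver.find_elements(By.CSS_SELECTOR, ".product")
        # The same check a tester once did by eye, now automated.
        assert len(products) == 10
    finally:
        driver.quit()
```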

Finally, the current era is moving towards testing with artificial intelligence (AI) tools, and cross-browser testing using tools like Sauce Labs and BrowserStack.

We look forward to bringing you the next blog in our testing series!

Subscribe to Salsa Source

Subscribe to Salsa Source to keep up to date with technical blogs. 
