Revolutionising Software Testing | Byte Orbit

Revolution of Software Testing

Written by: Jo Jackson

18 February 2019

Artificial Intelligence & Software Testing


There’s a lot at stake when a new piece of software is launched. First and foremost, there’s the reputation of the software developers. Will their product deliver as promised? Does it meet expectations? Will the target audience be happy with it? Will it dazzle with how fast, usable and versatile it is? Secondly, the software could be designed to shoulder enormous responsibility - think banking apps or payroll software. Just imagine the damage that could be done if the software didn’t behave as intended!

To prevent large-scale disaster and an inundation of complaints, software needs to be tested as deeply and as widely as possible during its development. The software-testing process is intended to provide concrete evidence of the quality of the software product or service being tested through verification and validation, and usually involves a team of software quality assurance (QA) engineers as well as software designed to run repeatable tests, programmed by the team. Over and above safeguarding the reputation of a software company, testing before launch can also be a major money saver. “IBM estimates that the cost of finding and fixing a bug early in the development phase is $100; if found by the quality assurance (QA) team, that cost rises to $1,500, and if found by a customer in production, to at least $10,000.”
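To see how quickly those estimates compound, here is a small illustrative calculation. The per-bug costs are the IBM figures quoted above; the bug counts are invented for the example:

```python
# Per-bug fix costs by the phase in which the bug is found
# (the IBM estimates quoted above, in dollars).
COST_DEV, COST_QA, COST_PROD = 100, 1_500, 10_000

def total_cost(dev_bugs, qa_bugs, prod_bugs):
    """Total cost of fixing bugs, by the phase in which each was found."""
    return dev_bugs * COST_DEV + qa_bugs * COST_QA + prod_bugs * COST_PROD

# Hypothetical release with 50 bugs: all caught in development...
early = total_cost(50, 0, 0)    # 50 * 100 = 5,000
# ...versus the same 50 bugs slipping through to QA and production.
late = total_cost(10, 30, 10)   # 1,000 + 45,000 + 100,000 = 146,000
print(f"Caught early: ${early:,}; caught late: ${late:,}")
```

Even with most bugs still caught before release, the later discovery phases dominate the bill.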

The problem is, application complexity is increasing faster than test teams and tools can keep up, and as a result, users are reporting functional, performance and security issues substantially different from those found - or even findable - by test tools. After all, an automated script can only look for what you ask it to look for. What’s more, load testing, UX-level performance testing and security testing are often dropped in order to meet deadlines.
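The point that a scripted check verifies only what it was told to verify can be made concrete. In this sketch, the `login` function and its response shape are hypothetical stand-ins for an application under test:

```python
# A hand-written scripted check: it verifies exactly the assertions the
# tester thought of, and nothing else. login() is a hypothetical
# stand-in for the app under test.
def login(username, password):
    if username == "alice" and password == "s3cret":
        # Bug: the session token comes back empty.
        return {"status": 200, "token": ""}
    return {"status": 401, "token": None}

def test_login_succeeds():
    response = login("alice", "s3cret")
    assert response["status"] == 200  # passes
    # No assertion ever inspects the token, so the empty-token bug
    # sails through this "green" test run untouched.

test_login_succeeds()
print("all scripted checks passed")
```

The script reports success even though a real user would be locked out on the next request - exactly the gap between what tools find and what users report.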

Software and applications are only set to become more complex, so what are the options out there to better test them both quickly and thoroughly? Considering that only 20% of testing needs to be done manually by the QA team, that leaves a whopping 80% that can be automated. The answer lies in optimising that process.

That’s where artificial intelligence, or AI, comes in - and in particular a subset of AI called machine learning. As Rikus Combrinck, Senior Data Scientist at OLSPS Analytics, explains, “Machine learning is a set of mathematical techniques for learning patterns from large amounts of data in order to classify or predict things.” By crunching large amounts of data, machine learning can help us to classify, predict or estimate, find similar items, and build compound systems and generative models. These “intelligent” mathematical techniques are powerful enough to compose music or even drive cars, so assisting in the process of software testing should be a walk in the park. As Appdiff’s Jason Arbon puts it, “Testing is a ripe field for applying AI, because testing is fundamentally about inputs and expected outputs—the same things needed to train bots.”
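The "learning patterns from data in order to classify" idea can be shown in miniature. This is a toy 1-nearest-neighbour classifier, not any vendor's method; the two metrics (response time in milliseconds, error count) and the training runs are invented for the example:

```python
import math

# Toy illustration of classifying from data: label a new test run as
# likely "pass" or "fail" by finding the most similar past run.
# The metrics and training data below are hypothetical.
training = [
    ((120, 0), "pass"),   # (response time in ms, error count)
    ((150, 0), "pass"),
    ((900, 3), "fail"),
    ((1100, 5), "fail"),
]

def classify(point):
    """Return the label of the nearest training example (1-NN)."""
    return min(training, key=lambda ex: math.dist(point, ex[0]))[1]

print(classify((130, 0)))   # near the healthy runs -> "pass"
print(classify((1000, 4)))  # near the failing runs -> "fail"
```

Real systems use far richer features and models, but the principle is the same: past examples, not hand-written rules, decide the verdict.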

AI-driven test automation products are cropping up all over the place in response to this growing need for speedy, thorough software testing, but how are they actually performing? According to Kevin Surace, President, CEO and Co-founder of Appvance, their AI test automation system “has been in use by several large companies since spring of 2017. In typical cases, after a short learning phase, the system automatically generates 1,200 valid test cases in 5 seconds. The resulting tests increased test coverage from under 50% to over 90%, and represent real user workflows far better than achievable with traditional scripting.” Surace goes on to say, “Now with full analysis of production user flows, the system can intelligently create scripts which more closely match what users actually do, attaining user-flow coverage of nearly 100%.” The results are impressive and far exceed what human teams can achieve in terms of scope and speed, saving them from the headaches of setting up and troubleshooting huge scripting tasks.
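The idea of deriving test scripts from production user flows can be sketched in miniature. This is a drastic simplification - a weighted random walk over a flow graph - and not Appvance's actual system; the screens and transition counts are invented:

```python
import random

# Hypothetical transition counts observed in production:
# from each screen, how often users moved to each next screen.
flows = {
    "home":     {"search": 80, "login": 20},
    "search":   {"results": 100},
    "results":  {"product": 70, "search": 30},
    "login":    {"home": 20},
    "product":  {"checkout": 40, "results": 60},
    "checkout": {},
}

def generate_test(start="home", max_steps=6, rng=random.Random(0)):
    """Walk the flow graph, preferring the steps real users take,
    to produce one candidate test path."""
    path, state = [start], start
    for _ in range(max_steps):
        nxt = flows.get(state)
        if not nxt:  # dead end, e.g. checkout complete
            break
        state = rng.choices(list(nxt), weights=list(nxt.values()))[0]
        path.append(state)
    return path

for _ in range(3):
    print(" -> ".join(generate_test()))
```

Because the walk is weighted by observed usage, the generated paths concentrate on what users actually do - the property Surace describes - rather than on whatever a tester happened to script.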

But where does that leave QA teams? Is there any room left for humans in the future of software testing? This is a question being asked by many, not just in the software testing field but in every profession where AI looks set to provide cheaper, faster, more accurate service. As artificial intelligence starts to play a greater role in the workplace, it will no doubt be accompanied by a shift towards honing the unique skill sets that human staff can provide. In the case of software testing, Jason Arbon predicts a positive collaboration of efforts. “The real value in human-powered testing is the creativity required to either identify problems that are subjective or discover bugs that some of the smartest people around (software engineers) didn’t think of or weren’t able to predict at the time of implementation.” While AI will shoulder the grunt work, “testers in the near future will be able to focus on the most interesting and valued aspects of software testing.”