Artificial Intelligence is advancing at a healthy pace, and trending at an alarming one.
We are already surrounded by everyday examples of machine learning. We are witnessing the emergence of self-driving cars, we talk to Natural Language Processing assistants and we receive data-driven weather predictions. Certain tasks in our lives are more ready to be handled by AI than others, particularly the ones that are already automated. Something as frequently automated as testing surely has a place where AI can lend a hand. If you're in testing, don't worry too much; I think we'll still need you for a good while longer. I believe machine learning will play some part in testing soon, but it's a patch that will always be tended by humans. Why? Because software is a service we provide for humans to use, and nobody understands what humans want better than other humans.
So where will machine learning most readily impact software testing?
Let's start with the Oxford Dictionary definition of Artificial Intelligence, which includes the "development of computer systems able to perform tasks normally requiring human intelligence". I believe there are examples of automation that we occasionally mistake for AI. If it's not learning, it's perhaps not intelligent.
So what aspects of the software lifecycle currently require human intelligence and why?
During the design phase of any software there's a marked emphasis on understanding. We build software to fulfil a human desire for a certain service, or sometimes to solve a problem. Although humans don't always get this right, I cannot see artificial intelligence doing a better job of it in the near future. It still requires human intelligence, in my opinion.
Expressing a software design as requirements is an important part of the lifecycle. Any embellishments or omissions at this point can cause problems later. Occasionally, in the case of back-end systems like API or bus implementations, drag-and-drop tools are used to automate the design process, opening up the way for increased automation. Typically, however, the expression of requirements, be they functional, security, performance or experience related, requires a fair degree of reasoning.
At some point, a product is built to match the expressed requirements. This is usually where coders code. One of the earliest pieces of automation involved here is Static Code Analysis, which checks the code for potential issues, bad coding habits and the like. Adding extra awareness or learning to this would definitely be worthwhile. Using AI in Static Code Analysis is something Facebook have pioneered with their (now open-source) Infer. So let's call it: Intelligent Static Code Analysis will impact how we test in the near future.
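To make the baseline concrete, here is a toy rule-based static check (nothing like how Infer actually works internally; Infer uses far more sophisticated program analysis). It walks a Python syntax tree and flags functions with mutable default arguments, a classic bug source. An "intelligent" analyser would go beyond a fixed rule list like this and learn new suspicious patterns from past defects. The sample source and function name are invented for illustration.

```python
import ast

# Hypothetical code under analysis: a function with a mutable default argument.
SOURCE = """
def add_item(item, items=[]):
    items.append(item)
    return items
"""

def find_mutable_defaults(source):
    """Return names of functions whose default values are mutable literals."""
    findings = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            for default in node.args.defaults:
                # List, dict and set defaults are shared across calls,
                # which is rarely what the author intended.
                if isinstance(default, (ast.List, ast.Dict, ast.Set)):
                    findings.append(node.name)
    return findings

print(find_mutable_defaults(SOURCE))  # ['add_item']
```

A fixed-rule tool like this stops at the rules its authors wrote; the intelligent version would extend its own rule set.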
When we design tests we conceive a set of tests that assess whether or not the software complies with requirements. Automatic test generation exists today, but could it have a use for self-learning? I believe that marrying requirements and tests is the right way to go, so mechanised test design is not a good direction, at least not in any way that requires machine learning.
During test execution we record as a defect any case where the built product does not conform to an executed test. Additionally, we usually observe any other defects not caught by the tests. In a day when software teams are adding conditions like "creates a very negative emotional response" to their definition of a high-severity defect, one would assume that human emotions are a requirement of successful test execution. Yet quite a lot of unintelligent automated testing takes place every day.
Test automation that logs and remembers data associated with failed tests, and raises a suspicion if it encounters this data again, would add value. Intelligent test automation. Additionally, intelligent test automation could consider the order in which tests are run based on previous defects. This idea needs further development, something I might cover in another post.
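The two ideas above can be sketched in a few lines. This is a minimal illustration only, not a real framework: the class name, test names and data are all invented, and the "suspicion" here is just a lookup of previously seen failing inputs, with test ordering driven by per-test failure counts.

```python
import json
from collections import Counter

class DefectMemory:
    """Toy sketch: remember inputs linked to past failures, flag them
    when seen again, and prioritise historically flaky tests."""

    def __init__(self):
        self.failure_counts = Counter()

    def record_failure(self, test_name, data):
        # Serialise the data so equal inputs map to the same key.
        key = (test_name, json.dumps(data, sort_keys=True))
        self.failure_counts[key] += 1

    def is_suspicious(self, test_name, data):
        key = (test_name, json.dumps(data, sort_keys=True))
        return self.failure_counts[key] > 0

    def prioritise(self, tests):
        """Order tests so those with the most past failures run first."""
        per_test = Counter()
        for (name, _), count in self.failure_counts.items():
            per_test[name] += count
        return sorted(tests, key=lambda t: -per_test[t])

memory = DefectMemory()
memory.record_failure("login", {"user": ""})
memory.record_failure("login", {"user": ""})
memory.record_failure("checkout", {"qty": -1})

print(memory.is_suspicious("login", {"user": ""}))         # True
print(memory.prioritise(["search", "checkout", "login"]))  # ['login', 'checkout', 'search']
```

A genuinely learning version would generalise beyond exact matches, spotting inputs similar to past failures rather than identical to them.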
For now at least, I see two potential areas (one of which already exists, thanks to Facebook) where machine learning might change or benefit testing:
1. Intelligent Static Code analysis
2. Intelligent Test Automation
In a later post I'll develop the second idea a bit further. In the meantime I'll be taking a deeper look at Infer.