Friday, November 22, 2024

Generative AI development requires a different approach to testing

Generative AI has the potential to have a positive impact on software development and productivity, but with that increased productivity comes increased pressure on software testing. 

If you can generate five or even 10 times the amount of code you previously could, that’s also five to 10 times more code that needs to be tested. 

“Many CFOs right now are looking at $30 per month per developer to go get them a GitHub Copilot or similar product,” said Jim Scheibmeir, senior director analyst at Gartner. “And I feel like we’ve kind of forgotten that frequently a bottleneck in software development is not the writing of code, but the testing of code. We’re gonna make developers so much more productive, which includes making them more productive at writing defects.”

Unlike AI-assisted development tools, where the goal is to help developers write more code, AI-assisted testing tools aim to enable less testing. For instance, according to Scheibmeir, test impact analysis tools can create a testing strategy properly sized for the actual code change being pushed, so that only the tests that need to run are run, rather than every test in the suite for every change. 
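The core idea of test impact analysis can be sketched in a few lines. This is a minimal illustration, not how any particular tool works: the module names, test names, and the hand-written mapping are all hypothetical (real tools derive the mapping from coverage data or static analysis).

```python
# Minimal sketch of test impact analysis. The mapping and all names are
# hypothetical; production tools build this map from coverage or static analysis.

# Map each source module to the tests known to exercise it.
TEST_MAP = {
    "billing.py": ["test_invoice_totals", "test_tax_rounding"],
    "auth.py": ["test_login", "test_token_refresh"],
    "search.py": ["test_query_parser"],
}

def select_tests(changed_files):
    """Return only the tests impacted by the given change set."""
    impacted = []
    for f in changed_files:
        for test in TEST_MAP.get(f, []):
            if test not in impacted:
                impacted.append(test)
    return impacted

# A change touching only billing.py selects 2 tests instead of all 5.
print(select_tests(["billing.py"]))
```

A change set that touches none of the mapped modules selects no tests at all, which is exactly the focus Scheibmeir describes: the suite shrinks to match the change.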

“These tools provide focus for testers,” he said. “And it’s so very difficult to give testers focus today. There’s this feeling like we must go test all of the things and yet we’re always crunched on time.”

Arthur Hicken, chief evangelist at Parasoft, agrees that we’ve already reached a point where test suites are taking hours, or even days, to complete, and using generative AI to help optimize test coverage can help with that.  “You can put together with AI these days a pretty good estimation of what you need to do to validate a change,” he said.

Generative AI helping with test generation, management, and more

Beyond helping testers test less, AI is creeping into other aspects of the process to make it more efficient end to end. For instance, Madhup Mishra, SVP at SmartBear, says that generative AI can now be used to create the tests themselves. “The tester can actually express their software test in simple English, and AI can actually create the automated test on their behalf,” he said. 

“Behind the scenes, GenAI should be understanding the context of the test, understanding what’s happening on the screen, and they can actually come up with a recommended test that actually solves the user’s problem without the user having to do a lot more,” he said.

Scheibmeir explained that the idea of making test generation easier had already been explored by low-code and no-code tools with their intuitive drag-and-drop interfaces, and generative AI is now taking it to the next level. 

And according to Eli Lopian, CEO of Typemock, AI is really good at exploring edge cases and may come up with scenarios that a developer might have missed. He believes that it can understand complex interactions in the codebase that the tester might not see, which can result in better coverage. 

AI can also help with generation of test data, such as usernames, addresses, PIN codes, phone numbers, etc. According to Mishra, generating test data can often be a lengthy, time-consuming process because testers have to think up all the possible variations, such as the characters that can go in a name or the country codes that come before phone numbers. 

“Generative AI can create all the different combinations of test data that you can ultimately use to be able to test all the corner cases,” Mishra explained. 
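The combinatorial explosion Mishra describes is easy to see in a small sketch. The variations below are hypothetical examples of the kind a tester would otherwise enumerate by hand; generative AI tooling would propose richer variations automatically.

```python
import itertools

# Hypothetical variations a tester might otherwise think up by hand.
names = ["Anna", "O'Brien", "Łukasz", ""]   # apostrophe, non-ASCII, empty string
country_codes = ["+1", "+44", "+91"]
numbers = ["5551234", "000"]                # plausible-looking and too short

# Every combination of the variations becomes one test case.
cases = [
    {"name": n, "phone": f"{cc} {num}"}
    for n, cc, num in itertools.product(names, country_codes, numbers)
]

print(len(cases))  # 4 * 3 * 2 = 24 combinations
```

Even with three small dimensions the case count multiplies quickly, which is why generating this data by hand is the lengthy process Mishra describes.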

Another potential opportunity is using AI in test management. Companies often have a repository of all the different tests they have created, and AI can sort through it and suggest which tests to use. This lets testers reuse what they’ve already created and frees up more of their time to create the new tests they need, Mishra explained. 

Parasoft’s Hicken added that AI could sort through older tests and validate whether they still work. For instance, if a test is capturing today’s date, that test won’t work tomorrow. 
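Hicken’s date example is a classic form of test brittleness. Below is a hedged sketch (the function and header format are invented for illustration) of the problem and the usual fix: pass the date in rather than capturing it inside the code under test.

```python
from datetime import date

# Brittle pattern: an assertion that hard-codes today's date only passes on
# the day the test was written, e.g.:
#     assert make_report_header() == "Report for 2024-11-22"

def make_report_header(today=None):
    """Accepting the date as a parameter keeps tests stable over time."""
    today = today or date.today()
    return f"Report for {today.isoformat()}"

# Stable pattern: the test pins the date instead of capturing the real one.
assert make_report_header(date(2024, 11, 22)) == "Report for 2024-11-22"
```

An AI pass over an old suite could flag the brittle pattern the same way a reviewer would: any test comparing against a value derived from the current clock is a candidate for this rewrite.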

AI might make testing more accessible, but won’t eliminate need for it

Together, all of these AI enhancements are helping organizations take more responsibility for software quality themselves, where in the past they might have outsourced testing, Scheibmeir said. 

Similar to the citizen developer movement, the testing capabilities now available make it easier for anyone to run a test, so it no longer requires the specialized skills it once did. 

“The hype and capabilities that generative AI are offering have brought some of these organizations back to the table of should we own more of that testing ourselves, more of that test automation ourselves,” Scheibmeir said. 

However, it’s still important to keep in mind that AI does have its drawbacks. According to Lopian, one of the biggest downsides is that AI doesn’t understand the emotion that software is supposed to give you. 

“AI is going to find it difficult to understand when you’re testing something and you want to see, is the button in the right place so that the flow is good? I don’t think that AI would be as good as humans in that kind of area,” he said.

It’s also important to remember that AI won’t replace testers, who will still need to oversee it for now to ensure the right coverage and the right tests are happening. Lopian likened it to a “clever intern” that you still need to keep an eye on to make sure they’re doing things correctly. 

AI’s impact on development skills will drive need for quality to shift further left

Another important consideration is the potential that if developers rely too heavily on generative AI, their development skills might atrophy, Mishra cautioned. 

“How many times have you gotten an Uber and realized the Uber driver knows nothing about where you’re going, they’re just blindly following the direction of the GPS, right? So that’s going to happen to development, and QA needs to sort of come up to speed on making sure that quality is embedded right from the design phase, all the way to how that application code will behave in production and observing it,” he said.  

Hicken agrees, likening it to how no one memorizes phone numbers anymore because our phones store them all. 

“If I was a young person wanting to have a good long-term career, I would be careful not to lean on this crutch too much,” he said.

This isn’t to say that developers will totally forget how to do their jobs and that in 20, 30 years no one will know how to create software without the help of AI, but rather that there will emerge a new class of “casual developers,” which will be different from citizen developers.

Hicken believes this will lead to a more stratified developer community where you’ve got the “OG coders” who know how the computer works and how to talk to it, and also casual developers who know how to ask the computer questions — prompt engineers. 

“I think we are going to have to better define the people that are creating and managing our software, with roles and titles that help us understand what they’re capable of,” he said. “Because if you just say software engineer, that person needs to actually understand the computer. And if you say developer, it might be that they don’t need to understand the computer.”

