AI Fuzzing: A Key Component of Cybersecurity

October 6, 2020

AI fuzzing may sound like something cute, but it’s a key component in testing applications and systems for vulnerabilities. The general idea of fuzzing is not new: it is an automated testing technique that feeds unexpected or invalid data to a program to see if – or how – it fails, and the term has been part of computer science since the late 1980s. Fuzzing has entered a new era, however, with the advent of Artificial Intelligence (AI).
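
At its simplest, a fuzzer is just a loop that throws malformed input at a piece of code and watches for unexpected failures. The sketch below illustrates that core idea; the parse_header target and its planted bug are hypothetical stand-ins, not code from any of the tools discussed here.

    // Minimal sketch of the core fuzzing idea: keep feeding a target
    // unexpected or invalid data and record any input that makes it fail
    // in an unplanned way. The target and its bug are hypothetical.
    #include <cstdint>
    #include <cstdio>
    #include <random>
    #include <stdexcept>
    #include <vector>

    // Hypothetical code under test. Well-behaved rejection throws
    // runtime_error; the planted "crash" models a defect the developer
    // never anticipated.
    void parse_header(const std::vector<uint8_t>& data) {
        if (data.size() < 4) throw std::runtime_error("too short");
        if (data[0] == 0xFF && data[1] == 0xFF)
            throw std::logic_error("simulated crash");  // the hidden bug
    }

    int main() {
        std::mt19937 rng(12345);
        std::uniform_int_distribution<int> byte(0, 255);
        std::uniform_int_distribution<size_t> length(0, 64);

        for (int i = 0; i < 1000000; ++i) {
            // Random, mostly invalid input -- not hand-written test cases.
            std::vector<uint8_t> input(length(rng));
            for (auto& b : input) b = static_cast<uint8_t>(byte(rng));

            try {
                parse_header(input);
            } catch (const std::runtime_error&) {
                // Graceful rejection is expected and uninteresting.
            } catch (const std::logic_error& e) {
                // The kind of failure fuzzing is designed to surface; a real
                // fuzzer would save the input as a reproducer for developers.
                std::printf("failure after %d inputs: %s (input size %zu)\n",
                            i, e.what(), input.size());
                return 1;
            }
        }
        std::puts("no failures found in this run");
        return 0;
    }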

Traditional fuzzing was often difficult to set up and relied heavily on manual procedures, making the process of identifying issues time-consuming. AI fuzzing takes the basic tenets of fuzzing and applies artificial intelligence or machine learning to deliver testing that is continuous, scalable, and more efficient and effective.

“AI-based tools can identify potential attack options and generate probable test cases. Once a test case offers a promising path to explore, the new tool will follow suit and delve deeper to see if problems in one area of the application lead to exploitable vulnerabilities elsewhere,” reports Ensar Seker for Towards Data Science.
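
One rough way to picture that feedback loop – not Seker’s description of any particular product – is a mutation loop that keeps “promising” inputs and digs deeper from them. In the sketch below, coverage_signal, run_target, and mutate are hypothetical placeholders for real instrumentation, a real target, and a real mutation engine; an ML-assisted fuzzer would additionally use a model to predict which inputs are worth pursuing.

    // Sketch of a feedback-driven fuzzing loop: inputs that expose new
    // behaviour in the target are kept and mutated further, so later test
    // cases explore deeper along the paths they opened up.
    #include <cstdint>
    #include <random>
    #include <set>
    #include <vector>

    using Input = std::vector<uint8_t>;

    // Placeholder: real tools get this from compile-time coverage
    // instrumentation; ML-assisted fuzzers may also score inputs with a
    // learned model that predicts which ones reach unexplored code.
    size_t coverage_signal(const Input& in) { return in.empty() ? 0 : in[0] % 16; }

    // Placeholder: call the code under test; crashes and sanitizer reports
    // would be detected and logged here.
    void run_target(const Input&) {}

    Input mutate(Input in, std::mt19937& rng) {
        std::uniform_int_distribution<int> byte(0, 255);
        if (in.empty()) in.push_back(0);
        std::uniform_int_distribution<size_t> pos(0, in.size() - 1);
        in[pos(rng)] = static_cast<uint8_t>(byte(rng));  // flip one byte
        return in;
    }

    int main() {
        std::mt19937 rng(42);
        std::vector<Input> corpus = {Input{0}};  // seed corpus
        std::set<size_t> seen;                   // behaviours observed so far

        for (int i = 0; i < 100000; ++i) {
            // Derive a new test case from a known-promising input.
            std::uniform_int_distribution<size_t> pick(0, corpus.size() - 1);
            Input candidate = mutate(corpus[pick(rng)], rng);

            run_target(candidate);

            // If the candidate exposed behaviour not seen before, keep it
            // and explore deeper from it in later iterations.
            if (seen.insert(coverage_signal(candidate)).second)
                corpus.push_back(candidate);
        }
        return 0;
    }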

Several companies, including Google, Microsoft, and Synopsys, have developed AI fuzzing software. Google’s tool, known as ClusterFuzz, is an open source program accessible through OSS-Fuzz to anyone who wishes to use it.

According to Google, as of February 2019, ClusterFuzz “has found more than 16,000 bugs in Chrome and more than 11,000 bugs in over 160 open source projects integrated with OSS-Fuzz.”

Microsoft made its fuzzing software available as an open source tool in mid-September. Known as Project OneFuzz, the tool replaces Microsoft Security Risk Detection, which was discontinued in June.

“Enabling developers to perform fuzz testing shifts the discovery of vulnerabilities to earlier in the development lifecycle and simultaneously frees security engineering teams to pursue proactive work,” states Microsoft.

AI fuzzing can also benefit the software development lifecycle, which has grown increasingly agile.

“Because [the agile software development process] often takes many iterative cycles, advanced testing methods are not usually given high priority,” says Robert Lemos for DarkReading. With multiple companies offering AI fuzzing software options, fuzz testing may be more easily integrated into the software development lifecycle.

The team behind Google’s ClusterFuzz agrees. “For software projects written in an unsafe language such as C or C++, fuzzing is a crucial part of ensuring their security and stability.”
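
For C and C++ projects, engines such as LLVM’s libFuzzer (one of the fuzzing engines used by ClusterFuzz and OSS-Fuzz) generate and mutate the test inputs themselves; the project only supplies a small harness that forwards each input to the code under test. Below is a minimal harness in that style; my_library_parse is a hypothetical stand-in for a real project API.

    // Minimal libFuzzer-style harness: the engine repeatedly calls this
    // entry point with generated byte buffers, and crashes or sanitizer
    // errors are reported back with the input that triggered them.
    #include <cstddef>
    #include <cstdint>

    // Hypothetical code under test (normally declared in the project's headers).
    extern "C" int my_library_parse(const uint8_t* data, size_t size);

    // Entry point recognised by libFuzzer; typically built with something like
    //   clang++ -g -fsanitize=fuzzer,address harness.cc my_library.a
    extern "C" int LLVMFuzzerTestOneInput(const uint8_t* data, size_t size) {
        my_library_parse(data, size);
        return 0;  // libFuzzer expects 0; other return values are reserved
    }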

Though AI fuzzing may benefit the software and system testing process, like many emerging technologies it can also be used by malicious actors. The same information that cybersecurity professionals use to identify vulnerabilities may be sold to cybercriminals, allowing them to exploit those same vulnerabilities, says Seker.

Want to learn about cybersecurity? Capitol Tech offers bachelor’s, master’s and doctorate degrees in cyber and information security. Many courses are available both on campus and online. To learn more about Capitol Tech’s degree programs, contact admissions@captechu.edu.