Cybersecurity

Using AI to Build More Secure Software

March 10, 2020


Mark Sherman
Technical Director, Cyber Security Foundations, CERT Division
Carnegie Mellon University Software Engineering Institute

Introduction

MITRE's Common Vulnerabilities and Exposures (CVE) list -- nearly 150,000 entries (and growing) -- is a testament to the fact that building software that is resistant and resilient to attack remains difficult. 

The introduction of SecDevOps aims to improve the security of software. However, it has not yet taken advantage of how artificial intelligence could be used for security improvements. 

This presentation will discuss:

  • Why the construction of secure software is a concern beyond the IT industry
  • The elements of a secure software development process
  • How artificial intelligence could be applied to improve that process
  • Additional security issues introduced when constructing artificial intelligence software

https://youtu.be/2V1I8tO6s5E
Watch Mark Sherman's full presentation here

Vulnerability defined 

Many cybersecurity vulnerabilities begin as ordinary bugs. We have analyzed millions of lines of code for commercial and government sponsors, and we have fairly good confidence that, on average, you will find one exploitable vulnerability in every 10,000 lines of code. So a ten-million-line weapon system may have between 300 and 1,400 exploitable vulnerabilities. 

  • Example: the F-35 weapon system (the stealth fighter) has about 35 million lines of code in it. Today’s high-end automobiles have about 100 million lines of code. Even with the best practices, vulnerabilities are possible. 
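The one-in-10,000 average above can be turned into a quick back-of-the-envelope estimate. This is a minimal sketch; the function name is ours, and the rate is simply the average cited in the talk, not a precise prediction for any particular system.

```python
# Back-of-the-envelope estimate of exploitable vulnerabilities from code size,
# using the roughly one-per-10,000-lines average cited above.

def estimate_vulnerabilities(lines_of_code, rate=1 / 10_000):
    """Return the expected number of exploitable vulnerabilities."""
    return lines_of_code * rate

# A ten-million-line weapon system at the cited average rate:
print(estimate_vulnerabilities(10_000_000))   # 1000.0

# The F-35's ~35 million lines of code:
print(estimate_vulnerabilities(35_000_000))   # 3500.0
```

The real figure varies widely with development practices, which is why the talk gives a range rather than a single number.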

How do we address this? 

  • Malware analysis
  • Insider threat
  • Organizational resiliency
  • Intrusion detection

Addressing security should come at the beginning of the life cycle, because that is where almost all problems are introduced and where the mistakes are made. 

Problem: unfortunately, because of the way we prioritize our attention, these vulnerabilities are usually found at the end of the life cycle. We often focus on responding to an attack after it has happened, when it should have been addressed beforehand. 

Bottom line: we spend enormous amounts of money trying to fix vulnerabilities after the fact. 

How AI is used to help 

At the beginning of the life cycle, you have to know what the bad guys are thinking of doing. Sensors that pick up unusual behaviors throughout the network may help reveal how the bad guys intend to come after you. 

From there, you can start building requirements for more secure software. 

The actual mechanics of building software involve a lot of AI as well. 

  • Example: the Spiral project. From a high-level specification of a system, it generates the algorithms and the code, and asks what specialized hardware should be built to improve performance. The ultimate goal is a high-performance system. 

Finding vulnerabilities in source code

This has now become pretty routine in most programming shops. You buy your favorite industrial tool that performs source code analysis, and it spits out all kinds of things that it finds wrong with your code, including vulnerabilities. 

AI has been introduced into these kinds of tasks by treating a programming language like natural language. As a result, it finds new ways to uncover problems in programs. 

  • The challenge with these systems: they generate huge numbers of false positives. Example: 10,000 diagnostics spit out over a typical program, each of which the programmer has to look at and analyze. AI can improve this: by considering the context, it can surface only the diagnostics worth paying attention to, and the rest can be ignored reasonably safely. This has a lot of promise. 
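One way to picture AI-assisted triage of static-analysis output is a scorer trained on past findings that a human labeled real or noise. The sketch below is a deliberately tiny, hypothetical keyword-weight model of our own invention, not any real tool's interface; production systems use much richer features such as code context and data flow.

```python
from collections import Counter

# Hypothetical training data: past diagnostics labeled True (real bug)
# or False (noise).
LABELED = [
    ("buffer overflow in strcpy call", True),
    ("possible null dereference of user input", True),
    ("unused variable tmp", False),
    ("style: line exceeds 80 characters", False),
]

def train(labeled):
    """Count how often each word appears in real vs. noisy findings."""
    true_words, false_words = Counter(), Counter()
    for message, is_real in labeled:
        (true_words if is_real else false_words).update(message.split())
    return true_words, false_words

def score(message, true_words, false_words):
    """Higher score = more likely worth a programmer's attention."""
    return sum(true_words[w] - false_words[w] for w in message.split())

tw, fw = train(LABELED)
diagnostics = ["unused variable x", "strcpy call may cause buffer overflow"]
ranked = sorted(diagnostics, key=lambda m: score(m, tw, fw), reverse=True)
print(ranked[0])  # the overflow finding ranks first
```

The point is the workflow, not the model: the programmer reviews the top of the ranked list instead of wading through all 10,000 diagnostics.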

AI solutions: Fuzzing and hill-climbing techniques 

Fuzzing is a testing technique that has been remarkably effective. It uses AI techniques to generate candidate inputs. 

  • Example: looking at a series of sample files and deducing what structures those files have, then generating inputs that resemble those structures but are “off” in one way or another.

“Hill-climbing” techniques are used to steer those inputs toward crashing the program. This has proven very effective at testing how a system may crash. 

Summary

There is a lot of work being done to combine these two techniques. 

  • Example: government programs concerning planes, transportation, or energy. Perfect security, where nothing can be attacked or fail, is often not achievable. Instead, there is a government notion of “assurance”: mechanical and manual reviews of information that give evidence that the program will work well and contains the right kind of safety features. In the military, this kind of assurance is called “software assurance.” 

  • Example: IBM’s Watson on Jeopardy demonstrated the ability to look at an enormous number of documents and extract the right information. That is the same capability needed to answer the question: “do I believe this system is ready to go or not?”

Bottom line: traditionally, these reviews take an enormous amount of time and require individual experts who can read through everything. This tedium is increasingly being replaced by AI. 

What’s happening next 

In addition to making conventional programs better with AI, we should start thinking about how to make AI programs themselves more resilient. This requires a different kind of programming, but you still need to implement the math correctly. Even when you do, problems can result. 

Example: tiny changes to an input (for instance, altering individual pixels) can cause dramatic misclassifications. You must train your defenses so that the system does not misclassify when adversaries make such moves. Doing this with current technology is exceedingly expensive, and it risks feeding bad images into the systems. 
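The pixel-level fragility described above can be demonstrated with a toy linear classifier and an FGSM-style perturbation: shift every pixel a small amount in the worst-case direction and the prediction flips. All the weights and pixel values here are invented for illustration.

```python
# Toy demonstration of an adversarial perturbation against a linear
# classifier. All numbers are invented for illustration.

def predict(weights, pixels):
    """Linear score: positive = class A, negative = class B."""
    return sum(w * p for w, p in zip(weights, pixels))

def adversarial(weights, pixels, eps):
    """Shift each pixel by eps in the direction that lowers the score
    (the sign of the score's gradient with respect to that pixel)."""
    return [p - eps * (1 if w > 0 else -1) for w, p in zip(weights, pixels)]

weights = [0.5, -0.25, 0.8, 0.1]
image = [0.2, 0.4, 0.1, 0.9]

clean_score = predict(weights, image)            # 0.17: classified as A
attacked = adversarial(weights, image, eps=0.2)  # each pixel moved by 0.2
attacked_score = predict(weights, attacked)      # negative: now classified as B
print(clean_score > 0, attacked_score > 0)       # True False
```

Deep networks are attacked the same way, except the per-pixel direction comes from backpropagated gradients rather than fixed weights; adversarial training, mentioned above, feeds such perturbed examples back into training, which is part of why it is so expensive.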

Current research is trying to make this quantifiable, so as you build your own AI system, you can make precise and reasoned statements about how much risk you are going to take while evaluating the impact of making errors. 

An unfortunate current technical circumstance: people are publishing attacks alongside defenses against those attacks, and each of these defenses has been defeated within two weeks. This tells us that there is still work to be done.

Bottom line: there is a need to build software better; simply defending against attack is often too late and too expensive. Instead, work on building the software itself better. New kinds of attacks are coming in addition to all the old kinds of attacks.

For more information, please visit the Software Engineering Institute website (www.sei.cmu.edu) or send me an email at [email protected].
