Using AI to Build More Secure Software

Mark Sherman
Technical Director, Cyber Security Foundations, CERT Division
Carnegie Mellon University Software Engineering Institute


MITRE’s Common Vulnerabilities and Exposures (CVE) list, now nearly 150,000 entries and growing, is a testament to the fact that building software that is resistant and resilient to attack remains difficult.

The introduction of SecDevOps aims to improve the security of software. However, it has not yet taken advantage of how artificial intelligence could be used for security improvements.

This presentation will discuss:

  • Why the construction of secure software is a concern beyond the IT industry
  • The elements of a secure software development process
  • How artificial intelligence could be applied to improve that process
  • Additional security issues introduced when constructing artificial intelligence software

Watch Mark Sherman’s full presentation here

Vulnerability defined 

Many cybersecurity bugs result in vulnerabilities. Having analyzed millions of lines of code for commercial and government sponsors, we have fairly good confidence that, on average, you will find one exploitable vulnerability in every 10,000 lines of code. So a ten-million-line weapon system may have between 300 and 1,400 exploitable vulnerabilities.

  • Example: the F-35 weapon system (the stealth fighter) contains about 35 million lines of code. Today’s high-end automobiles have about 100 million. Even with the best practices, vulnerabilities are possible.
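The back-of-envelope arithmetic above can be sketched in a few lines. The 1-per-10,000-LOC density is the average cited above; the low and high multipliers are illustrative assumptions chosen to reproduce the 300–1,400 range.

```python
# Rough estimate of exploitable vulnerabilities from code size alone.
# Density (~1 per 10,000 LOC) comes from the analysis described above;
# the low/high multipliers are illustrative assumptions for a range.

def estimate_vulns(lines_of_code, density=1 / 10_000, low=0.3, high=1.4):
    """Return a (low, expected, high) vulnerability estimate."""
    expected = lines_of_code * density
    return (expected * low, expected, expected * high)

lo, mid, hi = estimate_vulns(10_000_000)   # a 10-million-LOC weapon system
print(round(lo), round(mid), round(hi))    # 300 1000 1400
```

The same function applied to a 100-million-LOC automobile predicts on the order of 10,000 exploitable vulnerabilities, which is why "best practices alone" are not enough.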

How do we address this? 

  • Malware analysis
  • Insider threat
  • Organizational resiliency
  • Intrusion detection

Addressing security should come at the beginning of the life cycle, because that is where almost all problems, and the mistakes that cause them, are introduced.

Problem: unfortunately, because of the way we prioritize our attention, these vulnerabilities are usually found at the end of the life cycle. We often focus on how to respond to an attack after it has happened, when it should be addressed beforehand.

Bottom line: we spend enormous amounts of money trying to fix vulnerabilities after the fact. 

How AI is used to help 

At the beginning of the life cycle, you have to know what the bad guys are thinking of doing. Unusual sensor readings and behaviors throughout the network may reveal how the bad guys intend to come after you.
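One minimal way to sketch "unusual behaviors throughout the network" is a simple statistical outlier check: flag anything far outside the baseline. This is only a stand-in for the richer AI-based detection the talk alludes to; all numbers and names here are illustrative.

```python
# Minimal sketch: flag network readings more than `threshold` standard
# deviations from a learned baseline. A real system would use far richer
# features and models; the data below is purely illustrative.
from statistics import mean, stdev

def unusual_readings(baseline, current, threshold=3.0):
    """Return readings in `current` that deviate sharply from `baseline`."""
    mu, sigma = mean(baseline), stdev(baseline)
    return [x for x in current if abs(x - mu) > threshold * sigma]

baseline = [100, 102, 98, 101, 99, 103, 97, 100]  # normal requests/minute
today = [101, 99, 250, 100]                        # one burst stands out
print(unusual_readings(baseline, today))           # [250]
```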

From there, you can start building requirements for more secure software. 

The actual mechanics of building software also involve a lot of AI.

  • Example: the Spiral project. From a high-level specification of a system, it generates the algorithms and the code, asking: what specialized hardware should be built to improve performance? The ultimate goal is to build a high-performance system.

Finding vulnerabilities in source code

This has now become pretty routine in most programming shops: you buy your favorite industrial tool that performs source code analysis, and it spits out all kinds of things it finds wrong with your code, including vulnerabilities.

AI has been introduced into these kinds of tasks by treating a programming language like natural language. As a result, it can find problems in programs in new ways.

  • The challenge with these systems: they generate huge numbers of false positives. Example: 10,000 diagnostics spit out over a typical program, each of which a programmer has to examine and analyze. AI can improve this: by considering the context, it can surface only the diagnostics worth paying attention to, while the rest can be ignored reasonably safely. This has a lot of promise.
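The triage idea can be sketched as scoring each diagnostic by contextual features and surfacing only the top candidates. A production system would learn these weights from labeled audit data; the feature names, weights, and findings below are illustrative assumptions, not any real tool's output.

```python
# Toy sketch of AI-assisted diagnostic triage: score each static-analysis
# finding by contextual features, then surface only the likeliest true
# positives. Weights and features here are illustrative assumptions.

def triage(diagnostics, top_n=2):
    """Rank diagnostics so analysts review probable true positives first."""
    weights = {"tainted_input": 2.0, "reachable": 1.5, "in_test_code": -2.0}
    def score(d):
        return sum(w for feat, w in weights.items() if d.get(feat))
    return sorted(diagnostics, key=score, reverse=True)[:top_n]

findings = [
    {"id": "CWE-89 at db.c:42", "tainted_input": True, "reachable": True},
    {"id": "CWE-476 at util.c:7", "reachable": True},
    {"id": "CWE-120 at mock.c:3", "in_test_code": True},
]
for d in triage(findings):
    print(d["id"])   # the two findings outside test code rank highest
```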

AI solutions: Fuzzing and hill-climbing techniques 

Fuzzing is a line of research that has been remarkably effective. It uses AI techniques to generate what the inputs to a program should be.

  • Example: looking at a series of sample files and deducing what structures those files have, then generating inputs that look like those structures but are “off” in one way or another.

“Hill-climbing” techniques are then used to try to crash the program. This has proven very effective at discovering how a system may crash.
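A minimal sketch of fuzzing guided by hill climbing: mutate an input, keep the mutant only when it improves a feedback score (a toy stand-in for code coverage), and report any input that crashes the target. The target program and its buggy header check are hypothetical, invented for illustration.

```python
# Mutation fuzzing with hill climbing: keep mutants that improve a
# feedback signal until one triggers a crash. Target and signal are
# hypothetical stand-ins for a real program and coverage instrumentation.
import random

def target(data):
    # Hypothetical parser with a latent bug on a specific 2-byte header.
    if data[:2] == b"FU":
        raise ValueError("crash: unhandled header")
    return sum(data)

def feedback(data):
    # Toy "coverage" signal: how many header bytes the input matches.
    return sum(1 for a, b in zip(data, b"FU") if a == b)

def fuzz(seed=b"AA", rounds=20_000, rng_seed=0):
    rng = random.Random(rng_seed)
    best, best_score = bytearray(seed), feedback(seed)
    for _ in range(rounds):
        mutant = bytearray(best)
        mutant[rng.randrange(len(mutant))] = rng.randrange(256)
        try:
            target(bytes(mutant))
        except ValueError:
            return bytes(mutant)               # crashing input found
        if feedback(mutant) > best_score:      # hill climbing: keep gains
            best, best_score = mutant, feedback(mutant)
    return None

crash = fuzz()
print(crash)   # a crashing input beginning with b"FU"
```

The key design choice is that a random mutation is discarded unless it climbs toward higher feedback, which is what lets the search reach deep, input-dependent crashes that purely random inputs almost never hit.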


There is a lot of work being done to combine these two techniques. 

  • Example: government programs concerning planes, transportation, or energy. Perfect security, where nothing can be attacked or fail, is often not achievable. Instead, there is a government notion of “assurance”: mechanical and manual reviews of information that give evidence that a program will work well and contains the right kind of safety features. In the military, this kind of assurance is called “software assurance.”
  • Example: IBM’s Watson on Jeopardy: it could look at an enormous number of documents and extract the right information. The same capability can help answer the question: “Do I believe this system is ready to go or not?”

Bottom line: traditionally, these reviews take an enormous amount of time and require individual experts who can read through everything. This tedium is increasingly being replaced by AI. 

What’s happening next 

In addition to making conventional programs better with AI, we should start thinking about how to make AI programs themselves more resilient. This requires a different kind of programming, but you still need to implement the math correctly. Even when you do, problems can result.

Example: small changes to an input (for instance, changing individual pixels) can cause dramatic misclassifications. You have to train your defenses so that the system will not misclassify when adversaries are trying to move in. Doing this with current technology is exceedingly expensive, and it risks feeding bad images into the systems.
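The single-pixel problem can be illustrated with a tiny linear classifier: nudging the one most influential input feature flips the predicted class. The weights and "image" below are invented for illustration and are not from any real model.

```python
# Minimal sketch of an adversarial example: for a linear classifier, a
# small change to the single most influential "pixel" flips the class.
# Weights and inputs are illustrative, not from any real model.

def predict(weights, pixels, bias=0.0):
    """Linear classifier: class 1 if the weighted sum is positive."""
    s = sum(w * p for w, p in zip(weights, pixels)) + bias
    return 1 if s > 0 else 0

weights = [0.2, -0.1, 0.9, 0.05]
image = [0.5, 0.5, 0.1, 0.5]           # weighted sum 0.165 -> class 1

# Attack: nudge the highest-weight pixel in the score-lowering direction.
i = max(range(len(weights)), key=lambda j: abs(weights[j]))
adversarial = list(image)
adversarial[i] -= 0.2                   # one small single-pixel change

print(predict(weights, image))          # 1
print(predict(weights, adversarial))    # 0
```

Adversarial training, the defense mentioned above, amounts to generating many such perturbed inputs and retraining on them with the correct labels, which is part of why it is so expensive.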

Current research is trying to make this quantifiable, so that as you build your own AI system, you can make precise and reasoned statements about how much risk you are taking and what the impact of making errors would be.

An unfortunate current technical circumstance: people are publishing attacks and sharing defenses against those attacks, yet each of these defenses has been defeated within two weeks. This tells us that there is still work to be done.

Bottom line: there is a need to build software better; simply defending against attack is often too late and too expensive. Instead, work on building the software itself better. New kinds of attacks are coming, in addition to all the old kinds of attacks.

For more information, please visit the Software Engineering Institute website or send me an email at [email protected].
