AI And Cybersecurity For Financial Services

Join The Discussion | Ai4 Finance 2019

Unlike image recognition or natural language processing, which many consider to be largely solved by AI, machine learning is unlikely ever to be the "silver bullet" for cybersecurity.

However, it is equally true that machine learning, used as a tool for assisting humans in the detection of cyber threats, has been incredibly effective. It is now seen as an almost obligatory part of every good security program.

The same ML techniques used in various other fields – such as regression, classification, clustering, association rule learning, dimensionality reduction, and generative models – have all been effectively applied to cybersecurity.

Further, it is generally acknowledged that ML can be useful across all five security task categories (prediction, prevention, detection, response, and monitoring) as well as at each level of monitoring (network, endpoint, application, user, and process).

While many companies lack the resources to develop effective AI security tools in-house, there is a rapidly growing vendor pool of AI cybersecurity providers.

Here are some examples of how artificial intelligence and machine learning technology can be used by cybersecurity programs within financial services:

1) Automating the analyst: Not only are security analysts costly, but it’s becoming less and less feasible for humans to keep up with the volume of threats that large financial institutions face on a daily basis. Cybersecurity vendors are working on AI versions of human analysts that use machine learning models to detect anomalies. In some cases, the AI is able to resolve the issue itself; otherwise, it passes the event to a human who can investigate further.
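As a minimal sketch of the anomaly-detection idea (not any particular vendor's model), one could flag events whose values deviate sharply from a learned baseline. Here a simple z-score over a hypothetical count of failed logins per hour stands in for a full ML model:

```python
import statistics

def flag_anomalies(hourly_failed_logins, threshold=2.5):
    """Flag hours whose failed-login count deviates sharply from the baseline.

    A z-score stand-in for the anomaly-detection models described above;
    a production system would use richer features and a trained model.
    """
    mean = statistics.mean(hourly_failed_logins)
    stdev = statistics.stdev(hourly_failed_logins)
    return [
        i for i, count in enumerate(hourly_failed_logins)
        if stdev > 0 and abs(count - mean) / stdev > threshold
    ]

# Hypothetical telemetry: a quiet baseline with one burst of failures
counts = [4, 5, 3, 6, 4, 5, 4, 80, 5, 4]
print(flag_anomalies(counts))  # the burst at index 7 is flagged: [7]
```

In practice, flagged events would either be remediated automatically or routed to a human analyst, as the paragraph above describes.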

2) Insider threat detection: Insider threats pose a huge cyber risk to enterprise firms with tens of thousands of employees. Leveraging AI, SOCs can detect insider threats before they cause damage. As an example, CERT researcher Shing-Hon Lau has built a machine learning model that can detect a person's stress level from their typing profile. For example, if a person is making many mistakes and typing very quickly, they are perhaps more stressed than someone making fewer mistakes and typing more slowly. This type of research is still early, but you could imagine using such an AI system to segment potential insider threats based on typing behavior.
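The research above is described only at a high level, so the following is a hypothetical sketch of the idea: reduce a typing session to speed and error-rate features, then score sessions whose profile matches the fast-and-error-prone pattern. The feature definitions and thresholds are illustrative assumptions, not Lau's actual model:

```python
def typing_features(keystrokes, backspaces, seconds):
    """Reduce a typing session to two simple features."""
    speed = keystrokes / seconds          # keystrokes per second
    error_rate = backspaces / keystrokes  # proportion of corrections
    return speed, error_rate

def stress_score(keystrokes, backspaces, seconds,
                 fast_speed=5.0, high_error=0.15):
    """Crude stand-in for a trained model: fast, error-prone typing
    scores higher, mirroring the heuristic described in the text."""
    speed, error_rate = typing_features(keystrokes, backspaces, seconds)
    score = 0.0
    if speed > fast_speed:
        score += 0.5
    if error_rate > high_error:
        score += 0.5
    return score

# A fast, mistake-heavy session vs. a slow, accurate one
print(stress_score(600, 120, 100))  # 1.0 -> candidate for review
print(stress_score(200, 10, 100))   # 0.0
```

A real system would learn these thresholds (or a full classifier) from labeled keystroke data rather than hard-coding them, and the score would feed a risk-segmentation pipeline, not trigger action on its own.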

Join the AI + Cybersecurity discussion at Ai4 Finance!