Artificial Intelligence (AI) and Cybersecurity

A lot of hype still surrounds Artificial Intelligence (AI). Even Gartner covers it in a “Hype Cycle” (its Hype Cycles track new and emerging technologies). As part of a series of webinars we’re running, this blog post looks at AI: what it can mean for cyber security, the pros and cons of deploying it, and how you might use it alongside more traditional deterministic defences. There is plenty to read on the internet about AI, so this is just a taster.

CISOs have probably one of the toughest jobs in IT. They must allow users access to critical data and systems while also protecting that data from compromise and human error. Security teams tend to be budget-constrained, overwhelmed and understaffed, so it is no wonder that they are turning to the allure of AI.

The history of AI

AI research started soon after World War II. Alan Turing, famous for his work in breaking the Enigma encryption system, published a paper on it in 1950. In 2004, John McCarthy of Stanford University wrote a paper entitled “What is Artificial Intelligence?”.

He stated that AI “is the science and engineering of making intelligent machines, especially intelligent computer programs.” Intelligence relates to “the computational part of the ability to achieve goals in the world.”

Some believe AI systems should think and act like humans. Who knows, this may be the ultimate endpoint (hopefully without the dystopian Terminator movie legacy). Others believe AI systems should think and act rationally to enable problem-solving.

AI primer

There are two accepted types of AI: Artificial Narrow Intelligence (ANI), which is trained and focused to perform specific tasks, and Artificial General Intelligence (AGI), where a machine would have intelligence equal to or more advanced than humans.

The AI most in use today is probably ANI. Many cyber security technologies use this approach, combining algorithms with large datasets to automate tasks such as facial recognition.

AI differs from machine learning in that its algorithms try to simulate the reasoning that people use to learn from new information and make decisions. Machine learning is a close relative that allows systems to identify patterns, make decisions, and improve themselves through experience and data.

AI for Cyber security – benefits

With cyberattacks increasing each year and IT systems growing ever more complex, the number of variables an organisation must deal with to maintain an adequate defence is more and more challenging. While traditional signature-based systems can detect 90% of threats, AI could increase that to 95%.

So, AI is starting to help in Threat Hunting, which until recently has been a manual task. The market also refers to this use of AI as User Behaviour Analytics (UBA). UBA looks for patterns in traffic and usage that might result from system abuse, or for behaviour consistent with the presence and spread of malware. AI-based threat hunting comes into its own when dealing with zero-day threats (where no signature file is yet available).
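The statistical idea underneath UBA-style detection can be sketched very simply: learn a baseline of normal activity, then flag observations that deviate too far from it. The snippet below is a minimal illustration using a z-score test; the login counts are invented, and real products use far richer behavioural models.

```python
from statistics import mean, stdev

# Hypothetical hourly login counts for one user (the learned baseline).
baseline = [12, 9, 11, 10, 13, 8, 12, 11, 10, 9]

def is_anomalous(observation, history, threshold=3.0):
    """Flag an observation whose z-score against the baseline
    exceeds the threshold (a crude UBA-style check)."""
    mu, sigma = mean(history), stdev(history)
    return abs(observation - mu) / sigma > threshold

print(is_anomalous(11, baseline))   # typical activity
print(is_anomalous(400, baseline))  # e.g. a credential-stuffing burst
```

A signature-based tool would miss the burst unless it matched a known pattern; the behavioural check flags it purely because it deviates from this user's history, which is why this approach helps with zero-days.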

AI is also being applied to improving incident response times; this can reduce the impact and potential spread of a breach or a ransomware incident by taking smarter decisions more quickly, without the need for human intervention.
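As a toy illustration of that kind of automated response, a playbook might map an alert's risk score straight to a containment action rather than waiting for an analyst. All names and thresholds below are invented for the sketch:

```python
# Toy automated-response playbook (names and thresholds are hypothetical).
def respond(alert):
    """Choose a containment action based on an alert's risk score."""
    score = alert["risk_score"]
    if score >= 90:
        return f"isolate host {alert['host']} from the network"
    if score >= 60:
        return f"disable account {alert['user']} pending review"
    return "log and queue for analyst triage"

print(respond({"risk_score": 95, "user": "alice", "host": "srv-01"}))
```

The point is the shape, not the rules: machine-speed decisions contain spread during the minutes (or hours) before a human can look at the alert.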

Some are looking at AI to help address people shortages. AI can fulfil roles that would traditionally have required significant staffing.

AI for Cyber security – issues

Market analysts – looking at AI tech companies – have described AI as “snake oil”. Some customers believe there to be a “gap between the marketing and the reality”. Some of this is down to expectations, and some to known limitations of AI.

AI can increase detection rates (including for zero-days), but you can get “an explosion of false positives”. Each of these needs to be dealt with (just in case). This leads to the first major issue:

Resources: Companies need to invest a lot of time and money in people and resources like computing power, memory, and data to build and maintain AI systems.

Telemetry: To look for UBA-type issues, AI systems need data. This often means tapping the network. Tapping can be expensive to implement, and it can entail moving huge quantities of data to where it needs processing (and then securing it). Tapping, by its nature, is a localised collection technology, so you may not see the whole network; this can create blind spots and could allow a breach to go undetected.

Finally, AI is a learning technology and depends on telemetry to learn and perform its tasks. If the data used to train the AI is incomplete, learning is incomplete. The nature of AI may not let you see these learning gaps, so this may be an additional risk factor preventing the coveted 100% protection promised.
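The false-positive explosion mentioned above is, at heart, a base-rate problem: when genuine attacks are rare, even an accurate detector produces a queue dominated by false alarms. A quick calculation makes this concrete; every figure below is invented purely for illustration.

```python
# Base-rate illustration: why high detection rates still swamp analysts.
# All figures are hypothetical.
events_per_day = 1_000_000      # network events scanned daily
malicious = 100                 # genuinely malicious events among them
detection_rate = 0.95           # true-positive rate (95%)
false_positive_rate = 0.01      # 1% of benign events wrongly flagged

true_alerts = malicious * detection_rate
false_alerts = (events_per_day - malicious) * false_positive_rate

print(f"True alerts:  {true_alerts:.0f}")    # 95
print(f"False alerts: {false_alerts:.0f}")   # 9999
print(f"Chance a given alert is real: "
      f"{true_alerts / (true_alerts + false_alerts):.2%}")
```

With these (made-up) numbers, fewer than 1% of alerts are real, which is why each detection gain has to be weighed against the triage workload it creates.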


AI does offer benefits, but it needs to be combined with effective systems that offer checks and balances to ensure security issues are picked up. Many recommend combining AI with traditional cyber security systems and hardening your underlying environment.

These measures mean that any alerts from an AI system can be quickly cross-referenced against your inventory, the controls in place, and other indicators of compromise.

AI future

Great things are expected from AI in the future. Artificial Intelligence in cybersecurity is expected to grow at a CAGR of 23.6% from 2020 to 2027, to a $43Bn market, representing about 10% of the market by 2027/28. With this money going into AI, there is no doubt the technology will continue to evolve and improve.