Artificial intelligence (AI) is becoming influential in many aspects of our lives – from how we live to how we work – and how we secure and safeguard against potential threats is no different. The World Economic Forum predicts that by 2025, AI and automation will create 12 million more jobs than they displace, demonstrating the growth potential of the sector. However, power can also mean problems, and the application of AI in cybersecurity can be just as frightening as it is ground-breaking.
As the Practice Director of Cybersecurity at Lorien, I’m fortunate enough to see and experience the biggest industry trends as they take place, from client demands to candidate desires. In this blog, I’ll be sharing my thoughts on how AI is taking cybersecurity by storm, and how this presents both threats and opportunities for the sector.
Opportunities for AI in cybersecurity
Many businesses have identified the value of AI, with AI usage within companies growing by 270% in just four years. 72% of business leaders believe AI will be a fundamental business advantage in the near future, suggesting that investment in the sector will continue to grow – and cybersecurity, given its critical place in the tech ecosystem, is one area likely to benefit.
Within cybersecurity, AI has proven successful at establishing patterns in data, enabling security systems to learn from past incidents such as hacks and cyberattacks and to reduce incident response times – all without human commands. AI is also being developed to boost cybersecurity in the wake of increased homeworking, with threat detection a core use case.
An AI-based network will have the tools to identify potential threats and flag abnormal data within seconds, protecting a network from a potential cyber-attack. Additionally, advanced AI programmes offer proactive security and resilience, promising that businesses remain operational even if a cyber-attack occurs. Finally, autonomous AI can be self-corrective, reducing the need for human intervention and the likelihood of accidental error.
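To make the idea of flagging "abnormal data within seconds" concrete, here is a minimal, purely illustrative sketch. It uses a simple statistical baseline (z-score over past traffic) rather than a real AI model – production systems use far richer learning techniques – but the underlying principle of learning what "normal" looks like from past data, then flagging deviations, is the same. All names and numbers here are invented for illustration.

```python
# Illustrative sketch: flag anomalous network activity against a learned
# baseline. A real AI-based system would use richer models, but the idea
# of learning "normal" from past observations is the same.

from statistics import mean, stdev

def flag_anomalies(baseline, observed, threshold=3.0):
    """Return observed values deviating from the baseline mean by more
    than `threshold` standard deviations."""
    mu = mean(baseline)
    sigma = stdev(baseline)
    return [x for x in observed if abs(x - mu) / sigma > threshold]

# Baseline: requests per minute seen during normal operation (hypothetical).
normal_traffic = [98, 102, 100, 97, 101, 99, 103, 100, 98, 102]

# New observations: one value is a sudden spike (a possible attack).
incoming = [101, 99, 5000, 100]

print(flag_anomalies(normal_traffic, incoming))  # -> [5000]
```

The spike stands out because it sits thousands of standard deviations from the learned mean – the statistical analogue of the pattern-matching an AI-driven threat detector performs at scale.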
In this sense, AI application in cybersecurity can create safer, stronger, and more proactive systems for safeguarding against cyber threats.
Challenges for AI in cybersecurity
Whilst AI can assist in bridging gaps in a network to prevent a cyberattack, it can also be used by cybercriminals to identify vulnerable areas to target during an attack. For example, AI can be trained to identify particular behaviours, enabling it to convince potential victims that a scam email or phone call is genuine – a tactic already used by cybercriminals that would be made far easier by the weaponization of AI.
Cybercriminals also exploit AI’s efficiency for phishing tasks such as impersonation and monitoring. AI bots can trick people into clicking links that open their network to hacking, or monitor senior executives’ behaviour for precise spearphishing. Spearphishing, a more advanced form of phishing, has a higher success rate because of the volume of data and content AI can search and compile to make its messages appear legitimate.
Hackers may also use AI to find vulnerabilities within a company’s architecture. Traditional IT systems are typically targeted through cyber doors left open, allowing viruses to infiltrate the network. AI systems can be targeted in the same way – but the risk is much greater, because data is at the core of what makes an AI system function. An attack on a business’s AI system can lead to stolen or infected data, and to a compromised system that makes errors as a result. Many organisations planning to integrate AI into their infrastructure as a security measure don’t consider that the AI itself could be vulnerable to attack.
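The point that "data is at the core of AI" can be illustrated with a small, hypothetical sketch of data poisoning. If an attacker can inject extreme values into the data a detector learns "normal" from, the learned baseline widens until a genuine attack no longer looks abnormal. This toy z-score detector and its numbers are invented for illustration; real poisoning attacks on production models are subtler, but the mechanism is the same.

```python
# Illustrative sketch of data poisoning: injecting extreme values into the
# training baseline widens the detector's notion of "normal", so a real
# attack spike is no longer flagged.

from statistics import mean, stdev

def is_anomalous(baseline, value, threshold=3.0):
    """True if `value` deviates from the baseline mean by more than
    `threshold` standard deviations."""
    mu, sigma = mean(baseline), stdev(baseline)
    return abs(value - mu) / sigma > threshold

clean_baseline = [98, 102, 100, 97, 101, 99, 103, 100]
poisoned_baseline = clean_baseline + [4000, 4500, 5000]  # attacker-injected

spike = 5000  # a genuine attack spike

print(is_anomalous(clean_baseline, spike))     # -> True: detected
print(is_anomalous(poisoned_baseline, spike))  # -> False: masked by poisoning
```

This is why a compromised AI system doesn’t just leak data – it quietly makes the wrong decisions while appearing to work normally.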
The weaponization of AI
Recently, the weaponization of AI in cybersecurity has become a hot topic in my market. It goes without saying that there are a lot of opportunities for this technology in the defence space – it is more precise, faster, and cheaper than sheer manpower. It could be used for recon missions and to support teams on the ground, minimising risks and ultimately saving lives. And it could eliminate the margin for human error, reducing inadvertent casualties and hesitation.
But it’s also important to highlight the vast risks for this technology, and I couldn’t write an article on the future of AI in cybersecurity without touching on this.
Firstly, AI in a defence setting is just as vulnerable to cyberattacks and hacking as in any industry – but in a warzone, the implications are catastrophic. Secondly, the weaponization of AI could make conflicts more dangerous through desensitisation: when operations are easier to mount because of AI, it reduces the time we take to think, and replaces human compassion with logic. Lastly, the use of autonomous or AI-driven weapons must be governed by a common code of conduct. But it doesn’t appear that we are any closer to achieving that, with many countries yet to commit to a clear position on its use. Global agreement needs to be reached quickly to determine the future of autonomous and AI-powered weapons, but with changing views, strong opinions, and many players to consider – across the public and private sectors as well as internationally – there is still a lot of work to be done.
For my part, I think we will eventually see AI and automation used more widely in the defence industry, but for now, how we think about cybersecurity remains closer to home.
At Lorien, we align ourselves with technology hotspots and ensure we stay up to date with the latest cybersecurity trends and information. For more information on the future of cybersecurity, please reach out to me here.