Technology, in all its forms, functions and applications, has always been an evolving landscape, but the pace of recent advances can only be described as groundbreaking. Emerging technologies such as Artificial Intelligence (AI) have the potential to reshape industries and revolutionise our daily lives. The implications for cyber security, however, cannot be overstated: these technologies present new security challenges as well as new solutions.
The Rise of AI
Artificial Intelligence refers to the simulation of intelligence in machines: systems programmed not only to fulfil many of the more mundane tasks a human would otherwise undertake, but also to display some level of intelligent decision making. This is in contrast to pure automation, which has been around since the first industrial era and, let’s be honest, changed very little for about 150 years. The last decade, however, has proven pivotal. As disruptive to the status quo as the steam engine was to the mule or the water wheel, AI has made significant inroads into sectors including healthcare, finance and manufacturing. Its ability to analyse vast datasets, make decisions and perform tasks at unprecedented speeds makes it a game-changer.
Generative AI models such as OpenAI’s ChatGPT and DALL-E, Google Gemini (formerly Bard) and Microsoft Copilot, all great tools for creative and routine tasks that enable new levels of productivity, have seen rapid adoption. Meanwhile, the more frightening/amazing transformative AI sector focuses on technologies such as Artificial General Intelligence (AGI). AGI seeks to truly replicate (or at least emulate) human abilities such as ‘common sense’, abstract thinking, and complex gross and fine motor skills in response to stimuli it has not specifically been trained to deal with. Along with other, as yet undefined, abilities, we can absolutely say that we are living in an era of science fiction fast becoming science fact.
Indeed, there seems to be an AI research stream tailored to every profession and interest, with a vast multitude of sectors now awash with terms such as Machine Learning, Deep Learning and Natural Language Processing, to name just a few. While we won’t go into what they all mean here, please set aside a good rainy afternoon to research how they all interconnect… and then you’ll find you need ten times as much time to understand what you just found!
Before we get over-excited at the thought of the endless possibilities these technologies offer, let’s focus on the ‘real world’ for a moment. As AI becomes more integrated into our lives, it introduces new vulnerabilities, enabling and empowering cyber criminals and hackers to exploit weaknesses and conduct increasingly sophisticated, targeted attacks. For instance, AI-driven malware can adapt and evolve, making it challenging for traditional, largely signature-based cyber security measures to detect and mitigate threats. This level of sophistication makes it crucial to develop advanced security measures, particularly when critical data may be exposed to such previously unforeseen risks.
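To make the signature problem concrete, here is a toy sketch in Python (standard library only, with random bytes standing in for a packed payload; an illustration, not real malware analysis). An exact-hash signature stops matching after a single-byte mutation, while a simple behavioural trait such as payload entropy survives the change:

```python
import hashlib
import math
import os
from collections import Counter

# Hypothetical 'known bad' signature database: hashes of previously seen samples.
known_bad_hashes = set()

def record_sample(payload: bytes) -> None:
    known_bad_hashes.add(hashlib.sha256(payload).hexdigest())

def signature_match(payload: bytes) -> bool:
    """Classic signature detection: an exact hash lookup."""
    return hashlib.sha256(payload).hexdigest() in known_bad_hashes

def shannon_entropy(payload: bytes) -> float:
    """Bits per byte; packed or encrypted payloads sit close to 8.0."""
    counts = Counter(payload)
    return -sum((c / len(payload)) * math.log2(c / len(payload))
                for c in counts.values())

original = os.urandom(4096)      # stand-in for a packed malicious payload
record_sample(original)

mutated = bytearray(original)
mutated[0] ^= 0xFF               # a one-byte 'polymorphic' tweak
mutated = bytes(mutated)

print(signature_match(original))        # True: the known sample is caught
print(signature_match(mutated))         # False: the trivial variant slips past
print(shannon_entropy(mutated) > 7.5)   # True: a behavioural trait survives
```

Real defences layer many such behavioural signals (and, increasingly, ML classifiers) precisely because static signatures are so brittle against adaptive threats.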
For Generative AI and the associated Large Language Models (LLMs) to improve, they need data, and lots of it. This reliance on data introduces privacy concerns: AI algorithms often process and store large volumes of data, including personal and sensitive information, and protecting that data becomes paramount to ensure it does not fall into the wrong hands. Safeguarding data privacy is not just about securing the data itself but also the processes and systems that handle it.
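One common building block for securing those processes is keyed pseudonymisation, so that raw identifiers never reach a training pipeline or log store in the first place. A minimal sketch using only Python’s standard library (the key value and record fields here are hypothetical; in practice the key would live in a key-management service):

```python
import hmac
import hashlib

# Hypothetical secret; in production this would be fetched from a KMS/HSM.
PSEUDONYM_KEY = b"replace-with-a-managed-secret"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a stable, keyed token.

    The same input always maps to the same token, so joins across records
    still work, but the mapping cannot be reversed without the key.
    """
    return hmac.new(PSEUDONYM_KEY, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

record = {"email": "jane.doe@example.com", "purchase_total": 42.50}
safe_record = {**record, "email": pseudonymise(record["email"])}
print(safe_record)   # the email is now an opaque token
```

Plain (unkeyed) hashing is not enough here, since common identifiers such as email addresses can be recovered by brute force; the secret key is what makes the tokens resistant to that.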
Furthermore, AI’s versatility means it can be used for good (for those interested, simply search the internet for stories about the AlphaFold ML/deep learning algorithm) as well as for nefarious purposes, automating attacks such as phishing campaigns while making them ever more convincing and difficult to detect. As AI technologies become more accessible, addressing misuse becomes a critical challenge.
Mitigating AI-Related Cyber Security Risks
Holistically, while the categories of cyber threat are no more diverse than those already widely recognised, and the mitigations are certainly nowhere near as exciting as the threats themselves, the automation inherent in AI means that attack volumes can be far greater, and the speed at which vulnerabilities can be discovered and exploited can be alarmingly rapid.
As is the case with many cyber strategies, getting the fundamentals in place is essential, and prevention is the best first step towards establishing and maintaining a robust security posture.
- Regular updates and patch management for systems and devices are essential. Timely updates fix vulnerabilities and protect against known threats, keeping systems resilient to evolving cyber threats.
- Robust authentication methods, such as multi-factor authentication (MFA), safeguard against unauthorised access to systems and sensitive data, making it far harder for unauthorised individuals to reach critical systems.
- Data encryption is crucial for protecting data from interception or theft. Strong encryption ensures that even if data is compromised, it remains unreadable to unauthorised parties (a minimal sketch follows this list).
- Regular security audits and penetration testing identify vulnerabilities and weaknesses that need addressing. These proactive measures help organisations stay one step ahead of potential threats.
- AI model transparency and explainability are also key to ensuring security. This is especially pertinent to organisations and systems critical to national security, or those that must fulfil some form of ‘fail safe’ guarantee (such as fire suppression or flood defences). We need to be able to understand how the model would normally behave in a given situation versus how it is behaving now, and what has caused the difference. This understanding is crucial when defending against manipulation of training data or ingested datasets (a behavioural-drift sketch follows the note below).
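Picking up the encryption bullet above: a hedged sketch using the third-party Python cryptography package’s Fernet recipe, which provides authenticated symmetric encryption. The plaintext is a placeholder, and in practice the key would be held in a key-management service rather than generated inline:

```python
# Requires: pip install cryptography
from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()        # in production, store and rotate via a KMS
f = Fernet(key)

token = f.encrypt(b"example sensitive record")   # ciphertext plus integrity check
print(f.decrypt(token))                          # original bytes, given the right key

try:
    Fernet(Fernet.generate_key()).decrypt(token)  # an attacker with the wrong key
except InvalidToken:
    print("Ciphertext is unreadable without the correct key")
```

Fernet is a deliberately conservative recipe (symmetric encryption with built-in integrity checking), which makes it a sensible default when you do not have a cryptographer on hand.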
Note: there are already companies providing explainability and transparency in their products as a service. They recognise that organisations responsible for decision making in support of national security or critical infrastructure assurance will likely want to see this evidenced before giving the official tick for use in any operational effort.
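One way teams put the ‘normal versus current behaviour’ question into practice is distribution-drift monitoring on model outputs. Below is a toy sketch, assuming numpy and synthetic scores, using the population stability index (PSI); the 0.25 threshold mentioned is a common rule of thumb rather than a formal standard:

```python
import numpy as np

def population_stability_index(baseline: np.ndarray,
                               current: np.ndarray,
                               bins: int = 10) -> float:
    """Compare two score distributions; values above ~0.25 are often read as major drift."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    p = np.histogram(baseline, bins=edges)[0] / len(baseline)
    q = np.histogram(current, bins=edges)[0] / len(current)  # out-of-range scores are ignored in this sketch
    p, q = np.clip(p, 1e-6, None), np.clip(q, 1e-6, None)    # avoid log(0)
    return float(np.sum((q - p) * np.log(q / p)))

rng = np.random.default_rng(0)
baseline_scores = rng.normal(0.5, 0.1, 10_000)   # how the model 'normally' behaves
current_scores = rng.normal(0.65, 0.1, 10_000)   # behaviour after, say, suspected data poisoning

print(population_stability_index(baseline_scores, baseline_scores[:5_000]))  # ~0: stable
print(population_stability_index(baseline_scores, current_scores))           # large: investigate
```

Output drift alone does not prove tampering, of course, but it gives defenders a measurable trigger for the deeper explainability work described above.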
The Future of AI and Cyber Security
As AI continues to advance, the landscape of cyber security will undoubtedly evolve alongside it. AI-driven security solutions will play a crucial role in safeguarding digital assets. However, cyber criminals will likely become more sophisticated in their use of AI, making attacks harder to detect and mitigate.
Governments and regulatory bodies will likely introduce more stringent regulations around AI and cyber security, and organisations will need to navigate these regulations to ensure compliance without stifling innovation, which is occurring across the globe and, in some cases, completely unregulated. Publications such as ‘AI regulation: a pro-innovation approach’ (Department for Science, Innovation and Technology and Office for Artificial Intelligence, GOV.UK), the ‘National AI Strategy’ (GOV.UK), the ‘Defence Artificial Intelligence Strategy’ (GOV.UK) and the ‘Guidelines for secure AI system development’ (NCSC.GOV.UK) provide helpful, if surface-scratching, guidance on AI assurance and policy making.
Collaboration, and leveraging the knowledge of cyber security experts, will be key for organisations addressing potential threats to data and information security. Staying ahead of evolving threats and technologies will require a collective effort.
One of the key weaknesses identified at the last Defence conference on AI (AI Fest 2023) is that the current security workforce has neither the understanding nor the means to access relevant training, and so cannot upskill with adequately robust qualifications quickly enough to match the current speed of progress in the AI sector. This applies to both sides of the coin: AI being developed for ‘good’ and, conversely, ‘not so good’ (or just downright evil) purposes. The gap looks likely to widen until we can either limit the speed of development of ‘shadow’ AI models, such as WormGPT and DarkBART (to name just two developed without ethical standards), or significantly upskill our workforce at pace to counter the threat through improved understanding and suitably robust policy.
Conclusion
Cyber and information security specialists now find themselves operating under a more dynamic and rapidly evolving set of challenges than ever before. Clearly, then, there is a strand of work around the concept of ‘if you can’t beat them, join them’: we need to identify, and adopt for defence, the same AI-driven efficiencies that are already being leveraged for more corrupt purposes.
Emerging technologies like AI hold incredible promise for the future, but they also present new challenges for cyber security. As we continue to integrate AI into our daily lives and critical infrastructure, it’s imperative that we remain vigilant and proactive in addressing the associated risks.
Further Reading:
AI and cyber security: what you need to know: https://www.ncsc.gov.uk/guidance/ai-and-cyber-security-what-you-need-to-know (NCSC)
Guidelines for secure AI system development: https://www.ncsc.gov.uk/collection/guidelines-secure-ai-system-development (NCSC)