A call to arms

    As ‘bad’ AI transforms cybercrime, is ‘good’ AI the best defence?

    Recent research from Cybersecurity at MIT Sloan (CAMS) and Safe Security analysed data from 2 800 ransomware attacks and found that four out of every five – 80% of them – were powered by artificial intelligence (AI). As is true in so many fields, AI is revolutionising the big business of cybercrime, accelerating the speed, scale and sophistication of cyberattacks.

    Just a cursory glance at the report (or, if you prefer, a Copilot-generated AI summary) reveals the extent of the problem. In one example, cybercriminals use AI to ‘generate highly personalised phishing emails that mimic legitimate communication’, analysing your digital behaviour, online activity and email/browser history to lure you in. (The report noted that AI-generated phishing emails alone have increased by more than 1 000% since 2022.)

    You name it, crooks using AI can do it, from AI-generated voice cloning to AI-accelerated password cracking… and hackers are even manipulating legitimate AI systems by feeding them deceptive data to cause incorrect outputs. Added to that, the attack surface is expanding. Between business computers, personal laptops and personal/business mobile devices, attackers have so much more to aim at now than they did even five years ago. It’s frightening, and it’s clear that traditional cyber defence strategies are no longer enough to keep AI-enhanced cybercriminals at bay.

    ‘The autonomous nature of things has caused there to be a re-examination of the way in which we defend ourselves and the way in which we have to look at both old- and new-style attacks,’ says Michael Siegel, report author and principal research scientist at CAMS.

    But… don’t fear the robots. That’s the message from Tony Anscombe, chief security evangelist at cybersecurity firm ESET. Speaking ahead of the 2024 Africa Tech Festival, where he was part of a panel discussion on How to Harness AI Instead of Fear It, Anscombe emphasised that fear of new technology is not unusual. ‘If we go back in history to the industrial revolution, there was a huge fear that technology such as steam engines and industrial automation would remove the need for people in manual jobs – and, of course, it did to some extent. But we also evolved; we started doing more interesting things or other jobs related to the new technology. While it changed some jobs, it also created new opportunities.’

    While that may be true, previous technologies didn’t pose the same security threat that AI does. After all, the steam engine didn’t generate a flawless reproduction of your face and voice and then use that to clear out your bank account. Again, though, Anscombe pointed to the potential of using emerging AI technologies to thwart security threats posed by emerging AI technologies.

    New defence strategies against cyberattacks call for a mix of digital and human interventions

    ‘AI can play a pivotal role in fortifying threat detection and prevention by analysing vast datasets in real time, quickly identifying patterns and anomalies that indicate new threats and security vulnerabilities,’ he said. ‘Integrated into protective solutions, AI can enhance threat detection and response capabilities, improve threat awareness and the accessibility of services such as threat intelligence and threat hunting, all contributing to better protection, thus preventing advanced attacks.’

    An obvious strategy is to fight fire with fire, deploying AI-powered cybersecurity tools to combat AI-powered cybersecurity threats. After all, one of AI’s big selling points is speed at scale: it can produce enormous volumes of output in very little time. (If you’ve used a generative AI tool such as ChatGPT or Copilot, you’ll know what that looks like: it’ll spit out a 5 000-word homework assignment in a matter of seconds.) As cybercriminals use AI to deploy vast volumes of attacks, the only way organisations can possibly keep up is by using AI to deploy vast volumes of defence.

    Even then, ‘keeping up’ is proving to be all but impossible. Looking at data breaches alone, IBM’s Cost of a Data Breach Report 2025 notes that the mean time it took organisations to identify a breach was 181 days, followed by a further 60 days to contain it – a breach lifecycle of 241 days in total. The same report found that, among organisations that suffered an AI-related security incident, 97% lacked proper AI access controls.

    ‘The speed at which cyber threats are evolving is unprecedented,’ says Kumar Vaibhav, lead senior solution architect for cybersecurity at tech consultancy In2IT. He says AI ‘excels at real-time threat identification by examining enormous volumes of data to find abnormalities suggestive of cyber threats’, using machine learning algorithms to flag subtle patterns or behaviours that depart from the norm – unlike traditional systems, which depend on predetermined rules.
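    To picture the distinction Vaibhav is drawing, here is a minimal sketch in Python. It is purely illustrative – the features, thresholds and simulated data are invented for this article – but it shows how a predetermined rule catches only what it was written to catch, while an anomaly detector (here, scikit-learn’s IsolationForest) flags whatever departs from the normal behaviour it was fitted on.

    ```python
    # Illustrative sketch only: rule-based vs anomaly-based detection.
    # Features per event (invented): [failed_logins, mb_transferred, hour_of_day]
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)

    # Simulated "normal" behaviour: few failed logins, modest transfers, office hours.
    normal = np.column_stack([
        rng.poisson(1, 500),                  # failed login attempts
        rng.normal(20, 5, 500).clip(0),       # megabytes transferred
        rng.normal(13, 3, 500).clip(0, 23),   # hour of day
    ])

    # Two suspicious events: a brute-force burst, and a quiet 3am bulk transfer.
    suspicious = np.array([[40, 22, 14],   # many failed logins: the rule catches this
                           [0, 900, 3]])   # no failed logins: this evades the rule

    def rule_based(event):
        """Traditional predetermined rule: alert only on repeated login failures."""
        return event[0] > 10

    # The unsupervised model learns what "normal" looks like and flags departures.
    model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

    for event in suspicious:
        print(f'event={event}',
              f'rule_alert={rule_based(event)}',
              f'anomaly_alert={model.predict([event])[0] == -1}')
    ```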

    ‘Importantly, generative AI dramatically increases the efficacy of incident response, a crucial aspect of cybersecurity,’ he says. ‘Manual intervention is a common component of traditional response techniques, which can cause delays in mitigation attempts. Important procedures like evaluating security events and ranking issues according to their seriousness are automated by generative AI. By speeding up reaction times, this automation lessens the effect of cyberattacks.’
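    The triage step he describes – evaluating security events and ranking them by seriousness – can be pictured as a simple scoring pipeline. The sketch below is hypothetical: the fields, weights and severity formula are inventions for illustration, not any vendor’s actual logic.

    ```python
    # Hypothetical sketch of automated alert triage: score each security event
    # and sort the queue so the most serious incidents are handled first.
    from dataclasses import dataclass

    @dataclass
    class SecurityEvent:
        source: str              # detector that raised the alert
        asset_criticality: int   # 1 (low) to 5 (business-critical), assumed scale
        confidence: float        # detector's confidence the event is malicious, 0-1
        blast_radius: int        # hosts/accounts potentially affected

    def severity(event: SecurityEvent) -> float:
        # Invented weighting: critical assets and a wide blast radius dominate.
        return event.confidence * (2 * event.asset_criticality + event.blast_radius)

    queue = [
        SecurityEvent('edr',   asset_criticality=5, confidence=0.9, blast_radius=1),
        SecurityEvent('email', asset_criticality=2, confidence=0.6, blast_radius=40),
        SecurityEvent('waf',   asset_criticality=3, confidence=0.3, blast_radius=2),
    ]

    # Highest severity first: the 'ranking issues according to their seriousness'
    # step that would otherwise be done manually.
    for event in sorted(queue, key=severity, reverse=True):
        print(f'{severity(event):6.1f}  {event.source}')
    ```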

    Vaibhav adds that AI’s use extends beyond responding to cyberattacks and into preventative, predictive threat intelligence. ‘AI makes remarkably accurate predictions about future threats by examining historical data on vulnerabilities and attack trends. Organisations can use this capacity to rank risks according to their potential impact and likelihood of exploitation. AI, for instance, can predict patterns in the evolution of malware or spot new attack methods aimed at specific sectors.

    ‘Generative AI further enhances predictive intelligence, mimicking novel attack strategies that opponents may use. By creating defences before attacks arise, these simulations help companies stay ahead of cybercriminals. A dynamic defence plan that adjusts to the constantly shifting threat scenario combines predictive intelligence with generative simulations,’ he says.
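    Vaibhav’s point about ranking risks by ‘potential impact and likelihood of exploitation’ ultimately rests on familiar arithmetic: expected loss is likelihood multiplied by impact. The sketch below makes that concrete with made-up figures; in practice the probabilities would come from a model trained on historical vulnerability and attack-trend data.

    ```python
    # Hypothetical sketch of predictive risk ranking: expected loss = p(exploit) * impact.
    # All names and numbers are invented for illustration.
    vulnerabilities = {
        # name: (estimated probability of exploitation, impact in dollars if exploited)
        'unpatched VPN gateway':   (0.30, 2_000_000),
        'phishable finance inbox': (0.60,   500_000),
        'legacy test server':      (0.10,    50_000),
    }

    ranked = sorted(vulnerabilities.items(),
                    key=lambda item: item[1][0] * item[1][1],
                    reverse=True)

    for name, (p, impact) in ranked:
        print(f'${p * impact:>10,.0f} expected loss  {name}')
    ```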

    However, the CAMS and Safe Security researchers warn that building AI-powered defences is only part of what’s needed. ‘AI-powered cybersecurity tools alone will not suffice,’ their report states. ‘A proactive, multi-layered approach – integrating human oversight, governance frameworks, AI-driven threat simulations, and real-time intelligence sharing – is critical.’

    Ryan Boyes, governance, risk and compliance officer at Johannesburg-based cybersecurity firm Galix, echoes that sentiment. He cautions that while AI’s capabilities in cybersecurity are vast, the same technology that enhances security can also introduce new vulnerabilities – and he warns against over-reliance on AI-driven security measures.

    While protecting systems in an evolving environment is challenging, the cybersecurity arsenal is expanding

    ‘The automation of security processes can sometimes lead to complacency, with businesses assuming their AI tools are infallible,’ he says. ‘The reality is that AI is not perfect. It can make mistakes, it can be manipulated, and its effectiveness depends on the quality of the data it is trained on. Blind trust in AI without human oversight can create a false sense of security, leading to vulnerabilities being overlooked.’

    Boyes recommends integrating AI with robust governance frameworks, continuous human oversight and expert-led security strategies – a mix, in other words, of digital and human defences. ‘AI is undeniably transforming information security, but it is not a silver bullet,’ he says.

    CAMS’ Siegel, meanwhile, likens the current state of cybersecurity to ‘asymmetric warfare’, where one side has significantly greater power than the other. The bad news is that, unless you’re a cybercriminal, the side that has the advantage is not yours.

    ‘Remember that the attacker only needs one point of entry and exploitation while the defender must stop all entry points and be resilient to all exploitations,’ Siegel says. ‘For cybersecurity, there are tremendous opportunities for things to go wrong. Protecting in this new environment that is moving at light speed is challenging, but we can learn from our previous work. Many researchers and products are already addressing management, prevention, detection, response, and resilience issues.’

    AI technologies are maturing and evolving by the nanosecond, and the threat of AI-powered cyberattacks is only going to grow greater and more complex. The problem’s not going away, and while ‘good’ AI is the best weapon in the fight against ‘bad’ AI, it’s certainly not the only weapon.

    By Mark van Dijk
    Images: Gallo/Getty Images