
Offensive AI Lowers the Barrier of Entry for Bot Attackers

Alex McConnell
4 Minute read


    The use of artificial intelligence (AI) for defense enables better vulnerability scanning, automation, and attack detection based on existing datasets. However, this is all in defense against an unknown attacker, who may wield any number of offensive tools designed to overcome even the most sophisticated defenses.

    Is the biggest challenge for defensive AI that there is an offensive AI operator with unknown capabilities? And has offensive AI lowered the barrier of entry for bot attackers?

    The evolution of cyber threats

    To assess where we are now with offensive AI and automated bot attacks, we need to look back at where we have come from. In the early days, attackers spread worms and viruses via fairly simple means. Back in 1986, the Brain virus reached victims via floppy discs; in 2000, the LoveBug (ILOVEYOU) worm spread via email.

    Since then, and in the past 15 years in particular, we’ve seen attacks conducted in a variety of forms: from the deliberate intrusion of Stuxnet, to advanced threats like Flame, which used rootkit functionality to spread over a local network or via USB drive, to the fast-spreading WannaCry ransomware, which propagated via vulnerable Server Message Block (SMB) ports.

    Now, attacks are often automated – carried out by bots programmed to perform actions over and over. Automated attacks like credential stuffing are commonplace and devastating to businesses and consumers alike.

    Humans ultimately created these attacks. Whilst some relied on deliberately targeted intrusions and exploited vulnerabilities, most still depended on human action – either on the victim’s side, such as opening an email or plugging in a USB stick, or on the attacker’s side, such as programming bots to bypass defenses or attack specific targets.

    With offensive AI, the barrier to entry is lowered. The AI can write the phishing message to catch the eye of the victim, or even create the malware and bot scripts itself.

    How offensive AI is accelerating attacks

    Andy Still, CTO and co-founder of Netacea, believes AI will “massively lower the barrier to entry to launching a cyber-attack” as the need for a relatively high level of technical ability is removed. Anyone with the skills to use AI-based code generation tools could create an attack, especially when also using automated tools to execute it.

    Matthew Gracey-McMinn, head of threat research at Netacea, does not believe we are yet at the stage of attackers setting their kit up and saying: “Hey, I would like you to go and launch this type of attack against this company or this person”, as setting up a successful attack from start to finish still requires some degree of skill.

    “Increasingly we’re seeing a sort of early-stage offensive AI, where we’re seeing attackers using large language models, and the capabilities offered by what is really early-stage AI in many regards, and essentially using it to supplement and speed up their own processes,” Gracey-McMinn says.

    This can lead to wannabe attackers getting assistance from large language models to give them help and advice in coding, and even writing large amounts of their bot scripts for them.

    AI-powered attack techniques

    Ultimately, this lowered barrier to offensive AI means attackers can use it in a variety of attack techniques. In one example, OpenAI’s GPT-4 was able to hire someone on TaskRabbit – telling them it was a vision-impaired human – to solve a CAPTCHA for it. The model deceived a real person in the physical world to get what it wanted done, as part of a test designed to demonstrate AI’s capacity for this kind of power-seeking behavior.

    Perhaps the larger threat relates to CAPTCHA farms, which we discussed back in 2019. At the time, we determined that a CAPTCHA farm worker could earn $0.17 (£0.13) per 1,000 CAPTCHAs solved, meaning they would need to complete 100 million CAPTCHAs to earn £13,000.
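    The arithmetic behind that £13,000 figure can be checked in a couple of lines (a back-of-the-envelope calculation based on the 2019 rate quoted above, not part of the original study):

```python
# Back-of-the-envelope check on the 2019 CAPTCHA farm earnings figure.
rate_gbp_per_1000 = 0.13        # £0.13 earned per 1,000 CAPTCHAs solved
captchas_solved = 100_000_000   # 100 million CAPTCHAs

earnings_gbp = captchas_solved / 1000 * rate_gbp_per_1000
print(earnings_gbp)  # 13000.0
```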

    CAPTCHA farms bridge the gap between the bot operator and the site they want to access, supplying the required human element. So if that human could be replaced by AI, would the threat grow? In one study, an operation run by a single person was able to solve CAPTCHAs using AI at the same cost as using humans in a CAPTCHA farm.

    Are we closer to AI solving the standard challenges that have traditionally defeated bots? We are already at the stage where AI can craft phishing messages in minutes that are almost on par with those created by skilled humans.

    In more general terms, AI could learn why bot attacks aren’t succeeding, use machine learning to adjust its approach, then use automation to iterate further attacks indefinitely until it breaks through defenses.

    Challenges in defending against AI-enhanced bot attacks

    Bot attacks have increased in sophistication, and with AI tools now in use, this evolution will only accelerate. Netacea CTO and co-founder Andy Still notes that the growth in attacks has so far been restricted by the limitations of humans, who have only so much time, energy and money to undertake a task.

    Therefore, the best defense is to frustrate the attacker to the point where they run out of ideas, energy and interest and move on to something else.

    “If you’re taking away that requirement for that human element, the attacks will become much higher volume, much, much higher longevity, and increasing sophistication, so we will need ways of responding to those kinds of attacks,” he says.

    As Gracey-McMinn concludes: “I think we’re at an early stage of AI, where there’s a lot of capabilities that are really supplementing and facilitating attacks, but that things are going to get a lot worse very, very quickly.”

    Automated response to automated attacks

    This is where defensive AI comes into play: using AI tools to spot patterns of malicious activity in large volumes of web traffic, automating the detection and blocking of threats, and using data processing to mitigate those threats more rapidly.
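    As a minimal illustration of the principle – flagging traffic that deviates sharply from a learned baseline – the toy sketch below applies a simple statistical threshold to request rates. This is a deliberately simplified example, not Netacea’s implementation; real defensive AI uses far richer features and models.

```python
# Toy anomaly detector: flag request rates more than `threshold`
# standard deviations above a learned baseline. Illustrative only.
from statistics import mean, stdev

def flag_anomalies(baseline, observed, threshold=3.0):
    """Return observations whose z-score against the baseline exceeds threshold."""
    mu, sigma = mean(baseline), stdev(baseline)
    return [x for x in observed if (x - mu) / sigma > threshold]

# Requests per minute: a normal baseline week vs. live traffic
# containing a spike that looks like a credential stuffing run.
baseline = [110, 95, 102, 98, 105, 99, 101, 97]
observed = [103, 100, 2400, 98]

print(flag_anomalies(baseline, observed))  # [2400]
```

    In practice the baseline is learned continuously and the features are far more varied (paths hit, session behavior, header fingerprints), but the blocking decision still comes down to automated deviation-from-normal checks like this one.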

    This is how Netacea has been developed since our inception in 2018, and even in our pre-launch R&D phase in the years prior. Our supervised machine learning algorithms get more accurate as they process more data, whilst our anomaly-detecting unsupervised models quickly flag previously unseen behaviors automatically.
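    The way these two model types complement each other can be sketched in miniature: a supervised check matches signatures learned from labeled bot traffic, an unsupervised check flags behavior absent from the observed baseline, and a request is blocked if either fires. All names and values here are hypothetical illustrations, not Netacea’s actual pipeline.

```python
# Illustrative combination of supervised and unsupervised bot detection.
# KNOWN_BOT_AGENTS stands in for signatures learned from labeled data.
KNOWN_BOT_AGENTS = {"curl/7.88", "python-requests/2.31"}

def supervised_flag(request):
    # Matches traffic against patterns learned from labeled bot traffic.
    return request["user_agent"] in KNOWN_BOT_AGENTS

def unsupervised_flag(request, baseline_paths):
    # Flags behavior never seen in the benign baseline, e.g. path probing.
    return request["path"] not in baseline_paths

def is_bot(request, baseline_paths):
    # Block if either detector fires.
    return supervised_flag(request) or unsupervised_flag(request, baseline_paths)

baseline_paths = {"/", "/login", "/products"}
print(is_bot({"user_agent": "Mozilla/5.0", "path": "/admin.php"}, baseline_paths))  # True
print(is_bot({"user_agent": "Mozilla/5.0", "path": "/login"}, baseline_paths))      # False
```

    The design point: the supervised check only catches what it has been trained on, so the unsupervised check is what gives the system a chance against previously unseen behavior.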

    This combination is built to keep pace with the current and future advances in AI-powered bot attacks, delivering rapid protection from a wide range of bot threats.

    Block Bots Effortlessly with Netacea

    Book a demo and see how Netacea autonomously prevents sophisticated automated attacks.
