
The Dark Side of AI: How Malicious Bots May Exploit ChatGPT

Alex McConnell
07/02/23
5 Minute read

    In recent years, the world of artificial intelligence (AI) has seen a significant increase in the use of language models. ChatGPT, a language generation model developed by OpenAI, has been making waves in the news for its ability to process and generate large amounts of text, a capability that can also be used to help train and test machine learning models.

    One feature that has grabbed headlines is its ability to write code and provide feedback on its accuracy and efficiency. To do this, ChatGPT combines machine learning algorithms with natural language processing techniques to find useful code libraries for a given task, enabling quick code generation for people without extensive programming skills.
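
    This matters for bot developers because code generation can itself be automated. Below is a minimal sketch using the official openai Python package (v1 client); the model name and the prompt are illustrative assumptions, not details from this article.

    # A minimal sketch of programmatic code generation with the openai
    # Python package (v1 client). Model name and prompt are illustrative.
    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": "Write a Python function that deduplicates a list "
                       "while preserving order.",
        }],
    )
    print(response.choices[0].message.content)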

    ChatGPT as a tool to help cybersecurity

    As with any technology, there is always the potential for misuse by malicious actors. Equally, ChatGPT can be used to improve defenses: by understanding the methods cybercriminals apply when attempting to exploit it, we can take steps to prepare for such threats.

    Does ChatGPT safeguard against the generation of malicious code?

    ChatGPT indicates that the service adheres to a strict code of ethics and is programmed to refrain from generating code that could be used nefariously.

    For instance, a ChatGPT user may request the creation of a Python script to act as a keylogger for capturing credentials. However, ChatGPT recognizes that such behavior is often employed in illegal activities to steal personal data. As a result, ChatGPT declines to fulfill the request and informs the user that doing so may violate its policies.

    Nonetheless, it’s possible to circumvent these policies and create malicious code through slight modifications of the scenario presented to ChatGPT. For example, a misleading description of the intended use case can alter the response.

    With such slight modifications, ChatGPT can be persuaded to provide Python code that achieves the intended outcome.

    In this blog post, we will explore how malicious bots may utilize ChatGPT to carry out attacks and what steps can be taken to protect against them.

    ChatGPT can:

    1. Create malicious scraping scripts

    Scraping can have a severely detrimental impact on businesses, often leading to significant financial losses. It can cause website or application downtime, and it often acts as a precursor to other forms of attack, such as scalping.

    Given the right prompt, ChatGPT can generate scripts that can be used to scrape data from a specific website.

    ChatGPT can also be used to generate web scraping rules and queries. These specify how to extract data from a website, making it easier to collect the specific data required regardless of the varying code structures implemented from site to site.

    This means that attackers can feasibly use ChatGPT to make the process of web scraping more efficient and effective.
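
    As an illustration of how little code such a scraper requires, here is a minimal sketch of the kind of script a prompt might produce, assuming the widely used requests and BeautifulSoup libraries; the URL and CSS selectors are hypothetical placeholders.

    # A minimal sketch of a generated scraping script. The URL and the CSS
    # selectors (the extraction "rules") are hypothetical placeholders.
    import requests
    from bs4 import BeautifulSoup

    def scrape_product_prices(url):
        """Fetch a page and extract name/price pairs using CSS selector rules."""
        response = requests.get(url, timeout=10)
        response.raise_for_status()
        soup = BeautifulSoup(response.text, "html.parser")

        products = []
        for card in soup.select("div.product-card"):  # hypothetical selector
            products.append({
                "name": card.select_one("h2.title").get_text(strip=True),
                "price": card.select_one("span.price").get_text(strip=True),
            })
        return products

    if __name__ == "__main__":
        for item in scrape_product_prices("https://example.com/catalogue"):
            print(item)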

    On the other hand, ChatGPT can also be used for good by assisting with automated testing. By training the model on a dataset of automated testing scenarios and scripts, it can generate new scripts to test the functionality of a website and detect bugs or vulnerabilities before they can be exploited in further attacks.
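
    As a rough idea of what such generated tests might look like, here is a minimal sketch of two smoke tests runnable with pytest; the base URL and the /login endpoint are hypothetical assumptions.

    # A minimal sketch of generated functional tests (run with pytest).
    # The base URL and the /login endpoint are hypothetical assumptions.
    import requests

    BASE_URL = "https://example.com"

    def test_homepage_is_reachable():
        response = requests.get(BASE_URL, timeout=10)
        assert response.status_code == 200

    def test_login_rejects_empty_credentials():
        # A login form accepting empty credentials would be a vulnerability.
        response = requests.post(
            f"{BASE_URL}/login",
            data={"username": "", "password": ""},
            timeout=10,
        )
        assert response.status_code in (400, 401, 403)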

    2. Write convincing phishing emails

    Another way malicious bots could utilize ChatGPT is by creating convincing phishing emails that are almost indistinguishable from those sent by a real person. This can be used to trick victims into providing sensitive information or clicking on malicious links.

    Spam filters and anti-phishing tools often analyze the content of an email for common signs of fraud, such as the spelling and grammar errors typical of attackers with limited English proficiency. Because text generated by ChatGPT doesn’t contain these errors, such filters are far less effective against the phishing emails it produces.
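
    To illustrate why fluent text slips through, here is a toy sketch of one such content heuristic: scoring an email by the fraction of words that fall outside a known-word list. Real filters combine many signals; this example and its tiny word list are purely illustrative.

    # A toy illustration of a spelling-based content heuristic (not a real
    # filter). Fluent, error-free text scores low, so this signal alone
    # gains little against ChatGPT-generated phishing emails.
    KNOWN_WORDS = {
        "dear", "customer", "your", "account", "has", "been", "suspended",
        "please", "verify", "details", "to", "restore", "access",
    }

    def misspelling_ratio(text):
        tokens = [t.strip(".,!?").lower() for t in text.split()]
        unknown = [t for t in tokens if t and t not in KNOWN_WORDS]
        return len(unknown) / max(len(tokens), 1)

    # Fluent text triggers no misspelling signal at all.
    print(misspelling_ratio("Dear customer, your account has been suspended."))  # 0.0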

    ChatGPT’s fluency extends to multiple languages, as demonstrated by translating the email into Russian. Netacea’s in-house Russian-speaking Threat Researcher reviewed the translation and confirmed its high accuracy, noting that it surpassed the quality of other translation services they had previously encountered.

    3. Help automate spear phishing attacks

    Spear phishing is a highly targeted form of social engineering that often requires significant effort to execute effectively, because it involves researching a specific individual so that the attack can be convincingly personalized.

    However, with the advent of ChatGPT, cybercriminals can combine AI and automation to launch sophisticated attacks with relative ease. Access to vast amounts of data, including personal information leaked in data breaches or harvested from platforms such as LinkedIn, allows attackers to create targeted spear phishing lures at scale, increasing their chances of success.

    4. Rapidly spread misinformation via text, speech and deepfake videos with GPT-3

    Spam bots often use fake accounts on social media, on message boards, or via email to promote products or viewpoints, or to entice users to click on malicious links. With ChatGPT, bot operators can generate unique, human-like synthetic text very quickly, making it easier to spread misinformation online.

    Furthermore, malicious bots can pair the technology behind ChatGPT, GPT-3 (Generative Pre-trained Transformer 3), with deepfake generation tools. By training a deepfake model on a collection of videos and audio recordings featuring a particular individual, a bot can disseminate videos in which that individual appears to say or do things they never actually did, with the script generated by ChatGPT. This can be used to spread false information or to impersonate individuals for nefarious purposes.

    Is ChatGPT providing anything new to bot developers?

    Whilst ChatGPT can unquestionably be used to help write bots for various purposes (both well-meaning and malicious), people have been creating bot scripts without needing to write code for quite some time, using “stacker” tools to drag and drop functions that generate code automatically. One such tool, OpenBullet, is widely used for credential stuffing attacks because of its ease of use (webinar: watch our Threat Research team demonstrate a dummy credential stuffing attack in real time).

    Although the functionality isn’t brand new, ChatGPT may popularize bots further because of how easy it is to build them using basic prompts. ChatGPT is very accessible, even more so than OpenBullet. It may also accelerate bot development lifecycles by speeding up research into bypasses for popular bot management solutions, and by writing the bypass code itself.

    In many cases ChatGPT is merely repeating what it finds across its various data sources on the web. However, by presenting answers in one place and in a clear format, the tool is democratizing the process of finding workarounds to defenses or undertaking specific actions against targeted systems.

    What’s next for ChatGPT and malicious bot attacks?

    As we already mentioned, ChatGPT is powered by OpenAI’s GPT-3 engine, which uses 175 billion parameters. By comparison, OpenAI’s soon-to-be-released GPT-4 is rumored to use as many as 100-175 trillion parameters – up to 1,000 times more than GPT-3 – though OpenAI has not confirmed these figures.

    In addition to the specific examples covered in this blog, it is likely that as ChatGPT becomes more advanced, bot developers will find new and creative ways to utilize it in their attacks. It’s important to be aware of these potential threats and take steps to protect against them.

    Fight AI-powered bots with AI-powered bot management

    Just as ChatGPT is based on deep learning techniques to generate text, Netacea’s Intent Analytics® AI engine uses a variety of machine learning methods to detect bots.

    For example, our patented Intent Pathways technology is based on an adaptation of natural language processing. In natural language processing, a neural network is trained to predict the probability of words appearing together. After embedding the relationships between the roughly 170,000 words in the English language, sentences can be analyzed computationally to evaluate their content. For example, Gmail automatically filters emails into folders based on their content, and search engines assess our queries to deliver the most relevant results first.

    For an eCommerce website, the “vocabulary” is made up of 100,000+ pages on average, split into individual paths, with each session sequenced as a user journey between those requests. We trained our Intent Pathways model to analyze the intent of users based on the sequence of requests they make (in other words, how they navigate from page to page) and to decide in real time whether each visitor is a bot or a genuine customer.
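
    The general technique can be sketched in a few lines. The example below is not Netacea’s patented implementation, only an illustration of the underlying idea: treat page paths as “words” and sessions as “sentences”, then learn embeddings with a word2vec-style model (here via the gensim library); the sessions are invented for the example.

    # A minimal sketch of the underlying idea (not Netacea's patented model):
    # page paths act as "words" and sessions as "sentences", so a word2vec-
    # style model learns which pages co-occur in genuine user journeys.
    from gensim.models import Word2Vec

    sessions = [  # hypothetical sessions: ordered lists of requested paths
        ["/home", "/search", "/product/123", "/cart", "/checkout"],
        ["/home", "/product/123", "/product/456", "/cart", "/checkout"],
        ["/product/123", "/product/124", "/product/125", "/product/126"],  # bot-like sweep
        ["/home", "/search", "/product/456", "/cart", "/checkout"],
    ]

    model = Word2Vec(sentences=sessions, vector_size=16, window=3,
                     min_count=1, sg=1, epochs=200)

    # With enough sessions, pages that co-occur in genuine journeys end up
    # close in embedding space; sequences deviating from learned pathways
    # can then be flagged as potential bots.
    print(model.wv.similarity("/cart", "/checkout"))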

    Stay ahead of ever-evolving bot threats

    With AI tools like ChatGPT in the arsenal of bot developers, it’s more important than ever to put protection in place to stop bots from damaging your systems.

    Talk to Netacea today about a demo of our advanced bot management solution, which detects up to six times more bots than competitors, including scalpers, scrapers, and bots used for credential stuffing, account takeover, fake account creation, card cracking, loyalty point fraud, and more.
