Why Fake Accounts are Used in an Account Takeover Attack

Alex McConnell
20/09/18
7 Minute read

    Fake accounts are typically associated with bought Twitter or Instagram followers, or with attempts to influence political voting via social media. In cybersecurity, fake accounts are far more sinister: they present a threat to any organisation that offers online customer accounts, and they can also serve as a pre-emptive warning of an imminent account takeover (ATO) attack.

    There are several reasons a cybercriminal will create fake accounts, most of them for financial gain. For example:

    – Abusing sign-up bonuses and discounts on retail sites

    – Placing free bets and claiming account credit on gaming and gambling sites

    – Forming part of an ATO attack

    ATO attacks are one of the fastest-growing cyber threats, fuelled by a rise in data breaches leaking compromised user credentials and by increasingly easy-to-use credential stuffing tools such as SentryMBA and STORM. According to InfoSecurity-Magazine.com*, 2.3 billion user credentials were reportedly leaked by 51 companies in 2017, with an average of 15 months between a spill and its public announcement. This gives bad actors over a year to perform ATO attacks using the same compromised username and password combinations against thousands of other target websites; in online retail, such credential stuffing attacks can account for up to nine out of ten login requests.

    Within the cybercrime community, fake accounts go by many names, often based on their type or purpose. These include Bot Accounts, Synthetic Accounts and Canary Accounts.

    Automated fake account creation

    If 10 or more accounts are to be created, which is normally the case, the attacker will use scripts to automate and speed up the fake account creation process. Scripts can submit data into the registration forms and may make API calls to human verification services such as CAPTCHA solvers.

    The scripts can create thousands of accounts in a short timeframe. Organisations should have alerts in place to notify them when accounts are being created at a higher rate than normal, and should investigate the validity of those accounts. If the fake accounts are part of a planned ATO ambush, the hacker will record which account creations succeeded and which failed.
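
    As a rough illustration of that kind of alerting, the sketch below counts registrations in a sliding window and raises a flag when the rate exceeds a multiple of an assumed baseline. The window size, baseline and Registration structure are illustrative assumptions, not a prescribed implementation.

```python
from collections import deque
from dataclasses import dataclass
import time

@dataclass
class Registration:
    account_id: str
    timestamp: float  # Unix epoch seconds

class RegistrationRateMonitor:
    """Flags bursts of account creation against an assumed baseline (illustrative only)."""

    def __init__(self, window_seconds=3600, baseline_per_window=50, spike_multiplier=5):
        self.window_seconds = window_seconds
        self.baseline = baseline_per_window       # assumed "normal" registrations per window
        self.spike_multiplier = spike_multiplier  # how far above baseline triggers an alert
        self.events = deque()                     # timestamps inside the current window

    def record(self, reg: Registration) -> bool:
        """Record a registration; return True if the current rate looks anomalous."""
        now = reg.timestamp
        self.events.append(now)
        # Drop events that have fallen out of the sliding window
        while self.events and now - self.events[0] > self.window_seconds:
            self.events.popleft()
        return len(self.events) > self.baseline * self.spike_multiplier

# Example: 5,000 registrations inside one hour against a baseline of 50 triggers quickly
monitor = RegistrationRateMonitor()
alerted = any(monitor.record(Registration(f"acct-{i}", time.time())) for i in range(5000))
print("Investigate registrations" if alerted else "Rate within normal bounds")
```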

    With a library of fake accounts in place, the monetisation process begins, varying according to the purpose of the attack and the victim site.

    Real world customer example

    One of our customers, a leading luxury clothing retailer listed on the London Stock Exchange, was alerted to 5,000 fake accounts being created in a 24-hour period. The automated account creation was centrally controlled but distributed around the globe before hitting their website, with requests originating from 14 separate data centres across 20 countries.

    The question is: why? What was the intended use for those fake accounts? There was no sign-up bonus to abuse. Could it be to purchase items frequently, perhaps on credit, and return them to the store for a cash refund? Probably not, as that many accounts would require hundreds, if not thousands, of physical store visits to obtain refunds, which carries a high risk of being caught. So, what was it?

    Deception by misdirection

    With any criminal activity, one of the golden rules is not to get caught. The same applies to cybercrime: the offender must remain unnoticed and anonymous at all times. Canary accounts serve as both a warning system and a smoke screen; they are fake accounts created on a target website prior to launching a credential stuffing attack.

    A credential stuffing-based account takeover attack may use tens or even hundreds of thousands of compromised credentials, managed and executed from a credential stuffing tool such as Sentry MBA or STORM. Simply launching all of those username and password combinations at a login form will yield a low success rate, since most attempts fail because the account does not exist or the password is incorrect, and will alert security to the attack.

    For example, if an attacker has 100,000 compromised username and password combinations at their disposal, a direct credential stuffing attack with a 0.1% success rate would grant access to only 100 accounts and alert the target site’s security teams to 99,900 failed login attempts.

    One of the most common ways of spotting a potential account takeover attack is to actively monitor the ratio of login failures to successes over time. After all, the reasoning goes, a large credential stuffing attack must show a dramatic increase in the number of login failures. The problem is that attackers know this and have devised several bot strategies to thwart this basic ratio analysis.
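
    As a sketch of what that basic ratio analysis might look like in practice, the snippet below computes the failure share of recent logins and compares it to a long-term baseline. The window size, baseline failure rate and alert margin are illustrative assumptions.

```python
from collections import deque

class LoginRatioMonitor:
    """Tracks the failure share of recent logins against an assumed baseline (illustrative)."""

    def __init__(self, window_size=10_000, baseline_failure_rate=0.20, alert_margin=0.15):
        self.window = deque(maxlen=window_size)  # 1 = failed login, 0 = successful login
        self.baseline = baseline_failure_rate    # assumed long-term failure share, e.g. 80:20
        self.alert_margin = alert_margin         # how far above baseline triggers an alert

    def record_login(self, success: bool) -> bool:
        """Record a login outcome; return True if the recent failure share looks anomalous."""
        self.window.append(0 if success else 1)
        failure_rate = sum(self.window) / len(self.window)
        return failure_rate > self.baseline + self.alert_margin

# Example: a burst of stuffing traffic that is ~99% failures quickly crosses the threshold
monitor = LoginRatioMonitor()
anomalous = False
for i in range(5_000):
    anomalous = monitor.record_login(success=(i % 100 == 0))
print("Possible credential stuffing" if anomalous else "Ratio within normal bounds")
```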

    The hacker must, therefore, seek to increase their login success ratio.

    A smart way to increase the success rate is to keep only the credentials that belong to accounts that exist, and to remove from the attack any credentials for which no account exists. The method behind this is to launch a fake account creation attack first.

    How creating fake accounts can increase credential stuffing success rates

    Taking the 100,000 compromised credentials and using them to create accounts will yield one of two results per credential:

    1 – Fake Account Creation = Fail

    If the attacker cannot create a fake account using the credentials to hand, it means the account already exists, and those credentials are tagged for use in the credential stuffing stage of the ATO attack.

    2 – Fake Account Creation = Pass

    If an account is created, then those credentials are not associated with an existing account and can be removed from the list used in the credential stuffing attack, increasing the stuffing success ratio in the attacker’s favour. Furthermore, the attacker now has a fake account under their control that can be logged into successfully in parallel with the stuffing attack, further tilting the ratio in their favour and helping to evade security flags.

    These fake accounts are, effectively, legitimate accounts with a known username and password combination, whose sole purpose is to allow the attacker to blend into the login stream and disguise the login failure-to-success ratio by logging into them multiple times during the attack.

    The attacker has now increased the effectiveness of their attack. In the working example, this means 10,000 credentials known to belong to existing accounts and therefore vulnerable to exploit, plus up to 90,000 (or however many the attacker chose to activate) fake accounts used to mask the actual credential stuffing attack by logging into them successfully throughout.
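
    To make the effect of this blending concrete, the rough calculation below compares the failure ratio of a naive stuffing run against one padded with successful fake-account logins, following the 100,000-credential working example. The specific volume of fake-account logins is an illustrative assumption.

```python
# Naive attack: all 100,000 credentials fired at the login form, ~0.1% succeed
naive_failure_rate = (100_000 - 100) / 100_000            # ~99.9% failures: easy to spot

# Blended attack: only the ~10,000 credentials known to match existing accounts are stuffed,
# while the attacker also performs, say, 30,000 successful logins to their own fake accounts
stuffing_attempts = 10_000
stuffing_successes = 100                                   # the same 100 compromised accounts
fake_account_logins = 30_000                               # all succeed: attacker set the passwords

total_attempts = stuffing_attempts + fake_account_logins
total_failures = stuffing_attempts - stuffing_successes
blended_failure_rate = total_failures / total_attempts     # ~24.8%: close to a normal 80:20 split

print(f"Naive failure rate:   {naive_failure_rate:.1%}")
print(f"Blended failure rate: {blended_failure_rate:.1%}")
```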

    Often referred to as Canary Accounts, fake accounts can also serve as an early warning sign for the attacker. If the victim becomes suspicious of abnormal account activity on their website, they will often suspend the offending accounts. If the attacker is unable to access their fake accounts, it is fair for them to assume the victim’s security teams are aware of their activity, affording the attacker the luxury of deciding whether to continue, alter or cancel the attack.

    Turning the tables on fake account creation

    We know that an average paid list of username and password combinations acquired on the dark web will have a 1-2% success rate. However, a fresh list from one of the numerous credential spills can have a far higher success rate, making the attack harder to detect through ratio analysis alone.

    Initially, these fake accounts were often easy to spot: a sudden overnight spike in registrations with sequential numbers in the email addresses stood out like a sore thumb. Now they can be much harder to detect. Fake accounts are typically created days or weeks before the main attack, and may even be registered by real humans using fake data and a ‘real’, working email address.
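
    As an example of that older, simpler pattern, the check below flags a registration batch whose email addresses share a common prefix followed by an incrementing number. The regex, grouping and threshold are illustrative assumptions, and modern fake accounts will usually defeat a check this naive.

```python
import re
from collections import defaultdict

SEQUENTIAL_EMAIL = re.compile(r"^(?P<prefix>[a-z._-]+?)(?P<num>\d+)@(?P<domain>.+)$", re.IGNORECASE)

def flag_sequential_batches(emails, min_batch=20):
    """Group emails by (prefix, domain) and flag groups with long runs of incrementing numbers."""
    groups = defaultdict(list)
    for email in emails:
        match = SEQUENTIAL_EMAIL.match(email)
        if match:
            groups[(match["prefix"].lower(), match["domain"].lower())].append(int(match["num"]))

    flagged = []
    for (prefix, domain), numbers in groups.items():
        numbers.sort()
        consecutive = sum(1 for a, b in zip(numbers, numbers[1:]) if b - a == 1)
        if consecutive + 1 >= min_batch:
            flagged.append(f"{prefix}<N>@{domain}")
    return flagged

# Example: shopper1@example.com ... shopper250@example.com would be flagged as one batch
batch = [f"shopper{i}@example.com" for i in range(1, 251)] + ["jane.doe@example.org"]
print(flag_sequential_batches(batch))  # ['shopper<N>@example.com']
```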

    The success ratio analysis also relies on a long-term baseline of login failure/success rates. Relying on this is becoming less effective against modern attacks, as bots distort the baseline in the first place by constantly attempting to log in. These low and slow attempts can dramatically skew the underlying ratio.

    Website login success ratios vary widely, so it’s almost impossible to give average figures for what success/failure rates should look like. Websites that are accessed rarely, perhaps just once or twice a year by a customer, will see much higher rates of login failures and password reset requests, as users may legitimately have forgotten their passwords through infrequent use. E-commerce sites can see success-to-failure ratios of 80:20, but this can easily drift towards 60:40 over time when purposely skewed by additional slow and low bot attacks.
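
    For a rough sense of how much low and slow traffic it takes to shift a baseline, the calculation below works out the bot failure volume needed to drag an 80:20 split towards 60:40. The daily login volume is an illustrative assumption.

```python
daily_logins = 100_000                        # assumed legitimate daily login volume
legit_successes = int(daily_logins * 0.80)    # 80:20 baseline
legit_failures = daily_logins - legit_successes

# Target a 60:40 split while the legitimate success volume stays constant
target_success_share = 0.60
total_needed = legit_successes / target_success_share
extra_bot_failures = total_needed - legit_successes - legit_failures

print(f"Baseline: {legit_successes:,} successes, {legit_failures:,} failures (80:20)")
print(f"Low and slow bot failures needed per day: {extra_bot_failures:,.0f}")
# ~33,333 failed attempts per day, roughly 23 per minute: easy to spread below WAF rate limits
```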

    Once the analytics are skewed, ratio analysis becomes much more difficult. Slow and low bots will easily bypass WAF rule sets, and not much blending of fake account logins is needed to appear within normal ratio limits.

    Blending the login stream during the attack is effective, but it does leave vital tell-tales. Each fake account needs a valid email address, and the accounts need to be maintained. Most hackers don’t use more than a few thousand of these accounts, and they often reuse the same accounts time and time again. This means the accounts themselves can act as a fingerprint for the attack.
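
    One way a defender might exploit that fingerprint, sketched below, is to keep a record of accounts seen in previous suspected attacks and flag any new login burst that reuses a large share of them. The storage model and the 30% overlap threshold are illustrative assumptions.

```python
def overlap_with_known_attacks(current_logins: set[str],
                               prior_attack_accounts: set[str],
                               overlap_threshold: float = 0.30) -> bool:
    """Return True if a large share of accounts in the current burst appeared in earlier attacks."""
    if not current_logins:
        return False
    reused = current_logins & prior_attack_accounts
    return len(reused) / len(current_logins) >= overlap_threshold

# Example: accounts flagged in a previous incident reappearing in today's login burst
prior = {f"fake-{i}" for i in range(2_000)}
today = {f"fake-{i}" for i in range(500, 1_500)} | {f"legit-{i}" for i in range(300)}
print(overlap_with_known_attacks(today, prior))  # True: ~77% of the burst is recycled accounts
```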

    When it comes to spotting fake accounts, there are a few tricks you can use and red flags to look out for, though these vary between different sites.

    Fake Instagram accounts

    Tell-tale signs that an Instagram account may be fake include:

    – A username that includes strings of seemingly random words or numbers.

    – The account has many followers but has posted little to no content.

    – The account’s followers are made up of other suspicious accounts.

    – There is no bio on the profile, or the bio is generic and contains no personalised information.

    – The profile includes a strange link, which should not be clicked under any circumstances.

    Fake Twitter accounts

    Signs a Twitter account could be fake include:

    – Usernames made up of random words or numbers.

    – Tweets that appear to have been written by an AI, often all about the same subject.

    – No profile picture or display name given.

    Fake email accounts

    Signs of a fake email account include:

    – Mimicking established companies or people, such as your mobile provider, a social media platform or a streaming service.

    – Non-customised greetings, such as ‘Dear customer’, ‘To whom it may concern’, etc.

    – Requests for the recipient to click a link or download an attachment.

    Stop Account Takeover with Netacea

    At Netacea, we use machine learning algorithms and behavioural analysis to proactively identify bad actors and prevent account takeover attacks. Account login bots perform one action: they log in. Humans tend to log in and then do something once they are in. This fundamental difference in behaviour is hard to fake.
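
    As a simplified illustration of that behavioural signal (not a description of Netacea’s actual models), the sketch below flags accounts whose sessions consist of nothing but logins. The event names and session threshold are illustrative assumptions.

```python
from collections import defaultdict

def login_only_accounts(events, min_sessions=3):
    """
    events: iterable of (account_id, session_id, action) tuples, e.g. ("u1", "s1", "login").
    Returns accounts with at least `min_sessions` sessions where the only action was a login.
    """
    sessions = defaultdict(set)  # (account, session) -> set of actions seen
    for account, session, action in events:
        sessions[(account, session)].add(action)

    login_only_counts = defaultdict(int)
    for (account, _), actions in sessions.items():
        if actions == {"login"}:
            login_only_counts[account] += 1

    return {acct for acct, count in login_only_counts.items() if count >= min_sessions}

# Example: "bot1" only ever logs in, while "alice" browses and buys after logging in
events = [("bot1", f"s{i}", "login") for i in range(5)] + [
    ("alice", "s1", "login"), ("alice", "s1", "view_product"), ("alice", "s1", "checkout"),
]
print(login_only_accounts(events))  # {'bot1'}
```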

    Our approach of baselining normal behaviour and flagging deviations from it, even against the most sophisticated ATO attacks, has paid dividends. If you would like to learn more about the anatomy of an ATO attack, sign up for a Netacea Bot Protection Demo and see for yourself how our use of defensive AI helps brands stop account takeover attacks.
