
Bots vs Consumers, Social Media API Access, Ticket Scalping Legislation

Season 2, Episode 3
13th July 2023

In this month’s episode, we start by focusing on the real-world impact of bots (scripts used to automate tasks and exploit business logic). In the UK, bots are being used to book up every available driving test before reselling them for profit; meanwhile in the US, gig workers delivering groceries are losing out to bots that hoard the most profitable delivery jobs. Our panel explains how this happens and discusses what can be done to stop it.

Meanwhile, the social media landscape is shifting rapidly. Free, unlimited access to APIs has become a thing of the past for users and businesses reliant on Twitter and Reddit. Fake accounts are still a looming problem across platforms, forcing the much-hyped IRL to close permanently. Are social media businesses taking the right approach to data scraping, fake account creation and access to their services, and will Meta’s Threads disrupt the industry?

Finally, we take a fresh look at ticket scalping considering legislative measures taken by the State of Victoria for Taylor Swift’s tour in Australia. Will it be enough to deter the touts?

Speakers

Dani Middleton-Wren
Head of Media, Netacea

Matthew Gracey-McMinn
Head of Threat Research, Netacea

Paulina Cakalli
Lead Data Analyst, Netacea

Chris Collier
Head of Solution Engineering, Netacea

Episode Transcript

[00:00:00] Dani Middleton-Wren: Hello and welcome to Cybersecurity Sessions, episode three, series two. Today we are going to be covering consumers versus bots as we delve into the challenges facing Walmart and the DVSA, as well as the battle of the billionaires.

You'll know who we're talking about very, very shortly if you haven't clocked on already. And then we'll get into the attack of the month, and your clue for that one is Taylor Swift. She is probably the sponsor of this podcast by this point, as we've discussed her ticket sales in every single episode.

So I'll quickly introduce today's panel. We have with us Chris Collier, who is Head of Solutions Engineering here at Netacea; Matthew Gracey-McMinn, Head of Threat Research; and Paulina Cakalli, Lead Data Analyst. I'm Dani Middleton-Wren, Head of Media at Netacea, and I am your host.

So each member of our panel is bringing a different perspective on today's subjects. So for Matt, we obviously have threat research. Chris is going to bring with him some business insights and Paulina is going to provide that all important technical perspective as we get into how the attacks that are actually behind these challenges work.

So let's start with digging into those challenges faced by Walmart and the DVSA. So bots are being used to book up every available driving test before reselling them for profit. The DVSA has released a statement acknowledging the problem and outlining what it's done to try and solve it.

Okay. So Matt, do you wanna talk to us a little bit about this problem and what kind of bots you think are behind it?

[00:01:44] Matthew Gracey-McMinn: As with a lot of these problems, when you look at them, they get quite complex when you look in depth.

But I think the key thing to remember is that while we talk a lot about bots and the automation behind these attacks, at the end of the day it is still a human running the attacks. There's a human here who's benefiting, and in this case they're very much after financial benefit. So if you've tried to book a driving test recently, or if you have a relative who has, you may have found that it's nearly impossible to get one.

All the slots are taken and really you're just waiting for cancellations. Alternatively, you can buy them off Facebook Marketplace for about 400 quid, a significant markup on the actual price. So what a lot of people are doing is they're waiting for those cancellations and there's services that are trying to fit into that gap. And that's what we often see with these business logic attacks, is that someone sees there's a supply and demand issue, and they'll try and put themselves into that gap to try and acquire the supply so that they can sell it to the people who are generating the demand at a marked up price.

And in this case, what you'll often find people looking for is apps that will tell them when there's a cancellation in their area. So you can go to these apps, you pay a subscription charge similar to what you do for a streaming service, you know, a monthly charge. It'll take your details, log in on your behalf to the service every few minutes, checking to see if anything's available, any cancellations have come up.

And then when one has come up, it'll place a booking there and then let you know that you've now got an earlier slot than you'd previously had. But what we've also seen historically, and is still going on, is less scrupulous actors doing things such as impersonating driving instructors. So your ADIs, your driving instructors essentially, have the ability to create block bookings, not just on a one-to-one basis. They can block, say, 20 slots for their students, and then they can move them between their students. And what they're now finding is that the malicious actors are contacting these driving instructors and saying, "Hey, would you like to make an extra 400 to 600 quid a week? If so, it'd be great if we could book appointments through you," essentially. And what they're doing is using things like automated tools to block book as many appointments as possible, and then listing them for resale at a marked up price. And they're taking advantage either of ADIs who've bought into the system to make a bit of extra money, or they've falsely impersonated an ADI themselves in order to register on the service and place these bookings.
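For readers who want to picture the mechanics, here's a minimal sketch of the cancellation-checker pattern Matthew describes, in Python. The endpoint, response shape and interval are hypothetical assumptions rather than any real service's API, and automating a booking site like this would likely breach its terms of use:

```python
# A sketch of the cancellation-checker pattern: poll every few minutes,
# then act on the first slot that appears. Hypothetical endpoint and
# JSON shape; automating a real booking service may breach its terms.
import time

import requests

CHECK_INTERVAL_SECONDS = 300  # "every few minutes"

def fetch_available_slots(session: requests.Session) -> list[str]:
    # Hypothetical endpoint returning open test slots as JSON
    resp = session.get("https://example.invalid/api/test-slots")
    resp.raise_for_status()
    return resp.json().get("available", [])

def watch_for_cancellations(session: requests.Session) -> None:
    while True:
        slots = fetch_available_slots(session)
        if slots:
            # A real app would place the booking and notify the subscriber
            print(f"Cancellation found: {slots[0]}")
            break
        time.sleep(CHECK_INTERVAL_SECONDS)
```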

And then there's no bookings left. People are left looking for driving tests. There's a massive long... months, six months I think we're up to now...

[00:04:04] Dani Middleton-Wren: Mm-hmm.

[00:04:04] Matthew Gracey-McMinn: ...waiting list for a driving test. People want to get on the roads, they're ready to go. They don't want to just sit there for six months spinning their wheels, literally in the driveway, probably cause they're not allowed on the public road yet.

And so they go, "okay, I'm gonna buy a slot off these people and it's gonna cost me 5, 6, 7 times the amount the actual slot should cost."

[00:04:23] Dani Middleton-Wren: And so besides the obvious financial impact of having to spend so much more at this inflated cost on the driving test slot, what is the additional risk? So is it the ADIs?

Are they risking their own reputation, or is it the drivers? Are they putting themselves at risk? If they book slots via these services, these cancellation services, who's at the greatest risk and what are the risks?

[00:04:46] Matthew Gracey-McMinn: So there's quite a variety of risks. Obviously if you're buying driving test slots through Facebook Marketplace, there's no checks.

There's no balances. You're not buying it off the government, who do have to honor their word in many cases. In this case, you're buying it off a random person on the internet. And that could easily be a scammer. They may not even be in the UK. They may not even have any driving test slots, but they're able to impersonate and pretend that they have. You give them the money, and you get nothing in return. You've just been scammed out of 400 quid. That's a risk to the drivers. There's also the risk, obviously, if you're handing over information for logging into accounts and so forth to some of these apps, some of them may not be entirely scrupulous or they may not store that data securely, and so it may find its way onto the dark web and so forth.

And if you're reusing those details somewhere else, then that can lead to other attacks against other things you've logged into. You know, it could lead to identity theft, loss of things like your streaming accounts. All sorts of horrible things can happen there. For the approved driving instructors, I must admit I'm not too clear on the exact laws around this, but I do know that the UK government site actually specifically states that in order to essentially charge people for clicking on these links and these sorts of things, you have to contact the government and get permission for that. So you are breaching the terms of service of the site.

Now, I'm not a lawyer and I dunno how enforceable that is in a court of law or anything, or what the consequences for that are, but it's certainly immoral and it's likely to cost you the faith of your students if you are actually selling things on. And potentially, if the government cracks down on this more and more, and we know this has been discussed even in Parliament, the House of Commons is actually discussing these issues.

As they crack down more and more, the less scrupulous ADIs are likely to find themselves in more and more hot water if they're facilitating these attacks.

[00:06:21] Dani Middleton-Wren: And have their own license taken away from them, presumably.

[00:06:24] Matthew Gracey-McMinn: That's potentially a future risk that could develop. Yeah.

[00:06:27] Chris Collier: It's interesting though when you think about it because a lot of driving instructors, unless they work for a massive conglomerate, are more than likely gonna be self-employed.

And so do you think maybe the credit crunch and everything that's gone on with inflation is driving more of them to think, "well, I need to make this extra money", and that's why they're willing to take the risk?

[00:06:47] Matthew Gracey-McMinn: I do think that's the case. And we've seen that across other industries as well.

Scalping, I mean, this is kind of a form of scalping really. And we saw this with scalping and video game consoles, concert tickets and everything over COVID. People lost money, as you were just saying, Chris. And as a consequence, they're like, "Hey, how can I make more money? Oh, look, brilliant. I can do this. It's cheap, it's quick, it's effective, and I can pocket quite a bit of cash very quickly." The downside is the varying degrees of legality and ethicality which go with it.

[00:07:15] Dani Middleton-Wren: So we know that the ADIs are using this service via bot users, but do we think that sometimes the ADIs are likely to be the actual bot operators themselves and they've, as you said Chris, begun to provide that service as a sideline, as a hot little hustle?

[00:07:32] Chris Collier: I dunno. I think you're gonna have varying degrees of technical capabilities. A lot of the research that I did when I was an undergraduate was on how easy it is for somebody to get the information to actually teach themselves how to hack something.

And it's surprisingly easy, whether they're skilled at it or not. Whether they can hide their tracks and things like that remains to be seen. But acquiring the tools and things like that, and I'm sure MGM can completely back me up on this, is really easy.

[00:08:01] Dani Middleton-Wren: Yeah.

[00:08:01] Chris Collier: And so someone could quite easily orchestrate something like this on their own. Likelihood of them probably doing it that way?

No. They'll more than likely have somebody else do it, I guess just because they might not be technically capable, but it's possible. Yeah.

[00:08:15] Dani Middleton-Wren: But they can still harness the service by somebody else.

[00:08:18] Chris Collier: Mm-hmm.

[00:08:18] Dani Middleton-Wren: Because that is essentially what's been happening with the Walmart delivery drivers. So Walmart are trying to identify drivers that are utilizing bots to hoard these delivery slots for themselves, taking it away from the masses, hoarding those slots that are specifically located in their area, and essentially that then monopolizes the market so that those who've got access to bots are exploiting the system, leaving other drivers with minimal to no delivery slots available. And what that has resulted in is a huge financial impact on those drivers remaining who aren't using bots to exploit the system.

So we can see there, as you said, Matt, that economic impact that has resulted in a further impact on those who are not using immoral methods to access the delivery slots. So let's move on and discuss that a little bit. So with the DVSA, it's a government run system. Walmart is slightly different.

It's a private company. And they are trying to proactively investigate and identify drivers who are exploiting the system. Paulina, do you want to tell us a little bit about how drivers might be exploiting this system, using those bots to get those delivery slots to start with?

[00:09:32] Paulina Cakalli: Yeah, sure. So first I wanted to add a little bit on the DVSA problem.

So technically I see basically two problems in what is going on at the moment. One is the availability fraud, where an individual can pay an app up to £30, set a time period and say, okay, I want to book the driving test between this date and that date, adding their details and everything.

And here we go: the bot will book the test in seconds, whereas a human will need at least a minute to fill everything in and do this activity. And the second problem on the DVSA side is the scalping, as Matt said: bots are booking places and then reselling them.

So from what I have seen, they're already using two blocking strategies. One is CAPTCHA and the other one is hard blocking. I've seen a lot of complaints about hard blocking. Maybe they're not measuring very accurately how well they're separating bad bots from human traffic.

So I'd suggest starting with CAPTCHA as the blocking strategy, using more CAPTCHA if you are hard blocking a lot of human users. And then if you have bots that are bypassing CAPTCHA, you can add other blocking strategies on top of that, like setting up a model that looks for the users who bypass CAPTCHA.
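As a rough illustration of the layered approach Paulina suggests (challenge first, hard block only repeat bypassers), here's a minimal Python sketch; the score names and thresholds are illustrative assumptions, not a description of any real system:

```python
# A sketch of tiered blocking: allow low-risk traffic, challenge the
# middle with CAPTCHA, and hard block only clients that keep bypassing
# it. Scores and thresholds are illustrative, not a real product's.
from dataclasses import dataclass

@dataclass
class RequestSignals:
    bot_score: float       # 0.0 (human-like) to 1.0 (bot-like)
    captcha_bypasses: int  # prior suspicious CAPTCHA solves by this client

def choose_response(signals: RequestSignals) -> str:
    if signals.bot_score < 0.5:
        return "allow"       # low risk: don't inconvenience real users
    if signals.captcha_bypasses == 0:
        return "captcha"     # medium risk: challenge rather than block
    return "hard_block"      # repeat bypassers: escalate

print(choose_response(RequestSignals(bot_score=0.8, captcha_bypasses=2)))  # hard_block
```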

So in terms of the Walmart problem, it looks to me more like an order fraud problem technically. It doesn't look like a scalping issue, as they're not reselling. What they're doing basically is taking more orders so they can deliver more stuff and make more money compared to other drivers.

So it looks to me like a competition between drivers: some drivers are using bots to do this, and some others don't have the information, maybe, or are not aware of how to do this. So I'd say it would be good if Walmart can start with some blocking strategies against this activity.

It would be good to run some data analysis on the web logs or API traffic that they have and see what the automated traffic looks like. There's a lot you can do there. One approach might be tracking the session per user and clustering different behaviors based on how many seconds a user needs to take an order: bots might need less than a real driver, let's say one or two seconds, while a human driver might need minutes. But I'm not sure what they're doing at the moment, so from the information that has been published, I'd suggest they start analyzing this traffic. If they don't know how to do that, it would be good to get in touch with a bot protection provider, and Netacea is a good one, you know, to start off, why not?

And take more advice on how to stop these activities, so we play a fair game for everyone.
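To make the session-clustering idea concrete, here's a minimal Python sketch that clusters sessions by how quickly an order is claimed; the sample timings are invented, and a real analysis would work from actual web log or API traffic fields:

```python
# A sketch of clustering sessions by time-to-claim: bot-assisted claims
# land in a "seconds" cluster, human claims in a "minutes" cluster.
# The sample timings below are invented for illustration.
import numpy as np
from sklearn.cluster import KMeans

# Seconds between an order appearing and a driver claiming it
seconds_to_claim = np.array([[0.8], [1.2], [1.5], [95.0], [120.0], [180.0]])

model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(seconds_to_claim)
bot_like = int(np.argmin(model.cluster_centers_))  # the faster cluster

for secs, label in zip(seconds_to_claim.ravel(), model.labels_):
    verdict = "bot-like" if label == bot_like else "human-like"
    print(f"{secs:6.1f}s -> {verdict}")
```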

[00:12:43] Dani Middleton-Wren: Absolutely. So Walmart released a statement last week, following the rallies in Illinois staged by drivers affected by this problem, in which they stated that bots are an industry-wide issue.

So it's really good that they're recognizing that it's something that's happening to them, because it's quite right: it's likely that a lot of delivery services are having very similar challenges. And they say they're taking a proactive and comprehensive approach to identifying and preventing the use of bots on the platform, including investigating incidents and deactivating drivers.

So I suppose that takes us quite nicely on to Chris. And who do you think is responsible for the problem? Should it be Walmart or...

[00:13:21] Chris Collier: I think it's a really interesting topic because, I mean, the gig economy is not new. Let's be fair. It's been an emerging market for a while because of services such as Deliveroo, JustEat and all those other kinds of things.

What I think's really interesting about it particularly is that Walmart's being called out specifically, and their platform versus others; I'm a hundred percent certain others are suffering from this problem too. And I think it's just really interesting that, A, they've been called out on it, and it's been made incredibly public.

So I think Walmart had no choice but to get out in front of it, given the fact that there were protests. But I'm sure that there are certainly others that are dealing with it. From a government point of view, with the DVSA, I think it's an issue of national security to a certain extent. I mean, ultimately it's a government system. It's something that is being used nefariously in order for people to make a profit.

Obviously I know that appointments do cost money from a government point of view, but you're purchasing it from essentially the authorized reseller of the appointments at that point in time. So I think it's good that they've come out, they've recognized it and that they've made everybody aware of the fact that they're dealing with it.

The issue you've got with that is you're basically telling your attackers: we know you are there and we know that you are doing something. So you then lose the element of surprise in being able to put anything in place to deal with it. So I suppose it's a measured approach, I would say: making sure that people understand you're dealing with it.

Cause obviously there's a lot of dissatisfaction from the general public around it. But maybe don't be as transparent about how you're going about dealing with it, so that the attackers aren't being given information indirectly.

[00:14:55] Dani Middleton-Wren: And the DVSA have stated in their recent release that they are tackling the problem, but they won't talk about any specifics of how they're tackling it, for that very reason: they don't want any attackers to be able to bypass those measures.

Matt, do you wanna go into a little bit about why that's so important?

[00:15:13] Matthew Gracey-McMinn: Absolutely. It's simply because, as Chris was saying, you're giving the attackers information. So long as there's money to be made, people are gonna be trying to make money. As I mentioned earlier, you've got that supply and demand, that gap in between, and people are trying to put themselves in there. People have been doing that for millennia with supply and demand issues; we are just in a digital economy now, and that allows for an automated approach to this method. So it gets a lot easier for attackers and they can do it at a much greater scale. It's essentially the industrialization of this previous idea. When you're trying to deal with this, you have the technological solutions, as Paulina was talking about a moment ago. But a lot of companies, businesses and organizations like the government also try to tackle it by fundamentally changing the system. So attackers are exploiting the logical flow in which a service or a product moves to an end user, and they're putting themselves in that gap.

If you change the way that flow works, like redirecting a river, you're trying to redirect it around what the attackers are doing and where they've positioned themselves. So a lot of organizations choose not to tell attackers how they're repositioning around it. So that, as Chris said, it comes as a surprise and they've then got to change their approach.

Now, changing some aspects of that is quite easy; other things make it quite difficult. One of the measures the DVSA has actually attempted is increasing the amount of supply. They've increased the amount of supply, and that makes it much harder for attackers to demand high returns. And that's a key point you really want to emphasize: if you can push down the return on investment for the attacker, if you reduce the profit they're making, they're likely to go and look for more profitable activities elsewhere.

But that is where the challenge is: trying to push that down, obviously. And as we've alluded to earlier, the attackers start off, as Paulina was saying, as delivery drivers competing with each other, and someone finds a way of getting an edge using a technology over time.

What we tend to see as well is that more nefarious, more criminal actors tend to get involved in that. I've not had any confirmation of this, but some of the Walmart gig drivers have actually stated that when they've not been able to get the deliveries off the bot users, the bot users have offered to redistribute them in exchange for cash, drugs, or cell phones, or over here in the UK, mobile phones that would be. So, you know, you're starting to see a more criminal element getting involved, which is usually the case with these things. It tends to start out as someone trying to get a slight edge over the competition, whether that's for buying a PlayStation 5 or something, or in this case acquiring deliveries to send out so they can actually make a living.

But then it moves into a more criminal space, as more nefarious elements go, "actually, I can make a lot of money doing this."

[00:17:42] Paulina Cakalli: The other thing is that there are also a lot of scams. Usually when you use a bot, you install an app and fill in your details: address, telephone number, driver number, everything that identifies the driver. And there are some scam apps where you fill in all your details as a driver, and then you pay the app 20 or 30 to do the bot activity.

And it's a scam. They take the money and the bot doesn't do anything.

[00:18:13] Matthew Gracey-McMinn: And they've got your identifying details as well.

[00:18:16] Dani Middleton-Wren: So you've paid for a service, thinking, I'm gonna provide it with all of my personal details to autofill this form.

But then the automation aspect of the form fill doesn't work when you actually try and use the service.

[00:18:28] Chris Collier: It probably does. It just doesn't go and fill in the form for the delivery site. It probably fills in a form for a database that they can then use your personal information for something else entirely.

[00:18:36] Paulina Cakalli: Yeah.

[00:18:37] Dani Middleton-Wren: And that's where we get further into this nefarious aspect that Matt mentioned there, whether it's a criminal element or whether it's simply storing your personal details to be used for something else. And so, whether it's with Walmart or the DVSA, let's talk about those consumer challenges and where that personal data can end up.

What are those risks? Chris, do you wanna talk about it from a business perspective? So if you are Walmart, who, as you mentioned earlier, are a commercial entity with a board to answer to, what are they gonna be thinking?

[00:19:09] Chris Collier: Well, I think one of the first things is that they need to understand what the extent of the actual problem is. You can't respond to something without truly understanding the extent of it, first and foremost. And so it's a case of: they need to understand that, and then they need to understand how to formulate responses to it.

What I think is quite interesting about how their platform works versus, let's say, ASDA or Sainsbury's here in the UK, is that they're using the gig economy in order to fulfill deliveries, whereas here you'd be employed directly by ASDA or Sainsbury's as a delivery driver.

And so maybe they need to think about hybridizing that, so that a certain percentage of it has to be delivered by authorized Walmart delivery drivers. That way they can make sure the customers buying their products are getting the service they've paid for, but still allow for the gig economy.

So it reduces that footprint a little bit and gives them a little bit more control around being able to triage that element of it. They're still providing the services, they're still giving the gig economy to those drivers that obviously want it, but being able to maintain and manage that a lot more effectively is one of the ways that I would go about it.

Particularly because you have the business logic element here of, well, people need to access this, and they more than likely need to access it via APIs in some way, cause they're using an app, as an example. How do we tell that everything accessing that API is actually a legitimate member of the Spark platform, I think it's called, versus an automated tool that's just trying to grab the delivery slot?

[00:20:40] Paulina Cakalli: Yeah. For sure, bots are a very challenging field, and they use residential proxy IPs and a lot of other sources. So at some point they will need to analyze this and get expertise on the problem.

[00:20:55] Dani Middleton-Wren: But so do we think it's an inventory hoarding issue or...

[00:20:58] Paulina Cakalli: What they will do, basically, is try to create fake accounts, other accounts which might be set up with fake or stolen personal details.

And they will still book more, and at the end of the day it will be just a single person delivering all these orders. But yeah, inventory fraud for sure, because they're basically booking the orders and that's part of the use case. But then if we create these limits, they will expand the kill chain with other use cases, like fake account creation.

[00:21:34] Matthew Gracey-McMinn: Yeah, I'd say something we see quite regularly in most of these attacks is that you get one sort of attack which they use to either set up an additional attack or expand into one. That's common in business logic attacks. It's also common in traditional cybersecurity hacking attacks.

You know, an attacker gets access, they'll then see what they can do to launch more attacks and essentially try to increase the financial return that they get on the attack. Paulina's been alluding to it: the same fundamental theory of defense in depth is what you need. You need those processes, those policies, as well as layers of technological solutions to try and prevent these attacks. Some technologies will catch some attacks, some will prevent others, and so forth.

[00:22:10] Dani Middleton-Wren: So you might need a different approach to tackling fake account creation to what you'd need for the inventory fraud.

[00:22:15] Matthew Gracey-McMinn: Yes, exactly. You do. They're fundamentally different problems and they'll look different...

[00:22:20] Paulina Cakalli: Yeah. Yeah.

[00:22:20] Matthew Gracey-McMinn: ...in the data sets as well.

[00:22:22] Paulina Cakalli: So basically, the user journey is completely different. It hits different areas of the web apps or APIs, so it'll need, let's say, a different model that will track the behaviors over time and will detect if a user is a human or a bot, basically.

It needs different products in terms of the detection. So we have completely different behavior when we have inventory fraud, or in Walmart's case, let's say booking abuse. And then when we have the other activity, fake account creation, mapping this to the BLADE Framework, the kill chain is completely different.

So we'll need a different model that will be able to identify the bad bot traffic.

[00:23:04] Dani Middleton-Wren: So fake account creation is something we've also been seeing across social media recently. The unicorn social media startup IRL raised millions of dollars in investment funds, only to recently admit that 95% of its user base was bots.

And this has killed the app off completely. And essentially that is a bunch of fake account creation. So what I'd like to understand a bit more is who benefits from creating those accounts? Chris, do you wanna dig into this? Do we think it's IRL themselves that might have been behind it? But yeah, who do you think is gonna benefit from that?

[00:23:44] Chris Collier: It really depends, doesn't it? So social media has always been a really interesting topic from my perspective, purely and simply because I don't think the human race has ever seen anything quite like it. The rise of the worldwide web and the internet gave us access to information at scale and at speed.

And I think social media has just compounded that issue. So I would say that IRL may have added some of those users specifically just to inflate numbers. All of them? No, I doubt it highly. One of the things that is becoming incredibly prevalent across social media is misinformation and the need for people to have things fact checked.

And I think that is where some of the biggest risks come from with social media, in that you have multiple platforms that you can hook a singular app up to. So TweetDeck, as an example, is an app that you can hook up to Twitter, Instagram, Facebook, all these different social media platforms.

And you can create posts and have it post on your behalf to all of them at the click of a button. If I wanted to spread a disinformation campaign, that's probably one of the first ways I would go about doing it, because of the reach you can get through social media with accounts that maybe look real, maybe don't look real. At this point I don't think it actually matters to a lot of people anymore whether the account itself looks real, if the thing that it has posted is eye-catching enough and speaks to you as a person. It's the clickbait: you're gonna click on it, you're gonna wanna read it, you're gonna interact with it in some way.

And that's why I think them being transparent about it and going, yeah, we've got all of these fake accounts on it, even though it's basically cost them the business, is the just thing to do: admitting that this is the situation and that we can't actually tolerate it. It'd be interesting to see what Facebook and Twitter's accounts look like.

[00:25:38] Dani Middleton-Wren: Yeah. Whether they actually account for any of the...

[00:25:40] Chris Collier: Mmm. Definitely.

[00:25:41] Dani Middleton-Wren: ...fake accounts that exist. Twitter did a massive cull, didn't they, a couple of years ago, of all of the fake accounts, all the bot accounts that were on there. Obviously we live in a culture of misinformation, fake news, and a lot of that is spread through fake accounts.

As we touched on at the beginning of the podcast, we're going to be talking about Musk versus Zuckerberg.

[00:26:01] Matthew Gracey-McMinn: The cage fight.

[00:26:02] Dani Middleton-Wren: Yeah, the cage fight of the century.

[00:26:05] Matthew Gracey-McMinn: It's the one time I might consider using a scalper bot to get the tickets for that cage fight because I'm looking forward to it.

[00:26:11] Paulina Cakalli: I'm loving the Elon Musk parody tweets, by the way.

[00:26:33] Dani Middleton-Wren: So Musk and Zuckerberg take very different approaches to their social platforms. While Elon Musk has been cutting the number of tweets users can view for free, which we think is him trying to drive more signups to the Twitter subscription service, Zuckerberg sweeps in and introduces Threads. What do we think this is going to do to the social media landscape? Let's start there.

[00:26:57] Chris Collier: Threads was unprecedented. Unprecedented.

[00:27:00] Dani Middleton-Wren: Did anybody see that coming?

[00:27:01] Chris Collier: No. But I think one of the things that you've gotta really take into account is that they managed to get the same number of subscribers in like 15 minutes that it took Twitter years to acquire. So you've gotta look at that twofold. One, is the general public at large very disenfranchised with Twitter in the Musk era? Or, cause let's be fair, Threads isn't related to Meta as such, it's more tied to Instagram, has Instagram's growing popularity as a social platform with the masses at large allowed them to sweep in and take all of these subscribers? Or not necessarily take them, but allow all of these subscribers to sign up? I mean, I've used it really briefly. It's so similar.

[00:27:55] Paulina Cakalli: Yeah, I think Instagram is more used than Twitter in terms of the statistics, so maybe that explains why Threads got the same number of subscriptions.

And also it's associated with Instagram from what I've heard. So in the future, if you want to delete Threads, you'll need to delete your Instagram account as well.

[00:28:11] Dani Middleton-Wren: Which nobody's going to do.

[00:28:12] Paulina Cakalli: Yeah.

[00:28:14] Dani Middleton-Wren: Where else would you find all of your historic photos?

[00:28:17] Paulina Cakalli: Yeah.

[00:28:18] Matthew Gracey-McMinn: Although Instagram does have a consistent problem with bots as well.

In fact, to the extent that obviously you've got influencers and marketers on there, a lot of them have paid for significant numbers of followers. In fact, there are countries where there are vending machines, you go up, you input your Instagram details, and you can pay for a hundred likes on your photos or 10,000 new followers, just put money into the vending machine and these bot accounts will appear following you and linked to you.

[00:28:44] Dani Middleton-Wren: And huzzah, you have a hundred thousand Instagram followers and you can verify yourself as an influencer.

[00:28:48] Matthew Gracey-McMinn: Exactly. Yes. Yeah, and then that obviously leads to actual followers following you, and not to say that they're not trying to do stuff about the bots, but there are a significant number of bots on there as well.

And I suspect at least some of the new accounts on Threads are likely to have been bot accounts that have been ported over from Instagram, specifically in preparation for this social media platform potentially taking off.

[00:29:08] Dani Middleton-Wren: I think it'll also be interesting to see, as Chris said, within 15 minutes Threads had more users than Twitter had gained in years.

I wonder how many of those users will stay on as active users. I think at the moment it'll be a lot of people dipping their toe in the water, a bit of curiosity: is this just another version of Twitter, a new version of Twitter?

Because I think a lot of people fell out of love with Twitter when it became all about the trolling and it became a very different place to what it was in say, 2010. And like we've already discussed, it became a place thriving with misinformation. So I think if Threads becomes just a parody of that, it'll be interesting to see how many people stick around when Instagram has a very different vibe to it. Instagram is all about showing lots of nice things that are happening in your life.

And it's more of a, it is literally a highlights reel for a lot of people. It's less about emotion and words and more about photos. And so I wonder if it will become a combination of the two, or if people will fall out of love with it the same way they did with Twitter, but just at a faster rate.

It'll be interesting to see what that subscriber drop off looks like. Paulina, you are an avid Instagram user.

[00:30:23] Paulina Cakalli: Yes, I am.

[00:30:25] Dani Middleton-Wren: I say this as a follower and an admirer.

[00:30:27] Paulina Cakalli: I mean, to be honest, it might sound untrue, but I don't like social media a lot.

The more I get into it, the more I'm like, oh no, I should go back the other way at some point. So the thing is, we don't know how many of these subscriptions on Threads will stay for long. I personally don't have a Threads account yet. I have Twitter, but I rarely use it. I prefer LinkedIn to Twitter as it looks a bit more professional, even though all social media now is becoming full of fake accounts and everyone keeps posting fake news. People are posting on Instagram a life that doesn't exist for real; we see a lot of posts of people showing off luxury brands where the reality is completely different. In terms of the bad bots, I think scraping is one of the most common use cases, and it's been hard for social media, where mainly competitors scrape things like prices.

They scrape the content, they open new pages, and basically they sell the same products. So I think that's a problem in terms of how the business is going, because nowadays Instagram is also a business, let's say. A lot of people earn a lot of money on Instagram, and scraping might be a problem.

As I said, it's mainly focused on competitors, as they sell the same products. They scrape the same content, the same prices. Or maybe it contributes to scalping as well: they might estimate when a limited edition product will hit the market, and then scalpers might go after it, or a competitor might want to release the same product in a different time period so they can have more profit, because if we both release the same product in the same time period, it might not be very successful.

[00:32:31] Matthew Gracey-McMinn: Scraping is absolutely a big problem for social media. Reddit and Twitter have been heavily scraped by the large language models to train AI. Can't have a podcast in 2023 without talking about AI.

But the AI large language models, the ones coming out, have been scraping these services for images, text, everything, just to train their models. And I think part of what we're seeing with Elon Musk, and with Reddit's changes as well, is that these organizations are reacting to, "Hey, these other companies are making a lot of profit off the data stored on our site. They're making millions, billions of requests, scraping all this data that we are holding. And they're using it for free to train their models, which they're then actually making quite a significant amount of money out of." So they're kind of feeling a bit victimized, I suspect.

And that's probably at the root of some of these sudden changes we're seeing in some of the big social media sites like Reddit and Twitter.

[00:33:21] Dani Middleton-Wren: So are you referring to the API access there? The fact that Twitter and Reddit are no longer making APIs freely available to the public?

[00:33:30] Matthew Gracey-McMinn: Generally, yes. Yeah. I don't know how that will play out long term. Cause it is not popular. And it has killed a lot of the third party apps that are built off of these.

[00:33:39] Dani Middleton-Wren: Is that the likes of TweetDeck and Hootsuite, that sort of thing?

[00:33:42] Matthew Gracey-McMinn: Yes. Yeah. And there are some third party Reddit apps that have had to shut down as well.

And they've made some allowances for some things being done for academia and so forth. But that's a complex laborious process and it dissuades people from just trying new things. You know, you've got to go through an approvals process before you can even attempt anything. So I dunno how long it will stick around.

It feels like a bit of a knee-jerk reaction to a lot of this, but it is an understandable one, as annoying as it is for an end user such as myself.

[00:34:08] Chris Collier: So from a business point of view, you can understand it, right? I mean, ultimately Reddit has basically been giving people unfettered access to their platform for God knows how long, and that is a platform that they're having to absorb all of the running costs for. And there are plenty of businesses that made an awful lot of money out of all of the stuff that Reddit themselves were basically paying to store and make accessible. We are in Web 3.0 now. The web is not the same as it was back in the nineties, or even the early two thousands.

People need to realize that if you want access to stuff, people are gonna start to charge you for it. We are in the world of the subscription. Everything comes at a price now. You want access to it? Great, we'll give it to you at a monthly cost.

[00:34:51] Dani Middleton-Wren: So do you think Elon Musk introducing the Twitter subscription marks the beginning of a new era, where obviously we've no longer got the open APIs?

He's already charging people for access to a certain number of tweets a day. Do you think we're gonna start seeing more of that?

[00:35:05] Chris Collier: It's a twofold thing, isn't it, really? The way that you've gotta think about it is that the world of the open web is still very much there to a certain extent. Information is still incredibly freely accessible. It's just that ultimately the platforms have a duty of care to themselves and to the people that are actually using the platform, right? And having this "we welcome everybody with open arms" kind of situation doesn't really give you the visibility into who it is that's actually making use of that API.

You may have some ideas of who it is, but by gating it, essentially you are asking people to come to you and specifically say, I want to have access to this. And then that way you can federate that access in some way, or at least monitor what it is that they're doing because, I bet you, even though Reddit and Twitter will have had acceptable use policies about what you can do with those APIs, there'll have been people that have abused that and not listened to any of those acceptable use policies whatsoever.

So it gives them more control over who can do what with the actual API itself, which I'm all for to a certain extent.
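As a sketch of the gated-access model Chris describes, here's a minimal Python example where every caller must present a registered key and each key carries its own usage limit; the key names, limits and in-memory storage are illustrative assumptions only:

```python
# A sketch of gated API access: callers must register for a key, and
# each key carries its own per-minute limit, so abuse is attributable.
# Key names, limits and the in-memory log are illustrative only.
import time

API_KEYS = {
    "key-researcher-01": {"owner": "university-lab", "limit_per_min": 60},
    "key-partner-02": {"owner": "third-party-app", "limit_per_min": 600},
}
_request_log: dict[str, list[float]] = {}

def check_request(api_key: str) -> str:
    client = API_KEYS.get(api_key)
    if client is None:
        return "reject: unregistered caller"  # no key, no access
    now = time.time()
    recent = [t for t in _request_log.get(api_key, []) if t > now - 60]
    if len(recent) >= client["limit_per_min"]:
        return "reject: over rate limit"      # abuse is visible per key
    _request_log[api_key] = recent + [now]
    return f"allow: {client['owner']}"

print(check_request("key-researcher-01"))  # allow: university-lab
print(check_request("key-unknown"))        # reject: unregistered caller
```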

[00:36:10] Dani Middleton-Wren: Yeah. Do we think it provides a gap in the market for new, for want of a better term, freedom of speech platforms: freedom of speech, freedom to access that information?

Because obviously newspapers are becoming more and more paywalled. We're in a world of subscriptions in this era we're entering; as we've moved away from print towards digital and online, there has to be some way of paying the people that are actually creating that service for us.

As you've just identified, Reddit was essentially paying us at one point, and there needs to be some kind of balance. But do you think we're gonna see new platforms spring up? Do you think Instagram has a chance now to monopolize the market? They're providing a totally free service, whether via Instagram or Facebook, and Threads creates yet another string to their bow.

[00:36:58] Matthew Gracey-McMinn: If we think of it as a pendulum, it's been swung all the way over to one side for ages, where it's all completely free access and everything.

Then what suddenly happens is the pendulum very quickly swings the other way. And then Threads, and I suspect others, will start setting up competing platforms that are much freer, much more easily accessible and so forth. And I suspect what will happen is that the big players who've been on that pendulum and have swung all the way in the other direction will come back to rest somewhere in the middle.

And what will end up is a sort of hybrid of the two extremes. I think we'll end up with somewhere in the middle where there's a slight pay wall or more controls in place, but it's not too egregious. Cause as much as I love the whole freedom of speech and anyone in the world could contribute to these sites previously and everything, I love that. But it started to get a bit manipulated. You know, people were spreading... and it's important to distinguish between misinformation and disinformation here. Misinformation is just factually inaccurate data that was shared. Humans make mistakes, that happens.

Disinformation is intentionally incorrect information, shared to try and push a particular point of view or to cause disruption and chaos. And these very open platforms did facilitate that. So some degree of control over that needs to be implemented. And I think part of the reason the pendulum is swinging over there is this attempt to try and regulate and control those very aggressive disinformation campaigns that we used to see on these platforms.

[00:38:13] Dani Middleton-Wren: And that is a really important clarification based on the conversation we had earlier about fake account creation and using those to create disinformation campaigns. Especially as we think about things like the run up to political campaigns. It becomes more and more important to take that into account.

When you are reading any source on the internet, you need to make sure that it is a verified and trusted source before you take anything as valid.

[00:38:38] Matthew Gracey-McMinn: Although the verification and validation, you know, those blue ticks and everything...

[00:38:42] Paulina Cakalli: Yeah.

[00:38:43] Matthew Gracey-McMinn: They're not hard to get hold of a lot of the time. You know, there's, people have ways of getting around these and even verified accounts are often used in disinformation campaigns and so forth. As you mentioned earlier, if there's enough benefit, and we were talking about financial benefit earlier, but if you, you know, political benefit as well, if there's enough benefit to it, people will put in the resources needed and develop the methods needed to actually acquire what they need.

[00:39:05] Dani Middleton-Wren: Yeah, . Yeah. If it's worth their while, they'll do it.

Let's move on to attack of the month. So this month we are going to be focusing on scalper bots because yes, again, Taylor Swift's Eras tour is in the news.

This is the tour that keeps on giving when it comes to scalper bot conversation.

[00:39:31] Matthew Gracey-McMinn: I suspect the scalpers have made an awful lot of money off Miss Swift's concerts.

[00:39:35] Dani Middleton-Wren: Just making their way from continent to continent.

[00:39:38] Matthew Gracey-McMinn: Well, with the internet, this is something we see with scalping of electronic goods and stuff. And I don't see why this couldn't work with tickets, particularly with the digital distribution of tickets now, is that you don't necessarily have to be based in the country where the concert's taking place.

You could scalp the ticket from somewhere else. I could sit here in the UK and potentially try to scalp the concert coming up in Australia or over in the US.

[00:39:59] Dani Middleton-Wren: Yeah, and that is exactly what we are going to reference today. So the Victoria State Minister for Tourism, Sport and Major Events announced that Taylor Swift's tour would be designated a major event, which activates several restrictions under Victoria's anti-scalping laws.

And the reason for doing this is because scalpers have been charging a 200 to 300% markup on the RRP, which means in some cases prices range from $925 to eye-watering figures like half a million dollars.

Nobody can pay that. But scalpers are gonna push their luck because they know the demand is there. And so the state of Victoria has put measures in place to say no. I think the most that somebody can charge is 110% of the original price. So Matt, do you wanna talk to us a little bit about why the state of Victoria is putting these measures in place? And what impact do you think this is gonna have on scalper activity?

[00:40:58] Matthew Gracey-McMinn: The main reason for this is to try and keep the price down. So, as you mentioned, scalpers are trying to make money here. So what we see across the world is that they look for high profile events where there's a limited supply and they're looking to benefit from the demand generation.

As I mentioned, they're trying to get in that gap between supply and demand. Now, concert tickets are very limited. There is a set number of seats. There's a restrictive amount of space within a stadium. You can't put more than X number of people in a stadium, so they know that there is definitely a supply restriction there.

And for a very popular artist such as Taylor Swift, there's a massive amount of demand, everyone wants to go see her perform. So they're all looking to try and get those tickets. So what they do is they use automated tools, scalper bots, to acquire as many of those tickets as they can, as fast as possible, and then they'll mark them up significantly and sell them on at that significant markup.

Ticket touts have been around for decades, long before the internet, and essentially this is the digital equivalent. What we are seeing the state of Victoria trying to do here is by putting a restriction of 110% on, they're restricting the amount of profit that can be made on a ticket.

So let's say the ticket is £100 or something, you can now sell it only for £110. So that is a markup. But is it enough to be worth my while? Maybe if I can sell a few hundred of them, but like, you know, I've got to buy a scalper bot, those run into thousands of pounds quite often. They can be very expensive, particularly if I want to do this on an industrial or competitive scale. So it suddenly becomes very hard and it also allows the state of Victoria to actually prosecute people who breach this. So any attempt for me to actually maximize my profit, to push it over, suddenly I'm risking essentially significant fines, criminal activity, all of this stuff.
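Matthew's back-of-the-envelope numbers can be worked through directly. Assuming a £100 face value, the 110% cap, and a hypothetical £2,000 bot purchase (the "thousands of pounds" he mentions), a quick calculation shows how the cap squeezes the scalper's return:

```python
# Worked example of the ROI squeeze under Victoria's 110% resale cap.
# Face value, cap and bot cost are illustrative assumptions.
FACE_VALUE = 100.0   # Matthew's example ticket, £100
RESALE_CAP = 1.10    # resale limited to 110% of face value
BOT_COST = 2000.0    # hypothetical scalper bot outlay ("thousands of pounds")

margin_per_ticket = FACE_VALUE * RESALE_CAP - FACE_VALUE  # £10.00
tickets_to_break_even = BOT_COST / margin_per_ticket      # 200 tickets

print(f"Margin per ticket: £{margin_per_ticket:.2f}")
print(f"Tickets needed just to recoup the bot: {tickets_to_break_even:.0f}")
```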

The US tried to do it with the BOTS Act and I think in fact three people have been fined under it now. I think it was like $10 million each they were fined. They put some serious fines in, and while that hasn't solved the scalper problem, it's certainly at the back of the mind of anyone trying to do this, that if I get caught I can be in real trouble here.

[00:42:56] Dani Middleton-Wren: Because it takes away that gray area that we always talk about when it comes to scalping. There is that gray area of: it's not illegal to purchase or own a bot, and nobody's gonna get prosecuted just for that. You can legally sell them.

And we saw services such as this being advertised around Eurovision and around other major events. But it's the actual using of the bots and then reselling those tickets that is illegal. Is that correct?

[00:43:24] Matthew Gracey-McMinn: That is correct in the UK. I'm not too familiar with the Australian law, I'm afraid, in terms of what the exact intricacies of it are.

[00:43:31] Dani Middleton-Wren: So the state of Victoria is trying to clarify that a little bit, I guess here by saying, right, we're designating this as a major event and that means that we can put these fines and penalties in place so that anybody who does start to sell tickets, there is no excuse.

You should by this point know that you are going to get fined. They've made it super, super clear that it will not be tolerated. Which we haven't really seen before.

[00:43:53] Matthew Gracey-McMinn: Not really, not aside of the few enforcements of the BOTS Act. Though again, that's not been widely enforced and it's one of the criticisms we've seen in the US of the BOTSAct is how rarely it is enforced.

But then these acts are very difficult to enforce. Of course, it's hard to track down who actually used the bot. How do you actually track them down? And as we were mentioning earlier, what if I'm based outside of the country? Am I really going to be extradited to the US or Australia over buying a Taylor Swift concert ticket?

[00:44:20] Paulina Cakalli: I'll say on the technical side it's very challenging, as to do that activity you need to create a lot of fake accounts.

And then to purchase the tickets. So having the legal measures is good, but also having some bot protection would be good, because you'd be able to stop a lot of this activity without even needing to do so much on the legal side.

And it's also costly, I'll say. If you have bot protection, it might be a bit less costly than without it, as otherwise you will need to follow the criminals and set up a team to find who the people really doing this are, and you'll need to deal with a lot of other countries' laws and stuff.

So I think the easy way would be to have bot protection, stop this activity hitting your web services or APIs when you go on sale, and that's it. You don't need to do all this stuff.

[00:45:20] Dani Middleton-Wren: You don't later need to prosecute. Yeah. So you stop it before it becomes a problem.

[00:45:24] Paulina Cakalli: Yeah.

[00:45:24] Dani Middleton-Wren: Yeah. So as a panel, you've obviously all got very different perspectives on where these challenges lie and who is impacted and who is benefiting.

So do you think it's fair that it's not the people who are selling the bot services who are prosecuted, but those who are using the services and selling tickets on? Because essentially, by providing the service, by creating and developing a bot that can purchase a mass amount of tickets, you are enabling somebody to resell them. So do you think it's fair that they're not prosecuted but the person who uses the bot is? Or do you think it's about somebody making the decision to scalp the tickets and sell them on, and that's why they are the prosecutable party?

[00:46:06] Chris Collier: It's a really interesting subject, purely and simply because I'm of the belief that just because you have the ability to do something doesn't mean that you should.

Right. And there's been many tools over my life that I've managed to get my hands on to be able to do things with, in safe environments might I add, that if you were to use outside of a lab environment, you could do some real damage with. And just because I know how to use this tool and could launch an attack with it against somebody, does that mean that I have the right to do so, for commercial gain or otherwise? So you've got, on the one hand, the people that have developed the ability to orchestrate the attack, but they've done nothing other than develop the ability to do it. And then you've got the people that have got hold of the weapon and then used it. It's an absolute sweeping statement, but it's like: do you sue the gun manufacturer for someone getting shot, or do you sue the person that picked the gun up to shoot somebody? Because that's essentially the question that you're asking here, and that's really difficult to answer.

[00:47:12] Matthew Gracey-McMinn: It gets more complicated in this case. Cause a lot of these automation tools aren't just for scalping or tickets, they're originally for other purposes. And many of those are perfectly legitimate, perfectly legal purposes.

[00:47:24] Dani Middleton-Wren: Can you provide us with a couple of examples then, Matt?

[00:47:26] Matthew Gracey-McMinn: So for instance, take eBay. Sniper bots on eBay were a constant problem until eBay built one into the service. You know, you can tell it, "Hey, put in a bid at this level, up to this level, and just keep outbidding anyone else automatically." We've also seen bots used for what I would consider essentially ethically pure reasons. The end results weren't always great, but we saw a lot of people building bots that were designed to find vaccine slots during the Covid pandemic.

There weren't a lot of vaccine slots available, particularly at certain places in the US. So people built bots that would automatically update and send text messages to their elderly relatives, who were less tech savvy, telling them when to log on and book an appointment, cause one had become available.

They had good purposes. They were there to try and save lives. But the end result of, like, 30 people building these intensive bots was that the hospital site actually went down; it looked like a DDoS attack. And they started to struggle because so many bots were querying them constantly, and they weren't used to that amount of traffic.

So you get into this really complicated area of: they're not being designed, and maybe not even always being used, for bad purposes, but how they end up being used is the problem. It isn't just a problem for bots; it's a broader problem across cybersecurity. As Chris was saying, in some countries, having the sort of tools Chris is alluding to on your laptop when you cross the border into that country is illegal.

You can be fined under their computer misuse laws in those countries. You might even go to prison. Having that capability is illegal, but in other countries, it's not so illegal. And those tools are put to good use by penetration testers. So you get into this really difficult area of, as Chris was saying, who is ultimately responsible?

Is it the person who built it or is it the person who actually used it?

[00:49:04] Chris Collier: It's an interesting one, isn't it? It's a really interesting one. And to your point, the ticketing platforms will have automated testing tools which will automate the entire checkout process.

You log into the account, you find a ticket, you then go through the checkout process and actually acquire that ticket. And they will have tools that automate that, because they'll need to test it at scale. Someone gets hold of that, and you've basically got your bot to scalp something straight away.

Right? And that's where you have to tread the line. Because let's just say I own that platform. I will want that tool, because I won't want my staff doing manual testing of that. I'll want them to fire through as many tests as they possibly can so that I know I'm ready before the event launches.

But if that falls into the wrong hands, what has been designed for perfectly legitimate business purposes has now become a tool that's being used against me.

[00:49:54] Dani Middleton-Wren: And yeah, to pick up on a word you used earlier, it's been weaponized.

[00:49:58] Paulina Cakalli: Yeah. I was going to say as well: let's pretend I created a bot and sold it for good purposes to someone, let's say at a company, or to an individual. And then let's say I sell it four or five times, all supposedly for good purposes, but then I have sold it and that's it, I'm done with it. But then other people can resell it, and other people can use it with bad intentions or good intentions.

It's very challenging, really, to identify. And it's challenging to decide in terms of the law, I believe. As we said before, it might also be different in different countries, and that's even more challenging. But I'll go back to the first point: bot protection will help.

That's one of the best solutions we have to help businesses. We can help with bot protection, stop this activity, and then you don't need to go down the complex route of finding who did this, who used this, why, X, Y, Z, you know.

[00:51:03] Dani Middleton-Wren: Yeah. And I think you've summed it up really well there, Paulina. So over the course of today's panel: prevention is better than cure.

[00:51:09] Paulina Cakalli: Yes.

[00:51:10] Dani Middleton-Wren: That's all we're concluding with. Well, thank you all so much for joining us today. We've covered a myriad of really interesting points that are super, super topical. And we didn't even really get into the nitty gritty of who we think is gonna win that cage fight between Musk and Zuckerberg.

But I will leave it to you to determine from today's conversation who we're placing our bets behind. If you would like to follow our podcast on Twitter, you can find us @cybersecpod. You can subscribe or leave a review. And if you would like to send any questions in to our panel, you can email us at podcast@netacea.com.

Thank you all for joining us and see you on the next podcast.
