AI Ethics, Ticket Scalping, Russian Disinformation, Card Cracking

Season 2, Episode 1
9th May 2023

Welcome to a new format for the Cybersecurity Sessions! We’ve refocused our podcasts to provide insights into the latest news and trends in cybersecurity, calling on the expertise of Netacea’s threat researchers, bot specialists and business leaders.

This month, new host Dani Middleton-Wren is joined by Matthew Gracey-McMinn, Chris Pace and Tom Platt. First they discuss the ever-intriguing topic of ethics in AI, with facial recognition tech from Clearview AI and PimEyes coming under legal and moral scrutiny, followed by the practicalities of fighting back against automated ticket scalping.

The recent Vulcan files leak is next up, calling into question how much we can trust information on social media and how much rhetoric is being generated by Russian bot accounts. Finally, Matthew answers questions from the team about the mechanics and impact of credit card cracking on payment gateways, retailers and consumers.

Speakers

Danielle Middleton-Wren
Head of Media, Netacea

Matthew Gracey-McMinn
Head of Threat Research, Netacea

Thomas Platt
VP Pre-Sales, Netacea

Chris Pace
Chief Marketing Officer, Netacea

Episode Transcript

[00:00:00] Danielle Middleton-Wren: Hello and welcome to the Cybersecurity Sessions season two. I'm Dani Middleton-Wren, your brand new host of the Cybersecurity Sessions, and I'm Head of Media here at Netacea. So today we're going to be talking about AI, ticket scalping and Russian bots. Specifically, we're going to be discussing the ethics and legalities of scraping for AI, the PimEyes and Clearview facial recognition scandals, and we'll be talking about the Eurovision ticket scalping fracas from earlier this year ahead of the actual Eurovision final on May the 13th.

We'll be discussing the relationship between queuing and bots, and then onto how Russian bots are feeding disinformation into our social media feeds, as well as the Vulcan file leaks. So before we kick off, I would like to introduce our extremely insightful panel of experts that we have on hand today, starting with Matthew Gracey-McMinn.

[00:01:09] Matthew Gracey-McMinn: Hi Dani. Thanks for having me here today. I'm Matthew Gracey-McMinn, and I head up Threat Research here at Netacea.

[00:01:15] Danielle Middleton-Wren: Thank you, Matt. Next we have Chris.

[00:01:18] Chris Pace: Hi Dani. I'm Chris Pace and I head up marketing and product marketing here at Netacea.

[00:01:23] Danielle Middleton-Wren: Lovely. And onto Tom.

[00:01:26] Thomas Platt: Hi Dani. My name's Tom. I'm one of the bot specialists here at Netacea, and I've been here since we founded the company.

[00:01:32] Danielle Middleton-Wren: Wonderful. Thank you so much, Tom. And that is, for anybody who's unaware, 2018. So Tom's been with us for quite a while.

[00:01:42] Danielle Middleton-Wren: Okay, so let's jump into topic number one. So we're going to be talking about Clearview AI. Clearview AI has come into some trouble recently after scraping nearly 30 billion images from platforms such as Facebook without users' permission.

Matt, do you wanna talk about why this is relevant, why this caused such a furore?

[00:02:05] Matthew Gracey-McMinn: So there's quite a lot of different reasons why people are quite angry about it and it largely depends on your perspective, whether you're an individual user of Facebook, or if you're, say a government regulator. So if we start with sort of individual users, obviously you upload things to Facebook, you don't expect private companies to go and gather that information and then essentially sell that information on to other people.

[00:02:28] Chris Pace: Although that is Facebook's entire business model that you just described.

[00:02:32] Matthew Gracey-McMinn: It is,

[00:02:33] Chris Pace: so, they totally should expect that. But we don't expect that.

[00:02:37] Matthew Gracey-McMinn: No, we don't expect

that. so...

[00:02:39] Thomas Platt: And these guys aren't paying Facebook either, which is probably part of the issue.

[00:02:43] Chris Pace: It's the publicly accessible parts of Facebook, right?

[00:02:46] Matthew Gracey-McMinn: Yes. Yeah. It's the publicly accessible parts of Facebook. And it's not just Facebook, though, they're hitting all sorts of social media sites. They're just trying to grab as much information as possible, and they're cross-referencing it in order to find every instance of your picture appearing on the internet, as it were. Government regulators are of course also quite angry about this.

Because it's quite clearly in breach of certain data privacy laws. Data privacy laws are very complex, particularly when you're crossing international borders. The GDPR was an attempt to standardize data privacy rules across countries within the EU. Even then, each country has its own interpretation. There's the UK GDPR, the Data Protection Act.

You have a different one in Germany, a different one in France, a different one in Italy. So it gets very complex when you start working across international borders. And of course, Clearview got into a lot of trouble in Europe for breach of GDPR, essentially.

[00:03:33] Chris Pace: So that is because there is a provision in GDPR, then, that your face is your personal data.

[00:03:41] Matthew Gracey-McMinn: Absolutely. It is classed as biometric data, particularly when you're using it for identification purposes. And that's a special category of data underneath the GDPR, which means it's subject to extra protections and does require the knowledge of the person whose data is being collected and also in many cases, their consent.

[00:03:57] Chris Pace: These guys first came onto the scene in like 2020 when it came to light that they'd scraped these 20 billion faces and there was a bit of controversy then, but I'm assuming that they must have believed this was a gray enough area that they were gonna get away with this.

[00:04:13] Matthew Gracey-McMinn: I suspect that's very much the case. They thought it was a gray enough area. There's also a lot of misunderstanding around the GDPR, and not many people realize it's actually extraterritorial. So I think PimEyes have made that argument, that they're not actually EU based, so they shouldn't be subject to the UK data protection law or any sort of EU one.

However, these laws are written specifically to be extraterritorial and to account for the fact that the internet crosses international borders.

[00:04:37] Thomas Platt: But there's a more interesting legal dynamic about this, right? In the US lots of different agencies and jurisdictions have fined and actually banned the use of Clearview.

However, in those same jurisdictions, the law enforcement agencies are paying for subscriptions to this service. So you have the legal aspect, where they're saying that this is not right, they're issuing fines, they're saying it's illegal, but then those same law enforcement officers are actually using the service to find criminals.

[00:05:06] Chris Pace: Yeah. And I would guess that's what adds controversy to this, because the kind of individual who is, you know, concerned about their privacy is likely very keen to ensure that it's not being used by law enforcement or by central governments or by, you know, government agencies generally.

Because that's the whole purpose of them wanting to protect their privacy. So the fact that Clearview is a system marketed and sold to law enforcement agencies makes the whole thing even messier really in lots of ways. I think I also heard that the efficacy of the thing itself is called into question anyway because a large percentage of the data that they have is of white faces.

And so therefore, it has a particularly bad record in identifying individuals that are of different ethnicity. And so that's a whole other problem that it has. And of course that's all, I guess that's because the vast majority of faces that appear on those social networks are also white faces. So it's like a self-fulfilling thing. It's almost designing itself to not work as well because of where it's getting its data from.

[00:06:10] Matthew Gracey-McMinn: Yeah, exactly. There's also the question around AI poisoning attacks. So this is a new area of cybersecurity. A lot of people are currently exploring the idea of, can you pollute data sets? Like you said, you know, it's not very good at identifying people of other ethnicities. In a similar manner, could you potentially poison the dataset? Could I upload images of, say, myself with someone else's name attached, and then commit crimes and so forth, and essentially create a whole fake identity? Or attach that to someone else and potentially try and frame them, if I attach all of their personal details to my images, and that goes to...

[00:06:42] Chris Pace: I think I've seen the movie. What was it? I think Shia LaBeouf was in it?

There's a movie that is basically this entire concept right there. There's a fake identity that frames another individual for doing things that, it turned out, he didn't do. Anyway, I can't remember.

[00:06:57] Thomas Platt: Well, it doesn't even need to be fake. So you mentioned, Chris, that it's just criminals that are really concerned and people who want to hide their privacy, right? There was an American man last December who was arrested in one state. They wouldn't tell him why he was arrested. He spent six days in jail, had to hire two different legal counsels, one in each state. He was moved to another state. He was not told why he was imprisoned. It took them ages to figure out why.

It turned out that it's suspected facial recognition profiled him, tagged him as someone who'd committed a crime in another state. He was sent to that state, and it wasn't until the actual solicitor went and looked at the video that he could tell, "Hey, from the face, this guy looks a bit like my client, my defendant", but his body, his build, everything else was completely different. It clearly was not him. It cost this guy thousands of dollars and he spent six days in jail, because he was misidentified by this technology, right?

[00:07:51] Chris Pace: This is the problem when the volumes of data are so massive, there are going to be surely anomalies like that.

There have to be, don't there? Because although people's faces are unique up to a point, they're not as unique as a fingerprint. Lots of people can look the same or similar. How clever does the machine have to be to know that it's definitely that individual? In fact, they don't commit to knowing it's that individual.

They basically, you know, Clearview themselves call it a decent guess at that person. And the problem is the more reliant a law enforcement agency becomes on it, the more difficult it's gonna be to suggest that it could be wrong. Yeah.

[00:08:25] Thomas Platt: We are years away from people being sent down for crimes because AI incorrectly identified them, right?

[00:08:31] Chris Pace: But it's a possibility.

[00:08:32] Thomas Platt: It's gonna happen at some point. You know, the system's never gonna be perfect, right? Like, people spend a long time in jail before cases are even brought to trial. Who's to say we don't see this happen more and more often?

[00:08:44] Chris Pace: I have a question about how easy it is for them to do what they did. So we've understood that the legality of the GDPR has been applied to say, look, that thing you did, you shouldn't have done. That's one thing, fining them. But technically, how hard was it for them to go and scrape those 20 billion faces? Probably not that hard.

[00:09:05] Matthew Gracey-McMinn: Scrapers have been around for a while. Pulling images is very, very simple. It's simply a matter of having enough processing power to do that at scale, probably very quickly. You know, you could do that theoretically with a laptop. Probably with a modern smartphone you could actually do that. It'd just be a bit slower than perhaps an enterprise would want it to be. But easily enough. Cloud computing, you could do that at scale very, very quickly and just store the images.
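To underline just how low that bar is, here is a minimal, hypothetical Python sketch of the kind of scraper Matthew describes, using the third-party requests and BeautifulSoup libraries. The URL is a placeholder, not a real target; a real operation would simply run something like this across many cloud instances.

```python
# Minimal sketch of bulk image scraping from a publicly accessible page.
# Hypothetical example: the URL is a placeholder, not a real target.
import os
import requests
from bs4 import BeautifulSoup

def scrape_images(page_url: str, out_dir: str = "images") -> int:
    """Download every image linked from one public page; return the count."""
    os.makedirs(out_dir, exist_ok=True)
    html = requests.get(page_url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    count = 0
    # Every <img> tag on a public page is just one more HTTP GET away.
    for img in soup.find_all("img"):
        src = img.get("src")
        if not src or not src.startswith("http"):
            continue
        data = requests.get(src, timeout=10).content
        with open(os.path.join(out_dir, f"{count}.jpg"), "wb") as f:
            f.write(data)
        count += 1
    return count

print(scrape_images("https://example.com/some-public-profile"))
```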

[00:09:29] Chris Pace: And it's something as simple as a database that says, this is this person's name. These are all the photos of their face.

There's probably a minimal amount of processing that says this is a face. Like they don't wanna scrape stuff that's, you know, their dog or where they went on holiday or whatever. I'm also assuming that the tagging elements in Facebook must be at play here as well because you tag photos in Facebook to say they're you.

So I'm assuming that stuff's accessible as well. So it just made their job really easy for them.

[00:09:54] Danielle Middleton-Wren: Yeah. Facebook's always been able to kind of identify faces like that, hasn't it? When it comes up with a tag suggestion: we think this is that person. Yeah. Yeah. And so how do we think that then translates to the trouble that PimEyes have got into? It's said that they've been scraping Ancestry and scraping, like, dead people's images.

What do you think of that? How do you think that then...?

[00:10:13] Chris Pace: Well, PimEyes is worse in my view. Because what PimEyes says is, if you've got one photo of someone, you can now find all of the photos of that person and identify who they are, which I think is quite terrifying.

[00:10:28] Matthew Gracey-McMinn: The way they market it is, it's a way of finding all the photos of you on the internet so you can protect your privacy.

Yes. The question though, and this was a question for Clearview, and Clearview originally were not actually just limited to law enforcement. They actually sold it publicly, to members of the public, but then there were significant concerns around stalkers, child abusers of course, yeah, and that sort of stuff.

PimEyes is probably going to fall foul of the same concerns, I suspect. Yeah. There's real worries there. But as far as I know, they still do offer a publicly available querying system.

[00:11:01] Thomas Platt: I have a question with all of this. This concept of a right to be forgotten, a right to have your data deleted: are we able to put in a Freedom of Information Act request to PimEyes, to Clearview, with my face and say, "Hey, I wanna know what pictures you have of me.

You have to send them to me. And equally, I would like you to remove me from the system, therefore I can't be found." Surely, does that right mean criminals could go and remove themselves from the law enforcement databases?

[00:11:30] Matthew Gracey-McMinn: I'm not a lawyer. I'm not an expert on UK law. But I've had a lot of interactions with the GDPR in my past as part of my career. So data subject access requests and the right to be forgotten do apply in certain situations.

You cannot ask to be removed from law enforcement databases, provided law enforcement have a legitimate reason for processing your data. However, organizations such as PimEyes and Clearview essentially are depending upon the consent of the individual in question. So if you retract that consent, you can ask to be removed.

Both Clearview and PimEyes, I believe, do allow you to ask for your photos to be taken down. However, their scrapers are still going around the internet, and a couple of people have found that they asked to be removed and, a couple of months later, they've got more photos back up there.

[00:12:15] Danielle Middleton-Wren: So is the issue not with PimEyes and Clearview AI, but more with the websites that the original image is sourced from? So obviously with PimEyes, what they're trying to do is show you these are all the images that exist on the internet. You could potentially use that to say, "I'm not happy that my image is available", or "I don't want this particular image to be available on this website anymore and I want to remove it".

Could you use it as a way of tracking down your images and saying, "I'm gonna use this as a way to get rid of all those images"? Or do you think PimEyes and Clearview are the ones doing the bad thing, when really they're just the tool?

[00:12:50] Chris Pace: They can only exist because, you know, this is back to the same old thing. This is exactly like the 2016 disinformation stuff, the Cambridge Analytica stuff. This is the same, it's just faces, right? So instead of asking me to take a quiz, and me filling out a quiz that says I'm 61% fun to be with,

what it's doing is taking the pictures that I've posted onto Facebook, with my consent, and using that data instead. But the challenges are exactly the same. What's interesting is that you have this situation where individuals make up that data set, but Facebook fundamentally owns that data set, right?

Facebook has made each of those individual people the product and that's why the product is free. You are the product because the thing Facebook sells is your data. And I think lots of people are now aware of that. But to your point, what do you do? What do you do? You couldn't possibly track every place where you exist, you know, on the internet.

I suppose we should talk a bit about the positives. You know, PimEyes have said that there are lots of useful things that can be done with the results of these searches. Things like removing revenge porn, for example, that's ended up in lots of places on the internet. Working with humanitarian organizations to help stop human trafficking or combat crime, whatever.

You know, all these things are obviously useful things, but those aren't to do with an individual being able to see another individual's photos and identify who they are through a search bar on the internet. That feels like a moral, ethical question that needs to be answered.

And then it goes back to, well, how is government involved in protecting privacy, how far are they gonna go? And that then leads directly into things like the legislation being proposed, you know, to protect how the internet's used. And that just becomes a hot potato that it feels like we never get an answer to.

[00:14:47] Danielle Middleton-Wren: Exactly. It is the classic: the problem needs to arise before the legislation can be put in place to moderate it and to address the challenge. Because before that, legislators can't address a problem that they don't know exists.

[00:14:57] Chris Pace: But I think to Matt's point, what's more alarming is what can be done at scale fairly easily and where the potential is then for that stuff to be misused. That's the bit I find alarming.

[00:15:09] Danielle Middleton-Wren: The fact that you could do it from your iPhone. Absolutely.

[00:15:11] Chris Pace: Well, or if you've got the, if you've got the cloud capability, you can scale to billions of images, right?

[00:15:17] Matthew Gracey-McMinn: Yeah. Very, very quickly. Very, very easily. It's just a case of, have you got enough money to pay the bill for the cloud at the end of the month?

[00:15:24] Danielle Middleton-Wren: Yeah. You don't even necessarily need particular technical expertise to do it.

[00:15:29] Matthew Gracey-McMinn: Technology has become much more accessible to all of us. Yeah. And unfortunately, that also means that it's more accessible for people with malicious intent as well.

[00:15:54] Danielle Middleton-Wren: That takes us really nicely, Matt, onto our next topic of ticket scalping. So you're speaking there about the low barrier to entry for malicious actors who are trying to execute attacks.

So ticket scalping, it is so easy for anybody to go onto a website, purchase a bot and run it. Do you want to talk to us a little bit about how that works, and then we'll go into why this has become such a challenge for those high demand ticket items like Eurovision, Taylor Swift and any of the myriad of popular events?

[00:16:24] Matthew Gracey-McMinn: Absolutely. So bots, those are automated processes running over the internet essentially, are largely run as tools. You've probably downloaded apps onto your phone.

You've probably installed programs onto your computer. A lot of these bots work in exactly the same way. You buy the bot online, usually a digital download, install it onto your computer and load it up. And some of them require a bit more technical expertise than others to run. Some assume some technical knowledge, but you've got what I'd consider entry level bots as well.

To give you an idea, there is a credential stuffing tool that we see very widely used. This is a tool used specifically to take over accounts en masse, accounts on streaming services, retail stores, that sort of thing. And it is eminently possible for someone to go, "I want to take over 10,000 accounts on this retail store", and within 20 minutes of just making that decision with no technical expertise, they can download all the tools, set it all up and run it purely using the available tools online that are easily found on Google searches and using the freely available handy user guides that come with them.

The scalper market is very similar. You've got tools. Some of them are free. Some of them are very, very expensive. We see them running to five or six thousand dollars in price, sometimes even more, which tells you something about the amount of money these people make from this sort of thing.

That they can afford to outlay that much as a business expense. And what you do is you simply download the tool, point it at the target you want to try and scalp, say Eurovision tickets, hit run, and away you go.

[00:17:58] Danielle Middleton-Wren: And so, lots of dark web suppliers, you might call them, or vendors are supplying this service to anybody with access to Google, essentially.

And so what does that mean, then, for the future of people wanting to buy these high demand tickets, do you think? So if we've got, like, Eurovision popping up again, or say Ed Sheeran goes on tour, do you think it's gonna be absolutely necessary for people to purchase a bot to get those tickets?

Because if not, they're looking at, say, £11,000 per ticket in some cases, perhaps more.

[00:18:29] Matthew Gracey-McMinn: I think it's possibly the way it's going to go. We saw that with the PlayStation 5 launch where, I work in detecting bots, and so many of the people I know wanted to get a PS5. They're like, "Matt, what is the best bot to use to get a PS5? Because I can't compete with the scalper bots". So I was being asked, you know, I need a bot that will get round this company's defenses. And I'm sure you know which one it was. I was like, I do, I'm not telling you, because that is not helping the problem. But essentially it's almost an arms race, where people with legitimate concerns have to use a bot to try and keep up. So the unfortunate slippery slope of that is, "Hey, I just used a bot. Why not just buy a couple more tickets and sell them at, you know, like you said, 11 grand. I'll make 10 grand profit."

[00:19:13] Danielle Middleton-Wren: Absolutely. Obviously then you've got economic factors in play there as well. So obviously we're in a credit crisis and the temptation must be there. Absolutely must be there. Okay. So why do you think we've only now started to see people being prosecuted for these attacks?

So obviously the BOTS Act that's been in place since 2016, Tom, do you wanna take this?

[00:19:35] Thomas Platt: Yeah, I mean, look, this is nothing new. Ticket touts were around before PS5s. We saw a lot of the actors that were involved in the ticketing space, touting tickets and scalping tickets, move online during the pandemic because of the lack of events.

These were the original people that built the tools that were then used to scalp PS5s and then sneakers and so on. You know, they're not new. They've been around for ages. I think the reason we're seeing this is, obviously the act was released, but the act is simply too much of a gray area to really prosecute anyone.

It doesn't put much of a penalty on the people selling a ticket, or much of an onus on them to do anything. But what we have seen is the power of the Taylor Swift fan club. That is the single and only reason this has become so high profile: the organization and the structure of the Taylor Swift fan club in trying to take action against this, and the outrage that they've had.

They're not the first group of fans to be outraged. They're not the last group of fans to be outraged. They're the most organized and best run fan club that are trying to take action.

[00:20:39] Chris Pace: What it does, I think, though, is create a template for others to follow. Because the only way you're going to combat a threat at scale, and it is a threat, is that consumers have to scale as well, and I think governments have to help them do it. So if you are being defeated by scale, you as a group, whether you're the Taylor Swift fan club, the Ed Sheeran fan club, or, you know, Eurovision super fans, pressure has to be created to force governments to think about how they go about solving this problem.

I think the other thing that I find really interesting about it is it clearly reflects really badly on the brands involved, whether it's tickets or other high demand items. It reflects badly on those brands. If Taylor Swift is the brand, she recognizes, her management recognize, that it reflects badly on her.

So there is an incentive for them to do something about it. And there are other artists - Ed Sheeran is one of them - who have spoken out in the context of, this is a fiasco, right? Yeah. But it'll only be fixed, I think, by consumer and public pressure, forcing governments to, whether we like it or not, regulate better how things are priced and sold.

Because the way that we transact has changed. I mean, unrecognizably, even just in the last 10 years. Do we believe that consumer protections have kept pace with those changes? No. No. Right? No one's gonna say yes. So that's what's got to happen. And I think those organizations also have a responsibility in terms of how they look at stopping those bots doing that stuff.

Because the only thing that will incentivize them to do that is where it is to their advantage. And you might argue that at the moment, it's not, right? The resale sites exist. The ticket is bought, the transaction's done, the retailer steps back and says, well, I've done my bit now.

And then the ticket ends up on a resale site because of the entire resale market. For context, recent research from Forbes says that items scalped by bots make up a $16 billion industry. That is not helping the consumer. I don't think anyone can argue that's helping.

[00:22:52] Danielle Middleton-Wren: And the reason it's thriving is because there is no adequate legislation to actually moderate this because there isn't the understanding in the legal system to address it.

[00:23:01] Matthew Gracey-McMinn: There's also the difficulty that it's very international. So we track groups who scalp all over the world. You just move the goods across international borders and then it gets very difficult. You know, if we're sat here in the UK and we see a scalping attack against a UK organization, and it's being perpetrated by, you know, someone somewhere else on another continent,

it suddenly gets very, very difficult for the UK government to go, okay, am I actually going to pursue that actor? You know, is it expensive to go for extradition and then go through the legal proceedings? Is it actually worth it for a concert ticket?

[00:23:34] Chris Pace: How proactive could I be about stopping a bot? Now, I know one of the things they've tried doing is raffles, but that doesn't work either, does it?

[00:23:42] Thomas Platt: So I think the only thing that consumers can do is to not buy on the secondary marketplaces, which is easier said than done. But the minute consumers stop doing that, the problem goes away.

It will never happen. I wish it would, I wish we could all stop buying tickets on secondary sites. But that's the only thing, realistically, the consumer can do. I think also there's a lot of talk at the moment, and I actually feel in some ways sorry for the ticketing companies, which I don't think anyone would ever expect me to say.

Having been in this space and having worked in large volume on-sales for nearly 10 years now, ticketing is one of the lowest margin industries in the marketplace, right? So all these people talk about how they should have bank grade security and bank grade verification.

You know, banks are making six to 10 times more margin than the ticketing companies, number one. The margin isn't in the ticketing ecosystem to fund the kind of level of security protection that we'd expect from a bank account. Equally, for all of the other industries that tend to say the ticket industry's messing this up: ticketing is the highest volume, highest traffic event space in the world. Most websites, most businesses, even banks, couldn't cope with the surges in traffic that ticketing sites see. So when a ticketing site is hit, they need to offload those tickets quickly, simply to keep their website online. Equally, what a lot of people don't understand is the ecosystem behind the allocation of tickets.

Tickets go out in allocations. The site that sells the tickets the fastest gets the next allocation. Because as much as all these artists talk about wanting to get their tickets into the hands of the fans, and I genuinely believe that, they equally don't wanna have the event that doesn't sell out. They want the event that sells out quickly.

So it is a really complex and diverse problem that isn't easy to solve. I think one thing ticketing companies could do is not have their own secondary marketplaces where they then also sell tickets directly. I mean, if I was an artist, I would just not give tickets to those people. But there are monopolies and everything at play here, right?

[00:25:45] Chris Pace: When you're talking about the margin, it feels like event tickets are more expensive than ever. So are we saying that the margin is low because so many people are taking a cut, or is the margin low because the production values of a Taylor Swift concert are so incredibly high? Like, what is the...

[00:26:02] Thomas Platt: I'm just talking very high level. If you compare the profits that an average bank would make on their consumer, their customer, to what a ticketing platform would make, you know, they do make billions of pounds, right? They're not short of cash. But the government could never implement the same regulation on that industry, with the revenue they make per customer, as they do on a bank. It would be seen as un-business-minded, I guess. We've actually seen people from the government say they don't wanna step in on this because it's business. You know, these people selling tickets are making money. They are paying tax. There are businesses around this, there are people employed, there are secondary market sites that make money, that pay taxes, that hire people.

You know, we have seen the Department of Trade in the UK not want to step in on this because they don't want to be seen to be restricting business, because they view secondary ticketing sales as business, right?

[00:26:55] Chris Pace: Well, the question to ask there would be, though, why does that secondary market exist?

It can only exist for a handful of reasons, can't it? Someone can't go to a concert. I mean, actually, that's probably it, isn't it? What are the other reasons for the resale market to exist?

[00:27:12] Thomas Platt: There are a great number of businesses now that offer secondary ticketing on verified fan platforms, where you can sell your ticket for the same fair price to guaranteed fans, right? So they're out there.

[00:27:22] Danielle Middleton-Wren: There definitely are. There are some really great websites that do it, and even just say, right, you've got a certain amount of time, say six hours, within which this ticket will be available to you on a resale site.

So there are fair ways of doing it. And the question is, how is that actually moderated? And again, that ethical question of, okay, if one reseller can do it, how do you stop another reseller from doing it in the wrong way? And what constitutes the wrong way versus the correct and fair way to do it?

[00:27:47] Chris Pace: But you mentioned bank grade stuff. I mean, is one of the gates that could be put in place KYC at the point of, you know, a purchase over a certain amount? And this goes back to what I was saying about how the way we are buying things has changed.

So if you're buying a high demand item, should the website be able to say, right now I need to validate that you are a real person. You're gonna have to show me your face. We're gonna have to do two factor. You're gonna have to show me ID. And once you've done that, you've bought your sneakers or your ticket and you can't buy again.

[00:28:16] Danielle Middleton-Wren: But if you're looking to spend thousands of pounds, that might be something that people are more willing to do.
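For illustration, here is a minimal sketch of the kind of gate Chris is describing: one high-demand purchase per verified identity. Everything in it is hypothetical; the kyc_passed flag stands in for a real identity check (ID document plus liveness), and a production system would persist state in a database rather than in memory.

```python
# Minimal sketch of a "one high-demand item per verified identity" purchase
# gate. Hypothetical: kyc_passed stands in for a real identity check, and
# state would live in a database in practice.
purchases_by_identity: dict[str, int] = {}

PURCHASE_LIMIT = 1  # one ticket or pair of sneakers per verified person

def attempt_purchase(identity_id: str, kyc_passed: bool) -> bool:
    """Allow a purchase only for a verified identity still under its cap."""
    if not kyc_passed:
        return False  # no verified identity, no purchase
    if purchases_by_identity.get(identity_id, 0) >= PURCHASE_LIMIT:
        return False  # this identity has already bought
    purchases_by_identity[identity_id] = purchases_by_identity.get(identity_id, 0) + 1
    return True

# A scalper driving 500 bot sessions through one verified identity still
# ends up with exactly one item.
print(attempt_purchase("ID-123", kyc_passed=True))   # True
print(attempt_purchase("ID-123", kyc_passed=True))   # False
```

The counting part is trivial; as Matthew points out next, the weak link is the verification step itself.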

[00:28:20] Matthew Gracey-McMinn: There is also, though, the issue that KYC can be bypassed, and in particular the scalping industry does have issues with some links to organized crime. Scalping is a great way to launder money.

If you, you know, want to get rid of some money fast and maybe even double, triple, quadruple your investment, it's a good way to do it. And we see it all the time, people offering bypasses for KYC processes, most of which involve mocking up fake pictures. You know, you hold up your passport to a camera with your face next to it, and they mock up a fake passport and identity for you.

And we also see all sorts of complaints on social media, forums and so forth, where people complain: I went to register with this site that has a KYC process, and apparently I've already got an account. So yeah, your data has unfortunately been sold. Someone's created a fake identity in your name and has already registered for the site.

[00:29:11] Thomas Platt: Yeah. And the challenge is bigger than just the actual validation of a purchase of a ticket, right? There are tens of thousands of pounds involved in the markup for some of these tickets, right? So validating that the guy who's buying it isn't a scalper is very hard. Like, I could be a ticket tout.

I could buy that ticket. I could prove it's me. There is then so much margin that, you know, we've seen this before, they will walk up to the concert or whatever with the people that have bought tickets from them, get them in, and then leave, because they've made their money. Equally, you know, if they don't want to go, they can give the person a printed card with a limited balance, so they've got the same card to check. And the challenge is putting these in-depth checks on an event where you've gotta get thousands of people in, in a safe manner.

There just isn't the time. You know, if they're lucky, they might be able to check that the ticket you've got corresponds to your driving license. How many people turn up and say, I bought it for my wife, sorry, she's not here? Like, you know. There isn't the time at the gates to conduct those strict checks, and even if there is, sometimes the ticket touts are walking in with them anyway.

[00:30:14] Danielle Middleton-Wren: Yeah, not when you've got security checks and everything else to prioritize, especially in those bigger venues.

Matt, you took us very nicely onto social media earlier. So you spoke about how your information can be used to create a new profile.

So one of our topics for discussion today is the Russian influencer bots and how bots have been used to boost presence of influencers by creating bot driven profiles. So can you tell us a bit more about that?

[00:30:44] Matthew Gracey-McMinn: Yeah, absolutely. So bots are used quite extensively on services such as Instagram and other social media platforms. Twitter, obviously, there was quite a lot of discussion about the number of bots on Twitter as Elon Musk was taking over the site. It became a very expensive issue for a lot of people.

You have Instagram as well. You know, people are often accused of having fake followers and this sort of thing. And that all comes down to bots, essentially fake profiles that are set up quickly and easily. And people essentially sell the services of that to specific influencers.

So we've actually found in some countries there are literally vending machines you can go to. You put in your profile information, you know, your profile handle or whatever, put in a bit of cash, and you can add, say, a thousand followers, or a hundred likes onto your most recent post, things like that.

[00:31:36] Danielle Middleton-Wren: Absolute madness.

[00:31:37] Matthew Gracey-McMinn: It must cause, you and Chris probably know more about this than me, but I imagine it must cause absolute chaos for the marketing industry when you can't tell if someone actually has legitimate followers or not when you're trying to choose to hire an influencer.

[00:31:49] Chris Pace: Well, so I can tell you of examples in our industry where startup cybersecurity companies that might come out of Eastern Europe, suddenly have 10,000 LinkedIn followers, and yet every time they post an update, it's only the 50 people who work at that company who like any of those posts or who are engaging.

So yes, it's obvious that it happens. I find this happening on Twitter way more interesting, though, because my understanding, from what we know about Russian disinformation going way back to 2016, is that they basically create accounts en masse, not just for the purposes of following, but also for the purposes of posting updates that look to reinforce particular ways of thinking or particular views, fake news, whatever.

I'm really interested to know, what are the practicalities of that? Like, how are they doing that? Is it just people? Is it purely people? Are they actual bots? How are the statuses created? How do they make sure that they change so that they don't get recognized as being bots? Like...

[00:32:55] Danielle Middleton-Wren: And can we loop that back to our previous conversation about AI?

Are we expecting, now that AI's in place, that this problem is only going to get worse? If you've got ChatGPT as well as these swathes of fake accounts, you can automate the content. You can just say, I want to say X, Y, Z about this politician, about this issue. That has limitless potential, presumably.

[00:33:17] Matthew Gracey-McMinn: Essentially. Yes. So misinformation campaigns have long been a part of both Russia and the West's operations. You know, it does apply to both sides. Obviously Russia's... we're the good guys. No, uh, but yeah, Russia's very, very aggressive around disinformation.

And the Vulcan files leak recently showed very clearly that activities such as attacks on critical national infrastructure, the sorts of attacks that we saw a few years ago in Ukraine that knocked out power stations and that sort of thing, those sorts of attacks are seen very much in the same way as disinformation campaigns: as a way of sapping the will to fight of a target country, trying to discourage the population, make them think their government's not capable of protecting them.

[00:34:02] Danielle Middleton-Wren: Yeah.

[00:34:03] Matthew Gracey-McMinn: So in Russian military doctrine, those sorts of things fit together. Disinformation is not really distinguished from what you might call slightly harder cyber attacks, with more physical impact.

However, what we did see from the Vulcan files as well is that there's a lot of efforts and many systems have been built and designed specifically for the purposes of pushing disinformation. They have tactics and techniques, training methods for people to use to learn how to try and manipulate public opinion.

And a lot of this is based around the idea of, one, understanding what public opinion is, both within Russia and outside of it in other countries as well. Monitoring, essentially simple sentiment analysis, which, you know, ML models have been doing for a while now. Monitoring to understand who the biggest troublemakers are and what the general prevailing opinion is, and then creating swarms of fake accounts, fake profiles.

And, to go back to that harvesting of images: often using images harvested off the internet of other real people, attaching them to profiles completely unrelated to that person, and then pushing specific messages, pushing ideas. And these can either be controlled by an individual, if you want a very precise sort of message, or again, like you said, ChatGPT, AI, ML, these can all be used to generate messages that push a particular point of view, again, very, very quickly. And with ChatGPT, you can see how fast that is. Yeah, yeah. You just roll that constantly.
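The monitoring half of that is genuinely off-the-shelf technology. As a rough illustration, here is a sketch using NLTK's VADER sentiment analyzer on a few invented posts; real operations are obviously far more sophisticated, but the principle of scoring a stream of posts and flagging dissenting voices looks much like this.

```python
# Rough sketch of off-the-shelf sentiment monitoring: score a stream of
# posts and surface the loudest dissenting voices. The posts are invented.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-off lexicon download
sia = SentimentIntensityAnalyzer()

posts = [
    ("user_a", "The government is handling this brilliantly."),
    ("user_b", "This policy is a disaster and everyone knows it."),
    ("user_c", "Not sure what to think about all this yet."),
]

# The compound score runs from -1 (very negative) to +1 (very positive).
for user, text in posts:
    score = sia.polarity_scores(text)["compound"]
    label = "dissenting" if score < -0.3 else "neutral/positive"
    print(f"{user}: {score:+.2f} ({label})")
```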

[00:35:24] Chris Pace: Different applications as well. There are bits of software that can create AI faces for you. So now you don't even need to steal faces anymore. You can create faces. You can create the post.

And I was reading in some of the Vulcan files stuff that they also scale up mobile phones, right? Yeah. So that you can sign up for an account. They've got these machines that have 320 SIM cards in, which basically means you can sign up for 320 Twitter accounts at a time.

And then all you're doing is telling a bot to automate the posts. The posts match the sentiment. The posts are designed to either retweet or to engage with the right kinds of people. I can't imagine that it takes you that long to be able to create a groundswell of feeling that isn't actually real feeling at all.

And the way that human beings operate, is there is a herd mentality amongst human beings. It's a psychological effect. And so once that's applied effectively, you can begin to take an entire group of potentially millions of real people with you.

And that's how the Russians are going about spreading disinformation. I suppose the next question would be, well, how do you combat that?

[00:36:24] Thomas Platt: Well, I think the first bit, the key bit, is we're talking about just Twitter. This problem is far greater than Twitter.

If you read some of the analyst stuff on the Vulcan files, they call it a firehose of disinformation, right? It's not just about Twitter; part of the strategy is about this disinformation coming, at volume, from multiple people across multiple channels. Not just Twitter, but Facebook, other social media, chat rooms, forums.

Yeah, exactly. You know, comment sections of newspapers. Yeah, exactly. It goes far broader, because I think what they proved was, you know, if you just see it on Twitter, you're like, oh, Twitter's got this crazy narrative. But if I see the same narrative in the comments in the news, I see it in forums, I see it when I'm chatting to someone in a chatroom,

It's far more powerful. So I think the problem is the same, right? But it goes far deeper than Twitter. And you know, more often than not, we just talk about Twitter as the focal point. I think part of the power of this Russian attack is it's far broader than just Twitter.

[00:37:21] Chris Pace: Anywhere where supposed public opinion can be broadcast, and can be upvoted in some cases, that is a target for disinformation, isn't it? Because in some cases, if you're talking about the right kinds of newspaper, you know, the right kinds of publications on the internet, for example.

It's reinforcing opinions. It's reinforcing bias. And it's doing that through using machinery, which is, you know, we could have a discussion about a propaganda machinery from almost a hundred years ago, but now we're talking about it in the context of, you know, the sort of volume that they're capable of.

I think, again, it presents a similar sort of challenge to the ticketing one. Right. How is a newspaper... and does a newspaper care, does the comment section of the Daily Mail, do they care who writes something in there? How moderated is that? And...

[00:38:08] Thomas Platt: Well, it's worse than that, right?

Because we see this all the time. How do we measure these social media businesses? What is success for a social media business? It's probably a marketing person just like you, Chris, looking at account growth, engagement, engagement growth. So these are good stats. Do these companies ever want to make creating an account hard?

No. Do they want to encourage as many accounts as possible? Yes. Do they want to encourage as much dialogue as possible, even if it is confrontational? Right? So if there's someone that's a bit noisy in the chat room, that encourages more people to chat and argue with them. All of those stats are typically the things we look for as positive when we measure these businesses at a business level.

[00:38:47] Chris Pace: Isn't the thing that created a storm around bots at Twitter more to do with the fact that high profile individuals and celebrities were like, something's up here, because I'm getting follows from a lot of accounts that clearly aren't real? And so the pressure was actually created by the super users, I suppose, as opposed to the individual with, you know, a hundred followers or whatever.

And so then it became an issue because of that, not necessarily because a social media organization actually wants to fix that problem, but because there's pressure from their users.

[00:39:19] Danielle Middleton-Wren: Yeah. And I think then that raises the question of where responsibility lies for the problem. Because, is it with the social media platforms?

Does there need to be a bigger legislator involved, saying we are completely agnostic to the situation, whether it's individual users, whether it's a business, whether it's a social media platform, but there does need to be something put in place? But like I say, who can actually challenge that without there being a business benefit in place?

[00:39:46] Matthew Gracey-McMinn: I think, though, similarly to the ticketing issue, we have to be somewhat fair to the businesses. You know, they are trying to run a business. To expect them to be able to stand up to what is essentially a military operation that's targeting them is perhaps a little bit unreasonable. You know, they do not have the resources that the military do.

They don't have the money, they don't even have the knowledge. There are cybersecurity tools and so forth that governments have that just do not make it into the public domain, that we're just not aware of. And of course, for any changes or security they implement, there is a big pot of money and literally thousands of people working to try and get back around that as fast as possible.

And if you throw enough money and time at a problem, sooner or later you're probably gonna fix it.

[00:40:27] Thomas Platt: I think the only thing we can do as people, and often as parents, is educate children and everyone on where they get their information from and how to, you know...

[00:40:36] Chris Pace: I think children need a lot less education than my mum because if my mum reads it on Facebook, she will share it with everyone before she's checked anything.

She has no filter for what could or could not be disinformation, because in the mind of that generation, oh well, it's published. It's published in a place I've read it. It's literally in black and white, it must be real, it must be fact. So I think actually younger generations are more suspicious about where things come from than that older generation, who are now more involved in social media but probably less prepared for it.

And I think, to take the Russian actions as an example, I think they have taken advantage of that. I think if you looked at the kind of people more ready to engage with that stuff, they would be of that older generation.

[00:41:25] Danielle Middleton-Wren: And perhaps really those that they're trying to influence that are more likely to, say, vote, for instance.

[00:41:31] Chris Pace: Mm-hmm. Yeah. Of course.

[00:41:32] Danielle Middleton-Wren: The voting age.

[00:41:33] Matthew Gracey-McMinn: There's also the drowning out effect. So this is something we saw with, was it the Arab Spring uprisings? There was a lot of effort to suppress dissenting opinions. Not necessarily through sentiment analysis and so forth, but simply by commenting so often that you push those comments onto page three or four.

Has anyone ever gone to, like, page three or four of a Google search? Once you've hit page two, you know, you realize, I'm not finding the answers I want here, sort of thing. In comments pages it's much the same: you tend not to read too far down. If you push opinions far enough down, people may not even get to them. Some organizations obviously benefit a lot more from, should we say, a more passionate debate. They want more of that engagement, as Tom was saying, and they try to encourage those sorts of very polarizing opinions. Others, however, do try to moderate the forums and try to keep them under control.

But it varies from organization to organization. Even on the ones that are perhaps more restrictive, I remember seeing one that's quite restrictive, a major UK news source, during the invasion of Ukraine, where there were a couple of people pushing a very pro-Kremlin point of view, called James Charles and Charles James. I remember laughing: your name generator messed up a bit there, didn't it?

[00:42:56] Thomas Platt: What gets scary is where this is going, right? We've all talked about disinformation, fake information, you know, people being smart, being able to spot where it's coming from, right? But look at the sophistication of the fraud that we've seen recently, where people are using real people's voices, trained by AI, to ring family members and, you know, extort money.

We've seen videos of celebrities that look exactly like that celebrity. We will move to a place, right, where surely, you know, Russia and other nations will be able to make videos of people that you know and trust, that sound like them, that look like them, that spread this information. That will be very nearly impossible, if not impossible, to actually validate as real or not.

[00:43:39] Chris Pace: Yeah, surely. Where they have the means, it's not just about things like disinformation, it's also then about cyber crime, isn't it? Where you have the means and the target is big enough: is it worth me faking KYC to be able to intercept a deposit for a house, for example? Yeah, it's totally worth it.

Hundreds of thousands of pounds. Yeah, it's totally worth my investment. And so where the technology exists, surely, and I don't like being a doom monger, but surely we're headed to a place where those technologies will be applied to everything.

[00:44:08] Thomas Platt: Oh, I mean, the other day I was told Jeremy Clarkson was arrested for pushing the best and fastest growing cryptocurrency.

He was taken down by the government, and I saw tons of images saying Clarkson had recommended this cryptocurrency. Which clearly, I don't think Clarkson's been pushing cryptocurrency.

[00:44:24] Matthew Gracey-McMinn: The farming's not working out for him, then. I mean, we saw it again in the invasion of Ukraine. There was a video popularized on YouTube of Zelensky, a deepfake, ordering the surrender of Ukrainian troops, which fortunately didn't stick.

But you know, that was an early stage test of what could well become a powerful weapon for militaries to spread disinformation.

[00:44:48] Chris Pace: The examples that are in the consciousness now around AI and deep fakes are extreme. And so therefore, because they're extreme, we can tell that they're fake.

Whereas actually, what's interesting is that where they're actually being applied, they are almost certainly not extreme. They're very normal, and they're designed to look real and not make you think, is this real or not? Therefore it must be happening, and then the question will become, how do we stop it?

I suppose in the post-truth age, let's say beyond 2016, have we just got comfortable with the fact that we now have to apply our brains to doing our own filtering on what we believe is real or not? And how important do we think that is to us? Yes, I would argue. But then the question becomes, at what point is it problematic enough that something needs to change?

[00:45:35] Danielle Middleton-Wren: Yeah. Because I think that comes back to your point earlier about there being a generational issue for those who are technologically adept versus those who are not. For instance, my dad had a phishing email that came from an account posing as me.

And yeah, I had a text from him saying, I can't access those photos. I said, I haven't sent you any photos. It would only come through WhatsApp. Do not open the email. Yeah, do not open those pictures. And he was just like, oh, I've been repeatedly clicking. I was like, oh, fabulous. And so it's having the education to say, am I suspicious of this?

So I think you're correct that it is gonna come down to individual users, but at what point does there need to be a greater authority to step in and say, as a society, we need to be this aware, we need to educate everybody to say, this is what real looks like, this is what fake looks like.

Oh, they're exactly the same. You need to make sure that you're looking for those more subtle indicators in any communication that comes to you, or anything that, I suppose, crosses your path, particularly virtually or across social media, because not everybody knows to do it.

[00:46:38] Chris Pace: And what's more alarming for us, I suppose, is that we've talked about how AI could begin to enable these things at scale, but we've also talked about how automation is already doing these things at scale. The capability for all these things to happen at scale makes them, by definition, a bigger problem.

It'll be interesting to understand what the tipping point is. I think the next US election will be extremely interesting. I think what plays out in Ukraine will be extremely interesting. And I think that, you know, it may be that bigger questions start to be asked geopolitically about how disinformation can really be tackled. It's not inspiring that I was reading articles this week basically saying the output of models like ChatGPT is impossible to identify as being from ChatGPT. That just looks to me like a massive red flag that says we can't actually detect the results of AI. Are any of us worried about that?!

[00:47:32] Danielle Middleton-Wren: Are any of us not worried about that? Great, well, thank you all for that sterling conversation. I think we got into some healthy debate there, about some of the cybersecurity issues that are challenging absolutely everybody today. So to sum up, I'd like to go, sorry, Matt, back to you with one more question.

Would you like to talk to us about the bot of the month, which we have elected this month to be card cracking?

[00:48:17] Matthew Gracey-McMinn: Certainly. So card cracking is a rather simple technique that sort of falls in line with brute forcing, where you make thousands and thousands of guesses to try and get the right answer.

Usually when we talk about brute forcing, we talk about breaking into people's online accounts. You know, you can get a list of, say, the hundred thousand most common passwords. You can try and guess those. Credit card cracking works in a very similar fashion. Say you have either no information or partial information about a credit card.

Say you have just the number, you don't have the expiry date or the little three digit code on the back. So you want to try and guess those remaining details to steal someone's credit card. Essentially what you do is you go to a merchant site or a retail site, or, as we've increasingly seen attackers doing, you go to several and diversify across them.

One store may notice a sudden attempt by someone to make 4,000 purchases; they're less likely to notice 10. You know, if you usually make 200 sales a day and you're suddenly making 4,000 in 10 minutes, you're probably gonna get a bit suspicious. If you usually make 40 a day and you suddenly make 10 or 11 extra in an hour, you're gonna be far less suspicious.

That's probably just, oh, we had a slight burst of business, great. So what they do is spread across multiple sites and basically throw guesses at them, trying to work out the missing digits using automated tools. Someone sits at home with their laptop, sets all of this up to run, and says: guess all the possible permutations of this number and throw them against these sites.

Eventually they'll get from that a refined list of accurate details, which they can either use themselves or, more commonly, sell on to other criminals, who know the numbers are accurate and that, for certain amounts of money, usually up to a limit, they can use them without triggering any detections, because they got through previously.
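
To put rough numbers on Matthew's description, here is a minimal back-of-the-envelope sketch in Python. Every figure in it is an illustrative assumption, not data from the episode:

```python
# Back-of-the-envelope sketch of the card cracking search space.
# All figures are illustrative assumptions.

expiry_months = 12    # expiry month: 01-12
expiry_years = 5      # assume the card expires within ~5 years
cvv_codes = 1000      # the "little three digit code": 000-999

# Worst case, guessing expiry plus CVV for one known card number
# means trying every combination:
search_space = expiry_months * expiry_years * cvv_codes
print(f"Guesses per card (worst case): {search_space:,}")    # 60,000

# Spread thinly across many merchants ("target diversification"),
# no single site sees anything that looks anomalous on its own.
merchants = 6_000     # hypothetical pool of targeted sites
print(f"Attempts each merchant sees: {search_space // merchants}")  # 10
```

Sixty thousand attempts is trivial work for an automated tool, which is why the detection burden falls on whoever can see the traffic in aggregate.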

[00:50:08] Danielle Middleton-Wren: And so whose responsibility do you think it is to try and prevent card cracking, or at least to mitigate its impact? Does it come down to the retailer, the card provider, the payment gateway, or, as we've discussed a lot today, does it need to be handled much more at a legislative level?

[00:50:27] Matthew Gracey-McMinn: I think there's something everyone can do to an extent. I don't think it's necessarily my responsibility to stop someone guessing a random number that happens to be my credit card number, but I should certainly be paying attention to my credit card statements.

The number of times I've heard people say, "Hey, someone committed fraud on my account 12 months ago." Realistically, you probably should have caught that a bit quicker, rather than letting it build up over months and months. You know, as harsh as it is, you have to say to people: take some responsibility and initiative in protecting yourself.

The internet is a wild west; you have to look after yourself. Companies should be monitoring for automated activity, suspicious activity, that sort of thing, and credit card companies and payment gateways can do the same. Payment gateways and credit card providers, though, have a bit more insight than the merchants.

As I mentioned, from a merchant's perspective it's often 10 attempts to one site, 10 to another, 10 to another, not 4,000 to one, so what the merchant sees is much harder to detect than what, say, a payment gateway sees, because a gateway may have multiple merchants all going through it. Gateways should be looking for anomalies, aberrations and patterns, anything suspicious going on.

Similarly, banks. They have very thorough checks and, as Thomas mentioned earlier, there's a lot of legislation around them already. But I think they need to be more aggressive in blocking purchases made with credit cards they think are suspicious.

And I think as consumers we need to be more forgiving of that. I've been guilty of the opposite in the past: my credit card gets blocked, a purchase gets blocked, and I get quite annoyed. But at the end of the day they're doing it for my benefit, and a two-minute phone call is a lot better than me losing 20 grand.

[00:52:06] Danielle Middleton-Wren: Oh, absolutely.

[00:52:07] Chris Pace: On the mechanics of this: the numbers they're inventing, effectively with some algorithm that fills in the gaps, I guess, are they actually being tested with a purchase and then declined? Is that how it's working?

[00:52:20] Matthew Gracey-McMinn: That's how it works in most cases, yes.
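
An aside on the "some algorithm" Chris mentions: card numbers carry a Luhn check digit, so attackers can discard most invalid candidate numbers offline before a single purchase is ever attempted. A minimal sketch of the standard Luhn check (the example is a widely published test number, not a real card):

```python
def luhn_valid(card_number: str) -> bool:
    """Standard Luhn check-digit validation for payment card numbers."""
    digits = [int(c) for c in card_number if c.isdigit()]
    checksum = 0
    # From the rightmost digit, double every second digit and
    # subtract 9 if the result exceeds 9, then sum everything.
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

print(luhn_valid("4111 1111 1111 1111"))  # True: a well-known test number
print(luhn_valid("4111 1111 1111 1112"))  # False: fails the check digit
```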

[00:52:22] Chris Pace: Right. So there must be masses of declines that are happening against those payment gateways while they're going through this process?

[00:52:28] Matthew Gracey-McMinn: Yes. The attackers get around that through what we call target diversification. The payment gateway sees, say, two or three bad requests from one merchant, then two or three from a different one, and a different one, and a different one.

And unless they're tying all that information together, they're not likely to detect it. If they're treating each merchant as an individual entity, rather than looking at all the payment requests they receive as a group, they're less likely to spot it.
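
Matthew's point about tying the information together is, in effect, a cross-merchant correlation problem. A minimal sketch of what that aggregation might look like from a gateway's side, using hypothetical decline records and an illustrative alert threshold:

```python
from collections import defaultdict

# Hypothetical declined-authorization records as a gateway might log them.
declines = [
    {"merchant": "shop-a", "card_prefix": "411111"},
    {"merchant": "shop-b", "card_prefix": "411111"},
    {"merchant": "shop-c", "card_prefix": "411111"},
    {"merchant": "shop-a", "card_prefix": "550000"},
]

# Viewed per merchant, one or two declines look like ordinary customer typos.
per_merchant = defaultdict(int)
for d in declines:
    per_merchant[d["merchant"]] += 1
print(dict(per_merchant))  # {'shop-a': 2, 'shop-b': 1, 'shop-c': 1}

# Grouped across ALL merchants by card range, a cracking run stands out.
per_prefix = defaultdict(set)
for d in declines:
    per_prefix[d["card_prefix"]].add(d["merchant"])

for prefix, shops in per_prefix.items():
    if len(shops) >= 3:  # illustrative alerting threshold
        print(f"Possible cracking run on range {prefix}: declined at {sorted(shops)}")
```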

[00:52:54] Chris Pace: And I suppose then they're looking to use those cards in places where... or do they already have the user's personal information to make the card work, or do they sometimes not need that?

[00:53:07] Matthew Gracey-McMinn: Quite often they don't need it. What we often see attackers advertising is methods for bypassing verification requirements and that sort of thing. So they advertise: here's a list of credit cards you can use on this site, up to this amount, without triggering a security check.

[00:53:21] Chris Pace: So they know what the floor is, basically.

[00:53:23] Matthew Gracey-McMinn: Yes, often worked out, again, through trial and error.

[00:53:26] Chris Pace: Right. Yeah.

[00:53:27] Danielle Middleton-Wren: The aim of the game is to go undetected for as long as possible. So, like you said earlier, if you're not checking your statement for 12 months, then you're at extremely high risk, because that's exactly the game they're playing. They're relying on you not checking.

[00:53:39] Chris Pace: And do they just keep doing this? They move to different sites and they just keep producing, they're effectively producing card numbers that they're then selling on dark markets or whatever?

[00:53:47] Matthew Gracey-McMinn: Quite effectively. Yes, though we also see attackers pulling information from data dumps.

So what these attackers are often doing is working from a data dump. Say some company's been breached and its customers' credit card details have been leaked. Obviously the company doesn't store the full card data, so the criminals don't have the full details either. They pull the information that is there, plug it into their tool and fire it off against a few hundred, maybe a thousand, merchants.

[00:54:10] Danielle Middleton-Wren: Scary. Tom, is there anything you want to add? Cause I know you've worked with a number of retailers over the years who've suffered from card cracking or very similar challenges.

[00:54:20] Thomas Platt: Yeah, I think it's an interesting challenge for everyone in this space.

I think quite often what we see is a retailer suffering because it's really the payment gateway that's being targeted. Interestingly, what we didn't really touch on there is some of the pain that can cause for the retailer. A lot of these gateways will have throughput limits.

So a retailer might be in a key trading period, a busy shopping period, and if they've got a load of fake card registrations going through at the same time, they might start to find that some genuine customers aren't able to verify their card and check out. Some retailers pay per validation as well, so they're actually being charged a fee for every one of these attack requests, and we've seen people suffer significant losses from effectively paying to validate stolen credit cards. Equally, some gateways have frequency limits that just shut them off, so you'll see websites go down, where you can't shop anymore, because the payment gateway knows they're being targeted and cuts them off.

So it can be a very destructive attack for a lot of online businesses, and it can really stop their revenue generation for a period, if not longer.
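
Tom's pay-per-validation point is easy to put rough numbers on. A quick hypothetical, with an assumed fee and attack volume rather than real figures:

```python
# Rough, hypothetical cost of a cracking run for a merchant that pays
# its gateway per validation. Fee and volume are assumptions.
fee_per_validation = 0.05   # assumed $0.05 charged per card check
attack_attempts = 100_000   # assumed bot volume over a busy weekend

fees = fee_per_validation * attack_attempts
print(f"Gateway fees from attack traffic alone: ${fees:,.2f}")  # $5,000.00
# That is before counting lost sales if frequency limits take the site down.
```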

[00:55:24] Danielle Middleton-Wren: Thank you Tom.

Well, again, I'd like to thank you all for joining us on episode one of series two of the Cybersecurity Sessions.

It's been great to be back in action. If anybody has any questions they'd like to send over to any of our panelists, Matt, Chris or Tom, please do send them to our Twitter account, @cybersecpod, or email them to podcast@netacea.com.

For now, thank you very much everyone. Goodbye and we look forward to meeting up with you again in next month's episode.
