AI Regulation & Music, Scalping For Immigration Appointments, Credential Stuffing
A fresh Netacea panel of cyber experts are on hand once again to discuss the latest developments in security and bot-related news!
This month, in light of OpenAI CEO Sam Altman standing before US senators and requesting regulation of AI businesses, we give our views on whether one body – or even one nation – can or should regulate this rapidly developing industry.
Universal Music Group also instigated the removal from Spotify of 7% of AI-generated tracks added to the service via Boomy, opening a debate about automated music creation, artistic copyright and privacy, how AI models learn to make music, and how humans are influenced to create and consume music.
Also, with 69 arrests made by the Spanish police over a scalper bot ring targeting immigration appointments, our panel ponders how the approach to stopping such attacks differs depending on the target and industry.
Finally, credential stuffing is our attack of the month. As long as people reuse passwords across services, credential stuffing will be a viable attack – is it time the industry moved on and found a better way to authenticate users?
Speakers
Danielle Middleton-Wren
Cyril Noel-Tagoe
Chris Collier
Andy Lole
Episode Transcript
[00:00:00] Dani Middleton-Wren: Hello and welcome to Cybersecurity Sessions, episode two, series two. I am your host, Dani Middleton-Wren, Head of Media here at Netacea, and I am joined today by a trio of experts from Netacea, including Chris Collier. Chris, I'll hand over to you to introduce yourself.
[00:00:23] Chris Collier: I'm our Solution Engineering Manager.
[00:00:25] Dani Middleton-Wren: Wonderful. And Andy Lole.
[00:00:28] Andy Lole: Hello, I am CTO here at Netacea.
[00:00:30] Dani Middleton-Wren: Fabulous. And Cyril Noel-Tagoe.
[00:00:33] Cyril Noel-Tagoe: Hi everyone. I'm Cyril and I'm principal security researcher at Netacea.
[00:00:37] Dani Middleton-Wren: And Cyril's voice should be extremely familiar to any listeners from series one. A former host of Cybersecurity Sessions is gracing us again today.
[00:00:45] Cyril Noel-Tagoe: It feels different being in this seat now.
[00:00:47] Dani Middleton-Wren: In the panel seat or in the seat in a room with actual human beings?
[00:00:53] Cyril Noel-Tagoe: Both, but the panel seat more because first I could ask people questions, now I actually have to prep stuff.
[00:01:00] Dani Middleton-Wren: You can still ask questions, and please do, please feel free to ask many, many, and any question you'd like.
Great. So today we're going to be talking through some really interesting subjects that have cropped up over the last couple of weeks. We're gonna start with ChatGPT and whether regulations can stop AI destroying human artistry, if not the world. A nice dramatic starter for us. And then we're going to move on to: will police be cracking down on scalping and will this deter other fraudsters, before we finish with our attack of the month.
Okay, so I'd like to start with topic one, ChatGPT. Cyril, you have been researching ChatGPT over the last six to eight months, and I know you wrote a report on this and spoke to some analysts. Do you have any thoughts on the recent judiciary hearing of OpenAI CEO Sam Altman, and his proposal to Congress that there is going to be a need for some kind of agency or monitoring around the new technology, in order for it to be a safe technology that can enhance the world we live in?
[00:02:11] Cyril Noel-Tagoe: Yeah, it's interesting because you don't often get someone asking for more regulation, right? And I think that shows the power of AI, the fact that the people who are creating it are saying, actually, we need to regulate this. So I see there being that humanitarian aspect, where they're saying, you know, if we don't regulate this, in the future we'll be in the dystopia. Think of things like Black Mirror, I'm not sure if any of you have watched Black Mirror. There was the episode with the social score, and then later than that, now, in China with the social credit system. And you can just see these things that were science fiction maybe five years ago, and in the next five years they could be coming in.
So I definitely think it's welcome. And I guess for me the question is, does that actually give OpenAI an advantage, being first? You know, if you're already out there and the regulations come in, it makes it harder for competitors to train up data, maybe scale up. Is there also maybe a commercial aspect in play there?
[00:03:07] Dani Middleton-Wren: Oh, interesting. Hadn't thought of that as a potential reason for them to be proposing this. I was very naively thinking they just want to do good things for the world, but perhaps it is led by their own sculpting of the AI marketplace. Andy, what do you think?
[00:03:22] Andy Lole: I think there's a huge thing here about what is the motivation around regulation.
I think regulation is a good idea. I think they are genuine in why they're asking for it. But when we look at the broader industry, social media has been suggesting for years that sensible, thoughtful regulation would be a good thing. And yet we continue to see various negative impacts from it.
Despite the industry on paper asking for regulation, I do think the motivation and the sentiment and the way Sam Altman was talking about it show he means this. And I think we should embrace it. But equally, even if they are doing it for purely altruistic reasons and aren't thinking about this as a commercial opportunity, how do we approach that regulation?
Because I think it would be equally bad to see one country taking the lead on it. And I'm not sure the United Nations is necessarily in a position to do it, but I've also heard the suggestion that NATO, for example, should take the lead, and that also feels potentially skewed.
[00:04:26] Dani Middleton-Wren: Because that's what we've seen so far, the likes of Italy.
And I think there was another country within Europe that has banned ChatGPT, was it Greece?
[00:04:34] Andy Lole: I think it was Greece.
[00:04:36] Dani Middleton-Wren: Yeah, they've already banned ChatGPT, and they've said that until they can work out what those regulations might look like, they're not using it at all. But I think, to your point, who would that governing body be?
It needs to be agnostic, but how do you create agnostic worldwide legislation that would work? Because, I mean, we've seen GDPR and other regulations try to do that, but...
[00:05:04] Chris Collier: You've gotta think that GDPR, though, was the harmonization of a number of different policies that were given as a directive, right?
And so ultimately nation states really do need to think about how it's gonna affect them. So as an example, we have the Department for Science, Innovation and Technology, and they put out a pro-innovation approach to AI, published this year, outlining a framework of five principles that can be used to innovate safely with AI.
But that's obviously our approach to it. What I think is really interesting, since you brought up democracy and social media as an example, is that one of the US senators was actually quoted as saying that with AI they now have the same choice they had with social media, and they failed with that one last time round.
So maybe this is why they're trying to take it a lot more seriously.
[00:05:58] Dani Middleton-Wren: And earlier on.
[00:05:59] Chris Collier: Yes, absolutely.
[00:06:01] Cyril Noel-Tagoe: I think that geopolitics still plays a massive role in it as well. So the OpenAI CEO did a talk at UCL recently as well, where he was talking about the EU's current draft regulations, which split AI into two risk categories.
And I think Sam's point was that you could interpret the current draft regulations in a way that puts ChatGPT and OpenAI systems in the higher risk one, and they're not really comfortable with that, and there's pressure there. So if you've got a US regulation which is a bit more lax than the EU one, how do companies play in that space? It's almost like the inverse of GDPR, where you've got a very strong privacy in...
[00:06:45] Andy Lole: Yeah.
I think the one thing that is clear is that regulators and legislators need to get used to moving significantly faster than they historically have. There was a letter recently around having a pause. I can understand why people feel that way, but I think it's naive to think that would ever happen.
The genie is out of the bottle on this. And so actually regulators need to be able to move at the pace of industry. Drawing a parallel with what we've seen around Bitcoin and other stuff: it's moving very quickly, and it looks and feels and smells like stuff that we're used to, so we sort of accept that it is, but maybe it isn't.
And I think, again, there are parallels with AI. In the short term these are tools that should enable everyone, but how quickly does it become something that is actually eliminating jobs or, worst case, eliminating humans?
[00:07:41] Dani Middleton-Wren: Absolutely. And we've seen, as you say, like across industries. So how is that going to work, let alone across countries?
And we've seen the likes of Universal Music Group try to bring in some rules on Spotify about the use of AI on the platform. Spotify recently removed 7% of songs that were created with the AI music service Boomy.
[00:08:06] Cyril Noel-Tagoe: So those were AI generated songs, but that's not actually the reason why those were taken down.
Those were taken down because they were using bots to inflate the streams on those songs as well. So you've got songs generated with AI, then automation making it sound like people are listening to those songs, which then feeds into an algorithm that promotes them to other people. It's this whole crazy thing of machines using machines to get people to listen to music.
It feels very, very dystopian to me.
[00:08:36] Dani Middleton-Wren: It feels like the machines are ruling the world.
[00:08:39] Chris Collier: Skynet...
[00:08:40] Dani Middleton-Wren: But what does that mean for long-term AI music generation? There are some really interesting points around this, and I think a good place to start would be how they're going to determine what use of copyrighted material is acceptable and how copyright will be infringed upon going forward.
So for instance, David Guetta has created music using samples of Eminem on top of his own produced music. And Grimes has said she's happy to do a 50% royalty split with anybody who wants to use AI with her music. So what is that going to look like going forward?
[00:09:17] Chris Collier: It's an interesting one, isn't it?
Because the music industry, at least since streaming became a thing, has always had a bit of an issue with it: how do we pay royalties, how do we do this, how do we do that? And so you then get into a realm where it's like, okay, you can use elements of my persona.
So let's just take Grimes as an example. If you were to use her voice in a track, even though she's not recorded any of it whatsoever, because you've used some AI or ML models to synthesize her voice, does that then make it a Grimes track, or does it make it your track? Because, to all intents and purposes, if you were a vocalist, as an example, you'd have got paid based on your vocal that goes down on that particular track.
So how does that work at that point? It creates an entirely different business model. And I can imagine a few artists would have a problem with it, to be honest.
[00:10:14] Dani Middleton-Wren: And that seems to be the split, doesn't it? Some are embracing it, and some are saying a flat no, this is, like I say, a copyright infringement.
This is my work that I've spent years and years cultivating. And then it's being...
[00:10:27] Andy Lole: Yeah, the wider market, I think, is again worth looking at here, because there is already guidance, and probably legislation, in place around sampling and what's an acceptable duration of someone else's track that you can use before it becomes copyright infringement or shared authorship.
And then there's the wider stuff around the recent court cases with Ed Sheeran, who's of course been taken to court for allegedly infringing copyright and has consistently won those cases, because actually with analog and electric instruments there are only so many ways you can play them. There's a lot more space on the electronic side, in EDM, to develop.
I could imagine it ending up being a licensing world where, if you want the singer on your track to be Grimes, you can do that for a small, or probably quite significant, fee. I was listening to AIsis over the weekend, which was an AI-created missing Oasis album. It's low-road Oasis. It's all right. It's not their best stuff. One of the Gallaghers seems to endorse it. The other one, not so much.
[00:11:37] Dani Middleton-Wren: Which is curious, because they never, ever argue about anything, do they? They famously get on.
[00:11:43] Cyril Noel-Tagoe: But if you're taking someone's voice, is it actually a copyright issue or is there more of a privacy issue there?
Because let's say, Chris, you use my voice, you create a track which stands for everything I'm against, and you release that. That's now associated with me. That's more of a privacy issue I see there. Where the copyright issue then comes in is if you try and then sell that, that you are owning that track, which is using my voice without my permission, and that's where the copyright comes in.
But then also, AI can't just generate music on its own. It needs to learn by listening to other music. So do you stop AI from listening to music, or do you stop it from training? Do you put some rules in place there? I think that's where it has to go, really.
[00:12:26] Andy Lole: That definitely makes sense. I could absolutely understand why people are feeling nervous about their roles being eliminated.
But I'd argue every artist, every architect, every creator has learned from the canon of others that came before them, even imitating things in the style of someone else. I think that's fine, so long as there is clarity on where the boundary sits between being inspired by something and outright copyright infringement.
[00:12:56] Cyril Noel-Tagoe: And that's basically what the Ed Sheeran case and others are trying to establish.
[00:13:00] Andy Lole: Exactly. This is a human problem that just happens so much faster now, because machines can do this quicker than we can.
[00:13:08] Chris Collier: I don't think it's necessarily a bad thing if it's gonna give rise to something that's completely and utterly unique. Like the next AI pop star that walks on stage because they're being projected onto the stage. It kind of feels that we're heading towards that sci-fi vision of being able to have a completely computer-generated persona.
But I just dunno how I feel about it, because it's on the same level as deepfakes for me, in that, well, how is the populace then able to determine what is real and what's fake anymore?
[00:13:46] Dani Middleton-Wren: And to go back to some of Sam Altman's points and some of the issues raised within that Congress session, that is one of the concerns, isn't it?
So you've got the election next year, and how is ChatGPT going to enhance the spread of misinformation ahead of that election? I think there are a lot of challenges, and deepfakes in music are amongst them.
[00:14:09] Chris Collier: Definitely. I mean, it's not the first time that social media and elections have come under fire for misinformation and the like, is it?
And given what came out from all of the Cambridge Analytica stuff, if you give any credence to what went on there, and then apply the fact that we've now got these incredibly advanced AI models out there, probably capable of harvesting information at a far faster rate than anything they were doing... Mmm, yeah.
Makes for an interesting melting pot, doesn't it?
[00:14:44] Dani Middleton-Wren: Absolutely. A lot has changed in the last seven years.
[00:14:47] Chris Collier: Definitely. Definitely.
[00:14:49] Dani Middleton-Wren: And I think to Cyril's point, is it a copyright issue or is it a privacy issue? I think all round it's an ethical issue. It's just where do you as an AI artist or a musician draw the line?
So Cyril, you create music. What is your position on this as a musician?
[00:15:07] Cyril Noel-Tagoe: As a musician? Well, I'm not a vocalist, so it wouldn't be my voice that's used, although I guess someone could train a model off this podcast and use my voice. I like creativity. I think I would be for it if it was my voice, but I want to be the one that makes that decision.
There needs to be consent involved, and that's why I like what Grimes has done, where she's said, yeah, you can do this, here's the software to do it, and I'm gonna have a 50/50 split. It's kind of like what happens with sampling these days anyway: if you sample without permission, then you go to court and you probably lose all the proceeds.
If you agree beforehand with the original copyright owner, you can come to a fee split.
[00:15:45] Dani Middleton-Wren: Interesting. So I don't know much about how Grimes is actually going about this, and obviously I know that she said she'll happily do a 50% royalty split, but how is she going about that process of making sure that people come to her and get her permission?
Is that something that's set up? Is that something we can predict other artists might do?
[00:16:03] Cyril Noel-Tagoe: So I haven't looked into it in too much detail, but as I understand it, she's released an app where, I think, you sing something into it and it re-synthesizes it in her voice, and then you release it through that.
So I think because it goes through that, she's got control; it's not people using their own AI models trained on her voice. It's going through something she's controlling, and she's allowing people to use it. I think others probably will start doing that. I think there was someone in 2021 as well, previous to Grimes, who did the same, so...
[00:16:34] Dani Middleton-Wren: Wow. Yeah, I think that is a good way to go about it. I wonder how many other artists will start to consider it, because Grimes is a pretty big name, but she's always been a little more progressive, I'd say, than maybe her peers. So it'll be interesting to see who does follow suit, if anybody does.
[00:16:54] Andy Lole: There's been some suggestion around what regulation could look like, and I think auditability is a key part of that, as is watermarking, whatever digital watermarking might look like. And this does bring some sort of crossover towards the crypto world and NFTs and that kind of thing. But there's this point around the tooling being freely available at the moment: if we look at other industries, access to cheaper or quicker tooling has enabled huge positive things.
It's also enabled huge negative things. And this does feel like, yes, we need to move fast, but actually putting licensing in the hands of the individuals who feel they have something to defend may well go some way to taking the sting out of this in the short term. In the long term, as everything accelerates and the tooling accelerates, yes, we need much better thought around this. But to start with, empowering the people who feel they have something valuable to defend may well take some of the pain away.
[00:17:57] Dani Middleton-Wren: Yeah. And like you say, it'll help people move quicker as well. Because to Chris's point earlier about Congress having failed with social media, it took them a long time to realize they failed with it, whereas now they seem to be grabbing the bull by the horns a bit more and taking it a bit more seriously, which is always going to be a good thing.
[00:18:31] Dani Middleton-Wren: Right. So let's move on to our second subject. So topic number two for today is scalping appointments to resell for a profit. And we have been reading about police having arrested 69 people who have allegedly used bots to book nearly all of Spain's available appointments with immigration officials, and then sold these on for between $30 and $200 to aspiring migrants.
So scalping has been used for a myriad of purposes: to get sneakers, Justin Bieber Crocs or heated toilet seats (some of our favorites that we like to quote), Taylor Swift tickets, and, I guess on a more serious level, vaccine appointments in the US when the COVID vaccines were first rolled out. But this isn't something that I've seen before. Andy, do you have any insights or opinions on this that you'd like to share?
[00:19:27] Andy Lole: I think this actually ties back a little bit to what we've just been talking about, around the unexpected uses of technology. Looking at social media at first glance, it looks incredibly positive.
And then actually we understand the impact on young girls of being exposed to body shaming content, for example. A lot of the technology systems that we're talking about here, particularly vaccine systems or other government tech, have potentially been spun up fairly fast and quite cheaply, and will have been built with the intent of solving a specific problem. How much research and development time has gone into making the architecture withstand the worst excesses of human behavior, of people trying to take advantage of it? Bots have been a problem for as long as we've been able to transact on the internet. The opportunities to exploit systems will be identified so much faster now, and I think it's incumbent on the people creating the systems to expect people to use them badly.
That's fairly normal in a lot of places now, but clearly there are still opportunities to abuse things. How much more can we build on that, and what could we do in the short term to help critical systems get defended?
[00:20:42] Dani Middleton-Wren: And then we also have the moral and ethical question of what about those people being taken advantage of, people who are in very vulnerable and desperate situations and who want those immigration slots. They're having to pay, you know, a lot of money for them. But what if that is the only way they can get hold of them, if the bots have already taken them all? They're left with very little option but to purchase them, and it leaves them really exposed. So what do we think? I guess it comes back to that legislation slash regulation point earlier.
Who is responsible for making sure that this can't happen? Is anybody able to be responsible for it? Cyril, what do you think?
[00:21:25] Cyril Noel-Tagoe: I think it depends on the item. So there's different industries that scalping can affect and there's gonna be different responses for each of those. So if we're thinking of consumer goods, let's go with the PS5 there.
The poster boy for scalping, right? If you can't get a PS5, that's not the end of the world. I know a lot of people felt like it was at some point, but it really is not the end of the world. When it becomes a vaccine appointment or, you know, the immigration appointments, those are more on the serious end of the scale.
And if you think of where legislation is currently tackling scalping, it's looking at event tickets. So for these kinds of appointments, which are also on the more serious end, we need some legislation there, and we should look at the way event tickets have been legislated to start with and build on from that.
It won't be exactly the same, but you can build on from it. The problem we have, though, especially in this case, is that this was a free item, so people are taking something that's free and then forcing people to pay for it. So it's not only that people can't get to it in time; they're also having to pay for something that's free.
So there needs to be some extra legislation there: charging someone for a free item should itself be legislated against.
[00:22:37] Dani Middleton-Wren: But again, it comes down to that process of how they do that and who is meant to be putting it in place. In this case, would it be the Spanish government making sure they had extra measures in place?
Would it be an ID situation? I mean, going back to ticketing, like Glastonbury, where you can only get in if your face matches the face that is on the ticket.
[00:22:57] Cyril Noel-Tagoe: So as I understand this case, you need to input details when you're signing up for an appointment. What the scalpers will do is, as soon as the appointments are released, grab them all with fake details.
So some sort of ID verification there could help. But remember, these are also people who might be asylum seekers; they may not have all the ID required, and you don't wanna be too stringent and mean that vulnerable people can't get access. But then also, the scalpers may have the facilities to use fake IDs, which might pass even stringent checks.
And then once a genuine person wants an appointment, they can go to the scalper. The scalper can cancel the appointment they have and rebook it immediately in the genuine person's name. So the actual appointment will be in the right person's name with the right ID, but the scalper has held it under a temporary one beforehand.
So it is quite a difficult challenge to solve. I think in this case they had a CAPTCHA or something on the site, which the scalpers bypassed, using different IP addresses. But yeah, you need to be really careful in these cases that anything you impose doesn't make it too hard for the genuine people using the site, and that's the balance.
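To make that cancel-and-rebook pattern concrete, here is a minimal sketch of how a defender might flag it in an appointment audit log. The event schema, field names, and ten-second window are all illustrative assumptions, not a description of the Spanish system or of any vendor's detection logic:

```python
from datetime import datetime, timedelta

# Hypothetical audit log, ordered by time: (slot_id, action, account_id, timestamp)
log = [
    ("slot-42", "book",   "acct-111", datetime(2023, 5, 1, 9, 0, 0)),
    ("slot-42", "cancel", "acct-111", datetime(2023, 5, 20, 14, 0, 0)),
    ("slot-42", "book",   "acct-999", datetime(2023, 5, 20, 14, 0, 3)),
]

def suspicious_rebooks(log, window=timedelta(seconds=10)):
    """Flag slots rebooked by a different account within seconds of a cancel.

    A genuine cancellation frees a slot at a random moment; a scalper
    reselling an appointment cancels and rebooks it near-instantly.
    """
    flagged = []
    for i, (slot, action, account, ts) in enumerate(log):
        if action != "cancel":
            continue
        for slot2, action2, account2, ts2 in log[i + 1:]:
            if (slot2 == slot and action2 == "book"
                    and account2 != account and ts2 - ts <= window):
                flagged.append((slot, account, account2))
    return flagged

print(suspicious_rebooks(log))  # [('slot-42', 'acct-111', 'acct-999')]
```

A real system would also weigh signals such as shared IP addresses or payment details between the cancelling and rebooking accounts.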
[00:24:10] Dani Middleton-Wren: Absolutely. Because not only are you exploiting people who have got very little to nothing and making them pay for something that's free, but you're putting them at risk of being knocked back when they actually get to the appointment and not being able to be migrated. That doesn't sound like the official word.
[00:24:27] Chris Collier: I think it's important that the state does take an interest in it. I mean, ultimately it's an attack on a government system at the end of the day, so they should absolutely have rules around what can and can't be done on it and take a vast interest in it. Taking it to the extreme, there could be people abusing the fact that they can get hold of these asylum slots, and it could be an issue of national security in some instances.
You know what I mean? So I suppose it should be treated differently depending on the system. Scalping for ticketing, as an example, is an entirely commercial matter because it's a commercial body. But with this being a government-based system, it should probably be held to further regulation, further...
[00:25:11] Dani Middleton-Wren: Higher standards.
[00:25:12] Chris Collier: Yeah, absolutely. Particularly when you think that it's the movement of people across nations. I understand obviously people are seeking asylum and may not have the right documentation, but a state should know who is in the country at any given time and things like that.
[00:25:26] Andy Lole: I think this comes back to another point we've been talking about already, the pace that organizations can move at. In this instance, I think it is incumbent on the state, so Spain, to say what is the appropriate way of using its tooling, but also to ensure that appropriate behavior is technically enforced. What we're actually saying there, though, is that if Ticketmaster can't get it right for Taylor Swift tickets, how on earth is the Spanish government, which hasn't got the desire or probably the ability to write a ticketing system of that global scale, gonna get it right?
So I think there is space here for, whether it be the European Union or, again, the as-yet-undefined group that could be talking about AI regulation, to be talking about what is good practice in this space and helping people think it through. There are parallels here, it's not quite the same, but there are parallels with accessibility on your average website.
How usable is it for someone who is not normally able to interact with it? I'm surprised that it was possible to sign up for these appointments without relevant ID, but clearly it is possible. So it's these kinds of steps of just thinking through, role playing: what is the worst behavior we're gonna see from humans?
Because sadly we will see it.
[00:26:50] Dani Middleton-Wren: And do you think it's because within government the education isn't there, the awareness isn't there? So they wouldn't necessarily think, we need to consider how somebody could exploit this system to the worst degree for the most nefarious purpose?
[00:27:05] Andy Lole: I think there are probably a number of things at play here, aren't there?
There is probably an ambition to do something fast, which is great, but speed often leads to cutting corners. I think there is almost certainly a lack of understanding of the full complexity of it. I'm sure there are some very, very smart people working in the Spanish government's tech teams, or any other government's, but do they have access to the right resources, the right colleagues, the funding needed to actually think through some of this stuff? I suspect probably not.
[00:27:39] Dani Middleton-Wren: I think you're probably right. And as you just said, it's very important to point out here that we're not just hounding the Spanish government.
It is definitely a challenge across all governments; we've already been talking about how the US is trying to address some of its challenges with automation.
[00:27:55] Cyril Noel-Tagoe: I guess the positive thing here is they have made arrests, right? They are actively enforcing something here so that's a step in the right direction.
[00:28:05] Dani Middleton-Wren: Absolutely. So the Spanish police released a quote saying, those arrested rented the bot for uninterrupted use of the online appointment computer systems, entering or transmitting data in a way that seriously affected the normal and correct functioning of the appointment management website for immigration procedures throughout national territory.
And that is something that we at Netacea would call a business logic exploit. But it seems, to our previous point, that there is an understanding and awareness of what has happened, which, like you said, Cyril, is very positive for the future, and that there will be the ability to address this going forward.
And like you said, the arrests have been made. Hopefully better measures have been put in place and that means that vulnerable people will not be taken advantage of quite so harmfully.
So to round us off, shall we go on to our attack of the month? I'm gonna start with you, Cyril. I would like to have a quick chat about credential stuffing attacks, which I know you and your team have written a lot about.
Is there anything that springs to mind that the team have researched most recently?
[00:29:12] Cyril Noel-Tagoe: So I think I'll just give an overview of what credential stuffing is, just to make sure everyone's on the same page. So, credential stuffing is essentially taking a lot of passwords, a lot of usernames, and trying to see if any of them work.
Imagine you've been on a site and you've logged in with your username and password. If you put those in correctly, you get in. That's how these sites work. Now, what happens if that username and password get breached, but you've used them on a different site? Well, if the attackers get access to them, they can just log in with your username and password; they don't actually have to try and guess what your password is. And that's essentially how credential stuffing works. It's done at mass scale: you might be trying hundreds, thousands, millions of usernames and passwords, and you only need a few of them to be right to get in.
We've seen some big cases over the last few years; I believe in the US a senator did a review of a few companies and found multiple instances as well. It's really a growing problem, and we've seen a lot of people pushing password managers recently. You know, if you can use a password manager to store your passwords, that makes it easier for you to have multiple passwords and not have to remember them. So, Dani, how many online accounts do you think you have, roughly?
[00:30:32] Dani Middleton-Wren: Oh gosh. I'd say circa 60, maybe.
[00:30:36] Cyril Noel-Tagoe: 60. Fair enough. Yeah. And obviously, if you're following good password practice, those are long, with a mixture of letters, numbers, and all those kinds of things.
[00:30:46] Dani Middleton-Wren: Strong passwords. Yeah.
[00:30:47] Cyril Noel-Tagoe: Exactly. It's literally impossible for you to remember all of those. So most people fall back to either writing them down, which, if you're using a password manager, is great, write them in there, or just reusing the same password. You might think, I've got a really strong password, I'm gonna keep reusing that.
But then if that does get breached, and not all sites have equal security, you know, one site might get breached, that exposes all the different accounts you share that password across.
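One practical way to find out whether a reused password has already been exposed is to check it against a known-breach corpus. Here's a minimal sketch using the public Pwned Passwords range API; thanks to its k-anonymity design, only the first five characters of the password's SHA-1 hash ever leave your machine. Error handling and rate limiting are omitted for brevity:

```python
import hashlib
import urllib.request

def pwned_count(password: str) -> int:
    """Return how many times a password appears in known breach data."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    req = urllib.request.Request(
        f"https://api.pwnedpasswords.com/range/{prefix}",
        headers={"User-Agent": "password-hygiene-example"},
    )
    with urllib.request.urlopen(req) as resp:
        # Each response line is "<remaining 35 hash chars>:<breach count>"
        for line in resp.read().decode("utf-8").splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
    return 0

print(pwned_count("password123"))  # a very large number
```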
[00:31:12] Dani Middleton-Wren: So in terms of attacks that we've seen over the last few years, I think LinkedIn was probably the biggest. I mean, going back to like 2012, I remember that attack happening.
And to your point about passwords being reused, that was one that really made it hit home for a lot of people that it doesn't matter what site your password is breached on; once you've reused that password, every other account where you use it is vulnerable. So that was an example from 2012, and I imagine lots of people in this room were affected in some way or other. Actually, I'm talking to the wrong group.
You're in cybersecurity. You probably were not affected. What more recent examples can we think of with credential stuffing that will have affected people? I know that Dunkin' Donuts had an attack over the last couple of years.
[00:32:05] Cyril Noel-Tagoe: So one of the most interesting ones, this was in 2022, I believe, was a wedding registry website that had a credential stuffing attack.
So imagine that you're about to get married, you've got your registry website set up, and then your guests all start complaining that people have been able to get into their accounts. Like, think about adding stress to a wedding party, right?
[00:32:29] Dani Middleton-Wren: Oh God, don't.
[00:32:31] Cyril Noel-Tagoe: I mean, the ICO, which is the data protection regulator in the UK, recently put out a statement about the fact that credential stuffing is again becoming a really big problem.
The FBI did the same in America. Last year General Motors had attacks. It's been across different industries. As long as you have a portal that takes usernames and passwords, you can be affected.
[00:32:55] Chris Collier: And it's a really fair point, because I think it's quite obvious that big, high-profile businesses are probably gonna be more likely to be a target.
But I can speak from experience managing IT systems for businesses: we used to get cred stuffed every day, guaranteed, against a system we had for single sign-on. I could guarantee I would walk in every day and I would have notifications saying that we'd been cred stuffed from Thailand and places like that.
I mean, thankfully they never got in, but guaranteed, I could walk in every day and find out that someone had attempted to cred stuff us.
[00:33:28] Dani Middleton-Wren: And what was the purpose of that attack?
[00:33:31] Chris Collier: It was probably just trying to work out usable password combinations, which is exactly what Cyril was saying. Because unfortunately humans are habitual; if you crack one password, there's a good percentage chance it's been reused across other things. And as soon as you've got that, it's then just a case of trying it elsewhere.
[00:33:48] Andy Lole: You don't necessarily want to be testing stolen credentials on the site you want to get into, because if that website is paying any attention to its security, it'll be aware that it's subject to a credential stuffing attack.
You want to try them on a less important site. So if you've got someone's username and password, try them on bulletin board sites or less important e-commerce sites than the one you're actually targeting. If you want to get into someone's bank, don't validate that you've got their credentials on the bank you want to get into.
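On the defender's side, one telltale of credential stuffing is a single source failing logins across many distinct usernames, rather than one user fumbling their own password. The sketch below is an illustrative heuristic over a made-up event format, not Netacea's detection logic; real bot detection also weighs IP rotation, headers, timing, and much more:

```python
from collections import defaultdict

# Hypothetical login telemetry: (source_ip, username, succeeded)
events = [
    ("203.0.113.7", "alice", False),
    ("203.0.113.7", "bob",   False),
    ("203.0.113.7", "carol", False),
    ("198.51.100.2", "dave", True),
]

def flag_stuffing_ips(events, max_distinct_users=2):
    """Flag IPs whose failed logins span many distinct usernames.

    A real user mistypes one account; a stuffing bot walks a breached
    combo list, so its failures spread across many usernames.
    """
    failures = defaultdict(set)
    for ip, user, ok in events:
        if not ok:
            failures[ip].add(user)
    return [ip for ip, users in failures.items() if len(users) > max_distinct_users]

print(flag_stuffing_ips(events))  # ['203.0.113.7']
```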
[00:34:20] Dani Middleton-Wren: And I suppose that's where the challenge is. For instance, if you've got an Asos account, which I may or may not have amongst my 60 online accounts, you might not think, I need to have a super strong password for this account.
It's my Asos account, what details is it holding? But the fact that it could be the business used to validate your username and password makes it really high risk for the other accounts you're recycling that password across.
[00:34:45] Andy Lole: Exactly. If you've got evidence that they're recycling passwords, that's good enough to then try them on the system you do want to get into.
[00:34:53] Dani Middleton-Wren: And so what can people be doing? Cyril's mentioned a few things, but what can people do to protect themselves and, more importantly, what can businesses do?
Andy is trying very hard not to say "speak to me."
[00:35:09] Chris Collier: I think people have got to rethink the way they actually go about protecting their applications. I mean, I've been a massive advocate for getting rid of usernames and passwords forever, because we're relying on antiquated technology. We're in the 21st century, guys. Why are we still using a username and password to protect everything?
It's like putting a padlock and key on the door and expecting things to change. It doesn't work. So there's got to be a way of us actually thinking about how we reinvent authentication, and I think we're going a long way to do that. But whether that's gonna be the silver bullet or not, I don't know.
[00:35:42] Dani Middleton-Wren: Because MFA is being bypassed left, right, and center.
[00:35:46] Chris Collier: Yeah, exactly. So what we once thought was probably the silver bullet, people have worked their way around, and I think we live in a world where it's always gonna be an arms race, right? So it's a case of us continuing to innovate.
[00:35:59] Dani Middleton-Wren: Do you think it will be authenticator apps that provide the answer, with one-time passwords, or...
[00:36:05] Chris Collier: I think biometrics is probably the way that we're gonna have to start going.
And I don't mean just like thumbprints and retina scanners and things like that. I think we have to think of something a little bit different that makes it really difficult for someone to be able to fake who you are.
[00:36:19] Dani Middleton-Wren: Gosh, what might that be? Tongue?
Not hygienic in a post-COVID world.
[00:36:29] Chris Collier: I remember Lenovo did something years ago where they had a camera actually check someone's vein pattern, because the vein pattern is entirely unique to each person, and that was one of the ways they could get into systems and stuff like that. But obviously it's not really commercially viable.
But yeah, it was one thing that they'd looked at.
[00:36:47] Dani Middleton-Wren: Scan my vein so I can get into my Asos account. Easy.
Andy, do you have any thoughts on this? What might be next?
[00:36:58] Andy Lole: There are a couple of challenges here, aren't there? Anything we do that puts resistance in people's way, commercial organizations will feel is likely to get in the way of them making money. If it's hard to log into your account, you don't necessarily log in, and therefore no repeat custom, et cetera.
So I think low friction matters. In the short term it's probably some sort of multi-factor authentication, and the way authenticator apps have evolved recently is significantly more mature and harder to fake. I think that's a step in the right direction. But even getting people used to using that, there is still friction there.
End users themselves, I think, need to be aware that it's on them to secure that stuff just as much as it is on the systems that they're using.
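For anyone curious what an authenticator app is actually doing, here's a minimal sketch of the time-based one-time password scheme (TOTP, RFC 6238) those apps implement: an HMAC over the current 30-second counter, truncated to six digits. The base32 secret below is a placeholder, not a real key:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
    """Compute an RFC 6238 time-based one-time password."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time()) // period)  # 30-second time step
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                   # dynamic truncation
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

# The server holds the same shared secret; the code rotates every 30 seconds.
print(totp("JBSWY3DPEHPK3PXP"))
```

Because the code depends on a shared secret plus the clock, a stolen password alone is no longer enough, though, as the panel notes, attackers have found ways around MFA too.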
[00:37:50] Dani Middleton-Wren: And that is ultimately the greatest challenge because it is a far greater number of people to try and influence than it is if you're just trying to influence the business.
There's only so much the business can do, which takes us right back around to where responsibility lies for regulation around AI and scalper bots. Who can you place responsibility on? It has to be everybody. I think ultimately everybody has to do their bit. Great.
Well, thank you all for a fantastic panel today. It feels like we've covered a lot of subjects and somehow managed to circle back around to the very beginning. Well done, everybody. So if you would like to contact us with any questions following today's Cybersecurity Sessions, please contact us @cybersecpod, or you can contact us via email at podcast@netacea.com.
Thank you all for joining us, and we look forward to series two episode three, due in July.