Validating AI Value, Securing Supply Chains, Fake Account Creation

Season 2, Episode 7
7th December 2023

Netacea CISO Andrew Ash welcomes two special guests to the podcast this month to talk about AI adoption and managing third party risk: Thomas Ballin (CTO, Cytix) and Haydn Brooks (CEO, Risk Ledger).

In 2023 the AI genie is well and truly out of the bottle, gaining mainstream attention and usage across business, academia and in day-to-day life. As a result, AI has become somewhat of a buzzword used to sell solutions or make products appear smart and modern. As mutual advocates of AI to solve problems more efficiently for clients, Andrew and Thomas weigh in on how to define “real AI”, which solutions really benefit from incorporating AI, and how we can validate these claims.

Meanwhile, CISOs are rightly concerned with gaining as much control as possible over internal systems so that they can be secured against known and novel threats. But businesses are also reliant on their supply chain and third-party systems, which have their own potential vulnerabilities. Haydn has a wealth of experience in this area, and sheds light on the potential risks third party relationships expose and how CISOs can manage them whilst maintaining the value of these relationships.

Finally, threat researcher extraordinaire Cyril Noel-Tagoe explains why criminals use bots to mass create fake accounts on web services, the other attacks these accounts facilitate, and how businesses can cut off fake accounts before they do their damage.

Speakers

Andrew Ash
CISO, Netacea

Cyril Noel-Tagoe
Principal Security Researcher, Netacea

Haydn Brooks
CEO, Risk Ledger

Thomas Ballin
CTO, Cytix

Episode Transcript

[00:00:00] Andy Ash: Hello and welcome to the Cybersecurity Sessions podcast, episode seven. My name's Andy Ash and I'm the host today, replacing Dani temporarily. I'm the CISO here at Netacea and I'm joined today on a bumper edition by three panelists. We have three topics as well today. First one being validating AI.

Second one, securing the supply chain. And then we have our attack of the month, which is all around fake account creation. So, onto our panelists today. Cyril Noel-Tagoe, our resident expert.

[00:00:31] Cyril Noel-Tagoe: Hi everyone. I'm Cyril Noel-Tagoe. I'm Principal Security Researcher at Netacea. I look at public-facing threat research.

[00:00:39] Andy Ash: Tom Ballin from Cytix. Tom, did you just want to introduce yourself?

[00:00:43] Thomas Ballin: Thanks for having me on. So I'm Thomas, one of the co-founders at Cytix. My background goes back ten years or so in the penetration testing space. I used to be a pen tester myself; a few years ago I decided I wanted to move away from actually delivering that consultancy work and into building out products and innovations to improve on what I saw to be quite a broken industry.

[00:01:06] Andy Ash: Cool. Thank you. Okay. So the first topic today is validating the value of AI. So in the last year, as we all know, the AI genie is well and truly out of the bottle, gaining mainstream attention and usage across business, academia, and in our day-to-day lives. As a result, AI has become somewhat of a buzzword, used to sell solutions or make products appear smart and modern. How much of this AI is real? Which solutions really benefit from incorporating AI? And how can we validate these claims? So before we go on to the questions for our guests, a couple of observations and, actually, a question from me.

I've been reading a lot, listening to a lot of podcasts, and I've been part of quite a few panels and round tables where AI and security has been the main topic. And I would say that in 25 years of working in IT, I've never come across such an emotive subject, with such a range of opinion about the benefits or the challenges that AI presents.

So, there's a bit of thought, and there's a first question. On a scale of 0 to 10, 0 being the apocalypse and the ending of the human race through bot overlords, and 10 being the liberation of humanity, allowing us to conquer galaxies beyond our own. Where do you, Cyril, stand on what AI can bring to us?

[00:02:25] Cyril Noel-Tagoe: I'm going to go for a very neutral six.

[00:02:28] Andy Ash: A six? Go on, why is that?

[00:02:31] Cyril Noel-Tagoe: Slightly positive. Um, obviously with ten being the liberation of humankind, I feel like going any higher than an eight is probably a bit too much, but, um, I think, no, it's an exciting technology, and as we've seen in the past, humankind keeps evolving by introducing new technologies.

And I think with any technology, how good or bad it will be depends on people using it right. But, overall, I'm confident that it will be positive for the human race in total.

[00:03:01] Andy Ash: Good. I share that view. I would say a six, moving to a seven, depending on developments in the not too distant future.

Tom, how about you?

[00:03:11] Thomas Ballin: I think at the risk of echoing a lot of the same sentiment, I'd give something of a Schrödinger kind of an answer: somewhere between three and seven, oscillating between the two perpetually. You know, at the moment we're still in a position where setting out those guardrails and meeting AI with a healthy level of skepticism means that there's a lot of potential for us to do a lot of good, not harm.

But at the same time, there are definitely some bad actors who are using this for some pretty significant things, and it is still a bit of a race to make sure that we get those protections in place before those bad things are able to actually come to fruition, I think.

[00:03:47] Andy Ash: It's an excellent answer, but unfortunately, two fence sitters.

I was hoping we'd have wildly differing views. But yeah, I think one of the things that I have witnessed in a lot of interactions around AI is that, like I say, it's a very emotive subject. But it's not an if statement: AI is here and it's not going to go away. And I think it's up to companies like Netacea and Cytix to really push the benefit of using AI to enhance, in our case, security tooling.

We have to be able to describe that accurately, and how we're doing it; that's the explainability piece, and we'll come on to that. So the first question I have is: how can AI in its current iteration realistically improve cyber security, and what might this look like in the near future? So Tom, do you have a view on that?

[00:04:34] Thomas Ballin: Yeah, I think the main thing it comes down to is volume. You know, in cyber security we've been playing a game for a long, long time of everything from how do we handle the volume of logs when we're trying to do defensive operations, to how do we handle the volume of vulnerabilities that are identified when we're talking about scanning and detection capabilities in the offensive space.

And that level of volume is something which we've been trying to combat for a very, very long time. But the inconsistency between each of those, between each of the different bots that are created, or each of the different vulnerabilities, or each of the different outputs that are received from the different tools, has meant that trying to come up with a traditional approach to dealing with any of that has really proved untenable, no matter how hard certain organizations might try to push the idea that they're able to do it. Whereas with AI, all of a sudden you're able to open up this whole realm of different possibilities: being able to process lots and lots of very unstructured data in a very fast and effective way.

The other thing to look at, as well as just the structure of the data, is being able to do pattern recognition in a very different way, right? Fundamentally, in security, one of the main things that we need is looking at trends and patterns and observations over time. And if you give somebody a hundred thousand lines of logs, or a hundred thousand vulnerabilities that are picked up over the last five or six years, they're not going to be able to give you very much insight into what's actually gone on. Whereas with the sort of capabilities that machine learning and artificial intelligence have, I think the potential to just ingest and look at those at a holistic level is incredibly vast.
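To make that concrete, here is a minimal, illustrative sketch of the kind of unsupervised pattern recognition Thomas describes, flagging anomalous clients from web traffic features. The feature names and numbers are invented for this example, and this is not any vendor's actual model:

```python
# Illustrative only: unsupervised anomaly detection over simple per-client
# features derived from web logs. Features and values are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical features per client: [requests/min, error rate, distinct paths]
normal = rng.normal(loc=[60, 0.02, 15], scale=[20, 0.01, 5], size=(10_000, 3))
bots = rng.normal(loc=[900, 0.40, 2], scale=[100, 0.05, 1], size=(50, 3))
traffic = np.vstack([normal, bots])

# Fit an isolation forest and surface the most anomalous clients for review
model = IsolationForest(contamination=0.01, random_state=0).fit(traffic)
flagged = traffic[model.predict(traffic) == -1]  # -1 marks outliers
print(f"flagged {len(flagged)} of {len(traffic)} clients for review")
```

A human couldn't eyeball ten thousand rows like this, but a model can surface the handful worth an analyst's time.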

[00:06:20] Andy Ash: Yeah. So the sheer volume of data that you have to collect to know what is happening in a particular estate or landscape is so vast.

You know, from a Netacea point of view, 2 trillion records this year that we are running ML on. It's not human knowable. You can't ask a lot of people to go and curate that data for you. You can't get any insight from it manually. The only way you can do it is through the massive Spark clusters that we run to actually process it.

[00:06:47] Cyril Noel-Tagoe: I echo what both of you said. I'm coming at it from a slightly different view, so I'm generally looking at it from the other aspect, in terms of threat intelligence collection, right? We're going out there and we're collecting lots of data about chatter and all these other things that we can use to inform.

And while it is human readable, and you could get a team of, you know, 10 analysts to go and sit there and read that, that's so inefficient. If we can get AI to do a lot of those summarization tasks, and hand the analysts a summary to look at, then we're reducing the time that they're spending, but we're not reducing the quality of the work.

And I think, for me at the moment, that's the way I'm using AI.
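As a hedged sketch of what Cyril describes, a summarization model can turn raw collected chatter into an analyst digest. The model named here is an arbitrary public choice (downloaded on first use), not what Netacea runs:

```python
# Illustrative: batch-summarize collected threat "chatter" into a digest
# so analysts review summaries instead of raw documents.
from transformers import pipeline

summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

raw_chatter = [
    "Forum post describing a new credential stuffing config for a retail "
    "site, including proxy rotation settings and CAPTCHA bypass claims...",
    # ...in practice, thousands of collected documents
]

digest = [
    summarizer(doc, max_length=60, min_length=15, do_sample=False)[0]["summary_text"]
    for doc in raw_chatter
]
for line in digest:
    print("-", line)
```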

[00:07:27] Thomas Ballin: I'd probably add to that as well. I'm sure you see it a lot with analysts who are expected to look through massive swathes of data: there's an element of burnout that comes into that as well. You know, you ask somebody to do a very repetitive task over and over and over again, and eventually it becomes a very painful process for them, high levels of stress, and you end up chewing through people at unhealthy rates.

And so I think that's one of the things that maybe doesn't get talked about so much in AI when we are thinking about some of the benefits, but just being able to remove people from the more painful processes is perhaps just as important as sort of the pattern recognition and things like that.

[00:08:07] Cyril Noel-Tagoe: That's a really important point.

Cause especially in our industry, burnout and stress are such a major factor. I can't remember the exact stats, but I remember reading a report where so many people said they were planning to leave the industry just because of the amount of stress. So if we can reduce that and keep talented people in an industry that we keep talking about having a skills shortage in, then that's a win-win.

[00:08:30] Andy Ash: It's certainly something that is required, and using AI appropriately, so the human element in the machine can actually become a strategic asset, is really important. It's exactly what you're talking about, Cyril. I mean, we ingest a whole lot of threat data; it would be impossible for the threat team to actually manually curate that.

And the benefit of actually having the AI in there is that you can take more time to go and investigate those threats thoroughly. It's quite obvious when you put it in those terms, but it makes your job much more interesting. So rather than reading through a lot of stuff, you're actually doing something about the threats that we're uncovering.

So yeah, really interesting. Do we think that business is underestimating the potential, or the opposite? Based on the emotive piece that we started with, which I think is really important: are we missing out already on the benefits we could get from AI?

[00:09:25] Thomas Ballin: I don't know whether businesses are underestimating the potential.

In a lot of cases, I think people are misunderstanding the potential. You know, people have seen GPT, and people have seen the sort of capability of interacting with what feels like a person, and they think that that is the be all and end all of what AI can do. They then start running away with the ability to interpret text as you feed it into a system, and all of the concepts there.

And in that process they maybe get the misconception that it's just a really straightforward, easy process to apply that into their business, but also the misconception that that's the only thing AI is capable of doing. The whole host of other different types of AI, and formats that it can take, which they really should be exploring, they just sort of set to one side, at least for the time being, while everybody's in this big GPT hype, as it were.

[00:10:19] Cyril Noel-Tagoe: I think I'll add to that the sheer number of organizations claiming AI since ChatGPT. There's just so much noise now that people can't actually properly evaluate it. It's not so much that they underestimate AI; it's that they don't understand the differentiators between different organizations saying they've got AI. What does that AI do, and what do they mean by AI and ML? Is it actually AI, or are they just using the buzzword?

[00:10:43] Andy Ash: Absolutely. I started by saying this isn't an if statement. AI is here and it's not going away. It's a when, when is this going to be adopted?

It'll be adopted by the vendors you use, regardless, if you run any kind of IT or security operation. I think the bit that not a lot of people are talking about is the how. How do you actually adopt AI? So as somebody who buys software a lot, how do I evaluate? How do I understand, one, what my requirements are in this area?

And two, how do I validate that the AI-based solution I'm looking at actually works for me? At Netacea, we've been deploying AI across our customer estates for around five years, so we have quite a lot of experience in this, and we have a very good way of taking customers from that point of not quite understanding the reason they need AI in their environment all the way through to automated mitigation of threats.

That journey for the customer starts with generally a place of slight mistrust. We deal with web traffic and web traffic is obviously one of the major revenue generating streams in any business, any retail business, eCommerce, finance. You know, it is of great value and anything that interferes with that needs to be well checked for obvious reasons.

So taking customers through that trust process, around the efficacy, reliability and availability of our services, is really, really important. And I know, Thomas, your tooling at Cytix also has quite a lot of this involved. Do you have that same kind of journey with the customer?

[00:12:17] Thomas Ballin: To an extent. I think when you say it's not an if statement and AI is going to happen, one of the things that I don't think a lot of people recognize, or have had as much appreciation for, is the fact that this is not new technology by any stretch. This is something that has been out there and been used; everybody knows that YouTube and Google and social media have algorithms that they use in order to determine what kind of data to present to you and when to present it to you.

And the illusion that this last year's worth of evolution, you know, language processing and generative AI, has somehow been the inception of AI is a mistake, I think. And when you start talking to people about that, and showing people that actually this is not an unfamiliar technology, and this is something that we are trusting to be ingrained into our lives on a daily basis, it's a much, much easier conversation to have. But also, fundamentally, AI is an expensive tool to build. And so I don't think that we should just be trying to apply it absolutely everywhere. We should be trying to apply it in the places where we have either tried and failed, or simply haven't tried because we don't have a tenable alternative solution to the problem.

At which point I think the proof of value is fairly evident in itself: if you couldn't do something beforehand and you can do it afterwards, obviously there is value in that service. In terms of being able to take your hands away and trust that the AI is able to do things, or whether we should even be open to doing that at the moment, that is obviously a risk-based decision.

There's lots of things that people need to consider in individual cases. But at the end of the day, my problem with the conversation that people have around that is that we seem to be holding AI to a much, much higher standard than the standard we hold people to, or traditional technology to. If we look at, for instance, people talking about using AI to help pharmacies, there's the risk that might come from misdiagnosing somebody and then providing the wrong type of medication.

Now if that happens one time in a thousand, that's absolutely not acceptable as far as most people are concerned in an AI world. But that happens more like one in six hundred times, or something like that, in the real world using humans. I mean, it's probably more than that even. And that is an acceptable level of risk.

So, you know, you really have to look at this, I think, with a lot of pragmatism in order to say: yes, this is not an infallible technology. Yes, hallucination is, I think, the word of the year from the Oxford Dictionary, because we are aware that there are some limitations in this technology. But no, those slight risks do not mean that this is not an acceptable solution to a lot of problems.

[00:15:02] Andy Ash: Yeah, it's interesting that this is something that's come up in a couple of the groups that I've been speaking in, that acceptable risk. And I think part of it is that you can hold a person to account, but you can't really at the moment hold an AI language model to account, right? That is likely to be changed with legislation and regulation in the not too distant future, certainly in the EU and the US and I would imagine in the UK at the point that we have to do it.

It's exactly as you said, it's that holding something that you don't quite understand to account. You know, everybody understands that humans make errors, we make errors every day, all the time, but somehow it's not acceptable for a machine to make an error because of that perception of what a machine is.

A machine doesn't make errors. A machine always makes the same part every time; turn the handle and the same thing comes out. And it's not true. So the bit for me that's missing is the practicality. Because if you're trying to buy new software from nearly any vendor now, there is going to be an element of AI in there.

They will certainly be telling you that there is an element of AI in there. And then there's that practical side: how do you evaluate whether it's, one, relevant to the problem you're trying to solve; two, meets your requirements; and three, has good efficacy, so it's not hallucinating wildly on your prized assets?

And I think that's the bit that really is interesting to me, because, well, you've got a lot of emotion; like I say, it's a very emotional subject. Where you have emotion, it's very difficult to make decisions, and it's even harder if you don't have a blueprint to do that. So it comes back to going to your own requirements; looking at the efficacy of the models that have been employed, whatever you put them into; making sure that the results aren't stored in a black box that you can't get access to; making sure that there is explainability around the decision making that that AI module, whatever it might be, is providing.

Because if you don't have that, you are playing to the fears and the emotion of the person buying it. So that kind of buyer journey needs to be very carefully managed from beginning to end. So, yeah, I think there'll be a lot written on this in the next year as people start, and probably fail, to implement AI solutions, whether that's through a vendor or themselves.

So I think there's a big academic study to be done on this undoubtedly.

[00:17:17] Thomas Ballin: I think where a lot of the distrust in AI, that maybe isn't distrust in people, comes from as well is the fact that it's not very good at showing its working. And that means it's very, very hard to understand whether the answer that you're getting is the right answer or the wrong answer.

You know, you can't measure it in the same way. Also, pardon the pun, AI isn't binary. It doesn't have to just be on or off. You can use AI in combination with people, or you can introduce AI in phased approaches, and you can say, okay, well I'm going to get it to get this far in the process, and then I'm going to let a human supervise, or let a human make some judgments, or I'm going to run the two in parallel and then compare the results, and build up that trust over time.

So, I think that's where, as long as you've got defined criteria for what success looks like, what good looks like, and you can measure it, you can run those comparisons, and you can build that understanding and build that trust through the experience, and not necessarily just have to say, okay, from now on, I'm going to let AI just run everything in all situations and not really think about it past that.
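A minimal sketch of the parallel run Thomas describes might look like the following, with the AI and a human analyst scoring the same cases and the team measuring agreement before trusting the AI alone. The verdicts here are made up for illustration:

```python
# Hypothetical parallel-run evaluation: compare AI verdicts against human
# analyst verdicts on the same cases before letting the AI act alone.
from sklearn.metrics import cohen_kappa_score, confusion_matrix

human = [1, 0, 0, 1, 1, 0, 1, 0, 0, 1]  # 1 = block, 0 = allow
ai    = [1, 0, 1, 1, 1, 0, 1, 0, 0, 0]

agreement = sum(h == a for h, a in zip(human, ai)) / len(human)
print(f"raw agreement: {agreement:.0%}")
print(f"Cohen's kappa: {cohen_kappa_score(human, ai):.2f}")  # chance-corrected
print(confusion_matrix(human, ai))  # shows exactly where the verdicts diverge
```

Once agreement is consistently high against the defined success criteria, mitigation can be switched on in phases.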

[00:18:20] Andy Ash: Yeah. And we have patterns for this in IT. Observation mode across all security tooling is the first thing that always gets enabled, AI or not AI. Understanding and ratifying results. And that stands even more here. And I think the key bit for vendors is to be able to show what is actually coming out as a result from the software that's deployed.

Being able to compare with the baseline that you have today. So for things like next-gen WAF or WAAP, or bot management in our case, vulnerability management in your case: what was the baseline before? Why is AI building on top of that to make this better? And being able to demonstrate that in a clear way, in reporting that's self-service and self-explanatory.

That's the thing that will change this from an emotional reaction to a business process piece, which is: we now can't do without this. And that's actually the journey that a lot of customers we work with go on. From, okay, you can turn a bit of blocking on now, you can turn bits of your mitigation on now.

Okay, you can turn a bit more on. And then by the end of it, you're blocking half the internet because it's actually malicious. That is the journey that people tend to go through. But we can only do that if we expose the metrics and the explainability, and the context in which these actions are taking place.

I think that's really important for anybody buying to actually understand that. You just need to go on that journey and test, basically.

[00:19:47] Cyril Noel-Tagoe: To echo what Tom said, I think that whole holding of AI to a higher standard than humans is probably going to be one of the biggest reasons for AI not being exploited as well as it could be.

Everything from driverless cars onwards, it's always there: if something's a machine, it needs to be perfect. And there's never going to be an AI that's perfect. That just will never happen. The sooner people realize that and start tempering their expectations, and then also developing the metrics, the better.

Can we measure how accurate we are at this process right now? And then can we measure the delta between that and an AI product? I think that's when people will be able to start trusting AI: that yes, we get a 10 percent increase in accuracy here. We can already measure the efficiency, but I think it's the accuracy bit that people are hung up on.

[00:20:36] Thomas Ballin: A question I have, and something that I've thought about for a while, obviously building an AI product, and, as you say, having bought several products that promise to be AI, and products that don't mention AI much but probably do use some behind the scenes, is: how necessary do you think it is to put AI at the forefront of the description of what you actually do?

If we look at Google or Facebook or YouTube or people like that, they don't typically refer to what's going on behind the scenes as AI. They don't need to say, this is an AI powered product. There are some use cases inside what they do, there are some processes that benefit from it. But do you think calling it AI is helping us?

Or do you think that all that's doing is feeding the marketing machine and also maybe the fear machine?

[00:21:25] Andy Ash: Yeah, absolutely, great question. What people really want, and this is why it starts with requirements, right, is to solve the problem that's in front of them. If they could do that with a piece of paper and a pen, that'd be your first port of call.

You know, it doesn't matter that there's AI in the background. There is some efficacy piece there that matters, but that, as I was saying before, is a pattern that we've been buying software with for decades and longer. You know, does it meet the requirement that I'm buying it for? Does it solve the problem that I have as a CISO?

Can I see my vulnerabilities in my estate a lot easier if I use this tool? And the fact that it's got AI in the background... you might want to know a little bit about the data and you might want to know a little bit about the models and the risk, but you would probably want to know that anyway if there was no AI involved.

So yeah, I think that it is that kind of rush to buzzwords. The hype cycle is absolutely driving a lot of the market and the way that we talk about this. Really, as buyers, you just want something that works. You want to be able to define the problem, define the solution, and deploy it.

[00:22:34] Thomas Ballin: Yeah, a hundred percent agree. And that's the feedback that we get from customers as well: they're happy to help facilitate us in things like training the models, and in testing out the efficacy of these solutions. But the end sentence of pretty much every customer call is always just "the proof will be in the pudding".

You know, I can't count the number of times I've heard that statement, because they don't really care: as long as it meets their security criteria, as long as it meets their success criteria, they don't really care how it gets done. And I think that's the right way to look at things. And so we'll learn and adapt over time where the AI should be applied, and when we should just park that idea and say, let's just use these traditional, simple approaches and not get overexcited, as much as it's very tempting for the technical people to do that.

[00:23:23] Cyril Noel-Tagoe: At the end of the day, people are buying an outcome, not the thing in the middle.

[00:23:28] Andy Ash: Yeah. Proof being in the pudding is probably a good place to leave this because that's essentially where we're going to end up.

We'll find out. Next year is going to be really interesting.

Okay. So now we're going to move on to our next topic, which is securing the supply chain. CISOs are rightly concerned with gaining as much control as possible over internal systems, so that they can be secured against known and novel threats. But businesses are also reliant on their supply chain and third party systems, which have their own potential vulnerabilities.

What potential risks does this expose, and how can CISOs manage them whilst maintaining the value of these relationships? And here to help us answer that question today is Haydn Brooks from Risk Ledger. Haydn, would you do a quick intro and tell us more about this subject?

[00:24:11] Haydn Brooks: Yeah. Happy to. Thanks, Andy. So, a brief introduction from me: I started my career as a security consultant at a Big Four company.

I spent about two years there dabbling in various different domains of security, having a background in neuroscience before that, and then, yeah, I did a few projects in supply chain, so I became quite interested in the topic. Then I moved to another Big Four firm, where I was made their subject matter expert on supply chain security risk, which was good fun, then had a brief stint at a startup consultancy before leaving there and deciding what to do next.

Based on a lot of the problems I'd seen our clients have at the Big Four when looking at supply chain security, we decided to found Risk Ledger to try and solve them, essentially. So we founded that in 2018 and launched the platform in 2020. We've now just raised our Series A; we're 40 people, with a growing client list across multiple different kinds of sectors.

[00:24:59] Andy Ash: Cool. And in terms of your overview of third party risk and supply chain, would you just describe the landscape that you see?

[00:25:09] Haydn Brooks: Yeah, happily. So when we are speaking to potential clients, usually we describe them as having three different types of supply chain. We talk about it in terms of their corporate supply chain, which is every single company their company engages with.

And they might consume services or products through that, but these are the other companies that allow their company to operate. The second one is their software supply chain, which is slightly different. That looks more at, if they are building software products, where the packages within those products are being pulled from.

Are they pulling through vulnerabilities? It's more of a technical domain. And the third supply chain is the logistics supply chain, which looks more at making sure that the right products are in the right place at the right time. So what I do, and what Risk Ledger does, is specialize in that corporate supply chain.

So it's helping companies work with other companies and trust that they have the right maturity of security to be able to work together, essentially. And the other way we slice it is that when you zoom into that corporate supply chain, people tend to have many suppliers, and the way we think about how to categorize those suppliers is basically using the CIA triad.

So some of those suppliers you'll share data with, and there's a confidentiality risk there. You need to make sure that those suppliers are protecting the data that you're sharing with them. Uh, the second is an availability risk. And, uh, traditionally kind of clients refer to some of their suppliers as being critical.

And we tend to link criticality to availability. And that's all about making sure that the suppliers your business relies on stay online and producing or providing their service as required. And then the third is an integrity risk. And this one we talk about in terms of if you give system access to your suppliers or you link your systems together in some way, it's all about protecting access to essentially those trusted backdoors into your systems through the suppliers.

And with those three kind of categories of suppliers, you can essentially go out to your whole supplier base and tier them in terms of which suppliers are important, and start to think about actually what the impact of an incident at a supplier would be on you, given which of those three categories they fall into.
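A toy sketch of that tiering, purely illustrative; the field names and the scoring rule here are invented, not Risk Ledger's methodology:

```python
# Tier suppliers by the three CIA-style exposure categories described above.
from dataclasses import dataclass

@dataclass
class Supplier:
    name: str
    holds_our_data: bool      # confidentiality exposure
    business_critical: bool   # availability exposure
    has_system_access: bool   # integrity exposure ("trusted backdoor")

def tier(s: Supplier) -> str:
    exposure = sum([s.holds_our_data, s.business_critical, s.has_system_access])
    return {0: "low", 1: "standard", 2: "high", 3: "critical"}[exposure]

suppliers = [
    Supplier("Payroll SaaS", holds_our_data=True, business_critical=False,
             has_system_access=False),
    Supplier("Cloud host", holds_our_data=True, business_critical=True,
             has_system_access=True),
]
for s in suppliers:
    print(f"{s.name}: {tier(s)}")
```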

We have a platform that essentially enables the conversation between them and the suppliers to happen in a much easier way than those traditional kind of spreadsheet based questionnaires. Uh, so you can think of our platform almost like a social network. The idea is organizations sign up. We can help that organization describe what they do to secure themselves.

We can help them implement various bits of security, and then other companies that need to run assurance against them can connect to them, run that assurance against them, and then do some whizzy things on top as well with some of the data we collect.

[00:27:27] Andy Ash: Cool. I mean, I've had a look at the Risk Ledger product.

It's really interesting stuff. So we've got our resident expert in everything, Cyril, who is going to talk through some detail. We've pulled out a few examples from the last year of third party breaches, et cetera. So Cyril, do you want to talk us through SolarWinds?

[00:27:45] Cyril Noel-Tagoe: Yeah, sure. And I think it's really interesting, Haydn, how you tie that into the CIA triad. Especially when I used to look at third parties, availability and confidentiality were always the ones people focused on, right?

If we give data to people, are they protecting it? And if we rely on them for a critical service, are they going to go down, or are they going to go out of business? And so on. But the integrity one really links to the SolarWinds incident, which has probably been one of the biggest and most well known third party risk incidents.

This was where SolarWinds, a software developer whose software is used to monitor networks and is installed in lots of different organizations, had its build process infiltrated; the threat actors were able to poison the software with malware. So if you were an organization who had this software running, an update would happen, and that update would actually give you malware, which gave the attackers backdoors into your network, through no fault of your own. You're just running some network software and running the update, because in security they tell you that if the update comes, you need to run it, because patching is important. You run the update, and then suddenly there's a backdoor. So just because of the scale, the number of people using the SolarWinds software at the time, this reached tens of thousands of organizations.

And I'd probably say it's the biggest supply chain breach out there.

[00:29:03] Haydn Brooks: One of the more impactful, I think, as well, because they worked a lot with US government. So a lot of the end targets ended up being really quite security conscious organizations. But that also touches on another important point in the supply chain, which is one I didn't describe at the start.

So one of the things that becomes apparent when speaking to prospects about supply chain is that people think of their supply chain almost as just their third parties, and they kind of forget everything that happens underneath that, but they also forget all of the interconnections between their third parties as well.

So instead of thinking of the supply chain as just a list of companies you work with, it's almost like a spider's web underneath you, filled with companies that are all nodes, with connections between them. And this is a classic example of a threat actor targeting a company knowing that that company worked with a long list of very desirable targets, and by breaching that one company they were then able to gain access to many others. It was an attack against that integrity part of the triad.

So yeah, classic attack, quite an interesting one.

[00:29:58] Cyril Noel-Tagoe: It's almost an evolution of the kind of watering hole attack, where you've got the website that everyone goes to and you put some malware there. But I think what gets me with this one is just the sheer audacity, as some might call it, right, to put your malware in a build so that it gets released that way.

[00:30:15] Haydn Brooks: Yeah. And the complexity of it as well. Like, I mean, we're a software company, and if you think about actually the steps it takes to try and poison our build with some sort of bit of malware, it was definitely a very thought out and very targeted style of attack.

And you can tell that as well by the end companies that were breached. So I think the affected software was put into about 10,000 companies, but I think the actual list of companies that uncovered a breach within their systems was a lot smaller than that. I think it was sub a hundred, but I could be wrong there.

So don't quote me on that.

[00:30:42] Andy Ash: It was highly targeted. And of course, the lead-in to actually get this attack off the ground took a significant amount of effort, so they knew what the return on investment would be.

[00:30:52] Haydn Brooks: This is also where the corporate supply chain and software supply chain overlap slightly.

So, if you think of something like SolarWinds, there's another style of attack, which was either a real life example or a proof of concept, where somebody had become a member of an open source community and then put a backdoor in an open source package that was then used by many other bits of software.

It's a similar style of attack, in that they're able to put this backdoor into many other bits of software, and then spread that around a list of companies that they can then pick their targets from in quite a nice way. So yeah, the threat actors are definitely getting very creative in the way they think about how to use one attack to reach many others, which is quite fun to watch, but also quite scary.

[00:31:30] Andy Ash: I mean, the interesting thing, well, there was a lot to unpack with SolarWinds, and we won't do it today. But one of the things is, it didn't matter whether you were on the target list. If you were one of those 10,000 companies, you still had to work out whether you had SolarWinds in your estate.

Were you affected? You then had to patch, or roll back or roll forward, even if you were not one of the sub-hundred that were actually being targeted. So it just caused absolute chaos across tens of thousands of networks.

[00:31:57] Haydn Brooks: This is also one of the problems we look to solve. So when you're running kind of a supply chain program, people think it's all about looking at the maturity of your suppliers, which to a certain degree it is.

But if you visualize that supply chain underneath you as a map of nodes and connections between them, what's more important in a case like that is spotting where the problem is, but then who else is impacted by it. We tend to call that the blast radius. So if you pick any company across the world, it's who would be impacted by an incident there.

Um, and how critical is kind of that connection between the company that's been hit by a breach and then the other companies around them. And that's something that no programs that I know of today can accurately solve.
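The blast radius idea maps naturally onto a graph traversal. Here is a hedged sketch, with invented edges and not Risk Ledger's actual data model, treating the supply chain as a directed graph and asking who sits downstream of a breached node:

```python
# Sketch: model supplier -> customer relationships as a directed graph and
# compute everyone reachable downstream of a breached supplier.
import networkx as nx

g = nx.DiGraph()
g.add_edges_from([
    ("SolarWinds", "Org A"), ("SolarWinds", "Org B"),
    ("Org B", "Org C"),       # Org C relies on Org B, so it inherits the risk
    ("CloudHost", "Org A"),
])

breached = "SolarWinds"
blast_radius = nx.descendants(g, breached)  # all downstream organizations
print(f"incident at {breached} potentially impacts: {sorted(blast_radius)}")
```

Criticality weights on the edges would then let you rank that list rather than just enumerate it.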

[00:32:35] Andy Ash: So with that in mind, I guess one of the questions I had was: how can CISOs manage third party security risk as the smaller player in the relationship? I mean, it's something that affects me on a daily basis. We have public cloud hosting, and as such, we're not going to be the major player in that relationship. That's just a fact. So what are the strategies? What can CISOs do? Over to you.

[00:32:58] Haydn Brooks: Yeah, it's a really interesting question. So I'm going to talk for a minute around the context, then dive into hopefully actionable points. If we go back even 10 years, to when I started looking at supply chain, really the only companies back then that ran supply chain reviews with any sort of maturity and discipline were the big banks, and they had the weight and the leverage to be able to do this and to pay teams to go out and start reviewing these suppliers.

Now, as security has matured, pretty much everybody has to run these supply chain reviews. And that's where the problem has cropped up, for two reasons. One, smaller companies who are paying smaller amounts of money to their suppliers don't quite have that leverage to be able to demand reviews be completed. But equally, it's led to a burden on the actual suppliers themselves. If you were a supplier that, let's say, worked with a few banks and then a hundred other clients, back in the day it was only really the banks asking you to fill in these kinds of questionnaires and answers about security, whereas now it's everyone.

And suddenly these security teams are being overwhelmed with questions. If you take a cloud service provider, they work with tens of thousands of clients, so it's just impossible for them to respond in any meaningful way to that breadth of client base. And that's one of the key things when we look at supply chain: just the scale and the amount of work it takes to actually run these processes today.

And it's part of what we do, it's all about reducing that burden of work to kind of a manageable amount, both for the clients, but also for the suppliers. Now, in terms of smaller companies trying to review larger ones, that's a problem that's common. What we tend to find is, uh, I've worked with governments that have tried to get cloud service providers to complete reviews, and it's been a journey to try and get them to do that.

And so, if you're a small company spending, let's say, a grand a month with somebody like AWS, it's going to be nigh on impossible for them to do anything bespoke for you. And there you kind of have to fall back on typically a trust center or some sort of like a list of certifications or artifacts that you can kind of draw from and almost use that as a proxy for the review.

The second thing we advise clients on there is that if a company refuses to do a review, that in itself is an outcome, and it's an outcome that you can list in your risk register and use. So when you're, let's say, talking to a regulator or talking to the board, that is a viable outcome, and it's not as if you haven't done anything.

The third thing, and this is something that we're really trying to promote: if we go back 10 years, the supply chain review type process was always seen as an audit process, and it typically sat with risk auditors. And the issue there was that if you go to any company and audit them, and you'll know this being a CISO, you're already on the back foot.

You want to pass an audit, you want as few findings as possible, and so you're going to try and hide everything you can, assuming there is something to hide, to pass that audit. That's just the nature of what an audit is. And if you think about that in the supply chain, these companies are trying to win contracts, so they're also on the back foot. So there's kind of two parts to this.

Firstly, we try and change that conversation away from an audit into a conversation. If the supplier knows that their contract isn't at risk, and that actually, by having an honest conversation about what they're doing, the client might be able to help them: they might be able to pay a bit of money to get more controls implemented.

They may be able to give them advice. They may be able to implement some internal controls themselves to minimize the risk on their own side without the supplier doing anything. That conversation becomes a lot healthier and a lot easier to have. And if it's still a challenge trying to do that with a larger organization, we tend to promote this idea of defending as one: banding together and collaborating with your peers.

And with other friends in the industry who may also be wanting to review that large supplier. We've got multiple cases where we've had a large technology company, for example, where one of our clients has tried to review them and there's been pushback. But actually, if we band together 10 or 12 of our clients who all use that supplier, suddenly not only do they have more weight and more leverage to enable that review to happen, but it makes more sense as well for the large supplier.

Because they're getting 12 reviews off their plate with one conversation. And it's all about what we can do to kind of collaborate and change the conversation away from an audit to a much friendlier conversation, but also how we can have this conversation more out in the open to be more transparent, uh, to reduce the burden of work, but also to make it more valuable for the clients themselves.

[00:36:50] Andy Ash: It's good advice, even when you can't audit a supplier, there's a lot of work for both sides.

[00:36:55] Haydn Brooks: Yeah, the other thing on the audits is that, at the minute, they're very risk based, and I'm running a risk company. The first problem in supply chain is capacity, but the second problem is that typically CISOs don't actually see a return on the amount of time they spend on these risk assessments.

So, I mean, you could drop me on an on-site audit, hypothetically, with AWS for four days; there is nothing I'm going to find in four days with AWS that will be actionable or meaningful to a client. So the idea there is that, I believe, we're focusing a little bit too much on the risk side of the supply chain and not enough on the response, detection, and recovery side.

So part of our platform is all about opening up these new capabilities when something like SolarWinds does happen. Where do they sit within the supply chain? Who's impacted? And what can we do very quickly to minimize that impact across the network? And then that feeds into things like concentration risks and systemic risks and all these other domains that I could sit here and chat for hours about, but I won't dive into it.

[00:37:45] Cyril Noel-Tagoe: Talking about the concentration risk and that kind of space, one of the other supply chain incidents that is commonly talked about is the Log4j one, which, I mean, is slightly different because it's open source, and we can talk about open source and all of that good stuff. But the real thing here was just that so many of your third party vendors would have Log4j somewhere in their stack and maybe not know about it, or you wouldn't know about it. And then how do you, as a CISO, get that confidence that, okay, you've dealt with it in your estate, but has it been dealt with by all your suppliers?

[00:38:19] Haydn Brooks: Yeah, very difficult question to answer. So the way I think about the corporate and the software supply chain together is that you can almost imagine the corporate supply chain as a web of companies at the top layer.

And then underneath that, you always have this web of software packages that kind of has a dotted line up into that top layer of companies. So you end up with multiple companies using different kinds of packages that may be vulnerable to something. And they're almost two different networks, almost like the OSI seven-layer model: one is a layer down from that corporate layer. In terms of something like Log4j, that's where you find an issue touches both. So there's two issues there. The issue that we help clients with, and have a lot of experience in, is how you would go out to your supply chain network and understand who is either investigating, thinks they're vulnerable to it, remediating, or is vulnerable to it and can't remediate.

And that gives you an idea of what companies may be impacted and who else in that chain might be hit. But then the second part of that question is more on the technical side. It's all about understanding actually what software am I building? What software do I use? What are the packages involved in building out that software?

Are these packages vulnerable to something, or do they have a vulnerability like Log4j in them? And then, essentially, how can I detect that, and what can I do about it? And there is a different discipline and style of toolset that's all about tracking bits of software, being able to spot vulnerabilities within them, and then being able to react and remediate as well.

So I would almost classify them as two slightly different disciplines, just with huge overlap between the two.

[00:39:47] Andy Ash: So moving on from that: SBOMs. They're increasingly becoming a regulatory requirement, in the US certainly, and in the EU in the not too distant future. Why do you think they're important, and how do you actually take advantage of them?

[00:40:00] Haydn Brooks: Yeah. So an SBOM, a software bill of materials, is essentially, the way I think about it, almost like an ingredients list for a recipe. So if you imagine your bit of software as a recipe and you're building a cake, the SBOM is kind of the list of ingredients you'll throw into the pot to build that.

And the way the developers work, they'll be using that list of ingredients, then tying it all together with some bespoke code. And the idea behind an SBOM is making sure that people who are building software have a really good grasp over what is going into that software. And that's kind of the first step, I think, in a longer chain of steps that will increase maturity on the software supply chain side.

So the idea behind the regulation, and it's big in the US and, as I say, now spreading out into other geographies, is that if we can make sure every company really understands the ingredients they're throwing into the mix, then we can use that bit of data from each one of those companies to start to map where those packages are going.

So that if there is a problem with one of them, we know who is impacted very quickly. And then that leads to a much faster response and a much quicker tidy up before these vulnerabilities can be exploited. I wouldn't even class it as purely a security thing, either. It's almost like good development practice, in the same way a lot of security is to do with IT hygiene.

It's actually what the IT team should be doing anyway. This is a similar thing, where developers should be keeping track of those packages, should be making sure that their licenses are all compliant, and different bits and pieces like that. And this is almost just using security to try and enforce that, because now we have this extra risk that one of those packages is vulnerable.

And actually there are real world impacts that will happen across that network of companies relying on the software.
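A toy version of the "ingredients list" check, simplified and only loosely CycloneDX-flavoured; real tooling matches version ranges against CVE feeds rather than a hard-coded set:

```python
# Illustrative SBOM check: flag components that match a known-vulnerable
# package/version pair. Data and structure are simplified for the sketch.
import json

sbom = json.loads("""{
  "components": [
    {"name": "log4j-core", "version": "2.14.1"},
    {"name": "spring-web", "version": "5.3.20"}
  ]
}""")

# e.g. a version affected around CVE-2021-44228 (Log4Shell)
vulnerable = {("log4j-core", "2.14.1")}

hits = [c for c in sbom["components"]
        if (c["name"], c["version"]) in vulnerable]
for c in hits:
    print(f"impacted component: {c['name']} {c['version']}")
```

Because the format is machine-friendly, the same check can run across every supplier's published SBOM, which is exactly the mapping problem Haydn goes on to describe.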

[00:41:30] Andy Ash: It massively leans into the DevSecOps piece.

[00:41:34] Haydn Brooks: Oh, massively, yeah.

[00:41:35] Andy Ash: Just understanding exactly what it is that you're building. And there are so many good practice reasons to actually need this.

Now it isn't just the policy around compliance with these regulations; it's also, how far out of date are you?

You know, we've all worked at companies that run old third party software as part of what they're describing as a new build, and there are a lot of problems that come with that, even non-security related ones.

So it's just good practice in DevSecOps, something we practice at Netacea.

[00:42:04] Haydn Brooks: Good. Same with Risk Ledger.

[00:42:06] Andy Ash: I thought so. Yeah.

[00:42:07] Haydn Brooks: The interesting thing with SBOMs is always the technical solutions that I'm hearing people have built, and are thinking about building, on top of them. Because DevSecOps is becoming more and more automated, these lists, these SBOMs, can essentially be written in machine-friendly formats.

And then it almost becomes a big data problem: being able to map that data against each other, and companies being able to share it. And I know quite a few companies that are actually sharing their SBOM publicly as well, which just increases the transparency, which inherently leads to, I think, better security.

It's a similar argument to the one you could make for open source code: if it's transparent, then we can spot the problems, hopefully, before the bad guys do. So I think it's only ever a good thing that people are now having to do it through regulation. And I'm not a huge fan of regulation in general, but this one seems like a no brainer.

[00:42:54] Andy Ash: So, other categories of third party risk? Cyril, thoughts?

[00:42:59] Cyril Noel-Tagoe: I mean, Haydn did a really good bit at the start about, you know, how he's separated it all. But I think one that we haven't talked about yet is, you know, when you're storing data elsewhere. So we've talked about, you know, software and whether that opens up software vulnerabilities or, you know, malware.

But I think for most organizations, you've got these third parties that you hand data to. And it can vary; I'm going to use two examples from Uber, just because they're always in the news. There was one where they had a data breach of about 70,000 employees' names and details through a third party asset management company.

So that is in the tech space. There's another one, a few months later, where private information of their drivers, like their social security numbers, was exposed. That was through a law firm. So, you know, it's not just your tech providers. Then there was one very recently, actually, with the Met Police.

It's any type of third party organization that you're sharing data with. It's not just your cloud services, it's not just the software. Anyone can have these breaches, so we should be thinking holistically about the situation.

[00:44:09] Haydn Brooks: Yeah, it's an interesting one as well, because I don't have any hard and fast data to back this up, but my inclination is that a lot of the breaches I see in the supply chain to do with integrity, so people targeting one company to almost get a backdoor into many others, tend to be very targeted.

The data ones on the whole tend to be very opportunistic and not actually very targeted. So it tends to be less mature threat actors finding a vulnerable company, breaking into that company, stealing a ton of data, and then realizing they've hit the gold mine, because actually they've got a hundred companies' data in front of them instead of just the one they breached.

So that tends to be the more common style of attack we see on that side. There's a wider point there just around data hygiene in the company. Like, if you think about logging in at work or anything like that, just the amount of different tools nowadays that touch potentially sensitive data, with SaaS products popping up left, right, and center.

I think even amongst startups, there was an interesting bit of research saying that most startups of 30 people use over a hundred SaaS tools, which is ludicrous. And there is no easy way to track where your data is going, who has access to it, and what they're doing to protect it, which is why supply chain is so complex.

Because not only do you have this very cumbersome process where you're trying to run it over your suppliers, there's also a foundational step there, which is: we don't typically speak to any companies who can really even provide a full supplier list. And even if they do have a very mature procurement team, who have a decent supplier list that covers most of the major contracts, there will be SaaS tools that somebody's logged into at some point within the company, or that a team uses, that they haven't told procurement about. So often that foundational data isn't really there for somebody to run a 100 percent coverage supply chain program. Those data attacks, I think, in my head, are less impactful.

Because they are less targeted. But that changes depending on the type of supplier you're looking at. If you're looking at somebody, let's say, who specializes in data storage, like Iron Mountain or AWS or someone like that, then that would obviously be very impactful. But if you're kind of going around hacking random SaaS apps, I think the impact of the data leaks there probably isn't as big as, let's say, a breach against integrity and being able to hack into many other companies off the back of one breach.

The other thing, though, is it does actually change the style of review. Oftentimes when you're looking at a company that stores data, you want to focus in on confidentiality controls: things like encryption, TLS, these kinds of things that protect data. Whereas if you're looking at a company that needs to be available, so an attack against a critical supplier, there you care more about availability and resilience: can that supplier get back up again if they are hit? Having a good understanding of what the supplier does for you, and what data they have, if any, really does help tailor the review you're doing to make it a bit more valuable to you.
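As a minimal sketch of that tailoring idea, the snippet below maps a supplier's role to the controls a review might focus on. The role names and control lists are illustrative assumptions, not a formal framework.

```python
# Sketch: tailor a third-party review checklist to what the supplier does
# for you. Role names and control lists are illustrative assumptions.
REVIEW_FOCUS = {
    "stores_our_data":  ["encryption at rest", "TLS in transit", "access control"],
    "critical_service": ["availability SLAs", "backup and restore", "incident response"],
}

def review_checklist(roles: list[str]) -> list[str]:
    checks: list[str] = []
    for role in roles:
        checks += REVIEW_FOCUS.get(role, [])
    return checks or ["basic due diligence only"]

# A data-storage supplier gets confidentiality-focused checks; a critical
# supplier gets availability and resilience checks.
print(review_checklist(["stores_our_data"]))
print(review_checklist(["critical_service"]))
```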

[00:46:48] Andy Ash: Yeah, the kind of SaaS sprawl is a really interesting challenge. I've heard people talk about having a ratio of SaaS apps to users, and asking what a healthy ratio is. Anything under one, so at most a one-to-one relationship between users and SaaS applications; by that measure, a 30-person startup with a hundred tools is running at over three to one. So there are parts of the security market springing up, and becoming successful, trying to track that.

[00:47:13] Haydn Brooks: The other interesting thing there is that SaaS apps are very, very easy to adopt, which is a good thing, but they're very difficult to restrict. One of the tips we often give CISOs for building out a thorough supplier list, if procurement is struggling to do that, is this:

Go to finance and find out who you're paying. That can be one of the best ways to get a very good list of suppliers very quickly, including a lot of SaaS apps. Now, the issue with SaaS apps is that they often follow a product-led growth model, which means you can have a large number of users uploading a fair amount of data to a SaaS app before they start charging.

And so you end up with this hidden group of suppliers that tend not to be that critical, but could be touching some quite sensitive data, and you have no easy way of finding out who they are. There are tools popping up now that try to look at internet traffic coming in and out of an organization to map them out.

And it's the same problem with finance as well: people are losing track of how many of these SaaS tools they're paying for. You end up paying for SaaS tools that often aren't really used at all across the business.
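As a rough sketch of that finance tip, the snippet below mines an accounts-payable export for vendors and diffs them against procurement's known supplier list. The CSV layout, file name and supplier names are assumptions for illustration.

```python
import csv
from collections import defaultdict

# Assumed procurement list; anything being paid but not listed here is a
# candidate "shadow" SaaS supplier worth a closer look.
KNOWN_SUPPLIERS = {"AWS", "Microsoft", "Iron Mountain"}

spend: dict[str, float] = defaultdict(float)
with open("accounts_payable.csv", newline="") as f:
    for row in csv.DictReader(f):       # assumed columns: vendor, amount
        spend[row["vendor"]] += float(row["amount"])

shadow = {v: amt for v, amt in spend.items() if v not in KNOWN_SUPPLIERS}
for vendor, amount in sorted(shadow.items(), key=lambda kv: -kv[1]):
    print(f"{vendor}: {amount:,.2f} paid, but not on the supplier list")
```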

[00:48:13] Andy Ash: Overlapping usage, yes, it's an interesting challenge. To move this on a little bit, to the hottest topic in security at the moment: AI.

Do you think the BOM concept is likely to be applied to AI, an "AI BOM" as I think we've just coined it, becoming a requirement in the near future? Essentially, what you just described is a huge amount of data and a huge number of connections between that data: not just the software packages, but the companies that are using them and the vendors, and then you put resellers and MSSPs in there.

Do you see AI becoming a factor in trying to sort through this?

[00:48:48] Haydn Brooks: Yes, but I would say from what I've seen, it's still not very clear how that's going to end up. For example, at the minute, a lot of the AI conversations I'm seeing focus less on the security of the AI, whether it's a built-at-home type product or whether it's using a supplier like OpenAI or Microsoft. The conversation tends to focus on the ethics of it. So almost: what's the ethics of using this AI? What's the AI doing? Are we trusting the outputs? Are there any hallucinations? What data are they actually using to train these models? It tends to focus more on that rather than traditional cyber security type concepts.

I would say where the AI BOM may come into it, and this is something I'm really interested in, is that it always overlaps again with those two different types of supply chain: the corporate supply chain and the software supply chain. So say your company is engaging with other companies and using their AI models.

If you're querying the API of OpenAI, it's more of a traditional supply chain type problem, where you would need to do a thorough review of that company: how they think about their own security, how they built the AI, and what the AI does. That's a similar problem to what we have today, maybe with a different set of controls that you're looking at.

If instead you're building your own AI type tooling, that's where the software supply chain comes into it, and where you'd want to track things like TensorFlow and all the other kinds of packages that help support the build of that product. That's where I see an AI BOM being required.
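As a sketch of what an AI BOM entry might record, covering both the software dependencies and the third-party AI services Haydn describes, here's an illustrative structure. The field names and values are assumptions, not a published standard.

```python
import json

# Illustrative AI BOM record: fields and values are assumptions, sketched
# to show both supply chains an AI BOM would need to cover.
ai_bom = {
    "model": {
        "name": "internal-classifier",            # hypothetical in-house model
        "training_datasets": ["tickets-2023"],    # what the model was trained on
    },
    "software_dependencies": [                    # the software supply chain
        {"package": "tensorflow", "version": "2.15.0"},
        {"package": "numpy", "version": "1.26.0"},
    ],
    "third_party_ai_services": [                  # the corporate supply chain
        {"supplier": "OpenAI", "interface": "API", "data_shared": "prompts"},
    ],
}

print(json.dumps(ai_bom, indent=2))
```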

[00:50:12] Andy Ash: Okay. So every month we turn to our resident threat researcher to describe a different attack type in our Attack of the Month feature. This month's attack is fake account creation. Cyril, can you tell us a bit about what fake account creation is and how it works?

[00:50:26] Cyril Noel-Tagoe: Yeah, so fake account creation is effectively the process of registering accounts using either fake or stolen information. And it's one of the more intriguing types of attacks, because it's really the precursor for a lot of other attacks. So why would attackers need fake accounts? Well, these days your identity, especially your digital identity, is really the core of any interaction you have with a digital service. If you think of things like loyalty points, or different types of limitations based on your account, then if an attacker can take hold of multiple accounts, they can either bypass those limitations or accumulate more loyalty points. So creating fake accounts is really a powerful technique for attackers. And while an attacker could sit there at the computer and type in lots of details for each account, it's much easier for them to do it in bulk, and that's what fake account creation bots allow them to do. They're also known as account registration bots in that ecosystem.

Some examples of the follow-on attacks that fake account creation bots are used for: things like new joiner promotion abuse. If I get X percent off a purchase as a new customer and I make multiple accounts, I can get X percent off multiple times. Even things like typical scalping: if there's a restriction of one item per customer and I've got 20 accounts, then I can buy 20 items. Gift card abuse: if I try lots of gift cards on one account, that might get noticed; if it's spread across multiple accounts, that's a bit harder to see. And then even things like fake reviews, or posting after being banned. Having multiple accounts is just so useful for attackers.

[00:52:13] Andy Ash: What are the stresses within a business that make this kind of thing go unnoticed? Or, if it is noticed, tolerated? I'm thinking specifically around sales goals for increased signups.

[00:52:25] Cyril Noel-Tagoe: I mean, it's really interesting. If we take a look back at when Elon Musk was trying to buy Twitter: a lot of companies are measured on their number of users, especially in the social media and ad space. The more users you have, the higher your company is valued.

So there's almost an incentive to promote signups and not look too closely at account registration. Add to that the fact that with most bot attacks, things like scalping or carding, it's all about speed: they're using a bot to do something quicker than a human can. But for fake account creation, speed isn't necessarily the goal. You can do what's known as a low and slow attack, where over a long time period you're creating multiple accounts, but doing it slowly, so it isn't going to be picked up by some sort of rate limiting algorithm.

So on one side you're almost happy to overlook it, because your metrics are going up, you're getting more accounts. And it's also not as obvious and in-your-face as other bot attacks, which lets it go unnoticed. I think where it starts getting noticed is on the marketing side, when your analytics start getting skewed, or on the fraud side, where you start to see an increase in fraud from the fresh accounts that are operating fraudulently.
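As a toy illustration of why low and slow registration slips past per-minute rate limits, the sketch below tracks signups per source fingerprint over both a short and a long window. The thresholds and the fingerprint signal are assumptions.

```python
from collections import defaultdict

# Toy model: short-window rate limiting vs a long-window count per source.
# Thresholds and the fingerprint signal are illustrative assumptions.
SHORT_WINDOW, SHORT_LIMIT = 60, 5           # classic limit: 5 signups/minute
LONG_WINDOW, LONG_LIMIT = 7 * 86400, 30     # long view: 30 signups/week

signups: dict[str, list[float]] = defaultdict(list)

def record_signup(fingerprint: str, ts: float) -> str:
    events = signups[fingerprint]
    events.append(ts)
    burst = sum(1 for t in events if ts - t <= SHORT_WINDOW)
    weekly = sum(1 for t in events if ts - t <= LONG_WINDOW)
    if burst > SHORT_LIMIT:
        return "blocked: burst"          # fast bots trip this immediately
    if weekly > LONG_LIMIT:
        return "flagged: low and slow"   # one account an hour still adds up
    return "allowed"

# One signup per hour never trips the burst limit, but crosses the
# weekly threshold after roughly 30 hours.
for hour in range(40):
    status = record_signup("device-abc123", hour * 3600.0)
print(status)  # -> flagged: low and slow
```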

[00:53:45] Andy Ash: Yeah, and it's interesting, because a lot of customers we work with have a know your customer process, especially where there's a big prize to be won through managing to create fake accounts and commit fraud. But whose responsibility do you think it is to monitor new accounts? Who should be doing that?

[00:54:01] Cyril Noel-Tagoe: I think that's a really good question.

And I think it's going to differ from organization to organization. I don't think it's purely a security problem, that this is security's responsibility, because security are there to help you achieve your business processes; they're not there to confirm that every account being created is the right account. That makes this a more difficult problem, because it relies on communication within the business: that people can raise something to security and say, this looks off, can you look at it? And I think that's where things fall down. But depending on what type of business it is, this might be something that customer services look at, especially in retail spaces or where you've got a lot of new joiner bonuses. I think it should sit there, but it really depends. I think the difference is in places with know your customer requirements, like financial services.

It's quite clear there that you've got compliance teams looking at these types of regulations, so it's quite clear where it fits. But in other organizations, it can be a bit trickier.

[00:55:01] Andy Ash: Oh, agreed. It scales based on the damage that can be done, right? So, and I think I know some of the answers to this, but what methods could be used to prevent fake account creation attacks?

[00:55:13] Cyril Noel-Tagoe: Oh, I want to hear some of the answers that you think you know.

[00:55:17] Andy Ash: So there's bot management. That's a given, as we're a bot management company. But the know your customer stuff is designed to stop fake accounts. Why isn't that working, though?

[00:55:30] Cyril Noel-Tagoe: I think it definitely increases the barrier for threat actors, so in terms of is it working, it is to some extent. The more motivated attacker is always going to try and find a way through, so defenses shouldn't be seen as binary, it stops it or it doesn't. The question is: how do we increase the barrier to the point that it's not profitable for an attacker? In certain areas, know your customer does do that. The problem is that there's a lot of information about people available online.

So we get this almost identity fraud through fake account creation. Know your customer is all about verifying that a person exists and can prove they are that person. Well, if you can scrape enough information about people that's available online, and, since we've been talking about AI, use generative AI to fill in the gaps, you can bypass a know your customer process that way. But again, that's going to be the more advanced threat actor, so it's definitely important. Some of the other things people have tried are things like phone verification, making sure the person can receive phone calls.

Again, as with anything, all you're doing is raising the barrier, and there have been lots of these rent-an-SMS-number kind of services coming out to provide threat actors with a way of bypassing that. So I don't think there's any one solution; it depends on the scale of your problem. You keep adding barriers and increasing the hurdles, making it harder for threat actors, to the point where it's no longer profitable for them.
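As a sketch of that layering idea, the snippet below combines several weak signals into a signup risk score and steps up verification instead of hard-blocking. The helper checks, weights and threshold are all hypothetical.

```python
# Layered signup defence as a risk score rather than one binary check.
# The helper checks, weights and threshold are hypothetical assumptions.
def is_disposable_email(email: str) -> bool:
    # stand-in for a real disposable-email domain lookup
    return email.split("@")[-1] in {"mailinator.com", "tempmail.example"}

def is_rented_number(phone: str) -> bool:
    # stand-in for a VoIP / rent-an-SMS-number range lookup
    return phone.startswith("+1500")

def signup_risk(email: str, phone: str, ip_reputation: float) -> float:
    score = 0.0
    if is_disposable_email(email):
        score += 0.4
    if is_rented_number(phone):
        score += 0.3
    score += (1.0 - ip_reputation) * 0.3    # ip_reputation: 0 = bad, 1 = clean
    return score                            # each barrier raises attacker cost

risk = signup_risk("bot@mailinator.com", "+15005550006", ip_reputation=0.2)
print("step-up verification" if risk > 0.5 else "allow")
```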

[00:57:03] Andy Ash: Yeah. I mean, essentially we get to a point of proof of life, that's where this is going. You have to prove that you are human, which is very difficult to do when you're never actually meeting the entity that's trying to verify you.

[00:57:14] Cyril Noel-Tagoe: It's that, but we're trying to do it at the same time as there's an arms race in generative AI and deepfakes.

So how do you prove you're human? Is it blinking? Well, deepfakes can blink now. Okay, is it biometrics? Then there's the whole question of how you're securing those biometrics.

[00:57:31] Andy Ash: But it's that fundamental identity piece, it's the identity fraud. If you can create something that looks like a human being, that has a social security number, or a national insurance number in our case, a passport number, an address that can be looked up, the arms race is going to get phenomenally complicated. Neither of us said blockchain, thank God, but it's fascinating. I mean, the other option is to try and stop it at source, to prevent the actual traffic from getting to the point of signing up.

There's an actor, a conduit and an endpoint, right? So break the chain and you will get fewer fake accounts, which is something we do: we stop those requests getting through in the first place. And that's obviously the best way. Cool, Cyril, thank you very much for that.

So that's it for this month's edition of the Cybersecurity Sessions. I'd like to thank Cyril Noel-Tagoe, Thomas Ballin from Cytix and Haydn Brooks from Risk Ledger for their participation today in what I hope was a really interesting session. If you've enjoyed this episode, please subscribe and leave a review on Spotify, Apple Podcasts, or wherever you're listening today.

You can also follow our page on X @CyberSecPod, and email any questions or comments to podcast@netacea.com. We'd love to hear from you.
