RSA Conference 2023: Review & Insights from Netacea’s C-Suite

Bonus Episode
23rd May 2023

In part two of our RSA Conference 2023 series, Netacea CPO Andy Still and CISO Andy Ash return from San Francisco to share their insights from the biggest cybersecurity event of the year.

As part of Netacea’s C-suite, both Andys are always looking ahead at how new threats are developing – “what’s new, scary and worth worrying about?” The rapid advancement of AI, both as an offensive threat to cybersecurity and as part of defensive technologies, was one such topic at RSA 2023, with Bryan Palma’s opening keynote speech grounding this complex issue and setting the tone for the rest of the conference.

Did their discoveries about AI & ML at RSA validate their own work at Netacea? Did they achieve their goals to explore the future of cybersecurity and confirm the direction of their own product? And did they pick up any useful swag?

Tune into this special bonus episode of the Cybersecurity Sessions podcast to find out!

Speakers

Dani Middleton-Wren
Head of Media, Netacea

Andy Still
CPO & co-founder, Netacea

Andrew Ash
CISO, Netacea

Episode Transcript

[00:00:00] Dani Middleton-Wren: Hello and welcome to the Cybersecurity Sessions bonus episode. We are going to be talking today about RSA 2023, the summary. So a couple of weeks ago we put out an episode where we introduced the fact we're gonna be attending RSA, and the sessions that our team were most excited to attend. So I'm your host today, Dani Middleton-Wren. I am head of media at Netacea, and I am joined by the fabulous panel, Andy Still, and Andy Ash. Andy Still, do you want to introduce yourself?

[00:00:34] Andy Still: Yep. Thanks Dani. I'm Andy Still. I'm CPO, Chief Product Officer, and co-founder of Netacea, attending RSA this year to look at the future of cybersecurity and make sure that's in line with where we're going as a product.

[00:00:47] Dani Middleton-Wren: Perfect, and hopefully the answer will be yes, we are.

[00:00:51] Andy Still: Hopefully.

[00:00:52] Dani Middleton-Wren: Hopefully, we'll have to wait and see. Andy Ash.

[00:00:56] Andy Ash: Hi, I'm Andy Ash. I'm CISO at Netacea. It was my first time at RSA this year and I was wearing two hats. So one was helping Andy assess different vendors in the market space that we work in, and also looking at the security tooling that we use at Netacea to keep ourselves and our customers safe.

[00:01:16] Dani Middleton-Wren: Perfect. You are actually wearing three hats because you are also very prominent on our booth this year.

[00:01:22] Andy Ash: I did meet quite a few people, yeah. I had a great time chatting with various people that dropped by to say hello and ask what we do. So yeah, that was, I won't say new for me, but it was different.

[00:01:35] Dani Middleton-Wren: Because we love to mingle. It's what we're there for. Great, thanks, Andy. Okay, so let's have a run through of what we're going to be discussing in today's episode. We're gonna be talking about: what RSA is and why we attended; the sessions that we wanted to see, and whether we actually saw them; and the conference key themes, which included defensive AI, offensive AI, scary new things to worry about, and identity crisis. Okay, so let's talk a little bit about RSA and why we were there. RSA 23 was held at the San Francisco Moscone Center from April 24th to the 27th. RSA Conference is one of the biggest security conferences in the world, with events that draw over 40,000 attendees throughout the year in Europe, Asia, the Middle East, and the UK.

So we attended to gain valuable insights into cybersecurity trends and how we expect the landscape to shift over the next 12 months. We're gonna be hearing from the Andys, who attended a myriad of sessions with a host of experts from the cyber community. So before we kick off and talk about the conference's key themes and the sessions we planned to attend, Andy Ash, do you want to tell us about your experience as a first time attendee at RSA?

[00:02:48] Andy Ash: Yeah, absolutely. So from the previous podcast, I was obviously very excited to be going. It was my first RSA and, to be fair, I think it exceeded my expectations in terms of content. We all went to some really, really interesting seminars, and the exhibition hall was really good.

Some fantastic vendor stands. And the overall atmosphere was the thing that I guess I wasn't really expecting. I've been to lots of conferences in the UK before. To say that the US do this in a bigger and more exciting way, I think would be fair. Just the organization, the way that everything was set out, it felt a little bit like a security festival, like a weekend festival.

You know, the vendors outside hiring acts to wow crowds as you're waiting to go in places, and free things everywhere. Yeah, it was really great. I really enjoyed it. San Francisco was wonderful, and I think it helped that the weather was fantastic as well. But I also met a lot of really, really interesting people, not just on our stand, but walking between the centers.

Because there's four buildings where the seminars and the exhibitions are. Yeah, just generally chatting with people who do a similar thing to us. Very validatory around what I do for a living and what we do as a company. So really great.

[00:04:11] Dani Middleton-Wren: Yeah, it was great to be around so many like-minded professionals, and that's what's quite special about it. Everybody sort of talks the same language, and it's quite an interesting perspective. What you've not mentioned there, Andy, is your swag haul.

[00:04:26] Andy Ash: Me and my children will not need any new socks until the 2030s, I think that's fair to say. I have more socks than I've ever had before in my life. If anybody gets me them for Christmas, I'll be annoyed. But yeah. Great swag haul. Absolutely.

[00:04:42] Andy Still: Yeah, my swag haul came into its own in San Francisco because we had a giant power cut.

[00:04:46] Andy Ash: That's right.

[00:04:47] Andy Still: A large area of San Francisco lost power, including our hotel. Luckily, I had picked up two flashlights in the conference, so I was able to actually move around my room, pack my bags by torchlight.

So yeah, it was very, very useful and I think there was a lot of criticism within our booth about the fact I picked those up. "They're gonna be useless." But no, they were incredibly useful.

[00:05:10] Dani Middleton-Wren: They were incredibly and immediately useful.

[00:05:13] Andy Still: And I think to be fair, we actually have a call later on today with one of the people who gave out those flashlights. So, everyone's a winner in that.

[00:05:23] Andy Ash: Is it just to say thank you to them, Andy?

[00:05:26] Andy Still: Yeah, just, you know, after, after they helped us out, we really do need to look at their product. Yeah.

[00:05:31] Dani Middleton-Wren: Yeah, absolutely. Okay, so let's just talk a little bit about the sessions that we were planning to see. So the first one was: "Hardening AI and machine learning systems: the next frontier of cybersecurity."

[00:05:42] Andy Still: Of the overall big themes of RSA, this was one of the sessions that was very realistic about the state of defensive AI.

So I think we'll come on later to talk a lot more about offensive AI and the big problems that are out there, but there was a definite gap between the vision of offensive AI and the big problems that it will bring in the future, and the reality right now of the position of defensive AI.

And when we were going around the exhibitor hall, there were not a massive number of companies who were leading on their defensive AI capabilities, although a lot of them mentioned them. And this session was very much about "how do I validate the reality of what people are telling me versus what I actually need", "what will do good for me as a company", and "how much of what people are saying about defensive AI today is really snake oil, and how can we check that?"

[00:06:41] Andy Ash: Yeah. So this was all about how to assess whether a security vendor's AI is going to be suitable to protect you as a business.

They went through the kind of questions that you need to ask: what and why. So how is the vendor using ML? What training data have they used? Is the AI resilient, is the resiliency built in, you know, robustness, et cetera?

And then, real ROI versus claims. So can you validate that the solution you're looking at actually has efficacy against what you want it to protect you from? And then, is it really solving a problem? Which is a really interesting question, the reason being that there are a lot of claims about people using ML and AI within their vendor stack.

And is it doing what you want it to do? Now, in terms of giving us a model for when we look at third party AI to bring into Netacea to protect ourselves, it's a really good template. But to kind of relate it to what we do as a product:

These are questions that we answer all the time with our customers. So actually seeing this laid out as a plan of how you should think about adopting ML in your environments is useful, because we do all of this. So, what do we do? Well, we provide ML models to protect against bots. Training data: we actually use our own data and the customer's data to train on. Was resilience built in? Yes, we talk about how we have redundancy right throughout the stack; that always comes up when we're talking to prospects and new customers. Real ROI versus claims, and is it really solving a problem? Well, how do we prove as a business that the solution that we put in our customers' estates works well?

We work with customers on a POC to prove the efficacy of the solution, and we provide the ROI on the actual results of that POC. And we continue to do that through the lifecycle of the contract. Is it really solving a problem? Well, we hope so, and it absolutely is in the case of our contracted customers, so yeah.
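
For readers who want to operationalize the questions above, here is a minimal sketch of a vendor AI due-diligence scorecard in Python. It is purely illustrative: the field names and `ExampleVendor` are hypothetical, not a Netacea tool or a checklist from the session itself.

```python
from dataclasses import dataclass, field

@dataclass
class VendorAIAssessment:
    """Scorecard for the vendor AI/ML due-diligence questions discussed above.

    Free-text fields record evidence; boolean fields record whether a claim
    was independently validated (e.g. via a POC) rather than taken from
    marketing material.
    """
    vendor: str
    how_ml_is_used: str = ""           # What and why: how does the vendor use ML?
    training_data: str = ""            # What data was the model trained on?
    resilience_built_in: bool = False  # Robustness/redundancy throughout the stack?
    roi_validated: bool = False        # Real ROI vs. claims, proven in a POC?
    solves_real_problem: bool = False  # Does it address the threat you actually face?
    notes: list[str] = field(default_factory=list)

    def open_questions(self) -> list[str]:
        """Return the checklist items that still lack evidence."""
        gaps = []
        if not self.how_ml_is_used:
            gaps.append("How is ML actually used?")
        if not self.training_data:
            gaps.append("What training data was used?")
        if not self.resilience_built_in:
            gaps.append("Is resilience built in?")
        if not self.roi_validated:
            gaps.append("Has ROI been validated (e.g. via a POC)?")
        if not self.solves_real_problem:
            gaps.append("Is it really solving a problem?")
        return gaps

# Example: a vendor whose claims have not yet been tested in a POC.
candidate = VendorAIAssessment(
    vendor="ExampleVendor",
    how_ml_is_used="Anomaly detection on login traffic",
)
print(candidate.open_questions())
```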

[00:08:55] Andy Still: Yeah, I think this session was very reassuring to us because we are answering those questions.

This was very early on in the conference as well, and it was a very good scene setter for getting a sense of the reality of where we are today in the real world. We heard a lot of talk about big visions and big problems, but this grounded you a little bit.

So it was a very good starting point.

[00:09:20] Dani Middleton-Wren: Great. Okay, so let's quickly run through the rest of the sessions that you said you were going to attend, and then we'll start to weave in those key conference themes. Okay, so "The secret life of enterprise botnets": which of our conference key themes do we think that aligned to?

[00:09:37] Andy Still: Yeah, I would position this as not necessarily a "scary new thing to worry about", but it was definitely eye-opening about how botnets are evolving over time. The background of this one was basically that, if you were to look a number of years in the past, the majority of botnets were distributed globally and they were compromised machines, usually relatively easy to detect because they were coming from different countries. Whereas now the US is actually the biggest source of botnets, and it is primarily compromised IoT devices, so security systems, et cetera, that have been installed within relatively small companies, but without the security expertise there to make sure they're properly patched.

They're usually cheaper systems, so they've not got the same security built into them. They presented some very scary stats about the size of these botnets: if they were coordinated, they could take down the entire internet. Luckily, so far, they have seen nowhere near that level of coordination.

They've seen relatively small attacks.

[00:10:44] Andy Ash: Yeah, one of the takeaways was that it was an ISP-led change in the amount of bandwidth that you have outbound from your devices. Traditionally that was throttled to 10% of your download bandwidth.

But now most ISPs are offering one gig of bandwidth up, which means you need fewer devices. Basically that's caused a shift in the makeup of the botnets that create DDoS attacks. I looked at this as not something scary and new, but an evolution of an existing threat. And really it's about identity: "know your customer", well, this is "know your enemy". If you understand the weapons that your attackers are using, you've got a better chance of defending against them. So from a Netacea point of view, it was fascinating, because we do a lot of work to track the botnets that hit our customers, quantifying and qualifying what they are. We're doing a lot of work on residential proxy IPs at the moment. It was really insightful to see a shift in the type and source of these attacks, because knowing your enemy is a key part of defense, right?

[00:11:55] Dani Middleton-Wren: Sounds scary and interesting. Okay. "Anatomy of the attack: the rise and fall of MFA". This is one I'm really interested in.

[00:12:02] Andy Still: Yes, this one again was interesting, and potentially very scary, because the attacks that were being demonstrated essentially render a large number of MFA solutions potentially redundant. So they were talking about attacker-in-the-middle attacks on MFA, which have been available for a relatively long time.

But what has evolved over the last year or so is phishing-as-a-service kind of solutions. Essentially you can go on, sign up, and use these proxy solutions that will actually replicate the site that is being protected by MFA. They sit in the middle, and when a user interacts with them, they're interacting directly with the real site, but the proxy is capturing the credentials as they pass through. So they relay the MFA request to the user. The user fills it in, they relay that back to the website, and they get legitimately logged in. So as far as the user's concerned, they've completed a legitimate MFA process; as far as the actual website's concerned, a legitimate MFA process has been completed; but there's an attacker in the middle that has actually completed that, and they have access to the site behind the scenes. So it wasn't necessarily the technology that was new, it was the ease and availability of services for anyone to do that for a very small amount of money. It means that anyone who's relying on MFA as a silver bullet to solve their identity login problem needs to be more careful around that.

There are ways that you can deal with that, but companies can no longer just think, "I'll stick basic MFA on there and that will keep me safe", because it won't, as a result of these things. And these services are just gonna get better, obviously: more distributed, easier to use, as time goes on.

So, certainly something to be aware of. In our business, where we protect a lot of websites from credential stuffing attacks, et cetera, people trying to bypass identity, sometimes MFA was seen as an alternative to what we do, because it would drive away attackers. And what this just illustrated is that it won't; it's just part of the overall attack vectors that we'll see.
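
To make the relay mechanism described above concrete, here is a deliberately simplified, purely conceptual Python simulation. Every class and value in it is hypothetical; it models the flow of credentials and tokens, not any real service or tooling. The point it illustrates is that both the user and the site see a normal MFA exchange, while the proxy in the middle ends up holding the authenticated session.

```python
# Conceptual simulation of the attacker-in-the-middle (AitM) relay described
# above. All classes are hypothetical mocks; nothing here touches a real service.
import secrets
from typing import Optional

class RealSite:
    """Stands in for the genuine MFA-protected website."""
    def __init__(self) -> None:
        self.expected_otp = "123456"  # would be generated per login in reality

    def login(self, username: str, password: str) -> str:
        return "mfa_challenge"  # credentials accepted, now MFA is required

    def submit_otp(self, otp: str) -> Optional[str]:
        # On success the site issues a session token to whoever submitted the OTP.
        return secrets.token_hex(16) if otp == self.expected_otp else None

class PhishingProxy:
    """The attacker's look-alike site: it simply relays traffic both ways."""
    def __init__(self, upstream: RealSite) -> None:
        self.upstream = upstream
        self.stolen_session: Optional[str] = None

    def login(self, username: str, password: str) -> str:
        # Relay the victim's credentials to the real site, capturing them en route.
        print(f"[attacker] captured credentials for {username}")
        return self.upstream.login(username, password)

    def submit_otp(self, otp: str) -> str:
        # The victim's valid OTP is relayed onward, so the proxy (not the
        # victim) receives the session token that the real site issues.
        self.stolen_session = self.upstream.submit_otp(otp)
        return "login successful"  # the victim sees a perfectly normal login

proxy = PhishingProxy(RealSite())
proxy.login("victim@example.com", "hunter2")  # victim believes this is the real site
proxy.submit_otp("123456")                    # victim completes a legitimate MFA step
print(f"[attacker] now holds a valid session token: {proxy.stolen_session}")
```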

[00:14:16] Dani Middleton-Wren: That's really interesting. I think more and more businesses are going to need to worry about it and come up with alternatives, as you say. Okay, number four: "No more time: Closing the gaps with attackers."

[00:14:27] Andy Ash: So yes, I attended that one. I think it was echoing a lot of the themes that Bryan Palma raised in one of the keynotes, which was "SIEM There, Done That: Rising Up in the SecOps Revolution". And essentially, that keynote resonated throughout the entire conference.

Bryan Palma is the CEO of Trellix, who were one of the main sponsors of the conference. One of the quotes from that talk, which is actually available on YouTube and well worth watching, is that we've been barely staying in front of the bad guys for years, and the table just flipped again.

We are in an AI arms race and the attackers have taken the lead. Legacy services are still the largest spend for security teams, but 96% of CISOs say they need better solutions to be more cyber resilient, and SOCs are struggling to keep up. So essentially what Bryan was trying to get across was that we as a security community are managing, but it's a struggle, and the tables have just turned. In terms of offensive AI, adding offensive AI into the attackers' arsenal requires a different response from the security community, and in his case, particularly the SOC. Detection in the SOC is led by data, but the data volumes are so large that it's gonna become impossible through human interaction alone. Bryan Palma then goes on to talk about what tomorrow's SOC looks like.

It was quite inspirational really. It was definitely peak West Coast, and I'll explain why in a second. But the SOC of the future bites back: offensive tactics against AI attackers to eliminate existing and future threats. So this is basically takedowns, using the same tactics the attackers use against the corporations they're attacking to disrupt the attackers themselves.

And that's a really interesting viewpoint, and I think the example he gave was, you know, you can't win any game by just playing defense, which I agree with completely. The SOC of the future games the system: basically, use gamification to capture talent, promote cyber careers and bridge the skills gap.

We know there's a skills gap in cyber. And then something that was quite different, which I wasn't expecting to hear, particularly in the US: basically, crowdsourced security should work for everyone, so it becomes a force for good as opposed to being attached to a single business. That's an interesting concept. But the really key takeaway was that the future SOC runs on AI. So robots manage the detection and some of the response work, AI-on-AI attack versus defense, with the automation leading and humans becoming arbiters who make strategic decisions and run offensive campaigns against attackers and attacking infrastructure. That last bit, humans becoming arbiters who make strategic decisions and run offensive campaigns, is really interesting, because as there is more automation, as the systems get better trained and better able to cope with attacks, the role of the SOC analysts and the role of those organizations changes to become a strategic resource as opposed to a reactive resource.

And that is a step change, and it is something that I've taken on in my thinking. It stops AI being a threat to jobs, and everybody has concerns around what AI will replace. However, it places the human as the strategic component of the system as opposed to the reactive one.

So yeah, really interesting talk.

[00:17:52] Andy Still: I thought that was a really interesting element as well. I heard a couple of people use slightly different analogies here, but what they both were essentially saying is that there is a view at the moment that humans and AI will be co-pilots, working together towards an aim. And I think what they were saying is that the reason it is being positioned that way is to reduce the scariness of it.

It's not going to replace your job. It's going to help you. But what Bryan Palma was saying was that a much better analogy is that AI is the players and the human is the coach. So it won't be working alongside you. It'll be working for you, and you will be on the sidelines, dictating what it does and being there to advise and steer the direction of where we're going.

A couple of people said something very similar to that, and I think it's a really good way of looking at it. Another way we've talked about it internally is that AI is the people playing the instruments and the human is the conductor.

So you've got control of all these systems that can be combined to make the perfect sound, but you need the conductor there to make sure they're doing the right thing. They're all playing together at the right pace. They're all delivering just what they need to add to the overall solution.

So obviously, as we go into the next few years, there's a lot of worry about AI replacing humans, and I think where we need to go is to think about what humans are good at, what value the AI systems can add, and how we work together. Particularly in the security world, because, as Andy mentioned to start off this conversation, SOCs are overwhelmed at the moment. There aren't enough people to solve the problems that they need to solve, so they need to use that time wisely, and use automation and AI wisely, to make sure that they can actually keep up with the attackers without needing to expand the amount of resource needed, and get the expertise from the humans and use that to control the AI defenses.

[00:19:54] Andy Ash: There were so many analogies and little sayings. My favorite one about this was "it's time to get out of the cockpit and into air traffic control". Yeah, I remembered that one.

[00:20:05] Dani Middleton-Wren: That's really, really interesting. So, Andy, you and I had a conversation with an external analyst the other day, more or less on this topic: how AI is already integrated by SOC teams. It's already there. Customers don't necessarily know it's there. It's not driven by customers wanting it to be there, but they need it to be there, because in order to keep security working as effectively and efficiently as possible, to match the scale of attacks, that's where AI becomes absolutely critical.

[00:20:36] Andy Still: Yeah, absolutely. I mean, I've talked to people working in SOC environments who were getting 10,000 alerts an hour, so there is no way that a human can respond to that. Even if you had 10,000 humans, you'd be struggling to keep up with that. So you need that intelligence and automation to be able to pull out of that the things that actually need to be dealt with by humans.

And anything that can be dealt with automatically needs to be dealt with automatically. But you then need to inform the human what you have done, why you've done it, and what the impact of that change was, so that we've got full auditability of what is going on, but also the ability to respond in a timely manner to what will be an ever-increasing number of attacks coming in.
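
As a rough illustration of the pattern Andy describes, automate what can be automated, escalate the rest, and keep a full audit trail, the following hypothetical Python sketch shows one possible shape of such a triage step. The thresholds, categories and helper functions are invented for illustration, not taken from Netacea's or any real SOC's tooling.

```python
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
audit_log = logging.getLogger("soc_audit")

@dataclass
class Alert:
    alert_id: str
    source_ip: str
    confidence: float  # model confidence that this is malicious, 0..1
    category: str      # e.g. "credential_stuffing", "scraping"

def triage(alert: Alert) -> None:
    """Auto-remediate high-confidence alerts, escalate ambiguous ones.

    Every automated action is written to the audit log with the reason,
    so a human can review what was done and why.
    """
    if alert.confidence >= 0.95:
        block_ip(alert.source_ip)
        audit_log.info(
            "AUTO-BLOCKED %s (alert %s, %s, confidence %.2f)",
            alert.source_ip, alert.alert_id, alert.category, alert.confidence,
        )
    elif alert.confidence >= 0.60:
        escalate_to_analyst(alert)
        audit_log.info("ESCALATED alert %s for human review", alert.alert_id)
    else:
        audit_log.info("LOGGED low-confidence alert %s, no action", alert.alert_id)

def block_ip(ip: str) -> None:
    pass  # placeholder: call your firewall / WAF / bot-management API here

def escalate_to_analyst(alert: Alert) -> None:
    pass  # placeholder: push to the analyst queue / ticketing system

triage(Alert("a-001", "203.0.113.7", 0.98, "credential_stuffing"))
```

The design choice worth noting is that the automated path and the human path both terminate in the same audit log, which is what preserves the "inform the human what you have done and why" property at 10,000 alerts an hour.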

[00:21:25] Andy Ash: Yeah, it's interesting to map against the Netacea evolution as well: what we thought of as scale when we first started doing bot management just got bigger, exponentially bigger, year on year. The volumes of data that we're putting into our machine learning models now are, you know, trillions of requests a year. We had to learn not to try and eyeball everything that came through, but to work with the ML, to understand what it was saying and direct it in the right place to protect our customers in the most efficient way.

And that is a journey that we've been on for five, six years. Really interesting to hear other people talk about it. Again, I've used the word validatory a couple of times today, but it is validatory. It is.

[00:22:15] Andy Still: I think it is, it is true. Going back to the early days of the product, the idea of auto-blocking was something that was unpalatable to some of our early customers, because they just wanted to approve any changes that we were making. But the reality is that by the time any changes could be approved, the attackers are long gone. So the only way that we can stop these attackers is to build that trust relationship with our customers: that we will be right, that we will be making wise decisions, and that by making those decisions we are making their lives better. We're making their sites safer. So it definitely resonated with the journey that we've been on. It is definitely reassuring us that we are on the right path.

[00:22:59] Andy Ash: You were trying to find another word for validatory there, weren't you?

[00:23:02] Dani Middleton-Wren: Yeah.

[00:23:04] Andy Still: I don't even know the word validatory, so...

[00:23:09] Andy Ash: I hope I'm using it in the right context. I'm sure I am.

[00:23:12] Dani Middleton-Wren: Yeah, let's go with it. It's confirmed that we're doing the right thing. That's great. That's the main thing.

[00:23:36] Dani Middleton-Wren: Right. So let's move on then. We've touched on some of the conference key themes, but first of all, Andy Still, can I ask you to define what offensive AI is?

[00:23:46] Andy Still: Yeah. So offensive AI is growing to be an industry standard term for the use of artificial intelligence to either drive or improve cyber attacks. Traditionally, cyber attacks have been a combination of automation and human ingenuity. We've seen an increase in the regularity of AI doing some elements of that.

At the moment, it is only small elements, but the expectation is that within months or years that will become the standard. The growth of ChatGPT has helped this, in that it has enabled the general availability of some of the code and systems that you used to have to be more of an expert to create. I think it was Bryan Palma, actually, in the keynote that Andy was mentioning earlier, who said that novices have already become experts as a result of ChatGPT. They can write malware now that would've required expertise not available to the regular person before, and that is now available to the wider market. So we're already seeing this being used, but over time we expect that to be dramatically more. And one of the key themes of the conference was that this is happening. It's not "if", it is "when"; and it's not "when", but "how soon".

[00:25:09] Dani Middleton-Wren: And is it therefore happening a little bit now as well, if we are already talking about it? Is it present day, and has ChatGPT made it present day?

[00:25:19] Andy Still: I think ChatGPT hasn't made it present day; I think it was already present day. ChatGPT has expanded the number of attack vectors that are readily available to AI.

[00:25:33] Andy Ash: The only other thing I took away: we went in with these kinds of preconceptions of what offensive AI is, but, again going back to the SOC of the future, it is also about leveraging AI to attack the attackers. It is fighting back. And I don't think there's a firm understanding or consensus of what that might be, but the fact that it is now being talked about, with humans, again, controlling the offensive plays that the security vendors make...

It is a massive change. You know, takedown services, human operations infiltrating the groups that are persisting these attacks. It's new and different, certainly compared to anything I've heard in such a public forum. So...

[00:26:23] Dani Middleton-Wren: Okay, so that takes us neatly onto defensive AI, because we've spoken throughout this session about how AI is changing the cybersecurity landscape, how the roles of humans and technology are changing, and how it's got to be about addressing the strengths of each and playing to those strengths in order to keep up in that constant game of cat and mouse between those working in cybersecurity and those harnessing technology to carry out attacks.

So let's then go onto defensive AI. Andy Still, do you want to define defensive AI for us?

[00:26:58] Andy Still: Yes. So it is simply the use of AI in cyber defense. It could be as a response to offensive AI, but equally it could be just a general means of improving our defenses. I think we're in an evolution at the moment where AI is being used primarily as a way of improving automation and improving human processes.

Going forward, I suspect we will end up with more of an AI red teaming kind of approach, where we're using AI to continually monitor what attacks are going on, AI to push attacks against us so we can learn from that, and AI to improve our defenses automatically as well. So learning from what the attackers are doing, and then evolving our defenses in real time.

So rather than just improving the processes for humans, this is improving the underlying systems that we are using for defense.
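
A minimal sketch of that continuous loop, an attacking model probing the defenses while the defensive model re-tunes itself from what it observes, might look like the following. It is entirely hypothetical: it reduces an "attack" to a single anomaly score and the "learning" to a threshold adjustment; real systems would train far richer models on far richer signals.

```python
import random

class AttackSimulator:
    """Stands in for the AI red team: its attacks look less anomalous each round."""
    def __init__(self) -> None:
        self.stealth = 0.0

    def generate_attack(self) -> float:
        # An "attack" is reduced to a single anomaly score for illustration;
        # growing stealth pushes the score down, toward normal-looking traffic.
        self.stealth = min(self.stealth + 0.07, 0.6)
        return random.uniform(0.7 - self.stealth, 1.0 - self.stealth)

class AdaptiveDefense:
    """Stands in for the defensive model: re-tunes itself after every miss."""
    def __init__(self) -> None:
        self.threshold = 0.8  # anomaly scores at or above this are flagged

    def detect(self, score: float) -> bool:
        return score >= self.threshold

    def learn(self, score: float, detected: bool) -> None:
        # A miss discovered via red-team ground truth drags the threshold
        # down toward the evasive attack (with a floor to limit false positives).
        if not detected:
            self.threshold = max(score - 0.01, 0.3)

red_team, defense = AttackSimulator(), AdaptiveDefense()
for round_no in range(1, 11):
    score = red_team.generate_attack()
    caught = defense.detect(score)
    defense.learn(score, caught)
    print(f"round {round_no:2}: attack={score:.2f} caught={caught} "
          f"threshold={defense.threshold:.2f}")
```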

[00:28:03] Andy Ash: Yeah. What's interesting as well, and it echoes the point I made on offensive AI: in the same way that the SOC may use offensive AI against attackers, it's likely that attackers will use defensive AI against the SOC. So it becomes a battle on all fronts. We run a very good threat intel team, headed up by Matt Gracey-McMinn I should say, and we had a specific task where one of the barriers that we hit was a significant SecOps organization within one of the groups. And it was well organized and well defined, based on things that had happened recently, you know, the Genesis Market takedown, et cetera. And that tells you that the attackers are thinking about their defenses, and at the point that they're being attacked, they will employ the best tactic, the best way of not being discovered, of not being infiltrated. So I think all this cuts in both directions.

[00:29:01] Andy Still: Yeah, I think it was, it was described at RSA as machine on machine warfare.

[00:29:05] Andy Ash: Yes. Basically. Yeah.

[00:29:07] Andy Still: I think one of the big themes I took from the conference was that there were a lot of big visions of defensive AI that weren't necessarily reflected in the sessions that were more practical, talking about very specific examples of what people are doing, particularly in the research area of defensive AI. There is a big gap between those and the visions that were put forward as being needed in the keynotes. There were some examples of what people were doing, for example looking at how to detect phishing sites, and there's a lot of work going into that, but it still seemed a long way off having solved that problem.

We saw a couple of other examples of specific technologies that were being used in practice today, and they were nowhere near the level of sophistication that will be needed. So there is a gap between where we are today and where we will need to be. And this is not a lack of awareness that the gap exists, but we as an industry have got to bridge that gap fairly quickly, based on the very realistic expectations of the growth of the attackers in that time.

The defenses are not where they need to be. I mean, this is obviously our business. We are in this battle constantly, and we, I believe, are actually ahead of a lot of companies in terms of defensive AI for what we're doing today, but we know there's a battle on. And what we were seeing from the talks, and to some extent from the vendors that we spoke to as well, is that there is a potentially very large gap that could open up very quickly between attackers and defenders if, as an industry, we don't get on top of some of these things and start escalating research here. Now, what we took from the big visions is that this is not an unknown problem; people are aware of it and are now working on it. But it was a potentially slightly worrying aspect of the conference that the gap exists today.

[00:31:20] Andy Ash: Absolutely. I dunno if we've got time to do some predictions for 2024, in terms of what RSA will look like. My thought would be that at least 20% of the vendors will be pushing some sort of defense against AI in general: how to protect networks from AI, how to protect yourselves from third party AI internally in your businesses.

It's a challenge that we're looking to address at Netacea at the moment. So, you know, how do you use ChatGPT safely? It's away from the SOC challenge, but I think there's gonna be a significant focus on that in the next 12 months. I'd be amazed if it's not the number one topic of conversation next year.

[00:32:01] Dani Middleton-Wren: Yeah, I think you're probably right there. Okay, let's very quickly then close on "scary new things to worry about". Again, we've touched on this throughout the session review, but are there any other themes you think that arose that you want to flag as scary new things to worry about?

[00:32:20] Andy Ash: A couple of the seminars I went to were around credential sharing as a service, the dark side of no code, and the dark underbelly of third party applications. And they were talking about broadly the same things. The dark side of no code: the number one programming language in years to come will be English, and citizen developers, i.e. people like myself (I've not been a software developer), now have the ability to create applications within the enterprise using, you know, Slack and Salesforce. Zapier has obviously been around for a long time. And those applications are predicated on permissions. So, you can't opt out of citizen development. 70% of enterprise apps will be developed by citizens by 2025, which is a frightening number. Millions of new business developers are just sprouting out of the ground. And they're me and you, people who don't normally use any kind of creative tools to make applications. It's now so easy that you don't know you're doing it. At the end, when you've connected your Slack account to your Jira account, or your Jira account to your Miro account, there's just that big button that says Slack needs these permissions in Miro, and everybody just clicks "okay, I'll read it later". And actually what you're doing is making cross-account access possible. So I found that fascinating and frightening in equal measure. And then the talk on third party apps in the enterprise estate discussed having a ratio of applications to users.

Is there a good ratio? Because it used to be, you know, 20 years ago you'd have 40 enterprise applications and 4,000 users. You knew exactly what applications you had, and you knew how to protect them. And you knew how to monitor that they weren't being breached, et cetera. Now, even at Netacea, we have 60, 70, 80 applications across what is a reasonably small business. So how are we ensuring that those applications are secure? How are we ensuring the privacy of the data that is stored in them? The presentation came out with some really good recommendations around prioritization and how you actually look at what permissions each application has got. So, from my point of view, it was not necessarily eye-opening, but something that resonated with me.
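
The prioritization idea mentioned here can be pictured as a simple risk-scoring pass over an application inventory. The sketch below is hypothetical (the scope names, weights and app list are all invented, not from the presentation) but shows the shape of the exercise: weight each granted permission, scale by blast radius, and review the riskiest grants first.

```python
# Hypothetical risk scoring for third-party / citizen-developer app permissions.
# Scope names, weights and the inventory are invented for illustration only.

SCOPE_WEIGHTS = {
    "read_messages": 3,
    "write_messages": 4,
    "read_files": 3,
    "admin": 10,
    "cross_account_access": 8,
}

inventory = [
    {"app": "slack-to-jira-bridge", "users": 120, "scopes": ["read_messages", "write_messages"]},
    {"app": "miro-sync",            "users": 15,  "scopes": ["read_files", "cross_account_access"]},
    {"app": "legacy-reporting",     "users": 2,   "scopes": ["admin"]},
]

def risk_score(app: dict) -> int:
    """Total permission weight scaled by blast radius (number of users)."""
    scope_risk = sum(SCOPE_WEIGHTS.get(scope, 1) for scope in app["scopes"])
    return scope_risk * app["users"]

# Review the riskiest grants first.
for app in sorted(inventory, key=risk_score, reverse=True):
    print(f"{app['app']:24} score={risk_score(app):5}  scopes={app['scopes']}")
```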

[00:34:49] Dani Middleton-Wren: Andy Still, was there anything that you were particularly scared and worried about?

[00:34:54] Andy Still: So there was one thing. It's not that I was scared and worried about it at the time, but there was a book recommendation made there, which was Human Compatible by Stuart Russell. It talks about the concerns you need to have when building a generally intelligent AI system, and how to avoid it accidentally destroying the world.

So, I'm a little bit worried and scared about that since I read the entire book. The complexities of solving this problem are really quite immense. How do you build an ethics system for an AI platform that will make the right decisions in every situation? It's very difficult. So I'm a little bit scared and worried about that, but reassured that we are still a reasonable distance away from getting AI to that level of intelligence, and we have time to address that problem before...

[00:35:47] Dani Middleton-Wren: Before it descends into I, Robot, the infamous Will Smith film...

[00:35:52] Andy Still: Yes, or The Matrix?

[00:35:54] Dani Middleton-Wren: Or The Matrix. Oh God. I hadn't even thought of The Matrix.

[00:35:56] Andy Still: The Matrix is probably the more realistic one, where it figures out that the way to keep humans happy is to create an imaginary world for them to live in, and then robots can rule the real world.

[00:36:09] Andy Ash: Well, the machine-on-machine AI battle would be better taking place somewhere else.

[00:36:14] Andy Still: Yeah.

[00:36:15] Andy Ash: And we could just get on with it then. It's interesting: this week, OpenAI have basically been asking for regulation in the US, which I think kind of alludes to what you were saying, Andy. There is time to solve the challenges around the uptake of AI, but I think the people who are closest are the ones most worried, which tells you something.

[00:36:37] Andy Still: Yeah. Well, I mean, this book was written by an AI professor at Berkeley, so it's very real to him.

[00:36:45] Dani Middleton-Wren: Was it written recently, or was it written 10 years ago and he's now saying "you really need to read this book"?

[00:36:50] Andy Still: I think it was 2019, so relatively recent. But interestingly, one of the big technology gaps that he highlighted within it, which hadn't been solved at the time, has now been solved by ChatGPT and other similar large language models. So one of the gaps he highlighted, between where we were when he wrote the book and where we needed to be for general AI, has been solved, but there are a number of others that have not been solved yet.

[00:37:17] Dani Middleton-Wren: Yes. I think we should be scared and worried. That's quite right.

Well, thank you both very much for joining us on our second RSA bonus episode of Cybersecurity Sessions. And if everybody has enjoyed listening to this session today and you'd like to hear more from the Netacea Cybersecurity Sessions team, you can find us on Spotify, Apple Podcasts, or wherever you like to listen.

Stay up to date by subscribing, and follow us @CybersecPod on Twitter. If you have any questions for the team, including Andy Ash and Andy Still, please email us at podcast@netacea.com. Thank you for joining us, and we'll see you again on Cybersecurity Sessions when we're back for series two, episode two.
