RSA Conference 2023 Preview: Our Top 5 Most Anticipated Sessions

Bonus Episode
17th April 2023

We’re going to San Francisco! Netacea will once again be at the Moscone Center from 24-27 April for the RSA Conference. We’re looking forward to it so much that we decided to compile our top five most anticipated sessions on the Cybersecurity Sessions podcast.

Special host Danielle Middleton-Wren will be in sunny San Francisco, alongside her two guests for this episode: Netacea co-founder and CPO Andy Still, and Netacea CISO Andy Ash. Andy S was at RSA last year and has some insights and advice for Andy A – an RSA first timer – ahead of their trip to the Golden City.

Meet Andy Still, Andy Ash and the rest of the Netacea team at RSA Conference 2023 in San Francisco from 24-27 April! Come to booth 1367 to chat about bots, API protection and online fraud prevention, or tell us what sessions you’ve enjoyed. See you there!

Speakers

Danielle Middleton-Wren
Head of Media, Netacea

Andy Still
CPO & co-founder, Netacea

Andrew Ash
CISO, Netacea

Episode Transcript

[00:00:02] Danielle Middleton-Wren: Hello, and welcome to this bonus episode of Netacea's podcast, the Cybersecurity Sessions. You won't have heard my voice before: I'm Dani Middleton-Wren, and I am Head of Media here at Netacea. We're going to be talking about RSA 2023 with two of Netacea's C-suiters who'll be attending the conference this year. I am joined today by the wonderful Andy Still, who should be a familiar voice to any regular listeners, and who is Chief Product Officer and Co-founder at Netacea.

I'm also joined by Andy Ash, who is our CISO at Netacea and also a first-time Netacea podcaster. So, what is RSA, and why are we attending it? RSA is being held at the Moscone Center in San Francisco between April the 24th and April 27th.

It is one of the biggest security conferences in the world with over 40,000 attendees expected in San Francisco, which means the Moscone Center needs to be absolutely enormous to hold all those people. Netacea attended RSA Conference last year where our threat researchers, Matthew Gracey-McMinn and Cyril Noel-Tagoe presented "Automated threats: the rise of bots and what to do about it".

It was met with critical acclaim from both the team and the audience who attended, and we are, as a team, extremely excited to attend the 2023 event. So I will hand over to Andy Still to tell you a little bit more about himself and why he's going to be talking to us today about RSA.

[00:01:36] Andy Still: Thanks, Dani. Good to be back on the old Cybersecurity Sessions. Yeah, I'm Andy Still. We've made this really simple today by having two people with the same name on, to make Dani's life easier as presenter. I'm here to share some of my thoughts about RSA. This is not my first trip to RSA, but it is Andy A's first trip. So hopefully we can share some knowledge for first-time visitors to RSA, or for repeat visitors, and we'll share some information about some of the sessions we're particularly looking forward to.

[00:02:07] Danielle Middleton-Wren: Great, thanks. And over to you, Andy A.

[00:02:10] Andy Ash: Well, firstly, thank you for having me on this splendid podcast. Very excited to be here. So yeah, this is the first time I'll be going to RSA. I've obviously been to lots of conferences before, but not one this big. I always think external events are the best way to learn and to get energized, whether that's training courses or round tables, but in this case it's the biggest and hopefully the best conference around cybersecurity.

So I'm just really looking forward to getting over there and seeing the lay of the land really.

[00:02:38] Danielle Middleton-Wren: And getting your steps in as well while we're there.

[00:02:40] Andy Ash: Yeah. Yeah. Well, we'll see.

[00:02:43] Danielle Middleton-Wren: Yeah, that's what I'm looking forward to: the average number of steps across the team as we walk up and down the Moscone Center every day. I think they're going to be 20,000-plus. Okay, so let's crack on with your top five RSA sessions. You both spent quite a bit of time going through the entire agenda, which is absolutely chock-full of must-see sessions, so do you want to talk us through exactly how you've whittled it down to your top five?

Because it must have been quite tricky, because I have also seen your full agenda and everything you are setting out to go and see during the four days. So I know that there are probably about 20 to 30 sessions in total. Do you want to take us through how you got to those top five?

[00:03:29] Andy Still: Yes, well, you've seen my full agenda of sessions that I'm actually signed up to see. There was a bigger full agenda, where in some slots there were four different things I wanted to see at the same time. So it's not an easy job to whittle this down to a few specific ones.

I'd say there is a very wide selection of very interesting-sounding sessions this time. I did have trouble getting it down to something manageable that doesn't make your head explode, you know, building in enough breaks that you can actually appreciate some of them.

I was particularly interested, and I think this will come out in the five that we've selected today, in a lot of the AI and ML based sessions. It's such a growth area in terms of cyber defensive tools, but also in terms of the offensive tools that are being seen as well.

I'm really interested to see how the industry is talking about that kind of area, particularly with all the news coverage of popular things like ChatGPT really driving it into popular culture. But what does that mean for the actual real world of what we are dealing with day-to-day in protecting cyber systems?

[00:04:45] Andy Ash: Yeah. I think Andy and I picked quite a few of the same talks, and the reason for that is we probably share a lot of similar interests. So anything around AI. There's one that we've not pulled out, but just to give a flavor of what this conference is about to me, it's the keynote on the looming identity crisis. The strapline for the talk is "as we enter the age of AI, we're confronted with a staggering new challenge. Traditional approaches to identity are dead." And that's immediately interesting to anyone at Netacea. Identity is changing, and we work in this space.

You've got people like Chris Krebs talking about disrupting attackers. Chris Krebs worked at the Department of Homeland Security; he was the director of CISA there, and was sacked by Donald Trump. You don't get to see this kind of stuff every day. So yeah, picking out what I wanted to see was actually really difficult.

But we have narrowed it down to quite a few similar things between us.

[00:05:43] Andy Still: Yeah, I think the identity thing you're calling out there is such a fascinating evolution of what we will have to deal with as people. Because so much of the trust we have in identity relies on us being able to see a person and listen to a person; we believe inherently that if we can see and hear a person, that gives it a sense of reality. That world is changing now.

Deep fakes are here now, so we can no longer have those inherent views of reality, particularly as we're moving online. I'm fascinated to hear that keynote as well.

[00:06:19] Danielle Middleton-Wren: So let's start talking through your top five then, and we'll start with one that you've both selected, which is "Hardening AI and ML systems: the next frontier of cybersecurity". So what was it that really drew you to that? Both Andys, you've already mentioned that AI and machine learning are changing how we as humans operate in the world, because it's already so much harder as humans to discern what is real versus what isn't now that deep fakes exist among us, let alone doing jobs like you two do, where your job is literally to discern human versus bot. So, Andy Ash, can you start us off with what has particularly piqued your interest on this subject matter?

[00:07:03] Andy Ash: There's a couple of bits that are really interesting about this one. It's being run by MITRE, and MITRE have just brought out the new ATLAS framework, which is essentially a MITRE framework for AI and ML attacks. My assumption is that as part of this they will be promoting that new framework, and that's massively relevant to the bot management product and the Netacea security program that I run. We run a really big data, AI-driven platform, and understanding the TTPs used to attack it is probably one of our biggest priorities in the business. So understanding the threats and the attack vectors to our estate is massive.

And, you know, there's no better place to learn it than from the people who write that framework. We've invested tons into our security program, a lot of time and effort and money. I'm just really excited to have a chat with MITRE and find out what it is that they're proposing in terms of the framework.

[00:08:02] Danielle Middleton-Wren: Absolutely. Andy Still, what is it that makes you want to go and see this session?

[00:08:06] Andy Still: Yeah, similar to Andy, I'm interested in knowing how susceptible we are to attack. We detect attacks using AI, and we are very good at doing that; it's very difficult to bypass our detection. What I'm interested in is understanding whether there's a potential for our strength to be turned into a weakness there.

Can people bypass our AI by poisoning the data that we're using, making us make wrong decisions going forward? What protection do we need to put in place to make sure that is not happening? I want to make sure that we are ahead of the crowd in doing that, so that our detection stays at the levels that we're hitting at the moment.

[00:08:52] Danielle Middleton-Wren: Absolutely, because I suppose it's about understanding the direction of travel for AI, both from the offensive side and how to protect against it from a defensive perspective, as you said. Okay, great. So the next one, Andy Ash, I'm going to come to you with this one: it's "No more time: closing the gaps with attackers".

[00:09:12] Andy Ash: Yeah. So, the foundation of this one is about closing the gap between detection and response times to attacks and breaches. And that's a goal that all security teams have. That's a metric that we have internally at Netacea. And it's very relevant to any business, but particularly relevant to Netacea.

So to go back to what we do in the bot management product, we deal with security incidents on our customers' estates every day. And understanding the language and what the market is basically saying around the detection and response of bot attacks is really interesting. Now, I suspect that they won't be talking specifically about bot attacks, but all of this thinking translates into our product.

So yeah, that's one that I'm really interested in going to, I think it's IBM that are actually running it, and just listening to how people are shortening the detection and response times to attacks and breaches. It's basically what we aim to do in the bot space.

[00:10:12] Danielle Middleton-Wren: And how do you think that is going to translate? When it comes to applying it to what Netacea does, do you think that is going to come down to how the data analytics and data science teams go about their day-to-day? Or is it going to be closer to our threat research team?

When it comes to closing that gap and identifying the attack before it becomes a breach, who do you think that's going to affect?

[00:10:38] Andy Ash: Well, that's a core part of the product: the data science and ML models that detect the behavior of attackers on our customers' websites and APIs. We act in real time.

I think the challenge is to get the attack surface to the customer as quickly as possible so they can also act on anything that's happening on their estate. That's the key for Netacea. And it's about understanding that in the wider security picture, and the kind of language that's being used by IBM and, I think, one of the cyber leaders of Qatar, which should be quite interesting. So yeah, essentially it's just a case of looking at an attack as it's coming in and taking that action, responding appropriately.

[00:11:22] Danielle Middleton-Wren: Great. Thank you so much for that explanation. Andy Still, I'm going to come to you with our next one, because this is something that you referred to earlier when you were talking about deep fakes and how AI is changing the world that we live in and how we respond as humans to AI. There is something that's been in the media quite a lot over the last few months, I'd say since around December: ChatGPT. And you have flagged this as a session you are particularly interested in attending. The session is called "ChatGPT: A New Generation of Dynamic Machine Based Attacks", and I'm not surprised that it has made it onto the roster for RSA this year, because it's highly topical.

It is the buzzword on absolutely everyone's lips; you'd have to be hiding under a rock to have missed it. Do you want to tell us why you specifically are interested in attending this session?

[00:12:11] Andy Still: Yeah, I'm very interested in this because, as you say, ChatGPT is everywhere at the moment. I think it's been called the iPhone moment for AI, and it's certainly taken it out to the mass market. It's out there in popular culture, and the world as a whole is trying to decide what it means, you know, how this is going to change our lives. So what I'm particularly interested in with this session is hopefully getting a bit more of a nuanced view from people with genuine expertise in this area, who perhaps understand it at a deeper level than some of the popular writers out there in the media do.

And what will this mean for our lives as cyber security professionals? How will this change what we can do from a defensive point of view? And how will this change what we need to be aware of from an offensive point of view, how this can be used for evil? Hopefully the presenters here will give that kind of much more realistic take on this.

I'm interested in the short term, right? Like right now, today, is there a whole new range of attack vectors that are readily available to a wider range of people than before? Should we be expecting a rise in certain types of attacks? And then there's a kind of longer-term view: what does this mean for the next generation of attacks that we'll need to protect against? What do we need to be aware of with our product in 6, 12, 18 months? This session sounds like it may start the conversation about that. I don't think it's going to answer the questions, but it'll hopefully make us aware of the right questions to be asking ourselves when we're doing our own investigations.

[00:13:57] Danielle Middleton-Wren: Yeah, because it is such an interesting subject, both ethically and from a security perspective. I think Martha Lane Fox was talking about it the other day; she was interviewed about ChatGPT and AI and how she expects them to be used in the future. It is a really difficult thing to predict, because I know that we internally have been talking a lot about the future of AI and trying to forecast where it's going to go, to help us with our own product.

But it's also about how different businesses, and even different countries, will approach this and what the regulations will be around it, because I think we'll always see an element of this technology being used on the dark web for nefarious purposes, whether we like it or not. Like you said, it does lower that barrier to entry and it could create new attack vectors, and that's what we need to be aware of.

And yeah, the scope of possibility with AI just seems to be completely unpredictable, but it's also about how it can be used for good. That's what Martha Lane Fox was banging the drum for in her interview the other day.

This could be a very positive thing for technology and for security; it doesn't have to be all "this is new and evil". So for instance, Italy, I think, became the first country to ban ChatGPT until there was greater understanding about it, whereas Martha Lane Fox was banging the drum for: let's discover, let's see what is possible, and then start to build that new understanding and put some regulations in place.

Once we've actually gone on that exploration, which hopefully sessions like this will enable further.

[00:15:33] Andy Still: Yeah, absolutely. I mean, we've all heard about these calls for it to be banned or stopped, but the genie is out of the bottle now with this. If you stop using it, other people are not going to stop using it. The bad actors will be carrying on seeing what they can achieve, and this technology is available far wider than ChatGPT. ChatGPT is a leader in the industry, but the underlying technology is well known and understood. So there will be people out there who are using this for bad already, and they're not going to be pausing while they consider what to do, so from a defensive point of view we need to make sure that we are as aware as possible of this technology and what can be done with it.

[00:16:20] Danielle Middleton-Wren: Exactly. Preparation is the best form of defense. Thank you very much.

[00:16:42] Danielle Middleton-Wren: So, let's stick with the topic of defense. As we've seen over the last couple of months, lots of different companies are starting to add in extra layers; as technology advances, so does security need to advance. Netflix recently said they're going to crack down on people sharing passwords, and they've also threatened to introduce MFA, which is multifactor authentication, to prevent credential stuffing and other such attacks on their platform. And so a session that you've both identified an interest in is "The anatomy of an attack: the rise and fall of MFA".

[00:17:23] Andy Ash: Yeah, again, a session that I'm really looking forward to. Obviously, at Netacea we look after a lot of accounts on behalf of our customers, and the MFA challenge that a lot of customers have is one of low-touch login. If you think about the most important things that we have, like bank details and health details, people generally don't mind having an MFA solution sat in front of those to protect their financial or medical data.

When it comes to things like Netflix, as you just said, that is actually a very challenging solution for Netflix to implement. It's not exactly low touch; it's not seamless; it's not frictionless for the customer. And a lot of these companies have not put MFA on some accounts.

And I think the Netflix example is fascinating because of the proliferation of reuse of Netflix accounts, whether that is on purpose, with individuals sharing with family members or friends, or whether that is credentials being hacked and sold on the dark web. MFA can solve that, but essentially this seminar is really about where MFA has also failed.

I'm really looking forward to understanding a little bit more around the use cases for this.

[00:18:44] Andy Still: Yeah, and long-time listeners to the podcast may remember we had Roger Grimes on in the past, talking about the weaknesses and potential weaknesses in MFA. And to call out one of the things you just mentioned, Andy: some companies see MFA as a silver bullet that they can just put in, and all their usernames and passwords will be protected.

And it is not that. There are a lot of potential weaknesses in MFA. As Roger pointed out within that episode, it isn't MFA itself that is the problem; it is how often it is implemented badly. There's very simple MFA where you maybe get a validation code sent to your phone.

Those kinds of things are very easily compromised or bypassed. So I think this session will be interesting to try and get a better understanding, so we can educate our customers: it's not a waste of time to do it, but it's not going to solve all your problems. You still need to be aware of how it can be compromised.

It obviously will add a layer of security, but over-reliance and dependence on layers of security that are not as secure as you think they might be can be even more dangerous than not having them there at all. So I'm very interested in hearing, again, someone else's take on that industry and where it's going, and how we can be as secure as possible.

[00:20:21] Andy Ash: Like I said before, I think MFA is a dual challenge. One is adoption, where you want frictionless login; you don't necessarily want to use MFA to buy a pair of trainers, and there's always been pushback from industry and business against using it. And then as soon as the pace of adoption picks up, there are technical challenges and security challenges and breaches.

Where does MFA go next? I'm fascinated. It comes into the identity piece that we mentioned before: you know, how do you prove who you are?

[00:20:55] Danielle Middleton-Wren: That's really interesting, because obviously, as you said, you don't necessarily want to use MFA to buy a pair of trainers, but then does it depend on the trainers that you're buying? Because if it's a pair of Panda Dunks or Air Jordans, you might feel a little bit differently; you might want that extra level of security on your account.

But no, you're right. It's something that I think we've spoken about internally a lot: MFA being perceived as that silver bullet, and the idea that any extra layer of security is a good thing, but it's got to be implemented correctly and actually work for that customer and for their business. Otherwise, it's just exploitable. So yeah, it'll be really interesting to hear what the outcome of that session is. Which is something worth noting, actually: for all of the sessions that we're talking about now, we will be running a follow-up podcast with an RSA summary to go through everything that we've learned at RSA 2023.

So if you're really interested in how MFA is exploitable, whether ChatGPT is going to form the next generation of machine-based attacks, what that is going to look like, and whether we should expect an array of new attack vectors to emerge, all will be revealed post-RSA 2023.

But let's get back to the preparation session, and we've got one more session to discuss. Our fifth and final is, again, one that you've both selected: "The Secret Life of Enterprise Botnets". Andy Still, do you want to kick us off?

[00:22:33] Andy Still: Yeah. So I am particularly interested in this because essentially we spend most of our lives here at Netacea stopping automated attacks, a large amount of which are launched via a botnet of some variety, whether it's legal, illegal, or informal. So we have been going through a long process here of trying to map those various botnets and build a good degree of knowledge about what machines make up those botnets, what type of machines they are, what type of botnet it is, and what its characteristics are. That enables us, as we see attacks come in, to relate them back to where the attack is coming from, which obviously allows us to block that attack much more effectively.

So I'm really interested in this session just to get more insight from other people who've been going through this same journey as us. What insight can they provide? Can that be useful? Will it add anything to our knowledge? I can then feed that into our various research teams to improve our levels of detection in this area.

So of all the ones we've spoken about, this is the most practical and direct session, directly related to something we're doing today. A lot of the others are more future gazing, which is one of the points of RSA: to go and look at the future, and have some time to think about things that you wouldn't normally spend time thinking about because they may be outside your area of interest. But this is the one where I'm hoping to come back with some really practical insight that I can pass directly back to the team to act upon.

[00:24:19] Danielle Middleton-Wren: Excellent. And on the future gazing piece, as you said, it's such a luxury to go and just say: right, I'm stepping out of my day-to-day and I'm just going to watch this session and listen to this person talk about something that, day-to-day, I don't have the brain space to go and research. So hearing from somebody who's done all the research for you, you can download it and then apply it to your own work. Andy, is there anything that you'd like to add to Andy's previous statements?

[00:24:46] Andy Ash: Just to back up what we said, we do tons of work around trying to map out botnets. The interesting thing about this seminar is that I know the botnets we uncover as part of our day-to-day work with customers are being used for many different things, and we know that because we have tools where we can go and look at what else they're doing.

I am fascinated to look at somebody else's work on this. The threat research team at Netacea have done a whole lot of work around data center proxies and residential proxies; we harvest a lot of this data ourselves, both through the product and through threat research. And the common attack source data that we have is of great value to Netacea and potentially elsewhere.

So having someone to help us with that and getting some new ideas in that area would be fascinating. My feeling is we'll find out that a botnet will do a credential stuffing attack at 9:00 AM, scraping at 10:00 AM, and DDoS at 11:00 AM. Yeah, I'm hoping that the presentation looks a little bit like that.

That would fit with my worldview.

[00:26:00] Danielle Middleton-Wren: Well, write that down before you go in and we will confirm on the other side. Obviously the key theme amongst absolutely everything that we've talked about today is AI, apart from perhaps MFA, but I still think that may come into it.

And do you think that is because that's the direction that Netacea is heading in, or is it purely a reflection of the direction that technology, and therefore security, is moving in?

[00:26:24] Andy Ash: I think it's both. Obviously we've been deploying AI to protect customers for many years now; we started as an AI-based business. But, just going back to the ChatGPT piece, companies are trying to formulate policies at the moment to protect their assets and to protect their staff from what might happen going forwards with ChatGPT, or AI-based models that are publicly available to anyone.

And so just hearing the world's leading voices in this area is of huge value to the formation of our own thinking: how to manage our own business and how to improve our product. We operate in this market, and AI is massive, so...

[00:27:14] Danielle Middleton-Wren: That makes total sense. Great. Well, thank you both so much for running through your RSA top five must visit sessions. It's been extremely useful and I think that, as we said, there are quite a few themes popping up and we'll have to find out when you get back if your predictions are correct. Thank you both so much, Andy Still and Andy Ash for joining us on the Netacea Cybersecurity Sessions. It has been a fantastic and interesting discussion about what to expect at RSA 2023. We will be at RSA from the 24th to the 27th of April at Booth 1367. So please do come and say hello.

You can talk to us about automated threats, bots, business logic attacks, and tell us what sessions you've enjoyed. It doesn't even have to be those that we've mentioned today. There are lots of other sessions to go and see. If you'd like to follow the Netacea Cybersecurity Sessions on Twitter, you can find us @cybersecpod. And if you'd like to subscribe to the podcast, you can find us on all of your regular podcast channels, including Spotify. If you have any questions that you'd like to ask any members of the Netacea team, then please email us at podcast@netacea.com. Thank you all very much, and I look forward to seeing you on the other side of RSA for the summary.

Speak to you all then.



Related Podcasts

Podcast
S02 E07

Validating AI Value, Securing Supply Chains, Fake Account Creation

In this episode, hosts discuss AI validation, ways to secure the supply chain, and fake account creation, with guest speakers from Netacea, Cytix and Risk Ledger.
Podcast
S02 E06

Protecting Privacy in ChatGPT, Credential Stuffing Strikes 23andMe, Freebie Bots

Find out how to make the most of ChatGPT without compromising privacy, how 23andMe could have avoided its credential stuffing attack, and how freebie bots work.
Podcast
S02 E05

Skiplagging, CAPTCHA vs Bots, Scraper Bots

Discover why airlines are battling skiplagging and the bots that aid it, whether CAPTCHA is still useful, and how scraper bots are used, in this podcast.

Block Bots Effortlessly with Netacea

Demo Netacea and see how our bot protection software autonomously prevents the most sophisticated and dynamic automated attacks across websites, apps and APIs.
  • Agentless and self-managing, spots up to 33x more threats
  • Automated, trusted defensive AI. Real-time detection and response
  • Invisible to attackers. Operates at the edge, deters persistent threats
Book a Demo
