
The Behavioral Science of Cybersecurity – Si Pavitt & Steve Dewsnip, MOD

Season 1, Episode 14
8th December 2022

If a stranger walked into your workplace and asked you your name and email address, would you co-operate? What if they asked you to open a door for them, or to use your laptop or phone, all whilst wearing a shirt that said “CHALLENGE ME” on it?

This is the malicious floorwalker, an example of the behavioral interventions staged by the UK Ministry of Defence to educate their workforce about security threats and put their teachings into practice.

In this episode, Cyril speaks with Si Pavitt (Head of the Ministry of Defence Cyber Awareness, Behaviours and Culture Team) and Steve Dewsnip (Behavioural Scientist at Atkins) to find out how gamifying psychological theory delivers surprising results across as diverse an organization as the UK’s Ministry of Defence.

Key points

  • Why you should incentivize positive actions rather than police security best practices
  • How to use social engineering to reinforce the need to challenge suspicious behavior
  • The importance of protecting psychological wellbeing during behavioral exercises

Speakers

Cyril Noel-Tagoe
Principal Security Researcher, Netacea

Simon Pavitt
Head of the MOD Cyber Awareness, Behaviours and Culture Team

Stephen Dewsnip
Behavioural Scientist, Atkins

Episode Transcript

[00:00:01] Si Pavitt: When we were thinking about this and we thought, "All right, we're gonna go on the floor plate and we're gonna go get caught," I thought, "This is gonna be like shooting fish in a barrel. This is so easy." And then you walk out and go up to someone, you say, "Hi there. I was sent down here to collect some information. Could I have your email address?" In the back of my head, I'm thinking, "I am gonna get nailed straight away." And the person I'm speaking to went, "Yeah, sure. Here's my email address." I'm wearing a t-shirt that literally says "Challenge me" because I'm trying to draw attention.

[00:00:30] Cyril Noel-Tagoe: Hello everyone, and welcome to Cybersecurity Sessions, our regular podcast exploring all things cybersecurity. I'm your host, Cyril Noel-Tagoe, principal security researcher at Netacea, the world's first fully agentless bot management product. Today we're going to be investigating how clever uses of behavioral science can strengthen cybersecurity. Now, imagine if a stranger walked into your workplace and asked you for your name and email address. Would you cooperate? Or if they asked you to open a door for them, or if they could borrow your laptop? Now, how about if they were wearing a bright top with "challenge me" written on it in bold? Well, my special guests for today, Simon Pavitt and Steve Dewsnip, have designed and delivered these types of exercises for the UK's Ministry of Defence with some very interesting results. Welcome, Simon and Steve. Thank you both for joining us today.

[00:01:19] Steve Dewsnip: Thank you for having us.

[00:01:20] Cyril Noel-Tagoe: Uh, before we get started, would you like to quickly introduce yourselves to our listeners? Maybe we should start with you, Simon?

[00:01:27] Si Pavitt: Uh, sure. So I'm Si Pavitt, the head of Cyber Awareness, Behaviours and Culture for the UK Ministry of Defence. I've been working in this field for the last three or four years now. But before that I was on the other side of the fence, doing social engineering penetration testing.

[00:01:41] Cyril Noel-Tagoe: Brilliant. And Steve?

[00:01:43] Steve Dewsnip: Hello, I'm Steve Dewsnip. So I'm a transformational change and engagement consultant and behavioral scientist with Atkins. And I've been working with Si on the cyber A, B, and C project for about the last two and a half years now.

[00:01:57] Cyril Noel-Tagoe: And Steve, how did you get into applying behavioral science to cybersecurity especially?

[00:02:03] Steve Dewsnip: So I did a degree in psychology and then a master's in occupational psychology. And then after that I went into quite a broad business consulting job. Really, I was doing a lot of change and engagement, a lot of transformational change with different organizations. But throughout that period, I was always really interested in the human impacts and the relationships that people have with machines and with computers and within IT. So it just became a natural sort of next step for me to move towards how you can apply that psychology and the behavioral science side to cybersecurity really.

[00:02:40] Cyril Noel-Tagoe: And Simon, what about you? How did you come to kind of swap sides, as it were?

[00:02:44] Si Pavitt: Oh wow, that's really interesting. So I think that I... I've been doing the behavioral science thing for a long time in a lot of different formats across my military career and my civilian career, as a civil servant in the Ministry of Defence. I really wanted to engage in the people element of cyber. I'm fascinated by that aspect of influence, mainly through the things that I've done as a soldier. And the various things that I did there were about influencing people to make certain decisions. And if I can influence people to make a certain decision, then we can get some kind of operational advantage. Now, there is no better tool on earth to do that than IT. It influences and pervades everything that we do. It's... technology is always within reach nowadays. So if we can use that as an influence vector, we can get someone to do something that they might not have necessarily chosen to do were they fully informed. And that's fantastic. It's really interesting. And if we do that ethically, we can test that ourselves. As we know, social engineering pen testing is enormously valuable. The conversion from being an SE pen tester, the kind of person that sees whether or not these things are possible, into the mitigation side came about because I'd spent a long time pointing at holes and saying, "There's something wrong there. You should do something about that. And, oh, it'd be really good if you did something about this." When the pandemic hit and fieldwork fell off the radar as something we really couldn't do, it was the perfect opportunity to switch fire into, okay, well, how can I patch these holes? How can I do something to help remediate human behavior, those risky behaviors? And that tied up perfectly with the research I'd done to date, some postgrad research, to steer towards making effective decisions. How do we get people to make better decisions? How do we make them more alert to the risk landscape? And then how do we look at the behavioral factors behind that, whether they're conscious or unconscious, and tweak those levers to drive people towards the most secure behaviors?

[00:04:43] Cyril Noel-Tagoe: I guess, so why? Why is behavioral science so important for security at the Ministry of Defence?

[00:04:49] Si Pavitt: Well, I think there's a really clear answer to this. Every computer has at least one person attached to it, and I think that on average you're probably looking at two people for every computer. Every computer, because someone's gotta use the thing. It exists for a usage reason. And someone has to do the other side of it; someone has to look after that equipment. So it's absolutely imprecise, it's just kind of a hypothetical, but if there are roughly two people for every computer, then there are two ways into every computer that don't touch the keyboard at all. You can use the people as the influence lever, and people make decisions in a different way than machines. People engage with information in a really different way, so you can shape that... shape the way that the person interacts with the machine, and then get them to make decisions that are fundamentally risky and expose vulnerability. So if there are two vectors for attack, that means there are two vectors to try and make the platform much, much more secure. So it makes sense to me that we should be paying a lot of attention to the people aspect here, the behavioral dimension, alongside the technical dimension. But right now we focus on the tech a lot. There's a huge amount of technology, some really great stuff coming from a lot of providers to deal with very, very specific problems. Not as much on the behavioral side.

[00:06:09] Cyril Noel-Tagoe: Right. And you need that holistic approach, right, to solve the problem.

[00:06:14] Si Pavitt: Absolutely.

[00:06:15] Cyril Noel-Tagoe: And Steve, coming to you: obviously when people look at the MOD, they think military, hypersensitive sector. Why do you think it's still so challenging to enforce security?

[00:06:27] Steve Dewsnip: So I would say that it's fairly challenging everywhere. I wouldn't say that MOD is particularly different, even though it does have that, obviously, that huge military side to it. I think a lot of the problems, or a lot of the challenges that we face that make security difficult to enforce, are ubiquitous, especially when you think about the people side of things, because a lot of those vulnerabilities, a lot of those opportunities that a malicious actor might want to exploit, are prevalent in almost all people. I'm not sure there's a group of people where some of those exploits wouldn't exist. So, you know, things like the shortcuts that we might take in thought processes, or how you can weaponize tools of influence and things like that, are going to work across huge groups of people. That's not necessarily unique to someone in the military. All organizations, and especially MOD as well, have a vast spectrum of people within them. So, you know, within defense you've got the different services, and then you have all of the crown servants and the civil servants that sit within that organization as well. So there's a huge amount of diversity in thinking and styles and preferences and traits that all feed into that, you know, potential to kind of be exploited in some way. But then there's also very normal reasons why it becomes so difficult to enforce security measures. You know, we're all faced with an enormous amount of individual competing demands on a daily basis. We all wake up and go and check our emails and see how many we've got to deal with that came in overnight, and we look at our calendars and realize how much stuff we've got to prep for, and then we've actually got to do some work in between all of those things and produce whatever it is that we do. And all of those things take up a certain proportion of our attention. And we only have so much attention that we can give at any one time, right?
And that's why sometimes things kind of slip through the cracks. You might be really well intentioned, but actually you've just got so much going on that you can't focus on everything at the same time. And so that thing that, if you looked twice, might look a little bit dodgy, but on first glance it's fine, just let it go. Well, that's suddenly slipped through, and that could be something that is actually a genuine risk or threat to your organization. And then there's a whole load of other things like social pressures as well. So people don't really like being the stick in the mud a lot of the time. No one wants to be the one that gets in the way of progress or gets in the way of business as usual. So again, occasionally things that might not be quite right go unchallenged, because you don't want to be the one that looks like the bad guy in this situation. And so the slightly safer route for you is to just let it go. There's a whole kind of spectrum of reasons from the human side as to why trying to enforce security measures is so difficult. But again, I just kind of want to make it really clear at this point as well that none of those things make the people in the organization, make any of our colleagues, the bad guy. We really shouldn't be blaming people or making people sound like the weakness in any of this, because as soon as we do that, then we're just gonna reinforce the things that actually we want to try and get rid of. So everyone has some of these shortcomings. We all only have so much attention that we can give; that's common across everyone. So we can't say that that is a people fault, or a particular person's fault, necessarily.

[00:10:05] Si Pavitt: See, I think it's a really important point in the way that you phrased the question, because we often talk about enforcing security. And exactly as Steve said, the way that we interpret this as cybersecurity professionals tends to be like we're trying to police behavior. But the whole reason this thing exists is because someone is trying to attack you. Someone is trying to take advantage of you as a human, and they're good at their job. Attackers are really good at what they do, for the most part. And if they're good at that, and they're faced with someone who has competing priorities, exactly as Steve has said, then we're trying to bully people into catching something they're very unlikely to catch. And I think it's really important that we need to shift the way that we talk about this. It's not about enforcing the behaviors; it's about trying to help people be as secure as they can possibly be. Rather than viewing this as a compliance issue, viewing it as an additional layer, building an additional capability.

[00:11:03] Cyril Noel-Tagoe: That's a very, very interesting and very useful way of thinking about it. And I guess, Simon, talking about kind of encouraging those new behaviors then, instead of enforcing, can you talk through some methods you've introduced to try and do that?

[00:11:18] Si Pavitt: Yeah. There are loads and loads of ways to approach this thing. We have tried some things like applied behavioral economics, looking at incentivization. Steve's smiling because we worked on a couple of pieces together to look at how we can get people to engage with tangible items to make them feel like their behavior's been valued, and then how do we get that at exactly the right level and in exactly the right way, so people feel good and build positive sentiment. We've explored things like environmental changes. How do we change the world around the person so they're naturally more inclined to demonstrate the behavior, or put something in the way that will stop them from engaging in that risky behavior, but autonomously, so that they don't necessarily know that the world has changed around them. They just can't seem to get to that risky behavior anymore, and naturally have to do the secure behavior. We've explored things like software platforms that will facilitate the most secure behavior by giving people an option between engaging in the risky behavior or engaging in a secure behavior that facilitates exactly the same outcome. The list is endless here, and it all comes back to exactly what Steve said: finding those different angles, the different things that are happening that are inhibiting the behavior we want to see. Perhaps there's a lack of capability to perform any given task. So let's give people the capability. If the reason that people are plugging in unauthorized USBs is because they simply don't have enough authorized headsets, let's go buy some headsets and give them those headsets, because now they do have all the tools they need, and suddenly the behavior disappears. We might want to influence motivation. And one of the best techniques for influencing motivation is looking at the way that people interpret language. How do people understand and engage with a simple concept? We've done things like built a physical escape room.
We've got some online games that have gone really well. Some capture-the-flags that help people understand their digital footprint: where are their exploitabilities online? All of these things come at the problem from various angles and almost, well, I would be bold enough to say, never tell anyone they're doing something wrong. We don't tell people they are making mistakes. We try and focus on the positive. We direct them towards the next most secure behavior. It's really a bottomless pit. There are so many fun things you can do.

[00:13:45] Cyril Noel-Tagoe: But I mean, those sound amazing. And I guess it comes back to what you were saying, Steve, about there being so many different people with different personalities, and one person might react one way to one exercise, but having that kind of bottomless pit really helps you to catch the whole audience. I'd like to talk about the floor walker exercise, because I alluded to it in the intro and it just seems like such a counterintuitive exercise. So, I guess starting with you, Steve, could you just quickly explain for the listeners what that exercise entails?

[00:14:19] Steve Dewsnip: Yeah, of course. And actually, I thought your introduction to the floor walker was absolutely brilliant as well. So the whole reason that the floor walker exercise came about was to try and encourage some of those challenge behaviors. We had a section of the organization that we work in where we knew that these behaviors maybe weren't quite as prevalent as we would like them to be, and we could see what some of the consequences of that were. So we developed a whole malicious floor walker exercise to help encourage challenge behaviors within a particular population of that organization. The floor walker itself is an overtly identifiable threat that has somehow managed to gain access to your office. And the floor walker walks around, they engage with different people, and they ask them to do things that potentially present some kind of risk to your organization. So, you know, things that you mentioned: Can I borrow your laptop? Do you mind if I just plug my phone in to charge very quickly? Can you plug this USB in? Do you mind if I just take that file off your desk? My boss wants it. And obviously we've never met any of these people before; none of that could possibly be true. Yeah, maybe other things like, do you mind if I just ask you a few personal questions? You know, what was the name of your first pet? What's your mother's maiden name? What was the street where you grew up? You know, those sorts of things, things that we're probably not supposed to be doing and probably shouldn't really be asking for. And when we say overtly identifiable, well, basically what we mean is, well, there's the things that we're actually asking. Those things are identifiably wrong in themselves. But then we also wear really brightly colored t-shirts, as you mentioned at the start.
So we've got, you know, some yellow and reds and very bright white t-shirts with slogans on them that say things like "Challenge me", or "Cyber threat", or "I am lying to you", which is a good one that works really well, or "Do not trust me". Which, when you forget to take them off at the end of the exercise and you go buy a coffee afterwards, as Si will attest, gets you some very funny looks.

[00:16:30] Si Pavitt: The story that Steve is alluding to was marvelous. After we'd done the presentation in Vegas, I was wearing the "I am lying to you" t-shirt, and it's just a white t-shirt with "I am lying to you" in massive font on the chest. And I kind of walk around the conference, and, you know, techy conferences, that kind of stuff, that's not too weird, is it? Yeah, you get slogan t-shirts. As the night drew to a close, I went back to my hotel. My hotel was way off strip. I was staying at the Rio, so there weren't really many Black Hat people going to the Rio. And I managed to find my way to the shop at the end of the night just to buy a bottle of water. And as I walked up to the till, this woman just burst out laughing at me, and I had no idea why. And yeah, she didn't know how to deal with the fact that I was asking could I buy this water while my t-shirt says I'm lying to you. She can't take me seriously, and she went, "Well, I'm not sure I can," and I went, "Yeah, you can trust me." How can she trust me if I'm lying to her? It becomes a paradox. It was great. Uh, yeah. Yeah. So.

[00:17:21] Steve Dewsnip: I suppose that in itself, though, Si, proves one of the principles of the exercise, doesn't it? Because obviously in that context we're not talking about cybersecurity. I suppose you might be talking about fraud or theft or something, but you gave the lady on the till enough red flags to say, this person is probably dodgy, should I do the thing that they want me to? That exact principle is what we are trying to do when we go into someone's office, albeit from a cyber or physical security perspective. We want them to take all those different bits of stimuli. The fact that we're wearing a stupid t-shirt. The fact that we've not got any ID on, so you can't possibly know who we are, and we've never met you before. The fact that we've just asked you to do something which you know is against policy or could be risky to your organization. We want you to take all of those things that individually you might kind of just let slide (maybe you can help, or maybe IT can go and help you, or something), and we want you to call us out on it. We want you to raise that challenge, because we don't really always get the opportunity to practice those kinds of behaviors until it's a real situation. And as soon as it becomes a real situation, there's a whole bunch of other social pressures that you've got to deal with, because you're now actually responsible for security. That might give you a little bit more stress, it's a little bit more burden, and all of that then might just kind of contribute to you not having as positive a sentiment towards challenge behaviors as we would like people to have. So we do it in that weird, lighthearted, novel way. At times Si and I might look a little bit daft, but that really helps because it engages people. It helps them build positive sentiment towards challenge behaviors, and it helps them to feel confident in being able to actually do it.
So the next time that something comes along that we might actually want people to challenge for real, they feel slightly happier in being able to do that.

[00:19:26] Si Pavitt: Having someone laugh at us during this is great, isn't it, Steve? We love the opportunity that someone just breaks down into a fit of the giggles or calls us out on a t-shirt: "What the hell are you wearing? That's ridiculous." They're all really good things, because they're not seeing this as this point of tension and fear and frustration, all those things that come from trying to engage in the behavior properly. What they're seeing is: this is great fun, this is a joke, it's not serious. But behind the scenes, they're still doing the behavior, they're still practicing, and that's fantastic.

[00:20:14] Cyril Noel-Tagoe: And Simon, I mean, you're an experienced social engineer, right? How did you find this, intentionally trying to get caught by people? I guess your whole career has been trying to do the opposite.

[00:20:23] Si Pavitt: Yeah. Yeah. I love it. It's brilliant. But originally when we first discussed this, and I'm not sure, Steve, if you remember this 'cause it is quite a while ago, the very, very seed of all of this came from a flippant joke from a friend of ours saying that we should just wander around wearing a giant USB stick, like a giant foam rubber USB stick mascot thing, and see what people would say. And it was just a joke. And then I saw Steve's eyes light up and we went, hang about, there's something there. There is something really interesting there. And obviously the foam rubber USB suit doesn't really pass the Daily Mail test for public procurement. So the t-shirt's great. But when we were thinking about this and we thought, all right, we're gonna go on the floor plate and we're gonna go get caught, I thought, this is gonna be like shooting fish in a barrel. This is so easy. Because I work so hard to not be caught. I work very, very hard to make sure that my physical presence, my appearance, my pretext, everything is set up to make sure that no one stops me. So, all right, golden. Wear the t-shirt. I'm gonna be inundated with challenges. We'll be in and out in 10 minutes. And then you walk out and you go up to someone, you say, "Hi there. I was sent down here to collect some information. Could I have your email address?" In the back of my head, I'm thinking, I am gonna get nailed straight away. And I start to get nervous again. When you're first on social engineering engagements, you do get quite nervous. It's an emotional control game. And I start to feel that old thing like, "Oh my God, oh my God, I'm gonna get caught. I'm gonna get caught. Wait, that's okay. It's fine if I get caught. That doesn't matter." Meanwhile, all this is going on in my head, and the person I'm speaking to went, "Yeah, sure. Here's my email address." I'm sorry, what? I've not given you any context. I'm wearing a t-shirt that literally says "Challenge me". I am writing on a custom napkin, of all things, 'cause I'm trying to draw attention. And I go, oh, okay. And I fake write down an email address, 'cause I don't really wanna take any actual information from the person. They go, "Okay. That's great. Thank you very much. I've got a couple of other things that I need to collect. Do you mind if I ask, what was your first pet's name?" And they go, "Oh, why do you need to know that?" And I went, "Ah, I don't know. I've just been asked." And they go, "Oh, alright. Yeah, it's Boots." Are you kidding? And then it dawned on me: actually, to do this well, you need to be a very good social engineer, because you need to engineer the challenge out of someone. You need to convince them that you are a threat. And to do that effectively, you need to know all the things that you would try and get away with and turn them on their heads. So I need to start to alter the rhythm, speed, pitch, volume of my voice. I need to start to stand uncomfortably close to trigger those internal senses that something's going wrong. I need to do everything that I would avoid doing. And as you start to steer this interaction, it becomes this weird blend of social engineering and coaching all smashed together. And then sometimes, yeah, you do have to make it blindingly obvious. You have to kind of go into the pantomime version of social engineering. But as soon as that click happens, as soon as someone realizes, "Oh, I'm meant to challenge. Oh, and it's a game. It's not real. It's all fake," and you see that penny-drop moment, they are so happy to have caught you. It's marvelous. It really is such an interesting reaction. I was also worried at the very beginning that people were gonna get quite upset with us, because it's still a deceptive engagement, and so far that has never happened. Not one person has had a really adverse reaction. There's been confusion, and sometimes it takes a moment for them to jump into the game world with us. At other times we get the most entertaining reactions. I mean, Steve, you've seen some of the ridiculous comments that we get, and they're great.

[00:24:05] Steve Dewsnip: They are, they are great, but again, they really reinforce the whole purpose and the whole point of the exercise. And when we get those strong but quite humorous reactions, it's such a brilliant moment, because we just know that it's working. You can see it in people's faces, you can see it in their reactions, that the exercise has started to work. So things like, you know, and this was the title of our talk that we did at the Black Hat conference a couple of months ago: "No Mr. Cyber Threat". So we were on a very busy floor plate. It was packed. I think it was a Wednesday morning. We'd gone up and engaged someone, and it was actually one of our colleagues that had done it. And he'd asked the person about three or four times if he could plug a USB stick in, and he was being quite persistent on this one. And he'd got a partial challenge. So the person had said, you know, you need to go and see IT to deal with that sort of thing, without actually explicitly telling him no. And he kept pushing for the full challenge, until eventually they gave one, and their way of challenging was to stand up and shout, "No Mr. Cyber Threat!", just shouted at us in the middle of this really busy office, at which point the floor walker goes bright red and everyone that is sat at their desk turns around and looks up to see what is happening. But that is absolutely perfect. That's brilliant, because in that moment it's gone from being a one-to-one relationship between the floor walker and the person that they're engaging to a one-to-many; it's opened it out to the whole floor plate. We've effectively got saturation of our target audience through one engagement.
Because now everybody is looking, everybody has now had some kind of experience of the floor walker. And one of the things that we do, just to make sure that we keep control of the exercise and make sure that it's done in a really safe way, and that people actually get the point of what we've done, is that we always do a debrief at the end of every interaction. So we let the people know who we are, why we were there, and that we've got the authority to be there from the organization, so that they know that it is genuinely an exercise. We'll reinforce just how brilliantly they did. Even if they take a little bit of nudging, we let them know that they did a really good job, that they're good at challenging and, you know, well done. And actually, the Ministry of Defence expects people to challenge things that aren't quite right, because that's how you keep the organization safe. So now, rather than having to do that debrief from one person to one, and then go and do it another 20, 30, 40 times across that office, everyone stood around listening to it because they want to understand what's just happened, which means that we've basically been given a platform to share all of those messages in person to that whole floor plate. So those really strong reactions just work so well, and they really do help us. We've had lots of other things as well, haven't we, Si? Like being heckled out of a second-story window. So...

[00:27:00] Si Pavitt: Yeah.

[00:27:01] Steve Dewsnip: We'd done the exercise at a particular site and we'd gone round a certain floor plate, which I would say was quite a vocal one to start with, which made it quite fun anyway. But then we'd put our jackets on, we'd covered our t-shirts up, so we'd kind of gone into incognito mode a little bit, and we'd gone outside just to try and have some lunch. We were just walking around the building trying to stay out of the way, effectively, and all of a sudden someone opens a window and leans out and goes, "Hey, you can't be going around there. I know what you are up to. You're not gonna catch us again." And just starts heckling us and, you know, shouting things out the window at us, and everyone else that's sat near a window is like, what on earth is going on? But that's beautiful, because again, it shows that that person has re-picked up the game world. They've reengaged with the exercise off their own back, and they've just raised a really proactive, lighthearted, humorous, yet effective challenge towards us. If we were up to something, well, now everyone has stopped and stared at us. And as soon as you are made, as soon as you are rumbled as a social engineer, you know, the jig's up, effectively. So yeah, those strong and quite funny interactions are often really, really helpful.

[00:28:10] Si Pavitt: And you capitalize on extroversion there, don't you? Because these people who do this are slightly more likely to be extroverted. They wanna stand up, they wanna play the game with you. They wanna join you on your platform and make sure that they're seen as really partaking in this. While they're doing that, they get the social advantage. They're seen as doing exactly the right thing. We then stand up and say congratulations. It's great. And Steve, thank you so much for bringing that up. The debrief is the most important part of this exercise. We walk around in the silly t-shirt; that's great, but that's just a cue. What we need is the interaction between the challenge and the debrief. Absolutely central. But when we've got this person standing up and saying, "No, Mr. Cyber Threat," great. They're now a role model of effective behavior that's been reinforced by the organization. They did the good thing. Everyone around who might not be quite as extroverted or quite as confident in that behavior has just been told implicitly that this is the right thing to do. This is a good thing to do. Something that you will not be socially challenged on. That when you stand up and say, "I'm really sorry, where's your ID?", people won't be upset with you. You're not doing the wrong thing. You are doing exactly what the organization needs.

[00:29:24] Cyril Noel-Tagoe: It's almost taking social pressures and applying them in a different way.

[00:29:28] Si Pavitt: Yeah, yeah, exactly.

[00:29:29] Cyril Noel-Tagoe: I mean, listening to these stories, it's obviously been a successful program, but I guess at the start, getting buy-in for that, how difficult was that?

[00:29:38] Si Pavitt: It was interesting, right?

[00:29:41] Steve Dewsnip: Very.

[00:29:42] Si Pavitt: Defense is a big organization. Okay, we've got, I say it regularly, 244,000 people in the organization. But really we're dealing with the layer of people behind those as well, so our target audience is around 3 million people in the country. An organization that big does not move quickly. We want to do things that we know and that we can trust, and that's compounded by the fact that we are using public money here. Every penny that we spend to make defense more secure is coming directly from taxpayer money. So we need to be absolutely certain this is the right thing to do and that it is exactly what we need to be doing as crown servants when we come up with novel things like this. It really does take some selling. And I think between Steve and I, and indeed the rest of the Atkins team, we were able to put together a very compelling argument, based in very good, grounded psychological and behavioral science research, that says this is going to have the effect. Another benefit is that we've kind of developed it as it's gone. The initial cost, the initial risk to do this, alongside the risk to the people in the environment, was quite small because we did a pilot, and the pilot that we did was in a really interesting and very friendly location that is innovative by nature: the Defence Science and Technology Laboratory. They want to see how these things play out. And once we'd proved this at the pilot, we were able to grow it. We were able to develop it, deploy it to other locations. Steve, when we initially picked up one group, they were a little bit wary of us. And then we did it in one location and now we have that group coming back to us and asking, "Will you do it at this site? Can you do it at that site? Can you send us over there?"

[00:31:23] Steve Dewsnip: I think that's one of our markers of success for this, really, isn't it, that every site that we have been to has asked us to go back and has subsequently opened doors to a lot of other sites for us as well. As I kind of mentioned, defense being such a diverse and at times kind of disparate organization, it's sometimes hard to get those messages across some of those borders. And the fact that people have started to pick up on it in different areas, that other people actually want us to go and do it, is, again, a marker of that success, because we've been able to transcend some of those borders.

[00:32:01] Si Pavitt: Yeah, I mean, there are hard measures as well that are beneficial. We are seeing an increase in challenges when we speak to people who are responsible for security behaviors on various sites. You can see that these things are coming to the forefront. They are seeing more constructive, more effective, more friendly and meaningful challenges. But really that's only part of the sales pitch. Exactly as Steve says there, if we are getting invited, quite enthusiastically, to come and do this again, it's obviously having the right effect.

[00:32:33] Cyril Noel-Tagoe: And is it just MOD that you're doing these types of exercises for, or are there opportunities for others to get involved?

[00:32:40] Steve Dewsnip: So at the moment, yes, it's predominantly MOD that we do it with, because that is our remit, that is the extent of our scope. I think, again, because of the nature of some of the problems that we're trying to tackle, it would work in lots of other places as well. We have built an awful lot of control into the exercise to make sure that it runs the right way, and it runs safely, and it runs the way that we want it to work. So it is applicable to lots of other scenarios, lots of other organizations and industries and markets, I'm sure. If we're going to do that, then we just need to make sure that we have the controls in place so that we don't inadvertently damage people while trying to make the situation better.

[00:33:22] Si Pavitt: And I wanna highlight here that when we talk about defense, I appreciate those that aren't necessarily involved in it see defense as the soldier in the field. They see someone flying a fighter jet. But defense is a vast community of people. We have people who fill absolutely every role and position, every demographic split that you can imagine. We have three or four different languages spoken regularly in the organization; that's how big and diverse this is. So if we were talking about applicability from a theoretical standpoint, we've got a huge amount of evidence that says, even though we've worked inside the perimeter, we can see this working in the office environment, in the deployed environment, in industrial spaces, in mechanics' workshops. And we've done this in all of those places. Steve's smiling, 'cause we did the mechanic one fairly recently.

[00:34:14] Steve Dewsnip: We did and that was an experience.

[00:34:16] Si Pavitt: It was, and still just as effective.

[00:34:19] Steve Dewsnip: Yeah.

[00:34:19] Cyril Noel-Tagoe: I mean, I think you're gonna have to tell us what that experience was like.

[00:34:22] Steve Dewsnip: It's just very different walking into a set of aircraft engineers' environments than it is walking into, say, an HR or a supporting business function type environment. You get a different kind of challenge raised back, which is at times a little fruitier, but nonetheless interesting and valuable and fun, and that is just an idiosyncrasy of a particular group of people.

[00:34:51] Si Pavitt: Yeah, I like it because it wasn't an unconstructive challenge. It wasn't abrasive or rude or problematic, but it was very group orientated. We were hunted down by small groups of people as we walked around the site, and they'd kind of come up as a group going, "Oh, what are you doing? What's going on? What's happening here?" And it's robust. I liked your description at the conference where you called it spicy.

[00:35:14] Steve Dewsnip: That's right, it was a little bit spicy. We also had someone get in their car and come back to re-challenge us, didn't they? So someone came out of their office and was like, "Hi guys, can I help you? What are you looking for?" sort of thing. And we said, "No, it's okay, thank you. Don't worry," and just carried on. And the person was like, "Uh, okay. That's not the response I was expecting." He turned around, went back into his office. We walked about another 150, 200 yards down the road or something like that, and then this car comes flying up alongside us. The chap gets out: "I'm really sorry. Who are you? Why are you here? What are you doing? I realize I just can't let you walk off, like, what are you doing?" Again, brilliant, because now that person has gone even further out of their way to challenge us. And when we positively reinforced that and gave him the good message, that he did a really good job, that this was exactly why we were here, and that he did exactly what we wanted, that person's demeanor changed entirely. "Oh, thank you. Oh, awesome. Thank you. That's really good." And we gave him a little token to say, "Well done." And you can see that that sentiment towards a challenge has started to change with reinforcing that particular behavior. And that's exactly the outcome that we want out of it.

[00:36:26] Si Pavitt: You could see with that one that there was the social cue first when we said, "No, we're fine, thank you." We kind of drew the conversation to a close automatically. We'd given all the cues that there was nothing more to be said. And it took him a couple of moments to overcome that: "Actually, I don't care about the social cue. This is a risk." It was really good. That was the same site where we had a disembodied voice from a building shout, "What are you doing?" and we had no idea where it came from.

[00:36:50] Steve Dewsnip: Yeah, it was also the same site where we got chased down by one of the ladies from the canteen, which was a definite first. It felt like being in high school again, but was, again, a great response.

[00:37:02] Cyril Noel-Tagoe: Oh wow, that's definitely very, very interesting. So obviously you said that it's primarily the MOD that you're performing these exercises with, so for another business that wanted to start doing this kind of thing, is there any advice you'd give them on a good place to start?

[00:37:18] Si Pavitt: The first piece of advice, and I would give this to anyone, whether you're doing a concealed social engineering engagement or whether you're trying to do something like the floor engagement that we are doing: take psychological safety seriously. It is, without a doubt, the single most important thing that we can do. Our role should never be to emulate the bad guys so well that we end up breaking people, and that's a really big risk. It is so incredibly important that people enjoy what we do and get something from it, because that is what's gonna drive behavioral change. If you want to deliver something like this, though, I think that there is a growing market of suppliers out there who are going to be either delivering things like this or getting more into the behavioral domain. So if you want this, look for the expertise; the expertise is out there. Behavioral scientists are an extremely valuable resource in this cybersecurity domain. What about you?

[00:38:16] Steve Dewsnip: So, again, I touched on it earlier as well, but I one hundred percent echo the point around making sure that you maintain psychological safety. It's often something that is kind of brushed over, you know? "Yeah, we're psychologically safe, don't worry, everyone's allowed to have their say." But it goes a lot further than that. Psychological safety impacts people's wellbeing and how they feel about their work in a broader sense. And a lot of the time, in any situation, there may be colleagues who are not necessarily neurotypical, and that is something else that you kind of need to bear in mind as well. But yeah, there's lots of people out there that can help you with it. If you want to develop something similar, just make sure that those controls are in place. There are lots of other ways of developing this, or of using things like gamification, as well; in fact, a lot of the floor walker is incredibly heavily grounded in gamification. So think about it from that kind of principle level and then build up from there what you might want to do. As I mentioned, this came from a throwaway, fairly flippant comment, and sometimes there's an awful lot of value in some of those comments: "I dunno, let's just dress up as the Hamburglar and walk around and see if we can get away with stuff." Yeah, that kinda works. Like, why not? I mean, professionalize it, make it corporate friendly or office friendly or whatever environment you're in. But sometimes some of those weird things that are just throwaway comments, there's a lot of value in those. So it doesn't have to look exactly like the floor walker does, but think about how you can develop something that uses the same principles or gets a lot of those good messages out.

[00:40:01] Cyril Noel-Tagoe: Yeah, and keeping it psychologically safe in all of that as well. Well, thank you Simon and Steve. It has been a really interesting, really eye-opening discussion, and I'm pretty sure that we could probably record about three or so podcast episodes just going through this. But I'm afraid we are coming to the end of this episode, so thank you both so much for joining us.

[00:40:24] Si Pavitt: Oh, thank you.

[00:40:25] Steve Dewsnip: Yeah, thank you again.

[00:40:26] Cyril Noel-Tagoe: And thank you to all of our listeners as well for tuning into this episode of Cybersecurity Sessions. If you enjoyed this podcast, please be sure to subscribe and like, or leave a review on your podcast platform of choice; we would love to get your feedback. You can also get in touch with us via Twitter, that's @cybersecpod, or by email to podcast@netacea.com. Thanks again for listening and see you again next month.
