Offensive Security – Jonathan Echavarria, ReliaQuest
How can you really know what havoc hackers could wreak on your systems? By challenging them to do it and fixing the exploits they discover, of course. In this episode of the Cybersecurity Sessions, Andy finds out what it’s like to be on a ‘red team’ tasked with hacking into an employer’s own systems by any means necessary, with lauded offensive security practitioner Jonathan Echavarria (ReliaQuest).
Key points
- The difference between penetration testing and offensive security
- The advantages of introducing a red team to any business
- How to apply red teaming practices across all stages of the tech lifecycle
- The ethical implications of ‘attacking’ your own organization
Speakers
Andy Still
Jonathan Echavarria
Episode Transcript
Andy Still 00:00
Hello. Welcome back for another instalment of the Cybersecurity Sessions, our regular podcast talking about all things cybersecurity with myself, Andy Still, CTO and founder of Netacea, the world's first fully agentless bot management product. This time, we're looking to move to the other side of the fence, from defense into attack. What better way to improve your defenses than to have some friendly attackers tell you all the weaknesses that they find in your systems? This is part of the nature of red teaming: challenge a team to attempt to discover problems and exploits in a system by attacking it. So today, we're joined by Jonathan Echavarria. Jonathan is currently an enterprise architect with ReliaQuest. He's here today to share with us some more details about red teaming, how that works, and generally the nature of this kind of offensive security work. So welcome, Jonathan, thank you very much for joining us today. Before we get into any more details, could you quickly introduce yourself for our listeners?
Jonathan Echavarria 00:54
Sure. Thanks, Andy. Yeah, my name is John Echavarria, like you said, you know, I'm an enterprise architect over at ReliaQuest. We are an open XDR platform that unifies your existing environment and provides security visibility into pretty much anything that you want. So it's a single pane of glass view, with the end goal of enhancing your own security posture.
Andy Still 01:11
Thanks Jonathan. Now, in your past life I believe you worked as an offensive security engineer. Now, I've known a few offensive security engineers in my time. But I think in this case, you had a very different meaning of the word offensive. Could you tell us a bit more about what's meant by offensive security, and how that differs from things that people may be more familiar with, like penetration testing?
Jonathan Echavarria 01:30
Absolutely. Yeah. So... I really fleshed this idea out a lot when I was working at Facebook, because I really saw the way that they used it, and the way that they applied offensive security principles really blew my mind. And I really want to push out this message that offensive security is pretty much a subset of security, but it applies principles of adversarial system design to everyday engineering problems, right? And when you start thinking of it like that, that means that offensive security can really be reframed and applied to any field of work, not even just specifically red teaming or penetration testing. There is a book that Micah Zenko put out, "Red Team: How to Succeed by Thinking Like the Enemy". I really liked the way he phrased this. And it's essentially that the team's goal is to challenge assumptions and bias and act as a devil's advocate. Really, this concept could be applied to anything, but it's most important to demonstrate impact and be flexible. Anything can happen when you start looking at systems from this perspective, and you can't have a concrete approach to it.
Andy Still 02:33
So I think it's a really interesting approach to take. It's very easy as a developer or development team, even an operations team, to only look at the problems that you think may happen, or the problems that you know about. And I think the idea of coming at that from the opposite angle is about trying to get away from groupthink, trying to get away from just easy solutions to this kind of problem. When you're involved in this, is it something where you're part of the team? Is it better to have a separate team doing this? Or is it better to have a third-party service come in and do this kind of approach?
Jonathan Echavarria 03:08
So realistically, in my mind, I think pretty much everyone can be doing this in some aspect, right? Like, there's advantages to having a dedicated red team internally that looks at your business, and they know in depth pretty well how the different pieces of the business integrate with each other. But I'd say that, you know, even an individual coder can apply the principles of offensive security, just by asking: how can the code that I'm writing be abused? A marketer can ask the question of, how can the marketing campaign that I'm doing be misinterpreted, right? And it's a matter of challenging your assumption that whenever I make something, everything is going to work. That being said, though, you know, hiring an external firm can also be really advantageous, especially when you start talking about challenging groupthink, right? These are people that have zero knowledge about your environment, approaching it in the exact way that an actual attacker would, and that can provide you another unique insight into it. But realistically, you know, in my opinion, every business and every individual is capable of doing it. Security is a team sport, and everyone is a member.
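To make that "how can the code I'm writing be abused?" question concrete, here is a minimal hypothetical sketch in Python. It is not from the episode; the directory path and function names are invented for illustration. It shows the same small feature written on the happy path, then rewritten after asking the adversarial question:

```python
from pathlib import Path

# Hypothetical example: BASE_DIR and both function names are illustrative only.
BASE_DIR = Path("/var/app/uploads").resolve()

def read_upload_happy_path(filename: str) -> bytes:
    # Assumes every caller is friendly: a filename like
    # "../../etc/passwd" walks straight out of BASE_DIR.
    return (BASE_DIR / filename).read_bytes()

def read_upload_adversarial(filename: str) -> bytes:
    # Same feature, written after asking "how can this be abused?"
    target = (BASE_DIR / filename).resolve()
    if not target.is_relative_to(BASE_DIR):  # Python 3.9+
        raise ValueError("path escapes the upload directory")
    return target.read_bytes()
```

The feature is unchanged between the two versions; the only difference is that the second was designed with hostile input in mind, which is the mindset shift being described here.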
Andy Still 04:16
So is this something that you would recommend that every business looks at? Or are there particular use cases, or a particular size or type of business, that you think it's more applicable to?
Jonathan Echavarria 04:29
Yeah, I really think that anyone can do it, right. But that doesn't necessarily mean that they have to go out and do a full-blown red team engagement. I mean, the ultimate goal is just to challenge assumptions and move the impact of it left of boom. And what I mean by that is, if you take a problem in an application or system or process, right, it's $10 to fix that problem in the design phase, and it costs $10,000 to fix that problem once it's in production, right? And it's important to not let these processes of looking at, you know, what could potentially go wrong with something, be a blocker, you know. As long as the business can foster an environment where it's okay to bring issues up and get decision makers and system designers into an adversarial mindset, that can really help enhance any business's innovation, because groupthink is really the largest hurdle to innovation within an organization. So everyone stands to benefit from it, regardless of whether you're spending $50, $60, $70k on a really in-depth red team engagement, or you're just getting everyone together in a room for a couple of hours and doing a tabletop exercise.
Andy Still 05:35
Yeah, it sounds really interesting, because I think there tends to be an image of red teaming as an external attack on production systems. In your experience, is this something that can be done right from, as you were saying, the design stage? Or do you find that it's still primarily done on production systems?
Jonathan Echavarria 05:57
I say do it in both, right? It's really weird to think about, because everything is in prod. Even staging and testing are in prod, right? Whenever I do operations, I really like to focus on production systems, because that's the most realistic assessment. It's the most accurate one. Staging and testing environments are just too perfect, and they typically don't reflect the actual environment, right. But as a red teamer, your goal is not necessarily just to think about, you know, individual technical vulnerabilities. Is there a SQL injection in this application? Is there a way for me to upload a card skimmer? Is there a way for me to bot something, right? You should take into account the entirety of the business logic. So the staging and testing environments are part of prod, if you will, and the entirety of the whole is prod as well. So I say focus on everything, right, and just make it realistic. It's really about the impact of the operation, of what you're doing and the things that you're looking at.
Andy Still 06:55
Yeah, and I think a lot of this is wider than just security, isn't it? This is a mindset that you can apply to almost any problem. I'm thinking about performance problems; it could be standard business problems as well. But it's having that idea right from the start of thinking about: how could someone take this, how could they use this differently? What different approaches could they take to try and bypass the happy path that we expect people to go down?
Jonathan Echavarria 07:20
Yeah, that's exactly it, you know, like the red teaming part. That's the super cool, sexy part that everyone wants to be a part of, right. But ultimately, everyone is a red teamer, everyone is capable of being a red teamer. You know, I really am a huge fan of tabletop exercises. You get all of your decision makers in a room, but also get people that are on the ground, you know, operating the systems, whether that's a sysadmin, you know, an SRE person or whatever. And just give them a scenario, you know, the common one is going to be, "Okay, someone executes a malicious document in accounting, what do we do?" Add injects into it, walk them through the process, and really challenge the assumptions that all of the systems are working, and that you're operating against an actual, sophisticated attacker, right? You tend to get a really in-depth analysis of where our actual holes are, because you're not just talking to the people that designed the system, you're talking to the people that are in it every day. They have two entirely different perspectives on what is going right, what is going wrong, and where the potential gaps are, and a lot of the time those are misaligned, until someone takes the time and effort to marry those two together.
Andy Still 08:29
Yeah, so taking that logically through, particularly when we're talking about production systems. Obviously, the extent of what you could do in offensive security is vast. Is there an ethical line here? Are there some things you wouldn't do? For example, a lot of the gaps could be around social engineering. Is that something where, as a red team, you would actively go down that road? Or would you draw a line somewhere?
Jonathan Echavarria 08:52
Absolutely. So ethics in red teaming is important, but it's also kind of hard, right? Like, the common counter question that people ask to "Oh, how do you do ethics in red teaming?" is that attackers have no ethics, so why should we operate with any, right? To go back to impact: you can demonstrate the same level of effectiveness without doing unethical or immoral things. My general guideline is that I like to avoid ridiculous emotional manipulation. I offer you a $500 gift card from Amazon? Okay, that's fine, right? But I can't reach out to you and say, "Hey, your child is in the hospital, and you need to sign and authorize these medical authorization forms in order for them to get treatment." That's really manipulative, that's really unethical. And it doesn't really make that much of a difference in the long term of the impact, right? Like, the goal is for you to execute this document. I can do the same thing with a gift card as I can with that authorization form, but one is going to be a lot more ethical than the other. There's the obvious things like: don't open a new line of credit with PII that you gather. Don't change your own pay rate. Right. But I think it's also important for all security teams to think about how the business can use them ethically as well, right? No red team should allow themselves to be used as pawns in a political chess match, because one manager thought the manager of another team slighted them, so they send the red team in as their attack dogs. That doesn't benefit the business in any way. So, you know, the ethics and the impact of operations really line up with each other as well. And, you know, being unethical just reduces the overall effectiveness of the red team.
Andy Still 10:29
Yeah. And I think it comes down to, as well, what you're trying to prove here. You're not trying to prove the failure of humans; we know humans are always the weak point of any system, don't we? So at a certain point, some of those things you probably assume will happen. And you look at what's the process in place for when humans do accidentally allow themselves to click on phishing emails, or to be socially engineered. What are the processes to mitigate that risk?
Jonathan Echavarria 10:55
Yeah. And it's important for a business to do that with impunity, right? Like, they need to be able to focus on that and foster an environment where it's okay to fail, if you will. But the whole goal is to make the business as a whole better, right? The best operation ideas that I've gotten as a red teamer have come out of conversations where I'm just talking through the implementation of a system with a person who designed it, or is working with it. When you come at them with the frame of mind of, "We're here to help fix things and make things better and, you know, solve your concerns" rather than, you know, "security is hard, security's bureaucracy, security is red tape, I'm here to make your life hell and, you know, give you nothing but problems," you tend to get a lot more ideas out of people, and people are a lot more willing to talk to you. And I get some of the best ideas from that.
Andy Still 11:46
Yeah, well, I think it's about taking into consideration that security is always a compromise, isn't it? It's always asking people to do things that they may not want to do. So it's about engaging with them to determine what would be the least-impact way, while actually protecting them as well. Because ultimately, if you're the victim of social engineering, and you make a mistake and accidentally transfer $10,000 to an attacker, then that's not something you want to do, and it's also not something you want to be classed as your fault. So if there's a way that you can come up with something that works for both sides, by getting them involved in the conversation rather than it being a 40-step process that someone else has put in place that they then have to follow, you get, you know, better engagement and a collaborative approach to problem solving. And I think this drives that, because you start with those kinds of tabletop exercises, and you start with getting them involved in the conversation to try and understand the nature of the problem and what the solution would be.
Jonathan Echavarria 12:45
Yeah, and I would argue that the debrief lifecycle of red teaming, and the way that red teams handle it, is really what separates good red teams from bad red teams, right? Like, in my opinion, a good red team should take ownership of their findings and do everything that they can to ensure that they're remediated. They should be working with the system owners throughout the lifecycle of the operation, because they'll give you that unique insight, right? But it also establishes good relationships with them, so that they don't see it as, like you said, a process or a burden of the cycle; it's ultimately for their improvement. And then again, that goes into generating additional ideas and, you know, getting additional findings. During that debrief cycle, you get everyone into a room, walk through the findings, and walk through the narrative of your attack. But it's important to not only point out, you know, the flaws of the system that you've assessed, but also to highlight the effectiveness of existing controls: the effectiveness of, you know, hey, what worked in this design? People like that, people get very receptive to it, and it doesn't make it feel so hostile, because that's really not how offensive security is supposed to be. It's offensive, but it shouldn't be hostile. Does that make sense?
Andy Still 13:57
Absolutely. What do you typically find is the relationship between red teams and the rest of the security team?
Jonathan Echavarria 14:04
It really depends on the kind of security culture that's been fostered, right? Like, you know, I've seen security teams in general that are just treated as absolute cost centers to the org, and they don't realize that, you know, the entire goal is to save them money over time. When it comes to the red team, you know, some really go for that air of mystique. And I don't think that trying to play that up brings much of an advantage, right. Like, ultimately, this is just another subset of your engineering organization, and it's important to not let your ego get too inflated from that, if that makes sense.
Andy Still 14:44
Yeah, yeah. I mean, it does, and I think it's the same kind of approach that we see right across IT: blurring the boundaries between different areas of responsibility, trying to identify as many problems as early as possible. Obviously, we're all trying to deliver the best possible outcome to our customers, or our customers' customers, depending on the type of product that you're working on. So yeah, I think it's that collaborative approach, and your way of talking about this as offensive security rather than a red teaming exercise is more of a mindset shift: getting the entire team to be thinking proactively about what can go wrong across all the different types of risks that we have to address. We've talked about the various approaches that you can take to identifying problems. Once you've carried out an approach, particularly if this is something that you do on the production system, and you identify a problem, what's next after that? What's the goal? How do you show the value of that exercise that you've undertaken?
Jonathan Echavarria 15:48
Yeah, once you've actually conducted the op, you start the debrief cycle. As far as when you're hands on keyboard, that's really where it's important to not be rigid, right; you need to be very flexible. As you're going through things, new challenges are going to arise, and people don't necessarily know what's going to happen. And you have to be able to shift that activity as quickly as possible. I always like to think about what is ten steps down from the actions that I'm doing, right? Like, the entire goal is to demonstrate the impact of every action that you're taking during the execution stage of the operation. Once you do the debrief, you deliver everything: here's what worked, here's what didn't work, here's our findings, here's our suggestions for remediation. The next step is that the red team should be following up with the owners of those findings. You know, "We found a SQL injection vulnerability that allows you to do this XYZ thing when you connect to the system," okay, get with the system owners, ensure that there is a mitigating control in place, and make sure that it works, right? And do that for every single finding. Through that process, you've really got an idea of the effectiveness of these new controls that you've introduced. But it's also important to do an entire general retest, right? It doesn't have to be as in-depth as the operation itself. But when you put all those new controls together, there's a possibility that you could have introduced some new logic that brings some new sort of risk or vulnerability.
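To illustrate the kind of mitigating control a SQL injection finding like that might lead to, here is a minimal hypothetical sketch in Python using the standard-library sqlite3 module. It is not from the episode; the table, function names, and injection string are invented for illustration. It shows the vulnerable pattern, the parameterized-query fix, and the sort of quick check a retest would confirm:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_vulnerable(name: str):
    # The finding: user input is spliced into the SQL string, so a name
    # like "' OR '1'='1" rewrites the query's logic.
    query = f"SELECT * FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def find_user_mitigated(name: str):
    # The mitigating control: a parameterized query keeps input as data,
    # never as SQL, which is what the follow-up retest should confirm.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

# A quick retest of the control: the injection string returns rows from
# the vulnerable version but nothing from the mitigated one.
assert find_user_vulnerable("' OR '1'='1")
assert not find_user_mitigated("' OR '1'='1")
```

The final point in the answer above also applies here: each fix like this changes the system, so a lighter general retest over the combined set of new controls is still worthwhile.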
Andy Still 17:18
And how receptive do you usually find people are when you've found an exploit in a production system, and you're going to them, maybe saying "you need to take this offline, you need to disable this kind of functionality"? Do you find that they're usually on board with that? Or do you get a lot of pushback?
Jonathan Echavarria 17:32
I usually find that people are pretty on board with it, right? Like, no one wants to be the person that's responsible for something that went bad. And, you know, generally, people are pretty grateful for the identification of these things, because obviously they didn't know they existed, assuming you find something that does exist. And that's really where it's also important to be able to demonstrate the impact of that flaw, right? Like, if you do come across someone that says, "oh, you know, that's not a big deal, blah blah blah," well, that's the point of you being able to demonstrate: "here's why it is a big deal, and here's how." And that's a good technique for defusing situations.
Andy Still 18:08
Yeah, I mean, it's also about getting the message across that you haven't created this problem; this problem exists. You're saving them from something much worse. Okay. So before we wind up, if we've got people listening to this, what would be the one message that you really want them to take away about the value of offensive security?
Jonathan Echavarria 18:28
Yeah, in the financial world, they talk a lot about due diligence. If your organization was to acquire another organization, there'd be a process of looking at them from an adversarial standpoint: you're going to dig through their financial health, you're going to dig through their source code, and you're going to do everything you can to ensure that you're making a sound financial decision. Why not apply that to everything that your business does? If you do that, your business will operate much more efficiently, and it will incur significantly lower costs. And just start small. Start with tabletop exercises. Start with, you know, a very small-scope pen test, and work your way up. Everyone is a participant when it comes to offensive security. Allow that mindset to foster a culture where it is okay to challenge assumptions and challenge groupthink, and you're going to allow innovation to succeed throughout your org.
Andy Still 19:22
Thank you very much. Thank you very much, Jonathan, for sharing that with us. It's really, really fascinating. And also, thank you, Jonathan, for your offensive security stance. You wouldn't know this from listening, but this is actually the second time we've recorded this, because Jonathan exposed a flaw in our recording software, and so the first time didn't properly record. So thank you for bringing that mindset to our recording process.
Jonathan Echavarria 19:46
My pleasure, I can't help it, you know?
Andy Still 19:48
You just can't help yourself with it. Thank you very much for joining. And, as usual, if you've enjoyed this, please subscribe, leave a review, and send us feedback via the Twitter account @cybersecpod, or you can email podcast@netacea.com. We will be back with our next episode; we would love you to come and join us and listen again. Thank you very much and we will see you then.