Dr. Christoph Burtscher (AI Researcher & Author)

Season 3, Episode 4
21st November 2024

In this episode of Cybersecurity Sessions, host Andrew Ash, CISO at Netacea, is joined by Dr. Mark Greenwood, Netacea’s Chief Technical Architect, and Dr. Christoph Burtscher, AI researcher and author. Together, they explore the evolving partnership between humans and AI in cybersecurity, discussing machine-led defenses, ethical challenges, and the role of trust in AI adoption.

The conversation highlights the shift from human-led security to AI-driven systems and how generative AI is reshaping cyber defense. Christoph also shares insights from his upcoming book, Silicon Minds and Human Hearts: How to Navigate the AI Revolution.

Join us for an engaging discussion on bridging technology and the human element in cybersecurity.

Host

Andrew Ash

CISO, Netacea

Guests

Dr. Christoph Burtscher

Senior Visiting Fellow, Henley Business School

Dr. Mark Greenwood

Chief Technical Architect, Netacea

Episode Transcript

Andrew Ash: Hello and thanks for tuning in to the Cybersecurity Sessions podcast, season three from Netacea. I'm your host, Andy Ash, CISO at Netacea. For the new season, we've decided to invite peers from the world of cybersecurity to join us and discuss the issues that are affecting them and the cyber industry.

This week, I'm delighted to be joined by two guests, both of whom are doctors, so I'm in very esteemed company. First of all, I'd like to introduce my Netacea colleague and one of our resident experts in the field of data science and AI, Dr. Mark Greenwood. Mark, do you just want to do a quick piece about what you do here at Netacea?

Dr Mark Greenwood: Sure, I'm Chief Technical Architect at Netacea, so I work on developing our technical strategy and I work really closely with our tech teams on what we're building, how we're building it, and sort of where it fits in.

Andrew Ash: Mark's been with us for nearly as long as I have, which is a long time. So, expert in bot management, expert in AI, expert in building stuff that works, I would suggest.

Building stuff that works being the most important bit. We've also got with us today Christoph, Dr. Christoph Burtscher. Sorry, Christoph, I missed Dr. off; it's because I'm feeling very naked in such esteemed company. Christoph is a visiting fellow at Henley Business School. He's also an author, researcher, and transformation executive in non-compliant IT,

digital technologies and AI. The non-compliant IT piece worries me because I've also been in charge of our campus IT at Netacea. Christoph, why don't you tell us a bit about yourself?

Dr Christoph Burtscher: Yep. Hi everybody. As Andy was saying, I have a variety of different roles. Having worked for a few decades in industry, I now have the pleasure of being able to focus on my studying.

So I'm going to do my second PhD just because I can, not because I need to. And that one is focused on two topics in particular. One, non-compliant technology, or digital deviance: people using technology that's not approved. But secondly, lately I also do a little bit more on AI and deepfake detection, particularly the human aspect of it. In the past I was CTO, CIO, had a lot of different roles, but it was always focused on delivering change. That was always my element of focus and what I tried to do. And in that, one thing that became very clear, and it also applies to our discussion here, is that the human is very important when it comes together with processes, structures or technology.

And as we go through today, hopefully, you know, I can give a little bit of my insights into how machines and humans can work together.

Andrew Ash: Thanks Christoph. We really appreciate you joining us today. So I guess we're going to start off, let's just set the scene. We've talked about this before, Christoph.

One of the things that you're really keen to explore is the human-machine relationship, and it is the Cybersecurity Sessions podcast, so we'll kind of relate it back to cyber. How do you see the relationship between humans and machines evolving? Or how have you watched that happen in your career?

Dr Christoph Burtscher: In my career, having been also in the security space, what is very apparent is that, like with any technology, we are now dealing, with technology and AI in general, with something that is, A, changing continuously and, B, has positives and negatives. And as we go through today, hopefully we can explore some of those aspects of how humans and the machines work together to create real value for an organization, or sometimes prevent value destruction in the cybersecurity space.

It is important that we cater for both. We make sure that we understand some of the opportunities and leverage those, but also make sure the challenges that are happening are being addressed. Be that kind of cybercriminals or other kind of aspects coming into the mix that we didn't have before in the space of traditional technology.

But it's not all bad. Yes, technology will change our lives, but it doesn't mean it has to be for the worse for all of us.

Andrew Ash: Yeah. And on the challenges, I see a few different ones. One is attacker AI, so offensive AI, being used to leverage ever more complicated attacks against targets.

There's also, and this is really the crux of what we're talking about, the way that humans adapt to using AI in cyber, that's defensive AI. The sociological part of that, which is potentially pushback: actually, it's my job, I don't want a machine to do it, you're coming here taking my role, nameless object.

But also how you actually deploy that, and the importance of efficacy and ethical use of AI. So I think there's lots of challenges, but obviously the opportunities are vast and worth, you know, worth the time that I think most companies are putting in at the moment to leverage AI.

In terms of that human-AI relationship, you've introduced me to a new word, which, with two doctors on the call, is not surprising: the human-AI dyad. Is it dyad? Dyad. Yeah.

Dr Christoph Burtscher: For me, that is how the human and the machines work together. And quite often I call it a partnership, because it can only be successful as a partnership. It's not one or the other. I think we have the one extreme of purely human, where everything is manual, through to the other extreme where everything is run by the robots, what's called machine led and machine governed.

I think both extremes are kind of a dying breed.

And that means everything in the middle. So things like, what is human led, but machine supported, or to some degree, even as we see also with some of the Netacea products, more machine led and human supported, and even then kind of machine led and human governed. I think that relationship between the human and the machine is something that is changing now through the advent of generative AI, particularly, and another element that we probably can explore is I think that relationship is a dynamic relationship. It's kind of constantly evolving and changing.

Andrew Ash: In terms of, sorry, just to, just a quick definition. Can you define dyad?

Dr Christoph Burtscher: A dyad is basically when two things come together; it's two opposites of a continuum. And for me, that's why I put it on a continuum within that dyad: there's a position for every combination of a task, technology and the human that is the right one to create optimal value.

It's not, I don't think it's one or the other. It's always a combination or most likely a combination. We need to define what's the best combination.

Andrew Ash: If we look at how security tools have worked previously, so, you know, there's a long legacy of security tooling, we're really looking at human-led decision making, calls to action, and then humans explaining what's happened afterwards.

So if you think about a general security incident, something happens, and that information is surfaced in tooling. But generally, I think a lot of the complaints, or a lot of the challenges with, you know, the kind of SIEM tooling that's out there, is that people have to write really complicated queries to get information out.

They then work out what the problem is from that information, and then they take an action. And then when they're asked, maybe weeks later, what actually happened, they then go back through the evidence and present that back to the stakeholder who's asked for that. So that kind of, that is a traditional way of working and there's automation in there.
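To make that pain point concrete, here is a minimal sketch of the kind of hand-written log query and triage work being described in that traditional workflow; the export file, field names and threshold are hypothetical, not taken from any specific SIEM or from Netacea.

```python
# Illustrative only: a manual query an analyst might run against exported
# authentication logs. File, field names and thresholds are hypothetical.
import pandas as pd

# Load a (hypothetical) export of authentication events
events = pd.read_csv("auth_events.csv", parse_dates=["timestamp"])

# Hand-written filter: failed logins in the last 24 hours
recent = events[events["timestamp"] > events["timestamp"].max() - pd.Timedelta("24h")]
failures = recent[recent["outcome"] == "failure"]

# Group by source IP and flag anything above an arbitrary threshold
suspects = (
    failures.groupby("source_ip")
    .size()
    .reset_index(name="failed_attempts")
    .query("failed_attempts > 50")
    .sort_values("failed_attempts", ascending=False)
)

print(suspects.head(20))  # the analyst still has to interpret this and decide what to do
```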

We can talk about SOAR, and we can talk about XDR and all these things. But they're really the continuation, it's a continuation of human-led effort. Mark, what do you think? In terms of the use of AI in tools like Netacea, bot management, but also many others, where would you say we are?

So the options, human led, machine supported; machine led, human supported; machine led, human governed; machine led and governed. Where do you think a lot of tooling is at the moment?

Dr Mark Greenwood: It feels like, especially in cyber security, we're in that sort of machine led, human supported sort of area where patterns and attacks are identified and surfaced to some human to then interpret, put in business context, and sort of act upon, I suppose, and mediate.

So it feels like we're very much there at the moment. I think, obviously with the advent of generative AI over the last couple of years, there's been a huge explosion in applications of that technology. I'm curious as to why that is, and where we're going to see the same sort of breakthrough in the cybersecurity realm, and where we see that application of that technology a bit more widely, I suppose.

But it feels like that's kind of where we are at the moment. Yeah.

Dr Christoph Burtscher: And I think one of the drivers for that is obviously the emergence of generative AI, and the tooling basically being available. But I think there's also another aspect that it brings with it, which is what I call the multiplicity of agency, where in the past it was all about the human.

So we basically define the goal, we then define the steps, how to get there to achieve that end, then reflect on it, then we implement it and then optimize it. I think now with generative AI that has changed: the machine is also getting agency, where the machine itself, within the boundaries that we are giving it (as you mentioned, Mark, that human-governed element is not going away, so the maximum is level four in that sense, machine led, human governed), starts to define its own objectives, how it achieves them, and reflects on it.

And, as you mentioned, I think that's particularly relevant in the cybersecurity space, because the attacks get more sophisticated, as you mentioned, Andy, but that also means we as organizations need to get more sophisticated in how we react to them. And our way to react to that is with the use of generative AI.

Why is that the case? A, because the attacks become more sophisticated and clever, but I think there's also an element of volume and frequency. And that is exactly what the machines are really good at, way better than we are. If you give me a terabyte of data, it takes me forever to get through it. If you get a generative AI to find a pattern, it does it in a few seconds, because that's exactly what it's good at.

And again, I think that's the important bit in that dyad between the human and the machine: use the right one, human or machine, for the right thing.
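As a rough sketch of that "machine led, human governed" multiplicity of agency, the loop below lets the machine choose its own next objective and action, but only within boundaries a human has set; the boundaries, actions and decision logic are hypothetical stand-ins, not a description of any real product.

```python
# Illustrative only: a machine-led, human-governed loop. The machine chooses its
# own next objective and action, but only within human-set boundaries.
HUMAN_BOUNDARIES = {
    "allowed_actions": {"rate_limit", "challenge", "monitor"},  # outright blocking stays with humans
    "max_iterations": 3,
}

def choose_objective(observations: list[str]) -> str:
    # The machine sets its own next objective from what it has observed so far
    if any("login failures" in o for o in observations):
        return "slow down automated login attempts"
    return "keep monitoring"

def act(objective: str) -> str:
    # The machine picks an action for its objective; the governed boundary is
    # enforced before anything is applied
    action = "rate_limit" if "login" in objective else "monitor"
    if action not in HUMAN_BOUNDARIES["allowed_actions"]:
        return f"{action} deferred to a human (outside governed boundary)"
    return f"applied {action}"

observations = ["spike in login failures", "new user agents appearing"]
for i in range(HUMAN_BOUNDARIES["max_iterations"]):
    objective = choose_objective(observations)
    outcome = act(objective)
    observations.append(f"iteration {i}: {outcome}")  # reflect: outcomes feed the next choice
    print(objective, "->", outcome)
```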

Andrew Ash: So I was asked recently what I think is the next thing in AI for cyber, and my thought is, and this is kind of happening now, that AI will become a prompt for humans.

So when we think about Gen AI, we think about using prompts to get good results from the LLM. So, you know, you could imagine that prompt engineer might be a job title in the not-too-distant future, in the same way that nobody once thought search engine optimization might be a role; it might well be that prompt generation engineer is a job that we see.

But it strikes me that in cyber, the roles are slightly reversed because of that terabyte of data, because you cannot eyeball that; you cannot glean any insight from looking through what are essentially logs, which aren't really readable even if they were small. The AI can do that; whether it's ML like we use or whether it's gen AI, it can pull insights out, but essentially it passes them to a human as a prompt to actually act on. And I think that's where a lot of tooling is getting to. I don't think we're quite there; it's certainly where Netacea want to be. I think that would be fair to say, Mark.

Dr Mark Greenwood: Absolutely. Yeah.

Andrew Ash: Yeah, that's where we're aiming.
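A minimal sketch of what "AI as a prompt for humans" could look like in practice, turning machine-detected findings into a plain-language call to action; the Finding fields, confidence scores and wording are hypothetical, not Netacea's actual output.

```python
# Illustrative only: summarising model findings as a short, human-readable
# "prompt" for an analyst to act on. Fields and wording are hypothetical.
from dataclasses import dataclass

@dataclass
class Finding:
    source_ip: str
    behaviour: str       # e.g. "credential stuffing"
    confidence: float    # 0.0 - 1.0, as scored by the detection model
    events_seen: int

def to_analyst_prompt(findings: list[Finding]) -> str:
    """Summarise model output as a plain-language call to action."""
    lines = ["Suggested actions (machine-led, human-supported):"]
    for f in sorted(findings, key=lambda f: f.confidence, reverse=True):
        lines.append(
            f"- {f.source_ip}: {f.behaviour} suspected "
            f"({f.events_seen} events, confidence {f.confidence:.0%}). "
            "Review and approve blocking if appropriate."
        )
    return "\n".join(lines)

print(to_analyst_prompt([
    Finding("203.0.113.7", "credential stuffing", 0.93, 4210),
    Finding("198.51.100.4", "scraping", 0.71, 880),
]))
```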

We have a lot of that already in terms of insight into what's actually happening on the customer's website. What do you think the challenges are, though, to go from that kind of multiple agency, as you describe it, Christoph, that multiple agency piece where you have humans basically being the arbiter of the data, or arbiter of the intelligence that's come out,

through to it being machine led and governed.

Dr Christoph Burtscher: The challenges are manifold and come back, to some degree, to the dyad, because there are technical challenges and there are human challenges to it. So the technical challenges: how do you build those kinds of systems?

What technologies do you use? Because even LLMs as a kind of model are changing, to multimodal or very specific versions, but also an LLM is not the answer to everything. So there will be different models that have better usages. So I think there's a technical challenge to find the right technology to support that kind of multiple agency.

I think the bigger challenge, and I always say this when I talk to CIOs and CTOs, is that technology is a social science, not a technology science. And that means, for me, the human in that equation is the most difficult challenge that we need to help with.

And that means, as you mentioned at the start, Andy: who trusts the system, basically, to be told what they need to do? Having a piece of software telling you this one is right, or this is wrong, or you need to do this and that. Building that acceptance and that trust in the technology is very important.

I think that's what many organizations will struggle with at the beginning. They might have found a technical solution, but then making sure that it really sticks and people are using it and trusting it is, I think, a big challenge that we need to get right. And that goes across the board. But I think making sure that we help the human to address that challenge is important.

And that might be training, helping them to understand, but also helping with some of the ethical concerns, which we can cover in a second. But I think that human aspect is still the weak link in that overall journey that we need to get right. But it's not unsolvable. We have proven in the past that we can improve our information security and software security,

and we will need to take the same step forward again with the emergence of generative AI.

Dr Mark Greenwood: If you think of the sort of journey we've had with the adoption of generative AI, just as a model of adoption, I suppose, of these kinds of technologies, it's interesting that availability seemed like one of the major tipping points.

Like when people could get their hands on this kind of technology, interact with it. It's not perfect, you know, there are errors, there are hallucinations, things like this, but people have worked with the tool via a feedback cycle to sort of understand those, like where that tool fits into their stack, where it can help them go faster, where you should apply more rigor to the follow-up decisions that you're making.

It's an interesting model of adoption and I think that has helped build that trust. It's not just it being right all the time, it's understanding where and when it's wrong and when to just analyze a bit further. It'll be interesting to see that sort of play out in the cybersecurity field, I think, as we start trusting these agents a bit more.

Dr Christoph Burtscher: Absolutely. And I think it's definitely a model that we see more and more, that kind of iterative adoption. You basically try it out, see how it works, what doesn't work, what the accepted risk is. And I think that becomes even more important in the security space, because in the security space the impact of getting it wrong is huge.

And so therefore, organizations using tools and platforms like the Netacea platform, which have already gone through that process of establishing what works and what doesn't work, might well have a big shortcut in addressing some of that risk; they don't need to take it, because they don't need to play around with their own bot management.

They can buy one that is already working, and adapt it to their needs if they need to. And I think there's another element to it, which comes with more widely available technology: not only are we learning in the organization, others learn too. And creating that network of learning is important.

And again, I see that particularly in security. So let's say we spot a pattern of a sophisticated attack emerging in the morning in APAC as a region; we know that will flood over and go through the globe to some degree. And being able to prevent that same experience somewhere in Europe or in America, with the knowledge that you have already gained. Again, in the past, we would have a security operations center that would put calls in place and say, guys in America, be careful now, this is happening here in APAC, it might happen there too.

I think with generative AI built into our security and our bot management platforms, we can do that automatically. It's, I think, a big difference between very traditional, old-school cybersecurity platforms and the new ones with generative AI built in: that prevention becomes something that is not dependent on the human.

We can make it dependent on the human, but I think very often we just don't have the time, and therefore we get the machine to automatically put preventative measures in place.
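A minimal sketch of that automatic propagation idea, where a detection confirmed in one region pre-arms the others without waiting for a human call-out; the region names, rule format and publish step are hypothetical, not any vendor's actual mechanism.

```python
# Illustrative only: propagating a detection learned in one region to enforcement
# points in other regions. Regions, rule format and publish_rule are hypothetical.
from datetime import datetime, timezone

REGIONS = ["apac", "emea", "americas"]

def publish_rule(region: str, rule: dict) -> None:
    # Stand-in for pushing a rule to that region's enforcement layer
    print(f"[{region}] deployed rule {rule['id']} at {rule['deployed_at']}")

def propagate_detection(origin_region: str, signature: str) -> None:
    """When a pattern is confirmed in one region, pre-arm all the others."""
    rule = {
        "id": f"auto-{abs(hash(signature)) % 10_000}",
        "signature": signature,
        "origin": origin_region,
        "deployed_at": datetime.now(timezone.utc).isoformat(timespec="seconds"),
    }
    for region in REGIONS:
        if region != origin_region:
            publish_rule(region, rule)

# A pattern confirmed in APAC overnight is pushed ahead of the attackers
# reaching Europe and the Americas.
propagate_detection("apac", "burst of login attempts rotating residential proxies")
```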

Andrew Ash: I mean, to me, the adoption model for Gen AI looks a little bit like SaaS adoption plus. So SaaS adoption is really straightforward, and it was very easy to sign up for OpenAI, and the Copilot piece, you know, it just appeared one day.

You couldn't stop it. It was... suddenly everybody, if they weren't using it at work because it was blocked, they were using it at home, and suddenly you've got this mass adoption. And you're absolutely right, that feedback loop into the producer of the model and the SaaS product just helps build that product out.

And that's what SaaS businesses want. So really good SaaS businesses look at engagement and engagement is the most... and so basically you need something immediately compelling. It was immediately compelling. You then want people to use it, and it was easy to use and free, and then you want them to tell you what is wrong, and in this case, help train the models,

you know, and that's exactly the pattern that was followed by OpenAI. There's a certain genius in the number of signups that they got in the first six weeks.

Dr Christoph Burtscher: Yeah. And again, it's a great example: even as an organization, if you then want to use technology to control and govern some of that, to manage it to the risk level that you're prepared to take, you cannot. Who has figured out how to control what data goes into the OpenAI client that they're using? A really difficult thing to do.

So what do you then need to do as an organization to manage that? Look after the second part. If you can't do it through technology, you have to do it through humans. And that means you need to help them to understand. So how risky is it if I am a software developer and I put my company's software code, which describes my IP, into Copilot and ask it to make it better or more performant?

It's potentially quite a big risk for the company, and in many cases that can only be controlled by helping the developer to understand: if I put it in there, I might leak the IP or the data might go out there, so I shouldn't be doing that. And I think that's an important part when we look at generative AI in general, but especially also in the security space: making sure we protect our data properly.

Very often, technical controls don't really work, so we need to make sure that a compensating control is put in place through the human, almost.

Andrew Ash: Yeah. And the story that kind of underlines the point I'm going to make: I was at a round table and we were discussing the adoption of Copilot.

So, Microsoft's AI assistant. And the promise of that is such that everybody will have their own personal assistant, right? That is basically the promise. It'll sit on your inbox, it'll sit across all your documents, and basically I won't ever be late for a meeting again, I'll never miss them, you know; it'll prompt me and give me information that's going to be valid for the time that I'm about to spend.

And that is a really big driver for lots of individuals within businesses who are potentially tech-forward-looking. And I was at this round table and basically not one single executive around that table felt confident in turning it on, based on not knowing what was stuck in SharePoint, what documents they had that were not tagged.

And the irony is that you actually need AI to go back across your document store to catalog and categorize all of the data in there, because you can't do that anymore; there are too many documents. It was telling to me that the driver was to use AI and the solution was to use AI.

And there are lots of businesses out there now making quite a lot of cash going through SharePoint, GitHub, et cetera repositories and essentially cataloging them. And maybe you don't want all this employee data public. Yeah. It'd take you a million years to search it manually.

Dr Christoph Burtscher: Absolutely. That is about protecting the downside, not having increased risk with AI. But I think there's also taking the positives from it. So I'm sure you at Netacea are using NLP, natural language processing, to identify what threats are out there based on the huge amount of events you definitely will get.

So I think using that technology for the positive is also a benefit that it has. But as you say, it is important to protect the downside too. And that means we need to look after the technology and the humans.

Andrew Ash: So I think this is a good point to move on to the ethical implications of AI, because part of this adoption, part of the human challenge, is efficacy

and trust. And I mean, first off, do you think, Christoph, that companies are doing enough in this space to make our AI ethically sound?

Dr Christoph Burtscher: The thing is, I don't think so. I think there's a lot more to do because for me, ethics in AI kind of has... ethics was always kind of an important element in any organization, but I think with AI, some of it becomes even more important.

So what a lot of organizations have already caught on to is, for example, the bias in the training data. But for me, that is only one part of ethics; there are so many other things. So that's about, for me, the fairness of making sure the data is non-discriminatory, but also making sure that there is no bias in the training data.

But you mentioned trust. So for me, that's about explainability. I think organizations need to do way more explaining of how they use that AI and what it means to me as a consumer, for example. Privacy concerns in the sense of: is personally identifiable information being used, and if it is, then that needs to be protected and safe and secure.

And it also needs to be transparent, coming back to that comment of who governs that relationship between the human and the machine. It needs to be clear who makes the decisions, and that plays a little bit into that explainability. So in short, there's a lot more we as organizations need to do across a whole range of ethical concerns.

But I think it's also something that needs to go through the whole life cycle of those solutions. It's not just training the model and then forgetting about ethics; I think there's a lot more that goes through the whole life cycle. And we know those systems get adapted and get changed. The software changes and the data changes.

And every time we need to almost go back, not necessarily to the full drawing board, but we need to reconsider: is it still unbiased? Is it still not harmful? Is it still understandable? It's not just a one-time effort to do the ethics right; it's a continuous effort that we need to get right.

And also, what happens is that the requirements are changing. So what we find acceptable, where AI can be used, is now different than it was two years ago, and that means all the organizations need to adapt accordingly. That is the same, for example, for security in a world where we get more and more generative AI-enabled, very sophisticated attacks.

I think we as a society might become more accepting of using generative AI to prevent some of them. So I think there's an element of our requirements as a society and humanity changing over time based on the environment that we live in.

Andrew Ash: Yeah, I agree that the kind of offensive AI we're seeing at the moment is a continuation of a theme.

So phishing attacks, the use of deepfakes. As they start to have more impact, yes, I think we will have more appetite. Appetite for risk is a very important part in cybersecurity. If the attack is such that it is unstoppable by current means, then the defensive AI will move forward much more quickly.

I think that for the attackers, for the people who are generating this, it's all about return on investment. So, yeah, cost. It costs some money and effort, probably not enough, I would suggest, and that's why we see so many cyberattacks. But it costs some money and effort to put together an attack. If that attack is being prevented by today's cyber tooling, whether that's bot management or DDoS or whatever it might be, phishing attacks, then if the return on investment is enough, the attacker has a reason to upskill into, and essentially adopt, AI, or the next most complicated thing. Attackers are, in my experience, reasonably lazy, in that, you know, if something worked once, they'll do it again. If it's not working, they'll try that somewhere else. If that doesn't work, they'll change the attack. And basically, it's just a steady progression to a more and more complicated attack surface.

So, you know, if you can attack a website without having a proxy involved, great, because that proxy costs money. If you need a proxy, then you go and buy a proxy, you have part of the proxy farm. Great, a little less ROI. If that starts getting blocked, then maybe you go to a residential proxy farm,

and that costs even more. And then you go to a very fresh, brand new residential proxy farm that nobody's got the IPs for. So, you know, it's an ongoing, it's an arms race, essentially. But it's driven by the return for the attacker, you know, not anything that the defense is doing.

Dr Christoph Burtscher: Exactly. And I think platforms like Netacea can help in that arms race in two ways. One, it's a way of managing your cost as an organization to defend yourself, because otherwise you would just chuck people at it, and that is very expensive expertise you're chucking at it.

And secondly, by using a platform like Netacea, you get to the stage where the patterns that are emerging are identified not just from one client, but across other clients, and they're identified much faster. So there's an element of it being almost the only way to react to that generative AI automated attack, by having the same kind of response, because if you don't, you will always be in a losing position to some degree in how you react to it.

Andrew Ash: Yeah, no, exactly.

So in terms of overcoming challenges. You know, we've talked about the ethics and, absolutely, adoption, human trust, et cetera. How do you see us overcoming those challenges?

Dr Christoph Burtscher: There is... the organization has a variety of actions it can take. And one of the things is looking after the human aspect, the user behavior, because I think there's still an element that, however long the human is in the process, there will be a weakness in the human.

We all make mistakes. That doesn't mean machines don't make mistakes, but theirs are programmed mistakes; they might not always know them, but a human does make mistakes. We also get fatigued, and in security we might get new security training and then not be exposed to phishing, but a year later, who will run us through training again?

So if you do that constantly, you get fatigued by it. So I think that human aspect is an important element of it. And I would take that even wider. When you look at governance, which was one of my first focus areas in my first doctorate, I think an organization needs to capture both coercive governance and enabling governance.

And what I mean by that is, there needs to be a balance. On one side there are the norms and assumptions: really telling people and advising them what the right behavior is, what they shouldn't be doing, and what happens if they don't behave accordingly. But more importantly, I think there needs to be an element of empowerment: being able to help people get to a stage where they can make better choices about the right action to take. And that means being able to say, okay, I get an email and it suddenly becomes very obvious that it's a phishing attack. That is something we can help people to understand, and we need to help people to understand that continuously.

So I think that element of kind of empowering them is important. And it comes back to your point, kind of Mark, of that adoption. I think we as organizations have to have a duty of empowering people to try those kinds of things out, but we need to give them a way of trying them out in a secure space

and manage that risk in there. And that is what we can do as organizations of enabling that adoption and trialing it out. And as mentioned before, if we can do that in a safe environment, then that's even better.

Dr Mark Greenwood: Yeah, I think on that feedback loop, it's that agency that needs to be deployed, right?

We've talked a bit about security fatigue; I suppose in cybersecurity teams that'll be alert fatigue, where we've got lots of false positives or we're looking into a diverse range of events all happening at the same time. I think where we're going to see adoption is in cutting down that noise, like just dealing with a lot of this stuff in an automated fashion, maybe still raising those edge cases and keeping the humans in the loop, and being transparent and explainable in the decisions that we've made,

so that things can still be sort of dug into. But I think, so long as we're making people's lives and jobs easier, and achieving that 80 percent plus of value for them, we'll see more adoption, more feedback, and sort of more rollout of this tool, not just Netacea, but also other sort of tools in this space as well.
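A minimal sketch of that noise-cutting triage, where high-confidence alerts are handled automatically, edge cases are escalated to a human, and every decision keeps an explanation that can be dug into later; the thresholds and alert fields are hypothetical.

```python
# Illustrative only: triaging alerts by model confidence. Routine cases are
# handled automatically, edge cases go to an analyst, and the reasoning is kept.
AUTO_ACTION_THRESHOLD = 0.95   # confident enough to act without a human
ESCALATE_THRESHOLD = 0.60      # below this, treat as probable noise

def triage(alert: dict) -> str:
    confidence = alert["confidence"]
    if confidence >= AUTO_ACTION_THRESHOLD:
        decision = "auto-mitigated"
    elif confidence >= ESCALATE_THRESHOLD:
        decision = "escalated to analyst"
    else:
        decision = "suppressed as noise"
    # Record the reasoning so decisions can still be dug into later
    alert["explanation"] = (
        f"{decision}: confidence {confidence:.0%} against thresholds "
        f"{ESCALATE_THRESHOLD:.0%}/{AUTO_ACTION_THRESHOLD:.0%}"
    )
    return decision

alerts = [
    {"id": 1, "signal": "known scraper fingerprint", "confidence": 0.99},
    {"id": 2, "signal": "unusual checkout pattern", "confidence": 0.74},
    {"id": 3, "signal": "single malformed request", "confidence": 0.20},
]
for a in alerts:
    print(a["id"], triage(a), "-", a["explanation"])
```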

Dr Christoph Burtscher: And you make a good point: there is the benefit of using that AI to get rid of the noise and really get the human to focus on what's important. So out of the 10,000 alerts, there might be one that's really critical to be dealt with. So that's the aspect of using the human for what they're really useful for, and the machine for what it's really useful for.

I think there's another aspect, if you take that a step further, and I think Netacea does that to some degree too, which is then tailoring the volume of actions that need to be taken to what's required. So being able, for example, to generate the content of that training with AI: I have the following alerts, I found a pattern suggesting a particular user might have a particular blind spot in knowledge that we need to fix.

Being able to create the right content and provide it in the form that addresses that need, I think that's for me the next level of using generative AI: not only detection, figuring out what needs to be done, but also then putting actions in place to address that for the future.
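A minimal sketch of turning a detected blind spot into a tailored training request; the brief-building function is illustrative, and the llm_client call at the end is a placeholder, not a real API.

```python
# Illustrative only: composing a tailored training request from detected alert
# patterns. The llm_client mentioned at the end is a hypothetical placeholder.
def build_training_brief(user: str, blind_spot: str, recent_alerts: list[str]) -> str:
    """Turn a detected knowledge gap into a prompt for generating targeted training."""
    alert_summary = "; ".join(recent_alerts)
    return (
        f"Create a short, practical security refresher for {user}.\n"
        f"Observed blind spot: {blind_spot}.\n"
        f"Recent related alerts: {alert_summary}.\n"
        "Keep it under 300 words and end with one concrete action to take today."
    )

brief = build_training_brief(
    user="a.smith",
    blind_spot="recognising invoice-themed phishing emails",
    recent_alerts=["clicked simulated phishing link", "reported genuine email as phishing"],
)
print(brief)
# A hypothetical generation step might then be:
# training_text = llm_client.generate(brief)
```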

Dr Mark Greenwood: And describing, I suppose, these events as well, like what is it that happened and what do we think was the ultimate aim and what, like, what value have we provided in stopping that behavior, I guess. It's interesting.

Andrew Ash: Yeah, I mean, in terms of the requirement of trust. Going from machine led, human supported, machine led, human governed, to machine led and governed.

I would say there is an exponential difference in the amount of trust required to go from the kind of place we are now, machine led, human supported and governed, to actually just saying, okay, well, that box in the corner there, the Borg, that's going to look after you for the rest of your life.

That's quite a steep difference, and I think that's where the efficacy and the ethical considerations are massive. Just very quickly, because we're a little bit short on time now, just a question for you: do you think that there is a place for governance? ISO have brought out an AI framework and compliance standard.

Do you think that is something that will help, or do you think that they will always be impossibly behind in terms of what should actually happen?

Dr Christoph Burtscher: They will always be behind, but it doesn't mean it's not useful. What makes it useful is having those frameworks. And especially when I look at it from an international perspective, I think the UK goes the right way and has a more principle-based approach to AI governance in general, which I think is the right way to do it.

Kind of to say, not about kind of how you necessarily do it, but about what is the outcome that you need to achieve and what are the principles that guide that? I think, personally, that's the right way to go in governance because you will always be behind otherwise. What I think ISO and other kind of frameworks can help with is kind of some interpretation of how you can achieve those principles.

And, I think organizations need experts kind of like you guys to help them to identify out of those frameworks, what are the elements they need to deploy to manage their risk and create the best outcome they can. And I think maybe as a closing kind of thought of that, for me, that's a systematic and systemic solution.

And what I mean by that is, it's a combination of, you know, the dyads that we had before of the human and the machine, but it needs to be for the right task. It needs to be with the right structures and the norms around it. And it needs to be in the right context, like the networks and regulations that we have discussed before.

So for me, that holistic way of looking at a solution is very important. It's not just throwing either people or technology at it. It's the combination that will either address the security risks that you might want to address through bot management platforms, for example, or create the benefits that you have by using those capabilities.

Andrew Ash: No, exactly. So we're coming towards the end of the podcast. It would be unfair of me not to talk about Christoph's upcoming book. So in his spare time, Christoph gets doctorates and in his spare time from that, he writes books. The busiest man in tech right now. Do you just want to quickly go through what it is?

Dr Christoph Burtscher: Very quickly: I'm currently writing a book. It's my second book; my first one was on business design. This one is entitled "Silicon Minds and Human Hearts: How to Navigate the AI Revolution". It's a different take on AI in general, but also, as we have today, focuses on that partnership between the human and the AI, and explores some of the concepts like agency and some of the history in slightly more detail.

But also, in slightly different ways, it's a book written for practitioners. For example, the history of AI is told through theater acts and scenes. So it's also a different type of book that I'm currently writing. I find that fun, and hopefully some of the knowledge and my experiences in there can help others too.

Andrew Ash: Sounds great. Can't wait to read it. So thanks to Christoph and Mark for joining us today. If you have any questions for us, please leave a comment if you're listening via Spotify or YouTube. Or you can mention us on our X account @cybersecpod or email podcast@netacea.com. Please subscribe wherever you get your podcasts and thank you for listening.

We'll see you next time for more Cybersecurity Sessions.


