Video: Building a Security Blueprint for the Ungoverned Agent Problem | Duration: 2728s | Summary: Building a Security Blueprint for the Ungoverned Agent Problem | Chapters: Introduction and Welcome (24.75s), Initial AI Reactions (93.4s), Security and Authentication (200.02s), AI Agent Security (264.6s), Managing AI Agents (364.935s), Agent Permission Management (562.76s), Agent Identity Management (704.07s), AI Security Governance (924.62s), Build vs Buy (1176.48s), Speed vs Security (1448.995s), Board-Level AI Accountability (1725.71s), AI-Driven Compliance (1869.1901s), AI Security Future (2023.095s), AI Threat Evolution (2247.1099s), Patch Deployment Challenges (2442.23s), Closing Thoughts (2556.16s), Closing Remarks (2615.03s)
Transcript for "Building a Security Blueprint for the Ungoverned Agent Problem": Alright, guys. Thank you for joining me for this topic. We're gonna be talking about all things AI, but specifically how we enable AI agents in a secure way at speed and scale for companies, and really building that security blueprint for the ungoverned agent problem. So again, looking forward to the discussion ahead. Before we get into it, I don't think either of you really needs an introduction, but I'd love you to give a quick intro of yourself and who you're with. Mark, why don't we start with you? Hi. I'm Mark Clancy. I'm senior vice president of cybersecurity at T-Mobile. And I'm Jason Loesch, the global CISO here at Cisco Systems. Fantastic. We'll get right into it, guys. I don't want to age you guys, but I think we're all of a similar age, or at least have been in this business a similar amount of time, and we're at a point where we've never seen what's happening in our industry before in our careers. With regard to AI agents and the use of AI across enterprises, businesses aren't asking permission to deploy AI agents. They're already doing it, and it's security's responsibility to try and enable that in a secure way. So the first question here is for both of you, and Jason, why don't I start with you. When you first started to hear about AI agents inside your organization, it had been kind of a slow ramp, I think, over the last several months, if not the last year for sure. The speed at which we're seeing agents across our environment has exploded. What was your initial reaction to this, how has that evolved today, and how has it evolved over the last month even? Yeah.
To that point, the challenge we have is that a lot of our employees and colleagues are using these in their personal lives, and they've probably ramped up a lot quicker than organizations have. So initially it was challenging, because we saw agents appearing in our environment and we were trying to use the traditional mechanisms we'd use for unauthorized software and activity across our environment, and it was really like playing whack-a-mole. And just like in the generative AI phase, we blocked things like ChatGPT initially, without knowing what the implications were. We've learned a lot and have tried to move at the speed at which the business is going, with digital twins and agentic usage now. But it's still moving a lot quicker, so we're having to really understand what exactly they're wanting to achieve, why individuals are wanting to bring AI usage into the environment, and what they're using it for, so that we can continue to tailor the right capabilities from a control perspective. Yeah. Jason, one of the things you said recently was that people are using this outside of work. Right? They're learning how to engage with and use AI agents and build agents, and they're bringing that behavior into the workplace, into our enterprises, because it feels normal and natural to do so. And so it's compounding, I'd say the problem is compounding, the speed at which AI deployment is happening at our companies. So, Mark, what about you? Couple of things. For me, I heard this, I'll call it a joke, earlier: in the model context protocol, or MCP, the S in MCP stands for security. Right? Here's another technology innovation where we're trying to bolt security onto a framework that didn't have it built in.
And where that's gotten really complicated is how you do the authentication and authorization. For us, we moved our workforce to being passwordless, so it's really hard for people to stuff credentials into these agents and make them work. We had to connect things like OAuth and make those work, which gives us some security enforcement points that really don't exist in the protocols as they stand now. Yeah, great point. I think throughout the day we're gonna talk a little bit about the authentication problem and the non-human identity issue, I should say. Mark, T-Mobile is obviously an enormous company. You've got a massive customer-facing business. At what point did you realize that, okay, the use of AI agents is here. It's not just a nice-to-have. It's a must-have for the business. And when did that become, or has it become, a real security concern for you at this scale? It has. It kind of exploded, like, for everyone. Right? And we were talking earlier about the weekend we all had when Clawdbot slash Moltbot slash OpenClaw happened. Right? And from a security perspective, we built a bunch of things to pay attention to the end-user compute environment. We built a bunch of things to look at the server and container infrastructure. And until these AI agents showed up, those universes didn't really cross each other. Now they do every day. And so from a SecOps perspective, how do you pay attention to these two different pieces and how they connect? And then there's the identity piece, where you have users using agents as themselves through OAuth, or they're getting access tokens to APIs and things like that. In the early days, when we were still pre-agent in the chatbot space, the kind of footprint was, you know, give me a GodMode API key and I'll make the thing work. That doesn't work for enterprises.
It might have worked for consumer, but it definitely doesn't work for large, complicated enterprises, and that's where we spend a lot of time getting the security of the agents to fit the overall access patterns that we want. Yeah. These agents are, as you said, authenticating through auth tokens and API keys, and it's interesting because it looks legitimate because it is legitimate authentication. Right? And that makes it really challenging to understand what these agents are doing. To both of you: we keep hearing security teams describe AI agents as the next big thing to manage. I don't wanna say the next big threat to manage, but the next big issue to manage within organizations. And I say not necessarily the next big threat because there's obviously a ton of value to come from AI agents, and really the future of how work will get done. In your role as a CISO at a large organization, what's the real problem, or the real issue, I should say, with AI agents across your enterprise? And what do you think most security programs are still missing? Jason, I wanna start with you. Yeah. No. That's great. And Mark obviously touched on the identity piece of this, which is extremely critical and foundational. When we look at any carbon-based life form, machine, or now agent, identity is key to understanding access and permissions and things like that. But what I think has really changed here is the introduction of a new class of actors in our environment that have non-human identities and are operating with valid credentials but behave very differently than humans. They operate continuously, at machine speed, and without the natural friction or judgment you and I would tend to apply. So the risk isn't just who the agent is or what it has access to.
It's really around how it behaves once it has that access, how quickly it can act, and then how broadly it can scale those actions, which is sort of a new frontier for a lot of us. That combination, the valid access plus the autonomous behavior at scale, is really what's creating exposure that most traditional controls weren't designed to handle, and it's something that's forcing us to quickly pivot and adapt our approaches and ways of thinking. Yeah. That's a thought I wanna come back to, Jason. Mark, I wanna give you a chance to answer and then come back to a point that Jason made. Yeah. I think we have insatiable demand, and everything we do is too slow. Right? We have kind of that challenge. From a security perspective, one of the things we look at is how we put some type of visibility and security layer between the resources and the agents themselves. So we started doing gateways at the MCP level with security built in. Right? And that gave us some ability, gave us some access control. But Jason really had a good point, which is that these things operate differently than humans. They don't feel constrained. They'll try and try and try. An early thing we saw was a senior executive who was trying to look at a piece of code and wasn't permissioned in the repository. And the agent tried hundreds of things to get around that and reach the code repository, which fortunately didn't work because they weren't permissioned, but it gave no feedback that, hey, you don't have access to this. It just kept trying. And operationally, I think that's part of the discovery journey we're all going through, because no one's completely figured this stuff out yet. Yeah. And that's a great point the three of us discussed prior to this: hey, we're all learning this together. I mean, we've got two CISOs at some of the largest organizations in the world.
And collectively, we don't all have the answers individually, but I think collectively there's a lot we can learn and answer together. And Jason, going back to your point, and Mark, I want your thoughts on this too: when you have this authentication that, I won't even say looks legitimate because it is legitimate, how are you managing that, or at least how are you thinking about managing it? I appreciate you may not have the answer today, but how are you thinking about managing that access, or at least about what good behavior looks like and what anomalous behavior would look like? Yeah. And to your point, some of this we're working through the way we do other areas we're thinking about, the different approaches, and it evolves daily, if you will. The running joke is, we'll just wait and see what happens next week, or what comes out, and we'll pivot. But I think there are a couple of considerations here, and Mark called this out too: how do we evolve towards task-scoped permissions instead of just broad role inheritance? We've had a lot of conversations about this. It's one thing to give an agent an identity, but we've seen this time and time again where a new employee joins and we clone the permissions of the manager or a peer. That doesn't work with agents. And to Mark's point earlier about an agent trying at all costs to complete the mission through different mechanisms, we have to be very careful about how we scope those permissions accordingly. And then separating analysis from execution, especially for high-impact actions, I think is important. And then, more importantly, there's runtime enforcement, so that we can evaluate behavior in context instead of just relying on logs or post-event analysis.
You know, by then the agents have already acted, and likely acted at speed and scale, and that's hard for us to unwind. Yeah. And Mark, you touched on this a little bit, but when an agent authenticates with a valid token and starts acting, you know, taking action on behalf of the business, or on behalf of a workflow the business has asked of it, do existing security programs even know it's there? And what's the gap that you're seeing as a CISO, or with the CISOs you're talking to about this problem? Yeah. I mean, so much of what we've built to pay attention to things is focused on the human, and now you have essentially a non-human actor proxying the identity. And so one of the big debates we're having is: do we create a unique digital persona in the identity plumbing for the agent, like a non-human-worker-for-Mark type of thing, or do we proxy Mark's access? That's part A. Part B is, Mark's permissioned to do a lot of things. Should the agent have those exact same rights and permissions, or are there certain things that are too risky, that we wanna descope? And how do you do that? So, you know, I was talking to one of the big data back-end platform providers, and they actually created a sort of subset of permissions, where if you're connecting as an interactive user, you have permission set A, but if you're connecting as an agent, you have a reduced permission set B, while using the same identity. So we're thinking through those kinds of pieces and how we descope functions. A lot of the original patterns were, like, you have read-only access to this thing. When we want to write, do you allow that broadly, or do you do it in particular use cases for particular platforms and systems and data types? Yeah.
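The "permission set A versus set B" pattern Mark describes can be sketched in a few lines. This is a minimal illustration under invented names, not any specific vendor's API: one identity resolves to a reduced permission set whenever the caller is an agent rather than an interactive user.

```python
# Minimal sketch (invented names, not a real vendor API): one identity,
# two permission sets, chosen by whether the caller is an interactive
# user or an agent proxying that user's identity.
from dataclasses import dataclass

INTERACTIVE_PERMS = {"read", "write", "delete", "share", "admin"}  # set A
AGENT_DENYLIST = {"delete", "share", "admin"}  # judged too risky to delegate

@dataclass(frozen=True)
class Principal:
    identity: str   # the human the session ultimately belongs to
    is_agent: bool  # e.g. derived from the OAuth client's metadata

def effective_permissions(p: Principal) -> set[str]:
    """Descope permissions when an agent connects with the same identity."""
    base = set(INTERACTIVE_PERMS)  # in practice, looked up per identity
    if p.is_agent:
        return base - AGENT_DENYLIST  # reduced set B
    return base                       # full interactive set A

# Same identity, different effective rights:
human = effective_permissions(Principal("mark", is_agent=False))
agent = effective_permissions(Principal("mark", is_agent=True))
assert agent == {"read", "write"} and "delete" in human
```

The design choice worth noting is that the descoping hinges on how the session was established, not on who owns it, which is exactly the distinction traditional role-based access models don't make.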
I think the thing we're thinking about too, Mark, and I'm interested in your perspective, is: in some cases, can we apply the traditional tooling and capabilities that we have? If we look at IdPs and PAM and policy-based access control, can all of these things be applied to agents, or do we have to go out and invest in a new set of tools on top of the existing ones? I think that's top of mind for everybody as well, when we look at what we've already invested in traditional capabilities to manage what we consider machine or human identities. How does that look? Because we're all budget-constrained. Right? And so I don't have all the answers at this point, but I'm just kind of thinking out loud there. Yeah. I mean, for me, building on that, I think the big one is that there are kind of two work modes. Right? There's me doing things as me, and I happen to use an agent to do that. Like, I had to produce a summary of all the security changes we made in the last five years for an executive brief. So I built a quick agent to go through, look at emails and PowerPoints and stuff like that, compile it, and turn it into a PDF. That's really different than my team doing a security ops function that a 24-by-7 team runs. Now we're gonna have agents do that work, and there isn't a single person who's doing that thing. And you really wanna have, if it were a traditional app, we'd have a service account or that kind of thing. Like, what's the right mix of those with agents where you have that piece? The other puzzle we've looked at is that we've handled this in the application space with service accounts, where we attribute service accounts to a particular business application. But we don't have 50,000 instances of that business app running at the same time, where you might have that with agents.
And so how do you also deal with the agents and what one of my peers calls the hire-to-retire cycle of these agents? And you might have a couple of agents who kind of need to get fired. Right? Like, how do you handle that when the identity is crossed? Yeah. Well, you have to put them on a performance improvement plan. Yeah. You gotta PIP them. Yeah. For ninety days at least. Yeah. And monitor them for sure. So it's interesting. Jason, just on your point about whether tools today will scale to meet the demand of non-human identity, it's an interesting question, and the answer for me is: I don't know. It's made me think differently, because knowing that an agent is there, knowing that a token existed, and knowing what application this agent had access to aren't very effective individually. Collectively, it's still the same problem we have with human identities. We still wanna know that and understand it. It's just caused security teams to have to understand that context far faster. Whether it's seconds or minutes, it's certainly not days and weeks, and it's an interesting challenge. Mark, for you, how are you thinking about building a security framework or security governance for AI adoption? I don't know if it's necessarily a framework per se, but are you evolving how you think about security controls for AI deployments? We have one. And what was helpful when I started thinking about this, and was asked to present it to the board, is that we actually don't have a single AI security challenge. We have multiple. So I broke my universe into four things. We have the big corporate initiatives using AI, like live translation in our messaging infrastructure for voice. Right? We have the departmental productivity thing, which is where I think a lot of the agentic pieces are coming.
We have all the AI that's built into every vendor platform we buy. And then we have the AI that's doing real operational things, the AIOps piece. And we looked at the control footprint for those. We're not doing the same things on each, because the environments are different, and there's a different risk tolerance, particularly for workforce productivity. We're gonna take on some more risk to get the benefit. We're completely unwilling to take that risk for things that are large, customer-facing efforts. Right? And so for me, it's also about partitioning what you do in which environment and which risk context, so you have the right control footprint. Yeah. I couldn't agree more. The way we're looking at AI-assisted tooling for developers is different than the way we're looking at it for production. But one of the things we're doing here at Cisco is we have an overarching governance that a lot of these feed into, so we can look at responsible AI, and the tool sprawl, because there are a lot of toys that people wanna get, and how we make sure we're understanding the token usage around all of this. Those are all gonna be important elements as part of the governance. But the way we're trying to structure it is not as a phase gate, but as: how do we have the right governance to help the business move faster? And I'll give one example. There were a lot of questions around an internal AI platform that we've built, and we got a lot of, well, we can't use restricted data in it, so we weren't gonna be able to use it for these use cases. Well, the governance team got together and said, based on these controls and capabilities, we can. So there are no excuses there for why we can't continue to move forward with those particular use cases.
So I think if you properly structure the governance, it actually helps you move a lot quicker and reduces barriers. I agree. There's also one complexity that's just real-world, where we have policy structures that don't map to the way the technology works, or to the way our technical enforcement capabilities work. Right? So we've ring-fenced certain types of data. Like, we're not using AI tools for it today. We may in the future, but right now it's out of bounds. And then how do we make our control footprint support that, so that people don't actually include some data set we don't want? And, you know, traditional DLP tools, which we've used in the office environment forever, don't work when you're at Claude at the command line. Right? Like, it just doesn't work. So we're trying to figure out how to recreate the analogs of those in our AI world, ones that work for those environments, so that we achieve our bigger risk-hedging objectives for the things we're not comfortable with yet. Yeah. Guys, I'm curious, and Jason, I'll start with you. Cisco is obviously an enormous tech company. Right? Lots of builders there, lots of people who wanna develop things. Within your security organizations, how are you thinking about build-versus-buy decisions? What do we build internally, versus what do we flip on, capability-wise, within existing investments we've made in our security stack? Well, when we say buy, is that buy individually or buy as a company, for us? Yeah. So, no, it's a great question and something we have been talking a lot about recently. And I think this is a problem we all face: with the various models and the ability to do vibe coding, everybody can become a developer now, and there are a lot of capabilities that are at everybody's fingertips.
So rather than going out and buying a platform for function X or Y, they can spin up an agent and vibe-code a portal or something for it. That's great, and I think organizations are embracing that. But that then goes back to the governance structure and the framework for doing that development, and making sure all the things we've been talking about are in place for the general population. Whether it's somebody in finance building a capability to go do financial analysis a lot faster rather than buying a tool, that's great, because they can spin that up quickly. But we don't want all of these things to sprawl across our organization without a lot of the things that we need. And then, do we treat a lot of this as internal build? Because you and I can now be builders. Historically, we haven't been able to be. So how do we put it through the same CI/CD pipeline, or a version of it, that our developers use? I think that's an important element when we're looking at at least the build aspect of this. As far as whether we buy a tool or not, as a tech company, we like to build things. We have a lot of engineers, so it's probably a different answer for us. But there are cases where we evaluate the capabilities of a commercial tool that can help expedite something we're doing. And then the long-term sustainability of that is a big consideration, at least for me. If I look three years from now, am I gonna have the right people to be supporting some internal homegrown capability, versus paying somebody the maintenance that I need? So these are all considerations when I'm looking at larger platforms versus one-off, niche things we can quickly spin up, if that makes sense. Yeah. No. Absolutely.
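The idea Jason describes, running citizen-developer output through the same CI/CD security gates as engineering code, might look like this in miniature. The check names and patterns here are invented for illustration; a real pipeline would call out to proper SAST, SCA, and secret-scanning tools rather than ad hoc regexes.

```python
# Toy illustration of one CI/CD security gate applied uniformly to
# engineering code and vibe-coded internal tools alike.
# Check names and regexes are invented; real pipelines invoke dedicated
# secret scanners and static analysis tools here.
import re

CHECKS = {
    "hardcoded_secret": re.compile(r'(api[_-]?key|token)\s*=\s*"[^"]+"', re.I),
    "god_mode_scope": re.compile(r'scope\s*=\s*"\*"'),
}

def security_gate(source: str) -> list[str]:
    """Return the names of failed checks; an empty list means the gate passes."""
    return [name for name, pattern in CHECKS.items() if pattern.search(source)]

snippet = 'API_KEY = "sk-live-123"\nscope = "*"\n'
assert security_gate(snippet) == ["hardcoded_secret", "god_mode_scope"]
assert security_gate("total = price * qty") == []
```

The point of the sketch is the uniformity, not the checks themselves: whoever wrote the code, finance or engineering, the same gate decides whether it merges.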
In my career in retail and media and entertainment, not tech companies per se, I always held the opinion: I don't wanna take on the technical debt from a small group of people who built this thing. Where are they a few years from now within the company? And then I'm stuck trying to manage this key workflow or this key tool. So I always looked at it through that lens: are we really in the software development business, or are we in the security business? Mark, how about you? So I think the line between building and buying has very much shifted. There's a lot more bias toward building things than buying. We're still buying things, of course. Right? But the boundary of where you make that decision has shifted for sure, for the reasons Jason mentioned. The other side of it is you've also gotta manage the incentive structure in the org, and Jason's point about supportability, I think, is huge. One of the techniques we're trying out right now is that for everybody who gets AI tools, their token dollar budget only gets reset when they check in code to their code repo, even the people in finance and HR who are using these tools. So there's a lot of learning curve for them to figure out how to do those things. The good news is the AI tools will help them do that. But also, we wanna have that institutional preservation of what this stuff is, so it becomes more supportable. And then we can look at it in aggregate and do all our security testing in the pipeline we've already built. For teams that aren't traditional developers, I think that's super important. Yeah. We talked about this when the three of us were together. I think CISOs today are being pressed, being challenged, to move faster than you've ever had to move before with AI adoption, getting AI adoption across the enterprise.
And there's real tension, or I should say, you tell me: what's the tension like between moving fast and enabling the business, which we all talk about, and also putting guardrails in place to make sure we have secure deployments we're comfortable with across our enterprise? I wanna hear both your thoughts on this. Mark, what are the trade-offs you're making today that maybe you weren't making in a traditional security review not all that long ago? Yeah. If you go back to my four sections. Right? For the workforce productivity box, the design objective was: make it safe enough so we can start. Right? And recognize we're gonna have to adjust and tweak things and make improvements, because the stakes are a little bit lower than if we're pushing something to millions of customers, or using it to decide how we manage the radios on the network. Right? And so it's sort of taking those shots. But in that workforce productivity space, we're moving at warp speed. And we have kind of two challenges. One is everyone's learning this technology, which is brand, shiny new and feels like it changes every 30 days. And my security team is also learning it and using it and trying to get up to speed, and nobody's at the same place. Right? We have people who are in the vanguard, who really dove in, had what I call their AI agent epiphany, and have really jumped into it and figured it out. We have other people who are lagging, and that's true on every team. On the security team, some people aren't comfortable because they don't know how to control this stuff, because it's so different than what they're used to. So we're just trying to work through that and create that sort of safe space to work in, and also recognize we're gonna get something wrong. Nothing here is fixed. Right?
And so just recognize that you're gonna have to iterate and pay attention to what's happening, hedge your bets on the big things going wrong, and then deal with a little bit of noise on the smaller scale. Yeah. And Jason, for you, same question, but I should add: have you, or has your team, evolved your process around risk acceptance, or how you analyze whatever solution the business wants to bring in from a security perspective? Yeah. I mentioned the governance process we put in place, and that was established a little over two years ago. So we've had the ability to get in front of that, and we've done things like create self-service capabilities, so that anyone in the company can prompt our internal AI engine and say, hey, what can I use for this use case, or does this type of tool exist? Who owns it, from an attribution standpoint? That reduces some of the toil of going and chasing down information, so it's instantaneously at their fingertips. The other thing I would say, beyond what Mark just described, is that part of what we're looking at is that it's not really a tension of speed versus security. It's really: how do we move fast to get as much visibility and control as possible, versus just allowing the business to move fast where we don't know what's happening in our environment? So the more visibility and observability we can get around AI usage, the better the risk-based decisions we can make, and the more quickly we can pivot. And the last thing I would say is really around something I've been thinking about on this journey: how do I look at my traditional security organization?
What do I need to do to pivot that so that I'm aligned to how the business is moving and what they need? Because oftentimes what's happening is I'm having to send the head of IAM, or the head of the SOC, and the head of architecture. So you've got five people trying to figure out what this ecosystem should look like, versus: do I have some sort of squad or center of excellence that can iterate and move quickly? So I'm also thinking about it from a talent and organizational perspective as well. Yeah. And Jason, as we said, Cisco is a tech company building AI tools for your customers and for the market. You support that function, I assume, but you're also responsible and tasked with how you secure those tools internally for the company. When you're in front of... How we use them internally, too, to enable our... Yeah. Yeah. A third complexity, I suppose. When you're in front of your executive stakeholders, or your board, or your audit committee, and you're talking about AI risk and all of what's happening, let's just say in the last year or the last six months, what are the questions they keep asking? What are they most interested in? And, from your view, what are some of the hardest questions you don't have answers to yet? Yeah. I mean, you typically get general questions from the board, and it's hard to supply the right answer without really dissecting individual components and losing them. So it's always a delicate balance. For example: do we know where AI is operating in our environment, what it's doing, and are we securing all of it? It's a very broad and general question, because as we've heard, there are different facets of AI usage. There are different use cases and scenarios.
And some of that really comes down to different parts of the system. Can we see the identities? Can we see the tokens? Can we see the application activity? Yes, in a lot of cases, individually. But I think where we're still evolving is how we instill confidence that we're connecting all these pieces together in real time, to understand exactly which agent is doing what, on whose behalf, and what data it's accessing. That's really still an emerging capability, I think. So it's difficult to say confidently at this point that, yes, we have complete control, if that makes sense. Yeah. It's so interesting to hear you describe that, Jason, because what I hear is: we're still trying to do the fundamentals really well. We still have to do the fundamentals really well. We just have to do them at a speed we didn't before. We have to do them faster and maybe better. All together at once. Yeah. Hey, Mark. You're in a heavily regulated industry at T-Mobile. How are you thinking about mapping AI agent behavior to existing compliance frameworks, or do you think we've got to rethink compliance frameworks altogether in the age of AI? Yeah. Actually, I challenged my compliance team to use AI to go measure where we stand against all these compliance regimes, instead of a survey-says kind of approach. Like, let's just go create a measurement. You can go build the integration to that platform yourself now. Right? And so, and this is early days, we've used it to change the way we actually achieve compliance, driving more things toward doing the measurements, as opposed to waiting for an output or a screenshot or some other type of thing to confirm a control works. The big organizing principle for us is that we made a unified control framework. AI just eats that stuff up. It's very easy for the models to digest it and distill it.
And so now we're using it to also measure where we stand. You know, I can ask a random question: hey, do you comply with this random standard? Pick one. Instead of saying, I don't know, let's go ask and go measure. And so it's actually been quite an enabler on the compliance side, broadly speaking. That said, the regulatory landscape doesn't quite handle AI yet, right? We were having a debate the other day: if you give somebody in finance a Claude Code CLI agent, are they now a developer? And how does that fit under Sarbanes-Oxley? Because there are a whole bunch of controls you have around developers when we're thinking of traditional platform development. And so there are these more complex questions we have to go answer now in all of these compliance regimes, because every single one of them predates the prevalence of AI that's being used now. Yeah. The thought of AI making compliance easier, I completely agree. My hope is it makes evidence collection easier, makes the evidence repository much more efficient, right? Wouldn't that be wonderful, where we can pull evidence one time? Yeah. All the time. You don't have to wait once a quarter, collect the data, go through the control testing. Well, I think we're in a position where we can't do it that way anymore. When we look at pen testing, enumeration, continuous control monitoring, these are all things we need to be striving for, leveraging API-based data collection. Like, the days of walking around with the lab coats and clipboards are gone at this point, with the speed at which we need to operate. For sure. Yeah. Completely agree. Alright, we've got a few questions left, and we're gonna take a few questions from the audience. This question that I'm about to ask both of you might be the hardest one yet, because I'm asking you to predict the future, if just the short-term future.
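Mark's measurement-driven approach to compliance could be sketched as a small continuous-control loop: pull a measurement from each system's API and evaluate it against a unified control framework, rather than collecting screenshots once a quarter. The control IDs, endpoints, and pass criteria below are purely illustrative, not any real framework.

```python
# Unified control framework as data: each control defines how to measure
# itself from an API snapshot and what "passing" means. All IDs and
# fields here are made up for illustration.
CONTROLS = {
    "UCF-AC-01": {  # e.g. "MFA enforced for all admin accounts"
        "measure": lambda api: api["admins_without_mfa"],
        "passes": lambda value: value == 0,
    },
    "UCF-VM-03": {  # e.g. "No critical patches overdue past 14 days"
        "measure": lambda api: api["overdue_critical_patches"],
        "passes": lambda value: value == 0,
    },
}

def run_controls(api_snapshot):
    """Return {control_id: (measured_value, passed)} for one cycle."""
    results = {}
    for cid, control in CONTROLS.items():
        value = control["measure"](api_snapshot)
        results[cid] = (value, control["passes"](value))
    return results

# In practice this snapshot would come from live API calls on a schedule;
# the point is the evidence is measured continuously, not surveyed.
snapshot = {"admins_without_mfa": 0, "overdue_critical_patches": 3}
print(run_controls(snapshot))
```

Expressing controls as data is also what makes them easy for an AI to "digest and distill," in Mark's phrase: the framework, the measurement, and the result are all machine-readable.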
We're holding this days ahead? Yeah, a couple days is all. We're holding this at the end of April, so let's predict into early May, next week. Defensible AI security in 2026: what does that mean in practice? What does your security program look like at, I'd say, the end of this year? How might your security organization evolve over the course of this year with all that's happening in AI? Mark, why don't we start with you? Well, what I challenge my team with is this. Historically in security, we give what one of my peers used to call the naughty list: here's all your broken vulnerabilities, you gotta go fix all this stuff. And let's say we're staying within the realm of software for a second. I expect by the end of the year, we're just proposing the merge requests to go fix the flaws, not complaining about the defects. Right? And that's the transformation I wanna see from the security side of the house: we find a problem, here's the fix, and somebody else who is the domain expert reviews it and makes sure it's correct. And I think that could be a big change for us. So that's sort of part A. Part B is, we're 10x-ing the capability of these models every few months. And so we also have to anticipate that things that used to be hard are gonna get simple, and the amount of velocity we need to put into all our security processes is just gonna keep going up and up. Jason, how about you? Yeah, I'd break that up into maybe a couple predictions. One is, first, around just general usage of AI, and I continue to harp on this because it's top of mind right now: everybody now having their hands on AI and everybody being a developer. I think early on, and we're still in this phase, it's just go do AI and learn it and become comfortable with it.
I do see, as this evolves, that there are gonna be more frameworks and operating control planes around how AI is developed and used by the general population. So that's prediction number one: there's gotta be an intersection between learning, getting comfortable with it, and having some process and structure around that. Two, just to add on to Mark's commentary around our own security usage of it: if we look at what we've read about in the last couple of weeks, the speed at which the exposures may be coming and the adversary is leveraging this, we have to get to a point where we're using AI to speed up patching life cycles and runtime defenses. Whether that's deploying autonomous, preauthorized containment actions, like host isolation, credential revocation, network path blocking, those types of things that can fire in, like, sub sixty seconds. Whether it's human-in-the-loop or not, those are the types of things we need to be thinking about leveraging AI for, because, again, these things are gonna be moving at machine speed. And then the last part is, I do see that continued maturity of looking at agentic AI usage, whether it's internal, in SaaS, or as it extends outside of our environment, that we're able to look at as more of a collective ecosystem, from observability all the way down to the guardrails that we need. Yeah. Okay. Guys, we've got a couple submitted questions from the audience, so get ready, Mark, the first one's for you. You're an Obsidian customer. So with Obsidian, how does identity and SaaS visibility play into your AI risk strategy? And are you seeing threat actors specifically targeting AI integrations through compromised identities? We're really seeing it as a supply chain, right? So we use a direct integration with somebody.
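Jason's idea of autonomous, preauthorized containment that fires in sub-sixty seconds can be sketched as a simple policy table: actions approved in advance for a given detection type fire immediately, and anything outside the table falls back to a human. Detection types and action names here are hypothetical stand-ins for real response APIs.

```python
import time

# Containment actions approved in advance, per detection type, so they
# can fire without waiting on a human. All names are illustrative.
PREAUTHORIZED = {
    "compromised_token": ["revoke_credentials"],
    "host_beaconing_c2": ["isolate_host", "block_network_path"],
}

def contain(detection_type, target, executed=None):
    """Fire only preauthorized actions; anything else queues for a human."""
    executed = executed if executed is not None else []
    actions = PREAUTHORIZED.get(detection_type)
    if actions is None:
        # Novel detections keep the human in the loop.
        return {"target": target, "status": "queued_for_human_review"}
    start = time.monotonic()
    for action in actions:
        executed.append((action, target))  # stand-in for a real API call
    elapsed = time.monotonic() - start
    # The goal from the discussion: containment fires in under 60 seconds.
    return {"target": target, "status": "contained",
            "actions": actions, "within_60s": elapsed < 60}

log = []
print(contain("compromised_token", "svc-agent-42", log))
print(contain("novel_anomaly", "host-7", log))
```

The design choice worth noting is that the human review doesn't disappear; it moves earlier, to the approval of the policy table, so the machine-speed path only ever executes decisions that were already made.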
They have suppliers attached to their thing, and some of those access tokens get compromised. And we've had to respond to those kinds of events in our third- and fourth-party ecosystem. That's not getting any better with AI, right? It's just gonna accelerate those types of events, so they're part of the new normal. So that's a big one for us. And also, you're gonna get punished for any configuration mistakes you make in your environment, because it's too easy to enumerate all the attack surface now. Right? And so we expect that to be the second modality, and it's already happened to some degree, right, with all of us. And so those, to me, are the two big pieces you gotta tackle and be anticipating, if they haven't already happened. Yeah. Okay, Jason, the next one's for you. We would be remiss if we didn't talk about this topic. We're a few weeks past it now, but what is your perspective on Project Glasswing and the implications of Infropix's Mythos on security programs? Yeah. I mean, hopefully everybody that is watching this, or will be watching this, has seen some of the publications and the recommendations and everything. But I think the biggest thing with Glasswing, or any of these models like Mythos, is it's not just the capability, because these are gonna continue to evolve. There's a lot of questions of, can I get access to this model and use it to find issues? There are a lot of models out there right now that folks can utilize, but I think the big takeaway is this really brings out and highlights the compression of time: the time between discovering a vulnerability, validating it, and then the actual ability to turn that into an exploit. We've seen the data. It's shrinking dramatically, and that in turn changes the defender's problem.
So it's no longer sufficient to rely on standard patching life cycles or detection alone. It's all the things that we've been talking about here: how do we reduce the exposures up front, put controls in place that can prevent or constrain exploitation in real time? I've had this thought: as adversaries use these models, they have to pay for tokens as well. Can we make it more expensive for them, cost-prohibitive in some cases? Look at the economics around some of this. So it's definitely gonna be an interesting and disruptive time in general as these models continue to evolve, whether it's Glasswing or some of the future ones that we're gonna see. Jason, your comment made me think: this is the worst model that they will make from here on out. Right? They only get better from here. And all the models learn from each other too, because people compare them and test them and do inferencing from one to the other. So for me, the way I thought about this is what I called the patch tsunami. Right? We're gonna have a million software updates we're gonna have to apply, and they're coming soon. And you just saw, like, today, Firefox just dropped a whole bunch. Yeah. Now it's just one patch with a large number of vulnerabilities tied to it. The challenge I see, and Jason mentioned this, is that the ability to find the vulnerabilities has grown exponentially and is on the steep part of the exponential curve. The ability to fix them in the code is growing at that rate with the codegen tools, but it's on the shallower side; it hasn't gone vertical yet. And we're still in linear time on how we roll out patches across big, complicated infrastructures. And, Anteom, Sean McCann came up with this term a couple years ago called breaking the patch sound barrier.
Like, we need to get from Mach 0.99 to Mach 3 in how we maintain and roll out capability infrastructure. Jason mentioned this earlier. And in the interim, we need some real-time protections, because we have not broken through that sound barrier yet. And that, I think, is our big to-do as operators of this complicated infrastructure: how do we do that, recognizing that the flaw-to-exploit window is near zero, so the fix-to-deploy cycle can't still be measured in days, weeks, months, and years? Right? Like, that's the grand challenge I think we're all facing. Yeah. And the last piece there is really, how can we leverage these AI models and agents that we've been talking about to actually expedite the patching, the way we do testing and accreditation, all those things? Like, we've gotta be thinking differently about that as well. So, yeah, it's definitely interesting times. Yeah. Alright, guys, wrapping up here, a final thought from both of you. What's the one mental shift you'd like the CISOs attending this today to walk away with? Yeah. You said it earlier, and I was thinking the same thing: even as we sit here today, we don't have all the answers, and this continues to evolve quickly. So I think the one thing is, even in our shoes, we don't necessarily have all of this solved, and I think we need to continue to collaborate like this as an industry, to share best practices and figure out how we can move at the speed at which this is moving. And then, I wouldn't be remiss to say: May is Mental Health Awareness Month, and we're all sometimes feeling the stress of the environment that we're living in. And it's important to say this shall pass, and we'll figure this out collectively as an industry. I would just add two things. One, if you're still in the mindset that you can just block your way through this problem, like, that's not gonna work. It's not sustainable.
It's gonna be overcome by people, or the agents, working around your control footprint. So you have to figure out how to embrace it and shape it so that it goes on productive paths, and do the course correction. And as we were saying, no one has figured all these pieces out. And so there's just a lot of iterative learn, tweak, adjust, learn, tweak, adjust that you need to go through. But the big piece is, how do you get a workday structure where, you know, I call these things little Tamagotchis, like, they always need a little love and attention. How do you actually change your workday so you don't burn out, because you're running flat out 24/7, which is historically a problem in security anyway, when you have all this AI capability happening in your environment, or the ability for you to use it yourselves? And so I still think, you know, there's a little rethinking to do on how you structure your workday when AI is part of your day-to-day. Yeah. Guys, thank you. Thank you to Obsidian for bringing us together for this discussion. I know everybody in attendance found it valuable. Everything that Mark and Jason talked about today, it's real, it's urgent, but as Jason just said, it's solvable together, collectively, in our community and our industry. So stay tuned for the second part of this webinar series, which is scheduled for May 6, to hear more on how Obsidian is helping enterprises secure agentic AI deployments. Guys, thank you. Thank you. Take care.