Claude Mythos has pushed AI cyber risk from an abstract future problem into a live banking issue. In this episode, Adam Blue talks with Beth Anne Bygum and Ryan Hollister about what changes when AI can discover vulnerabilities at machine speed and how to think clearly about security, governance, and resilience without sliding into panic.
Related Links
Alignment Risk Update: Claude Mythos Preview, Anthropic
AI for Everyone, Q2
"The Prestige" directed by Christopher Nolan
Transcript
Adam Blue
Hey everyone, welcome to Cut to Context. I'm Adam Blue. We are here today with Beth Anne Bygum, chief information security officer of Q2, and Ryan Hollister, one of our very, very senior architects on the engineering side, to talk about Anthropic Mythos.
So, 2025 and 2026 have been interesting years in terms of the progression of AI technology. It feels like every month we get a new announcement that the world will change at a faster pace and in an even more dramatic way. Some people likened Dario Amodei's announcements about Anthropic and Mythos and vulnerabilities and attack and defense to an arsonist selling fire extinguishers. As much as I enjoyed that metaphor, and I enjoyed it deeply, it felt maybe a little too off to one side.
As we move into a world where AI technology now appears, according to Anthropic's claims, to be capable of uncovering zero-day vulnerabilities and exploiting them via complex multi-stage attack chains as a matter of course, what does it mean for attack and defense in our industry? Beth Anne, you are right on top of this topic, so let's start with you. What's changed since April 7th?
Beth Anne Bygum
Hey, Adam, thanks again for having me here. It's the acceleration, right? The ability to accelerate and activate what traditionally were lower classifications of vulnerabilities, and to identify vulnerabilities that, before this technology was available, may not have been as visible. So concepts such as the scale and the ability to string vulnerabilities together and leverage them for a campaign at scale are definitely what's different, and most security teams are responding to ensure readiness.
Adam Blue
OK, great. Ryan, your thoughts?
Ryan Hollister
Yeah, it's interesting from like an industry perspective, you know, taking just a step out of Q2 and just thinking about how us as an industry have shipped code and how we've maintained code and how we've justified, you know, kind of putting code off to the side and thinking about, like, there are many systems out there running deep in the bowels of government and probably industry and companies out there that are still running Windows XP and all those kind of old operating systems or old versions of Linux that may have not yet been upgraded to modern kernels or modern versions.
And so I think it pulls all of that risk forward, kind of as Beth Anne was saying. We could get by and justify reasons not to do that work, and now this is bringing it to the forefront again and saying, yeah, maybe those code bases that we've gotten by with are no longer justifiable to get by with. That's the existential question that we as software engineers and operators really have to wrestle with these days. The reasons we could come up with before are less and less valid.
Adam Blue
Yeah, yeah, that's an interesting perspective. I love that in the first two paragraphs of the Anthropic report, which I reskimmed this morning in preparation for the podcast, they use both the terms “stack smashing” and “JIT heap spray.” And as a reasonably technical and fairly nerdy person, I actually know what both of those things are with some familiarity. But there's a little bit of a feel when you read that document that it is built for people who don't know what those words mean, for whom those are mythical things. There's a certain tone in calling the newest model of your tool Mythos, literally, that I think may turn out to be a little undercutting, in the sense that maybe there's some overpromise here realistically.
But at the end of the day, one of the things we have to deal with in our business, and this is what I want to talk about next, is that there are always assertions of new threat actors, new zero-day vulnerabilities, new cybercrime rings. There's always a new threat every single day. It's like “Men in Black,” where he's like, “Hey, the Earth's under threat every day; we deal with this stuff.”
And so in banking, in particular, where the prize for successfully executing an attack is so valuable, what does it mean to really try and endure the constant onslaught of new challenges, and does anything really change about what I'll call cybersecurity basics of what we're doing? Or is this simply a call to action to get stronger and better along the lines of the things we're already executing? So is it real change in the world, or is it simply an acceleration of what we're actually dealing with today? Beth Anne, why don’t you start?
Beth Anne Bygum
Yeah, thanks, Adam. I mean, the basics are super important, right? To your point, is this a change? The change is scale, which means you have to get better at the basics, right? It's about the precision. And then, as a pivot, I think the opportunity is to begin to leverage our security tools, specifically our defense-in-depth capabilities, in ways that we haven't necessarily needed to use them in the past.
An example is applying some of those capabilities to newly identified thematic patterns that weren't necessarily seen before now. So the basics are super important. You stick with hygiene. You're doubling down. But then you're applying your capabilities to something that's net new in preparation to scale, identify, and respond.
Adam Blue
Yeah, I think that's great. I saw a report, and this came out on the 13th, so it's pretty fresh, from the AI Security Institute, which is, I think, a pretty well regarded set of researchers in the U.K. In their view, and you can Google to find the report, obviously, in their view, Mythos is better, but it's not a step change better than the existing tool. And their conclusion, which I thought was interesting, was for weakly defended organizations that are subscale in their security, this is very, very bad news. But for organizations that have concepts like defense in depth and active defense, which is, I think, an interesting concept that people are really starting to lean in on, there's a very different kind of threat profile that's created by this new set of technology.
And so it brings us to this seeming paradox, right? And so on the one hand, we have a set of folks that are building new tools that are capable of doing things that are theoretically, societally harmful. And on the other hand, those same people are telling us to buy as many of the tokens to use those tools as we possibly can and just not use them for evil.
So as a thoughtful person in particular, Ryan, what's your view on that paradox? How do you think about rationalizing that for a junior developer or a non-technologist?
Ryan Hollister
Yeah, I think, you know, Beth Anne's right. The basics are always where we start: really understanding, going back to basics, what the OWASP Top 10 is, and really living and observing a cross-site scripting vulnerability, a clickjacking vulnerability, or a SQL injection vulnerability and operating it hands-on. Because if you don't have that firsthand context, it's really hard to then reason about how to apply tools to navigate those contexts. They just come up. If you've done this long enough, you've gone through all the generations of static code analysis tools that are out there.
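To make the hands-on point concrete, here's a minimal sketch of the SQL injection case Ryan mentions, using Python's built-in sqlite3 module and a made-up table (nothing here is from Q2's stack). The same attacker input behaves completely differently depending on whether it's concatenated into the query or bound as a parameter.

```python
import sqlite3

# Toy in-memory table, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

payload = "' OR '1'='1"  # classic injection input

# Vulnerable: the input becomes part of the SQL itself, so the WHERE
# clause is always true and every row comes back.
vulnerable = conn.execute(
    f"SELECT name FROM users WHERE name = '{payload}'"
).fetchall()

# Safe: the driver binds the input as a value, so it can only match a
# user literally named "' OR '1'='1" -- no rows.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (payload,)
).fetchall()

print(vulnerable)  # → [('alice',), ('bob',)]  (both rows leak)
print(safe)        # → []
```

Running a toy like this once makes the firsthand context Ryan describes much easier to internalize than reading a CVE description ever does.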
And so to me, this is a fairly substantial evolution of that journey, but not a step change of, like, this is a completely new thing that we need to go learn and get educated about, because it all kind of comes back to the same underlying vulnerabilities. It's just, how actionable are they? And I think you're right that for the organizations that have people like Beth Anne and me, and huge teams of subject matter experts who come in every day, it's easy for us to fold this in and add this as another tool to our tool belts.
But I think the government recognizes that those are probably the exception and not the norm. Given the number of computers and software systems on the internet these days, we all as a society probably carry a risk; even if it's not our stuff, like Q2's stuff, we still have this codependency throughout the system. I think you saw some of it last year or the year before, where Microsoft was taking proactive steps, just pushing patches to Exchange servers and Windows boxes, because they couldn't wait for people to prioritize patching.
So I think that's the struggle with the industry right now. It's like it's easy to say, “Well that's everybody's responsibility, the company should just do what they're supposed to do,” but we're probably more codependent on each other than we might realize.
Adam Blue
Yeah, I think that's a good point. The nature of the technology stack, where you build the top 10% of the software in the value model, is that you get to live with whatever mistakes were made below you. And so it's probably not going to be code that was written at your organization, or even code in a library that your organization uses. It's much more likely to be operating-system-level code or driver-level code, some of which maybe hasn't really been examined or looked at in a long time.
And so I think that, you know, the next thing to think about is the operational perspective, right? When a new vulnerability ships and a patch is made available—and I think the team at Anthropic is really trying to think through what it means to first use the information they gained via the Mythos model to try and reduce the total amount of risk before it's released in a way that increases the total amount of risk. And I think that's admirable.
But it means we're going to get a slew of patches. I mean, I don't think there's any other way to look at it. And so what does that velocity look like and what can organizations do and what is Q2 doing to really try and maximize the velocity at which we can absorb that set of security changes? Beth Anne, maybe take us through that.
Beth Anne Bygum
Yeah, I mean, I think there are two sides to that question. One is the defense side, where we are partnering with our strategic vendors to prepare and be able to interpret the weaknesses, those vulnerabilities, while we're waiting on patches from the vendors, right? Being able to monitor that, interpret those weaknesses, apply a higher level of intelligence against that. And that's a constant, right? And, you know, I'm constantly knocking on wood here, because it's about practice, and it's about pivoting the application of that intelligence in real time. We're living in a state of persistent application, persistent monitoring of that.
The other side of that conversation, which you rightly give us context around, is the speed with which we'll see Patch Tuesday releases come out with more ferocity. And I think the recent change in our organization to cloud-first, being able to have faster life cycle management strategies and to rotate those strategies faster, is part of the future. And I think as we watch Anthropic and all of the leaders in this space deliver self-healing capabilities and organizations evolve their tech debt … I mean, we talked about that, right? You evolve the tech debt over time, and holistically, most companies are going to evolve past that, but we're in a window where it's going to take some time and practice, and everyone is operating eyes wide open at this point, for sure.
Adam Blue
Yeah, great. When you talk about patching, one of the things I've always believed is that if you don't have 15-year-old versions of operating systems, you're going to have to deal with a lot less patching. If you don't make choices that result in you having 17 different base images under your Docker containers, you're not going to have to do nearly as much patching. And so some of this, as technologists, we invite on ourselves when these kinds of singular events happen, by not chewing down some of the tech debt. Because when new vulnerabilities appear, the fewer total operating systems and versions you have to deal with, the easier it's going to be to react. So if you've got something that runs on Python 3.6 on some ancient version of SUSE Linux, it's like, man, that's a problem you created for yourself to some extent.
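One quick way to put a number on the base-image sprawl Adam describes is to count the distinct `FROM` lines across a repository's Dockerfiles. This is an illustrative one-liner, not a Q2 tool, and it assumes GNU grep with `--include` support:

```shell
# Count distinct base images across every Dockerfile in a repo tree --
# a rough proxy for how many OS variants you're on the hook to patch.
grep -rh '^FROM ' --include='Dockerfile*' . \
  | awk '{print $2}' \
  | sort | uniq -c | sort -rn
```

If that list is long, each entry is another image you have to rebuild and revalidate every time a kernel or libc CVE lands.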
I think this idea of getting to automation and AI on the defense side is really interesting. And it draws into question: before we let a model like Mythos automatically apply patches or make changes to operating environments, what are the things we need to think about in terms of how that works from a practical perspective? So Ryan, I know you're deeply involved in both the design and the execution of the SDLC at Q2. What are the kinds of things you worry about when you wake up in the morning and you've got seven PRs from Mythos telling you to rebuild these containers or remove these secrets from a container, whatever it is? How do you think about keeping that from creating more chaos than the original problems themselves?
Ryan Hollister
Yeah, I mean, it's a real problem, and not just for security: if code velocity is increasing but the overall software development life cycle pipeline is unable to consume it all the way down the value chain, then how are we going to get through it? I think Mythos will provide us, and the current models already provide this to a large degree, but Mythos, being more focused and tuned, will give us more insight.
But the idea that vulnerabilities are going to show up with proposed remediations, I think we're operationalizing that already today with a tool we call AutoSecure, which runs on a nightly autonomous basis, harvesting all the potential known CVEs. Because in the modern technology stack, it's layers of open-source packages all the way down, and on any given day your foundational library, or something five layers down from your foundational library, could have a CVE found in it. But it's an acceleration, because otherwise it would take a human to navigate what the appropriate remediation is. Am I even exposed to it, given the use cases that I have? Historically that's been a human curation: well, that vulnerability applies to React server-side components, and we're not using React server-side components.
So yes, it's a vulnerability, but probably lower risk because I don't use it. I think these models are going to really help us navigate some of the noise. Although the volume may go up, I think they're going to afford us the ability to navigate it quicker. And so it's probably not any less work, but I'm hoping it comes out to be equal work and just more effective work.
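The triage Ryan describes, asking whether a vulnerable package is even in the dependency tree and whether the affected component is actually used, can be sketched roughly as follows. The package names, CVE IDs, and the `actionable` helper are all hypothetical, for illustration only, and are not Q2's actual AutoSecure logic:

```python
# Rough sketch of CVE triage: filter advisories down to the ones that
# apply to packages (and features) a service actually uses.
installed = {
    "react": {"version": "18.2.0", "features": {"client-components"}},
    "left-pad": {"version": "1.3.0", "features": set()},
}

advisories = [
    {"id": "CVE-0000-0001", "package": "react",
     "affected_feature": "server-components"},  # feature we don't use
    {"id": "CVE-0000-0002", "package": "left-pad",
     "affected_feature": None},                 # applies unconditionally
    {"id": "CVE-0000-0003", "package": "lodash",
     "affected_feature": None},                 # package not installed
]

def actionable(advisory, installed):
    """Keep only advisories whose vulnerable code path we could hit."""
    pkg = installed.get(advisory["package"])
    if pkg is None:
        return False  # not in our dependency tree at all
    feature = advisory["affected_feature"]
    if feature is not None and feature not in pkg["features"]:
        return False  # vulnerable code path isn't one we exercise
    return True

hits = [a["id"] for a in advisories if actionable(a, installed)]
print(hits)  # → ['CVE-0000-0002']
```

Real reachability analysis is far subtler than this, but the shape is the same: the model's job is to shrink the raw advisory feed to the handful that deserve a human's (or an agent's) attention.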
Adam Blue
Yeah, OK, great. I think that's a great perspective. So Beth Anne, one of the things that you do so well for us is you've got a set of communications that goes outward and upward to customers and our board, who I know you spend a lot of time communicating with. And then you've got communications inside Q2 that go to employees and managers and leaders. So talk to me a little bit about post-Mythos, like what changes about what you're communicating both kind of up the organizational chart and then down the organizational chart and what are the messages you think that security and development leaders should be modeling?
Beth Anne Bygum
Yeah, thanks for the question. I want to pull the thread on what Ryan just said. It's all about rich context, rich interpretation, and then being able to communicate more meaningful insights. The minute we're able to interpret what the weakness or vulnerability means, you know, and again, as I mentioned earlier, it's real time and persistent analyzing that so that when we do release the message, it is clear, it's actionable, and we have teams that are able to respond and also running side by side with the teams to help them as they're walking through.
You know, I think the interesting thing about this conversation is that, in reality, what Mythos is helping us do is get ready for what's in front of us, the next generation of challenges like quantum computing, and you can't carry some of this tech debt into that next chapter. And so, to Ryan's point, it's going to be a bit of an adjustment right now. We'll get through that together. And then, more specifically to your point, Adam, the communication continues to lean in: role-specific, action-oriented, context-rich, so that we are able to get in, address, and get out as quickly as possible.
Adam Blue
I think you brought up the Q word, which I really didn't want to get into on today's podcast. But it is very likely that this is not going to be the last time that an improvement in an AI model or an improvement in technology knocks out one of the legs of the stool of what we think of as foundational security, and even, to some extent, foundational governance about how we manage technology.
I think one of the takeaways, and I think it really, you really highlight in your point about communication, but one of the takeaways is being good at the basic things and having rigor and discipline around them will always make you stronger, regardless of what the changes are. So if you are good at applying patches, if the velocity of patching goes up, you'll be good at doing that. If you understand your SDLC deeply and you have people that care about security, and they're thoughtful and you minimize your attack surface by being thoughtful about your tech debt, you will be in a better position.
There is no unbreakable. I remember Oracle put it on the side of trucks and drove it around at conferences for a while, and I just thought, you're whistling through the graveyard right now. And OpenBSD, which was thought to be nearly unbreakable as an operating system, the one you used when you really needed guarantees, had a 17-year-old remotely exploitable zero-day buffer overflow, if I remember the CVE correctly. I think just working forward from a belief that nothing is unbreakable, nothing is really fully grounded, nothing is uncompromisable, is a much better way to go. And I think that's what effective organizations are probably going to do.
So the other lesson here to think through a little bit is: we've had this kind of shock, people are absorbing it, and we're thinking about what it means for banking. Maybe we can draw a little bit of a conclusion, although it's very early. What does this tell us about AI in banking generally? What does it tell us about the application of AI, or what's going to happen? Ryan, how do you take the lessons we've learned about this new model, and everybody's excitement or terror about it, and apply them broadly to AI and financial services?
Ryan Hollister
Yeah, I mean, I think more and more the technology is the service; at the very least, it's the primary vehicle for the services that our financial institutions deliver to their end customers. And the deeper they understand it, the closer we can partner with them, rather than just having this "vendor, give me the software you're shipping" relationship. When it's a tighter partnership, where we can be equal partners and equally educate each other on what we're seeing out there, the better off we are. Those are the best partnerships that we have. And so the technology is a mutual responsibility. As we communicate to our customers our perspective on what this means for our software and what we think it means for their systems, it's great to have partners, like our customers, who can give us their perspective on what they're seeing and how that applies to their systems and software.
Because, as Beth Anne said, the AI accelerates the ability to traverse otherwise low-risk things, and we're all in this together. And so it's not just, is our software secure and buttoned up, but is our software secure and buttoned up in the context of the systems it's integrating with that the financial institution is bringing to the table? Because it's the totality of the system, not just our software in isolation.
And so we really need customers that are willing to kind of come in and roll up their sleeves with us and partner through it so that we can really get a good context of what it means in the entirety of the system.
Adam Blue
Yeah, yeah, I think that's a really interesting insight. If I think about the times I've seen or read about organizations with security issues or breaches, or been first party to them, it's been very rare in my experience that someone has a security event as a result of a brand-new zero day that came out 36 hours previously. It's usually some system that a bunch of people knew there were issues with, and they had a reason why they didn't fix it; there's a communication failure, a lack of orchestration, sometimes a lack of leadership. And so the real impact comes from not managing the business, the tech debt, the attack surface, the portfolio in an effective way. Not from somebody waking up at three o'clock in the morning and inventing a new exploit just against your organization.
I'm not saying that doesn't happen, but typically the kinds of things that get exploited are already so fixable in an environment that it's orchestration, communication, and organization that end up undermining security posture and security practice, much more so than some fascinating new technique that somebody throws out. So just a takeaway there for everybody to think about: doing your basics, being disciplined, applying governance, working as a team, leaning on partners, and being kind of stoic, in some sense, about how you approach these things can be really valuable.
Ryan Hollister
Yeah, I think we've said AI amplifies everything, right? If the process is broken, it's going to amplify the brokenness of that process, and if you don't have the right processes in place, it's only going to amplify that. But if you're in a good position and you have the right processes, it's really going to amplify those good practices as well.
Adam Blue
Mm-hmm. Alright Beth Anne, anything you'd add to give everybody a little hope?
Beth Anne Bygum
Yeah, you know what? As human beings, it's about leaning in, right? The folks that win, win because they get up every day, they show up, and they keep at it. And so I think good is about not letting the house get too dirty. You get in there, you clean it up, and you just keep cleaning. And so the basics are where we will continue to lean in. The technology is going to do what the technology is designed to do. And so the opportunity now is to be prepared to win with this technology and run side by side with it. So thanks so much, Adam, for having me here with you.
Adam Blue
You bet. All right. Thanks, Beth Anne. Thanks, Ryan, for spending some time today on Cut to Context. So last, before we depart, I always have a recommendation of a piece of media or art for everyone to enjoy. There's a Christopher Nolan film called “The Prestige.” It stars Christian Bale and Hugh Jackman. And it is a phenomenal film about two men who destroy each other and everybody else around them by trying to be the best at a very specific thing. I don't think I've ever seen as amazing a film about magicians who despise each other in my entire life.
And somehow, I can't quite put my finger on it, but there's an extraordinary parallel between this film and what's going on in the AI space today. So if you've got some time, watch “The Prestige.” You will need to take notes because it is a classic Christopher Nolan twisty plot, but just a fantastic, fantastic piece of film to help you think about today's context a little bit differently.
So thanks, everybody. This is Adam Blue. Thanks for joining us on Cut to Context. Have a great day.
