Welcome to Eye on AI, with AI reporter Sharon Goldman. In this edition: Top Republican pushes party to shun $300 million AI lobby…AI model scams are scary good….Anthropic’s new AI model sets off global alarms.
As Anthropic Mythos drove a fresh wave of headlines this week—highlighting both its advanced capabilities and how easily such systems could be misused—I made my way to a conference room just outside Washington, D.C. There, a cross-sector group of AI security practitioners, standards-setters, and policy experts had gathered to figure out what securing AI should actually look like.
Outside the industry, their acronyms—SANS, NIST, CoSAI, OWASP—may not mean much. Inside security, they help set the rules organizations around the world follow. But right now, those rules are struggling to keep up.
I had been invited to sit in on the discussion as organizations race to plug AI into everything—handing over sensitive data and critical workflows—even as those same systems are becoming more attractive targets for adversaries.
Leading the session was Rob van der Veer, chief AI officer at software platform Software Improvement Group and a founder of the AI Exchange at security community OWASP. Systems like Mythos, he said, are accelerating how quickly vulnerabilities can be discovered—and shifting the balance toward attackers.
“They show that weaknesses in AI systems can now be found faster and at scale—often before developers are aware of them,” he said. “This shifts the balance toward attackers and reduces the margin for error.” So far, concerns about Mythos have mostly focused on how good it and similar models are at finding so-called “zero-day” vulnerabilities in traditional software, but they can also discover vulnerabilities in the AI models and systems that enterprises are increasingly deploying across their organizations.
The problem is that most organizations aren’t prepared for the AI security concerns that are already clear, let alone the emerging ones coming down the pike. There’s a growing need for practical guidance—how to identify AI-specific threats, and what to do about them. But the field remains fragmented, with overlapping frameworks, competing recommendations, and little agreement on where to start.
How to secure AI systems is still unsettled
Even some of the basics are still unsettled. What does it mean to measure whether an AI system is secure? How should that differ across use cases, infrastructure, or third-party tools versus underlying models? Should guidance focus on capabilities, or outcomes?
Gary McGraw, cofounder of the Berryville Institute of Machine Learning, pointed to a core gap: Today’s benchmarks tend to measure how well AI systems can perform security tasks—not how secure the systems themselves are. Companies need to keep that distinction in mind when evaluating their tools and defenses.
McGraw warned as far back as 2019 that securing machine learning systems would be “one of the defining cybersecurity struggles of the next decade.” That moment has now arrived.
“These meetings are a way to remind ourselves of the fundamentals,” he said, “as we try to define what machine learning security actually is.”
Another significant concern is that no finite set of guardrails is universally robust against adversarial prompts, said Apostol Vassilev, a research team supervisor working on AI security at the National Institute of Standards and Technology (NIST), part of the U.S. Department of Commerce. “This means that the security of AI systems is not a static problem—one that can be solved once and done,” he said. Unlike many traditional software vulnerabilities that can be patched, AI security requires a more dynamic approach: continuously updating guardrails to address known exploits, conducting internal red teaming to uncover new adversarial prompts, patching defenses before attackers strike, and prioritizing resilience so enterprises can limit the impact of—and recover quickly from—inevitable exploits.
“Ultimately, the goal is to reach an equilibrium that makes it difficult and costly for attackers to find new exploits,” he added. “But that can only happen if businesses invest in adopting and maintaining this dynamic posture.”
A transition similar to securing software
Still, many of the meeting’s attendees remain optimistic the industry will catch up. McGraw noted that security has been through transitions like this before, such as the software boom of the mid-90s. “We didn’t have to panic when software swamped the world,” he said. “I remember when banks realized, ‘Oh my God, we’re a software company.’”
At moments like this, the narrative communicated by companies like Anthropic and OpenAI can run ahead of the reality, he warned. “Security loves a good story with a flaming pile of broken stuff and the fire department coming to the rescue,” he said. “I am still optimistic that we’re making progress towards better security engineering all the time. We can take what we’ve learned and we can apply that to machine learning.”
And that’s why these kinds of industry coordination meetings matter, said van der Veer. “Aligning standards and guidance across initiatives reduces fragmentation, improves clarity, and gives practitioners a coherent path forward,” he explained. “It enables organizations to move fast without losing control.”
With that, here’s more AI news.
Sharon [email protected] @sharongoldman
This story was originally featured on Fortune.com
