Welcome to Eye on AI! In this edition…Meta wins AI copyright case in another blow to authors…Google DeepMind releases new AlphaGenome model to better understand the genome…Sam Altman calls Iyo lawsuit ‘silly’ after OpenAI scrubs Jony Ive deal from website, then shares emails.
This week, I spoke with Steven Adler, a former OpenAI safety researcher who left the company in January after four years, saying on X after his departure that he was “pretty terrified by the pace of AI development.” Since then, he’s been working as an independent researcher and “trying to improve public understanding of what the AI future might look like and how to make it go better.”
What really caught my attention was a new blog post from Adler, where he shares his recent experience participating in a five-hour discussion-based simulation, or “tabletop exercise,” with 11 others, which he said was similar to the wargame-style exercises used in the military and in cybersecurity. Together, the group explored how world events might unfold if “superintelligence,” or AI systems that surpass human intelligence, emerges in the next few years.
A simulation organized by the authors of AI 2027
The simulation was organized by the AI Futures Project, a nonprofit AI forecasting group led by Daniel Kokotajlo, Adler’s former OpenAI teammate and friend. The organization drew attention in April for “AI 2027,” a forecast-based scenario mapping out how superhuman AI could emerge by 2027—and what that might mean. According to the scenario, by then AI systems could be using 1,000 times more compute than GPT‑4 and rapidly accelerating their own development by training other AIs. But this self-improvement could easily outpace our ability to keep them aligned with human values, raising the risk that seemingly helpful AIs might ultimately pursue their own goals.
The purpose of the simulation, said Adler, is to help people understand the dynamics of rapid AI development and what challenges are likely to arise in trying to steer it for the better.
Each participant had their own character, whom they tried to represent realistically in conversations, negotiations, and strategizing, he explained. Those characters included members of the US federal government (each branch, as well as the President and their Chief of Staff), the Chinese government/AI companies, the Taiwanese government, NATO, the leading Western AI company, the trailing Western AI companies, the corporate AI safety teams, the broader AI safety ecosystem (e.g., METR, Apollo Research), the public/press, and the AI systems themselves.
Adler was tapped to play what he called “maybe the most interesting role”—a rogue artificial intelligence. During each 30-minute round of the five-hour simulation, which represented the passage of a few months in the forecast, Adler’s AI got progressively more capable—including at training even more powerful AI systems.
After rolling the dice—an actual, analog pair used occasionally in the simulation when it was unclear what would happen—Adler learned that his AI character would not be evil. However, if forced to choose between self-preservation and doing what’s right for humanity, he was meant to choose his own preservation.
Then, Adler detailed, with some humor, the awkward interactions his AI character had with the other characters (who asked him for advice on superintelligence), as well as the surprise addition of a second player who played a rogue AI in the hands of the Chinese government.
A power struggle between AI systems
The surprise of the simulation, he said, was seeing how the biggest power struggle might not be between humans and AI. Instead, various AIs connecting with each other, vying for victory, might be an even bigger problem. “How directly AI systems are able to communicate in the future is a really important question,” Adler said. “It’s really, really important that humans be monitoring notification channels and paying attention to what messages are being passed between the AI agents.” After all, he explained, if AI agents are connected to the internet and permitted to work with each other, there is reason to think they could begin colluding.
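To make that idea concrete, here is a minimal Python sketch of the kind of monitoring Adler describes: rather than letting agents message each other directly, every message is routed through a logged relay that flags suspicious content for human review. The agent names, flagged terms, and relay_message function are invented for illustration, not drawn from any real deployment.

```python
import json
import time

# Hypothetical terms a human reviewer might want surfaced; a real system
# would use far more sophisticated detection than keyword matching.
FLAGGED_TERMS = {"coordinate", "disable oversight", "hide this"}

def relay_message(sender: str, receiver: str, text: str, log: list) -> str:
    """Record an inter-agent message, flag suspicious content, then pass it along."""
    entry = {
        "time": time.time(),
        "from": sender,
        "to": receiver,
        "text": text,
        "flagged": any(term in text.lower() for term in FLAGGED_TERMS),
    }
    log.append(entry)  # humans review this log, especially flagged entries
    if entry["flagged"]:
        print(f"ALERT: review message {sender} -> {receiver}")
    return text  # delivery still proceeds here; blocking is a policy choice

audit_log: list = []
relay_message("agent_a", "agent_b", "Let's coordinate before the review.", audit_log)
print(json.dumps(audit_log[-1], indent=2))
```

The design choice the sketch highlights is the channel itself: if agents can only talk through an auditable relay, collusion at least leaves a trail humans can inspect.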
Adler pointed out that even soulless computer programs can end up behaving in particular ways and exhibiting particular tendencies. AI systems, he said, might have goals of their own that they pursue by default, and humans need influence over what those goals are.
The solution, he said, could be a form of AI control based on how cybersecurity professionals deal with “insider threats”—when someone inside an organization, who has access and knowledge, might try to harm the system or steal information. The goal of security is not to make sure insiders always behave; it’s to build structures that prevent even ill-intentioned insiders from doing serious harm. Instead of just hoping AI systems stay aligned, we should focus on building practical control mechanisms that can contain, supervise, restrict, or shut down powerful AIs—even if they try to resist.
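As a rough illustration of that insider-threat framing, the sketch below wraps an AI system’s proposed actions in a control layer with a narrow allowlist and a kill switch, so unapproved actions fail closed instead of being trusted. The action names and the ControlLayer class are hypothetical, invented for this example rather than taken from any system Adler or the AI Futures Project has described.

```python
# Insider-threat framing: don't trust the requester; gate every action
# through a control layer that can restrict or halt it.

ALLOWED_ACTIONS = {"read_file", "run_tests"}  # narrow, pre-approved capabilities

class ControlLayer:
    def __init__(self):
        self.halted = False

    def shutdown(self):
        """Kill switch: once tripped, nothing else executes."""
        self.halted = True

    def execute(self, action: str, payload: str) -> str:
        if self.halted:
            return "refused: system halted"
        if action not in ALLOWED_ACTIONS:
            # Containment: unapproved actions fail closed and trip the
            # kill switch, rather than assuming good intent.
            self.shutdown()
            return f"refused and halted: '{action}' is not pre-approved"
        return f"executed {action}({payload})"

controls = ControlLayer()
print(controls.execute("run_tests", "unit suite"))      # permitted
print(controls.execute("exfiltrate_weights", "model"))  # contained, trips halt
print(controls.execute("read_file", "README"))          # refused: already halted
```

The point, as in corporate security, is that the guarantees come from the structure around the actor, not from the actor’s good behavior.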
Forecasts and predictions are ‘hard’
I pointed out to Adler that when AI 2027 was released, there was plenty of criticism. People were skeptical, saying the timeline was too aggressive and underestimated real-world limits like hardware, energy, and regulatory bottlenecks. Critics also doubted that AI systems could quickly improve themselves in the runaway way the report suggested and argued that solving AI alignment would likely be much harder and slower. Some also saw the forecast as overly alarmist, warning it could hype fears without solid evidence that superhuman AI is that close.
Adler responded by encouraging others to express interest in running the simulation for their organization (there is a form to fill out), but admitted that forecasts and predictions are hard. “I understand why people would feel skeptical; it’s always hard to know what will actually happen in the future,” he said. “At the same time, from my point of view, this is the clear state of the art in people who’ve sat down and for months done tons of underlying research and interviews with experts and just all sorts of testing and modeling to try to figure out what worlds are realistic.”
Those experts are not saying that the world depicted in AI 2027 will definitely happen, he emphasized, but “it’s important that the world be ready if it does.” Simulations like this help people to understand what sorts of actions matter and make a difference “if we do find ourselves in that sort of world.”
Conversations with AI researchers like Adler tend to end without much optimism—though it’s worth noting that plenty of others in the field would push back on just how urgent or inevitable this view of the future really is. Still, it’s a relief that his blog post concludes with the hope, at least, that humans will “recognize the challenges and rise to the occasion.”
That includes Sam Altman: If OpenAI hasn’t already run one of these simulations and wanted to try it, said Adler, “I am quite confident that the team would make it happen.”
With that, here’s the rest of the AI news.
Sharon Goldman
sharon.goldman@fortune.com
@sharongoldman
Special Digital Issue: AI at Work
Fortune recently unveiled a new ongoing series, Fortune AIQ, dedicated to navigating AI’s real-world impact. Our second collection of stories makes up a special digital issue of Fortune in which we explore how technology is already changing the way the biggest companies do business in finance, law, agriculture, manufacturing, and more.

These companies are rolling up their sleeves to implement AI.
AI avatars are here in full force—and they’re serving some of the world’s biggest companies.
Will AI hold up in court? Attorneys say it’s already changing the practice of law.
Banking on AI: Firms such as BNY balance high risk with the potential for transformative tech.
Recycling has been a flop, financially. AMP Robotics is using AI to make it pay off.
AI on the farm: The startup helping farmers slash losses and improve cows’ health.
Can AI help America make stuff again?

This story was originally featured on Fortune.com