8news



Anthropic Situation Just Got Even More Insane

AI • AI Revolution • May 9, 2026 at 10:18 PM • 17:05

TL;DR

Anthropic is rapidly transforming from a safety-focused AI lab into a central player in a global infrastructure race defined by compute power, strategic alliances, and geopolitical tension.

KEY POINTS

SpaceX Deal Reshapes Compute Access

Anthropic secured access to SpaceX’s Colossus 1 data center, providing over 300 megawatts of power and more than 220,000 NVIDIA GPUs. Crucially, this capacity is available almost immediately, not years away. The deal directly addresses Anthropic’s most visible weakness: limited compute availability rather than model capability.
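As a rough sanity check on those headline figures (my own arithmetic, not from the source), dividing the stated power by the stated GPU count gives the facility's per-GPU power envelope:

```python
# Back-of-the-envelope check of the reported Colossus 1 figures:
# 300 MW of facility power spread across 220,000 GPUs.
power_watts = 300e6   # reported capacity: 300 megawatts
gpu_count = 220_000   # reported GPU count

watts_per_gpu = power_watts / gpu_count
print(f"{watts_per_gpu:.0f} W per GPU")  # 1364 W per GPU

# An H100-class GPU draws roughly 700 W at the chip, so ~1.4 kW per GPU
# including servers, networking, and cooling overhead is plausible,
# suggesting the two reported numbers are mutually consistent.
```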

Immediate Product Impact

Following the agreement, Anthropic expanded usage limits across its platform. Claude Code rate limits doubled, peak-hour restrictions were removed for key plans, and API throughput for Claude Opus increased from hundreds of thousands to millions of tokens per minute. These changes indicate demand had been constrained primarily by infrastructure shortages.

Unexpected Alignment with Elon Musk

The partnership is notable given Elon Musk’s prior criticism of Anthropic. Despite leading rival firm xAI, Musk now provides infrastructure support via SpaceX. The alignment reflects shared incentives, including monetizing idle compute and counterbalancing OpenAI, a common competitor.

Near-Trillion-Dollar Valuation Push

Anthropic is reportedly seeking up to $50 billion in funding at a valuation approaching $900 billion, potentially exceeding OpenAI’s $852 billion. Annualized revenue is projected to surpass $45 billion, driven by enterprise adoption and developer tools like Claude Code.

Massive Multi-Cloud Commitments

The company has assembled extensive infrastructure partnerships:

  • Up to 5 GW with Amazon, including 1 GW by 2026
  • 5 GW agreement with Google and Broadcom starting 2027
  • $30 billion in Microsoft Azure and NVIDIA capacity
  • A reported $200 billion commitment to Google Cloud over five years

These overlapping alliances highlight how AI competitors are simultaneously interdependent.

Compute as the Core Battleground

The competition is shifting from model quality to control over chips, electricity, and data centers. Frontier AI development now depends on large-scale physical infrastructure, making cloud providers and hardware partners decisive strategic actors.

Government Conflict and Pentagon Exclusion

Anthropic was excluded from a Pentagon AI vendor list after disputes over military use policies. The company reportedly resisted terms allowing unrestricted use, including autonomous weapons. A federal judge temporarily blocked its designation as a supply chain risk, and discussions have since resumed.

Mythos and Cybersecurity Risks

Anthropic developed Claude Mythos, a powerful vulnerability-detection system capable of identifying software flaws at scale. In one case, it reportedly found 271 vulnerabilities in Firefox. The tool remains restricted due to concerns that similar capabilities could enable cyberattacks if widely distributed.

Systemic Risk Beyond Software

Advanced models like Mythos could extend beyond code to analyze financial systems, legal frameworks, and regulatory structures, identifying exploitable loopholes. This raises broader concerns about AI accelerating systemic vulnerabilities across society.

Global Strategic Investment Signals

Entities such as Kazakhstan’s National Investment Corporation have taken stakes in Anthropic, reflecting a shift toward viewing AI firms as strategic national assets rather than conventional startups.

Orbital Compute Ambitions

Anthropic has expressed interest in developing space-based data centers with SpaceX. While speculative, the idea reflects growing constraints on Earth-based infrastructure, including power, land, and regulation.

CONCLUSION

Anthropic’s expansion reveals that the AI race is no longer centered on chatbots but on infrastructure, alliances, and strategic control, placing the company at the heart of a rapidly intensifying global competition.

Full transcript

Anthropic just entered one of the strangest chapters in AI history. A company that started as the cautious alternative to OpenAI is now surrounded by some of the biggest numbers, biggest alliances, and biggest contradictions in tech: a near $1 trillion valuation, a massive SpaceX compute deal, more than 220,000 NVIDIA GPUs, a reported $200 billion commitment with Google Cloud, a fight with the Pentagon, a secretive hacking model called Mythos, and Elon Musk, who once attacked Claude, now suddenly giving Anthropic access to one of the most powerful AI supercomputers on Earth. None of this feels random. Anthropic is not just scaling Claude. It is being forced into a much bigger game where models are only one part of the story. The real battle is compute, electricity, government access, cybersecurity, enterprise control, and who gets enough infrastructure to survive the next phase of AI. So, while everyone is still comparing Claude and ChatGPT like this is just another chatbot race, something much deeper is happening underneath. Anthropic may be turning into the company that exposes what the AI war is really about. And the strangest part starts with Elon Musk. For months, Elon has been one of Anthropic's loudest critics. He mocked Claude, criticized Anthropic's culture, called the company hypocritical, and treated it as an example of everything he dislikes about the current AI industry. And that matters because Elon runs xAI. Grok is directly competing with Claude, ChatGPT, Gemini, and every other major AI assistant fighting for developers, enterprise users, and attention. Then suddenly, Anthropic announces a partnership with SpaceX. Not a small partnership. Anthropic says it will use all of the compute capacity at SpaceX's Colossus 1 data center. That means more than 300 megawatts of capacity and over 220,000 NVIDIA GPUs. And the important detail is that this capacity comes online within the month, not years from now.
Not after some huge future construction project. It is basically usable now. That is why this deal matters. Anthropic was not on the edge of death. It did not need Elon to rescue it like some collapsing startup. Anthropic is one of the fastest-growing AI companies in the world, with giant investors, huge enterprise momentum, and multiple cloud partners already lined up. But it was clearly compute constrained. Claude had the demand. Claude Code had become one of the hottest tools for developers. Opus was still one of the strongest models for serious work. The issue was that Anthropic did not have enough infrastructure to give people the access they wanted. Users felt that clearly. Claude's limits became one of the biggest complaints around the product. People were paying for Pro or Max and still hitting walls. Claude Code users were running into 5-hour limits. Peak-hour reductions made the experience feel even worse. API users wanted more Opus capacity. Developers building real products needed predictable access, not a paid subscription that still felt strangely restricted. So when the SpaceX deal lands, Anthropic immediately makes changes. Claude Code's 5-hour rate limits get doubled for Pro, Max, Team, and seat-based enterprise plans. Peak-hour limit reductions get removed for Claude Code on Pro and Max. API rate limits for Claude Opus jump massively, with some tiers moving from hundreds of thousands of input tokens per minute to millions. That is Anthropic basically showing everyone that Claude's biggest problem was not model quality; it was capacity. And this is where Elon's move becomes interesting. The obvious reason is money. If SpaceX or xAI has huge compute capacity available and Anthropic is willing to pay for it, that is a serious business opportunity. These clusters are insanely expensive. Letting them sit underused would make no sense.
AI compute has become one of the most valuable assets in tech, and selling access to it can be almost as strategic as using it yourself. But there is probably another layer, too. Elon's biggest AI enemy is not Anthropic. It is OpenAI. His fight with OpenAI is personal, legal, ideological, and public. He has repeatedly attacked OpenAI for moving away from its original nonprofit mission. And xAI is clearly positioned as a counterweight to OpenAI's dominance. So, from Elon's perspective, working with Anthropic may be awkward, but helping one of OpenAI's biggest rivals gain capacity might still serve his broader goal. That does not mean Elon suddenly loves Anthropic. It means the incentives lined up. Anthropic needed compute. SpaceX had compute. Elon wants OpenAI challenged. Anthropic wants to close the gap with OpenAI. Both sides can benefit, even if the relationship looks ridiculous from the outside. And yes, Elon did soften his tone publicly. After meeting senior Anthropic people, he said he was impressed, that they seemed highly competent, and that no one triggered his evil detector. That is a pretty funny reversal considering his earlier comments. But in AI right now, old insults apparently matter less than available GPUs. The bigger story is that Anthropic is no longer just trying to be the careful alternative to OpenAI. It is trying to move into the same weight class as OpenAI. According to reports, Anthropic is looking to raise up to $50 billion this summer. The rumored pre-money valuation is around $900 billion, and after the financing, the company could approach $1 trillion. If that happens, Anthropic could surpass OpenAI's reported $852 billion valuation and become the most valuable AI startup in the world. That would have sounded insane not long ago. Anthropic was founded as the safety-first lab. The company talked about alignment, constitutional AI, controlled deployment, and responsible scaling. OpenAI was the explosive consumer platform.
Google had the research labs and infrastructure. Meta had open models. xAI had Elon and speed. Anthropic was the serious, quieter, more cautious player. Now it is being valued like a company that could become one of the central operating systems of the AI economy. Reports say Anthropic's annualized revenue could soon exceed $45 billion, compared with around $9 billion at the end of 2024. The big drivers are Claude Code for developers and Cowork for non-technical enterprise users. So, Anthropic is not only selling chatbot access. It is moving into work itself: coding, enterprise assistance, internal tools, regulated industries, and high-value business workflows. But every new customer needs compute. Every coding session needs compute. Every API product built on Claude needs compute. Every enterprise deployment needs reliable infrastructure. Every stronger model needs even more training and inference capacity. That is why Anthropic is signing deals everywhere. The SpaceX deal is only one piece. Anthropic says it also has an up to 5 gigawatt agreement with Amazon, including nearly 1 gigawatt of new capacity by the end of 2026. It has a 5 gigawatt agreement with Google and Broadcom expected to start coming online in 2027. It has a strategic partnership with Microsoft and NVIDIA that includes $30 billion of Azure capacity. It has a $50 billion investment in American AI infrastructure with Fluidstack. Then there is the Reuters report saying Anthropic has committed to spending $200 billion with Google Cloud over five years. That number is so huge that it reportedly represents more than 40% of Google's disclosed revenue backlog. And Google is not just a vendor here. Alphabet is reportedly investing up to $40 billion into Anthropic. So the relationship is both partnership and rivalry. That is the weird thing about the AI industry now. Everyone is competing and depending on each other at the same time. Anthropic competes with Google's Gemini, but needs Google Cloud and TPUs.
Anthropic competes in a world dominated by OpenAI and Microsoft, but it has Microsoft and NVIDIA capacity. Anthropic competes with xAI, but uses SpaceX compute. It works with Amazon, while Amazon has its own AI chips and ambitions. Clean rivalries do not really exist anymore because compute is too important. And it is not only tech companies getting involved. Kazakhstan's National Investment Corporation became a direct shareholder in Anthropic through its Series F round, investing $25 million alongside major international investors. Compared with the giant cloud deals, that number is small, but symbolically it matters. A country is taking a direct position in a frontier AI company because these companies are starting to look like strategic assets, not just startups. And that brings us to the government side, where the story gets even messier. The Pentagon recently signed AI agreements with eight major tech companies: SpaceX, OpenAI, Google, Microsoft, NVIDIA, Amazon Web Services, Oracle, and Reflection. Anthropic was not included. According to the reporting, the Trump administration had blacklisted Anthropic after a fight over safety guardrails for military use of AI. Anthropic reportedly refused to accept terms that would allow Claude to be used for all lawful purposes, including autonomous weapons and mass surveillance. That is a very Anthropic conflict. The company wants enterprise and government relevance, but it also wants safety boundaries. The Pentagon wants powerful AI tools inside classified networks. Competitors are willing to sign. Anthropic pushes back, and suddenly it is labeled a supply chain risk, which is an extremely serious label usually associated with companies tied to foreign adversaries. Anthropic sued, and a federal judge blocked the government's effort, at least temporarily. The White House also reportedly reopened discussions after Anthropic announced major technology breakthroughs.
So Anthropic may still return to the table, but the message is clear: once frontier AI becomes part of national security, safety principles will collide with government demands. And that collision becomes even more intense when you look at Mythos. Claude Mythos preview is reportedly so powerful at finding software vulnerabilities that Anthropic refused to release it to the public. Instead, it would only be available to selected companies to scan and fix their own software. Mozilla reportedly used Mythos to find 271 vulnerabilities in Firefox, which were then fixed. On the defensive side, that sounds great. If AI can find vulnerabilities before attackers do, software becomes safer, companies can patch faster, and security teams can automate work that used to take huge amounts of time. But the darker side is obvious. If models become better at finding vulnerabilities, attackers can use similar capabilities, too. Not just elite hackers. Criminal gangs, ransomware crews, and smaller groups could scan code, find weak points, generate exploit strategies, and move faster than human defenders are used to. Bruce Schneier's argument goes even further. Mythos itself may not be totally unique, because other models, including OpenAI's GPT 5.5 and smaller systems, have reportedly shown comparable abilities in some evaluations. But that almost makes it scarier. The danger is not just one secret Anthropic model. The danger is that this capability is spreading across the whole AI ecosystem. And once AI becomes good at finding flaws in software, the same pattern may apply to other complex rule systems: tax codes, financial rules, environmental regulations, legal frameworks. Any system filled with rules, exceptions, loopholes, edge cases, and incentives could become something AI can analyze at superhuman scale. That is a different kind of risk from the usual chatbot story. This is about AI accelerating the discovery of exploitable weaknesses in the systems society runs on.
So, Anthropic is in a strange position. On one side, it is racing to become possibly the most valuable AI startup in the world. On the other, it is still trying to present itself as the lab that takes dangerous capabilities seriously. Those two identities can exist together, but the tension is getting stronger. The public sees Claude, a clean chat interface, and a coding tool. Underneath that, a massive industrial machine is forming: NVIDIA GPUs in Memphis, Google TPUs coming online in 2027, Amazon Trainium capacity, Azure deals, Broadcom chips, Fluidstack infrastructure, sovereign investors, Pentagon fights, cybersecurity models, and even potential orbital compute discussions with SpaceX. The orbital compute part sounds almost absurd, but it also fits the moment. Anthropic said that as part of the SpaceX agreement, it has expressed interest in partnering to develop multiple gigawatts of orbital AI compute capacity. That sounds like science fiction marketing. But when data centers are limited by land, power, cooling, grid access, permits, and politics, companies start looking at extreme options. This is where AI stops looking like normal software. A normal software company can scale with cloud servers and better code. Frontier AI companies need chips, power, data centers, networking equipment, cooling systems, long-term capital, and political permission. The model is only the visible layer. The real battle is underneath. Claude did not suddenly become important because of one new benchmark. It became important because people actually want to use it, especially for coding and serious work. But the more people use it, the more Anthropic needs infrastructure. The more infrastructure it needs, the more it must depend on giants like Google, Amazon, Microsoft, NVIDIA, SpaceX, and Broadcom. And the more it depends on those giants, the more tangled the entire AI industry becomes. That is the part that should make OpenAI nervous.
Anthropic is now showing that it can attract almost every major compute provider at once. It can pull in capital. It can get enterprise traction. It can turn Claude Code into a developer weapon. It can convince investors that it belongs near OpenAI's valuation range. And it can still maintain enough of a safety brand that people take Mythos seriously when the company says it is too sensitive for public release. That combination is powerful, but it is also fragile. Anthropic now has to prove that all this compute and money actually turns into a better product experience. More capacity needs to mean fewer frustrating limits, clearer subscription value, better API reliability, stronger models, and less confusion for developers. If users still feel blocked after all these deals, the backlash will be worse, because expectations are now much higher. The company also has to manage the political side carefully. Refusing certain military uses may protect Anthropic's safety image, but it can also cost the company enormous government contracts. Re-entering those discussions may unlock money and influence, but it can also draw criticism from people who supported Anthropic because of its stronger safety stance. Then there is the Mythos problem. Holding back a dangerous capability sounds responsible, but it also invites skepticism. Some people will say Anthropic is being careful. Others will say it is using fear to boost its valuation. And if similar capabilities are available from other models, Anthropic's restraint may matter less than the broader industry trend. Anthropic is trying to beat OpenAI, work with Elon's infrastructure, depend on Google's cloud, use Amazon's chips, take Microsoft and NVIDIA capacity, satisfy enterprise customers, stay credible on safety, navigate the Pentagon, manage cybersecurity risks, and justify a valuation that could approach $1 trillion. That is an insane position for any company to be in. And the wild part is that Anthropic may actually have a shot.
Claude has real fans. Claude Code has serious momentum. Enterprise demand is clearly there. The compute bottleneck is being attacked from every direction. Investors are lining up. Governments and sovereign funds are paying attention. The company is no longer just the careful alternative to OpenAI. It is becoming one of the main characters in the AI infrastructure war. Also, if you want more content around science, space, and advanced tech, we've launched a separate channel for that. Links in the description. Go check it out. So, what do you think is really happening here? Let me know in the comments. Subscribe for more AI news like this and hit the like button if you enjoyed the video. Thanks for watching, and I'll catch you in the next one.
