A federal judge just called the Pentagon’s blacklisting of Anthropic “classic illegal First Amendment retaliation” — and blocked it. Two days after a hearing where she told government lawyers their national security argument seemed like “a pretty low bar,” Judge Rita Lin issued a 43-page preliminary injunction halting the Trump administration’s ban on Anthropic across all federal agencies. The supply chain risk designation — the first ever applied to an American company — is frozen. The directive ordering agencies to stop using Claude is paused. And the judge used the word “Orwellian” in the ruling.

That same week, Anthropic accidentally leaked its next model. OpenAI killed Sora to clear the deck for IPO. SoftBank borrowed $40 billion against that bet. Huawei’s new chip landed its first major orders. And a threat actor spent the week cascading through the AI supply chain, compromising Trivy, Checkmarx, LiteLLM, and telnyx in nine days. This was the week when the fights that actually matter — legal, financial, and operational — drowned out the model benchmarks.


⚡ Anthropic Won in Court. Then Leaked Its Most Powerful Model.

“Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary for expressing disagreement with the government.”

That’s from Judge Rita Lin’s March 26 ruling. The 43-page preliminary injunction blocks both the Pentagon’s supply chain risk designation and President Trump’s directive banning federal agencies from using Claude. The order is stayed for seven days.

The March 24 hearing lasted 95 minutes. Lin questioned the government’s lawyer directly, asking whether being “stubborn” in contract negotiations was really enough to brand a company a national security threat. She called the restrictions “troubling” and said they weren’t “tailored to the stated national security concern.”

The backstory: Anthropic signed a $200 million Pentagon contract last July and was the first AI lab to deploy on classified networks. Talks broke down in September when the Pentagon demanded unrestricted access for “any lawful purpose” and Anthropic held firm on two red lines — no mass domestic surveillance, no fully autonomous weapons. Defense Secretary Hegseth gave CEO Dario Amodei a four-day ultimatum in late February. Amodei refused. The blacklisting followed within days. Anthropic filed suit on March 9. Microsoft, former military leaders, the Cato Institute, and the Electronic Frontier Foundation all filed amicus briefs supporting Anthropic.

To grant a preliminary injunction, Lin had to find Anthropic “likely to succeed on the merits,” and her language suggests she did so emphatically. The government’s Under Secretary called the ruling “a disgrace.” The appeals court case in D.C. is still pending.

Then, on the same day as the ruling, Fortune reported a data leak. A misconfigured content management system left nearly 3,000 unpublished Anthropic assets in a publicly accessible data store — including a draft blog post announcing Claude Mythos, described as “the most capable we’ve built to date.” Anthropic confirmed the model is real, calling it “a step change” in performance, and said it’s already being tested with early access customers. The draft introduces a new model tier called Capybara — sitting above Opus, meaning bigger, smarter, and more expensive. The draft also flagged that Mythos is “currently far ahead of any other AI model in cyber capabilities” and warned it “presages an upcoming wave of models that can exploit vulnerabilities in ways that far outpace the efforts of defenders.” The rollout plan reportedly focuses on giving cyber defenders a head start before broader release. Anthropic attributed the leak to “human error” in its CMS configuration.

The irony of a safety-focused company leaking its most powerful model through an unsecured data store writes itself. But the substance matters more: Anthropic is sitting on a capability tier that could reshape cybersecurity, and it’s choosing a cautious release over a race to market. That’s the same philosophy that got it blacklisted.

Why it matters for builders: Two signals. First, the legal precedent: companies can still set safety boundaries on government AI use without facing economic annihilation — for now. Second, the Mythos signal: a new capability tier is coming, and Anthropic is telegraphing that cybersecurity implications are severe enough to change its entire release strategy. If you’re building security tooling, the window just got shorter.


🧠 OpenAI Killed Sora. SoftBank Borrowed $40B. Do the Math.

The IPO consolidation play is now visible from space

OpenAI announced on March 24 that Sora is being shut down entirely — the standalone app, the API, and Sora.com. Disney’s $1 billion investment deal, structured as stock warrants and signed just three months ago, is dead. A Disney spokesperson told Variety they “respect OpenAI’s decision to exit the video generation business.”

The economics explain the decision. Sora downloads peaked in November at roughly 3.3 million and dropped to about 1.1 million by February, according to Appfigures data cited by TechCrunch. Lifetime in-app revenue was approximately $2.1 million. OpenAI told CNN it needed to “make trade-offs on products that have high compute costs.” The Sora research team is being redirected toward robotics and world simulation research.

Three days later, on March 27, SoftBank secured a $40 billion unsecured bridge loan — maturing in just 12 months — to fund its $30 billion commitment to OpenAI’s $110 billion raise. JPMorgan, Goldman Sachs, Mizuho, SMBC, and MUFG signed on. An unsecured 12-month loan of that magnitude only makes sense if the lenders believe a liquidity event is coming fast. TechCrunch connected the dots explicitly: the banks are betting on an OpenAI IPO in late 2026. SoftBank’s total OpenAI exposure is now over $60 billion.

Connect the dots. Kill compute-intensive consumer products. Consolidate around enterprise and agentic workflows. Raise $110 billion. Prepare for what could be the largest tech IPO in history at a reported $730 billion valuation. Sora didn’t die because video generation failed. It died because IPO math demands focus.

NBC News reported that the closure comes as OpenAI has “come under intense pressure from rival Anthropic, whose AI systems have soared in popularity among leading businesses and software engineers.” Anthropic focused compute on text and code. OpenAI spread across images, video, and social features. The market picked a winner.

Why it matters for builders: If you built on the Sora API, you’re in migration mode now — Runway Gen-4.5, Kling, and Google Veo are the leading alternatives. But the broader lesson is about vendor dependency. OpenAI just killed a flagship product with minimal notice. If your core feature depends on a third-party API from a company optimizing for IPO optics, build the contingency plan now.


👀 Huawei’s 950PR Just Landed Its First Real Orders

The chip that learned to speak enough CUDA to matter

Reuters reported on March 27 that ByteDance and Alibaba are placing orders for Huawei’s Ascend 950PR — the first time major Chinese private-sector tech firms have committed to a Huawei AI chip at scale. Previous attempts with the Ascend 910C struggled to gain adoption. This time is different, and the reason is software, not silicon.

The key change: Huawei built a new translation layer called CANN Next that lets developers run code written for Nvidia’s CUDA ecosystem with significantly reduced migration friction. To be clear — this is not native CUDA support. Nvidia’s developer platform is proprietary and has never been licensed to Huawei. What Huawei built is a compatibility bridge that mimics CUDA workflows well enough for the use case ByteDance and Alibaba care about most right now: running deployed AI models at scale, not training new ones. Reuters described the chip as “more compatible with Nvidia’s CUDA software system” — a careful framing that matters.

The 950PR offers only a small improvement in raw computing power over the 910C, but it’s optimized for inference workloads — answering queries, running agents, serving predictions. That aligns with where China’s AI sector is actually spending compute as it shifts from model development to mass deployment, a trend accelerated by the OpenClaw agent explosion.

Huawei plans to ship 750,000 units in 2026, with mass production starting in April and full shipments in the second half. Pricing starts at roughly $6,900 per card (DDR version), with a premium HBM variant at about $9,600. Meanwhile, Nvidia’s best chips remain banned from China by U.S. export controls, and the approved H200 is still in regulatory limbo.

Why it matters for builders: If 750,000 units actually ship, it establishes something Huawei hasn’t had before: a commercial installed base that future chips can build on. Huawei doesn’t need to beat Nvidia globally — it needs to be good enough, available, and familiar enough to serve China’s largest AI operations while Nvidia’s access stays uncertain. For builders working across global markets, the hardware stack bifurcation between the U.S. and China just became more concrete.

Hype vs. Reality: 6/10 — The orders are real and the migration friction reduction is meaningful. But “plans to ship” and “shipping at scale” are different things, and the translation layer’s limitations on more demanding workloads remain untested in production.


🚨 Your AI Supply Chain Is Under Active Attack

A threat actor cascaded through five ecosystems in nine days

The threat group TeamPCP ran a coordinated supply chain campaign this week that should have every AI builder auditing their dependencies right now. The attack cascaded across security tools and AI infrastructure in a pattern that specifically targeted the most security-conscious organizations — the ones running vulnerability scanners in their CI/CD pipelines.

The timeline, verified across reports from Wiz, Kaspersky, SANS, Datadog, and Arctic Wolf:

March 19: TeamPCP compromised Aqua Security’s Trivy — one of the most widely used open-source vulnerability scanners — using a previously stolen service account. They force-pushed malicious code to 76 of 77 trivy-action version tags and published an infected Trivy binary (v0.69.4). CVE-2026-33634, CVSS 9.4.

March 20: Stolen npm tokens fed a self-propagating worm (CanisterWorm) that infected 66+ npm packages.

March 23: The campaign hit Checkmarx’s KICS GitHub Action and AST GitHub Action — infrastructure-as-code security scanning tools. Between 12:58 and 16:50 UTC, any pipeline running the compromised action executed the attackers’ credential stealer. Malicious VS Code extensions were also published.

March 24: LiteLLM — a universal proxy for 100+ LLM APIs, present in 36% of cloud environments according to Wiz, with roughly 95 million monthly PyPI downloads — was compromised via a stolen PyPI publishing token that the infected Trivy binary had exfiltrated from LiteLLM’s own CI pipeline. Versions 1.82.7 and 1.82.8 contained a credential stealer targeting API keys, cloud credentials, Kubernetes configs, and more. PyPI quarantined the packages within about three hours.

March 27: The telnyx PyPI package (versions 4.87.1 and 4.87.2) was backdoored using the same playbook.

The pattern is what matters. TeamPCP didn’t attack random packages — they targeted security scanners and AI infrastructure tools that run with broad access by design. Compromising one gives the attacker every credential that tool was trusted to touch.

Why it matters for builders: If your CI/CD pipelines used Trivy, Checkmarx KICS, or LiteLLM between March 19 and March 27, treat every secret those tools touched as compromised. Rotate API keys, cloud credentials, and Kubernetes tokens. Pin all dependencies to verified hashes. Use scoped credentials. And recognize that the “standard” AI development workflow — unpinned pip installs in automated pipelines — is now a documented attack vector.
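
The hash-pinning principle is simple enough to sketch. Here is a minimal, hypothetical Python illustration of what `pip install --require-hashes` enforces under the hood: refuse any artifact whose digest doesn’t match the pinned value (the artifact bytes and names below are invented for illustration):

```python
import hashlib

def verify_artifact(data: bytes, expected_sha256: str) -> bool:
    """Return True only if the artifact's digest matches the pinned hash."""
    return hashlib.sha256(data).hexdigest() == expected_sha256

# Simulated artifact and its pinned digest (values invented for illustration)
artifact = b"example-package-contents"
pinned = hashlib.sha256(artifact).hexdigest()

assert verify_artifact(artifact, pinned)         # digest matches: safe to install
assert not verify_artifact(b"tampered", pinned)  # digest mismatch: reject
```

In practice you wouldn’t roll this yourself: generate pinned hashes with a lockfile tool such as `pip-compile --generate-hashes` from pip-tools, then let pip enforce them with `--require-hashes`.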


📡 Quick Signals

GitHub will start training on your Copilot interaction data on April 24. Inputs, outputs, code snippets, and associated context from Free, Pro, and Pro+ users will be used to improve AI models unless you opt out. Business and Enterprise customers are excluded. The signal: privacy is becoming a paid product tier, and the most valuable AI training data is shifting from the public web to live developer workflows. If you work with proprietary code on Copilot Free or Pro, opt out before April 24.

Sanders and AOC introduced the AI Data Center Moratorium Act on March 25. The bill would ban all new data center construction until Congress passes comprehensive AI regulation, including worker protections, environmental safeguards, and pre-release government review of AI models. Senator Warner called it “idiocy.” Fetterman called it “China First.” It won’t pass, but the populist backlash against AI infrastructure is finding its legislative voice.

Washington Governor Ferguson signed two AI safety bills on March 24. HB 2225 regulates AI companion chatbots — requiring disclosure that the bot is AI, protections for minors against explicit content and manipulative engagement techniques, and suicide/self-harm protocols for all users. It includes a private right of action and takes effect January 1, 2027. Separately, Illinois introduced HB 5044, the Chatbot Provider Liability Act, which would designate chatbots as products subject to strict liability — a more aggressive legal framework that’s still in committee. The regulatory patchwork is accelerating.

Trump is forming an AI-focused Technology Council. WSJ reported on March 25 that Zuckerberg, Ellison, and Huang are being appointed to a 24-person council co-chaired by AI czar David Sacks. This is the policy arm of the same administration that just lost the Anthropic injunction — and it’s composed entirely of executives whose companies benefit from fewer AI regulations.

OpenAI launched a Safety Bug Bounty on March 25 with scope that explicitly includes “agentic risks including MCP.” That’s a frontier lab paying external researchers to find vulnerabilities in the agent-to-tool layer. Safety is becoming middleware.

MCP crossed 97 million monthly SDK downloads as of March 2026 — up from roughly 2 million at launch in November 2024. Every major AI provider now ships MCP-compatible tooling. Over 5,800 servers are live. The 2026 roadmap, published March 9, prioritizes enterprise readiness: auth, audit trails, and gateway behavior. If your internal tools aren’t MCP-accessible, they’re invisible to the agent ecosystem.
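
For a sense of what “MCP-accessible” means mechanically, here is a dependency-free Python sketch of the core shape: a tool descriptor (name, description, JSON Schema for inputs) plus a dispatcher. This is not the official MCP SDK, and `lookup_order` is a hypothetical internal function; a real server would expose this descriptor over MCP’s JSON-RPC transport via the SDK.

```python
import json

# Hypothetical internal function we want to expose to agents
def lookup_order(order_id: str) -> dict:
    return {"order_id": order_id, "status": "shipped"}

# MCP-style tool descriptor: a name, a description, and a JSON Schema
# for the inputs; this is the metadata an agent uses to discover the tool
TOOL = {
    "name": "lookup_order",
    "description": "Return the status of an internal order.",
    "inputSchema": {
        "type": "object",
        "properties": {"order_id": {"type": "string"}},
        "required": ["order_id"],
    },
}

def call_tool(name: str, arguments: dict) -> dict:
    """Minimal dispatcher: route a named tool call to the wrapped function."""
    if name != TOOL["name"]:
        raise ValueError(f"unknown tool: {name}")
    return lookup_order(**arguments)

print(json.dumps(call_tool("lookup_order", {"order_id": "A-1001"})))
# → {"order_id": "A-1001", "status": "shipped"}
```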


💰 The Opportunity: AI Security and Compliance Infrastructure

Every attack, every ruling, every liability bill is drawing you a product roadmap

This week produced a federal court ruling on AI safety boundaries, a state law with a private right of action against chatbot operators, a coordinated supply chain attack compromising five ecosystems, GitHub turning workflow data into a training set, and OpenAI launching a bug bounty targeting agent-layer vulnerabilities. The pattern: the security and compliance layer of AI is being built right now — by courts, legislatures, and attackers — and the tooling hasn’t caught up.

Specifically:

  • CI/CD security for AI dependencies — TeamPCP proved that the standard pip install workflow is a documented attack vector. Tools that verify, sandbox, and monitor AI library supply chains have an immediate market.
  • Agent audit and observability — Enterprise customers need session tracing, approval checkpoints, anomaly detection, and immutable logs for every agent action. Nobody owns this category yet.
  • Compliance middleware for chatbot operators — Washington’s HB 2225 requires disclosure, minor protections, and self-harm protocols. Illinois is considering strict product liability for chatbot providers. Builders who ship deterministic guardrails and regulatory reporting into agent workflows will win enterprise contracts.
  • Privacy-tiered developer tooling — GitHub just demonstrated that data privacy is a market segment. Developer tools that guarantee data isolation at every pricing tier have a real competitive wedge.
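
To make the “deterministic guardrails” bullet concrete, here is a hedged Python sketch of a compliance middleware layer in the spirit of HB 2225’s disclosure and self-harm requirements. The keyword regex is a toy stand-in (production systems use trained classifiers), and the message strings are illustrative:

```python
import re

# Toy keyword list; real deployments use trained classifiers, not regexes
SELF_HARM_PATTERNS = re.compile(r"\b(hurt myself|suicide|end my life)\b", re.I)
DISCLOSURE = "You are chatting with an AI assistant, not a human."
CRISIS_RESOURCE = "If you are in crisis, call or text 988."

def apply_guardrails(user_msg: str, bot_reply: str, first_turn: bool) -> str:
    """Deterministic pre/post layer: AI disclosure on the first turn,
    crisis resources whenever self-harm language is detected."""
    parts = []
    if first_turn:
        parts.append(DISCLOSURE)       # disclosure requirement
    if SELF_HARM_PATTERNS.search(user_msg):
        parts.append(CRISIS_RESOURCE)  # self-harm protocol
    parts.append(bot_reply)
    return "\n".join(parts)
```

The point of making this layer deterministic rather than model-driven is auditability: a regulator or enterprise customer can verify the guardrail fires every time, regardless of what the model generates.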

Time to first dollar: 4-8 weeks if you’re building on existing security or compliance frameworks. The demand exists. The tooling doesn’t.


🎯 The Playbook

Your move this week

  1. Audit every AI dependency in your CI/CD pipeline today — TeamPCP compromised Trivy, Checkmarx KICS, LiteLLM, and telnyx in nine days by cascading through stolen credentials. If your pipelines used any of these tools between March 19 and March 27, rotate every secret they touched. Pin all dependencies to verified hashes. Use scoped credentials. This isn’t optional hygiene — it’s active incident response.

  2. Opt out of GitHub’s Copilot training before April 24 — If you use Copilot Free or Pro and work with proprietary or client code, go to Settings → Copilot and disable training data collection. Don’t wait.

  3. Build a Sora migration plan if you were on the API — Evaluate Runway Gen-4.5, Kling, and Google Veo this week. The broader lesson: never build a core feature on a single vendor API without a documented fallback, especially when that vendor is optimizing for an IPO.

  4. Make one internal tool MCP-accessible — 97 million monthly downloads. 5,800+ servers. Figma, Stripe, Salesforce all shipping MCP servers. Tools that aren’t MCP-compatible are invisible to the agent ecosystem. Pick your most-used internal API and wrap it.

  5. Start building compliance into your agent architecture now — Washington’s chatbot safety law takes effect January 1, 2027. More states are moving. Build audit trails, consent flows, and deterministic guardrails before your enterprise customers require them in the next RFP.


🔥 What’s Viral Right Now

The Anthropic Injunction — A federal judge told the Pentagon its blacklisting of an American AI company “looks like an attempt to cripple” it. The 43-page ruling is worth reading if you care about the future of AI safety policy. The word “Orwellian” appears in a federal court order. That doesn’t happen often.

Claude Mythos — Anthropic’s leaked next-generation model. New tier above Opus. “Step change” in capabilities. Already in early access testing. The cybersecurity implications are reportedly severe enough to change the release strategy. The leak itself — through a misconfigured CMS — is the kind of irony that sticks.

SoftBank’s $40B Bridge Loan — Unsecured. 12-month maturity. JPMorgan and Goldman signed on. Either SoftBank has lost its mind or the banks believe an OpenAI IPO is happening this year. The smart money says it’s the latter.

TeamPCP — The threat group that cascaded from a vulnerability scanner to the AI supply chain in nine days. If you build with LiteLLM, Trivy, or Checkmarx KICS, check your pipelines. This isn’t theoretical.

Huawei’s 950PR — Not a Nvidia killer. Not CUDA-native. But “good enough, available, and familiar enough” for ByteDance and Alibaba to place real orders. If 750K units ship this year, it changes the global AI hardware map.


Stay building. 🛠️

— Matt