In a federal courtroom in Oakland last week, Elon Musk took the stand against OpenAI and its CEO Sam Altman in a lawsuit that looked like a billionaire grudge match from the outside. From the inside, it was something more significant: the first major legal reckoning over who controls the most powerful AI companies in the world — and what they actually owe the rest of us.
For NYC business owners paying monthly subscriptions for ChatGPT, Microsoft Copilot, or any tool powered by OpenAI's models, this trial isn't background noise. The outcome could reshape how these tools are priced, owned, and distributed.
What Happened in Court
Musk spent three days testifying in his lawsuit against OpenAI. The core claim: when he co-founded OpenAI in 2015, it was structured as a nonprofit with a public mission to develop AI safely for humanity. That nonprofit raised donations — including hundreds of millions from Musk himself.
Sam Altman, Musk alleges, engineered a transformation into a for-profit entity now valued in the hundreds of billions. The nonprofit's charitable assets were essentially converted into a private corporation without Musk's consent. His line from the stand became the headline of the week: "You can't just steal a charity."
OpenAI's defense: Musk left the board voluntarily in 2018, and his donations came with no legal strings. The for-profit structure was necessary to raise the capital needed to build frontier AI.
Week 2 is underway. The trial could run several more weeks, with more witness testimony and documentary evidence to come.
The Bombshell No One Saw Coming
The bigger surprise wasn't the argument. It was an admission.
On the stand, Musk confirmed that xAI — his competing AI company that makes the Grok chatbot — used OpenAI's models to train Grok through a technique called model distillation. In plain terms: Grok partially learned how to operate as an AI by studying the outputs of OpenAI's models.
This matters for two reasons. First, it's the same category of practice Musk's lawsuit implicitly criticizes — training on others' work without explicit consent. Second, it reveals how interconnected the AI industry actually is. The companies that market themselves as competitors aren't as independent as their branding suggests. The "AI race" between OpenAI, xAI, Google, and Anthropic involves companies that are, in some cases, learning from each other's outputs.
For business owners: the AI tools you pay for — and the supposed alternatives — share more DNA than you might expect.
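For readers who want a concrete picture of what "learning from another model's outputs" means, here is a deliberately tiny, illustrative sketch. It is not how Grok or any production system is actually built; the "teacher" stands in for a large model you can only query, and the simple linear "student" stands in for a smaller model being trained to imitate it.

```python
# Illustrative sketch of model distillation: a "student" learns by
# imitating a "teacher" model's outputs instead of original training data.
# All names here are hypothetical stand-ins, not a real API.

def teacher(x):
    # Pretend this is an expensive frontier model we can only query.
    return 2.0 * x + 1.0

def distill(num_queries=100, epochs=2000, lr=0.5):
    # Step 1: query the teacher to build a synthetic training set.
    xs = [i / num_queries for i in range(num_queries)]
    ys = [teacher(x) for x in xs]  # the teacher's outputs become the labels

    # Step 2: fit a small student (here, y = w*x + b) to those outputs
    # using plain gradient descent on squared error.
    w, b = 0.0, 0.0
    for _ in range(epochs):
        grad_w = grad_b = 0.0
        for x, y in zip(xs, ys):
            err = (w * x + b) - y
            grad_w += 2 * err * x
            grad_b += 2 * err
        w -= lr * grad_w / num_queries
        b -= lr * grad_b / num_queries
    return w, b

w, b = distill()
print(round(w, 1), round(b, 1))  # the student converges toward the teacher's behavior
```

The point of the sketch: at no stage does the student see the teacher's internals or original data, only its answers. That is why distillation makes the industry more interconnected than its branding suggests.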
The Pentagon Just Picked Sides
On the same week the trial opened, the U.S. Department of Defense made a move that reframes everything.
The Pentagon signed classified AI deals with OpenAI, Google, Nvidia, and Amazon Web Services, giving these companies direct access to deploy AI on classified military networks. The one notable absence: Anthropic, maker of the Claude AI models, remains on a Pentagon blacklist following a dispute with the current administration.
The implications are significant. The AI companies behind tools your business uses are now, in many cases, official U.S. military AI contractors operating in national security-sensitive environments. That changes the risk profile of these platforms for businesses handling sensitive client data.
For NYC firms in finance, law, and healthcare — sectors with strict data confidentiality obligations — understanding whether your AI vendor holds classified government contracts is no longer optional background knowledge. It's due diligence.
"AI Could Kill Us All" — What to Make of That
During testimony, Musk repeated his long-held warning: AI poses an existential threat to humanity if not developed carefully. He framed the OpenAI mission betrayal as a safety issue, not just a financial one — arguing that an AI company optimized for profit has structurally weaker incentives to prioritize safety.
Is this posturing? In part, certainly. But Musk's courtroom statements pushed "AI existential risk" into mainstream news coverage in a way that think-tank reports and academic papers never managed. Policymakers on both sides of the aisle are watching this trial closely. Congressional staffers are pulling documents filed in the case.
Any ruling that restructures OpenAI, which the court could theoretically impose, would likely prompt new regulations, new oversight requirements, and new constraints on the tools businesses use today.
What NYC Business Owners Should Do Right Now
New York has an above-average concentration of firms in three sectors directly exposed to this trial's outcome: finance, law, and media. Wall Street firms using AI for research and document review. Law firms using AI for contract analysis. Media companies using AI for content production. All of them are relying on tools whose legal structure and future pricing are being contested in federal court.
Here's what's actionable:
1. Don't build your operations on one AI platform. If OpenAI faces a forced restructuring, terms and pricing can change fast. Any business fully dependent on ChatGPT or Copilot without a fallback is exposed. At minimum, test an alternative — Anthropic's Claude, Google Gemini, or open-source models.
2. Know the actual AI company behind your tools. "Microsoft Copilot" is powered by OpenAI. "AWS Bedrock" pulls from multiple AI vendors including Anthropic and Meta. Knowing who's behind your software stack means you won't be blindsided by a legal ruling that affects one company upstream.
3. Document your current AI costs. If OpenAI is required to honor any aspect of its original nonprofit structure, that would put pressure on the for-profit business model, and that kind of pressure tends to show up in pricing. What you're paying now is worth tracking.
4. Review your AI vendor's data policies. The Pentagon's classified deals mean these platforms operate under national security-level scrutiny. If your business handles sensitive client data, get clarity on your vendor's data retention, access, and government cooperation policies before the next renewal.
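For teams with in-house developers, point 1 can be made concrete with a simple routing pattern: send every request through one wrapper that tries providers in order, so swapping or adding a vendor is a one-line change. This is a minimal sketch with hypothetical placeholder functions, not real OpenAI or Anthropic client code.

```python
# Minimal vendor-fallback sketch. The provider functions are hypothetical
# placeholders; in practice each would wrap a real vendor SDK call.

def ask_openai(prompt):
    # Placeholder for a real OpenAI API call.
    raise ConnectionError("primary vendor unavailable")

def ask_claude(prompt):
    # Placeholder for a real Anthropic API call.
    return f"[claude] answer to: {prompt}"

PROVIDERS = [("openai", ask_openai), ("anthropic", ask_claude)]

def ask(prompt):
    """Try each provider in order; fall back to the next on failure."""
    errors = {}
    for name, call in PROVIDERS:
        try:
            return name, call(prompt)
        except Exception as exc:  # outages, quota limits, sudden term changes
            errors[name] = str(exc)
    raise RuntimeError(f"all providers failed: {errors}")

provider, answer = ask("Summarize this contract clause.")
print(provider)  # falls back to the second vendor when the primary fails
```

The design choice matters more than the code: once every AI call goes through one chokepoint, a legal ruling or pricing change upstream becomes a configuration edit rather than a rewrite.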
The Bigger Picture
The Musk v. Altman trial is not a niche tech story. It's the first legal proceeding to directly examine who owns the profits from AI systems built with public-mission funding, and whether the people who built these systems owe anything to the public that was their stated beneficiary.
The outcome, whether Musk wins, Altman wins, or the case settles, will shape how AI companies are structured, governed, and regulated for the next decade.
For NYC business owners, the takeaway from Week 1 is straightforward: the companies behind your AI tools are in a fight, the U.S. government is now in the mix, and the landscape is moving fast. The best preparation is diversification, awareness, and not assuming today's setup is permanent.
This trial isn't over. Neither is the transformation of AI from tech-industry product to contested public infrastructure.
Stay current on AI developments shaping NYC businesses — subscribe to The Metro Intel.
