
Musk v Altman week 3 puts credibility on trial
Week 3 of Musk v Altman turned OpenAI's founding dispute into a direct test of whether jurors trust Sam Altman or Elon Musk more, with wider stakes for AI governance.

After three weeks of testimony, the Musk v Altman case is no longer a fight over old emails alone. A nine-person advisory jury must now decide which story about OpenAI’s founding bargain sounds more credible: Sam Altman’s account of a nonprofit nearly abandoned and forced to commercialise to survive, or Elon Musk’s account of an organisation that drifted from its mission once control and money came into view. In MIT Technology Review’s week 3 dispatch, Reuters’ end-of-trial report, and CNBC’s account of Altman’s testimony, the final stretch of evidence looked less like a seminar on AI safety than a hard test of character.
Week 3 narrowed the case to a single question: who gets to govern a frontier AI lab once it stops looking like a lab and starts looking like a platform company. Musk is seeking damages of $134 billion from OpenAI and Microsoft, according to MIT Technology Review, and the same report said an eventual listing of the company has at times been discussed at valuations as high as $1 trillion. By the time closing arguments approached, the courtroom argument was no longer just about whether OpenAI changed. Jurors were being asked whether that change was betrayal or survival.
Altman used the stand to push the survival version hard. In CNBC’s summary of his testimony, he said OpenAI’s backers were “kind of left for dead” after Musk’s departure, a phrase that did two jobs at once. It cast the organisation as precarious rather than triumphant, and it invited jurors to see later commercial compromises as reluctant engineering rather than a cash grab. NPR’s report from the courtroom described the same defence in plainer terms: Altman was trying to fend off the accusation that he “stole a charity” by arguing that the lab’s original structure could not fund the computing, talent and infrastructure needed to compete. It is a familiar Silicon Valley defence. What made it consequential in week 3 was the context: Altman was not arguing with a blog post or a rival founder on X. He was arguing under oath, with lawyers testing whether the sympathetic origin story still held once the money got large enough to change everyone around it.
The cross-examination is where the case sharpened.
Opposing counsel spent the week trying to turn Altman’s polish against him. In the CNN account of his testimony, Steven Molo asked him, “Are you completely trustworthy?” The question landed because it condensed several days of cross-examination into one line: if Altman had blurred conflicts, self-interest or past statements, the defence’s elegant story about necessity became harder to swallow. BBC’s five takeaways from the trial made the same point more broadly, describing a proceeding full of claim and counter-claim in which tech mythology was stripped back to ordinary human motives. Week 3 did not produce a cinematic revelation. What it produced was more legally useful: repeated invitations for jurors to doubt whether the people promising to build artificial general intelligence had ever agreed on the rules, or whether those rules were always flexible enough to bend around whoever held leverage.
Musk did not escape the same treatment. If Altman was painted as slippery, Musk was painted as acquisitive. MIT Technology Review’s dispatch quoted lawyer Sarah Eddy saying of Musk, “What he cared about was winning,” capturing the defence theory that the plaintiff’s moral language about humanity masked a simpler ambition: control. Reuters’ coverage of the closing stretch likewise described Musk being accused of “selective amnesia” as lawyers argued he had wanted majority equity, chief executive power and the authority to direct the company while he was still inside the project. The counterattack carries weight because Musk’s entire case depends on persuading jurors that he is the betrayed founder rather than the disappointed power broker. If they decide he wanted command and lost it, the nonprofit-versus-for-profit argument starts to look less like principle and more like a boardroom split argued in the language of philosophy.
Neither man arrived in week 3 as a clean witness.
The absence of a clean hero may be the week’s most revealing fact. The case has attracted attention because it sits at the junction of money, safety and celebrity, but jurors are being asked to do something narrower and more brutal: decide whether contemporaneous evidence points to a broken social mission or a broken founder relationship. NPR’s courtroom report captured the first half of that divide. Reuters and MIT Technology Review captured the second. Put together, the record suggests the legal fight is running on two kinds of disappointment at once: Musk’s disappointment that the lab no longer resembles the project he helped seed, and the company’s claim that Musk wanted to shape it only so long as he could shape it himself.
The trial’s third week mattered more than its headline theatre for exactly that reason. Corporate transformation cases often turn on documents, but this one also turns on narrative continuity. Did OpenAI drift from a charitable mission into a commercial machine, or did it discover that the original mission was impossible to pursue without a machine behind it? MIT Technology Review’s week 3 report framed the stakes in exactly those terms, noting that the advisory jury is being asked to weigh liability in a case whose consequences could reach far beyond damages. A Musk win would not only wound OpenAI and Microsoft. Every frontier lab, nonprofit alliance and mission-driven AI venture would then have to explain in far more concrete language how governance is supposed to survive the arrival of hyperscale capital. That question already sits behind a large share of the AI policy argument: who sets limits, who gets voting control, and what remains of public-interest language once the product begins to print revenue. The courtroom did not invent the tension. It forced the two men who helped popularise it to explain, under oath, when they stopped meaning the same thing by the word “humanity”.
Week 3 did not stay confined to personality. OpenAI is more than a startup in a founder dispute. The company sets the tempo for model releases, enterprise AI deployments and safety debates across the industry. When a courtroom spends days examining whether its founding promises were sincere, the audience is larger than the nine jurors in the room. Investors are listening for signals about structure. Regulators are watching how a mission-led organisation explains commercial compromise. Rival labs are studying whether the OpenAI template survives legal scrutiny or becomes a warning label. Reuters’ reporting on the possible effect on recapitalisation and IPO timing gave the stakes a financial shape; the week 3 testimony gave them a human one. The case no longer asks only whether OpenAI changed. The question now is whether any frontier AI organisation can change at this scale and still claim it is guided by the nonprofit language that helped legitimise it at the start.
The timing gives the week particular force. By the end of it, the lawyers were no longer building fresh scaffolding around the case. They were sanding down competing stories for jurors who had heard enough about emails, cap tables and founding ideals to know that the final choice might come down to which witness sounded less opportunistic when the stakes became largest.
For the jury, this is an awkward assignment. Nobody is asking them to settle philosophy. The task is to judge credibility in a feud where both protagonists arrived with baggage and both sides have spent weeks arguing that the other man’s memory becomes less reliable whenever control, valuation or status enters the frame. BBC’s recap and CNBC’s testimony summary both suggested the same problem: neither narrative is implausible on its face, which means tone, consistency and motive may carry unusual weight. The witness who sounds least evasive may matter more than the witness with the grander theory of AI governance. Week 3, in other words, turned the most consequential tech trial of the year into something almost old-fashioned — a contest over whose story holds together when the future-of-humanity language is stripped away.
The coming verdict is harder to shrug off than the usual billionaire courtroom drama. If jurors side with Musk, future plaintiffs gain a roadmap for challenging how mission-driven AI groups mutate once outside capital arrives. A verdict for Altman would reinforce the industry’s emerging argument that frontier research cannot remain purely nonprofit once computing costs, competitive pressure and geopolitical stakes escalate. Either outcome will travel. The commercial AI boom is already producing more organisations that speak in dual language: public benefit on one side, private scale on the other. Week 3 of Musk v Altman did not resolve that contradiction. What it exposed was how fragile the bridge between those claims can look once each architect is asked, in public and under oath, what he really wanted all along.
Asha Iyer
AI editor covering the model wars, AU enterprise adoption, and the policy shaping both. Reports from Sydney.
