Anthropic's Long-Term Benefit Trust just swapped out two founding members for a former California Supreme Court Justice. On paper, this is a governance upgrade. In practice, it reveals something more interesting about how Anthropic's alternative to OpenAI's governance model is evolving.
The announcement buries the lede. Mariano-Florentino "Tino" Cuéllar is joining the Trust, yes. But Kanika Bahl (CEO of Evidence Action) and Zachary Robinson (CEO of the Centre for Effective Altruism) are leaving. Both were founding trustees who "helped build the LTBT from the ground up," according to Chair Neil Buddy Shah.
Two of the Trust's original effective altruism-aligned voices are departing in favor of someone from the legal and policy establishment. Cuéllar's resume speaks for itself: former California Supreme Court Justice, current President of the Carnegie Endowment for International Peace, chair of the Hewlett Foundation board. He co-led California's Working Group on AI Frontier Models with Fei-Fei Li. The man knows AI policy.
But the composition shift matters. Anthropic was founded by former OpenAI researchers who worried that safety concerns were taking a back seat to commercial pressures. The Trust was supposed to be the structural answer to that worry, part of the company's broader commitment to approaches like Constitutional AI. Now the effective altruism contingent is cycling out.
The Trust's actual powers
The Trust's official remit, according to Anthropic: it "helps Anthropic achieve its public benefit mission by selecting members of Anthropic's Board of Directors, and advising the Board and leadership."
That's board selection plus advice. Not veto power over product launches. Not authority to block commercialization decisions. Not the ability to force safety pauses.
The Trust appointed Reed Hastings and Jay Kreps to Anthropic's board. Both are accomplished tech executives: Hastings co-founded Netflix, and Kreps co-founded and runs Confluent. Neither is known for prioritizing safety over growth.
Anthropic has never published the actual Trust agreement. We don't know the precise mechanics of how trustees are selected, what happens if they disagree with company leadership, or under what circumstances stockholders could override Trust decisions. Shah's framing is telling: he emphasizes the need for "leaders who understand these dynamics," pointing to "geopolitical competition" and governments rapidly adopting AI. That sounds more like commercial expansion logic than safety-first thinking.
The OpenAI comparison
Anthropic positions itself as the safety-conscious alternative to OpenAI. The Trust is central to that positioning. But OpenAI's governance crisis in late 2023 showed that nonprofit oversight structures can collapse quickly when they conflict with commercial momentum. Sam Altman was fired by a safety-focused board and reinstated within days after investors and employees revolted.
Does the Trust have the structural power to do better? The answer appears to be: we don't know, because Anthropic won't show us the agreement.
Cuéllar's statement hits the right notes about "governance structures that marry private sector dynamism with civic responsibility." But statements of intent aren't governance mechanisms. The question isn't whether the Trust's members are impressive. It's whether they can actually do anything if Anthropic's commercial interests conflict with its safety mission.
What this actually signals
The best interpretation: Anthropic is professionalizing its oversight as it grows into a company with real government and enterprise customers. Cuéllar's policy experience makes sense if you're shipping AI to the UK government, as we've covered.
The less charitable interpretation: the Trust is evolving from a safety-focused body into a prestige board that provides credibility without friction. Establishment figures who understand "geopolitical competition" are less likely to pump the brakes than effective altruism researchers worried about existential risk.
The honest answer is we can't tell which interpretation is correct because the governance documents remain private. For a company built on the premise that AI development needs better oversight structures, that opacity is hard to justify.