AI Accountability

Stop Giving AI Agents Agency. Give Them Liability.

The agent boom is turning mistakes into “liability laundering,” where vendors blame prompts, users blame models, and no one can be meaningfully held to account. The fix is not another safety slogan. It is standardizing proof at the moment of action, so agents cannot become blameless proxies.
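A minimal sketch of what "proof at the moment of action" could look like in practice, assuming each deployed agent holds a per-deployment signing key provisioned by the vendor or an auditor; the record fields and function names here are hypothetical illustrations, not the article's actual proposal:

```python
import hashlib
import hmac
import json
import time

# Assumption: a signing key provisioned out of band, one per deployment.
SIGNING_KEY = b"per-deployment-signing-key"

def sign_action(agent_id: str, action: str, params: dict) -> dict:
    """Emit a tamper-evident receipt for one agent action."""
    record = {
        "agent_id": agent_id,
        "action": action,
        "params": params,
        "timestamp": time.time(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_action(record: dict) -> bool:
    """Check that a receipt was not altered after signing."""
    claimed = record.pop("signature")
    payload = json.dumps(record, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    record["signature"] = claimed
    return hmac.compare_digest(claimed, expected)

receipt = sign_action("agent-7", "wire_transfer", {"amount": 500, "to": "acct-123"})
assert verify_action(receipt)
```

The point of such a receipt is that blame can no longer be laundered: the signed record pins a specific action to a specific agent and deployment at a specific time.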

The Abort Switch: Designing an Abort Doctrine for Frontier AI

Without a red button to halt a runaway model, AI labs risk crossing dangerous boundaries. This feature proposes an abort doctrine, echoing mission rules and market circuit breakers, to define thresholds, assign independent authority, and mandate learning. It calls for balancing innovation with safety.
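To make the "circuit breaker" analogy concrete, here is a minimal sketch of pre-committed abort thresholds checked on every step, with the halt decision owned by a party independent of the team running the model; the metric names and limits are illustrative assumptions, not thresholds from the article:

```python
from dataclasses import dataclass

@dataclass
class AbortRule:
    metric: str   # e.g. "tool_calls_per_minute", "spend_usd"
    limit: float  # pre-committed before deployment, not tuned mid-run

def tripped_rules(telemetry: dict, rules: list) -> list:
    """Return every rule this run has tripped; any hit mandates a halt and review."""
    return [r for r in rules if telemetry.get(r.metric, 0.0) > r.limit]

rules = [
    AbortRule("tool_calls_per_minute", 120),
    AbortRule("spend_usd", 1_000.0),
]
hits = tripped_rules({"tool_calls_per_minute": 340, "spend_usd": 12.5}, rules)
if hits:
    # In the doctrine's terms, an independent authority, not the operating
    # team, executes the halt and triggers the mandated post-incident review.
    print("ABORT:", [r.metric for r in hits])
```

The design choice mirrors market circuit breakers: the thresholds are fixed in advance, so the decision to stop is mechanical rather than negotiated in the heat of an incident.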