The durable shift in agentic AI is not pricing. It is allocation of downside. Once autonomy is coupled to privileged tools, the entity that deploys the system inherits the liability profile of its outcomes.
Agents are not "automation at scale." They are delegated judgment with broad permissions. That changes two things boards should care about immediately: (1) error can propagate across interconnected systems faster than humans can detect it, and (2) compromise becomes a control problem, not a model-quality problem.
The economic baseline
Meaningful enterprise downtime is routinely priced in the six-figure-per-hour range, and materially higher in complex environments. Agentic failure does not need to be catastrophic. It only needs to cascade.[1]
The board question
The board question is governance and allocation, not model behavior: Who is liable? Who absorbs the loss? Who can constrain autonomy, and on what triggers?
A precedent already exists. In Moffatt v. Air Canada, the organization deploying the chatbot was held responsible for the misinformation delivered by the automated surface. Deployment concentrated accountability.[2]
Security becomes insider-risk economics
Agents sit at the intersection of intent and privilege. If an adversary can steer an agent, they can execute like an insider with delegated authority. NIST explicitly calls out indirect prompt injection and downstream impacts in LLM-integrated applications, including unintended actions across connected tools and workflows.[3]
Anthropic's disclosure on AI-orchestrated espionage reinforces the operational direction: attacks evolve from "model jailbreak" to "workflow compromise" against real systems.[4]
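To make the control point concrete, here is a minimal sketch of the kind of gate that allocates authority to policy rather than to the model. All names, tiers, and tools are hypothetical, not any vendor's API; the point is that a steered agent is only as dangerous as what an explicit gate lets through.

```python
from dataclasses import dataclass

# Hypothetical tool registry: each tool carries an explicit privilege tier.
PRIVILEGE = {
    "search_docs": "read",
    "update_invoice": "write",
    "issue_refund": "privileged",
}

@dataclass
class ToolCall:
    tool: str
    args: dict
    # Provenance matters: content the agent ingested from untrusted sources
    # (web pages, inbound email) can carry injected instructions.
    tainted_input: bool

def gate(call: ToolCall) -> str:
    """Decide whether a proposed agent action executes, escalates, or stops."""
    tier = PRIVILEGE.get(call.tool)
    if tier is None:
        return "deny"       # unknown tool: fail closed
    if tier == "privileged":
        return "escalate"   # stop-the-line: human approval required
    if tier == "write" and call.tainted_input:
        return "escalate"   # untrusted context may not drive writes
    return "allow"

# A refund assembled from an inbound email (a classic indirect
# prompt-injection surface) never executes autonomously.
print(gate(ToolCall("issue_refund", {"amount": 9_500}, tainted_input=True)))   # escalate
print(gate(ToolCall("search_docs", {"query": "SLA terms"}, tainted_input=True)))  # allow
```

Without a layer like this, delegated authority is whatever the model decides under whatever input it was fed, which is exactly the insider profile described above.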
The predictable failure mode
Most enterprises are adopting agentic AI through procurement and implementation motions designed for conventional software. Those motions assume errors are contained, reversible, and attributable. Agents violate all three when autonomy, privilege, and scale combine.
If the operating answer to material downside is "we will handle it in implementation," risk has already moved past signature and into operations, then economics, then governance.
Before commitments harden
Boards should require written answers to three items:
- Allocation: where liability and loss land for material harm
- Control: stop-the-line authority, triggers, and escalation path (see the sketch after this list)
- Exit: reversibility economics, including the real cost and time to unwind
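"Written answers" on control can be more than prose. A minimal sketch, with illustrative names and thresholds only, of stop-the-line triggers encoded as enforceable policy; the board artifact is the trigger set and the named escalation owner, and code is merely the enforcement surface:

```python
# Hypothetical stop-the-line policy: thresholds are illustrative, not advice.
STOP_TRIGGERS = {
    "max_spend_per_action_usd": 10_000,   # any single action above this halts
    "max_actions_per_minute": 30,         # crude cascade detector
    "blocked_without_approval": {"issue_refund", "delete_records"},
}

def should_stop(action: str, spend_usd: float, recent_rate: int) -> bool:
    """Return True when the agent must halt and escalate to a named owner."""
    return (
        spend_usd > STOP_TRIGGERS["max_spend_per_action_usd"]
        or recent_rate > STOP_TRIGGERS["max_actions_per_minute"]
        or action in STOP_TRIGGERS["blocked_without_approval"]
    )

assert should_stop("issue_refund", spend_usd=50.0, recent_rate=2)
assert not should_stop("search_docs", spend_usd=0.0, recent_rate=2)
```

If no one can produce the trigger set in this form, stop-the-line authority exists on paper only.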
Discounts expire. Lock-in persists. Risk allocation becomes the real contract.
Sources
1. ITIC, Hourly Cost of Downtime Report (2024) — itic-corp.com
2. ABA, Moffatt v. Air Canada: Chatbot Misinformation and Enterprise Liability (Feb 2024) — americanbar.org
3. NIST AI 600-1, Artificial Intelligence Risk Management Framework: Generative AI Profile — nist.gov
4. Anthropic, Disrupting an AI-Orchestrated Cyber-Espionage Campaign (Nov 2025) — anthropic.com
Disclosure
Thoraya conducts independent Decision Integrity Reviews in the window before major commitments harden. We evaluate decision integrity through five lenses: decision rights, lock-in points, governance readiness, operating-model fit, and risk and cost allocation. This memo addresses the fifth. Thoraya does not resell, implement, or hold commercial relationships with the platforms under review.