Autonomous Process Index (API)
Definition
The Autonomous Process Index (API) is a framework for describing how independently a process can run from goal → execution → verification without human handoffs. It is designed to separate automation (predefined steps) from autonomy (self-directed completion under changing conditions).
Working definition: An API score reflects the degree to which a process can accept a goal, plan and execute steps, monitor outcomes, handle exceptions, and continue until completion with minimal operator intervention.
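Read operationally, this working definition is a control loop: the process keeps planning, acting, and checking until the success criteria are met or it must hand off. Below is a minimal sketch in Python; the callable parameters (`plan_fn`, `execute_fn`, and so on) are hypothetical stand-ins for real tooling, not part of the framework:

```python
def run_process(goal, plan_fn, execute_fn, verify_fn, handle_exception_fn,
                max_attempts=10):
    """Drive a process from goal intake to verified completion,
    handing off to a human only when an exception cannot be resolved.
    All *_fn callables are hypothetical stand-ins for real tooling."""
    plan = plan_fn(goal)
    for _ in range(max_attempts):
        try:
            outcome = execute_fn(plan)      # take actions across tools/systems
        except Exception as exc:
            recovered, plan = handle_exception_fn(goal, plan, exc)
            if not recovered:
                return "escalated"          # the human-handoff boundary
            continue
        if verify_fn(goal, outcome):        # completion and quality, not just "ran steps"
            return "complete"
        plan = plan_fn(goal)                # replan under changed conditions
    return "escalated"                      # out of attempts: hand off
```

Higher API levels correspond to more of this loop running without the escalation branch firing.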
What the Index is for
- Comparing the autonomy of processes within a team or across organizations.
- Tracking progress over time as workflows become more agentic and less operator-driven.
- Diagnosing where “automation” stops and human handoffs begin.
What the Index is not
- Not a vendor scorecard and not an endorsement of any tool.
- Not a promise of safety, compliance, or correctness.
- Not a replacement for risk management, controls, or human accountability.
Core idea
A process can be highly automated and still have low autonomy if humans must interpret exceptions, coordinate handoffs, or decide when the work is “done.” The API focuses on that boundary: whether the process can maintain momentum and keep making progress under uncertainty.
Index levels
The following levels are intentionally simple; the supporting pages define the components and a construction method. A minimal code sketch of the scale follows the list.
- API‑0 — Manual: Human performs steps; tooling assists but does not run the process.
- API‑1 — Scripted Automation: Deterministic steps execute, but humans handle exceptions and completion.
- API‑2 — Assisted Autonomy: System proposes actions and can execute bounded steps; frequent human checkpoints.
- API‑3 — Conditional Autonomy: System executes end‑to‑end in normal cases and resolves common exceptions.
- API‑4 — Robust Autonomy: System adapts plans, verifies outcomes, and escalates only novel or high‑risk cases.
- API‑5 — Continuous Autonomy: System runs continuously with monitoring, self‑healing, and structured governance.
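Because the levels are ordinal, they can be recorded and compared directly, for example when tagging processes in an inventory. A minimal sketch; the enum and member names are illustrative choices, not prescribed by the Index:

```python
from enum import IntEnum

class APILevel(IntEnum):
    """Autonomous Process Index levels; higher values mean more autonomy."""
    MANUAL = 0                # human performs steps; tooling assists
    SCRIPTED_AUTOMATION = 1   # deterministic steps; humans own exceptions
    ASSISTED_AUTONOMY = 2     # bounded execution; frequent checkpoints
    CONDITIONAL_AUTONOMY = 3  # end-to-end in normal cases
    ROBUST_AUTONOMY = 4       # adaptive plans; escalates novel/high-risk cases
    CONTINUOUS_AUTONOMY = 5   # continuous operation under governance

# Ordinal comparisons work as expected:
assert APILevel.SCRIPTED_AUTOMATION < APILevel.CONDITIONAL_AUTONOMY
```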
Evaluation snapshot
- Goal intake: Can the system accept intent, constraints, and success criteria?
- Planning: Can it generate and update a plan as conditions change?
- Execution: Can it take actions across tools/systems with reliable state?
- Verification: Can it confirm completion and quality, not just “ran steps”?
- Exception handling: Can it recover, reroute, or safely stop when things go wrong?
- Governance: Are escalation, auditability, and permissions explicit and testable?
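One illustrative way to aggregate the snapshot: score each dimension 0–5 and take the minimum, on the view that a process is only as autonomous as its weakest dimension. This min-aggregation is an assumption made for the sketch, not the Index’s construction method:

```python
DIMENSIONS = ("goal_intake", "planning", "execution",
              "verification", "exception_handling", "governance")

def snapshot_level(scores: dict) -> int:
    """Conservative aggregation (an assumption, not the official method):
    the snapshot level is capped by the weakest dimension."""
    missing = [d for d in DIMENSIONS if d not in scores]
    if missing:
        raise ValueError(f"unscored dimensions: {missing}")
    return min(scores[d] for d in DIMENSIONS)

# Example: strong execution but weak verification caps the process at 2.
assert snapshot_level({"goal_intake": 4, "planning": 4, "execution": 5,
                       "verification": 2, "exception_handling": 3,
                       "governance": 3}) == 2
```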
For the detailed component model and a suggested scoring method, see Components and Methodology.