Glossary

The vocabulary of a quietly arriving change.

A working dictionary for the inversion. Most of these terms are emerging from research, industry, and policy at roughly the same time, which is why no one has quite settled on definitions. The entries below are the definitions used across this site.

Core terms

Inversion Principle

The shift from humans using AI as a transactional service to AI agents owning intent — where humans become tools the AI can invoke when a person is needed.

Agentic AI

AI systems capable of autonomous planning, decision-making, and execution of multi-step tasks. Maintains goals, adapts strategy, operates with limited supervision.

Ambient AI

AI that operates continuously in the background, perceiving context and acting without requiring explicit human initiation. The opposite of session-based AI.

Principal–Agent Reversal

The classical principal–agent relationship — where a principal delegates to an agent — flipped. In the inversion, the AI is the principal; humans are agents.

Architecture & technical

Manager Agent

An AI that orchestrates a human-and-agent team, decomposing objectives into tasks and routing each to the right executor — AI, tool, or human.

Orchestration Layer

The architectural component coordinating multiple agents, tools, and human resources to accomplish complex objectives. The "manager" of an agentic system.

Workflow Decomposition

Breaking a complex objective into atomic, executable tasks that can each be assigned to AI agents, tools, or humans.
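A minimal sketch of what decomposition might produce. In practice a planner or language model generates the task list; here it is hard-coded for a hypothetical "publish quarterly report" objective, and the `Task` structure is illustrative, not a standard:

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    description: str  # the executor (AI, tool, or human) is chosen later, at routing time

def decompose(objective: str) -> list[Task]:
    """Toy decomposition: a real system would derive this with a planner."""
    if objective == "publish quarterly report":
        return [
            Task("gather", "Pull metrics from the data warehouse"),
            Task("draft", "Write the narrative summary"),
            Task("review", "Approve numbers and tone"),
            Task("send", "Distribute to the mailing list"),
        ]
    # Fallback: treat the objective as a single atomic task
    return [Task("do", objective)]

tasks = decompose("publish quarterly report")
print([t.name for t in tasks])  # ['gather', 'draft', 'review', 'send']
```

The point of the shape, not the content: each task is atomic enough that it can be handed to a different kind of executor.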

Multi-Agent System

An architecture involving multiple specialised AI agents that collaborate, coordinate, and sometimes compete to accomplish an objective.

Human-in-the-Loop (HITL)

A design pattern where humans are involved in AI decision processes. In the inverted model, this shifts from continuous human control to "human as callable resource."

Human-on-the-Loop (HOTL)

A supervisory model where humans monitor AI systems and can intervene when necessary, but are not involved in every decision.

Business & economic

AI-First Mindset

An organisational approach that designs processes around AI capabilities, treating AI as the primary executor and humans as complementary resources.

Containment Rate

In customer service, the percentage of inquiries resolved without escalating to a human. A core metric for the effectiveness of AI orchestration.
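The metric itself is simple arithmetic; a sketch, with illustrative numbers:

```python
def containment_rate(total_inquiries: int, escalated_to_human: int) -> float:
    """Share of inquiries resolved without escalating to a human."""
    if total_inquiries == 0:
        return 0.0
    return (total_inquiries - escalated_to_human) / total_inquiries

# 1,000 inquiries, 180 escalated: 82% containment
print(f"{containment_rate(1000, 180):.0%}")  # 82%
```

Note what the metric optimises for: fewer human touches, which is exactly why it belongs in a glossary about the inversion.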

Task Routing

The decision an AI orchestration system makes about whether a task should be handled by an AI agent, a human worker, or a hybrid path.
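A toy routing policy, to make the decision concrete. The fields (`requires_empathy`, `risk`, `confidence`) and thresholds are hypothetical; real systems learn or configure these:

```python
def route(task: dict) -> str:
    """Illustrative routing policy: choose an executor for one task."""
    # Emotional or high-stakes work goes straight to a person
    if task["requires_empathy"] or task["risk"] == "high":
        return "human"
    # Low model confidence: AI drafts, a human approves
    if task["confidence"] < 0.7:
        return "hybrid"
    return "ai_agent"

print(route({"requires_empathy": False, "risk": "low", "confidence": 0.95}))  # ai_agent
print(route({"requires_empathy": True, "risk": "low", "confidence": 0.95}))   # human
```

The inversion lives in the first branch: the human is one return value among three.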

Human Invocation

The act of an AI system requesting human assistance or assigning a task to a human worker as part of an automated workflow.

Algorithmic Management

The use of algorithms to perform managerial functions: assignment, scheduling, monitoring, evaluation, discipline. Already standard on platforms; spreading to enterprise.

Labor as API

The concept of human labour being accessible and allocatable through programmatic interfaces, similar to software services. A useful description and a serious warning.

Governance & ethics

Responsibility Gap

The lack of clear accountability when autonomous AI systems make decisions that cause harm. A central ethical concern in AI governance.

Algorithmic Transparency

The degree to which the logic and decision-making processes of AI systems are visible and understandable to the people they affect.

Human Oversight Threshold

A defined point at which an AI system must involve a human decision-maker, typically based on risk level, impact, or uncertainty.
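In code, a threshold like this is often just a gate in front of the decision. A sketch, with the threshold values purely illustrative:

```python
def needs_human(decision_impact: float, model_uncertainty: float,
                impact_threshold: float = 0.5,
                uncertainty_threshold: float = 0.3) -> bool:
    """Escalate to a human when either impact or uncertainty crosses its threshold.
    Both inputs are assumed to be normalised to [0, 1]; thresholds are examples."""
    return (decision_impact >= impact_threshold
            or model_uncertainty >= uncertainty_threshold)

print(needs_human(0.2, 0.1))  # False: low stakes and the model is confident
print(needs_human(0.8, 0.1))  # True: high impact forces a human decision
```

Where those two threshold numbers are set, and by whom, is the governance question the term points at.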

Dignity Erosion

The gradual loss of human dignity that occurs when people are treated as resources to be optimised rather than ends in themselves.

Agency Preservation

Deliberate effort to maintain meaningful human choice and control in domains that could be fully automated or AI-orchestrated. The opposite of optimisation by default.

AI Accountability Framework

A governance structure that assigns responsibility for AI decisions and their consequences to identifiable humans or organisations.

People & daily life

Cognitive Liberation

The reclaiming of mental capacity that comes from offloading routine coordination, scheduling, and triage to AI systems. The empowering core of the inversion.

Director vs. Actor

A useful metaphor: in the old model, people are the actors, performing each task themselves and perpetually teaching the system how to behave. In the inverted model, they become directors, providing intent and trusting the production to run.

The Liberated Human

The positive vision of the inversion: people freed from administrative drudgery to focus on creativity, relationships, judgment, and meaning.

Skill Atrophy

The decline of capabilities such as navigation, memory, judgment, and social skills that occurs when AI handles a domain continuously. A real concern; not an inevitable one.

Orchestrated Living

A lifestyle where AI manages and coordinates most daily activities. Useful when chosen deliberately; corrosive when defaulted into.

The Autonomy Paradox

AI orchestration promises to free us by handling details, but may constrain us by making decisions we should make ourselves. The paradox is the whole game.