AI capability models

These models define the level of responsibility an AI system assumes within a user workflow.

They help teams align on what the AI does, how much control it has, and how it should behave in the user experience.

1. Informational AI (RAG / Knowledge Assistance)

AI informs; it does not decide or act

Characteristics

  • Explains, summarizes, answers questions
  • Provides context and insight
  • No system changes or state mutation
  • No workflow ownership

User role

Interprets and decides independently

UX implications

  • Emphasize clarity and source grounding
  • Avoid prescriptive language
  • Keep interaction lightweight

Examples

  • In-product help assistant
  • Documentation chat
  • “Why is this happening?” explanations
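The defining constraint at this level is that the assistant is read-only: it retrieves, cites, and explains, but never mutates state. A minimal sketch of that contract (the in-memory knowledge base and all names are hypothetical, standing in for a real retriever):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Snippet:
    source: str   # document the text came from
    text: str

# Hypothetical in-memory "knowledge base" standing in for a real retriever.
KNOWLEDGE = [
    Snippet("docs/scaling.md", "Autoscaling adds replicas when CPU exceeds the target."),
    Snippet("docs/billing.md", "Usage is billed per replica-hour."),
]

def answer(question: str) -> dict:
    """Return an answer grounded in retrieved snippets.

    Read-only by construction: the function inspects the knowledge
    base and returns text plus citations, but never mutates state.
    """
    words = question.lower().split()
    hits = [s for s in KNOWLEDGE if any(w in s.text.lower() for w in words)]
    return {
        "answer": " ".join(s.text for s in hits) or "No grounded answer found.",
        "sources": [s.source for s in hits],  # always surface sources
    }
```

Returning sources alongside every answer is what makes "emphasize clarity and source grounding" enforceable in the UI rather than a style guideline.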

2. Assistive AI (Decision Support)

AI suggests, user decides

Characteristics

  • Drafts, recommends, validates
  • Operates within an existing workflow
  • Does not execute actions independently

User role

Evaluates, edits, and approves

UX implications

  • Show confidence and rationale
  • Provide alternatives or edits
  • Make acceptance/rejection easy

Examples

  • Report generation assistance
  • Configuration recommendations
  • Step-by-step setup guidance
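The key structural property here is that a suggestion carries its rationale and confidence, and nothing takes effect until the user explicitly decides. A sketch under those assumptions (the timeout heuristic and all names are illustrative, not a real API):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Suggestion:
    value: str
    rationale: str                    # surface *why*, per the UX guidance above
    confidence: float                 # 0..1, shown to the user
    accepted: Optional[bool] = None   # None until the user decides

def suggest_timeout(current_ms: int) -> Suggestion:
    # Hypothetical heuristic: recommend a longer timeout to cut spurious retries.
    return Suggestion(
        value=f"{current_ms * 2}ms",
        rationale=f"Doubling the current {current_ms}ms timeout reduces spurious retries.",
        confidence=0.7,
    )

def apply_decision(s: Suggestion, accept: bool) -> Suggestion:
    """The user, not the AI, flips the switch; nothing executes otherwise."""
    s.accepted = accept
    return s
```

Keeping `accepted` tri-state (None / True / False) also gives the measurement layer its raw data: acceptance and override rates fall directly out of these records.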

3. Agent-Assisted AI (Operational with Oversight)

AI executes tasks with user approval and guardrails

Characteristics

  • Performs multi-step actions
  • Executes defined “skills” or workflows
  • Requires user approval at key steps
  • Operates within governance boundaries

User role

Delegates and supervises

UX implications

  • Require explicit consent before execution
  • Provide visibility into planned actions
  • Support interruption, rollback, and audit

Examples

  • Guided remediation workflows
  • Multi-step configuration automation
  • Ops orchestration (Opsmith-style flows)
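The oversight pattern above reduces to a simple control loop: show the full plan, gate each step on explicit consent, and keep a trail of what actually ran. A minimal sketch (callback names are illustrative):

```python
def run_with_oversight(plan, execute, approve):
    """Execute a multi-step plan, pausing for approval before each step.

    plan    -- ordered list of step descriptions (visible to the user up front)
    execute -- callback that performs one step
    approve -- callback returning True/False per step (the human gate)
    Returns the steps actually completed; stops at the first rejection.
    """
    completed = []
    for step in plan:
        if not approve(step):     # explicit consent before execution
            break                 # the user can interrupt at any step
        execute(step)
        completed.append(step)    # audit trail of what actually ran
    return completed
```

For example, a remediation plan of ["drain node", "restart service", "rebalance"] where the user rejects the final step would execute and record only the first two, supporting the interruption and audit requirements above. Rollback would layer on top, replaying `completed` in reverse with inverse actions.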

4. Agent-Autonomous AI (Fully Operational)

AI acts independently within defined constraints

Characteristics

  • Executes workflows without real-time user approval
  • Continuously monitors and responds to conditions
  • Operates under predefined policies and safeguards

User role

Defines rules, monitors outcomes

UX implications

Strong emphasis on:

  • transparency (what happened and why)
  • auditability
  • control (pause, override, rollback)
  • clear communication of scope and limits

Examples

  • Auto-scaling infrastructure based on load
  • Self-healing systems (auto-remediation)
  • Policy-driven traffic routing adjustments
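At this level no human approves individual actions; the predefined policy is the safeguard. The auto-scaling example can be sketched as a pure decision function whose output can never leave the policy's bounds (the policy keys and thresholds are illustrative, not a real API):

```python
def autoscale(load: float, replicas: int, policy: dict) -> int:
    """Policy-driven scaling decision with predefined safeguards.

    No human approves each change; instead the policy caps what the
    system may do (min/max replicas, step size). In a real system every
    decision would also be logged for later audit.
    """
    target = replicas
    if load > policy["scale_up_above"]:
        target = replicas + policy["max_step"]
    elif load < policy["scale_down_below"]:
        target = replicas - policy["max_step"]
    # Safeguards: clamp to the configured bounds, whatever the load says.
    return max(policy["min_replicas"], min(policy["max_replicas"], target))
```

Because the clamp is applied unconditionally, "operates under predefined policies and safeguards" is a structural guarantee rather than a convention: even a misjudged load signal cannot push the system outside the bounds the user defined.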

Future Measurement Layers

These measurement layers define how the real impact of AI on user behavior, decision quality, and long-term product outcomes will be assessed.

  • Informational → engagement metrics
  • Assistive → acceptance / override rates
  • Operational → success / rollback / trust signals
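The assistive-layer metric is the most mechanical of the three: if each suggestion records whether the user accepted or overrode it, the rate falls out directly. A minimal sketch (the input encoding is an assumption, not a defined schema):

```python
def acceptance_rate(decisions):
    """Acceptance rate for an assistive feature.

    decisions -- list of booleans: True if the user accepted the AI
    suggestion, False if they overrode it. The override rate is the
    complement (1 - acceptance rate).
    """
    if not decisions:
        return 0.0  # no data yet; report zero rather than divide by zero
    return sum(decisions) / len(decisions)
```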