Decisions · ADR-008

AI auto-approve only

Status
Accepted

The compliance determination engine has no AI input path. CivicAI is read-only by architecture; it cannot import, call, or mutate compliance functions.

Context

AI systems can introduce bias, drift, and unexplainable outcomes. Compliance determinations affect Medicaid coverage, which is high-stakes and legally reviewable. Allowing AI to deny benefits or terminate coverage creates fair-hearing exposure (the platform cannot explain the determination beyond “the model said so”) and erodes audit defensibility (model versions and training data are not the same kind of evidence as deterministic rule evaluation). The current capabilities of AI-explainability tooling cannot bridge this gap.

Decision

The compliance determination engine performs deterministic, version-pinned rule evaluation. There is no AI input path. The CivicAI assistant exists as a beneficiary-facing chat interface but is architecturally read-only. It can answer questions about a beneficiary’s situation by reading compliance status, but it cannot import, call, or mutate any function that affects compliance. The architectural invariant is enforced by the absence of any code path between the two layers; the type system would refuse to compile a CivicAI handler that tried to call a determination engine function.
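
To make the invariant concrete, a minimal sketch of the boundary in TypeScript follows. The module layout and names (Determination, evaluateCase, ComplianceReadView, answerStatusQuestion) are illustrative assumptions, not the platform’s actual API; the point is that the CivicAI layer is typed against a read-only view that exposes no determination methods and never imports the engine module.

    // compliance/engine.ts: deterministic, version-pinned rule evaluation.
    // Illustrative module layout; never importable from the CivicAI layer.
    export interface Determination {
      caseId: string;
      outcome: "approved" | "denied" | "pending";
      ruleVersion: string;
    }

    export function evaluateCase(caseId: string, ruleVersion: string): Determination {
      // ...deterministic rule evaluation, pinned to ruleVersion...
      return { caseId, outcome: "pending", ruleVersion };
    }

    // civicai/read-view.ts: the only compliance surface the CivicAI layer sees.
    // It exposes status reads and nothing else; there is no evaluate, override,
    // or terminate method to call.
    export interface ComplianceStatus {
      readonly caseId: string;
      readonly outcome: "approved" | "denied" | "pending";
      readonly ruleVersion: string;
    }

    export interface ComplianceReadView {
      getStatus(caseId: string): Promise<ComplianceStatus>;
    }

    // civicai/handler.ts: a chat handler is typed against the read view only,
    // so a call into the determination engine cannot even be expressed.
    export async function answerStatusQuestion(
      view: ComplianceReadView,
      caseId: string
    ): Promise<string> {
      const status = await view.getStatus(caseId);
      return `Your case ${status.caseId} is currently ${status.outcome}.`;
    }

In practice the import boundary itself would likely also be enforced outside the type system (package boundaries or import-lint rules); the read-only view is what makes an accidental call fail to compile.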

Consequences

  • AI improvements cannot accelerate denial decisions; only approval-side improvements (faster routing of clear-approval cases) are even possible.
  • Auditors can fully replay any historical determination, because no AI variability is in the loop (see the sketch after this list).
  • Operational cost: CivicAI’s helpfulness is bounded by what the read layer can answer.
  • The platform cannot use AI for adverse actions even if a state requested it; stating that publicly removes a category of legal exposure on the state’s side.
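
The replay property follows directly from the version pin. Below is a minimal sketch of what an audit replay check could look like, again in TypeScript with hypothetical names (DeterminationRecord, evaluate, replay are illustrative, and the trivial rule stands in for the real pinned rule set): re-running the stored inputs against the stored rule version must reproduce the stored outcome exactly.

    import { strict as assert } from "node:assert";

    // Illustrative record of a historical determination: the inputs and the
    // pinned rule version are stored alongside the outcome.
    interface DeterminationRecord {
      caseId: string;
      ruleVersion: string;               // rule set pinned at decision time
      inputs: Record<string, unknown>;   // snapshot of the facts that were evaluated
      outcome: "approved" | "denied" | "pending";
    }

    // Stand-in for the deterministic engine: same rule version + same inputs
    // always yield the same outcome (trivial rule, for illustration only).
    function evaluate(
      ruleVersion: string,
      inputs: Record<string, unknown>
    ): "approved" | "denied" | "pending" {
      return inputs["verifiedIncomeUnderLimit"] === true ? "approved" : "pending";
    }

    // Audit replay: re-run the stored inputs against the stored rule version and
    // require the stored outcome to be reproduced exactly.
    function replay(record: DeterminationRecord): void {
      const reproduced = evaluate(record.ruleVersion, record.inputs);
      assert.equal(
        reproduced,
        record.outcome,
        `replay mismatch for case ${record.caseId} at rule version ${record.ruleVersion}`
      );
    }

    replay({
      caseId: "case-123",
      ruleVersion: "2024.07",
      inputs: { verifiedIncomeUnderLimit: true },
      outcome: "approved",
    });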

Alternatives considered

  • AI-recommended determinations with a human in the loop. Rejected: in fair-hearing-reviewable contexts, the human-in-the-loop pattern has been criticized for degrading into rubber-stamping; AI-assisted denials still introduce bias and drift, and auditor explainability suffers.
  • AI for ex parte verification only (verification, not denial). Considered for a future variant, not v1; verification automation is a separate decision and would need its own ADR if pursued.

References

  • Architectural separation of concerns as the basis for capability isolation
  • Algorithmic accountability literature on AI in benefits-determination contexts