
Measures

Each pack answers a distinct phase question in the care cycle. The measures below are assigned so that each metric belongs to exactly one pack.

These metrics are designed for sufficiency, not maximisation. Each deployment context defines a threshold — "good enough" for that community. Crossing the threshold is the goal; score-chasing beyond it risks the same metric-gaming the 6-Pack warns against. The kami tends its garden — it does not compete to tend the most gardens.
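The sufficiency idea can be sketched in a few lines. This is a minimal illustration, not part of the framework's specification; the `Measure` type and the example values are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Measure:
    name: str
    value: float      # observed value, normalised to 0..1
    threshold: float  # "good enough" level set by the deployment context

def sufficiency_status(m: Measure) -> str:
    # Pass/fail against the local threshold; exceeding it earns no
    # extra credit, which removes the incentive to chase scores.
    return "sufficient" if m.value >= m.threshold else "below threshold"

print(sufficiency_status(Measure("coverage", 0.82, 0.75)))  # sufficient
print(sufficiency_status(Measure("coverage", 0.60, 0.75)))  # below threshold
```

Note the return value is binary: a community that crosses its threshold is done, whatever the margin.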

Pack 1 — Attentiveness: did we look at the right things?

| Metric | What it answers |
| --- | --- |
| Coverage | What share of affected people provided input? |
| Absence rate | Which demographic segments have no voice in the bridging map? |
| Voice equity | How much facilitated time do least-heard groups get vs. most-heard? |
| Bridging map | Which stakeholder clusters are represented, and which cleavages are reinforcing vs. cross-cutting? |
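Coverage and absence rate are simple set computations over participation records. A minimal sketch, assuming hypothetical participant IDs and an illustrative segment map:

```python
# Hypothetical IDs: the affected population, who actually responded,
# and a demographic segment map — all names here are illustrative.
affected = {"a1", "a2", "a3", "a4", "a5", "a6"}
respondents = {"a1", "a3"}
segments = {
    "north":  {"a1", "a2"},
    "south":  {"a3", "a4"},
    "island": {"a5", "a6"},
}

# Coverage: share of affected people who provided input.
coverage = len(respondents & affected) / len(affected)

# Absence rate input: segments with no voice at all.
voiceless = sorted(s for s, members in segments.items()
                   if not (members & respondents))

print(f"coverage={coverage:.2f}, voiceless segments={voiceless}")
```

Here the "island" segment would surface as a gap in the bridging map even though overall coverage is nonzero.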

Pack 2 — Responsibility: did we make and keep the right promises?

| Metric | What it answers |
| --- | --- |
| Promise coverage | What share of identified needs have a named owner and SLA? |
| SLA adherence | What share of cases are resolved within their published response window? |
| Adopt-or-explain rate | What share of Assembly outcomes have documented adoption or published deviation with remedy? |
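Promise coverage and SLA adherence can both be computed directly from need and case records. A sketch, assuming hypothetical field names (`owner`, `sla`, `resolved_in`):

```python
from datetime import timedelta

# Hypothetical need records; a promise needs both an owner and an SLA.
needs = [
    {"owner": "team-a", "sla": timedelta(days=3)},
    {"owner": None,     "sla": None},              # unowned need
    {"owner": "team-b", "sla": timedelta(days=7)},
]
promise_coverage = sum(1 for n in needs if n["owner"] and n["sla"]) / len(needs)

# Hypothetical case records checked against their published window.
cases = [
    {"sla": timedelta(days=3), "resolved_in": timedelta(days=2)},
    {"sla": timedelta(days=3), "resolved_in": timedelta(days=5)},  # breach
]
sla_adherence = sum(1 for c in cases if c["resolved_in"] <= c["sla"]) / len(cases)

print(f"promise coverage={promise_coverage:.2f}, SLA adherence={sla_adherence:.2f}")
```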

Pack 3 — Competence: did we execute correctly?

| Metric | What it answers |
| --- | --- |
| Decision accuracy | What share of decisions are overturned on audit or appeal? |
| Guardrail integrity | What share of red-line tests pass? |
| Trace completeness | What share of decisions have a full trace (rule, source, uncertainty, receipt)? |
| Canary health | What share of canary releases complete without triggering rollback? |
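Trace completeness is a presence check on the four named fields. A sketch, with hypothetical trace records:

```python
# A decision trace is complete only if all four fields are present.
REQUIRED = {"rule", "source", "uncertainty", "receipt"}

decisions = [  # hypothetical trace records
    {"rule": "r-12", "source": "doc-4", "uncertainty": 0.1, "receipt": "rcpt-1"},
    {"rule": "r-07", "source": "doc-9"},  # missing uncertainty and receipt
]
trace_completeness = sum(1 for d in decisions
                         if REQUIRED <= d.keys()) / len(decisions)
print(f"trace completeness={trace_completeness:.2f}")
```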

Pack 4 — Responsiveness: did the care land well?

| Metric | What it answers |
| --- | --- |
| Resolution rate | What share of pause-triggering cases are resolved within the promised window? |
| Appeal closure time | How does actual vs. target appeal closure time compare across case types? |
| Harm recurrence | At what rate do resolved incidents re-appear within 90 days? |
| Trust-under-loss | What is the trust score after a bad outcome — did repair work? |
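Harm recurrence counts a resolved incident as recurred if the same kind of harm re-appears within the 90-day window after resolution. A sketch over a hypothetical incident log:

```python
from datetime import date, timedelta

WINDOW = timedelta(days=90)

# Hypothetical log: (harm kind, resolution date) and later sightings.
resolved = [("spam", date(2024, 1, 10)), ("leak", date(2024, 2, 1))]
later_harms = [("spam", date(2024, 3, 1)),   # within 90 days of spam fix
               ("leak", date(2024, 8, 1))]   # well outside the window

recurred = sum(
    1 for kind, closed in resolved
    if any(k == kind and closed < seen <= closed + WINDOW
           for k, seen in later_harms)
)
harm_recurrence = recurred / len(resolved)
print(f"90-day harm recurrence={harm_recurrence:.2f}")
```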

Pack 5 — Solidarity: is the ecosystem structurally fair?

| Metric | What it answers |
| --- | --- |
| Bridge index | What are the cross-group participation and endorsement rates in shared decisions, published quarterly? |
| Portability rate | What share of users successfully export data when leaving? |
| Agent ID coverage | What share of agents have verifiable meronymous attestations? |
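One simple reading of the bridge index is the share of shared decisions endorsed by more than one stakeholder group. This sketch is an assumption about how the index might be computed, with illustrative group names:

```python
# Hypothetical quarter of shared decisions; each entry is the set of
# stakeholder groups that endorsed the decision.
endorsements = [
    {"growers", "riders"},
    {"growers"},                      # single-group decision
    {"growers", "riders", "makers"},
]

bridge_index = sum(1 for groups in endorsements
                   if len(groups) >= 2) / len(endorsements)
print(f"bridge index={bridge_index:.2f}")
```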

Pack 6 — Symbiosis: is the system bounded and sustainable?

| Metric | What it answers |
| --- | --- |
| Scope compliance | What share of agent actions fall within declared purpose bounds? |
| Succession readiness | When was the last successful exit drill? |
| Sunset compliance | What share of agents have current attestation and active sunset timer? |
| Ecology diversity | How many independent agents serve equivalent needs in the same domain? |
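Scope compliance reduces to checking each logged action against the acting agent's declared purpose bounds. A sketch with hypothetical agent IDs and action names:

```python
# Hypothetical declared purpose bounds and an action log.
declared = {"agent-1": {"schedule", "notify"}, "agent-2": {"search"}}
actions = [
    ("agent-1", "schedule"),
    ("agent-1", "delete"),   # outside agent-1's declared bounds
    ("agent-2", "search"),
]

in_scope = sum(1 for agent, act in actions
               if act in declared.get(agent, set()))
scope_compliance = in_scope / len(actions)
print(f"scope compliance={scope_compliance:.2f}")
```

An agent with no declaration at all scores every action out of scope, which is the conservative default.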

Trust decomposition

Trust is present across all six phases, but each pack measures a distinct dimension:

| Pack | Trust dimension |
| --- | --- |
| 1 — Attentiveness | Trust in being heard (voice equity) |
| 2 — Responsibility | Trust in promises (SLA adherence, adopt-or-explain) |
| 3 — Competence | Trust in execution (trace completeness, guardrail integrity) |
| 4 — Responsiveness | Trust after harm (trust-under-loss) |
| 5 — Solidarity | Trust across groups (bridge index) |
| 6 — Symbiosis | Trust over time (succession readiness, scope compliance) |

Cross-group endorsement appears in two distinct roles: as an RLCF training signal in Pack 4 (a training objective — how you shape the model) and as the basis of the bridge index in Pack 5 (an ecosystem audit — what you report). These are separate uses.

Pack 6 — Symbiosis FAQ