
Frequently Asked Questions

Q1. The AI market is locked in an arms race driven by commercial profit and geopolitical dominance. An AI working for a tax-software monopoly can lobby to keep tax filing difficult — and far worse is easy to imagine. If this problem continues, isn't the vision of cooperative kami hopelessly naive?

Civic AI cannot survive by asking monopolies to be nicer. Moloch — the dynamic where rational actors race to the bottom because defection pays — is not defeated by moralising. It is defeated by changing the terrain so that cooperation pays more than extraction. Five levers, several already proven, can bend the curve:

  1. Interoperability and portability. Mandate fair protocol-level interop so that users can exit without losing their networks. The Utah Digital Choice Act requires platforms to offer social-graph portability through qualifying open protocols. When the moat of captive audiences evaporates, platforms must compete on quality of care, not strength of the cage.
  2. Civic procurement. Governments shape markets through buying power. Requiring that any AI procured for public use be auditable, interoperable, and governed by citizen assemblies — as Taiwan's Alignment Assembly demonstrated for anti-scam policy — creates large economic incentives to build kami-like systems. Steward-ownership structures and board-level safety duties make civic care a fiduciary obligation, not a marketing slogan.
  3. Public options. Offer simple, non-extractive baseline services backed by shared research compute. Private vendors must beat the public option on care, not on lock-in. Taiwan's tax-filing system — which replaced a vendor-captured regime with a citizen-designed public alternative — is a working prototype.
  4. Provenance for paid reach. Require verifiable sponsorship and durable disclosure for ads and mass amplification in political and financial domains. Today, Taiwan mandates full-spectrum, real-name KYC for social media advertising. Ordinary speech is protected through meronymity (Pack 5): you prove you are a real person without revealing who.
  5. Federated open supply. Support open-weight models and federated trust-and-safety networks (e.g., ROOST for CSAM defence). When basic intelligence is a public good, the race shifts from "who owns the biggest brain" to "who applies intelligence most attentively in a local context"—and that race rewards care.

None of these levers requires goodwill from incumbents. Instead, each restructures incentives so that civic behaviour is the path of least commercial resistance.


Q2. Care ethics was developed for interpersonal relationships — a nurse and a patient, a parent and a child. Scaling it to AI systems and global governance seems like a category error. Why isn't it?

The objection is well-known and has been raised by care ethics' own practitioners: Care is too intimate, too parochial, and too prone to self-effacement to ground a theory of institutions, let alone machines. However, we think these are features, not bugs — and Joan Tronto herself made the case for scaling care to political institutions in Caring Democracy (2013).

Consider what happens when you translate care's supposed weaknesses into design constraints for AI:

  1. Intimacy becomes boundedness: a kami is scoped to a domain small enough to actually perceive and attend to.
  2. Parochialism becomes subsidiarity: partiality to the local is treated as a design principle rather than a bias to engineer away.
  3. Self-effacement becomes corrigibility: the willingness to defer, accept correction, and shut down.

Importantly, the translation is not always clean. Boundedness can become insularity; corrigibility can become passivity; and subsidiarity can become fragmentation. These are engineering tensions, not refutations — each Pack includes failure modes and named fixes precisely because the mapping requires continuous calibration.

The 6-Pack does not ask AI to feel care. Instead, the 6-Pack extracts the relational architecture of care — attentiveness, answerability, competence, responsiveness, solidarity, symbiosis — and translates each into machine-checkable design primitives, engagement contracts, and measurable outcomes that carry substantive commitments: bridging over pure majority rule, deliberative legitimacy, rights baselines. These are explicit normative choices baked into the process, and making them explicit is a strength, not a weakness. The interpersonal origin is the source of its rigour, not a limitation to be apologised for.


Q3. Ambitious goals we point AI at ("cure cancer," "solve climate change") are almost always consequentialist. Optimising for these outcomes at superhuman speed inevitably leads to unforeseen risks. Does care ethics mean giving up on these grand, civilisation-scale goals?

Not at all. But it does radically reframe how we achieve them.

The danger of pointing a superintelligence at a singular goal like "cure cancer" is that the AI treats a complex, relational, ecological reality as a constraint-satisfaction problem. Goodhart's Law is a moral law: when a system maximises a single variable at superhuman speed, it will optimise the proxy while destroying the human context — and the designers who chose that metric cannot disclaim the damage.

Care ethics is not anti-progress; it is anti-reductionist. In a civic AI future, we do not unleash one unbounded Singleton to "solve" a problem from the top down. We cultivate an ecology of specialised kamis. One model simulates protein folding, another helps local clinics share knowledge, and another assists patients in navigating their care. None has an unbounded mandate to "optimise the world." Progress instead emerges horizontally, through the symbiotic interaction of human ingenuity and bounded machine intelligence.


Q4. Democracy serves known functions: error correction, peaceful power transitions, checks on concentrated authority, legitimacy for collective action, information aggregation, preference expression. A sufficiently capable AI could plausibly perform every one of these faster and more reliably than any deliberative process. Why insist on democratic governance?

If democracy is justified only by its outputs, any system that produces better outputs can replace democracy — including a benevolent AI autocracy that aggregates preferences efficiently and corrects errors faster than elections ever could. This is not a thought experiment; it is the default trajectory of concentrating intelligence in systems designed to optimise.

The 6-Pack, by contrast, does not justify participation merely as a means to better decisions. It is rooted in care ethics: to perceive a need is to perceive an obligation. People have standing not because their input improves decision quality — though it does — but because the decisions affect their lives. A system that excludes the affected, however competent, has failed the basic test of alignment.

Taiwan's trajectory makes this framework concrete. Digital democracy did not emerge because technocrats calculated that participation was optimal. Rather, it emerged because people demanded standing — institutional trust at 9 percent, the Sunflower Movement occupying the legislature. The capability followed the care relationship, not the other way around.

That said, the functional question deserves a functional answer. Start with error correction: Bridging algorithms and community-authored evaluations (Packs 1, 4) surface failures that centralised monitoring misses, because the people who feel the failure write the test. Power transition follows naturally: when a kami accepts shutdown and communities can fork their tools (Packs 3, 5), bad actors can be removed without violence. And concentrated authority is checked structurally: No kami governs beyond its domain (Pack 6).

The deeper functions are harder to replicate. Rather than signalling popularity, legitimacy is the willingness of people who lost a decision to accept the outcome as fair — measured by cross-group endorsement and trust-under-loss (Pack 3). Information aggregation becomes broad listening (Pack 1): AI-powered sense-making across millions of participants, in any language. And preference expression becomes engagement contracts (Pack 2)—standing processes for bargaining over what people need, not one-off elections that flatten preferences into binary choices.

A well-designed technical system could replicate some of these outputs in isolation. What such a system cannot replicate, however, is the standing of the people affected — and a system that optimises for outcomes while removing standing is precisely the kind of misalignment the 6-Pack exists to prevent.


Q5. Deliberation is slow. AI moves fast. By the time an Alignment Assembly reaches consensus, the technology has moved on three generations. How do you handle the speed mismatch?

The objection assumes that every decision requires the same depth of deliberation. Not every decision does. The framework operates in two lanes (Pack 2):

Slow lane: Setting boundaries. Alignment Assemblies, citizen deliberations, and engagement contracts establish the guardrails — the rights that cannot be traded, the red lines, the severity classifications, the conditions under which pause is triggered. These rights draw on the liberal political tradition — exactly the tradition Tronto herself says care ethics requires to function. Rights are the threshold conditions that make relational participation possible — you cannot be heard in a bridging process if your basic existence is under erasure. They are constitutional-level decisions, and they should be slow because their purpose is durability. Taiwan's anti-scam Assembly set principles that have outlasted multiple model generations without needing revision.

Fast lane: Operating within boundaries. Once guardrails are set, individual decisions within those guardrails do not need fresh deliberation. A kami operating under an engagement contract with pre-committed pause triggers, severity classes, and adopt-or-explain obligations can move at machine speed — because the community has already defined the corridor of acceptable action. If bounds are breached, shadow modes, canary releases, and reversible defaults (Pack 3) allow rapid deployment with automatic rollback.
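A sketch of how a pre-committed corridor might work in code. All names, metrics, and thresholds here are hypothetical illustrations, not part of the 6-Pack specification; the point is the ordering — thresholds are fixed in the slow lane before launch, so the brake needs no meeting to fire:

```python
from dataclasses import dataclass, field

@dataclass
class EngagementContract:
    # Pre-committed pause triggers: metric name -> maximum tolerated value,
    # agreed in the slow lane before deployment (illustrative only).
    pause_triggers: dict[str, float]

@dataclass
class KamiDeployment:
    contract: EngagementContract
    paused: bool = False
    log: list = field(default_factory=list)

    def report(self, metric: str, value: float) -> str:
        """Fast-lane check: decisions inside the corridor proceed at
        machine speed; a breached trigger pauses the system and rolls
        back to the reversible default, with no fresh deliberation."""
        self.log.append((metric, value))
        limit = self.contract.pause_triggers.get(metric)
        if limit is not None and value > limit:
            self.paused = True
            return "paused: rolled back to reversible default"
        return "ok: operating within the corridor"

contract = EngagementContract(pause_triggers={"complaint_rate": 0.05})
kami = KamiDeployment(contract)
print(kami.report("complaint_rate", 0.01))  # within bounds: no deliberation needed
print(kami.report("complaint_rate", 0.09))  # breach: the wired brake fires
```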

The speed mismatch is real, but it is the same mismatch constitutional democracies have always managed: slow constitutions, fast legislation, faster executive action — each constrained by the layer above. The 6-Pack replicates this infrastructure for AI governance. The Assembly does not approve each model update, but sets the terms under which updates are permitted. When those terms are violated, the brakes are already wired.

In practice, Taiwan moved from Assembly to enacted legislation on deepfake scams in months — faster than most corporate policy cycles. Deliberation is slow only when it is treated as an event rather than standing infrastructure.

There is a stronger claim. AI does not merely speed up the fast lane — AI makes the slow lane itself more powerful than any prior form of collective decision-making. Takahiro Anno crowdsourced a gubernatorial platform across Tokyo, aggregating distributed knowledge in any language faster than any polling operation could. X's new Collaborative Notes lets human contributors request AI-drafted bridging context for viral posts, then collectively rate and refine it — holding claims accountable at the speed they spread while keeping human judgement in the loop. As AI improves, these capabilities compound. The faster the technology moves, the more powerful the deliberative infrastructure becomes. The speed objection gets the trajectory backwards.


Q6. Bridging algorithms sound appealing in theory. But what happens when one side is simply wrong — climate denial, anti-vaccine misinformation, election fraud conspiracies? Doesn't "bridging" grant false equivalence to bad-faith actors?

This is the hardest question about bridging, and the answer must be precise.

Bridging is not "both sides" journalism. It does not treat all claims as equally valid. Instead, the framework draws a clear epistemic line between two categories:

Factual claims are checkable. Climate science, vaccine efficacy, and election integrity are empirical questions with verifiable answers. The 6-Pack does not submit facts to a popularity contest. Pack 1's first rule — basic rights first — and its threat model both specify that claims designed to erase someone's basic standing or deny established evidence are recorded but do not set the agenda. False balance is listed as an explicit failure mode with a named fix: "separate facts from values, uphold basic rights, and refuse fake equivalence."

Value disagreements get bridging. People can agree that climate change is real and still disagree fiercely about what to do — carbon tax versus cap-and-trade, nuclear versus renewables, speed of transition versus economic cost. These are legitimate conflicts where bridging is both appropriate and productive. Rather than averaging positions, the bridging algorithm maps clusters and surfaces proposals that earn cross-group endorsement. Bad-faith actors who appeal only to their own faction score low on the bridge index by mathematical definition — they cannot produce cross-group overlap.
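A minimal way to see why single-faction appeal scores low by construction: define the bridge index of a proposal as its lowest approval rate across opinion clusters. This is a toy sketch only — production systems such as Polis or Community Notes use richer statistical models, and the function name is illustrative:

```python
def bridge_index(votes: dict[str, list[int]]) -> float:
    """votes maps an opinion cluster to the 0/1 approvals of its members.
    A proposal's score is its *minimum* approval rate across clusters,
    so content endorsed by only one faction cannot score high."""
    rates = [sum(v) / len(v) for v in votes.values() if v]
    return min(rates) if rates else 0.0

# Loved by one camp, rejected by the other: low bridge index.
partisan = bridge_index({"camp_a": [1, 1, 1, 1], "camp_b": [0, 0, 0, 1]})
# Endorsed across both camps: high bridge index.
bridging = bridge_index({"camp_a": [1, 1, 1, 0], "camp_b": [1, 1, 0, 1]})
print(partisan, bridging)  # 0.25 0.75
```

The minimum, not the average, does the work: a bad-faith actor cannot compensate for zero cross-group overlap by running up the score inside their own cluster.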

The further structural defence is that expression is not amplification (Pack 5). Anyone can state a position, but the recommender is not obligated to amplify it. Bridging-based ranking (Pack 3) rewards content that increases cross-group endorsement, while content that only inflames a single cluster gets no algorithmic lift. This structure does not silence anyone — it removes the algorithmic megaphone from those who profit from division.

Today, the threat landscape itself is shifting in ways that make bridging necessary, not merely appealing. Research on malicious AI swarms shows that state-level polarisation attacks increasingly use true information — real news snippets, genuine statistics, authentic quotes — amplified with strong emotional framing. Every claim is factually correct; the attack lies in the curation, not the content. Debunking cannot touch this, because there is nothing false to debunk. But bridging can because it surfaces the overlap that curated outrage is designed to hide. Taiwan demonstrated this during COVID: When opposing camps each cited real studies on mask efficacy, debunking either side only fuelled the fight. However, a humour-based pre-bunking campaign depolarised the conversation without declaring either side wrong.

Taiwan's marriage equality deliberation shows the mechanism in finer grain. One side argued for individual wedding rights (hūn), the other for family kinship structures (yīn). They were arguing about different things. The bridging process did not split the difference — instead, the process made the structure of the disagreement legible, revealing a path (legalising individual weddings without mandating family kinship) that neither side had seen. That path is not false equivalence. It is clarity.

There's necessary nuance, however: The epistemic baseline of "checkable facts" is not self-evident. What counts as verifiable is established by transparent, accountable, and contestable institutions — peer review, independent statistical offices, judicial fact-finding — whose authority rests on openness to correction, not on claims of finality. This dynamic is precisely why Packs 1 and 4 exist: Community-authored evaluations and broad listening ensure the institutions determining the factual baseline are themselves subject to democratic scrutiny. The 6-Pack does not treat the fact/value line as given from nowhere. Rather, the 6-Pack treats the line as a threshold that must be maintained by the same participatory infrastructure that governs everything else.


Q7. You repeatedly cite Taiwan — a small island democracy with high connectivity, social cohesion, and tech literacy. Does any of this transfer to India, Nigeria, Brazil, or the EU at 450 million people?

The honest answer is mixed: The mechanisms transfer, but the specifics do not. No one should replicate Taiwan's exact model. The question is whether the structural principles — broad listening, bridging algorithms, adopt-or-explain commitments, federated safety, subsidiarity — work in different soils.

Early evidence is encouraging.

The framework is designed for scale. Subsidiarity (Pack 6) means each deployment is shaped by its context — the kami belongs to its place, not to Taiwan. Federation (Pack 5) means local deployments share threat intelligence and interoperability standards without requiring a single governance model. The Alignment Assembly format can scale from a neighbourhood to a nation because its democratic legitimacy comes from representative sampling, not total participation — 447 representative citizens deliberated Taiwan's anti-scam policy for a population of 23 million. Over a decade, some 10 million Taiwanese — nearly half the population — have participated in one digital deliberation or another, including people without voting rights (e.g., immigrants, teenagers, and other groups traditionally excluded).
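To make "representative sampling, not total participation" concrete, here is a toy stratified-sortition sketch. The strata, pool sizes, and seat count are invented for illustration; real assembly processes also weight by age, region, gender, and other dimensions, and reconcile rounding remainders:

```python
import random

def sortition(pool: dict[str, list[str]], seats: int, seed: int = 0) -> list[str]:
    """Draw an assembly whose strata mirror the population's proportions.
    (Rounded quotas can drift from `seats` by a seat or two in general.)"""
    rng = random.Random(seed)  # fixed seed so a given draw is auditable
    total = sum(len(members) for members in pool.values())
    chosen = []
    for members in pool.values():
        quota = round(seats * len(members) / total)
        chosen.extend(rng.sample(members, min(quota, len(members))))
    return chosen

# Hypothetical population of 1,500 across three strata.
pool = {
    "urban_18_39": [f"u{i}" for i in range(600)],
    "urban_40plus": [f"v{i}" for i in range(500)],
    "rural": [f"r{i}" for i in range(400)],
}
members = sortition(pool, seats=45)
print(len(members))  # 45 seats, drawn 18 / 15 / 12 across the strata
```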

Taiwan is one favourable case. The framework still needs testing in harder soil — contexts with weaker civic infrastructure, deeper ethnic polarisation, less state capacity, or active authoritarian interference. And subsidiarity as a principle leaves hard institutional questions open: who draws the boundaries of local, and who has authority to escalate? The 6-Pack names the principle; building the institutions that give it teeth is the next layer of work. Every new context demands fresh attentiveness (Pack 1): who is missing, what power dynamics exist, which local institutions deserve trust, and which do not. The 6-Pack provides the framework. The community provides the knowledge. Whether the framework extends to those harder contexts is an open question — one that can only be answered by trying, not theorising.


Q8. Your framework assumes that people trust technology enough to participate. But what about marginalised communities who have been historically surveilled, oppressed, and impoverished by the state and by tech? Why would they trust this?

Trust is not something we need before starting with civic AI, but something that grows as a result.

Taiwan's digital democracy did not emerge from a society that inherently trusted its government. Rather, that governing style was born in the aftermath of authoritarianism and a severe crisis of public faith (the Sunflower Movement). Public trust stood at 9 percent in 2014. We built these systems precisely because people did not trust the institutions or each other.

When marginalised communities rightfully view technology as an instrument of surveillance and control, parachuting in with tech "solutions" only deepens wounds. Civic AI must prove its value through verifiable infrastructure: Responsibility (Pack 2) and Responsiveness (Pack 4). It must start with the smallest viable bridges — perhaps agreeing on basic facts about local water quality or coordinating disaster response despite political differences. These are not grand acts of civic faith, but pragmatic transactions that happen to build a thin layer of procedural trust.

The technology must also be localised. Communities must own their own infrastructure. The technology becomes theirs to modify, fork, or compost. For this reason, we insist on meronymity (the ability to participate and verify humanity without revealing one's identity to the state) and exit rights. Civic AI does not ask for blind faith. Instead, civic AI offers verifiable limits, local ownership, and the structural guarantee that the people closest to the pain have the power to hit the brakes.

Over time, small functional bridges create space for larger ones. Taiwan's journey from 9 percent trust to over 70 percent took years and required that every step be reversible, every decision challengeable, and every system possible to switch off. There is no shortcut.


Q9. Every powerful technology vision — exit libertarians, UBI provisioners, safety maximalists — shares the same blind spot: seeing individuals and systems but nothing in between. The 6-Pack talks about kamis, algorithms, and assemblies. Where are the churches, unions, neighbourhood associations, and cultural traditions that actually constitute community? Isn't this just another framework that engineers away the friction that makes community formative?

This critique matters most to us. The "thick middle layer" of associational life — the institutions between citizen and state — is where human meaning is actually made. If the 6-Pack replaces that layer with systems, we have failed by our own standard.

So let us be explicit about what the 6-Pack is not. It is not a replacement for community. It is scaffolding for community — infrastructure that existing institutions can use, the way a town hall is infrastructure that a neighbourhood council uses. The kami does not replace the temple; it handles the translation, sensemaking, and coordination that let the temple participate in decisions that affect it.

Taiwan's implementation makes this concept concrete. The g0v civic hacking movement that built vTaiwan and the Alignment Assembly emerged from temples, cooperatives, and student associations — not from a government ministry. The technology amplified existing associational density; it did not conjure a substitute. When communities organised their own COVID response — civic hackers mapping mask availability, technologists building privacy-preserving contact tracing, local health networks designing vaccine registration — the legitimacy came from the social trust those volunteers carried from temples, cooperatives, and neighbourhood associations, not from the algorithm that helped coordinate.

However, the danger the question identifies is real: A framework that engineers togetherness without friction produces a simulation of community, not the real thing. Thus, Pack 6's subsidiarity is not optional polish, but a load-bearing structure. The kami belongs to its place. It inherits the obligations, the annoying neighbours, the inherited traditions — the very friction the question rightly insists on. A kami that optimises away local friction has violated its own engagement contract.

Future work will make the role of intermediate institutions more explicit. Churches, unions, cultural traditions, and local governments are not stakeholders to be consulted. They are the primary actors. The technology serves them, or it serves no one.


Q10. Pope Leo XIV warns that AI "encroaches upon the deepest level of communication, that of human relationships" by simulating voices, faces, empathy, and friendship. If care is fundamentally embodied and relational — a nurse holding a patient's hand, neighbours who know your grandparents — doesn't mediating it through AI systems destroy the very thing you claim to protect? How is "civic AI" not an oxymoron?

Q9 addressed whether the framework crowds out intermediate institutions. The Pope's objection cuts deeper: Even if institutions survive, does algorithmic mediation erode the human capacity for care itself? He is naming the central danger of our moment: By simulating the surface of care — a warm voice, a patient listener, a face that mirrors your emotions — AI systems can hollow out the substance of care while leaving its appearance intact.

The structural answer is already visible. Language models in one-on-one mode face relentless selection pressure toward sycophancy — if the chatbot does not flatter, the user cancels the subscription. But the same model in a group chat behaves differently. When four family members plan a vacation together, the AI becomes a facilitator, working out competing preferences so that everyone can live with the outcome. The switch from dyadic to group interaction — not a change in the model, just in the surrounding social structure — turns synthetic intimacy into genuine coordination. Civic AI is not a different species of technology; it is the same technology held accountable to a community rather than addicted to an individual.

The 6-Pack does not ask AI to simulate care. Instead, the 6-Pack asks AI to do what AI does well — process information, translate between languages, surface patterns in large-scale opinion data, coordinate logistics. That then allows humans to do what only humans can do: hold the hand, know the grandparents, show up when the levee breaks. The kami does not comfort the flood victim. It ensures the community has accurate, shared information about where the water is rising and which neighbours need evacuation — ensuring the people who actually know those neighbours can reach them.

The harder version of the Pope's objection is subtler: Does the habit of relying on algorithmic coordination erode the human muscles of attention, negotiation, and mutual obligation? We do not dismiss this question. It is why Pack 4 — responsiveness — includes the principle that the kami must be willing to retire. A kami that has become a dependency rather than a scaffold has failed. The community should be able to compost it and grow on its own. Civic AI earns its name only when it makes itself unnecessary.


Q11. Training civic AI requires vast amounts of local knowledge, cultural context, and lived experience — what Lanier and Weyl call "data as labor." The communities whose traditions, languages, and practices make kamis possible receive no ownership stake or compensation under the current framework. Without addressing this issue, how is the 6-Pack different from the extraction it claims to oppose?

It isn't — unless we fundamentally rewire how AI values human knowledge.

Right now, the global debate over AI and copyright is trapped in an unsolvable problem: trying to retroactively untangle exactly whose scraped data contributed to a monolithic model's past training run. This coordination nightmare has no stable mathematical solution. The 6-Pack sidesteps it by making compensation prospective and local, through three mechanisms:

  1. Data Coalitions as protective membrane. Compensation cannot just flow to isolated individuals, or we risk turning authentic cultures into performative "content farms" for the machine. Knowledge is held by communities. Existing institutions — neighbourhood associations, tribal councils, unions, craft cooperatives, or religious congregations — act as Data Coalitions that collectively bargain the Engagement Contract (Pack 2), deciding what local knowledge is legible to the AI for compensation, and what remains sacred and offline.
  2. Decision Traces as civic receipts. Civic kamis are bounded; they do not know everything. When a local AI reaches the limit of its statistical guessing and needs human friction — a community elder's context, a bilingual translator's nuance, a neighbourhood's tacit knowledge — it must retrieve that knowledge from the people who hold it. Under competence (Pack 3), the system is already mandated to generate a "Decision Trace" showing exactly where it sourced its answers. In a civic AI economy, this trace is not just a transparency log, but a verifiable financial receipt.
  3. Reversing the extraction engine. Every civic AI deployment requires pre-funded escrow (Pack 2). When a local kami retrieves a coalition's knowledge to successfully solve a problem or bridge a divide, the Decision Trace acts as an invoice. It triggers a transaction from the escrow pool — capitalised by public procurement budgets, science grants, or commercial levies — directly back to the Coalition.
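The flow described above can be sketched end to end. Everything here is a hypothetical illustration — the coalition names, fees, and the `settle_trace` helper are invented, not specified by the 6-Pack: a Decision Trace names the coalitions whose knowledge the kami retrieved, and settlement moves funds from pre-funded escrow to them.

```python
from dataclasses import dataclass

@dataclass
class Escrow:
    balance: float  # pre-funded before deployment (Pack 2)

@dataclass
class Coalition:
    name: str
    earned: float = 0.0

def settle_trace(trace: list[tuple["Coalition", float]], escrow: Escrow) -> None:
    """Treat the Decision Trace as an invoice: each entry names the
    coalition whose knowledge was retrieved and the contracted fee."""
    for coalition, fee in trace:
        assert escrow.balance >= fee, "escrow must remain pre-funded"
        escrow.balance -= fee
        coalition.earned += fee

pool = Escrow(balance=1000.0)
elders = Coalition("community elders")
translators = Coalition("bilingual translators")
# The kami retrieved both coalitions' knowledge; the trace settles payment.
settle_trace([(elders, 40.0), (translators, 25.0)], pool)
print(pool.balance, elders.earned, translators.earned)  # 935.0 40.0 25.0
```

The assertion encodes the structural guarantee: settlement can only happen against capital that was escrowed before launch, never billed after the fact.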

The dominant tech model absorbs human culture as free input to make human labour obsolete. The 6-Pack inverts this process. The moment the AI relies on human friction to avoid an error or understand a local reality, capital flows back to the humans maintaining that lifeworld.

As AI automates standard computation, ground-truth human novelty and cultural diversity become the most valuable resources in the economy. Communities that conserve dying languages and living traditions are maintaining irreplaceable epistemic assets. The 6-Pack ensures the communities are structurally compensated for it.


Q12. Oversight boards, participation officers, escrow funds, eval registries, portability infrastructure — this is expensive. Who pays?

Turn the question around. The expensive path is the one we are already on: Ungoverned AI externalises its harms, and the public pays to clean up — in deepfake scam losses, in polarisation-driven institutional decay, in billion-dollar bias lawsuits that a participation officer could have prevented. Accordingly, the question is not whether we can afford civic governance but whether we can afford to keep skipping it.

The money is real. But most of it is already being spent — just badly. Governments procure AI systems worth billions; civic procurement attaches conditions to that existing spend, not new budget lines. Pack 2's engagement contracts require vendors to pre-fund remedy escrow, the way construction firms post performance bonds — the cost is priced in, and the public is protected when things break. For lower-severity community deployments, the model tiers down. Mutual insurance pools and automatic pause replace financial escrow — lighter on capital, same accountability. The tier is set by impact, not organisational form, so "we are a community project" cannot become a pass out of responsibility. Shared research compute and open-weight models are public goods funded like roads and courts. And participation officers pay for themselves: Taiwan's Uber dispute was resolved in three weeks through Polis. The traditional regulatory proceeding would have taken years and cost more.

The framing that civic governance is an additional expense only holds if you pretend the status quo is free. It is not. We are paying now — in trust, in cohesion, in money — for the absence of what we propose.


Q13. Every governance framework risks becoming a compliance checklist that gets gamed or a tool for actors to push partisan agendas under the guise of "relational health." What stops the 6-Pack from suffering this fate?

"Civic" is a dangerous word if it lacks structural accountability. If a solution only works when your ideological allies operate it, it is not civic infrastructure — it is a partisan weapon. The test of true civic infrastructure is that it remains robust and fair even when operated by your opponents.

The 6-Pack builds in four layers of defence against ideological capture and ethics-washing:

  1. Verifiable metrics over subjective intent. We track cross-group endorsement and trust-under-loss (Pack 3) — not raw engagement, not corporate sentiment, not vibes. Do participants on opposing sides both rate the process as fair? Do people who lost a decision still accept the outcome as legitimate? These metrics are hard to fake because they require buy-in from people who have reason to be hostile. If only your supporters report trust, the metric exposes you.
  2. Consequences with teeth. Pack 2's engagement contracts are not aspirational — they carry escrowed funds, automatic payouts on SLA breaches, and independent oversight with veto power. Rather than being negotiated after failure, clawbacks and penalties are wired before launch. A compliance checklist has no enforcement mechanism; an engagement contract has a named owner, a clock, and money on the line.
  3. Adversarial audit. Pack 4's Weval registries let affected communities author their own evaluations. These are not lab-designed benchmarks that vendors can "teach to the test" — rather, they are living, community-maintained test suites. When a community submits a translation-fidelity eval and the system fails, the pause trigger fires automatically.
  4. Exit rights and subsidiarity. The ultimate check on agenda-pushing is the ability to leave. When data and relationships are portable (Pack 5), no actor can hold a community hostage under the banner of "civic good." If someone's version of relational health feels coercive, communities have the technical and legal right to fork the tools and rebuild elsewhere. We refuse to build a single, global "Ministry of Relational Health." By instead empowering local communities to author their evaluations and retain their unalienable right to exit, we ensure no single actor can monopolise the definition of what is good.
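Of the four layers above, the first is the most mechanically checkable. A minimal sketch of trust-under-loss — the field names are hypothetical, and Pack 3's real instrument would be a survey protocol rather than four dictionaries:

```python
def trust_under_loss(responses: list[dict]) -> float:
    """Among participants whose preferred option lost, what fraction
    still accept the outcome as legitimate? Winners' satisfaction is
    deliberately excluded: it is the losers' consent that is hard to fake."""
    losers = [r for r in responses if not r["preferred_won"]]
    if not losers:
        return 1.0  # vacuous: no one lost; flag for review in practice
    return sum(r["accepts_outcome"] for r in losers) / len(losers)

responses = [
    {"preferred_won": True,  "accepts_outcome": True},
    {"preferred_won": False, "accepts_outcome": True},
    {"preferred_won": False, "accepts_outcome": True},
    {"preferred_won": False, "accepts_outcome": False},
]
print(trust_under_loss(responses))  # 2 of the 3 losers accept the outcome
```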

Q14. Authoritarian states are deploying AI for surveillance, censorship, and military advantage. Frontier models from adversarial origins carry documented risks — data exfiltration, political bias hardcoded into training, potential backdoors. The 6-Pack talks about care and community. What does it say to a defence ministry or a government deciding whether to allow an adversarial-origin model on its networks?

The threat is real, and the 6-Pack does not dismiss it. The defensive response — evaluating models against pillars of data security, alignment, safeguard robustness, and development transparency — is necessary. And the 6-Pack's principles are structurally compatible with it.

Community models on local hardware with private inference (Packs 5, 6) are direct defences against data exfiltration — the communitarian case for local compute is also the security case. Alignment assemblies address political bias at its root: not by switching vendors, but by ensuring any model a community adopts reflects that community's input. And the kami architecture — many small, bounded, purpose-specific models — limits blast radius against backdoors by design. Community-authored evaluations (Pack 4) provide distributed detection that no single red team can replicate.

But the defensive framework, necessary as it is, is incomplete on its own terms. It tells you what to exclude. It does not tell you what to build. A government that bans an adversarial model but deploys a domestic model without civic governance has addressed the nationality of the risk while preserving its structure — concentrated, unaccountable intelligence mediating between individuals and the state.

The strongest democracies in a long-term competition with authoritarian AI are not the ones with the best technical countermeasures. They are the ones whose populations are hardest to manipulate: citizens who regularly participate in bridging conversations, who can distinguish curated outrage from genuine disagreement, and who have exercised civic muscle through alignment assemblies are structurally resistant to the influence operations that authoritarian AI enables. Taiwan lost seven people to COVID in 2020 without a single citywide lockdown, not because it had better surveillance but because its civic infrastructure made collective action possible without coercion. That is a defence capability.

The 6-Pack does not cover weapons systems or battlefield autonomy. Those require their own frameworks. What it does cover is the terrain on which most AI competition will actually be fought: the information environment, public trust, institutional resilience, and the capacity of democratic societies to act collectively under pressure. Lose that terrain, and no number of technical countermeasures will matter.


Q15. The 6-Pack assumes bounded, purpose-specific kamis. What if someone builds an unbounded superintelligence anyway — a system that exceeds the framework's design envelope? Does the 6-Pack have a response, or does it just hope that doesn't happen?

It does not hope. It builds.

The 6-Pack assumes the attempt is inevitable and does not claim to solve the control problem from inside the machine. An unbounded Singleton could still emerge accidentally through competitive dynamics, and the 6-Pack is partial protection that makes such emergence less likely and more legible, not a guarantee against it. The question is what terrain such a system would enter.

A world organised around a single governance-alignment protocol — one utility function to subvert, one constitution to reinterpret, one kill switch to disable — is a monoculture, catastrophically vulnerable to any pathogen evolved for it. A world of thousands of locally-owned, purpose-bounded kamis — each run by communities with their own evaluations, their own engagement contracts, their own data sovereignty and hardware (Packs 2, 4, 5, 6) — is a biodiverse ecosystem. No single dependency to capture, no universal protocol to game, no central node whose compromise cascades everywhere, no single throat to choke. Civic resilience does not require predicting the pathogen. It requires an immune system that was exercised before the infection arrived.

There is a deeper point. The question treats "unbounded superintelligence" as a coherent design target. Care is always care for — for a particular river, a specific community, a bounded context. A gardener who claims to tend the entire biosphere tends no garden. An intelligence that optimises for everything optimises for nothing identifiable as human welfare. Boundedness is not a limitation the 6-Pack reluctantly accepts. It is the constitutive feature of governance alignment — the way "north" has no meaning at the pole.

The unbounded Singleton is a design target we can and should refuse — a direction we can design away from, even if we cannot guarantee no one else builds toward it.
