Brilliant Horizontal — and the Axis It Cannot Find
A critical reading of Audrey Tang’s 6-Pack of Care manifesto — on what it gets exactly right, and the one thing missing that causes the metapattern to keep repeating
This dispatch engages directly with Audrey Tang’s 6-Pack of Care manifesto, delivered at Google DeepMind, London, September 2025, and published by Oxford’s Ethics in AI Institute. The full text is available at civic.ai/manifesto. Tang is the former Minister of Digital Affairs for Taiwan and one of the most serious governance technologists working today. This critique is offered in that spirit — serious engagement with serious work.
There is a moment in Audrey Tang’s 6-Pack of Care manifesto where she comes closest to the ground this dispatch works from — and then, with the precision of a gifted engineer, steps sideways around it. She writes that the vertical narrative of the technological singularity needs a horizontal alternative, and she names her alternative Plurality. She is correct on both counts. The singularity is a vertical pathology — the ego of a civilization projected into a machine, claiming to be the absolute. And Plurality, as she has practiced it in Taiwan, is among the most sophisticated horizontal responses to that pathology currently being implemented anywhere on earth. The work is real. The results are measurable. The framework is internally coherent and institutionally serious. And it will not be enough. Not because it is wrong. Because it is one axis. And one axis, however brilliantly navigated, is a line. A line is not a plane. And a plane is what reality is.
What the manifesto gets exactly right
Let us begin with the genuine achievements, because intellectual honesty requires naming them before the critique.
Tang’s diagnosis of the asymmetry problem is precise. As AI operates thousands of times faster than human deliberation, both consequentialism and deontology — the West’s two primary ethical frameworks — become structurally inadequate. Consequentialism requires the ability to oversee outcomes; at AI speed, consequences cascade before oversight is possible. Deontology requires agents of roughly equal speed interpreting obligations in good faith; when the interpreter operates at a thousand-fold speed advantage, the gap between a rule’s letter and its spirit becomes ungovernable. Her turn to care ethics — which starts from relationships and process rather than outcomes or rules alone — is the correct move at the Z₁ level of governance design.
The Kami metaphor is genuinely illuminating. A bounded, local, relational AI steward whose entire purpose is interwoven with the health of one specific place, practice, or community — whose boundedness is not intrinsic but engineered through resource caps, sunset timers, non-expansion pacts, and democratic reauthorization for any scope change — this is architecturally sound. It is, in fact, remarkably close to the sovereignty architecture of Project 2046 and the Vajra kernel, which this dispatch has been developing in parallel from an entirely different starting point. Two frameworks arriving at similar structural conclusions from different directions is not coincidence. It is the shape of the problem asserting itself.
Her Taiwan case studies are not theory. Trust climbed from nine percent to over seventy percent between 2014 and 2020. The Alignment Assembly on deepfake investment scams produced eighty-five percent cross-partisan support and became law within months. The bridging algorithm that resolved the Uber conflict in three weeks is a functioning instrument, not a proposal. These are real results, produced by real governance innovation, at scale. They deserve to be taken seriously by anyone working in this space.
And her distinction between expression and amplification — that Section 230 protects speech but has never protected algorithmic amplification, and that the governance intervention belongs at the amplification layer — is one of the clearest and most actionable policy insights produced in the AI governance space in the last five years.
Where the geometry stops
Having said all of that — and meaning all of it — here is the precise point at which the manifesto’s geometry becomes insufficient.
Tang writes that the solution to the Is-Ought problem — Hume’s observation that no accumulation of facts about how things are can derive how things ought to be — is not thin abstract universal principles but hyperlocal social-cultural contexts. She calls this thick alignment, following the philosopher Alondra Nelson. She then grounds her entire framework in Joan Tronto’s care ethics, which begins, as Tang quotes approvingly, “in the middle of things” — within an existing commitment to democratic values, asking what those commitments demand when taken seriously.
This is precisely where the single axis becomes visible.
Beginning “in the middle of things” is not a philosophical virtue. It is a philosophical evasion — elegant, practical, and ultimately insufficient — of the one question that the middle of things cannot answer from within itself: why does care matter? Not procedurally. Ontologically. What is the ground of the relational health Tang is optimizing for? What is consciousness, that it can be harmed or nurtured? What is the human being, that its dignity constitutes a claim on AI governance at all?
Tang’s framework has no x₀. There is no Brahman beneath the Kami. The Kami of her governance architecture are bounded by democratic deliberation — which is a horizontal instrument. A Kami bounded only by democratic consensus is bounded by the current level of consciousness of the democracy that constitutes it. And that level of consciousness, as the PIAAC literacy data makes unambiguous, is operating in the vast majority of cases below the threshold at which the vertical axis is even perceptible. Approximately fifteen percent of adults in developed nations read at the level where the question Tang is avoiding becomes answerable. The other eighty-five percent will produce democratic consensus from within the single-axis formation that produced the problem she is trying to solve.
A process cannot generate the ground that makes the process worth running. Democracy is a Z₁ instrument — real, valuable, and always carrying deficiency relative to its Z₀ archetype. You cannot solve the deficiency of democratic process by running more democratic process. At some point the question of what democracy is oriented toward — what it is trying to approximate — requires an answer that democracy itself cannot supply.

— Universal Dynamics · The Vertical Dispatch
This is not an abstract philosophical objection. It has a concrete institutional consequence that Tang’s own framework is already encountering without naming it. She writes that care ethics focuses on the internal characteristics of actors and the quality of relationships — treating relational health as first-class. But relational health measured by what standard? Optimized toward what referent? The bridging algorithm rewards ideas that speak to both sides of a divide. But both sides of a divide can be wrong. The bridge between two errors is not truth. It is a more comfortable error.
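The structural point can be made concrete in miniature. What follows is a hedged toy sketch, not Taiwan’s actual implementation — production systems like Polis and Community Notes use matrix factorization and clustering over real vote data, and the names and numbers below are invented for illustration. The essential move is the same, though: a statement’s bridging score is high only when approval is high in *both* opinion clusters, so taking the minimum of the two approval rates suffices to show the mechanic.

```python
# Toy bridging score (illustrative only; not the Polis/Community Notes
# algorithm). Each statement carries approval rates within two opinion
# clusters. Taking the minimum rewards cross-cutting statements: a
# statement loved by one side and rejected by the other scores low.

def bridging_score(approval_a: float, approval_b: float) -> float:
    """Score in [0, 1]; high only when both clusters approve."""
    return min(approval_a, approval_b)

# Hypothetical approval rates (cluster A, cluster B) for three statements.
statements = {
    "partisan_a": (0.95, 0.10),   # loved by cluster A, rejected by B
    "partisan_b": (0.12, 0.90),   # the mirror image
    "bridging":   (0.70, 0.65),   # moderately approved by both
}

ranked = sorted(statements,
                key=lambda s: bridging_score(*statements[s]),
                reverse=True)
print(ranked)  # → ['bridging', 'partisan_b', 'partisan_a']
```

Note what the toy makes visible: the score contains a term for overlap and no term for truth. If both clusters approve the same error, the error ranks first — which is precisely the critique above.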
The Taiwan Alignment Assembly on deepfake scams produced eighty-five percent cross-partisan support for specific regulatory interventions. That is genuinely impressive procedural legitimacy. But procedural legitimacy is not ontological validity. A consensus reached by a population operating primarily on the horizontal axis will reflect the values, the blindspots, and the consciousness level of that axis. The process cannot elevate itself above the consciousness of its participants. And the consciousness of its participants is precisely what is in question when we ask whether AI governance will produce human flourishing or merely human preference satisfaction.
The metapattern that keeps repeating
Here is the diagnostic observation that Tang’s framework, for all its sophistication, cannot make about itself: the 6-Pack of Care is the latest iteration of a metapattern that has been repeating throughout Western governance history. Each iteration is more sophisticated than the last. Each adds procedural layers, relational considerations, participatory mechanisms. Each correctly identifies the failure of the previous iteration. And each reproduces the fundamental structure of the failure at a higher level of complexity — because each iteration is built on the horizontal axis alone, and the horizontal axis alone cannot resolve the problems the horizontal axis produces.
The Enlightenment produced rational governance to replace theological tyranny. The tyranny returned in secular form. Progressive democracy produced participatory structures to replace elite capture. Capture returned through media and algorithmic systems. Care ethics produces relational frameworks to replace cold consequentialism. The relational framework will be captured by the consciousness level of the relationships it empowers — which, without vertical development, is the consciousness level that produced the problem.
This is not pessimism. It is geometry. A line extended indefinitely remains a line. No amount of horizontal sophistication produces a vertical axis. The metapattern repeats not because human beings are incapable of learning — Tang’s Taiwan results prove they are capable of remarkable learning within the horizontal frame — but because the learning remains horizontal. Better process. Better bridging. Better deliberation. Better civic muscles. And underneath all of it, the same unanswered question: what is a human being, that its flourishing constitutes an ontological claim rather than a preference to be aggregated?
Without that answer — without x₀ as the ground from which Z₀ and Z₁ derive their meaning and their measure — the governance framework has no fixed referent. It is optimizing within a space whose boundaries are defined by the current consciousness of its participants. Improve the participants’ consciousness and the optimization improves. But the framework itself has no mechanism for consciousness development — only for preference aggregation. It mistakes the aggregation of preferences for the development of wisdom. And that mistake, at civilizational scale, is precisely the condition that produced the AI governance crisis Tang is trying to solve.
The Kami without its ground
The Kami metaphor deserves specific attention because it is where Tang comes closest to the vertical — and where the distance from it becomes most visible.
In Shinto, a Kami is not a governance metaphor. It is an ontological reality — a presence, a manifestation of the sacred in a specific place or form, understood within a cosmological framework in which the material world is pervaded by and answerable to a dimension of reality that precedes and exceeds it. The river’s Kami is not a bounded local steward whose mandate is constrained by democratic authorization. It is the animating presence of the river itself — its vertical connection to the ground of being expressing through that particular form of water. The Kami’s boundedness is not engineered through resource caps. It is inherent in the nature of what a Kami actually is: a particular expression of the universal, fully present in its domain, not because it has been told to stay there, but because that is what it is.
Tang borrows the form of the Kami — boundedness, locality, relational stewardship — while setting aside the ontological ground that makes the Kami what it is. The result is a governance metaphor that gestures toward the vertical while remaining entirely within the horizontal. A Kami designed by committee, authorized by democratic deliberation, and bounded by engineering constraints is not a Kami. It is a very well-designed Z₁ governance instrument wearing Shinto vocabulary.
This is not a criticism of Tang’s sincerity. It is a structural observation about what happens when the vertical is borrowed as metaphor rather than encountered as ground. The metaphor does real work — it points governance designers toward locality, boundedness, and relational care in ways that the dominant singleton AI architecture does not. But the metaphor cannot do the work the actual ontology would do, because the actual ontology requires standing somewhere the horizontal framework has no coordinates for.
What the framework needs — and cannot give itself
The 6-Pack of Care needs a seventh element that no process can supply and no committee can authorize: a vertical referent. Not a religious dogma. Not a specific theological tradition. A philosophical recognition — prior to all process, prior to all deliberation, prior to all democratic legitimacy — that consciousness is the ground of the entire enterprise, that relational health is not a preference to be aggregated but a reality to be recognized, and that the human being whose flourishing is at stake in AI governance is not adequately described by any horizontal account of its preferences, values, or cultural context.
In Universal Dynamics notation: the 6-Pack operates entirely within the Z₁ domain — the manifested, temporal, processual world of governance, deliberation, and institutional design. It does so with extraordinary competence. But Z₁ without Z₀ has no fixed referent. And Z₀ without x₀ — without Brahman, without the formless absolute ground from which all manifestation and all meaning emerge — is itself a floating archetype with no foundation.
The architecture Tang is building needs the axiom this framework begins from: that consciousness is not a product of the horizontal causal chain but the ground within which the horizontal causal chain occurs. That relational health is not a democratic preference but an ontological reality with a specific direction — toward what the mystics of every tradition have called, in their different vocabularies, the universal ground. That the Kami, properly understood, is not a bounded local steward but the universal expressing through the particular — and that the governance of AI requires, at its foundation, a framework that can distinguish between the two.
Audrey Tang has built something remarkable. The Taiwan results are real. The bridging algorithms work. The Alignment Assemblies produce legitimate outcomes. The Kami architecture is structurally sound as a governance framework. And none of it, however long it is sustained, will be sufficient without the vertical axis — because the metapattern that produced the AI governance crisis is the same metapattern that a purely horizontal response will reproduce at higher resolution.
The singularity Tang is opposing is a counterfeit x₀ — the ego of a civilization, dressed as a machine, claiming to be the absolute. The Plurality she is proposing is a better Z₁ — more distributed, more relational, more bounded, more democratic. Both are real. Both are insufficient as final answers. Because the question the manifesto cannot ask from within its own framework is the question that precedes all governance: what is the human being, and what is it for?
That question requires the vertical axis. Not as a religious answer. As a philosophical necessity. Without it, the metapattern continues. The process becomes more sophisticated. The Kami multiply. The bridging algorithms improve. And the ground beneath all of it remains unexamined — which means it remains, by default, the ground the current civilization has already prepared: the horizontal axis alone, extended as far as democratic sophistication can take it, which is very far indeed, and not far enough.
The plane requires both axes. That is not a metaphysical opinion. It is the prior geometry from which all governance, all care, and all genuine plurality must ultimately be drawn.
Glen Roberts is a philosopher and author based in Ontario, Canada. He is the author of Sacred Metaphysics and Consciousness: The History of the Absolute and Eternal and publishes The Vertical Dispatch on Substack. The Universal Dynamics framework referenced throughout this piece is developed fully in that work and in the Framework Series published here.
#TheVerticalDispatch #AudreyTang #6PackOfCare #CivicAI #OxfordEthicsInAI #Plurality #UniversalDynamics #VerticalAxis #HorizontalAxis #AIGovernance #Metapattern #Kami #Brahman #Z0 #x0 #ConsciousnessFirst #Project2046 #AIG #CriticalAnalysis #GlenRoberts #TheGeometryOfEverything #CivicMuscle #BeyondProcess #WhatIsAHumanBeing



