Why this question matters now
AI governance is finally taking shape.
Across standards, audits, risk management, and institutional capacity-building, societies are learning—sometimes painfully—how to govern systems that influence human cognition and decision-making.
But another shift is approaching, quieter and less visible: quantum capabilities.
This is not a story about “quantum being faster than AI.”
The deeper issue is that quantum technologies change what societies can compute about themselves, and how multiple systems can be optimized together.
When optimization moves from isolated domains to system-level coordination, the assumptions underpinning governance begin to shift.
This article asks a simple question:
What actually changes for governance in the Quantum–AI Era?
A simple proposition
In the Quantum–AI Era, public systems will increasingly produce actionable outcomes whose internal causal paths are not fully observable.
This does not mean governance becomes impossible.
But it does mean that many familiar tools—traceability, linear accountability, single-sector regulation—become insufficient on their own.
Governance must evolve from controlling individual decisions
to shaping system-level behavior, constraints, and resilience.
Why start from risks
I begin with risks not to frame quantum as dangerous,
but because risks reveal where existing governance assumptions break down.
They show us what governance will be asked to do, before we decide how to do it.
What new risks does quantum make visible?
1) Public health & societal risks
Quantum-scale optimization enables real-time coordination across sectors—mobility, energy, healthcare, logistics, environment, and finance.
This coupling introduces new societal risk patterns:
Cross-sector cascade failures in tightly linked systems
Overconfidence in probabilistic predictions, treated as certainty
Mismatch between model outputs and real population behavior
Amplified inequities if advanced optimization capacity is unevenly distributed
Over-optimization lock-in, reducing flexibility, resilience, and social diversity
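To make that last item concrete, here is a deliberately simple toy model. Every rule and number in it is a hypothetical assumption for illustration, not a claim about any real quantum system: a population of subsystems repeatedly adopts whatever a shared optimizer scores best, diversity collapses, and when conditions shift, the fully optimized population has nothing left to imitate.

```python
# Toy model of "over-optimization lock-in" (all rules and numbers are
# hypothetical assumptions, purely for illustration).
import random

STRATEGIES = range(10)                # ten possible operating strategies

def score(strategy, env):
    # Hypothetical payoff: strategies closest to the environment's optimum win.
    return -abs(strategy - env)

def optimize(agents, env, adopt_prob, rounds, rng):
    # Each round a system-wide optimizer finds the best strategy; agents
    # adopt it with probability adopt_prob (1.0 = no slack, full lock-in).
    for _ in range(rounds):
        best = max(STRATEGIES, key=lambda s: score(s, env))
        agents = [best if rng.random() < adopt_prob else a for a in agents]
    return agents

def recover(agents, env):
    # After a regime shift, agents can only imitate strategies that still
    # exist in the population: lost diversity means lost adaptive options.
    best = max(set(agents), key=lambda s: score(s, env))
    return [best] * len(agents)

rng = random.Random(0)
for label, adopt_prob in [("fully optimized", 1.0), ("slack preserved", 0.3)]:
    agents = [rng.choice(STRATEGIES) for _ in range(100)]
    agents = optimize(agents, env=3, adopt_prob=adopt_prob, rounds=5, rng=rng)
    recovered = recover(agents, env=8)            # sudden regime shift
    print(label,
          "| strategy diversity:", len(set(agents)),
          "| mean score after shift:", sum(score(a, 8) for a in recovered) / 100)
```

In a typical run, the fully optimized population reports a diversity of one and a poor post-shift score, while the population that preserved slack retains strategies it can fall back on. The optimization was "successful" in both cases; only one system stayed resilient.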
Public health, in this sense, is not limited to hospitals or medicine.
It includes mobility, access, environment, behavior, trust, and societal resilience.
2) Human & epistemic risks
Quantum-enabled systems often produce results that work—
without providing visible causal explanations.
This creates deeper epistemic challenges:
Loss of epistemic transparency (“we cannot explain why it works”)
Erosion of collective learning and human wisdom, as causal understanding no longer accumulates
Behavioral homogenization, as optimized systems converge on similar outcomes
Decline of everyday agency, where individuals gradually stop deciding and start complying
Even beneficial outcomes can weaken societies if they erode learning, judgment, and diversity over time.
3) Technical & infrastructure risks
Technical properties translate directly into governance challenges:
Instability from noise and decoherence
Unreliable convergence in complex optimization tasks
Distribution shifts between simulated and real-world conditions
Cryptographic disruption with uncertain timelines
Opaque intermediate states, difficult to audit or explain using conventional tools
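Two of these, noise instability and unreliable convergence, are easy to see in miniature. Below is a minimal sketch assuming only that a loss function can be evaluated with some estimation noise, as when expectation values are estimated from a finite number of measurement shots; the quadratic loss, noise level, and step sizes are illustrative stand-ins, not any specific quantum algorithm.

```python
# Toy sketch: estimation noise makes convergence unreliable. The quadratic
# loss stands in for a variational objective whose value can only be
# estimated; all constants here are illustrative assumptions.
import random

def noisy_loss(x, noise, rng):
    # True loss is (x - 2)^2; each evaluation carries Gaussian estimation noise.
    return (x - 2.0) ** 2 + rng.gauss(0.0, noise)

def minimize(noise, seed, steps=200, lr=0.1, eps=0.05):
    rng = random.Random(seed)
    x = 0.0
    for _ in range(steps):
        # Finite-difference gradient built from two *noisy* evaluations.
        grad = (noisy_loss(x + eps, noise, rng)
                - noisy_loss(x - eps, noise, rng)) / (2 * eps)
        x -= lr * grad
    return round(x, 2)

for noise in (0.0, 0.05):
    print(f"noise={noise}:", [minimize(noise, seed) for seed in range(5)])
```

With exact evaluations, every run lands on the same answer; with noisy evaluations, runs scatter around it, so the same pipeline can certify different "optimal" configurations on different days.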
The issue is not technical failure alone,
but system-level fragility when governance assumes linear, observable processes.
From risks back to system properties
These risks are not accidental.
They emerge from a combination of:
System-level properties (how quantum changes what can be computed and optimized)
Societal responses (interdependence, real-time coordination, large-scale deployment)
Without understanding these layers, governance design risks addressing symptoms rather than causes.
This is why governance cannot start from regulation alone.
The Quantum 4-Layer Framework (briefly)
To structure this thinking, I use a four-layer framework:
Layer 0 — System properties
Layer 1 — Societal and institutional effects
Layer 2 — Governance design
Layer 3 — Real-world risks and system dynamics
The logic is simple:
Only by understanding Layers 0, 1, and 3
can Layer 2—governance—be meaningfully designed.
This framework is not a taxonomy, nor a regulatory checklist.
It is a way to align system behavior, societal impact, risk visibility, and governance intent.
I will elaborate on the framework itself in a separate article.
Here, I focus on what it reveals for governance.
What this means for governance
Many existing governance approaches—especially in AI—assume that systems remain broadly accountable:
decisions can be traced,
causes can be reconstructed,
responsibility can be assigned linearly.
In the Quantum–AI Era, this assumption weakens—
not everywhere, not always, but in the most consequential domains.
Governance must therefore shift focus:
from individual decisions to system behavior
from perfect explainability to bounded assurance
from static rules to dynamic constraints and resilience
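What "bounded assurance" could mean in practice is easiest to show by contrast. In the sketch below, oversight never reconstructs an individual decision path; it only checks whether the stream of outcomes stays inside pre-agreed statistical bounds. The OutcomeMonitor interface, the thresholds, and the parity rule are my hypothetical illustrations, not an established standard.

```python
# Sketch of outcome-level oversight ("bounded assurance"): the overseer
# never inspects an individual decision path, only whether the stream of
# outcomes stays inside pre-agreed bounds. Thresholds, window size, and
# the parity rule are hypothetical illustrations.
from collections import deque

class OutcomeMonitor:
    def __init__(self, max_error_rate=0.02, min_group_parity=0.8, window=1000):
        self.max_error_rate = max_error_rate      # agreed ceiling on harm
        self.min_group_parity = min_group_parity  # agreed equity floor
        self.outcomes = deque(maxlen=window)      # rolling outcome log

    def record(self, ok: bool, group: str):
        self.outcomes.append((ok, group))

    def check(self) -> list:
        alerts = []
        n = len(self.outcomes)
        if n == 0:
            return alerts
        error_rate = sum(1 for ok, _ in self.outcomes if not ok) / n
        if error_rate > self.max_error_rate:
            alerts.append(f"error rate {error_rate:.3f} exceeds agreed ceiling")
        # Equity bound: the worst-served group's success rate must stay
        # within min_group_parity of the best-served group's rate.
        counts = {}
        for ok, group in self.outcomes:
            total, good = counts.get(group, (0, 0))
            counts[group] = (total + 1, good + ok)
        rates = [good / total for total, good in counts.values()]
        if len(rates) > 1 and min(rates) < self.min_group_parity * max(rates):
            alerts.append("outcome parity across groups below agreed floor")
        return alerts

monitor = OutcomeMonitor()
monitor.record(ok=True, group="urban")
monitor.record(ok=False, group="rural")
print(monitor.check())
```

The unit of oversight here is the distribution of outcomes, not the individual decision: explanations can remain incomplete as long as the agreed bounds hold, and breaches trigger escalation rather than a causal trace.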
This is not about removing human control.
It is about preserving human agency in systems that increasingly optimize on our behalf.
Where the governance challenges actually lie
Across societies and institutions, similar questions are beginning to emerge:
What should remain optimizable—and what should not?
How do we preserve human choice and diversity in hyper-optimized environments?
Who has access to large-scale optimization capabilities—and who does not?
How do we prevent optimization capacity itself from becoming a source of structural inequality or power asymmetry?
Which failures are unacceptable, even if statistically rare?
How do institutions maintain legitimacy when explanations are incomplete or probabilistic?
How do we design accountability for outcomes emerging from system-level interactions and long-term lock-in?
These questions are not national or sector-specific.
They are structural and increasingly unavoidable as optimization moves from tools to environments.
Why I am working on this
I approach quantum governance not only as a physicist,
but through experience with large-scale digital and organizational transformation.
Again and again, I have seen systems fail not because technology was wrong,
but because governance frameworks did not match how systems actually behaved.
Quantum technologies make that misalignment visible earlier—and at greater scale.
That is why governance must evolve before deployment becomes a dependency.
Key takeaways
Quantum governance is not about regulating faster machines.
It requires rethinking governance frameworks themselves—
because system-level optimization changes how decisions, risks, and responsibilities emerge.
The most critical risks are not only technical.
They are societal and human:
public health, collective learning, autonomy, and long-term resilience.
Governance must shift its unit of design.
From governing isolated decisions or models
to shaping system-level behavior, constraints, and human agency.
Closing reflection
Instead of asking how to control quantum-enabled systems,
we may need to ask a more foundational question:
What kind of computable society do we want to create?
Quantum technologies will be used—
to accelerate drug discovery, improve healthcare systems,
optimize energy use, and address complex environmental challenges.
They will be genuinely useful.
The question is not whether to use them,
but how they reshape the conditions under which societies decide, coordinate, and learn.
Governance, in this context, is not about limiting computation.
It is about aligning powerful new capabilities with human values, institutional goals, and societal resilience.
Quantum expands what societies can compute.
What kind of society do we want to create?
Mari Sekino, Ph.D. in Physics,
Founder & CEO of QERA