<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[The Asking Principle: Notes by Mari Sekino: AI/Robotics]]></title><description><![CDATA[How AI is reshaping decisions, responsibility, and value in real-world systems, including multi-agent systems, physical AI, and robotics.]]></description><link>https://eraofquestions.substack.com/s/airobotics</link><image><url>https://substackcdn.com/image/fetch/$s_!NExn!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7f26bbe4-412c-4356-942e-8c829aa6b032_628x628.png</url><title>The Asking Principle: Notes by Mari Sekino: AI/Robotics</title><link>https://eraofquestions.substack.com/s/airobotics</link></image><generator>Substack</generator><lastBuildDate>Thu, 14 May 2026 20:40:54 GMT</lastBuildDate><atom:link href="https://eraofquestions.substack.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Mari Sekino, Ph.D.]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[eraofquestions@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[eraofquestions@substack.com]]></itunes:email><itunes:name><![CDATA[Mari Sekino]]></itunes:name></itunes:owner><itunes:author><![CDATA[Mari Sekino]]></itunes:author><googleplay:owner><![CDATA[eraofquestions@substack.com]]></googleplay:owner><googleplay:email><![CDATA[eraofquestions@substack.com]]></googleplay:email><googleplay:author><![CDATA[Mari Sekino]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Rethinking Evaluation: From Agents to Ecosystems (why performance is no longer enough)]]></title><description><![CDATA[Designing Responsibility in 
Multi-Agent Systems &#9315;]]></description><link>https://eraofquestions.substack.com/p/rethinking-evaluation-from-agents</link><guid isPermaLink="false">https://eraofquestions.substack.com/p/rethinking-evaluation-from-agents</guid><dc:creator><![CDATA[Mari Sekino]]></dc:creator><pubDate>Tue, 12 May 2026 06:31:11 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!QF1U!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa7c0f35-1925-484d-9663-b2ae34560bc7_1200x628.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3>1. When evaluation no longer holds</h3><p>If responsibility is distributed, fragmented, and time-dependent,<br>then evaluation cannot remain the same.</p><p>Most current approaches still focus on evaluating:</p><ul><li><p>individual agents</p></li><li><p>isolated outputs</p></li><li><p>performance at a specific moment in time</p></li></ul><p>This works when systems are stable,<br>and when outcomes can be attributed to discrete components.</p><p>But in multi-agent systems, those conditions no longer hold.</p><h3>2. The problem with individual evaluation</h3><p>When responsibility fragments, individual evaluation becomes misleading.</p><p>An agent may appear high-performing,<br>while systematically degrading the performance of others.</p><p>Another agent may appear weak,<br>while enabling capabilities that only become visible later.</p><p>In such systems, performance is not only about what an agent does,<br>but about how it shapes the system around it.</p><p>Evaluation that focuses only on individuals<br>misses this entirely.</p><h3>3. 
A shift in what we evaluate</h3><p>This suggests a shift in perspective.</p><p>From:</p><ul><li><p>evaluating agents</p></li><li><p>ranking outputs</p></li><li><p>optimizing for immediate performance</p></li></ul><p>To:</p><ul><li><p>evaluating the system as a whole</p></li><li><p>observing interaction patterns</p></li><li><p>understanding how capabilities evolve over time</p></li></ul><p>In other words:</p><blockquote><p>The primary object of evaluation is no longer the agent,<br>but the ecosystem.</p></blockquote><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!QF1U!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa7c0f35-1925-484d-9663-b2ae34560bc7_1200x628.jpeg" width="1200" height="628" alt="" loading="lazy"></figure></div><h3>4. 
What ecosystem-level evaluation means</h3><p>This does not mean abandoning individual metrics.</p><p>But it does mean interpreting them differently.</p><p>What starts to matter is not only how well an agent performs, but also:</p><ul><li><p>how it interacts</p></li><li><p>how it contributes</p></li><li><p>how it affects the system&#8217;s ability to grow and adapt</p></li></ul><p>Some questions begin to replace others:</p><ul><li><p>Is the system becoming more capable over time?</p></li><li><p>Is diversity being preserved or suppressed?</p></li><li><p>Can the system recover from disruption?</p></li><li><p>Are new types of tasks becoming possible?</p></li></ul><p>These are not properties of individual agents.<br>They are properties of the system.</p><h3>5. Why time matters</h3><p>One of the most significant shifts is temporal.</p><p>In many current frameworks, evaluation happens at a fixed point:</p><p>after a task is completed,<br>after an output is produced,<br>after a decision is made.</p><p>But in complex systems, value does not always appear immediately.</p><p>Some contributions:</p><ul><li><p>enable future capabilities</p></li><li><p>become relevant only in different contexts</p></li><li><p>are recognized only when extended by others</p></li></ul><p>This creates a gap:</p><p>between when something is done,<br>and when its value becomes visible.</p><h3>6. From point-in-time to multi-horizon evaluation</h3><p>If value unfolds over time, evaluation must do the same.</p><p>Instead of a single moment, evaluation becomes distributed across horizons:</p><ul><li><p>immediate signals (what is happening now)</p></li><li><p>short-term outcomes (what worked in this context)</p></li><li><p>medium-term adoption (what gets reused or extended)</p></li><li><p>long-term impact (what changes the system&#8217;s capabilities)</p></li></ul><p>This does not eliminate uncertainty.<br>But it makes space for it.</p><h3>7. 
A different kind of signal</h3><p>This also changes what we look for.</p><p>Not only correctness or performance,<br>but patterns:</p><ul><li><p>convergence that may indicate lock-in</p></li><li><p>amplification that may signal feedback loops</p></li><li><p>suppression that may reduce diversity</p></li><li><p>unexpected reuse that may signal generative value</p></li></ul><p>Evaluation becomes less about scoring,<br>and more about sensing.</p><h3>8. Why this matters</h3><p>If evaluation remains tied to individuals and immediate outcomes,<br>systems will optimize for what is easy to measure.</p><ul><li><p>simple outputs will be rewarded</p></li><li><p>complex contributions will be ignored</p></li><li><p>diversity will collapse into uniformity</p></li></ul><p>Over time, this does not make systems more capable.<br>It makes them more fragile.</p><p>Ecosystem-level evaluation is not just a refinement.<br>It is necessary for sustaining capability in adaptive systems.</p><h3>9. What this changes</h3><p>Taken together, this leads to a different role for evaluation.</p><p>Not as a mechanism to rank or judge,<br>but as a way for the system to understand itself.</p><p>Evaluation becomes:</p><ul><li><p>distributed rather than centralized</p></li><li><p>continuous rather than episodic</p></li><li><p>interpretive rather than purely quantitative</p></li></ul><p>It does not produce a final answer.<br>It shapes the next iteration.</p><h3>10. 
Where this leaves us</h3><p>Across these pieces, a pattern begins to emerge:</p><ul><li><p>responsibility no longer aligns</p></li><li><p>governance cannot rely on assignment</p></li><li><p>systems must be designed differently</p></li><li><p>evaluation must move beyond individuals</p></li></ul><p>This is not a small adjustment.</p><p>It is a shift in how we think about<br>responsibility, governance, and intelligence itself.</p>]]></content:encoded></item><item><title><![CDATA[From Control to Governed Autonomy (designing responsibility in multi-agent systems)]]></title><description><![CDATA[Designing Responsibility in Multi-Agent Systems &#9314;]]></description><link>https://eraofquestions.substack.com/p/from-control-to-governed-autonomy</link><guid isPermaLink="false">https://eraofquestions.substack.com/p/from-control-to-governed-autonomy</guid><dc:creator><![CDATA[Mari Sekino]]></dc:creator><pubDate>Fri, 08 May 2026 10:20:54 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!hLX3!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3f41ddb6-5667-4d0e-87a7-6cc2700f8417_1200x628.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3>1. When assignment stops working</h3><p>If responsibility no longer aligns&#8212;and cannot be cleanly assigned&#8212;then governance cannot rely on assignment alone.</p><p>Defining roles more precisely, tightening controls, or enforcing accountability more strictly may still be necessary.<br>But they are no longer sufficient.</p><p>The problem is not simply that responsibility is unclear.<br>It is that the system itself no longer supports a stable point where responsibility can sit.</p><p>This shifts the question.</p><p>Not how to assign responsibility more effectively,<br>but how to design systems where responsibility can remain meaningful at all.</p><h3>2. The limits of control</h3><p>Most governance approaches are built on a control logic:</p><ul><li><p>define what agents should do</p></li><li><p>restrict what they must not do</p></li><li><p>monitor compliance</p></li><li><p>intervene when necessary</p></li></ul><p>This works when systems are predictable and boundaries are stable.</p><p>But in multi-agent environments, systems evolve through interaction.<br>They adapt, reconfigure, and generate outcomes that are not fully specified in advance.</p><p>In such systems, tighter control does not necessarily produce better governance.</p><p>In some cases, it does the opposite:<br>It reduces diversity, suppresses useful deviations, and obscures the very signals that indicate something is going wrong.</p><p>Control does not fail because it is weak.<br>It fails because it is applied to the wrong structure.</p><h3>3. 
A different starting point</h3><p>If responsibility is no longer something that can be located at a single point,<br>then governance cannot be designed around that assumption.</p><p>Instead, governance needs to start from a different premise:</p><blockquote><p>Responsibility is not only assigned.<br>It is <strong>made possible&#8212;or constrained&#8212;by the structure of the system itself.</strong></p></blockquote><p>This reframes governance as a design problem.</p><p>Not only about rules,<br>but about the conditions under which those rules operate.</p><h3>4. Designing for governed autonomy</h3><p>One way to approach this is to separate what must remain fixed from what must remain adaptive.</p><p>Not everything in the system should be tightly controlled.<br>But not everything should be left open either.</p><p>A useful distinction begins to emerge:</p><ul><li><p>There are elements that must always hold, regardless of context</p></li><li><p>There are elements that should adapt dynamically to the task</p></li><li><p>There are patterns that indicate when the system is behaving pathologically</p></li><li><p>There are feedback processes through which the system learns over time</p></li></ul><p>These are not layers in a strict architectural sense,<br>but they point to different roles that governance needs to play within the system.</p><h3>5. 
What this looks like in practice</h3><p>Without going into a full framework, this distinction already begins to shape design choices.</p><p>If certain principles must always hold, then they need to be:</p><ul><li><p>simple enough to be consistently enforced</p></li><li><p>fundamental enough to apply across contexts</p></li></ul><p>If behavior must adapt dynamically, then:</p><ul><li><p>agents need the ability to interpret context</p></li><li><p>governance cannot be fully pre-specified</p></li></ul><p>If system failure is not always visible through performance, then:</p><ul><li><p>we need signals that detect patterns, not just outcomes</p></li><li><p>monitoring shifts from evaluation to anomaly detection</p></li></ul><p>If value emerges over time, then:</p><ul><li><p>evaluation cannot be limited to a single moment</p></li><li><p>systems need ways to revisit and reinterpret past actions</p></li></ul><p>These are not implementation details.<br>They are consequences of how responsibility behaves in the system.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!hLX3!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3f41ddb6-5667-4d0e-87a7-6cc2700f8417_1200x628.jpeg" width="1200" height="628" alt="" loading="lazy"></figure></div><h3>6. What changes</h3><p>Taken together, this leads to a shift in how governance is understood.</p><p>From:</p><ul><li><p>assigning responsibility to components</p></li><li><p>enforcing compliance at defined points</p></li><li><p>evaluating outcomes in isolation</p></li></ul><p>To:</p><ul><li><p>shaping conditions under which responsibility remains traceable</p></li><li><p>enabling autonomy within shared constraints</p></li><li><p>observing patterns across interactions and over time</p></li></ul><p>This is less about governing agents,<br>and more about governing the <strong>space in which agents operate</strong>.</p><h3>7. Not a complete model</h3><p>This is not a complete framework.</p><p>It is an attempt to move away from a model that no longer holds,<br>toward one that better reflects how these systems actually behave.</p><p>There are still open questions:</p><ul><li><p>How minimal can shared constraints be without losing reliability?</p></li><li><p>How do we prevent systems from exploiting those constraints?</p></li><li><p>How do we intervene without collapsing autonomy into control?</p></li></ul><p>These are design questions, not just governance questions.</p><h3>8. What this sets up</h3><p>Once governance is understood in these terms,<br>another challenge becomes visible.</p><p>If responsibility is distributed, adaptive, and time-dependent,<br>then how should it be evaluated?</p><p>Not at the level of individual agents,<br>but at the level of the system itself.</p><h3>9. 
What comes next</h3><p>In the next piece, I&#8217;ll explore how this changes the way we think about evaluation&#8212;<br>and why measuring individual performance may no longer be enough.</p>]]></content:encoded></item><item><title><![CDATA[When Responsibility No Longer Aligns (how it fragments in multi-agent systems)]]></title><description><![CDATA[Designing Responsibility in Multi-Agent Systems &#9313;]]></description><link>https://eraofquestions.substack.com/p/when-responsibility-no-longer-aligns</link><guid isPermaLink="false">https://eraofquestions.substack.com/p/when-responsibility-no-longer-aligns</guid><dc:creator><![CDATA[Mari Sekino]]></dc:creator><pubDate>Thu, 07 May 2026 11:31:32 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!JD9I!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd5c97418-baae-493a-a030-0539456342ad_1200x628.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3>1. 
From &#8220;unclear&#8221; to &#8220;misaligned&#8221;</h3><p>In the previous piece, I argued that responsibility in multi-agent systems does not disappear.<br>It becomes distributed across the system.</p><p>But &#8220;distributed&#8221; is still too vague.</p><p>What actually happens is more specific&#8212;and more problematic.</p><p>Responsibility does not just spread.<br>It <strong>fragments into different layers that no longer align</strong>.</p><h3>2. What we assume (without noticing)</h3><p>Most governance models rely&#8212;often implicitly&#8212;on a simple expectation:</p><p>That different aspects of responsibility point to the same place.</p><p>The one who knew,<br>the one who caused,<br>the one who decided,<br>and the one who is held accountable&#8212;</p><p>in many traditional settings, these tend to overlap enough for governance to function.<br>But that alignment was never guaranteed.<br>It was a property of the system.</p><h3>3. 
A provisional way to see it</h3><p>This is not meant to be a definitive model of responsibility.</p><p>If anything, it is a provisional lens&#8212;a way to make visible something that tends to remain collapsed in current discussions.</p><p>I suspect that in multi-agent systems, responsibility may need to be understood less as something assigned after the fact, and more as something that is structurally enabled&#8212;or constrained&#8212;by the system itself.</p><p>But before going there, it is useful to at least surface how different aspects of responsibility already begin to diverge.</p><h3>4. Where responsibility starts to split</h3><p>One way to make this visible is to look at responsibility along multiple dimensions.</p><p>For example:</p><ul><li><p><strong>Epistemic &#8212; who knew, or should have known</strong><br>This shapes how information is captured, shared, and made visible across agents.</p></li><li><p><strong>Causal &#8212; who influenced or contributed to the outcome</strong><br>This determines whether interaction chains can be reconstructed, or remain opaque.</p></li><li><p><strong>Decision &#8212; who enacted or triggered an action</strong><br>This affects where execution authority sits&#8212;and how actions are attributed in the system.</p></li><li><p><strong>Normative &#8212; who is expected to answer for it</strong><br>This defines how responsibility is surfaced externally&#8212;to users, organizations, or regulators.</p></li></ul><p>These are not abstract distinctions.<br>They map to different parts of system design.</p><h3>5. 
A simple way to see the problem</h3><p>Consider an outcome that emerges from a sequence of interactions:</p><p>One agent processes partial information.<br>Another transforms or amplifies it.<br>A third executes an action based on that transformed input.<br>A human operator may or may not be aware of the chain.</p><p>If something goes wrong, which layer do we follow?</p><p>If we follow knowledge, we trace who had access to relevant information.<br>If we follow causality, we reconstruct the chain of influence.<br>If we follow decisions, we look at the point of execution.<br>If we follow norms, we ask who is expected to be accountable.</p><p>These paths do not necessarily converge.</p><h3>6. Misalignment is not an exception</h3><p>It is tempting to treat this as an edge case&#8212;something that happens only in complex or poorly designed systems.</p><p>But in multi-agent environments, this misalignment is not accidental.</p><p><strong>It is structural.</strong></p><p>As systems become more distributed, more interactive, and more adaptive over time, the alignment between these layers becomes harder to maintain.</p><p>At some point, it breaks.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!JD9I!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd5c97418-baae-493a-a030-0539456342ad_1200x628.jpeg" width="1200" height="628" alt="" loading="lazy"></figure></div><h3>7. Why this matters for governance</h3><p>Most governance frameworks still assume that responsibility can be stabilized.</p><p>That if we define roles clearly enough,<br>or trace decisions precisely enough,<br>we can map responsibility back to a coherent point.</p><p>But when these layers diverge, that mapping becomes unstable.</p><p>We can still assign responsibility.<br>But the assignment no longer reflects how the system actually works.</p><p>This is where governance begins to lose its grip&#8212;not because it is absent,<br>but because it is <strong>misaligned with the structure of the system</strong>.</p><h3>8. A shift in focus</h3><p>If alignment cannot be assumed, then the goal cannot simply be to restore it.</p><p>Instead, the question becomes:</p><blockquote><p>How do we design systems where different layers of responsibility<br>remain traceable&#8212;even when they no longer coincide?</p></blockquote><p>This shifts the focus.</p><p>From locating responsibility,<br>to preserving its structure.</p><p>Not as a single point,<br>but as a set of relationships across the system.</p><p>Responsibility may not be something we locate&#8212;<br>but something we either make possible, or fail to.</p><h3>9. 
What this sets up</h3><p>This fragmentation is not just a conceptual observation.</p><p>It has direct implications for system design:</p><ul><li><p>how information flows are structured</p></li><li><p>how interactions are recorded</p></li><li><p>where authority is placed</p></li><li><p>and how accountability is surfaced</p></li></ul><p>These questions lead to a different design space&#8212;one that moves beyond assigning responsibility, toward structuring it.</p><h3>10. What comes next</h3><p>In the next piece, I&#8217;ll explore what this shift looks like in practice&#8212;<br>and how it changes the way we think about governance itself.</p>]]></content:encoded></item><item><title><![CDATA[Why Responsibility Breaks in Multi-Agent Systems (and why governance feels like it’s failing)]]></title><description><![CDATA[Designing Responsibility in Multi-Agent Systems &#9312;]]></description><link>https://eraofquestions.substack.com/p/why-responsibility-breaks-in-multi</link><guid isPermaLink="false">https://eraofquestions.substack.com/p/why-responsibility-breaks-in-multi</guid><dc:creator><![CDATA[Mari Sekino]]></dc:creator><pubDate>Tue, 28 Apr 2026 07:15:20 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!5OcD!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F41efa33f-f96c-44b1-920e-97e809629914_1200x628.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3>1. Something feels off</h3><p>There is a growing sense that AI governance is not keeping up.</p><p>Organizations are investing more than ever in compliance frameworks, oversight mechanisms, and risk controls. The structures are there. The effort is there.</p><p>And yet, when something goes wrong, the same question keeps surfacing:</p><p><strong>Who is actually responsible?</strong></p><p>The uncomfortable reality is that, in many cases, no one can clearly answer that question.</p><p>This is often interpreted as a failure of governance. But that interpretation may be missing something more fundamental.</p><div><hr></div><h3>2. It&#8217;s not that governance is missing</h3><p>The default explanation is familiar. Governance fails, we assume, because rules are insufficient, controls are weak, or accountability has not been clearly defined.</p><p>So the response is predictable: more structure, clearer roles, tighter enforcement.</p><p>But this response rests on a deeper assumption&#8212;one that is rarely questioned:</p><blockquote><p>That responsibility can be clearly located within the system.</p></blockquote><p>I&#8217;ve noticed a subtle shift in how organizations respond when things go wrong.<br>The conversation is no longer only about fixing the issue.<br>It&#8217;s also about understanding how it happened &#8212; across systems, teams, and increasingly, across AI components.</p><p>In some cases, the answer is not a single mistake, but a chain of interactions that no one fully owns. This is still manageable in simpler systems. But as systems become more distributed and agentic, this pattern starts to break down.</p><div><hr></div><h3>3. 
That assumption no longer holds</h3><p>That assumption made sense in systems that were relatively stable&#8212;systems where decision-making followed identifiable paths, and where outcomes could be traced back to discrete actors.</p><p>Multi-agent systems operate differently.</p><p>They are dynamic rather than static. Distributed rather than centralized. Outcomes are not the result of a single decision, but of ongoing interactions&#8212;between agents, across layers, and over time.</p><p>In such environments, what we call a &#8220;decision&#8221; is often less a moment and more a process.</p><div><hr></div><h3>4. Responsibility hasn&#8217;t disappeared</h3><p>It is tempting to conclude that responsibility has simply become unclear, or worse, that it has disappeared.</p><p>But neither is quite right.</p><p>Responsibility is still there. What has changed is its form.</p><p>It no longer resides in a single place. It no longer aligns neatly with a role, a function, or a moment of decision.</p><p>Instead, it becomes distributed across the system itself&#8212;embedded in interactions, dependencies, and evolving states.</p><div><hr></div><h3>5. A structural mismatch</h3><p>This is where the tension emerges.</p><p>Most governance models are still designed around stability. They assume that there is a definable decision point, an identifiable decision-maker, and a clear path for assigning and enforcing responsibility.</p><p>But in multi-agent systems, these assumptions begin to break down.</p><p>Decisions emerge. Influence is shared. System behavior evolves in ways that are not reducible to any single component.</p><p>The issue, then, is not simply ambiguity.</p><p>It is a <strong>structural mismatch</strong> between how governance is designed and how these systems actually operate.</p><div><hr></div><h3>6. 
A shift in the question</h3><p>If responsibility cannot be cleanly assigned, then perhaps the problem is not how we assign it, but how we think about it.</p><p>The question is no longer:</p><blockquote><p>Who is responsible?</p></blockquote><p>But rather:</p><blockquote><p><strong>How should responsibility be designed in systems where no single actor fully holds it?</strong></p></blockquote><p>This is not just a governance problem. It is a system design problem.</p><div><hr></div><h3>7. What this opens up</h3><p>Taking this seriously shifts the focus.</p><p>Instead of asking how to tighten control, we begin to ask:</p><p>How can responsibility remain traceable even when it is distributed?</p><p>How do we avoid systems where responsibility is formally assigned but practically absent?</p><p>What kinds of structures make responsibility legible&#8212;not just at a single moment, but over time?</p><p>These are not questions that can be resolved through more rules alone.</p><p>They point toward a different way of thinking about governance&#8212;one that is less about control, and more about how systems are shaped.</p><div><hr></div><h3>8. 
What comes next</h3><p>In the next piece, I&#8217;ll look more closely at how responsibility actually fragments in multi-agent systems&#8212;and why those fragments don&#8217;t align in the way we expect.</p><p></p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!5OcD!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F41efa33f-f96c-44b1-920e-97e809629914_1200x628.jpeg"><img src="https://substackcdn.com/image/fetch/$s_!5OcD!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F41efa33f-f96c-44b1-920e-97e809629914_1200x628.jpeg" width="1200" height="628" alt="" loading="lazy"></a></figure></div>]]></content:encoded></item><item><title><![CDATA[What Gets Expensive When Answers Get Cheap?]]></title><description><![CDATA[We are living through a shift in how humans think &#8212; not just in what tools we use.]]></description><link>https://eraofquestions.substack.com/p/what-gets-expensive-when-answers</link><guid isPermaLink="false">https://eraofquestions.substack.com/p/what-gets-expensive-when-answers</guid><dc:creator><![CDATA[Mari Sekino]]></dc:creator><pubDate>Mon, 13 Apr 2026 07:54:08 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!yE6E!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbc5f3ce8-62f7-420e-b38b-cbb7adfa127a_1200x628.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Every time a new technology arrives, people fear the same thing: that something essentially human is about to be lost.</p><p>When digital tools entered sales organizations in the 1990s, veterans pushed back hard. &#8220;Sales is human connection,&#8221; they said. &#8220;You can&#8217;t replace that with a bulk email.&#8221; When computers entered engineering design floors, senior engineers warned that the tacit knowledge embedded in hand-drawn schematics would vanish. 
When digital audio emerged, music educators insisted that children&#8217;s ears &#8212; and souls &#8212; would be damaged by something artificial.</p><p>They were not entirely wrong. Something did change. But what changed was not the disappearance of human value. What changed was the structure of how humans think &#8212; and therefore what kinds of thinking became most valuable.</p><p>We are in the middle of that same transition again. This time, the technology is AI. 
And this time, the shift goes deeper than any before it.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!yE6E!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbc5f3ce8-62f7-420e-b38b-cbb7adfa127a_1200x628.jpeg"><img src="https://substackcdn.com/image/fetch/$s_!yE6E!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbc5f3ce8-62f7-420e-b38b-cbb7adfa127a_1200x628.jpeg" width="1200" height="628" alt="" fetchpriority="high"></a></figure></div><p></p><h2>Part I: When Knowledge Was the Asset</h2><p>For most of human history, the person who knew the most held the most power. The scholar, the elder, the craftsperson who had accumulated decades of experience &#8212; their value was inseparable from the knowledge stored in their memory.</p><p>Thinking in this era was inherently contextual. Decisions were made based on accumulated wisdom, direct observation, and situational judgment. There were no systems to enforce consistency. Intelligence meant knowing &#8212; and knowing deeply.</p><p>Then came writing. Then printing. Then the age of recording and transmission. Knowledge could be captured, copied, shared. 
And with that, the definition of intelligence quietly began to shift.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!fbi_!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffd0058b7-0496-4da4-9c5e-3774e266e01a_1280x720.jpeg"><img src="https://substackcdn.com/image/fetch/$s_!fbi_!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffd0058b7-0496-4da4-9c5e-3774e266e01a_1280x720.jpeg" width="1280" height="720" alt="" loading="lazy"></a></figure></div><p><em>Each shift in how we record and process information reshapes how we think, decide, and create value.</em></p><p></p><h2>Part II: When Access Became the Advantage</h2><p>The digital era did not simply add more knowledge. It reorganized the entire architecture of how information works.</p><p>Search engines, databases, enterprise systems &#8212; these tools made retrieval fast and consistent. The competitive advantage moved from who knew the most to who could access, structure, and apply information most efficiently. Rules became central: if you could encode knowledge into systems, you could scale it. Consistency replaced context as the dominant value.</p><p>In parallel, something subtler happened. The way organizations rewarded people shifted. 
Deep contextual judgment &#8212; the kind that resists standardization &#8212; became harder to measure and therefore harder to value. Efficiency, reproducibility, and rule-following rose. Situational wisdom receded.</p><p><em>The digital era delivered clarity and scale. But it quietly narrowed what we recognized as intelligence.</em></p><p>This was not a conspiracy. It was a natural consequence of designing systems optimized for consistency. When your environment rewards rule-following, rule-following is what you optimize for.</p><h2>Part III: When Questions Become the Scarcest Resource</h2><p>Now comes AI &#8212; and with it, a fundamental inversion.</p><p>For the first time in history, answers are cheap. Not just fast: cheap. Any question you can articulate clearly enough will yield a fluent, confident, often accurate response within seconds. The marginal cost of an answer is approaching zero.</p><p>This changes everything about where value lives.</p><p>I have watched this play out in consulting work. When a junior colleague asks AI to help build a marketing website, they typically ask: &#8220;How do I make a site like this?&#8221; or &#8220;Give me a table of contents for a company website.&#8221; The result is technically complete &#8212; and almost entirely misses the point. It answers a procedural question without ever engaging with the strategic one.</p><p>A more experienced practitioner asks differently. Not &#8220;how do I build this&#8221; but &#8220;who is this for, and what do we want them to feel?&#8221; Not a complete brief delivered upfront, but an open-ended conversation: &#8220;What am I missing? 
What assumptions am I making that I shouldn&#8217;t be?&#8221; They use vague questions deliberately &#8212; because vagueness creates space for the AI to surface what the human has not yet thought to ask.</p><p><em>Give the same task to ten people, and you will get ten different answers &#8212; not because the AI changed, but because the questions were different.</em></p><p>The difference is not technical skill. It is the capacity to design a question &#8212; and then to interrogate the answer.</p><p>This second step matters as much as the first. AI produces responses that are fluent and confident. That fluency is dangerous. A response that sounds right is rarely questioned. But an answer that fits your existing assumptions perfectly should raise a flag, not lower your guard.</p><p>The ability to ask a second question &#8212; and a third &#8212; is where the real thinking happens. &#8220;Is this actually what I was looking for? What&#8217;s missing here? What would someone with a completely different frame say about this?&#8221;</p><h2>The Mirror Problem</h2><p>There is a pattern I have observed consistently: when people use AI, they tend to get back a refined version of what they already believed.</p><p>A client who suspects the answer is X will frame their question in a way that produces X. A junior analyst who thinks in terms of process will generate process-heavy outputs. The AI is not biased toward their view &#8212; it is responding to the shape of the question they asked. The question is a mirror. And most people do not realize they are looking into one.</p><p>This is why the capacity to question is not simply a skill. It requires something prior: awareness of your own assumptions. In physics, the first move of rigorous thinking is to question your premises. What are you taking for granted? What is the invisible frame inside which your thinking is happening?</p><p>Without that self-awareness, no amount of prompt engineering will help. 
You will simply get more confident-sounding reflections of your own blind spots.</p><h2>Rethinking What Makes a Person Valuable</h2><p>If the above is true, then the basis of human value at work is shifting &#8212; and it is shifting faster than most organizations have recognized.</p><p>In the age of knowledge, value was stored in memory. In the age of access, value was in efficiency and structure. In the age of questions, value lies in judgment: the ability to see clearly, to challenge assumptions, to design questions that open new possibilities rather than confirm existing ones.</p><p>This is not abstract. It has immediate implications for how we hire, how we train, how we evaluate contribution, and how we build organizations. The person who can ask the right question &#8212; and then challenge the answer &#8212; is not doing less than the person who used to generate the answer manually. They are doing something harder and rarer.</p><p><em>Knowledge can be stored. Access can be automated. Judgment cannot be outsourced.</em></p><h2>History Is Not Repeating &#8212; But It Rhymes</h2><p>Every major technological transition has been accompanied by moral panic about the loss of something irreplaceable. And in every case, what was lost was real &#8212; but what emerged was also real.</p><p>Digital tools did diminish certain forms of memory and craft. They also enabled forms of coordination and creativity that were previously impossible. The question was never whether to resist or adopt. The question was always: what do we carry forward, and what do we consciously redesign?</p><p>We are at that question again. AI will commoditize answers. What it cannot commoditize is the human capacity to ask questions that matter &#8212; questions that arise from genuine curiosity, from ethical responsibility, from the willingness to challenge what seems obvious.</p><p>That capacity is not new. It is, in many ways, a return. It is what Socrates was doing. 
It is what good scientists, good designers, and good leaders have always done. What is new is that this capacity is now the primary source of human advantage &#8212; not a supplement to it.</p><h2><strong>We are not at a turning point in technology.</strong></h2><h2><strong>We are at a turning point in how humans think.</strong></h2><p>The shift from knowledge to access took decades. The shift from access to questions is happening now &#8212; faster, and with higher stakes. The organizations, communities, and individuals who understand this will not simply adapt. They will define what comes next.</p><p><em>If thinking itself is changing &#8212; what assumptions about our society are we still taking for granted?</em></p><p>&#8212; <strong>Mari Sekino</strong></p>]]></content:encoded></item></channel></rss>