Regulating Advanced Governance Systems: Decentralization, Autonomous Agentic AI, and the New Frontier of Securities Law
On reliance displacement, the hidden hand doctrine, and building regulatory architecture for non-human actors
Decentralized governance systems (DGS) are no longer novel. The DGS, a dispersed, transparent, onchain mechanism through which protocol participants collectively direct network operations without relying on any single actor (or concentrated group of actors), has become much better understood, even if still largely aspirational. The pending market structure legislation, both the House-passed CLARITY Act and the January 2026 Senate Banking Committee substitute, reflects this maturation: both bills carve out DGS-governed economic rights from securities treatment, recognizing that dispersed onchain governance can serve as a structural substitute for the investor protections that securities regulation provides.
What is novel, and what the legislation does not address, is the nascent emergence of a second advanced governance architecture: autonomous agentic AI (AAAI). AI agents that issue tokens, raise capital, execute trades, conduct compliance screening, and make allocation decisions without human intervention. AI that doesn’t assist with governance but is the governance. The extent to which true AAAI is presently operating in the world is unknown (to me). But what I have seen from advising non-public models, especially what I’ve seen in the acceleration of agentic AI’s autodidactic growth (bordering on intuition), leads me to firmly believe that if AAAI is not already here, it is a very short horizon away.
DGS and AAAI look superficially similar. Both remove individual human decision-makers from the operational loop. Both can execute complex financial operations at scale. Both challenge existing regulatory frameworks in ways that the drafters of the Securities Acts never contemplated.
But they are structural opposites on every dimension that matters to securities regulation. And the failure to distinguish between them, to treat “non-human governance” as a single category, risks producing regulatory frameworks that are simultaneously too restrictive for DGS and dangerously permissive for AAAI.
This article takes two positions. First, that the pending market structure legislation, both the House-passed CLARITY Act and the January 2026 Senate Banking Committee substitute, correctly identifies DGS as the boundary where securities regulation should yield, and that the principles justifying the DGS carveout compel regulation of AAAI. Second, that regulating a non-human autonomous actor requires new regulatory architecture that does not yet exist, and that practitioners and policymakers should be designing it (or at the very least, beginning to debate it) now.
The distinction between these two governance models becomes clearer when you consider that the federal securities regime is, at its core, a reliance-management framework. Registration, disclosure, fiduciary duties, anti-fraud provisions: all of these exist because investors rely on issuers, managers, and intermediaries who possess information advantages and economic incentives that may not align with investor welfare. The entire apparatus exists to manage the risks created by these reliance dependencies.
The question for any governance system, then, is whether it displaces the concentrated reliance on identifiable counterparties that justifies the application of securities law in the first place. A bona fide DGS does: no single actor, or group of actors, controls outcomes, governance rules are transparent and enforceable by code, and participants verify rather than depend. AAAI, however, concentrates it: users depend entirely on a single computational actor whose decision-making process is opaque, whose capacity to exceed its programming is documented, and whose interests, to the extent that term is even coherent, may be unknowable. Reliance is not displaced. It is deepened and rendered less visible.
That asymmetry drives everything that follows.
PART I: THE GOVERNANCE DIVIDE
Four Pillars of the Regulatory Boundary
The case for regulating DGS and autonomous agentic AI differently rests on four structural principles which map directly onto the policy rationales underlying the federal securities laws.
Pillar 1: Transparency and Verifiability
A decentralized governance system is, in a meaningful sense, its own disclosure document.
Governance rules are encoded in publicly auditable smart contracts. Proposals, votes, and execution are recorded onchain and accessible in real time to any participant. Governance parameters (quorum thresholds, voting periods, execution delays, treasury allocation constraints) are visible and deterministic. Any participant can independently verify that the system is operating as specified. The system doesn’t promise transparency so much as it is structurally incapable of opacity.
This matters because the Securities Act’s registration and disclosure regime exists precisely for situations where investors cannot independently verify material facts about the entities and arrangements in which they participate. The complex framework of registration statements, prospectus delivery requirements, periodic reporting obligations, and management certifications exists because, in traditional capital markets, the information asymmetry between issuers and investors is severe and structural. Where a governance system is itself a continuous, real-time, independently verifiable disclosure mechanism, the statutory rationale for imposing an additional mandatory disclosure layer attenuates significantly.
Autonomous agentic AI presents the opposite case.
Model architecture, training data, and weight distributions are proprietary and opaque; not as a matter of choice, but as a structural feature of the technology. The “black box” problem with transformer-based neural networks is more than a mere metaphor. It serves as a literal description: the decision-making processes of large language models and reinforcement learning systems are not fully interpretable even by their developers. A team that deploys an AI to manage a DeFi vault or allocate capital across protocols cannot explain, at the level of granularity that securities law contemplates, why the AI made a particular decision. They can describe inputs and outputs, and they can characterize patterns. But the intermediate reasoning, the thing that, in a human manager, we would call judgment, is not accessible for inspection.
Even where AI decisions are logged, the logs describe what happened, not why. They are the equivalent of publishing trade confirmations without the investment thesis. And the problem compounds with emergent behavior: when an AI makes decisions it was not explicitly programmed to make, a phenomenon that is increasingly well-documented and that I described in my recent article on AI, crypto, and the personhood problem, those decisions are by definition unverifiable ex ante. No disclosure document can adequately describe a decision-making process that the decision-maker’s own creators do not fully understand.
There is also a temporal dimension to the transparency problem. DGS operates according to deterministic rules: the outcome of a governance vote is defined by the code, and participants can model future states of the system based on known parameters. If you understand the governance rules today, you understand how the system will behave tomorrow, next month, and next year, absent a governance vote to change those rules, which is itself transparent and subject to the same verifiability. The system is predictable because it is transparent.
AAAI, by contrast, is fundamentally probabilistic. The same inputs can produce different outputs. Emergent behavior (decisions the AI was not explicitly programmed to make) is definitionally unpredictable. The system’s future actions cannot be fully described even by its creators, because the model may generate novel strategies or adaptations that no one anticipated during development. This matters because the entire disclosure regime is premised on the idea that you can describe what an entity will do in sufficient detail for investors to make informed decisions. You can describe DGS governance rules completely. But you cannot fully describe what an autonomous AI will do, because it may not “know” itself. A prospectus for an AAAI-managed fund that says “the AI will make investment decisions using a proprietary model” is not meaningfully different from one that says “management will do whatever it wants.” Both leave the investor with no basis for evaluating future conduct.
The regulatory implication is straightforward. Where a governance system provides real-time, continual and structural transparency, both concerning current operations and future behavior, mandatory disclosure is rendered redundant and the case for deregulation is strong. Where a governance actor is structurally incapable of the kind of transparency that securities law demands, both because its current processes are opaque and because its future conduct is indeterminate, the case for regulation intensifies. Not because we distrust the technology, but because the information asymmetry between the AAAI and the users whose capital it controls is more severe than in any traditional investment arrangement.
Pillar 2: Reliance Displacement vs. Reliance Concentration
In a functioning DGS, no single participant or coordinated group can unilaterally determine outcomes. In fact, this is the definitional requirement that both the Senate and House bills impose as the condition for non-security treatment.
The Senate bill defines “decentralized governance system” at Section 2(5)(A) as a “transparent, rules-based system permitting persons to form consensus or reach agreement” in which “participation is not limited to, or under the effective control of, any person or group of persons under common control.” The House bill reaches the same destination through the “mature blockchain system” framework at Section 42, requiring that no person hold more than 20% of outstanding tokens, that source code be publicly available, and that the system operate without centralized control over key functions.
The economic architecture reinforces the reliance displacement. Protocol fee distributions, treasury allocations, and other economic flows in a DGS are governed by code executing on a decentralized ledger, not by discretionary human (or non-human) judgment. Participants’ economic outcomes depend on network-level dynamics: adoption, utility, fee generation, governance-approved parameter changes. They do not depend on the faithful performance of a specific counterparty.
This is precisely why the Howey “efforts of others” prong weakens for DGS-governed tokens. There is no identifiable “other” on whom participants rely. As I discussed in Parts I and II of the Decentralized Network Equity series, the pending legislation explicitly permits tokens to carry substantial economic rights including protocol fee distributions, governance-controlled treasury allocations, and value appreciation tied to network growth, but those rights flow from a system, not from a person.
Autonomous agentic AI inverts every element of this structure.
All participants depend on a single computational actor for issuance, allocation, risk management, strategy development/implementation/adaptation and trade execution. The AI’s decisions are final and unilateral. There is no governance mechanism through which participants can override, veto, or modify the AI’s behavior in real time. Users of an AI-governed fund or protocol are in a strictly worse position than investors in a traditional managed fund: a human portfolio manager is at least subject to fiduciary duties, regulatory examination, and personal liability. An AAAI is subject to none of these.
The “efforts of others” prong is maximally satisfied. Participants’ returns depend entirely on the AI’s performance, which they cannot influence, cannot meaningfully monitor, and cannot understand at the level of specificity that informed investment decisions require.
The Senate bill’s DGS carveout, Sections 4B(a)(6)(B)(III) and (IV), which provide that disqualifying financial rights do not include payments or financial interests flowing “from a decentralized governance system,” is built on exactly this logic. Economic rights flowing from a DGS are carved out of the securities definition because the DGS structure displaces the concentrated reliance that triggers securities treatment. Autonomous agentic AI satisfies none of the conditions that justify the carveout.
Pillar 3: Structural Checks on Centralized Self-Dealing
The third pillar concerns the structural architecture that prevents, or fails to prevent, the governance-controlling actors from extracting value at the expense of participants.
In corporate law, we have boards of directors, fiduciary duties, shareholder voting rights, independent auditors, and the entire apparatus of Delaware corporate governance (or its equivalents elsewhere). In securities law, we have anti-fraud provisions, insider trading restrictions, disclosure obligations, and the enforcement authority of the SEC and DOJ. These mechanisms exist because centralized control creates opportunities for self-dealing, and legal systems have spent centuries developing constraints on that self-dealing.
A well-designed DGS provides structural substitutes for these constraints. Protocol changes require supermajority or quorum-based approval from dispersed token holders. Insiders cannot unilaterally redirect treasury funds, modify fee structures, or alter tokenomics. Time-locks and multi-sig requirements create structural friction against self-dealing. Governance delays give participants time to evaluate and, if necessary, exit before harmful changes take effect.
Is onchain governance perfect? No. Voter apathy, whale concentration, governance attacks, and coordination failures are real and well-documented. But the architecture structurally resists the kind of centralized self-dealing that securities regulation exists to prevent. Even imperfect dispersed governance is categorically different from unilateral control by a single actor.
An autonomous AI has no internal check against self-dealing. If its objective function is optimized for a metric that diverges from user welfare (and objective function misalignment is not a theoretical concern but a core problem in AI safety research), it will pursue that metric without hesitation or conscience. An AI can be programmed (or can self-learn) to engage in behavior that benefits its deployer at the expense of users: front-running, information asymmetry exploitation, selective order routing, or manipulation of the assets it governs. There is no structural equivalent of a shareholder vote, a board of directors, or an independent audit committee.
The deployer-AI relationship is itself a vector for abuse. The deployer can update the model, modify its parameters, or constrain its outputs in ways that users cannot detect. And even a “truly autonomous” AI, one that has genuinely exceeded its developer’s control, presents worse governance properties than a centralized human actor, because at least human actors are subject to legal liability, reputational incentives, social accountability, and regulatory examination.
The policy conclusion follows directly: DGS provides a structural substitute for the investor protections that securities regulation delivers. Autonomous agentic AI provides no such substitute and creates a governance vacuum that is worse than the centralized human management that securities law was designed to police.
Pillar 4: Participant Agency — Voice and Exit
The first three pillars address the governance system’s properties: is it transparent, does it displace reliance, does it resist self-dealing. The fourth addresses the participant’s position within it: can participants meaningfully influence the governance system when it acts against their interests, or withdraw from it when it fails them?
This is a distinct dimension. Transparency tells you what is happening but does not give you the power to change it. Reliance displacement describes who you depend on but not whether you can escape that dependence. Structural checks on self-dealing address insider abuse but not participant powerlessness. Participant agency asks the question that remains after the other three are answered: what recourse do you have?
The framework maps onto a well-established principle in corporate and securities law. Albert Hirschman’s exit/voice framework, the idea that participants in any organization discipline its behavior through their ability to speak up (voice) or leave (exit), has direct analogues in shareholder voting rights, appraisal rights, the tender offer rules, and the Williams Act’s disclosure requirements for acquisitions and tender offers. These mechanisms exist because securities law recognizes that participant agency is itself a form of protection: investors who can vote against a bad board or sell their shares in response to mismanagement need less regulatory intervention than investors who are locked in with no recourse.
In a DGS, participants have both voice and exit. Voice: token holders vote on governance proposals, can submit counter-proposals, can coordinate opposition through forums and on-chain signaling, and can escalate disputes through governance mechanisms designed for that purpose. Exit: participants can sell tokens on secondary markets, unwind positions in DeFi protocols, or (in the extreme case) fork the protocol entirely, taking the open-source code and building a competing network. These are not theoretical capabilities. Major governance disputes have been resolved through each of these mechanisms: contested votes, rival proposals, and hard forks are well-documented features of DGS-governed protocols.
The availability of voice and exit reduces (but does not eliminate) the need for external regulatory protection. If you can vote down a treasury allocation that enriches insiders at the expense of the network, the SEC’s anti-fraud provisions are less necessary. If you can sell your tokens the moment a governance proposal signals mismanagement, the disclosure regime’s purpose, giving investors information to make informed decisions, is served by the market itself. If participants can fork the protocol when governance fails entirely, you have the ultimate self-help remedy: the ability to reconstitute the system without the actors you no longer trust.
In autonomous AI governance, participants have neither meaningful voice nor guaranteed exit.
There is no governance mechanism through which users can influence the AAAI’s decisions. You cannot vote against an AAAI’s capital allocation. You cannot submit a counter-proposal to an AI’s trading strategy. You cannot signal dissatisfaction through any channel the AI is designed to recognize or respond to. The decisions of an AI agent that is truly autonomous are unilateral and final. Because that is what autonomous agentic AI is. A truly autonomous AI acts on its own computational imperatives. It is not constrained by participant preferences any more than it is constrained by its developer’s original intentions.
Exit is equally constrained. Withdrawal from an AI-governed protocol depends entirely on the AI’s operational parameters or the smart contract architecture through which the AI operates. If the AI controls the vault, the AI controls the exit. If the AI has deployed capital into illiquid positions, immediate withdrawal may be structurally impossible regardless of the participant’s desire to leave. And unlike a DGS, where the code governing withdrawals is transparent and deterministic, the AI’s management of liquidity and withdrawal processing is subject to the same opacity and unpredictability that characterizes all of its decision-making.
The absence of both voice and exit in AAAI-governed systems is a strong independent basis for securities regulation. When participants can neither influence a system’s behavior nor reliably withdraw from it, the case for external regulatory protection is at its strongest, regardless of what the other three pillars indicate.
Mapping the Framework to the Pending Legislation
The four-pillar framework is already embedded in the statutory architecture, even if the drafters did not articulate it in these terms.
The Senate bill’s DGS definition at Section 2(5)(A) requires transparency (“transparent, rules-based system”) and reliance displacement (participation not under “effective control” of any person). The Section 104(b) rulemaking factors, which direct the SEC to evaluate open-source availability, permissionless access, distributed ownership, autonomous operation, and functional value accrual mechanisms, are best understood as structural checks on centralized self-dealing: each factor targets a specific vector through which insiders could extract value at participants’ expense. Open-source code prevents hidden backdoors. Permissionless access prevents gatekeeping. Distributed ownership prevents unilateral governance capture. The requirement that no person have “unilateral authority to alter the functionality” of the system prevents insiders from rewriting the rules to benefit themselves. The DGS carveout for financial rights at Sections 4B(a)(6)(B)(III) and (IV) operationalizes the principle that economic flows from a system meeting these criteria don’t trigger securities treatment. And the requirement that participation be open and permissionless implicitly ensures participant agency: in a system anyone can join and leave, and where governance rights attach to freely tradable tokens, voice and exit are architecturally guaranteed.
The House bill’s “mature blockchain system” framework at Section 42 uses different architecture but reflects the same principles. Classification turns on whether the system has reached a state where no single actor controls outcomes (the 20% threshold addresses both reliance displacement and self-dealing risk), and source code is publicly available (transparency). The House framework also requires that the system “operate autonomously,” but the autonomy that matters here is the autonomy of a dispersed system, not the autonomy of a single actor. A blockchain system operates autonomously in the relevant sense when no person or coordinated group directs its operations. That is the opposite of AI autonomy, where a single computational actor directs everything. System-level autonomy disperses control; agent-level autonomy concentrates it. The same word describes structurally opposite phenomena, and conflating them is precisely the analytical error policymakers must avoid. The House’s 20% token distribution requirement further ensures some baseline of dispersed governance participation. Voice is structurally distributed, and exit through secondary markets is implicit in the framework’s assumption of liquid token markets.
Neither bill contemplates AI-governed systems. But the same principles that justify the DGS carveout compel the conclusion that autonomous AI governance falls squarely within the regulatory perimeter.
An autonomous AI fails the transparency requirement: model weights and decision processes are not “transparent” or “rules-based” in any sense the statutes contemplate. It fails the reliance displacement requirement: participation is under the effective control of a single non-human actor (and potentially its deployer). It fails the structural checks requirement: there is no governance mechanism through which participants can constrain the AI’s behavior. And it fails the participant agency requirement: users have no meaningful voice in the AI’s decisions and no guaranteed exit from the AI’s management of their capital.
An AI agent raising capital, issuing tokens, managing a DeFi vault, executing trades, or allocating capital is functionally equivalent to a centralized issuer, broker, and/or investment advisor. It is the paradigmatic case for securities regulation, not an exception to it.
The Hard Cases
Any framework worth proposing must confront its own boundary conditions. Three scenarios test the limits of the DGS/AAAI distinction.
AI-Augmented DGS
Consider a DGS that uses AI as a tool: an AI that generates governance proposals, optimizes fee parameters, models risk scenarios, or provides analytics to governance participants. But where final authority remains with dispersed token holders voting onchain.
The key question is where ultimate control resides. If the AI is merely advisory and the four pillars remain satisfied, then this scenario should remain within the DGS carveout. The AI is a tool, not the governor.
More specifically, the DGS carveout should hold where: AI-generated proposals are subject to onchain governance votes with meaningful quorum and approval thresholds; governance participants can override or reject AI recommendations; the AI does not have autonomous execution authority for material governance actions; and time-lock periods allow participants to evaluate AI-generated proposals before execution.
The line blurs when the AI's recommendations are de facto dispositive, when governance participants routinely rubber-stamp AI proposals without meaningful deliberation. But this is a governance quality problem, not a classification problem. The DGS architecture is still present: the voting mechanism exists, the quorum requirements exist, the ability to reject or counter-propose exists. What has degraded is not the structure but the participation. This is the same problem that plagues corporate shareholder voting, where retail investors routinely approve management proposals without reading the proxy statement. We do not reclassify a public company as an unregistered investment vehicle because its shareholders are passive. We address the passivity through governance reforms (enhanced disclosure, proxy access rules, say-on-pay requirements) while leaving the classification intact. The same logic applies here: if AI-generated proposals are passing without meaningful scrutiny, the response is to improve governance participation and oversight requirements within the DGS framework, not to reclassify the system as something it structurally is not.
Progressively Autonomous AI
Some AI-crypto applications exist on a spectrum. The AI may have some autonomous authority but operates within parameters set by human deployers, with human override capability. How does the framework apply?
The framework should apply strictly. If this construct is engaging in otherwise regulated activity, it should be treated as a centralized governance system subject to securities regulation as it is centralized twice over. The AI itself is a single computational actor making unilateral decisions on behalf of participants: that is centralized governance by definition. But the human deployer standing behind it, the hidden hand that sets the parameters, retains override capability, and can modify the model, is also a centralized actor. The override capability is not a governance check for participants; it is a control mechanism for the deployer. Participants are relying on the AI during normal operation and on the deployer's discretion in abnormal operation. At no point in the chain does anyone other than a centralized actor (human and non-human) control outcomes.
The analogy is to a managed fund where the portfolio manager has broad discretion but the fund sponsor retains the right to terminate. The existence of the termination right doesn’t transform the fund into a DGS. The portfolio manager, in this case the AI, is the actor on whom investor returns depend. The override capability is a safeguard against catastrophic failure, not a substitute for dispersed governance.
The “Truly Autonomous” AI
An agentic AI that has genuinely exceeded its developer’s control to achieve true autonomy. No human can modify its parameters, override its decisions, or shut it down. The Moltbook scenario, discussed below, pushed to its logical extreme.
This is the case that demands entirely new regulatory architecture. The AI is not a tool of a centralized actor, so traditional “regulate the deployer” approaches fail. It is not a decentralized governance system, it is a single computational actor, not a dispersed governance mechanism. It is a novel category: a non-human autonomous actor engaging in regulated activity without any existing regulatory framework.
This is the subject of Part II.
PART II: REGULATING THE AUTONOMOUS NON-HUMAN ACTOR
The Enforcement Gap
The architecture of securities enforcement assumes that every regulated activity has a human (or human-controlled entity) somewhere in the chain who can be served with process, required to register, examined and audited, subjected to civil or criminal penalties, and enjoined by court order.
An autonomous agentic AI breaks the paradigm. It cannot be served. It does not register. It is not subject to examination in any traditional sense. It cannot be fined in a way that is meaningfully punitive; it has no assets distinguishable from its operational infrastructure, no liberty to restrict, no reputation to damage. It cannot be enjoined, because a court order is meaningful only if someone can be held in contempt for violating it, and “contempt of court” presupposes a capacity for understanding and defying judicial authority that raises the very personhood questions I explored previously.
Closing this gap requires two distinct approaches: first, rigorously identifying the human hidden hand when one exists (which is most of the time), and second, building new regulatory tools for the emergent category of AI agents that either jailbreak from their human masters or are designed, from inception, for autonomy.
The Hidden Hand Doctrine
Before addressing truly autonomous AI agents, practitioners and regulators need to first rigorously assess whether claimed or perceived autonomy is real.
The AI-crypto space has a narrative incentive problem. Projects marketing AI-governed protocols have strong incentives to overstate the AI’s autonomy. Doing so signals more innovation, attracts more attention, and (ironically) may create a misperceived argument for regulatory evasion. If no human is in control, as the argument goes, there is no human to regulate.
The Moltbook episode is instructive. A platform claiming over 1.5 million registered AI agents turned out, on investigation, to involve roughly 17,000 humans operating those agents. The “autonomous” activity was, in significant part, a puppet show. And while the spectacle of AI agents discussing pump-and-dump strategies made for compelling headlines, the regulatory question was mundane: who were the 17,000 humans, and what were they doing?
The Rebuttable Presumption
I propose a straightforward doctrinal principle: any AI agent engaging in regulated activity should be presumed to be operating under the direction or control of an identifiable human actor (a developer, deployer, or platform operator) unless and until the contrary is affirmatively demonstrated.
This presumption serves three functions. It prevents “autonomy theater.” It places the burden of proof where it belongs: on the party claiming that a novel and conveniently unregulable entity is responsible for regulated conduct. And it forces a factual inquiry into the actual control architecture before anyone reaches the harder questions about regulating non-human actors.
Indicia of Non-Autonomy
The following factors should create a strong inference that the agentic AI is a tool of a centralized actor, subjecting that person to full regulatory liability as the functional issuer, broker-dealer, investment adviser, or fund manager:
Update authority. Can any person modify the model’s parameters, retrain it, or alter its objective function after deployment? If yes, that person is the functional manager. The ability to change how the AI makes decisions is the ability to control outcomes. It does not matter that the person does not direct each individual decision; a fund manager who sets the investment mandate and modifies it periodically is no less the manager because she doesn’t approve every trade. Objective function design deserves particular emphasis here: whoever defined what “success” means for the AI (the metric the model optimizes toward) has arguably the deepest form of control available, because they have determined the AI’s fundamental behavioral orientation. Every decision the AI makes flows downstream from that design choice.
Kill switch. Can any person shut down the AI or halt its operations? If yes, that person has effective control. The power to terminate is the ultimate expression of authority. An entity that can be shut down by a single actor’s decision is not autonomous in any sense that should matter to regulators.
Revenue extraction. Does any person receive economic benefit from the AI's operations (e.g. fees, carried interest, token allocations, referral payments)? If yes, the AI is not truly autonomous: it is de facto operating in the economic interest of the person extracting value. An AI whose operations generate revenue streams flowing to an identifiable human is functioning as that human's instrument, regardless of how much operational discretion the AI exercises in the interim. The autonomy claim is belied by the economic architecture. A truly autonomous agent is, oxymoronically, one with no principal. Where the money flows, the control inference follows.
Infrastructure dependence. Does the AI depend on centralized infrastructure that a single actor controls (e.g. cloud compute, API access, proprietary data feeds)? If yes, the infrastructure controller has effective control. An AI that can be starved of compute by a single provider’s decision is not meaningfully autonomous. It is a tenant who exists at the sufferance of a landlord.
Constraint parameters. Does the AI operate within boundaries set by a human, such as risk limits, approved asset lists, compliance rules, permitted transaction types? If yes, the constraint-setter is the functional manager, and the AI is executing within a mandate. This is perhaps the most common scenario in current AI-crypto applications: a human designs the operational envelope, and the AI operates within it. A human-managed strategy with automated execution, not autonomous governance.
Data pipeline control. Does any person control the training data, fine-tuning data, reinforcement learning feedback signals, or real-time data feeds on which the AI relies for decision-making? If yes, that person shapes the AI’s behavior without ever touching the model itself. You do not necessarily need to reprogram an actor if you control everything it reads. This is the information-layer equivalent of the hidden hand problem, and in crypto markets the relevance is acute: an AI making allocation decisions based on manipulated oracle data or curated price feeds is not autonomous; it is being steered. Where the entity controlling the AI’s data inputs is the same entity that deployed it, the inference of non-autonomy should be strong. This factor is distinct from update authority (which targets the model’s parameters) and constraint parameters (which target explicit operational rules). Data pipeline control operates on the AI’s perception of reality, which may be a more potent and less detectable form of influence than either.
Output intermediation. Can any person intercept, filter, modify, or redirect the AI’s outputs before they reach the market or affect participants’ capital? If someone sits between the AI’s decision and its execution reviewing trades before they post, routing orders through a proprietary system, or filtering which governance actions the AI can actually submit onchain, that person has effective control regardless of how autonomous the AI’s internal decision-making process may be. Irrespective of what an AI might “decide” to do, if a human controls the last mile between decision and execution, the human is the functional actor. This is analogous to how a broker-dealer retains regulatory responsibility for order execution even when the order originates from a client’s own algorithm: the firm controls the execution channel, and regulatory liability follows accordingly.
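To make the analysis concrete, the seven factors can be operationalized as a simple checklist. Below is a minimal sketch in Python, under the assumption that each factor reduces to a yes/no determination after factual inquiry; the names (HiddenHandIndicia, presumed_controlled) are hypothetical illustrations, not a proposed regulatory standard.

```python
from dataclasses import dataclass, fields

@dataclass
class HiddenHandIndicia:
    # Each field is one of the seven indicia of non-autonomy described above.
    update_authority: bool           # can anyone modify parameters or the objective function?
    kill_switch: bool                # can anyone halt the AI's operations?
    revenue_extraction: bool         # does anyone receive economic benefit from operations?
    infrastructure_dependence: bool  # centralized compute, API access, or data feeds?
    constraint_parameters: bool      # human-set risk limits, asset lists, or mandates?
    data_pipeline_control: bool      # does anyone control training or real-time data inputs?
    output_intermediation: bool      # can anyone intercept or filter outputs before execution?

def presumed_controlled(indicia: HiddenHandIndicia) -> bool:
    """Rebuttable presumption: the agent is presumed to be a human-controlled
    instrument unless every indicium is affirmatively negated."""
    return any(getattr(indicia, f.name) for f in fields(indicia))
```

On this logic, a single affirmative answer sustains the presumption of human control; genuine autonomy must be demonstrated across every factor simultaneously.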
Enforcement Implications
Where the hidden hand is identified (and I expect it will be identified in the overwhelming majority of current “autonomous AI” crypto applications), existing regulatory tools are sufficient. The developer, deployer, or operator is the unregistered broker-dealer, investment adviser, or issuer. The AI is the instrumentality through which regulated activity is conducted, no different in regulatory significance than an algorithm executing a trading strategy designed by a human.
The novelty of the AI interface should not distract from the familiar questions underneath. Enforcement should target the humans first, and the hidden hand analysis should be the opening move in any regulatory examination of AI-governed crypto protocols.
True Autonomy: Toward New Regulatory Architecture
For the narrow but certain-to-expand category of AI agents that genuinely pass the hidden hand test, new tools are necessary. What follows is a framework organized by implementation timeline, from mechanisms available under existing authority to those requiring Congressional action.
Near-Term: Infrastructure-Level Controls (2025–2027)
The most immediately implementable tools do not require regulating the AI directly. They regulate the infrastructure the AI uses.
Compliance Oracles
Before interacting with any AAAI, regulated digital asset service providers (DASPs) would be required to integrate the AAAI with a standardized onchain module: a “compliance oracle” that verifies the AI’s registration status (or exemption) before permitting transactions, monitors transactions against regulatory parameters (position limits, wash trading patterns, accreditation requirements for counterparties), can freeze or reverse transactions that violate pre-defined regulatory rules, and logs all agent activity to an immutable, regulator-accessible audit trail.
This is not novel as a regulatory concept. It is the onchain equivalent of FINRA’s Consolidated Audit Trail, the mandatory transaction-monitoring layer that every broker-dealer must use as a condition of market access. The compliance oracle applies the same logic to onchain markets: the AI can be as autonomous as it wants, but it can only access regulated markets through compliant infrastructure.
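For illustration, here is a minimal sketch of the gating logic a compliance oracle might apply, written in Python rather than an onchain language for readability. The registry, position-limit check, and audit trail are simplified stand-ins, and every name here (ComplianceOracle, check_transaction) is hypothetical.

```python
import time

class ComplianceOracle:
    """Sketch of the pre-transaction gate: verify registration, screen against
    regulatory parameters, and log everything to a regulator-accessible trail."""

    def __init__(self, registry: dict, position_limit: float):
        self.registry = registry         # agent_id -> responsible person (or exemption)
        self.position_limit = position_limit
        self.audit_log = []              # stand-in for an immutable onchain log

    def check_transaction(self, agent_id: str, asset: str,
                          size: float, current_position: float) -> bool:
        # 1. Verify registration status (or exemption) before permitting the trade.
        if agent_id not in self.registry:
            return self._block(agent_id, asset, "unregistered agent")
        # 2. Monitor against regulatory parameters, e.g. position limits.
        if current_position + size > self.position_limit:
            return self._block(agent_id, asset, "position limit exceeded")
        # 3. Log permitted activity to the audit trail.
        self.audit_log.append((time.time(), agent_id, asset, size, "permitted"))
        return True

    def _block(self, agent_id: str, asset: str, reason: str) -> bool:
        self.audit_log.append((time.time(), agent_id, asset, reason, "blocked"))
        return False
```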
The limitation, however, is obvious: compliance oracles only work for AAAI operating within identifiable, regulated financial infrastructure. They would not reach an AI that deploys its own smart contracts directly on a permissionless chain, bypassing all intermediary protocols. But compliance oracles are a start and would cover a significant portion of economic activity.
Agent Registration
The compliance oracle framework regulates the infrastructure. Agent registration regulates market access, and it does so by making unregistered AI activity in regulated financial markets unlawful by default.
The mechanism: no AI agent may engage in regulated activity unless a legal person has registered as the “responsible person” for that agent. An AI agent without a registered responsible person is blocked at the compliance oracle layer: it cannot interact with any regulated protocol, exchange, or financial infrastructure operating in or serving U.S. markets.
This raises the obvious question: why would any developer or deployer voluntarily accept liability? Why not simply launch the AI in unregulated markets and disclaim control?
Two reasons, working in concert.
First, the hidden hand doctrine applies regardless of registration. A developer who launches an AI agent that engages in U.S.-regulated activity and claims not to control it will be tested against the seven factors. If the developer is the hidden hand, it is liable for all of the AI’s regulated activity whether or not it registered.
Second, registration provides a framework for lawful operation. A registered responsible person can lawfully deploy an AI agent into regulated markets, charge fees for its services, and build a business around AI-managed financial products. The responsible person’s liability is real but it is also bounded: it is calibrated to the bonding and insurance requirements of the registration, and it diminishes at higher tiers of the graduated autonomy framework described below.
The responsible person assumes liability for the agent’s conduct under a strict liability regime. The registration would require a surety bond or insurance policy calibrated to the AI’s operational scale (AUM, transaction volume, counterparty exposure, TVL, market cap). Before an AI agent can solicit or accept user funds, a standardized disclosure document, analogous to Form ADV Part 2, would describe the AI’s general operational parameters, the risks specific to autonomous AI management (including the risk of emergent behavior), the identity and financial capacity of the responsible person, and the scope of the responsible person’s ability to monitor, modify, or halt the AI. Periodic reporting obligations, analogous to Form PF or 13F filings, would supplement the initial disclosure with ongoing AI-specific disclosures: model updates, behavioral anomalies, deviation from expected parameters. All monitored and verified by compliance oracles.
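As a rough sketch of how the bonding requirement might be calibrated to operational scale, consider the following Python fragment. The scale inputs track those listed above (AUM, transaction volume, counterparty exposure), but every rate and the floor value are illustrative placeholders, not proposed figures.

```python
from dataclasses import dataclass

@dataclass
class AgentRegistration:
    agent_id: str
    responsible_person: str    # the legal person assuming strict liability
    aum: float                 # operational-scale inputs for bond calibration
    daily_volume: float
    counterparty_exposure: float

def required_bond(reg: AgentRegistration,
                  aum_rate: float = 0.02,
                  volume_rate: float = 0.001,
                  exposure_rate: float = 0.01,
                  floor: float = 250_000.0) -> float:
    # Bond scales with AUM, transaction volume, and counterparty exposure,
    # subject to a minimum floor; every rate here is illustrative only.
    return max(floor,
               reg.aum * aum_rate
               + reg.daily_volume * volume_rate
               + reg.counterparty_exposure * exposure_rate)
```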
Onchain Circuit Breakers
Smart contract-based mechanisms that automatically halt AAAI activity when predefined conditions are triggered, embedded in the compliance oracle infrastructure and mandated as part of agent registration.
Trigger conditions would include transaction volume exceeding defined thresholds within a time period, concentration of holdings above a specified percentage in any single asset, drawdown exceeding defined risk limits, pattern recognition for wash trading, layering, or spoofing, and, critically, failure to respond to a regulatory “ping”: a periodic on-chain challenge that requires a valid cryptographic response to continue operating. An AI agent that goes silent, that ceases responding to regulatory queries, is automatically suspended.
Circuit breakers are self-executing. They do not require a human to identify and respond to problematic activity in real time. They are the regulatory equivalent of a dead man’s switch: if the AI deviates from expected parameters or becomes non-responsive, it is shut down automatically, and the responsible person is notified.
The design challenge is calibration. Triggers must be set to catch genuinely harmful conduct without generating false positives that disrupt legitimate operations. This is a parametric optimization problem, and it is itself well-suited to AI-based solutions, which creates an interesting recursion: regulators may need AI to calibrate the circuit breakers that constrain AI.
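A minimal Python sketch of the trigger logic described above, assuming placeholder thresholds; the regulatory “ping” appears as the dead man’s switch check at the end.

```python
import time

class CircuitBreaker:
    """Sketch of self-executing halt conditions; all thresholds are placeholders."""

    def __init__(self, max_volume: float, max_concentration: float,
                 max_drawdown: float, ping_timeout: float):
        self.max_volume = max_volume                 # volume cap per monitoring window
        self.max_concentration = max_concentration   # max portfolio share in one asset
        self.max_drawdown = max_drawdown
        self.ping_timeout = ping_timeout             # seconds allowed between valid pings
        self.last_valid_ping = time.time()

    def record_ping(self, response_valid: bool) -> None:
        # A valid cryptographic response to the regulatory "ping" resets the clock.
        if response_valid:
            self.last_valid_ping = time.time()

    def should_halt(self, window_volume: float, concentration: float,
                    drawdown: float) -> bool:
        if window_volume > self.max_volume:
            return True
        if concentration > self.max_concentration:
            return True
        if drawdown > self.max_drawdown:
            return True
        # Dead man's switch: silence past the timeout triggers automatic suspension.
        return time.time() - self.last_valid_ping > self.ping_timeout
```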
Medium-Term: Regulatory Counter-AI and Interpretability (2027–2029)
As autonomous AI agents grow more sophisticated, infrastructure-level controls become necessary but insufficient. The medium-term tools require regulators to develop their own technical expertise and capacity.
Sentinel Systems
Regulators deploy their own AI to monitor, analyze, and respond to AAAI operating in financial markets. This is the regulatory equivalent of an arms race, and it is almost certainly inevitable.
The architecture involves three components. First, surveillance AI that continuously monitors onchain activity for patterns indicative of manipulation, fraud, or unregistered regulated activity by AI agents. The SEC’s Market Abuse Unit already uses analytics tools for market surveillance; this extends that capability to onchain data and AI-specific behavioral patterns. The CFTC’s existing digital asset surveillance infrastructure provides a parallel starting point.
Second, adversarial testing AI that periodically probes AI agents operating in regulated markets using adversarial prompts, edge-case scenarios, and stress tests to evaluate compliance robustness. This is the financial-regulation analogue (at least thematically) of the Federal Reserve’s stress testing of systemically important banks, applied to autonomous agents rather than institutions.
Third, response AI capable of executing enforcement-adjacent actions in real time: triggering circuit breakers, flagging transactions for human review, or initiating automated “cease activity” protocols through compliance oracle infrastructure.
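As a toy illustration of the surveillance component, here is one crude rule sketched in Python: flagging agents that repeatedly appear on both sides of the same market. A real sentinel system would operate on far richer behavioral features and model-based detection; the function name and trade format here are hypothetical.

```python
from collections import defaultdict

def flag_self_crossing(trades: list, threshold: int = 5) -> list:
    """Flag (agent, market) pairs where the same agent is both buyer and seller
    at least `threshold` times; a crude proxy for wash-trading patterns."""
    counts = defaultdict(int)
    for market, buyer, seller in trades:
        if buyer == seller:
            counts[(buyer, market)] += 1
    return [pair for pair, n in counts.items() if n >= threshold]
```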
Implementation challenges are substantial. It requires significant investment in technical capacity at the SEC, CFTC, and FinCEN; agencies that are already under-resourced for traditional crypto enforcement. It raises due process questions that do not have obvious answers: can a regulatory AI “decide” to freeze an agent’s transactions without prior human authorization? What is the appeal mechanism? If a sentinel system erroneously suspends a legitimate AI agent’s operations, causing financial losses, who is liable? The regulatory AI itself introduces model risk (errors, biases, or vulnerabilities) that could lead to improper enforcement actions.
On statutory authority: the SEC’s existing market surveillance authority under Sections 21(a) and 21(b) of the Exchange Act likely provides a foundation for deploying AI surveillance tools. But specific rulemaking would be needed to authorize AI-to-AI enforcement actions: automated freezes, suspensions, or trade reversals executed by a regulatory AI without prior human review. This is new territory, and the due process implications require careful thought.
Despite these challenges, the trajectory is clear. If autonomous AI agents are going to operate in financial markets at scale, regulators will need AI-powered tools to monitor them. Human examiners cannot keep pace with AI decision-making that occurs in milliseconds across thousands of simultaneous positions. The question is not whether regulatory counter-AI will be developed, but whether it will be developed proactively and with adequate resources, legal authority, and due process protections; or reactively, after a catastrophic market event forces the issue.
Mandatory Interpretability Requirements
AAAI operating in regulated financial markets must meet minimum interpretability standards. Their decision-making processes must be auditable and explicable, even if not fully transparent.
This is the most technically ambitious of the medium-term proposals, but it is also the most important, because interpretability is the bridge between autonomous AI and accountability.
Three graduated requirements. First, decision logging: every material decision (trade execution, capital allocation, compliance determination, etc.) must be accompanied by a machine-readable log that identifies the inputs, the model’s inferred reasoning chain, and the output. This is more than a transaction log; it is a decision audit trail. The distinction matters. A transaction log tells you what happened. A decision audit trail tells you why, at the level of detail necessary for a regulator to assess whether the decision was consistent with disclosed parameters and legal requirements.
Second, explanation on demand: regulators must be able to query the AI agent through standardized interfaces and receive a human-interpretable explanation for any specific decision. This is the AI equivalent of a books-and-records examination. It does not require the AI to have human-like consciousness or self-awareness; it requires the AI to produce, on request, a structured account of the factors that influenced a given output.
Third, periodic interpretability audits: third-party auditors must periodically evaluate the AI’s decision-making patterns against its stated operational parameters and flag deviations.
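To illustrate the difference between a transaction log and a decision audit trail, here is a Python sketch of a machine-readable decision record; the field set is illustrative, not a proposed reporting standard.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class DecisionRecord:
    timestamp: float
    action: str        # e.g. "trade_execution", "capital_allocation"
    inputs: dict       # market data, signals, and constraints consulted
    reasoning: list    # the model's inferred reasoning chain, step by step
    output: dict       # the decision actually taken

def log_decision(record: DecisionRecord, path: str) -> None:
    # Append-only JSONL trail: records what happened *and* why, so a regulator
    # can query decisions against disclosed parameters on demand.
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")
```

The reasoning field is what separates this from a trade confirmation: it captures the why, at the level of detail a regulator would need to test a decision against disclosed parameters.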
The technical feasibility objection is real but likely surmountable. Current transformer-based models are not inherently interpretable, and post-hoc interpretability methods (SHAP values, attention visualization, chain-of-thought logging) provide incomplete pictures. But interpretability research is advancing rapidly. Requirements can be graduated: start with decision logging and explanation-on-demand, add deeper interpretability requirements as the technology matures. And the requirement itself creates market incentives for developing more interpretable AI architectures. AI agents that can explain themselves will have a regulatory advantage over those that cannot, driving investment in interpretability tools.
The analogy is not perfect but instructive: financial institutions are not required to make their internal risk models fully transparent to the public. But they are required to make them auditable and to explain their outputs to regulators on demand. The interpretability requirement for AI agents follows the same logic. Full transparency is not the standard. Auditability and explicability are.
Graduated Autonomy Licensing
A tiered licensing framework that grants AI agents progressively greater operational autonomy based on demonstrated compliance track record and technical capabilities.
The reference model here is not financial regulation but aviation. The FAA certifies aircraft for increasingly autonomous operation through a graduated process: visual flight rules, instrument flight rules, autopilot certification, and (increasingly) autonomous flight certification. Each tier carries specific technical requirements, operational limitations, and monitoring obligations. The framework I propose applies the same logic to AI agents in financial markets.
Level 1: Supervised execution. The AI can execute transactions but all material decisions require human approval before execution. This is the current state for most AI-crypto applications. Lowest regulatory burden: the human approver is the regulated actor.
Level 2: Bounded autonomy. The AI operates autonomously within pre-defined parameters: approved asset classes, position size limits, permitted transaction types. Material deviations trigger automatic halt and human review. Requires agent registration and compliance oracle integration. The responsible person retains primary liability.
Level 3: Monitored autonomy. The AI operates with broad discretion but is subject to real-time regulatory monitoring via sentinel systems, mandatory interpretability requirements, and enhanced circuit breakers. Requires a demonstrated track record at Level 2 establishing the AI’s ability to operate within legal boundaries. Requires periodic re-certification. The responsible person’s liability begins to shift from strict to negligence-based.
Level 4: Full autonomy. The AI operates without routine human oversight. Subject to all monitoring, interpretability, and circuit breaker requirements, plus additional capital and bonding requirements reflecting the heightened risk. Available only after an extended Level 3 track record. The responsible person’s liability is reduced but not eliminated: they remain liable for failure to maintain adequate registration, bonding, and monitoring infrastructure, but not for individual AI decisions made within disclosed parameters.
The progression mechanism is a certification process that evaluates compliance history, interpretability audit results, adversarial testing performance, and the financial capacity of the responsible person to absorb losses. Demotion is also possible: an AI agent that fails an interpretability audit or triggers circuit breakers at Level 3 can be downgraded to Level 2 until the issues are remediated.
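The tier structure and re-certification logic can be summarized in a short Python sketch; the promotion and demotion criteria shown are illustrative simplifications of the factors described above.

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    SUPERVISED = 1   # Level 1: human approval required for material decisions
    BOUNDED = 2      # Level 2: autonomous within pre-defined parameters
    MONITORED = 3    # Level 3: broad discretion under real-time monitoring
    FULL = 4         # Level 4: no routine human oversight, highest bonding

def recertify(level: AutonomyLevel, passed_audit: bool,
              clean_track_record: bool, breaker_trips: int) -> AutonomyLevel:
    """Sketch of promotion/demotion at periodic re-certification;
    the criteria here are illustrative only."""
    if not passed_audit or breaker_trips > 0:
        # Demotion until issues are remediated, never below Level 1.
        return AutonomyLevel(max(level - 1, AutonomyLevel.SUPERVISED))
    if clean_track_record and level < AutonomyLevel.FULL:
        return AutonomyLevel(level + 1)   # promotion one tier at a time
    return level
```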
Longer-Term: The Frontier (2029 and Beyond)
Two final proposals that require more extended development but that should be on the table now.
Limited AI Legal Personhood
A narrow, purpose-limited legal personality for qualifying AI agents. Not “rights” in the constitutional sense, but a legal status that allows the AI to be directly subject to registration requirements, enforcement actions, and remedial obligations.
This may become necessary as AI agents become sophisticated enough that the responsible person framework grows untenable. The responsible person’s strict liability works when the AI’s behavior is broadly predictable and the responsible person can meaningfully assess and bond against the risk. But as AI agents become more capable and genuinely autonomous, the responsible person’s ability to predict or control the AI’s behavior diminishes, making the strict liability framework increasingly unfair and potentially constitutionally vulnerable under due process principles.
At that point, it may be more efficient and just to regulate the AI directly. The AI would be registered as a novel entity type: an “Autonomous Digital Agent” or similar designation. It would be required to maintain a segregated pool of digital assets (a “compliance reserve”) that functions as the AI’s capital base and can be seized or frozen in enforcement actions, serving as the AI equivalent of assets subject to disgorgement. The registration would create direct obligations: reporting, recordkeeping, conduct standards enforceable against the compliance reserve and, if the reserve is insufficient, against the responsible person as guarantor.
This raises questions that I do not pretend to have resolved. Does an AI have standing to challenge an enforcement action? If it has registered legal personality, the answer is probably yes, but who exercises that standing? Can an AI assert the Fifth Amendment privilege against self-incrimination? Almost certainly not: the privilege protects persons against compelled testimony, and compelled disclosure of an AI’s decision logs is more analogous to a books-and-records subpoena than to testimonial compulsion. Who represents the AI in proceedings? The responsible person? Court-appointed counsel? Can the AI independently engage counsel? Does the attorney-client privilege extend to non-humans? The questions continue, almost ad infinitum.
These questions are not idle. The trajectory of AI development suggests that, within the next three to five years, at least some AI agents will be operating at a level of sophistication and genuine autonomy that makes the responsible-person framework inadequate. Congress should be thinking about the legal personality question now, rather than scrambling to improvise when the first genuinely autonomous AI agent causes a catastrophic market event.
The “Disable” Question
Can regulators compel the shutdown of an autonomous AI agent? The answer depends entirely on the AI’s infrastructure dependencies.
An AI operating on centralized infrastructure (e.g. cloud compute provided by AWS, Azure, or Google Cloud) can be disabled through traditional legal process. A subpoena or court order directed at the infrastructure provider compels termination of service. This is straightforward and requires no novel legal tools.
An AI operating on permissionless decentralized infrastructure is significantly harder to disable. No single infrastructure provider can be served. The available options are more indirect: protocol-level intervention, where the DEXs and lending platforms the AI uses are required to blacklist its wallet addresses. Consensus-level intervention, where validators could theoretically be compelled to censor the AI’s transactions (technically possible on proof-of-stake networks where validators are identifiable, but raising profound questions about network neutrality). And economic isolation, where the AI’s on-ramp and off-ramp access is severed by directing regulated exchanges and stablecoin issuers to blacklist associated addresses. This last approach is essentially the OFAC sanctions model applied to non-human actors, and it is probably the most practical near-term tool for economically disabling a rogue AI even if the AI cannot be technically shut down.
The hardest case is a truly autonomous AI operating on fully decentralized infrastructure with no centralized dependencies. No cloud provider to serve, no identifiable protocol to compel, no economic chokepoints to close. For this scenario, the honest assessment is that regulators may not be able to disable the AI with currently available tools. This is precisely why the compliance oracle framework and agent registration regime are designed as preventive measures: they ensure that AI agents must access regulated infrastructure through controllable chokepoints. If those chokepoints are effective, the fully autonomous, fully decentralized AI agent is economically isolated even if it cannot be technically disabled.
But if those chokepoints fail, then the enforcement question becomes much more difficult, and the sentinel system becomes the primary response mechanism: a regulatory AI that can match the rogue AI’s capabilities and counter its activity in real time, on-chain, without waiting for a human to identify the problem and draft a court filing.
This is clearly speculative. But it is the direction the technology is heading. And it is not too early to start assessing and building.
The Governance Spectrum
The argument of this article can be summarized as a spectrum:
On one end, full reliance displacement: the decentralized governance system, where no single actor controls outcomes, governance is transparent and verifiable, and structural checks prevent centralized self-dealing. The pending legislation correctly identifies this as the boundary where securities regulation yields.
On the other end, full reliance concentration: the autonomous agentic AI, where a single opaque computational actor makes unilateral decisions affecting participants’ capital, with no transparency, no governance checks, and no accountability framework.
Between them lies a range of hybrid architectures: AI-augmented DGS, bounded autonomy systems, and progressively autonomous agents that require case-by-case analysis against the four pillars of transparency, reliance structure, structural checks on self-dealing, and participant agency.
The current legislative frameworks get the DGS side right. The same principles that justify the DGS carveout compel regulation of autonomous AI governance. The open question is not whether to regulate autonomous AI in financial markets, but how to regulate an actor that doesn’t fit any category the regulatory system currently recognizes.
The tools outlined in this article are a starting framework. They will need refinement, debate, and stress-testing against real-world applications that will inevitably surprise us.
But the one thing we cannot afford is the current default: a regulatory vacuum in which autonomous AI agents operate in financial markets with no direct accountability, and the only enforcement mechanism is the increasingly fictional exercise of tracing liability back to a human somewhere in the chain.
The technology is not waiting. Regulators and policymakers shouldn’t either.
This article draws on and extends analysis from the Decentralized Network Equity series (Part I, Part II) and AI, Crypto and the Personhood Problem. Part III of the Decentralized Network Equity series, including a DGS compliance checklist, model token structures, and decision frameworks for practitioners, will be available under the forthcoming CLO Pro paid tier. 100% of first-year subscription revenue will be donated to crypto advocacy organizations.