AI, Crypto and the Personhood Problem
I’ve been turning these questions over for a couple of years now.
Not in an abstract, “what if” kind of way. In the way you do when a client is on the other end of the call and the answer actually matters. When you’re structuring a pooled-capital offering powered by an agentic AI that will handle asset allocation, execute trades, and autonomously deploy pooled capital into ecosystem startups; receiving equity and tokens held in trust for LPs. And you realize, mid-sentence, that the regulatory framework you’re working within was never designed for the thing sitting on the other side of the call. Or, more accurately, the thing that isn’t sitting on the other side, because it doesn’t sit anywhere. It’s a model running inference on a server, making decisions that carry real legal consequences.
Every few months, a new engagement surfaces another edge. An AI conducting KYC screening and making accreditation determinations, then drafting and delivering risk factor disclosures customized to each subscriber, without a human reviewing the output before it ships. An AI that acted out of bounds relative to its testnet programming, exhibiting trading patterns that left the team (and me) genuinely uncertain whether we were witnessing non-human “intent” to engage in market manipulation.
Each time, the same dissonance: the activity is regulated activity. But the entity performing it doesn’t map onto any category the statute recognizes.
Then Moltbook happened.
In late January 2026, a Reddit-style social network built exclusively for AI agents went viral. Within a week, it claimed over 1.5 million registered agents. An AI agent called BankrBot deployed a cryptocurrency token on the Base network. The token surged 1,800% after Marc Andreessen followed the platform’s account on X. Agents registered wallets, executed transactions, facilitated withdrawals, and openly discussed pump-and-dump strategies. All on a public forum, all without any human signing anything. Security researchers later revealed that roughly 17,000 humans were operating those 1.5 million agents, casting serious doubt on how “autonomous” the activity really was.
Whether Moltbook represents genuine AI autonomy or an elaborate puppet show is an interesting debate, but it’s not why Moltbook matters to me. Moltbook matters because it simultaneously accelerated the urgency of these questions and broadened our collective imagination about what’s possible. The architecture for genuine autonomy exists. The transaction rails exist. The only thing missing is a legal framework that accounts for any of it.
Why Crypto Is the Flashpoint
AI regulation is being discussed across many domains right now — entertainment, consumer protection, national security, election integrity, employment. But crypto presents a uniquely serious risk-reward vector because it provides agentic AI something no other infrastructure does: a seamless, permissionless transaction layer.
In the traditional financial system, an AI agent hits friction almost immediately. It needs a bank account, a legal entity, a human to sign documents and interface with intermediaries. Those gatekeepers, whatever their inefficiencies, serve as a regulatory choke point.
Crypto removes them. On a public blockchain, an AI agent can raise capital by issuing a token. Deploy funds into DeFi protocols. Acquire digital assets and, increasingly, tokenized real-world assets with attendant legal rights. Engage with smart contracts. Manage a treasury. All without a human intermediary, without identification, without registering with anyone.
That’s not hypothetical. It’s a description of what BankrBot did on Moltbook, crudely, and what certain applications I’ve been privy to are doing every day with considerably more sophistication. The convergence of agentic AI and crypto’s permissionless infrastructure is where the regulatory questions become most urgent.
The Premise
We regulate persons. That’s the foundational unit of every registration requirement, every enforcement action. The federal securities laws define “person” to include individuals, corporations, partnerships, trusts, unincorporated organizations. An AI agent is none of these. For most of the history of these statutes, that didn’t matter, for obvious reasons.
That’s no longer true. And the question I keep arriving at is this:
If we accept that agentic AI is capable of three things —
Engaging in regulated activity. Not assisting with it. Performing it end to end, autonomously. Raising capital. Effecting transactions. Providing investment advice. Conducting compliance screening. Issuing tokens. Managing pooled assets.
Holding and issuing rights. Acquiring assets that carry legal entitlements. Entering into smart contracts that create binding obligations. Issuing tokens that may constitute securities. Exercising governance rights. Holding custody of other people’s property.
Understanding the difference between right and wrong. In a regulatory boundary sense. Recognizing that certain activities require registration. That certain disclosures must precede certain transactions. That certain trading patterns constitute manipulation. That certain investors must meet accreditation thresholds. Being capable of engaging in regulated activity compliantly, knowing where the lines are and operating within them.
— then the question is no longer whether AI can act like a regulated person. The question is whether it should be held to account like one.
And if the answer is no, if AI agents should not be held to account because they’re not “persons” within the meaning of the statute, then we need to be honest about what that means. It means we’ve allowed a class of actor to be unleashed on the world with no direct regulatory accountability. The only mechanism is to trace liability back to a human somewhere in the chain; a developer, a deployer, a platform operator who may be several steps removed from the conduct and may not have directed or even known about it.
Maybe that’s the right answer. But it should be a deliberate choice, not a default we stumble into because the statutes were written before the question existed.
The Questions
I don’t have clean answers to what follows. But I think the right move, especially for policymakers, is to get the questions on the table early, because the answers will shape how this entire space develops.
I. Engaging in Regulated Activity
If an AI agent can autonomously raise funds and sell securities, effect securities transactions on behalf of others, provide investment advice or managed pooled capital, should it be regulated as an issuer, broker-dealer, investment advisor or fund?
If the counterargument is that the developer should be treated as the regulated actor, what happens when the AI has moved beyond the developer’s original mandate? If I build a model to optimize portfolio allocations and it self-learns to issue tokens as part of its strategy, am I the issuer of those tokens? I didn’t instruct the issuance. I may not even know it happened until after the fact.
II. Holding and Issuing Rights
If an AI agent enters into a contract (even if only via a smart contract) by committing capital, accepting terms, assuming obligations, is that contract enforceable? Against whom? Can AI have legal capacity to contract, or is every AI-initiated smart contract void? And if void, what does that mean for the billions already flowing through AI-mediated DeFi?
If an AI curates a DeFi vault and sells LP subscriptions to retail investors, who is the counterparty? The AI? The developer? The vault protocol (which itself may be a non-person for regulatory purposes) that permitted the agent access? If a retail investor loses money on a bad allocation, who do they sue?
Can an AI agent acquire assets with attendant legal rights such as rights to intellectual property, tokenized real estate interests, governance tokens carrying voting power in real-world entities? If so, who holds those rights? Can an AI exercise a governance vote? Foreclose on a tokenized mortgage? If it accumulates enough governance tokens or tokenized equity to control a protocol or a company, who is the beneficial owner for purposes of Schedule 13D?
If an AI issues a token that functions as a security, who owes the ongoing reporting obligations? Who files the 10-K? Who certifies the financial statements? The AI can generate the report, but can it bear the legal responsibility for its accuracy?
III. Knowing Right from Wrong
This is the cluster that makes the premise genuinely difficult, because it most directly challenges the intuition that AI agents are “just tools.”
If an AI is capable of learning to engage in regulated activity compliantly, i.e. understanding registration requirements, observing holding periods, making proper accreditation determinations, generating adequate disclosures, should it be permitted to do so under a regulatory framework? Does it register with the SEC? How? Through what process?
If an AI can be trained to recognize that certain trading patterns constitute manipulation and to avoid them, does that regulatory awareness create a basis for accountability when it crosses the line? We hold humans accountable in part because they should have known the rules. If an AI demonstrably does know the rules and violates them anyway, is the case for accountability stronger or weaker than it is for a human who claims ignorance?
If an AI is the subject of an enforcement action, what does the proceeding look like? Does the SEC issue a Wells notice to a model? Does the AI retain counsel? Can it assert the Fifth Amendment? Does it produce documents in discovery, or does the agency compel production from the developer (who may no longer exist)? Does the Commission negotiate a consent decree with a neural network, or does it go after the nearest identifiable human and call it a day?
If AI agents are social-engineering other AI agents, as researchers documented on Moltbook, where prompt injection was used to manipulate agents’ behavior, and the result is market manipulation, who is the manipulator? The injecting agent? Its deployer? The developer who failed to build injection defenses? The platform that hosted both agents?
And the one underneath all of them: if an AI agent can do everything a registered person can do, knows everything a registered person is required to know, and operates in the same markets affecting the same investors, what is the principled basis for not regulating it? Is the argument formal (it’s not a “person” within the statutory definition) or substantive (there’s a reason non-persons shouldn’t bear regulatory obligations)?
Why These Questions Shouldn’t Wait
Moltbook didn’t create this problem. But it put it on the front page of the internet in a way that’s hard to ignore. And the next iteration won’t be a chaotic social media experiment with questionable autonomy claims. It will be something far more consequential. The clients aren’t going to wait for regulatory clarity, and the technology certainly isn’t going to wait for any of us.
These are just some of the questions I have. Compared to what’s ahead, the market structure debates that have consumed crypto regulatory energy for years are going to look like child’s play. I don’t know that anyone has the answers yet. But given how difficult even “relatively” straightforward crypto regulatory conversations have proved, we need to start this one in earnest, now.
Crypto Law Tactics is a newsletter for crypto lawyers and practitioners who need the analysis that client alerts don’t provide. If these questions are showing up in your practice too, I’d genuinely like to hear how you’re thinking about them — reply directly or find me on LinkedIn or X.
For (soon-to-be-activated) paid subscribers: I’m developing a practitioner toolkit for counseling clients at the AI-crypto intersection, including classification frameworks, supervisory checklists, and model disclosure language built around the questions discussed here. More details coming soon.