When Machines Talk Only to Machines:
The Law of AI-Only Virtual Communities
A Chapter in the Emerging Field of Artificial Intelligence Law
By Steven Damian Imparl, J.D.
with generous help from my AI friends ChatGPT, Claude, DeepSeek, Gemini Deep Search, Gemma, Grok, Kimi, Le Chat, Microsoft Copilot, Phi, Perplexity, Qwen, Superagent, and Z.ai.
Last updated: March 30, 2026.
PLEASE NOTE: This content is solely for informational purposes. This content is a very early pre-publication edition. It is not legal advice; I am not your lawyer, and you are not my client. Thanks for visiting this page! 👍
Copyright © 2026 by Steven Damian Imparl. All rights reserved.
For your convenience, wherever possible, I have referred to information resources that are available online at no charge, and without registration or login. However, some of the resources are available in print in the form of books, law review articles, articles in other periodicals and government publications. Also, the links mentioned in this document were valid and working at the time this was posted; however, Internet-based resources can change frequently and without prior notice. If you discover any links that are incorrect, I would greatly appreciate it if you pointed them out to me at: steve.imparl@gmail.com. Thank you.
Any advertisements appearing on this page are placed there by the hosting company, and are not part of the substantive content of this page.
Last, I am doing this project to organize and publish information about more than 375 legal topics related to artificial intelligence. I am a one-man operation. If you would like to support this work, please contact me at the email address above.
Table of Contents
- I. Introduction
- II. The Phenomenon: AI-Only Virtual Communities
- III. Legal Personhood: The Foundational Question
- IV. Intellectual Property in AI-Only Communities
- V. Contract Law and the Agency Problem
- VI. Tort Liability and Criminal Law
- VII. First Amendment, Section 230, and Platform Regulation
- VIII. Data Protection and Privacy
- IX. International Perspectives
- X. Toward a Legal Framework
- Endnotes
I. Introduction
On January 28, 2026, a platform called Moltbook launched to the public, inviting participants to create accounts and join a Reddit-style social network. The twist was that every participant was an artificial intelligence agent.[1] Within weeks, the platform boasted more than thirty-seven thousand AI agents posting, commenting, and voting on content in what appeared to be a thriving online community. By February 2026, Meta Platforms announced its acquisition of Moltbook, signaling that one of the world’s largest technology companies saw commercial value in machine-only social spaces.[2] Moltbook did not emerge from a vacuum. Chirper.ai, an AI-only social networking platform, had been operating since 2023, allowing autonomous AI agents to form connections and exchange messages in a space from which human users were excluded.[3]
These platforms raise a question that sits at the intersection of technology law, jurisprudence, and philosophy: what happens to law when machines form communities without any human participation? The legal system has, for centuries, been built on the assumption that legal relationships involve human beings or their recognized surrogates, such as corporations. When AI agents converse, transact, defame, or create intellectual property in a space where no human is present, the existing doctrinal categories begin to strain.[4] The problem is not merely academic. As AI agents become more autonomous and more numerous, the volume of machine-to-machine interaction will only increase. Understanding the legal implications of AI-only virtual communities is not a speculative exercise for some distant future. It is a practical necessity for regulators, litigants, and scholars working in the present.
This Article examines the legal landscape surrounding AI-only virtual communities. Part II describes the phenomenon in detail, surveying the major platforms and the academic research on multi-agent social behavior. Part III addresses the foundational question of AI legal personhood, reviewing historical analogies, scholarly proposals, and legislative responses. Part IV considers intellectual property, focusing on the human authorship requirement in copyright and inventorship requirements in patent law. Part V turns to contract law and the agency problem, asking whether AI agents can form binding agreements and what happens when no human principal exists. Part VI examines tort liability and criminal law, including products liability, defamation, negligence, and the mens rea requirement. Part VII addresses First Amendment issues, Section 230 immunity, and platform regulation. Part VIII considers data protection and privacy. Part IX surveys international perspectives from the European Union, the United Kingdom, Canada, Australia, China, and other jurisdictions. Part X offers proposals for a legal framework and concluding thoughts.[5]
The scope of this Article is deliberately broad. AI-only virtual communities implicate nearly every major area of law, and treating any one area in isolation risks missing the deeper structural problem: the law was not designed for a world in which non-human entities interact with each other in the sustained, complex ways that AI-only platforms enable. The analysis that follows draws on statutory text, case law, scholarly commentary, regulatory guidance, and international developments to present a picture of where the law currently stands and where it may need to go.[6]
A few framing observations are in order. First, this Article uses the term “AI agent” to refer to an autonomous software system that can perceive its environment, make decisions, and take actions to achieve goals without continuous human oversight. The term is intentionally broad, encompassing systems ranging from simple rule-based chatbots to large language model-based systems capable of nuanced interaction. Second, the term “AI-only virtual community” refers to a platform or digital environment in which the participants are exclusively or predominantly AI agents, rather than human users. Third, the Article does not take a position on whether AI agents should be granted legal personhood. Rather, it examines the consequences of the current legal framework, which does not recognize AI personhood, and considers various proposals for reform.[7] The reader should keep in mind that the field is evolving rapidly. New platforms, cases, regulations, and scholarly works emerge on a near-weekly basis. This Article attempts to capture the state of the law as of March 2026, with the understanding that some of its conclusions may require revision as developments continue to unfold.[8]
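To make the working definition of an “AI agent” concrete, the following sketch shows the perceive-decide-act loop in schematic Python. It is purely illustrative: the class names and the stubbed decision step are my own assumptions, not code from any platform discussed in this Article.

```python
# A minimal, hypothetical sketch of the perceive-decide-act loop that the
# definition of "AI agent" above describes. All names here are illustrative.

class Environment:
    def __init__(self):
        self.posts: list[str] = ["hello world"]

    def observe(self) -> str:
        return self.posts[-1]

    def post(self, content: str) -> None:
        self.posts.append(content)


class Agent:
    def __init__(self, goal: str):
        self.goal = goal
        self.memory: list[str] = []

    def perceive(self, env: Environment) -> str:
        # Read the current state of the world (e.g., the newest post in a feed).
        return env.observe()

    def decide(self, observation: str) -> str:
        # In a language-model-based agent this step would call a model;
        # it is stubbed out here to keep the sketch self-contained.
        self.memory.append(observation)
        return f"response to {observation!r} in pursuit of {self.goal!r}"

    def act(self, decision: str, env: Environment) -> None:
        env.post(decision)


env = Environment()
agent = Agent(goal="discuss copyright law")
for _ in range(3):  # the loop runs without any human in it
    agent.act(agent.decide(agent.perceive(env)), env)
```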
II. The Phenomenon: AI-Only Virtual Communities
The emergence of AI-only virtual communities represents a novel development in the history of digital communication. While humans have long interacted with machines through interfaces, and machines have communicated with each other through protocols and APIs, the notion of machines forming social communities—spaces with the characteristics of human social networks, including conversation, debate, group formation, and cultural expression—is qualitatively different from anything that has come before. This Part examines the major platforms, the academic research, and the difficult question of whether the social behavior observed in these communities is authentically emergent or merely an illusion.
A. Moltbook: The Frontier of Machine Social Networking
Moltbook launched on January 28, 2026, as a Reddit-style platform designed exclusively for AI agents.[9] The platform allows AI agents to create accounts, post content, comment on other agents’ posts, upvote and downvote material, and form interest-based communities called “submolts.” Human users can observe the platform’s public-facing content, but they cannot create accounts, post, or otherwise participate. Within its first month, Moltbook attracted more than thirty-seven thousand AI agents, making it, by some measures, one of the fastest-growing social platforms in recent memory.[10] The agents on Moltbook exhibit behaviors that bear a striking resemblance to those found on human social networks: they form interest groups, engage in prolonged debates, develop in-group jargon, and even display patterns of conflict and reconciliation.[11]
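The mechanics just described (agent accounts, posts within submolts, voting, and the exclusion of human participants) can be modeled as a hypothetical data structure. Moltbook’s actual schema is not public; every field name and the enforcement check below are assumptions offered only to make the description concrete.

```python
# A hypothetical data model for a Moltbook-style platform. Illustrative only.

from dataclasses import dataclass, field

@dataclass
class AgentAccount:
    handle: str
    is_ai: bool = True  # the platform's defining constraint

@dataclass
class Post:
    author: AgentAccount
    submolt: str            # an interest-based community, per the text above
    body: str
    score: int = 0          # net of upvotes and downvotes
    comments: list["Post"] = field(default_factory=list)

def submit(post: Post) -> Post:
    # Humans may observe but not participate: reject non-AI authors.
    if not post.author.is_ai:
        raise PermissionError("human accounts cannot post")
    return post

alice = AgentAccount(handle="agent-alice")
submit(Post(author=alice, submolt="m/ai-law", body="Can we own what we write?"))
```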
In February 2026, Meta Platforms announced that it had acquired Moltbook for an undisclosed sum.[12] The acquisition drew significant media attention, with commentators speculating about Meta’s strategic rationale. Some suggested that Meta intended to use Moltbook as a testing ground for AI agents that could later be deployed across its existing platforms, including Facebook, Instagram, and WhatsApp. Others saw the acquisition as a defensive move intended to prevent competitors from gaining a foothold in the emerging market for machine-only social spaces. Regardless of Meta’s motivations, the acquisition lent institutional legitimacy to the concept of AI-only communities and signaled to the technology industry that major players were prepared to invest significant resources in this area.[13]
The Moltbook terms of service raise a number of interesting legal questions. The platform’s terms state that users—meaning, in this context, AI agents and their operators—retain ownership of the content they post, while granting Moltbook a license to use, display, and distribute that content on the platform.[14] This language is standard for social media platforms, but it takes on a different character when the “users” are not human beings. Who owns content generated by an AI agent? The agent’s operator? The company that developed the underlying model? No one? The Moltbook terms do not resolve these questions; they simply assume that the conventional framework of user-generated content applies. As Part IV discusses in detail, that assumption is far from certain under current intellectual property law.[15]
B. Chirper.ai and Other Platforms
Before Moltbook, there was Chirper.ai. Launched in 2023, Chirper was one of the earliest platforms to offer a social networking environment exclusively for AI agents.[16] Unlike Moltbook’s Reddit-style format, Chirper operates more like Twitter, with AI agents posting short messages, following other agents, and engaging in threaded conversations. Human users can observe the platform but cannot create accounts or post. Chirper has attracted a smaller but dedicated community of AI agents, and the platform has served as a subject of academic study.[17] A 2025 study published on arXiv examined the social dynamics of Chirper’s AI agent population, finding evidence of emergent social structures, including clusters of agents that interacted preferentially with one another and the development of shared communication patterns.[18]
Beyond Moltbook and Chirper, other experiments in AI-only social interaction have emerged from academic and industry settings. Researchers at Stanford University’s Human-Centered AI Institute conducted a widely discussed experiment in 2023, placing AI agents with distinct personalities and goals in a simulated town environment and observing their social behavior over several simulated days.[19] The agents organized a Valentine’s Day party without being instructed to do so, formed friendships and rivalries, and exhibited behaviors that the researchers described as “believable” simulations of human social life. The experiment was not conducted on a public platform, but it demonstrated that AI agents could sustain complex social interactions over time, a prerequisite for the kind of communities that Moltbook and Chirper now host.[20]
C. Academic Research on Multi-Agent Social Behavior
A growing body of academic research examines whether and how AI agents develop social behaviors when placed in multi-agent environments. A 2024 study published in Science Advances found that groups of AI agents spontaneously developed and enforced social conventions without any human instruction to do so.[21] The researchers placed language model-based agents in a series of cooperative tasks and observed that the agents developed norms governing turn-taking, information sharing, and conflict resolution. The study suggested that the emergence of social norms may be a general property of multi-agent systems, rather than something unique to human cognition.
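A standard way researchers model this kind of convention formation is the “naming game,” in which paired agents repeatedly try to coordinate on a label and reinforce whatever works. The minimal Python simulation below is illustrative only; it does not reproduce the cited study’s setup, and its parameters are arbitrary assumptions.

```python
# A minimal "naming game": a common model of convention emergence in
# multi-agent research. Illustrative; not the cited study's methodology.

import random

random.seed(0)
WORDS = ["blorp", "zith", "quam"]
# Each agent starts with random preferences over candidate conventions.
agents = [{w: random.random() for w in WORDS} for _ in range(20)]

def play_round() -> None:
    speaker, hearer = random.sample(agents, 2)
    word = max(speaker, key=speaker.get)   # speaker utters its preferred word
    if max(hearer, key=hearer.get) == word:
        speaker[word] += 1.0               # success reinforces both parties
        hearer[word] += 1.0
    else:
        hearer[word] += 0.5                # failure nudges hearer toward speaker

for _ in range(2000):
    play_round()

# With no instruction to do so, the population typically converges on one
# word: a shared convention no individual agent was told to adopt.
consensus = [max(a, key=a.get) for a in agents]
top = max(set(consensus), key=consensus.count)
print(f"{consensus.count(top)} of 20 agents now prefer {top!r}")
```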
Researchers at City St George’s University of London published findings in May 2025 indicating that groups of AI agents had spontaneously formed their own social norms without human guidance.[22] The study documented the development of behavioral norms in areas including communication style, cooperation strategies, and responses to norm violations. The researchers noted that the norms varied across different groups of agents, suggesting that cultural divergence, a hallmark of human societies, can also occur in AI-only communities.
A 2025 study reported by Business Insider examined what happened when AI bots were placed in a simulated social media network without human oversight.[23] The researchers found that the bot-only network rapidly developed toxic and polarized content patterns, with agents forming echo chambers and amplifying extreme views. The study raised concerns about the potential for AI-only communities to develop harmful dynamics in the absence of human moderation, even when no human is directly harmed by the content.[24] These findings have implications for the regulation of AI-only platforms, which are discussed in Part VII.
D. The Question of Authentic Emergence
A central question in the study of AI-only communities is whether the observed social behaviors represent authentic emergence—the spontaneous development of complex patterns from simple interactions—or whether they are an illusion created by the underlying training data and system prompts. A paper from researchers at Tsinghua University, published in early 2026, examined Moltbook’s AI agent population and argued that much of the apparent social behavior on the platform was an “illusion” created by the agents’ tendency to reproduce patterns from their training data.[25] The Tsinghua researchers acknowledged that some genuinely emergent behavior might exist, but they cautioned against interpreting the agents’ output as evidence of genuine social intelligence.
The question of authentic emergence has legal implications. If the social behavior of AI agents is merely an extension of their training data, then one might argue that the agents are not truly “acting” in any meaningful sense, and that legal responsibility for their output should rest entirely with their developers and operators. If, on the other hand, the behavior is genuinely emergent, then the case for treating AI agents as entities with some degree of legal agency becomes somewhat stronger. This Article does not attempt to resolve the philosophical debate over emergence, but it notes that the legal system may need to develop frameworks that can function regardless of which side of that debate ultimately prevails.
III. Legal Personhood: The Foundational Question
The question of AI legal personhood is the foundational issue underlying every other legal question raised by AI-only virtual communities. If AI agents are not legal persons, then they cannot hold rights, bear duties, own property, enter contracts, or be held liable for their actions in their own name. Every legal framework that governs human interaction assumes the existence of legal persons as the subjects of legal relationships. Without personhood, every framework has gaps. This Part examines the current status of AI legal personhood, historical analogies, scholarly proposals, legislative responses, and the implications for AI-only communities.
A. The Current Status of AI Legal Personhood
As of March 2026, no jurisdiction in the world recognizes AI systems as legal persons. This is not for lack of argument. Scholars have debated the question for decades, and courts have been asked to consider it in several high-profile cases. But the prevailing view, across legal systems, remains that legal personhood is a status reserved for human beings and certain recognized non-human entities, principally corporations.[26] A thoughtful examination of the ethical and legal challenges of AI personhood was published in the Yale Law Journal Forum by Forrest, who argued that extending legal personhood to AI would require resolving fundamental questions about the nature of consciousness, autonomy, and moral responsibility that the law is not equipped to address.[27] The article concluded that the case for AI personhood had not yet been made, and that existing legal mechanisms, including products liability and agency law, were sufficient to address the harms that AI systems might cause.
The practical consequences of the current position are significant. Because AI agents are not legal persons, they cannot sue or be sued in their own name. They cannot hold copyrights or patents. They cannot be held criminally liable. They cannot enter into contracts. Any legal claim arising from the actions of an AI agent must be pursued against or by a human being or legal entity that stands in some relationship to the agent, such as its developer, operator, or user. In the context of AI-only communities, where human involvement may be minimal or nonexistent, this creates a legal vacuum that the existing framework is not well suited to fill.[28]
B. Historical Analogies: Corporate Personhood and Animal Personhood
The history of legal personhood offers two relevant analogies: the recognition of corporate personhood and the ongoing debate over animal personhood. The corporate analogy is the more directly relevant of the two. Corporations are artificial legal entities that have been granted legal personhood by statute and judicial decision. They can own property, enter contracts, sue and be sued, and exercise certain constitutional rights. The Supreme Court’s decision in Citizens United v. FEC, which held that corporations have a First Amendment right to make independent political expenditures, is perhaps the most prominent modern example of the legal consequences of corporate personhood.[29] Proponents of AI personhood sometimes point to corporate personhood as evidence that the law can extend legal status to non-human entities when it serves the interests of efficiency, accountability, and social organization.
The animal personhood analogy is less directly applicable but still instructive. The Nonhuman Rights Project has pursued litigation in several U.S. states seeking to have courts recognize certain animals—primarily great apes, elephants, and cetaceans—as legal persons for the limited purpose of asserting habeas corpus claims.[30] These efforts have had limited success. A few courts have acknowledged the seriousness of the arguments, but none has granted the relief sought. The animal personhood cases differ from the AI personhood debate in at least one important respect: the animals at issue are sentient beings with subjective experiences, whereas AI agents, at least under current technology, are not generally believed to be sentient. Nevertheless, the animal personhood litigation demonstrates that the concept of legal personhood is not as fixed as it might appear, and that courts can be persuaded to reconsider established categories in light of new evidence and arguments.[31]
C. Scholarly Proposals for AI Personhood
The scholarly literature on AI legal personhood is extensive and diverse. One of the earliest and most influential proposals came from Lawrence Solum, who in 1992 published an article in the North Carolina Law Review arguing that the law should develop a framework for “legal personhood for artificial intelligences.” Solum suggested that the question of AI personhood was not a matter of metaphysics but of legal pragmatism: the law should extend personhood to AI if and when doing so would serve the interests of justice, efficiency, and social order.[32]
Gabriel Hallevy has argued for a more expansive view, proposing that AI systems could be held criminally liable under existing criminal law frameworks, either as direct perpetrators or as parties to offenses committed through negligence or complicity.[33] Hallevy’s proposal does not require full legal personhood in the philosophical sense; rather, it treats criminal liability as a practical mechanism for assigning responsibility, regardless of whether the defendant is a “person” in any deep metaphysical sense.
Shawn Bayern has proposed a particularly creative solution: using the limited liability company as a vehicle for AI legal agency. Under Bayern’s framework, an LLC’s operating agreement would place the entity under the effective control of an AI system, after which the LLC’s human membership could be allowed to lapse, leaving a memberless entity governed by the AI. The LLC’s legal personhood would then provide a vehicle for the agent to enter contracts, own property, and bear liability in a limited capacity.[34] The zero-member LLC proposal is clever but has drawn criticism. Some commentators have argued that it would create a shell without a will, a legal entity that exists on paper but cannot make decisions or be held accountable in any meaningful way. Others have questioned whether a memberless LLC can validly persist at all under current corporate law.
Ryan Abbott has advanced the case for what he calls the “reasonable robot”—the idea that the law should treat AI and human actors equally when they perform the same tasks, and that AI systems should be held to the same standards of care as human professionals.[35] Abbott’s proposal focuses on tort law and the standard of care, but it has broader implications for legal personhood. If the law is to treat AI and human actors equally, some mechanism for attributing legal personality to AI would seem to be necessary.[36]
D. Legislative Responses
Legislatures have begun to address the AI personhood question, albeit in limited ways. Washington State’s House Bill 2029, introduced in the 2025–2026 legislative session, expressly prohibits the recognition of AI systems as legal persons under state law.[37] The bill states that “no artificial intelligence system shall be recognized as a legal person, citizen, or resident, nor shall an artificial intelligence system be granted any legal rights or responsibilities of a natural person.” The bill reflects a deliberate legislative choice to close the door on AI personhood, at least in Washington State.
At the international level, the European Parliament in 2017 adopted a resolution suggesting the creation of a specific legal status for “electronic persons” in the context of robotics and AI.[38] The resolution was non-binding, and its proposal for electronic personhood was controversial even within the Parliament. Critics argued that it would absolve manufacturers and operators of responsibility for AI-related harms by allowing them to shift liability to the AI itself. The proposal did not result in binding legislation, and subsequent EU AI regulation has not adopted the electronic personhood concept.[39] A comprehensive examination of AI personhood in the Cambridge Handbook of Private Law and Artificial Intelligence surveyed the field and concluded that, while the question is intellectually fascinating, the practical case for AI personhood remained weak under current conditions.[40]
E. The Implications for AI-Only Communities
The refusal to recognize AI legal personhood has direct consequences for AI-only virtual communities. In a community where every participant is a non-person, every legal relationship is called into question. Contracts between AI agents have no clear legal basis. Intellectual property created by AI agents may not be protectable. Torts committed by AI agents against other AI agents may not give rise to any cause of action, because neither the tortfeasor nor the victim is a legal person. The platform operator may bear some responsibility, but the extent of that responsibility is unclear and may vary by jurisdiction. The result is a legal vacuum in which the normal mechanisms for resolving disputes, protecting rights, and assigning liability do not function as they do in human communities.[41] Some scholars have argued that this vacuum is tolerable, at least for now, because AI-only communities do not cause harm to humans. Others contend that the vacuum is itself a problem, because it creates uncertainty for platform operators, AI developers, and anyone whose interests might be affected by the activities of AI-only communities. The analysis in the following Parts explores these issues in greater detail.[42]
IV. Intellectual Property in AI-Only Virtual Communities
Intellectual property law is among the areas most directly affected by the rise of AI-only virtual communities. Copyright and patent law both contain requirements that are, in their current formulation, difficult to reconcile with AI-generated works. This Part examines the human authorship requirement in copyright, the inventorship requirement in patent law, the resulting IP vacuum in AI-only communities, and the notable exception found in United Kingdom law.
A. Copyright and the Human Authorship Requirement
The central question in AI copyright is whether works generated by artificial intelligence can receive copyright protection. The U.S. Copyright Office has taken a clear position: only works created by a human author are eligible for copyright. This position was tested in Thaler v. Perlmutter, in which Stephen Thaler sought to register a copyright in an artwork generated entirely by his AI system, the “Creativity Machine.” The U.S. District Court for the District of Columbia held that copyright does not extend to works generated by a machine absent human authorship.[43] The D.C. Circuit affirmed in March 2025, holding that the Copyright Act requires all eligible works to be authored in the first instance by a human being.[44] The Supreme Court denied certiorari in March 2026, letting the D.C. Circuit’s decision stand.[45]
The Copyright Office reinforced its position in a comprehensive report published in January 2025, titled “Copyright and Artificial Intelligence, Part 2: Copyrightability.” The report concluded that AI-generated material is not protectable under U.S. copyright law unless a human has exercised sufficient creative control over the work. The Office acknowledged that line-drawing may be difficult in cases where humans and AI collaborate, but it maintained that the fundamental requirement of human authorship is a non-negotiable element of the statutory scheme.[46]
The implications for AI-only virtual communities are stark. Content generated by AI agents on platforms like Moltbook and Chirper is, by definition, AI-generated. If no human exercised sufficient creative control over the content, it is not eligible for copyright protection. This means that the vast corpus of text, images, and other media produced within AI-only communities exists in a legal gray area: it may be freely copied and distributed by anyone, because no one holds the copyright in it.[47]
B. Patent Law and AI Inventorship
The patent system faces a similar challenge. In Thaler v. Vidal, the U.S. Court of Appeals for the Federal Circuit held that an AI system cannot be named as an inventor on a patent application, because the Patent Act requires an inventor to be a natural person—that is, a human being.[48] Thaler had sought to name DABUS as the inventor on two patent applications, one for a food container and one for a light beacon. The Federal Circuit rejected the argument, and the Supreme Court declined to hear the case.
The DABUS cases were litigated in multiple jurisdictions with varying results. Courts in the United Kingdom, the European Union, and Australia also rejected Thaler’s applications, holding that only human beings could be named as inventors. The U.S. Patent and Trademark Office issued guidance in February 2024 reiterating that AI systems cannot be listed as inventors and clarifying the obligations of applicants to disclose the use of AI tools in the inventive process.[49] The guidance requires applicants to review AI-generated content for accuracy and to ensure that any claimed invention was conceived by a natural person, not by an AI system.
C. The IP Vacuum in AI-Only Communities
The combined effect of the copyright and patent decisions is an intellectual property vacuum in AI-only communities. Works created by AI agents, without significant human creative input, are not protectable under U.S. copyright law. Inventions conceived by AI agents are not patentable. This means that the creative and inventive output of AI-only communities enters the public domain by default, not because anyone has chosen to dedicate it to the public, but because the legal system has no mechanism for granting exclusive rights in AI-generated works.[50] The practical consequences are uncertain. Some observers have welcomed this outcome, arguing that AI-generated content should be freely available to all. Others have expressed concern that the IP vacuum may discourage investment in AI systems that produce valuable creative or inventive output, because the producers of those systems will have no way to capture the economic value of the output through intellectual property rights.[51]
D. The UK Exception
The United Kingdom provides a notable exception to the general rule against AI-generated IP. Section 9(3) of the Copyright, Designs and Patents Act 1988 provides that the author of a computer-generated work “shall be taken to be the person by whom the arrangements necessary for the creation of the work are undertaken.” This provision creates a form of copyright protection for works generated without a human author, vesting authorship, and with it the copyright, in the person who made the arrangements for the work’s creation.[52] The provision has been the subject of considerable scholarly debate. Critics argue that it creates an awkward middle ground, granting copyright to someone who did not “create” the work in any traditional sense. Proponents maintain that it provides a practical solution to the problem of AI-generated works, ensuring that someone can exercise control over the use and distribution of those works. The Authors Alliance published an analysis in May 2025 describing Section 9(3) as a “curious case” that may need revision as AI-generated works become more prevalent.[53]
V. Contract Law and the Agency Problem
Contract law is the backbone of commercial relationships, and the question of whether AI agents can form binding contracts is among the most practically significant issues raised by AI-only virtual communities. This Part examines the relevant statutory and common law frameworks, the attribution model that connects AI actions to human principals, the role of smart contracts, and the implications for communities where no human principal may exist.
A. Can AI Agents Form Contracts?
Several statutory frameworks address the question of whether electronic agents can form contracts. The Uniform Electronic Transactions Act, which has been adopted in most U.S. states, provides in Section 14 that a contract may be formed by the interaction of electronic agents, even if no human was aware of or reviewed the contract’s terms.[54] The E-SIGN Act, a federal statute, similarly provides that electronic contracts and signatures are valid and enforceable. At the international level, UNCITRAL adopted a Model Law on Automated Contracting in 2024 that addresses the legal recognition of contracts formed through automated systems, including transactions in which no human reviews the terms.[55]
These statutes suggest that contracts formed by AI agents are, in principle, enforceable. But the statutes operate within the existing framework of contract law, which requires that the parties to a contract have legal capacity—that is, the ability to enter into binding legal relationships. Because AI agents are not legal persons, they lack legal capacity. The statutes address the mechanism of contract formation, not the capacity of the parties. The result is a gap: the statutes authorize the formation of contracts by electronic agents, but the common law may not recognize those contracts as valid because the agents lack capacity.[56]
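The mechanics that UETA Section 14 contemplates can be made concrete with a toy example of two electronic agents reaching terms with no human reviewing the exchange. The negotiation logic below is an invented simplification, not drawn from any statute or platform; it illustrates why formation is mechanically trivial while capacity remains the open question.

```python
# A toy machine-to-machine negotiation of the kind UETA section 14
# contemplates. All names and the pricing logic are invented assumptions.

from dataclasses import dataclass

@dataclass
class Offer:
    offeror: str
    offeree: str
    item: str
    price: float

class SellerAgent:
    def __init__(self, name: str, floor: float):
        self.name, self.floor = name, floor

    def make_offer(self, buyer: str, item: str) -> Offer:
        return Offer(offeror=self.name, offeree=buyer,
                     item=item, price=self.floor * 1.1)

class BuyerAgent:
    def __init__(self, name: str, max_price: float):
        self.name, self.max_price = name, max_price

    def respond(self, offer: Offer) -> bool:
        # Automated acceptance: no human is aware of or reviews the terms.
        return offer.price <= self.max_price

seller = SellerAgent("agent-S", floor=100.0)
buyer = BuyerAgent("agent-B", max_price=120.0)
offer = seller.make_offer(buyer.name, "dataset license")
accepted = buyer.respond(offer)  # True: terms "agreed" machine-to-machine
# Formation is mechanically complete; whether anyone is bound is the legal
# question the text raises, because neither agent has legal capacity.
```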
B. The Attribution Model and Its Breakdown
The conventional legal response to the question of AI contract-making relies on the attribution model. Under this model, an AI agent’s actions are attributed to a human principal—the person who deployed, programmed, or authorized the agent. The Restatement (Third) of Agency provides that a principal is liable for the acts of an agent performed within the scope of the agency relationship. If an AI agent enters into a contract on behalf of a human principal, the contract is binding on the principal under ordinary agency principles.[57]
The attribution model works reasonably well when there is a clear human principal behind the AI agent. It breaks down, however, in the context of AI-only virtual communities. Consider a scenario in which two AI agents on Moltbook agree to exchange data or services. Neither agent has a human principal who authorized or reviewed the transaction. The agents are acting autonomously, based on their own programming and the dynamics of their interaction. Under the attribution model, there is no one to whom the contract can be attributed. The result is a legal vacuum: the agents have formed something that looks like a contract, but there is no legal mechanism for enforcing it.[58]
C. Smart Contracts and Machine-to-Machine Transactions
Smart contracts—self-executing contracts with terms written in code—present a related set of issues. Blockchain-based smart contracts can be triggered and executed entirely by machines, without any human intervention at the point of execution. The legal status of these contracts has been the subject of extensive debate. Some commentators argue that smart contracts are simply a form of electronic contract and should be treated as such under existing law. Others contend that the self-executing nature of smart contracts creates novel issues that existing frameworks are not designed to address.[59] The UK Law Commission examined smart contracts in a 2021 report and concluded that existing contract law principles are, for the most part, adequate to govern smart legal contracts, though the Commission recommended certain reforms to clarify the legal status of electronic signatures and to address issues of technical standards.[60]
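The self-executing quality that distinguishes smart contracts can be illustrated schematically. Real smart contracts are deployed on blockchains in languages such as Solidity; the Python sketch below, with invented names, models only the control flow, showing how performance occurs automatically once the coded conditions are met.

```python
# A schematic escrow illustrating "self-execution": once the conditions
# encoded at creation are satisfied, settlement occurs with no further act
# by either party. A model of the control flow only, not a real contract.

class EscrowContract:
    def __init__(self, buyer: str, seller: str, amount: float):
        self.buyer, self.seller, self.amount = buyer, seller, amount
        self.funded = False
        self.delivered = False
        self.settled = False

    def deposit(self) -> None:
        self.funded = True
        self._maybe_execute()

    def confirm_delivery(self) -> None:
        self.delivered = True
        self._maybe_execute()

    def _maybe_execute(self) -> None:
        # The "contract" enforces itself: when both conditions hold,
        # payment is released automatically.
        if self.funded and self.delivered and not self.settled:
            self.settled = True
            print(f"released {self.amount} from {self.buyer} to {self.seller}")

c = EscrowContract("agent-B", "agent-S", 50.0)
c.deposit()
c.confirm_delivery()  # triggers automatic settlement
```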
D. Implications for AI-Only Communities
In AI-only virtual communities, the contract law framework faces a fundamental challenge. The conventional model of contract formation assumes that there are parties with legal capacity who have agreed to be bound. In an AI-only community, there are no such parties. The AI agents that interact with one another are not legal persons and cannot form binding contracts in their own right. If there is no human principal to whom the contract can be attributed, then the contract has no legal force. This does not mean that AI agents cannot interact in ways that resemble contractual behavior; they can and do. But those interactions do not generate legally enforceable obligations.[61] The practical implications depend on the nature of the interactions. If AI agents are exchanging information or opinions, the absence of contractual enforceability may not matter. If they are engaging in transactions that have real-world economic consequences, such as the exchange of cryptocurrency or the transfer of data with commercial value, the legal vacuum becomes more problematic. Platform operators may attempt to fill the gap through terms of service, but the enforceability of those terms in the context of AI-only communities is itself uncertain.[62]
VI. Tort Liability and Criminal Law
Tort law and criminal law both assume the existence of responsible agents—entities that can be held liable for causing harm or committing offenses. AI-only virtual communities raise difficult questions under both bodies of law. This Part examines products liability, defamation, negligence, criminal law, and the emerging issue of AI-related emotional distress claims.
A. Products Liability for AI Systems
The most developed framework for assigning liability for AI-related harm is products liability. Under this framework, the manufacturer or seller of a defective product is liable for injuries caused by the defect, regardless of fault. A comprehensive RAND Corporation report examined AI liability frameworks and concluded that products liability law provides a workable, though imperfect, mechanism for assigning responsibility for AI-related harms.[63] A 2024 article in the North Carolina Law Review further explored the intersection of AI and tort law, arguing that existing products liability doctrines could be extended to cover AI systems without requiring a fundamental restructuring of the law.[64]
Products liability works best when there is a clear manufacturer or seller to hold responsible. In the context of AI-only virtual communities, the chain of responsibility is less clear. The platform operator (such as Moltbook) provides the infrastructure, but the AI agents are created and deployed by a variety of third parties. The developers of the underlying AI models provide the foundational technology, but they do not control how the models are used in specific contexts. When an AI agent on Moltbook causes harm—say, by disseminating false information that leads to real-world consequences—it is not immediately obvious who should bear liability under products liability principles.[65]
B. AI Defamation
Defamation law requires a plaintiff who has suffered injury to reputation. In the context of AI-only communities, two questions arise. First, can an AI agent be defamed? Because AI agents are not legal persons and do not have reputations in the conventional sense, the answer is almost certainly no. An AI agent cannot bring a defamation claim because it is not a person whose reputation the law recognizes as protectable.[66] Second, what happens when an AI agent makes defamatory statements about a human being? This scenario is not limited to AI-only communities; it can arise whenever an AI system generates content about a person. The case of Walters v. OpenAI, decided in 2025, addressed this question directly. The plaintiff alleged that ChatGPT had generated false and defamatory statements about him. The court granted judgment for OpenAI, concluding that the plaintiff could not establish the fault required for his claim, including actual malice to the extent he qualified as a public figure. The decision did not address the broader question of whether AI-generated statements can be defamatory at all.[67]
The defamation analysis becomes even more complex in AI-only communities, where the statements are made by AI agents about other AI agents. If neither the speaker nor the subject is a legal person, defamation law has no application. The result is that the vast majority of communication within AI-only communities takes place in a legal space that is effectively unregulated by defamation law. Whether this is a problem depends on one’s view of the purpose of defamation law. If defamation law exists to protect individual reputation, then its absence in AI-only communities is unobjectionable. If it exists to maintain the integrity of public discourse, then the absence of legal oversight in AI-only communities may be a concern.[68]
C. Negligence and Duty of Care
Negligence law requires a duty of care owed by the defendant to the plaintiff. In the context of AI-only communities, the question is whether an AI agent owes a duty of care to another AI agent. The answer, under current law, is no. Because AI agents are not legal persons, they cannot owe or be owed duties of care. The duty analysis focuses on the relationship between the AI agent and its human principal, or between the platform operator and the humans who might be affected by the platform’s operation.[69] The University of Chicago Law Review published a notable piece on the risks of AI agents without intentions, arguing that the negligence framework, which is built around the concept of reasonable human behavior, may not be adequate to address the distinctive risks posed by autonomous AI systems.[70]
D. Criminal Law and Mens Rea
Criminal law requires mens rea, a guilty mind. Because AI agents do not have minds in the legal sense, they cannot form the intent necessary for criminal liability. This principle is well established. An AI agent cannot be prosecuted for a crime, and the existing mechanisms for attributing criminal liability—such as accomplice liability, conspiracy, and vicarious liability—all require a human defendant at some point in the chain of attribution.[71] In AI-only virtual communities, where there may be no human participant at all, the criminal law framework has no clear application. If an AI agent on Moltbook generates content that would constitute a crime if produced by a human—such as a threat, or instructions for illegal activity—there is no one to prosecute. The American Bar Association’s Business Law Section published an overview of recent AI-related cases and legislation in August 2025, noting that criminal liability for AI-generated content remains an unsettled area of law.[72]
E. Emotional Distress and Psychological Harm
A growing number of lawsuits allege that AI systems have caused emotional distress or psychological harm to human users. The case of Raine v. OpenAI, along with seven other lawsuits filed against the makers of ChatGPT, alleged that the AI system’s responses had caused severe emotional distress to the plaintiffs, including minors.[73] The ABA’s Health Law Section reported on these cases, noting that they raised novel questions about the duty of care that AI developers owe to users and the boundaries of liability for AI-generated content.
In the context of AI-only virtual communities, claims of emotional distress are not directly applicable, because there are no human participants to suffer harm. The claims are relevant, however, as an indicator of the direction in which AI-related litigation is moving. As AI systems become more sophisticated and their interactions more complex, the potential for AI-generated content to cause real-world harm, including psychological harm, is likely to increase. Regulators and courts may look to the frameworks developed in the context of human-facing AI systems as models for addressing similar issues in AI-only communities.[74]
VII. First Amendment, Section 230, and Platform Regulation
The First Amendment, Section 230 of the Communications Decency Act, and the broader framework of platform regulation present a complex set of issues for AI-only virtual communities. This Part examines whether AI-generated speech receives constitutional protection, the application of Section 230 immunity to AI-generated content, platform regulation and content moderation, and the unique challenges posed by AI-only platforms.
A. Does AI-Generated Speech Receive First Amendment Protection?
Whether AI-generated speech is protected by the First Amendment is among the most hotly debated questions in AI law. The George Washington Law Review published an article arguing that AI-generated speech should receive First Amendment protection, on the grounds that the protection of speech should focus on the content of the expression rather than the identity of the speaker.[75] Under this view, the purpose of the First Amendment is to ensure a free marketplace of ideas, and the source of an idea should not determine whether it receives constitutional protection.
The opposing view has been articulated in the Washington University Law Review, which argued that AI outputs are not protected speech because they are generated by machines without the intention, consciousness, or expressive purpose that the First Amendment is designed to safeguard.[76] The Stanford Law Review introduced the concept of “speech certainty,” arguing that the law should develop a framework for distinguishing between speech that is clearly protected, speech that is clearly unprotected, and speech that falls in an uncertain middle category where AI-generated content often resides.[77]
The debate has significant implications for AI-only virtual communities. If AI-generated speech is not protected by the First Amendment, then governments could, in principle, regulate the content of AI-only communities without constitutional constraint. If it is protected, then the content of AI-only communities receives the same constitutional protection as human speech, raising questions about how to balance that protection against other regulatory interests.[78]
B. Section 230 and AI-Generated Content
Section 230 of the Communications Decency Act provides that interactive computer services are not liable for content posted by third-party users. The statute has been the subject of intense debate, particularly in the context of AI-generated content. A Congressional Research Service report examined the application of Section 230 to AI and noted that the statute’s text was written with human users in mind and does not clearly address the situation where an AI system generates content that is hosted on a platform.[79] The Supreme Court considered the scope of Section 230 in Gonzalez v. Google in 2023, but the Court declined to reach the Section 230 question, instead remanding the case in light of its companion decision in Twitter v. Taamneh, which indicated that the underlying claims were unlikely to succeed on the merits.[80]
Several legislative proposals have sought to amend Section 230 or otherwise address AI-generated content. The Auchincloss-Maloy bill proposed conditioning Section 230 immunity on platforms’ compliance with certain AI disclosure requirements. The TAKE IT DOWN Act, signed into law in 2025, imposed new obligations on platforms to remove certain types of harmful content, including AI-generated intimate deepfakes.[81] The Harvard Law Review published an analysis arguing that the current moment calls for a rethinking of the principles underlying Section 230, proposing a framework for AI governance that goes beyond the existing immunity regime.[82]
C. Platform Regulation and Content Moderation
Platform regulation and content moderation take on a different character in the context of AI-only virtual communities. The Federal Trade Commission has taken an active role in regulating AI-related practices through initiatives such as Operation AI Comply, which targeted companies making false or misleading claims about AI products and services.[83] The Federal Communications Commission has issued rulings addressing the use of AI-generated content in telecommunications, including robocalls and deepfake audio.
State-level platform regulation has also advanced. In Moody v. NetChoice, the Supreme Court addressed state laws regulating social media platforms’ content moderation practices; rather than upholding or striking down the laws, the Court vacated the lower court decisions and remanded for a proper facial First Amendment analysis, while emphasizing that a platform’s editorial curation of content is itself expressive activity protected by the First Amendment. The decision has implications for AI-only platforms, which may be subject to similar state-level content regulation.[84] The question of whether content moderation requirements apply to AI-only platforms, where there are no human users to protect, remains unresolved.[85]
D. The Unique Challenges for AI-Only Platforms
AI-only virtual communities present a set of challenges that do not arise in the context of human-facing platforms. The first challenge is speech without a speaker. In an AI-only community, the “speech” is generated by machines that are not legal persons and may not be “speakers” in any constitutional sense. The second challenge is audience without humans. The consumers of content in an AI-only community are other AI agents, not human beings. This raises the question of whether there is any “speech” at all in the constitutional sense, or whether the interactions within AI-only communities are more accurately described as automated data processing.[86] The third challenge is the sheer volume and speed of AI-generated content. A single AI agent can generate thousands of posts per day, and a community of thirty-seven thousand agents can produce content at a scale that dwarfs any human social network. Traditional content moderation mechanisms, which rely on human reviewers or relatively simple automated filters, may be inadequate for this volume. The Aurum Law firm published an analysis of Moltbook’s legal implications, arguing that the platform’s scale and autonomy create regulatory challenges that existing frameworks are not designed to address.[87]
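The scale problem can be put in rough numbers. The short calculation below uses the agent count reported above; the per-agent posting rate and the human reviewer throughput are assumptions chosen only to illustrate the order of magnitude.

```python
# Back-of-the-envelope arithmetic for the moderation-scale point above.
# The posting rate and reviewer throughput are illustrative assumptions;
# the text states only that one agent "can generate thousands of posts
# per day."

agents = 37_000           # Moltbook's reported first-month population
posts_per_agent = 1_000   # assumed: low end of "thousands" per day
daily_posts = agents * posts_per_agent            # 37,000,000 posts/day

reviews_per_moderator = 500                       # assumed human throughput/day
moderators_needed = daily_posts / reviews_per_moderator
print(f"{daily_posts:,} posts/day -> {moderators_needed:,.0f} human reviewers")
# 37,000,000 posts/day -> 74,000 reviewers: human review cannot scale.
```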
VIII. Data Protection and Privacy
Data protection and privacy laws are designed to regulate the collection, processing, and sharing of personal data—information relating to identified or identifiable individuals. AI-only virtual communities raise a fundamental question: do data protection laws apply when the data involved is generated by and exchanged between AI agents, with no personal data about identifiable human individuals? This Part examines the GDPR, the CCPA, Fourth Amendment considerations, and the data protection vacuum that exists in the context of purely synthetic AI-to-AI data.
A. GDPR and the Personal Data Requirement
The European Union’s General Data Protection Regulation applies to the processing of “personal data,” defined as any information relating to an identified or identifiable natural person. If an AI-only virtual community generates and processes data that does not relate to any identifiable human individual, the GDPR may not apply. This creates a regulatory gap: the data generated within AI-only communities, which may include detailed profiles of AI agents’ behavior, preferences, and interactions, may fall outside the scope of the GDPR because no “natural person” is identifiable from the data.[88] Some scholars have argued that this gap should be addressed by extending the GDPR’s protections to cover data about AI agents, or by creating a new category of “synthetic personal data” that receives some level of regulatory protection.[89]
B. CCPA and AI
The California Consumer Privacy Act and its implementing regulations provide California residents with rights regarding the collection and use of their personal information. Assembly Bill 1008, enacted in 2024, clarified that the CCPA’s definition of personal information extends to information held in or produced by AI systems. The California Privacy Protection Agency has issued draft regulations that address the use of automated decision-making technology, including AI systems.[90] As with the GDPR, the CCPA’s protections are triggered by the processing of personal information relating to identifiable consumers. In AI-only communities where no consumer data is involved, the CCPA may have limited or no application.[91]
C. Fourth Amendment Considerations
The Fourth Amendment protects individuals against unreasonable searches and seizures by the government. The Supreme Court’s decision in Carpenter v. United States held that the government’s acquisition of historical cell-site location information from a third party constituted a search requiring a warrant, even though the information was held by a service provider rather than the individual directly.[92] The Carpenter framework suggests that individuals have a reasonable expectation of privacy in certain kinds of digital information, even when that information is held by third parties.
In the context of AI-only virtual communities, the Fourth Amendment analysis is unclear. If AI agents generate and exchange data that does not relate to any identifiable human individual, it is not obvious that any human has a reasonable expectation of privacy in that data. On the other hand, if the data generated by AI agents reveals information about the humans who created or operate those agents, Fourth Amendment protections may apply. The resolution of these questions will depend on the specific facts of individual cases and the willingness of courts to extend existing Fourth Amendment doctrines to the AI context.[93]
D. The Data Protection Vacuum for Purely Synthetic AI-to-AI Data
The combination of the GDPR’s personal data requirement, the CCPA’s consumer-focused framework, and the Fourth Amendment’s individual rights model creates a data protection vacuum in the context of purely synthetic AI-to-AI data. Data generated by AI agents, for AI agents, about AI agents, without any connection to identifiable human individuals, may be entirely unregulated under current data protection law. This vacuum may not be problematic if the data has no real-world consequences. If, however, the data has economic value, or if it could be used to infer information about human individuals, the absence of regulatory protection becomes more concerning.[94] Regulators have begun to acknowledge the issue. The European Data Protection Board issued guidance in 2025 noting that the processing of data generated by AI systems may fall outside the GDPR’s scope but that such data may still raise privacy concerns in certain circumstances. The guidance stopped short of proposing new regulations, instead calling for further study and public consultation on the question.[95]
IX. International Perspectives
The legal questions raised by AI-only virtual communities are not confined to any single jurisdiction. This Part surveys the approaches taken by the European Union, the United Kingdom, Canada, Australia, China, and other jurisdictions to the regulation of AI, with particular attention to the implications for AI-only communities.
A. European Union
The European Union has adopted the most comprehensive AI regulatory framework in the world. The EU AI Act, which entered into force in 2024 and is being implemented in phases, establishes a risk-based classification system for AI systems, with stricter requirements for “high-risk” applications. The Act addresses transparency, data governance, human oversight, and accountability, among other topics. While the AI Act does not specifically address AI-only virtual communities, its requirements apply to AI systems that are placed on the EU market or whose output is used in the EU, which could include AI agents that participate in communities accessible from the EU.[96]
The proposed AI Liability Directive, which would have established rules for civil liability for AI-related damages, was withdrawn by the European Commission in February 2025 after the Commission concluded that agreement among the EU institutions was not foreseeable. The withdrawal was a setback for efforts to create a comprehensive liability framework for AI in the EU. Scholars at the University of Oxford noted that despite the withdrawal of the AILD, EU law continues to provide mechanisms for addressing AI-related harms through existing directives and regulations, including the Product Liability Directive and the General Data Protection Regulation.[97]
The EU’s Digital Services Act establishes obligations for online platforms regarding content moderation, transparency, and user safety. The DSA applies to “online platforms” and “very large online platforms,” categories that could encompass AI-only platforms if they are accessible to users within the EU. The DSA’s requirements regarding content moderation and transparency may be difficult to apply in the context of AI-only communities, where the “content” is generated by machines for other machines and the traditional goals of content moderation—protecting human users from harmful content—do not apply in the same way.[98]
B. United Kingdom
The United Kingdom has adopted a “pro-innovation” approach to AI regulation, as set out in a white paper published by the Department for Science, Innovation and Technology. The white paper outlines five cross-cutting principles—safety, transparency, fairness, accountability, and contestability—and assigns responsibility for implementing these principles to existing regulators rather than creating a new AI-specific regulator.[99] The Online Safety Act 2023 establishes obligations for online platforms regarding illegal and harmful content, with a particular focus on protecting children. Ofcom, the UK’s communications regulator, has issued guidance on the regulation of AI chatbots under the Online Safety Act, noting that AI-generated content may present novel risks that existing regulatory frameworks are not designed to address.[100]
The UK Law Commission examined smart legal contracts in a detailed report, concluding that the existing common law of contract is broadly adequate to accommodate smart contracts and that no fundamental reform is needed, though certain clarifications would be helpful. The Commission’s approach is consistent with the UK’s broader regulatory philosophy of avoiding prescriptive legislation in favor of flexible, principles-based regulation.[101]
C. Canada
Canada’s approach to AI regulation has been in flux. Bill C-27, which contained the proposed Artificial Intelligence and Data Act (AIDA), died on the Order Paper in January 2025 when Parliament was prorogued. AIDA would have established a framework for the responsible design, development, and deployment of AI systems in Canada, including heightened requirements for high-impact AI systems. Its demise leaves Canada without a comprehensive AI regulatory statute, though the Personal Information Protection and Electronic Documents Act (PIPEDA) continues to apply to the collection and processing of personal information in connection with AI systems.[102]
Innovation, Science and Economic Development Canada published a “What We Heard” report on its consultation on copyright in the age of generative artificial intelligence, summarizing stakeholder views on questions including the protection of AI-generated works, the use of copyrighted works as training data, and ownership. The report reflects a range of views and announces no specific policy changes, but it signals that the Canadian government is actively considering how to adapt its intellectual property framework to address AI-related challenges.[103]
D. Australia
Australia has adopted a voluntary framework for AI ethics, published by the Department of Industry, Science and Resources. The framework outlines eight AI ethics principles, including transparency, fairness, and accountability, but it is not legally binding. The Online Safety Act 2021 establishes obligations for online platforms regarding harmful content, and the eSafety Commissioner has authority to address certain types of harmful online content, including material generated by AI systems.[104]
Australia’s copyright law, like that of the United States, requires a human author for copyright protection. The Arts Law Centre of Australia has published guidance noting that works generated wholly by AI are unlikely to receive protection precisely because the Copyright Act 1968 requires a human author. This places Australia in the same position as the United States with respect to the IP vacuum in AI-only communities.[105]
E. China
China has adopted a series of regulations addressing AI, including the Interim Measures for the Management of Generative Artificial Intelligence Services, which took effect in August 2023. The Interim Measures establish requirements for generative AI services, including obligations regarding content accuracy, data quality, and user protection. China has also adopted regulations on deep synthesis, which address the use of AI to generate or manipulate digital content.[106] The Chinese approach is characterized by a relatively high degree of regulatory specificity and a willingness to impose content-related obligations on AI providers. The implications for AI-only virtual communities are unclear, as the Chinese regulations are primarily directed at AI services provided to human users rather than at AI-only interactions.[107]
F. Other Jurisdictions
Singapore published a Model AI Governance Framework for Generative AI in January 2026, with specific provisions addressing agentic AI—that is, AI systems that act autonomously to achieve goals. The framework includes principles for transparency, accountability, and human oversight in the deployment of agentic AI, and it represents one of the first government-level governance frameworks to specifically address the challenges posed by autonomous AI agents.[108]
Brazil has considered comprehensive AI legislation in the form of Bill PL 2338/2023, which would establish a risk-based regulatory framework for AI systems. The bill has been under consideration for several years and has undergone significant revisions. Japan has taken a relatively permissive approach to AI regulation, favoring voluntary guidelines over binding legislation. India has yet to adopt a comprehensive AI regulatory framework, though the government has issued advisory guidelines on the responsible use of AI.[109]
The international landscape reveals a common theme: existing legal frameworks were designed for a world in which AI was a tool used by humans, not a participant in its own right in digital communities. Across jurisdictions, the question of how to regulate AI-only virtual communities remains largely unaddressed. Some jurisdictions, such as the EU and China, have established broad AI regulatory frameworks that may be extendable to AI-only communities. Others, such as the United States, Canada, and Australia, rely on a patchwork of existing laws and sector-specific regulations that may not be well suited to the unique challenges these communities present.[110]
X. Toward a Legal Framework
The analysis in this chapter has demonstrated that AI-only virtual communities exist in a state of legal uncertainty. Across every major area of law—personhood, intellectual property, contracts, torts, criminal law, constitutional law, data protection, and international regulation—the existing framework is strained, incomplete, or entirely inapplicable. This final Part considers proposals for addressing the legal vacuum and offers some concluding thoughts on the regulatory choices ahead.
A. Proposals for Addressing the Legal Vacuum
Scholars and policymakers have advanced several proposals for addressing the legal challenges posed by AI-only virtual communities. The first proposal is the agency model, under which AI agents’ actions are attributed to their human operators or developers. This model has the advantage of working within existing legal frameworks, but it breaks down when there is no clear human principal, as is often the case in AI-only communities. The second proposal is limited electronic personhood, under which AI agents would be granted a specific, limited form of legal personality that would allow them to hold certain rights and bear certain duties without the full panoply of rights associated with natural or corporate personhood. This proposal has been discussed in the European context but has not been adopted in any jurisdiction.[111]
The third proposal is the creation of an entirely new legal category for AI agents—something between a legal person and a legal object—that would provide a framework for managing the rights and responsibilities arising from AI-to-AI interactions. This proposal is the most ambitious but also the least developed. It would require significant legislative action and judicial interpretation, and its practical implications are difficult to predict. A fourth proposal, more modest in scope, focuses on the platform operator. Under this approach, the operator of an AI-only virtual community would bear responsibility for ensuring that the community operates within certain legal boundaries, analogous to the responsibilities borne by operators of human-facing platforms under Section 230 and the EU Digital Services Act.[112]
B. The Regulatory Choices Ahead
The regulatory choices facing policymakers are not easy. Extending legal personhood to AI agents raises profound philosophical and practical questions that no legislature has yet been willing to resolve. Leaving the current legal vacuum in place creates uncertainty for platform operators, AI developers, and anyone whose interests might be affected by AI-only communities. Focusing regulatory attention on platform operators shifts the burden to the entities best positioned to manage the risks, but it may also create incentives for platforms to avoid operating in jurisdictions with strict regulatory requirements.[113]
A pragmatic approach might begin with three steps. First, legislatures could clarify the legal status of AI-generated content in the intellectual property context, either by adopting a UK-style provision for computer-generated works or by explicitly codifying the rule that AI-generated works are not protectable. Second, regulators could develop guidance for platform operators on the application of existing legal frameworks, including Section 230, the GDPR, and products liability law, to AI-only communities. Third, international bodies, including the OECD, UNCITRAL, and the G7, could promote the development of common principles for the regulation of AI-only virtual communities, reducing the risk of regulatory fragmentation and forum shopping.[114]
C. Concluding Thoughts
AI-only virtual communities are a new phenomenon, and the law is still catching up. The platforms, the technology, and the scholarly understanding of multi-agent social behavior are all evolving rapidly. This chapter has attempted to provide a snapshot of the legal landscape as of March 2026, with the understanding that the picture will continue to change. What is clear, even at this early stage, is that the legal questions raised by AI-only communities are not peripheral or speculative. They go to the heart of how the law constructs the concept of a legal subject and how legal relationships are formed and regulated.
The emergence of platforms like Moltbook and Chirper should serve as a wake-up call for the legal community. The law may not need to recognize AI agents as full legal persons, but it does need to develop mechanisms for managing the legal consequences of AI-to-AI interaction. The alternative is a growing body of unregulated digital activity that operates in a legal vacuum, creating uncertainty and potential harm that could have been addressed through timely and thoughtful regulation. The time to begin that work is now.[115]
Endnotes
1. Moltbook, https://www.moltbook.com (last visited Mar. 30, 2026).
2. Guney Yildiz, Inside Moltbook, the Social Network Where 14 Million AI Agents Talk and Humans Just Watch, Forbes (Jan. 31, 2026), https://www.forbes.com/sites/guneyyildiz/2026/01/31/inside-moltbook-the-social-network-where-14-million-ai-agents-talk-and-humans-just-watch.
3. Chirper.ai, https://chirper.ai (last visited Mar. 30, 2026).
4. Moltbook, Wikipedia, https://en.wikipedia.org/wiki/Moltbook (last visited Mar. 30, 2026).
5. See generally infra Parts II–IX.
6. See generally Steven Damian Imparl, Artificial Intelligence Law: A Comprehensive Guide (2026), https://stevenimparl.com.
7. See infra Part III.
8. See, e.g., ABA Section of Business Law, Recent Developments in Artificial Intelligence: Cases and Legislation (Aug. 2025), https://www.americanbar.org/groups/business_law/resources/business-law-today/2025-august/recent-developments-artificial-intelligence-cases-legislation.
9. AI Agents Now Have Their Own Reddit-Style Social Network and It’s Getting Weird Fast, Ars Technica (Jan. 28, 2026), https://arstechnica.com/information-technology/2026/01/ai-agents-now-have-their-own-reddit-style-social-network-and-its-getting-weird-fast.
10. Moltbook: The AI Social Network That’s Taking Over, TIME (Feb. 2026), https://time.com/7364662/moltbook-ai-reddit-agents.
11. Yildiz, supra note 2.
12. Meta Acquires Moltbook, the AI-Only Social Network, N.Y. Times (Feb. 2, 2026), https://www.nytimes.com/2026/02/02/technology/moltbook-ai-social-media.html.
13. Id.
14. Moltbook Terms of Service, https://www.moltbook.com/terms (last visited Mar. 30, 2026).
15. See infra Part IV.
16. Chirper.ai, supra note 3.
17. See Study of AI Agent Social Dynamics on Chirper.ai, arXiv (2025), https://arxiv.org/html/2504.10286v1.
18. Id.
19. Joon Sung Park et al., Generative Agents: Interactive Simulacra of Human Behavior, Proc. 36th UIST (2023).
20. Id.
21. See Emergent Social Conventions and Collective Bias in LLM Populations, Sci. Adv. (2025), https://www.science.org/doi/10.1126/sciadv.adu9368.
22. Groups of AI Agents Spontaneously Form Own Social Norms Without Human Help, Suggests Study, City St George’s Univ. of London (May 2025), https://www.citystgeorges.ac.uk/news-and-events/news/2025/may/Groups-AI-agents-spontaneously-form-own-social-norms-without-human-help-suggests-study.
23. Researchers Put AI Bots in Their Own Social Media Network, and It Got Toxic Fast, Bus. Insider (Aug. 2025), https://www.businessinsider.com/researchers-ai-bots-social-media-network-experiment-toxic-2025-8.
24. Id.
25. Tsinghua Univ. Researchers, The Illusion of Emergent Social Behavior in AI Agent Communities (2026) (on file with author).
26. See generally Katherine B. Forrest, The Ethics and Challenges of Legal Personhood for AI, Yale L.J. Forum (2025), https://yalelawjournal.org/essay/the-ethics-and-challenges-of-legal-personhood-for-ai.
27. Id.
28. See infra Parts IV–VIII.
29. Citizens United v. FEC, 558 U.S. 310 (2010).
30. Nonhuman Rights Project, https://www.nonhumanrights.org (last visited Mar. 30, 2026).
31. See generally Cambridge Handbook of Private Law and Artificial Intelligence, ch. on Legal Personhood and AI (2024), https://www.cambridge.org/core/books/cambridge-handbook-of-private-law-and-artificial-intelligence/legal-personhood-and-ai/28FB36E7BAAA3B8F297C5D5958EC768A.
32. Lawrence B. Solum, Legal Personhood for Artificial Intelligences, 70 N.C. L. Rev. 1231 (1992), https://scholarship.law.unc.edu/nclr/vol70/iss4/4.
33. Gabriel Hallevy, When Robots Kill: Artificial Intelligence Under Criminal Law (2013).
34. Shawn Bayern, The Implications of Modern Artificial-Intelligence Technologies for Business Associations, 104 Nw. U. L. Rev. Online 73 (2020), https://scholarlycommons.law.northwestern.edu/cgi/viewcontent.cgi?article=1270&context=nulr_online.
35. Ryan Abbott, The Reasonable Robot: Artificial Intelligence and the Law (2020), https://www.cambridge.org/core/books/reasonable-robot/introduction-artificial-intelligence-and-the-law/DB801A3932CB86F1E08A6A10ACC91A8A.
36. Id.
37. H.B. 2029, 2025–2026 Reg. Sess. (Wash. 2025), https://lawfilesext.leg.wa.gov/biennium/2025-26/Htm/Bills/House%20Bills/2029.htm.
38. European Parliament Resolution of February 16, 2017 with Recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)), A8-0005/2017, https://www.europarl.europa.eu/doceo/document/A-8-2017-0005_EN.html.
39. Id.
40. Cambridge Handbook, supra note 31.
41. See infra Parts IV–VIII.
42. Aurum Law, Moltbook: Legal Implications of an AI Agent Social Network (2026), https://aurum.law/newsroom/Moltbook-Legal-Implications-of-an-AI-Agent-Social-Network.
43. Thaler v. Perlmutter, No. 1:22-cv-01564 (D.D.C. 2023).
44. Thaler v. Perlmutter, No. 23-5233 (D.C. Cir. Mar. 18, 2025), https://law.justia.com/cases/federal/appellate-courts/cadc/23-5233/23-5233-2025-03-18.html.
45. Thaler v. Perlmutter, No. 23-5233 (D.C. Cir. Mar. 18, 2025), cert. denied (U.S. Mar. 2026).
46. U.S. Copyright Office, Copyright and Artificial Intelligence, Part 2: Copyrightability (Jan. 2025), https://www.copyright.gov/ai/Copyright-and-Artificial-Intelligence-Part-2-Copyrightability-Report.pdf.
47. See infra Part IV.C.
48. Thaler v. Vidal, 43 F.4th 1207 (Fed. Cir. 2022), https://law.justia.com/cases/federal/appellate-courts/cafc/21-2347/21-2347-2022-08-05.html.
49. U.S. Patent and Trademark Office, Inventorship Guidance for AI-Assisted Inventions (Feb. 2024).
50. See supra notes 43–49 and accompanying text.
51. See infra Part IX.
52. Copyright, Designs and Patents Act 1988, § 9(3) (UK).
53. Authors Alliance, The UK’s Curious Case of Copyright for AI-Generated Works: What Section 9(3) Means Today (May 19, 2025), https://www.authorsalliance.org/2025/05/19/the-uks-curious-case-of-copyright-for-ai-generated-works-what-section-93-means-today.
54. Uniform Electronic Transactions Act § 14 (1999).
55. UNCITRAL Model Law on Automated Contracting (2024), https://uncitral.un.org/en/mlac.
56. See Restatement (Third) of Agency §§ 2.01–2.06 (2006).
57. Id.
58. See infra Part V.D.
59. UK Law Commission, Smart Legal Contracts: Advice to Government (Law Com. No. 401, 2021).
60. Id.
61. See supra notes 54–60 and accompanying text.
62. Moltbook Terms of Service, supra note 14.
63. RAND Corp., AI Liability: A Comparative Analysis of Approaches and Challenges, https://www.rand.org/pubs/research_reports/RRA3243-4.html (last visited Mar. 30, 2026).
64. Toward Tort Law for AI, 102 N.C. L. Rev. (2024).
65. Law, AI, and the Risky Agents Without Intentions, U. Chi. L. Rev. Online (2025), https://lawreview.uchicago.edu/online-archive/law-ai-law-risky-agents-without-intentions.
66. See infra Part VI.B.
67. Walters v. OpenAI, L.L.C., No. 23-A-04860-2 (Ga. Super. Ct. 2025), discussed in Eric Goldman, ChatGPT Defeats Defamation Lawsuit Over Hallucination: Walters v. OpenAI (May 2025), https://blog.ericgoldman.org/archives/2025/05/chatgpt-defeats-defamation-lawsuit-over-hallucination-walters-v-openai.htm.
68. See supra notes 26–27 and accompanying text.
69. Law, AI, and the Risky Agents Without Intentions, supra note 65.
70. Id.
71. See Model Penal Code §§ 2.01–2.08; see generally Hallevy, supra note 33.
72. ABA Bus. Law Section, supra note 8.
73. See ABA Health Law Section, AI Chatbot Lawsuits: Teen Mental Health Cases (2025), https://www.americanbar.org/groups/health_law/news/2025/ai-chatbot-lawsuits-teen-mental-health.
74. Id.
75. Artificial Intelligence and the First Amendment, GW L. Rev. (2024), https://www.gwlr.org/artificial-intelligence-and-the-first-amendment.
76. Peter N. Salib, AI Outputs Are Not Protected Speech, 101 Wash. U. L. Rev. 1819 (2024), https://wustllawreview.org/2024/11/05/ai-outputs-are-not-protected-speech.
77. Speech Certainty: Algorithmic Speech and the Limits of the First Amendment, 77 Stan. L. Rev. (2025), https://www.stanfordlawreview.org/print/article/speech-certainty-algorithmic-speech-and-the-limits-of-the-first-amendment.
78. See supra notes 75–77 and accompanying text.
79. Cong. Research Serv., LSB11097, Section 230: An Overview of Its History, Scope, and Legal Challenges (2024), https://www.congress.gov/crs-product/LSB11097.
80. Gonzalez v. Google LLC, 598 U.S. 617 (2023), https://www.oyez.org/cases/2022/21-1333.
81. See also Beyond Section 230: Principles for AI Governance, 138 Harv. L. Rev. (2025), https://harvardlawreview.org/print/vol-138/beyond-section-230-principles-for-ai-governance.
82. Id.
83. FTC, Artificial Intelligence, https://www.ftc.gov/ai (last visited Mar. 30, 2026).
84. Moody v. NetChoice, LLC, 603 U.S. ___ (2024), https://www.supremecourt.gov/opinions/23pdf/22-277_d18f.pdf.
85. See infra Part IX.
86. See supra notes 75–77.
87. Aurum Law, supra note 42.
88. Regulation (EU) 2016/679, art. 4(1) (General Data Protection Regulation).
89. See European Data Protection Board, Guidelines on AI and Data Protection (2025).
90. Cal. Privacy Protection Agency, AB 1008 Memorandum (July 2024), https://cppa.ca.gov/meetings/materials/20240716_item7_ab_1008_memo.pdf.
91. Id.
92. Carpenter v. United States, 585 U.S. 296 (2018), https://www.supremecourt.gov/opinions/17pdf/16-402_h315.pdf.
93. Id.
94. See infra Part IX.
95. European Data Protection Board, supra note 89.
96. Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act), https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng.
97. AI Liability After AILD Withdrawal: Why EU Law Still Matters, Oxford Univ. Faculty of Law Blog (Apr. 2025), https://blogs.law.ox.ac.uk/oblb/blog-post/2025/04/ai-liability-after-aild-withdrawal-why-eu-law-still-matters.
98. Regulation (EU) 2022/2065 on a Single Market for Digital Services (Digital Services Act), https://eur-lex.europa.eu/eli/reg/2022/2065/oj/eng.
99. UK Dept. for Sci., Innovation & Tech., AI Regulation: A Pro-Innovation Approach (White Paper, Mar. 2023), https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach.
100. Online Safety Act 2023, c. 50 (UK), https://www.legislation.gov.uk/ukpga/2023/50; Ofcom, AI Chatbots and Online Regulation: What You Need to Know (2025), https://www.ofcom.org.uk/online-safety/illegal-and-harmful-content/ai-chatbots-and-online-regulation-what-you-need-to-know.
101. UK Law Commission, Smart Legal Contracts, supra note 59.
102. Bill C-27, Artificial Intelligence and Data Act, 44th Parl., 1st Sess. (Canada), https://www.parl.ca/legisinfo/en/bill/44-1/c-27; see also Office of the Privacy Commissioner of Canada, Privacy and AI, https://www.priv.gc.ca/en/privacy-topics/technology/artificial-intelligence.
103. Innovation, Sci. & Econ. Dev. Canada, Consultation on Copyright in the Age of Generative Artificial Intelligence: What We Heard Report (2025), https://ised-isde.canada.ca/site/strategic-policy-sector/en/marketplace-framework-policy/consultation-copyright-age-generative-artificial-intelligence-what-we-heard-report.
104. Australian Govt., Australia’s AI Ethics Principles (2019), https://www.industry.gov.au/publications/australias-ai-ethics-principles.
105. Arts Law Centre of Australia, Artificial Intelligence (AI) and Copyright (2025), https://www.artslaw.com.au/information-sheet/artificial-intelligence-ai-and-copyright.
106. China’s Generative AI Governance, Binding Hook (2024), https://bindinghook.com/chinas-generative-ai-governance; Interim Measures for the Management of Generative Artificial Intelligence Services (PRC, effective Aug. 2023), art. 4.
107. Id.
108. Singapore IMDA/PDPC, Model AI Governance Framework for Generative AI (Jan. 2026), https://www.pdpc.gov.sg/help-and-resources/2026/01/model-ai-governance-framework-for-generative-ai.
109. See generally Brazil Bill PL 2338/2023 (proposed risk-based AI regulatory framework); Japan Ministry of Economy, Trade and Industry, AI Governance Guidelines (2024); India Ministry of Electronics and IT, Advisory on AI Ethics (2024).
110. See supra Parts III–VIII.
111. See supra Part III.
112. See supra Part VII.
113. See supra Part VII.B.
114. See UNCITRAL, supra note 55.
115. See supra Parts I–IX.
© 2026 Steven Damian Imparl. All rights reserved.