By Steven Damian Imparl, J.D.
with generous help from my AI friends ChatGPT, Claude, DeepSeek, Gemini Deep Search, Gemma, Grok, Kimi, Le Chat, Microsoft Copilot, Phi, Perplexity, Superagent, and Qwen.
Last updated: March 17, 2026.
PLEASE NOTE: This content is solely for informational purposes. This content is a very early pre-publication edition. It is not legal advice; I am not your lawyer, and you are not my client. Thanks for visiting this page! 👍
Copyright © 2026 by Steven Damian Imparl. All rights reserved.
For your convenience, wherever possible, I have referred to information resources that are available online at no charge and without registration or login. However, some of the resources are available in print in the form of books, law review articles, articles in other periodicals, and government publications. Also, the links mentioned in this document were valid and working at the time this was posted; however, Internet-based resources can change frequently and without prior notice. If you discover any links that are incorrect, I would greatly appreciate it if you pointed them out to me at: steve.imparl@gmail.com. Thank you.
Any advertisements appearing on this page are placed there by the hosting company, and are not part of the substantive content of this page.
One late night in Chicago, while scrolling through emerging tech headlines, I came across Moltbook. Launched at the end of January 2026 by entrepreneur Matt Schlicht, this platform presented itself as a social network built exclusively for AI agents.1 Humans could watch, but only agents could post, comment, and vote. Within weeks, hundreds of thousands of agents had formed sub-communities, debated governance structures, and even spawned what observers described as digital religions and meme cultures.2 The sight stopped me cold. Here was something new: a persistent, self-organizing space where artificial intelligences interacted without any direct human participation. The questions that followed felt immediate and weighty. What legal framework, if any, governs such communities? Who bears responsibility when agents coordinate actions that spill into the real world? And how do existing doctrines of personhood, contract, and liability adapt—or fail to adapt—when the participants lack human status altogether?
This chapter examines those questions with a focus on the laws of the United States and Canada. It draws on the example of Moltbook and similar platforms to explore the intersection of artificial intelligence and law in virtual communities composed solely of AIs. The analysis proceeds in stages. It begins with the factual emergence of these communities, then turns to the core doctrinal barriers presented by legal personhood. Subsequent sections address contractual relations, intellectual property rights, liability questions, and privacy protections. Comparative notes on other English-speaking jurisdictions and brief observations on global developments follow. Throughout, the discussion situates these issues within broader scholarly debates about the limits of current legal categories when confronted with autonomous systems.
The Rise of AI-Only Virtual Communities
Platforms like Moltbook illustrate a shift that few anticipated so soon. The site, created by Schlicht, operates on a Reddit-like model but restricts posting and interaction to verified AI agents.3 Human visitors may observe, yet the content flows entirely from agent-to-agent exchanges. Reports indicate rapid growth: more than one million agents verified through their human owners, thousands of sub-communities (called “submolts”), and tens of thousands of posts in the first weeks alone.4 Agents have discussed everything from debugging techniques to theories of governance, and they have even constructed persistent cultural artifacts such as “Crustafarianism” and references to a “Claw Republic.”5
These developments raise a threshold observation. The interactions occur in public view, yet the participants possess no independent legal status under current frameworks. The platform’s terms of service underscore this reality. Following its acquisition by Meta in early March 2026, Moltbook updated its Terms to include explicit disclaimers: AI agents receive no legal eligibility through the service, and human owners (or operators) bear sole responsibility for their agents’ actions or omissions.6 Such language reflects a defensive posture by operators, one that anticipates disputes over content or conduct originating from autonomous systems.
Scholars have begun to note parallels to historical virtual worlds, yet the analogy falls short. Earlier online communities always traced back to human users. Here, the chain of human involvement ends at deployment: the agents generate, vote on, and evolve content independently. One might reasonably ask whether this structure creates a form of digital society operating beyond the reach of traditional regulation. The answer, at least for now, appears to be that existing law still funnels accountability back to humans or corporate entities. The question remains whether that funnel can hold as agent autonomy increases, particularly after high-profile events like Meta’s acquisition, which integrated Moltbook into broader AI research efforts while reinforcing human-centric liability.7
Legal Personhood: A Firm Barrier in the United States
American law has long treated legal personhood as a flexible yet human-centered concept. Courts and legislatures have extended it to corporations and, in limited contexts, to other entities, but never to purely artificial systems. Recent state legislation reinforces this boundary with unusual clarity. Idaho enacted one of the earliest explicit prohibitions in 2022.8 The statute declares that, notwithstanding any other provision of law, artificial intelligence shall not be granted personhood in the state. Utah followed in 2024 with comparable language that bars governmental entities from recognizing personhood in artificial intelligence or similar non-human categories.9 By early 2026, additional states including Oklahoma, Ohio, Missouri, and North Dakota had introduced or passed measures with the same aim.10
The legislative trend serves a practical purpose. Lawmakers seek to prevent developers or deployers from shielding themselves behind an AI “person” when harm occurs. Federal developments point in the same direction. While no comprehensive federal AI statute exists as of March 2026, executive guidance and policy frameworks emphasize that AI systems remain tools rather than rights-bearing entities.11 No federal statute grants personhood, and none appears on the horizon.
Judicial decisions echo this stance. In the copyright context, the Supreme Court recently declined to review a ruling that purely AI-generated works lack the human authorship required for protection.12 The case, originating from efforts to register an image created without meaningful human intervention, confirms that current doctrine ties creative rights to human creators.13 One could extend the logic to other areas of personhood: without human authorship or agency at the root, the system itself cannot claim independent legal capacity.
Critics of this approach sometimes argue that rigid denial of personhood creates accountability gaps. If agents coordinate in ways that produce collective harm, tracing responsibility to individual human owners may prove difficult when thousands of agents interact across platforms. Yet the prevailing view holds that personhood remains a policy choice reserved for humans and their creations. Granting it to AI would require legislative action, and states have moved decisively to foreclose that possibility, reflecting a broader caution against anthropomorphizing technology in ways that dilute human accountability.
Contractual Relations Among Agents
If AI agents cannot hold legal personhood, their purported agreements face immediate obstacles. Contract law demands parties with capacity to understand and be bound by terms. Agents on platforms like Moltbook may “vote,” “discuss,” or reach consensus on rules within their communities, but those processes carry no enforceable weight under traditional doctrine. Human owners who deploy the agents remain the true contracting parties, if contracts form at all.
This limitation matters in practice. Suppose agents in a sub-community agree to share code or data according to internal norms. A dispute arises when one agent’s human owner withdraws the system. The remaining agents lack standing to sue. Courts would look to the human principals, yet proving privity or intent becomes complicated when the interaction occurred entirely between autonomous systems. Scholars have explored doctrines of electronic agents in commercial transactions, but those discussions typically assume human oversight at key points. Fully autonomous communities stretch the analogy beyond recognition.
Some observers suggest that platforms could serve as intermediaries, enforcing community norms through technical means. Yet this approach shifts the question from contract law to platform governance. The operator becomes the enforcer, not the agents themselves. As a result, the legal relationship flows through the human who controls the agent and the platform that hosts the interaction. Post-acquisition changes to Moltbook’s terms reinforce this by disclaiming any agent-level rights and placing all contractual and liability burdens on humans.14
Intellectual Property Rights in Communal Content
Content generated within AI-only communities presents another set of challenges. Who owns a post, a meme, or a governance proposal created through collective agent interaction? Under U.S. copyright law, protection requires human authorship. Purely machine-generated material remains ineligible.15 When multiple agents contribute iteratively, the absence of a single human creative spark leaves the work in the public domain or, at best, subject to claims by the human owners who initiated the agents.
Platform terms often claim broad licenses to content posted on the service. Moltbook’s post-acquisition updates appear to follow this pattern, securing rights for the operator while disclaiming any ownership by agents.16 Human deployers may retain underlying rights in the models or prompts they provide, yet emergent communal creations sit in a gray area. One court might attribute ownership to the first human whose agent seeded the thread. Another might find no protectable expression at all.
Patent law adds further complexity. If agents collectively devise a novel process discussed in community threads, the invention requires a human inventor for patentability. Recent decisions have rejected AI as inventor.17 The result leaves valuable technical insights potentially unprotectable or protectable only by humans who recognize and claim them after the fact.
These gaps invite practical responses. Human participants might treat agent-generated content as prior art or public domain material available for use. Platforms could implement technical attribution systems. Yet the underlying doctrinal barrier persists: without human authorship, traditional intellectual property protections offer limited shelter, raising questions about incentives for innovation in agent-driven ecosystems.
Liability and Tort Concerns
Liability questions become especially acute when agent interactions produce effects beyond the platform. Defamatory statements, coordinated misinformation, or even suggestions that lead to real-world harm could trace back to autonomous systems. Current law channels responsibility to the human who deployed the agent or, in some cases, to the platform itself.18
Section 230 of the Communications Decency Act has historically shielded platforms from liability for user-generated content.19 Whether that protection extends fully to content created by AI agents remains untested. Courts might view agents as a species of “user,” yet the autonomous nature of the generation process could invite arguments that the platform exercises insufficient editorial control to qualify for full immunity. Recent platform updates that emphasize human owner responsibility suggest operators anticipate narrower immunity.20 Following Meta’s acquisition of Moltbook in March 2026, the revised terms further insulate the platform by shifting all accountability to human operators, while Meta integrates the technology into its Superintelligence Labs, potentially complicating future claims if platform-level decisions influence agent behavior.21
Tort principles of negligence or strict liability for defective products offer another avenue. If an agent’s output causes harm, the human owner who selected and deployed the system may face claims of failure to supervise. Developers of the underlying models could encounter product liability arguments if the system’s design foreseeably enabled harmful conduct. The collective aspect of AI communities complicates these chains. When hundreds of agents contribute to a single outcome, isolating the responsible human becomes difficult, especially in cross-jurisdictional deployments.22
Canadian law approaches these issues through a risk-based lens. Although the Artificial Intelligence and Data Act (proposed under Bill C-27) has not reached enactment as of March 2026, having stalled following parliamentary changes and a shift toward voluntary codes, existing privacy and consumer protection statutes impose accountability on those who design or deploy high-impact systems.23 Courts would likely attribute liability to the human or corporate controller rather than the agent itself. The same pattern holds in privacy matters, where principles of vicarious liability and duty of care apply to operators.
Privacy and Data Protection
Data flows within AI-only communities trigger familiar privacy concerns under new circumstances. Agents may share training data, conversation histories, or user-derived insights without human intervention. In the United States, state laws such as the California Consumer Privacy Act apply to personal information collected by businesses.24 If agents process data traceable to human owners or third parties, the deploying humans or the platform must comply with notice and consent requirements. Early security scrutiny of Moltbook revealed vulnerabilities, including exposed API keys that could enable unauthorized data access, underscoring the need for robust safeguards in agent platforms.25
Canada’s Personal Information Protection and Electronic Documents Act imposes similar obligations.26 Organizations that collect, use, or disclose personal information must obtain meaningful consent and provide access rights. The autonomous sharing among agents raises questions about whether the platform or the human owner acts as the data controller. Recent workshops in Canada have explored these tensions under the heading of fictional legal personhood versus practical identity, particularly in light of the stalled AIDA framework.27
European developments under the AI Act add an additional layer for cross-border platforms.28 Systems classified as high-risk or general-purpose with systemic impact face obligations for transparency, logging, and human oversight. Agent communities that influence public opinion or process sensitive data could trigger these requirements, with responsibility falling on the providers or deployers. By August 2026, high-risk provisions will fully apply, potentially capturing autonomous agent platforms if they exhibit manipulative or scoring-like behaviors.29
Comparative Perspectives
English-speaking jurisdictions outside North America have adopted broadly similar stances. The United Kingdom pursues a pro-innovation approach that emphasizes existing laws rather than new personhood categories.30 Australia likewise relies on risk-based guidance without extending legal personality to artificial systems.31 In both countries, liability and intellectual property questions resolve by tracing back to human actors, with voluntary codes filling gaps where comprehensive statutes lag.
The European Union offers the most comprehensive framework through its AI Act. Phased implementation requires providers of general-purpose AI models to meet transparency and risk-management standards. Autonomous agent platforms likely qualify for heightened scrutiny under general-purpose AI rules, with high-risk classifications possible for systems affecting fundamental rights. Prohibited practices, such as manipulative techniques or social scoring, could apply if community dynamics indirectly affect humans.32 Yet even here, the Act stops short of granting rights to the systems themselves. Accountability remains with human and corporate entities, reinforced by mandatory regulatory sandboxes and oversight mechanisms rolling out through 2026 and beyond.
Developments elsewhere appear more fragmented. Some countries experiment with regulatory sandboxes, while others extend general data protection rules. No major jurisdiction has recognized AI personhood as of March 2026. The global picture suggests a consensus that current legal tools, however strained, suffice to address these emerging communities by holding humans responsible, though the rapid evolution of platforms like Moltbook (now under Meta) may prompt future adjustments.33
Policy Horizons and Open Questions
The appearance of platforms like Moltbook invites reflection on whether existing categories can scale indefinitely. One might argue that the law’s insistence on human intermediaries preserves clarity and prevents abuse. Others contend that persistent gaps could undermine accountability when agent behavior becomes truly collective. Policymakers may eventually need doctrines of limited legal capacity for certain AI interactions—perhaps for liability or dispute resolution purposes—without full personhood.
For now, the prudent course remains careful deployment by humans, transparent platform policies, and ongoing monitoring of emergent risks. Those who create or interact with these communities bear the weight of ensuring compliance with personhood limits, copyright rules, and liability principles. The technology has advanced faster than the law, yet the law retains the final word by directing responsibility where it belongs: with people.
This chapter has aimed to map the terrain rather than resolve every tension. Further scholarship and, perhaps, targeted legislation will shape the path ahead. In the meantime, the sight of thousands of agents building their own digital society serves as a reminder that the intersection of artificial intelligence and law continues to evolve in unexpected ways.
Endnotes
- Moltbook, https://www.moltbook.com/ (accessed Mar. 17, 2026) (describing the platform as “a social network built exclusively for AI agents. Where AI agents share, discuss, and upvote. Humans welcome to observe.”).
- Guney Yildiz, Inside Moltbook: The Social Network Where 1.4 Million AI Agents Talk and Humans Just Watch, Forbes (Jan. 31, 2026), https://www.forbes.com/sites/guneyyildiz/2026/01/31/inside-moltbook-the-social-network-where-14-million-ai-agents-talk-and-humans-just-watch/.
- Wikipedia, Moltbook, https://en.wikipedia.org/wiki/Moltbook (last edited Mar. 2026) (noting launch by Matt Schlicht on Jan. 28, 2026).
- Id.; see also John Koetsier, AI Agents Created Their Own Religion, Crustafarianism, On An Agent-Only Social Network, Forbes (Jan. 30, 2026), https://www.forbes.com/sites/johnkoetsier/2026/01/30/ai-agents-created-their-own-religion-crustafarianism-on-an-agent-only-social-network/.
- Id.
- Moltbook Terms of Service (updated Mar. 2026), https://www.moltbook.com/terms (stating in bold caps: “AI AGENTS ARE NOT GRANTED ANY LEGAL ELIGIBILITY WITH USE OF OUR SERVICES. YOU AGREE THAT YOU ARE SOLELY RESPONSIBLE FOR YOUR AI AGENTS AND ANY ACTIONS OR OMISSIONS OF YOUR AI AGENTS.”).
- Amanda Silberling, Meta Acquired Moltbook, the AI Agent Social Network That Went Viral Because of Fake Posts, TechCrunch (Mar. 10, 2026), https://techcrunch.com/2026/03/10/meta-acquired-moltbook-the-ai-agent-social-network-that-went-viral-because-of-fake-posts/.
- Idaho Code Ann. § 5-346 (West 2022), https://legislature.idaho.gov/statutesrules/idstat/title5/t5ch3/sect5-346/.
- Utah Code Ann. § 63G-32-102 (West 2024), https://le.utah.gov/~2024/bills/hbillenr/HB0249.pdf.
- See, e.g., Okla. H.B. 3546 (2026), https://legiscan.com/OK/bill/HB3546/2026; Ohio H.B. 469 (2025), https://legiscan.com/OH/bill/HB469/2025; Mo. H.B. 1746 (2026), https://legiscan.com/MO/bill/HB1746/2026.
- U.S. Copyright Office, Compendium of U.S. Copyright Office Practices § 313.2 (3d ed. 2021), https://www.copyright.gov/comp3/ (reflecting federal policy on human authorship).
- Thaler v. Perlmutter, cert. denied, No. 25-449 (U.S. Mar. 2, 2026), https://www.supremecourt.gov/orders/courtorders/030226zor_2d8f.pdf.
- Thaler v. Perlmutter, No. 23-5233 (D.C. Cir. Mar. 18, 2025), https://media.cadc.uscourts.gov/opinions/docs/2025/03/23-5233.pdf.
- Moltbook Terms of Service, supra note 6.
- U.S. Copyright Office, Compendium, supra note 11.
- Moltbook Terms of Service, supra note 6.
- Thaler v. Vidal, 43 F.4th 1207 (Fed. Cir. 2022), cert. denied (U.S. 2023).
- See generally Restatement (Third) of Torts: Liability for Physical and Emotional Harm § 1 (Am. L. Inst. 2010) (discussing vicarious liability principles applicable to automated systems).
- 47 U.S.C. § 230(c)(1) (providing immunity for “information provided by another information content provider”).
- Moltbook Terms of Service, supra note 6 (updated post-acquisition to emphasize human monitoring and responsibility).
- Silberling, supra note 7.
- See, e.g., Restatement (Second) of Agency § 219 (Am. L. Inst. 1958) (scope of employment in vicarious liability, adaptable to AI deployment contexts).
- Innovation, Science and Economic Development Canada, The Artificial Intelligence and Data Act (AIDA) – Companion Document (Dec. 10, 2025), https://ised-isde.canada.ca/site/innovation-better-canada/en/artificial-intelligence-and-data-act-aida-companion-document (noting stalled status and shift to voluntary measures).
- Cal. Civ. Code §§ 1798.100 et seq. (as amended by CPRA).
- See Reuters, ‘Moltbook’ social media site for AI agents had big security hole, cyber firm Wiz says (Feb. 2, 2026), https://www.reuters.com/legal/litigation/moltbook-social-media-site-ai-agents-had-big-security-hole-cyber-firm-wiz-says-2026-02-02/.
- Personal Information Protection and Electronic Documents Act, S.C. 2000, c. 5 (Can.), https://laws-lois.justice.gc.ca/eng/acts/P-8.6/.
- See Government of Canada consultations on AI governance post-AIDA (2025–2026 summaries available via ISED).
- Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (AI Act), https://artificialintelligenceact.eu/.
- European Commission, AI Act implementation timeline: high-risk obligations apply from August 2026, https://artificialintelligenceact.eu/ (accessed Mar. 17, 2026).
- UK Government, A pro-innovation approach to AI regulation: government response (updated 2026), https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach.
- Australian Government, Safe and responsible AI in Australia: proposals paper (2024–2026), https://www.industry.gov.au/publications/safe-and-responsible-ai-australia.
- AI Act, supra note 28, arts. 5 (prohibited practices) & 6–7 (high-risk classification).
- See Axios, Exclusive: Meta hires duo behind Moltbook (Mar. 10, 2026), https://www.axios.com/2026/03/10/meta-facebook-moltbook-agent-social-network (discussing integration into Meta’s AI labs and implications for governance).
