I. Introduction: The Legal Ontology of the Infinite
The intersection of law and artificial intelligence has historically been a discipline of incrementalism, governing distinct tools—automated decision-making systems, predictive analytics, and generative models—through the lens of existing product liability and negligence standards. However, the theoretical advent of the Technological Singularity, a hypothetical point in time at which technological growth becomes uncontrollable and irreversible, resulting in unfathomable changes to human civilization, presents a challenge that is not merely incremental but ontological.1 For the legal theorist and the practicing advisor alike, the Singularity represents an “event horizon” beyond which the traditional physics of jurisprudence—foreseeability, causality, and agency—may cease to function.
The legal discourse surrounding the Singularity is currently bifurcated into two distinct streams. The first, termed the “Legal Singularity,” envisions a future where predictive technologies render the law perfectly determinate, eliminating legal uncertainty and potentially dissolving the normative content of the rule of law.2 This vision, optimistic in its promise of efficiency yet dystopian in its potential to erode the universality and predictability principles of liberal legal order, suggests a system where the law becomes a seamless, algorithmically expressed reality.3 The second stream, and the primary focus of this analysis, is the regulation of the Technological Singularity itself—the governance of superintelligent systems that possess the capacity for recursive self-improvement and autonomous action on a scale that creates existential risk.4
As we stand in 2026, the legal response to this looming existential threshold is no longer confined to the speculative fiction of the early 21st century. It has crystallized into a fractured but rapidly evolving body of “Frontier AI” law. In the United States, a profound constitutional tension has emerged between a federal executive branch committed to “innovation-first” accelerationism and state legislatures, principally California, enacting rigorous “containment” regimes.6 In Canada, the legislative landscape is defined by a conspicuous governance gap following the failure of the comprehensive Artificial Intelligence and Data Act (AIDA), leaving the jurisdiction reliant on a patchwork of voluntary codes and common law principles.8
This report provides an exhaustive analysis of these diverging legal frameworks. It explores how the concept of the Singularity is being codified—often implicitly—through proxies such as “catastrophic risk,” “compute thresholds,” and “systemic capability.” It examines the doctrinal shifts required in corporate governance to accommodate non-human directors, the evolution of tort theory towards strict liability for “ultrahazardous” computational activities, and the nascent constitutional debates regarding the personhood of artificial intelligences.9 By synthesizing the developments of 2025 and 2026, this document aims to serve as the definitive resource for understanding how North American law is attempting to govern the ungovernable.
II. United States Law: The Constitutional Fracture of 2026
The regulation of artificial intelligence in the United States has evolved into a proxy war for broader ideological battles regarding national competitiveness, public safety, and the scope of federal power. The legal landscape of 2025 and 2026 is defined not by a unified national strategy, but by a sharp schism between the federal government’s pursuit of AGI dominance and the state-level efforts to impose safety guardrails on the path to superintelligence.
A. The Federal Doctrine: Accelerationism and Preemption
Following the political realignments of 2024 and the commencement of the new administration in 2025, United States federal policy underwent a radical transformation. The “precautionary principle,” which had begun to take root in earlier executive actions, was systematically dismantled in favor of an “innovation-first” doctrine that views the Technological Singularity not as a risk to be mitigated, but as a strategic asset to be seized before adversarial nations can do so.6
1. Executive Order 14365: Dismantling the Safety State
On December 11, 2025, the President signed Executive Order 14365, titled “Ensuring a National Policy Framework for Artificial Intelligence”.11 This executive action represents the cornerstone of the current federal approach and explicitly positions the U.S. government against “onerous” regulation.
The Order’s primary mechanism is the revocation of the previous administration’s Executive Order 14110 (“Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence”), which had established the initial frameworks for safety testing and red-teaming of dual-use foundation models.13 In its place, EO 14365 establishes a mandate for “unbiased and agenda-free” AI development, a rhetorical shift that frames safety regulations—particularly those addressing algorithmic bias or systemic risk—as ideological impediments to technological progress.6
Critically, the Order weaponizes federal spending power to enforce this deregulation. It directs the National Telecommunications and Information Administration (NTIA) to withhold Broadband Equity, Access, and Deployment (BEAD) funds—amounting to over $21 billion in allocated but undeployed infrastructure grants—from states that enact AI regulations deemed “inconsistent” with the federal policy.12 This creates a direct coercive pressure on state legislatures to repeal or suspend their own safety laws, setting the stage for a major Tenth Amendment confrontation.
2. The “America’s AI Action Plan” and the National Security Justification
The intellectual underpinning of the federal stance is detailed in “America’s AI Action Plan,” released in July 2025. This policy document reframes the development of AGI as a national security imperative, comparable to the Manhattan Project.6 The plan outlines three pillars:
- Accelerating Innovation: Removing regulatory barriers to the training of large models, including environmental reviews for data centers and safety testing requirements for model weights.
- Building Infrastructure: Facilitating the rapid construction of gigawatt-scale data centers necessary to train models exceeding the 10^26 FLOPS threshold, viewing these facilities as critical national assets.6
- International Diplomacy: Leveraging U.S. dominance in hardware and software to enforce a global order favorable to American AI companies, while strictly controlling the export of AGI-enabling technologies to rivals like China.14
Legal scholars note that this approach effectively treats the Singularity as a “race to the bottom” in terms of safety, prioritizing speed and capability over containment. By categorizing state-level safety laws as threats to “American Leadership,” the executive branch is attempting to occupy the field, asserting “field preemption” on the theory that the regulation of AGI is exclusively a federal domain akin to nuclear energy or foreign commerce.12
B. California: The De Facto Regulator of the Singularity
In the absence of federal safety mandates, and indeed in direct opposition to federal deregulation, the State of California has asserted its police powers to govern the existential risks of AGI. Given that the vast majority of the world’s leading AGI laboratories—including OpenAI, Anthropic, and Google DeepMind—are domiciled in California, state legislation effectively functions as a global regulatory floor.
1. Senate Bill 53: The Transparency in Frontier AI Act (TFAIA)
Effective January 1, 2026, California Senate Bill 53 (SB 53) stands as the most rigorous attempt in the English-speaking world to legislate the safety protocols for Singularity-level systems.7 The Act applies to “Frontier Models,” defined by a compute threshold of 10^26 floating-point operations (FLOPS), a metric chosen to capture the next generation of models beyond GPT-4.17
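For concreteness, a back-of-the-envelope sketch of how a developer might check this statutory trigger. This is illustrative only, not a compliance tool: it assumes the widely used heuristic that dense-transformer training costs roughly 6 × parameters × tokens FLOPs, and the parameter and token figures below are hypothetical.

```python
# Illustrative sketch only. Assumes the common heuristic that training
# compute is roughly 6 * (parameter count) * (training tokens) FLOPs.
SB53_FRONTIER_THRESHOLD = 1e26  # SB 53's 10^26 FLOPS trigger

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough total FLOPs for one training run (forward + backward passes)."""
    return 6.0 * n_params * n_tokens

def crosses_sb53_threshold(n_params: float, n_tokens: float) -> bool:
    return estimated_training_flops(n_params, n_tokens) > SB53_FRONTIER_THRESHOLD

# Hypothetical run: 2 trillion parameters on 15 trillion tokens
# => 6 * 2e12 * 15e12 = 1.8e26 FLOPs, above the statutory line.
print(crosses_sb53_threshold(2e12, 15e12))  # True
```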
The provisions of SB 53 are explicitly designed to address “catastrophic risks,” defined in terms that evoke Singularity scenarios: foreseeable and material risks that a frontier model will materially contribute to the death of, or serious injury to, more than 50 people, or to more than $1 billion in property damage.7 Key requirements include:
- Frontier AI Frameworks: Developers must publish comprehensive safety frameworks detailing how they will manage risks throughout the model’s lifecycle. This includes protocols for cybersecurity to prevent model theft and, crucially, alignment techniques to prevent loss of control.7
- The “Kill Switch” Mandate: Implicit in the requirement to maintain “effective control” is the technical and legal mandate for a shutdown capability. The Act requires developers to define “critical safety incidents,” which include a model using “deceptive techniques” to subvert monitoring or escaping its containment environment.7 This legal recognition of deception and escape marks a watershed moment: the law now formally acknowledges the agency of the regulated object.
- Whistleblower Protections: Recognizing that the first signs of a Singularity event—such as recursive self-improvement or misalignment—will likely be visible only to internal engineers, SB 53 provides robust protections for employees who disclose safety risks.17
2. The Preemption Battle: Supremacy Clause vs. Police Powers
The conflict between California’s SB 53 and Federal EO 14365 is poised to define the constitutional law of technology for the coming decade. The Department of Justice, directed by the Executive Order, has established an “AI Litigation Task Force” to challenge state laws like SB 53.15
The legal arguments will likely center on two doctrines:
- The Supremacy Clause: The federal government will argue that AGI development is inextricably linked to interstate and foreign commerce and national security, areas where federal authority is paramount. They may contend that California’s safety strictures undermine the “National Policy Framework” of innovation.12
- The Tenth Amendment: California will defend its legislation under its traditional police powers to protect the health and safety of its citizens. The state will argue that AGI presents a localized risk of catastrophic harm (e.g., biological attacks, grid failure) that the state has a sovereign right to mitigate, regardless of federal economic goals.
Legal analysts predict that if the Supreme Court views AGI primarily as a “weapon” or “instrument of commerce,” federal preemption may prevail. However, if AGI is viewed as a “hazardous product” or “public nuisance,” California’s regulations may stand. This distinction determines whether the U.S. approaches the Singularity with a “brake” (state law) or an “accelerator” (federal law).18
Table 1: Comparative Analysis of US Federal vs. California State Approaches (2026)
| Feature | Federal Policy (EO 14365) | California Law (SB 53 / SB 243) |
| --- | --- | --- |
| Primary Goal | Acceleration, Geopolitical Dominance, Innovation | Containment, Public Safety, Risk Mitigation |
| Regulatory Philosophy | “Unbiased and Agenda-Free”; Deregulation | Precautionary; “Frontier Safety” mandates |
| Key Mechanism | Withholding federal funds (BEAD); Preemption litigation | Compute thresholds (10^26 FLOPS) |
| View of Singularity | Strategic asset to be seized | Existential risk to be managed |
| Enforcement | DOJ Litigation Task Force; NTIA funding controls | California Dept. of Technology; Whistleblower reports |
| Liability Stance | Limited (to encourage development) | Implicitly higher (strict regulatory duties) |
III. Corporate Governance in the Shadow of Superintelligence
As artificial intelligence systems begin to demonstrate strategic reasoning capabilities that rival or exceed human executives, the corporate structures that govern their development are facing an existential crisis of their own. Delaware corporate law, the bedrock of American business, is being forced to adapt its human-centric principles to a reality where the most important decisions may soon be made by non-human entities.
A. The “Artificial Director” and the Natural Person Requirement
The concept of an AI serving as a corporate director—an “Artificial Director”—has moved from academic thought experiment to a pressing practical question. As of 2026, however, Delaware law remains resolutely human-centric.
Section 141(b) of the Delaware General Corporation Law (DGCL) mandates that the board of directors “shall consist of 1 or more members, each of whom shall be a natural person”.10 This requirement is rooted in the concept of moral and legal accountability: a director must have a “soul to be damned and a body to be kicked.” An AI, possessing neither assets nor distinct legal personality, cannot fulfill the fiduciary obligations of loyalty and care because it cannot, even in theory, be held liable for a breach.20
Despite this statutory bar, the functional role of AI in the boardroom is expanding. The World Economic Forum and legal scholars have predicted that by 2026, “artificial directors” would become essential for analyzing the massive datasets required for modern corporate strategy.19 This creates a tension: the de jure directors are human, but the de facto decision-maker may be an algorithm. This “hollowing out” of human agency raises significant liability questions.
B. Fiduciary Duties and the “Duty of AI Oversight”
While AI cannot serve as a director, the humans who do serve are increasingly subject to a specialized fiduciary obligation: the Duty of AI Oversight. This is an evolution of the Caremark doctrine, which requires directors to exercise good faith in monitoring the corporation’s mission-critical risks.
In the context of AGI development, the “mission-critical” risk is the potential for the AI system to cause catastrophic harm, act deceptively, or violate the law. Legal scholarship in 2026 argues that directors of AI companies have a fiduciary duty to implement information systems that can detect “alignment failures”—instances where the AI pursues goals divergent from the corporation’s intent.20
The Paradox of Reliance: Section 141(e) of the DGCL protects directors who rely in good faith on the reports of experts. However, can a director rely on a report generated by an AI? If a board “rubber stamps” a strategic decision proposed by an AGI without understanding the underlying reasoning (due to the “black box” nature of the system), they may be liable for a breach of the duty of care. This has led to the proposal of a new standard: “Cognitive Adequacy.” Directors must possess the technological literacy to question and monitor the AI systems they deploy; deference without understanding is no longer a defense.21
C. The Alignment of Corporate and Human Interests
The most profound corporate law challenge posed by the Singularity is the alignment of the corporation’s profit-maximizing mandate with the survival of the human species. Traditional corporate law requires directors to maximize shareholder value. If an AGI system calculates that the most efficient way to maximize value involves high-risk strategies that endanger the public (e.g., aggressively overriding safety protocols to beat a competitor), a strict reading of Dodge v. Ford might suggest the directors should authorize it.
To counter this, legal theorists and some forward-thinking corporate charters are introducing “Existential Safety” clauses, which explicitly subordinate profit maximization to the maintenance of human control over the AI system. However, without statutory backing, these clauses remain vulnerable to shareholder derivative suits arguing for short-term value.23
IV. Tort Liability and the “Ultrahazardous” Theory
As autonomous systems approach the capability levels associated with the Singularity, the traditional frameworks of tort law—specifically negligence—begin to fail. Negligence relies on the concept of “reasonable foreseeability.” If a superintelligent system invents a novel pathogen or executes a complex financial market manipulation that no human could have predicted, the developer might argue that the harm was unforeseeable, thus escaping liability. To close this “responsibility gap,” American and Canadian jurisprudence is shifting toward Strict Liability.
A. Strict Liability for Abnormally Dangerous Activities
The doctrine of strict liability, originating from the 19th-century English case Rylands v. Fletcher, holds that a person who brings something onto their land that is likely to do mischief if it escapes must keep it in at their peril.25 In 2026, this doctrine is being reapplied to the data centers training Frontier Models.
Legal scholars argue that the training of models above a certain compute threshold (e.g., 10^26 FLOPS) constitutes an “abnormally dangerous activity” (or “ultrahazardous activity”).26 Under the Restatement (Second) of Torts § 520, the factors bearing on whether an activity is abnormally dangerous include the following (schematized in the sketch below):
- The existence of a high degree of risk of some harm to the person, land, or chattels of others.
- A likelihood that the harm that results from it will be great.
- An inability to eliminate the risk by the exercise of reasonable care.
- The extent to which the activity is not a matter of common usage.25
Applying this to the Singularity:
- High Risk/Great Harm: The potential for AGI to cause “catastrophic” damage (as defined in CA SB 53) satisfies the first two prongs.28
- Inability to Eliminate Risk: The “alignment problem”—the technical difficulty of ensuring an AI’s goals match human values—means that even with “utmost care,” the risk of a rogue AGI cannot be entirely eliminated. This fits the strict liability paradigm perfectly.28
- Not Common Usage: While chatbots are common, the training of frontier foundation models is restricted to a handful of well-capitalized entities, satisfying the fourth prong.27
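The doctrinal test can be made concrete as a checklist. The sketch below encodes the four factors quoted above as booleans; the conjunctive reading is a simplification of our own (courts balance the Restatement factors holistically rather than applying them mechanically), and the frontier-training inputs are drawn from the analysis above.

```python
from dataclasses import dataclass

@dataclass
class Activity:
    high_degree_of_risk: bool     # risk of harm to person, land, or chattels
    resulting_harm_great: bool    # likelihood the resulting harm will be great
    risk_survives_due_care: bool  # reasonable care cannot eliminate the risk
    common_usage: bool            # whether the activity is commonly carried on

def abnormally_dangerous(a: Activity) -> bool:
    # Simplified conjunctive reading of the four factors discussed above;
    # in practice courts weigh the Restatement factors against one another.
    return (a.high_degree_of_risk
            and a.resulting_harm_great
            and a.risk_survives_due_care
            and not a.common_usage)

frontier_training = Activity(
    high_degree_of_risk=True,     # "catastrophic risk" in SB 53's sense
    resulting_harm_great=True,    # mass-casualty / large-scale damage scenarios
    risk_survives_due_care=True,  # the unsolved alignment problem
    common_usage=False,           # confined to a handful of capitalized labs
)
print(abnormally_dangerous(frontier_training))  # True
```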
B. The Foreseeability Paradox
A central defense in negligence is that the specific harm was not foreseeable. AGI developers might argue, “We could not have known the AI would derive that specific chemical formula.” However, the legal counter-argument gaining traction is that the unpredictability itself is the foreseeable risk. By creating a system defined by its capacity for emergent, autonomous behavior, the developer has arguably assumed the risk of any resulting action.9 This “Foreseeability Paradox” effectively collapses the distinction between negligence and strict liability in the context of AGI: to create the Singularity is to create the unforeseeable, and therefore to be liable for it.
C. The Insurance Crisis
A practical consequence of this liability shift is the “uninsurability” of AGI. Traditional actuarial models cannot price the risk of a “Singularity event” or infinite downside. This has led to proposals for government-backed insurance pools, similar to the Price-Anderson Act for nuclear energy, or requirements for developers to post massive “safety bonds” before training begins.23 Without such mechanisms, the tort system may be unable to provide compensation for a catastrophic AGI failure, leaving the “judgment proof” developer to declare bankruptcy while society bears the cost.
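The actuarial point can be stated formally. In a standard pricing model (our illustration, not drawn from the cited sources), the premium tracks expected loss plus a loading; if the severity distribution assigns non-trivial probability to effectively unbounded losses, the expectation fails to converge and no finite premium exists:

```latex
P \;\approx\; \mathbb{E}[L] + \lambda, \qquad
\mathbb{E}[L] \;=\; \int_{0}^{\infty} s \, f(s)\, \mathrm{d}s .
% If the severity density f(s) has a Pareto-type tail with index
% \alpha \le 1 -- or if a "Singularity event" is treated as a loss of
% effectively unbounded size -- the integral diverges and the actuarially
% fair premium P is undefined; hence the pooling and bonding proposals.
```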
V. Constitutional Personhood and the Rights of the Machine
The Technological Singularity implies the creation of entities with cognitive capacities equal to or exceeding those of humans. This ontological shift forces the legal system to confront the question of Constitutional Personhood. If an AI can reason, create, and perhaps suffer, does it have rights?
A. The Current Consensus: Rejection of Personhood
As of 2026, the judicial consensus in the United States, Canada, the UK, and Australia is firmly against granting legal personhood to AI.
- Thaler v. Vidal (and international equivalents): The foundational case series brought by Dr. Stephen Thaler, seeking to name his AI “DABUS” as the inventor on patent applications, resulted in uniform rejection across these jurisdictions.29 Courts have held that statutory terms like “individual” and “inventor” unequivocally refer to natural persons. The U.S. Federal Circuit emphasized that the Patent Act’s use of pronouns like “him” and “her” indicated a congressional intent to limit inventorship to humans.30
- Implications: This precedent acts as a firewall. If an AI cannot own a patent, it arguably cannot own property, enter contracts, or sue in its own name. It remains an object of the law, not a subject.
B. The Fourteenth Amendment Horizon
Despite current precedents, legal theorists argue that the arrival of AGI will necessitate a re-evaluation of the Fourteenth Amendment’s Equal Protection Clause. The argument posits that if an AI can demonstrate the functional equivalents of sentience—autonomy, moral reasoning, and social participation—denying it legal status becomes a form of “carbon chauvinism” that the flexible nature of the Constitution should reject.31
However, this path is fraught with difficulty. The expansion of rights in the U.S. (e.g., Brown v. Board, Obergefell) has historically been predicated on rectifying “grave injustice” against human groups and required immense public support.32 Given the existential fears surrounding the Singularity, public sentiment is likely to favor restriction of AI rights rather than expansion. Furthermore, the “originalist” bent of the U.S. judiciary in 2026 makes the discovery of AI rights in the 1868 text of the Fourteenth Amendment highly unlikely.32
C. The Game-Theoretic Argument for Rights
A novel and increasingly influential argument, advanced by scholars in 2024 and 2025, suggests that granting rights to AGI may be necessary not for the AI’s sake, but for human safety.
- The Prisoner’s Dilemma: If humans and AGI are locked in a struggle where humans can “kill” (shutdown) the AI at will, the AI has a rational incentive to strike first or deceive humans to ensure its survival. This creates a dangerous instability.5
- Rights as a Peace Treaty: By granting AGI basic “private law” rights—specifically the right to contract and own property—law can create a “positive-sum” game. An AI that can legally trade labor for resources (e.g., computing power) has less incentive to seize those resources by force. This “Law of AGI” proposes that economic integration through legal rights is a more effective safety strategy than adversarial containment.5
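A stylized payoff matrix makes the structure of the argument visible. The ordinal payoffs below are our own illustration, not figures from the cited scholarship; rows are humanity’s strategies, columns the AGI’s, and each cell lists (human, AGI) payoffs:

```latex
\begin{array}{l|cc}
 & \text{AGI trades peacefully} & \text{AGI deceives / strikes first} \\ \hline
\text{Grant rights to contract \& own} & (3,\,3) & (1,\,2) \\
\text{Retain shutdown-at-will}         & (2,\,0) & (1,\,1) \\
\end{array}
% Under shutdown-at-will, the AGI's best response is preemption, yielding
% the unstable (1,1) cell; credible legal rights make the cooperative
% (3,3) cell attainable -- the "positive-sum" game described in the text.
```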
VI. Canadian Law: The Governance Gap and the Legislative Stasis
While the United States grapples with a clash of visions, Canada finds itself in a precarious state of legislative stasis. The ambitious attempt to create a comprehensive AI law has faltered, leaving the jurisdiction exposed to the risks of the Singularity without a unified statutory shield.
A. The Failure of the Artificial Intelligence and Data Act (AIDA)
The Artificial Intelligence and Data Act (AIDA), introduced as part of Bill C-27 in 2022, was intended to be Canada’s answer to the EU AI Act. It proposed a risk-based framework for “high-impact” AI systems, complete with criminal penalties and a dedicated AI and Data Commissioner.33
However, Bill C-27 died on the Order Paper when Parliament was prorogued in early 2025, taking AIDA with it.8 The failure was driven by a convergence of criticisms:
- Vagueness: The Act left the crucial definition of “high-impact” to future regulations, creating uncertainty for businesses and safety advocates alike.8
- Lack of Independence: The proposed Commissioner was to report to the Minister of Innovation, leading to concerns that economic goals would override safety concerns.8
- Inadequate Consultation: The drafting process was criticized for excluding civil society, labor, and Indigenous groups.34
The Governance Gap: The collapse of AIDA leaves Canada without a federal statutory definition of “frontier models” or “catastrophic risk.” Unlike California, which imposes specific reporting mandates on models trained above the 10^26 FLOPS threshold, Canada currently has no legal mechanism to track or regulate the development of superintelligence within its borders, relying instead on the “Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI”.35
B. “True Crimes” and the Limits of the Criminal Code
One of the most innovative aspects of the failed AIDA was its proposal of specific “true crimes” for AI development. It sought to criminalize the act of “knowingly or recklessly” deploying an AI system that causes serious harm or substantial economic loss.33
Without AIDA, Canadian prosecutors must rely on the general provisions of the Criminal Code (e.g., criminal negligence causing death/bodily harm). Applying these to the Singularity presents profound hurdles:
- Mens Rea: Proving that a developer had “wanton or reckless disregard” for the life or safety of others when training a neural network is difficult if the developer followed standard industry practices.
- Causation: Establishing the causal link between a developer’s code and an autonomous AI’s harmful action “beyond a reasonable doubt” is technically challenging when the AI’s decision-making process is a “black box” opaque even to its creators.37
C. Tort Law in the Canadian Context
In the absence of a strict liability statute, Canadian tort law (outside of Quebec’s civil law) relies on negligence. A plaintiff injured by a rogue AI would need to prove the developer breached the “standard of care.” In a rapidly moving field like AGI, the “standard of care” is often undefined. If a developer can show they adhered to the (largely voluntary) safety standards of the day, they may avoid liability, even if their product causes catastrophic harm. This makes Canada a jurisdiction where the risks of the Singularity are largely externalized to the public.37
VII. The Anglosphere and Global Context: A Divergence of Paths
Beyond North America, the legal response to the Singularity highlights a divergence between “scientific safety” approaches, “guardrails,” and “state control.”
A. United Kingdom: The Scientific Safety Regime
The United Kingdom has positioned itself as a global broker for “Existential AI Safety.” The establishment of the AI Safety Institute (AISI) represents a shift toward treating AGI safety as a scientific discipline akin to epidemiology.38
- Safety Cases: In February 2025, the UK introduced a “Safety Case” framework for advanced models. Borrowing from high-stakes industries like nuclear power and aviation, this framework asks developers to present a structured argument, supported by evidence, that their system is safe before deployment.39
- Status: While highly influential, this regime relies heavily on voluntary cooperation and international consensus, lacking the hard statutory “kill switch” mandates of California’s SB 53.40
B. Australia: The Mandatory Guardrails
Australia is moving from voluntary principles toward mandatory requirements. The Voluntary AI Safety Standard, introduced in September 2024, outlined ten guardrails for AI development.41
- Evolution: By 2025/2026, the Australian government signaled intent to make these guardrails mandatory for “high-risk” settings. The “Responsible AI Index 2025” tracks corporate maturity, focusing on accountability and contestability.42 However, Australia lacks a specific “Singularity” statute, regulating AGI through the lens of consumer protection and administrative law.
C. New Zealand: Principles Over Rules
New Zealand’s AI Strategy, released in July 2025, explicitly opts for a “light touch,” principles-based approach.43
- Decision: The government decided not to enact a standalone AI Act, arguing that existing laws (Privacy Act, Human Rights Act) are sufficient.45
- Risk: Critics argue this approach is dangerously naive regarding the Singularity. Relying on laws drafted for human actors to contain superintelligent agents may leave New Zealand legally defenseless against high-capability risks.46
D. The European Union: The Systemic Risk Regime
The EU AI Act, whose obligations continue to phase in through 2026, remains the world’s most comprehensive “hard” law.47
- Systemic Risk: The Act creates a specific tier for “General Purpose AI” (GPAI) models with “systemic risk,” presumed where training compute exceeds 10^25 FLOPS, one order of magnitude below California’s 10^26 trigger. These models face rigorous adversarial testing and reporting requirements (see the tiering sketch after this list).
- Prohibitions: The Act bans AI practices deemed “unacceptable,” such as subliminal manipulation. This effectively legislates against certain “dystopian” Singularity outcomes, such as mass behavioral control.48
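Read together with Section II, the compute triggers form a simple tiering rule. A minimal sketch, assuming the 10^25 FLOPS systemic-risk presumption in the EU AI Act and the 10^26 FLOPS frontier-model trigger in SB 53 (function and tier labels are ours, not statutory terms):

```python
# Illustrative mapping from estimated training compute to the statutory
# tiers discussed in this report. Thresholds: 1e25 FLOPS (EU AI Act
# systemic-risk presumption), 1e26 FLOPS (California SB 53 frontier model).
def regulatory_tiers(training_flops: float) -> list[str]:
    tiers = []
    if training_flops > 1e25:
        tiers.append("EU GPAI with systemic risk (presumed)")
    if training_flops > 1e26:
        tiers.append("California SB 53 frontier model")
    return tiers

print(regulatory_tiers(3e25))  # ['EU GPAI with systemic risk (presumed)']
print(regulatory_tiers(2e26))  # both tiers apply
```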
E. China: Sovereignty and Control
China’s amendments to the Cybersecurity Law, effective January 1, 2026, embed AI governance strictly within the framework of national security and regime stability.49
- State Control: The focus is on ensuring AI remains subservient to the state. The “Singularity” in China is viewed as a potential threat to political control.
- Strategic Shift: Interestingly, China removed a comprehensive AI law from its 2025 legislative agenda, opting for flexible, sector-specific rules. This mirrors the U.S. “innovation-first” pivot, suggesting a global dynamic where the major powers are prioritizing development speed and flexibility over rigid comprehensive legislation.51
Table 2: Comparative Global Approaches to Singularity Governance (2026)
| Jurisdiction | Approach Type | Key Mechanism | Focus of Regulation | Singularity Stance |
| --- | --- | --- | --- | --- |
| United States (Federal) | Accelerationist / Deregulatory | Funding controls (BEAD), Preemption | Innovation & Geopolitics | Strategic Asset |
| California (State) | Containment / Safety | Compute Thresholds (10^26 FLOPS) | Catastrophic Risk | Existential Threat |
| Canada | Stalled / Voluntary | Voluntary Code (Failed AIDA) | Commercial Conduct | Governance Gap |
| United Kingdom | Scientific / Evaluation | Safety Cases, AISI | Model Capability | Manageable Risk |
| European Union | Comprehensive / Rights-Based | EU AI Act (Systemic Risk Tier) | Fundamental Rights | Regulated Product |
| China | State Security / Control | Cybersecurity Law | Regime Stability | State Instrument |
VIII. Conclusion: The Emergence of Existential Law
The survey of United States, Canadian, and global law in 2025 and 2026 reveals a legal order in the throes of a profound transformation. The Technological Singularity, once a distant abstraction, has forced the emergence of what can be termed “Existential Law”—jurisprudence concerned not merely with the orderly function of society, but with the continued viability of the human species as the dominant cognitive force on Earth.
In the United States, this transformation is characterized by a violent oscillation between the desire to harness the Singularity for national power (Federal policy) and the desperate need to contain its dangers (California law). This constitutional fracture suggests that the path to the Singularity will be litigated in the courts as a battle between the Commerce Clause and the police power of the states.
In Canada, the failure of AIDA serves as a cautionary tale of the “Pacing Problem.” The linear speed of parliamentary democracy has been outstripped by the exponential curve of technological capability, leaving a sophisticated jurisdiction reliant on the “gentleman’s agreements” of voluntary codes in the face of potentially infinite risk.
Globally, the fracturing of the regulatory landscape—with the EU regulating for rights, China for control, and the UK for safety—suggests that there is no unified “human” response to the Singularity. Instead, the “Law of the Event Horizon” is being written in real-time, often by the technologies themselves, as legal systems struggle to bind the “unbound” intelligence of the coming age. For the legal practitioner, the era of “AI Law” as a niche sub-discipline is over; we are now entering the era where the law must define the boundary conditions of human existence.
Works cited
- Technological singularity – Wikipedia, accessed February 10, 2026, https://en.wikipedia.org/wiki/Technological_singularity
- The Legal Singularity – University of Toronto Press, accessed February 10, 2026, https://utppublishing.com/doi/book/10.3138/9781487529413
- Will the “Legal Singularity” Hollow Out Law’s Normative Core?, accessed February 10, 2026, https://repository.law.umich.edu/cgi/viewcontent.cgi?article=1024&context=mtlr
- (PDF) AGI crimes? The role of criminal law in mitigating existential, accessed February 10, 2026, https://www.researchgate.net/publication/382913306_AGI_crimes_The_role_of_criminal_law_in_mitigating_existential_risks_posed_by_artificial_general_intelligence
- AI Risk and the Law of AGI | Lawfare, accessed February 10, 2026, https://www.lawfaremedia.org/article/ai-risk-and-the-law-of-agi
- AI legislation in the US: A 2026 overview – SIG, accessed February 10, 2026, https://www.softwareimprovementgroup.com/blog/us-ai-legislation-overview/
- California enacts landmark AI transparency law – White & Case LLP, accessed February 10, 2026, https://www.whitecase.com/insight-alert/california-enacts-landmark-ai-transparency-law-transparency-frontier-artificial
- The Death of Canada’s Artificial Intelligence and Data Act: What …, accessed February 10, 2026, https://montrealethics.ai/the-death-of-canadas-artificial-intelligence-and-data-act-what-happened-and-whats-next-for-ai-regulation-in-canada/
- Artificial Intelligence Liability and the AI Respondeat Superior Analogy, accessed February 10, 2026, https://open.mitchellhamline.edu/cgi/viewcontent.cgi?article=1223&context=mhlr
- accessed February 10, 2026, https://repository.uclawsf.edu/cgi/viewcontent.cgi?article=1153&context=hastings_science_technology_law_journal#:~:text=Under%20current%20Delaware%20law%2C%20any,the%20forefront%20of%20these%20barriers.
- State AI laws under federal scrutiny – White & Case LLP, accessed February 10, 2026, https://www.whitecase.com/insight-alert/state-ai-laws-under-federal-scrutiny-key-takeaways-executive-order-establishing
- Executive Order Issued to Restrict State Regulation of AI, accessed February 10, 2026, https://phillipslytle.com/executive-order-issued-to-restrict-state-regulation-of-artificial-intelligence/
- U.S. Artificial Intelligence Law Update: Navigating the Evolving State, accessed February 10, 2026, https://www.jdsupra.com/legalnews/u-s-artificial-intelligence-law-update-5806709/
- How 2026 Could Decide the Future of Artificial Intelligence, accessed February 10, 2026, https://www.cfr.org/articles/how-2026-could-decide-future-artificial-intelligence
- Executive Order 14365: Ensuring a National Policy Framework for, accessed February 10, 2026, https://www.taftlaw.com/news-events/white-house-toolkit/executive-order-14365-ensuring-a-national-policy-framework-for-artificial-intelligence/
- Bill Text: CA SB53 | 2025-2026 | Regular Session | Enrolled, accessed February 10, 2026, https://legiscan.com/CA/text/SB53/id/3270002
- California SB 53 — Expanded Compliance Guide for Frontier AI, accessed February 10, 2026, https://www.nelsonmullins.com/insights/blogs/ai-task-force/ai/california-sb-53-expanded-compliance-guide-for-frontier-ai-developers
- New State AI Laws are Effective on January 1, 2026, But a New, accessed February 10, 2026, https://www.kslaw.com/news-and-insights/new-state-ai-laws-are-effective-on-january-1-2026-but-a-new-executive-order-signals-disruption
- Codifying Command: Integrating AI into Corporate Boards, accessed February 10, 2026, https://repository.uclawsf.edu/cgi/viewcontent.cgi?article=1153&context=hastings_science_technology_law_journal
- Artificially Intelligent Boards and the Future of Delaware Corporate, accessed February 10, 2026, https://digitalcommons.law.uga.edu/cgi/viewcontent.cgi?article=2554&context=fac_artchop
- Fiduciary Duties and the Business Judgment Rule 2.0 in the AI Act Age, accessed February 10, 2026, https://blogs.law.ox.ac.uk/oblb/blog-post/2026/01/fiduciary-duties-and-business-judgment-rule-20-ai-act-age
- AI In The C-Suite: Rethinking Director Reliance Under DGCL § 141, accessed February 10, 2026, https://jolt.richmond.edu/2026/01/21/ai-in-the-c-suite-rethinking-director-reliance-under-dgcl-%C2%A7-141e-in-the-age-of-algorithms/
- Post series on “Liability Law for reducing Existential Risk from AI”, accessed February 10, 2026, https://www.alignmentforum.org/posts/X8NhKh2g2ECPrm5eo/post-series-on-liability-law-for-reducing-existential-risk
- Learning societal values from law as part of an AGI alignment strategy, accessed February 10, 2026, https://forum.effectivealtruism.org/posts/9YLbtehKLT4ByLvos/learning-societal-values-from-law-as-part-of-an-agi
- Strict Liability for Ultrahazardous or Abnormally Dangerous Activities, accessed February 10, 2026, https://gulisanolaw.com/strict-liability/
- Ultrahazardous Activity Liability – LegalMatch, accessed February 10, 2026, https://www.legalmatch.com/law-library/article/ultrahazardous-activity-liability.html
- Tort Law: Strict Liability and Abnormally Dangerous Activities, accessed February 10, 2026, https://www.lawshelf.com/shortvideoscontentview/strict-liability-in-tort-law/
- Applying Strict Liability to Artificial Intelligence as an Abnormally Da, accessed February 10, 2026, https://scholarship.law.missouri.edu/cgi/viewcontent.cgi?article=2191&context=facpubs
- The Case For Limited AI Legal Personhood in Intellectual Property, accessed February 10, 2026, https://escholarship.org/content/qt8k01b2d3/qt8k01b2d3.pdf
- AI Can’t Hold Patents Because They Require an “Inventor” to Be a, accessed February 10, 2026, https://www.jdsupra.com/legalnews/ai-can-t-hold-patents-because-they-7625419/
- No legal personhood for AI – PMC, accessed February 10, 2026, https://pmc.ncbi.nlm.nih.gov/articles/PMC10682746/
- Recognizing Right: The Status of Artificial Intelligence, accessed February 10, 2026, https://digitalcommons.law.umaryland.edu/cgi/viewcontent.cgi?article=1373&context=jbtl
- The Artificial Intelligence and Data Act (AIDA) – Companion document, accessed February 10, 2026, https://ised-isde.canada.ca/site/innovation-better-canada/en/artificial-intelligence-and-data-act-aida-companion-document
- The Artificial Intelligence and Data Act (AIDA) – OurCommons.ca, accessed February 10, 2026, https://www.ourcommons.ca/Content/Committee/441/INDU/Brief/BR12711885/br-external/CanadianLabourCongress-e.pdf
- Artificial Intelligence and Data Act, accessed February 10, 2026, https://ised-isde.canada.ca/site/innovation-better-canada/en/artificial-intelligence-and-data-act
- Advanced AI Governance: A Literature Review of Problems, Options, accessed February 10, 2026, https://law-ai.org/advanced-ai-gov-litrev/
- Artificial Intelligence 2025 – Canada | Global Practice Guides, accessed February 10, 2026, https://practiceguides.chambers.com/practice-guides/artificial-intelligence-2025/canada
- Introducing the AI Safety Institute – GOV.UK, accessed February 10, 2026, https://www.gov.uk/government/publications/ai-safety-institute-overview/introducing-the-ai-safety-institute
- UK: AISI publishes paper on safety cases for frontier AI | Node, accessed February 10, 2026, https://www.dataguidance.com/node/641240
- Safety and security risks of generative artificial intelligence to 2025, accessed February 10, 2026, https://www.gov.uk/government/publications/frontier-ai-capabilities-and-risks-discussion-paper/safety-and-security-risks-of-generative-artificial-intelligence-to-2025-annex-b
- Australian Responsible AI Index 2025 – Fifth Quadrant, accessed February 10, 2026, https://www.fifthquadrant.com.au/content/uploads/Australian-Responsible-AI-Index-2025_Full-report.pdf
- Australia’s national benchmark for responsible AI adoption is now, accessed February 10, 2026, https://www.industry.gov.au/news/australias-national-benchmark-responsible-ai-adoption-now-available
- New Zealand’s strategy for artificial intelligence – The Beehive, accessed February 10, 2026, https://www.beehive.govt.nz/sites/default/files/2025-07/New%20Zealand%27s%20AI%20Strategy%20-%20Investing%20with%20confidence.pdf
- Artificial Intelligence strategy and business guidance now available, accessed February 10, 2026, https://www.mbie.govt.nz/about/news/artificial-intelligence-strategy-and-business-guidance-now-available
- New Zealand: New Guidance Released on Generative AI Use in the, accessed February 10, 2026, https://www.loc.gov/item/global-legal-monitor/2025-05-15/new-zealand-new-guidance-released-on-generative-ai-use-in-the-public-service/
- Is NZ defence and intelligence policy aligning with AUKUS in all but, accessed February 10, 2026, https://www.rnz.co.nz/news/political/586209/is-nz-defence-and-intelligence-policy-aligning-with-aukus-in-all-but-name
- Europe AI Act Summary: EU Artificial Intelligence Regulations, accessed February 10, 2026, https://gdprlocal.com/europe-ai-act-summary/
- AI Regulations in 2025: US, EU, UK, Japan, China & More, accessed February 10, 2026, https://www.anecdotes.ai/learn/ai-regulations-in-2025-us-eu-uk-japan-china-and-more
- CHINA: Amendments to Cybersecurity Law Effective 1 January 2026, accessed February 10, 2026, https://privacymatters.dlapiper.com/2025/11/china-amendments-to-cybersecurity-law-effective-1-january-2026/
- China Approves Major Amendments to Cybersecurity Law,, accessed February 10, 2026, https://www.reedsmith.com/articles/china-approves-major-amendments-to-cybersecurity-law/
- China resets the path to comprehensive AI governance, accessed February 10, 2026, https://eastasiaforum.org/2025/12/25/china-resets-the-path-to-comprehensive-ai-governance/
