By Steven Damian Imparl, J.D.
with generous help from my AI friends ChatGPT, Claude, DeepSeek, Gemini Deep Search, Grok, Kimi, Le Chat, Microsoft Copilot, Phi, Perplexity, Superagent, and Qwen.
Last updated: February 23, 2026.
PLEASE NOTE: This content is solely for informational purposes. This content is a very early pre-publication edition. It is not legal advice; I am not your lawyer, and you are not my client. Thanks for visiting this page! 👍
Copyright © 2026 by Steven Damian Imparl. All rights reserved.
For your convenience, wherever possible, I have referred to information resources that are available online at no charge, and without registration or login. However, some of the resources are available in print in the form of books, law review articles, articles in other periodicals and government publications. Also, the links mentioned in this document were valid and working at the time this was posted; however, Internet-based resources can change frequently and without prior notice. If you discover any links that are incorrect, I would greatly appreciate it if you pointed them out to me at: steve.imparl@gmail.com. Thank you.
Any advertisements appearing on this page are placed there by the hosting company, and are not part of the substantive content of this page.
Last, I am doing this project to organize and publish information about more than 375 legal topics related to artificial intelligence. I am a one-man operation. If you would like to support this work, please visit: https://www.buymeacoffee.com/stevenimparl. Thanks very much!
Executive Summary: The Era of the Digital Adjudicator
The integration of Artificial Intelligence (AI) into the architecture of workers’ compensation law represents one of the most profound shifts in administrative jurisprudence since the “Grand Bargain” of the early 20th century. As we stand in early 2026, the adoption of generative AI, predictive analytics, and algorithmic management has moved beyond theoretical pilot programs to become the operational standard in claims administration, risk management, and legal practice.1 This transformation offers the tantalizing promise of clearing backlog-plagued dockets and democratizing access to medical expertise. However, it simultaneously threatens to obscure liability, introduce systemic bias, and fundamentally challenge the exclusive remedy doctrine that has anchored the workers’ compensation system for over a century.3
This report provides an exhaustive, book-length analysis of the legal, ethical, and operational frameworks governing AI in workers’ compensation across the United States and Canada, with comparative insights from the United Kingdom, European Union, Australia, Singapore, and South Africa. It is designed as a definitive resource for legal practitioners, insurers, and policymakers who must navigate the “Algorithmic Workplace,” dissecting the friction between rapid technological deployment and the slower, deliberative pace of legislative and judicial oversight.
Part I: The AI-Augmented Claims Ecosystem and Operational Reality
1.1 From Automation to Autonomy: The State of the Art in 2026
The adoption trajectory of AI in workers’ compensation has been nothing short of vertical. By 2025, industry surveys indicated that 87% of carriers were actively building or deploying AI platforms, marking a distinct shift from simple process automation to complex decision support.2 The ecosystem is currently defined by three primary applications that have fundamentally altered the mechanics of claims adjudication.
1.1.1 Intelligent Claims Triage and “Touchless” Adjudication
Modern claims systems now utilize “agents”—autonomous software entities capable of reasoning and executing workflows—to handle the First Notice of Loss (FNOL). These systems ingest unstructured data from emails, recorded calls, and web forms to structure claim files instantly.5 By analyzing historical data lakes, these tools assign risk scores to new claims, predicting the likelihood of litigation, high-cost medical interventions, or delayed return-to-work (RTW) outcomes.2
The operational impact is statistically significant. Carriers utilizing these systems report a 15% reduction in legal involvement and a 5% reduction in lost-time claim costs due to early AI intervention.7 The mechanism is predictive: the AI identifies “intent to litigate” markers early—such as specific delays in reporting or disconnects between the injury description and the medical code—and prompts adjusters to intervene with “high-touch” empathy. However, the speed of “touchless” adjudication raises profound due process concerns. If an algorithm flags a claim as “high risk” or “fraudulent” within seconds of filing based on opaque criteria, the claimant’s ability to contest the “black box” logic becomes a central legal battleground. The efficiency of the system arguably comes at the cost of transparency, creating a new class of “algorithmic denials” that are difficult to challenge through traditional administrative appeals.
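To make the due process stakes concrete, the triage logic described above can be imagined as a simple scorer over claim attributes. The following is a hypothetical sketch only: the feature names, weights, and thresholds are invented for illustration and do not reflect any vendor's model. Real deployed systems are typically learned models; a key legal difference is that each flag in this sketch is individually explainable, whereas a "black box" score is not.

```python
# Hypothetical sketch of a rule-based FNOL triage scorer. Feature names,
# weights, and thresholds are illustrative assumptions, not a real system.
from dataclasses import dataclass, field

@dataclass
class TriageResult:
    score: float
    flags: list = field(default_factory=list)  # human-readable reasons

def triage_claim(claim: dict) -> TriageResult:
    """Score a First Notice of Loss record; higher = higher predicted risk."""
    score = 0.0
    flags = []
    # Delayed reporting is a commonly cited "intent to litigate" marker.
    if claim.get("report_delay_days", 0) > 14:
        score += 0.4
        flags.append("reported more than 14 days after injury")
    # Disconnect between the injury narrative and billed medical codes.
    if claim.get("injury_code_mismatch", False):
        score += 0.3
        flags.append("injury description does not match medical coding")
    if claim.get("attorney_already_retained", False):
        score += 0.5
        flags.append("attorney retained at filing")
    return TriageResult(score=round(score, 2), flags=flags)

claim = {"report_delay_days": 21, "injury_code_mismatch": True}
result = triage_claim(claim)
print(result.score)   # 0.7
print(result.flags)
```

Note that a claimant contesting this sketch could interrogate each flag individually; contesting an opaque learned score requires discovery into training data and model design, which is precisely the battleground the text describes.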
1.1.2 Generative Medical Summarization and the Hallucination Risk
The sheer volume of medical records in workers’ compensation has historically been a bottleneck, often referred to as the “data avalanche.” Generative AI tools, integrated into platforms like SmartAdvocate and utilizing engines like Supio, now automatically summarize thousands of pages of medical records.1 These summaries highlight pre-existing conditions, causal links, and treatment gaps, allowing adjusters and attorneys to digest complex medical histories in minutes rather than weeks.
While the efficiency gain is undeniable, the propensity of Large Language Models (LLMs) to fabricate information—a phenomenon known as “hallucination”—poses severe evidentiary risks. In the context of a workers’ compensation claim, a summary that incorrectly attributes a symptom to a pre-existing condition or invents a prior surgery can lead to the wrongful denial of benefits. This is not merely a theoretical risk; as discussed later in the analysis of the Sedano case, the legal system is already grappling with the fallout of AI-generated fabrications.8 The reliance on these summaries shifts the professional duty of the attorney or adjuster from “drafter” to “verifier,” a role for which many are ill-equipped or insufficiently trained.
1.1.3 Algorithmic Fraud Detection and the Specter of Bias
AI systems are increasingly deployed to detect anomalies in billing and claimant behavior. These systems analyze patterns across vast datasets to identify “red flags” such as provider upcoding, consistent treatment delays, or symptom constellations that do not match the reported injury mechanism.1
The legal peril here lies in the potential for disparate impact. There is a documented risk that these models may utilize proxies for protected characteristics—such as zip code, primary language, or surname—to flag claims for Special Investigation Unit (SIU) review.10 If an algorithm disproportionately flags claims from minority-majority neighborhoods as “suspicious,” the insurer faces liability under both state insurance codes and federal civil rights laws. The “black box” nature of these algorithms makes proving such discrimination uniquely difficult, as the decision-making pathway is often obscured even from the developers themselves.3
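The disparate impact concern above can be tested numerically. One widely used heuristic is the EEOC's "four-fifths rule," under which a selection rate for one group below 80% of the most favorable group's rate is treated as evidence of adverse impact. The sketch below applies that heuristic to hypothetical SIU-flag rates; the group labels and counts are invented, and a real audit would use properly sourced demographic data plus statistical significance testing.

```python
# Minimal disparate-impact check using the four-fifths (80%) rule heuristic.
# Group labels and rates are hypothetical illustrations only.

def flag_rate(flagged: int, total: int) -> float:
    return flagged / total

def four_fifths_violation(rates: dict) -> bool:
    """True if any group's rate of NOT being flagged for SIU review falls
    below 80% of the most favorable group's rate."""
    pass_rates = {g: 1.0 - r for g, r in rates.items()}
    best = max(pass_rates.values())
    return any(p / best < 0.8 for p in pass_rates.values())

# Hypothetical: 30% of claims from group A flagged vs. 8% from group B.
rates = {
    "group_a": flag_rate(30, 100),
    "group_b": flag_rate(8, 100),
}
print(four_fifths_violation(rates))  # True: 0.70 / 0.92 ≈ 0.76 < 0.8
```

The point of the exercise is that an insurer (or regulator) does not need access to the model's internals to detect a disparity of this kind; outcome-level auditing is possible even when the algorithm itself is a black box.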
1.2 The Economic Imperative vs. The Human Element
The industry argues that AI is necessary to combat medical cost inflation, which 93% of carriers identify as a critical threat, and the “knowledge gap” left by retiring adjusters.2 However, the “human in the loop” remains a critical legal and ethical safeguard. Research from the Workers Compensation Research Institute (WCRI) indicates that injured workers often seek attorney representation not due to legal complexity, but due to communication failures—a gap AI is ill-equipped to bridge without human oversight.5 The automated denial letter, generated by an algorithm and devoid of empathy, is a primary driver of litigation. Thus, while AI can process data, it cannot manage the human relationship that sits at the core of the workers’ compensation compact.
| AI Application | Primary Operational Benefit | Primary Legal & Ethical Risk |
| --- | --- | --- |
| FNOL Triage | Speed; Immediate routing to specialists; Reduced lag time | Algorithmic bias in risk scoring; “Black box” denials lacking explanation |
| Medical Summarization | Massive time savings; Identifying missed diagnoses | “Hallucinations” (fabricated facts); Evidentiary admissibility challenges |
| Fraud Detection | Identifying complex schemes; Reducing leakage | Disparate impact on protected classes; Privacy violations; “Guilt by algorithm” |
| Demand Packages | Automated drafting of settlement demands | Unauthorized practice of law; Lack of nuanced legal argument; Formulaic undervaluation |
Part II: The Regulatory Patchwork in the United States
As of 2026, the United States lacks a unified federal AI law specifically governing insurance or employment. Instead, a complex patchwork of state regulations, model laws, and soft-law guidance has emerged, creating a compliance minefield for national carriers and employers.
2.1 The NAIC Model Bulletin: The De Facto National Standard
The National Association of Insurance Commissioners (NAIC) adopted its Model Bulletin on the Use of Artificial Intelligence Systems by Insurers in late 2023. By 2025, over 25 states had adopted this bulletin, making it the primary regulatory framework for workers’ compensation carriers.12 The Bulletin represents a “principles-based” approach, emphasizing governance over prescriptive technical standards.
Key Provisions for Claims Administration:
- AIS Program Requirement: Insurers must maintain a written Artificial Intelligence Systems (AIS) Program. This program must address governance, risk management, and internal controls for every phase of the AI life cycle, from development to retirement.14
- No “Black Box” Defense: The Bulletin explicitly forecloses blaming the algorithm: insurers retain ultimate responsibility for all decisions made by AI systems, even those developed by third-party vendors. This “non-delegable duty” is critical in workers’ compensation, where carriers often rely on third-party administrators (TPAs) and software vendors.15
- Adverse Consumer Outcomes: The bulletin specifically targets decisions that result in “Adverse Consumer Outcomes,” defined as decisions that negatively impact the consumer (e.g., claim denial, reduced benefit amount). Insurers must be able to explain the rationale behind these decisions to regulators, effectively creating a “right to explanation” for injured workers.15
- Third-Party Vendor Oversight: Carriers must audit their AI vendors. This effectively extends regulatory reach to software developers who were previously shielded from insurance regulations.12
Analysis: The NAIC Bulletin fundamentally shifts the burden of proof to the insurer. In a workers’ compensation dispute, carriers can no longer simply assert that “the system flagged the claim”; they must be prepared to produce the governance documentation, the bias-testing logs, and the rationale for the AI’s output. This creates a new avenue for discovery in bad-faith litigation, where plaintiff attorneys will increasingly demand the “AIS Program” documents to prove systemic negligence.
2.2 State-Specific Legislative Frameworks
2.2.1 Illinois: The Strict Liability of Algorithms
Illinois has established itself as a bellwether for AI regulation in employment, moving beyond data privacy to substantive civil rights protections.
- HB 3773 (Effective Jan 1, 2026): This amendment to the Illinois Human Rights Act explicitly prohibits the use of AI in a manner that results in discrimination based on protected classes. Crucially, it requires employers to notify employees when AI is used in employment decisions, which include hiring, discipline, and discharge.16
- Impact on Workers’ Comp: While primarily aimed at hiring, the broad definition of “employment decisions” likely encompasses return-to-work determinations and termination following an injury. If an AI system recommends terminating an injured worker because their “productivity score” dropped post-injury, this could constitute both retaliation and disability discrimination under the new law. The requirement for notice means that an injured worker has a statutory right to know if an algorithm played a role in their adverse employment action.19
2.2.2 New York: Transparency and The “Robot Tax”
New York’s regulatory environment is characterized by high transparency requirements and aggressive labor protection, reflecting its strong union history.
- Senate Bill S822 (2025): This bill mandates that state agencies (including the Workers’ Compensation Board) maintain and publish an inventory of all AI systems that “directly impact the public.” It explicitly prohibits the use of AI to displace state employees or diminish civil service rights, effectively banning the full automation of the adjudication function within the Board itself.20
- Insurance Circular Letter No. 7: The NY Department of Financial Services (NYDFS) requires insurers to perform disparate impact testing on their AI models to ensure they do not unfairly discriminate against protected classes. This goes beyond the NAIC model by imposing specific testing requirements for “proxy discrimination”.13
- Proposed “Robot Tax”: Legislation has been introduced (though not yet passed) to impose a tax on companies that displace workers with AI, using the revenue to fund retraining. This reflects a growing legislative hostility toward automation-driven labor displacement, which could eventually influence premium calculations for highly automated workplaces.21
2.2.3 California: Safety and Sanctions
California continues to lead in privacy and safety regulation, with a focus on the legal profession’s conduct.
- Senate Bill 53 (Transparency in Frontier AI): This bill requires transparency for “frontier” AI models and extends whistleblower protections to employees who report safety risks associated with AI systems. This is relevant to tech companies developing the tools used in claims administration.22
- The Sedano Precedent (2025): In Sedano v. Live Action General Engineering, the California Workers’ Compensation Appeals Board (WCAB) issued a Notice of Intent to sanction a law firm for filing a petition drafted by AI that contained “hallucinated” case law and irrelevant legal arguments. This decision established that attorneys are strictly liable for AI-generated content in pleadings. The Board’s rationale was rooted in the duty of candor and the verification requirements of legal pleadings.8 This case serves as a warning that the “efficiency” of AI cannot bypass the professional obligations of the attorney.
2.2.4 Colorado: High-Risk Classification
The Colorado Artificial Intelligence Act (CAIA) (Effective Feb 2026) imposes a duty of reasonable care on developers and deployers of “high-risk” AI systems. “High-risk” is defined to include systems that make consequential decisions regarding employment or essential government services. This likely captures automated processing systems used in workers’ compensation, subjecting them to mandatory impact assessments and creating a statutory duty of care that could support negligence claims.19
2.3 Federal Overlay: EEOC and ADA Guidance
The U.S. Equal Employment Opportunity Commission (EEOC) has issued guidance warning that the use of AI in employment decisions can violate the Americans with Disabilities Act (ADA).
- Screening Out Disabilities: Algorithms that measure keystrokes or reaction times may unfairly screen out workers with physical impairments (e.g., arthritis, visual impairments), leading to liability. In a workers’ compensation context, this is critical for “Return to Work” assessments. If an algorithm deems a worker “unfit” based on a metric that does not account for their disability (or reasonable accommodation), the employer faces ADA liability.25
- Medical Inquiries: AI systems that analyze voice patterns or facial expressions to detect “fraud” or “malingering” may constitute prohibited medical examinations under the ADA if used pre-offer or without business necessity. The use of such biometric analysis on injured workers is legally perilous.25
Part III: The Canadian Legal Context
Canada’s approach to AI in workers’ compensation is characterized by a “patchwork” of provincial privacy laws and emerging litigation, contrasting with the more aggressive legislative activity in US states. The failure of federal legislation has shifted the burden of regulation to the provinces and the courts.
3.1 Legislative Stagnation: Bill C-27 and AIDA
The federal Artificial Intelligence and Data Act (AIDA), part of Bill C-27, aimed to regulate high-impact AI systems. However, as of late 2025, it faced significant delays and failed to pass before the election cycle.27 This legislative void leaves Canadian workers’ compensation boards and employers navigating a regulatory gray zone, relying primarily on existing privacy (PIPEDA) and human rights laws. Without a specific AI statute, the legal analysis often turns on whether the use of AI constitutes a “reasonable” collection of data under privacy statutes.
3.2 Ontario: Leading the Provincial Charge
Ontario has taken specific steps to regulate AI in the workplace, acting as the primary innovator in Canadian employment law.
- Bill 149 (Working for Workers Four Act, 2024): Requires employers to disclose if AI is used in the hiring process. While focused on recruitment, legal experts suggest this transparency principle will likely extend to performance monitoring and claims management in future litigation. The principle is that workers have a right to know if a machine is evaluating them.28
- Privacy & Electronic Monitoring: Ontario employers must have a written policy on electronic monitoring of employees. This is directly relevant to “algorithmic management” tools used to track injured workers’ recovery or return-to-work compliance. If an employer uses GPS tracking to dispute a WSIB claim, they must have disclosed this monitoring in their policy, or the evidence may be excluded.30
3.3 British Columbia: The Judicial Rejection of AI Hallucinations
British Columbia’s Workers’ Compensation Appeal Tribunal (WCAT) has taken a firm stance against the misuse of AI in legal submissions, mirroring the Sedano decision in California.
- Decision A2501051 (2025): The tribunal rejected a worker’s appeal that relied on AI-generated submissions. The AI had fabricated case law (citing non-existent WCAT decisions) and misrepresented WorkSafeBC policy. The tribunal warned that such conduct could lead to cost orders against parties, emphasizing that AI cannot replace the “nuanced understanding” of a human representative. This decision establishes a precedent that “AI incompetence” is a valid ground for dismissing appeals or levying penalties.31
- WorkSafeBC Policy: The agency has emphasized that while AI can assist in claims processing, it must not replace human decision-making in contentious cases. They have notably investigated “claim suppression” in major projects, highlighting the need for human oversight in reporting systems.32
3.4 Class Actions and the “AI Washing” Risk
Canada is seeing a rise in class action litigation related to AI, driven by the aggressive plaintiff bar in provinces like Quebec and British Columbia.
- Privacy & Biometrics: Claims are emerging regarding the collection of biometric data (voice, facial recognition) by employers without adequate consent. This is particularly relevant for “safety” wearables.27
- “AI Washing”: Companies misrepresenting the capabilities of their AI tools (e.g., claiming a safety tool prevents all injuries or that a claims system is “unbiased”) face liability under consumer protection and competition laws. This “AI washing” litigation serves as a check on the marketing hype surrounding legal tech.27
Part IV: Algorithmic Management as a Compensable Hazard
The definition of “workplace hazard” is expanding. Traditionally focused on physical dangers like unguarded machinery or toxic chemicals, the 2026 landscape includes “Algorithmic Management”—the use of software to direct, evaluate, and discipline workers. This shift requires a reimagining of what constitutes a compensable injury.
4.1 The “Electronic Panopticon” and Psychosocial Injury
Algorithmic management tools often function as “always-on” supervisors, tracking keystrokes, GPS location, and even biometric data (heart rate, eye movement) to measure productivity in real-time.34
- Psychosocial Risk: Research confirms that this constant surveillance creates “psychosocial risks” including chronic stress, anxiety, and burnout. Workers feel a loss of autonomy (“decision autonomy”) and fear of “function creep” (data collected for safety used for discipline). The feeling of being watched by an unblinking digital eye creates a “pressure cooker” environment.35
- Compensability: In jurisdictions that recognize “mental-mental” (mental injury caused by mental stress) claims, algorithmic management is becoming a valid cause of action. If an algorithm sets unrealistic quotas that lead to a nervous breakdown, that injury is increasingly viewed as compensable. The legal argument is that the algorithm itself is the “instrument of injury,” analogous to a malfunctioning machine press.34
4.2 Wearables: Safety Tool or Surveillance Device?
Wearable technology (smart vests, exoskeletons) is marketed as a safety solution to prevent musculoskeletal injuries. These devices can alert workers to unsafe lifting postures or fatigue.
- Privacy vs. Safety: While these devices can reduce injuries by 50-90%,38 they also collect granular data on worker movement. Legal conflicts arise when this data is used to deny workers’ comp claims (e.g., “The data shows you didn’t lift with your knees, therefore you violated safety protocol”). This creates a tension between the employer’s right to enforce safety and the worker’s right to privacy and non-discrimination.
- Legal Guidance: The EEOC and various state privacy laws warn that mandatory wearables may constitute a “medical examination” under the ADA if they measure physiological responses (heart rate, blood pressure). Employers must ensure data is used solely for safety and not for discriminatory discipline or to screen out workers with disabilities.26
4.3 The Gig Economy Gap: Singapore’s Platform Workers Bill
The distinction between “employee” and “contractor” is blurred by algorithmic control. Singapore provides a leading model for addressing this through the Platform Workers Bill 2024.
- Distinct Category: This legislation created a distinct legal category for “platform workers” who are subject to algorithmic management. It mandates that platform operators provide Work Injury Compensation (WIC) insurance and Central Provident Fund (CPF) contributions, explicitly recognizing that algorithmic control creates an employment-like relationship that warrants protection.40
- Significance: This model is being closely watched by US and Canadian policymakers as a potential solution to the coverage gaps for gig workers injured on the job. It represents a legislative acknowledgment that the algorithm acts as a manager, and therefore the entity deploying it must bear the cost of workplace injuries.42
Part V: Piercing the Shield – Liability and Exclusive Remedy
The “Exclusive Remedy” rule—which bars employees from suing their employers for work-related injuries in exchange for no-fault benefits—is the bedrock of the workers’ compensation system. However, the introduction of AI is providing plaintiff attorneys with new legal theories to pierce this shield and access the deeper pockets of tort liability.
5.1 The “Dual Persona” Doctrine and Product Liability
The “Dual Persona” (or Dual Capacity) doctrine allows an employee to sue their employer if the employer acts in a second capacity that confers obligations independent of the employment relationship.
- AI as a Product: If an employer develops a proprietary AI tool (e.g., a warehouse robot or a custom scheduling algorithm) that injures a worker, plaintiffs are arguing the employer should be liable as a manufacturer under product liability law.
- Jurisprudential Split:
- California/Ohio: Historically more open to Dual Capacity, allowing suits if the product is also sold to the general public. In Mercer v. Uniroyal, the court allowed a suit where a defective tire made by the employer caused the injury.43
- Majority Rule: Most states (e.g., Minnesota, South Carolina) reject this, viewing the obligation to provide safe tools as “inextricably wound” with the employer’s duty to provide a safe workplace. The courts argue that allowing such suits would dismantle the workers’ comp bargain.45
- The AI Twist: As companies increasingly “fine-tune” foundation models (open-source models like LLaMA, or commercial models like GPT-4) for internal use, they effectively become “manufacturers” of a new software product. If that software “hallucinates” a safety instruction that causes injury, the Dual Persona doctrine may see a resurgence, as the software development function is distinct from the core business of the employer.
5.2 Third-Party Vendor Liability
Since the employer is often shielded, litigation is shifting toward the third-party vendors who build the AI systems.
- Strict Product Liability: Plaintiffs argue that AI software is a “product” and that “black box” algorithms are defectively designed because their decisions are opaque and unpredictable. This theory attempts to apply the logic of strict liability (usually reserved for physical products) to code.47
- Indemnification Battles: Vendor contracts typically contain aggressive limitation of liability clauses (“Exclusive Remedy” limited to fees paid). However, courts are scrutinizing these clauses when personal injury is involved. Vendors like Cognite and Digital.ai often have specific indemnification provisions for bodily injury that cannot be disclaimed, recognizing the potential for physical harm from industrial AI.48
5.3 Agency Liability: Mobley v. Workday
The seminal case of Mobley v. Workday (N.D. Cal. 2024/2025) has opened a new avenue for liability: Agency Theory.
- The Ruling: The court held that Workday, a software vendor, could be liable as an “agent” of the employer because its AI tools were “delegated responsibility” for hiring decisions. The software was not just a passive tool (like a spreadsheet) but an active decision-maker.10
- Application to Workers’ Comp: If a Third-Party Administrator (TPA) uses an AI vendor to automatically deny claims, that vendor could potentially be sued directly as an agent of the employer/insurer. This bypasses the exclusive remedy protections that usually apply only to the employer, allowing injured workers to sue the AI vendor for discrimination or negligence.51
5.4 Intentional Tort Exception
Most states allow workers to sue if the employer intended to cause harm or acted with “substantial certainty” that harm would occur.
- Algorithmic Intent: If an employer knowingly deploys an AI system that pushes workers beyond safe physiological limits (e.g., a warehouse picking algorithm that ignores heat stress data despite warnings), plaintiffs argue this constitutes “substantial certainty” of harm. In states like Washington and West Virginia, this “deliberate intent” exception is a viable path to civil liability. Plaintiffs are using data logs from the AI itself to prove the employer knew the risk and prioritized algorithmic efficiency over safety.47
Part VI: Professional Responsibility and The “Sedano” Standard
The legal profession itself is being transformed—and disciplined—by AI. The introduction of generative AI into legal practice has created a crisis of competence and candor.
6.1 The Sedano Warning
The case of Sedano v. Live Action General Engineering (2025) serves as the primary cautionary tale for the industry.
- Facts: Defense counsel filed a Petition for Reconsideration drafted by AI. The petition cited non-existent cases, invented facts, and failed to cite the evidentiary record properly.
- Holding: The WCAB issued a Notice of Intent to sanction the firm $2,500. The Board held that the attorney of record is strictly liable for the content of pleadings. “AI is a tool… not a replacement for independent thought.” The Board emphasized that the duty to verify citations is non-delegable.8
- Rule 11 Implications: This mirrors federal Rule 11 sanctions. Courts effectively treat unverified AI hallucinations as a violation of the duty of candor. The “I didn’t know” defense is no longer viable.
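The non-delegable verification duty described above can be partially supported by tooling, though never replaced by it. The following is a minimal sketch, assuming a locally maintained set of verified citations standing in for a real citator lookup; the case names and the regular expression are invented for illustration, and a human must still confirm that each verified case actually supports the proposition cited.

```python
# Minimal sketch of a citation sanity check for an AI-drafted pleading.
# "verified_reporter" stands in for a real citator or official database
# query; all case names here are invented for illustration.
import re

# Matches simple "Party v. Party" case styles; real citations are messier.
CITATION_PATTERN = re.compile(r"[A-Z][\w.'-]+ v\. [A-Z][\w.'-]+")

def unverified_citations(draft_text: str, verified_reporter: set) -> list:
    """Return case-style citations in the draft absent from the verified
    set; flagged items may be hallucinated and require human review."""
    found = CITATION_PATTERN.findall(draft_text)
    return [c for c in found if c not in verified_reporter]

verified = {"Smith v. Acme"}
draft = "As held in Smith v. Acme and Jones v. Phantom, benefits accrue..."
print(unverified_citations(draft, verified))  # ['Jones v. Phantom']
```

A check like this catches only nonexistent citations; it cannot detect a real case cited for a fabricated holding, which is why the duty of candor remains squarely with the attorney.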
6.2 The Duty of Competence
Legal ethics opinions (ABA, State Bars) now affirm that the “Duty of Technology Competence” requires lawyers to understand the risks of AI. Ignorance of how an LLM works (e.g., believing it is a search engine rather than a probabilistic token generator) is a violation of professional ethics. Attorneys must understand the risks of bias, hallucination, and confidentiality breaches before using these tools.4
6.3 Judicial Use of AI
The question of whether judges can use AI is also active. While AI can assist in summarizing evidence, the “judicial function” of weighing credibility and applying discretion cannot be delegated to an algorithm. Opinions drafted by AI without human verification are subject to reversal for failure to exercise judicial discretion. The Sedano case itself highlighted that the Board suspected AI use partly because the legal reasoning was “hollow” and lacked the nuance of a human analysis.53
Part VII: Global Comparative Perspectives
7.1 European Union: The AI Act
The EU AI Act (fully applicable 2026) represents the most comprehensive attempt to regulate AI globally.
- High-Risk Classification: The Act explicitly classifies AI systems used in employment, worker management, and access to self-employment as High-Risk. This includes tools used for recruitment, task allocation, and performance monitoring.
- Requirements: These systems must undergo rigorous conformity assessments, maintain high data quality governance (to prevent bias), and ensure human oversight. Using a “black box” to fire a worker is effectively illegal in the EU.
- Insurance Impact: Insurers using AI for risk assessment in life and health insurance are also classified as High-Risk. This creates a regulatory burden significantly higher than in the US, requiring insurers to prove their models are robust, transparent, and fair before deployment.54
7.2 United Kingdom: Goal-Setting Regulation
The UK Health and Safety Executive (HSE) maintains a “goal-setting” approach rather than a prescriptive one.
- Health and Safety at Work Act (HSWA): The HSE asserts that existing obligations under the HSWA apply to AI. If AI causes a safety risk, the employer is liable regardless of the technology’s novelty. The HSE expects employers to risk-assess their AI tools just as they would a physical machine.
- Regulatory Sandbox: The UK is using a “sandbox” approach to test AI safety in industrial settings, favoring innovation over strict ex-ante regulation.56
7.3 Australia: Privacy and WHS Intersection
Australia combines Work Health and Safety (WHS) laws with privacy reform.
- Safe Work Australia: The agency has issued guidance confirming that WHS duties extend to psychological health affected by AI surveillance, and recognizes that algorithmic management can be a “psychosocial hazard.”
- Voluntary Standard: The government has released a Voluntary AI Safety Standard (2024) to guide businesses until legislation is finalized. Privacy laws are also being updated to require specific consent for AI analysis of personal information.58
7.4 South Africa: Digital Transformation Challenges
South Africa’s experience serves as a case study in the risks of rapid digitalization without adequate infrastructure.
- Compensation Fund Modernization: The Fund is migrating to the uMehluko digital system to address chronic backlogs. However, the transition has been plagued by technical failures, leaving workers unpaid and medical providers frustrated.
- Hybrid Inspections: The Department of Employment and Labour is moving toward “hybrid inspections” that combine human inspectors with AI data analytics to target non-compliant employers. This reflects a desire to leapfrog legacy inefficiencies using tech, though execution remains a challenge.61
Conclusion and Recommendations
By 2026, AI has become an inextricable part of the workers’ compensation fabric. It offers the only viable solution to the sector’s “data avalanche” but brings existential legal risks. The tension between the “speed” of the algorithm and the “equity” of the law is the defining conflict of this decade.
Recommendations for Stakeholders:
- For Employers: Conduct an Algorithmic Impact Assessment for any tool that measures worker productivity or safety. Ensure “human in the loop” protocols are documented to defend against ADA and intentional tort claims. Treat your algorithms as you would a piece of heavy machinery: with regular safety audits and maintenance.28
- For Insurers: Align with the NAIC Model Bulletin. Establish an AIS Governance Committee that includes legal, compliance, and data science representatives. You cannot outsource your regulatory liability; you must audit your third-party vendors for bias and “black box” explainability.15
- For Legal Practitioners: Verify every citation. Treat Generative AI as an unreliable drafting assistant, not a junior associate. Be prepared to litigate the “agency” of AI vendors in third-party suits. The duty of competence now includes technological literacy.31
- For Policymakers: Look to the Singapore Model for gig worker coverage and the EU AI Act for risk classification frameworks. The era of “light touch” regulation is ending; clear statutory guardrails are necessary to prevent a crisis of legitimacy in the adjudication system.
The gavel has not been replaced by the algorithm, but it is now guided by it. The challenge for the legal profession is to ensure that this guidance remains just, transparent, and human-centric.
Works cited
- The Role of AI in Legal Software in 2025 | SmartAdvocate, accessed February 17, 2026, https://www.smartadvocate.com/article/the-ai-revolution-in-legal-software-what-your-firm-needs-to-know-for-2025
- Generative AI Reshapes Workers’ Compensation as Insurers Race …, accessed February 17, 2026, https://riskandinsurance.com/generative-ai-reshapes-workers-compensation-as-insurers-race-to-transform-operations/
- Understanding the Risks of AI in Workers’ Compensation Law, accessed February 17, 2026, https://www.deflaw.com/insights/understanding-the-risks-of-ai-in-workers-compensation-law/
- The Implications of Machine Learning in Workers’ Compensation, accessed February 17, 2026, https://www.jdsupra.com/legalnews/artificial-intelligence-the-3773439/
- Where AI Actually Works in Workers’ Comp — And Where It Shouldn’t, accessed February 17, 2026, https://www.workerscompensation.com/expert-analysis/where-ai-actually-works-in-workers-comp-and-where-it-shouldnt/
- How AI Is Transforming Workers’ Comp Claims in 2025 – RescueMeds, accessed February 17, 2026, https://rescuemeds.com/ai-in-workers-comp-claims-2025/
- AI Reduces Legal Involvement in Workers’ Compensation Lost-Time, accessed February 17, 2026, https://www.gradientai.com/press_gradient-ai-study-reduces-workers-compensation-lost-time-claims-legal-engagement
- WCAB Warns Against Unchecked Use of Artificial Intelligence in …, accessed February 17, 2026, https://www.sullivanoncomp.com/blog/wcab-warns-against-unchecked-use-of-artificial-intelligence-in-legal-pleadings
- How AI is Influencing Workers’ Compensation Claims in 2025, accessed February 17, 2026, https://ieatraining.org/ai-influencing-workers-compensation-claims
- Lead Article: When Machines Discriminate: The Rise of AI Bias, accessed February 17, 2026, https://www.quinnemanuel.com/the-firm/publications/when-machines-discriminate-the-rise-of-ai-bias-lawsuits/
- Where AI Actually Works in Workers’ Comp — And Where It Shouldn’t, accessed February 17, 2026, https://www.wcrinet.org/news/detail/where-ai-actually-works-in-workers-comp-and-where-it-shouldnt
- The Implications and Scope of the NAIC Model Bulletin on the Use, accessed February 17, 2026, https://www.hklaw.com/en/insights/publications/2025/05/the-implications-and-scope-of-the-naic-model-bulletin
- AI In Workers Compensation – NCOIL, accessed February 17, 2026, https://ncoil.org/wp-content/uploads/2025/11/Sebastian-Negrusa-Presentation.pdf
- Understanding the NAIC model AI bulletin: what it means for insurers, accessed February 17, 2026, https://www.kennedyslaw.com/en/thought-leadership/article/2025/understanding-the-naic-model-ai-bulletin-what-it-means-for-insurers/
- Bulletin 2024-20-INS – Use of Artificial … – State of Michigan, accessed February 17, 2026, https://www.michigan.gov/difs/-/media/Project/Websites/difs/Bulletins/2024/Bulletin_2024-20-INS.pdf
- New Illinois AI Law Requires Employee Notice, Affirms Existing, accessed February 17, 2026, https://www.seyfarth.com/news-insights/legal-update-new-illinois-ai-law-requires-employee-notice-affirms-existing-employer-nondiscrimination-duties.html
- Illinois Regulates Employers’ Use of AI in Decision-Making, accessed February 17, 2026, https://www.munckwilson.com/news-insights/illinois-regulates-employers-use-of-ai-in-decision-making/
- Illinois Enacts Requirements for AI Use in Employment Decisions, accessed February 17, 2026, https://www.gtlaw.com/en/insights/2024/9/illinois-enacts-requirements-for-ai-use-in-employment-decisions
- Illinois Passes Broad Legislation on Use of AI in Employment, accessed February 17, 2026, https://www.jonesday.com/en/insights/2024/10/illinois-becomes-second-state-to-pass-broad-legislation-on-the-use-of-ai-in-employment-decisions
- NY State Senate Bill 2025-S822, accessed February 17, 2026, https://www.nysenate.gov/legislation/bills/2025/S822
- What’s the latest on AI regulations in New York? – RBT CPAs, LLP, accessed February 17, 2026, https://www.rbtcpas.com/thought-leadership-articles/government/whats-the-latest-on-ai-regulations-in-new-york/
- Artificial Intelligence (AI) – GT L&E Blog, accessed February 17, 2026, https://www.gtlaw-laborandemployment.com/artificial-intelligence-ai/
- WCAB Warns Against Unchecked Use of AI in Legal Pleadings, accessed February 17, 2026, https://ieatraining.org/ai-legal-drafting-wcab-warning
- Illinois Enacts New AI Legislation, Joining Colorado as the Only, accessed February 17, 2026, https://www.employmentlawworldview.com/illinois-enacts-new-ai-legislation-joining-colorado-as-the-only-states-regulating-algorithmic-discrimination-in-private-sector-use-of-ai-systems-us/
- Algorithms, Artificial Intelligence, and Disability Discrimination in Hiring, accessed February 17, 2026, https://www.ada.gov/assets/pdfs/ai-guidance.pdf
- EEOC Issues New Guidance on Wearable Technologies: Key Points, accessed February 17, 2026, https://www.disabilityleavelaw.com/2025/01/articles/eeoc-guidance/eeoc-issues-new-guidance-on-wearable-technologies-key-points-for-employers/
- AI class actions in Canada: new legal ground or the same old claims …, accessed February 17, 2026, https://www.torys.com/our-latest-thinking/publications/2025/10/ai-class-actions-in-canada
- AI and Job Postings: Navigating Ontario’s Upcoming Requirements, accessed February 17, 2026, https://www.dataprotectionreport.com/2025/06/ai-and-job-postings-navigating-ontarios-upcoming-requirements/
- Artificial Intelligence, Real Consequences? Legal Considerations for, accessed February 17, 2026, https://www.labourandemploymentlaw.com/2025/02/artificial-intelligence-real-consequences-legal-considerations-for-canadian-employers-using-ai-tools-in-hiring/
- The Limits to Ontario’s New Algorithmic Monitoring Legislation | OHRH, accessed February 17, 2026, https://ohrh.law.ox.ac.uk/electronic-workplace-monitoring-and-human-rights-the-limits-to-ontarios-new-algorithmic-monitoring-legislation/
- Why Using ChatGPT or AI to Be Your Own WCB Lawyer Can Be, accessed February 17, 2026, https://www.wcblawyers.com/uncategorized/why-using-chatgpt-or-ai-to-be-your-own-wcb-lawyer-can-be-catastrophic/
- WorkSafeBC’s Failures ‘a Major Embarrassment,’ Says Report, accessed February 17, 2026, https://thetyee.ca/News/2025/05/26/WorkSafeBC-Failures-Major-Embarrassment-Report/
- How BC Megaprojects Were Cleared of Suppressing Injured, accessed February 17, 2026, https://thetyee.ca/News/2025/09/15/BC-Megaprojects-Injured-Workers-Claims/
- Workers’ Health under Algorithmic Management: Emerging Findings, accessed February 17, 2026, https://pmc.ncbi.nlm.nih.gov/articles/PMC9859016/
- Making algorithmic management safe and healthy for workers, accessed February 17, 2026, https://repository.essex.ac.uk/35233/1/OSH%20%26%20Alg%20M%20Cefaliello%20Moore%20Donoghue%281%29.pdf
- (PDF) Making algorithmic management safe and healthy for workers, accessed February 17, 2026, https://www.researchgate.net/publication/370237743_Making_algorithmic_management_safe_and_healthy_for_workers_Addressing_psychosocial_risks_in_new_legal_provisions
- Algorithmic Management: Legal Risks for Ontario Employers, accessed February 17, 2026, https://www.thehayneslawfirm.com/employment-info/employment-law-blog/the-growing-use-of-algorithmic-management-is-your-boss-a-bot-and-is-that-legal/
- Smart Safety: How Wearable Tech Cuts Work Comp Claims by 90%, accessed February 17, 2026, https://www.mem-ins.com/smart-safety-how-wearable-tech-cuts-work-comp-claims/
- Wearables Surveillance Workers Comp Return: What to Know About, accessed February 17, 2026, https://visionarylawgroup.com/wearables-surveillance-workers-comp-return/
- 0910 Platform Workers Bill Round Up Speech by SMS Koh – MOM, accessed February 17, 2026, https://www.mom.gov.sg/newsroom/speeches/2024/0910-platform-workers-bill-round-up-speech-by-sms-koh
- Parliament passes Platform Workers Bill to strengthen protections for, accessed February 17, 2026, https://www.ntuc.org.sg/womenandfamily/news/Parliament-passes-Platform-Workers-Bill-to-strengthen-protections-for-platform-workers/
- Work Injury Compensation Act may be applied to ’employee-like’ gig, accessed February 17, 2026, https://www.straitstimes.com/singapore/politics/work-injury-compensation-act-may-be-applied-to-employee-like-gig-workers-koh-poh-koon
- LONGEVER v. REVERE COPPER BRASS INC | 381 Mass. 221 | Law, accessed February 17, 2026, https://www.casemine.com/judgement/us/5914927aadd7b049345990b9
- SCHUMP v. FIRESTONE TIRE RUBBER CO | No. 88-348 – CaseMine, accessed February 17, 2026, https://www.casemine.com/judgement/us/59148a79add7b04934512c2a
- Kaess v. Armstrong Cork Co. – Justia Law, accessed February 17, 2026, https://law.justia.com/cases/minnesota/supreme-court/1987/cx-86-410-2.html
- KNOX v. OKLAHOMA GAS AND ELECTRIC CO. – Justia Law, accessed February 17, 2026, https://law.justia.com/cases/oklahoma/supreme-court/2024/121047.html
- Forthcoming, Boston College Law Review. Reach out to … – SSRN, accessed February 17, 2026, https://papers.ssrn.com/sol3/Delivery.cfm/5393348.pdf?abstractid=5393348&mirid=1
- Master Subscription and Professional Services Agreement – Cognite, accessed February 17, 2026, https://www.cognite.com/en/company/legal/master-subscription-services-agreement-2025
- Digitalai-Master-Subscription-Agreement … – Digital.ai Resources, accessed February 17, 2026, https://info.digital.ai/rs/981-LQX-968/images/Digitalai-Master-Subscription-Agreement-January-2023.docx
- DEREK MOBLEY v. WORKDAY INC (2024) – FindLaw Caselaw, accessed February 17, 2026, https://caselaw.findlaw.com/court/us-dis-crt-n-d-cal/116378658.html
- Liability Considerations for Developers and Users of Agentic AI, accessed February 17, 2026, https://www.jdsupra.com/legalnews/liability-considerations-for-developers-5820092/
- Washington Supreme Court issues decision on workers … – AWC, accessed February 17, 2026, https://wacities.org/advocacy/news/advocacy-news/2025/06/11/washington-supreme-court-issues-decision-on-workers–compensation-exceptions
- Artificial Intelligence (AI) In Medicine and Law – NAWCJ, accessed February 17, 2026, https://nawcj.org/artificial-intelligence-in-medicine-and-law/
- REGULATORY FRAMEWORK APPLICABLE TO AI SYSTEMS IN, accessed February 17, 2026, https://www.eiopa.europa.eu/document/download/b53a3b92-08cc-4079-a4f7-606cf309a34a_en?filename=Factsheet-on-the-regulatory-framework-applicable-to-AI-systems-in-the-insurance-sector-july-2024.pdf
- Article 26: Obligations of Deployers of High-Risk AI Systems, accessed February 17, 2026, https://artificialintelligenceact.eu/article/26/
- HSE’s regulatory approach to Artificial Intelligence (AI) – News, accessed February 17, 2026, https://www.hse.gov.uk/news/hse-ai.htm
- Health and Safety Executive’s approach to AI in workplace safety, accessed February 17, 2026, https://cedrec.com/r/news/0924-health-and-safety-executives-approach-to-ai-in-workplace-safety
- Ethical use of artificial intelligence in the workplace final report, accessed February 17, 2026, https://www.safework.nsw.gov.au/resource-library/whs-research/Ethical-use-of-artificial-intelligence-in-the-workplace-report.pdf
- Current Legal Landscape for AI in Australia – SafeAI-Aus, accessed February 17, 2026, https://safeaiaus.org/safety-standards/ai-australian-legislation/
- Applying WHS principles to the regulation of AI in the workplace, accessed February 17, 2026, https://www.allens.com.au/insights-news/insights/2025/05/applying-whs-principles-to-the-regulation-of-ai-in-the-workplace/
- Portfolio Committee on Employment and Labour, accessed February 17, 2026, https://pmg.org.za/files/210422cOMPSOL.pdf
- Speech by Minister Meth at the Departmental Strategic Planning, accessed February 17, 2026, https://www.labour.gov.za/Media-Desk/speeches/Pages/Speech-by-Minister-Meth-at-the-Departmental-Strategic-Planning-Session-.aspx
