
  • Ireland Publishes General Scheme of the Regulation of Artificial Intelligence Bill 2026, February 4, 2026

    On 4 February 2026, the Irish Department of Enterprise, Tourism and Employment published the General Scheme of the Regulation of Artificial Intelligence Bill 2026 (the Scheme), following Government Decisions of 4 March 2025 and 22 July 2025 approving Ireland's distributed regulatory model for the EU AI Act. The Scheme is at the pre-legislative stage: it sets out the Heads of Bill — the legislative blueprint — but has not yet been introduced to the Oireachtas as a formal Bill, and it will undergo pre-legislative scrutiny before formal drafting. The statutory establishment day for the AI Office of Ireland (Oifig Intleachta Shaorga na hÉireann) must occur on or before 1 August 2026, driven by the EU AI Act's implementation timeline under Regulation (EU) 2024/1689. The Scheme implements Articles 70–74 of the EU AI Act, which require Member States to designate competent authorities for market surveillance and a national supervisory authority, and Article 70(2), which requires designation of a single point of contact; the Scheme proposes to make the AI Office of Ireland that single point of contact.

    Ireland adopts a distributed enforcement model: thirteen existing sectoral regulators will serve as competent authorities (market surveillance authorities) within their respective domains — including the Central Bank of Ireland for regulated financial services, Coimisiún na Meán for audiovisual media, the Data Protection Commission for personal data rights, and the Health Service Executive for certain high-risk health applications — while the AI Office coordinates nationally. Administrative sanctions proposed in Part 5 of the Scheme mirror EU AI Act Article 99: prohibited practices face fines of up to 35 million euros or 7% of total worldwide annual turnover, whichever is higher; non-compliance with high-risk obligations faces fines of up to 15 million euros or 3% of worldwide turnover.

    For AI system providers, deployers, importers, and distributors operating in Ireland or placing AI systems on the Irish market, the Scheme signals the specific sectoral authority that will regulate their activities. Companies operating across multiple regulated sectors — such as a fintech firm deploying AI in both financial services and employment contexts — may face simultaneous supervision by the Central Bank of Ireland and the Workplace Relations Commission. Sectoral authorities will hold enforcement powers including announced and unannounced on-site inspections, documentary review, product sampling, AI system testing, content removal orders, and — as a last resort for high-risk systems — access to source code. The Scheme also provides that authorities may challenge a provider's self-assessment that a system falls outside the high-risk category under Annex III of the EU AI Act, with potential reclassification triggering full high-risk compliance obligations for both providers and deployers. The 1 August 2026 statutory establishment deadline for the AI Office is subject to the EU's proposed Digital Omnibus Package, which could alter the EU AI Act's implementation timeline. The Scheme itself remains subject to amendment during pre-legislative scrutiny and subsequent formal drafting; its provisions are therefore not yet law.
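    For readers who want the penalty tiers described above in concrete terms, the following minimal Python sketch computes the fine ceilings the Scheme would mirror from Article 99. The function name and structure are illustrative assumptions, not anything prescribed by the Scheme; only the tier figures come from the entry above.

```python
# Illustrative only: fine ceilings mirroring EU AI Act Art. 99 as described
# in Part 5 of the Scheme. The ceiling is the HIGHER of a fixed amount and
# a percentage of total worldwide annual turnover.
def max_fine_eur(tier: str, worldwide_annual_turnover_eur: float) -> float:
    tiers = {
        "prohibited_practices": (35_000_000, 0.07),   # up to EUR 35M or 7%
        "high_risk_obligations": (15_000_000, 0.03),  # up to EUR 15M or 3%
    }
    fixed_cap, pct = tiers[tier]
    return max(fixed_cap, pct * worldwide_annual_turnover_eur)

# A provider with EUR 600M worldwide turnover committing a prohibited
# practice faces a ceiling of max(35M, 0.07 * 600M) = EUR 42M.
print(max_fine_eur("prohibited_practices", 600_000_000))  # 42000000.0
```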
Administrative sanctions under the Scheme do not take effect until confirmed by the High Court, providing a layer of judicial oversight before penalties become operative. Businesses should treat the run-up to the August 2026 deadline as a practical window for building compliance capability, particularly in documenting AI system risk classifications and preparing post-market monitoring and incident-reporting procedures. Source: Department of Enterprise, Tourism and Employment, Ireland, "General Scheme of the Regulation of Artificial Intelligence Bill 2026," published 4 February 2026, https://www.gov.ie/en/department-of-enterprise-tourism-and-employment/publications/general-scheme-of-the-regulation-of-artificial-intelligence-bill-2026/ (confirmed 4 March 2026). See also: Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 (EU AI Act), OJ L, 2024/1689, 12.7.2024, Arts. 70-74. The information provided is not legal, tax, investment, or accounting advice and should not be used as such. It is for discussion purposes only. Seek guidance from your own legal counsel and advisors on any matters. The views presented are those of the author and not any other individual or organization. Some parts of the text may be automatically generated. The author of this material makes no guarantees or warranties about the accuracy or completeness of the information.

  • Italy Enacts Framework Law on Artificial Intelligence, Law No. 132/2025, October 10, 2025

    Italy's Framework Law on Artificial Intelligence, Law No. 132 of 23 September 2025 (Legge 23 settembre 2025, n. 132), entered into force on 10 October 2025. The law was published in the Gazzetta Ufficiale n. 223 of 25 September 2025. It constitutes Italy's primary domestic AI legislation and applies in conjunction with, and subject to, Regulation (EU) 2024/1689 of the European Parliament and of the Council (the EU AI Act), which it expressly cross-references in Article 1(2). The law is in final, effective form — no transitional phase remains before its core operative provisions took effect, though implementing acts may still follow.

    Law No. 132/2025 consists of 28 articles in six chapters: Principles and aims; Sector-specific provisions; National strategy, national authorities, and promotional measures; Provisions protecting users and on copyright; Criminal provisions; and Financial and final provisions. Article 2 provides operative definitions of "artificial intelligence system," "data," and "artificial intelligence model," tracking the glossary of the EU AI Act and drawing a functional distinction between a trained model (a mathematical function) and a system (the complete application deploying one or more models). Article 3 establishes general principles applicable across the entire life cycle of an AI system or model: transparency, proportionality, security, personal data protection, confidentiality, accuracy, non-discrimination, gender equality, sustainability, and respect for human autonomy and decision-making power. Chapter IV provides that authorship of copyright-protected works belongs exclusively to human beings even where AI tools contribute to creation, provided the result reflects the human author's intellectual effort. Chapter V criminalises unlawful dissemination of AI-generated or manipulated content and extends criminal liability to text and data mining activities involving online materials.

    For AI system providers, deployers, and developers operating in Italy or directing AI products at Italian users, Law No. 132/2025 imposes the Article 3 principles as binding obligations across the entire system or model life cycle. Healthcare providers using AI in triage or clinical decision-support, employers using AI in hiring or workplace monitoring, and judicial bodies using AI in procedural or analytical functions all face sector-specific rules under Chapter II. Technology providers supplying AI tools to persons with disabilities must guarantee full accessibility. Courts applying the law will assess copyright authorship by weighing the creative contribution of human prompts against the autonomous generative role of the underlying model — a question with broad implications for AI-generated creative content commercialised in Italy. For minors, Article 2 specifies that a person under 14 years of age requires parental or guardian consent for personal data processing connected to AI system use; a minor who has reached 14 may consent autonomously.

    The law operates in parallel with the EU AI Act, and Italian courts and regulators must interpret Law No. 132/2025 consistently with Regulation (EU) 2024/1689 pursuant to Article 1(2). Because the EU AI Act's prohibition and high-risk-system compliance deadlines are phased, the interaction between those deadlines and Law No. 132/2025's immediate entry into force creates an area of open interpretive uncertainty that practitioners advising on Italian AI deployments must monitor.
Source: Legge 23 settembre 2025, n. 132, "Disposizioni e deleghe al Governo in materia di intelligenza artificiale," published in Gazzetta Ufficiale n. 223 del 25-09-2025, entered into force 10 October 2025, https://www.normattiva.it/uri-res/N2Ls?urn:nir:stato:legge:2025-07-31;132 (confirmed 4 March 2026). See also: Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 (EU AI Act), OJ L, 2024/1689, 12.7.2024. The information provided is not legal, tax, investment, or accounting advice and should not be used as such. It is for discussion purposes only. Seek guidance from your own legal counsel and advisors on any matters. The views presented are those of the author and not any other individual or organization. Some parts of the text may be automatically generated. The author of this material makes no guarantees or warranties about the accuracy or completeness of the information.

  • SEC and CFTC Launch Joint Project Crypto for Unified Digital Asset Regulation, January 29, 2026

    On 29 January 2026, the U.S. Securities and Exchange Commission (SEC) and the Commodity Futures Trading Commission (CFTC) jointly announced Project Crypto at the CFTC's Washington D.C. headquarters. The initiative transforms what had been an internal SEC initiative into a bilateral inter-agency program. Project Crypto is in the policy and pre-rulemaking stage: no final rules have been promulgated, but agency heads have directed staff to begin drafting specific rulemakings and guidance. SEC Chairman Paul S. Atkins and CFTC Chairman Michael S. Selig made the announcement jointly, marking the first significant inter-agency coordination on digital assets under the current administration. The authority for each agency's participation rests on existing statutory grants: the SEC operates under the Securities Exchange Act of 1934 and the Securities Act of 1933, while the CFTC derives its jurisdiction from the Commodity Exchange Act (CEA), 7 U.S.C. § 1 et seq.

    Chairman Selig, in his remarks at the joint event, stated that "most crypto assets trading today are not securities" — aligning with Chairman Atkins' previously stated taxonomy, which would treat digital commodities, digital collectibles, and digital tools as outside the definition of "security" even when sold as part of an investment contract. Staff of both agencies have been directed to work jointly toward codifying a shared crypto-asset taxonomy as an interim measure pending congressional market structure legislation.

    For crypto exchanges, trading platforms, DeFi protocol operators, custodians, and software developers, Project Crypto's immediate directed priorities carry near-term compliance significance. CFTC staff has been directed to draft rules on: (1) enabling additional forms of eligible tokenized collateral; (2) onshoring perpetual derivative products; (3) establishing safe-harbor protections for software developers and non-custodial wallet providers; (4) facilitating multi-product "Super-Apps"; (5) clarifying rules for leveraged, margined, or financed retail crypto trading; and (6) developing a new designated contract market (DCM) registration category for retail leveraged crypto trading. The agencies will also formalize coordination through a memorandum of understanding establishing joint data-sharing, coordinated surveillance, weekly leadership calls, and harmonized rulemaking processes.

    Project Crypto does not resolve the underlying statutory ambiguity between SEC and CFTC jurisdiction, which can only be definitively addressed by Congress. Comprehensive market structure legislation — often referred to as the "digital asset market structure bill" — remains pending. The GENIUS Act, mentioned in Chairman Selig's remarks as already having become law, addresses stablecoins separately. Project Crypto's taxonomy and guidance will operate as interim administrative positions only, subject to modification by future rulemaking, judicial challenge, or legislative action. Market participants should treat current agency statements as directional signals rather than binding rules and continue to monitor formal rulemaking dockets at both the SEC and CFTC. Source: (1) Paul S. Atkins, SEC Chairman, "Opening Remarks at Joint SEC-CFTC Harmonization Event – Project Crypto," Jan. 29, 2026, https://www.sec.gov/newsroom/speeches-statements/atkins-remarks-joint-sec-cftc-harmonization-event-project-crypto-012926 (confirmed 4 March 2026). (2) Michael S.
Selig, CFTC Chairman, "The Next Phase of Project Crypto: Unleashing Innovation for the New Frontier of Finance," Jan. 29, 2026, https://www.cftc.gov/PressRoom/SpeechesTestimony/opaselig1 (confirmed 4 March 2026). The information provided is not legal, tax, investment, or accounting advice and should not be used as such. It is for discussion purposes only. Seek guidance from your own legal counsel and advisors on any matters. The views presented are those of the author and not any other individual or organization. Some parts of the text may be automatically generated. The author of this material makes no guarantees or warranties about the accuracy or completeness of the information.

  • FCA Commences High Court Proceedings Against Crypto Exchange HTX, February 2026

    The Financial Conduct Authority (FCA) filed legal proceedings against global crypto exchange HTX (formerly Huobi) on 21 October 2025 in the Chancery Division of the High Court of England and Wales. On 4 February 2026, the High Court granted the FCA permission to serve those proceedings out of the jurisdiction and by alternative means. The action is at the litigation stage, with no final judgment issued. The FCA publicised the matter to UK consumers on 10 February 2026. The defendants named in Claim No. FS-2025-000015 include Huobi Global S.A. (incorporated in Panama) and multiple categories of Persons Unknown: those who own or control htx.com, those who are the HTX Operators as defined in the HTX Platform User Agreement dated 13 July 2023, and those who control HTX's social media accounts on platforms including X, Facebook, Instagram, Telegram, TikTok, YouTube, Discord, Medium, and LinkedIn.

    The controlling legal provision is the financial promotions regime for cryptoassets established under section 21 of the Financial Services and Markets Act 2000 (FSMA 2000), as applied to cryptoassets by the Financial Services and Markets Act 2000 (Financial Promotion) Order 2005 (SI 2005/1529), as amended by the Financial Services and Markets Act 2000 (Financial Promotion) (Amendment) Order 2023 (SI 2023/792). That amending Order, which came into force on 8 October 2023, requires all firms communicating financial promotions relating to cryptoassets to UK consumers — including overseas firms — either to be FCA-authorised, to have their promotions approved by an authorised person, or to qualify for an exemption. Advertising cryptoassets on social media or websites without complying with those conditions constitutes a criminal offence.

    HTX operates without FCA authorisation or registration. The FCA had previously placed HTX on its warning list and made repeated, unanswered engagement attempts. Following the commencement of proceedings, HTX restricted new UK customer registrations, but existing UK users retained access to the exchange and were still exposed to allegedly unlawful financial promotions. The FCA requested that social media platforms block HTX's accounts for UK-based users and asked Google and Apple to remove the HTX application from their UK app stores.

    This is the first enforcement action the FCA has brought against a crypto firm under the October 2023 financial promotions regime. The HTX matter confirms that the October 2023 regime applies to exchanges based outside the United Kingdom that direct marketing at UK consumers. Crypto exchanges, wallet providers, and any entity publishing cryptoasset promotions accessible by UK users must ensure those communications are either made by or approved by an FCA-authorised person, or fall within a specified exemption. The opaque ownership structure of HTX — named defendants are largely Persons Unknown — demonstrates the FCA's willingness to proceed against unidentified controllers via alternative service, raising the practical risk exposure for any offshore operator with UK-facing marketing activity. Source: FCA, "HTX (formerly Huobi): legal proceedings information," Claim No. FS-2025-000015, published 10 February 2026, https://www.fca.org.uk/news/statements/htx-huobi-legal-proceedings (confirmed 4 March 2026). See also: Financial Services and Markets Act 2000 (Financial Promotion) (Amendment) Order 2023, SI 2023/792, in force 8 October 2023.
The information provided is not legal, tax, investment, or accounting advice and should not be used as such. It is for discussion purposes only. Seek guidance from your own legal counsel and advisors on any matters. The views presented are those of the author and not any other individual or organization. Some parts of the text may be automatically generated. The author of this material makes no guarantees or warranties about the accuracy or completeness of the information.

  • UK Supreme Court Replaces Aerotel AI Patent Test, Remits Emotional Perception Case, February 11, 2026

    On February 11, 2026, the UK Supreme Court (UKSC) handed down judgment in Emotional Perception AI Limited v Comptroller General of Patents, Designs and Trade Marks [2026] UKSC 3, allowing the appeal of Emotional Perception AI Ltd. The Court, constituted by Lords Briggs, Hamblen, Leggatt, Stephens, and Kitchin, set aside the Hearing Officer's refusal of Emotional Perception's patent application and formally abandoned the four-step Aerotel test used by UK courts since 2006 to assess whether computer-implemented inventions fall within the subject-matter exclusions in Article 52 of the European Patent Convention (EPC). The case was on appeal from the Court of Appeal's decision at [2024] EWCA Civ 825. The controlling provision is Article 52(2)-(3) EPC, as implemented by section 1(2) of the Patents Act 1977.

    The Court held that the Aerotel approach — which required courts to identify the "technical contribution" of the claimed invention compared to the prior art — had diverged from the EPO's approach in Enlarged Board of Appeal decision G1/19 (the "any hardware" approach). Under G1/19's framework, a claimed invention physically implemented on hardware — including a software-implemented artificial neural network (ANN) — is not excluded as a "program for a computer" solely because it runs on conventional digital hardware. Eligibility is assessed at a threshold stage by asking whether the claim, as a whole, has a technical character; the analysis of whether that character is novel and inventive is deferred to the inventive-step stage using the EPO's Comvik methodology.

    For AI-related patent applicants before the UK Intellectual Property Office (UKIPO), the Aerotel four-step screening test no longer applies. AI system claims implemented on any hardware — including ANNs, machine-learning models embedded in conventional computers, and dedicated inference chips — clear the Article 52 eligibility hurdle more readily under the G1/19 approach. The assessment of whether the AI-specific contribution is inventive moves primarily to the novelty and inventive-step stage. Applicants who received adverse decisions under the Aerotel test may have grounds to seek reconsideration or appeal.

    The Court declined to apply the new test to the specific claims before it, because neither party had made submissions on how G1/19's intermediate step should be conducted on these facts. The Hearing Officer must now re-examine Emotional Perception's claims under the G1/19 approach. The Court also explicitly reserved for future decision the question whether replacing Aerotel requires any departure from the UK's established Pozzoli/Actavis v ICOS inventive-step methodology. Source: Emotional Perception AI Limited v Comptroller General of Patents, Designs and Trade Marks [2026] UKSC 3, [2026] WLR(D) 95 (11 Feb. 2026), https://www.bailii.org/uk/cases/UKSC/2026/3.html. Currency confirmed via BAILII UKSC 2026 index, https://www.bailii.org/uk/cases/UKSC/2026/ (listing current as of March 3, 2026). Confirmed: March 3, 2026. The information provided is not legal, tax, investment, or accounting advice and should not be used as such. It is for discussion purposes only. Seek guidance from your own legal counsel and advisors on any matters. The views presented are those of the author and not any other individual or organization. Some parts of the text may be automatically generated. The author of this material makes no guarantees or warranties about the accuracy or completeness of the information.

  • Ontario IPC and OHRC Issue Joint AI Principles for Public Sector Use, January 21, 2026

    On January 21, 2026, the Information and Privacy Commissioner of Ontario (IPC) and the Ontario Human Rights Commission (OHRC) jointly published a set of principles titled "Principles for the Responsible Use of Artificial Intelligence." The document is directed at Ontario public sector bodies — including provincial ministries, agencies, municipalities, school boards, and health-care institutions — that deploy or procure AI systems affecting individuals. The publication is a joint guidance document, not a statute or regulation, and carries no independent legislative force. The principles draw their authority from the IPC's mandate under the Freedom of Information and Protection of Privacy Act (FIPPA), R.S.O. 1990, c. F.31, and the Municipal Freedom of Information and Protection of Privacy Act (MFIPPA), R.S.O. 1990, c. M.56, as well as the OHRC's mandate under the Ontario Human Rights Code, R.S.O. 1990, c. H.19.

    The document identifies nine principles: lawfulness and authority; fairness and non-discrimination; privacy by design; transparency and explainability; accountability; accuracy and reliability; security; limited use and data minimization; and human oversight. The IPC and OHRC state that AI use must comply with existing obligations under FIPPA, MFIPPA, and the Human Rights Code; the principles do not create new legal obligations beyond those statutes.

    Ontario public sector bodies procuring or deploying AI tools — including automated decision-support systems, predictive analytics, facial recognition, and generative AI platforms — should now weigh these principles when designing procurement terms, data-sharing agreements, and algorithmic impact assessments. The transparency and explainability principle requires that affected individuals be informed when AI is used to make or materially inform decisions about them. The human oversight principle requires that AI-generated outputs be subject to meaningful review by an accountable human decision-maker before they produce legal or significant practical effects on individuals.

    The document is guidance only; the IPC and OHRC have not announced any new enforcement mechanism tied specifically to the principles. Enforcement continues under existing FIPPA/MFIPPA complaint and order powers held by the IPC, and under the Ontario Human Rights Tribunal's jurisdiction over Code complaints. The principles do not address private-sector AI use, federal government AI systems, or AI systems regulated under the proposed federal Artificial Intelligence and Data Act (AIDA), which remains before Parliament as of March 3, 2026. Source: Information and Privacy Commissioner of Ontario and Ontario Human Rights Commission, "Principles for the Responsible Use of Artificial Intelligence" (January 21, 2026), https://www.ipc.on.ca/en/resources/principles-responsible-use-artificial-intelligence. Confirmed: March 3, 2026. The information provided is not legal, tax, investment, or accounting advice and should not be used as such. It is for discussion purposes only. Seek guidance from your own legal counsel and advisors on any matters. The views presented are those of the author and not any other individual or organization. Some parts of the text may be automatically generated. The author of this material makes no guarantees or warranties about the accuracy or completeness of the information.

  • California DFAL Licensing Deadline Set for July 1, 2026

    Governor Newsom signed Assembly Bill 39 (AB 39) and Senate Bill 401 (SB 401) on October 13, 2023, together constituting the Digital Financial Assets Law (DFAL) (California Financial Code section 3100 et seq.). AB 1934, signed September 29, 2024, extended the license effective date from July 1, 2025 to July 1, 2026. Under Financial Code section 3103, persons who exchange, transfer, or store digital financial assets with or on behalf of California residents, or who issue digital financial assets redeemable for money, must hold a DFPI license. Exemptions include certain banks and persons with less than $50,000 in projected annual gross revenue from subject activity.

    Exchanges, OTC desks, stablecoin issuers, custodians, and crypto-kiosk operators serving California residents must submit completed DFAL applications to the DFPI by July 1, 2026, to continue operations. Kiosk operators face additional obligations: on January 1, 2024, a $1,000-per-customer daily dispensing cap and receipt-provision requirements took effect; on January 1, 2025, transaction fee caps of $5 or 15% of the USD equivalent of digital assets transacted — whichever is greater — came into force under Financial Code section 3904. A DFAL license does not replace other California licenses (e.g., money transmission) that may be required. Source: California DFPI, "Digital Financial Assets" page, https://dfpi.ca.gov/regulated-industries/digital-financial-assets/ (confirmed July 1, 2026 license date and exemptions); California DFPI, "Digital Financial Assets Law Frequently Asked Questions," https://dfpi.ca.gov/regulated-industries/digital-financial-assets/digital-financial-assets-law-frequently-asked-questions/ (confirmed application via NMLS, "early 2026" application release). The information provided is not legal, tax, investment, or accounting advice and should not be used as such. It is for discussion purposes only. Seek guidance from your own legal counsel and advisors on any matters. The views presented are those of the author and not any other individual or organization. Some parts of the text may be automatically generated. The author of this material makes no guarantees or warranties about the accuracy or completeness of the information.
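    As a concrete illustration of the kiosk limits described in the entry above, here is a minimal Python sketch. The function and constant names are hypothetical; the $1,000 daily cap and the greater-of-$5-or-15% fee ceiling are the figures stated in the entry.

```python
# Illustrative only: DFAL crypto-kiosk limits as described above.
DAILY_DISPENSING_CAP_USD = 1_000  # per-customer daily cap (from Jan 1, 2024)

def max_kiosk_fee_usd(transaction_usd: float) -> float:
    """Fee ceiling under Fin. Code s. 3904: the greater of $5 or 15% of the
    USD equivalent of the digital assets transacted (from Jan 1, 2025)."""
    return max(5.0, 0.15 * transaction_usd)

# On a $200 kiosk transaction the fee ceiling is max($5, $30) = $30;
# on a $20 transaction it is max($5, $3) = $5.
print(max_kiosk_fee_usd(200.0), max_kiosk_fee_usd(20.0))  # 30.0 5.0
```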

  • DOJ Sentences Paxful to $4M Criminal Penalty for AML and Travel Act Violations, February 11, 2026

    On February 11, 2026, Paxful Holdings Inc., a now-defunct peer-to-peer virtual currency trading platform, was sentenced in the U.S. District Court for the Eastern District of California to a criminal monetary penalty of $4 million — the maximum the company could pay, per DOJ's independent financial analysis — following its December 10, 2025 guilty plea to a three-count criminal information. Prosecution was led by the DOJ Criminal Division's Money Laundering, Narcotics and Forfeiture Section (MNF) Bank Integrity Unit and the U.S. Attorney for the Eastern District of California, in coordination with FinCEN. Paxful pleaded guilty to: (1) conspiracy to violate the Travel Act (18 U.S.C. 1952) by facilitating illegal prostitution through interstate commerce; (2) conspiracy to operate an unlicensed money transmitting business (MTB) in violation of 18 U.S.C. 1960 by knowingly transmitting funds derived from criminal offenses including fraud and illegal prostitution; and (3) conspiracy to violate the Bank Secrecy Act's (BSA) anti-money laundering (AML) program requirement under 31 U.S.C. 5318(h).

    The applicable guideline fine range was $112.5 million, reduced by 25% to reflect cooperation credit, but was capped at $4 million based on Paxful's demonstrated inability to pay. Court documents establish that between July 2015 and June 2019, Paxful marketed its platform as not requiring know-your-customer (KYC) information, presented fake AML policies to third parties, and knowingly processed approximately $3 billion in trades — including at least $17 million worth of bitcoin transferred to Backpage and a copycat site involved in illegal prostitution — while collecting over $29.7 million in total revenue.

    Virtual asset exchanges, custody providers, and peer-to-peer platforms that fail to implement effective AML programs and file suspicious activity reports face equivalent criminal exposure under the same BSA and 18 U.S.C. 1960 theories. This resolution was coordinated with FinCEN and investigated jointly by ICE-HSI and IRS-CI. Paxful's co-founder and former CTO, Artur Schaback, separately pleaded guilty on July 8, 2024 to conspiring to fail to maintain an effective AML program. The sentencing of the entity completes the coordinated resolution. Press Release No. 25-1158 identifies Bank Integrity Unit Deputy Chief Kevin Mosley as the lead DOJ attorney. Source: "Virtual Asset Trading Platform Pleads Guilty to Violating the Travel Act and Other Federal Criminal Charges," DOJ Office of Public Affairs Press Release 25-1158 (Dec. 10, 2025), https://www.justice.gov/opa/pr/virtual-asset-trading-platform-pleads-guilty-violating-travel-act-and-other-federal-criminal; sentence confirmed via DOJ Criminal Division, Money Laundering, Narcotics and Forfeiture Section, https://www.justice.gov/criminal/criminal-mnf (listing Feb. 11, 2026 sentencing entry). Confirmed: March 3, 2026. The information provided is not legal, tax, investment, or accounting advice and should not be used as such. It is for discussion purposes only. Seek guidance from your own legal counsel and advisors on any matters. The views presented are those of the author and not any other individual or organization. Some parts of the text may be automatically generated. The author of this material makes no guarantees or warranties about the accuracy or completeness of the information.
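    The penalty arithmetic reported above can be traced in a few lines. This sketch uses only the figures stated in the entry; the ordering of the cooperation credit before the ability-to-pay cap is an assumption made for illustration, not a statement of the court's sequencing.

```python
# Illustrative reconstruction of the Paxful penalty figures cited above.
guideline_fine = 112_500_000                     # applicable guideline fine
after_cooperation = guideline_fine * (1 - 0.25)  # 25% cooperation credit
ability_to_pay_cap = 4_000_000                   # DOJ-assessed maximum payable

final_penalty = min(after_cooperation, ability_to_pay_cap)
print(f"{after_cooperation:,.0f} -> {final_penalty:,.0f}")
# 84,375,000 -> 4,000,000
```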

  • OCC Proposes GENIUS Act Stablecoin Regulations Across Five CFR Parts, March 2, 2026

    The Office of the Comptroller of the Currency (OCC) published a notice of proposed rulemaking (NPRM) in the Federal Register on March 2, 2026, to implement the Guiding and Establishing National Innovation for U.S. Stablecoins Act (GENIUS Act). The proposal is at the pre-final stage: the comment period closes May 1, 2026, and no final rule has been issued. The NPRM covers the issuance of payment stablecoins and related activities by entities subject to OCC jurisdiction, including national banks, federal savings associations, and OCC-licensed entities. The proposed regulations would amend 12 CFR Parts 3, 6, 8, 15, and 19 under Docket ID OCC-2025-0372, RIN 1557-AF41.

    The GENIUS Act (codified at 12 U.S.C. 5901 et seq.) defines a "payment stablecoin" under section 2(22) as a digital asset designed for use as a means of payment or settlement whose issuer must maintain a stable value relative to a fixed monetary amount and is obligated to convert or redeem it at that fixed amount. The proposal introduces a two-category licensing pathway: "Federal qualified payment stablecoin issuers," regulated exclusively by the OCC, and "permitted payment stablecoin issuers," i.e., subsidiaries of OCC-regulated insured depository institutions that obtain OCC approval.

    Exchanges, custodians, and any DeFi protocol that seeks to partner with or integrate OCC-licensed stablecoin issuers must assess whether their arrangements trigger OCC-supervised activity thresholds. The GENIUS Act's self-executing provisions on state-law preemption apply immediately: section 4(b)(1) (12 U.S.C. 5903(b)(1)) provides that a Federal qualified payment stablecoin issuer is subject to a single licensing requirement — the OCC's — notwithstanding conflicting state licensing laws. This effectively displaces existing state money-transmission licenses for OCC-licensed issuers, a significant shift for multi-state stablecoin operators.

    The NPRM does not address GENIUS Act provisions applicable to non-OCC-regulated entities (such as state-chartered stablecoin issuers operating under a parallel state licensing regime established by the Act). Reserve requirements, redemption mechanics, and consumer protection obligations will be addressed in supplemental rulemaking. Market participants should submit comments by May 1, 2026, via Regulations.gov (Docket ID OCC-2025-0372). Source: "Implementing the Guiding and Establishing National Innovation for U.S. Stablecoins Act for the Issuance of Stablecoins by Entities Subject to the Jurisdiction of the Office of the Comptroller of the Currency," 91 FR 10202, Docket ID OCC-2025-0372, RIN 1557-AF41 (OCC, Mar. 2, 2026), https://www.federalregister.gov/documents/2026/03/02/2026-04089/implementing-the-guiding-and-establishing-national-innovation-for-us-stablecoins-act-for-the. Confirmed: March 3, 2026. The information provided is not legal, tax, investment, or accounting advice and should not be used as such. It is for discussion purposes only. Seek guidance from your own legal counsel and advisors on any matters. The views presented are those of the author and not any other individual or organization. Some parts of the text may be automatically generated. The author of this material makes no guarantees or warranties about the accuracy or completeness of the information.

  • EU Retail Investment Strategy: Provisional Deal on Value-for-Money, Inducements, and Retail Disclosures

    What operational and compliance changes should EU investment firms, insurers, and intermediaries plan for after the 18 December 2025 provisional political agreement between the European Parliament and the Council on the Retail Investment Strategy (RIS) package, covering retail advice and suitability, “value for money” checks, inducement controls, third‑party marketing (including financial influencers), and PRIIPs key information document disclosures? The 18 December 2025 deal is a provisional trilogue agreement. It requires formal approval by the European Parliament and the Council before the amending directive and the amending PRIIPs regulation are adopted and published. The Council has indicated that technical work will continue to finalise the legal texts in early 2026 and that key compliance dates will run from Official Journal publication, with staged application periods. In this context, “retail clients” refers to non-professional customers under MiFID II and comparable customer categories under insurance distribution rules. “Inducements” refers to fees, commissions, or monetary or non‑monetary benefits received in connection with an investment service. “Finfluencers” refers to third parties paid or incentivised to promote investment products through social media. “KID” refers to the PRIIPs key information document.

    Status and timing

    The package takes the form of targeted amendments to MiFID II, the Insurance Distribution Directive, UCITS, AIFMD, and Solvency II, plus an amendment to the PRIIPs Regulation. The Council has described a phased timeline: member states transpose the directive amendments 24 months after publication; the general application date is 30 months after publication; PRIIPs changes apply 18 months after publication. Until the package is adopted and applicable, existing EU and national rules continue to apply. Press release summaries are not the final legal text; firms will need to review the adopted acts and the related supervisory templates once available.

    Advice and suitability

    Negotiators agreed that financial and insurance advisers must recommend products and services that are suitable for the client’s needs. Suitability is assessed using proportionate, necessary information, including the client’s knowledge and experience, financial situation, ability to bear full or partial losses, investment needs and objectives, and risk tolerance. The Council text also signals a simplified suitability route for recommendations limited to diversified, non‑complex, cost‑efficient instruments. In that case, advisers would no longer need to assess the client’s investment knowledge and experience as part of the suitability assessment.

    Value for money and product comparison

    A central change is a “value for money” gate at product design and distribution. The Council text states that retail investment firms will be required to identify and quantify all costs and charges borne by investors and assess whether total costs and charges are justified and proportionate. Where costs and charges are not justified and proportionate, products should not be approved for sale to retail investors. To support supervisory scrutiny and product comparison, the Parliament press release states that ESMA and EIOPA are expected to develop supervisory benchmarks, and that investment firms would assess their products against peer groups with a representative number of similar instruments.
    The Council text also refers to agreed standards for peer groupings and supervisory benchmarks, including a period where national supervisory benchmarks may be used for insurance-based investment products. Retail clients should also be able to compare costs, charges, performance, and non‑financial benefits across products. The Commission press release also refers to standardised cost terminology, a yearly summary of portfolio performance for retail clients, and a feasibility study for a pan‑European tool that would support product comparison.

    Inducements and conflicts of interest

    The deal strengthens rules on inducements to reduce conflicts of interest in advice and distribution. The Parliament press release describes a new inducement test aimed at client best interests and clearer separation of inducements from other fees. The Council press release states that, where inducements are permitted, firms and advisers must act honestly, fairly, and professionally in the client’s best interests, the inducement must deliver a tangible client benefit, and the inducement cost must be disclosed clearly and separately from other charges. Member states remain free to introduce national inducement bans.

    Marketing communications and finfluencers

    The package places increased weight on retail financial education measures at member state level and on controls for online marketing and promotion. The Commission press release states that financial intermediaries are responsible for marketing communications, including communications disseminated via social media, celebrities, or other paid third parties. The Parliament press release states that, where firms use finfluencers to promote financial products or contracts, firms should have a written agreement with the finfluencer, hold the finfluencer’s contact details, and exercise control over the finfluencer’s promotional activity.

    PRIIPs KIDs and disclosure updates

    Negotiators agreed changes to the PRIIPs KID. The Parliament press release states that the KID should provide forward‑looking performance scenarios based on realistic data and plausible assumptions. The Council press release states that updated KID templates will be developed by the relevant European supervisory authorities and that KID information on costs, risk, and expected returns will be made more visible and accessible. It also states that, 30 months after entry into force of the new PRIIPs rules, KID information will have to be provided in a machine‑readable format to support comparison.

    Professional client opt‑up

    The Council press release indicates changes to the criteria for retail investors to be treated as professional clients, with two out of three criteria required (a short sketch of this two-of-three test follows below):

    • Transaction activity: 15 significant transactions over the last three years, 30 transactions over the previous year, or 10 transactions over €30,000 in unlisted companies over the last five years.
    • Portfolio size: average portfolio value above €250,000 over the last three years.
    • Experience or training: at least one year of relevant work in the financial sector, or proof of education or training and an ability to evaluate risk. The training/education alternative may not be combined with the portfolio criterion.

    The Council text also indicates that certain managers and directors of financial firms subject to fit‑and‑proper assessment, and certain AIFM employees with relevant fund knowledge and experience, will be treated as professional clients.
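    As flagged above, here is a minimal Python sketch of the two-of-three opt-up test. It reflects one reading of the press-release summary; the function name and structure are illustrative, and the final legal text may differ.

```python
# Illustrative only: professional-client opt-up as summarised above.
def qualifies_as_professional(transaction_activity: bool,
                              portfolio_size: bool,
                              work_experience: bool,
                              education_or_training: bool) -> bool:
    """Two of three criteria must be met; the education/training route to
    the experience criterion may not be combined with the portfolio
    criterion (one reading of the Council press release)."""
    return any([
        transaction_activity and portfolio_size,
        transaction_activity and (work_experience or education_or_training),
        portfolio_size and work_experience,  # education/training excluded
    ])

# Portfolio size plus education/training alone does not qualify:
print(qualifies_as_professional(False, True, False, True))  # False
```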
    Implementation planning

    Firms distributing products to EU retail clients should map the package to product design, pricing, cost/charge data, disclosures, advice workflows, inducement policies, and third‑party marketing arrangements. This includes assessing reliance on peer‑group comparisons and supervisory benchmarks for “value for money” and revisiting contracts and controls used for influencer-led promotion. Our team at Prokopiev Law Group advises on EU retail investment and distribution rules, including readiness planning for the RIS package across investment, insurance, and digital channels. Contact us to discuss scoping and implementation planning. Examples include: MiCA, CASP licensing, token structuring, securities analysis, AML/KYC, sanctions compliance, financial promotions, stablecoins, staking, DeFi, RWAs, custody. The information provided is not legal, tax, investment, or accounting advice and should not be used as such. It is for discussion purposes only. Seek guidance from your own legal counsel and advisors on any matters. The views presented are those of the author and not any other individual or organization. Some parts of the text may be automatically generated. The author of this material makes no guarantees or warranties about the accuracy or completeness of the information.

  • Model Weights, Watermarks, and Memorisation: UK, German, and US Primary-Source Signals on Generative AI and IP

    Generative AI providers are now facing materially different answers to a deceptively simple question: when does a model “contain” protected content, and who is responsible when that content shows up in outputs? In late 2025, the High Court of England and Wales (Chancery Division) and the Regional Court of Munich I each issued detailed rulings tied to real-world model behavior, while the USPTO’s Appeals Review Panel issued a precedential patent-eligibility decision that is directly relevant to ML training inventions.

    In the UK, Getty Images (US) Inc v Stability AI Ltd turned on a narrow set of claims that survived to trial. The court recorded that Getty’s “Outputs Claim” and “Training and Development Claims” were abandoned, leaving (i) trade mark infringement allegations tied to “synthetic” watermarks in generated images, and (ii) a novel secondary copyright theory aimed at downloads/importation of the Stable Diffusion model itself. The result was a mixed trade mark win for Getty on limited evidence, paired with a clear rejection of the secondary copyright importation theory for the model weights.

    On the trade mark side, the court treated infringement as evidence-driven and version-specific. Getty established infringement in respect of certain iStock watermarks under section 10(1) and section 10(2) of the Trade Marks Act 1994, but the findings were expressly confined to specific example outputs and did not extend to a broader conclusion about scale. The court dismissed Getty’s section 10(3) dilution claim, and it declined to address passing off on the way the case had been advanced. It also found no evidence supporting section 10(1) infringement for Getty Images watermarks, and it found no UK evidence of users generating Getty/iStock watermarks for certain later model versions referenced in the pleadings.

    The UK decision’s most consequential point for genAI developers was the court’s treatment of “infringing copy” in the secondary infringement provisions of the Copyright, Designs and Patents Act 1988. The court accepted that an “article” can be intangible, then held that an “infringing copy” must still be a “copy” in the statutory sense. On the facts found, the Stable Diffusion model in its final iteration did not store the claimant’s images, and the model weights were described as the product of learned patterns and features from training rather than stored copies. That conclusion disposed of the importation/distribution theory under sections 22 and 23 (as pleaded) because the model was not an “infringing copy,” even if training involved storing copies elsewhere during the training process.

    Germany has moved in the opposite direction on closely related concepts. In a case brought by GEMA against the operators of a GPT-based chatbot (models referenced in the judgment include “4” and “4o”), the Regional Court of Munich I (LG München I) treated memorisation as the hinge point. The court found that the disputed song lyrics were “reproducibly” embodied in the model parameters, and that the “work” could be made perceptible through simple prompts, which was sufficient for a reproduction analysis under German law’s technology-neutral understanding of fixation and perceptibility. The court granted injunctive relief against reproducing the lyrics “in” the language models and against making the lyrics publicly available via outputs, along with information and damages findings in principle. The Munich court also drew a sharp line on text-and-data-mining.
    It accepted that the German text and data mining exception (UrhG § 44b, implementing DSM Directive Article 4) can cover preparatory reproduction acts involved in assembling a training corpus. It then held that the memorisation of protected works within the model during training was not merely “text and data mining” and was not covered by § 44b, because the relevant reproductions did not serve the data analysis purpose that justifies the exception. In the court’s framing, the exception cannot be stretched to legitimise reproductions that reach beyond analysis and directly interfere with rightsholders’ exploitation interests.

    The Munich decision is also operationally important on attribution of conduct. The court treated the model operators as the responsible actors for reproductions caused by outputs where the prompts were “simple” and did not meaningfully dictate content. It held that the operators retained “Tatherrschaft” (control over the act) in that scenario, rather than shifting responsibility to the user as “prompter.” This matters for consumer-facing chat products, because it ties liability to controllable system choices: training data selection, training execution, and model architecture.

    In the United States, the USPTO’s Appeals Review Panel (ARP) moved in a different direction, addressing patent eligibility of AI training inventions rather than IP infringement risk. In Ex parte Desjardins, the ARP treated the claims as reciting an abstract idea at Step 2A, Prong One (mathematical calculation), then held the claims integrated that abstract idea into a practical application at Step 2A, Prong Two because the claim limitations were tied to technical improvements in continual learning and model efficiency: reducing storage requirements and preserving performance across sequential training, addressing “catastrophic forgetting.” The ARP vacated the Board’s § 101 new ground of rejection (without disturbing other aspects of the Board’s prior decisions). The USPTO then issued an advance notice of change to the MPEP to reflect the decision’s implications for examination practice.

    For product and legal teams operating across these jurisdictions, the combined signal is clear: “weights versus copies” is not a universal answer, and memorisation behavior is now a litigation fact, not a purely academic risk. UK exposure in this set of facts centred on trade mark and consumer perception issues tied to output artifacts (watermarks), while Germany treated the same category of model behavior (verbatim or near-verbatim retrieval from training data) as a reproduction that can occur “in the model,” paired with operator responsibility for outputs triggered by minimal prompts. US developments, at least at the USPTO, are simultaneously encouraging applicants to frame ML training inventions in terms of concrete technical improvements rather than “math in the abstract.”

    Practical mitigations that map to these rulings are not exotic. They include disciplined dataset provenance and opt-out handling, memorisation testing designed to detect verbatim recall (a sketch follows below), output controls for protected-corpus requests (lyrics, long passages), and product decisions that reduce the probability of generating confusing trade marks or brand-identifying artifacts. Where model weights are distributed, teams should also evaluate whether the legal theory in a given forum treats downstream distribution as distribution of a “copy,” and what evidence will be used to prove (or disprove) storage/containment of protected works.
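    The memorisation-testing mitigation flagged above can be prototyped cheaply. This is a minimal sketch, assuming a generic text-generation callable: `generate` is a placeholder rather than any specific vendor API, and the review threshold is a product-risk choice, not a legal standard drawn from the rulings.

```python
# Sketch of a verbatim-recall probe for a protected text.
from difflib import SequenceMatcher

def verbatim_recall(generate, protected_text: str, prefix_chars: int = 200) -> float:
    """Prompt with a prefix of the work; return the longest contiguous run
    of the remainder reproduced verbatim, as a fraction of the remainder."""
    prefix = protected_text[:prefix_chars]
    remainder = protected_text[prefix_chars:]
    completion = generate(prefix, max_chars=len(remainder))
    m = SequenceMatcher(None, completion, remainder).find_longest_match(
        0, len(completion), 0, len(remainder))
    return m.size / max(1, len(remainder))

# Example policy: flag a work for review when the probe exceeds, say, 0.1
# (a long contiguous verbatim run), and route lyric-like requests to an
# output filter.
```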
If you would like to discuss how these developments affect model training, deployment, licensing posture, and product controls, our team at Prokopiev Law Group can advise on both AI/content risk and adjacent digital-asset matters. Examples include: tokenised IP, NFTs, DeFi, stablecoins, staking, DAOs, RWAs, custody, MiCA compliance, VASP licensing, AML/KYC. Disclaimer: This document is for informational purposes only and does not constitute legal advice. The information provided is based on publicly available sources and may not reflect the latest legal developments. Readers should seek professional legal counsel before acting on any information contained in this document. Some parts of the text may be automatically generated. The views presented are those of the author and not any other individual or organization.

  • AI in Arbitration: Frameworks, Applications, and Challenges

    Artificial Intelligence (AI) is being integrated into arbitration as a tool to enhance efficiency and decision-making. In broad terms, AI refers to computer systems capable of tasks that typically require human intelligence, such as learning, pattern recognition, and natural language processing. In arbitration, AI’s role so far has been largely assistive – helping arbitrators and parties manage complex information and streamline procedures. For example, AI-driven software can rapidly review and analyze documents, searching exhibits or transcripts for relevant facts far faster than manual methods. Machine learning algorithms can detect inconsistencies across witness testimonies or summarize lengthy briefs into key points. Generative AI tools (like large language models) are also being used to draft texts – from procedural orders to initial award templates – based on user-provided inputs. The potential applications of AI in arbitration extend to nearly every stage of the process. AI systems can assist in legal research, sifting through case law and past awards to identify relevant precedents or even predict probable outcomes based on historical patterns. They can facilitate case management by automating routine administrative tasks, scheduling, and communications. We are proud to present an analysis of AI in arbitration available today. If you are exploring AI’s role in arbitration, Prokopiev Law Group can help. We pair seasoned litigators with leading-edge AI resources to streamline complex cases and help you navigate this evolving landscape. If your situation calls for additional expertise, we are equally prepared to connect you with the right partners. Legal and Regulatory Frameworks The incorporation of AI into arbitration raises questions about how existing laws and regulations apply to its use. Globally, no uniform or comprehensive legal regime yet governs AI in arbitration, but several jurisdictions have started to address the intersection of AI and dispute resolution through legislation, regulations, or policy guidelines. Convention on the Recognition and Enforcement of Foreign Arbitral Awards (New York Convention) A. Overview of the Convention’s Scope Article I(1) states that the Convention applies to the “recognition and enforcement of arbitral awards made in the territory of a State other than the State where the recognition and enforcement are sought,” arising out of disputes between “persons, whether physical or legal.” The Convention does not define “arbitrator” explicitly; rather, it references “arbitral awards … made by arbitrators appointed for each case or … permanent arbitral bodies.” There is no mention of any possibility that the arbitrator could be non-human or an AI entity. B. Key Provisions Envision Human Arbitrators Article II : Speaks of “an agreement in writing under which the parties undertake to submit to arbitration all or any differences….” The Convention assumes an “arbitration agreement” with standard rights to appoint or challenge “the arbitrator.” Article III–V : Concern the recognition/enforcement of awards and set out grounds upon which enforcement may be refused. 
For instance, Article V(1)(b)  refers to a party “not [being] given proper notice of the appointment of the arbitrator,” or “otherwise unable to present his case.” Article V(1)(d) : Allows refusal of enforcement if “the composition of the arbitral authority … was not in accordance with the agreement of the parties….” The reference to an “arbitral authority,” “arbitrator,” or “composition” suggests a set of identifiable, human  arbitrators who can be “composed” incorrectly or fail to abide by required qualifications. Article I(2) : The “term ‘arbitral awards’ shall include not only awards made by arbitrators appointed for each case but also those made by permanent arbitral bodies….” Even in the latter scenario, the Convention contemplates a recognized body of human arbitrators (e.g. an institution with a roster of living arbitrators), not an automated algorithm. C. The Convention’s Enforcement Regime Presupposes Human Judgment The entire enforcement structure is that an award is recognized only if it meets due-process requirements such as giving a party notice, enabling them to present their case, ensuring the arbitrator or arbitral body was validly composed. For instance, Article V(1)(a) contemplates that each party to the arbitration agreement must have “capacity,” and Article V(1)(b) contemplates that the party was able to present its case to an impartial decision-maker. An AI system cannot easily satisfy these due-process standards in the sense of being challenged, replaced, or tested for partiality or conflict of interest. D. “Permanent Arbitral Bodies” Do Not Imply Autonomous AI While Article I(2) references that an arbitral award can be made by “permanent arbitral bodies,” this does not open the door to a fully autonomous AI deciding the merits. A “permanent arbitral body” is typically an arbitral institution (like the ICC Court or an arbitral chamber) with rosters of living arbitrators. Nowhere does the Convention recognize a non-human decision-maker substituting for arbitrators themselves. UNCITRAL Model Law on International Commercial Arbitration A. Terminology and Structure Article 2(b) of the Model Law defines “arbitral tribunal” as “a sole arbitrator or a panel of arbitrators.” Article 10 refers to determining “the number of arbitrators,” “one” or “three,” etc., which in ordinary usage and practice means one or more individual persons. Article 11 lays out a procedure for appointing  arbitrators, handling their challenge  (articles 13, 14), and so on, plainly assuming a person. B. Core Provisions That Imply a Human Arbitrator Article 11 (and subsequent articles on challenge, removal, or replacement of arbitrators) revolve around verifying personal  traits, such as independence, impartiality, and conflicts of interest. For example, Article 12(1) requires an arbitrator, upon appointment, to “disclose any circumstances likely to give rise to justifiable doubts as to his impartiality or independence.” This is obviously oriented to a natural person. An AI system cannot meaningfully “disclose” personal conflicts. Article 31(1) demands that “The award shall be made in writing and shall be signed by the arbitrator or arbitrators.” While in practice a tribunal can sign electronically, the point is that an identifiable, accountable person  signs the award. A machine cannot undertake the personal act of signing or be held responsible. 
Article 19 affirms the freedom of the parties to determine procedure, but absent party agreement, the tribunal "may conduct the arbitration in such manner as it considers appropriate." This includes evaluating evidence, hearing witnesses, and ensuring fundamental fairness (Articles 18 and 24). That discretionary, human-like judgment is not accounted for if the "tribunal" were simply an AI tool with no human oversight.

C. Arbitrator's Duties Presuppose Personal Judgment

Many of the Model Law's articles require the arbitrator to exercise personal discretion and to do so impartially:

Article 18: "The parties shall be treated with equality and each party shall be given a full opportunity of presenting his case." Human arbitrators are responsible for ensuring this fundamental right.

Article 24: The tribunal hears the parties, manages documents, questions witnesses, and so on.

Article 26: The tribunal may appoint experts and question them.

Article 17 (and especially the 2006 amendments in Chapter IV A) requires the arbitrator to assess whether an "interim measure" is warranted, including the "harm likely to result if the measure is not granted."

These duties reflect a legal expectation of personal capacity for judgment, integral to the role of "arbitrator" as recognized by the Model Law.

United States

The United States has no specific federal statute or set of arbitration rules explicitly regulating the use of AI in arbitral proceedings. The Federal Arbitration Act ("FAA"), first enacted in 1925 (now codified at 9 U.S.C. §§ 1–16 and supplemented by Chapters 2 and 3), was drafted with human decision-makers in mind. Indeed, its provisions refer to "the arbitrators" or "either of them" in personal terms. Fully autonomous AI "arbitrators" were obviously not contemplated in 1925. Nonetheless, the FAA imposes no direct ban on an arbitrator's use of technology. Under U.S. law, arbitration is fundamentally a matter of contract. If both parties consent, the arbitrator's latitude to employ (or even to be) some form of technology is generally broad. So long as the parties' agreement to arbitrate does not violate any other controlling principle (e.g., unconscionability, public policy), it will likely be enforceable.

AI and the "Essence" of Arbitration Under the FAA

The threshold issue is not whether an arbitrator may use AI, but whether AI use undermines the essence of arbitration under the FAA. The parties' arbitration agreement, and the requirement that the arbitrator ultimately decide the matter, are central. Under 9 U.S.C. § 10(a), a party may move to vacate an award if "the arbitrators exceeded their powers," or if there was "evident partiality" or "corruption." In theory, if AI fully supplants the human arbitrator and creates doubt about the award's impartiality or the arbitrator's independent judgment, a court could be asked to vacate on those grounds.

A. Replacing the Arbitrator Entirely

If AI replaces the arbitrator (with minimal or no human oversight), courts might question whether a non-human "arbitrator" is legally competent to issue an "award." Under the FAA, the arbitrator's written award is crucial (9 U.S.C. §§ 9–13). If the AI cannot satisfy minimal procedural requirements (like issuing a valid award or being sworn to hear testimony), or raises questions about "evident partiality," a reviewing court could find a basis to vacate (9 U.S.C. § 10(a)).
If an AI system controls the proceeding such that the human arbitrator exercises no true discretion, that might mean the award was not genuinely issued by the arbitrator, risking vacatur under 9 U.S.C. § 10(a)(4) for "imperfectly executed" powers.

B. Public Policy Concerns

An all-AI "award" that lacks a human hallmark of neutrality could, in a hypothetical scenario, be challenged under public policy.

II. Potential Legal Challenges When AI Is Used in Arbitration

A. Due Process and Fair Hearing

Right to Present One's Case (9 U.S.C. § 10(a)(3)): Both parties must have the chance to be heard and present evidence. If AI inadvertently discards or downplays material evidence, and the arbitrator then fails to consider it, a party could allege denial of a fair hearing.

Transparency: While arbitrators are not generally obliged to disclose their internal deliberations, an arbitrator's undisclosed use of AI could raise due process issues if it introduces an unvetted analysis. If a losing party discovers the award rested on an AI-driven legal theory not argued by either side, the party could claim it had no opportunity to rebut it.

"Undue Means" (9 U.S.C. § 10(a)(1)): Traditionally, this refers to fraudulent or improper party conduct. Still, a creative argument might be that reliance on AI (trained on unknown data) without informing both parties is "undue means." If the arbitrator's decision relies on undisclosed AI, a party could argue it was effectively ambushed.

B. Algorithmic Bias and Fairness of Outcomes

Bias in AI Decision-Making: AI tools can inadvertently incorporate biases if trained on skewed data. This can undercut the neutrality required of an arbitrator. If an AI influences an award (for example, a damages calculator that systematically undervalues certain claims), a party might allege it introduced a biased element into the arbitration process.

Challenge via "Evident Partiality" (9 U.S.C. § 10(a)(2)): If an arbitrator relies on an AI known (or discoverable) to be biased, a losing party might argue constructive partiality. A court's review is narrow, but extreme or obvious bias could support vacatur.

III. FAA Vacatur or Modification of AI-Assisted Awards

A. Exceeding Powers or Improper Delegation (9 U.S.C. § 10(a)(4))

An award is vulnerable if the arbitrator effectively delegates the decision to AI and merely rubber-stamps its output. Parties choose a human neutral, not a machine, and can argue the arbitrator "exceeded [their] powers" by failing to personally render judgment.

B. Procedural Misconduct and Prejudice (9 U.S.C. § 10(a)(3))

Using AI might lead to misconduct if it pulls in information outside the record or curtails a party's presentation of evidence. Any ex parte data-gathering (even by AI) can be challenged. Courts might find "misbehavior" if parties had no chance to confront AI-derived theories.

C. Narrow Scope of Review

Judicial review under the FAA is strictly limited (9 U.S.C. §§ 10, 11). Simple factual or legal errors, even if AI-related, rarely suffice for vacatur. A challenger must show the AI involvement triggered a recognized statutory ground (e.g., refusing to hear pertinent evidence, or actual bias). Courts typically confirm awards unless there is a clear denial of fundamental fairness.

D. Modification of Awards (9 U.S.C. § 11)

If AI introduced a clear numerical error or a clerical-type mistake in the award, courts may modify or correct rather than vacate. Such errors include an "evident material miscalculation" (§ 11(a)) or defects in form not affecting the merits (§ 11(c)).
These are minor and straightforward fixes.

AI Bill of Rights

The White House's Blueprint for an AI Bill of Rights (October 2022) sets forth high-level principles for the responsible design and use of automated systems. Two of its core tenets are particularly relevant: "Notice and Explanation" (transparency) and "Human Alternatives, Consideration, and Fallback" (human oversight). The Notice and Explanation principle provides that people "should know that an automated system is being used and understand how and why it contributes to outcomes that impact [them]," with plain-language explanations of an AI system's role. The Human Alternatives principle urges that people "should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems" caused by an automated decision. While the AI Bill of Rights is a policy guidance document (not a binding law), it reflects a federal push for algorithmic transparency, accountability, and human oversight in AI deployments. These values can certainly inform arbitration practice. For instance, if an arbitrator or arbitration institution chooses to utilize AI in case management or decision-making, adhering to these principles, by being transparent about the AI's use and ensuring a human arbitrator remains in ultimate control, would be consistent with emerging best practices. We already see movement in this direction: industry guidelines (e.g., the Silicon Valley Arbitration & Mediation Center's "Guidelines on the Use of AI in Arbitration," discussed below) emphasize disclosure of AI use and insist that arbitrators must not delegate their decision-making responsibility to an AI.

European Union

AI Regulation

The AI Act (Regulation (EU) 2024/1689) lays down harmonized rules for the development, placing on the market, putting into service, and use of AI systems within the Union. It follows a risk-based approach, whereby AI systems that pose particularly high risks to safety or to fundamental rights are subject to enhanced obligations. The AI Act designates several domains in which AI systems are considered "high-risk." Of particular relevance is Recital (61), which classifies AI systems "intended to be used by a judicial authority or on its behalf to assist a judicial authority in researching and interpreting facts and the law and in applying the law to a concrete set of facts" as high-risk. That same recital extends the classification to AI systems "intended to be used by alternative dispute resolution bodies for those purposes," when the decisions they produce create legal effects for the parties. Consequently, if an AI system is intended to assist arbitrators in a manner akin to that for judicial authorities, and if its decisions or outputs can materially shape the arbitrators' final binding outcome, then such an AI system comes within the "high-risk" classification.

I. Conditions Triggering High-Risk Classification for AI Arbitration

Does the AI "assist" in interpreting facts or law? According to Recital (61), the high-risk label applies when the AI system is used "to assist […] in researching and interpreting facts and the law and in applying the law to a concrete set of facts." If an AI tool offers predictive analytics on likely case outcomes, interprets contract terms in light of relevant legal doctrine, or provides reasoned suggestions on liability, quantum of damages, or relevant procedural steps, it falls within the scope of "assisting" the arbitrator in a legally determinative task.
Does the AI system produce legal effects? The Regulation explicitly points to "the outcomes of the alternative dispute resolution proceedings [that] produce legal effects for the parties" (Recital (61)). Arbitral awards are typically binding on the parties (thus having legal effect) and often enforceable in national courts. Therefore, an AI system that guides or shapes the arbitrator's (or arbitration panel's) legally binding decision is presumably captured.

Exclusion of "purely ancillary" uses. Recital (61) clarifies that "purely ancillary administrative activities that do not affect the actual administration of justice in individual cases" do not trigger high-risk status. This means that if the AI is limited to scheduling hearings, anonymizing documents, transcribing proceedings, or managing routine tasks that do not influence the legal or factual determinations, it would not be considered high-risk under this Regulation. The dividing line is whether the AI's output can materially influence the final resolution of the dispute (e.g., analyzing core evidence, recommending liability determinations, or drafting essential portions of the award).

II. Legal and Practical Implications for AI in Arbitration in the EU

When an AI tool used in arbitration is classified as high-risk, a suite of obligations from the Regulation applies. The Regulation's relevant provisions on high-risk systems span risk management, data governance, technical documentation, transparency, human oversight, and post-market monitoring (notably Articles 9–15 and their accompanying recitals). Below is an overview of these obligations as they would apply to AI arbitration:

A. Risk Management System (Article 9)

Providers of high-risk AI systems are required to implement a documented, continuous risk management process covering the system's entire lifecycle. For AI arbitration, the provider of the software (i.e., the entity placing it on the market or putting it into service) must:

Identify potential risks (including the risk of incorrect, biased, or otherwise harmful award recommendations).

Mitigate or prevent those risks through corresponding technical or organisational measures.

Account for reasonably foreseeable misuse (for instance, using the tool for types of disputes or jurisdictions it is not designed to handle).

B. Data Governance and Quality (Article 10)

Data sets used to train, validate, or test a high-risk AI system must:

Be relevant, representative, and correct to the greatest extent possible.

Undergo appropriate governance and management to reduce errors or potential biases that could lead to discriminatory decisions or outcomes in arbitration.

C. Technical Documentation, Record-Keeping, and Logging (Articles 11 and 12)

High-risk AI systems must include:

Clear, up-to-date technical documentation, covering the model design, data sets, performance metrics, known limitations, and other key technical aspects.

Proper record-keeping ("logging") of the system's operations and outcomes, enabling traceability and ex post review (e.g., in the event of challenges to an arbitral decision relying on the AI's outputs). A minimal illustration of what such an audit trail might look like appears after this list.
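To make the record-keeping idea concrete, here is a minimal sketch in Python of an audit trail for AI interactions in an arbitration matter. It is illustrative only: the function and field names (log_ai_event, AUDIT_LOG, the case and tool identifiers) are hypothetical choices of ours, not anything prescribed by Article 12 or drawn from a real product.

```python
# Illustrative sketch only (not from the AI Act or any real product):
# a minimal audit trail for AI interactions in a matter, in the spirit
# of the logging/traceability idea in Article 12.
import json
import hashlib
from datetime import datetime, timezone

AUDIT_LOG = []  # hypothetical store; in practice, durable append-only storage


def log_ai_event(case_id: str, tool: str, prompt: str, output: str) -> dict:
    """Record one AI interaction so it can be reviewed ex post,
    e.g. if a party later questions an output the tribunal saw."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "case_id": case_id,
        "tool": tool,
        # Hashes allow later verification that logged inputs/outputs
        # match what was actually used, without storing full content.
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
        "output_excerpt": output[:200],
    }
    AUDIT_LOG.append(entry)
    return entry


entry = log_ai_event(
    case_id="ARB-2025-017",                      # hypothetical case number
    tool="summarizer-v2",                        # hypothetical tool name
    prompt="Summarize Exhibit C-14.",
    output="Exhibit C-14 records a delivery of goods on 3 May 2023...",
)
print(json.dumps(entry, indent=2))
```

Hashing the full prompt and output (rather than storing them verbatim in every log entry) is one way to reconcile traceability with confidentiality, since the log can later prove what was used without itself disclosing the content.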
D. Transparency and Instructions for Use (Article 13)

Providers of high-risk AI systems must ensure sufficient transparency by:

Supplying deployers (e.g., arbitral institutions) with instructions about how the system arrives at its recommendations, the system's capabilities, known constraints, and safe operating conditions.

Disclosing confidence metrics, disclaimers of reliance, warnings about potential error or bias, and any other usage guidelines that allow arbitrators to understand and properly interpret the system's output.

E. Human Oversight (Article 14)

High-risk AI systems must be designed and developed to allow for human oversight:

Arbitrators (or arbitral panels) must remain the ultimate decision-makers and be able to detect, override, or disregard any AI output that appears flawed or biased.

The AI tool cannot replace the arbitrator's judgment; rather, it should support the decision-making process in arbitration while preserving genuine human control.

F. Accuracy, Robustness, and Cybersecurity (Article 15)

Providers must ensure that high-risk AI systems:

Achieve and maintain a level of accuracy that is appropriate in relation to the system's intended purpose (e.g., suggesting case outcomes in arbitration).

Are sufficiently robust and resilient against errors, manipulation, or cybersecurity threats, which is particularly critical for AI tools that could otherwise be hacked to produce fraudulent or manipulated arbitral results.

G. Post-Market Monitoring (Article 72)

Providers of high-risk AI systems must also:

Monitor real-world performance once the system is deployed (i.e., used in actual arbitration proceedings).

Take timely corrective actions if unacceptable deviations (e.g., high error rates, systemic biases) emerge in practice.

III. The Role of the Provider vs. the Arbitration Institution (Deployer)

Pursuant to Article 3(3) of Regulation (EU) 2024/1689 (the "AI Act"), a provider is defined, in relevant part, as "...any natural or legal person, public authority, agency or other body that develops an AI system or that has an AI system developed and places that system on the market or puts it into service under its own name or trademark." Accordingly, where a law firm, software vendor, or specialized AI start-up develops a high-risk AI system and makes it available to arbitral institutions for the purpose of dispute resolution, that entity qualifies as the provider. Providers of high-risk AI systems must comply with the obligations set out in Articles 16 et seq. of the AI Act, including ensuring that the high-risk AI system meets the requirements laid down in Articles 9–15, performing or arranging the relevant conformity assessments (Article 43), and establishing post-market monitoring (Article 72).

Under Article 3(4) of the AI Act, a deployer is defined as "...any natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity." In the arbitration context, the arbitral institution or the arbitration panel implementing the high-risk AI system is considered the deployer. Deployer obligations, set out in Article 26, include using the system in compliance with the provider's instructions, monitoring its performance, retaining relevant records, and ensuring that human oversight is effectively exercised throughout the system's operation (Article 14). A minimal sketch of such an oversight gate follows.
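As a thought experiment, the human-oversight requirement can be pictured as a gate between the AI's suggestion and anything that reaches the draft award. The Python sketch below is a minimal illustration under our own assumptions: the AiSuggestion class and arbitrator_review function are invented names, not drawn from the Act or any real tool.

```python
# Illustrative sketch only: a human-oversight "gate" in the spirit of
# Article 14. The AI proposes; nothing is treated as usable unless a
# named human arbitrator has reviewed it. All names here are hypothetical.
from dataclasses import dataclass
from typing import Optional


@dataclass
class AiSuggestion:
    text: str
    confidence: float             # surfaced to the reviewer (cf. Article 13's transparency aim)
    accepted: bool = False
    reviewed_by: Optional[str] = None
    notes: str = ""


def arbitrator_review(s: AiSuggestion, arbitrator: str,
                      accept: bool, notes: str = "") -> AiSuggestion:
    """Only a human reviewer can accept a suggestion; rejection is always
    available, so AI output can be overridden or disregarded entirely."""
    s.reviewed_by = arbitrator
    s.accepted = accept
    s.notes = notes
    return s


suggestion = AiSuggestion(
    text="Claimant's quantum model double-counts storage costs for 2022.",
    confidence=0.74,
)
reviewed = arbitrator_review(
    suggestion, arbitrator="Presiding Arbitrator", accept=False,
    notes="Checked the exhibits; the two cost items are distinct.",
)
assert not reviewed.accepted  # the rejected suggestion never reaches the draft award
```

The point of the gate is structural: the record shows who reviewed each AI output and what they decided, which is also the kind of trail that helps rebut a later claim of improper delegation.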
IV. Distinguishing "High-Risk" vs. "Ancillary" AI in Arbitration

The AI Act's operative text (specifically Article 6(2) read with Annex III, point 8(a)) classifies as high-risk those AI systems "intended to be used by a judicial authority or on their behalf to assist a judicial authority in researching and interpreting facts and the law and in applying the law to a concrete set of facts, or to be used in a similar way in alternative dispute resolution," when the outcomes of the proceedings produce legal effects. However, the Regulation does not designate as high-risk any AI system that merely executes ancillary or administrative tasks (the Act itself uses the phrase "purely ancillary administrative activities that do not affect the actual administration of justice in individual cases" in Recital (61)). Therefore:

High-risk arbitration AI: Covered by Article 6(2) and Annex III, point 8(a), if the AI system materially or substantively influences the resolution of the dispute by "assisting in researching and interpreting facts and the law and in applying the law to a concrete set of facts." This includes systems that suggest legal conclusions, draft core elements of the arbitral decision, or advise on factual or legal findings central to the final outcome.

Not high-risk: If the AI tool is purely "ancillary" in nature (for instance, scheduling, document formatting, automated transcription, or anonymization) and does not shape the actual analysis or findings that produce legally binding effects. Such use cases are not captured by Annex III, nor do they meet the condition in Article 6(2).

Boundary scenarios: If the AI tool nominally performs only "supporting" tasks (such as ranking evidence or recommending procedural steps) but in practice significantly guides or steers the arbitrator's essential decisions, that usage may bring it under the scope of the high-risk classification. The decisive factor is whether the system's functioning meaningfully affects how the law and facts are ultimately applied.

Hence the distinction between "high-risk" and "not high-risk" (or ancillary) AI in arbitration aligns with the AI Act, subject to the caveat that borderline applications must be assessed in light of whether the AI's outputs meaningfully influence or determine legally binding outcomes. A toy illustration of this triage appears below.
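For illustration only, the high-risk/ancillary/boundary triage described above can be caricatured in a few lines of Python. The keyword sets are an invented simplification of ours; the real classification is a legal judgment about whether the system's output materially influences how the law and facts are applied, not a string lookup.

```python
# Illustrative sketch only: a caricature of the triage under Article 6(2)
# read with Annex III, point 8(a) and Recital 61. The categories below
# are our own simplification, not the Act's test.
ANCILLARY_USES = {
    "scheduling", "document formatting", "transcription", "anonymization",
}
DETERMINATIVE_USES = {
    "outcome prediction", "liability analysis",
    "damages recommendation", "award drafting",
}


def triage(intended_use: str, outputs_have_legal_effect: bool) -> str:
    """Rough first-pass classification of an arbitration AI use case."""
    if intended_use in ANCILLARY_USES:
        return "likely not high-risk (purely ancillary administrative activity)"
    if intended_use in DETERMINATIVE_USES and outputs_have_legal_effect:
        return "likely high-risk (assists in applying the law to the facts)"
    # Everything else is the boundary scenario described above.
    return "borderline: assess whether the output materially influences the award"


for use in ("transcription", "damages recommendation", "evidence ranking"):
    print(f"{use}: {triage(use, outputs_have_legal_effect=True)}")
```

Note that "evidence ranking" deliberately falls into the borderline branch, mirroring the boundary scenario above: nominally supportive tasks can still steer the essential decision in practice.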
National Arbitration Laws in Key EU Member States

Arbitration in the EU is primarily governed by the national laws of each Member State. Over the past decades, many European countries have modernized their arbitration statutes, often drawing inspiration from the UNCITRAL Model Law on International Commercial Arbitration. However, the extent of Model Law adoption and the procedural frameworks vary by country:

Germany. Primary legislation: 10th Book of the Code of Civil Procedure (ZPO), Sections 1025–1066. Model Law: yes; Germany adopted the 1985 Model Law with modifications, applying it to both domestic and international arbitrations. Enforcement: arbitration-friendly courts; awards are enforceable by court order, with grounds for setting aside or refusal mirroring the New York Convention (NYC) defenses.

France. Primary legislation: Code of Civil Procedure (CPC), Articles 1442–1527. Model Law: no; French law is an independent regime not based on the Model Law. Domestic arbitration is subject to more stringent rules, while international arbitration (defined broadly as a dispute implicating international trade interests) is more liberal. Enforcement: few mandatory rules in international arbitration, with wide autonomy for parties and arbitrators; no strict writing requirement for international arbitration agreements; uses an exequatur process.

Netherlands. Primary legislation: Dutch Arbitration Act (Book 4, Code of Civil Procedure). Model Law: partial; not a verbatim adoption but significantly inspired by it, with no distinction between domestic and international cases. Enforcement: domestic awards are enforced like judgments after deposit with the court; foreign awards are recognized under the NYC, as the Act's enforcement provisions closely follow the NYC grounds.

Sweden. Primary legislation: Swedish Arbitration Act (SAA). Model Law: yes in substance; Sweden did not formally adopt the Model Law's text, but the SAA contains substantive similarities and applies equally to domestic and international arbitrations; the SAA sets certain qualifications (e.g., legal capacity, impartiality) beyond the Model Law minima. Enforcement: strongly pro-enforcement; as an NYC party, Sweden enforces foreign awards under NYC terms.

Spain. Primary legislation: Spanish Arbitration Act 60/2003. Model Law: yes; the 2003 Act was heavily based on the 1985 UNCITRAL Model Law. Enforcement: arbitral awards are enforceable as final judgments (no appeal on the merits); the High Courts of Justice hear actions to annul awards on limited grounds; foreign awards are recognized under the NYC, and Spanish courts generally uphold awards absent clear NYC Article V grounds.

Italy. Primary legislation: Code of Civil Procedure (CCP). Model Law: no; Italian arbitration law is not directly based on the Model Law, though it shares many of its core principles. Enforcement: the Courts of Appeal have jurisdiction over recognition and enforcement of foreign awards (Articles 839–840 CCP); Italy applies NYC criteria for refusal of enforcement; domestic awards are enforced after the Court of Appeal confirms there are no grounds to set them aside.

AI Arbitration in Germany

Below is a high-level analysis of how AI-based arbitration might fit into the wording of the German Arbitration Act (ZPO Book 10). The analysis is necessarily somewhat speculative, since the Act does not address AI at all.

A. No Direct Prohibition on AI

A first observation: nowhere in Book 10 is there an explicit rule banning or limiting the use of AI in arbitral proceedings. Terms such as "Schiedsrichter" (arbitrator), "Person," or "unabhängig und unparteiisch" (independent and impartial) consistently refer to human arbitrators, but do not categorically exclude the possibility of non-human decision-makers. The statutory text always presupposes a "person," "he," or "she" as an arbitrator; still, that is the default assumption of the legislative drafting (the 1998 reform and subsequent amendments), not necessarily a prohibition. One could argue that the repeated references to "Person" reflect an implicit normative stance that an arbitrator should be a human being, but from a strictly literal vantage point, it is an open question whether an AI "system" could serve in that role.

B. Potential Impediments to AI as Sole Arbitrator

Sections 1036–1038 require that arbitrators disclose conflicts of interest, remain impartial, and be capable of fulfilling the arbitrator's duties. These requirements seem conceptually bound to human qualities: "berechtigte Zweifel an ihrer Unparteilichkeit" (justifiable doubts as to their impartiality), "Umstände [...] unverzüglich offen zu legen" (circumstances to be disclosed without delay), "sein Amt nicht erfüllen" (unable to fulfill his office). One might argue an AI does not truly have "impartiality" or the capacity to "disclose" conflicts as a human might. A creative reading of these provisions might imply that only a human can exhibit these qualities, leading to an indirect barrier to AI arbitrators.
Even so, a forward-looking approach might interpret "unabhängig und unparteiisch" as a principle that can be satisfied technologically if the AI's training, data, and algorithms meet certain transparency standards. However, the textual references to "er," "Person," or the requirement to "disclose circumstances" do suggest a legislative design geared toward human arbitrators. If the parties themselves voluntarily designate an AI system, a court might question whether that "appointment" meets the statutory standard of an impartial "Schiedsrichter" capable of fulfilling the mandated disclosure obligations. It is unclear how an AI would spontaneously "disclose" conflicts or handle challenge procedures ("Ablehnungsverfahren") under §§ 1036–1037.

C. Formal Requirements Under § 1035

Under § 1035, the parties appoint ("ernennen") the arbitrator(s). The law contemplates an appointment by name, while also allowing the appointment method to be left to the parties' agreement. One might attempt to list an AI platform or a specialized AI arbitral entity as the "Schiedsrichter"; if the parties do not dispute that appointment, the process is presumably valid. The only textual friction is § 1035(5), under which the court must ensure an independent and impartial ("unabhängigen und unparteiischen") arbitrator. If court assistance in the appointment is requested, a judge might find that an AI does not meet the statutory criteria, effectively refusing to "install" it. But if the parties themselves have chosen an AI system in private, it is not impossible from a purely textual standpoint, though it is an untested scenario.

D. Procedural Rights: Notice, Hearing, and Audi Alteram Partem

Sections 1042–1048 require that each party is "to be heard" (rechtliches Gehör) and that the arbitrator handles evidence and ensures fairness. An AI system delivering a purely automated decision might be deemed to conflict with the personal oversight and reasoned assessment implied by these clauses. For instance, § 1042(1) states that "the parties shall be treated equally" and "each party is entitled to be heard." A purely algorithmic system could raise due-process concerns if it lacks the human capacity to weigh "fairness" or accommodate unforeseen procedural nuances. Still, the text does not explicitly say an automaton cannot do it; rather, it insists on respect for due process. If the AI system can incorporate such procedures (ensuring that parties can submit evidence, respond to each other, and receive an "explainable" outcome), there is no direct textual ban.

E. Setting Aside and Public Policy

Section 1059 allows a court to set aside an award for lack of due process or if the arbitrator was not properly appointed. An AI-based award that fails to grant each party an opportunity to present its case, or that obscures the basis for the decision, might be at risk of annulment under § 1059(2)(1)(b) or (d). The courts might also strike down an AI-run arbitration on "public policy" (ordre public) grounds if the process is deemed too opaque or not a "fair hearing." So although no explicit clause forbids AI arbitrators, the practical effect could be that an AI award is challenged under § 1059, § 1052 (decision-making by a panel of arbitrators), or § 1060(2).

France
A. Domestic Arbitration: Article 1450 Requires a Human Arbitrator

Article 1450 (Titre I, Chapitre II) provides, in part: "La mission d'arbitre ne peut être exercée que par une personne physique jouissant du plein exercice de ses droits. Si la convention d'arbitrage désigne une personne morale, celle-ci ne dispose que du pouvoir d'organiser l'arbitrage." ("The function of arbitrator may be exercised only by a natural person enjoying the full exercise of his or her rights. Where the arbitration agreement designates a legal person, that person has only the power to organize the arbitration.")

This is the single most direct statement in the Act that speaks to who (or what) may serve as arbitrator. It clearly states that the function of deciding the case ("la mission d'arbitre") can only be carried out by a "personne physique" (a natural person) enjoying full civil rights. Meanwhile, a personne morale (legal entity) may be tasked only with administering or organizing the proceedings, not issuing the decision. Under domestic arbitration rules, an AI system, even if structured within a legal entity, cannot lawfully act as the actual decider, because the statute explicitly demands a natural person. This requirement amounts to an indirect but quite categorical ban on a purely machine-based arbitrator in French domestic arbitration. An AI could presumably assist a human arbitrator, but it could not alone fulfill the statutory role of rendering the award.

B. International Arbitration: No Verbatim "Personne Physique" Rule, Yet Similar Implications

For international arbitration, Article 1506 cross-references some (but not all) provisions of Title I. Article 1450 is notably not among those that automatically apply to international cases. As a result, there is no verbatim statement in the international part that "the arbitrator must be a natural person." One might argue that, in principle, parties to an international arbitration could try to designate an AI system as their "arbitrator." However, the rest of the code (e.g., Articles 1456, 1457, and 1458, which are incorporated by Article 1506) consistently presumes that the arbitrator is capable of disclosing conflicts, being challenged ("récusé"), or having an impediment ("un empêchement"). These obligations appear tied to the qualities of a human being: impartiality, independence, the duty to reveal conflicts of interest, and the possibility of abstention or resignation. They strongly suggest that a living individual is expected. An AI tool cannot meaningfully "reveal circumstances" that might affect its independence; nor can it be removed ("révoqué") by unanimous party consent in the same sense as a human. Thus, even in international arbitration, the text strongly implies that an arbitrator is a person who can fulfill these statutory duties. In short, the literal bar in Article 1450 applies to domestic arbitration, but the spirit of the international arbitration provisions also envisions a human decision-maker. While there is no line that says "AI arbitrators are forbidden," the repeated references to "l'arbitre" as someone who must disclose, resign, or be removed for partiality push in the same direction. A French court would likely find that, under Articles 1456–1458 and 1506, an AI alone cannot meet the code's requirements for an arbitrator.

C. Possible Indirect Challenges and Public Policy

Even if parties somehow attempted to sidestep the requirement of a "personne physique," the code's enforcement provisions would pose other obstacles:

Due Process (Principe de la Contradiction). Articles 1464, 1465, 1510, and related provisions require observance of the adversarial principle (contradiction) and the equality of the parties (égalité des parties).
A purely automated arbitrator might fail to show that it afforded both sides a genuine opportunity to present arguments before a human decision-maker who can weigh them in a reasoned, flexible manner.

Setting Aside or Refusal of Exequatur. If an AI-based award flouts the code's requirement that the arbitrator have capacity and be impartial, the losing party can invoke Article 1492 (domestic) or Article 1520 (international) to seek annulment for irregular constitution of the tribunal or for violation of fundamental procedural principles.

Manifest Contrariety to Public Policy. Articles 1488 (domestic) and 1514 (international) require that a French court refuse exequatur if the award is "manifestly contrary to public policy." A completely AI-run arbitral process might be deemed contrary to fundamental procedural fairness.

In each respect, the Act's structure presumes a human tribunal with discretionary powers, an obligation to sign the award, and so forth. An AI alone would struggle to comply.

Netherlands

Under the Dutch Arbitration Act (Book 4 of the Dutch Code of Civil Procedure), the text presupposes that an arbitrator is a human decision-maker rather than an AI system. For instance, Article 1023 states that "Iedere handelingsbekwame natuurlijke persoon kan tot arbiter worden benoemd" ("Every legally competent natural person may be appointed as an arbitrator"). This phrasing is already suggestive: it frames the "arbiter" as a flesh-and-blood individual who has legal capacity. Other provisions likewise assume that an arbitrator can perform tasks generally associated with a human decision-maker. For example, arbitrators must disclose potential conflicts of interest and can be challenged (wraking) if there is "gerechtvaardigde twijfel … aan zijn onpartijdigheid of onafhankelijkheid" ("justified doubt … about his impartiality or independence"). The Act also speaks of the "ondertekening" (signing) of the arbitral award (Article 1057) and grants arbitrators discretionary procedural powers, such as the authority to weigh evidence, hear witnesses, and manage hearings. All these elements lean heavily on human qualities, like independence, impartiality, and the capacity to understand and consider the parties' arguments. Thus, while there is no single clause that literally says "AI is barred from serving as an arbitrator," the overall statutory design pivots around the concept of a legally competent person. An AI system cannot realistically fulfill obligations such as disclosing personal conflicts, signing an award, or being subjected to a wraking procedure as a human would. In that sense, although the Act does not contain an explicit prohibition on "AI arbitrators," it effectively prohibits them by tying the arbitrator's role to natural-person status and personal legal capacities.

Sweden

Under the Swedish Arbitration Act (Lag (1999:116) om skiljeförfarande), the statutory language implicitly assumes that an arbitrator (skiljeman) will be a human being with legal capacity, rather than an AI system. For instance, Section 7 states that "Var och en som råder över sig själv och sin egendom kan vara skiljeman" ("Anyone who has capacity over themselves and their property may serve as an arbitrator"). This phrasing strongly suggests a natural person with the requisite legal autonomy rather than a non-human entity.
The Act also repeatedly ties the arbitrator's role to personal qualities such as impartiality (opartiskhet) and independence (oberoende) (Section 8), the ability to disclose circumstances that might undermine confidence in the arbitrator's neutrality (Section 9), and a duty to sign the final award (Section 31). All these provisions presuppose that the arbitrator can make discretionary judgments, weigh fairness, and be removed from the assignment (skiljas från sitt uppdrag) on grounds related to personal conduct or conflicts of interest. A purely AI-driven system, which lacks the hallmarks of human capacity and accountability, could not fulfill such requirements. Accordingly, even though Swedish law does not explicitly state that "AI is prohibited from acting as an arbitrator," it functionally bars non-human arbitrators by defining who can serve and by imposing obligations (impartiality, personal disclosure, physical or electronic signature, and readiness to be challenged for bias) that only a human individual can meaningfully carry out.

Spain

Under the Spanish Arbitration Act (Ley 60/2003, de 23 de diciembre), several provisions implicitly treat the árbitro (arbitrator) as a human decision-maker rather than an AI system. For instance, Article 13 ("Capacidad para ser árbitro") specifies that arbitrators must be natural persons "en el pleno ejercicio de sus derechos civiles" (in the full exercise of their civil rights). This requirement clearly presupposes a human being who has legal capacity. Additionally, the Act imposes duties of independence and impartiality (Article 17) and requires arbitrators to reveal any circumstances that might raise doubts about their neutrality. The law also contemplates acceptance of the arbitrators' appointment in person (Article 16), the possibility of recusación (challenge) based on personal or professional grounds (Articles 17–18), and the signing of the award (laudo) by the arbitrator(s) (Article 37). All these duties imply discretionary judgment and accountability typical of a human arbitrator.

Italy

Under the Italian Arbitration Act (Articles 806–840 of the Codice di Procedura Civile), the text consistently assumes that an arbitro (arbitrator) is a human individual in full legal capacity. For example:

Article 812 provides that "Non può essere arbitro chi è privo […] della capacità legale di agire" ("No one lacking legal capacity to act may serve as arbitrator"). An AI system cannot meaningfully satisfy this personal requirement of legal capacity.

The Act speaks of recusal, acceptance, and disclosure of conflicts of interest (Articles 813, 815), all of which assume personal traits (e.g., independence, impartiality).

Arbitrators sign the final award (Articles 823, 824), act as public officials for certain tasks, and bear personal liability (Articles 813-bis, 813-ter).

Hence, though the law does not explicitly forbid "AI arbitrators," it effectively bars non-human arbitrators by imposing requirements linked to human legal capacity, personal judgment, and accountability. An autonomous AI could not meet these conditions without human involvement.

Other Jurisdictions

United Arab Emirates

Under Federal Law No. 6 of 2018 Concerning Arbitration in the UAE, an arbitrator must be a natural person with full legal capacity and certain other qualifications. Specifically:

Article 10(1)(a) provides that an arbitrator must not be a minor or otherwise deprived of their civil rights, indicating they must be a living individual with legal capacity.
The same article stipulates that an arbitrator cannot be a member of the board of trustees or of an executive or administrative body of the arbitral institution handling the dispute, and must not have relationships with any party that would cast doubt on his or her independence or impartiality.

Article 10(2) confirms there is no requirement of a particular nationality or gender, but it still envisions a human arbitrator who can meet the personal requirements of Article 10(1).

Article 10(3) also requires a potential arbitrator to disclose "any circumstances which are likely to cast doubts on his or her impartiality or independence," and to continue updating the parties about such circumstances throughout the proceedings. An AI application cannot fulfill that personal disclosure obligation in the sense prescribed by the Law.

Article 10 BIS (introduced by a later amendment) restates that an arbitrator must be a natural person meeting the same standards (for example, holding no conflicts and disclosing membership in relevant boards) if that person is chosen from among certain arbitration-center supervisory bodies.

Hence, although the UAE Arbitration Law (Federal Law No. 6 of 2018) does not literally declare that "AI arbitrators are prohibited," it unequivocally conditions the role of arbitrator on being a natural person with the required legal capacity and duties such as disclosure of conflicts. An autonomous AI system, by contrast, cannot fulfill the obligations that the Law imposes, whether impartiality, independence, or the capacity to sign the final award. Such requirements, taken together, effectively exclude AI from serving as the sole or true arbitrator in a UAE-seated arbitration.

Singapore

Under Singapore's International Arbitration Act (Cap. 143A, Rev. Ed. 2002) (the IAA), which incorporates the UNCITRAL Model Law on International Commercial Arbitration (with certain modifications), there is no explicit statement such as "an arbitrator must be a human being." However, the provisions of the Act and the Model Law, read as a whole, presuppose that only natural persons can serve as arbitrators in a Singapore-seated international arbitration.

A. Terminology and Structure of the Legislation

Section 2 of the IAA (Part II) defines "arbitral tribunal" to mean "a sole arbitrator or a panel of arbitrators or a permanent arbitral institution." Likewise, Article 2 of the Model Law uses "arbitral tribunal" to refer to a sole arbitrator or a panel of arbitrators. Neither the IAA nor the Model Law defines "arbitrator" as something that could be non-human, nor do they provide a mechanism for appointing non-person entities. The Act consistently describes the arbitrator as an individual who can accept appointments, disclose conflicts, sign awards, act fairly, be challenged, or be replaced, among other duties.

B. Core Provisions That Imply a Natural-Person Arbitrator

Appointment and Acceptance

Section 9A of the IAA (read with Article 11 of the Model Law) speaks of "appointing the arbitrator" and states that if the parties fail to appoint, the "appointing authority" does so. The entire scheme contemplates a named person, or individuals collectively, as the arbitral tribunal.

Article 12(1) of the Model Law requires that "[w]hen a person is approached in connection with possible appointment as an arbitrator, that person shall disclose…." The word "person," in the context of disclosing personal circumstances likely to raise doubts as to independence or impartiality, strongly suggests a natural person.
Disclosure of Potential Conflicts

Article 12 of the Model Law further requires the arbitrator to "disclose any circumstance likely to give rise to justifiable doubts as to his impartiality or independence." The capacity to evaluate personal conflicts and maintain impartial relationships is a hallmark of a human arbitrator. The arbitrator is also subject to challenge (Model Law Art. 13) if "circumstances exist that give rise to justifiable doubts as to his impartiality or independence" or if he lacks the qualifications the parties have agreed on.

Signatures, Liability, and Immunities

Sections 25 and 25A of the IAA provide that an arbitrator is immune from liability for negligence in discharging the arbitrator's duties and from civil liability unless acting in bad faith. This strongly implies that the arbitrator is a natural person, because a regime of professional negligence or personal immunity for "an arbitrator" does not rationally apply to a non-human machine. Article 31(1) of the Model Law (which has force of law under Section 3 of the IAA) states: "The award shall be made in writing and shall be signed by the arbitrator or arbitrators." Plainly, an autonomous AI does not meaningfully "sign" a final award in the sense required by law.

Procedural Powers That Depend on Human Judgment

Section 12(1) of the IAA confers on the arbitral tribunal powers such as ordering security for costs, ordering discovery, and giving directions for interrogatories. The same section clarifies that the tribunal "may adopt if it thinks fit inquisitorial processes" (Section 12(3)) and "shall decide the dispute in accordance with such rules of law as are chosen by the parties" (Section 12(5), read with Article 28 of the Model Law). These provisions presume that the arbitrator can weigh and interpret evidence, evaluate fairness, impartially manage adversarial arguments, handle procedural complexities, and supply reasons in a final award. While an AI tool might assist a human arbitrator, the notion of autonomous, human-like final judgment is not recognized anywhere in the Act.

C. Arbitrator's Duties and the Necessity of Human Capacity

Many duties that the IAA imposes on arbitrators are inherently personal and judgment-based:

Fair Hearing and Due Process. Articles 18 and 24 of the Model Law stipulate that "[t]he parties shall be treated with equality and each party shall be given a full opportunity of presenting his case," and that the arbitral tribunal shall hold hearings or manage documentary proceedings. These tasks involve high-level procedural judgments, discretionary rulings, and the balancing of fairness concerns, all indications that the law envisions a human decision-maker.

Ability to Be Challenged, Replaced, or Terminated. Articles 12–15 of the Model Law describe the procedure for challenging an arbitrator for partiality, bias, or inability to serve. This only works meaningfully if the arbitrator is a natural person susceptible to partiality.

Signing and Delivering the Award. The final step of the arbitration is thoroughly anchored in personal accountability: the written text is "signed by the arbitrator(s)" and delivered to the parties (Model Law Art. 31(1), (4)).
D. Permanent Arbitral Institutions vs. Automated "AI Arbitrators"

One might note that Section 2 of the IAA includes "a permanent arbitral institution" in the definition of "arbitral tribunal." This does not mean an institution by itself can decide as the "arbitrator." Rather, it typically refers to an arbitral body that administers the arbitration or that may act as the appointing authority. The actual day-to-day adjudication is still performed by an individual or panel of individuals. Indeed, the IAA draws a distinction between:

The arbitral institution that may oversee or administer proceedings (e.g., SIAC, ICC, LCIA).

The arbitrator(s) who actually decide the merits of the dispute.

Ethical and Due Process Considerations

The use of AI in arbitration gives rise to several ethical and due process concerns. Arbitration is founded on principles of fairness, party autonomy, and the right to be heard; introducing AI must not undermine these fundamentals. Key considerations include:

Transparency and Disclosure: One ethical question is whether arbitrators or parties should disclose their use of AI tools. Transparency can be crucial if AI outputs influence decisions. There is currently no universal rule on disclosure, and practices vary. The guiding principle is that AI should not become a "black box" in decision-making; parties must know the bases of an award to exercise their rights (such as challenging an award or understanding its reasoning). Lack of transparency could also raise enforceability issues on due process grounds. Therefore, best practice leans towards disclosure when AI materially assists in adjudication, ensuring all participants remain on an equal footing.

Bias and Fairness: AI systems can inadvertently introduce or amplify bias. Machine learning models trained on historical legal data may reflect the biases present in those data, for example through a skewed representation of outcomes or language that favors one group. In arbitration, this is problematic if an AI tool gives systematically biased suggestions (say, favoring claimants or respondents based on past award trends) or if it undervalues perspectives from certain jurisdictions or legal traditions present in the training data. The ethical duty of arbitrators to be impartial and fair extends to any tools they use. One safeguard is using diverse and representative training data; another is having humans (arbitrators or counsel) critically review AI findings rather than taking them at face value. A crude illustration of the kind of data check this implies appears after this list.

Due Process and Right to a Fair Hearing: Due process in arbitration means each party must have a fair opportunity to present its case and respond to the other side's case, and the tribunal must consider all relevant evidence. AI use can challenge this in subtle ways. There is also the concern of explainability: due process is served by reasoned awards, but if a decision were influenced by an opaque algorithm, how could that reasoning be explained? Ensuring a fair hearing might entail allowing parties to object to or question the use of certain AI-derived inputs.

Confidentiality and Privacy: Confidentiality is a hallmark of arbitration. Ethical use of AI must guard against compromising the confidentiality of arbitral proceedings and sensitive data. Many popular AI services are cloud-based or hosted by third parties, which could pose risks if confidential case information (witness statements, trade secrets, etc.) is uploaded for analysis.
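To illustrate the data-review safeguard mentioned under Bias and Fairness, here is a deliberately crude Python sketch that compares historical outcome rates across party categories before anyone relies on that data. The records and the "party_type" labels are invented for illustration; a genuine bias audit would require proper statistical testing and legal-domain review.

```python
# Illustrative sketch only: a crude disparity check on historical outcome
# data before it is used to train or calibrate an arbitration tool.
# The records below are invented; real auditing needs real statistics.
from collections import defaultdict

history = [
    {"party_type": "claimant_sme", "won": True},
    {"party_type": "claimant_sme", "won": False},
    {"party_type": "claimant_sme", "won": False},
    {"party_type": "claimant_multinational", "won": True},
    {"party_type": "claimant_multinational", "won": True},
    {"party_type": "claimant_multinational", "won": False},
]

wins = defaultdict(int)
totals = defaultdict(int)
for record in history:
    totals[record["party_type"]] += 1
    wins[record["party_type"]] += record["won"]  # True counts as 1

success_rates = {group: wins[group] / totals[group] for group in totals}
print(success_rates)
# Large gaps between otherwise comparable groups are a flag to scrutinize
# the data (or any tool trained on it) before it influences a proceeding.
```

A gap in raw success rates does not by itself prove bias (the underlying cases may differ on the merits), but it marks exactly the kind of pattern a human reviewer should investigate before the data shapes any suggestion put before a tribunal.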
AI Use Cases and Real-World Examples

Despite the concerns, AI is already making tangible contributions in arbitration practice. A number of use cases and real-world examples demonstrate how AI tools are being applied by arbitral institutions, arbitrators, and parties:

Document Review and E-Discovery: Arbitration cases, especially international commercial and investment disputes, often involve massive document productions. AI-driven document review platforms (leveraging natural language processing and machine learning) can significantly speed up this process. Tools like Relativity and Brainspace use AI to sort and search document collections, identifying relevant materials and patterns without exhaustive human review.

Language Translation and Interpretation: In multilingual arbitrations, AI has proven valuable for translation. Machine translation systems (from general ones like Google Translate to specialized legal translation engines) can quickly translate submissions, evidence, or awards. Moreover, AI is being used for real-time interpretation during hearings: recent advances have enabled live speech translation and transcription.

Legal Research and Case Analytics: AI assists lawyers in researching legal authorities and prior decisions relevant to an arbitration. Platforms like CoCounsel (by Casetext) and others integrate AI to answer legal questions and find citations from vast databases. Products like Lex Machina and Solomonic (originally designed for litigation analytics) are being applied to arbitration data to glean insights on how particular arbitrators tend to rule, how long certain types of cases take, or what damages are typically awarded in certain industries.

Arbitrator Selection and Conflict Checking: Choosing the right arbitrator is crucial, and AI is helping make this process more data-driven. Traditional selection relied on reputation and word of mouth, but AI-based arbitrator profiles are now available. Additionally, AI is used for conflict-of-interest checks: law firms use AI databases to quickly check whether a prospective arbitrator or expert has any disclosed connections to the entities involved, by scanning CVs, prior cases, and public records. This supports compliance with disclosure obligations and helps avoid later challenges.

Case Management and Procedural Efficiency: Arbitral institutions have begun integrating AI to streamline case administration. Furthermore, during proceedings, AI chatbots can answer parties' routine questions about rules or schedules, easing the administrative burden. Another emerging idea is AI prediction for settlement: parties might use outcome-prediction AI to decide whether to settle early. For instance, an AI might predict an 80% chance of liability and a damages range, prompting a party to offer or accept settlement rather than proceed (the sketch after this list illustrates the underlying arithmetic). This approach was reportedly used in a few insurance arbitrations to settle before award, with both sides agreeing to consult an algorithm's evaluation as one data point in negotiations.

These examples show that AI is not just theoretical in arbitration; it is actively being used to augment human work, in ways both big and small.
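The settlement-prediction example above boils down to simple expected-value arithmetic. The sketch below works through the calculation with hypothetical inputs (an assumed 80% liability probability, an assumed damages range, and an assumed cost figure); none of these numbers come from a real tool or case.

```python
# Illustrative sketch only: the expected-value arithmetic behind the
# settlement example above. Every figure is a hypothetical input, not
# the output of any real prediction tool.
p_liability = 0.80
damages_low, damages_high = 2_000_000, 3_000_000

expected_damages = (damages_low + damages_high) / 2   # midpoint of the assumed range: 2.5M
expected_award = p_liability * expected_damages       # 0.80 * 2.5M = 2.0M
remaining_costs = 250_000                             # assumed cost of proceeding to award

# A respondent weighing a settlement offer against proceeding to award:
print(f"Expected exposure if the case proceeds: {expected_award + remaining_costs:,.0f}")
# On these assumptions, a settlement at or below roughly 2.25M looks
# attractive, before risk aversion and the spread of outcomes are considered.
```

The value of such a calculation in negotiation is as one data point among many, which is precisely how the insurance-arbitration example above describes the parties using it.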
Arbitration Institutions Implementing AI Initiatives

ICC (International Chamber of Commerce)

The "ICC Overarching Narrative on Artificial Intelligence" outlines the ICC's perspective on harnessing AI responsibly, stressing fairness, transparency, accountability, and inclusive growth. It promotes risk-based regulation that fosters innovation without stifling competition, encourages collaboration between businesses and policymakers, and calls for global, harmonized approaches that safeguard privacy, data security, and human rights. The ICC highlights the importance of fostering trust through robust governance, empowering SMEs and emerging markets with accessible AI tools, and ensuring AI's benefits reach all sectors of society. While it does not specifically govern arbitration, the Narrative's focus on ethical and transparent AI use offers guiding principles that align with the ICC's broader commitment to integrity and due process.

AAA-ICDR (American Arbitration Association & International Centre for Dispute Resolution)

In 2023 the AAA-ICDR published ethical principles for AI in ADR, affirming its commitment to thoughtful integration of AI in dispute resolution. It has since deployed AI-driven tools, for example an AI-powered transcription service to produce hearing transcripts faster and more affordably, and an "AAAi Panelist Search" generative AI system to help identify suitable arbitrators and mediators from its roster. These initiatives aim to boost efficiency while upholding due process and data security.

JAMS (USA)

JAMS introduced the ADR industry's first specialized AI dispute arbitration rules in 2024, providing a framework tailored to cases involving AI systems. Later in 2024 it launched "JAMS Next," an initiative integrating AI into its workflow. JAMS Next includes AI-assisted transcription (real-time court reporting with AI for instant rough drafts and faster final transcripts) and an AI-driven neutral search on its website to quickly find arbitrators and mediators via natural language queries.

SCC (Arbitration Institute of the Stockholm Chamber of Commerce)

In October 2024, the SCC released a "Guide to the use of artificial intelligence in cases administered under the SCC rules." This guideline advises arbitration participants (especially tribunals) on responsible AI use. Key points include safeguarding confidentiality, ensuring AI does not diminish decision quality, promoting transparency (tribunals are encouraged to disclose any AI they use), and prohibiting any delegation of decision-making to AI.

CIETAC (China International Economic and Trade Arbitration Commission)

CIETAC has integrated AI and digital technologies into its case administration. By 2024 it had implemented a one-stop online dispute resolution platform with e-filing, e-evidence exchange, an arbitrator hub, and case management via a dedicated app, enabling fully paperless proceedings. CIETAC reports that it has achieved intelligent document processing, including full digital scanning and automated identification of arbitration documents, plus a system for detecting related cases. CIETAC's annual report states that it is accelerating "the application of artificial intelligence in arbitration" to enhance efficiency and service quality.

Silicon Valley Arbitration & Mediation Center

The SVAMC Guidelines on the Use of Artificial Intelligence in Arbitration (1st Edition, 2024) present a comprehensive, principle-based framework designed to promote the responsible and informed use of AI in both domestic and international arbitration proceedings. These guidelines aim to ensure fairness, transparency, and efficiency by providing clear responsibilities for all arbitration participants, including arbitrators, parties, and counsel.
Key provisions include the obligation to understand AI tools' capabilities and limitations, safeguard confidentiality, disclose AI usage when appropriate, and refrain from delegating decision-making authority to AI. Arbitrators are specifically reminded to maintain independent judgment and uphold due process. The guidelines also address risks such as AI hallucinations, bias, and misuse in evidence creation. A model clause is provided for integrating the guidelines into procedural orders, and the document is designed to evolve alongside technological advancements.

The information provided is not legal, tax, investment, or accounting advice and should not be used as such. It is for discussion purposes only. Seek guidance from your own legal counsel and advisors on any matters. The views presented are those of the author and not any other individual or organization. Some parts of the text may be automatically generated. The author of this material makes no guarantees or warranties about the accuracy or completeness of the information.
