- EU Retail Investment Strategy: Provisional Deal on Value-for-Money, Inducements, and Retail Disclosures
What operational and compliance changes should EU investment firms, insurers, and intermediaries plan for after the 18 December 2025 provisional political agreement between the European Parliament and the Council on the Retail Investment Strategy (RIS) package, covering retail advice and suitability, “value for money” checks, inducement controls, third‑party marketing (including financial influencers), and PRIIPs key information document disclosures?

The 18 December 2025 deal is a provisional trilogue agreement. It requires formal approval by the European Parliament and the Council before the amending directive and the amending PRIIPs regulation are adopted and published. The Council has indicated that technical work will continue to finalise the legal texts in early 2026 and that key compliance dates will run from Official Journal publication, with staged application periods.

In this context, “retail clients” refers to non-professional customers under MiFID II and comparable customer categories under insurance distribution rules. “Inducements” refers to fees, commissions, or monetary or non‑monetary benefits received in connection with an investment service. “Finfluencers” refers to third parties paid or incentivised to promote investment products through social media. “KID” refers to the PRIIPs key information document.

Status and timing

The package takes the form of targeted amendments to MiFID II, the Insurance Distribution Directive, UCITS, AIFMD, and Solvency II, plus an amendment to the PRIIPs Regulation. The Council has described a phased timeline: member states transpose the directive amendments 24 months after publication; the general application date is 30 months after publication; PRIIPs changes apply 18 months after publication. Until the package is adopted and applicable, existing EU and national rules continue to apply. Press release summaries are not the final legal text; firms will need to review the adopted acts and the related supervisory templates once available.

Advice and suitability

Negotiators agreed that financial and insurance advisers must recommend products and services that are suitable for the client’s needs. Suitability is assessed using proportionate, necessary information, including the client’s knowledge and experience, financial situation, ability to bear full or partial losses, investment needs and objectives, and risk tolerance. The Council text also signals a simplified suitability route for recommendations limited to diversified, non‑complex, cost‑efficient instruments. In that case, advisers would no longer need to assess the client’s investment knowledge and experience as part of the suitability assessment.

Value for money and product comparison

A central change is a “value for money” gate at product design and distribution. The Council text states that retail investment firms will be required to identify and quantify all costs and charges borne by investors and assess whether total costs and charges are justified and proportionate. Where costs and charges are not justified and proportionate, products should not be approved for sale to retail investors. To support supervisory scrutiny and product comparison, the Parliament press release states that ESMA and EIOPA are expected to develop supervisory benchmarks, and that investment firms would assess their products against peer groups with a representative number of similar instruments.
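The press releases describe this gate only at the level of principle. Purely as an illustration of the mechanics, the sketch below checks a product's total cost ratio against a hypothetical peer-group benchmark; the cost measure, the median-based comparison, and the tolerance threshold are all assumptions, not anything prescribed by the RIS texts.

```python
# Illustrative only: a simplified "value for money" product-approval gate.
# The RIS package does not prescribe this logic; the cost measure, the
# peer-group benchmark, and the tolerance are hypothetical choices.
from statistics import median

def total_cost_ratio(annual_charges: float, invested_amount: float) -> float:
    """All quantified costs and charges borne by the investor, per year,
    expressed as a fraction of the amount invested."""
    return annual_charges / invested_amount

def may_approve_for_retail(product_tcr: float,
                           peer_group_tcrs: list[float],
                           tolerance: float = 0.10) -> bool:
    """Approve only if the product's costs do not exceed the peer-group
    median by more than the tolerance, as a rough proxy for costs being
    'justified and proportionate'. A real assessment would also weigh
    performance and non-financial benefits, not costs alone."""
    return product_tcr <= median(peer_group_tcrs) * (1 + tolerance)
```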
The Council text also refers to agreed standards for peer groupings and supervisory benchmarks, including a period during which national supervisory benchmarks may be used for insurance-based investment products. Retail clients should also be able to compare costs, charges, performance, and non‑financial benefits across products. The Commission press release also refers to standardised cost terminology, a yearly summary of portfolio performance for retail clients, and a feasibility study for a pan‑European tool that would support product comparison.

Inducements and conflicts of interest

The deal strengthens rules on inducements to reduce conflicts of interest in advice and distribution. The Parliament press release describes a new inducement test aimed at client best interests and clearer separation of inducements from other fees. The Council press release states that, where inducements are permitted, firms and advisers must act honestly, fairly, and professionally in the client’s best interests, the inducement must deliver a tangible client benefit, and the inducement cost must be disclosed clearly and separately from other charges. Member states remain free to introduce national inducement bans.

Marketing communications and finfluencers

The package places increased weight on retail financial education measures at member state level and on controls for online marketing and promotion. The Commission press release states that financial intermediaries are responsible for marketing communications, including communications disseminated via social media, celebrities, or other paid third parties. The Parliament press release states that, where firms use finfluencers to promote financial products or contracts, firms should have a written agreement with the finfluencer, hold the finfluencer’s contact details, and exercise control over the finfluencer’s promotional activity.

PRIIPs KIDs and disclosure updates

Negotiators agreed changes to the PRIIPs KID. The Parliament press release states that the KID should provide forward‑looking performance scenarios based on realistic data and plausible assumptions. The Council press release states that updated KID templates will be developed by the relevant European supervisory authorities and that KID information on costs, risk, and expected returns will be made more visible and accessible. It also states that, 30 months after entry into force of the new PRIIPs rules, KID information will have to be provided in a machine‑readable format to support comparison.

Professional client opt‑up

The Council press release indicates changes to the criteria for retail investors to be treated as professional clients, with two out of three criteria required (a sketch of this test follows below):

- Transaction activity: 15 significant transactions over the last three years, 30 transactions over the previous year, or 10 transactions over €30,000 in unlisted companies over the last five years.
- Portfolio size: average portfolio value above €250,000 over the last three years.
- Experience or training: at least one year of relevant work in the financial sector, or proof of education or training and an ability to evaluate risk. The training/education alternative may not be combined with the portfolio criterion.

The Council text also indicates that certain managers and directors of financial firms subject to fit‑and‑proper assessment, and certain AIFM employees with relevant fund knowledge and experience, will be treated as professional clients.
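To pin down the two-out-of-three mechanics and the restriction on combining the training/education alternative with the portfolio criterion, here is a minimal sketch. It encodes only the press-release summary above; the function and parameter names are illustrative and not drawn from the legal text.

```python
# Minimal sketch of the professional-client "opt-up" test as summarised
# in the Council press release. Thresholds and names are illustrative.

def meets_transaction_criterion(significant_tx_3y: int, tx_1y: int,
                                large_unlisted_tx_5y: int) -> bool:
    """15 significant transactions over 3 years, 30 over the previous
    year, or 10 transactions over EUR 30,000 in unlisted companies
    over the last 5 years."""
    return significant_tx_3y >= 15 or tx_1y >= 30 or large_unlisted_tx_5y >= 10

def may_opt_up(meets_transactions: bool,
               meets_portfolio: bool,          # avg portfolio > EUR 250,000 over 3 years
               has_sector_work_experience: bool,
               has_training_or_education: bool) -> bool:
    """Two of three criteria must be met; the training/education route
    may not be paired with the portfolio criterion."""
    meets_experience = has_sector_work_experience or has_training_or_education
    valid_pairs = (
        meets_transactions and meets_portfolio,
        meets_transactions and meets_experience,
        # portfolio + experience counts only where the experience limb
        # rests on sector work, not on training/education alone
        meets_portfolio and has_sector_work_experience,
    )
    return any(valid_pairs)
```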
Implementation planning

Firms distributing products to EU retail clients should map the package to product design, pricing, cost/charge data, disclosures, advice workflows, inducement policies, and third‑party marketing arrangements. This includes assessing reliance on peer‑group comparisons and supervisory benchmarks for “value for money” and revisiting contracts and controls used for influencer-led promotion.

Our team at Prokopiev Law Group advises on EU retail investment and distribution rules, including readiness planning for the RIS package across investment, insurance, and digital channels. Contact us to discuss scoping and implementation planning. Examples include: MiCA, CASP licensing, token structuring, securities analysis, AML/KYC, sanctions compliance, financial promotions, stablecoins, staking, DeFi, RWAs, custody.

The information provided is not legal, tax, investment, or accounting advice and should not be used as such. It is for discussion purposes only. Seek guidance from your own legal counsel and advisors on any matters. The views presented are those of the author and not any other individual or organization. Some parts of the text may be automatically generated. The author of this material makes no guarantees or warranties about the accuracy or completeness of the information.
- Model Weights, Watermarks, and Memorisation: UK, German, and US Primary-Source Signals on Generative AI and IP
Generative AI providers are now facing materially different answers to a deceptively simple question: when does a model “contain” protected content, and who is responsible when that content shows up in outputs? In late 2025, the High Court of England and Wales (Chancery Division) and the Regional Court of Munich I each issued detailed rulings tied to real-world model behavior, while the USPTO’s Appeals Review Panel issued a precedential patent-eligibility decision that is directly relevant to ML training inventions.

In the UK, Getty Images (US) Inc v Stability AI Ltd turned on a narrow set of claims that survived to trial. The court recorded that Getty’s “Outputs Claim” and “Training and Development Claims” were abandoned, leaving (i) trade mark infringement allegations tied to “synthetic” watermarks in generated images, and (ii) a novel secondary copyright theory aimed at downloads/importation of the Stable Diffusion model itself. The result was a mixed trade mark win for Getty on limited evidence, paired with a clear rejection of the secondary copyright importation theory for the model weights.

On the trade mark side, the court treated infringement as evidence-driven and version-specific. Getty established infringement in respect of certain iStock watermarks under section 10(1) and section 10(2) of the Trade Marks Act 1994, but the findings were expressly confined to specific example outputs and did not extend to a broader conclusion about scale. The court dismissed Getty’s section 10(3) dilution claim, and it declined to address passing off on the way the case had been advanced. It also found no evidence supporting section 10(1) infringement for Getty Images watermarks, and it found no UK evidence of users generating Getty/iStock watermarks for certain later model versions referenced in the pleadings.

The UK decision’s most consequential point for genAI developers was the court’s treatment of “infringing copy” in the secondary infringement provisions of the Copyright, Designs and Patents Act 1988. The court accepted that an “article” can be intangible, then held that an “infringing copy” must still be a “copy” in the statutory sense. On the facts found, the Stable Diffusion model in its final iteration did not store the claimant’s images, and the model weights were described as the product of learned patterns and features from training rather than stored copies. That conclusion disposed of the importation/distribution theory under sections 22 and 23 (as pleaded) because the model was not an “infringing copy,” even if training involved storing copies elsewhere during the training process.

Germany has moved in the opposite direction on closely related concepts. In a case brought by GEMA against the operators of a GPT-based chatbot (models referenced in the judgment include “4” and “4o”), the Regional Court of Munich I (LG München I) treated memorisation as the hinge point. The court found that the disputed song lyrics were “reproducibly” embodied in the model parameters, and that the “work” could be made perceptible through simple prompts, which was sufficient for a reproduction analysis under German law’s technology-neutral understanding of fixation and perceptibility. The court granted injunctive relief against reproducing the lyrics “in” the language models and against making the lyrics publicly available via outputs, along with information and damages findings in principle.

The Munich court also drew a sharp line on text-and-data-mining.
It accepted that the German text and data mining exception (UrhG § 44b, implementing DSM Directive Article 4) can cover preparatory reproduction acts involved in assembling a training corpus. It then held that the memorisation of protected works within the model during training was not merely “text and data mining” and was not covered by § 44b, because the relevant reproductions did not serve the data analysis purpose that justifies the exception. In the court’s framing, the exception cannot be stretched to legitimise reproductions that reach beyond analysis and directly interfere with rightsholders’ exploitation interests.

The Munich decision is also operationally important on attribution of conduct. The court treated the model operators as the responsible actors for reproductions caused by outputs where the prompts were “simple” and did not meaningfully dictate content. It held that the operators retained “Tatherrschaft” (control over the act) in that scenario, rather than shifting responsibility to the user as “prompter.” This matters for consumer-facing chat products, because it ties liability to controllable system choices: training data selection, training execution, and model architecture.

In the United States, the USPTO’s Appeals Review Panel (ARP) moved in a different direction, addressing patent eligibility of AI training inventions rather than IP infringement risk. In Ex parte Desjardins, the ARP treated the claims as reciting an abstract idea at Step 2A, Prong One (mathematical calculation), then held that the claims integrated that abstract idea into a practical application at Step 2A, Prong Two, because the claim limitations were tied to technical improvements in continual learning and model efficiency: reducing storage requirements and preserving performance across sequential training, addressing “catastrophic forgetting.” The ARP vacated the Board’s § 101 new ground of rejection (without disturbing other aspects of the Board’s prior decisions). The USPTO then issued an advance notice of change to the MPEP to reflect the decision’s implications for examination practice.

For product and legal teams operating across these jurisdictions, the combined signal is clear: “weights versus copies” has no universal answer, and memorisation behavior is now a litigation fact, not a purely academic risk. UK exposure in this set of facts centred on trade mark and consumer perception issues tied to output artifacts (watermarks), while Germany treated the same category of model behavior (verbatim or near-verbatim retrieval from training data) as a reproduction that can occur “in the model,” paired with operator responsibility for outputs triggered by minimal prompts. US developments, at least at the USPTO, are simultaneously encouraging applicants to frame ML training inventions in terms of concrete technical improvements rather than “math in the abstract.”

Practical mitigations that map to these rulings are not exotic. They include disciplined dataset provenance and opt-out handling, memorisation testing designed to detect verbatim recall, output controls for protected-corpus requests (lyrics, long passages), and product decisions that reduce the probability of generating confusing trade marks or brand-identifying artifacts. Where model weights are distributed, teams should also evaluate whether the legal theory in a given forum treats downstream distribution as distribution of a “copy,” and what evidence will be used to prove (or disprove) storage/containment of protected works.
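Since memorisation behaviour is now a litigation fact, teams may want a repeatable probe for verbatim recall. The sketch below shows one simple approach under stated assumptions: the `generate(prompt)` callable wrapping the model under test and the corpus of reference texts are hypothetical, and real test suites would also vary prompt length, sampling settings, and near-verbatim matching.

```python
# Minimal sketch of a verbatim-recall probe: feed the opening of a work
# and measure word-for-word overlap between the model's continuation and
# the true continuation. `generate` is an assumed wrapper, not a real API.

def longest_common_prefix_tokens(a: list[str], b: list[str]) -> int:
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def verbatim_recall_score(generate, text: str, prompt_tokens: int = 50) -> float:
    """Fraction of the true continuation the model reproduces verbatim
    from the start, given the first `prompt_tokens` words as a prompt."""
    tokens = text.split()
    prompt = " ".join(tokens[:prompt_tokens])
    reference = tokens[prompt_tokens:]
    if not reference:
        return 0.0
    continuation = generate(prompt).split()
    return longest_common_prefix_tokens(continuation, reference) / len(reference)
```

A score near 1.0 on any reference text would indicate verbatim recall of the kind the Munich court treated as a reproduction occurring "in" the model.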
If you would like to discuss how these developments affect model training, deployment, licensing posture, and product controls, our team at Prokopiev Law Group can advise on both AI/content risk and adjacent digital-asset matters. Examples include: tokenised IP, NFTs, DeFi, stablecoins, staking, DAOs, RWAs, custody, MiCA compliance, VASP licensing, AML/KYC.

Disclaimer: This document is for informational purposes only and does not constitute legal advice. The information provided is based on publicly available sources and may not reflect the latest legal developments. Readers should seek professional legal counsel before acting on any information contained in this document. Some parts of the text may be automatically generated. The views presented are those of the author and not any other individual or organization.
- AI in Arbitration: Frameworks, Applications, and Challenges
Artificial Intelligence (AI) is being integrated into arbitration as a tool to enhance efficiency and decision-making. In broad terms, AI refers to computer systems capable of tasks that typically require human intelligence, such as learning, pattern recognition, and natural language processing. In arbitration, AI’s role so far has been largely assistive – helping arbitrators and parties manage complex information and streamline procedures. For example, AI-driven software can rapidly review and analyze documents, searching exhibits or transcripts for relevant facts far faster than manual methods. Machine learning algorithms can detect inconsistencies across witness testimonies or summarize lengthy briefs into key points. Generative AI tools (like large language models) are also being used to draft texts – from procedural orders to initial award templates – based on user-provided inputs.

The potential applications of AI in arbitration extend to nearly every stage of the process. AI systems can assist in legal research, sifting through case law and past awards to identify relevant precedents or even predict probable outcomes based on historical patterns. They can facilitate case management by automating routine administrative tasks, scheduling, and communications.

We are proud to present an analysis of AI in arbitration as it stands today. If you are exploring AI’s role in arbitration, Prokopiev Law Group can help. We pair seasoned litigators with leading-edge AI resources to streamline complex cases and help you navigate this evolving landscape. If your situation calls for additional expertise, we are equally prepared to connect you with the right partners.

Legal and Regulatory Frameworks

The incorporation of AI into arbitration raises questions about how existing laws and regulations apply to its use. Globally, no uniform or comprehensive legal regime yet governs AI in arbitration, but several jurisdictions have started to address the intersection of AI and dispute resolution through legislation, regulations, or policy guidelines.

Convention on the Recognition and Enforcement of Foreign Arbitral Awards (New York Convention)

A. Overview of the Convention’s Scope

Article I(1) states that the Convention applies to the “recognition and enforcement of arbitral awards made in the territory of a State other than the State where the recognition and enforcement are sought,” arising out of disputes between “persons, whether physical or legal.” The Convention does not define “arbitrator” explicitly; rather, it references “arbitral awards … made by arbitrators appointed for each case or … permanent arbitral bodies.” There is no mention of any possibility that the arbitrator could be non-human or an AI entity.

B. Key Provisions Envision Human Arbitrators

Article II: speaks of “an agreement in writing under which the parties undertake to submit to arbitration all or any differences….” The Convention assumes an “arbitration agreement” with standard rights to appoint or challenge “the arbitrator.”

Articles III–V: concern the recognition and enforcement of awards and set out grounds upon which enforcement may be refused.
For instance, Article V(1)(b) refers to a party “not [being] given proper notice of the appointment of the arbitrator,” or “otherwise unable to present his case.”

Article V(1)(d): allows refusal of enforcement if “the composition of the arbitral authority … was not in accordance with the agreement of the parties….” The reference to an “arbitral authority,” “arbitrator,” or “composition” suggests a set of identifiable, human arbitrators who can be “composed” incorrectly or fail to abide by required qualifications.

Article I(2): the “term ‘arbitral awards’ shall include not only awards made by arbitrators appointed for each case but also those made by permanent arbitral bodies….” Even in the latter scenario, the Convention contemplates a recognized body of human arbitrators (e.g. an institution with a roster of living arbitrators), not an automated algorithm.

C. The Convention’s Enforcement Regime Presupposes Human Judgment

The entire enforcement structure assumes that an award is recognized only if it meets due-process requirements such as giving a party notice, enabling them to present their case, and ensuring the arbitrator or arbitral body was validly composed. For instance, Article V(1)(a) contemplates that each party to the arbitration agreement must have “capacity,” and Article V(1)(b) contemplates that the party was able to present its case to an impartial decision-maker. An AI system cannot easily satisfy these due-process standards in the sense of being challenged, replaced, or tested for partiality or conflict of interest.

D. “Permanent Arbitral Bodies” Do Not Imply Autonomous AI

While Article I(2) provides that an arbitral award can be made by “permanent arbitral bodies,” this does not open the door to a fully autonomous AI deciding the merits. A “permanent arbitral body” is typically an arbitral institution (like the ICC Court or an arbitral chamber) with rosters of living arbitrators. Nowhere does the Convention recognize a non-human decision-maker substituting for arbitrators themselves.

UNCITRAL Model Law on International Commercial Arbitration

A. Terminology and Structure

Article 2(b) of the Model Law defines “arbitral tribunal” as “a sole arbitrator or a panel of arbitrators.” Article 10 refers to determining “the number of arbitrators,” “one” or “three,” etc., which in ordinary usage and practice means one or more individual persons. Article 11 lays out a procedure for appointing arbitrators and handling their challenge (Articles 13, 14), and so on, plainly assuming a person.

B. Core Provisions That Imply a Human Arbitrator

Article 11 (and subsequent articles on challenge, removal, or replacement of arbitrators) revolves around verifying personal traits, such as independence, impartiality, and conflicts of interest. For example, Article 12(1) requires an arbitrator, upon appointment, to “disclose any circumstances likely to give rise to justifiable doubts as to his impartiality or independence.” This is obviously oriented to a natural person; an AI system cannot meaningfully “disclose” personal conflicts.

Article 31(1) demands that “The award shall be made in writing and shall be signed by the arbitrator or arbitrators.” While in practice a tribunal can sign electronically, the point is that an identifiable, accountable person signs the award. A machine cannot undertake the personal act of signing or be held responsible.
Article 19 affirms the freedom of the parties to determine procedure, but absent party agreement, the tribunal “may conduct the arbitration in such manner as it considers appropriate.” This includes evaluating evidence, hearing witnesses, and ensuring fundamental fairness (Articles 18, 24). That discretionary, human-like judgment is not accounted for if the “tribunal” were simply an AI tool with no human oversight.

C. Arbitrator’s Duties Presuppose Personal Judgment

Many of the Model Law’s articles require the arbitrator to exercise personal discretion and to do so impartially:

- Article 18: “The parties shall be treated with equality and each party shall be given a full opportunity of presenting his case.” Human arbitrators are responsible for ensuring this fundamental right.
- Article 24: the tribunal hears the parties, manages documents, questions witnesses, etc.
- Article 26: the tribunal may appoint experts and question them.
- Article 17 (and especially the 2006 amendments in Chapter IV A) requires the arbitrator to assess whether an “interim measure” is warranted, including the “harm likely to result if the measure is not granted.”

These duties reflect a legal expectation of personal capacity for judgment, integral to the role of “arbitrator” as recognized by the Model Law.

United States

The United States has no specific federal statute or set of arbitration rules explicitly regulating the use of AI in arbitral proceedings. The Federal Arbitration Act (“FAA”), first enacted in 1925 (now codified at 9 U.S.C. §§ 1–16 and supplemented by Chapters 2–4), was drafted with human decision-makers in mind. Indeed, its provisions refer to “the arbitrators” or “either of them” in personal terms. Fully autonomous AI “arbitrators” were obviously not contemplated in 1925.

Nonetheless, the FAA imposes no direct ban on an arbitrator’s use of technology. Under U.S. law, arbitration is fundamentally a matter of contract. If both parties consent, the arbitrator’s latitude to employ (or even to be) some form of technology is generally broad. So long as the parties’ agreement to arbitrate does not violate any other controlling principle (e.g., unconscionability, public policy), it will likely be enforceable.

AI and the “Essence” of Arbitration Under the FAA

The threshold issue is not whether an arbitrator may use AI, but whether AI use undermines the essence of arbitration under the FAA. The parties’ arbitration agreement—and the requirement that the arbitrator “ultimately decide” the matter—are central. Under 9 U.S.C. § 10(a), a party may move to vacate an award if “the arbitrators exceeded their powers,” or if there was “evident partiality” or “corruption.” In theory, if AI fully supplants the human arbitrator and creates doubt about the award’s impartiality or the arbitrator’s independent judgment, a court could be asked to vacate on those grounds.

A. Replacing the Arbitrator Entirely

If AI replaces the arbitrator (with minimal or no human oversight), courts might question whether a non-human “arbitrator” is legally competent to issue an “award.” Under the FAA, the arbitrator’s written award is crucial (9 U.S.C. §§ 9–13). If the AI cannot satisfy minimal procedural requirements—like issuing a valid award or being sworn to hear testimony—or raises questions about “evident partiality,” a reviewing court could find a basis to vacate (9 U.S.C. § 10(a)).
If an AI system controls the proceeding such that the human arbitrator exercises no true discretion, that might mean the award was not genuinely issued by the arbitrator—risking vacatur under 9 U.S.C. § 10(a)(4) for “imperfectly executed” powers.

B. Public Policy Concerns

An all-AI “award” that lacks a human hallmark of neutrality could, in a hypothetical scenario, be challenged under public policy.

II. Potential Legal Challenges When AI Is Used in Arbitration

A. Due Process and Fair Hearing

Right to present one’s case (9 U.S.C. § 10(a)(3)): both parties must have the chance to be heard and present evidence. If AI inadvertently discards or downplays material evidence, and the arbitrator then fails to consider it, a party could allege denial of a fair hearing.

Transparency: while arbitrators are not generally obliged to disclose their internal deliberations, an arbitrator’s undisclosed use of AI could raise due process issues if it introduces an unvetted analysis. If a losing party discovers the award rested on an AI-driven legal theory not argued by either side, the party could claim it had no opportunity to rebut it.

“Undue means” (9 U.S.C. § 10(a)(1)): traditionally, this refers to fraudulent or improper party conduct. Still, a creative argument might be that reliance on AI—trained on unknown data—without informing both parties is “undue means.” If the arbitrator’s decision relies on undisclosed AI, a party could argue it was effectively ambushed.

B. Algorithmic Bias and Fairness of Outcomes

Bias in AI decision-making: AI tools can inadvertently incorporate biases if trained on skewed data. This can undercut the neutrality required of an arbitrator. If an AI influences an award—for example, a damages calculator that systematically undervalues certain claims—a party might allege it introduced a biased element into the arbitration process.

Challenge via “evident partiality” (9 U.S.C. § 10(a)(2)): if an arbitrator relies on an AI known (or discoverable) to be biased, a losing party might argue constructive partiality. A court’s review is narrow, but extreme or obvious bias could support vacatur.

III. FAA Vacatur or Modification of AI-Assisted Awards

A. Exceeding Powers or Improper Delegation (9 U.S.C. § 10(a)(4))

An award is vulnerable if the arbitrator effectively delegates the decision to AI and merely rubber-stamps its output. Parties choose a human neutral—not a machine—and can argue the arbitrator “exceeded [their] powers” by failing to personally render judgment.

B. Procedural Misconduct and Prejudice (9 U.S.C. § 10(a)(3))

Using AI might lead to misconduct if it pulls in information outside the record or curtails a party’s presentation of evidence. Any ex parte data-gathering (even by AI) can be challenged. Courts might find “misbehavior” if parties had no chance to confront AI-derived theories.

C. Narrow Scope of Review

Judicial review under the FAA is strictly limited (9 U.S.C. §§ 10, 11). Simple factual or legal errors—even if AI-related—rarely suffice for vacatur. A challenger must show the AI involvement triggered a recognized statutory ground (e.g., refusing to hear pertinent evidence or actual bias). Courts typically confirm awards unless there is a clear denial of fundamental fairness.

D. Modification of Awards (9 U.S.C. § 11)

If AI introduced a clear numerical error or a clerical-type mistake in the award, courts may modify or correct rather than vacate. Such errors include “evident material miscalculation” (§ 11(a)) or defects in form not affecting the merits (§ 11(c)).
These are minor and straightforward fixes.

AI Bill of Rights

The White House’s Blueprint for an AI Bill of Rights (October 2022) sets forth high-level principles for the responsible design and use of automated systems. Two of its core tenets are particularly relevant: “Notice and Explanation” (transparency) and “Human Alternatives, Consideration, and Fallback” (human oversight). The Notice and Explanation principle provides that people “should know that an automated system is being used and understand how and why it contributes to outcomes that impact [them]”, with plain-language explanations of an AI system’s role. The Human Alternatives principle urges that people “should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems” caused by an automated decision.

While the AI Bill of Rights is a policy guidance document (not a binding law), it reflects a federal push for algorithmic transparency, accountability, and human oversight in AI deployments. These values can certainly inform arbitration practice. For instance, if an arbitrator or arbitration institution chooses to utilize AI in case management or decision-making, adhering to these principles – by being transparent about the AI’s use and ensuring a human arbitrator remains in ultimate control – would be consistent with emerging best practices. We already see movement in this direction: industry guidelines under development (e.g. the Silicon Valley Arbitration & Mediation Center’s draft “Guidelines on the Use of AI in Arbitration”) emphasize disclosure of AI use and insist that arbitrators must not delegate their decision-making responsibility to an AI.

European Union AI Regulation

The AI Act (Regulation (EU) 2024/1689) lays down harmonized rules for the development, placing on the market, putting into service, and use of AI systems within the Union. It follows a risk-based approach, whereby AI systems that pose particularly high risks to safety or to fundamental rights are subject to enhanced obligations. The AI Act designates several domains in which AI systems are considered “high-risk.” Of particular relevance is Recital (61), which classifies AI systems “intended to be used by a judicial authority or on its behalf to assist a judicial authority in researching and interpreting facts and the law and in applying the law to a concrete set of facts” as high-risk. That same recital extends the classification to AI systems “intended to be used by alternative dispute resolution bodies for those purposes,” when the decisions they produce create legal effects for the parties. Consequently, if an AI system is intended to assist arbitrators in a manner akin to that for judicial authorities, and if its decisions or outputs can materially shape the arbitrators’ final binding outcome, then such an AI system comes within the “high-risk” classification.

I. Conditions Triggering High-Risk Classification for AI Arbitration

Does the AI “assist” in interpreting facts or law? According to Recital (61), the high-risk label applies when the AI system is used “to assist […] in researching and interpreting facts and the law and in applying the law to a concrete set of facts.” If an AI tool offers predictive analytics on likely case outcomes, interprets contract terms in light of relevant legal doctrine, or provides reasoned suggestions on liability, quantum of damages, or relevant procedural steps, it falls within the scope of “assisting” the arbitrator in a legally determinative task.
Does the AI system produce legal effects? The Regulation explicitly points to “the outcomes of the alternative dispute resolution proceedings [that] produce legal effects for the parties” (Recital (61)). Arbitral awards are typically binding on the parties—thus having legal effect—and often enforceable in national courts. Therefore, an AI system that guides or shapes the arbitrator’s (or arbitration panel’s) legally binding decision is presumably captured.

Exclusion of “purely ancillary” uses. Recital (61) clarifies that “purely ancillary administrative activities that do not affect the actual administration of justice in individual cases” do not trigger high-risk status. This means that if the AI is limited to scheduling hearings, anonymizing documents, transcribing proceedings, or managing routine tasks that do not influence the legal or factual determinations, it would not be considered high-risk under this Regulation. The dividing line is whether the AI’s output can materially influence the final resolution of the dispute (e.g., analyzing core evidence, recommending liability determinations, or drafting essential portions of the award).

II. Legal and Practical Implications for AI in Arbitration in the EU

When an AI tool used in arbitration is classified as high-risk, a suite of obligations from the Regulation applies. The Regulation’s relevant provisions on high-risk systems span risk management, data governance, technical documentation, transparency, human oversight, and post-market monitoring (articles and recitals throughout the Act). Below is an overview of these obligations as they would apply to AI arbitration.

A. Risk Management System (Article 9)

Providers of high-risk AI systems are required to implement a documented, continuous risk management process covering the system’s entire lifecycle. For AI arbitration, the provider of the software (i.e. the entity placing it on the market or putting it into service) must:

- Identify potential risks (including the risk of incorrect, biased, or otherwise harmful award recommendations).
- Mitigate or prevent those risks through corresponding technical or organisational measures.
- Account for reasonably foreseeable misuse (for instance, using the tool for types of disputes or jurisdictions it is not designed to handle).

B. Data Governance and Quality (Article 10)

Data sets used to train, validate, or test a high-risk AI system must:

- Be relevant, representative, and correct to the greatest extent possible.
- Undergo appropriate governance and management to reduce errors or potential biases that could lead to discriminatory decisions or outcomes in arbitration.

C. Technical Documentation, Record-Keeping, and Logging (Articles 11 and 12)

High-risk AI systems must include:

- Clear, up-to-date technical documentation, covering the model design, data sets, performance metrics, known limitations, and other key technical aspects.
- Proper record-keeping (“logging”) of the system’s operations and outcomes, enabling traceability and ex post review (e.g. in the event of challenges to an arbitral decision relying on the AI’s outputs). A minimal logging sketch follows below.
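To illustrate what logging in the spirit of Article 12 might look like in an arbitration tool, here is a minimal sketch. It is an assumption-laden illustration rather than a compliance recipe: the record fields, the hashing approach, and the JSON-lines format are hypothetical design choices, not anything mandated by the Act.

```python
# Minimal sketch of append-only event logging for an AI tool used in
# arbitration, illustrating the traceability goal described above.
# Field names and the JSON-lines format are hypothetical, not mandated.
import hashlib
import json
from datetime import datetime, timezone

def log_event(path: str, case_id: str, event_type: str,
              prompt: str, output: str, model_version: str) -> None:
    """Append one traceable record of a system operation and its outcome."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "case_id": case_id,
        "event_type": event_type,          # e.g. "summarisation", "drafting"
        "model_version": model_version,
        # hash inputs/outputs so the log can evidence what was produced
        # without duplicating confidential case material in the log itself
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```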
D. Transparency and Instructions for Use (Article 13)

Providers of high-risk AI systems must ensure sufficient transparency by:

- Supplying deployers (e.g. arbitral institutions) with instructions about how the system arrives at its recommendations, the system’s capabilities, known constraints, and safe operating conditions.
- Disclosing confidence metrics, disclaimers of reliance, warnings about potential error or bias, and any other usage guidelines that allow arbitrators to understand and properly interpret the system’s output.

E. Human Oversight (Article 14)

High-risk AI systems must be designed and developed to allow for human oversight:

- Arbitrators (or arbitral panels) must remain the ultimate decision-makers and be able to detect, override, or disregard any AI output that appears flawed or biased.
- The AI tool cannot replace the arbitrator’s judgment; rather, it should support the decision-making process in arbitration while preserving genuine human control.

F. Accuracy, Robustness, and Cybersecurity (Article 15)

Providers must ensure that high-risk AI systems:

- Achieve and maintain a level of accuracy that is appropriate in relation to the system’s intended purpose (e.g. suggesting case outcomes in arbitration).
- Are sufficiently robust and resilient against errors, manipulation, or cybersecurity threats—particularly critical for AI tools that could otherwise be hacked to produce fraudulent or manipulated arbitral results.

G. Post-Market Monitoring (Article 72)

Providers of high-risk AI systems must also:

- Monitor real-world performance once the system is deployed (i.e. used in actual arbitration proceedings).
- Take timely corrective actions if unacceptable deviations (e.g. high error rates, systemic biases) emerge in practice.

III. The Role of the Provider vs. the Arbitration Institution (Deployer)

Pursuant to Article 3(3) of Regulation (EU) 2024/1689 (“AI Act”), a provider is defined as: “...any natural or legal person, public authority, agency or other body that develops an AI system or that has an AI system developed and places that system on the market or puts it into service under its own name or trademark.”

Accordingly, where a law firm, software vendor, or specialized AI start-up develops a high-risk AI system and makes it available to arbitral institutions for the purpose of dispute resolution, that entity qualifies as the provider. Providers of high-risk AI systems must comply with the obligations set out in Articles 16–25 of the AI Act, including ensuring that the high-risk AI system meets the requirements laid down in Articles 9–15, performing or arranging the relevant conformity assessments (Article 43), and establishing post-market monitoring (Article 72).

Under Article 3(4) of the AI Act, a deployer is defined as: “...any natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity.”

In the arbitration context, the arbitral institution or the arbitration panel implementing the high-risk AI system is considered the deployer. Deployers have obligations outlined in Article 26, which include using the system in compliance with the provider’s instructions, monitoring its performance, retaining relevant records, and ensuring that human oversight is effectively exercised throughout the system’s operation (Article 26, read with Article 14).
IV. Distinguishing “High-Risk” vs. “Ancillary” AI in Arbitration

The AI Act’s operative text (specifically Article 6(2) and Annex III, point 8) classifies as high-risk those AI systems “intended to be used by a judicial authority or on their behalf to assist a judicial authority in researching and interpreting facts and the law and in applying the law to a concrete set of facts, or to be used in a similar way in alternative dispute resolution,” when the outcomes of the proceedings produce legal effects. However, the Regulation does not designate as high-risk any AI system that merely executes ancillary or administrative tasks (the Act itself uses the phrase “purely ancillary administrative activities that do not affect the actual administration of justice in individual cases” in Recital (61)). Therefore:

- High-risk arbitration AI: covered by Article 6(2) and Annex III, point 8, if the AI system materially or substantively influences the resolution of the dispute by “assisting in researching and interpreting facts and the law and in applying the law to a concrete set of facts.” This includes systems that suggest legal conclusions, draft core elements of the arbitral decision, or advise on factual or legal findings central to the final outcome.
- Not high-risk: if the AI tool is purely “ancillary” in nature — for instance, scheduling, document formatting, automated transcription, or anonymization — and does not shape the actual analysis or findings that produce legally binding effects. Such use cases are not captured by Annex III, nor do they meet the condition in Article 6(2).
- Boundary scenarios: if the AI tool nominally performs only “supporting” tasks (such as ranking evidence or recommending procedural steps) but in practice significantly guides or steers the arbitrator’s essential decisions, that usage may bring it under the scope of high-risk classification. The decisive factor is whether the system’s functioning meaningfully affects how the law and facts are ultimately applied.

Hence the distinction between “high-risk” and “not high-risk” (or ancillary) AI in arbitration aligns with the AI Act, subject to the caveat that borderline applications must be assessed in light of whether the AI’s outputs meaningfully influence or determine legally binding outcomes.
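As a reading aid only — the Act's classification turns on legal interpretation, not code — the following sketch condenses the three questions above into a single function. The boolean flags and their names are hypothetical simplifications.

```python
# Reading aid only: condenses the high-risk vs. ancillary test described
# above into boolean flags. The real Article 6(2)/Annex III assessment is
# a legal judgment; these parameter names are hypothetical simplifications.

def is_high_risk_arbitration_ai(assists_facts_or_law: bool,
                                outcome_has_legal_effects: bool,
                                purely_ancillary_admin: bool) -> bool:
    """High-risk if the system assists in researching/interpreting facts
    and law (or applying the law to facts) in proceedings whose outcomes
    produce legal effects, unless its role is purely ancillary."""
    if purely_ancillary_admin:
        return False
    return assists_facts_or_law and outcome_has_legal_effects
```

Boundary cases — a tool that nominally only "supports" but in practice steers the decision — would show up here as a disputed value for `assists_facts_or_law`, which is precisely where legal analysis, not code, does the work.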
National Arbitration Laws in Key EU Member States

Arbitration in the EU is primarily governed by the national laws of each Member State. Over the past decades, many European countries have modernized their arbitration statutes, often drawing inspiration from the UNCITRAL Model Law on International Commercial Arbitration. However, the extent of Model Law adoption and procedural frameworks varies by country.

Germany
- Primary legislation: 10th Book of the Code of Civil Procedure (ZPO), Sections 1025–1066.
- Based on UNCITRAL Model Law? Yes – adopted the Model Law (1985) with modifications; applies to both domestic and international arbitrations.
- Enforcement: arbitration-friendly courts; awards enforceable by court order, with grounds for setting aside or refusal mirroring New York Convention (NYC) defenses.

France
- Primary legislation: Code of Civil Procedure (CPC), Articles 1442–1527.
- Based on UNCITRAL Model Law? No – French law is an independent regime not based on the Model Law. Domestic arbitration is subject to more stringent rules, while international arbitration is more liberal; international arbitration is defined broadly (the dispute implicates international trade interests).
- Enforcement: few mandatory rules in international arbitration; parties and arbitrators have wide autonomy. No strict writing requirement for international arbitration agreements. Uses an exequatur process.

Netherlands
- Primary legislation: Dutch Arbitration Act (Book 4, Code of Civil Procedure).
- Based on UNCITRAL Model Law? Partial – not a verbatim Model Law adoption, but significantly inspired by it. No distinction between domestic and international cases.
- Enforcement: domestic awards are enforced like judgments after deposit with the court. Foreign awards are recognized under the NYC, as the Act’s enforcement provisions closely follow the NYC grounds.

Sweden
- Primary legislation: Swedish Arbitration Act (SAA).
- Based on UNCITRAL Model Law? Yes in substance – Sweden did not formally adopt the Model Law’s text, but the SAA contains substantive similarities. Applies equally to domestic and international arbitrations. The SAA sets certain qualifications (e.g. legal capacity, impartiality) beyond Model Law minima.
- Enforcement: Sweden is strongly pro-enforcement. As an NYC party, it enforces foreign awards under NYC terms.

Spain
- Primary legislation: Spanish Arbitration Act 60/2003 (SAA).
- Based on UNCITRAL Model Law? Yes – the 2003 Act was heavily based on the 1985 UNCITRAL Model Law.
- Enforcement: arbitral awards are enforceable as final judgments (no appeal on the merits). The High Courts of Justice hear actions to annul awards on limited grounds. Foreign awards are recognized under the NYC; Spanish courts generally uphold awards absent clear NYC Article V grounds.

Italy
- Primary legislation: Code of Civil Procedure (CCP).
- Based on UNCITRAL Model Law? No – Italian arbitration law is not directly based on the Model Law, though it shares many of its core principles.
- Enforcement: the Courts of Appeal have jurisdiction over recognition and enforcement of foreign awards (Articles 839–840 CCP). Italy applies NYC criteria for refusal of enforcement. Domestic awards are enforced after the Court of Appeal confirms there are no grounds to set aside.

AI Arbitration in Germany

Below is a high-level analysis of how AI-based arbitration might fit into the wording of the German Arbitration Act (ZPO Book 10). The analysis is somewhat speculative, since the Act does not directly address AI at all.

A. No Direct Prohibition on AI

A first observation: nowhere in Book 10 is there an explicit rule banning or limiting the use of AI for arbitral proceedings. Terms such as “Schiedsrichter” (arbitrator), “Person,” or “unabhängig und unparteiisch” (independent and impartial) consistently refer to human arbitrators, but do not categorically exclude the possibility of non-human decision-makers. The statutory text always presupposes a “person,” “he,” or “she” as an arbitrator; still, that is the default assumption in 1998–2018 legislative drafting, not necessarily a prohibition. One could argue that the repeated references to “Person” reflect an implicit normative stance that an arbitrator should be a human being—but from a strictly literal vantage, it is an open question whether an AI “system” could serve in that role.

B. Potential Impediments to AI as Sole Arbitrator

Sections 1036–1038 speak of the requirement that arbitrators disclose conflicts of interest, remain impartial, and be capable of fulfilling the arbitrator’s duties. These requirements seem conceptually bound to human qualities: “berechtigte Zweifel an ihrer Unparteilichkeit” (justifiable doubts as to their impartiality), “Umstände [...] unverzüglich offen zu legen” (to disclose circumstances without delay), “sein Amt nicht erfüllen” (unable to fulfil his mandate). One might argue an AI does not truly have “impartiality” or the capacity to “disclose” conflicts as a human might. A creative reading of these provisions might imply that only a human can exhibit these qualities, leading to an indirect barrier to AI arbitrators.
Even so, a forward-looking approach might interpret “unabhängig und unparteiisch” as a principle that can be satisfied technologically if the AI’s training, data, and algorithms meet certain transparency standards. However, the textual references to “er” (he), “Person,” or the requirement to disclose circumstances do suggest a legislative design geared toward human arbitrators. If the parties themselves voluntarily designate an AI system, a court might question whether that “appointment” meets the statutory standard of an impartial “Schiedsrichter” capable of fulfilling the mandated disclosure obligations. It is unclear how an AI would spontaneously “disclose” conflicts or handle “Ablehnungsverfahren” (challenge procedures) under §§ 1036–1037.

C. Formal Requirements Under § 1035

Under § 1035, the parties appoint (“ernennen”) the arbitrator(s). The law contemplates an appointment by name, while also allowing the appointment method to be left to the parties’ agreement. One might attempt to list an AI platform or a specialized AI arbitral entity as the “Schiedsrichter.” Then, if the parties do not dispute that appointment, presumably the process is valid. The only textual friction is in § 1035(5) (the court must ensure an independent and impartial (“unabhängigen und unparteiischen”) arbitrator). If court assistance in appointing is requested, a judge might find an AI does not meet the statutory criteria, effectively refusing to “install” it. But if the parties themselves have chosen an AI system in private, it is not impossible from a purely textual standpoint—though it is an untested scenario.

D. Procedural Rights: Notice, Hearing, and Audi Alteram Partem

Sections 1042–1048 require that each party be heard (“rechtliches Gehör”) and that the arbitrator handle evidence and ensure fairness. An AI system delivering a purely automated decision might be deemed to conflict with the personal oversight and reasoned assessment implied by these clauses. For instance, § 1042(1) states that “the parties shall be treated equally” and “each party is entitled to be heard.” A purely algorithmic system could risk due-process concerns if it lacks the human capacity to weigh “fairness” or accommodate unforeseen procedural nuances. Still, the text does not explicitly say an automaton cannot do it; rather, it insists on respect for due process. If the AI system can incorporate such procedures—ensuring parties can submit evidence, respond to each other, and have an “explainable” outcome—there is no direct textual ban.

E. Setting Aside and Public Policy

Section 1059 allows a court to set aside an award for lack of due process or if the arbitrator is not properly appointed. An AI-based award that fails to grant each party an opportunity to present their case, or that obscures the basis for the decision, might be at risk of annulment under § 1059(2)(1)(b) or (d). The courts might also strike down an AI-run arbitration under “public policy” (ordre public) if the process is deemed too opaque or not a “fair hearing.” So although no explicit clause forbids AI arbitrators, the effect could be that an AI award is challenged under §§ 1059, 1052 (decision by a collegium?), or 1060(2).

France
A. Domestic Arbitration: Article 1450 Requires a Human Arbitrator

Article 1450 (Titre I, Chapitre II) provides, in part: “La mission d’arbitre ne peut être exercée que par une personne physique jouissant du plein exercice de ses droits. Si la convention d’arbitrage désigne une personne morale, celle-ci ne dispose que du pouvoir d’organiser l’arbitrage.” (“The role of arbitrator may be exercised only by a natural person enjoying the full exercise of his or her rights. If the arbitration agreement designates a legal person, that person has only the power to organise the arbitration.”)

This is the single most direct statement in the Act that speaks to who (or what) may serve as arbitrator. It clearly states that the function of deciding the case (“la mission d’arbitre”) can only be carried out by a “personne physique” (a natural person) enjoying full civil rights. Meanwhile, a personne morale (legal entity) may be tasked only with administering or organizing the proceedings, not issuing the decision. Under domestic arbitration rules, an AI system—even if structured within a legal entity—cannot lawfully act as the actual decider, because the statute explicitly demands a natural person. This requirement amounts to an indirect but quite categorical ban on a purely machine-based arbitrator in French domestic arbitration. An AI could presumably assist a human arbitrator, but it could not alone fulfill the statutory role of rendering the award.

B. International Arbitration: No Verbatim “Personne Physique” Rule, Yet Similar Implications

For international arbitration, Article 1506 cross-references some (but not all) provisions of Title I. Article 1450 is notably not among those that automatically apply to international cases. As a result, there is no verbatim statement in the international part that “the arbitrator must be a natural person.” One might argue that, in principle, parties to an international arbitration could try to designate an AI system as their “arbitrator.”

However, the rest of the code—e.g. Articles 1456, 1457, 1458, which are incorporated by Article 1506—consistently presumes that the arbitrator is capable of disclosing conflicts, being challenged (“récusé”), or having an impediment (“un empêchement”). These obligations appear tied to qualities of a human being: impartiality, independence, the duty to reveal conflicts of interest, the possibility of abstention or resignation (“démission”), etc. They strongly suggest a living individual is expected. An AI tool cannot meaningfully “reveal circumstances” that might affect its independence; nor can it be revoked (“révoqué”) by unanimous party consent in the same sense as a human. Thus, even in international arbitration, the text strongly implies an arbitrator is a person who can fulfill these statutory duties.

In short, the literal bar in Article 1450 applies to domestic arbitration, but the spirit of the international arbitration provisions also envisions a human decision-maker. While there is no line that says “AI arbitrators are forbidden,” the repeated references to “l’arbitre” as someone who must disclose, resign, or be revoked for partiality push in the same direction. A French court would likely find that, under Articles 1456–1458 and 1506, an AI alone cannot meet the code’s requirements for an arbitrator.

C. Possible Indirect Challenges and Public Policy

Even if parties somehow attempted to sidestep the requirement of a “personne physique,” the code’s enforcement provisions would pose other obstacles:

Due process (principe de la contradiction). Articles 1464, 1465, 1510, etc. require observance of the adversarial principle (“contradiction”) and the equality of the parties (“égalité des parties”).
A purely automated arbitrator might fail to show that it afforded both sides a genuine opportunity to present arguments before a human decision-maker who can weigh them in a reasoned, flexible manner.

Setting aside or refusal of exequatur. If the AI-based award flouts the code’s requirement that the arbitrator have capacity and be impartial, the losing party can invoke Articles 1492 or 1520 (for domestic and international arbitration, respectively) to seek annulment for irregular constitution of the tribunal or for violation of fundamental procedural principles.

Manifest contrariety to public policy. Articles 1488 (domestic) and 1514 (international) require that a French court refuse exequatur if the award is “manifestly contrary to public policy.” A completely AI-run arbitral process might be deemed contrary to fundamental procedural fairness.

In each respect, the Act’s structure presumes a human tribunal with discretionary powers, an obligation to sign the award, etc. An AI alone would struggle to comply.

Netherlands

Under the Dutch Arbitration Act (Book 4 of the Dutch Code of Civil Procedure), the text presupposes that an arbitrator is a human decision-maker rather than an AI system. For instance, Article 1023 states that “Iedere handelingsbekwame natuurlijke persoon kan tot arbiter worden benoemd” (“Every legally competent natural person may be appointed as an arbitrator”). This phrasing is already suggestive: it frames the “arbiter” as a flesh‑and‑blood individual who has legal capacity.

Other provisions likewise assume that an arbitrator can perform tasks generally associated with a human decision-maker. For example, arbitrators must disclose potential conflicts of interest and can be challenged (“wraking”) if there is “gerechtvaardigde twijfel … aan zijn onpartijdigheid of onafhankelijkheid” (“justified doubt … about his impartiality or independence”). The Act also speaks of “ondertekening” (signing) of the arbitral award (Article 1057) and grants arbitrators discretionary procedural powers, such as the authority to weigh evidence, hear witnesses, and manage hearings. All these elements lean heavily on human qualities, like independence, impartiality, and the capacity to understand and consider the parties’ arguments.

Thus, while there is no single clause that literally says “AI is barred from serving as an arbitrator,” the overall statutory design pivots around the concept of a legally competent person. An AI system cannot realistically fulfill obligations such as disclosing personal conflicts, signing an award, or being subjected to a wraking procedure as a human would. In that sense, although the Act does not contain an explicit prohibition on “AI arbitrators,” it effectively prohibits them by tying the arbitrator’s role to natural‑person status and personal legal capacities.

Sweden

Under the Swedish Arbitration Act (Lag (1999:116) om skiljeförfarande), the statutory language implicitly assumes that an arbitrator (“skiljeman”) will be a human being with legal capacity, rather than an AI system. For instance, Section 7 states that “Var och en som råder över sig själv och sin egendom kan vara skiljeman” (“Anyone who has capacity over themselves and their property may serve as an arbitrator”). This phrasing strongly suggests a natural person with the requisite legal autonomy rather than a non-human entity.
The Act also repeatedly ties the arbitrator's role to personal qualities such as impartiality (opartiskhet) and independence (oberoende) (Section 8), the ability to disclose circumstances that might undermine confidence in their neutrality (Section 9), and the duty to sign the final award (Section 31). All these provisions presuppose that the arbitrator can make discretionary judgments, weigh fairness, and be removed from the mandate (skiljas från sitt uppdrag) on grounds related to personal conduct or conflicts of interest. A purely AI-driven system, which lacks the hallmarks of human capacity and accountability, could not fulfill such requirements. Accordingly, even though Swedish law does not explicitly state that "AI is prohibited from acting as an arbitrator," it functionally bars non-human arbitrators by defining who can serve and by imposing obligations (impartiality, personal disclosure, physical or electronic signature of the award, and readiness to be challenged for bias) that only a human individual can meaningfully carry out.

Spain

Under the Spanish Arbitration Act (Ley 60/2003, de 23 de diciembre), several provisions implicitly treat the árbitro as a human decision-maker rather than an AI system. Article 13 ("Capacidad para ser árbitro") specifies that arbitrators must be natural persons "en el pleno ejercicio de sus derechos civiles" ("in full exercise of their civil rights"). This requirement clearly presupposes a human being with legal capacity. The Act also imposes duties of independence and impartiality (Article 17) and requires arbitrators to reveal any circumstances that might raise doubts about their neutrality. It further contemplates the arbitrator's personal acceptance of appointment (Article 16), the possibility of recusación (challenge) on personal or professional grounds (Articles 17–18), and the signing of the award (laudo) by the arbitrator(s) (Article 37). All these duties imply the discretionary judgment and accountability typical of a human arbitrator.

Italy

Under the Italian Arbitration Act (Articles 806–840 of the Codice di Procedura Civile), the text consistently assumes that an arbitro (arbitrator) is a human individual with full legal capacity. For example:

Article 812 provides that "Non può essere arbitro chi è privo […] della capacità legale di agire" ("No one lacking legal capacity to act may serve as arbitrator"). An AI system cannot meaningfully satisfy this personal requirement of legal capacity.

The Act speaks of challenge (ricusazione), acceptance, and disclosure of conflicts of interest (Articles 813, 815), all of which assume personal traits such as independence and impartiality.

Arbitrators sign the final award (Articles 823, 824), act as public officials for certain tasks, and bear personal liability (Articles 813-bis, 813-ter).

Hence, though the law does not explicitly forbid "AI arbitrators," it effectively bars non-human arbitrators by imposing requirements linked to human legal capacity, personal judgment, and accountability. An autonomous AI could not meet these conditions without human involvement.

Other Jurisdictions

United Arab Emirates

Under Federal Law No. 6 of 2018 Concerning Arbitration in the UAE, an arbitrator must be a natural person with full legal capacity and certain other qualifications. Specifically:

Article 10(1)(a) provides that an arbitrator must not be a minor or otherwise deprived of their civil rights, indicating that the arbitrator must be a living individual with legal capacity.
The same article stipulates that the arbitrator cannot be a member of the board of trustees or of an executive or administrative body of the arbitral institution handling the dispute, and must not have relationships with any party that would cast doubt on the arbitrator's independence or impartiality.

Article 10(2) confirms that there is no requirement of a particular nationality or gender, but it still envisions a human arbitrator who can meet the personal requirements of Article 10(1).

Article 10(3) requires a potential arbitrator to disclose "any circumstances which are likely to cast doubts on his or her impartiality or independence" and to keep the parties updated on such circumstances throughout the proceedings. An AI application cannot fulfill that personal disclosure obligation in the sense prescribed by the Law.

Article 10 bis (introduced by a later amendment) restates that an arbitrator must be a natural person meeting the same standards (for example, holding no conflicts and disclosing membership in relevant boards) where that person is chosen from among certain arbitration-centre supervisory bodies.

Hence, although the UAE Arbitration Law (Federal Law No. 6 of 2018) does not literally declare "AI arbitrators are prohibited," it unequivocally conditions the role of arbitrator on being a natural person with the required legal capacity and duties, such as the disclosure of conflicts. An autonomous AI system cannot fulfill the obligations the Law imposes, whether impartiality, independence, or the capacity to sign the final award. Taken together, these requirements effectively exclude AI from serving as the sole or true arbitrator in a UAE-seated arbitration.

Singapore

Under Singapore's International Arbitration Act (Cap. 143A, Rev. Ed. 2002) (the IAA), which incorporates the UNCITRAL Model Law on International Commercial Arbitration (with certain modifications), there is no explicit statement that "an arbitrator must be a human being." However, the provisions of the Act and the Model Law, read as a whole, presuppose that only natural persons can serve as arbitrators in a Singapore-seated international arbitration.

A. Terminology and Structure of the Legislation

Section 2 of the IAA (Part II) defines "arbitral tribunal" to mean "a sole arbitrator or a panel of arbitrators or a permanent arbitral institution." Likewise, Article 2 of the Model Law uses "arbitral tribunal" to refer to a sole arbitrator or a panel of arbitrators. Neither the IAA nor the Model Law defines "arbitrator" as something that could be non-human, nor do they provide any mechanism for appointing non-person entities. The Act consistently describes the arbitrator as an individual who can accept appointments, disclose conflicts, sign awards, act fairly, be challenged, or be replaced, among other duties.

B. Core Provisions That Imply a Natural Person Arbitrator

Appointment and Acceptance

Section 9A of the IAA (read with Article 11 of the Model Law) speaks of "appointing the arbitrator" and provides that, if the parties fail to appoint, the "appointing authority" does so. The entire scheme contemplates a named individual, or individuals collectively, as the arbitral tribunal. Article 12(1) of the Model Law requires that "When a person is approached in connection with possible appointment as an arbitrator, that person shall disclose…." The word "person," in the context of disclosing personal circumstances likely to raise doubts as to independence or impartiality, strongly suggests a natural person.
Disclosure of Potential Conflicts

Article 12 of the Model Law further requires the arbitrator to "disclose any circumstance likely to give rise to justifiable doubts as to his impartiality or independence." The capacity to evaluate personal conflicts and relationships is a hallmark of a human arbitrator. The arbitrator is also subject to challenge (Model Law Art. 13) if "circumstances exist that give rise to justifiable doubts as to his impartiality or independence" or if he lacks the qualifications the parties have agreed on.

Signatures, Liability, and Immunities

Sections 25 and 25A of the IAA provide that an arbitrator is immune from liability for negligence in discharging the arbitrator's duties and from civil liability unless acting in bad faith. This strongly implies that the arbitrator is a natural person, because a regime of professional negligence and personal immunity does not rationally apply to a non-human machine. Article 31(1) of the Model Law (which has the force of law under Section 3 of the IAA) states: "The award shall be made in writing and shall be signed by the arbitrator or arbitrators." An autonomous AI plainly cannot "sign" a final award in the sense required by law.

Procedural Powers That Depend on Human Judgment

Section 12(1) of the IAA confers on the arbitral tribunal powers such as ordering security for costs, ordering discovery, and giving directions for interrogatories. The same section provides that the tribunal "may adopt if it thinks fit inquisitorial processes" (Section 12(3)) and "shall decide the dispute in accordance with such rules of law as are chosen by the parties" (Section 12(5), read with Article 28 of the Model Law). These provisions presume that the arbitrator can weigh and interpret evidence, evaluate fairness, impartially manage adversarial arguments, handle procedural complexities, and supply reasons in a final award. While an AI tool might assist a human arbitrator, the Act nowhere recognizes autonomous, non-human adjudication.

C. Arbitrator's Duties and the Necessity of Human Capacity

Many duties that the IAA imposes on arbitrators are inherently personal and judgment-based:

Fair Hearing and Due Process. Articles 18 and 24 of the Model Law stipulate that "The parties shall be treated with equality and each party shall be given a full opportunity of presenting his case," and that "the arbitral tribunal shall hold hearings" or manage documentary proceedings. These tasks involve high-level procedural judgments, discretionary rulings, and the balancing of fairness concerns, all indications that the law envisions a human decision-maker.

Ability to Be Challenged, Replaced, or Terminated. Articles 12–15 of the Model Law describe the procedure for challenging an arbitrator for partiality, bias, or inability to serve. This only works meaningfully if the arbitrator is a natural person susceptible to partiality.

Signing and Delivering the Award. The final step of the arbitration is anchored in personal accountability: the written award is "signed by the arbitrator(s)" and delivered to the parties (Model Law Art. 31(1), (4)).

D. Permanent Arbitral Institutions vs. Automated "AI Arbitrators"
One might note that Section 2 of the IAA includes "a permanent arbitral institution" in the definition of "arbitral tribunal." This does not mean an institution can itself decide a dispute as the "arbitrator." Rather, the phrase typically refers to an arbitral body that administers the arbitration or acts as the appointing authority. The actual day-to-day adjudication is still performed by an individual or a panel of individuals. Indeed, the IAA draws a distinction between:

The arbitral institution that may oversee or administer proceedings (e.g. SIAC, ICC, LCIA).

The arbitrator(s) who actually decide the merits of the dispute.

Ethical and Due Process Considerations

The use of AI in arbitration gives rise to several ethical and due process concerns. Arbitration is founded on principles of fairness, party autonomy, and the right to be heard; introducing AI must not undermine these fundamentals. Key considerations include:

Transparency and Disclosure: One ethical question is whether arbitrators or parties should disclose their use of AI tools. Transparency can be crucial if AI outputs influence decisions. There is currently no universal rule on disclosure, and practices vary. The guiding principle is that AI should not become a "black box" in decision-making: parties must know the bases of an award to exercise their rights (such as challenging an award or understanding its reasoning). Lack of transparency could also raise enforceability issues on due process grounds. Best practice therefore leans towards disclosure whenever AI materially assists in adjudication, ensuring all participants remain on an equal footing.

Bias and Fairness: AI systems can inadvertently introduce or amplify bias. Machine learning models trained on historical legal data may reflect the biases present in those data, for example skewed representation of outcomes or language that favors one group. In arbitration, this is problematic if an AI tool gives systematically biased suggestions (say, favoring claimants or respondents based on past award trends) or if it undervalues perspectives from certain jurisdictions or legal traditions because of how they are represented in its training data. The ethical duty of arbitrators to be impartial and fair extends to any tools they use. One safeguard is using diverse and representative training data; another is having humans (arbitrators or counsel) critically review AI findings rather than taking them at face value.

Due Process and Right to a Fair Hearing: Due process in arbitration means each party must have a fair opportunity to present its case and respond to the other side's case, and the tribunal must consider all relevant evidence. AI use can challenge this in subtle ways. There is also the concern of explainability: due process is served by reasoned awards, but if a decision were influenced by an opaque algorithm, how could that reasoning be explained? Ensuring a fair hearing might entail allowing parties to object to, or question, the use of certain AI-derived inputs.

Confidentiality and Privacy: Confidentiality is a hallmark of arbitration. Ethical use of AI must guard against compromising the confidentiality of arbitral proceedings and sensitive data. Many popular AI services are cloud-based or hosted by third parties, which poses risks if confidential case information (witness statements, trade secrets, etc.) is uploaded for analysis.

AI Use Cases and Real-World Examples

Despite these concerns, AI is already making tangible contributions in arbitration practice.
A number of use cases and real-world examples demonstrate how AI tools are being applied by arbitral institutions, arbitrators, and parties:

Document Review and E-Discovery: Arbitration cases, especially international commercial and investment disputes, often involve massive document productions. AI-driven document review platforms (leveraging natural language processing and machine learning) can significantly speed up this process. Tools like Relativity and Brainspace use AI to sort and search document collections, identifying relevant materials and patterns without exhaustive human review.

Language Translation and Interpretation: In multilingual arbitrations, AI has proven valuable for translation. Machine translation systems (from general-purpose ones like Google Translate to specialized legal translation engines) can quickly translate submissions, evidence, or awards. AI is also being used for real-time interpretation during hearings: recent advances allow live speech translation and transcription.

Legal Research and Case Analytics: AI assists lawyers in researching legal authorities and prior decisions relevant to an arbitration. Platforms like CoCounsel (by Casetext) integrate AI to answer legal questions and find citations across vast databases. Products like Lex Machina and Solomonic (originally designed for litigation analytics) are being applied to arbitration data to glean insights into how particular arbitrators tend to rule, how long certain types of cases take, or what damages are typically awarded in certain industries.

Arbitrator Selection and Conflict Checking: Choosing the right arbitrator is crucial, and AI is helping make the process more data-driven. Traditional selection relied on reputation and word of mouth, but AI-based arbitrator profiles are now available. AI is also used for conflict-of-interest checks: law firms use AI databases to quickly check whether a prospective arbitrator or expert has any disclosed connections to the entities involved, by scanning CVs, prior cases, and public records. This supports compliance with disclosure obligations and helps avoid later challenges.

Case Management and Procedural Efficiency: Arbitral institutions have begun integrating AI to streamline case administration. During proceedings, AI chatbots can answer parties' routine questions about rules or schedules, easing the administrative burden. Another emerging idea is AI prediction for settlement: parties might use outcome-prediction AI to decide whether to settle early. For instance, an AI might predict an 80% chance of liability and a damages range, prompting a party to offer or accept settlement rather than proceed. This was reportedly used in a few insurance arbitrations to settle before award, with both sides agreeing to consult an algorithm's evaluation as one data point in negotiations.

These examples show that AI is not merely theoretical in arbitration: it is actively being used to augment human work, in ways both big and small.

Arbitration Institutions Implementing AI Initiatives

ICC (International Chamber of Commerce)

The "ICC Overarching Narrative on Artificial Intelligence" outlines the ICC's perspective on harnessing AI responsibly, stressing fairness, transparency, accountability, and inclusive growth.
It promotes risk-based regulation that fosters innovation without stifling competition, encourages collaboration between businesses and policymakers, and calls for global, harmonized approaches that safeguard privacy, data security, and human rights. The ICC highlights the importance of fostering trust through robust governance, empowering SMEs and emerging markets with accessible AI tools, and ensuring that AI's benefits reach all sectors of society. While the Narrative does not specifically govern arbitration, its focus on ethical and transparent AI use offers guiding principles that align with the ICC's broader commitment to integrity and due process.

AAA-ICDR (American Arbitration Association & International Centre for Dispute Resolution)

In 2023 the AAA-ICDR published ethical principles for AI in ADR, affirming its commitment to thoughtful integration of AI in dispute resolution. It has since deployed AI-driven tools: for example, an AI-powered transcription service to produce hearing transcripts faster and more affordably, and an "AAAi Panelist Search" generative AI system to help identify suitable arbitrators and mediators from its roster. These initiatives aim to boost efficiency while upholding due process and data security.

JAMS (USA)

JAMS introduced the ADR industry's first specialized AI dispute arbitration rules in 2024, providing a framework tailored to cases involving AI systems. Later in 2024 it launched "JAMS Next," an initiative integrating AI into its workflow. JAMS Next includes AI-assisted transcription (real-time court reporting with AI for instant rough drafts and faster final transcripts) and an AI-driven neutral search on its website to quickly find arbitrators and mediators via natural language queries.

SCC (Arbitration Institute of the Stockholm Chamber of Commerce)

In October 2024, the SCC released a "Guide to the use of artificial intelligence in cases administered under the SCC rules." The guide advises arbitration participants (especially tribunals) on responsible AI use. Key points include safeguarding confidentiality, ensuring that AI does not diminish decision quality, promoting transparency (tribunals are encouraged to disclose any AI they use), and prohibiting any delegation of decision-making to AI.

CIETAC (China International Economic and Trade Arbitration Commission)

CIETAC has integrated AI and digital technologies into its case administration. By 2024 it had implemented a one-stop online dispute resolution platform with e-filing, e-evidence exchange, an arbitrator hub, and case management via a dedicated app, enabling fully paperless proceedings. CIETAC reports that it has achieved intelligent document processing, including full digital scanning and automated identification of arbitration documents, plus a system for detecting related cases. CIETAC's annual report states that it is accelerating "the application of artificial intelligence in arbitration" to enhance efficiency and service quality.

Silicon Valley Arbitration & Mediation Center

The SVAMC Guidelines on the Use of Artificial Intelligence in Arbitration (1st Edition, 2024) present a comprehensive, principle-based framework designed to promote the responsible and informed use of AI in both domestic and international arbitration proceedings. The guidelines aim to ensure fairness, transparency, and efficiency by assigning clear responsibilities to all arbitration participants, including arbitrators, parties, and counsel.
Key provisions include the obligation to understand AI tools' capabilities and limitations, safeguard confidentiality, disclose AI usage where appropriate, and refrain from delegating decision-making authority to AI. Arbitrators are specifically reminded to maintain independent judgment and uphold due process. The guidelines also address risks such as AI hallucinations, bias, and misuse in evidence creation. A model clause is provided for integrating the guidelines into procedural orders, and the document is designed to evolve alongside technological advancements.

The information provided is not legal, tax, investment, or accounting advice and should not be used as such. It is for discussion purposes only. Seek guidance from your own legal counsel and advisors on any matters. The views presented are those of the author and not any other individual or organization. Some parts of the text may be automatically generated. The author of this material makes no guarantees or warranties about the accuracy or completeness of the information.
- When Referrals Become Regulated
Many Web3 projects and individuals earn fees for introductions, referrals, and deal sourcing. In the European Union, the legal question is when an "introducer" crosses into regulated intermediation under MiFID II or MiCA.

MiFID II: when an introduction becomes an investment service

MiFID II Article 5 requires authorisation for investment services. "Investment advice" is a personal recommendation on transactions in financial instruments (Article 4(1)(4)). Annex I lists the key services for introducers. The most relevant are reception and transmission of orders, execution of orders, placing, and investment advice.

A referral model enters MiFID II territory when the introducer does more than general marketing. Common trigger points are:

- taking a client instruction to buy or sell, then passing it onward;
- negotiating price, size, timing, or settlement for a client;
- giving a client-specific recommendation on a financial instrument token;
- arranging placements for an issuer and charging a transaction-linked fee;
- acting as an external sales channel for an authorised firm.

MiFID II also provides a route for "tied agents." A tied agent acts for one authorised investment firm. The authorised firm bears full responsibility (Article 4(1)(29)). Member States set tied-agent registration and supervision rules (Article 29).

MiCA: when an introduction becomes a crypto-asset service

MiCA Article 59 prohibits providing crypto-asset services in the Union without authorisation. The provider must be established in a Member State. MiCA defines a list of "crypto-asset services" (Article 3(1)(16)). For introducers, the most sensitive services are:

- reception and transmission of orders for crypto-assets on behalf of clients;
- execution of orders for crypto-assets on behalf of clients;
- placing of crypto-assets;
- exchange of crypto-assets for funds or for other crypto-assets;
- advice on crypto-assets.

MiCA also covers custody and administration, operation of a trading platform, portfolio management, and transfer services. MiCA uses functional definitions. "Reception and transmission" covers receiving an order and passing it to a third party for execution. "Placing" covers marketing crypto-assets to purchasers on behalf of an offeror. "Advice" covers a personal recommendation to a client.

Non-custodial design reduces custody exposure. It does not remove authorisation risk. Order handling, execution, placing, or personal recommendations can still trigger MiCA.

Third-country firms and "exclusive initiative" under MiCA

An offshore entity does not, by itself, remove EU exposure. MiCA Article 61 allows third-country service only at the client's own exclusive initiative. Solicitation or promotion in the Union can defeat that route. Article 61 also limits follow-on services to the same type of service and the same provider.

Case study: introducer + non-custodial interface

A company incorporated offshore runs a crypto-only, non-custodial interface for professional counterparties. Users connect their own wallets. The interface shows third-party protocol yields. Users interact directly with external protocols. The company holds no client keys or fiat. The company earns fees from partner referrals and introductions. It also applies access controls and geo-blocking in selected jurisdictions.

If the curated opportunities involve financial instrument tokens, MiFID II becomes primary. A "deal introduction" can become "placing" or "investment advice." That risk rises with client-specific steering and involvement in deal execution.
A transaction-linked success fee can support an inference of intermediation. If the activity involves MiCA crypto-assets, the question is whether the company provides a MiCA service. Referral codes and general marketing often sit outside the authorisation perimeter, provided there is no order reception, order routing, execution, placing, or advice. The risk rises where the introducer curates gated dealflow and actively solicits participation.

Work delivered in the case study included:

- mapping the full service flow against MiFID II and MiCA definitions;
- classifying tokens and activities by regime and perimeter;
- drafting referral and introduction terms with clear activity limits;
- reviewing EU targeting signals and third-country "exclusive initiative" controls;
- preparing a licensing or partnering path where the business model required regulated services.

Offshore structuring: what it solves, and what it does not

Offshore centres vary. Some impose licensing or registration for digital asset services. Bermuda restricts carrying on "digital asset business" in or from within Bermuda without a licence. The British Virgin Islands restricts providing a "virtual asset service" in or from within the Virgin Islands without registration. The Bahamas' Digital Assets and Registered Exchanges Act, 2024 sets a registration regime for digital asset businesses. The BVI Act also deems certain BVI companies to be carrying on business from within the territory.

Offshore incorporation can help with corporate administration and contracting. It does not neutralise EU rules for EU-facing operations. EU supervisors look at where the service is provided and how it is promoted. EU-facing indicators include EU staff, EU-targeted marketing, and EU client onboarding. Indicators also include EU payment rails and EU events. Where these indicators exist, the licensing risk usually remains. The practical options are to stay outside the regulated service definitions, or to move regulated functions into an authorised EU entity or partner.

Prokopiev Law Group can advise on mapping introducer activity to MiFID II and MiCA, drafting referral and introduction terms, and designing an offshore/EU operating split. Contact us to review your service flow, fee mechanics, marketing footprint, and EU nexus.

The information provided is not legal, tax, investment, or accounting advice and should not be used as such. It is for discussion purposes only. Seek guidance from your own legal counsel and advisors on any matters. The views presented are those of the author and not any other individual or organization. Some parts of the text may be automatically generated. The author of this material makes no guarantees or warranties about the accuracy or completeness of the information.
- Case Study: Manufacturing RWA Pools Built on an RMI DAO LLC with Asset-Holding SPV LLCs
The legal question: how can a tokenized real-world asset (RWA) financing program for operating manufacturing facilities be structured so that each investor contracts with the specific facility pool that owns and finances the facility assets, while a protocol-level entity coordinates issuance and voting mechanics across multiple facilities? This case study describes an implementation using a Republic of the Marshall Islands decentralized autonomous organization limited liability company (RMI DAO LLC) as the protocol entity, paired with separate SPV LLCs for each investment pool holding EU and Middle East manufacturing assets.

"Platform DAO LLC" means the RMI DAO LLC that holds protocol IP, runs the on-chain contracts, and provides administrative services, but does not hold facility assets. "Pool SPV" means an LLC formed for one investment pool that (i) holds title to defined facility assets (directly or via a wholly owned local asset-holding company) and (ii) is the lender or equity holder that directly funds the facility owner or operator. "Direct investor–asset privity" means that the investor's contractual counterparty is the Pool SPV that owns or controls the financed facility asset package, not a protocol treasury or a multi-asset holding company.

Deal objective

The client's objective was ring-fenced, asset-by-asset exposure to operating manufacturing facilities in EU Member State(s) and Middle East jurisdiction(s). Investors required a direct contractual relationship with the specific financed facility pool, with no cross-collateralization across facilities and no reliance on a protocol balance sheet. The operating reality included asset-level diligence (title, permits, equipment registries), facility-level cash flows, and local-law security packages that had to attach to identified assets.

Entity stack

The structure used four legal layers, each with a single job:

- Platform DAO LLC (RMI): protocol IP owner and administrator, with on-chain voting and transfer controls.
- Pool SPV (LLC): one SPV per investment pool, the investor counterparty and the facility finance counterparty.
- Local AssetCo / OpCo (EU or Middle East): the local company that holds site permits, runs operations, and employs staff.
- Security and cash controls: local-law mortgages/pledges and controlled bank accounts tied to each Pool SPV.

Platform DAO LLC

The Republic of the Marshall Islands DAO Act treats a DAO as a domestic LLC formed under the Marshall Islands LLC statute, with DAO-specific formation and operating requirements. The Platform DAO LLC used that statute to tie the legal entity to its on-chain control layer. DAO entity law also matters for transfer controls: the company agreement and the on-chain logic were drafted as one system. For a multi-facility program, that design supported a clean separation between the Platform DAO LLC (technology and administration) and the Pool SPVs (asset exposure). Investor identity and beneficial ownership data for a pool lived at the pool level, not in a protocol-level omnibus ledger (a sketch of this pool-level gating follows below).

Pool SPVs

Each investment pool was placed into its own Pool SPV LLC. The Pool SPV, not the Platform DAO LLC, executed the asset purchase and the facility finance documents. Investors subscribed directly into the Pool SPV. The Pool SPV's operating agreement (or, where used, note terms) governed distributions, reporting, voting on major actions, and liquidation priorities.
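As a rough illustration of how the company agreement and the on-chain transfer controls can operate as one system, the Python sketch below models a per-pool transfer gate. It is a simplified, hypothetical model, not code from the engagement: the class name, fields, and checks are assumptions, and a production system would implement equivalent logic in the token contract and the transfer-agent workflow.

```python
from dataclasses import dataclass, field


@dataclass
class PoolRegistry:
    """Hypothetical per-pool transfer gate: each Pool SPV keeps its own
    allowlist, so identity and beneficial ownership checks live at the
    pool level rather than in a protocol-wide omnibus ledger."""
    pool_id: str
    allowlist: set[str] = field(default_factory=set)    # investors cleared for THIS pool
    holdings: dict[str, int] = field(default_factory=dict)

    def approve_investor(self, investor: str) -> None:
        # Pool-level compliance clearance is recorded for this pool only.
        self.allowlist.add(investor)

    def transfer(self, sender: str, receiver: str, units: int) -> None:
        # A pool interest carries rights against the asset-holding SPV,
        # so the receiving side of every transfer is gated at pool level.
        if receiver not in self.allowlist:
            raise PermissionError(f"{receiver} is not cleared for pool {self.pool_id}")
        if self.holdings.get(sender, 0) < units:
            raise ValueError("insufficient pool interests")
        self.holdings[sender] -= units
        self.holdings[receiver] = self.holdings.get(receiver, 0) + units


# Clearing an investor for one pool says nothing about any other pool,
# mirroring the ring-fencing in the legal documents.
pool_a = PoolRegistry("facility-pool-A")
pool_a.approve_investor("investor-1")
pool_a.holdings["investor-1"] = 100
```

The design point is that there is no global approval state: each registry stands alone, just as each Pool SPV stands alone legally.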
Facility ownership and financing followed a single rule: the Pool SPV had to be the legal titleholder of the facility asset package or the legal owner of the local company that held title. The choice depended on local restrictions on foreign ownership, licensing, and land registration. Where local law allowed direct foreign ownership, the Pool SPV acquired title to the facility real estate and key equipment and then leased them to the operating company. Where local law required local holding, the Pool SPV owned 100% of a local AssetCo that held title and entered into the lease with the OpCo, and the Pool SPV held a share pledge over the AssetCo plus a package of contractual covenants.

"Direct financing" in this project meant that the Pool SPV was the named lender or equity investor on the facility documents, with proceeds wired from the Pool SPV's controlled bank account to the seller, contractor, or operating company. The security package was asset-specific and recorded under the situs law of the asset. Components typically included a mortgage over real property, pledges over equipment where registrable, a share pledge over the local holding company, and an assignment of key receivables or insurance proceeds (each subject to local perfection rules).

Investor legal connection to the facility

The investor signed (i) a subscription agreement with the Pool SPV and (ii) the Pool SPV operating agreement (or note instrument) as the source of economic and enforcement rights. The Pool SPV then held the asset title or the shareholding that controlled the titleholder. Investors were not unsecured creditors of the Platform DAO LLC.

The structure also separated "protocol participation" from "asset exposure." The Platform DAO LLC could issue a protocol token for voting on protocol parameters and service fees. The Pool SPV issued a separate pool interest tied to one facility pool. A pool interest transfer required pool-level compliance checks because the pool interest carried the investor's rights against the asset-holding SPV.

Cross-border limits and edge cases

Ring-fencing depends on separateness in fact. Each Pool SPV had its own bank account, accounting, resolutions, and service contracts. Intercompany agreements with the Platform DAO LLC were priced and documented. The Pool SPV did not guarantee protocol obligations, and the Platform DAO LLC did not guarantee pool obligations.

Series structures were evaluated and rejected for cross-border assets. Many non-US jurisdictions do not treat a series as a separate legal person for title and insolvency purposes. Separate Pool SPVs give cleaner asset title and cleaner enforcement, at the cost of more entity maintenance.

Tokenholder voting has limits. A lender's decision to accelerate a loan, enforce a mortgage, or replace a plant operator can trigger local-law duties and sometimes court processes. The drafting therefore split voting into (i) investor direction on defined "major decisions" and (ii) day-to-day execution by a manager with signing authority, subject to conflicts rules and reporting covenants.

Foundation or trust wrapper as an alternative to a DAO LLC

A non-DAO top tier can replace the Platform DAO LLC without changing the pool architecture. Two common alternatives are a foundation and a trust. Under a foundation structure, a foundation holds the protocol IP, appoints administrators, and contracts with service providers. The foundation has no shareholders.
Control sits with its council (or equivalent organ) and any protector or guardian mechanism the statute permits. Tokenholder voting can be drafted as a direction right or as advisory input to the council, depending on how much on-chain control the council is willing to accept without violating its statutory duties.

Under a trust structure, a trustee holds the protocol IP and related rights on trust for defined beneficiaries or for a defined purpose, under an applicable trust statute. Legal title sits with the trustee. Control flows through trustee duties, the trust deed, and any protector or enforcer role. Tokenholder voting can be tied to beneficiary directions if the trust deed allows it, with carve-outs for trustee discretion that preserve fiduciary compliance.

The foundation/trust choice changes bank onboarding, liability allocation, and enforcement mechanics. A foundation provides a single legal person as counterparty for protocol-level contracts. A trust provides contractual control through trustee duties, but the trustee remains the titleholder. Neither choice changes the core investor protection in this case study, because investors still contract directly with the Pool SPV that owns and finances the facility assets.

Prokopiev Law Group advises sponsors and operators on RWA manufacturing structures, including RMI DAO LLC formation, SPV pool documentation, cross-border offering controls, and the foundation/trust alternative for protocol ownership and administration. Contact our team to scope an entity stack and document set for a specific facility pipeline.

The information provided is not legal, tax, investment, or accounting advice and should not be used as such. It is for discussion purposes only. Seek guidance from your own legal counsel and advisors on any matters. The views presented are those of the author and not any other individual or organization. Some parts of the text may be automatically generated. The author of this material makes no guarantees or warranties about the accuracy or completeness of the information.
- Case Study: Anonymized Token Sale Legal-Structural Implementation for a Protocol Launch Client
This case study documents the legal-structural implementation we executed for a protocol development client preparing a public token sale contemporaneous with mainnet launch. The engagement objective was to construct a governance-credible and operationally separable architecture in which ecosystem stewardship, sale execution, and commercial development operations were isolated into distinct legal entities, with defined authority boundaries, traceable funds flows, and publication-grade disclosure outputs.

Step 1 — Define the system boundary and token functions. The project defines a public blockchain network and a native Token intended for validator staking within the consensus design, payment of network transaction fees, and participation in protocol change mechanics through validator delegation and client software adoption. The Token is treated as a network utility asset whose operative functions are limited to the protocol's technical and economic design, with the sale timed near public network launch and delivery planned around launch.

Step 2 — Partition stewardship, sale execution, and commercial build work. The legal architecture separates long-lived stewardship and ecosystem spending from the token sale contracting party and from the employer that builds software. The steward entity holds treasury responsibilities, grants, delegations, and program spending; the seller entity signs the sale documentation and carries sale-specific contractual exposure; the builder entity employs staff, contracts vendors, and performs engineering work under a paid services mandate.

Step 3 — Constitute the steward entity as a memberless foundation company. The project forms FoundationCo as a memberless Cayman Islands foundation company and appoints a board that acts as the decision node for treasury policy, delegation programs, ecosystem incentives, and high-value contractual commitments. The internal rules describe approvals, signatories, conflicts, custody controls, and public communications channels, with the operating intent fixed on network development support, decentralization, security posture, adoption programs, education, and ecosystem outreach.

Step 4 — Constitute the seller entity as a controlled subsidiary. The project forms IssuerCo as a British Virgin Islands company, controlled by FoundationCo, and uses that subsidiary as the contracting counterparty for the exchange-run token sale portal. Control is expressed through corporate governance instruments and board resolutions that allocate signing authority for sale documentation, custody arrangements, and token delivery mechanics, while reserving treasury-risk decisions to FoundationCo's board.

Step 5 — Constitute the builder entity as a for-profit operating company. The project forms or maintains DevCo as a for-profit corporation that hires engineers and contractors, engages auditors and security vendors, and maintains the core client codebase. DevCo runs payroll and standard commercial operations, and it acts as the operational locus for software development, research, documentation, and tooling that support the network's launch and ongoing maintenance.

Step 6 — Fix the intercompany services boundary in a development services agreement. FoundationCo and DevCo execute a written development services agreement that defines deliverables, milestones, acceptance criteria, payment terms, and termination rights, with compensation described as arm's-length professional services fees.
The agreement assigns responsibility for confidentiality, security controls, incident handling, and dependency management, and it states the IP path for new code contributions and contributor assignments: either into DevCo with a license to FoundationCo, or directly into FoundationCo, with one path chosen and applied consistently.

Step 7 — Define treasury funding flows between builder and steward. The structure states how FoundationCo will obtain cash runway for staffing, vendors, and ecosystem programs, including a donation or grant from DevCo or another funding source that covers operating expenditures through a defined period. The funding instrument is documented with scope, permitted uses, reporting expectations, and board approval thresholds, and it remains distinct from any token sale proceeds, which follow sale-specific controls.

Step 8 — Specify token non-rights and the separation from equity. The Token terms state, in plain contractual language, that the Token does not represent equity, ownership, or profit-sharing rights in FoundationCo, DevCo, or any other organization, and that it does not confer any entitlement to protocol revenue. The structure treats equity in DevCo and the Token's economic and functional profile as separate instruments with separate value drivers, and the disclosure package keeps that separation explicit and repeated wherever purchaser expectations could drift.

Step 9 — Fix the initial token supply map and release mechanics. The project allocates the initial supply across ecosystem development, team, investors, public sale, builder treasury, and community distributions, with an explicit distinction between tokens that are unlocked at launch and tokens that remain locked under time-based schedules. The lock design can include an initial one-year lock for team and investors, followed by scheduled releases over multiple years, and it can restrict staking for locked balances so that early staking rewards flow into public circulation rather than concentrating among insiders.

Step 10 — Bind the public sale to the seller entity and the sale portal's mechanics. IssuerCo signs the portal agreement, accepts portal eligibility gating, and publishes the sale terms that define the sale window, fixed price, maximum offered tokens, minimum and maximum purchase amounts, allocation mechanics under oversubscription, and delivery timing tied to network launch. The purchaser relationship is formed through the portal's onboarding and the seller's sale terms, and the seller document states that the portal operator does not supply or vouch for the seller's disclosure content.

Step 11 — Describe oversubscription allocation as a deterministic procedure. The sale documentation describes a fill procedure that allocates first to the smallest unfilled requests and iterates upward until supply is exhausted, then splits any remainder evenly among still-unfilled purchasers (see the sketch after Step 12 below).

Step 12 — Define proceeds flow, custody segmentation, and internal controls. The funds flow specifies that purchasers pay through the portal, the portal remits net proceeds to IssuerCo, and IssuerCo transfers proceeds to FoundationCo under a treasury policy approved by FoundationCo's board, with defined retention for sale-related liabilities where needed. Treasury operations use segregated wallets for sale inventory, ecosystem allocations, market-maker loan inventory, and operating spend, with multi-signature controls and recorded approvals for transfers, grants, and delegations.
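To make the Step 11 procedure concrete, here is a minimal Python sketch of one plausible reading of that fill rule. The function name, the data shapes, and the handling of sub-unit residue are assumptions for illustration; the adopted sale terms, not this sketch, govern the actual mechanics.

```python
def allocate(requests: dict[str, int], supply: int) -> dict[str, int]:
    """Deterministic oversubscription fill (one reading of Step 11):
    fill the smallest requests in full, moving upward, until the remaining
    supply cannot fully fill the next request; then split what is left
    evenly among the still-unfilled purchasers."""
    alloc = {p: 0 for p in requests}
    remaining = supply
    pending = sorted(requests, key=requests.get)  # smallest requests first

    # Phase 1: fill the smallest unfilled request in full and iterate upward.
    while pending and remaining >= requests[pending[0]]:
        p = pending.pop(0)
        alloc[p] = requests[p]
        remaining -= requests[p]

    # Phase 2: the remainder cannot fill the next request, so split it evenly.
    # Every pending request exceeds `remaining`, so the even share never
    # overshoots anyone's request; any sub-unit residue stays unallocated.
    if pending and remaining > 0:
        share = remaining // len(pending)
        for p in pending:
            alloc[p] = share
    return alloc


# Example: 500 tokens offered against 1,300 tokens requested.
print(allocate({"a": 100, "b": 200, "c": 1000}, 500))
# {'a': 100, 'b': 200, 'c': 200}
```

Because the procedure is deterministic, the same request book and supply always produce the same allocation table, which is what allows the disclosure package to describe the mechanics fully in advance.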
Step 13 — Contract for initial liquidity support through token loans and monitoring. If the project uses professional market makers, IssuerCo enters into token loan agreements that define loan size, term, permitted trading conduct, return obligations, and early termination triggers, with a third-party monitoring specialist tracking the use and idle balances of loaned tokens. The documentation can also allow a limited, short-duration provision of initial DEX liquidity from a capped fraction of initial supply, with explicit treatment of those tokens as part of the ecosystem allocation rather than part of the public sale tranche.

Step 14 — Publish a disclosure package that matches the legal and operational map. The seller publishes a disclosure document that covers project information and core contributors, the sale terms and purchaser eligibility gating, token functions and token non-rights, the allocation table and locked/unlocked status at launch, release schedules for locked categories, community distribution mechanics, conflicts statements regarding related-party token transactions, liquidity support arrangements, security posture including audits and open-source repositories, and a risk section that addresses sale execution, entity limitations, token trading liquidity, technology risk, and jurisdictional restrictions.

Prokopiev Law Group can implement the same token sale legal structure for your project, or tailor the architecture, documentation stack, and disclosure framework to your specific commercial, technical, and jurisdictional requirements. We coordinate a global network of counsel and service providers to support multi-jurisdiction execution from formation through launch and post-launch governance.

The information provided is not legal, tax, investment, or accounting advice and should not be used as such. It is for discussion purposes only. Seek guidance from your own legal counsel and advisors on any matters. The views presented are those of the author and not any other individual or organization. Some parts of the text may be automatically generated. The author of this material makes no guarantees or warranties about the accuracy or completeness of the information.
- GDPR Compliance in Blockchain: Analysis of EDPB Guidelines 02/2025
Blockchain (a type of distributed ledger) is designed as a distributed, tamper-resistant, and transparent data structure with no centralized control. Transactions are grouped into blocks cryptographically chained together, and a consensus mechanism (e.g. proof of work or proof of stake) ensures that all nodes agree on the valid state. Three properties present novel compliance challenges under the GDPR: decentralization (many participants replicate the data), immutability (once recorded, data cannot be altered without detection), and transparent access to ledger data.

For example, once personal data is written to an append-only blockchain, it cannot be individually deleted or modified without undermining the ledger's integrity. The absence of a central controller and the presence of numerous independent participants mean there is no single entity that can easily remove or edit data across all copies of the ledger. This immutability conflicts with GDPR requirements such as the rights to erasure and rectification, and it calls into question how to enforce storage limitation for data that might persist for the life of the blockchain. The decentralized governance of many blockchains also complicates accountability: participants may be in different jurisdictions and not bound by a common contract, making it difficult to ensure that personal data is used only for the specified purpose and not further processed incompatibly. Additionally, blockchains often make transaction data visible (at least to participants, or even publicly on open chains), so any personal data or identifiers stored on-chain are broadly accessible, raising concerns under the principles of confidentiality and transparency.

Many blockchains support smart contracts, self-executing programs recorded on-chain. Smart contracts can embed personal data in their code or transactions and can autonomously trigger actions affecting individuals. The EDPB notes that the execution of smart contracts might amount to solely automated decision-making with legal effects, invoking GDPR Article 22, which requires human intervention and contestation rights where such decisions significantly affect data subjects.

In summary, fundamental blockchain features like decentralization and immutability provide integrity and availability guarantees through cryptography and consensus, but they inherently clash with certain GDPR principles and rights. The guidelines stress that these technological traits do not exempt controllers from GDPR compliance: they must instead be addressed through careful design and governance measures. Controllers should thoroughly assess whether a blockchain solution is appropriate and can be configured to meet data protection requirements before using it for personal data processing.

Personal Data in Blockchain Systems

A critical step is determining what data in a blockchain constitutes personal data. Blockchains record transactions that typically include a set of metadata (e.g. timestamps, cryptographic public keys or addresses, and other technical details) and a payload (the content of the transaction). Even if the blockchain uses pseudonymous identifiers (like hashed public keys that appear as random strings), these can qualify as personal data if they relate to an identifiable individual.
The EDPB confirms that a public key or account address will be considered personal data whenever it can be linked to a natural person by means "reasonably likely to be used" (for instance, through correlation in the event of a data breach or through off-chain data). In many blockchain networks, participants' addresses are visible to all nodes to allow transaction validation, and even if they do not directly reveal a name, they may be indirectly identifying (especially when combined with external information or where a user reuses an address). Thus, seemingly anonymous on-chain identifiers are often pseudonymous personal data falling within the GDPR's scope.

Moreover, the payload data itself can include personal data. For example, a transaction might encode an individual's name, a contract with personal terms, or references to documents about a person. Smart contract interactions could record details about a user's activities or assets, and even where the smart contract's code is public, any personal data written into the contract's state or logs is stored on-chain. The guidelines emphasize that any personal data stored on-chain, whether in transaction fields, smart contract storage, or account balances, is subject to the GDPR. Additionally, when individuals interact with blockchain systems, off-chain data processing often occurs in the surrounding ecosystem. For instance, a user's blockchain wallet software or a decentralized application (dApp) may collect device identifiers or IP addresses when connecting to the network, and third-party blockchain explorers may log user queries; all of this can be personal data even though it is not recorded on the chain itself. These auxiliary data flows must also be accounted for by controllers.

Storing personal data on-chain is inherently risky from a data protection perspective. The EDPB's general rule is that controllers should avoid putting personal data on the blockchain unless strictly necessary, because doing so makes it difficult to comply with core principles. In particular, storing personal data in plaintext (unencrypted) form on a public ledger is considered highly problematic and "strongly discouraged" due to conflicts with Article 5 principles such as data minimization, purpose limitation, and storage limitation. If personal data must be processed in a blockchain use case, the guidelines advocate data-protection-by-design strategies that minimize on-chain data: for example, keeping personal data off-chain and referencing it only via pseudonymous pointers or cryptographic hashes on-chain (a minimal sketch of this pattern appears in the data-protection-by-design section below). By limiting the exposure of personal details on the immutable ledger, controllers retain greater control: off-chain data can be modified or deleted when needed, while the blockchain stores only a non-identifying commitment or link. Several advanced privacy-preserving techniques (discussed later) enable this kind of separation. The EDPB also notes emerging zero-knowledge proof architectures, where cryptographic proofs allow validation of transactions or identities without revealing the underlying personal data on-chain. Such approaches (often termed "zero-knowledge blockchains") show that it is possible to leverage blockchain features while drastically reducing on-chain personal data disclosure, though they require complex cryptographic implementations.

Roles and Responsibilities of Blockchain Participants

GDPR compliance requires clearly defining the roles of all actors involved in processing.
In blockchain networks, the decentralized governance model leads to a multiplicity of stakeholders (e.g. node operators, miners/validators, developers of a blockchain platform or of smart contracts, and users submitting transactions), which makes assigning the traditional roles of controller and processor complex. The EDPB reaffirms that using a particular technology or a distributed setup does not remove the need to identify responsible parties under the GDPR. Controllers must perform a careful, case-by-case analysis of who determines the purposes and means of each personal data processing operation on the blockchain. The standard GDPR definitions apply: a controller is the entity (alone or jointly with others) that decides why and how personal data are processed, whereas a processor processes personal data on behalf of a controller and under its instructions. These roles must be mapped onto the specific blockchain architecture and governance structure in question. The guidelines stress that this allocation should follow the factual influence and decision-making power of the actors, in line with the accountability principle. Relevant factors include the nature of the service or application using the blockchain, the governance and rules of the blockchain network, the technical capacity of participants to influence data processing, and any contractual arrangements between the parties.

Permissioned vs. permissionless blockchain design has a significant impact on roles. In a permissioned blockchain (a closed or consortium network), a designated authority or a consortium of entities controls who can participate as a node. This setup inherently offers a clearer allocation of responsibilities, since the governing entity (or entities) can be identified and held accountable as controllers for the on-chain processing. The EDPB explicitly recommends that organizations favor permissioned blockchains for personal data use, given that a central managing party can enforce compliance and there is less ambiguity about who is responsible. Only if there are well-justified reasons why a permissioned model is not feasible should controllers explore more decentralized alternatives, and even then they should question whether using a blockchain at all is appropriate if it impedes accountability.

By contrast, in public permissionless blockchains, anyone can become a node and the network is maintained by a diffuse community without a single point of control. Here, participants are not all equal in function, and their GDPR roles depend on their activities and influence. The guidelines explain that in some blockchain systems individual nodes may have a very limited role: for example, merely relaying and validating transactions against fixed rules without exercising meaningful discretion. If a node simply follows the protocol (e.g. checks digital signatures, ensures transactions meet technical criteria, and includes them in blocks without any preference or purpose of its own), that node might not be considered a controller of personal data, since it is not deciding the purposes or essential means of processing but acting in a minimal, automatic capacity. (Likewise, such a node is generally not a processor either, because it is not acting on behalf of a known controller under instructions; it is simply participating in a distributed consensus.)

On the other hand, many nodes in permissionless networks do exert influence over personal data processing.
For instance, miners/validators choose which pending transactions to include in a block, can decide to fork the blockchain, or can otherwise change how data is processed in pursuit of their own objectives. In doing so, they may effectively determine how and when certain personal data is recorded on the ledger (which transactions get confirmed) and potentially even why (if, for example, they prioritize certain transactions for economic gain or policy reasons). The EDPB states that such nodes can qualify as controllers or joint controllers if they "exercise a decisive influence" over the purposes and means of the processing. This influence can be individual (a miner deciding on a transaction's inclusion, affecting that data subject's processing) or collective (nodes agreeing to alter the protocol rules, which changes how all personal data is handled). Importantly, in truly decentralized scenarios nodes are not acting "on behalf" of another entity; they pursue their own goals within the network, so they cannot be treated as mere processors taking instructions. The EDPB strongly encourages the participating entities, in such cases, to formalize their roles through a consortium or a similar legal entity that serves as the identifiable controller for the blockchain processing. By creating an overarching legal structure, the various node operators can share obligations as joint controllers, which helps ensure accountability despite the distributed setup.

In summary, the guidelines call for clarity of roles: every blockchain project involving personal data should document who the controller(s) are (e.g. the platform operator, the consortium of participants, and/or the application providers using the chain) and which parties, if any, act as processors. Joint controllership arrangements are likely where multiple parties together define the data processing purposes (for example, co-developers of a blockchain service, or consortium members each contributing data and determining its use). All such arrangements remain subject to GDPR Article 26 (joint controller requirements) or Article 28 (controller-processor contracts), as appropriate. Even in permissionless public networks with no formal governance, participants cannot escape GDPR liability simply by pointing to the lack of central control; organizational and contractual measures should be put in place to allocate responsibilities wherever possible.

Lawfulness of Processing and Legal Bases

Any processing of personal data on a blockchain must have a valid legal basis under GDPR Article 6. The EDPB makes clear that no blanket legal basis automatically applies to all blockchain operations; controllers must determine the appropriate basis for each specific processing purpose in their blockchain use case. For example, recording personal data on a ledger for a decentralized identity system might be based on the user's consent, whereas processing personal data on a supply-chain blockchain might rely on legitimate interests or contractual necessity, depending on the context. The key point is that the choice of legal basis must align with the purpose of the processing and meet all the conditions for that basis. If the blockchain processing includes special category data (Article 9(1) data, such as health, biometric, or political opinion data), an Article 9(2) exception is required in addition to an Article 6 basis.
For instance, a blockchain that stores medical records would need the explicit consent of the data subjects or another specific derogation under Article 9(2) to process health data. The guidelines note that multiple legal bases might be available in a given scenario, but controllers must carefully assess which basis is valid and most appropriate in the context and ensure they can fulfill the obligations that come with it.

Consent (Article 6(1)(a)) is a potential legal basis in blockchain, but the EDPB issues strong caveats about its use. If consent is chosen, it must be fully compliant with the GDPR's definition and standards: truly freely given, specific, informed, and unambiguous, with a clear affirmative action by the data subject. Due to the irreversibility of blockchain entries, a critical issue is the data subject's ability to withdraw consent. The GDPR requires that consent can be withdrawn at any time; if processing was based on consent, the data must stop being processed (and ideally be deleted or anonymized) once consent is withdrawn. The guidelines underline that if personal data was stored on-chain under a consent basis, that data “must be deleted or rendered anonymous” if consent is later withdrawn. This presents a serious challenge: since blockchains do not easily allow deletion, any controller relying on consent must have a technical plan to effectively remove or irrevocably anonymize the data upon withdrawal. For example, if personal data was encrypted on-chain with the data subject's consent, the controller should be prepared to delete the decryption key (rendering the on-chain data unintelligible) if consent is withdrawn. Without such measures, consent would not be considered valid, because the data subject would not have a genuine choice to discontinue processing without detriment. In practice, this means consent is often not a suitable basis for immutable on-chain processing, unless the system is designed such that consent withdrawal triggers effective erasure (through key revocation or similar techniques). The EDPB also reminds controllers that consent does not override other GDPR requirements – even with consent, all principles (data minimization, purpose limitation, security, etc.) and rights must still be respected, and consent cannot be “forced” by making the service conditional on processing that is not necessary for it.

Other legal bases may be more appropriate for blockchain solutions. Contractual necessity (Article 6(1)(b)) could apply if the processing is genuinely required to fulfill a contract with the data subject – for example, using a blockchain to execute a smart contract that a user has entered into. However, this basis is narrowly interpreted and only covers processing that is objectively necessary for the contractual service requested by the data subject. A related basis, legal obligation (Article 6(1)(c)), might be relevant if a law mandates using a blockchain for certain record-keeping (though such scenarios are rare; more often, blockchain is a choice of technology rather than a legal requirement). If a blockchain is used to comply with a legal obligation (say, a governmental transparency ledger required by law), that law could also justify restrictions on certain data subject rights under Article 23 GDPR. The guidelines give examples such as anti-money-laundering (AML) applications and land registries: if Union or Member State law requires certain data to be kept on an immutable ledger, that law may lawfully restrict rights like erasure, as long as the Article 23 conditions are met.
In those cases, the legal basis might be Article 6(1)(c) or (e) (public interest or official authority) combined with a statutory restriction on deletion rights. Controllers must ensure any such restriction is provided by law, necessary, and proportionate.

A commonly cited basis for blockchain processing is the legitimate interests of the controller or a third party (Article 6(1)(f)). Indeed, many private-sector blockchain uses (e.g. maintaining a distributed ledger of transactions for system integrity or efficiency) might invoke the legitimate interests ground. The EDPB emphasizes that using Article 6(1)(f) requires a careful three-part test: a purpose test (the interest being pursued must be legitimate and lawful), a necessity test (the processing must be necessary for that purpose), and a balancing test (weighing the interests of the controller against the fundamental rights and interests of data subjects). In a blockchain context, controllers must evaluate whether their interest (for example, having an immutable, distributed record) is proportionate to the impact on data subjects (permanent storage of their personal data, global access, potential lack of control). The guidelines refer to the EDPB's earlier guidance on Article 6(1)(f), underscoring that if the detriment to individuals is too great relative to the controller's aims, legitimate interest cannot be relied on. Notably, data subjects have a right to object to processing based on legitimate interests (as discussed later), and if they do, the controller must stop processing unless it demonstrates compelling overriding interests. This means that in a blockchain scenario, using legitimate interests as a basis implicitly requires that the controller be able to cease or restrict processing for an objecting individual if no override applies – a requirement that again forces consideration of how one would stop processing on an immutable ledger.

In summary, whichever legal basis is chosen, it must be supported by the blockchain's technical design. For instance, if relying on consent, build the system to allow data removal; if relying on legitimate interests, minimize data use and ensure objections can be honored. The EDPB's overarching message is that the choice of technology (blockchain) does not justify lowering GDPR standards – controllers are expected to adapt the blockchain's use or design to the law, rather than the other way around.

Data Protection by Design: Principles, Minimization and Security Measures
Under GDPR Article 25, data protection by design and by default is crucial when implementing blockchain solutions. The EDPB reiterates that controllers must embed data protection principles into the architecture of the system from the outset, especially given that blockchain's characteristics make some principles harder to enforce. The guidelines stress that GDPR principles are non-negotiable – even if blockchain is technically complex, controllers are obligated to find ways to comply effectively. This often requires a combination of innovative technical and organizational measures, carefully tailored to the context, to ensure that principles like data minimization, storage limitation, integrity and confidentiality, and purpose limitation are upheld in practice. Importantly, effectiveness is at the heart of data protection by design: it is not enough to perform a formal check-the-box exercise; the chosen measures must actually result in a higher level of privacy and protection for data subjects in the real operation of the blockchain.
In other words, controllers should be able to demonstrate that their blockchain implementation achieves compliance outcomes (e.g. no unnecessary personal data is exposed or retained), not just that they attempted some generic precautions.

A primary design strategy is data minimization and storage limitation: avoid putting personal data on-chain whenever possible. The guidelines suggest that many use cases can be accomplished by storing personal data off the blockchain and only writing references or proofs on-chain. By minimizing on-chain personal data, one limits the risk of immutable, perpetual exposure. For example, rather than recording a user's name or full document on-chain, the blockchain could store a hash or a cryptographic commitment of the data, while the actual personal data stays in a traditional database or under the data subject's control off-chain. If the personal data needs to be retrieved or verified later, the off-chain source can be checked against the on-chain hash. The EDPB details several privacy-enhancing techniques to implement this:

Strong encryption: Personal data can be encrypted before inclusion in the blockchain, so that only those with the decryption key can read it. This protects confidentiality on a public ledger. Notably, encrypting data does not remove it from the GDPR's scope (encrypted data is still personal data) and does not absolve the controller of its obligations, but it can mitigate unauthorized access. The controller should use state-of-the-art cryptography and manage keys carefully. One advantage in the context of erasure is that if a data subject invokes the right to deletion, the controller can delete the decryption key, rendering the on-chain encrypted data indecipherable (a form of “cryptographic erasure”). The guidelines caution, though, that encryption's protection is time-limited: algorithms can be broken or weakened over time (e.g. by future quantum computing), and since blockchains are meant to last indefinitely, there must be a plan to manage algorithm obsolescence. Controllers should periodically assess the strength of the encryption in use and be ready to upgrade to stronger algorithms or migrate data if needed, especially for sensitive data that must remain confidential for many years.

Hashing and pseudonymization: Instead of storing personal data in cleartext, a controller can store a hashed value (computed with a secret salt or key) on-chain, while the actual data and the salt/key are kept off-chain. A well-designed salted hash is effectively a pseudonymous identifier: on its own, the hash does not reveal the underlying data unless one has the secret or can guess the input by brute force. The guidelines note the benefit that if the secret salt/key is later deleted, the on-chain hash becomes practically unlinkable to the original personal data. In that scenario, even though the hash remains on the blockchain, it no longer corresponds to an identifiable person (assuming a strong hash and secret), which can serve as a means of complying with erasure requests. Nonetheless, the EDPB warns that hashing is not a panacea: hashes (especially if unsalted or poorly implemented) can be reversed via dictionary attacks or re-identification techniques, and even salted hashes are considered personal data as long as the original data or keys exist somewhere. Therefore, storing a hash on-chain still counts as processing personal data, and all GDPR obligations apply (and the off-chain data storage must be secured and governed). The use of unsalted or static hashes is explicitly deemed insufficient for confidentiality on a public blockchain – robust, secret salts/keys must be used to prevent inference of the original data.
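To make the keyed-hash technique concrete, here is a minimal Python sketch of the pattern just described: the secret key stays off-chain, only the keyed digest would be written on-chain, and deleting the key performs the kind of “cryptographic erasure” the guidelines contemplate. The `PseudonymVault` class and its method names are illustrative assumptions, not anything prescribed by the EDPB; HMAC-SHA256 stands in for whatever keyed hash a real deployment would choose.

```python
import hashlib
import hmac
import secrets

class PseudonymVault:
    """Keeps per-record secret keys off-chain; only keyed hashes go on-chain."""

    def __init__(self):
        self._keys = {}  # record_id -> secret key (off-chain only, deletable)

    def pseudonymize(self, record_id: str, personal_data: bytes) -> str:
        """Return an HMAC-SHA256 digest suitable for on-chain storage."""
        key = secrets.token_bytes(32)            # per-record secret "salt"
        self._keys[record_id] = key
        return hmac.new(key, personal_data, hashlib.sha256).hexdigest()

    def verify(self, record_id: str, personal_data: bytes, on_chain_hash: str) -> bool:
        """Re-derive the digest from the off-chain data and compare."""
        candidate = hmac.new(self._keys[record_id], personal_data, hashlib.sha256).hexdigest()
        return hmac.compare_digest(candidate, on_chain_hash)

    def erase_secret(self, record_id: str) -> None:
        """Cryptographic erasure: once the key is gone, the on-chain digest can
        no longer be linked to the person (assuming the off-chain data and any
        other linking material are deleted as well)."""
        del self._keys[record_id]

vault = PseudonymVault()
on_chain = vault.pseudonymize("rec-1", b"Alice Example, born 1980-01-01")
assert vault.verify("rec-1", b"Alice Example, born 1980-01-01", on_chain)
vault.erase_secret("rec-1")   # the digest left on-chain is now unlinkable
```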
Cryptographic commitments: A commitment is similar to a hash but typically provides additional cryptographic guarantees (being binding and hiding) that can be advantageous. The guidelines describe storing a perfectly hiding commitment on-chain. “Hiding” means that, given only the commitment, nothing can be learned about the original data; if the scheme is secure, even brute-force attempts yield no information. The data (and any randomness used for the commitment) remain off-chain. Once the off-chain data and “witness” (the secret randomness) are deleted, the on-chain commitment is computationally useless – it cannot be opened or linked back to personal data. Thus, commitments offer a way to prove that some data existed or was agreed upon, without ever revealing the data itself on the blockchain; later, the linkage can be destroyed. This technique supports both minimization and eventual erasure (by dropping the ability to resolve the commitment). As with hashing, using commitments still requires an off-chain storage solution for the actual data and careful key management.

Off-chain storage with on-chain pointers: Rather than placing even derived data on-chain, controllers can store personal data entirely off-chain (for example, in a secure database or distributed file system) and put only a pointer or reference (e.g. a URL, document ID, or Merkle-tree root) on the blockchain. The on-chain pointer by itself may or may not be personal data (if it is just a random locator, it might not identify a person). The principle here is that the blockchain is used only to timestamp or validate the existence of data (a proof of existence), while the data itself resides elsewhere under more flexible control. If the data needs to be erased or modified, that can be done off-chain, and the on-chain pointer can be updated or de-referenced. The EDPB notes that when using such architectures, the confidentiality of the off-chain store is vital (the link between on-chain and off-chain data must be protected). Off-chain storage shifts some GDPR burdens off the blockchain itself, but the off-chain processing is fully subject to GDPR (the controller must secure it, have a legal basis, and so on, just like any database). Indeed, using off-chain storage means the overall system now has multiple components (the blockchain plus off-chain systems), each of which must be evaluated for compliance and security.

In practice, a combination of measures is often advisable. For example, one might hash personal data and store the hash on-chain, encrypt the personal data off-chain, and distribute the encrypted data among trusted parties or keep it with the data subject. The guidelines acknowledge that achieving adequate data protection in blockchain may require layered solutions and novel privacy-enhancing technologies used in tandem. They also suggest considering “zero-knowledge” architectures – e.g. using zero-knowledge proofs or other cryptographic protocols so that nodes can validate transactions without seeing personal data in the clear.
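As a rough illustration of the commitment technique described a few paragraphs above, the sketch below uses a simple hash-based commit/reveal scheme built from the Python standard library. One caveat: a hash-based commitment like this is only computationally hiding, whereas the guidelines refer to perfectly hiding schemes (such as Pedersen commitments), which require algebraic constructions not shown here; the function names are illustrative assumptions.

```python
import hashlib
import secrets

def commit(message: bytes) -> tuple[str, bytes]:
    """Return (commitment for on-chain storage, witness kept off-chain)."""
    witness = secrets.token_bytes(32)  # random secret "witness"
    return hashlib.sha256(witness + message).hexdigest(), witness

def open_commitment(commitment: str, witness: bytes, message: bytes) -> bool:
    """Holding the witness and message proves what was committed to."""
    return hashlib.sha256(witness + message).hexdigest() == commitment

c, w = commit(b"agreement signed by Alice")
assert open_commitment(c, w, b"agreement signed by Alice")
# Erasure by design: delete the off-chain message and witness, and the
# commitment remaining on-chain can no longer be opened or linked back.
del w
```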
By limiting data visibility and accessibility, controllers address the GDPR principles: confidentiality and integrity (security) are maintained by encryption and hashing (Article 32), data minimization is respected by only recording what is necessary (and in pseudonymized form) on-chain, and storage limitation is tackled by enabling effective removal (via key destruction or off-chain deletion) when data is no longer needed.

Despite all these measures, the EDPB reminds controllers that “technical impossibility” is not an excuse for non-compliance. For instance, one cannot simply say “we cannot delete data from the blockchain, so the right to erasure does not apply.” Instead, the system should have been designed to avoid that situation (e.g. by not storing data that would later need deletion, or by using encryption that can be nullified). The GDPR expects controllers to anticipate and prevent problems by design. If a particular blockchain architecture makes it impossible to meet a fundamental requirement (like deleting personal data on request), the onus is on the controller to re-evaluate the use of that technology or adjust the architecture so that compliance can be achieved. In fact, the guidelines bluntly state that if a blockchain's “strong integrity” feature (immutability) conflicts with data protection needs, controllers should consider using other tools or technologies altogether rather than blockchain. Data protection by design might mean opting for a permissioned chain where an administrator can intervene in exceptional cases, or using shorter-lived sidechains where data can be phased out, if that is necessary to honor GDPR principles.

Purpose limitation is another principle worth noting: personal data should be collected for specified, legitimate purposes and not further processed in incompatible ways. Public blockchains pose a challenge here because once data is public on the ledger, anyone might use it for new purposes beyond the original intent (for example, using blockchain transaction data to profile users' behavior). The disintermediated nature means participants are not all contractually bound to restrict their use of data. To enforce purpose limitation, a private or permissioned blockchain can impose agreements on participants about how data may be used. In public networks, the best approach is to minimize the data to begin with (so that even if someone repurposes on-chain data, it reveals little), or to use technical controls like encryption so that unauthorized parties cannot interpret the data. In any case, controllers must define clear purposes for any personal data they put on a blockchain and ensure that the data is not repurposed in a manner incompatible with those purposes. This ties back into governance: if multiple organizations are involved, they should agree on purpose limitations and document them (e.g. in consortium bylaws or smart contract code that limits data use).

Finally, security measures (integrity and availability, as well as confidentiality) are vital given that blockchain data is widely replicated. GDPR Article 32 obligations apply: controllers and processors must implement appropriate technical and organizational security measures. The blockchain's inherent design gives some security benefits – data integrity and availability are strong due to consensus and replication (tampering is detectable and data is redundantly stored). However, confidentiality is not inherent in most blockchains (many are intentionally transparent).
Thus, as described above, encryption or pseudonymization is needed to prevent unauthorized access to personal data on-chain. Controllers should also secure the off-chain environments: nodes should have measures against breaches, keys (such as private keys controlling identities, or decryption keys) must be protected, and any APIs or wallets handling personal data should be hardened. The EDPB points out that encryption keys and algorithms must be managed over time – since blockchain data may remain for decades, a plan to update cryptographic measures and handle potential future vulnerabilities (like quantum attacks) should be part of the security strategy. Regular risk assessment of cryptographic strength and contingency plans (such as re-encrypting data with stronger algorithms or migrating to a new platform) are recommended when the data is sensitive and long-lived. Additionally, since blockchains involve networked nodes, network security (authenticating nodes, preventing Sybil attacks, DDoS protection, etc.) is relevant to protecting the availability and reliability of the processing. The guidelines also mention ensuring access control and traceability in blockchain applications: for instance, in a permissioned blockchain, limit who can view certain personal data fields and log access to data for audit. While some of these controls might be unconventional for blockchain, creative implementations (like encryption schemes where only certain roles can decrypt certain data) can achieve a form of access control even in a decentralized setting. In summary, security must be comprehensive, covering on-chain data (through cryptography), off-chain data stores, node infrastructure, and even the metadata and communications in the blockchain ecosystem (since communication metadata could reveal personal information such as who communicated with whom and when). The controller should treat any supporting systems (key management servers, off-chain databases, etc.) as part of the overall processing and secure them to GDPR standards.

Data Subject Rights in Blockchain Environments
The GDPR grants individuals robust rights over their personal data, and the EDPB emphasizes that these rights are technology-neutral – data subjects do not lose their rights simply because their data is processed on a blockchain. All the usual rights (access, rectification, erasure, restriction, objection, data portability, and the right not to be subject to certain automated decisions) must be respected. However, fulfilling these rights requires special consideration in a blockchain context, given the inherent immutability and distributed control. The guidelines consistently urge that mechanisms to honor data subject rights be incorporated at the design stage of any blockchain system.

Transparency and information (Articles 12–14 GDPR): Data subjects have the right to be informed about how their data is processed. In blockchain projects, this means controllers must provide clear and easily accessible privacy notices explaining, among other things, that personal data will be recorded on a blockchain, the nature of that blockchain (public or private, how it works), who will have access to the data (e.g. all node operators globally, if it is a public chain), and what rights and recourses the individual has. The EDPB specifies that the controller should inform the data subject before submitting their personal data to the blockchain (e.g. at the time of data collection or before writing a transaction that includes their data).
The information must be given in concise, plain language as usual, but must also cover blockchain-specific aspects (such as potential international transfers to network nodes, the potential inability to fully delete data, and the measures in place to protect the data). If multiple parties are controllers (e.g. a consortium), they should coordinate so that the data subject is not confused by multiple or inconsistent notices. The guidelines also note that transparency is key to fairness – data subjects should not be surprised by unexpected processing, such as the public permanence of their data. Full disclosure of the blockchain's implications is therefore part of compliant processing.

Right of access (Article 15) and data portability (Article 20): The right of access means an individual can ask the controller to confirm whether their data is being processed and obtain a copy of that data and relevant information. On a blockchain, the data subject might already have access to the ledger (if it is public, anyone can read it), but the controller still has an obligation to provide an intelligible copy of the data concerning that individual upon request. Fulfilling an access request may involve extracting all transactions or data linked to that person (for example, all entries associated with their blockchain address) and presenting them in a user-friendly format along with explanatory information (purposes, recipients such as nodes, storage period, etc.). The EDPB considers the exercise of access rights compatible with blockchain as long as controllers do their part to compile and furnish the information. Indeed, blockchains keep an indelible record, which could even make it easier in some cases to retrieve historical data for a subject – but the controller must ensure the individual can understand it (a dump of raw blockchain entries may not be meaningful without context). As for data portability, if processing is based on consent or contract and is carried out by automated means (the criteria for Article 20), the data subject can request their personal data in a commonly used, machine-readable format. Blockchain data is already structured, but the controller should consider providing it in a convenient form (perhaps a CSV or JSON listing of the person's relevant blockchain entries). One complexity is that blockchains often involve multiple joint controllers; a user might request data from one of them (say, the dApp provider), who then must gather the personal data from the chain. The guidelines assert that these rights can be met in blockchain systems – there is nothing about blockchain that technically prevents giving a user their data or moving it – so long as the controller has procedures to extract the data rather than merely pointing to the public ledger.
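A minimal sketch of what such an access or portability export could look like, assuming a hypothetical ledger layout (a list of entries with an `address` field); a real implementation would query the actual chain and the controller's off-chain records, and enrich the export with the contextual information Article 15 requires.

```python
import json

def export_subject_data(ledger: list, subject_address: str) -> str:
    """Compile every entry linked to a data subject's address and return it as
    machine-readable JSON (an Article 15 copy / Article 20 portability export)."""
    entries = [tx for tx in ledger if tx.get("address") == subject_address]
    report = {
        "subject_address": subject_address,
        "entry_count": len(entries),
        # A real export would add context: purposes, recipients, retention, etc.
        "entries": entries,
    }
    return json.dumps(report, indent=2)

ledger = [
    {"address": "0xabc", "tx_id": 1, "timestamp": "2025-01-02T10:00:00Z", "payload_ref": "doc-17"},
    {"address": "0xdef", "tx_id": 2, "timestamp": "2025-01-03T11:30:00Z", "payload_ref": "doc-18"},
    {"address": "0xabc", "tx_id": 3, "timestamp": "2025-02-05T09:15:00Z", "payload_ref": "doc-22"},
]
print(export_subject_data(ledger, "0xabc"))
```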
Right to rectification (Article 16): This right entitles individuals to have inaccurate personal data corrected or completed. On an immutable ledger, one typically cannot directly change or delete a past entry, which complicates straightforward “correction” of a record. The EDPB notes that fulfilling rectification may require creative solutions in blockchain. One approach is the use of additional transactions or data to indicate corrections. For instance, if an incorrect piece of personal data was stored in a blockchain transaction, the controller might publish a new transaction that references the earlier one and states that “the data X in transaction Y is corrected to Z.” This effectively appends a correction on-chain without erasing the original error. Anyone reading the chain would need to be made aware (through the application logic or off-chain means) that the later transaction supersedes the earlier information. In permissioned blockchains, another possibility is that an administrator flags certain data as deprecated or pushes a state update that rectifies a value (though the original record remains in the history). The guidelines also suggest that if rectification in effect requires erasing the old data (for example, a data subject contests a stored document and the resolution is to remove it), then the same techniques used for erasure requests should be applied to implement rectification. That is, if a piece of personal data is wrong, one might “rectify” it by effectively deleting or anonymizing the erroneous data and perhaps inserting the correct data elsewhere. The key point is that controllers must have a plan to address inaccuracies – they cannot respond to a rectification request with “we technically cannot change the blockchain.” By design, they should either avoid recording personal data that might need correction, or enable an overlay system that can mark or nullify incorrect data. Ensuring data accuracy is part of the GDPR's principles (Article 5(1)(d)), so blockchain solutions should incorporate validation steps to prevent errors (e.g. verifying data before it is written) and mechanisms to propagate corrections. The EDPB acknowledges this can be “technically demanding” on blockchain, but it must be tackled in compliance efforts.

Right to erasure (Article 17) and right to object (Article 21): These rights are particularly thorny on an immutable ledger but are central to the GDPR. The right to erasure (the “right to be forgotten”) allows individuals to have their personal data deleted when, for example, the data is no longer necessary, consent is withdrawn, or the processing is unlawful. On most blockchains, one cannot literally delete a block or transaction without compromising the chain's integrity and consensus – nodes deliberately resist alteration of history. The EDPB frankly observes that it might be technically impracticable to honor a request for actual deletion of on-chain data in many cases. Nevertheless, controllers are expected to design systems that accommodate erasure as far as possible. The guidelines recommend that personal data should never be stored on-chain in directly identifiable form unless the use case absolutely requires it and all risks have been addressed. If data is stored in a reversible or obscurable form (encrypted, hashed, or off-chain), then an erasure request can be fulfilled by rendering the data inaccessible or anonymized. For example, if a user invokes erasure, the controller could erase the personal data in the off-chain storage and delete any link or key such that the on-chain reference can no longer be tied to the person. The chain record itself might remain, but it would be functionally anonymous – the controller must ensure that neither they nor anyone else can identify the data subject from the remaining on-chain data. This often entails erasing all related off-chain data (e.g. mapping tables, identity information) that connected the on-chain entry to the individual.
When done properly, what remains on-chain is just an orphaned entry that no longer has personal data context (or is just a random string). The EDPB calls this approach “effective anonymization” of blockchain data in response to erasure requests. However, it cautions that achieving genuine anonymization is difficult and context-dependent – it presupposes that the on-chain data by itself cannot identify the person (so it was suitably abstracted to begin with) and that all additional data that could re-link identity is wiped. If a controller cannot meet these conditions – for instance, in a fully public ledger where personal data (like a name or a facial image) was directly embedded on-chain – then it faces a serious compliance problem. The guidelines explicitly advise that if the strong immutability of blockchain is not needed for the purpose, controllers should avoid using blockchain for personal data, because otherwise they might not be able to honor erasure and other rights. In other words, do not put data on an immutable ledger unless you have no alternative and you have robust measures to mitigate the privacy impact.

The right to object (Article 21) gives data subjects the ability to object to processing based on certain grounds (notably, legitimate interests or public interest tasks) and requires the controller to stop or not begin processing that data unless it demonstrates compelling legitimate grounds overriding the individual's interests. In a blockchain scenario, this means that if a person says “I object to you processing my data on this blockchain,” the controller must assess the objection and usually cease further processing relating to that person (unless an exemption applies). If the controller's legal basis was legitimate interest, an objection typically means the processing should stop for that individual. Therefore, much like erasure, controllers should plan how they would halt processing a particular person's data on-chain. In practice, honoring an objection could involve refraining from adding any new data about that person to the blockchain and possibly taking steps equivalent to erasure for existing data (since continuing to hold it might not be justifiable if the objection is valid). The guidelines bundle the design considerations for objection with erasure, stating that these rights “must be complied with by design” in blockchain systems. They highlight that if personal data is stored on-chain, stopping all processing might be hard – hence the preference for limiting on-chain personal data from the start. When an objection is received, a controller should at minimum ensure that no further dissemination or use of that person's on-chain data occurs under its control. In a permissioned chain, the controller could instruct other nodes not to process that user's data going forward (or delete it, if possible, from application-level records). In public chains, the controller may only be able to anonymize or dissociate the data (similar to an erasure solution) so that processing effectively ceases in relation to an identifiable person. An additional complication arises if an objection is made to a particular node's processing (say, a European data subject objects to her data being processed by nodes outside the EU) – given the distributed nature, it is difficult to selectively remove one node's copy. This again underscores why choosing an appropriate blockchain architecture (e.g. a limited set of nodes bound by agreements) is part of GDPR compliance.
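The erasure and rectification approaches described above can be combined into an “overlay” pattern, sketched below under stated assumptions: the chain itself stays append-only, while everything that links an entry to an identifiable person lives off-chain and can be deleted. The `RightsAwareLedger` class and its methods are hypothetical names for illustration only, not a prescribed design.

```python
import hashlib
import secrets

class RightsAwareLedger:
    """Append-only chain plus an off-chain index holding all linking material."""

    def __init__(self):
        self.chain = []        # append-only: (entry_id, keyed_hash, supersedes)
        self._offchain = {}    # entry_id -> (secret_key, personal_data), deletable

    def _record(self, data, supersedes):
        key = secrets.token_bytes(32)                       # per-entry secret
        digest = hashlib.sha256(key + data).hexdigest()     # only this goes on-chain
        entry_id = len(self.chain)
        self.chain.append((entry_id, digest, supersedes))
        self._offchain[entry_id] = (key, data)
        return entry_id

    def write(self, data):
        return self._record(data, supersedes=None)

    def rectify(self, bad_entry_id, corrected):
        # Append a correction referencing the inaccurate entry; readers must
        # treat the newest non-superseded entry as authoritative.
        return self._record(corrected, supersedes=bad_entry_id)

    def erase(self, entry_id):
        # Delete the off-chain data and key: the hash left on-chain becomes an
        # orphaned value with no link to the data subject ("effective
        # anonymization", provided no other linking material survives).
        del self._offchain[entry_id]

ledger = RightsAwareLedger()
e1 = ledger.write(b"date of birth: 1980-01-02")
e2 = ledger.rectify(e1, b"date of birth: 1980-01-20")  # on-chain correction
ledger.erase(e1)  # the superseded entry's hash is now unlinkable
```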
Notably, the guidelines mention that data subject rights cannot be waived or contracted away by the data subject. Even if users consent to certain processing on blockchain, they retain their rights under the GDPR. The EDPB also rejects any notion that because a data subject chose to use a blockchain service, they have implicitly waived the right to erasure or rectification; those rights still apply and must be “fulfilled in accordance with the GDPR” despite the technical challenges.

Right not to be subject to automated decision-making (Article 22): This is relevant when blockchains use smart contracts or algorithms that make decisions affecting individuals without human intervention. If a smart contract automatically executes a decision that produces legal effects or similarly significant effects on a person (for example, a decentralized finance protocol automatically liquidating a user's assets, or an identity blockchain automatically determining eligibility for a service), then Article 22's protections apply. The EDPB highlights that smart contract-driven decisions can constitute automated individual decision-making, and thus controllers must ensure they comply with Article 22. Under the GDPR, purely automated decisions with significant effects are prohibited unless they fall under certain exceptions (necessity for a contract, explicit consent, or authorization by law with safeguards), and even when allowed, the data subject has the right to obtain human intervention, to express their point of view, and to contest the decision. Accordingly, if a blockchain application involves such decision-making, the controller should build in safeguards: for instance, there should be a way for a human to review or override a smart contract's outcome at the data subject's request. The guidelines explicitly state that when Article 22 applies, the controller must guarantee the possibility of human intervention and the ability for the data subject to contest the decision, even if the smart contract's outcome is recorded on the blockchain. This may require off-chain processes (for example, a customer support or dispute resolution mechanism that can compensate for or counteract an on-chain decision). It could also mean avoiding fully automated processing of sensitive matters on blockchain, or seeking explicit consent with awareness of the consequences if using that route. The EDPB's position is a reminder that the “code is law” ethos of blockchains does not override human rights – systems should be designed such that algorithmic decisions are not final and unchallengeable for the individual. In practice, ensuring compliance here might involve pausing certain smart contract actions until a human approves them in cases that affect individuals, or providing a parallel off-chain method to reverse or mitigate an outcome for the individual if needed. This can be technically difficult (and arguably undermines some benefits of automation), but it is necessary for high-stakes processing to avoid violating the GDPR's prohibition on unchecked automated decisions.
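One way to picture the Article 22 safeguard is an off-chain “decision gate” that holds significant automated outcomes for human review instead of letting them execute immediately. The sketch below is a hedged illustration of that pattern: the class names, statuses, and the `significant` flag are assumptions, not a prescribed design, and a real system would persist the queue and integrate with the smart contract layer.

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    subject_id: str
    action: str          # e.g. "liquidate_collateral" (hypothetical action name)
    significant: bool    # would it produce legal or similarly significant effects?
    status: str = "pending"

@dataclass
class DecisionGate:
    """Holds significant automated outcomes for human review (Article 22)."""
    queue: list = field(default_factory=list)

    def submit(self, decision: Decision) -> str:
        if decision.significant:
            decision.status = "awaiting_review"   # human intervention possible
            self.queue.append(decision)
        else:
            decision.status = "executed"          # routine actions may auto-execute
        return decision.status

    def review(self, decision: Decision, approve: bool) -> str:
        # A human reviewer confirms or overrides the automated outcome,
        # giving the data subject a route to contest the decision.
        decision.status = "executed" if approve else "overridden"
        self.queue.remove(decision)
        return decision.status

gate = DecisionGate()
d = Decision("user-42", "liquidate_collateral", significant=True)
print(gate.submit(d))         # -> awaiting_review
print(gate.review(d, False))  # -> overridden (the human reverses the outcome)
```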
Data Protection Impact Assessments (DPIAs) for Blockchain Processing
Given the novel and potentially high-risk nature of blockchain processing, the guidelines strongly advise – and often require – performing a Data Protection Impact Assessment (DPIA) before deploying blockchain solutions that involve personal data. Under GDPR Article 35, a DPIA is mandatory for any processing likely to result in a high risk to individuals' rights and freedoms. The EDPB notes that using blockchain can introduce significant new risks to data subjects, so many blockchain use cases will trigger the need for a DPIA. In fact, several criteria listed by WP29 (and endorsed by the EDPB) for requiring DPIAs are often met by blockchain applications: e.g. the use of new technology, large-scale systematic monitoring, matching or combining datasets, or data concerning vulnerable individuals. The DPIA should assess the processing as a whole, including both the on-chain and off-chain components and flows of data. The guidelines highlight that blockchain can add distinct sources of risk that might not exist in traditional systems. When conducting the DPIA, controllers should enumerate these risks, such as:

Immutability and irreversibility: the risk that personal data cannot be rectified or erased, impacting rights (as discussed, this is a key high-risk factor).

Global data dissemination: in public blockchains, personal data is replicated on nodes worldwide, potentially including in jurisdictions without adequate protection. This raises risks of unauthorized access or unlawful international transfers, and of loss of control by the controller.

Lack of clear control: distributed governance means that breach response, honoring rights, or making changes may be hard, posing accountability and compliance risks.

Additional technical operations: the DPIA should consider not just data stored on-chain but also ancillary processing inherent to blockchain. For example: the transmission of transactions over a peer-to-peer network (which involves broadcasting personal data to many nodes), the temporary storage of data in mempools or caches awaiting block inclusion, the creation of “orphan” blocks that might later be abandoned but still contain personal data, and the off-chain storage of data referenced by the blockchain. Also, the metadata generated (timestamps, IP addresses of nodes or users, public keys, etc.) and the management of cryptographic keys and seeds are all part of the processing ecosystem. These can introduce risks such as profiling of user activity through transaction metadata, or exposure if cryptographic secrets are compromised. The DPIA must catalog these elements and evaluate their impact on privacy.

In identifying risks, controllers should not limit themselves to on-chain data breaches; they should look at the whole lifecycle and all related processes in the blockchain environment. For instance, if a third-party analytics tool is used to monitor the blockchain network and collects personal information, that is in scope. The EDPB explicitly lists elements such as the communication of blocks among nodes, transaction queue handling, off-chain data storage, metadata generation, and key management as aspects that might introduce risks requiring mitigation. All such issues should be documented in the DPIA, along with measures to address them. Crucially, if the DPIA finds that certain high risks cannot be sufficiently mitigated, the controller has a few options: modify the blockchain model or choose a different technology, or ultimately, if high risk remains, refrain from processing or consult the supervisory authority (as per Article 36). The guidelines encourage controllers to remember that blockchain is not the only solution – if one model (say, a public permissionless chain) is too risky, perhaps a permissioned chain or a non-blockchain database could achieve the goal with less risk.
This ties back to necessity and proportionality: the DPIA should question whether using a blockchain is necessary and proportionate for the intended purpose, or whether a less invasive alternative exists. The EDPB suggests a structured approach in the DPIA, highlighting additional aspects to address specifically for blockchain-based processing. These include:

Detailed description of the processing operations involving blockchain: The DPIA should describe the use case and how personal data flows through the blockchain system. For example, outline what personal data is written on-chain versus kept off-chain, what the blockchain model is (public/private, permissioned/permissionless), who the participants are, and the roles and responsibilities of each (identifying the controllers, joint controllers, and processors). It should also describe the governance framework of the blockchain (how decisions are made about the network or protocol), the data lifecycle on the blockchain (from data input, propagation, and validation to indefinite storage), and any integration with other systems. Essentially, the DPIA's systematic description should make clear how and where personal data enters the blockchain, how it is processed there, and who has control over it. This includes listing the categories of data subjects and personal data, the categories of recipients (e.g. node operators, miners, third-party observers), and any third parties that might receive data (such as external oracle services). If smart contracts are used, the DPIA should note their function and whether they involve automated inference of personal data or decision-making. It should also mention whether personal data is processed off-chain in parallel (and how the linkage is managed), and whether international transfers occur due to nodes in third countries.

Necessity and proportionality analysis: The DPIA must evaluate why blockchain is being used and whether the same goal could be achieved with less data or a different method. Controllers should justify that using a blockchain (and the specific type of blockchain chosen) is necessary for the purpose and proportionate in terms of data protection. For instance, if the purpose is to ensure data integrity and transparency among a consortium, is a fully public blockchain necessary, or would a private ledger suffice (less exposure)? Could hashing the data instead of storing raw data achieve the purpose? This section should confirm that only the minimum required personal data is processed on-chain and that all GDPR principles (like purpose limitation and data minimization) are respected in the design. If a more privacy-friendly alternative exists to achieve the same ends, the DPIA should acknowledge it and explain why the chosen approach is still justified. Regulators will expect to see that the controller has thought critically about alternatives to blockchain, or about less extreme configurations, and has adopted blockchain only if needed.

Assessment of risks to rights and freedoms: Building on the earlier identification of risk sources, the DPIA should analyze the potential impacts on individuals if things go wrong or if rights cannot be exercised. This includes risks such as personal data being publicly available and leading to harm (e.g. financial information on a blockchain could expose someone to fraud or profiling), the risk of being unable to delete erroneous or sensitive data, the risk of data breaches (which on a blockchain could mean an attacker obtaining a user's private key and thus all their on-chain personal data, or a malicious fork that exposes data), and the risk of misuse of data by network participants. The DPIA should consider the severity and likelihood of each risk. For instance, how likely is re-identification of pseudonymous data on this blockchain, and what harm would that cause? It should also consider aggregate risks: if the blockchain aggregates data from many sources, could that enable intrusive profiling or surveillance? Another specific risk is that multiple copies of data make breaches harder to contain – even if one node is secured, another might be compromised. The EDPB also expects an assessment of possible data breach scenarios in blockchain. A data breach in traditional terms might look unconventional in blockchain (since data is by design shared), but, for instance, an unauthorized party gaining access to a normally permissioned ledger, or someone's private key being stolen and fraudulent transactions with personal data being appended, are security incidents to evaluate. The extent of a potential breach (all copies globally could be impacted) should be weighed.

Measures to address the risks: Finally, the DPIA must catalogue the safeguards and controls the controller will implement to mitigate the identified risks. The guidelines indicate many of these in earlier sections: encryption, hashing, off-chain data segregation, strict access controls in permissioned networks, governance measures (contracts among participants), data lifecycle management, and so on. For each risk, the DPIA should explain how it is reduced to an acceptable level. For example: the risk of inability to erase data is mitigated by only storing hashed data on-chain and deleting the salt upon request, rendering the data anonymous; the risk of unauthorized access is mitigated by strong encryption of any personal data on-chain; the risk of international transfer issues is mitigated by restricting node locations to the EU and by contractual clauses with any external nodes. The DPIA should also cover how data subject rights can be exercised (detailing the process and technical means for access, erasure, etc., as part of the risk treatment). The EDPB document even provides an Annex with recommendations (Annex A) that likely serves as a checklist for many of these measures; controllers can use it as a reference in their DPIA to ensure they have considered all points.

The overarching advice is that performing a DPIA is not just a formality for blockchain projects, but a valuable process for identifying and resolving privacy issues early. If the outcome of a DPIA is that high residual risks remain (for example, one might conclude “we cannot mitigate the risk that we cannot erase data if needed”), then under the GDPR the controller must consult the supervisory authority before proceeding. It is conceivable that certain public blockchain applications might fall into this category, essentially requiring regulatory consultation or else refraining from processing until a solution is found. The guidelines imply that careful design choices (such as choosing a permissioned model or avoiding on-chain personal data) can often reduce risks to an acceptable level so that the processing can go ahead. If not, the project may need rethinking.
International Data Transfers and Chapter V GDPR Compliance
Blockchains are borderless by nature – a public blockchain network typically has nodes (computers maintaining the ledger) in many countries around the world. When personal data is written to such a blockchain, that data is effectively transferred across national borders to every foreign node that holds a copy of, or can access, the ledger. Under the GDPR's Chapter V, any transfer of personal data from the EEA to a third country must meet certain conditions (an adequacy decision, appropriate safeguards such as Standard Contractual Clauses or binding corporate rules, or a specific derogation). The EDPB highlights that blockchain technology will “often involve international data transfer” scenarios, especially if the network includes nodes outside the EU/EEA. This raises a compliance challenge: in an open blockchain, every node acts as a recipient of personal data, yet unlike in a traditional data export, a European controller typically does not have a direct relationship or agreement with all those foreign nodes. In fact, in permissionless networks, the nodes are “neither necessarily chosen or vetted” by the controller. This lack of control over data flows can conflict with the GDPR's transfer regime, which assumes a defined exporter and importer.

Despite these difficulties, the GDPR's transfer rules still fully apply – there is no blockchain exemption. Any European controller using a blockchain that causes personal data to be replicated to nodes outside the EEA must ensure that the transfer is lawful under Chapter V. That likely means the controller has to either restrict the network to EEA nodes or implement a transfer mechanism. The guidelines suggest some approaches: for example, in a permissioned blockchain, the controller could require all participating nodes/operators outside the EEA to sign Standard Contractual Clauses (SCCs) or similar agreements before being allowed to join the network. In such a case, joining the consortium might be conditioned on agreeing to EU data protection safeguards, making each node contractually bound to GDPR-like obligations. This is analogous to how a data exporter might have a foreign data importer sign SCCs in a traditional transfer. Another approach is to limit node locations to countries with an adequacy decision, ensuring data only flows to jurisdictions deemed by the EU to provide adequate protection. However, in many public blockchains, neither approach is practically enforceable (anyone can set up a node anywhere). The EDPB acknowledges that truly public blockchains (e.g. Bitcoin, Ethereum) create a situation that “may raise compliance concerns” because of these unrestricted international flows. Controllers thus face a tough question: if you cannot prevent or properly safeguard international transfers on a public blockchain, can you use that blockchain for personal data at all? The guidelines stop short of forbidding it outright but strongly hint that data protection by design must address transfer risks from the start. They note that ensuring proper application of transfer requirements should be “addressed from the design of blockchain activities”. In practice, this might mean designing the solution such that personal data never leaves the EEA or is never exposed to uncontrolled nodes.
For instance, one could use a permissioned blockchain with nodes only in the EU, or encrypt personal data so that even if it reaches foreign nodes, it is unintelligible (though even the transfer of encrypted personal data might formally count as a transfer if the foreign node could potentially decrypt it or obtain the keys). Another technique is to store only hashes or commitments on the global blockchain (which might be considered anonymous data from the foreign nodes' perspective if they have no access to the original data) and keep the personal data on servers in the EEA. That way, even though something is transferred globally, it may not be considered personal data by itself. These strategies align with the earlier points on minimization and encryption, but here the emphasis is on geographical data flow control. If a controller does proceed with using a global network, it should identify in the DPIA and records of processing that international transfers occur, and specify the transfer mechanism relied upon. For example, it might list the countries of known nodes and cite SCCs or explicit consent (though consent for transfers is rarely practical or sufficient) as the basis. In many cases, the explicit consent of the data subject to transfers (the GDPR Article 49(1)(a) derogation) could be a fallback – for example, a dApp could warn the user: “your data will be published on a global network; by proceeding you consent to this international transfer.” However, reliance on Article 49 derogations should be limited to occasional, necessity-based transfers, not regular systematic ones, so it is not a sound long-term solution for something like blockchain that continuously exports data. Therefore, structural solutions (such as restricting node geography or using EU-based infrastructure) are preferable. The guidelines also remind controllers that Chapter V compliance is part of data protection by design. Article 25 requires considering all GDPR requirements in the design, including those governing cross-border data flows. This means that if a blockchain setup would violate the transfer rules, that setup is not compliant with Article 25 – it should be reworked. Designing privacy into the system includes designing legal compliance into the system's geopolitical footprint.

Guidelines 02/2025 on processing of personal data through blockchain technologies (Version 1.1, 08 Apr 2025) are explicitly labelled “Adopted – version for public consultation”, indicating that the text is provisional and non-binding pending further deliberation. The European Data Protection Board has invited all interested parties to transmit written observations via the prescribed online form by 9 June 2025; only after analysing those submissions will the Board adopt a definitive version. https://www.edpb.europa.eu/our-work-tools/documents/public-consultations/2025/guidelines-022025-processing-personal-data_en

The information provided is not legal, tax, investment, or accounting advice and should not be used as such. It is for discussion purposes only. Seek guidance from your own legal counsel and advisors on any matters. The views presented are those of the author and not any other individual or organization. Some parts of the text may be automatically generated. The author of this material makes no guarantees or warranties about the accuracy or completeness of the information.
- PSD2 vs. MiCA for EU Crypto Businesses
Scope and Objectives of PSD2 vs. MiCA
PSD2 – Payment Services Directive (2015): PSD2 provides the core legal framework for retail payment services in the EU, aiming to foster an integrated EU payments market and enhance innovation, competition, and security in electronic payments. Building on the first Payment Services Directive (2007), PSD2 addressed barriers to new types of payment services (e.g. fintech payment initiation and account aggregation) while improving consumer protection and payment security (notably through Strong Customer Authentication requirements). PSD2 applies to payment service providers (PSPs) – including banks (credit institutions), electronic money institutions (EMIs), and authorized payment institutions (PIs) – and governs services like payment account handling, transfers of funds, card issuing, acquiring, money remittance, and online payment initiation or account information services (open banking). Its objectives are to level the playing field for new payment players, ensure consumer rights and security, and harmonize rules across Member States for a single EU payments market. PSD2 is a Directive, meaning each Member State transposed its provisions into national law by January 2018 (with certain measures, like Strong Customer Authentication, applying from 2019). It amended related laws (including the E-Money Directive 2009/110/EC) to align e-money issuance with the new payments regime. In sum, PSD2 focuses on fiat currency payments and electronic money, not on crypto-assets per se, though it impacts crypto businesses when they perform regulated payment activities (as discussed below).

MiCA – Markets in Crypto-Assets Regulation (2023): MiCA establishes the first EU-wide harmonized framework for crypto-assets not otherwise regulated by existing financial services law. Its scope expressly excludes crypto-assets that qualify as regulated instruments (e.g. financial instruments under MiFID, bank deposits, structured deposits, insurance/pension products), as well as unique non-fungible assets (NFTs). MiCA's objective is to support innovation and fair competition in digital finance while safeguarding consumers, market integrity, and financial stability in the crypto-asset sector. It introduces uniform rules for the issuance of crypto-assets (including token offerings and stablecoins) and for crypto-asset services such as trading, exchange, custody, and advice. Key provisions mandate transparency and disclosure (e.g. white papers for offerings), prudential safeguards, governance and conduct standards for service providers, and supervision of transactions. Unlike PSD2, MiCA is directly applicable in all Member States (no national transposition needed).

Regulatory Applicability to Crypto Business Models
MiCA is tailored to crypto-asset activities, whereas PSD2 governs traditional payment services – but certain crypto business models trigger one or both regimes. Below we analyze how each framework applies (or not) to key categories of crypto businesses.

Cryptocurrency Exchanges & Trading Platforms: Under MiCA, operating a crypto-asset trading platform or exchanging crypto-assets for fiat or other crypto is a regulated crypto-asset service requiring authorization as a Crypto-Asset Service Provider (CASP). CASPs operating exchanges must implement transparent, non-discriminatory trading rules and ensure resilient systems, fair access criteria, and timely trade settlement (within 24 hours for on-chain settlements, or same-day for off-chain settlements).
Exchanges using proprietary capital to quote prices (market-making) must disclose their pricing methods and comply with post-trade transparency. MiCA thus squarely covers both centralized crypto exchanges and brokers. By contrast, PSD2 generally does not regulate crypto-to-crypto or crypto-to-fiat exchanges, as such transactions fall outside the traditional “payment service” definition (which centers on transfers of “funds”, i.e. banknotes, scriptural money, or electronic money). If an exchange merely facilitates trades and does not itself execute a payment from a customer's payment account, PSD2 is not directly engaged. However, when a crypto exchange handles fiat currency deposits or withdrawals for customers, those fiat operations (e.g. holding client fiat balances or transferring euros to a bank) may fall under PSD2 or e-money rules. Many crypto exchanges partner with licensed banks/EMIs to hold client fiat, or have themselves obtained an EMI or PI license to handle fiat wallet services in compliance with PSD2/EMD2. In summary, MiCA will newly require EU crypto exchanges to obtain a CASP license (with EU-wide passporting), while PSD2/EMD2 obligations may simultaneously apply to any fiat payment aspects (e.g. an exchange maintaining euro wallets or processing SEPA transfers). Notably, MiCA Article 81 mandates that if a CASP's business model requires holding client funds as defined in PSD2, those funds must be deposited with an EU credit institution or central bank, and payment transactions related to the crypto service may only be executed if the CASP is also authorized as a payment institution under PSD2. This creates a dual-licensing scenario for exchanges handling both crypto and fiat, as discussed further under overlaps.

Custodial Wallet Providers & Crypto Custodians: MiCA brings custodial wallet services firmly into regulation. Any entity providing custody and administration of crypto-assets on behalf of clients (holding customers' crypto private keys or crypto-assets in custody) is a CASP requiring authorization. Such custodians must conclude written agreements with clients specifying the services and maintain a custody policy. MiCA imposes strict duties: custodians must not use client crypto-assets for their own account, must keep client assets unencumbered and segregated, and bear liability for loss arising from hacks or IT incidents (including cyber-attacks or theft). In addition, MiCA explicitly excludes non-custodial wallet software providers from its scope: providers of hardware or software for self-hosted wallets (where the user, not a service, controls the keys) are not deemed CASPs. PSD2, on the other hand, has no direct equivalent for crypto custody services. Holding or safeguarding cryptocurrency is not a regulated payment service under PSD2 (which only covers the safeguarding of funds in the sense of fiat/e-money). However, if a crypto custodian also holds fiat payment accounts or e-money for clients (e.g. to facilitate buying crypto), that aspect would invoke PSD2/EMD compliance. Of note, the EBA now advises that a custodial arrangement for e-money tokens (stablecoins) can be viewed as a “payment account” under PSD2 if it is in the client's name and used to send or receive those tokens. In summary, pure crypto custody is governed by MiCA (a new CASP license), while custody of fiat money or stablecoins might concurrently require a PSD2 license, depending on the service offered.
Decentralized Finance (DeFi) Protocols: Truly decentralized protocols (operating via smart contracts with no central operator or intermediary) largely fall outside both PSD2 and MiCA. PSD2 does not cover these as it regulates service providers (firms) rather than autonomous code. MiCA likewise recites that fully decentralised services provided without any intermediary do not fall within the scope of the regulation. For example, a DeFi liquidity pool or DEX with no entity controlling it would not itself be a CASP under MiCA. However, MiCA’s perimeter can capture entities that do engage with DeFi on behalf of users – e.g. a company offering an interface or “front-end” to a DeFi protocol, or administering aspects of a protocol, could be deemed to be providing a regulated crypto-asset service. MiCA foresees future evaluation of how to handle DeFi and mandates reports on DeFi’s development and potential regulation. PSD2 remains inapplicable unless the DeFi activity somehow involves traditional payment services. In practice, regulatory uncertainty exists for semi-decentralized arrangements – firms should carefully assess whether any identifiable entity is providing a service that could be captured by MiCA’s definitions (such as operating a trading platform, even if transactions settle on-chain). For now, fully autonomous DeFi systems are an unregulated gray area – a noted gap which regulators may address in the future.

Crypto Asset Issuers (Utility Tokens and Others): MiCA distinguishes three types of crypto-assets: e-money tokens (EMTs) referencing a single fiat currency (a subset of stablecoins), asset-referenced tokens (ARTs) referencing baskets of assets (including currencies, commodities, or crypto), and other crypto-assets (including utility tokens); a simplified classification sketch follows below. Issuers of EMTs and ARTs face the most stringent obligations under MiCA. An issuer of an EMT (fiat-pegged stablecoin) must be authorized as a credit institution or an electronic money institution under existing EU banking/e-money law. In fact, MiCA deems EMTs to be “electronic money” under the E-Money Directive, meaning stablecoin issuers must meet all e-money issuance and redeemability requirements in addition to MiCA’s crypto-specific rules. For example, an EMT issuer must always honor redemption at par value in the referenced currency at any time, and it may not pay interest on tokens (to discourage treating them as savings instruments). MiCA Title IV adds further obligations (e.g. white paper, reserve asset custody and investment rules, and, if “significant” in scale, higher capital and liquidity requirements). Asset-referenced token issuers (e.g. a stablecoin referencing multiple currencies or assets) likewise need prior authorization and must comply with detailed reserve, governance, and disclosure requirements (Title III of MiCA). Smaller utility token issuers have lighter requirements: generally they must publish and notify a crypto-asset white paper with essential information, but no authorization is required to offer simple utility tokens (unless they fall into another category). PSD2 is not directly concerned with crypto-asset issuance unless the activity intersects with payment services. For example, issuing a utility token used for platform access is outside PSD2’s scope. However, issuing a stablecoin might invoke e-money law: under pre-MiCA law, a fiat-pegged, redeemable stablecoin often qualified as “electronic money” (stored monetary value issued on receipt of funds) under Directive 2009/110/EC, requiring an e-money license.
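As a rough, non-authoritative illustration of this taxonomy, a first-pass mapping might be sketched as follows; real classification turns on a legal analysis of the token’s rights and reserve mechanics, not two booleans.

```python
# Illustrative (non-authoritative) mapping of MiCA's three token categories,
# following the taxonomy summarised above.

def classify_token(references_single_fiat: bool,
                   references_asset_basket: bool) -> str:
    if references_single_fiat:
        # EMTs are deemed "electronic money": issuer must be a credit
        # institution or EMI, redeem at par, and pay no interest.
        return "e-money token (EMT) - MiCA Title IV + E-Money Directive"
    if references_asset_basket:
        return "asset-referenced token (ART) - MiCA Title III authorization"
    # Utility and other tokens: generally a white paper / notification,
    # with no authorization needed for simple utility tokens.
    return "other crypto-asset - MiCA Title II white paper regime"

print(classify_token(references_single_fiat=True, references_asset_basket=False))
```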
MiCA has now codified that analysis (EMTs = e-money). Thus, a crypto business issuing a euro-pegged token needs a dual regulatory footing: compliance with MiCA’s Title IV and authorization under the E-Money Directive (which PSD2 references). Issuers that are banks (credit institutions) automatically meet the authorization requirement, while non-bank issuers must obtain an EMI license. PSD2’s consumer protection rules on payment transactions would apply when those tokens are used for payments (similar to how prepaid e-money usage is regulated).

Crypto Payment Service Providers: Some crypto businesses provide payment services such as enabling merchants to accept crypto or stablecoin payments, or offering crypto-based remittances. If such a business facilitates the transfer of funds (fiat or e-money) on behalf of a payer to a payee, it may perform a regulated payment service under PSD2 (e.g. money remittance or payment processing). MiCA includes a specific service category for the transfer of crypto-assets on behalf of clients, meaning that if a firm carries out crypto transactions for clients (such as sending crypto to others per user instructions), it requires CASP authorization for that service (akin to a remittance service). In practice, a crypto payments firm often straddles both regimes: the crypto leg of the activity (handling the crypto-asset transfer) will be subject to MiCA, and any fiat conversion or involvement of fiat accounts invokes PSD2. For example, a company that lets users pay merchants in crypto while the merchant receives fiat will need a PSD2 payment institution license (to handle the fiat payment to the merchant) and, separately, a CASP license under MiCA for the crypto-to-fiat exchange service and possibly the transfer of crypto on the payer’s behalf. PSD2’s scope over pure crypto transactions is essentially nil – PSD2 covers transfers of “funds” (which are euros or other official currencies, or e-money). However, once a stablecoin or crypto-asset is involved, MiCA takes over for that portion. One special case is when stablecoins are used as a payment medium: the EBA has recently clarified that transferring e-money tokens (EMTs) for clients is to be viewed as a payment service under PSD2, since an EMT is electronic money. As a result, a crypto payment provider transacting in euro-stablecoins on behalf of users might technically need a PSD2 license in addition to its CASP registration. This dual coverage is temporary – the EBA has issued a “no-action” position advising national regulators to delay enforcing PSD2 license requirements for CASPs handling EMT transfers until March 2026, to avoid requiring two separate authorizations immediately. In the long term, regulatory reforms (PSD3) are expected to eliminate such duplicative licensing. For now, crypto payment providers must be cognizant of both frameworks: MiCA will regulate their crypto transaction services (with requirements on disclosures, handling of private keys, etc.), and PSD2 will regulate any fiat payout or stablecoin issuance/redemption piece, including consumer rights and security for those payment transactions.

Decentralized Exchanges and Peer-to-Peer Platforms: A decentralized exchange (DEX) with no central operator is akin to the DeFi discussion above – MiCA would likely not directly apply if truly no service provider is present. PSD2 would not apply because no regulated payment service (as legally defined) is being provided to a user by a third party.
However, if a business operates a peer-to-peer trading platform (even non-custodial) where it intermediates trades between users, it may be considered to be operating a “trading platform for crypto-assets” under MiCA, and thus a regulated CASP. The platform provider in that case must be authorized and comply with MiCA’s rules for exchanges (transparency, operational resilience, etc.). PSD2 remains irrelevant unless fiat payment services are bundled (e.g. the platform also facilitates fiat settlement between the parties, in which case those fiat flows require a payment license or a partnership with a payment institution).

Legal Risks and Compliance Strategies in the New Regime

Key legal risks for non-compliance include potential fines, license withdrawal (or inability to obtain required licenses), civil liability to clients, and reputational damage. Below we outline strategies to ensure compliance and mitigate risks:

Regulatory Classification & Licensing Strategy: Every crypto business should map its activities to the legal definitions in PSD2 and MiCA. Determine which aspects of the business are payment services, which are crypto-asset services, and which involve token issuance. For instance, if you hold customer fiat and execute transfers, you likely need a PSD2 authorization (or a partnership with a licensed PSP) in addition to any MiCA authorization. If you operate a crypto exchange or custody business, plan to obtain a CASP license by late 2024. Start the application preparations early – authorization requires substantial documentation (business plan, internal procedures, security policies, fit-and-proper assessments, etc. under MiCA Articles 62-63). If you issue a stablecoin, initiate the process to become an EMI or partner with a licensed bank. Where dual licensing is needed (EMI + CASP), engage with both regulators; leverage the EBA’s recommendation of streamlined dual applications (the EBA suggests NCAs use information from the CASP application to ease PSD2 authorizations).

Governance and Internal Controls: Both regimes put emphasis on fit-and-proper management and robust governance. Crypto businesses should strengthen their governance structures – ensure that board members and key executives have clean compliance records (no AML/fraud convictions), and that they collectively possess the requisite expertise in both crypto technology and financial compliance. Implement or update compliance policies for risk management, conflicts of interest, complaints handling, and business continuity to meet MiCA’s standards. Set up an internal audit and control function if not already present (MiCA will require effective internal controls). Training staff on new obligations (e.g. treating client crypto fairly, handling of inside information in token markets, etc.) is crucial. Establish clear procedures for client asset safeguarding: segregate on-ledger wallets for clients, keep accurate records of holdings, and conduct regular reconciliations. Prepare to document everything – regulators will expect comprehensive policies and evidence of their implementation during licensing and inspections.

AML/CFT Compliance Upgrade: With increased regulatory scrutiny, crypto firms must elevate their AML controls to the level of traditional finance. If not already done, implement robust KYC onboarding compliant with AMLD5 – verify customer identity, verify source of funds (especially for large crypto-fiat conversions), and screen against sanctions/PEP lists.
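A minimal onboarding-screen sketch may make the point concrete; the list contents, matching rule, and enhanced-due-diligence threshold are hypothetical placeholders (production systems rely on licensed screening data and fuzzy matching).

```python
# Illustrative onboarding screen against sanctions and PEP lists.
# Entries and thresholds are placeholders, not real data or legal cut-offs.

SANCTIONS_LIST = {"Ivan Example", "Acme Sanctioned Ltd"}   # placeholder entries
PEP_LIST = {"Jane Politician"}                             # placeholder entries

def onboarding_screen(full_name: str, expected_volume_eur: float) -> dict:
    flags = []
    if full_name in SANCTIONS_LIST:
        flags.append("sanctions_hit")          # block / escalate immediately
    if full_name in PEP_LIST:
        flags.append("pep_match")              # enhanced due diligence
    if expected_volume_eur >= 100_000:         # illustrative EDD threshold
        flags.append("high_value_edd")         # source-of-funds checks
    return {"name": full_name, "flags": flags, "cleared": not flags}

print(onboarding_screen("Jane Politician", 250_000))
# {'name': 'Jane Politician', 'flags': ['pep_match', 'high_value_edd'], 'cleared': False}
```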
Enhance transaction monitoring systems to detect patterns of suspicious crypto transfers (layering, structuring, use of mixers, etc.). Ensure you can comply with the Travel Rule for crypto transfers: by 2024, CASPs will need to collect and transmit originator/beneficiary information for transfers of crypto-assets, similar to wire transfers. Align with the latest EBA/ESMA guidance on AML for CASPs once issued. Also, be mindful of MiCA’s requirement for extra vigilance with high-risk jurisdictions – incorporate that into your risk-based approach (e.g. enhanced due diligence for customers from, or sending crypto to, blacklisted countries).

Consumer Protection and Transparency Measures: To comply with the spirit of both PSD2 and MiCA, crypto businesses should adopt a customer-centric approach. Provide clear, plain-language disclosures of fees, risks, and terms of service. For example, if you run a trading platform, publish your pricing methodology or firm quotes as required, and have a clear policy on how client orders are executed (best execution). If you custody assets, provide clients with statements of their holdings and inform them of the security measures in place (and any risks). Implement a complaints handling process now (even ahead of formal MiCA RTS on this) – including a designated contact point, a log of complaints, and a timely resolution process, as this will be mandated. Consider offering voluntary protections such as insurance or compensation schemes for theft of crypto (as some exchanges do) to bolster confidence – MiCA will hold you liable for certain losses by law, so insurance also protects the firm’s solvency. In anticipation of strong customer authentication becoming expected for crypto transactions (per the EBA’s advice), deploy two-factor authentication and withdrawal confirmation steps for clients.

Technical and Cybersecurity Readiness: Given DORA’s application and MiCA’s ICT risk focus, crypto businesses must invest in IT security. Perform comprehensive penetration testing and smart contract audits (if applicable). Establish 24/7 monitoring for unauthorized access or unusual transactions. Develop an incident response plan for cyber incidents – who to alert, how to contain breaches, how to communicate to clients and regulators. MiCA and DORA will require incident reporting within tight deadlines for significant events, so be prepared to detect and report promptly. Also ensure data protection compliance – review how customer data (IDs, account info, blockchain addresses linked to identities) is stored and used; conduct a GDPR privacy impact assessment for new data practices (like sharing data with analytics providers). Encryption and secure key management (including multi-signature or hardware security modules for private keys) are fundamental to demonstrate to regulators that client assets are safe.

Overlapping Compliance – Streamlining Efforts: If your business requires compliance with both PSD2 and MiCA (e.g. you operate a platform where users hold euro balances and crypto balances), look for synergies in compliance efforts. For instance, the own funds calculation you do for PSD2 can inform the capital planning for MiCA’s requirements – coordinate with advisors to ensure the highest required amount covers both (see the sketch after this paragraph). Training programs for staff can cover both anti-fraud (PSD2) and anti-market-manipulation (MiCA) aspects together, fostering a culture of compliance across the board.
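The “highest requirement wins” idea can be sketched as follows; the own-funds formulas here are simplified placeholders rather than the actual PSD2 or MiCA calculation methods, which depend on licence type and the services provided.

```python
# Illustrative "highest requirement wins" capital planning across regimes.
# Both formulas below are simplified stand-ins, not the statutory methods.

def psd2_own_funds(monthly_payment_volume_eur: float) -> float:
    # Placeholder proxy for a volume-scaled PSD2 own-funds method.
    return max(125_000.0, 0.005 * monthly_payment_volume_eur)

def mica_own_funds(fixed_overheads_eur: float, class_minimum_eur: float) -> float:
    # Placeholder proxy: higher of a class minimum or a quarter of fixed overheads.
    return max(class_minimum_eur, fixed_overheads_eur / 4)

required = max(psd2_own_funds(20_000_000), mica_own_funds(600_000, 150_000))
print(f"Hold at least EUR {required:,.0f} to cover both regimes")  # EUR 150,000
```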
Where the EBA has advised deprioritization of certain PSD2 provisions for CASPs (such as not focusing on IBAN and open banking requirements for stablecoin wallets), document that guidance and be prepared to discuss with auditors or examiners why certain PSD2 measures may not be relevant to your model. Essentially, maintain a compliance matrix mapping each requirement of both regimes to your internal controls, so nothing is overlooked and redundancies are minimized.

Prokopiev Law Group, powered by a global partner network, secures MiCA CASP license approvals, resolves PSD3 crypto overlap, delivers stablecoin EMT authorization, implements the EU Travel Rule, hardens DORA resilience, readies DAC8 crypto tax reporting, completes VASP registration, and obtains Dubai VARA licenses, MAS DPT licenses, Hong Kong SFC VASP clearance, DAO legal wrapper structuring, and other high-demand Web3 mandates, ensuring full compliance across the European Union, United Kingdom, United States, Switzerland, Liechtenstein, Singapore, Hong Kong, UAE (VARA and ADGM), Cayman Islands, and British Virgin Islands; write to us at prokopievlaw.com/contact for rapid, execution-ready guidance.

The information provided is not legal, tax, investment, or accounting advice and should not be used as such. It is for discussion purposes only. Seek guidance from your own legal counsel and advisors on any matters. The views presented are those of the author and not any other individual or organization. Some parts of the text may be automatically generated. The author of this material makes no guarantees or warranties about the accuracy or completeness of the information.
- The “GENIUS” Stablecoin Bill (Guiding and Establishing National Innovation for U.S. Stablecoins Act of 2025)
Introduction and Official Source

The Guiding and Establishing National Innovation for U.S. Stablecoins Act of 2025, commonly known as the GENIUS Act, is a proposed U.S. federal law to create a comprehensive regulatory framework for payment stablecoins. The bill’s official text (S.1582 in the 119th Congress) is available on Congress.gov. It defines “payment stablecoin” as a digital asset used for payments or settlement that is redeemable for a fixed monetary value (for example, one token redeemable for $1). The GENIUS Act aims to ensure that only fully-regulated and properly reserved stablecoins circulate in U.S. markets, establishing requirements for issuers, consumer protections, and oversight mechanisms. This report provides a detailed legal overview of the bill’s text, sponsorship, legislative status, and key provisions, followed by an analysis of its implications for stablecoin issuers, users, and regulators. (Official Bill Source: The full bill text and status can be found on Congress.gov.)

Key Provisions of the GENIUS Act of 2025

Scope and Definitions

The GENIUS Act applies to “payment stablecoins,” defined as digital assets intended for use as a means of payment or settlement that the issuer is obligated to redeem for a fixed value (e.g. one U.S. dollar). The definition explicitly excludes instruments that are already legal tender or bank deposits, that pay interest, or that are securities under existing law. In fact, the Act makes clear that a compliant payment stablecoin “is not a security” under the Securities Act, Securities Exchange Act, or Investment Company Act. It also later clarifies that stablecoins regulated under this framework are not commodities for regulatory purposes and are not insured deposits. By carving stablecoins out of securities and commodities definitions, the bill provides regulatory clarity – these digital tokens will be overseen under the new stablecoin-specific regime rather than by the SEC or CFTC as securities or commodities. The Act’s provisions generally cover USD-pegged stablecoins (though “monetary value” could include foreign currencies), and would require any stablecoin used by U.S. persons in commerce to be issued by an authorized entity under this law (or an equivalent foreign regime, as discussed below).

Permitted Issuers and Licensing Requirements

One of the cornerstone features of the GENIUS Act is that it restricts stablecoin issuance to regulated entities termed “permitted payment stablecoin issuers.” In essence, only a permitted issuer may issue a payment stablecoin for use by U.S. persons (subject to certain phase-ins and exceptions). The Act defines three categories of permitted issuers:

(A) Insured Depository Institution Subsidiaries: A subsidiary of a federally insured depository institution (i.e. a bank or credit union) that is approved to issue stablecoins. This allows traditional banks (through affiliates) to issue stablecoins under their existing bank regulatory framework.

(B) Federal Nonbank Stablecoin Issuers: A nonbank company that obtains a new federal license/charter to issue stablecoins (referred to as a “Federal qualified nonbank payment stablecoin issuer”). These nonbank issuers would be chartered and supervised by the Office of the Comptroller of the Currency (OCC), the primary regulator for national banks. The OCC is explicitly tasked with regulating nonbank stablecoin issuers under the Act’s federal regime.
(C) State-Qualified Stablecoin Issuers: A nonbank entity that is chartered or licensed under state law to issue stablecoins (for example, a state trust company or money transmitter that meets the Act’s standards), termed a “State qualified payment stablecoin issuer.” The Act permits a state-based regulatory option provided the state’s rules are “substantially similar” to the federal standards. Importantly, the state-based option is limited to issuers with $10 billion or less in outstanding stablecoin liabilities – larger issuers must operate under federal oversight.

All permitted issuers, whether federal or state, must be incorporated in the United States and be subject to regulation by the appropriate regulatory agency. In practice, this means a nonbank stablecoin provider can choose a regulatory path: either apply for a new OCC license (federal charter) or seek approval under a qualifying state regime (if it expects to remain relatively small in scale). A nonbank issuer that grows beyond $10 billion in stablecoin circulation will have to transition to the federal regime (OCC supervision) within a prescribed period, unless granted a waiver by the federal regulator. This ensures that larger stablecoin issuers with potential systemic impact are under federal supervision.

The Act lays out a licensing (approval) process for would-be issuers. An applicant must demonstrate it can meet all prudential requirements (capital, reserves, risk management, etc.). Notably, if a regulator does not act on an application within 120 days, the application is deemed approved by default. Any denial must be accompanied by a rationale, and applicants have the right to appeal denials. This provision is meant to prevent regulators from indefinitely stonewalling new entrants and to promote timely decision-making, thereby encouraging innovation and competition in stablecoin markets.

Regulatory jurisdiction: For bank-affiliated issuers, their existing federal banking regulator (e.g. Federal Reserve, FDIC, or OCC depending on the charter) will oversee stablecoin operations; for nonbank issuers under the federal option, the OCC is the primary regulator. State-licensed issuers remain under their state regulators, though federal authorities retain certain backup powers as discussed below. No matter the route, all permitted issuers face baseline requirements on reserves, liquidity, auditing, and consumer disclosure as mandated by the GENIUS Act.

Reserve Backing and Capital Standards

To protect the value of stablecoins and prevent runs, the GENIUS Act imposes stringent reserve requirements. Every payment stablecoin must be 100% backed by high-quality liquid assets on a one-to-one basis. In other words, for each $1 of stablecoin issued, at least $1 of eligible reserve assets must be held by the issuer at all times. The bill tightly defines “permitted reserves” to limit them to safe and liquid instruments, including: U.S. coins and currency, balances in insured bank accounts, short-term U.S. Treasury bills, Treasury repurchase agreements (repos) or reverse repos fully collateralized by Treasuries, shares in government money market funds, central bank reserves, and other similar government-issued assets approved by regulators. Riskier or illiquid assets (e.g. corporate debt, equities, exotic investments) cannot count toward reserve backing, preventing an issuer from backing a stablecoin with volatile or credit-risky instruments.
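A minimal sketch of the resulting 1:1 adequacy test, using the eligible-asset classes summarised above (asset labels and balances are hypothetical):

```python
# Illustrative 1:1 reserve adequacy check. Only eligible asset classes
# count toward backing; the portfolio and figures are made up.

from decimal import Decimal

ELIGIBLE_CLASSES = {
    "us_currency", "insured_bank_deposits", "short_term_treasuries",
    "treasury_repos", "govt_money_market_funds", "central_bank_reserves",
}

reserve_portfolio = [
    ("short_term_treasuries", Decimal("600000000")),
    ("insured_bank_deposits", Decimal("350000000")),
    ("corporate_bonds", Decimal("80000000")),   # ineligible - cannot count
]

outstanding_stablecoins = Decimal("1000000000")  # $1bn tokens in circulation

eligible_total = sum(v for cls, v in reserve_portfolio if cls in ELIGIBLE_CLASSES)
fully_backed = eligible_total >= outstanding_stablecoins
print(eligible_total, fully_backed)  # 950000000 False -> $50m shortfall
```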
This addresses past concerns where some stablecoins were found to hold risky reserve assets; under GENIUS, reserves must essentially be cash or cash equivalents with minimal credit and market risk. The Act also restricts how reserve assets can be used. Issuers generally cannot leverage or encumber reserves for speculative investments. Reserves may be used only for specified purposes, such as redeeming stablecoins on demand or serving as collateral in safe short-term financing transactions (like Treasury repos). This ensures the reserve assets remain liquid and available to meet stablecoin redemptions at all times.

Notably, stablecoin issuers would not be subject to traditional bank regulatory capital requirements that apply to depository institutions. The rationale is that because stablecoin issuers must hold $1-for-$1 reserves, the usual bank capital ratios (which allow fractional reserve banking) are not applicable – essentially the stablecoin issuer’s own “capital” is the full reserve backing itself. The Act instead directs federal and state regulators to issue tailored capital and liquidity rules specific to stablecoin issuers (likely ensuring resilience against operational risks or losses unrelated to reserve backing). In sum, the GENIUS framework creates a full-reserve model for stablecoins to protect coin holders, rather than the fractional reserve model of traditional banking.

Disclosure, Reporting, and Audits

Every permitted issuer must establish and publicly disclose its redemption policies (i.e. the terms and procedures for consumers to redeem stablecoins for cash). They are also required to issue regular reports on their outstanding stablecoin liabilities and the composition of their reserves. At a minimum, issuers must publish monthly reports detailing the total stablecoins in circulation and exactly what assets make up the reserves backing those coins. These reports provide ongoing transparency so that users and regulators can verify that stablecoins are fully collateralized at all times.

Crucially, the Act requires that management attestations and independent audits support these disclosures. The periodic reserve reports must be certified by the issuer’s executive officers (holding management accountable for accuracy) and “examined” by a registered public accounting firm. In addition, any issuer with more than $50 billion in stablecoins outstanding faces an even higher bar: it must submit annual audited financial statements to regulators. These audits would likely evaluate the issuer’s financial condition, internal controls, and reserve holdings in depth. The combination of management certification, third-party examination of reserve reports, and annual audits (for large issuers) creates multiple layers of assurance that a stablecoin issuer is honest and solvent. Consumer-facing note: the Act also requires that consumers be informed that stablecoins are not government-insured deposits – avoiding any false sense of FDIC protection – and presumably mandates other disclosures to ensure users understand the product they are using.

Supervision and Enforcement Powers

For federally regulated issuers (i.e. bank subsidiaries and OCC-chartered nonbanks), the same federal banking agencies that regulate their affiliated bank or that granted their charter will supervise the stablecoin activities.
For example, a stablecoin subsidiary of a national bank might be supervised by the OCC alongside the bank, whereas a nonbank stablecoin issuer with an OCC charter will be supervised directly by the OCC (with the Federal Reserve possibly overseeing any holding company). These regulators are tasked with monitoring the issuer’s financial condition, safety and soundness, and risk management systems in the context of stablecoin operations. All federal-regime issuers must file regular reports and submit to examinations by their regulator, just as depository institutions do.

Regulators are also armed with enforcement authority. If an issuer violates any requirement of the Act or any condition imposed in its charter/license, the regulator can order the issuer to stop issuing stablecoins or impose other enforcement actions (e.g. fines, cease-and-desist orders). In extreme cases, an issuer could effectively be shut down for non-compliance. This enforcement mechanism ensures that regulators can act swiftly to correct problems – for instance, if an issuer’s reserves fell short or if it engaged in unsafe practices, the agency could halt further issuance until the issues are remedied. These powers mirror the kind of prompt corrective action regulators have in banking supervision, adapted to stablecoins.

For state-qualified issuers (<$10B), the Act preserves primary oversight for state regulators: “State regulators would have supervisory, examination, and enforcement authority over all state issuers”. In other words, a company that opts for a state stablecoin license will answer mainly to the state banking or financial authority that chartered it. However, to ensure federal interests (like financial stability and monetary policy) are protected, the Act gives federal regulators certain backup authorities over state-regulated issuers. Specifically, a state may voluntarily cede supervisory responsibility to the Federal Reserve for a stablecoin issuer. Even if not ceded, the Federal Reserve or OCC can step in to take enforcement actions against a state issuer in “unusual and exigent” circumstances. This language suggests that if a state-regulated stablecoin were threatening broader financial stability or flagrantly violating the rules, federal authorities could intervene (similar to how the Fed has emergency powers in other contexts). Additionally, if a state-based issuer grows beyond the $10B threshold, it must come under joint federal/state supervision or shift to a federal charter, as noted earlier.

Anti-Money Laundering and Financial Integrity Measures

Stablecoins raise concerns about illicit finance, so the GENIUS Act contains AML (anti-money laundering) and counter-terrorism financing provisions. All permitted stablecoin issuers would be made explicitly subject to the Bank Secrecy Act (BSA), which is the primary U.S. AML law. This means issuers must implement know-your-customer (KYC) programs, monitor transactions for suspicious activity, file Suspicious Activity Reports (SARs) and Currency Transaction Reports (CTRs) as required, and comply with anti-money laundering and sanctions laws just like banks and money services businesses do. The Act in fact requires each issuer to certify that it has implemented an AML and sanctions compliance program as a condition of being licensed. Failure to maintain an adequate AML program could lead to enforcement action.
The law also directs the Financial Crimes Enforcement Network (FinCEN) – the bureau of the Treasury Department that oversees BSA compliance – to issue “tailored” AML rules for digital assets. FinCEN would be tasked with updating or refining AML regulations to address the unique characteristics of stablecoins and other digital assets, and to “facilitate novel methods…to detect illicit activity involving digital assets.” This could include guidance on blockchain analytics, information sharing, or new reporting requirements specific to crypto transactions. In essence, Congress is requiring regulators to modernize AML oversight tools to keep pace with stablecoin technology, recognizing that traditional methods must adapt (for example, leveraging the traceability of distributed ledgers while also managing privacy concerns). Furthermore, the Act imposes bad-actor bans: any individual who has been convicted of certain financial crimes is prohibited from serving as an officer or director of a stablecoin issuer.

Custody and Investor Protection Provisions

The GENIUS Act contains several provisions addressing the custody of stablecoins and reserve assets, as well as protections for stablecoin holders in the event of issuer insolvency. First, the law would allow regulated financial institutions to custody stablecoins and their reserves. Banks and credit unions are explicitly permitted to hold stablecoins in custody for customers and to hold reserve assets on behalf of stablecoin issuers. They are also permitted to use distributed ledger technology (blockchains) in their operations and even to issue “tokenized deposits” (essentially bank deposits represented as tokens on a blockchain) if they choose. These clarifications remove any legal doubt that banks can participate in the stablecoin ecosystem, which could foster integration of stablecoins into mainstream banking (e.g. banks offering stablecoin wallets or integrating stablecoin payments).

For any entity acting as a custodian of stablecoin assets or reserves – whether it is an issuer itself or a third-party custodian – the Act sets rules to protect customers. Notably, custodians are prohibited from commingling customer stablecoin funds with the custodian’s own assets. Customer assets must be segregated, which protects users if the custodian were to fail. (Commingling could put customer funds at risk of being tied up in the custodian’s bankruptcy; segregation ensures they remain identifiable and returnable to customers.) Limited exceptions might apply (e.g. pooling for operational efficiency), but generally segregation of reserves is required. Additionally, any stablecoin custodian must itself be a regulated entity – either a federally or state regulated bank or a registered securities/capital markets entity (like a trust company or broker-dealer) regulated by the SEC or CFTC. This ensures that companies holding the reserves (or the tokens in custody for users) are subject to oversight and examinations regarding their safeguarding of those assets.

Perhaps one of the most important consumer protections in the Act is its treatment of stablecoin holders’ claims in bankruptcy. The law provides that stablecoin holders have priority over all other creditors in claims against the issuer’s reserve assets. In practical terms, if a stablecoin issuer were to go bankrupt or be liquidated, the customers who hold the stablecoin tokens get first claim on the reserve funds backing those tokens, before any other debts of the company are paid.
This is a powerful protection: it means the collateral truly belongs to the token holders and cannot be grabbed by, say, the issuer’s other creditors. It greatly increases the likelihood that stablecoin holders can recover their money even if the issuer fails, essentially making them senior secured creditors with respect to the reserve. The Act also updates the bankruptcy code as needed to enforce this priority rule. By giving stablecoin users a senior claim, the Act addresses the scenario of a stablecoin “run” – users will know they are first in line for repayment, which should reduce panic and run incentives during stress.

Finally, as noted earlier, the Act explicitly states that stablecoins are not insured by the federal government. This means that if a stablecoin fails and somehow reserves were insufficient (which should not happen under a 100% reserve mandate, but theoretically could if there were fraud or losses), holders cannot claim FDIC deposit insurance or other federal insurance – they rely on the issuer’s assets. Requiring disclosure of this fact is an important consumer-awareness measure to prevent any misunderstanding of stablecoins as risk-free insured deposits. Consumers must rely on the regulatory framework, the reserve backing, and the issuer’s soundness for protection, rather than an insurance safety net.

Foreign Issuers and International Considerations

Recognizing the global nature of crypto markets, the GENIUS Act also addresses foreign stablecoin issuers and cross-border usage. Within three years of the Act’s enactment, it would become unlawful to offer or sell a payment stablecoin to U.S. persons except by a permitted (U.S.-regulated) issuer. This effectively phases out unregulated stablecoins from the U.S. market – even foreign-based stablecoins (like those issued overseas) must either become compliant or exit the U.S. consumer market. However, the Act gives the U.S. Treasury, in consultation with other regulators, authority to establish “reciprocal agreements” with foreign jurisdictions that have comparable regulatory standards for stablecoins. If a foreign jurisdiction has a robust oversight regime similar to the GENIUS Act, Treasury can deem it comparable and set up an agreement to allow that jurisdiction’s regulated stablecoins to be offered in the U.S. without each foreign issuer needing a separate U.S. license. This is akin to passporting or mutual recognition.

Even when foreign stablecoins are allowed, the Act imposes additional safeguards. Any foreign stablecoin permitted for U.S. use must have the technical capability to freeze transactions and to comply with lawful orders. In practice, this means truly decentralized or ungovernable stablecoins would not qualify – the issuer or controlling entity must be able to block illicit transactions or seize tokens when required by regulators (e.g. to enforce sanctions or court orders). Additionally, foreign issuers that want their stablecoins used in the U.S. must register with the OCC and submit to ongoing supervision, and they must hold a portion of their reserves in U.S. financial institutions sufficient to satisfy U.S. redemption requests. This latter requirement ensures that U.S. holders of a foreign stablecoin can redeem locally without depending entirely on foreign banks. It also gives U.S. regulators some leverage (since those reserves in the U.S. can be overseen or frozen if needed).
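Combining the conditions summarised above, a foreign-stablecoin eligibility gate might be sketched as follows; the parameter names are assumptions for illustration, not the Act’s statutory language.

```python
# Illustrative gate for a foreign stablecoin seeking U.S. market access,
# combining the conditions described in this report. Not statutory text.

def foreign_stablecoin_eligible(home_regime_deemed_comparable: bool,
                                can_freeze_and_comply_with_orders: bool,
                                registered_with_occ: bool,
                                us_held_reserves: float,
                                expected_us_redemptions: float) -> bool:
    return (home_regime_deemed_comparable          # Treasury reciprocity finding
            and can_freeze_and_comply_with_orders  # technical freeze/seizure ability
            and registered_with_occ                # ongoing U.S. supervision
            and us_held_reserves >= expected_us_redemptions)  # local redemption buffer

print(foreign_stablecoin_eligible(True, True, True, 2.0e9, 1.5e9))  # True
```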
The Act gives the Treasury Secretary (along with other agencies) flexibility to waive certain requirements for foreign issuers and for digital asset intermediaries that deal in foreign stablecoins, if such waivers are in the U.S. interest. This could allow, for example, transitional arrangements or special cases if a strict application of the rules would disrupt markets. Overall, these foreign-issuer provisions aim to extend the regulatory perimeter internationally – encouraging other countries to implement similar standards and preventing the U.S. from becoming a haven for unregulated stablecoins, or conversely, preventing circumvention of U.S. rules via offshore entities.

Other Notable Provisions

In addition to the core elements above, the GENIUS Act contains a few other notable legal provisions:

Executive Officials and Conflicts of Interest: Partly in response to concerns about high-level conflicts of interest (the bill drew scrutiny over the potential involvement of senior political figures in crypto ventures), the final Senate version added a provision affirming that existing federal ethics laws prohibit senior government officials (e.g. the President, Cabinet members) from issuing stablecoins.

Federal Implementation Timeline: The Act directs that the regulatory framework be put into effect within one year of enactment. This includes agencies promulgating all necessary regulations through notice-and-comment rulemaking. A one-year implementation deadline is an aggressive timetable, reflecting Congress’s desire to quickly stand up oversight in a fast-moving crypto market. Regulators would need to coordinate and issue rules on licensing procedures, capital levels, examination guidelines, disclosure formats, etc., by that deadline.

Continuation of Existing Authority: Until the new rules are in place, the Act does not immediately outlaw existing stablecoin activity. It implicitly allows current stablecoin issuers operating under state money transmitter laws or other exemptions to continue during the interim. Likewise, the Office of the Comptroller of the Currency’s prior guidance that national banks may handle stablecoins remains effective until superseded. This avoids a disruption where stablecoins would suddenly become illegal before a pathway to compliance exists.

The information provided is not legal, tax, investment, or accounting advice and should not be used as such. It is for discussion purposes only. Seek guidance from your own legal counsel and advisors on any matters. The views presented are those of the author and not any other individual or organization. Some parts of the text may be automatically generated. The author of this material makes no guarantees or warranties about the accuracy or completeness of the information.
- European Securities and Markets Authority (ESMA) Final Report – Guidelines on supervisory practices for competent authorities to prevent and detect market abuse under the MiCAR
ESMA’s new guidelines stem directly from Article 92(3) of MiCA, which obliges the authority to harmonise how national competent authorities (NCAs) combat market abuse in crypto-asset markets by 30 June 2025. The final report, published on 29 April 2025, therefore fills a regulatory gap that MiCA itself created: the regulation outlawed insider dealing, unlawful disclosure and manipulation involving crypto-assets, yet supervisory cultures and data capabilities still vary widely across the EU.

1. Why ESMA issued these Guidelines

Legal mandate – Article 92(3) MiCA instructs ESMA to publish, by 30 June 2025, guidelines that harmonise how national competent authorities (NCAs) prevent and detect market abuse in crypto-asset markets.

Regulatory gap – MiCA extends market-abuse rules to crypto, but supervisory cultures, tooling and data availability still differ widely across Member States.

Objective – Provide a common, risk-based supervisory framework that builds on experience under the Market Abuse Regulation (MAR) while addressing crypto-specific threats such as maximal extractable value (MEV), token-supply manipulation and the outsized role of social-media hype.

2. Scope, status and implementation timetable

Addressees – All EU NCAs (Article 3(1)(35) MiCA).

Subject matter – Supervision of insider dealing, unlawful disclosure and market manipulation involving crypto-assets (Title VI MiCA, Arts 86-92).

Entry into force – 3 months after the multilingual version appears on ESMA’s website.

Comply-or-explain – Within 2 months of publication, each NCA must notify ESMA whether it will (i) comply, (ii) comply later, or (iii) not comply, giving reasons. ESMA will publish the list.

Relationship to RTS – Complements the forthcoming Regulatory Technical Standards on Suspicious Transaction or Order Reports (STORs) required under Art 92(2) MiCA.

3. Guiding principles and cross-cutting themes

Proportionality (Guideline 1) – Supervisory intensity must reflect the scale, complexity and risk of local crypto-markets and actors.

Risk-based & forward-looking approach (Guideline 2) – NCAs should continuously scan for emerging abuse typologies (e.g., on-chain front-running, algorithmic manipulation) and adapt oversight swiftly.

Leverage existing MAR know-how (Guideline 3) – Before inventing new controls, map current MAR surveillance to crypto contexts and plug the gaps (e.g., add MEV detection).

Build a common EU supervisory culture (Guideline 4) – Systematic peer-exchange of cases, data and best practices via ESMA working groups; potential ESMA convergence tools where divergent practices persist.

Adequate & specialised resources (Guideline 5) – Dedicated crypto teams, data scientists and bespoke tooling; tap initiatives such as the EU Supervisory Digital Finance Academy for staff up-skilling.

Stakeholder dialogue (Guideline 6) – Maintain open channels with industry, academia, tech providers and public-interest groups to anticipate new threats and co-design solutions.

Market-integrity outreach (Guideline 7) – NCAs should run public-education campaigns, issue Q&As and encourage voluntary best practices (e.g., issuer insider-lists, platform user warnings).

4. Operational supervision and enforcement tools

Market monitoring (Guideline 8) – Adopt data-driven surveillance combining on-chain, off-chain and cross-market feeds; supplement automated scans (pattern/keyword) with human analysis; include social-media, blogs, newsletters and podcasts where they influence prices.
Oversight of PPAETs* (Guideline 9) – Ensure Persons Professionally Arranging or Executing Transactions maintain effective, continuously reviewed abuse-detection systems; apply proportionate intensity (e.g., full trading-venue vs. order-transmission CASPs).

STOR handling (Guideline 10) – Put in place clear internal workflows that assign responsibilities, grade severity/recurrence, and ensure timely follow-up; the response must be proportionate to the threat detected.

ESMA coordination (Guideline 11) – Seek ESMA-led joint inspections/investigations in cross-border cases involving multiple NCAs, risk of conflicting actions, or undue burden on firms.

Third-country obstacles (Guideline 12) – Alert ESMA and peers when non-EU trading flows, legal barriers or uncooperative foreign authorities hamper abuse detection; strive for a common supervisory stance toward such obstacles.

*PPAET = firms (often CASPs) that professionally arrange or execute crypto transactions.

5. Crypto-specific risk considerations woven into the Guidelines

MEV & front-running – Recognised as potential insider dealing/market manipulation vectors that require bespoke detection logic.

Token mechanics – Sudden changes in supply, reserve backing (for asset-referenced/stablecoins) or governance decisions can be exploited for price manipulation.

Social-media virality – Higher risk of pump-and-dump or misinformation campaigns; NCAs are urged to monitor high-reach accounts and coordinated posting patterns.

Cross-border trading & DeFi – Surveillance must cover platforms and liquidity venues outside the EU; data-gathering may depend on blockchain analytics and cooperation agreements.

6. Stakeholder feedback and ESMA’s adjustments

The Securities & Markets Stakeholder Group (SMSG) broadly endorsed the draft but asked for stronger emphasis on NCA staffing, training and consumer-protection links. ESMA’s response (Annex II): added explicit encouragement for dedicated crypto resources and ongoing training (Guideline 5); suggested voluntary dialogue with other authorities (consumer-protection, AML) under Guideline 4, while noting legal-basis constraints; and reaffirmed proportionality so smaller markets are not over-burdened.

7. Next steps for NCAs and industry

Translation & publication – ESMA will release the final Guidelines in all EU languages.

NCA notices – Within 2 months, each NCA must file its comply/intentions statement. ESMA will disclose any non-compliance publicly.

Practical adoption – NCAs integrate the Guidelines into national frameworks; market participants (especially PPAETs and trading-venue CASPs) should align internal surveillance and STOR processes accordingly.

8. Key take-aways for market participants

Expect more uniform and data-intensive surveillance across the EU, including scrutiny of your social-media communications. STOR obligations will soon follow detailed RTS templates; start enhancing your detection logic now (on-chain & off-chain reconciliation, MEV scenarios, cross-asset signals). Prepare for dialogue with supervisors – NCAs will actively seek feedback on new risks and may issue guidance or best-practice checklists. Cross-border models (routing flow to non-EU venues, DeFi aggregators) may attract heightened attention where they frustrate EU supervision.

Prokopiev Law Group is a forward-thinking Web3 legal consultancy that bridges breakthrough blockchain ideas with the world’s fast-moving regulatory frameworks.
Operating from Kyiv yet connected to a network that spans more than 50 jurisdictions, the firm delivers end-to-end support—from incorporating and structuring crypto ventures to drafting DAO governance, token-sale and data-protection documentation, and securing the licences and cross-border compliance investors expect under regimes such as MiCA. Its lawyers pair deep, specialised expertise with clear, business-oriented advice, letting founders and funders focus on building while PLG shoulders the legal heavy lifting. The information provided is not legal, tax, investment, or accounting advice and should not be used as such. It is for discussion purposes only. Seek guidance from your own legal counsel and advisors on any matters. The views presented are those of the author and not any other individual or organization. Some parts of the text may be automatically generated. The author of this material makes no guarantees or warranties about the accuracy or completeness of the information.
- ESMA Guidelines on the Conditions and Criteria for the Qualification of Crypto-Assets as Financial Instruments
The Guidelines apply to competent authorities, financial market participants, and any individual or entity engaged in crypto-asset activities. They seek to clarify how Article 2(5) of MiCA should be applied when determining whether a crypto-asset qualifies as a financial instrument. They enter into force sixty days after their publication in all official EU languages on ESMA’s website, which took place on March 19, 2025.

Legislative References, Abbreviations, and Definitions

Underpinning these Guidelines are several core pieces of legislation. The most central of these are MiFID II (Directive 2014/65/EU), AIFMD (Directive 2011/61/EU), MiCA (Regulation (EU) 2023/1114), UCITSD (Directive 2009/65/EC), the Money Market Fund Regulation (Regulation (EU) 2017/1131), and the ESMA Regulation (Regulation (EU) No 1095/2010). They also make reference to DLTR (Regulation (EU) 2022/858), which governs the pilot regime for distributed ledger technology. Relevant abbreviations include AIF for Alternative Investment Fund, ART for Asset-Referenced Token, CASP for Crypto-Asset Service Provider, DLT for Distributed Ledger Technology, EMT for Electronic Money Token, and NFT for Non-Fungible Token.

Classification of Crypto-Assets as Transferable Securities (Guideline 2)

To determine whether a crypto-asset qualifies as a transferable security, it is important to verify whether the crypto-asset grants rights equivalent to those attached to shares, bonds, or other forms of securitised debt. The text of MiFID II (Article 4(1)(44)) underpins this assessment. According to the Guidelines, three main criteria must be cumulatively fulfilled. First, a crypto-asset must not be an instrument of payment, so if its sole use is as a medium of exchange, it would not qualify as a transferable security. Second, the crypto-asset must belong to a “class” of securities, meaning that it confers the same rights and obligations on all holders or else belongs to a distinct class within the issuance. Third, it must be “negotiable on the capital market,” which generally means that it can be freely transferred or traded, including on trading platforms equivalent to those covered by MiFID. If all these points are satisfied, then the crypto-asset should be classified as a transferable security and treated under the same rules that govern traditional instruments.

Classification as Other Types of Financial Instruments

Money-Market Instruments (Guideline 3)

A crypto-asset that would be considered a money-market instrument must normally be traded on the money market and should not serve merely as an instrument of payment. The crypto-asset should exhibit features akin to short-term negotiable debt obligations, such as treasury bills or commercial paper, and typically have a short maturity or a fixed redemption date. An example might be a token representing a certificate of credit balance repayable within a short timeframe, though it must be clearly distinguishable from mere payment tools.

Units in Collective Investment Undertakings (Guideline 4)

A crypto-asset qualifies as a unit or share in a collective investment undertaking if it involves pooling capital from multiple investors, follows a predefined investment policy managed by a third party, and pursues a pooled return for the benefit of those investors. The focus is on whether participants lack day-to-day discretion over how the capital is managed and whether the project is not purely commercial or industrial in purpose.
An example would be a token representing ownership in a fund-like structure that invests in a portfolio of other digital or traditional assets; if it meets the criteria from the existing definitions in AIFMD and UCITSD (excluding pure payment or operational tools), it may be deemed a collective investment undertaking.

Derivative Contracts (Guideline 5)

The Guidelines recognize two broad scenarios for derivatives: crypto-assets can serve as the underlying asset for a derivative contract, or they can themselves be structured as derivative contracts. In both cases, reference must be made to Annex I Section C (4)-(10) of MiFID II, which identifies features such as a future commitment (forward, option, swap, or similar) and a value derived from an external reference point, such as a commodity price, interest rate, or another crypto-asset. Whether the derivative settles in fiat or crypto is not decisive if the essential characteristics of a derivative are present. This includes perpetual futures or synthetic tokens that track an index or basket of assets, provided they fit into one of MiFID II’s derivative categories.

Emission Allowances (Guideline 6)

A crypto-asset may be considered an emission allowance if it represents a right to emit a set amount of greenhouse gases recognized under the EU Emissions Trading Scheme, in line with Directive 2003/87/EC. If the token is interchangeable with official allowances and can be used to meet compliance obligations, it should then be regulated under MiFID II as an emission allowance. On the other hand, self-declared carbon credits or voluntary offsets that are not recognized by EU authorities do not fall under this category.

Background on the Notion of Crypto-Assets

Classification as Crypto-Assets (Guideline 7)

The Guidelines reiterate that a crypto-asset, in general, is a digital representation of value or rights, transferred and stored via DLT. If it cannot be transferred beyond the issuer or if it is purely an instrument of payment, it typically falls outside the scope of these financial-instrument rules. Moreover, the fact that a holder anticipates profit from a token’s appreciation is not by itself sufficient to qualify the token as a financial instrument.

Crypto-Assets That Are Unique and Non-Fungible (NFTs) (Guideline 8)

Non-fungible tokens, which are unique and not interchangeable with each other, are excluded from MiCA provided they genuinely fulfill the requirement of uniqueness. This means having distinctive characteristics or rights that cannot be matched by any other asset in the same issuance. Merely assigning a unique technical identifier to each token is not enough to establish non-fungibility if the tokens effectively grant identical rights and are indistinguishable in economic reality. Fractionalizing an NFT into multiple tradable pieces typically renders those fractional parts non-unique unless each part has distinct attributes of its own.

Hybrid Crypto-Assets (Guideline 9)

Some tokens combine features typical of multiple crypto-asset categories, such as partial investment features (like profit participation) alongside a utility function (like access to a digital service). If, on closer assessment, any component of the token fits the definition of a financial instrument under MiFID II, the financial instrument classification applies, taking precedence over other labels.
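As a simplified, non-authoritative sketch of this precedence rule and the cumulative transferable-security criteria from Guideline 2 (real assessments are legal judgments, not boolean checks):

```python
# Simplified sketch of the substance-over-form test described above:
# financial-instrument features take precedence over "utility" labels.
# The booleans stand in for what is, in practice, detailed legal analysis.

def is_transferable_security(is_payment_instrument: bool,
                             belongs_to_a_class: bool,
                             negotiable_on_capital_market: bool) -> bool:
    # Guideline 2: the three criteria are cumulative.
    return (not is_payment_instrument
            and belongs_to_a_class
            and negotiable_on_capital_market)

def classify_hybrid_token(is_payment_instrument: bool, belongs_to_a_class: bool,
                          negotiable_on_capital_market: bool,
                          has_utility_function: bool) -> str:
    if is_transferable_security(is_payment_instrument, belongs_to_a_class,
                                negotiable_on_capital_market):
        return "financial instrument (MiFID II) - takes precedence"
    return "crypto-asset under MiCA" + (" (utility features)" if has_utility_function else "")

# A profit-sharing token that is class-based and freely tradable:
print(classify_hybrid_token(False, True, True, has_utility_function=True))
# financial instrument (MiFID II) - takes precedence
```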
The Guidelines thus underline that hybrid tokens must be evaluated under a substance-over-form approach, with a focus on their actual rights, obligations, and economic features rather than how the issuer labels them.

Conclusion

Taken as a whole, the Guidelines demonstrate ESMA’s intention to ensure that all tokens conferring rights equivalent to conventional financial instruments are appropriately supervised under MiFID II. Although labels such as “utility” or “NFT” may be used by issuers, the ultimate question is whether the token’s real-world function and associated rights align with those of a security, a derivative, or another regulated category. By following this approach, authorities and market participants can maintain consistent, technology-neutral regulation in the fast-evolving crypto-asset space.

Prokopiev Law Group stays at the forefront of Web3 compliance and regulatory intelligence, offering strategic support across NFT legal solutions, DAO governance, DeFi compliance, token issuance, crypto KYC, and smart contract audits. Leveraging a broad global network of partners, we ensure your project meets evolving regulations worldwide, including in the EU, US, Singapore, Switzerland, and the UK. If you want tailored guidance to protect your interests and remain future-proof, write to us for more information.

Reference: Guidelines on the conditions and criteria for the qualification of crypto-assets as financial instruments

The information provided is not legal, tax, investment, or accounting advice and should not be used as such. It is for discussion purposes only. Seek guidance from your own legal counsel and advisors on any matters. The views presented are those of the author and not any other individual or organization. Some parts of the text may be automatically generated. The author of this material makes no guarantees or warranties about the accuracy or completeness of the information.
- E-Money and Electronic Money Tokens (EMTs)
How do electronic money (e-money) and electronic money tokens (EMTs) differ, and what are the regulatory frameworks governing them within the European Economic Area (EEA)?

Definition and Regulation of E-Money Tokens (EMTs)

E-Money Tokens (EMTs): EMTs are a specific type of crypto-asset, their value typically pegged to a single fiat currency such as the euro or US dollar. These crypto-assets represent digital value or rights that can be transferred and stored electronically through distributed ledger technology (DLT) or similar systems. DLT operates as a synchronized information repository shared across multiple network nodes.

Regulatory Framework: The Markets in Crypto-Assets Regulation (EU) 2023/1114 (MiCA) outlines stringent conditions for the issuance of EMTs. Key points include: EMTs can only be issued by credit institutions or electronic money institutions (EMIs) regulated by an EEA regulator. MiCA came into effect in June 2023 and will be fully applicable from December 30, 2024.

Issuer Obligations Under MiCA:

Prudential, Organizational, and Conduct Requirements: Issuers must adhere to specific prudential standards, organizational requirements, and business conduct rules, including: issuing EMTs at par value; granting holders redemption rights at par value; and a prohibition on granting interest on EMTs.

White Paper Requirements: Issuers are mandated to publish a white paper with detailed information such as: issuer details (name, address, registration date, parent company (if applicable), and potential conflicts of interest); EMT specifics (name, description, and details of developers); public offer details (total number of units offered); rights and obligations (redemption rights and complaints handling procedures); the underlying technology; and associated risks and mitigation measures.

Significant e-money tokens (EMTs) are subject to higher capital requirements and enhanced oversight by the European Banking Authority (EBA). Significant EMTs are defined as those which can scale up significantly, potentially impacting financial stability, monetary sovereignty, and monetary policy within the EU. The EBA mandates that issuers of significant EMTs hold additional capital reserves. Specifically, significant issuers must maintain capital that is the higher of €2 million or 3% of the average reserve assets; a worked example of this calculation appears below. The EBA monitors these issuers closely, requiring detailed reports on their financial health and risk management practices. Issuers of significant EMTs must also adhere to comprehensive reporting obligations. They need to provide regular updates on their liquidity positions, stress testing results, and compliance with redemption obligations.

Definition and Regulation of Electronic Money

Electronic Money (E-Money): E-money is defined as electronically or magnetically stored monetary value representing a claim on the issuer. Its characteristics include: issued upon receipt of funds for the purpose of making payment transactions; accepted by entities other than the issuer; and not excluded by Regulation 5 of the European Communities (Electronic Money) Regulations 2011 (EMI Regulations).

Exclusions Under Regulation 5: The EMI Regulations exclude monetary value stored on specific payment instruments with limited use, as well as monetary value used for specific payment transactions by electronic communications service providers.

Electronic Money Institutions (EMIs): An EMI is an entity that has been authorized to issue e-money under the EMI Regulations; such authorization is necessary for any e-money issuance within the EEA.
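For illustration, here is the significant-EMT own-funds figure cited above as a worked calculation; the reserve amounts are hypothetical, and the thresholds should be confirmed against the MiCA text and EBA technical standards.

```python
# Worked example of the significant-EMT capital figure cited above:
# the higher of EUR 2 million or 3% of average reserve assets.
# Reserve amounts are hypothetical.

def significant_emt_own_funds(avg_reserve_assets_eur: float) -> float:
    return max(2_000_000.0, 0.03 * avg_reserve_assets_eur)

print(significant_emt_own_funds(500_000_000))   # 15,000,000.0 -> the 3% leg binds
print(significant_emt_own_funds(50_000_000))    # 2,000,000.0  -> the floor binds
```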
Comparative Analysis of E-Money and EMTs

Definition: E-money is electronically stored monetary value represented by a claim on the issuer; EMTs are crypto-assets whose value is usually linked to a single fiat currency.

Issuers: E-money is issued by EMIs upon receipt of funds for making payment transactions; EMTs are issued by EMIs and/or credit institutions.

Legal regime: E-money is governed by the European Communities (Electronic Money) Regulations 2011; EMTs are governed by MiCA.

Status: E-money is not necessarily an EMT, but can be, depending on how it is transferred and stored; all EMTs are also considered e-money.

To ensure compliance with the latest regulations and navigate the Web3 legal landscape, please contact Prokopiev Law Group. Our expertise in cryptocurrency law, smart contracts, and regulatory compliance, combined with our extensive global network of partners, guarantees that your business adheres to both local and international standards.

The information provided is not legal, tax, investment, or accounting advice and should not be used as such. It is for discussion purposes only. Seek guidance from your own legal counsel and advisors on any matters. The views presented are those of the author and not any other individual or organization. Parts of the text may be automatically generated. The author of this material makes no guarantees or warranties about the accuracy or completeness of the information.