GDPR Compliance in Blockchain: Analysis of EDPB Guidelines 02/2025
Blockchain (a type of distributed ledger) is designed as a distributed, tamper-resistant, and transparent data structure with no centralized control. Transactions are grouped into blocks cryptographically chained together, and a consensus mechanism (e.g. proof of work or proof of stake) ensures all nodes agree on the valid state. These properties—decentralization (many participants replicate data), immutability (once recorded, data cannot be altered without detection), and transparent access to ledger data—present novel compliance challenges under the GDPR. For example, once personal data is written to an append-only blockchain, it cannot be individually deleted or modified without undermining the ledger’s integrity. The lack of a centralized controller and the presence of numerous independent participants mean there is no single entity that can easily remove or edit data across all copies of the ledger. This immutability conflicts with GDPR requirements like the right to erasure and rectification, and calls into question how to enforce storage limitation on data that might persist for the life of the blockchain. The decentralized governance of many blockchains also complicates accountability: participants may be in different jurisdictions and not bound by a common contract, making it difficult to ensure that personal data is used only for the specified purpose and not further processed incompatibly. Additionally, blockchains often make transaction data visible (at least to participants or even publicly in open chains), so any personal data (or identifiers) stored on-chain are broadly accessible, raising concerns under the principles of confidentiality and transparency.

Many blockchains support smart contracts, which are self-executing programs recorded on-chain. Smart contracts can embed personal data in their code or transactions and can autonomously trigger actions affecting individuals. The EDPB notes that execution of smart contracts might amount to solely automated decision-making with legal effects, thus invoking GDPR Article 22 – requiring human intervention and contestation rights if such decisions significantly affect data subjects.

In summary, fundamental blockchain features like decentralization and immutability provide integrity and availability guarantees through cryptography and consensus, but they inherently clash with certain GDPR principles and rights. The guidelines stress that these technological traits do not exempt controllers from GDPR compliance: they must instead be addressed through careful design and governance measures. Controllers should thoroughly assess whether a blockchain solution is appropriate and can be configured to meet data protection requirements before using it for personal data processing.

Personal Data in Blockchain Systems

A critical step is determining what data in a blockchain constitutes personal data. Blockchains record transactions that typically include a set of metadata (e.g. timestamps, cryptographic public keys or addresses, and other technical details) and a payload (the content of the transaction). Even if the blockchain uses pseudonymous identifiers (like hashed public keys that appear as random strings), these can qualify as personal data if they relate to an identifiable individual.
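As a minimal, hypothetical sketch (not taken from the guidelines), the snippet below illustrates why an address derived from a public key is only a pseudonym: whoever holds an off-chain mapping, for example a wallet provider’s KYC table, can re-identify the person behind it. The names, keys, and table are illustrative assumptions.

```python
import hashlib

def address_from_public_key(public_key: bytes) -> str:
    # Derive a blockchain-style address by hashing the public key. The result
    # looks like a random string (a pseudonym), but it is stable and linkable.
    return hashlib.sha256(public_key).hexdigest()[:40]

# Hypothetical off-chain record held by a wallet provider or dApp operator.
# As long as a mapping like this exists anywhere, the on-chain address is
# pseudonymous personal data rather than anonymous data.
alice_pubkey = bytes.fromhex("04a34b99f22c790c4e36b2b3c2c35a36db06226e")
alice_address = address_from_public_key(alice_pubkey)
kyc_records = {alice_address: {"name": "Alice Example", "country": "DE"}}

def is_identifiable(onchain_address: str) -> bool:
    # Identification "by means reasonably likely to be used": here, a simple
    # join between the public pseudonym and an off-chain KYC table.
    return onchain_address in kyc_records

print(is_identifiable(alice_address))  # True while the off-chain link exists
```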
The EDPB confirms that a public key or account address will be considered personal data whenever it can be linked to a natural person by means “reasonably likely to be used” (for instance, by correlation in the event of a data breach or through off-chain data). In many blockchain networks, participants’ addresses are always visible to all nodes to allow transaction validation, and even if they do not directly reveal a name, they may be indirectly identifiable (especially when combined with external information or if a user reuses an address). Thus, seemingly anonymous on-chain identifiers are often pseudonymous personal data, falling under GDPR scope. Moreover, the payload data itself can include personal data. For example, a transaction might encode an individual’s name, a contract with personal terms, or references to documents about a person. Smart contract interactions could record details about a user’s activities or assets, and even if the smart contract’s code is public, any personal data written into the contract’s state or logs would be stored on-chain. The guidelines emphasize that any personal data stored on-chain – whether in transaction fields, smart contract storage, or account balances – is subject to GDPR. Additionally, when individuals interact with blockchain systems, off-chain data processing often occurs in the surrounding ecosystem. For instance, a user’s blockchain wallet software or a decentralized application (dApp) may collect device identifiers or IP addresses when connecting to the network, or third-party blockchain explorers may log user queries – all of which can be personal data even though they are not recorded on the chain itself. These auxiliary data flows must also be accounted for by controllers.

Storing personal data on-chain is inherently risky from a data protection perspective. The EDPB’s general rule is that controllers should avoid putting personal data on the blockchain unless strictly necessary, because doing so makes it difficult to comply with core principles. In particular, storing personal data in plaintext (unencrypted) form on a public ledger is considered highly problematic and “strongly discouraged” due to conflicts with Article 5 principles like data minimization, purpose limitation, and storage limitation. If personal data must be processed in a blockchain use-case, the guidelines advocate for data protection by design strategies to minimize on-chain data: e.g. keeping personal data off-chain and only referencing it via pseudonymous pointers or cryptographic hashes on-chain. By limiting the exposure of personal details on the immutable ledger, controllers can maintain greater control – off-chain data can be modified or deleted when needed, while the blockchain only stores a non-identifying commitment or link. Several advanced privacy-preserving techniques (discussed later) enable this kind of separation. The EDPB also notes emerging zero-knowledge proof architectures, where cryptographic proofs allow validation of transactions or identities without revealing the underlying personal data on-chain. Such approaches (often termed “zero-knowledge blockchains”) illustrate that it is possible to leverage blockchain features while drastically reducing on-chain personal data disclosure, though they require complex cryptographic implementations.

Roles and Responsibilities of Blockchain Participants

GDPR compliance requires clearly defining the roles of all actors involved in processing.
In blockchain networks, the decentralized governance model leads to a multiplicity of stakeholders (e.g. node operators, miners/validators, developers of a blockchain platform or smart contracts, users submitting transactions, etc.), which makes assigning the traditional roles of controller and processor complex. The EDPB reaffirms that using a particular technology or a distributed setup does not remove the need to identify responsible parties under GDPR. In fact, controllers must perform a careful, case-by-case analysis of who determines the purposes and means of each personal data processing operation on the blockchain. The standard GDPR definitions apply: a controller is the entity (alone or jointly with others) that decides why and how personal data are processed, whereas a processor processes personal data on behalf of a controller (under its instruction). These roles must be mapped onto the specific blockchain architecture and governance structure in question. The guidelines stress that this allocation should follow the factual influence and decision-making power of actors, in line with the accountability principle. Relevant factors include the nature of the service or application using the blockchain, the governance and rules of the blockchain network, the technical capacity of participants to influence data processing, and any contractual arrangements between parties. Permissioned vs. permissionless blockchain design has a significant impact on roles. In a permissioned blockchain (closed or consortium network), a designated authority or a consortium of entities controls who can participate as a node. This setup inherently offers a clearer allocation of responsibilities , since the governing entity (or entities) can be identified and held accountable as controllers for the on-chain processing. The EDPB explicitly recommends that organizations favor permissioned blockchains for personal data use, given that a central managing party can enforce compliance and there is less ambiguity about who is responsible. Only if there are well-justified reasons why a permissioned model is not feasible should controllers explore more decentralized alternatives – and even then, they should question whether using blockchain at all is appropriate if it impedes accountability. By contrast, in public permissionless blockchains , anyone can become a node and the network is maintained by a diffuse community without a single point of control. Here, participants are not all equal in function, and their GDPR roles depend on their activities and influence. The guidelines explain that in some blockchain systems, individual nodes might have a very limited role : for example, merely relaying and validating transactions against fixed rules without exercising meaningful discretion. If a node simply follows the protocol (e.g. checks digital signatures, ensures transactions meet technical criteria, and includes them in blocks without any preference or purpose of its own), that node might not be considered a controller of personal data, since it is not deciding the purpose or essential means of processing but acting in a minimal, automatic capacity. (Likewise, such a node is generally not a processor either, because it’s not acting on behalf of a known controller under instructions – it’s just participating in a distributed consensus.) On the other hand, many nodes in permissionless networks do exert influence over personal data processing. 
For instance, miners/validators choose which pending transactions to include in a block, can decide to fork the blockchain, or otherwise change how data is processed for their own objectives. In doing so, they may effectively determine how and when certain personal data is recorded on the ledger (which transactions get confirmed) and potentially even why (if, for example, they prioritize certain transactions for economic gain or policy reasons). The EDPB states that such nodes can qualify as controllers or joint controllers if they “exercise a decisive influence” over the purposes and means of the processing. This could be individually (a miner deciding a transaction’s inclusion, affecting that data subject’s processing) or collectively (nodes agreeing to alter the protocol rules, which changes how all personal data is handled). Importantly, in truly decentralized scenarios, nodes are not acting “on behalf” of another entity; they pursue their own goals within the network, so they cannot be considered mere processors taking instructions. The EDPB strongly encourages that, in such cases, the participating entities formalize their roles through a consortium or similar legal entity to serve as the identifiable controller for the blockchain processing. By creating an overarching legal structure, the various node operators can share obligations as joint controllers, which helps ensure accountability despite the distributed setup.

In summary, the guidelines call for clarity of roles: every blockchain project involving personal data should document who the controller(s) is (e.g. the platform operator, the consortium of participants, and/or application providers using the chain) and which parties, if any, act as processors. Joint controllership arrangements are likely when multiple parties together define the data processing purposes (for example, co-developers of a blockchain service, or consortium members each contributing data and determining its use). All such arrangements remain subject to GDPR Article 26 (joint controller requirements) or Article 28 (controller-processor contracts) as appropriate. Even in permissionless public networks with no formal governance, participants cannot escape GDPR liability simply by pointing to the lack of central control – organizational and contractual measures should be put in place to allocate responsibilities wherever possible.

Lawfulness of Processing and Legal Bases

Any processing of personal data on a blockchain must have a valid legal basis under GDPR Article 6. The EDPB makes clear that there is no blanket legal basis that automatically applies to all blockchain operations; controllers must determine the appropriate basis for each specific processing purpose in their blockchain use-case. For example, recording personal data on a ledger for a decentralized identity system might be based on the user’s consent, whereas processing personal data on a supply-chain blockchain might rely on legitimate interests or contractual necessity, depending on the context. The key is that the choice of legal basis must align with the purpose of processing and meet all the conditions for that basis. If the blockchain processing includes special category data (GDPR Article 9(1) data, such as health, biometric, or political opinion data), then an Article 9(2) exception is required in addition to an Article 6 basis.
For instance, a blockchain that stores medical records would need an explicit consent of the data subjects or another specific derogation under Article 9(2) to process health data. The guidelines note that multiple legal bases might be available in a given scenario, but controllers must carefully assess which basis is valid and most appropriate given the context and ensure they can fulfill the obligations that come with it. Consent (Article 6(1)(a)) is a potential legal basis in blockchain, but the EDPB issues strong caveats about its use. If consent is chosen, it must be fully compliant with GDPR’s definition and standards: truly freely given, specific, informed, and unambiguous , with a clear affirmative action by the data subject. Due to the irreversibility of blockchain entries, a critical issue is the data subject’s ability to withdraw consent . GDPR requires that consent can be withdrawn at any time, and if a processing was based on consent, the data must stop being processed (and ideally be deleted or anonymized) once consent is withdrawn. The guidelines underline that if personal data was stored on-chain under a consent basis, that data “must be deleted or rendered anonymous” if consent is later withdrawn . This presents a serious challenge: since blockchains do not easily allow deletion, any controller relying on consent must have a technical plan to effectively remove or irrevocably anonymize the data upon withdrawal. For example, if personal data was encrypted on-chain with the data subject’s consent, the controller should be prepared to delete the decryption key (rendering the on-chain data unintelligible) if consent is withdrawn. Without such measures, consent would not be considered valid because the data subject wouldn’t have a genuine choice to discontinue processing without detriment. In practice, this means consent is often not a suitable basis for immutable on-chain processing, unless the system is designed such that consent withdrawal triggers effective erasure (through key revocation, etc.). The EDPB also reminds that consent does not override other GDPR requirements – even with consent, all principles (data minimization, purpose limitation, security, etc.) and rights must still be respected, and consent cannot be “forced” by making the service conditional unless necessary. Other legal bases may be more appropriate for blockchain solutions. Contractual necessity (Article 6(1)(b)) could apply if the processing is genuinely required to fulfill a contract with the data subject – for example, using a blockchain to execute a smart contract that a user has entered into. However, this basis is narrowly interpreted and only covers processing that is objectively necessary for the contractual service requested by the data subject. A related basis, legal obligation (Art 6(1)(c)), might be relevant if a law mandates using a blockchain for certain record-keeping (though such scenarios are rare; more often, blockchain is a choice of technology rather than a legal requirement). If a blockchain is used to comply with a legal obligation (say, a governmental transparency ledger required by law), that law could also justify restrictions on certain data subject rights under Article 23 GDPR. The guidelines give examples like anti-money-laundering (AML) applications or land registries: if Union or Member State law requires certain data be kept on an immutable ledger, that law may lawfully restrict rights like erasure, as long as the Article 23 conditions are met. 
In those cases, the legal basis might be Art 6(1)(c) or (e) (public interest or official authority) combined with a statutory restriction on deletion rights. Controllers must ensure any such restriction is provided by law, necessary, and proportionate. A commonly cited basis for blockchain processing is legitimate interests of the controller or a third party (Article 6(1)(f)). Indeed, many private-sector blockchain uses (e.g. maintaining a distributed ledger of transactions for system integrity or efficiency) might invoke the legitimate interests ground. The EDPB emphasizes that using Art 6(1)(f) requires a careful three-part test: purpose test (the interest being pursued must be legitimate and lawful), necessity test (processing must be necessary for that purpose), and balancing test (the interests of the controller versus the fundamental rights and interests of data subjects). In a blockchain context, controllers must evaluate whether their interest (for example, having an immutable, distributed record) is proportionate to the impact on data subjects (permanent storage of their personal data, global access, potential lack of control). The guidelines refer to EDPB’s earlier guidance on Art 6(1)(f), underscoring that if the detriment to individuals is too great relative to the controller’s aims, legitimate interest cannot be relied on. Notably, data subjects have a right to object to processing based on legitimate interests (as discussed later), and if they do, the controller must stop processing unless it demonstrates compelling overriding interests. This means that in a blockchain scenario, using legitimate interests as a basis implicitly requires that the controller be able to cease or restrict processing for an objecting individual if no override applies – a requirement that again forces consideration of how one would stop processing on an immutable ledger.

In summary, no matter which legal basis is chosen, it must be supported by the blockchain’s technical design. For instance, if relying on consent, build the system to allow data removal; if relying on legitimate interests, ensure you’ve minimized data use and can honor objections, and so on. The EDPB’s overarching message is that the choice of technology (blockchain) does not justify lowering GDPR standards – controllers are expected to adapt the blockchain’s use or design to the law, rather than the other way around.

Data Protection by Design: Principles, Minimization and Security Measures

Under GDPR Article 25, data protection by design and by default is crucial when implementing blockchain solutions. The EDPB reiterates that controllers must embed data protection principles into the architecture of the system from the outset, especially given that blockchain’s characteristics make some principles harder to enforce. The guidelines stress that GDPR principles are non-negotiable – even if blockchain is technically complex, controllers are obligated to find ways to comply effectively. This often requires a combination of innovative technical and organizational measures, carefully tailored to the context, to ensure that principles like data minimization, storage limitation, integrity/confidentiality, and purpose limitation are upheld in practice. Importantly, effectiveness is at the heart of data protection by design: it is not enough to perform a formal check-the-box exercise; the chosen measures must actually result in a higher level of privacy and protection for data subjects in the real operation of the blockchain.
In other words, controllers should be able to demonstrate that their blockchain implementation achieves compliance outcomes (e.g. no unnecessary personal data is exposed or retained), not just that they attempted some generic precautions. A primary design strategy is data minimization and storage limitation : avoid putting personal data on-chain whenever possible . The guidelines suggest that many use-cases can be accomplished by storing personal data off the blockchain and only writing references or proofs on-chain. By minimizing on-chain personal data, one limits the risk of immutable, perpetual exposure. For example, rather than recording a user’s name or full document on-chain, the blockchain could store a hash or a cryptographic commitment of the data, while the actual personal data stays in a traditional database or under the data subject’s control off-chain. If the personal data needs to be later retrieved or verified, the off-chain source can be checked against the on-chain hash. The EDPB details several privacy-enhancing techniques to implement this: Strong Encryption: Personal data can be encrypted before inclusion in the blockchain, so that only those with the decryption key can read it. This protects confidentiality on a public ledger. Notably, encrypting data does not remove it from GDPR’s scope (encrypted data is still personal data) and doesn’t absolve the controller of obligations. However, encryption can mitigate unauthorized access. The controller should use state-of-the-art cryptography and manage keys carefully. One advantage in context of erasure is that if a data subject invokes the right to deletion, the controller could delete the decryption key , rendering the on-chain encrypted data indecipherable (a form of “cryptographic erasure”). The guidelines caution, though, that encryption’s protection is time-limited: algorithms can be broken or weakened over time (e.g. by future quantum computing), and since blockchains are meant to last indefinitely, there must be a plan to manage algorithm obsolescence. Controllers should periodically assess the strength of the encryption in use and be ready to upgrade to stronger algorithms or migrate data if needed, especially for sensitive data that must remain confidential for many years. Hashing and Pseudonymization: Instead of storing personal data in cleartext, a controller can store a hashed value (with a secret salt or key) on-chain. The actual data and the salt/key are kept off-chain. A well-designed salted hash is effectively a pseudonymous identifier: on its own, the hash doesn’t reveal the underlying data unless one has the secret or can brute-force guess it. The guidelines note the benefit that if the secret salt/key is later deleted, the on-chain hash becomes practically unlinkable to the original personal data. In that scenario, even though the hash remains on the blockchain, it no longer corresponds to an identifiable person (assuming strong hash and secret), which can serve as a means of compliance with erasure requests. Nonetheless, the EDPB warns that hashing is not a panacea: hashes (especially if unsalted or poorly implemented ) can be reversible via dictionary attacks or re-identification techniques, and even salted hashes are considered personal data as long as the original data or keys exist somewhere. Therefore, storing a hash on-chain still counts as processing personal data, and all GDPR obligations apply (plus the off-chain data storage must be secured and governed). 
The use of unsalted or static hashes is explicitly deemed insufficient for confidentiality on a public blockchain – robust, secret salts/keys must be used to prevent inference of the original data. Cryptographic Commitments: A commitment is similar to a hash but typically provides cryptographic guarantees (like being binding and hiding ) that can be advantageous. The guidelines describe storing a perfectly hiding commitment on-chain. “Hiding” means that given only the commitment, nothing can be learned about the original data; and if the scheme is secure, even brute-force attempts yield no information. The data (and any randomness used for the commitment) remain off-chain. Once the off-chain data and “witness” (secret randomness) are deleted, the on-chain commitment is computationally useless – it cannot be opened or linked back to personal data. Thus, commitments offer a way to prove that some data existed or was agreed upon, without ever revealing the data itself on the blockchain, and later the linkage can be destroyed. This technique supports both minimization and eventual erasure (by dropping the ability to resolve the commitment). As with hashing, using commitments still requires an off-chain storage solution for the actual data and careful key management. Off-Chain Storage with On-Chain Pointers: Rather than placing even derived data on-chain, controllers can store personal data entirely off-chain (for example, in a secure database or distributed file system) and put only a pointer or reference (e.g. a URL, document ID, or Merkle-tree root) on the blockchain. The on-chain pointer by itself may or may not be personal data (if it’s just a random locator, it might not identify a person). The principle here is that the blockchain is used only to timestamp or validate the existence of data (the proof of existence ), while the data itself resides elsewhere under more flexible control. If the data needs to be erased or modified, it can be done off-chain, and the on-chain pointer could be updated or de-referenced. The EDPB notes that when using such architectures, confidentiality of the off-chain store is vital (the link between on-chain and off-chain must be protected). Off-chain storage shifts some GDPR burdens off the blockchain itself, but the off-chain processing is fully subject to GDPR (the controller must secure it, have a legal basis, etc., just like any database). Indeed, using off-chain storage means the overall system now has multiple components (blockchain plus off-chain systems), each of which must be evaluated for compliance and security. In practice, a combination of measures is often advisable. For example, one might hash personal data and store the hash on-chain, encrypt the personal data off-chain, and distribute the encrypted data among trusted parties or keep it with the data subject. The guidelines acknowledge that achieving adequate data protection in blockchain may require layered solutions and novel privacy-enhancing technologies used in tandem. They also suggest considering “zero-knowledge” architectures – e.g. using zero-knowledge proofs or other cryptographic protocols so that nodes can validate transactions without seeing personal data in clear. 
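To make the keyed-hash approach described above concrete, the following sketch (standard library only, under assumptions of the article rather than the guidelines themselves) keeps the personal record and its per-record secret off-chain and anchors only an opaque HMAC commitment on-chain. The store and function names are hypothetical stand-ins for a real database and ledger client.

```python
import hashlib
import hmac
import json
import os

offchain_store = {}  # stands in for a conventional, erasable database
ledger = []          # stands in for the append-only blockchain

def record_person(record: dict) -> str:
    # Write only a keyed ("salted") hash on-chain; keep data and key off-chain.
    key = os.urandom(32)  # per-record secret; deleting it later unlinks the entry
    payload = json.dumps(record, sort_keys=True).encode()
    commitment = hmac.new(key, payload, hashlib.sha256).hexdigest()
    offchain_store[commitment] = {"key": key, "record": record}
    ledger.append(commitment)  # on-chain: an opaque 256-bit value only
    return commitment

def verify(commitment: str) -> bool:
    # Prove that the off-chain record still matches what was anchored on-chain.
    entry = offchain_store.get(commitment)
    if entry is None:
        return False  # record or key already erased: the commitment is orphaned
    payload = json.dumps(entry["record"], sort_keys=True).encode()
    expected = hmac.new(entry["key"], payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(commitment, expected)

c = record_person({"name": "Alice Example", "purpose": "supply-chain audit"})
print(verify(c))  # True while the off-chain record and key exist
```

The design choice here mirrors the text: the ledger only proves existence and integrity, while identifiability depends entirely on material that remains deletable off-chain.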
By limiting data visibility and accessibility , controllers address the GDPR principles: Confidentiality/Integrity (Security) is maintained by encryption and hashing (Article 32), Data minimization is respected by only recording what is necessary (and in pseudonymized form) on-chain, and Storage limitation is tackled by enabling effective removal (via key destruction or off-chain deletion) when data is no longer needed. Despite all these measures, the EDPB reminds controllers that “technical impossibility” is not an excuse for non-compliance . For instance, one cannot simply say “we cannot delete data from the blockchain, so the right to erasure does not apply.” Instead, the system should have been designed to avoid that situation (e.g. by not storing data that would later need deletion, or by using encryption that can be nullified). GDPR expects controllers to anticipate and prevent problems by design . If a particular blockchain architecture makes it impossible to meet a fundamental requirement (like deleting personal data on request), the onus is on the controller to re-evaluate the use of that technology or adjust the architecture so that compliance can be achieved. In fact, the guidelines bluntly state that if a blockchain’s “strong integrity” feature (immutability) conflicts with data protection needs, controllers should consider using other tools or technologies altogether rather than blockchain. Data protection by design might mean opting for a permissioned chain where an admin can intervene in exceptional cases, or using shorter-lived sidechains where data can be phased out, etc., if that’s necessary to honor GDPR principles. Purpose limitation is another principle worth noting: personal data should be collected for specified, legitimate purposes and not further processed in incompatible ways. Public blockchains pose a challenge here because once data is public on the ledger, anyone might use it for new purposes beyond the original intent (for example, using blockchain transaction data to profile users’ behavior). The disintermediated nature means participants are not all contractually bound to restrict their use of data. To enforce purpose limitation, a private or permissioned blockchain can impose agreements on participants about how data may be used. In public networks, the best approach is to minimize the data to begin with (so that even if someone repurposes on-chain data, it reveals little), or use technical controls like encryption such that unauthorized parties cannot interpret the data. In any case, controllers must define clear purposes for any personal data they put on a blockchain and ensure that the data is not repurposed in a manner incompatible with those purposes. This ties back into governance: if multiple organizations are involved, they should agree on purpose limitations and document them (e.g. in consortium bylaws or smart contract code that limits data use). Finally, security measures (integrity and availability, as well as confidentiality) are vital given that blockchain data is widely replicated. GDPR Article 32 obligations apply: controllers and processors must implement appropriate technical and organizational security measures. The blockchain’s inherent design gives some security benefits – data integrity and availability are strong due to the consensus and replication (tampering is detectable and data is redundantly stored). However, confidentiality is not inherent in most blockchains (many are intentionally transparent). 
Thus, as described above, encryption or pseudonymization is needed to prevent unauthorized access to personal data on-chain. Controllers should also secure the off-chain environments: for instance, nodes should have measures against breaches, keys (like private keys controlling identities or decryption keys) must be protected, and any APIs or wallets handling personal data should be hardened. The EDPB points out that encryption keys and algorithms must be managed over time – since blockchain data may remain for decades, a plan to update cryptographic measures and handle potential future vulnerabilities (like quantum attacks) should be part of the security strategy. Regular risk assessment of cryptographic strength and contingency plans (such as re-encrypting data with stronger algorithms or migrating to a new platform) are recommended when the data is sensitive and long-lived. Additionally, since blockchains involve networked nodes, network security (authenticating nodes, preventing Sybil attacks, DDoS protection, etc.) is relevant to protect the availability and reliability of the processing. The guidelines also mention ensuring access control and traceability in blockchain applications: for instance, in a permissioned blockchain, limit who can view certain personal data fields, and log access to data for audit. While some of these controls might be unconventional for blockchain, creative implementations (like encryption schemes where only certain roles can decrypt certain data) can achieve a form of access control even in a decentralized setting. In summary, security must be comprehensive, covering on-chain data (through cryptography), off-chain data stores, node infrastructure, and even the metadata and communications in the blockchain ecosystem (since communication metadata could reveal personal info like who communicated with whom and when). The controller should treat any supporting systems (key management servers, off-chain databases, etc.) as part of the overall processing and secure them to GDPR standards.

Data Subject Rights in Blockchain Environments

GDPR grants individuals robust rights over their personal data, and the EDPB emphasizes that these rights are technology-neutral – data subjects do not lose their rights simply because their data is processed on a blockchain. All the usual rights (access, rectification, erasure, restriction, objection, data portability, and the right not to be subject to certain automated decisions) must be respected. However, fulfilling these rights requires special consideration in a blockchain context, given the inherent immutability and distributed control. The guidelines consistently urge that mechanisms to honor data subject rights be incorporated at the design stage of any blockchain system.

Transparency and Information (Articles 12–14 GDPR): Data subjects have the right to be informed about how their data is processed. In blockchain projects, this means controllers must provide clear and easily accessible privacy notices explaining, among other things, that personal data will be recorded on a blockchain, the nature of that blockchain (public or private, how it works), who will have access to the data (e.g. all node operators globally, if it’s a public chain), and what rights and recourses the individual has. The EDPB specifies that the controller should inform the data subject before submitting their personal data to the blockchain (e.g. at the time of data collection or before writing a transaction that includes their data).
The information must be given in concise, plain language as usual, but also cover blockchain-specific aspects (such as potential international transfers to network nodes, the potential inability to fully delete data, and the measures in place to protect their data). If multiple parties are controllers (e.g. a consortium), they should coordinate so that the data subject isn’t confused by multiple or inconsistent notices. The guidelines also note that transparency is key to fairness – data subjects should not be surprised by unexpected processing like the public permanence of their data. Therefore, full disclosure of the blockchain’s implications is part of compliant processing. Right of Access (Article 15) and Data Portability (Article 20): The right of access means an individual can ask the controller to confirm if their data is being processed and obtain a copy of that data and relevant information. On a blockchain, the data subject might already have access to the ledger (if it’s public, anyone can read it), but the controller still has an obligation to provide an intelligible copy of the data concerning that individual upon request. Fulfilling an access request may involve extracting all transactions or data linked to that person (for example, all entries associated with their blockchain address) and presenting it in a user-friendly format along with explanatory information (purposes, recipients (nodes), storage period, etc.). The EDPB believes that the exercise of access rights is compatible with blockchain as long as controllers do their part to compile and furnish the information. Indeed, blockchains keep an indelible record, which could even make it easier in some cases to retrieve historical data for a subject – but the controller must ensure the individual can understand it (a dump of raw blockchain entries may not be meaningful without context). As for data portability , if processing is based on consent or contract and is carried out by automated means (criteria for Article 20), the data subject can request their personal data in a commonly used, machine-readable format. Blockchain data is already in a structured format, but the controller should consider providing it in a convenient form (perhaps a CSV or JSON listing of the person’s relevant blockchain entries). One complexity is that blockchains often involve multiple joint controllers; a user might request data from one of them (say, the dApp provider) who then must gather personal data from the chain. The guidelines assert that these rights can be met in blockchain systems – there is nothing about blockchain that technically prevents giving a user their data or moving it – so long as the controller has procedures to extract the data and not just point to the public ledger. Right to Rectification (Article 16): This right entitles individuals to have inaccurate personal data corrected or completed. On an immutable ledger, you typically cannot directly change or delete a past entry, which complicates straightforward “correction” of that record. The EDPB notes that fulfilling rectification may require creative solutions in blockchain. One approach is the use of additional transactions or data to indicate corrections. 
For instance, if an incorrect piece of personal data was stored in a blockchain transaction, the controller might publish a new transaction that references the earlier one and states that “the data X in transaction Y is corrected to Z.” This effectively appends a correction on-chain without erasing the original error. Anyone reading the chain would need to be made aware (through the application logic or off-chain means) that the later transaction supersedes the earlier information. In permissioned blockchains, another possibility is that an administrator can flag certain data as deprecated or push a state update that rectifies a value (though the original record remains in history). The guidelines also suggest that if rectification in effect requires erasing the old data (for example, a data subject contests a stored document and the resolution is to remove it), then the same techniques used for erasure requests should be applied to implement rectification . That is, if a piece of personal data is wrong, you might “rectify” by effectively deleting or anonymizing the erroneous data and perhaps inserting the correct data elsewhere. The key point is that controllers must have a plan to address inaccuracies – they cannot respond to a rectification request with “we technically cannot change the blockchain.” By design, they should either avoid recording personal data that might need correction, or enable an overlay system that can mark or nullify incorrect data. Ensuring data accuracy is part of the GDPR’s principles (Art 5(1)(d)), so blockchain solutions should incorporate validation steps to prevent errors (e.g. verify data before it’s written) and mechanisms to propagate corrections. The EDPB acknowledges this can be “technically demanding” on blockchain, but it must be tackled in compliance efforts. Right to Erasure (Article 17) and Right to Object (Article 21): These are particularly thorny in an immutable ledger but are central to GDPR. The right to erasure (“right to be forgotten”) allows individuals to have their personal data deleted when, for example, the data is no longer necessary, consent is withdrawn, or processing is unlawful. On most blockchains, you cannot literally delete a block or transaction without compromising the chain’s integrity and consensus – nodes deliberately resist alteration of history. The EDPB frankly observes that it might be technically impracticable to honor a request for actual deletion of on-chain data in many cases. Nevertheless, controllers are expected to design systems to accommodate erasure as far as possible . The guidelines recommend that personal data should never be stored on-chain in directly identifiable form unless the use case absolutely requires it and all risks have been addressed. If data is stored in a reversible or obscurable form (encrypted, hashed, or off-chain), then an erasure request can be fulfilled by rendering the data inaccessible or anonymized . For example, if a user invokes erasure, the controller could erase the personal data in the off-chain storage and delete any link or key such that the on-chain reference can no longer be tied to the person. The chain record itself might remain, but it would be functionally anonymous – the controller must ensure that neither they nor anyone else can identify the data subject from the remaining on-chain data. This often entails erasing all related off-chain data (e.g. mapping tables, identity information) that connected the on-chain entry to the individual. 
When done properly, what remains on-chain is just an orphaned entry that no longer has personal data context (or is just a random string). The EDPB calls this approach “effective anonymization” of blockchain data in response to erasure requests. However, they caution that achieving genuine anonymization is difficult and context-dependent – it presupposes that the on-chain data by itself cannot identify the person (so it was suitably abstracted to begin with) and that all additional data that could re-link identity is wiped. If a controller cannot meet these conditions – for instance, in a fully public ledger where personal data (like a name or a facial image) was directly embedded on-chain – then they face a serious compliance problem. The guidelines explicitly advise that if the strong immutability of blockchain is not needed for the purpose, controllers should avoid using blockchain for personal data , because otherwise they might not be able to honor erasure and other rights. In other words, do not put data on an immutable ledger unless you have no alternative and you have robust measures to mitigate the privacy impact. The right to object (Article 21) gives data subjects the ability to object to processing based on certain grounds (notably, legitimate interests or public interest tasks) and requires the controller to stop or not begin processing that data unless they demonstrate compelling legitimate grounds overriding the individual’s interests. In a blockchain scenario, this means if a person says “I object to you processing my data on this blockchain,” the controller must assess and usually cease further processing relating to that person (unless an exemption applies). If the controller’s legal basis was legitimate interest, an objection typically means the processing should stop for that individual. Therefore, much like erasure, controllers should plan how they would halt processing a particular person’s data on-chain. In practice, honoring an objection could involve refraining from adding any new data about that person to the blockchain and possibly taking steps equivalent to erasure for existing data (since continuing to hold it might not be justifiable if the objection is valid). The guidelines bundle the design considerations for objection with erasure, stating these rights “must be complied with by design” in blockchain systems. They highlight that if personal data is stored on-chain, stopping all processing might be hard, hence the preference to limit on-chain personal data from the start. When an objection is received, a controller should at minimum ensure that no further dissemination or use of that person’s on-chain data occurs under its control. In a permissioned chain, the controller could instruct other nodes not to process that user’s data going forward (or delete it if possible from application-level records). In public chains, the controller may only be able to anonymize or dissociate the data (similar to an erasure solution) so that processing effectively ceases in relation to an identifiable person. An additional complication is if an objection is made to a particular node’s processing (say a European data subject objects to her data being processed by nodes outside the EU) – given the distributed nature, it’s difficult to selectively remove one node’s copy. This again underscores why choosing an appropriate blockchain architecture (e.g. limited nodes bound by agreements) is part of GDPR compliance. 
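Building on the off-chain/commitment pattern discussed earlier, the sketch below shows, under the same assumptions, how erasure, objection, and rectification requests could be handled at the application layer: the chain keeps its history, but the inaccurate or no-longer-needed entry is unlinked by deleting the off-chain record and its secret, with corrections appended as superseding entries. All names are illustrative, not an EDPB-prescribed mechanism.

```python
import hashlib
import hmac
import json
import os

offchain_store = {}  # erasable database: commitment -> {"key": ..., "record": ...}
ledger = []          # append-only chain of opaque entries

def _commit(key: bytes, record: dict) -> str:
    payload = json.dumps(record, sort_keys=True).encode()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def write(record: dict) -> str:
    key = os.urandom(32)
    commitment = _commit(key, record)
    offchain_store[commitment] = {"key": key, "record": record}
    ledger.append({"type": "data", "commitment": commitment})
    return commitment

def erase(commitment: str) -> None:
    # Erasure / objection: drop the off-chain record and its secret key. The
    # on-chain commitment remains, but can no longer be linked to a person.
    offchain_store.pop(commitment, None)

def rectify(old_commitment: str, corrected: dict) -> str:
    # Rectification: append a correction that supersedes the old entry, then
    # unlink the inaccurate one off-chain. On-chain history stays intact.
    new_commitment = write(corrected)
    ledger.append({"type": "correction", "supersedes": old_commitment,
                   "replacement": new_commitment})
    erase(old_commitment)
    return new_commitment

c1 = write({"name": "Alice Exmaple"})        # inaccurate entry is recorded
c2 = rectify(c1, {"name": "Alice Example"})  # corrected and superseded
erase(c2)                                    # later erasure request honoured
```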
Notably, the guidelines mention that data subject rights cannot be waived or contracted away by the data subject either. Even if users consent to certain processing on blockchain, they retain their rights under GDPR. The EDPB also rejects any notion that because a data subject chose to use a blockchain service, they have implicitly waived the right to erasure or rectification; those rights still apply and must be “fulfilled in accordance with the GDPR” despite technical challenges.

Right not to be subject to Automated Decision-Making (Article 22): This is relevant when blockchains use smart contracts or algorithms that make decisions affecting individuals without human intervention. If a smart contract automatically executes a decision that produces legal effects or similarly significant effects on a person (for example, a decentralized finance protocol automatically liquidating a user’s assets, or an identity blockchain automatically determining eligibility for a service), then Article 22’s protections kick in. The EDPB highlights that smart contract-driven decisions can constitute automated individual decision-making, and thus controllers must ensure they comply with Article 22. Under GDPR, purely automated decisions with significant effects are prohibited unless they fall under certain exceptions (necessity for a contract, explicit consent, or authorization by law with safeguards), and even when allowed, the data subject has the right to obtain human intervention, to express their point of view, and to contest the decision. Accordingly, if a blockchain application involves such decision-making, the controller should build in safeguards: for instance, there should be a way for a human to review or override a smart contract’s outcome at the data subject’s request. The guidelines explicitly state that when Article 22 applies, the controller must guarantee the possibility of human intervention and the ability for the data subject to contest the decision, even if the smart contract’s outcome is recorded on the blockchain. This may require off-chain processes (for example, a customer support or dispute resolution mechanism that can compensate for or counteract an on-chain decision). It could also mean avoiding fully automated processing of sensitive matters in blockchain, or seeking explicit consent with awareness of the consequences if using that route. The EDPB’s position is a reminder that the “code is law” ethos of blockchains does not override human rights – systems should be designed such that algorithmic decisions are not final and unchallengeable for the individual. In practice, ensuring compliance here might involve pausing certain smart contract actions until a human approves them in cases that affect individuals, or providing a parallel off-chain method to reverse or mitigate an outcome for the individual if needed. This can be technically difficult (and arguably undermines some benefits of automation), but it is necessary for high-stakes processing to avoid violating GDPR’s prohibition on unchecked automated decisions.

Data Protection Impact Assessments (DPIAs) for Blockchain Processing

Given the novel and potentially high-risk nature of blockchain processing, the guidelines strongly advise – and often require – performing a Data Protection Impact Assessment (DPIA) before deploying blockchain solutions that involve personal data. Under GDPR Article 35, a DPIA is mandatory for any processing likely to result in a high risk to individuals’ rights and freedoms.
The EDPB notes that using blockchain can introduce significant new risks to data subjects, so many blockchain use-cases will trigger the need for a DPIA. In fact, several criteria listed by WP29 (and endorsed by EDPB) for requiring DPIAs are often met by blockchain applications: e.g. use of new technology, large-scale systematic monitoring, matching or combining datasets, data regarding vulnerable individuals, etc. The DPIA should assess the processing as a whole , including both the on-chain and off-chain components and flows of data. The guidelines highlight that blockchain can add distinct sources of risk that might not exist in traditional systems. When conducting the DPIA, controllers should enumerate these risks, such as: Immutability and Irreversibility: Risk that personal data cannot be rectified or erased, impacting rights (as discussed, this is a key high-risk factor). Global Data Dissemination: In public blockchains, personal data is replicated on nodes worldwide, potentially including in jurisdictions without adequate protection. This raises risks of unauthorized access or unlawful international transfers , and loss of control by the controller. Lack of clear control: The distributed governance means breach response, honoring rights, or making changes may be hard, posing accountability and compliance risks. Additional technical operations: The DPIA should consider not just data stored on-chain but also ancillary processing inherent to blockchain . For example, the transmission of transactions over a peer-to-peer network (which involves broadcasting personal data to many nodes), the temporary storage of data in mempools or caches awaiting block inclusion, the creation of “orphan” blocks that might later be abandoned but still contain personal data, and the off-chain storage of data referenced by the blockchain. Also, the metadata generated (timestamps, IP addresses of nodes or users, public keys, etc.) and the management of cryptographic keys and seeds are all part of the processing ecosystem. These can introduce risks like profiling of user activity through transaction metadata or exposure if cryptographic secrets are compromised. The DPIA must catalog these elements and evaluate their impact on privacy. In identifying risks, controllers should not limit themselves to on-chain data breaches; they should look at the whole lifecycle and all related processes in the blockchain environment. For instance, if a third-party analytics tool is used to monitor the blockchain network and collects personal info, that’s in scope. The EDPB explicitly lists elements like communication of blocks among nodes, transaction queue handling, off-chain data storage, metadata generation and key management as aspects that might introduce risks requiring mitigation. All such issues should be documented in the DPIA, along with measures to address them. Crucially, if the DPIA finds that certain high risks cannot be sufficiently mitigated , the controller has a few options: modify the blockchain model or choose a different technology , or ultimately, if high risk remains, refrain from processing or consult with the supervisory authority (as per Article 36). The guidelines encourage controllers to remember that blockchain is not the only solution – if one model (say, a public permissionless chain) is too risky, perhaps a permissioned chain or a non-blockchain database could achieve the goal with less risk. 
This ties back to necessity and proportionality: the DPIA should question whether using a blockchain is necessary and proportionate for the intended purpose, or if a less invasive alternative exists. The EDPB suggests a structured approach in the DPIA, highlighting additional aspects to specifically address for blockchain-based processing. These include: Detailed description of the processing operations involving blockchain: The DPIA should describe the use-case and how personal data flows through the blockchain system. For example, outline what personal data is written on-chain vs kept off-chain, what the blockchain model is (public/private, permissioned/permissionless), who the participants are, and the roles and responsibilities of each (identifying the controllers, joint controllers, and processors). It should also describe the governance framework of the blockchain (how decisions are made about the network or protocol), the data lifecycle on the blockchain (from data input, propagation, validation, to indefinite storage), and any integration with other systems. Essentially, the DPIA’s systematic description should make clear how and where personal data enters the blockchain, how it is processed there, and who has control over it . This includes listing categories of data subjects and personal data, categories of recipients (e.g. node operators, miners, third-party observers), and any third parties that might receive data (like external oracle services, etc.). If smart contracts are used, the DPIA should note their function and whether they involve automated inference of personal data or decision-making. Also, mention if personal data is processed off-chain in parallel (and how linkage is managed), and if there are international transfers happening due to nodes in third countries. Necessity and proportionality analysis: The DPIA must evaluate why blockchain is being used and whether the same goal could be achieved with less data or a different method. Controllers should justify that using a blockchain (and the specific type of blockchain chosen) is necessary for the purpose and proportionate in terms of data protection. For instance, if the purpose is to ensure data integrity and transparency among a consortium, is a fully public blockchain necessary, or would a private ledger suffice (less exposure)? Could hashing the data instead of storing raw data achieve the purpose? This section should confirm that only the minimum required personal data is processed on-chain and that all GDPR principles (like purpose limitation and data minimization) are respected in the design. If a more privacy-friendly alternative exists to achieve the same ends, the DPIA should acknowledge that and explain why the chosen approach is still justified. Regulators will expect to see that the controller has thought critically about alternatives to blockchain or less extreme configurations and has adopted blockchain only if needed. Assessment of risks to rights and freedoms: Building on the earlier identification of risk sources, the DPIA should analyze the potential impacts on individuals if things go wrong or if rights cannot be exercised. This includes risks like personal data being publicly available and leading to harm (e.g. 
financial information on a blockchain could expose someone to fraud or profiling), risk of inability to delete erroneous or sensitive data, risk of data breaches (which on a blockchain could mean an attacker obtaining the private key of a user and thus all their on-chain personal data, or a malicious fork that exposes data), and risk of misuse of data by network participants. The DPIA should consider the severity and likelihood of each risk. For instance, how likely is re-identification of pseudonymous data on this blockchain, and what harm would that cause? It should also consider aggregate risks , like if the blockchain aggregates data from many sources, could that enable intrusive profiling or surveillance. Another specific risk is if multiple copies of data make breaches harder to contain – e.g. even if one node is secured, another might be compromised. The EDPB also expects an assessment of possible data breach scenarios in blockchain. A data breach in traditional terms might be unconventional in blockchain (since data is by design shared), but for instance, if an unauthorized party gets access to a normally permissioned ledger, or if someone’s private key is stolen and fraudulent transactions with personal data are appended, these are security incidents to evaluate. The extent of a potential breach (all copies globally could be impacted) should be weighed. Measures to address the risks: Finally, the DPIA must catalogue the safeguards and controls the controller will implement to mitigate identified risks. The guidelines indicate many of these in earlier sections: encryption, hashing, off-chain data segregation, strict access controls in permissioned networks, governance measures (contracts among participants), data lifecycle management, etc. For each risk, the DPIA should explain how it is reduced to an acceptable level. For example, “Risk of inability to erase data is mitigated by only storing hashed data on-chain and deleting the salt upon request, rendering data anonymous. Risk of unauthorized access is mitigated by strong encryption of any personal data on-chain. Risk of international transfer issues is mitigated by restricting node locations to EU and contractual clauses with any external nodes,” and so on. The DPIA should also cover how data subject rights can be exercised (detailing the process and technical means for access, erasure, etc., as part of the risk treatment). The EDPB document even provides an Annex with recommendations (Annex A) that likely serve as a checklist for many of these measures. Controllers can use that as a reference in their DPIA to ensure they have considered all points. The overarching advice is that performing a DPIA is not just a formality for blockchain projects, but a valuable process to identify and resolve privacy issues early . If the outcome of a DPIA is that high residual risks remain (for example, one might conclude “we cannot mitigate the risk that we cannot erase data if needed”), then according to GDPR the controller must consult the supervisory authority before proceeding. It’s conceivable that certain public blockchain applications might fall into this category, essentially requiring regulatory consultation or else refraining from processing until a solution is found. The guidelines imply that careful design choices (such as choosing a permissioned model, or avoiding on-chain personal data) can often reduce risks to an acceptable level so the processing can go ahead. If not, the project may need rethinking. 
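To show how such blockchain-specific risks and mitigations might be documented in practice, here is a small, purely illustrative risk-register structure; the field names and entries are assumptions for the sketch, not the wording of the guidelines or their Annex A.

```python
from dataclasses import dataclass, field

@dataclass
class BlockchainRisk:
    # Illustrative structure for blockchain-specific DPIA entries.
    source: str                 # e.g. "immutability", "global replication"
    impact: str                 # consequence for data subjects if it occurs
    likelihood: str             # "low" / "medium" / "high"
    severity: str
    mitigations: list[str] = field(default_factory=list)
    residual_risk: str = "unknown"

dpia_risks = [
    BlockchainRisk(
        source="immutability",
        impact="personal data cannot be rectified or erased on request",
        likelihood="high",
        severity="high",
        mitigations=["store only salted hashes or commitments on-chain",
                     "delete off-chain data and salts to honour erasure"],
        residual_risk="low",
    ),
    BlockchainRisk(
        source="global replication to third-country nodes",
        impact="unlawful international transfers and loss of control",
        likelihood="high",
        severity="medium",
        mitigations=["permissioned network limited to EEA or adequate countries",
                     "SCCs signed as a condition of joining the consortium"],
        residual_risk="low",
    ),
]

# Article 36: prior consultation if a high residual risk cannot be brought down.
needs_prior_consultation = any(r.residual_risk == "high" for r in dpia_risks)
```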
International Data Transfers and Chapter V GDPR Compliance Blockchains are borderless by nature – a public blockchain network typically has nodes (computers maintaining the ledger) in many countries around the world. When personal data is written to such a blockchain, that data is effectively transferred across national borders to every foreign node that holds a copy or can access the ledger. Under GDPR’s Chapter V , any transfer of personal data from the EEA to a third country must meet certain conditions (adequacy decision, appropriate safeguards like Standard Contractual Clauses, binding corporate rules, etc., or a specific derogation). The EDPB highlights that blockchain technology will “often involve international data transfer” scenarios, especially if the network includes nodes outside the EU/EEA. This raises a compliance challenge: in an open blockchain, every node acts as a recipient of personal data, yet unlike traditional data export, a European controller typically does not have a direct relationship or agreement with all those foreign nodes. In fact, in permissionless networks, the nodes are “neither necessarily chosen or vetted” by the controller. This lack of control over data flows can conflict with GDPR’s transfer regime, which assumes a defined exporter and importer. Despite these difficulties, the GDPR’s transfer rules still fully apply – there is no blockchain exemption. Any European controller using a blockchain that causes personal data to be replicated to nodes outside the EEA must ensure that the transfer is lawful under Chapter V. That likely means the controller has to either restrict the network to EEA nodes or implement a transfer mechanism. The guidelines suggest some approaches: for example, in a permissioned blockchain , the controller could require all participating nodes/operators outside the EEA to sign Standard Contractual Clauses (SCCs) or similar agreements before being allowed to join the network. In such a case, joining the consortium might be conditioned on agreeing to EU data protection safeguards, making each node contractually bound to GDPR-like obligations. This is analogous to how a data exporter might get a foreign data importer to sign SCCs in a traditional transfer. Another approach is to limit node locations to countries with an adequacy decision , ensuring data only flows to jurisdictions deemed by the EU to provide adequate protection. However, in many public blockchains, neither approach is practically enforceable (anyone can set up a node anywhere). The EDPB acknowledges that truly public blockchains (e.g. Bitcoin, Ethereum) create a situation that “may raise compliance concerns” because of these unrestricted international flows. Controllers thus face a tough question: if you cannot prevent or properly safeguard international transfers on a public blockchain, can you use that blockchain for personal data at all? The guidelines stop short of forbidding it outright but strongly hint that data protection by design must address transfer risks from the start . They note that ensuring proper application of transfer requirements should be “addressed from the design of blockchain activities” . In practice, this might mean designing the solution such that personal data never leaves the EEA or is never exposed to uncontrolled nodes. 
For instance, one could use a permissioned blockchain with nodes only in the EU, or encrypt personal data so that even if it reaches foreign nodes, it is unintelligible (though transmitting encrypted personal data may still formally constitute a transfer if the foreign node could potentially decrypt it or obtain the keys). Another technique is to store only hashes or commitments on the global blockchain (which might be considered anonymous data to foreign nodes if they have no access to the original data), and keep the personal data on servers in the EEA. That way, even though something is transferred globally, it may not be considered personal data by itself. These strategies align with earlier points on minimization and encryption, but here the emphasis is on geographical data flow control. If a controller does proceed with using a global network, they should identify in the DPIA and records of processing that international transfers occur, and specify the transfer mechanism relied upon. For example, they might list the countries of known nodes and cite SCCs or explicit consent (though consent for transfers is rarely practical or sufficient) as the basis. In many cases, explicit consent of the data subject for transfers (the GDPR Art 49(1)(a) derogation) could be a fallback – for example, a dApp could warn the user: "your data will be published on a global network; by proceeding you consent to this international transfer." However, reliance on Art 49 derogations should be limited to occasional, necessity-based transfers, not regular systematic ones, so it is not a sound long-term solution for a blockchain that continuously exports data. Therefore, structural solutions (like restricting node geography or using EU-based infrastructure) are preferable. The guidelines also remind controllers that Chapter V compliance is part of data protection by design. Article 25 (data protection by design) requires considering all GDPR requirements in the design, including those about cross-border data flow. This means that if a blockchain setup would violate transfer rules, that setup is not compliant with Article 25 – it should be reworked. Designing privacy into the system includes designing legal compliance into the system's geopolitical footprint. Guidelines 02/2025 on processing of personal data through blockchain technologies (Version 1.1, 08 Apr 2025) are explicitly labelled "Adopted – version for public consultation", indicating that the text is provisional and non-binding pending further deliberation. The European Data Protection Board has invited all interested parties to transmit written observations via the prescribed online form by 9 June 2025; only after analysing those submissions will the Board adopt a definitive version. https://www.edpb.europa.eu/our-work-tools/documents/public-consultations/2025/guidelines-022025-processing-personal-data_en The information provided is not legal, tax, investment, or accounting advice and should not be used as such. It is for discussion purposes only. Seek guidance from your own legal counsel and advisors on any matters. The views presented are those of the author and not any other individual or organization. Some parts of the text may be automatically generated. The author of this material makes no guarantees or warranties about the accuracy or completeness of the information.
- PSD2 vs. MiCA for EU Crypto Businesses
Scope and Objectives of PSD2 vs. MiCA PSD2 – Payment Services Directive (2015) : PSD2 provides the core legal framework for retail payment services in the EU, aiming to foster an integrated EU payments market and enhance innovation, competition, and security in electronic payments. Building on the first Payment Services Directive (2007), PSD2 addressed barriers to new types of payment services (e.g. fintech payment initiation and account aggregation) while improving consumer protection and payment security (notably through Strong Customer Authentication requirements). PSD2 applies to payment service providers (PSPs) – including banks (credit institutions), electronic money institutions (EMIs), and authorized payment institutions (PIs) – and governs services like payment account handling, transfers of funds, card issuing, acquiring, money remittance, and online payment initiation or account information services (open banking). Its objectives are to level the playing field for new payment players, ensure consumer rights and security , and harmonize rules across Member States for a single EU payments market. PSD2 is a Directive, meaning each Member State transposed its provisions into national law by January 2018 (with certain measures like Strong Customer Authentication applying from 2019). It amended related laws (including the E-Money Directive 2009/110/EC) to align e-money issuance with the new payments regime. In sum, PSD2 focuses on fiat currency payments and electronic money , not on crypto-assets per se, though it impacts crypto businesses when they perform regulated payment activities (as discussed below). MiCA – Markets in Crypto-Assets Regulation (2023) : MiCA establishes the first EU-wide harmonized framework for crypto-assets not otherwise regulated by existing financial services law. Its scope expressly excludes crypto-assets that qualify as regulated instruments (e.g. financial instruments under MiFID, bank deposits, structured deposits, insurance/pension products), as well as unique non-fungible assets (NFTs). MiCA’s objective is to support innovation and fair competition in digital finance while safeguarding consumers, market integrity, and financial stability in the crypto-asset sector. It introduces uniform rules for the issuance of crypto-assets (including token offerings and stablecoins) and for crypto-asset services such as trading, exchange, custody, and advice. Key provisions mandate transparency and disclosure (e.g. white papers for offerings), prudential safeguards, governance and conduct standards for service providers, and supervision of transactions. Unlike PSD2, MiCA is directly applicable in all Member States (no national transposition needed). Regulatory Applicability to Crypto Business Models MiCA is tailored to crypto-asset activities, whereas PSD2 governs traditional payment services – but certain crypto business models trigger one or both regimes. Below we analyze how each framework applies (or not) to key categories of crypto businesses: Cryptocurrency Exchanges & Trading Platforms : Under MiCA, operating a crypto-asset trading platform or exchanging crypto-assets for fiat/other crypto is a regulated crypto-asset service requiring authorization as a Crypto-Asset Service Provider (CASP) . CASPs operating exchanges must implement transparent, non-discriminatory trading rules and ensure resilient systems, fair access criteria, and timely trade settlement (within 24h for on-chain or same-day off-chain settlements). 
Exchanges using proprietary capital to quote prices (market-making) must disclose pricing methods and comply with post-trade transparency. MiCA thus squarely covers both centralized crypto exchanges and brokers. By contrast, PSD2 generally does not regulate crypto-to-crypto or crypto-to-fiat exchanges as such transactions are outside the traditional “payment service” definition (which centers on transfers of “funds” i.e. banknotes, scriptural money, or electronic money). If an exchange merely facilitates trades and does not itself execute a payment from a customer’s payment account, PSD2 is not directly engaged. However, when a crypto exchange handles fiat currency deposits or withdrawals for customers, those fiat operations (e.g. holding client fiat balances or transferring euros to a bank) may fall under PSD2 or E-Money rules. Many crypto exchanges partner with licensed banks/EMIs to hold client fiat or have themselves obtained an EMI or PI license to handle fiat wallet services in compliance with PSD2/EMD2. In summary, MiCA will newly require EU crypto exchanges to obtain a CASP license (with passporting EU-wide), while PSD2/EMD2 obligations may simultaneously apply for any fiat payment aspects (e.g. exchange maintaining euro wallets or processing SEPA transfers). Notably, MiCA Article 81 mandates that if a CASP’s business model requires holding client funds as defined in PSD2 , those funds must be deposited with an EU credit institution or central bank, and payment transactions related to the crypto service may only be executed if the CASP is also authorized as a payment institution under PSD2 . This creates a dual licensing scenario for exchanges handling both crypto and fiat, as discussed further under overlaps. Custodial Wallet Providers & Crypto Custodians : MiCA brings custodial wallet services firmly into regulation. Any entity providing custody and administration of crypto-assets on behalf of clients (holding customers’ crypto private keys or crypto-assets in custody) is a CASP requiring authorization. Such custodians must conclude written agreements with clients specifying the services and maintain a custody policy. MiCA imposes strict duties: custodians must not use client crypto-assets for their own account , must keep client assets unencumbered and segregated , and bear liability for loss arising from hacks or IT incidents (including cyber-attacks or theft). In addition, MiCA explicitly excludes non-custodial wallet software providers from its scope: providers of hardware/software for self-hosted wallets (where the user, not a service, controls the keys) are not deemed CASPs. PSD2, on the other hand, has no direct equivalent for crypto custody services . Holding or safeguarding cryptocurrency is not a regulated payment service under PSD2 (which only covers safeguarding of funds in the sense of fiat/e-money). However, if a crypto custodian also holds fiat payment accounts or e-money for clients (e.g. to facilitate buying crypto), that aspect would invoke PSD2/EMD compliance. Of note, the EBA now advises that a custodial arrangement for e-money tokens (stablecoins) can be viewed as a “payment account” under PSD2 if it’s in the client’s name and used to send/receive those tokens. In summary, pure crypto custody is governed by MiCA (new CASP license), while custody of fiat money or stablecoins might concurrently require a PSD2 license depending on the service offered. 
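The segregation and record-keeping duties described above lend themselves to a simple illustration. The sketch below is a deliberately simplified, assumption-laden example of how a custodian might reconcile aggregate client entitlements against the balances of segregated on-chain client wallets; the `client_ledger` and `wallet_balances` structures and the `reconcile` helper are hypothetical, and a real custody operation involves many more controls (sub-ledgers, four-eyes checks, independent audits).

```python
from decimal import Decimal

# Internal ledger of client entitlements (per client, per asset) -- hypothetical data
client_ledger = {
    ("alice", "ETH"): Decimal("10.5"),
    ("bob",   "ETH"): Decimal("4.0"),
    ("bob",   "BTC"): Decimal("0.25"),
}

# Balances observed in segregated client wallets on-chain (never the firm's own wallets)
wallet_balances = {
    "ETH": Decimal("14.5"),
    "BTC": Decimal("0.25"),
}

def reconcile(ledger, balances):
    """Check, per asset, that segregated wallet balances fully cover aggregate client entitlements."""
    totals = {}
    for (_, asset), amount in ledger.items():
        totals[asset] = totals.get(asset, Decimal("0")) + amount
    report = {}
    for asset, owed in totals.items():
        held = balances.get(asset, Decimal("0"))
        report[asset] = {"owed_to_clients": owed, "held_in_client_wallets": held, "covered": held >= owed}
    return report

for asset, line in reconcile(client_ledger, wallet_balances).items():
    print(asset, line)  # any 'covered': False entry signals a segregation or record-keeping gap to investigate
```

A shortfall flagged by a check of this kind would point to the sort of segregation or record-keeping failure that MiCA's custody and liability rules are designed to prevent.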
Decentralized Finance (DeFi) Protocols : Truly decentralized protocols (operating via smart contracts with no central operator or intermediary ) largely fall outside both PSD2 and MiCA. PSD2 does not cover these as it regulates service providers (firms) rather than autonomous code. MiCA likewise recites that fully decentralised services provided without any intermediary do not fall within the scope of the regulation . For example, a DeFi liquidity pool or DEX with no entity controlling it would not itself be a CASP under MiCA. However, MiCA’s perimeter can capture entities that do engage with DeFi on behalf of users – e.g. a company offering an interface or “front-end” to a DeFi protocol, or administering aspects of a protocol, could be deemed to be providing a regulated crypto-asset service. MiCA foresees future evaluation of how to handle DeFi and mandates reports on DeFi’s development and potential regulation. PSD2 remains inapplicable unless the DeFi activity somehow involves traditional payment services. In practice, regulatory uncertainty exists for semi-decentralized arrangements – firms should carefully assess whether any identifiable entity is providing a service that could be captured by MiCA’s definitions (such as operating a trading platform, even if transactions settle on-chain). For now, fully autonomous DeFi systems are an unregulated gray area – a noted gap which regulators may address in the future. Crypto Asset Issuers (Utility Tokens and Others) : MiCA distinguishes three types of crypto-assets: e-money tokens (EMTs) referencing a single fiat currency (a subset of stablecoins), asset-referenced tokens (ARTs) referencing baskets of assets (including currencies, commodities, or crypto), and other crypto-assets (including utility tokens). Issuers of EMTs and ARTs face the most stringent obligations under MiCA. An issuer of an EMT (fiat-pegged stablecoin) must be authorized as a credit institution or an electronic money institution under existing EU banking/e-money law. In fact, MiCA deems EMTs to be “electronic money” under the E-Money Directive, meaning stablecoin issuers must meet all e-money issuance and redeemability requirements in addition to MiCA’s crypto-specific rules. For example, an EMT issuer must always honor redemption at par value in the referenced currency at any time, and it may not pay interest on tokens (to discourage treating them as savings instruments). MiCA Title IV adds further obligations (e.g. white paper, reserve asset custody and investment rules, and if “significant” in scale, higher capital and liquidity requirements). Asset-Referenced Token issuers (e.g. a stablecoin referencing multiple currencies or assets) likewise need prior authorization and must comply with detailed reserve, governance, and disclosure requirements (Title III of MiCA). Smaller utility token issuers have lighter requirements: generally they must publish and notify a crypto-asset white paper with essential information, but no authorization is required to offer simple utility tokens (unless they fall into another category). PSD2 is not directly concerned with crypto asset issuance unless the activity intersects with payment services. For example, issuing a utility token used for platform access is outside PSD2’s scope. However, issuing a stablecoin might invoke e-money law: under pre-MiCA law, a fiat-pegged, redeemable stablecoin often qualified as “electronic money” (stored monetary value issued on receipt of funds) under Directive 2009/110/EC, requiring an e-money license. 
This has now been codified by MiCA (EMTs = e-money). Thus, a crypto business issuing a euro-pegged token needs a dual regulatory footing: compliance with MiCA's Title IV and authorization under the E-Money Directive (which PSD2 references). Issuers that are banks (credit institutions) automatically meet the authorization requirement, while non-bank issuers must obtain an EMI license. PSD2's consumer protection rules on payment transactions would apply when those tokens are used for payments (similar to how prepaid e-money usage is regulated). Crypto Payment Service Providers : Some crypto businesses provide payment services such as enabling merchants to accept crypto or stablecoin payments, or offering crypto-based remittances. If such a business facilitates the transfer of funds (fiat or e-money) on behalf of a payer to a payee, it may perform a regulated payment service under PSD2 (e.g. money remittance or payment processing). MiCA introduces a specific service category for providing transfer services for crypto-assets on behalf of clients, meaning that if a firm carries out crypto transactions for clients (such as sending crypto to others per user instructions), it requires CASP authorization for that service (akin to a remittance service). In practice, a crypto payments firm often straddles both regimes: the crypto leg of the activity (handling the crypto-asset transfer) will be subject to MiCA, and any fiat conversion or involvement of fiat accounts invokes PSD2. For example, a company that lets users pay merchants in crypto while the merchant receives fiat will need a PSD2 payment institution license (to handle the fiat payment to the merchant) and, separately, a CASP license under MiCA for the crypto-to-fiat exchange service and possibly the transfer of crypto on the payer's behalf. PSD2's scope over pure crypto transactions is essentially nil – PSD2 covers transfers of "funds" (which are euros or other official currencies, or e-money). However, once a stablecoin or crypto-asset is involved, MiCA takes over for that portion. One special case is when stablecoins are used as a payment medium: the EBA has recently clarified that transferring e-money tokens (EMTs) for clients is to be viewed as a payment service under PSD2, since an EMT is electronic money. As a result, a crypto payment provider transacting in euro-stablecoins on behalf of users might technically need a PSD2 license in addition to its CASP authorization. This dual coverage is temporary – the EBA has issued a "no-action" position advising national regulators to delay enforcing PSD2 license requirements for CASPs handling EMT transfers until March 2026, to avoid requiring two separate authorizations immediately. In the long term, regulatory reforms (PSD3) are expected to eliminate such duplicative licensing. For now, crypto payment providers must be cognizant of both frameworks: MiCA will regulate their crypto transaction services (with requirements on disclosures, handling of private keys, etc.), and PSD2 will regulate any fiat payout or stablecoin issuance/redemption piece, including consumer rights and security for those payment transactions. Decentralized Exchanges and Peer-to-Peer Platforms : A decentralized exchange (DEX) with no central operator is akin to the DeFi discussion above – MiCA would likely not directly apply if truly no service provider is present. PSD2 would not apply because no regulated payment service (as legally defined) is being provided to a user by a third party.
However, if a business operates a peer-to-peer trading platform (even non-custodial) where it intermediates trades between users, it may be considered to be operating a "trading platform for crypto-assets" under MiCA, and thus a regulated CASP. The platform provider in that case must be authorized and comply with MiCA's rules for exchanges (transparency, operational resilience, etc.). PSD2 remains irrelevant unless fiat payment services are bundled (e.g. the platform also facilitates fiat settlement between the parties, in which case those fiat flows require a payment license or partnership with a payment institution). Legal Risks and Compliance Strategies in the New Regime Key legal risks for non-compliance include potential fines, license withdrawal (or inability to obtain required licenses), civil liability to clients, and reputational damage. Below we outline strategies to ensure compliance and mitigate risks: Regulatory Classification & Licensing Strategy: Every crypto business should map its activities to the legal definitions in PSD2 and MiCA. Determine which aspects of the business are payment services, which are crypto-asset services, and which involve token issuance. For instance, if you hold customer fiat and execute transfers, you likely need a PSD2 authorization (or partnership with a licensed PSP) in addition to any MiCA authorization. If you operate a crypto exchange or custody business, plan to obtain a CASP license by late 2024. Start the application preparations early – authorization requires substantial documentation (business plan, internal procedures, security policies, fit-and-proper assessments, etc. under MiCA Articles 62-63). If you issue a stablecoin, initiate the process to become an EMI or partner with a licensed bank. Where dual licensing is needed (EMI + CASP), engage with both regulators; leverage the EBA's recommendation of streamlined dual applications (the EBA suggests NCAs use information from the CASP application to ease PSD2 authorizations). Governance and Internal Controls: Both regimes put emphasis on fit-and-proper management and robust governance. Crypto businesses should strengthen their governance structures – ensure that board members and key executives have clean compliance records (no AML/fraud convictions), and that they collectively possess the requisite expertise in both crypto technology and financial compliance. Implement or update compliance policies for risk management, conflicts of interest, complaints handling, and business continuity to meet MiCA's standards. Set up an internal audit and control function if not already present (MiCA will require effective internal controls). Training staff on new obligations (e.g. treating client crypto fairly, handling of inside information in token markets, etc.) is crucial. Establish clear procedures for client asset safeguarding: segregate on-ledger wallets for clients, keep accurate records of holdings, and conduct regular reconciliations. Prepare to document everything – regulators will expect comprehensive policies and evidence of their implementation during licensing and inspections. AML/CFT Compliance Upgrade: With increased regulatory scrutiny, crypto firms must elevate their AML controls to the level of traditional finance. If not already done, implement robust KYC onboarding compliant with AMLD5 – verify customer identity, source of funds (especially for large crypto-fiat conversions), and screen against sanctions/PEP lists.
Enhance transaction monitoring systems to detect patterns of suspicious crypto transfers (layering, structuring, use of mixers, etc.). Ensure you can comply with the Travel Rule for crypto transfers: by 2024, CASPs will need to collect and transmit originator/beneficiary information for transfers of crypto-assets, similar to wire transfers. Align with the latest EBA/ESMA guidance on AML for CASPs once issued. Also, be mindful of MiCA’s requirement for extra vigilance with high-risk jurisdictions – incorporate that into your risk-based approach (e.g. enhanced due diligence for customers from or sending crypto to blacklisted countries). Consumer Protection and Transparency Measures: To comply with both PSD2 and MiCA spirit, crypto businesses should adopt a customer-centric approach . Provide clear, plain-language disclosures of fees, risks, and terms of service. For example, if you run a trading platform, publish your pricing methodology or firm quotes as required, and have a clear policy on how client orders are executed (best execution). If you custody assets, provide clients with statements of their holdings and inform them of security measures in place (and any risks). Implement a complaints handling process now (even ahead of formal MiCA RTS on this) – including a designated contact point, log of complaints, and timely resolution process, as this will be mandated. Consider offering voluntary protections such as insurance or compensation schemes for theft of crypto (as some exchanges do) to bolster confidence – MiCA will hold you liable for certain losses by law, so insurance also protects the firm’s solvency. In anticipation of strong customer authentication becoming expected for crypto transactions (per EBA’s advice), deploy two-factor authentication and withdrawal confirmation steps for clients. Technical and Cybersecurity Readiness: Given DORA’s application and MiCA’s ICT risk focus, crypto businesses must invest in IT security. Perform comprehensive penetration testing and smart contract audits (if applicable). Establish 24/7 monitoring for unauthorized access or unusual transactions. Develop an incident response plan for cyber incidents – who to alert, how to contain breaches, how to communicate to clients and regulators. MiCA and DORA will require incident reporting within tight deadlines for significant events, so be prepared to detect and report promptly. Also ensure data protection compliance – review how customer data (IDs, account info, blockchain addresses linked to identities) is stored and used; conduct a GDPR privacy impact assessment for new data practices (like sharing data with analytics providers). Encryption and secure key management (including multi-signature or hardware security modules for private keys) are fundamental to demonstrate to regulators that client assets are safe. Overlapping Compliance – Streamlining Efforts: If your business requires compliance with both PSD2 and MiCA (e.g. you operate a platform where users hold euro balances and crypto balances), look for synergies in compliance efforts. For instance, the own funds calculation you do for PSD2 can inform the capital planning for MiCA’s requirements – coordinate with advisors to ensure the highest required amount covers both. Training programs for staff can cover both anti-fraud (PSD2) and anti-market-manipulation (MiCA) aspects together, fostering a culture of compliance across the board. 
Where EBA has advised deprioritization of certain PSD2 provisions for CASPs (like not focusing on IBAN and open banking for stablecoin wallets), document that guidance and be prepared to discuss with auditors or examiners why certain PSD2 measures may not be relevant to your model. Essentially, maintain a compliance matrix mapping each requirement of both regimes to your internal controls, so nothing is overlooked and redundancies are minimized. Prokopiev Law Group, powered by a global partner network, secures MiCA CASP license approvals, resolves PSD3 crypto overlap, delivers stablecoin EMT authorization, implements the EU Travel Rule, hardens DORA resilience, readies DAC8 crypto tax reporting, completes VASP registration, and obtains Dubai VARA license, MAS DPT license, Hong Kong SFC VASP clearance, DAO legal wrapper structuring and other high-demand Web3 mandates, ensuring full compliance across the European Union, United Kingdom, United States, Switzerland, Liechtenstein, Singapore, Hong Kong, UAE (VARA and ADGM), Cayman Islands and British Virgin Islands; write to us at prokopievlaw.com/contact for rapid, execution-ready guidance. Sources Disclaimer The information provided is not legal, tax, investment, or accounting advice and should not be used as such. It is for discussion purposes only. Seek guidance from your own legal counsel and advisors on any matters. The views presented are those of the author and not any other individual or organization. Some parts of the text may be automatically generated. The author of this material makes no guarantees or warranties about the accuracy or completeness of the information.
- The “GENIUS” Stablecoin Bill (Guiding and Establishing National Innovation for U.S. Stablecoins Act of 2025)
Introduction and Official Source The Guiding and Establishing National Innovation for U.S. Stablecoins Act of 2025 , commonly known as the GENIUS Act , is a proposed U.S. federal law to create a comprehensive regulatory framework for payment stablecoins . The bill’s official text (S.1582 in the 119th Congress) is available on Congress.gov . It defines “payment stablecoin” as a digital asset used for payments or settlement that is redeemable for a fixed monetary value (for example, one token redeemable for $1). The GENIUS Act aims to ensure that only fully-regulated and properly reserved stablecoins circulate in U.S. markets, establishing requirements for issuers, consumer protections, and oversight mechanisms. This report provides a detailed legal overview of the bill’s text, sponsorship, legislative status, and key provisions, followed by an analysis of its implications for stablecoin issuers, users, and regulators. ( Official Bill Source: The full bill text and status can be found on Congress.gov .) Key Provisions of the GENIUS Act of 2025 Scope and Definitions The GENIUS Act applies to “payment stablecoins,” defined as digital assets intended for use as a means of payment or settlement that the issuer is obligated to redeem for a fixed value (e.g. one U.S. dollar). The definition explicitly excludes instruments that are already legal tender or bank deposits, that pay interest, or that are securities under existing law. In fact, the Act makes clear that a compliant payment stablecoin “is not a security” under the Securities Act, Securities Exchange Act, or Investment Company Act. It also later clarifies that stablecoins regulated under this framework are not commodities for regulatory purposes and are not insured deposits . By carving stablecoins out of securities and commodities definitions, the bill provides regulatory clarity – these digital tokens will be overseen under the new stablecoin-specific regime rather than by the SEC or CFTC as securities or commodities. The Act’s provisions generally cover USD-pegged stablecoins (though “monetary value” could include foreign currencies), and would require any stablecoin used by U.S. persons in commerce to be issued by an authorized entity under this law (or an equivalent foreign regime, as discussed below). Permitted Issuers and Licensing Requirements One of the cornerstone features of the GENIUS Act is that it restricts stablecoin issuance to regulated entities termed “permitted payment stablecoin issuers.” In essence, only a permitted issuer may issue a payment stablecoin for use by U.S. persons (subject to certain phase-ins and exceptions). The Act defines three categories of permitted issuers: (A) Insured Depository Institution Subsidiaries: A subsidiary of a federally insured depository institution (i.e. a bank or credit union) that is approved to issue stablecoins. This allows traditional banks (through affiliates) to issue stablecoins under their existing bank regulatory framework. (B) Federal Nonbank Stablecoin Issuers: A nonbank company that obtains a new federal license/charter to issue stablecoins (referred to as a “Federal qualified nonbank payment stablecoin issuer”). These nonbank issuers would be chartered and supervised by the Office of the Comptroller of the Currency (OCC) , the primary regulator for national banks. The OCC is explicitly tasked with regulating nonbank stablecoin issuers under the Act’s federal regime. 
(C) State-Qualified Stablecoin Issuers: A nonbank entity that is chartered or licensed under state law to issue stablecoins (for example, a state trust company or money transmitter that meets the Act’s standards), termed a “State qualified payment stablecoin issuer.” The Act permits a state-based regulatory option provided the state’s rules are “substantially similar” to the federal standards . Importantly, the state-based option is limited to issuers with $10 billion or less in outstanding stablecoin liabilities – larger issuers must operate under federal oversight. All permitted issuers, whether federal or state, must be incorporated in the United States and be subject to regulation by the appropriate regulatory agency . In practice, this means a nonbank stablecoin provider can choose a regulatory path: either apply for a new OCC license (federal charter) or seek approval under a qualifying state regime (if they will remain relatively smaller in scale). A nonbank issuer that grows beyond $10 billion in stablecoin circulation will have to transition to the federal regime (OCC supervision) within a prescribed period, unless granted a waiver by the federal regulator. This ensures that larger stablecoin issuers with potential systemic impact are under federal supervision. The Act lays out a licensing (approval) process for would-be issuers. An applicant must demonstrate it can meet all prudential requirements (capital, reserves, risk management, etc.). Notably, if a regulator does not act on an application within 120 days, the application is deemed approved by default . Any denial must be accompanied by a rationale, and applicants have the right to appeal denials. This provision is meant to prevent regulators from indefinitely stonewalling new entrants and to promote timely decision-making, thereby encouraging innovation and competition in stablecoin markets. Regulatory jurisdiction: For bank-affiliated issuers, their existing federal banking regulator (e.g. Federal Reserve, FDIC, or OCC depending on the charter) will oversee stablecoin operations; for nonbank issuers under the federal option, the OCC is the primary regulator. State-licensed issuers remain under their state regulators , though federal authorities retain certain backup powers as discussed below. No matter the route, all permitted issuers face baseline requirements on reserves, liquidity, auditing, and consumer disclosure as mandated by the GENIUS Act. Reserve Backing and Capital Standards To protect the value of stablecoins and prevent runs, the GENIUS Act imposes stringent reserve requirements . Every payment stablecoin must be 100% backed by high-quality liquid assets on a one-to-one basis . In other words, for each $1 of stablecoin issued, at least $1 of eligible reserve assets must be held by the issuer at all times. The bill tightly defines “permitted reserves” to limit them to safe and liquid instruments , including: U.S. coins and currency, balances in insured bank accounts, short-term U.S. Treasury bills , Treasury repurchase agreements (repos) or reverse repos fully collateralized by Treasuries, shares in government money market funds, central bank reserves , and other similar government-issued assets approved by regulators. Riskier or illiquid assets (e.g. corporate debt, equities, exotic investments) cannot count toward reserve backing , preventing an issuer from backing a stablecoin with volatile or credit-risky instruments. 
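As a purely illustrative aid (not language from the bill), the sketch below shows the arithmetic of the one-to-one backing rule: eligible reserves are summed against outstanding tokens, and anything outside a whitelist of safe, liquid instruments simply does not count. The asset labels and the `check_reserves` helper are hypothetical simplifications of the categories described above.

```python
from decimal import Decimal

# Simplified whitelist loosely modelled on the categories described above (illustrative only)
ELIGIBLE_RESERVE_TYPES = {
    "us_currency", "insured_bank_deposit", "short_term_treasury_bill",
    "treasury_repo", "government_mmf_share", "central_bank_reserve",
}

def check_reserves(outstanding_stablecoins: Decimal, reserve_positions: list[dict]) -> dict:
    """Return whether eligible reserves cover outstanding tokens at least 1:1."""
    eligible = sum(
        (p["value_usd"] for p in reserve_positions if p["type"] in ELIGIBLE_RESERVE_TYPES),
        Decimal("0"),
    )
    ineligible = [p for p in reserve_positions if p["type"] not in ELIGIBLE_RESERVE_TYPES]
    return {
        "eligible_reserves": eligible,
        "outstanding": outstanding_stablecoins,
        "fully_backed": eligible >= outstanding_stablecoins,
        "ineligible_positions": ineligible,  # e.g. corporate debt would appear here and not count
    }

result = check_reserves(
    Decimal("1000000000"),  # $1bn of tokens outstanding
    [
        {"type": "short_term_treasury_bill", "value_usd": Decimal("600000000")},
        {"type": "insured_bank_deposit", "value_usd": Decimal("400000000")},
        {"type": "corporate_bond", "value_usd": Decimal("50000000")},  # excluded from backing
    ],
)
print(result["fully_backed"])  # True: $1.0bn of eligible assets >= $1.0bn of tokens outstanding
```

In effect, the whitelist plus the coverage test capture the Act's restriction of reserves to cash-like instruments held dollar-for-dollar against the coins in circulation.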
This addresses past concerns where some stablecoins were found to hold risky reserve assets; under GENIUS, reserves must essentially be cash or cash equivalents with minimal credit and market risk. The Act also restricts how reserve assets can be used . Issuers generally cannot leverage or encumber reserves for speculative investments. Reserves may be used only for specified purposes, such as redeeming stablecoins on demand or serving as collateral in safe short-term financing transactions (like Treasury repos). This ensures the reserve assets remain liquid and available to meet stablecoin redemptions at all times. Notably, stablecoin issuers would not be subject to traditional bank regulatory capital requirements that apply to depository institutions. The rationale is that because stablecoin issuers must hold $1-for-$1 reserves , the usual bank capital ratios (which allow fractional reserve banking) are not applicable – essentially the stablecoin issuer’s own “capital” is the full reserve backing itself. The Act instead directs federal and state regulators to issue tailored capital and liquidity rules specific to stablecoin issuers (likely ensuring resilience against operational risks or losses unrelated to reserve backing). In sum, the GENIUS framework creates a full-reserve model for stablecoins to protect coin holders, rather than the fractional reserve model of traditional banking. Disclosure, Reporting, and Audits Every permitted issuer must establish and publicly disclose its redemption policies (i.e. the terms and procedures for consumers to redeem stablecoins for cash). They are also required to issue regular reports on their outstanding stablecoin liabilities and the composition of their reserves . At a minimum, issuers must publish monthly reports detailing the total stablecoins in circulation and exactly what assets make up the reserves backing those coins. These reports provide ongoing transparency so that users and regulators can verify that stablecoins are fully collateralized at all times. Crucially, the Act requires that management attestations and independent audits support these disclosures. The periodic reserve reports must be certified by the issuer’s executive officers (holding management accountable for accuracy) and “examined” by a registered public accounting firm . In addition, any issuer with more than $50 billion in stablecoins outstanding faces an even higher bar: it must submit annual audited financial statements to regulators. These audits would likely evaluate the issuer’s financial condition, internal controls, and reserve holdings in depth. The combination of management certification, third-party examination of reserve reports, and annual audits (for large issuers) creates multiple layers of assurance that a stablecoin issuer is honest and solvent. Consumer-facing note: The Act also requires that consumers be informed that stablecoins are not government-insured deposits – avoiding any false sense of FDIC protection – and presumably mandates other disclosures to ensure users understand the product they are using. Supervision and Enforcement Powers For federally regulated issuers (i.e. bank subsidiaries and OCC-chartered nonbanks), the same federal banking agencies that regulate their affiliated bank or that granted their charter will supervise the stablecoin activities . 
For example, a stablecoin subsidiary of a national bank might be supervised by the OCC alongside the bank, whereas a nonbank stablecoin issuer with an OCC charter will be supervised directly by the OCC (with the Federal Reserve possibly overseeing any holding company). These regulators are tasked with monitoring the issuer’s financial condition, safety and soundness, and risk management systems in the context of stablecoin operations. All federal-regime issuers must file regular reports and submit to examinations by their regulator , just as depository institutions do. Regulators are also armed with enforcement authority . If an issuer violates any requirement of the Act or any condition imposed in its charter/license, the regulator can order the issuer to stop issuing stablecoins or impose other enforcement actions (e.g. fines, cease-and-desist orders). In extreme cases, an issuer could effectively be shut down for non-compliance. This enforcement mechanism ensures that regulators can act swiftly to correct problems – for instance, if an issuer’s reserves fell short or if it engaged in unsafe practices, the agency could halt further issuance until issues are remedied. These powers mirror the kind of prompt corrective action regulators have in banking supervision, adapted to stablecoins. For state-qualified issuers (<$10B) , the Act preserves primary oversight for state regulators : “State regulators would have supervisory, examination, and enforcement authority over all state issuers”. In other words, a company that opts for a state stablecoin license will answer mainly to the state banking or financial authority that chartered it. However, to ensure federal interests (like financial stability and monetary policy) are protected, the Act gives federal regulators certain backup authorities over state-regulated issuers. Specifically, a state may voluntarily cede supervisory responsibility to the Federal Reserve for a stablecoin issuer. Even if not ceded, the Federal Reserve or OCC can step in to take enforcement actions against a state issuer in “unusual and exigent” circumstances . This language suggests that if a state-regulated stablecoin were threatening broader financial stability or flagrantly violating the rules, federal authorities could intervene (similar to how the Fed has emergency powers in other contexts). Additionally, if a state-based issuer grows beyond the $10B threshold, it must come under joint federal/state supervision or shift to a federal charter, as noted earlier. Anti-Money Laundering and Financial Integrity Measures Stablecoins raise concerns about illicit finance, so the GENIUS Act contains AML (Anti–Money Laundering) and counter-terrorism finance provisions . All permitted stablecoin issuers would be made explicitly subject to the Bank Secrecy Act (BSA) , which is the primary U.S. AML law. This means issuers must implement know-your-customer (KYC) programs , monitor transactions for suspicious activity, file Suspicious Activity Reports (SARs) and Currency Transaction Reports (CTRs) as required, and comply with anti-money laundering and sanctions laws just like banks and money services businesses do. The Act in fact requires each issuer to certify that it has implemented an AML and sanctions compliance program as a condition of being licensed. Failure to maintain an adequate AML program could lead to enforcement action. 
The law also directs the Financial Crimes Enforcement Network (FinCEN), the bureau of the Treasury Department that oversees BSA compliance, to issue "tailored" AML rules for digital assets. FinCEN would be tasked with updating or refining AML regulations to address the unique characteristics of stablecoins and other digital assets, and to "facilitate novel methods…to detect illicit activity involving digital assets." This could include guidance on blockchain analytics, information-sharing, or new reporting requirements specific to crypto transactions. In essence, Congress is requiring regulators to modernize AML oversight tools to keep pace with stablecoin technology, recognizing that traditional methods must adapt (for example, leveraging the traceability of distributed ledgers while also managing privacy concerns). Furthermore, the Act imposes bad actor bans: any individual who has been convicted of certain financial crimes is prohibited from serving as an officer or director of a stablecoin issuer. Custody and Investor Protection Provisions The GENIUS Act contains several provisions addressing the custody of stablecoins and reserve assets, as well as protections for stablecoin holders in the event of issuer insolvency. First, the law would allow regulated financial institutions to custody stablecoins and their reserves. Banks and credit unions are explicitly permitted to hold stablecoins in custody for customers and to hold reserve assets on behalf of stablecoin issuers. They are also permitted to use distributed ledger technology (blockchains) in their operations and even to issue "tokenized deposits" (essentially bank deposits represented as tokens on a blockchain) if they choose. These clarifications remove any legal doubt that banks can participate in the stablecoin ecosystem, which could foster integration of stablecoins into mainstream banking (e.g. banks offering stablecoin wallets or integrating stablecoin payments). For any entity acting as a custodian of stablecoin assets or reserves – whether it is the issuer itself or a third-party custodian – the Act sets rules to protect customers. Notably, custodians are prohibited from commingling customer stablecoin funds with the custodian's own assets. Customer assets must be segregated, which protects users if the custodian were to fail. (Commingling could put customer funds at risk of being tied up in the custodian's bankruptcy; segregation ensures they remain identifiable and returnable to customers.) Limited exceptions might apply (e.g. pooling for operational efficiency), but generally segregation of reserves is required. Additionally, any stablecoin custodian must itself be a regulated entity – either a federally or state regulated bank or a registered securities/capital markets entity (like a trust company or broker-dealer) regulated by the SEC or CFTC. This ensures that companies holding the reserves (or the tokens in custody for users) are subject to oversight and examinations regarding their safeguarding of those assets. Perhaps one of the most important consumer protections in the Act is its treatment of stablecoin holders' claims in bankruptcy. The law provides that stablecoin holders have priority over all other creditors in claims against the issuer's reserve assets. In practical terms, if a stablecoin issuer were to go bankrupt or be liquidated, the customers who hold the stablecoin tokens get first claim on the reserve funds backing those tokens, before any other debts of the company are paid.
This is a powerful protection: it means the collateral truly belongs to the token holders and cannot be grabbed by, say, the issuer’s other creditors. It greatly increases the likelihood that stablecoin holders can recover their money even if the issuer fails, essentially making them senior secured creditors with respect to the reserve. The Act also updates the bankruptcy code as needed to enforce this priority rule. By giving stablecoin users a senior claim, the Act addresses the scenario of a stablecoin “run” – users will know they are first in line for repayment, which should reduce panic and run incentives during stress. Finally, as noted earlier, the Act explicitly states that stablecoins are not insured by the federal government . This means if a stablecoin fails and somehow reserves were insufficient (which should not happen under a 100% reserve mandate, but theoretically if there were fraud or losses), holders cannot claim FDIC deposit insurance or other federal insurance – they rely on the issuer’s assets. Requiring disclosure of this fact is an important consumer-awareness measure to prevent any misunderstanding of stablecoins as risk-free insured deposits. Consumers must rely on the regulatory framework, the reserve backing, and the issuer’s soundness for protection, rather than an insurance safety net. Foreign Issuers and International Considerations Recognizing the global nature of crypto markets, the GENIUS Act also addresses foreign stablecoin issuers and cross-border usage . Within three years of the Act’s enactment , it would become unlawful to offer or sell a payment stablecoin to U.S. persons except by a permitted (U.S.-regulated) issuer. This effectively phases out unregulated stablecoins from the U.S. market – even foreign-based stablecoins (like those issued overseas) must either become compliant or exit the U.S. consumer market. However, the Act gives the U.S. Treasury, in consultation with other regulators, authority to establish “reciprocal agreements” with foreign jurisdictions that have comparable regulatory standards for stablecoins. If a foreign jurisdiction has a robust oversight regime similar to the GENIUS Act, Treasury can deem it comparable and set up an agreement to allow that jurisdiction’s regulated stablecoins to be offered in the U.S. without each foreign issuer needing a separate U.S. license. This is akin to passporting or mutual recognition. Even when foreign stablecoins are allowed, the Act imposes additional safeguards. Any foreign stablecoin permitted for U.S. use must have the technical capability to freeze transactions and to comply with lawful orders . In practice, this means truly decentralized or ungovernable stablecoins would not qualify – the issuer or controlling entity must be able to block illicit transactions or seize tokens when required by regulators (e.g. to enforce sanctions or court orders). Additionally, foreign issuers that want their stablecoins used in the U.S. must register with the OCC and submit to ongoing supervision , and they must hold a portion of their reserves in U.S. financial institutions sufficient to satisfy U.S. redemption requests. This latter requirement ensures that U.S. holders of a foreign stablecoin can redeem locally without depending entirely on foreign banks. It also gives U.S. regulators some leverage (since those reserves in the U.S. can be overseen or frozen if needed). 
The Act gives the Treasury Secretary (along with other agencies) flexibility to waive certain requirements for foreign issuers and for digital asset intermediaries that deal in foreign stablecoins , if such waivers are in the U.S. interest. This could allow, for example, transitional arrangements or special cases if a strict application of the rules would disrupt markets. Overall, these foreign issuer provisions aim to extend the regulatory perimeter internationally – encouraging other countries to implement similar standards and preventing the U.S. from becoming a haven for unregulated stablecoins, or conversely, preventing circumvention of U.S. rules via offshore entities. Other Notable Provisions In addition to the core elements above, the GENIUS Act contains a few other notable legal provisions: Executive Officials and Conflicts of Interest: Partly in response to concerns about high-level conflicts (the bill’s nickname “GENIUS” drew scrutiny around potential involvement of political figures in crypto ventures), the final Senate version added a provision affirming that existing federal ethics laws prohibit senior government officials (e.g. the President, Cabinet members) from issuing stablecoins . Federal Implementation Timeline: The Act directs that the regulatory framework be put into effect within one year of enactment . This includes agencies promulgating all necessary regulations through notice-and-comment rulemaking. A one-year implementation deadline is an aggressive timetable, reflecting Congress’s desire to quickly stand up oversight in a fast-moving crypto market. Regulators would need to coordinate and issue rules on licensing procedures, capital levels, examination guidelines, disclosure formats, etc., by that deadline. Continuation of Existing Authority: Until the new rules are in place, the Act does not immediately outlaw existing stablecoin activity. It implicitly allows current stablecoin issuers operating under state money transmitter laws or other exemptions to continue during the interim. Likewise, the Office of the Comptroller of the Currency’s prior guidance that national banks may handle stablecoins remains effective until superseded. This avoids a disruption where stablecoins would suddenly become illegal before a pathway to compliance exists. The information provided is not legal, tax, investment, or accounting advice and should not be used as such. It is for discussion purposes only. Seek guidance from your own legal counsel and advisors on any matters. The views presented are those of the author and not any other individual or organization. Some parts of the text may be automatically generated. The author of this material makes no guarantees or warranties about the accuracy or completeness of the information.
- European Securities and Markets Authority (ESMA) Final Report – Guidelines on supervisory practices for competent authorities to prevent and detect market abuse under the MiCAR
ESMA's new guidelines stem directly from Article 92(3) of MiCA, which obliges the authority to harmonise how national competent authorities (NCAs) combat market abuse in crypto-asset markets by 30 June 2025. The final report, published on 29 April 2025, therefore fills a regulatory gap that MiCA itself created: the regulation outlawed insider dealing, unlawful disclosure and manipulation involving crypto-assets, yet supervisory cultures and data capabilities still vary widely across the EU.
1. Why ESMA issued these Guidelines
Legal mandate – Article 92(3) MiCA instructs ESMA to publish, by 30 June 2025, guidelines that harmonise how national competent authorities (NCAs) prevent and detect market abuse in crypto-asset markets.
Regulatory gap – MiCA extends market-abuse rules to crypto, but supervisory cultures, tooling and data availability still differ widely across Member States.
Objective – Provide a common, risk-based supervisory framework that builds on experience under the Market Abuse Regulation (MAR) while addressing crypto-specific threats such as maximal extractable value (MEV), token-supply manipulation and the outsized role of social-media hype.
2. Scope, status and implementation timetable
Addressees: All EU NCAs (Article 3(1)(35) MiCA).
Subject matter: Supervision of insider dealing, unlawful disclosure and market manipulation involving crypto-assets (Title VI MiCA, Arts 86-92).
Entry into force: 3 months after the multilingual version appears on ESMA's website.
Comply-or-explain: Within 2 months of publication, each NCA must notify ESMA whether it will (i) comply, (ii) comply later, or (iii) not comply, giving reasons. ESMA will publish the list.
Relationship to RTS: Complements the forthcoming Regulatory Technical Standards on Suspicious Transaction or Order Reports (STORs) required under Art 92(2) MiCA.
3. Guiding principles and cross-cutting themes
Proportionality (Guideline 1) – Supervisory intensity must reflect the scale, complexity and risk of local crypto-markets and actors.
Risk-based & forward-looking approach (Guideline 2) – NCAs should continuously scan for emerging abuse typologies (e.g., on-chain front-running, algorithmic manipulation) and adapt oversight swiftly.
Leverage existing MAR know-how (Guideline 3) – Before inventing new controls, map current MAR surveillance to crypto contexts and plug the gaps (e.g., add MEV detection).
Build a common EU supervisory culture (Guideline 4) – Systematic peer-exchange of cases, data and best practices via ESMA working groups; potential ESMA convergence tools where divergent practices persist.
Adequate & specialised resources (Guideline 5) – Dedicated crypto teams, data scientists and bespoke tooling; tap initiatives such as the EU Supervisory Digital Finance Academy for staff up-skilling.
Stakeholder dialogue (Guideline 6) – Maintain open channels with industry, academia, tech providers and public-interest groups to anticipate new threats and co-design solutions.
Market-integrity outreach (Guideline 7) – NCAs should run public-education campaigns, issue Q&As and encourage voluntary best practices (e.g., issuer insider-lists, platform user warnings).
4. Operational supervision and enforcement tools
Market monitoring (Guideline 8): Adopt data-driven surveillance combining on-chain, off-chain and cross-market feeds; supplement automated scans (pattern/keyword) with human analysis; include social-media, blogs, newsletters and podcasts where they influence prices.
Oversight of PPAETs* (Guideline 9): Ensure Persons Professionally Arranging or Executing Transactions maintain effective, continuously reviewed abuse-detection systems; apply proportionate intensity (e.g., full trading-venue vs. order-transmission CASPs).
STOR handling (Guideline 10): Put in place clear internal workflows that assign responsibilities, grade severity/recurrence, and ensure timely follow-up; response must be proportionate to the threat detected.
ESMA coordination (Guideline 11): Seek ESMA-led joint inspections/investigations in cross-border cases involving multiple NCAs, risk of conflicting actions, or undue burden on firms.
Third-country obstacles (Guideline 12): Alert ESMA and peers when non-EU trading flows, legal barriers or uncooperative foreign authorities hamper abuse detection; strive for a common supervisory stance toward such obstacles.
*PPAET = firms (often CASPs) that professionally arrange or execute crypto transactions.
5. Crypto-specific risk considerations woven into the Guidelines
MEV & front-running – Recognised as potential insider dealing/market manipulation vectors that require bespoke detection logic.
Token mechanics – Sudden changes in supply, reserve backing (for asset-referenced/stablecoins) or governance decisions can be exploited for price manipulation.
Social-media virality – Higher risk of pump-and-dump or misinformation campaigns; NCAs are urged to monitor high-reach accounts and coordinated posting patterns.
Cross-border trading & DeFi – Surveillance must cover platforms and liquidity venues outside the EU; data-gathering may depend on blockchain analytics and cooperation agreements.
6. Stakeholder feedback and ESMA's adjustments
The Securities & Markets Stakeholder Group (SMSG) broadly endorsed the draft but asked for stronger emphasis on NCA staffing, training and consumer-protection links.
ESMA's response (Annex II): added explicit encouragement for dedicated crypto resources and ongoing training (Guideline 5); suggested voluntary dialogue with other authorities (consumer-protection, AML) under Guideline 4, while noting legal-basis constraints; and reaffirmed proportionality so smaller markets are not over-burdened.
7. Next steps for NCAs and industry
Translation & publication – ESMA will release the final Guidelines in all EU languages.
NCA notices – Within 2 months, each NCA must file its comply/intentions statement. ESMA will disclose any non-compliance publicly.
Practical adoption – NCAs integrate the Guidelines into national frameworks; market participants (especially PPAETs and trading-venue CASPs) should align internal surveillance and STOR processes accordingly.
8. Key take-aways for market participants
Expect more uniform and data-intensive surveillance across the EU, including scrutiny of your social-media communications.
STOR obligations will soon follow detailed RTS templates; start enhancing your detection logic now (on-chain & off-chain reconciliation, MEV scenarios, cross-asset signals).
Prepare for dialogue with supervisors – NCAs will actively seek feedback on new risks and may issue guidance or best-practice checklists.
Cross-border models (routing flow to non-EU venues, DeFi aggregators) may attract heightened attention where they frustrate EU supervision.
Prokopiev Law Group is a forward-thinking Web3 legal consultancy that bridges breakthrough blockchain ideas with the world's fast-moving regulatory frameworks.
Operating from Kyiv yet connected to a network that spans more than 50 jurisdictions, the firm delivers end-to-end support—from incorporating and structuring crypto ventures to drafting DAO governance, token-sale and data-protection documentation, and securing the licences and cross-border compliance investors expect under regimes such as MiCA. Its lawyers pair deep, specialised expertise with clear, business-oriented advice, letting founders and funders focus on building while PLG shoulders the legal heavy lifting. The information provided is not legal, tax, investment, or accounting advice and should not be used as such. It is for discussion purposes only. Seek guidance from your own legal counsel and advisors on any matters. The views presented are those of the author and not any other individual or organization. Some parts of the text may be automatically generated. The author of this material makes no guarantees or warranties about the accuracy or completeness of the information.
- AI in Arbitration: Frameworks, Applications, and Challenges
Artificial Intelligence (AI) is being integrated into arbitration as a tool to enhance efficiency and decision-making. In broad terms, AI refers to computer systems capable of tasks that typically require human intelligence, such as learning, pattern recognition, and natural language processing. In arbitration, AI’s role so far has been largely assistive – helping arbitrators and parties manage complex information and streamline procedures. For example, AI-driven software can rapidly review and analyze documents, searching exhibits or transcripts for relevant facts far faster than manual methods. Machine learning algorithms can detect inconsistencies across witness testimonies or summarize lengthy briefs into key points. Generative AI tools (like large language models) are also being used to draft texts – from procedural orders to initial award templates – based on user-provided inputs. The potential applications of AI in arbitration extend to nearly every stage of the process. AI systems can assist in legal research, sifting through case law and past awards to identify relevant precedents or even predict probable outcomes based on historical patterns. They can facilitate case management by automating routine administrative tasks, scheduling, and communications. We are proud to present an analysis of AI in arbitration as it stands today. If you are exploring AI’s role in arbitration, Prokopiev Law Group can help. We pair seasoned litigators with leading-edge AI resources to streamline complex cases and help you navigate this evolving landscape. If your situation calls for additional expertise, we are equally prepared to connect you with the right partners. Legal and Regulatory Frameworks The incorporation of AI into arbitration raises questions about how existing laws and regulations apply to its use. Globally, no uniform or comprehensive legal regime yet governs AI in arbitration, but several jurisdictions have started to address the intersection of AI and dispute resolution through legislation, regulations, or policy guidelines. Convention on the Recognition and Enforcement of Foreign Arbitral Awards (New York Convention) A. Overview of the Convention’s Scope Article I(1) states that the Convention applies to the “recognition and enforcement of arbitral awards made in the territory of a State other than the State where the recognition and enforcement are sought,” arising out of disputes between “persons, whether physical or legal.” The Convention does not define “arbitrator” explicitly; rather, it references “arbitral awards … made by arbitrators appointed for each case or … permanent arbitral bodies.” There is no mention of any possibility that the arbitrator could be non-human or an AI entity. B. Key Provisions Envision Human Arbitrators Article II: Speaks of “an agreement in writing under which the parties undertake to submit to arbitration all or any differences….” The Convention assumes an “arbitration agreement” with standard rights to appoint or challenge “the arbitrator.” Articles III–V: Concern the recognition/enforcement of awards and set out grounds upon which enforcement may be refused.
For instance, Article V(1)(b) refers to a party “not [being] given proper notice of the appointment of the arbitrator,” or “otherwise unable to present his case.” Article V(1)(d): Allows refusal of enforcement if “the composition of the arbitral authority … was not in accordance with the agreement of the parties….” The reference to an “arbitral authority,” “arbitrator,” or “composition” suggests a set of identifiable, human arbitrators who can be “composed” incorrectly or fail to abide by required qualifications. Article I(2): “The term ‘arbitral awards’ shall include not only awards made by arbitrators appointed for each case but also those made by permanent arbitral bodies….” Even in the latter scenario, the Convention contemplates a recognized body of human arbitrators (e.g. an institution with a roster of living arbitrators), not an automated algorithm. C. The Convention’s Enforcement Regime Presupposes Human Judgment The entire enforcement structure presupposes that an award is recognized only if it meets due-process requirements, such as giving a party notice, enabling it to present its case, and ensuring the arbitrator or arbitral body was validly composed. For instance, Article V(1)(a) contemplates that each party to the arbitration agreement must have “capacity,” and Article V(1)(b) contemplates that the party was able to present its case to an impartial decision-maker. An AI system cannot easily satisfy these due-process standards in the sense of being challenged, replaced, or tested for partiality or conflict of interest. D. “Permanent Arbitral Bodies” Do Not Imply Autonomous AI While Article I(2) acknowledges that an arbitral award can be made by “permanent arbitral bodies,” this does not open the door to a fully autonomous AI deciding the merits. A “permanent arbitral body” is typically an arbitral institution (like the ICC Court or an arbitral chamber) with rosters of living arbitrators. Nowhere does the Convention recognize a non-human decision-maker substituting for arbitrators themselves. UNCITRAL Model Law on International Commercial Arbitration A. Terminology and Structure Article 2(b) of the Model Law defines “arbitral tribunal” as “a sole arbitrator or a panel of arbitrators.” Article 10 refers to determining “the number of arbitrators,” “one” or “three,” etc., which in ordinary usage and practice means one or more individual persons. Article 11 lays out a procedure for appointing arbitrators, handling their challenge (Articles 13 and 14), and so on, plainly assuming a person. B. Core Provisions That Imply a Human Arbitrator Article 11 (and subsequent articles on challenge, removal, or replacement of arbitrators) revolves around verifying personal traits, such as independence, impartiality, and conflicts of interest. For example, Article 12(1) requires an arbitrator, upon appointment, to “disclose any circumstances likely to give rise to justifiable doubts as to his impartiality or independence.” This is obviously oriented to a natural person. An AI system cannot meaningfully “disclose” personal conflicts. Article 31(1) demands that “The award shall be made in writing and shall be signed by the arbitrator or arbitrators.” While in practice a tribunal can sign electronically, the point is that an identifiable, accountable person signs the award. A machine cannot undertake the personal act of signing or be held responsible.
Article 19 affirms the freedom of the parties to determine procedure, but absent party agreement, the tribunal “may conduct the arbitration in such manner as it considers appropriate.” This includes evaluating evidence, hearing witnesses, and ensuring fundamental fairness (Articles 18, 24). That discretionary, human-like judgment is not accounted for if the “tribunal” were simply an AI tool with no human oversight. C. Arbitrator’s Duties Presuppose Personal Judgment Many of the Model Law’s articles require the arbitrator to exercise personal discretion and to do so impartially: Article 18: “The parties shall be treated with equality and each party shall be given a full opportunity of presenting his case.” Human arbitrators are responsible for ensuring this fundamental right. Article 24: The tribunal hears the parties, manages documents, questions witnesses, etc. Article 26: The tribunal may appoint experts and question them. Article 17 (and especially the 2006 amendments in Chapter IV A) requires the arbitrator to assess whether an “interim measure” is warranted, including the “harm likely to result if the measure is not granted.” These duties reflect a legal expectation of personal capacity for judgment, integral to the role of “arbitrator” as recognized by the Model Law. United States The United States has no specific federal statute or set of arbitration rules explicitly regulating the use of AI in arbitral proceedings. The Federal Arbitration Act (“FAA”), first enacted in 1925 (now codified at 9 U.S.C. §§ 1–16 and supplemented by Chapters 2–4), was drafted with human decision-makers in mind. Indeed, its provisions refer to “the arbitrators” or “either of them” in personal terms. Fully autonomous AI “arbitrators” were obviously not contemplated in 1925. Nonetheless, the FAA imposes no direct ban on an arbitrator’s use of technology. Moreover, under U.S. law, arbitration is fundamentally a matter of contract. If both parties consent, the arbitrator’s latitude to employ (or even to be) some form of technology is generally broad. So long as the parties’ agreement to arbitrate does not violate any other controlling principle (e.g., unconscionability, public policy), it will likely be enforceable. AI and the “Essence” of Arbitration Under the FAA The threshold issue is not whether an arbitrator may use AI, but whether AI use undermines the essence of arbitration under the FAA. The parties’ arbitration agreement—and the requirement that the arbitrator “ultimately decide” the matter—are central. Under 9 U.S.C. § 10(a), a party may move to vacate an award if “the arbitrators exceeded their powers,” or if there was “evident partiality” or “corruption.” In theory, if AI fully supplants the human arbitrator and creates doubt about the award’s impartiality or the arbitrator’s independent judgment, a court could be asked to vacate on those grounds. A. Replacing the Arbitrator Entirely If AI replaces the arbitrator (with minimal or no human oversight), courts might question whether a non-human “arbitrator” is legally competent to issue an “award.” Under the FAA, the arbitrator’s written award is crucial (9 U.S.C. §§ 9–13). If the AI cannot satisfy minimal procedural requirements—like issuing a valid award or being sworn to hear testimony—or raises questions about “evident partiality,” a reviewing court could find a basis to vacate (9 U.S.C. § 10(a)).
If an AI system controls the proceeding such that the human arbitrator exercises no true discretion, that might mean the award was not genuinely issued by the arbitrator—risking vacatur under 9 U.S.C. § 10(a)(4) for “imperfectly executed” powers. B. Public Policy Concerns An all-AI “award” that lacks a human hallmark of neutrality could, in a hypothetical scenario, be challenged under public policy. II. Potential Legal Challenges When AI Is Used in Arbitration A. Due Process and Fair Hearing Right to Present One’s Case (9 U.S.C. § 10(a)(3)) : Both parties must have the chance to be heard and present evidence. If AI inadvertently discards or downplays material evidence, and the arbitrator then fails to consider it, a party could allege denial of a fair hearing. Transparency : While arbitrators are not generally obliged to disclose their internal deliberations, an arbitrator’s undisclosed use of AI could raise due process issues if it introduces an unvetted analysis. If a losing party discovers the award rested on an AI-driven legal theory not argued by either side, the party could claim it had no opportunity to rebut it. “Undue Means” (9 U.S.C. § 10(a)(1)) : Traditionally, this refers to fraudulent or improper party conduct. Still, a creative argument might be that reliance on AI—trained on unknown data—without informing both parties is “undue means.” If the arbitrator’s decision relies on undisclosed AI, a party could argue it was effectively ambushed. B. Algorithmic Bias and Fairness of Outcomes Bias in AI Decision-Making : AI tools can inadvertently incorporate biases if trained on skewed data. This can undercut the neutrality required of an arbitrator. If an AI influences an award—for example, a damages calculator that systematically undervalues certain claims—a party might allege it introduced a biased element into the arbitration process. Challenge via “Evident Partiality” (9 U.S.C. § 10(a)(2)) : If an arbitrator relies on an AI known (or discoverable) to be biased, a losing party might argue constructive partiality. A court’s review is narrow, but extreme or obvious bias could support vacatur. III. FAA Vacatur or Modification of AI-Assisted Awards A. Exceeding Powers or Improper Delegation (9 U.S.C. § 10(a)(4)) An award is vulnerable if the arbitrator effectively delegates the decision to AI and merely rubber-stamps its output. Parties choose a human neutral—not a machine—and can argue the arbitrator “exceeded [their] powers” by failing to personally render judgment. B. Procedural Misconduct and Prejudice (9 U.S.C. § 10(a)(3)) Using AI might lead to misconduct if it pulls in information outside the record or curtails a party’s presentation of evidence. Any ex parte data-gathering (even by AI) can be challenged. Courts might find “misbehavior” if parties had no chance to confront AI-derived theories. C. Narrow Scope of Review Judicial review under the FAA is strictly limited (9 U.S.C. §§ 10, 11). Simple factual or legal errors—even if AI-related—rarely suffice for vacatur. A challenger must show the AI involvement triggered a recognized statutory ground (e.g., refusing to hear pertinent evidence or actual bias). Courts typically confirm awards unless there is a clear denial of fundamental fairness. D. Modification of Awards (9 U.S.C. § 11) If AI introduced a clear numerical error or a clerical-type mistake in the award, courts may modify or correct rather than vacate. Such errors include “evident material miscalculation” (§ 11(a)) or defects in form not affecting the merits (§ 11(c)). 
These are minor and straightforward fixes. AI Bill of Rights The White House’s Blueprint for an AI Bill of Rights (October 2022) sets forth high-level principles for the responsible design and use of automated systems. Two of its core tenets are particularly relevant: “Notice and Explanation” (transparency) and “Human Alternatives, Consideration, and Fallback” (human oversight). The Notice and Explanation principle provides that people “should know that an automated system is being used and understand how and why it contributes to outcomes that impact [them]”, with plain-language explanations of an AI system’s role. The Human Alternatives principle urges that people “should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems” caused by an automated decision. While the AI Bill of Rights is a policy guidance document (not a binding law), it reflects a federal push for algorithmic transparency, accountability, and human oversight in AI deployments. These values can certainly inform arbitration practice. For instance, if an arbitrator or arbitration institution chooses to utilize AI in case management or decision-making, adhering to these principles – by being transparent about the AI’s use and ensuring a human arbitrator remains in ultimate control – would be consistent with emerging best practices. We already see movement in this direction: industry guidelines under development (e.g. the Silicon Valley Arbitration & Mediation Center’s draft “Guidelines on the Use of AI in Arbitration”) emphasize disclosure of AI use and that arbitrators must not delegate their decision-making responsibility to an AI. European Union AI Regulation The AI Act (Regulation (EU) 2024/1689) lays down harmonized rules for the development, placing on the market, putting into service, and use of AI systems within the Union. It follows a risk-based approach, whereby AI systems that pose particularly high risks to safety or to fundamental rights are subject to enhanced obligations. The AI Act designates several domains in which AI systems are considered “high-risk.” Of particular relevance is Recital (61), which classifies AI systems “intended to be used by a judicial authority or on its behalf to assist a judicial authority in researching and interpreting facts and the law and in applying the law to a concrete set of facts” as high-risk. That same recital extends the classification to AI systems “intended to be used by alternative dispute resolution bodies for those purposes,” when the decisions they produce create legal effects for the parties. Consequently, if an AI system is intended to assist arbitrators in a manner akin to that for judicial authorities, and if its decisions or outputs can materially shape the arbitrators’ final binding outcome, then such an AI system comes within the “high-risk” classification. I. Conditions Triggering High-Risk Classification for AI Arbitration Does the AI “assist” in interpreting facts or law? According to Recital (61), the high-risk label applies when the AI system is used “to assist […] in researching and interpreting facts and the law and in applying the law to a concrete set of facts.” If an AI tool offers predictive analytics on likely case outcomes, interprets contract terms in light of relevant legal doctrine, or provides reasoned suggestions on liability, quantum of damages, or relevant procedural steps, it falls within the scope of “assisting” the arbiter in a legally determinative task. 
Does the AI system produce legal effects? The Regulation explicitly points to “the outcomes of the alternative dispute resolution proceedings [that] produce legal effects for the parties.” (Recital (61)) Arbitral awards are typically binding on the parties—thus having legal effect—and often enforceable in national courts. Therefore, an AI system that guides or shapes the arbitrator’s (or arbitration panel’s) legally binding decision is presumably captured. Exclusion of “purely ancillary” uses Recital (61) clarifies that “purely ancillary administrative activities that do not affect the actual administration of justice in individual cases” do not trigger high-risk status. This means if the AI is limited to scheduling hearings, anonymizing documents, transcribing proceedings, or managing routine tasks that do not influence the legal or factual determinations, it would not be considered high-risk under this Regulation. The dividing line is whether the AI’s output can materially influence the final resolution of the dispute (e.g., analyzing core evidence, recommending liability determinations, or drafting essential portions of the award). II. Legal and Practical Implications for AI in Arbitration in EU When an AI tool used in arbitration is classified as high-risk, a suite of obligations from the Regulation applies. The Regulation’s relevant provisions on high-risk systems span risk management, data governance, technical documentation, transparency, human oversight, and post-market monitoring (Articles and recitals throughout the Act). Below is an overview of these obligations as they would apply to AI arbitration: A. Risk Management System (Article 9) Providers of high-risk AI systems are required to implement a documented, continuous risk management process covering the system’s entire lifecycle. For AI arbitration, the provider of the software (i.e. the entity placing it on the market or putting it into service) must: Identify potential risks (including the risk of incorrect, biased, or otherwise harmful award recommendations). Mitigate or prevent those risks through corresponding technical or organisational measures. Account for reasonably foreseeable misuse (for instance, using the tool for types of disputes or jurisdictions it is not designed to handle). B. Data Governance and Quality (Article 10) Data sets used to train, validate, or test a high-risk AI system must: Be relevant, representative, and correct to the greatest extent possible. Undergo appropriate governance and management to reduce errors or potential biases that could lead to discriminatory decisions or outcomes in arbitration. C. Technical Documentation, Record-Keeping, and Logging (Articles 11 and 12) High-risk AI systems must include: Clear, up-to-date technical documentation, covering the model design, data sets, performance metrics, known limitations, and other key technical aspects. Proper record-keeping (“logging”) of the system’s operations and outcomes, enabling traceability and ex post review (e.g. in the event of challenges to an arbitral decision relying on the AI’s outputs). D. Transparency and Instructions for Use (Article 13) Providers of high-risk AI systems must ensure sufficient transparency by: Supplying deployers (e.g. arbitral institutions) with instructions about how the system arrives at its recommendations, the system’s capabilities, known constraints, and safe operating conditions. 
Disclosing confidence metrics, disclaimers of reliance, warnings about potential error or bias, and any other usage guidelines that allow arbitrators to understand and properly interpret the system’s output. E. Human Oversight (Article 14) High-risk AI systems must be designed and developed to allow for human oversight: Arbitrators (or arbitral panels) must remain the ultimate decision-makers and be able to detect, override, or disregard any AI output that appears flawed or biased. The AI tool cannot replace the arbitrator’s judgment; rather, it should support the decision-making process in arbitration while preserving genuine human control. F. Accuracy, Robustness, and Cybersecurity (Article 15) Providers must ensure that high-risk AI systems: Achieve and maintain a level of accuracy that is appropriate in relation to the system’s intended purpose (e.g. suggesting case outcomes in arbitration). Are sufficiently robust and resilient against errors, manipulation, or cybersecurity threats—particularly critical for AI tools that could otherwise be hacked to produce fraudulent or manipulated arbitral results. G. Post-Market Monitoring (Article 72) Providers of high-risk AI systems must also: Monitor real-world performance once the system is deployed (i.e. used in actual arbitration proceedings). Take timely corrective actions if unacceptable deviations (e.g. high error rates, systemic biases) emerge in practice. III. The Role of the Provider vs. the Arbitration Institution (Deployer) Pursuant to Article 3(3) of Regulation (EU) 2024/1689 (“AI Act”), a provider is defined as: “...any natural or legal person, public authority, agency or other body that develops an AI system or that has an AI system developed and places that system on the market or puts it into service under its own name or trademark.” Accordingly, where a law firm, software vendor, or specialized AI start-up develops a high-risk AI system and makes it available to arbitral institutions for the purpose of dispute resolution, that entity qualifies as the provider. Providers of high-risk AI systems must comply with the obligations set out in Articles 16–25 of the AI Act, including ensuring that the high-risk AI system meets the requirements laid down in Articles 9–15, performing or arranging the relevant conformity assessments (Article 43), and establishing post-market monitoring (Article 72). Under Article 3(4) of the AI Act, a deployer is defined as: “...any natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity.” In the arbitration context, the arbitral institution or the arbitration panel implementing the high-risk AI system is considered the deployer. Deployers have obligations outlined in Article 26, which include using the system in compliance with the provider’s instructions, monitoring its performance, retaining relevant records (Article 26), and ensuring that human oversight is effectively exercised throughout the system’s operation (Article 14). IV. Distinguishing “High-Risk” vs.
“Ancillary” AI in Arbitration The AI Act’s operative text (specifically Article 6(2) and Annex III, point 8(a)) classifies as high-risk those AI systems “intended to be used by a judicial authority or on their behalf to assist a judicial authority in researching and interpreting facts and the law and in applying the law to a concrete set of facts, or to be used in a similar way in alternative dispute resolution,” when the outcomes of the proceedings produce legal effects. However, the Regulation does not designate as high-risk any AI system that merely executes ancillary or administrative tasks (the Act itself uses the term “purely ancillary administrative activities that do not affect the actual administration of justice in individual cases” in Recital (61)). Therefore:
High-risk arbitration AI: Covered by Article 6(2) and Annex III, point 8(a), if the AI system materially or substantively influences the resolution of the dispute by “assisting in researching and interpreting facts and the law and in applying the law to a concrete set of facts.” This includes systems that suggest legal conclusions, draft core elements of the arbitral decision, or advise on factual or legal findings central to the final outcome.
Not high-risk: If the AI tool is purely “ancillary” in nature — for instance, scheduling, document formatting, automated transcription, or anonymization — and does not shape the actual analysis or findings that produce legally binding effects. Such use cases are not captured by Annex III, nor do they meet the condition in Article 6(2).
Boundary scenarios: If the AI tool nominally performs only “supporting” tasks (such as ranking evidence or recommending procedural steps) but in practice significantly guides or steers the arbitrator’s essential decisions, that usage may bring it under the scope of high-risk classification. The decisive factor is whether the system’s functioning meaningfully affects how the law and facts are ultimately applied.
Hence the quoted distinction between “High-risk” AI and “Not high-risk” (or ancillary) AI in arbitration aligns with the AI Act, subject to the caveat that borderline applications must be assessed in light of whether the AI’s outputs meaningfully influence or determine legally binding outcomes.
National Arbitration Laws in Key EU Member States
Arbitration in the EU is primarily governed by the national laws of each Member State. Over the past decades, many European countries have modernized their arbitration statutes, often drawing inspiration from the UNCITRAL Model Law on International Commercial Arbitration. However, the extent of Model Law adoption and procedural frameworks varies by country. The overview below lists, for each country, the primary legislation, whether it is based on the UNCITRAL Model Law, and how awards are enforced.
Germany – Primary legislation: 10th Book of the Code of Civil Procedure (ZPO), Sections 1025–1066. Model Law: Yes – adopted the Model Law (1985) with modifications; applies to both domestic and international arbitrations. Enforcement: arbitration-friendly courts; awards enforceable by court order, with grounds for setting aside or refusal mirroring New York Convention (NYC) defenses.
France – Primary legislation: Code of Civil Procedure (CPC), Articles 1442–1527. Model Law: No – French law is an independent regime not based on the Model Law. Domestic arbitration is subject to more stringent rules, while international arbitration is more liberal; international arbitration is defined broadly (the dispute implicates international trade interests). Few mandatory rules in international arbitration; parties and arbitrators have wide autonomy.
No strict writing requirement for international arbitration agreements. Enforcement uses an exequatur process.
Netherlands – Primary legislation: Dutch Arbitration Act (Book 4, Code of Civil Procedure). Model Law: Partial – not a verbatim Model Law adoption, but significantly inspired by it; no distinction between domestic and international cases. Enforcement: domestic awards are enforced like judgments after deposit with the court; foreign awards are recognized under the NYC, as the Act’s enforcement provisions closely follow the NYC grounds.
Sweden – Primary legislation: Swedish Arbitration Act (SAA). Model Law: Yes in substance – Sweden did not formally adopt the Model Law’s text, but the SAA contains substantive similarities; it applies equally to domestic and international arbitrations and sets certain qualifications (e.g. legal capacity, impartiality) beyond Model Law minima. Enforcement: Sweden is strongly pro-enforcement; as an NYC party, it enforces foreign awards under NYC terms.
Spain – Primary legislation: Spanish Arbitration Act (Law 60/2003). Model Law: Yes – the 2003 Act was heavily based on the 1985 UNCITRAL Model Law. Enforcement: arbitral awards are enforceable as final judgments (no appeal on the merits); the High Courts of Justice hear actions to annul awards on limited grounds; foreign awards are recognized under the NYC, and Spanish courts generally uphold awards absent clear NYC Article V grounds.
Italy – Primary legislation: Code of Civil Procedure (CCP). Model Law: No – Italian arbitration law is not directly based on the Model Law, though it shares many of its core principles. Enforcement: the Courts of Appeal have jurisdiction over recognition and enforcement of foreign awards (Articles 839–840 CCP); Italy applies NYC criteria for refusal of enforcement; domestic awards are enforced after the Court of Appeal confirms there are no grounds to set aside.
AI Arbitration in Germany
Below is a high-level analysis of how AI-based arbitration might fit into the wording of the German Arbitration Act (ZPO Book 10). The analysis is somewhat speculative, since the Act does not directly address AI at all. A. No Direct Prohibition on AI A first observation: nowhere in Book 10 is there an explicit rule banning or limiting the use of AI for arbitral proceedings. Terms such as “Schiedsrichter” (arbitrator), “Person,” or “unabhängig und unparteiisch” (independent and impartial) consistently refer to human arbitrators, but do not categorically exclude the possibility of non-human decision-makers. The statutory text always presupposes a “person,” “he,” or “she” as an arbitrator; still, that is the default assumption in 1998–2018 legislative drafting, not necessarily a prohibition. One could argue that the repeated references to “Person” reflect an implicit normative stance that an arbitrator should be a human being—but from a strictly literal vantage, it is an open question if an AI “system” could serve in that role. B. Potential Impediments to AI as Sole Arbitrator Sections 1036–1038 speak of the requirement that arbitrators disclose conflicts of interest, remain impartial, and be capable of fulfilling the arbitrator’s duties. These requirements seem conceptually bound to human qualities (“berechtigte Zweifel an ihrer Unparteilichkeit” – justifiable doubts as to their impartiality; “Umstände [...] unverzüglich offen zu legen” – the duty to disclose circumstances without delay; “sein Amt nicht erfüllen” – being unable to fulfil the mandate). One might argue an AI does not truly have “impartiality” or the capacity to “disclose” conflicts as a human might. A creative reading of these provisions might imply that only a human can exhibit these qualities, leading to an indirect barrier to AI arbitrators.
Even so, a forward-looking approach might interpret “unabhängig und unparteiisch” as a principle that can be satisfied technologically if the AI’s training, data, and algorithms meet certain transparency standards. However, the textual references to “Er,” “Person,” or the requirement to “disclose circumstances” do suggest a legislative design geared toward human arbitrators. If the parties themselves voluntarily designate an AI system, a court might question whether that “appointment” meets the statutory standard of an impartial “Schiedsrichter” capable of fulfilling the mandated disclosure obligations. It is unclear how an AI would spontaneously “disclose” conflicts or handle “Ablehnungsverfahren” (challenge procedures) under §§ 1036–1037. C. Formal Requirements Under § 1035 Under § 1035, the parties “order” (ernennen) the arbitrator(s). The law contemplates an appointment by name, while also allowing the appointment method to be left to the parties’ agreement. One might attempt to list an AI platform or a specialized AI arbitral entity as the “Schiedsrichter.” Then, if the parties do not dispute that appointment, presumably the process is valid. The only textual friction is in § 1035(5) (the court must ensure the “unabhängigen und unparteiischen” arbitrator). If court assistance in appointing is requested, a judge might find an AI does not meet the statutory criteria, effectively refusing to “install” it. But if the parties themselves have chosen an AI system in private, it is not impossible from a purely textual standpoint—though it is an untested scenario. D. Procedural Rights: “Notice,” “Hearing,” and “Legal Audi Alteram Partem” Sections 1042–1048 require that each party is “to be heard” (rechtliches Gehör) and that the arbitrator handles evidence and ensures fairness. An AI system delivering a purely automated decision might be deemed to conflict with the personal oversight and reasoned assessment implied by these clauses. For instance, § 1042(1) states “the parties shall be treated equally” and “each party is entitled to be heard.” A purely algorithmic system could risk due-process concerns if it lacks human capacity to weigh “fairness” or accommodate unforeseen procedural nuances. Still, the text does not explicitly say an automaton cannot do it; rather, it insists on respect for due process. If the AI system can incorporate such procedures—ensuring parties can submit evidence, respond to each other, and have an “explainable” outcome—there is no direct textual ban. E. Setting Aside and Public Policy Section 1059 allows a court to set aside an award for lack of due process or if the arbitrator is not properly appointed. An AI-based award that fails to grant each party an opportunity to present their case or that obscures the basis for the decision might be at risk of annulment under § 1059(2)(1)(b) or (d). The courts might also strike down an AI-run arbitration under “public policy” (ordre public) if it is deemed that the process is too opaque or not a “fair hearing.” So although no explicit clause forbids AI arbitrators, the effect could be that an AI award is challenged under §§ 1059, 1052 (decision by a collegium?), or 1060(2). France A. 
Domestic Arbitration: Article 1450 Requires a Human Arbitrator Article 1450 ( Titre I, Chapitre II ) provides, in part: “La mission d’arbitre ne peut être exercée que par une personne physique jouissant du plein exercice de ses droits.Si la convention d’arbitrage désigne une personne morale, celle-ci ne dispose que du pouvoir d’organiser l’arbitrage.” This is the single most direct statement in the Act that speaks to who (or what) may serve as arbitrator. It clearly states that the function of deciding the case (“la mission d’arbitre”) can only be carried out by a “personne physique” (a natural person) enjoying full civil rights. Meanwhile, a personne morale (legal entity) may be tasked only with administering or organizing the proceedings, not issuing the decision. Under domestic arbitration rules, an AI system—even if structured within a legal entity—cannot lawfully act as the actual decider, because the statute explicitly demands a natural person . This requirement amounts to an indirect but quite categorical ban on a purely machine-based arbitrator in French domestic arbitration. An AI could presumably assist a human arbitrator, but it could not alone fulfill the statutory role of rendering the award. B. International Arbitration: No Verbatim “Personne Physique” Rule, Yet Similar Implications For international arbitration, Article 1506 cross-references some (but not all) provisions of Title I. Article 1450 is notably not among those that automatically apply to international cases. As a result, there is no verbatim statement in the international part that “the arbitrator must be a natural person.” One might argue that, in principle, parties to an international arbitration could try to designate an AI system as their “arbitrator.” However, the rest of the code—e.g. Articles 1456, 1457, 1458, which are incorporated by Article 1506—consistently presumes that the arbitrator is capable of disclosing conflicts, being “récusé,” having “un empêchement,” etc. These obligations appear tied to qualities of a human being: impartiality, independence, the duty to reveal conflicts of interest, the possibility of “abstention” or “démission,” etc. They strongly suggest a living individual is expected. An AI tool cannot meaningfully “reveal circumstances” that might affect its independence; nor can it be “révoqué” by unanimous party consent in the same sense as a human. Thus, even in international arbitration, the text strongly implies an arbitrator is a person who can fulfill these statutory duties. In short, the literal bar in Article 1450 applies to domestic arbitration, but the spirit of the international arbitration provisions also envisions a human decision-maker. While there is no line that says “AI arbitrators are forbidden,” the repeated references to “l’arbitre” as someone who must disclose, resign, or be revoked for partiality push in the same direction. A French court would likely find that, under Articles 1456–1458 and 1506, an AI alone cannot meet the code’s requirements for an arbitrator. C. Possible Indirect Challenges and Public Policy Even if parties somehow attempted to sidestep the requirement of a “personne physique,” the code’s enforcement provisions would pose other obstacles: Due Process (Principe de la Contradiction). Articles 1464, 1465, 1510, etc. require observance of contradiction and égalité des parties . 
A purely automated arbitrator might fail to show that it afforded both sides a genuine opportunity to present arguments before a human decision-maker who can weigh them in a reasoned, flexible manner. Setting Aside or Refusal of Exequatur. If the AI-based award flouts the code’s requirement that the arbitrator have capacity and be impartial, the losing party can invoke Articles 1492 or 1520 (for domestic vs. international) to seek annulment for irregular constitution of the tribunal or for violation of fundamental procedural principles. Manifest Contrariety to Public Policy. Articles 1488 (domestic) and 1514 (international) require that a French court refuse exequatur if the award is “manifestly contrary to public policy.” A completely AI-run arbitral process might be deemed contrary to fundamental procedural fairness. In each respect, the Act’s structure presumes a human tribunal with discretionary powers, an obligation to sign the award, etc. An AI alone would struggle to comply. Netherlands Under the Dutch Arbitration Act (Book 4 of the Dutch Code of Civil Procedure), the text presupposes that an arbitrator is a human decision-maker rather than an AI system. For instance, Article 1023 states that “Iedere handelingsbekwame natuurlijke persoon kan tot arbiter worden benoemd” (“Every legally competent natural person may be appointed as an arbitrator”) . This phrasing is already suggestive: it frames “arbiter” in terms of a flesh‑and‑blood individual who has legal capacity. Other provisions likewise assume that an arbitrator can perform tasks generally associated with a human decision-maker. For example, arbitrators must disclose potential conflicts of interest and can be challenged (wraking) if there is “gerechtvaardigde twijfel … aan zijn onpartijdigheid of onafhankelijkheid” (“justified doubt … about his impartiality or independence”) . The Act also speaks of “ondertekening” (signing) of the arbitral award (Article 1057) and grants arbitrators discretionary procedural powers, such as the authority to weigh evidence, hear witnesses, and manage hearings . All these elements lean heavily on human qualities, like independence, impartiality, and the capacity to understand and consider the parties’ arguments. Thus, while there is no single clause that literally says, “AI is barred from serving as an arbitrator,” the overall statutory design pivots around the concept of a legally competent person. An AI system cannot realistically fulfill obligations such as disclosing personal conflicts, signing an award, or being subjected to a wraking procedure as a human would. In that sense, although the Act does not contain an explicit prohibition on “AI arbitrators,” it effectively prohibits them by tying the arbitrator’s role to natural‑person status and personal legal capacities. Sweden Under the Swedish Arbitration Act (Lag (1999:116) om skiljeförfarande), the statutory language implicitly assumes that an arbitrator (skiljeman) will be a human being with legal capacity, rather than an AI system. For instance, Section 7 states that “Var och en som råder över sig själv och sin egendom kan vara skiljeman” (“Anyone who has the capacity to manage themselves and their property may serve as an arbitrator”) . This phrasing strongly suggests a natural person with the requisite legal autonomy rather than a non-human entity. 
The Act also repeatedly ties the arbitrator’s role to personal qualities such as impartiality (opartiskhet) and independence (oberoende) (Section 8), the ability to disclose circumstances that might undermine confidence in their neutrality (Section 9), and a duty to underwrite or “sign” the final award (Section 31) . All these provisions presuppose that the arbitrator can make discretionary judgments, weigh fairness, and be removed (skiljas från sitt uppdrag) on grounds related to personal conduct or conflicts of interest. A purely AI-driven system, which lacks the hallmarks of human capacity and accountability, could not fulfill such requirements. Accordingly, even though the Swedish law does not explicitly state “AI is prohibited from acting as an arbitrator,” it functionally bars non-human arbitrators by defining who can serve and by imposing obligations—impartiality, personal disclosure, physical or electronic signature, and readiness to be challenged for bias—that only a human individual can meaningfully carry out. Spain Under the Spanish Arbitration Act (Ley 60/2003, de 23 de diciembre), several provisions implicitly treat the árbitro as a human decision‑maker rather than an AI system. For instance, Article 13 (“Capacidad para ser árbitro”) specifies that arbitrators must be natural persons “en el pleno ejercicio de sus derechos civiles.” This requirement clearly presupposes a human being who has legal capacity. Additionally, the Act imposes duties of independence and impartiality (Article 17) and requires arbitrators to reveal any circumstances that might raise doubts about their neutrality . The law also contemplates acceptance of the arbitrators’ appointment in person (Article 16), the possibility of recusación (challenge) based on personal or professional reasons (Article 17–18), and the signing of the award (laudo) by the arbitrator(s) (Article 37). All these duties imply discretionary judgment and accountability typical of a human arbitrator. Italy Under the Italian Arbitration Act (Articles 806–840 of the Codice di Procedura Civile) , the text consistently assumes that an arbitro (arbitrator) is a human individual, in full legal capacity. For example: Article 812 provides that “Non può essere arbitro chi è privo […] della capacità legale di agire” (“No one lacking legal capacity may serve as arbitrator”). An AI system cannot meaningfully satisfy this personal requirement of legal capacity. The Act speaks of recusation, acceptance, disclosure of conflicts of interest, etc. (Articles 813, 815), all of which assume personal traits (e.g. independence, impartiality). Arbitrators sign the final award (Articles 823, 824), act as public officials for certain tasks, and bear personal liability (Articles 813-bis, 813-ter). Hence, though the law does not explicitly forbid “AI arbitrators,” it effectively bars non-human arbitrators by imposing requirements linked to human legal capacity, personal judgment, and accountability. An autonomous AI could not meet these conditions without human involvement. Other Jurisdictions United Arab Emirates Under Federal Law No. 6 of 2018 Concerning Arbitration in the UAE, an arbitrator must be a natural person with full legal capacity and certain other qualifications. Specifically: Article 10(1)(a) provides that an arbitrator must not be a minor or otherwise deprived of their civil rights, indicating they must be a living individual with legal capacity. 
The same article stipulates that they cannot be a member of the board of trustees or an executive/administrative body of the arbitral institution handling the dispute, and they must not have relationships with any party that would cast doubt on their independence or impartiality. Article 10(2) confirms there is no requirement of a certain nationality or gender, but it still envisions a human arbitrator who can meet the personal requirements in Article 10(1). Article 10(3) also requires a potential arbitrator to disclose “any circumstances which are likely to cast doubts on his or her impartiality or independence,” and to continue updating the parties about such circumstances throughout the proceedings. An AI application is not able to fulfill that personal disclosure obligation in the sense prescribed by the Law. Article 10 BIS (introduced by a later amendment) restates that an arbitrator must be a natural person meeting the same standards – for example, holding no conflicts, disclosing membership in relevant boards, and so forth – if that person is chosen from among certain arbitration-center supervisory bodies. Hence, although the UAE Arbitration Law (Federal Law No. 6 of 2018) does not literally declare “AI arbitrators are prohibited,” it unequivocally conditions the role of arbitrator on being a natural person with the required legal capacity and duties such as disclosure of conflicts. An autonomous AI system, by contrast, cannot fulfill the obligations that the Law imposes, whether it be impartiality, independence, or the capacity to sign the final award. Such requirements, taken together, effectively exclude AI from serving as the sole or true arbitrator in a UAE-seated arbitration. Singapore Under Singapore’s International Arbitration Act (Cap. 143A, Rev. Ed. 2002) (the IAA), which incorporates the UNCITRAL Model Law on International Commercial Arbitration (with certain modifications), there is no explicit statement such as “an arbitrator must be a human being.” However, the provisions of the Act and the Model Law, read as a whole, presuppose that only natural persons can serve as arbitrators in a Singapore-seated international arbitration. A. Terminology and Structure of the Legislation Section 2 of the IAA (Part II) defines “arbitral tribunal” to mean “a sole arbitrator or a panel of arbitrators or a permanent arbitral institution.” Likewise, Article 2 of the Model Law uses “arbitral tribunal” to refer to a “sole arbitrator or a panel of arbitrators.” Neither the IAA nor the Model Law define “arbitrator” as something that could be non-human, nor do they provide a mechanism for appointing non-person entities. The Act consistently describes the arbitrator as an individual who can accept appointments, disclose conflicts, sign awards, act fairly , be challenged , or be replaced , among other duties. B. Core Provisions That Imply a Natural Person Arbitrator Appointment and Acceptance Section 9A of the IAA (read with Article 11 of the Model Law) speaks of “appointing the arbitrator” and states that if parties fail to appoint, the “appointing authority” does so. The entire scheme contemplates a named person or individuals collectively as the arbitral tribunal. Article 12(1) of the Model Law requires that “When a person is approached in connection with possible appointment as an arbitrator, that person shall disclose….” The word “person,” in the context of disclosing personal circumstances likely to raise doubts as to independence or impartiality, strongly suggests a natural person. 
Disclosure of Potential Conflicts Article 12 of the Model Law further requires the arbitrator to “disclose any circumstance likely to give rise to justifiable doubts as to his impartiality or independence.” The capacity to evaluate personal conflict, have impartial relationships, etc., is a hallmark of a human arbitrator. The arbitrator is also subject to challenge (Model Law Art. 13) if “circumstances exist that give rise to justifiable doubts as to his impartiality or independence” or if he lacks the qualifications the parties have agreed on. Signatures, Liability, and Immunities Sections 25 and 25A of the IAA provide that an arbitrator is immune from liability for negligence in discharging the arbitrator’s duties and from civil liability unless acting in bad faith. This strongly implies that the arbitrator is a natural person , because the system of professional negligence or personal immunity for “an arbitrator” does not rationally apply to a non-human machine. Article 31(1) of the Model Law (which has force of law under Section 3 of the IAA) states: “The award shall be made in writing and shall be signed by the arbitrator or arbitrators.” Plainly, an autonomous AI does not meaningfully “sign” a final award in the sense required by law. Procedural Powers That Depend on Human Judgment Section 12(1) of the IAA confers on the arbitral tribunal powers such as ordering security for costs, ordering discovery, giving directions for interrogatories, and so forth. The same section clarifies that the tribunal “may adopt if it thinks fit inquisitorial processes” (Section 12(3)) and “shall decide the dispute in accordance with such rules of law as are chosen by the parties” (Section 12(5), read with Article 28 of the Model Law). These provisions presume that the arbitrator can weigh and interpret evidence, evaluate fairness, impartially manage adversarial arguments, handle procedural complexities, and supply reasons in a final award. While an AI tool might assist a human arbitrator, the notion of autonomous final human-like judgment is not recognized anywhere in the Act. C. Arbitrator’s Duties and the Necessity of Human Capacity Many duties that the IAA imposes on arbitrators are inherently personal and judgment-based: Fair Hearing and Due Process . Articles 18 and 24 of the Model Law stipulate that “The parties shall be treated with equality and each party shall be given a full opportunity of presenting his case,” and that “the arbitral tribunal shall hold hearings” or manage documentary proceedings. These tasks involve high-level procedural judgments, discretionary rulings, and the balancing of fairness concerns—indications that the law envisions a human decision-maker. Ability to Be Challenged, Replaced, or Terminated . Articles 12–15 of the Model Law describe the procedure for challenging an arbitrator for partiality, bias, or inability to serve. This only works meaningfully if the arbitrator is a natural person susceptible to partiality. Signing and Delivering the Award . The final step of the arbitration is thoroughly anchored in personal accountability: the written text is “signed by the arbitrator(s)” and delivered to the parties (Model Law Art. 31(1), (4)). D. Permanent Arbitral Institutions vs. 
Automated “AI Arbitrators” One might note that section 2 of the IAA includes “a permanent arbitral institution” in the definition of “arbitral tribunal.” This does not mean an institution by itself can decide as the “arbitrator.” Rather, it typically refers to an arbitral body that administers arbitration or that may act as the appointing authority. The actual day-to-day adjudication is still performed by an individual or panel of individuals. Indeed, the IAA draws a difference between: The arbitral institution that may oversee or administer proceedings (e.g. SIAC, ICC, LCIA, etc.). The arbitrator(s) who is/are physically deciding the merits of the dispute. Ethical and Due Process Considerations The use of AI in arbitration gives rise to several ethical and due process concerns . Arbitration is founded on principles of fairness, party autonomy, and the right to be heard; introducing AI must not undermine these fundamentals. Key considerations include: Transparency and Disclosure: One ethical question is whether arbitrators or parties should disclose their use of AI tools. Transparency can be crucial if AI outputs influence decisions. There is currently no universal rule on disclosure, and practices vary. The guiding principle is that AI should not become a “black box” in decision-making – parties must know the bases of an award to exercise their rights (like challenging an award or understanding the reasoning). Lack of transparency could also raise enforceability issues under due process grounds. Therefore, best practice leans towards disclosure when AI materially assists in adjudication , ensuring all participants remain on equal footing. Bias and Fairness: AI systems can inadvertently introduce or amplify bias . Machine learning models trained on historical legal data may reflect the biases present in those data – for example, skewed representation of outcomes or language that favors one group. In arbitration, this is problematic if an AI tool gives systematically biased suggestions (say, favoring claimants or respondents based on past award trends) or if it undervalues perspectives from certain jurisdictions or legal traditions present in training data. The ethical duty of arbitrators to be impartial and fair extends to any tools they use. One safeguard is using diverse and representative training data; another is having humans (arbitrators or counsel) critically review AI findings rather than taking them at face value. Due Process and Right to a Fair Hearing: Due process in arbitration means each party must have a fair opportunity to present its case and respond to the other side’s case, and the tribunal must consider all relevant evidence. AI use can challenge this in subtle ways. There is also the concern of explainability : due process is served by reasoned awards, but if a decision were influenced by an opaque algorithm, how can that reasoning be explained? Ensuring a fair hearing might entail allowing parties to object to or question the use of certain AI-derived inputs. Confidentiality and Privacy: Confidentiality is a hallmark of arbitration. Ethical use of AI must guard against compromising the confidentiality of arbitral proceedings and sensitive data. Many popular AI services are cloud-based or hosted by third parties, which could pose risks if confidential case information (witness statements, trade secrets, etc.) is uploaded for analysis. AI Use Cases and Real-World Examples Despite the concerns, AI is already making tangible contributions in arbitration practice. 
A number of use cases and real-world examples demonstrate how AI tools are being applied by arbitral institutions, arbitrators, and parties: Document Review and E-Discovery: Arbitration cases, especially international commercial and investment disputes, often involve massive document productions. AI-driven document review platforms (leveraging natural language processing and machine learning) can significantly speed up this process. Tools like Relativity and Brainspace use AI to sort and search document collections, identifying relevant materials and patterns without exhaustive human review. Language Translation and Interpretation: In multilingual arbitrations, AI has proven valuable for translation. Machine translation systems (from general ones like Google Translate to specialized legal translation engines) can quickly translate submissions, evidence, or awards. Moreover, AI is being used for real-time interpretation during hearings: recent advances have allowed live speech translation and transcription. Legal Research and Case Analytics: AI assists lawyers in researching legal authorities and prior decisions relevant to an arbitration. Platforms like CoCounsel (by Casetext) and others integrate AI to answer legal questions and find citations from vast databases. Products like Lex Machina and Solomonic (originally designed for litigation analytics) are being applied to arbitration data to glean insights on how particular arbitrators tend to rule, how long certain types of cases take, or what damages are typically awarded in certain industries. Arbitrator Selection and Conflict Checking: Choosing the right arbitrator is crucial, and AI is helping make this process more data-driven. Traditional selection relied on reputation and word-of-mouth, but now AI-based arbitrator profiles are available. Additionally, AI is used for conflict of interest checks: law firms use AI databases to quickly check if a prospective arbitrator or expert has any disclosed connections to entities involved, by scanning CVs, prior cases, and public records. This ensures compliance with disclosure obligations and helps avoid later challenges. Case Management and Procedural Efficiency: Arbitral institutions have begun integrating AI to streamline case administration. Furthermore, during proceedings, AI chatbots can answer parties’ routine questions about rules or schedules, easing the administrative burden. Another emerging idea is AI prediction for settlement: parties might use outcome-prediction AI to decide whether to settle early. For instance, an AI might predict an 80% chance of liability and a damages range, prompting a party to offer or accept settlement rather than proceed. This was reportedly used in a few insurance arbitrations to settle successfully before an award, with both sides agreeing to consult an algorithm’s evaluation as one data point in negotiations. These examples show that AI is not just theoretical in arbitration – it is actively being used to augment human work, in ways both big and small. Arbitration Institutions Implementing AI Initiatives ICC (International Chamber of Commerce) The “ICC Overarching Narrative on Artificial Intelligence” outlines the ICC’s perspective on harnessing AI responsibly, stressing fairness, transparency, accountability, and inclusive growth.
It promotes risk-based regulation that fosters innovation without stifling competition, encourages collaboration between businesses and policymakers, and calls for global, harmonized approaches that safeguard privacy, data security, and human rights. The ICC highlights the importance of fostering trust through robust governance, empowering SMEs and emerging markets with accessible AI tools, and ensuring AI’s benefits reach all sectors of society. While it does not specifically govern arbitration, the Narrative’s focus on ethical and transparent AI use offers guiding principles that align with the ICC’s broader commitment to integrity and due process. AAA-ICDR (American Arbitration Association & International Centre for Dispute Resolution) In 2023 the AAA-ICDR published ethical principles for AI in ADR, affirming its commitment to thoughtful integration of AI in dispute resolution. It has since deployed AI-driven tools – for example, an AI-powered transcription service to produce hearing transcripts faster and more affordably, and an “AAAi Panelist Search” generative AI system to help identify suitable arbitrators/mediators from its roster. These initiatives aim to boost efficiency while upholding due process and data security. JAMS (USA) JAMS introduced the ADR industry’s first specialized AI dispute arbitration rules in 2024, providing a framework tailored to cases involving AI systems. Later in 2024 it launched “JAMS Next,” an initiative integrating AI into its workflow. JAMS Next includes AI-assisted transcription (real-time court reporting with AI for instant rough drafts and faster final transcripts) and an AI-driven neutral search on its website to quickly find arbitrators/mediators via natural language queries. SCC (Arbitration Institute of the Stockholm Chamber of Commerce) In October 2024, the SCC released a “Guide to the use of artificial intelligence in cases administered under the SCC rules.” The guide advises arbitration participants (especially tribunals) on responsible AI use. Key points include safeguarding confidentiality, ensuring AI does not diminish decision quality, promoting transparency (tribunals are encouraged to disclose any AI they use), and prohibiting any delegation of decision-making to AI. CIETAC (China International Economic and Trade Arbitration Commission) CIETAC has integrated AI and digital technologies into its case administration. By 2024 it implemented a one-stop online dispute resolution platform with e-filing, e-evidence exchange, an arbitrator hub, and case management via a dedicated app – enabling fully paperless proceedings. CIETAC reports it has achieved intelligent document processing, including full digital scanning and automated identification of arbitration documents, plus a system for detecting related cases. CIETAC’s annual report states it is accelerating “the application of artificial intelligence in arbitration” to enhance efficiency and service quality. Silicon Valley Arbitration & Mediation Center The SVAMC Guidelines on the Use of Artificial Intelligence in Arbitration (1st Edition, 2024) present a comprehensive, principle-based framework designed to promote the responsible and informed use of AI in both domestic and international arbitration proceedings. These guidelines aim to ensure fairness, transparency, and efficiency by providing clear responsibilities for all arbitration participants—including arbitrators, parties, and counsel. 
Key provisions include the obligation to understand AI tools' capabilities and limitations, safeguard confidentiality, disclose AI usage when appropriate, and refrain from delegating decision-making authority to AI. Arbitrators are specifically reminded to maintain independent judgment and uphold due process. The guidelines also address risks like AI hallucinations, bias, and misuse in evidence creation. A model clause is provided for integrating the guidelines into procedural orders, and the document is designed to evolve alongside technological advancements. The information provided is not legal, tax, investment, or accounting advice and should not be used as such. It is for discussion purposes only. Seek guidance from your own legal counsel and advisors on any matters. The views presented are those of the author and not any other individual or organization. Some parts of the text may be automatically generated. The author of this material makes no guarantees or warranties about the accuracy or completeness of the information.
- ESMA Guidelines on the Conditions and Criteria for the Qualification of Crypto-Assets as Financial Instruments
The Guidelines apply to competent authorities, financial market participants, and any individual or entity engaged in crypto-asset activities. They seek to clarify how Article 2(5) of MiCA should be applied when determining whether a crypto-asset qualifies as a financial instrument. They come into force sixty days after their publication in all official EU languages on ESMA’s website, which took place on March 19, 2025. Legislative References, Abbreviations, and Definitions Underpinning these Guidelines are several core pieces of legislation. The most central of these are MiFID II (Directive 2014/65/EU), AIFMD (Directive 2011/61/EU), MiCA (Regulation (EU) 2023/1114), UCITSD (Directive 2009/65/EC), the Money Market Fund Regulation (Regulation (EU) 2017/1131), and the ESMA Regulation (Regulation (EU) 1095/2010). They also make reference to DLTR (Regulation (EU) 2022/858), which governs the pilot regime for distributed ledger technology. Relevant abbreviations include AIF for Alternative Investment Fund, ART for Asset-Referenced Token, CASP for Crypto-Asset Service Provider, DLT for Distributed Ledger Technology, EMT for Electronic Money Token, and NFT for Non-Fungible Token. Classification of Crypto-Assets as Transferable Securities (Guideline 2) To determine whether a crypto-asset qualifies as a transferable security, it is important to verify whether the crypto-asset grants rights equivalent to those attached to shares, bonds, or other forms of securitised debt. The text of MiFID II (Article 4(1)(44)) underpins this assessment. According to the Guidelines, three main criteria must be cumulatively fulfilled. First, a crypto-asset must not be an instrument of payment, so if its sole use is as a medium of exchange, it would not qualify as a transferable security. Second, the crypto-asset must belong to a “class” of securities, meaning that it confers the same rights and obligations on all holders or else belongs to a distinct class within the issuance. Third, it must be “negotiable on the capital market,” which generally means that it can be freely transferred or traded, including on trading platforms equivalent to those covered by MiFID. If all these points are satisfied, then the crypto-asset should be classified as a transferable security and treated under the same rules that govern traditional instruments. Classification as Other Types of Financial Instruments Money-Market Instruments (Guideline 3) A crypto-asset that would be considered a money-market instrument must normally be traded on the money market and should not serve merely as an instrument of payment. The crypto-asset should exhibit features akin to short-term negotiable debt obligations, such as treasury bills or commercial paper, and typically have a short maturity or a fixed redemption date. An example might be a token representing a certificate of credit balance repayable within a short timeframe, though it must be clearly distinguishable from mere payment tools. Units in Collective Investment Undertakings (Guideline 4) A crypto-asset qualifies as a unit or share in a collective investment undertaking if it involves pooling capital from multiple investors, follows a predefined investment policy managed by a third party, and pursues a pooled return for the benefit of those investors. The focus is on whether participants lack day-to-day discretion over how the capital is managed and whether the project is not purely commercial or industrial in purpose. 
An example would be a token representing ownership in a fund-like structure that invests in a portfolio of other digital or traditional assets; if it meets the criteria from existing definitions in AIFMD and UCITSD (excluding pure payment or operational tools), it may be deemed a collective investment undertaking. Derivative Contracts (Guideline 5) The Guidelines recognize two broad scenarios for derivatives: crypto-assets can serve as the underlying asset for a derivative contract, or they can themselves be structured as derivative contracts. In both cases, reference must be made to Annex I Section C (4)-(10) of MiFID II, which identifies features such as a future commitment (forward, option, swap, or similar) and a value derived from an external reference point, such as a commodity price, interest rate, or another crypto-asset. Whether the derivative settles in fiat or crypto is not decisive if the essential characteristics of a derivative are present. This includes perpetual futures or synthetic tokens that track an index or basket of assets, provided they fit into one of MiFID II’s derivative categories. Emission Allowances (Guideline 6) A crypto-asset may be considered an emission allowance if it represents a right to emit a set amount of greenhouse gases recognized under the EU Emissions Trading Scheme, in line with Directive 2003/87/EC. If the token is interchangeable with official allowances and can be used to meet compliance obligations, it should then be regulated under MiFID II as an emission allowance. On the other hand, self-declared carbon credits or voluntary offsets that are not recognized by EU authorities do not fall under this category. Background on the Notion of Crypto-Assets Classification as Crypto-Assets (Guideline 7) The Guidelines reiterate that a crypto-asset, in general, is a digital representation of value or rights, transferred and stored via DLT. If it cannot be transferred beyond the issuer or if it is purely an instrument of payment, it typically falls outside the scope of these financial-instrument rules. Moreover, the fact that a holder anticipates profit from a token’s appreciation is not by itself sufficient to qualify the token as a financial instrument. Crypto-Assets That Are Unique and Non-Fungible (NFTs) (Guideline 8) Non-fungible tokens, which are unique and not interchangeable with each other, are excluded from MiCA provided they genuinely fulfill the requirement of uniqueness. This means having distinctive characteristics or rights that cannot be matched by any other asset in the same issuance. Merely assigning a unique technical identifier to each token is not enough to establish non-fungibility if the tokens effectively grant identical rights and are indistinguishable in economic reality. Fractionalizing an NFT into multiple tradable pieces typically renders those fractional parts non-unique unless each part has distinct attributes of its own. Hybrid Crypto-Assets (Guideline 9) Some tokens combine features typical of multiple crypto-asset categories, such as partial investment features (like profit participation) alongside a utility function (like access to a digital service). If, on closer assessment, any component of the token fits the definition of a financial instrument under MiFID II, the financial instrument classification applies, taking precedence over other labels. 
The Guidelines thus underline that hybrid tokens must be evaluated under a substance-over-form approach, with a focus on their actual rights, obligations, and economic features rather than how the issuer labels them. Conclusion Taken as a whole, the Guidelines demonstrate ESMA’s intention to ensure that all tokens conferring rights equivalent to conventional financial instruments are appropriately supervised under MiFID II. Although labels such as “utility” or “NFT” may be used by issuers, the ultimate question is whether the token’s real-world function and associated rights align with those of a security, a derivative, or another regulated category. By following this approach, authorities and market participants can maintain consistent, technology-neutral regulation in the fast-evolving crypto-asset space. Prokopiev Law Group stays at the forefront of Web3 compliance and regulatory intelligence, offering strategic support across NFT legal solutions, DAO governance, DeFi compliance, token issuance, crypto KYC, and smart contract audits. Leveraging a broad global network of partners, we ensure your project meets evolving regulations worldwide, including in the EU, US, Singapore, Switzerland, and the UK. If you want tailored guidance to protect your interests and remain future-proof, write to us for more information. Reference: Guidelines on the conditions and criteria for the qualification of crypto-assets as financial instruments The information provided is not legal, tax, investment, or accounting advice and should not be used as such. It is for discussion purposes only. Seek guidance from your own legal counsel and advisors on any matters. The views presented are those of the author and not any other individual or organization. Some parts of the text may be automatically generated. The author of this material makes no guarantees or warranties about the accuracy or completeness of the information.
- E-Money and Electronic Money Tokens (EMTs)
How do electronic money (e-money) and electronic money tokens (EMTs) differ, and what are the regulatory frameworks governing them within the European Economic Area (EEA)? Definition and Regulation of E-Money Tokens (EMTs) E-Money Tokens (EMTs): EMTs are a specific type of crypto-asset, their value typically pegged to a single fiat currency such as the Euro or US Dollar. These crypto-assets represent digital value or rights that can be transferred and stored electronically through distributed ledger technology (DLT) or similar systems. DLT operates as a synchronized information repository shared across multiple network nodes. Regulatory Framework: The Markets in Crypto-Assets Regulation EU 2023/1114 (MiCA) outlines stringent conditions for the issuance of EMTs. Key points include: EMTs can only be issued by credit institutions or Electronic Money Institutions (EMIs) regulated by an EEA regulator. MiCA came into effect in June 2023 and will be fully applicable from December 30, 2024. Issuer Obligations Under MiCA: Prudential, Organizational, and Conduct Requirements: Issuers must adhere to specific prudential standards, organizational requirements, and business conduct rules, including: Issuing EMTs at par value. Granting holders redemption rights at par value. Prohibiting the granting of interest on EMTs. White Paper Requirements: Issuers are mandated to publish a white paper with detailed information such as: Issuer details: Name, address, registration date, parent company (if applicable), and potential conflicts of interest. EMT specifics: Name, description, and details of developers. Public offer details: Total number of units offered. Rights and obligations: Redemption rights and complaints handling procedures. Underlying technology. Associated risks and mitigation measures. Significant e-money tokens (EMTs) are subject to higher capital requirements and enhanced oversight by the European Banking Authority (EBA). Significant EMTs are defined as those which can scale up significantly, potentially impacting financial stability, monetary sovereignty, and monetary policy within the EU. The EBA mandates that issuers of significant EMTs hold additional capital reserves. Specifically, significant issuers must maintain capital that is the higher of either €2 million or 3% of the average reserve assets. The EBA monitors these issuers closely, requiring detailed reports on their financial health and risk management practices. Issuers of significant EMTs must also adhere to comprehensive reporting obligations. They need to provide regular updates on their liquidity positions, stress testing results, and compliance with redemption obligations. Definition and Regulation of Electronic Money Electronic Money (E-Money): E-money is defined as electronically or magnetically stored monetary value representing a claim on the issuer. Its characteristics include: Issued upon receipt of funds for the purpose of payment transactions. Accepted by entities other than the issuer. Not excluded by Regulation 5 of the European Communities (Electronic Money) Regulations 2011 (EMI Regulations). Exclusions Under Regulation 5: The EMI Regulations exclude monetary value stored on specific payment instruments with limited use and monetary value used for specific payment transactions by electronic communications service providers. Electronic Money Institutions (EMIs): An EMI is an entity that has been authorized to issue e-money under the EMI Regulations, which is necessary for any e-money issuance within the EEA. 
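For illustration only, the “higher of” own-funds test for issuers of significant EMTs described above can be expressed as a short calculation. The sketch below is a minimal Python example using the €2 million and 3%-of-average-reserve-assets figures mentioned in this note; the function name and sample figure are hypothetical, and the actual requirement depends on the applicable MiCA and EBA provisions.

def significant_emt_own_funds_floor(average_reserve_assets_eur: float) -> float:
    """Illustrative sketch: own funds for a significant EMT issuer taken as the
    higher of EUR 2 million or 3% of average reserve assets (figures as stated above)."""
    FIXED_FLOOR_EUR = 2_000_000
    RESERVE_RATIO = 0.03
    return max(FIXED_FLOOR_EUR, RESERVE_RATIO * average_reserve_assets_eur)

# Hypothetical example: EUR 500 million in average reserve assets
# gives 0.03 * 500_000_000 = EUR 15 million, which exceeds the EUR 2 million floor.
print(significant_emt_own_funds_floor(500_000_000))  # 15000000.0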
Comparative Analysis of E-Money and EMTs Definition: E-Money: Electronically stored monetary value represented by a claim on the issuer. EMTs: Crypto-assets whose value is usually linked to a single fiat currency. Issuers: E-Money: Issued by EMIs upon receipt of funds for making payment transactions. EMTs: Issued by EMIs and/or credit institutions. Legal Regime: E-Money: Governed by the European Communities (Electronic Money) Regulations 2011. EMTs: Governed by MiCA. Status: E-Money: Not necessarily an EMT, but can be depending on how it is transferred and stored. EMTs: All EMTs are also considered e-money. To ensure compliance with the latest regulations and navigate the Web3 legal landscape, please contact Prokopiev Law Group. Our expertise in cryptocurrency law, smart contracts, and regulatory compliance, combined with our extensive global network of partners, guarantees that your business adheres to both local and international standards. The information provided is not legal, tax, investment, or accounting advice and should not be used as such. It is for discussion purposes only. Seek guidance from your own legal counsel and advisors on any matters. The views presented are those of the author and not any other individual or organization. Parts of the text may be automatically generated. The author of this material makes no guarantees or warranties about the accuracy or completeness of the information.
- Cyprus Opens Submissions for MiCA License
The Cyprus Securities and Exchange Commission (CySEC) has initiated a preliminary assessment phase for Crypto-Asset Service Providers (CASPs) applying under the forthcoming EU Markets in Crypto-Assets Regulation (MiCA). Effective today, November 13, 2024, CASPs in Cyprus can submit applications to CySEC in preparation for MiCA’s full implementation on December 30, 2024. This step by CySEC aligns with the MiCA framework, a regulation setting standardized rules for crypto-asset markets across the EU. As part of this preliminary phase, CySEC has made application and notification forms accessible on its website for CASPs and other financial entities authorized under Article 60 of MiCA, including investment firms, UCITS managers, and alternative investment fund managers, to submit notifications or seek authorization under Article 63. Important Points for this Preliminary Phase: During this phase, CySEC will receive applications from both entities currently regulated under Cyprus’ national crypto-asset laws and new market entrants aiming for MiCA compliance. While accepting applications early, CySEC retains the discretion to prioritize applications, particularly for entities already regulated under Cyprus’ existing crypto-asset rules. Submissions during this preliminary phase will be officially considered upon completion of formalities, including fee payment and verification of information accuracy by December 30, 2024. CySEC will make final decisions on granting or refusing authorization, as well as on the completeness of submitted notifications, after MiCA officially applies to CASPs on December 30, 2024. Reminder of Transitional Measures and Applicability Dates CySEC also reminds interested parties of a recent announcement regarding MiCA's phased applicability. MiCA became effective for issuers of Asset-Referenced Tokens (ARTs) and E-Money Tokens (EMTs) on June 30, 2024, and will extend to CASPs on December 30, 2024. Under MiCA’s transitional measures, CASPs registered under National Rules before December 30, 2024, may continue to provide their services until July 1, 2026, or until CySEC grants or refuses authorization per Article 63, whichever is sooner. Additionally, as of October 17, 2024, CySEC ceased accepting any CASP applications for registration under National Rules in view of MiCA becoming applicable to CASPs on 30 December 2024. In short, CySEC’s early application phase for MiCA is helping crypto service providers in Cyprus get ready for the new EU rules, making the transition easier and clearer for everyone involved. The information provided is not legal, tax, investment, or accounting advice and should not be used as such. It is for discussion purposes only. Seek guidance from your own legal counsel and advisors on any matters. The views presented are those of the author and not any other individual or organization. Some parts of the text may be automatically generated. The author of this material makes no guarantees or warranties about the accuracy or completeness of the information.
- MiCA comes fully into force: MiCA register was published
The EU's Markets in Crypto-Assets Regulation (MiCA) came into full effect on 30 December 2024, following its initial entry into force on 29 June 2023. MiCA establishes the EU as the first major jurisdiction to regulate crypto-assets comprehensively. It creates a harmonized framework for crypto-asset issuance and services, covering various types of crypto-assets such as asset-referenced tokens (ARTs), electronic-money tokens (EMTs) and other crypto-assets (this blanket category covers utility tokens and other crypto-assets that don't qualify as ARTs or EMTs). This regulation also introduces a pan-European licensing and supervisory system for issuers, platforms, and crypto-asset service providers (CASPs). Notably, Titles III and IV, dealing with ARTs and EMTs, have applied since 30 June 2024. As of 30 December 2024, the European Securities and Markets Authority (ESMA) is empowered under Articles 109 and 110 of the MiCA Regulation to maintain and publish a central register of crypto-asset white papers, authorized crypto-asset service providers (CASPs), and non-compliant entities. This register will be sourced from the relevant National Competent Authorities (NCAs) and the European Banking Authority (EBA). To meet the legal deadline, ESMA has created an interim MiCA register, which will be updated and republished regularly. This interim register, accessible on the MiCA webpage and the Databases and Registers page, will be available as a collection of CSV files until mid-2026, when it will be formally integrated into ESMA’s IT systems. The interim register includes five CSV files, which cover: · White papers for crypto-assets other than asset-referenced tokens (ARTs) and e-money tokens (EMTs) (Title II) · Issuers of asset-referenced tokens (Title III) · Issuers of e-money tokens (Title IV) · Authorized crypto-asset service providers (Title V) · Non-compliant entities providing crypto-asset services Although four of the five files in the interim MiCA register are currently empty, the file related to issuers of EMTs contains crucial information. As of January 06, 2025, this file lists companies that have obtained authorization to issue e-money tokens under MiCA. Companies such as Membrane Finance Oy, Circle Internet Financial Europe SAS, Société Générale – Forge, Banking Circle S.A., Quantoz Payments B.V., and Fiat Republic Netherlands B.V. are included in this file, showcasing their official approval status and providing access to their relevant white papers, authorization dates, and other key details. ESMA will update the register on a monthly basis, and while information will be reported by competent authorities on a rolling basis, it will not appear in the register immediately. Records in the interim MiCA register will reflect the information provided by the relevant authorities. If an authorization is withdrawn by a competent authority, the record will remain in the register, noting the date when the withdrawal took effect. With the establishment of the interim MiCA register and its regular updates, the European Union continues to lead the way in creating a transparent and compliant digital finance environment. We will continue to monitor and report on further updates to the MiCA framework and its impact on the crypto industry. The information provided is not legal, tax, investment, or accounting advice and should not be used as such. It is for discussion purposes only. Seek guidance from your own legal counsel and advisors on any matters. 
The views presented are those of the author and not any other individual or organization. Some parts of the text may be automatically generated. The author of this material makes no guarantees or warranties about the accuracy or completeness of the information.
- The Commission published Guidelines on AI system definition
The European Commission (the ‘Commission’) issued the Guidelines on the definition of an artificial intelligence system under Regulation (EU) 2024/1689 (“AI Act”). The AI Act entered into force on 1 August 2024; it lays down harmonised rules for the development, placing on the market, putting into service, and use of artificial intelligence (‘AI’) in the Union. The Guidelines focus on clarifying Article 3(1) AI Act, which defines an “AI system” and therefore determines the scope of the AI Act. They are meant to help providers and other relevant persons (including market and institutional stakeholders) decide whether a specific system meets the definition of an AI system. They emphasize that the definition took effect on 2 February 2025, alongside relevant provisions (Chapters I and II, including prohibited AI practices under Article 5). The Guidelines are not legally binding; the ultimate interpretation belongs to the Court of Justice of the European Union. Key Elements of the AI System Definition AI System Article 3(1) of the AI Act defines an AI system as follows: ‘“AI system” means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments;’ According to the Guidelines, this definition comprises seven main elements: A machine-based system; Designed to operate with varying levels of autonomy; That may exhibit adaptiveness after deployment; For explicit or implicit objectives; Infers, from the input it receives, how to generate outputs; Such outputs include predictions, content, recommendations, or decisions; Which can influence physical or virtual environments. These elements should be interpreted with an understanding that AI systems exhibit machine-driven functionality, some autonomy, and possibly self-learning capabilities, but always within a context of producing outputs that “can influence” their surroundings. Machine-Based System The term ‘machine-based’ refers to the fact that AI systems are developed with and run on machines… The hardware components refer to the physical elements of the machine… The software components encompass computer code, instructions, programs, operating systems, and applications… The Guidelines clarify that “All AI systems are machine-based…” to emphasize computational processes (model training, data processing, large-scale automated decisions). This covers a wide variety of computational systems, including advanced quantum ones. Autonomy The second element of the definition refers to the system being ‘designed to operate with varying levels of autonomy’. Recital 12 of the AI Act clarifies that the terms ‘varying levels of autonomy’ mean that AI systems are designed to operate with ‘some degree of independence of actions from human involvement and of capabilities to operate without human intervention’. Full manual human involvement excludes a system from being considered AI. A system needing manual inputs to generate an output can still have “some degree of independence of action,” making it an AI system. Autonomy and risk considerations become particularly important in high-risk use contexts (as listed in Annex I and Annex III of the AI Act). Adaptiveness “(22) The third element… is that the system ‘may exhibit adaptiveness after deployment’. 
… ‘adaptiveness’ refers to self-learning capabilities, allowing the behaviour of the system to change while in use.” The word “may” means adaptiveness is not mandatory for a system to be classified as AI. Even if a system does not automatically adapt post-deployment, it may still qualify if it meets the other criteria. AI System Objectives “(24) The fourth element… AI systems are designed to operate according to one or more objectives. The objectives… may be different from the intended purpose of the AI system in a specific context.” Objectives are internal to the system, such as maximizing accuracy. The intended purpose (Article 3(12) AI Act) is external, reflecting the practical use context. Inferencing How to Generate Outputs “(26) The fifth element of an AI system is that it must be able to infer, from the input it receives, how to generate outputs. …This capability to infer is therefore a key, indispensable condition that distinguishes AI systems from other types of systems.” The capacity to derive models, algorithms, and outputs from data sets AI apart from simpler software that “automatically execute[s] operations” via predefined rules alone. AI Techniques that Enable Inference “(30) Focusing specifically on the building phase… ‘machine learning approaches that learn from data how to achieve certain objectives, and logic- and knowledge-based approaches that infer from encoded knowledge or symbolic representation of the task to be solved.’ Those techniques should be understood as ‘AI techniques’.” Machine Learning approaches: Supervised (e.g., spam detection) Unsupervised (e.g., drug discovery) Self-supervised (e.g., predicting missing pixels, language models) Reinforcement (e.g., autonomous vehicles, robotics) Deep Learning (e.g., large neural networks) Logic- and Knowledge-Based approaches: Use encoded knowledge , symbolic rules, and reasoning engines. The Guidelines cite examples such as classical natural language processing models based on grammatical logic, expert systems for medical diagnosis, etc. Systems Outside the Scope “(40) Recital 12 also explains that the AI system definition should distinguish AI systems from ‘simpler traditional software systems or programming approaches and should not cover systems that are based on the rules defined solely by natural persons to automatically execute operations.’” Systems aimed at improving mathematical optimization (e.g., accelerating well-established linear regression methods, parameter tuning in satellite telecommunication systems) remain outside if they do not “transcend ‘basic data processing.’” Basic data processing (sorting, filtering, static descriptive analysis, or visualizations) with no learning or reasoning also does not qualify. “Systems based on classical heuristics” (experience-based problem-solving that is not data-driven learning) are excluded. Simple prediction systems, employing trivial estimations or benchmarks (e.g., always predict the mean) do not meet the threshold for “AI system” performance. Outputs That Can Influence Physical or Virtual Environments “(52) The sixth element… the system infers ‘how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments’. 
… The capacity to generate outputs… is fundamental to what AI systems do and what distinguishes those systems from other forms of software.” The Guidelines detail four output categories: Predictions Content Recommendations Decisions Each type represents an increasing degree of automated functionality. Systems that produce these outputs from learned or encoded approaches generally fit the AI criteria. Interaction with the Environment “(60) The seventh element of the definition of an AI system is that system’s outputs ‘can influence physical or virtual environments’. That element should be understood to emphasise the fact that AI systems are not passive, but actively impact the environments in which they are deployed.” Influence may be physical (like controlling a robot arm) or digital (e.g., altering a user interface or data flows). Concluding Remarks “(61) The definition of an AI system encompasses a wide spectrum of systems. The determination of whether a software system is an AI system should be based on the specific architecture and functionality of a given system…” “(63) Only certain AI systems are subject to regulatory obligations and oversight under the AI Act. …The vast majority of systems, even if they qualify as AI systems… will not be subject to any regulatory requirements under the AI Act.” This underscores the risk-based approach of the AI Act: most AI systems face no or minimal obligations, while certain systems are subject to outright prohibitions (Article 5), high-risk conformity requirements (Article 6), or transparency rules (Article 50). The Guidelines highlight that general-purpose AI models also fall under the AI Act (Chapter V), but detailed distinctions between them and “AI systems” exceed the scope of these Guidelines. Overall, these Guidelines precisely delineate what qualifies as an AI system. They serve as a structured reference for developers, providers, and other stakeholders to assess whether a given solution falls under Regulation (EU) 2024/1689. Link: Guidelines on the definition of an artificial intelligence system established by Regulation (EU) 2024/1689 (AI Act) If you need further guidance on AI compliance, DeFi compliance, NFT compliance, DAO governance, Metaverse regulations, MiCA regulation, stablecoin regulation, or any other web3 legal matters, write to us. Prokopiev Law Group has a broad global network of partners, ensuring your compliance worldwide, including in the EU, US, Singapore, Switzerland, Hong Kong, and Dubai. The information provided is not legal, tax, investment, or accounting advice and should not be used as such. It is for discussion purposes only. Seek guidance from your own legal counsel and advisors on any matters. The views presented are those of the author and not any other individual or organization. Some parts of the text may be automatically generated. The author of this material makes no guarantees or warranties about the accuracy or completeness of the information.
- Guidelines on prohibited artificial intelligence (AI) practices
The Commission Guidelines on prohibited artificial intelligence practices established by Regulation (EU) 2024/1689 (AI Act) (“the Guidelines”) were officially published on 04 February 2025. They provide an interpretation of the practices banned by Article 5 AI Act. These Guidelines are non-binding but form a crucial reference for providers, deployers, and authorities tasked with implementing the AI Act’s rules. SCOPE, RATIONALE, AND ENFORCEMENT Scope of the Guidelines “(1) Regulation (EU) 2024/1689 of the European Parliament and the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) (‘the AI Act’) entered into force on 1 August 2024. The AI Act lays down harmonised rules for the placing on the market, putting into service, and use of artificial intelligence (‘AI’) in the Union.” (Section (1) of the Guidelines) "(5) These Guidelines are non-binding. Any authoritative interpretation of the AI Act may ultimately only be given by the Court of Justice of the European Union (‘CJEU’).” (Section (5) of the Guidelines) According to section (1) of the Guidelines, the AI Act follows a risk-based approach, classifying AI systems into four risk categories: unacceptable risk, high risk, transparency risk, and minimal/no risk. Article 5 AI Act deals exclusively with “AI systems posing unacceptable risks to fundamental rights and Union values” (section (2) of the Guidelines). Additionally, the Guidelines clarify in section (6) that they are “regularly reviewed in light of the experience gained from the practical implementation of Article 5 AI Act and technological and market developments.” Their material scope and addressees are set out in sections (11)–(14) and (15)–(20) respectively. Rationale for Prohibiting Certain AI Practices "(8) Article 5 AI Act prohibits the placing on the EU market, putting into service, or use of certain AI systems for manipulative, exploitative, social control or surveillance practices, which by their inherent nature violate fundamental rights and Union values."(Section (8) of the Guidelines) In section (9), the Guidelines enumerate eight distinct prohibitions in Article 5(1), grounded on the AI Act’s principle that certain technologies or uses “contradict the Union values of respect for human dignity, freedom, equality, democracy, and the rule of law, as well as fundamental rights enshrined in the Charter” (section (8) of the Guidelines). As explained in section (28) of the Guidelines, the rationale is that unlawful AI-based surveillance, manipulative or exploitative systems, and unfair scoring or profiling schemes “are particularly harmful and abusive and should be prohibited because they contradict the Union values of respect for human dignity, freedom, equality, democracy, and the rule of law.” These prohibitions also respond to rapid AI developments that can facilitate large-scale data processing, possibly leading to heightened surveillance, discrimination, and erosion of autonomy. 
Section (4) of the Guidelines states that the prohibitions “should serve as practical guidance to assist competent authorities under the AI Act in their enforcement activities, as well as providers and deployers of AI systems in ensuring compliance.” Enforcement of Article 5 AI Act "(53) Market surveillance authorities designated by the Member States as well as the European Data Protection Supervisor (as the market surveillance authority for the EU institutions, agencies and bodies) are responsible for the enforcement of the rules in the AI Act for AI systems, including the prohibitions." (Section (53) of the Guidelines) "(54) …Those authorities can take enforcement actions in relation to the prohibitions on their own initiative or following a complaint, which every affected person or any other natural or legal person having grounds to consider such violations has the right to lodge. …Member States must designate their competent market surveillance authorities by 2 August 2025." (Section (54) of the Guidelines) As explained in section (53) of the Guidelines, enforcement occurs under the structure laid down by Regulation (EU) 2019/1020, adapted for AI. National market surveillance authorities will supervise compliance and “can take enforcement actions … or following a complaint” (section (54) of the Guidelines). Where cross-border implications arise, “the authority of the Member State concerned must inform the Commission and the market surveillance authorities of other Member States,” triggering a possible Union safeguard procedure (section (54)–(55) of the Guidelines). Penalties for Violations "(55) Since violations of the prohibitions in Article 5 AI Act interfere the most with the freedoms of others and give rise to the highest fines, their scope should be interpreted narrowly." (Section (57) of the Guidelines, referencing discussion on fines) "(55) …Providers and deployers engaging in prohibited AI practices may be fined up to EUR 35 000 000 or, if the offender is an undertaking, up to 7 % of its total worldwide annual turnover for the preceding financial year, whichever is higher." (Section (55) of the Guidelines) Section (55) of the Guidelines notes that Article 99 AI Act sets out “a tiered approach … with the highest fines” reserved for breaches of Article 5. This penalty regime underscores the crucial nature of compliance with the prohibitions (the “whichever is higher” mechanics are illustrated in the short sketch below). Furthermore, according to section (56) of the Guidelines, the “principle of ne bis in idem should be respected” if the same prohibited conduct infringes multiple AI Act provisions. Applicability Timeline and Legal Effect "(430) According to Article 113 AI Act, Article 5 AI Act applies as from 2 February 2025. The prohibitions in that provision will apply in principle to all AI systems regardless of whether they were placed on the market or put into service before or after that date."(Section (430) of the Guidelines) As stated in section (431) of the Guidelines, enforcement and penalty provisions become fully applicable on 2 August 2025, six months after the prohibitions started to apply. Section (432) of the Guidelines clarifies that even though certain aspects of the enforcement framework only take effect on 2 August 2025, “the prohibitions themselves have direct effect” as from 2 February 2025. Affected persons may seek relief in national courts against prohibited AI practices even in the interim period. 
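As a purely illustrative aside, the “whichever is higher” fine ceiling quoted above can be sketched in a few lines of Python. The function name and sample turnover below are hypothetical, and the figure computed is only the statutory maximum; any actual fine would be set by the competent authority on the facts of the case.

def article_5_fine_ceiling(worldwide_annual_turnover_eur: float) -> float:
    """Illustrative sketch: maximum fine for a prohibited AI practice by an undertaking,
    the higher of EUR 35 million or 7% of total worldwide annual turnover
    (thresholds as quoted from section (55) of the Guidelines)."""
    FIXED_CEILING_EUR = 35_000_000
    TURNOVER_RATIO = 0.07
    return max(FIXED_CEILING_EUR, TURNOVER_RATIO * worldwide_annual_turnover_eur)

# Hypothetical example: an undertaking with EUR 1 billion in turnover faces a ceiling of
# 0.07 * 1_000_000_000 = EUR 70 million, which exceeds the EUR 35 million floor.
print(article_5_fine_ceiling(1_000_000_000))  # 70000000.0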
Cooperation with Other Union Legislation According to sections (42)–(52) of the Guidelines, the prohibitions interact with other EU measures, such as consumer law, data protection, and non-discrimination instruments. In particular, data protection authorities may issue guidance or take enforcement actions for personal data infringements “alongside or in addition to” AI Act breaches. In short, enforcement is a multi-level process: Providers must ensure compliance prior to placing AI systems on the market. Deployers must ensure compliance during use, refraining from prohibited practices. Market surveillance authorities coordinate oversight, able to impose fines and other measures against infringements. PROHIBITED AI PRACTICES (ARTICLE 5 AI ACT) Article 5 AI Act: General Prohibition and Rationale “(8) Article 5 AI Act prohibits the placing on the EU market, putting into service, or use of certain AI systems for manipulative, exploitative, social control or surveillance practices, which by their inherent nature violate fundamental rights and Union values.” (Section (8) of the Guidelines) “Recital 28 AI Act clarifies that such practices are particularly harmful and abusive and should be prohibited because they contradict the Union values of respect for human dignity, freedom, equality, democracy, and the rule of law, as well as fundamental rights enshrined in the Charter of Fundamental Rights of the European Union.” (Section (8) of the Guidelines) According to section (8) of the Guidelines, the legislator identified certain “unacceptable risks” posed by specific AI uses — practices deemed inherently incompatible with fundamental rights, including the rights to privacy, autonomy, non-discrimination, and human dignity. Prohibitions Listed in Article 5 AI Act According to section (9) of the Guidelines, the AI Act enumerates eight prohibitions in Article 5(1). The Guidelines emphasize that these prohibitions “apply to the placing on the market, putting into service, or use of certain AI systems for manipulative, exploitative, social control or surveillance practices, which by their inherent nature violate fundamental rights and Union values” (section (8)). Unless a specific exception applies, these AI systems cannot be provided or deployed in the Union. 
Below is the full text of each prohibition as presented in the Guidelines: Article 5(1)(a) – Harmful manipulation, and deception “AI systems that deploy subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques, with the objective or with the effect of materially distorting the behaviour of a person or a group of persons by appreciably impairing their ability to make an informed decision, thereby causing them to take a decision that they would not have otherwise taken in a manner that causes or is reasonably likely to cause that person, another person or group of persons significant harm.” Article 5(1)(b) – Harmful exploitation of vulnerabilities “AI systems that exploit vulnerabilities due to age, disability or a specific social or economic situation, with the objective or with the effect of materially distorting the behaviour of that person or a person belonging to that group in a manner that causes or is reasonably likely to cause that person or another person significant harm.” Article 5(1)(c) – Social scoring “AI systems for the evaluation or classification of natural persons or groups of persons over a certain period of time based on their social behaviour or known, inferred or predicted personal or personality characteristics, with the social score leading to detrimental or unfavourable treatment in unrelated social contexts and/or unjustified or disproportionate treatment to the gravity of the social behaviour, regardless of whether provided or used by public or private persons.” Article 5(1)(d) – Individual criminal offence risk assessment and prediction “AI systems for making risk assessments of natural persons in order to assess or predict the risk of a natural person committing a criminal offence, based solely on the profiling of a natural person or on assessing their personality traits and characteristics; except to support the human assessment of the involvement of a person in a criminal activity, which is already based on objective and verifiable facts directly linked to that criminal activity.” Article 5(1)(e) – Untargeted scraping of facial images “AI systems that create or expand facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.” Article 5(1)(f) – Emotion recognition “AI systems that infer emotions of a natural person in the areas of workplace and education institutions, except where the use is intended to be put in place for medical or safety reasons.” Article 5(1)(g) – Biometric categorisation “AI systems that categorise individually natural persons based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex-life or sexual orientation; except any labelling or filtering of lawfully acquired biometric datasets, including in the area of law enforcement.” Article 5(1)(h) – Real-time Remote Biometric Identification (RBI) Systems for Law Enforcement Purposes “The use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement, unless and in so far as such use is strictly necessary for one of the following objectives: (i) the targeted search for specific victims of abduction, trafficking in human beings or sexual exploitation of human beings, as well as the search for missing persons; (ii) the prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons or a genuine and present or 
genuine and foreseeable threat of a terrorist attack; (iii) the localisation or identification of a person suspected of having committed a criminal offence, for the purpose of conducting a criminal investigation or prosecution or executing a criminal penalty for offences referred to in Annex II and punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least four years. …” These eight prohibitions, as clarified in section (9) of the Guidelines, constitute “unacceptable risks” under Article 5 AI Act. Providers and deployers must refrain from making available or using AI systems that meet any of these descriptions, unless the AI Act itself provides for a narrowly interpreted exception (e.g., certain uses of real-time RBI for law enforcement). Legal Basis and Material Scope “(10) The AI Act is supported by two legal bases: Article 114 of the Treaty on the Functioning of the European Union (‘TFEU’) (the internal market legal basis) and Article 16 TFEU (the data protection legal basis).” (Section (10) of the Guidelines) “(11) The practices prohibited by Article 5 AI Act relate to the placing on the market, the putting into service, or the use of specific AI systems.” (Section (11) of the Guidelines) According to section (10) of the Guidelines, some prohibitions (notably the ban on real-time remote biometric identification for law enforcement, biometric categorisation, and individual risk assessments in law enforcement) derive from Article 16 TFEU, ensuring data protection. Others rely on Article 114 TFEU for the internal market. Sections (12) through (14) clarify: “Placing on the market” means the first supply of an AI system in the EU (section (12)). “Putting into service” refers to the first use in the EU for its intended purpose (section (13)). “Use” is interpreted “in a broad manner” (section (14)) to include any operation or deployment of an AI system after it is placed on the market/put into service. Personal Scope: Responsible Actors “(15) The AI Act distinguishes between different categories of operators in relation to AI systems: providers, deployers, importers, distributors, and product manufacturers.” (Section (15) of the Guidelines) “(16) According to Article 3(3) AI Act, providers are natural or legal persons … that develop AI systems or have them developed and place them on the Union market, or put them into service under their own name or trademark.” (Section (16) of the Guidelines) “(17) Deployers are natural or legal persons, public authorities, agencies or other bodies using AI systems under their authority, unless the use is for a personal non-professional activity.” (Section (17) of the Guidelines) Sections (15)–(20) of the Guidelines explain how these roles “may overlap”, but each actor faces specific obligations for compliance with the prohibitions. In particular: Providers must ensure their AI system is not prohibited upon placing it on the market or putting it into service. Deployers must avoid usage scenarios that fall within a prohibited practice, even if the provider excludes it in the terms of use (section (14) of the Guidelines). 
Exclusion from the Scope of the AI Act “(21) Article 2 AI Act provides for a number of general exclusions from scope which are relevant for a complete understanding of the practical application of the prohibitions listed in Article 5 AI Act.” (Section (21) of the Guidelines) (22) to (36) of the Guidelines specify exclusions such as national security, military or defence uses (Article 2(3)), judicial or law enforcement cooperation with third countries under certain agreements (Article 2(4)), R&D activities not placed on the market (Article 2(8)), and personal non-professional activities (Article 2(10)). Interplay with Other Provisions and Union Law “(37) The AI practices prohibited by Article 5 AI Act should be considered in relation to the AI systems classified as high-risk … In some cases, a high-risk AI may also qualify as a prohibited practice … if all conditions under one or more of the prohibitions … are fulfilled.” (Section (37) of the Guidelines) “(42) The AI Act is a regulation that applies horizontally across all sectors without prejudice to other Union legislation, in particular on the protection of fundamental rights, consumer protection, employment, the protection of workers, and product safety.” (Section (42) of the Guidelines) Sections (37)–(52) clarify: Some systems not meeting the threshold for prohibition might still be “high-risk” under Article 6 AI Act or subject to other EU laws (section (37)). The AI Act does not override data protection, consumer law, or non-discrimination statutes; these still apply (sections (42)–(45)). The highest fines apply to breaches of Article 5 (section (55) of the Guidelines). Enforcement Timeline “(430) According to Article 113 AI Act, Article 5 AI Act applies as from 2 February 2025. … all providers and deployers engaging in prohibited AI practices may be subject to penalties, including fines up to 7 % of annual worldwide turnover for undertakings.”(Sections (430) and (55) of the Guidelines) Even though full market surveillance mechanisms launch on 2 August 2025, the prohibitions (Article 5) are in force as of 2 February 2025. Affected individuals and authorities can invoke Article 5 bans immediately after that date (sections (430)–(432) of the Guidelines). HARMFUL MANIPULATION AND EXPLOITATION (ARTICLE 5(1)(A) AND (B)) Rationale and Objectives “(58) The first two prohibitions in Article 5(1)(a) and (b) AI Act aim to safeguard individuals and vulnerable persons from the significantly harmful effects of AI-enabled manipulation and exploitation. Those prohibitions target AI systems that deploy subliminal, purposefully manipulative or deceptive techniques that are significantly harmful and materially influence the behaviour of natural persons or group(s) of persons (Article 5(1)(a) AI Act) or exploit vulnerabilities due to age, disability, or a specific socio-economic situation (Article 5(1)(b) AI Act).”(Section (58) of the Guidelines) According to section (59) of the Guidelines: "(59) The underlying rationale of these prohibitions is to protect individual autonomy and well-being from manipulative, deceptive, and exploitative AI practices that can subvert and impair an individual’s autonomy, decision-making, and free choices. … The prohibitions aim to protect the right to human dignity (Article 1 of the Charter), which also constitutes the basis of all fundamental rights and includes individual autonomy as an essential aspect." 
In section (59), the Guidelines also stress that Articles 5(1)(a) and (b) AI Act “fully align with the broader objectives of the AI Act to promote trustworthy and human-centric AI systems that are safe, transparent, fair and serve humanity and align with human agency and EU values.” Article 5(1)(a) AI Act – Harmful Manipulation, Deception “(60) Several cumulative conditions must be fulfilled for the prohibition in Article 5(1)(a) AI Act to apply: (i) The practice must constitute the ‘placing on the market’, ‘the putting into service’, or the ‘use’ of an AI system. (ii) The AI system must deploy subliminal (beyond a person's consciousness), purposefully manipulative or deceptive techniques. (iii) The techniques deployed by the AI system should have the objective or the effect of materially distorting the behaviour of a person or a group of persons. … (iv) The distorted behaviour must cause or be reasonably likely to cause significant harm to that person, another person, or a group of persons.” (Section (60) of the Guidelines) According to section (63), the prohibition covers three broad technique types: Subliminal techniques “beyond a person’s consciousness.” Purposefully manipulative techniques “designed or objectively aim to influence … in a manner that undermines individual autonomy.” Deceptive techniques “involving presenting false or misleading information with the objective or the effect of deceiving individuals.” As section (70) of the Guidelines notes, the deception arises from “presenting false or misleading information in ways that aim to or have the effect of deceiving individuals and influencing their behaviour in a manner that undermines their autonomy, decision-making and free choices.” Significant Harm and Material Distortion “(77) The concept of ‘material distortion of the behaviour’ of a person or a group of persons is central to Article 5(1)(a) AI Act. 
It involves the deployment of subliminal, purposefully manipulative or deceptive techniques that are capable of influencing people’s behaviour in a manner that appreciably impairs their ability to make an informed decision … leading them to behave in a way that they would otherwise not have.”(Section (77) of the Guidelines) “(86) The AI Act addresses various types of harmful effects associated with manipulative and deceptive AI systems … The main types of harms relevant for Article 5(1)(a) AI Act include physical, psychological, financial, and economic harms.”(Section (86) of the Guidelines) Section (85) summarizes that for the prohibition to apply, the harm must be “significant”, and “there must be a plausible/reasonably likely causal link between the manipulative or deceptive technique … and the potential significant harm.” Article 5(1)(b) AI Act – Exploitation of Vulnerabilities “(98) Article 5(1)(b) AI Act prohibits the placing on the market, the putting into service, or the use of an AI system that exploits any of the vulnerabilities of a natural person or a specific group of persons due to their age, disability or a specific social or economic situation, with the objective, or the effect, of materially distorting the behaviour of that person or a person belonging to that group in a manner that causes or is reasonably likely to cause that person or another person significant harm.”(Section (98) of the Guidelines) As explained in section (101): “(101) To fall within the scope of the prohibition in Article 5(1)(b) AI Act, the AI system must exploit vulnerabilities inherent to certain individuals or groups of persons due to their age, disability or socio-economic situations, making them particularly susceptible to manipulative and exploitative practices.” Sections (104)–(112) detail the specific vulnerabilities tied to: Age (children, older persons), Disability (cognitive, physical, mental impairments), Specific socio-economic situation (e.g., extreme poverty, socio-economically disadvantaged, migrants). Section (114) clarifies that the harm must again be “significant,” and (115) states: "(115) For vulnerable groups — children, older persons, persons with disabilities, and socio-economically disadvantaged populations — these harms may be particularly severe and multifaceted due to their heightened susceptibility to exploitation." Interplay Between Article 5(1)(a) and (b) “(122) The interplay between the prohibitions in Article 5(1)(a) and (b) AI Act requires the delineation of the specific contexts that each provision covers to ensure that they are applied in a complementary manner.”(Section (122) of the Guidelines) Section (123) of the Guidelines describes that: Article 5(1)(a) “focuses on the techniques” (subliminal, manipulative, deceptive). Article 5(1)(b) “focuses on the exploitation of specific vulnerable individuals or groups,” requiring vulnerabilities related to age, disability, or socio-economic situations. The Guidelines highlight that “manipulative or deceptive techniques that specifically target the vulnerabilities of persons due to age, disability, or socio-economic situation” may overlap but fall more directly under Article 5(1)(b) if aimed at those recognized vulnerable groups (section (125)). 
Out of Scope “(127) Distinguishing manipulation from persuasion is crucial to delineate the scope of the prohibition in Article 5(1)(a) AI Act, which does not apply to lawful persuasion practices.”(Section (127) of the Guidelines) Sections (128)–(133) detail “lawful persuasion,” standard advertising practices, and “medical treatment under certain conditions” that do not amount to harmful manipulation or exploitation. For Article 5(1)(b), section (134) clarifies that “exploitative AI applications that are not reasonably likely to cause significant harms are outside the scope, even if they use manipulative or exploitative elements.” SOCIAL SCORING (ARTICLE 5(1)(c)) Rationale and Objectives “(146) While AI-enabled scoring can bring benefits to steer good behaviour, improve safety, efficiency or quality of services, there are certain ‘social scoring’ practices that treat or harm people unfairly and amount to social control and surveillance. The prohibition in Article 5(1)(c) AI Act targets such unacceptable AI-enabled ‘social scoring’ practices that assess or classify individuals or groups based on their social behaviour or personal characteristics and lead to detrimental or unfavourable treatment, in particular where the data comes from multiple unrelated social contexts or the treatment is disproportionate to the gravity of the social behaviour. The ‘social scoring’ prohibition has a broad scope of application in both public and private contexts and is not limited to a specific sector or field.” (Section (146) of the Guidelines) According to section (147) of the Guidelines, social scoring systems often “lead to discriminatory and unfair outcomes for certain individuals and groups, including their exclusion from society, as well as social control and surveillance practices that are incompatible with Union values.” Main Concepts and Components of the ‘Social Scoring’ Prohibition “(149) Several cumulative conditions must be fulfilled for the prohibition in Article 5(1)(c) AI Act to apply: (i) The practice must constitute the ‘placing on the market’, the ‘putting into service’ or the ‘use’ of an AI system; (ii) The AI system must be intended or used for the evaluation or classification of natural persons or groups of persons over a certain period of time based on their social behaviour or known, inferred or predicted personal or personality characteristics; (iii) The social score created with the assistance of the AI system must lead or be capable of leading to the detrimental or unfavourable treatment of persons or groups in one or more of the following scenarios: (a) in social contexts unrelated to those in which the data was originally generated or collected; and/or (b) treatment that is unjustified or disproportionate to the gravity of the social behaviour.” (Section (149) of the Guidelines) ‘Social Scoring’: Evaluation or Classification Over Time “(151) The second condition for the prohibition in Article 5(1)(c) AI Act to apply is that the AI system is intended or used for the evaluation or classification of natural persons or groups of persons and assigns them scores based on their social behaviour or their known, inferred or predicted personal and personality characteristics. 
The score produced by the system may take various forms, such as a mathematical number or ranking.” (Sections (151)–(152) of the Guidelines) Furthermore, section (155) clarifies that this must happen “over a certain period of time.” If data or behaviour from multiple contexts are aggregated without a clear, valid link to the legitimate purpose of the scoring, “the AI system is likely to fall under the prohibition.” Detrimental or Unfavourable Treatment in Unrelated Social Contexts or Disproportionate Treatment “(160) For the prohibition in Article 5(1)(c) AI Act to apply, the social score created by or with the assistance of an AI system must lead to a detrimental or unfavourable treatment for the evaluated person or group of persons in one or more of the following scenarios: (a) in social contexts unrelated to those in which the data was originally generated or collected; and/or (b) unjustified or disproportionate to the gravity of the social behaviour.” (Section (160) of the Guidelines) Section (164) further explains “detrimental or unfavourable treatment” can mean denial of services, blacklisting, withdrawal of benefits, or other negative outcomes. It also covers cases where the social score “leads to broader exclusion or indirect harm.” Out of Scope “(173) The prohibition in Article 5(1)(c) AI Act only applies to the scoring of natural persons or groups of persons, thus excluding in principle legal entities where the evaluation is not based on personal or personality characteristics or social behaviour of individuals. … If the AI system evaluates or classifies a group of natural persons with direct impact on those persons, the practice may still fall within Article 5(1)(c) if all other conditions are fulfilled.” (Section (173) of the Guidelines) Moreover, sections (175)–(176) clarify that lawful scoring practices for “specific legitimate evaluation purposes” , such as credit-scoring or fraud prevention, generally do not fall under the prohibition when done in compliance with Union and national law “ensuring that the detrimental or unfavourable treatment is justified and proportionate.” Interplay with Other Union Legal Acts “(178) Providers and deployers should carefully assess whether other applicable Union and national legislation applies to any particular AI scoring system used in their activities, in particular if there is more specific legislation that strictly regulates the types of data that can be used as relevant and necessary for specific evaluation purposes and if there are more specific rules and procedures to ensure justified and fair treatment.” (Section (178) of the Guidelines) Section (180) highlights that social scoring must also comply with EU data protection law, consumer protection rules, and “union non-discrimination law” where relevant. 
INDIVIDUAL CRIME RISK PREDICTION (ARTICLE 5(1)(d)) Rationale and Objectives “(184) Article 5(1)(d) AI Act prohibits AI systems assessing or predicting the risk of a natural person committing a criminal offence based solely on profiling or assessing personality traits and characteristics.” (Section (184) of the Guidelines) According to section (185), the provision “indicates, in its last phrase, that the prohibition does not apply if the AI system is used to support the human assessment of the involvement of a person in a criminal activity, which is already based on objective and verifiable facts directly linked to that activity.” As clarified in section (186), the intention is to ensure “natural persons should be judged on their actual behaviour and not on AI-predicted behaviour based solely on their profiling, personality traits or characteristics.” Main Concepts and Components of the Prohibition “(187) Several cumulative conditions must be fulfilled for the prohibition in Article 5(1)(d) AI Act to apply: (i) The practice must constitute the ‘placing on the market’, ‘the putting into service for this specific purpose’ or the ‘use’ of an AI system; (ii) The AI system must make risk assessments that assess or predict the risk of a natural person committing a criminal offence; (iii) The risk assessment or the prediction must be based solely on either, or both, of the following: (a) the profiling of a natural person, (b) assessing a natural person’s personality traits and characteristics.” (Section (187) of the Guidelines) Assessing or Predicting the Risk of a Person Committing a Crime “(189) Crime prediction AI systems identify patterns within historical data, associating indicators with the likelihood of a crime occurring, and then generate risk scores as predictive outputs. … However, such use of historical data may perpetuate or reinforce biases and may result in crucial individual circumstances being overlooked.” (Section (189) of the Guidelines) Section (191) notes that although “crime prediction AI systems bring opportunities … any forward-looking risk assessment or crime forecasting is caught by Article 5(1)(d) if it meets the other conditions, particularly if it is based solely on profiling or personality traits.” ‘Solely’ Based on Profiling or Personality Traits “(193) The third condition for the prohibition in Article 5(1)(d) AI Act to apply is that the risk assessment to assess or predict the risk of a natural person committing a crime must be based solely on (a) the profiling of the person, or (b) assessing their personality traits and characteristics.” (Section (193) of the Guidelines) As explained in section (200), “Where the system is based on additional, objective and verifiable facts directly linked to criminal activity, the prohibition does not apply (Article 5(1)(d) last phrase).” Out of Scope Exception for Supporting Human Assessment “(203) Article 5(1)(d) AI Act provides, in its last phrase, that the prohibition does not apply to AI systems used to support the human assessment of the involvement of a person in a criminal activity, which is already based on objective and verifiable facts directly linked to a criminal activity.” (Section (203) of the Guidelines) In section (205), the Guidelines recall the principle that “no adverse legal decision can be based solely on such AI output,” ensuring human oversight remains central. 
Location-Based or Geospatial Predictive Policing “(212) Location-based or geospatial predictive or place-based crime predictions … fall outside the scope of the prohibition, provided the AI system does not also profile an individual.” (Section (212) of the Guidelines) If the AI system eventually singles out specific natural persons as potential offenders “solely based on profiling or personality traits,” it can fall under Article 5(1)(d). Private Sector or Administrative Context “(210) Where a private entity profiles customers for its ordinary business operations and safety, with the aim of protecting its own private interests, the use of AI systems to assess criminal risks is not deemed to be covered by the prohibition of Article 5(1)(d) AI Act unless the private operator is entrusted by law enforcement or subject to specific legal obligations for anti-money laundering or terrorism financing.” (Section (210) of the Guidelines) Similarly, administrative offences (section (217)) do not fall within the prohibition if they are not classified as criminal under Union or national law. Interplay with Other Union Legal Acts “(219) The interplay of the prohibition in Article 5(1)(d) AI Act with the LED and GDPR is relevant when assessing the lawfulness of personal data processing … Article 11(3) LED prohibits profiling that results in direct or indirect discrimination.” (Section (219) of the Guidelines) Section (220) notes the connection to Directive (EU) 2016/343 on the presumption of innocence, emphasizing that “the AI Act must not undermine procedural safeguards or the fundamental right to a fair trial.” UNTARGETED SCRAPING OF FACIAL IMAGES (ARTICLE 5(1)(e)) Rationale and Objectives “(222) Article 5(1)(e) AI Act prohibits the placing on the market, putting into service for this specific purpose, or the use of AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the Internet or CCTV footage.” (Section (222) of the Guidelines) According to section (223) of the Guidelines: “(223) The untargeted scraping of facial images from the internet and from CCTV footage seriously interferes with individuals’ rights to privacy and data protection and deny those individuals the right to remain anonymous. … Such scraping can evoke a feeling of mass surveillance and lead to gross violations of fundamental rights, including the right to privacy.” As clarified in section (224), this prohibition applies specifically to AI systems whose purpose is to “create or expand facial recognition databases” through the indiscriminate or “vacuum cleaner” approach of harvesting facial images. Main Concepts and Components of the Prohibition “(225) Several cumulative conditions must be fulfilled for the prohibition in Article 5(1)(e) AI Act to apply: (i) The practice must constitute the ‘placing on the market’, ‘the putting into service for this specific purpose’ or the ‘use’ of an AI system; (ii) for the purpose of creating or expanding facial recognition databases; (iii) the means to populate the database are through AI tools for untargeted scraping; and (iv) the sources of the images are either from the internet or CCTV footage.” (Section (225) of the Guidelines) Facial Recognition Databases “(226) The prohibition in Article 5(1)(e) AI Act covers AI systems used to create or expand facial recognition databases. ‘Database’ … is any collection of data or information specially organized for search and retrieval by a computer. 
A facial recognition database is capable of matching a human face from a digital image or video frame against a database of faces … .” (Section (226) of the Guidelines) Untargeted Scraping of Facial Images “(227) ‘Scraping’ typically refers to using web crawlers, bots, or other means to extract data or content from different sources, including CCTV, websites or social media, automatically. … ‘Untargeted’ means that the scraping operates without a specific focus on a given individual or group of individuals, effectively indiscriminately harvesting data or content.” (Section (227) of the Guidelines) “(230) If a scraping tool is instructed to collect images or video containing human faces only of specific individuals or a pre-defined group of persons, then the scraping becomes targeted … the scraping of the Internet or CCTV footage for the creation of a database step-by-step … should fall within the prohibition if the end-result is functionally the same as pursuing untargeted scraping from the outset.” (Section (230) of the Guidelines) From the Internet and CCTV Footage “(231) For the prohibition in Article 5(1)(e) AI Act to apply, the source of the facial images may either be the Internet or CCTV footage. Regarding the internet, the fact that a person has published facial images of themselves on a social media platform does not mean that that person has given his or her consent for those images to be included in a facial recognition database.” (Section (231) of the Guidelines) In section (232), the Guidelines exemplify real-life scenarios, including the use of automated crawlers to gather online photos containing human faces, or the use of software to systematically extract faces from CCTV feeds for a large database. Out of Scope “(234) The prohibition in Article 5(1)(e) AI Act does not apply to the untargeted scraping of biometric data other than facial images (such as voice samples). The prohibition does also not apply where no AI systems are involved in the scraping. Facial image databases that are not used for the recognition of persons are also out of scope, such as facial image databases used for AI model training or testing purposes, where the persons are not identified.” (Section (234) of the Guidelines) As clarified in sections (235)–(236), the mere fact of collecting large amounts of images for other legitimate purposes does not automatically trigger the ban, provided the system “is not intended for, nor used to create or expand a facial recognition database.” Interplay with Other Union Legal Acts “(238) In relation to Union data protection law, the untargeted scraping of the internet or CCTV material to build-up or expand face recognition databases, i.e. the processing of personal data (collection of data and use of databases) would be unlawful and no legal basis under the GDPR, EUDPR and the LED could be relied upon.” (Section (238) of the Guidelines) Section (238) further explains that the AI Act complements these data protection rules by banning such scraping at the level of placing on the market, putting into service, or use of the AI systems themselves. 
EMOTION RECOGNITION (ARTICLE 5(1)(f)) Rationale and Objectives “(239) Article 5(1)(f) AI Act prohibits AI systems to infer emotions of a natural person in the areas of workplace and education institutions, except where the use of the system is intended for medical or safety reasons.” (Section (239) of the Guidelines) According to section (240), the ban reflects concerns regarding the “intrusive nature of emotion recognition technology, the uncertainty over its scientific basis, and its potential to undermine privacy, dignity, and individual autonomy.” As stated in section (241) of the Guidelines, “(241) Emotion recognition can be used in multiple areas and domains … but it is also quickly evolving and comprehends different technologies, raising serious concerns about reliability, bias, and potential harm to human dignity and fundamental rights.” Main Concepts and Components of the Prohibition “(242) Several cumulative conditions must be fulfilled for the prohibition in Article 5(1)(f) AI Act to apply: (i) The practice must constitute the ‘placing on the market’, ‘the putting into service for this specific purpose’, or the ‘use’ of an AI system; (ii) AI system to infer emotions; (iii) in the area of the workplace or education and training institutions; and (iv) excluded from the prohibition are AI systems intended for medical or safety reasons.” (Section (242) of the Guidelines) AI Systems to Infer Emotions “(244) Inferring generally encompasses identifying as a prerequisite, so that the prohibition should be understood as including both AI systems identifying or inferring emotions or intentions … based on their biometric data.” (Section (244) of the Guidelines) Sections (246)–(247) confirm that “emotion recognition” means “identifying or inferring emotional states from biometric data such as facial expressions, voice, or behavioural signals.” Limitation to Workplace and Education “(253) The prohibition in Article 5(1)(f) AI Act is limited to emotion recognition systems in the ‘areas of workplace and educational institutions’. … This aims to address the power imbalance in those contexts.” (Section (253) of the Guidelines) According to section (254), “workplace” includes all settings where professional or self-employment activities occur (offices, factories, remote or mobile sites). As stated in section (255), “education institutions” include all levels of formal education, vocational training, and educational activities generally sanctioned by national authorities. Exception for Medical or Safety Reasons “(256) The prohibition in Article 5(1)(f) AI Act contains an explicit exception for emotion recognition systems used in the area of the workplace and education institutions for medical or safety reasons, such as systems for therapeutic use.” (Section (256) of the Guidelines) Section (258) clarifies the narrow scope of that exception, stating that it only covers use “strictly necessary” to achieve a medical or safety objective. 
Further, in section (261), the Guidelines note that “detecting a person’s fatigue or pain in contexts like preventing accidents is considered distinct from ‘inferring emotions’ and may be allowed.” More Favourable Member State Law “(264) Article 2(11) AI Act provides that the Union or Member States may keep or introduce ‘laws, regulations or administrative provisions which are more favourable to workers in terms of protecting their rights in respect of the use of AI systems by employers’.” (Section (264) of the Guidelines) Such stricter national laws or collective agreements could forbid emotion recognition entirely, even for medical or safety reasons in the workplace. Out of Scope “(266) Emotion recognition systems used in all other domains other than in the areas of the workplace and education institutions do not fall under the prohibition in Article 5(1)(f) AI Act. Such systems are, however, considered high-risk AI systems according to Annex III (1)(c).” (Section (266) of the Guidelines) Additionally, per section (265), uses that do not involve biometric data (e.g. text-based sentiment analysis) or do not seek to infer emotions are not caught by the prohibition. The Guidelines note these systems may still be subject to other AI Act requirements or other legislation if potential manipulative or exploitative effects arise. BIOMETRIC CATEGORISATION FOR SENSITIVE ATTRIBUTES (ARTICLE 5(1)(g)) Rationale and Objectives “(272) A wide variety of information, including ‘sensitive’ information, may be extracted, deduced or inferred from biometric information, even without the knowledge of the persons concerned, to categorise those persons. This may lead to unfair and discriminatory treatment … and amounts to social control and surveillance that are incompatible with Union values. The prohibition of ‘biometric categorisation’ in Article 5(1)(g) AI Act aims to protect these fundamental rights.” (Section (272) of the Guidelines) According to section (271), “Article 5(1)(g) AI Act prohibits biometric categorisation systems that categorise individually natural persons based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex-life or sexual orientation.” The aim is to prevent “unfair, discriminatory and privacy-intrusive AI uses that rely on highly sensitive characteristics.” Main Concepts and Components of the Prohibition “(273) Several cumulative conditions must be fulfilled for the prohibition in Article 5(1)(g) AI Act to apply: (i) The practice must constitute the ‘placing on the market’, ‘the putting into service for this specific purpose’, or the ‘use’ of an AI system; (ii) The system must be a biometric categorisation system; (iii) individual persons must be categorised; (iv) based on their biometric data; (v) to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation.” (Section (273) of the Guidelines) Biometric Categorisation System “(276) ‘Biometric categorisation’ is typically the process of establishing whether the biometric data of an individual belongs to a group with some predefined characteristic. 
It is not about identifying an individual or verifying their identity, but about assigning an individual to a certain category.” (Section (276) of the Guidelines) As section (277) notes, this includes the automated assignment of individuals to categories such as “race or ethnicity,” “religious beliefs,” or “political stance,” purely on the basis of features derived from biometric data. Sensitive Characteristics: Race, Political Opinions, Religious Beliefs, etc. “(283) Article 5(1)(g) AI Act prohibits only biometric categorisation systems which have as their objective to deduce or infer a limited number of sensitive characteristics: race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation.” (Section (283) of the Guidelines) The Guidelines underscore (section (283)) that “the use of any ‘proxy’ or correlation-based approach that aims to deduce or infer these protected attributes from biometric data is likewise covered.” Out of Scope “(284) The prohibition in Article 5(1)(g) AI Act does not cover AI systems engaged in the labelling or filtering of lawfully acquired biometric datasets … if they do not entail the categorisation of actual persons to deduce or infer their sensitive attributes, but merely aim at ensuring balanced and representative data sets for training or testing.” (Section (284) of the Guidelines) Section (285) clarifies that labeling or filtering biometric data to reduce bias or ensure representativeness is specifically exempted: “(285) The labelling or filtering of biometric datasets may be done by biometric categorisation systems precisely to guarantee that the data equally represent all demographic groups, and not over-represent one specific group.” Thus, mere dataset management or quality-control uses of biometric categorisation remain lawful if they do not aim to classify real individuals by their sensitive traits. Interplay with Other Union Law “(287) AI systems intended to be used for biometric categorisation according to sensitive attributes or characteristics protected under Article 9(1) GDPR on the basis of biometric data, in so far as these are not prohibited under this Regulation, are classified as high-risk under the AI Act (Recital 54 and Annex III, point (1)(b) AI Act).” (Section (287) of the Guidelines) Section (289) notes that the AI Act’s ban under Article 5(1)(g) “further restricts the possibilities for a lawful personal data processing under Union data protection law, such as the GDPR … by excluding such practices at the earlier stage of placing on the market and use.” REAL-TIME REMOTE BIOMETRIC IDENTIFICATION (RBI) FOR LAW ENFORCEMENT (ARTICLE 5(1)(h)) Rationale and Objectives “(289) Article 5(1)(h) AI Act prohibits the use of real-time RBI systems in publicly accessible spaces for law enforcement purposes, subject to limited exceptions exhaustively set out in the AI Act.” (Section (289) of the Guidelines) According to section (293): “(293) Recital 32 AI Act acknowledges the intrusive nature of real-time RBI systems in publicly accessible spaces for law enforcement purposes … that can affect the private life of a large part of the population, evoke a feeling of constant surveillance, and indirectly dissuade the exercise of the freedom of assembly and other fundamental rights.” The Guidelines (section (295)) note that, unlike other prohibitions in Article 5(1) AI Act, the ban here concerns “the use” of real-time RBI (rather than its placing on the market or putting into service). 
Main Concepts and Components of the Prohibition “(295) Several cumulative conditions must be fulfilled for the prohibition in Article 5(1)(h) AI Act to apply: (i) The AI system must be a RBI system; (ii) The activity consists of the ‘use’ of that system; (iii) in ‘real-time’; (iv) in publicly accessible spaces, and (v) for law enforcement purposes.” (Section (295) of the Guidelines) Remote Biometric Identification (RBI) “(298) According to Article 3(41) AI Act, a RBI system is ‘an AI system for the purpose of identifying natural persons, without their active involvement, typically at a distance through the comparison of a person’s biometric data with the biometric data contained in a reference database.’” (Section (298) of the Guidelines) Sections (299)–(303) clarify that “biometric identification” differs from verification (where a person’s identity claim is checked), focusing on “comparing captured biometric data with data in a reference database.” Real-time “(310) Real-time means that the system captures and further processes biometric data ‘instantaneously, near-instantaneously or in any event without any significant delay’.” (Section (310) of the Guidelines) Section (311) points out that “real-time” also covers a short buffer of processing, ensuring no circumvention by artificially adding minimal delays. Publicly Accessible Spaces “(313) Article 3(44) AI Act defines publicly accessible spaces as ‘any publicly or privately owned physical space accessible to an undetermined number of natural persons, regardless of whether certain conditions for access may apply, and regardless of the potential capacity restrictions.’” (Section (313) of the Guidelines) Sections (315)–(316) explain that “spaces such as stadiums, train stations, malls, or streets” are included, while purely private or restricted-access areas are excluded. For Law Enforcement Purposes “(320) Law enforcement is defined in Article 3(46) AI Act as the ‘activities carried out by law enforcement authorities or on their behalf for the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including safeguarding against and preventing threats to public security.’” (Section (320) of the Guidelines) Exceptions to the Prohibition “(326) The AI Act provides three exceptions to the general prohibition on the use of real-time RBI in publicly accessible spaces for law enforcement purposes. Article 5(1)(h)(i) to (iii) AI Act exhaustively lists three objectives for which real-time RBI may be authorised … subject to strict conditions.” (Section (326) of the Guidelines) Those objectives, detailed in sections (329)–(356), are: Targeted search for victims of abduction, trafficking, or sexual exploitation, or missing persons. Prevention of a specific, substantial, and imminent threat to life or safety, or a genuine and present or foreseeable threat of a terrorist attack. Localisation or identification of suspects of the serious crimes listed in Annex II AI Act, punishable by at least four years of imprisonment. 
As clarified in section (360), “any such use must be proportionate, strictly necessary, and limited in time, geography, and the specific targeted individual.” Authorisation, Safeguards, and Conditions (Article 5(2)–(7)) “(379) Article 5(3) AI Act requires prior authorisation of each individual use of a real-time RBI system and prohibits automated decision-making based solely on its output … The deployer must also conduct a Fundamental Rights Impact Assessment (FRIA) in accordance with Article 27 AI Act.” (Section (379) of the Guidelines) Section (381) underscores that the request for authorisation must show “objective evidence or clear indications” of necessity and proportionality, and that “no less intrusive measure is equally effective” for achieving the legitimate objective. Out of Scope “(426) All other uses of RBI systems that are not covered by the prohibition of Article 5(1)(h) AI Act fall within the category of high-risk AI systems … provided they fall within the scope of the AI Act.” (Section (426) of the Guidelines) Sections (427)–(428) note that “retrospective (post) RBI systems” do not fall under the real-time ban but are still classified as high-risk and subject to additional obligations (Article 26(10) AI Act). Private sector uses in non-law enforcement contexts (e.g., stadium access control) likewise do not trigger this specific prohibition, though they must still comply with other AI Act requirements and Union data protection law. Link: Commission publishes the Guidelines on prohibited artificial intelligence (AI) practices, as defined by the AI Act . * * * Prokopiev Law Group stands ready to meet your AI and web3 compliance needs worldwide—whether you are exploring AI Act compliance, crypto licensing, web3 regulatory frameworks, NFT regulation, or DeFi and AML/KYC requirements. Our broad network spans the EU, US, UK, Switzerland, Singapore, Malta, Hong Kong, Australia, and Dubai, ensuring every local standard is met promptly and precisely. Write to us now for further details and let our proven legal strategies keep your projects fully compliant. The information provided is not legal, tax, investment, or accounting advice and should not be used as such. It is for discussion purposes only. Seek guidance from your own legal counsel and advisors on any matters. The views presented are those of the author and not any other individual or organization. Some parts of the text may be automatically generated. The author of this material makes no guarantees or warranties about the accuracy or completeness of the information.
- Compliance Challenges in DeFi: AML/KYC & Securities Law Complexities
Decentralized Finance (DeFi) promises financial services without traditional intermediaries – but this very decentralization creates thorny compliance challenges. Regulators worldwide are grappling with how anti-money laundering (AML), know-your-customer (KYC) rules and securities laws apply in a permissionless ecosystem. Regulatory Landscape Across Jurisdictions European Union: Framework with Decentralization Carve-Outs The European Union has moved toward comprehensive crypto regulation, though it distinguishes truly decentralized arrangements from those with intermediaries. Two pillars of EU policy affect DeFi compliance: the new Markets in Crypto-Assets (MiCA) regulation for crypto markets and existing/upcoming AML directives and regulations for financial crime prevention. MiCA’s Scope and Application Coverage of Centralized Crypto Services The EU’s Markets in Crypto-Assets Regulation (MiCA) establishes a regulatory framework for crypto-asset issuers and service providers. It applies to any natural or legal person (or similar undertaking) engaged in crypto-asset activities – for example, operating trading platforms, facilitating exchanges, custody services, and so forth. In effect, centralized crypto services (such as exchanges, brokers, custodians, and other intermediaries) fall squarely under MiCA. These Crypto-Asset Service Providers (CASPs) must obtain authorization and comply with operational and prudential requirements, similar to traditional financial institutions. MiCA enumerates various types of regulated crypto-asset services, including: custody and administration of crypto-assets for clients, operating a crypto-asset trading platform, exchanging crypto-assets for fiat or other crypto, execution of client orders, placing of crypto-assets, providing advice on crypto-assets, and related functions. Any intermediary performing these activities in the EU is in scope and will need a MiCA license (with associated governance, capital, and consumer-protection obligations). Stablecoin Issuers MiCA devotes special provisions to stablecoins, referred to as Asset-Referenced Tokens (ARTs) (stablecoins referencing multiple assets or non-fiat values) and E-Money Tokens (EMTs) (stablecoins referencing a single fiat currency). Issuers of such tokens must be legally incorporated in the EU and obtain authorization from a national regulator to issue them. Key obligations for stablecoin issuers include maintaining sufficient reserve assets, publishing a detailed white paper, offering redemption rights at par for holders, and adhering to prudential safeguards to protect monetary stability. Significant stablecoins (with large user bases or transaction volumes) face even tighter supervision, potentially including limits on daily transaction volume. Crypto Trading Platforms MiCA explicitly covers crypto trading venues. Any entity operating a platform that brings together buyers and sellers of crypto-assets (whether for crypto-to-crypto or crypto-to-fiat trades) is considered a CASP and must be authorized. Such platforms must meet ongoing compliance duties: maintaining minimum capital, ensuring managers are fit and proper, implementing cybersecurity controls, segregating client assets, and providing transparent operations. Carve-Out for Decentralized Services (Recital 22) While MiCA casts a wide net over centralized actors, it pointedly exempts fully decentralized activities. 
Recital 22 clarifies that if crypto-asset services are provided in a “fully decentralised manner” without any intermediary, they should not fall within the scope of the regulation. Thus, if a service truly runs autonomously on a decentralized network with no controlling party, EU lawmakers did not intend to capture it. However, this exemption is noted in a recital rather than detailed operative articles, leaving room for interpretation. Regulators emphasize that partial or hybrid decentralization (where an identifiable party retains some control) likely does not qualify for the carve-out. In essence, MiCA’s coverage extends to any service where a person or entity performs an intermediary function. DeFi Projects with Some Central Control A key question is how MiCA classifies DeFi projects that are not fully decentralized. MiCA’s wording suggests that any form of central “intermediation” brings a project within scope. If a DeFi arrangement involves a team operating a front-end, collecting protocol fees, or otherwise controlling upgrades, that team could be deemed a CASP. Officials have signaled that true decentralization must mean the absence of a controlling entity. If a DeFi project is partially decentralized (“HyFi,” or hybrid finance), MiCA will likely apply. Many current DeFi protocols have facets of centralization—admin keys, core dev teams, or small governance groups—which, from a regulatory standpoint, may trigger full compliance obligations under MiCA. Impact of MiCA on DeFi Defining “Fully Decentralized” – EU Perspective MiCA does not define “fully decentralized,” leaving a significant gray area. European regulators generally consider whether a project has no entity exercising control, no governance token concentration, no fee collection by a specific party, and no centralized front-end with gatekeeping powers. Only if every aspect is automated and dispersed, with no single group in charge, would it likely be exempt. Because that threshold is high, most DeFi projects risk classification as CASPs if they retain any managerial or economic control. Which DeFi Aspects Might Still Fall Under MiCA Even if the protocol itself is autonomous, various aspects can bring it under MiCA: Governance Token Issuance: A team offering governance tokens to EU users may need to comply with token issuance rules under MiCA, including drafting a compliant white paper. Liquidity Pools & Protocol Operations: If a DeFi developer or entity retains an admin key or collects fees, regulators could treat them as a crypto-asset service provider. Treasury Management: Fees that accrue to a foundation or multisig group could be viewed as service revenue, suggesting there is an identifiable operator or beneficiary. Protocol Governance: If governance token holders can upgrade or change the protocol, the system may not be fully autonomous, exposing key holders to regulatory obligations. Obligation for DeFi Protocols to Comply If a project maintains operations in the EU but does not meet the “fully decentralized” standard, it may be forced to obtain a CASP license under MiCA. That entails duties akin to those imposed on centralized exchanges, such as governance, disclosures, capital, and consumer protection. Projects may evolve into hybrid models—where the underlying smart contract is open, but the front-end or development team is regulated. Others may prefer to geofence EU users or further decentralize operations to avoid regulation. 
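To make the scoping questions above more tangible, the following minimal Python sketch shows how a project team might record the factors discussed in this section, namely the crypto-asset services MiCA enumerates and the common centralization indicators (admin keys, fee collection, a gatekeeping front-end, concentrated governance), and generate rough, non-authoritative hints about possible CASP exposure. The labels and categories are illustrative assumptions drawn from this article, not terms defined in MiCA, and the output is no substitute for legal analysis.

```python
# Illustrative only: a rough MiCA scoping self-check based on the factors
# discussed above. The categories and flags below are informal labels drawn
# from this article, not legal definitions, and the result is not legal advice.

from dataclasses import dataclass, field

# Services enumerated by MiCA as regulated crypto-asset services (paraphrased).
CASP_SERVICES = {
    "custody_and_administration",
    "operating_trading_platform",
    "exchange_crypto_fiat_or_crypto",
    "execution_of_client_orders",
    "placing_of_crypto_assets",
    "advice_on_crypto_assets",
}

# Common centralization indicators mentioned above for DeFi projects.
CENTRALIZATION_INDICATORS = {
    "admin_keys",
    "protocol_fee_collection",
    "gatekeeping_front_end",
    "concentrated_governance",
    "core_team_controls_upgrades",
}

@dataclass
class ProjectProfile:
    services: set = field(default_factory=set)
    indicators: set = field(default_factory=set)
    serves_eu_users: bool = True

def mica_scoping_hints(profile: ProjectProfile) -> list[str]:
    """Return informal hints about possible MiCA exposure."""
    hints = []
    if not profile.serves_eu_users:
        hints.append("No EU users targeted: MiCA exposure may be limited, "
                     "but marketing and access patterns still matter.")
    in_scope_services = profile.services & CASP_SERVICES
    if in_scope_services:
        hints.append(f"Performs enumerated CASP services: {sorted(in_scope_services)} "
                     "- likely needs authorization if an identifiable operator exists.")
    central = profile.indicators & CENTRALIZATION_INDICATORS
    if central:
        hints.append(f"Centralization indicators present: {sorted(central)} "
                     "- the 'fully decentralised' carve-out is unlikely to apply.")
    if not in_scope_services and not central:
        hints.append("No enumerated services or centralization indicators flagged; "
                     "a fully decentralized posture is at least arguable.")
    return hints

if __name__ == "__main__":
    dex_team = ProjectProfile(
        services={"operating_trading_platform"},
        indicators={"admin_keys", "protocol_fee_collection"},
    )
    for hint in mica_scoping_hints(dex_team):
        print("-", hint)
```

A team would typically use something like this only as a first-pass checklist before seeking formal advice on authorization, restructuring, or further decentralization.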
EU AMLD6, DORA, and AML Regulations Application of AMLD6 to Crypto Services: The EU’s Sixth Anti-Money Laundering Directive (AMLD6), part of a broader AML legislative package, expands the scope of “obliged entities” to include crypto-asset service providers (CASPs) across the EU. This means centralized crypto businesses (exchanges, custodial wallet providers, brokers, etc.) are explicitly subject to AML/CFT requirements akin to those for traditional financial institutions. Under AMLD6 and the accompanying EU AML Regulation, CASPs must implement customer due diligence, monitor transactions, keep records, and report suspicious activity. Newly adopted EU rules also require CASPs to collect and store information on the source and beneficiary for each crypto transaction, effectively implementing the “travel rule.” Application of DORA to Crypto Businesses: The Digital Operational Resilience Act (DORA) is a separate EU regulation focusing on cybersecurity and operational continuity for financial entities, including CASPs authorized under MiCA. As of January 2025, centralized crypto businesses must have robust security controls, incident reporting mechanisms, business continuity plans, and undergo operational resilience testing. In essence, bank-grade resilience standards will apply to crypto firms, aiming to reduce hacks and service outages in the digital asset space. DeFi Projects as VASPs – Classification Challenges: Under global standards (FATF) and EU definitions, simply labeling a platform as “DeFi” does not exempt it from regulation. If individuals or entities exercise control or significant influence over a DeFi arrangement, they may be treated as virtual asset service providers (VASPs) with AML obligations. A DeFi project featuring an identifiable company collecting fees or operating a front-end will likely be deemed a crypto-asset service provider. On the other hand, a fully decentralized protocol with no controlling party is in a gray area; however, authorities are inclined to apply a “substance over form” test, meaning a DeFi platform with centralized elements can be compelled to comply with AML/KYC requirements. Obligations for DeFi Lending, DEXs, and Custodial Services: If a DeFi platform is deemed an obliged entity, it faces obligations similar to centralized providers. For lending protocols, this could mean enforcing KYC on users supplying or borrowing assets if there is an identifiable operator. Decentralized exchanges (DEXs) that match trades or collect fees may be treated as VASPs, thus required to identify users and report suspicious transactions. Custodial services are straightforwardly in scope—anyone holding crypto on behalf of others has been subject to AML laws since AMLD5 and continues to be under AMLD6. Even non-custodial DeFi projects that interact with EU customers could face indirect obligations if there is an identifiable entity offering the service. Implementation Status & Timeline of Key EU Measures AMLD6 (Sixth Anti-Money Laundering Directive) AMLD6 was adopted as part of the 2024 AML package and entered into force in July 2024. As a Directive, it must be transposed into national law by EU Member States, with a final deadline of mid-2027 for full implementation. While many AML obligations already apply to crypto service providers under prior directives, AMLD6 introduces more stringent mechanisms and penalties. EU AML Regulation (AMLR) – the “Single Rulebook” Alongside AMLD6, the EU is enacting an AML/CFT Regulation to unify rules across Member States. 
This directly applicable regulation also takes effect by mid-2027. It includes detailed requirements for customer due diligence and suspicious activity reporting, and broadens the definition of obliged entities to include crypto. Regulatory Technical Standards and guidance from the new EU AML Authority (AMLA) will shape practical implementation, with significant milestones from 2025 to 2026. EU Anti-Money Laundering Authority (AMLA) The AML package creates a new supra-national regulator. AMLA will supervise high-risk or cross-border financial institutions, including major crypto providers, from 2026–2027 onward. Until then, national regulators handle primary enforcement. AMLA will issue technical standards and guidelines, resulting in more centralized oversight of AML in crypto. Digital Operational Resilience Act (DORA) DORA was published in late 2022 and becomes fully applicable in January 2025. It imposes robust ICT risk management, incident reporting, and business continuity requirements on all in-scope financial entities, including CASPs. By late 2024, crypto firms should finalize compliance measures, such as incident response protocols and third-party risk assessments, in preparation for enforcement beginning in January 2025. Transfer of Funds Regulation (Crypto Travel Rule) Adopted in 2023, the recast Transfer of Funds Regulation applies the travel rule to crypto-asset transfers, starting from December 2024. Any CASP transferring crypto must include originator and beneficiary details with the transaction, similar to wire transfers in traditional finance. This rule also covers interactions with unhosted wallets, requiring CASPs to collect and verify identifying information on transfers above certain thresholds. Firms must refuse or halt transfers lacking necessary data, making travel rule compliance a major priority for crypto service providers. Pending Proposals and 2024+ Outlook With AMLD6, AMLR, AMLA, TFR, DORA, and MiCA all in play, the EU’s regulatory framework for crypto is rapidly harmonizing. By 2027, AMLD6 and AMLR will fully bind all Member States, cementing Europe’s single AML rulebook. Comparative Analysis EU vs. U.S. The EU is moving toward a unified AML framework enforced by AMLA, whereas the U.S. relies on FinCEN regulations (BSA) and multiple enforcement agencies. While both treat crypto exchanges as obliged entities, the U.S. often communicates compliance expectations via enforcement actions, and has taken high-profile measures against mixers and exchanges. The EU’s single rulebook aims for more preventive supervision, though it can also impose large fines and coordinate criminal prosecutions via Member State authorities. EU vs. UK After Brexit, the UK has its own AML regime under the Money Laundering Regulations, requiring crypto exchanges and custodians to register with the FCA. The UK often requires the travel rule to apply with no de minimis threshold, making it stricter than the EU for smaller transfers. The UK’s approach is less centralized than the EU’s future AMLA model, with existing agencies overseeing compliance. EU vs. Singapore Singapore has a strong licensing regime under the Payment Services Act, requiring AML compliance for digital payment token providers. Like the EU, Singapore has adopted the FATF travel rule, imposing a ~$1,000 threshold for enhanced due diligence on unhosted wallets. Both jurisdictions share a risk-based approach but proactively supervise crypto businesses. EU vs. 
Switzerland Switzerland integrates crypto into its existing AML laws, demanding that exchanges and wallet providers comply with due diligence, often imposing a strict ~$1,000 threshold for unhosted wallet verification. Enforcement is overseen by FINMA, which has sanctioned firms for AML breaches. Regulatory Obligations & Risks for DeFi Projects Smart Contract-Based Services Under AML Rules: If a DeFi service qualifies as an exchange, transfer, or custody service under EU law, and there is an entity or persons behind it, AMLD6 and related regulations can apply. Authorities will hold operators or developers accountable if they exercise control, even if transactions occur via automated smart contracts. The law targets the persons or entities benefiting from or running the platform, rather than the code itself. Enforcement on Decentralized Protocols: Truly decentralized protocols pose challenges for regulators, but enforcement can focus on: Individuals/Entities: Developers, founders, or DAOs who maintain or profit from the system. On/Off-Ramps: EU-regulated exchanges can reject or scrutinize funds from non-compliant DeFi sources. Technical Measures: Authorities may require front-ends to implement geolocation blocks and address screening, or to adopt zero-knowledge-based KYC solutions. Legal Risks for DeFi Founders and DAOs: Those found knowingly facilitating money laundering or ignoring AML obligations could face fines or criminal liability. AMLD6 broadens the definition of offenses and strengthens information sharing among Member States, boosting cross-border investigations. DAOs might be treated as unincorporated associations whose active participants can be liable. Will DeFi Have to Implement KYC/AML? If DeFi wants mainstream adoption and integration with regulated finance, it may need optional or mandatory compliance layers—e.g., whitelisted pools or KYC gates. Over time, regulatory pressure from authorities and off-ramps will likely push more DeFi protocols to adopt or at least accommodate AML measures. Interaction with the Travel Rule & Unhosted Wallet Rules EU Travel Rule Extension to Crypto: Starting December 2024, all crypto transfers involving a CASP must include identifying information on the originator and beneficiary, mirroring traditional wire transfer rules. CASPs must refuse or halt transfers lacking complete data. No de minimis threshold exists—any transfer, regardless of amount, requires this information. Treatment of Unhosted Wallets: Unhosted wallets (self-custody) complicate the travel rule because there is no second institution to receive data. EU CASPs must still record and verify user information for unhosted wallets above certain thresholds. In practice, this may mean proving ownership of the receiving address. Smart contract addresses used in DeFi (e.g., liquidity pools, DAOs) also count as “unhosted,” prompting some CASPs to demand enhanced due diligence or refuse direct transfers. Impact on DeFi Liquidity Providers and DAOs: Liquidity providers withdrawing from an EU exchange to a DeFi pool might need to route funds to their own verified wallet first or prove ownership if exceeding €1,000. Deposits from privacy-enhancing protocols can be flagged as high risk. DAOs operating treasuries could face challenges off-ramping to fiat if no single verified individual claims the wallet. Exchanges, under pressure to comply, may reject or closely scrutinize funds from unknown DeFi addresses. 
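To illustrate the operational side of the travel rule and unhosted-wallet checks described above, here is a minimal Python sketch of the validations a CASP's transfer pipeline might run before releasing a transfer: originator and beneficiary details must accompany every transfer regardless of amount, and transfers to self-hosted addresses above the €1,000 threshold need some proof that the customer controls the receiving address. The field names, the ownership-proof flag, and the data model are assumptions for illustration; the required data elements are set out in the Transfer of Funds Regulation and its technical standards.

```python
# Illustrative sketch of travel-rule style checks a CASP might run before
# releasing a crypto transfer. Field names and the ownership-proof mechanism
# are assumptions for illustration, not the wording of the Transfer of Funds
# Regulation or its technical standards.

from dataclasses import dataclass
from typing import Optional

UNHOSTED_VERIFICATION_THRESHOLD_EUR = 1_000  # threshold discussed above

@dataclass
class Party:
    name: str
    address: str                       # on-chain address
    account_id: Optional[str] = None   # internal account; None for unhosted wallets

@dataclass
class Transfer:
    originator: Party
    beneficiary: Party
    amount_eur: float
    beneficiary_is_unhosted: bool
    ownership_proof_provided: bool = False  # e.g. a signed message from the address

def travel_rule_check(t: Transfer) -> list[str]:
    """Return a list of blocking issues; an empty list means the transfer can proceed."""
    issues = []
    # Originator and beneficiary information must accompany every transfer,
    # regardless of amount (no de minimis threshold).
    for role, party in (("originator", t.originator), ("beneficiary", t.beneficiary)):
        if not party.name or not party.address:
            issues.append(f"Missing required {role} information.")
    # Transfers to self-hosted wallets above the threshold need extra verification,
    # e.g. proof that the customer controls the receiving address.
    if (t.beneficiary_is_unhosted
            and t.amount_eur > UNHOSTED_VERIFICATION_THRESHOLD_EUR
            and not t.ownership_proof_provided):
        issues.append("Unhosted-wallet transfer above threshold lacks ownership proof.")
    return issues

if __name__ == "__main__":
    transfer = Transfer(
        originator=Party("Alice Example", "0xabc...", account_id="acct-1"),
        beneficiary=Party("Alice Example", "0xdef..."),
        amount_eur=2_500,
        beneficiary_is_unhosted=True,
    )
    problems = travel_rule_check(transfer)
    print(problems or "OK to release transfer")
```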
Privacy and GDPR Considerations: An interesting twist in the EU is the General Data Protection Regulation (GDPR), which imposes rules on handling personal data. KYC data (names, IDs, etc.) is obviously personal data that must be stored securely and minimized. A conflict arises if one tried to record compliance information on an immutable blockchain. Once on-chain, data cannot be erased, clashing with GDPR’s “right to be forgotten.” Most DeFi projects avoid putting any personal data on-chain (favoring off-chain or zero-knowledge proofs), but as regulators push for on-chain identity attestation or allow verified credentials, projects must design systems that reconcile transparency with privacy law. Moreover, any DeFi company handling EU resident data needs GDPR compliance (privacy notices, breach protocols, etc.), adding another layer of regulatory complexity beyond financial laws. Asia Singapore Singapore has sought to be a crypto-friendly hub while enforcing strict AML standards. Under the Payment Services Act 2019 (PSA), any business providing digital payment token services (e.g. crypto trading, transfer, or custody services) must be licensed by the Monetary Authority of Singapore (MAS). This licensing comes with AML/CFT requirements – Singapore mandates full KYC for regulated crypto services, transaction monitoring, and compliance with FATF travel rule requirements. DeFi protocols per se are not explicitly carved out under the PSA; however, MAS has generally taken a “same activity, same regulation” stance. If a Singapore-based team runs a crypto lending or trading platform (even if using DeFi tech), regulators would likely view it as a financial service that needs either a license or an exemption via a sandbox. In practice, Singapore has encouraged experimentation through initiatives like Project Guardian, where regulated financial institutions explore DeFi for tokenized assets in a controlled environment. MAS officials have acknowledged the reality of DeFi and discussed potentially new frameworks – for instance, MAS’s chief fintech officer has suggested that entirely avoiding identification in DeFi is “not realistic” in the long run. Startups in Singapore thus often create two layers: an open-source protocol (which MAS might not regulate directly if truly decentralized) and a front-end company that interfaces with users (which would need a license and KYC). Notably, Singapore has also restricted marketing of crypto to the public and discouraged risky retail speculation, which means DeFi projects targeting Singaporean users should be cautious in how they advertise and ensure they are not offering prohibited products (like derivatives) without proper authorization. Hong Kong Hong Kong pivoted to embrace crypto under a new regulatory regime, allowing retail trading of approved cryptocurrencies on licensed exchanges as of 2023. The Securities and Futures Commission (SFC) in Hong Kong has made it clear that DeFi projects are not above the law. If a DeFi activity falls under existing definitions of regulated activity – e.g., operating an exchange, offering securities, or managing assets – it will require the appropriate SFC license. Providing automated trading services, even in a decentralized platform, triggers licensing (Type 7 ATS license) if the assets traded are “securities” or futures by Hong Kong’s definition. Likewise, offering what amounts to a collective investment scheme (like a yield farming pool inviting Hong Kong public investment) would require authorization. 
Hong Kong regulators see DeFi through the lens of risk: concerns include financial stability, lack of transparency, market manipulation (e.g. oracle attacks, front-running) and investor protection. They have indicated that operators and developers can be held accountable if they are in Hong Kong or target Hong Kong investors. Thus a DeFi startup in Hong Kong might need to either geo-fence Hong Kong users or ensure full compliance (including KYC and investor eligibility checks) for any product that might be deemed a security or trading facility. Japan Japan was one of the first major jurisdictions with a clear regulatory regime for cryptocurrency, and it continues to enforce one of the strictest AML/KYC standards. All crypto exchanges in Japan must register with the Financial Services Agency (FSA) and implement KYC for all customers. While Japan has not issued specific DeFi regulations, any service that custodies assets or intermediates trades would likely fall under existing laws (the Payment Services Act for crypto exchange or funds transfer, and the Financial Instruments and Exchange Act if it involves securities/derivatives). For example, a DeFi protocol enabling margin trading or synthetic stocks would likely be seen as offering derivatives to Japanese residents, which is unlawful without a license. Japan implements the FATF Travel Rule through the Japan Virtual Currency Exchange Association – meaning exchanges must collect counterparty information for transactions. If DeFi usage makes it hard to trace such information, Japanese regulators may respond by limiting exchange interactions with DeFi platforms that don’t meet compliance standards. Culturally, Japan emphasizes consumer protection (they infamously have a whitelist for tokens that exchanges can list). A completely permissionless DeFi application sits uneasily with that ethos. Thus, while one won’t find an “FSA DeFi rulebook,” a Japan-based founder should assume that if their protocol becomes popular in Japan, authorities might demand a compliance interface or even pressure to block Japanese IPs if the product can’t be monitored for AML. Other financial centers in Asia are also shaping DeFi oversight. South Korea treats crypto exchanges strictly (real-name verified accounts only, strict AML). After incidents like the Terra-Luna collapse, Korean regulators grew even more vigilant about crypto schemes. A DeFi project involving Korean users could be seen as an unregistered securities offering (if promising yields) or simply an illegal investment program if not approved. China remains effectively closed to cryptocurrency trading (outright ban), focusing on its central bank digital currency and permissioned blockchain tech. India has taken a tough stance with heavy taxation on crypto transactions, sometimes discussing an outright ban – a hostile environment for DeFi compliance. Meanwhile, Thailand and Malaysia have licensing for digital asset businesses that might ensnare certain DeFi activities. Overall, Asia presents a mix of innovation sandboxes and strict rules; the common theme is that if a DeFi project has an identifiable presence or target market in a jurisdiction, local regulators will apply existing financial laws. Other Jurisdictions Switzerland: Long seen as a crypto-friendly jurisdiction, Switzerland (through regulator FINMA) applies a technologically neutral approach to DeFi. 
FINMA has explicitly stated that it applies existing rules to DeFi applications under the principles of technology neutrality and “same risks, same rules,” looking past form to substance. If a DeFi application in Switzerland offers a service equivalent to banking, trading, or asset management, FINMA will require the appropriate license just as it would for a traditional provider. For example, running a decentralized exchange in Switzerland could trigger the need to be an authorized securities dealer or exchange if there is a central coordinating entity. Switzerland is notable for its AML rules regarding crypto: FINMA regulations (as of 2021) lowered the threshold for anonymous crypto transactions to CHF 1000, meaning Swiss VASPs must KYC customers even for relatively small amounts. This was done to close a loophole and prevent structuring of transactions to avoid AML checks. In a DeFi context, while Swiss law can’t force an on-chain DEX to conduct KYC, any Swiss-regulated intermediary (like a crypto bank or broker) interacting with DeFi liquidity must ensure no anonymous large transfers occur. Swiss authorities have also pioneered solutions like OpenVASP (a protocol for Travel Rule data exchange) to facilitate compliance even in decentralized transfers. Moreover, many DeFi projects have used the Swiss nonprofit foundation model to launch (to issue tokens under guidance from FINMA’s ICO framework). While this can be effective for token classification (utility vs asset tokens), the foundation must still implement AML controls if it engages in any custodial or exchange-like activities. United Arab Emirates: The UAE, particularly Dubai and Abu Dhabi, has set up regulatory regimes to attract crypto businesses. Dubai’s new Virtual Assets Regulatory Authority (VARA) issues licenses for various crypto activities in the emirate (and some free zones), with an emphasis on meeting FATF standards – meaning KYC/AML programs are mandatory for licensees. Abu Dhabi’s financial free zone (ADGM) has a framework treating crypto exchanges and custodians on par with financial institutions, requiring customer due diligence and monitoring. Even as the UAE markets itself as a crypto hub, it demands compliance measures from those who set up shop. A DeFi exchange or yield platform based in Dubai would need to register with VARA under the appropriate category and implement KYC for users, transaction monitoring, and sanctions screening. The UAE is interesting because it explicitly allows what some other places don’t (like crypto token fundraising) but under oversight. DeFi founders often incorporate entities in the UAE to benefit from clear rules and 0% tax, but they should expect close interaction with regulators and ongoing audits to ensure no illicit finance is flowing. On the flip side, purely decentralized operations with no UAE entity fall outside these regimes – but then cannot easily use the UAE’s traditional banking or legal system. Latin America: In Latin America, regulation ranges from nascent to non-existent, though the trend is toward more oversight. Brazil passed a law (effective 2023) requiring crypto service providers to register with the central bank and comply with AML/CFT measures. Mexico’s Fintech Law and subsequent rules require exchanges to register with the central bank and impose KYC – again focusing on centralized players. 
Many Latin American countries are still developing regulatory approaches; in places with capital controls or high inflation, DeFi usage is high as an alternative, which raises political and AML concerns. Authorities may see DeFi as a channel to bypass currency rules or launder narcotics money, increasing the likelihood of future clampdowns. Enforcement can be uneven, but as global standards trickle down, regulators in the region are expected to tighten controls on DeFi. El Salvador adopted Bitcoin as legal tender and has been encouraging crypto businesses – but it must still follow FATF standards, meaning AML obligations apply.

Global Standards (FATF): Overarching all these jurisdictions is the influence of the Financial Action Task Force (FATF), which sets AML/CFT standards followed by more than 200 countries and jurisdictions. FATF extended its standards to "virtual asset service providers" in 2019, and countries are implementing them in various ways. FATF has explicitly highlighted DeFi as a potential gap: in a 2023 update, it noted that conducting comprehensive DeFi risk assessments is challenging for most jurisdictions due to data and enforcement difficulties. FATF recommends that if a DeFi arrangement has "owners or operators," countries should hold those parties accountable as VASPs even if the system brands itself decentralized. Conversely, if truly no person exercises control, some activity might fall outside conventional regulation. The Travel Rule applies to crypto transfers above a threshold; as countries enforce it, DeFi protocols that interface with regulated entities will feel indirect pressure to facilitate the required information sharing or risk being geofenced.

Decentralization vs. Regulatory Requirements

The core paradox is that DeFi is designed to eliminate centralized control, yet laws are enforced by finding someone – a person or entity – to hold responsible. Traditional compliance frameworks assume a regulated entity (a bank, exchange, or broker) can perform KYC checks, maintain records, and be examined or sanctioned for failures. DeFi breaks this model by enabling peer-to-peer interactions governed by smart contracts. If no one controls a protocol, who is responsible for ensuring compliance? Regulators increasingly take the view that most DeFi projects are not as decentralized as they claim. Truly decentralized projects present a dilemma: regulators either have to regulate the users themselves or impose rules at the periphery (e.g., on interfaces or on/off-ramps). This conflict can put founders in an untenable position. If they fully decentralize (renounce control, launch the code, and step away), they might avoid being a regulated entity – but they also relinquish the ability to adapt the protocol to comply with future rules. If they retain control to enforce compliance (such as adding KYC gating), they defeat the core premise of open, permissionless access. Walking this tightrope is perhaps the fundamental challenge of DeFi compliance.

Enforcing AML/KYC in Permissionless Systems

AML and KYC rules require identifying customers, monitoring transactions, and reporting suspicious activity – tasks that assume a gatekeeper is present. In DeFi protocols, users connect wallets and transact with no onboarding process collecting names or IDs. Smart contracts do not discern good actors from bad; anyone with a wallet and assets can participate. This permissionless design is superb for accessibility and innovation, but it is a nightmare for AML enforcement.
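One partial answer, discussed further below, is to apply controls at the periphery rather than in the protocol itself – for example, screening the wallet addresses that connect to an official front-end against sanctions and risk data before enabling any interaction. The sketch below illustrates the idea in TypeScript; the endpoint, response fields, and blocking threshold are hypothetical placeholders rather than any real provider's API, so treat it as a minimal sketch under those assumptions, not a production implementation.

```typescript
// Minimal illustration of interface-level wallet screening for a dApp front-end.
// The risk API, its response shape, and the blocking threshold are hypothetical
// placeholders, not a real analytics provider's interface.

interface RiskResult {
  address: string;
  riskScore: number;   // assumed scale: 0 (clean) to 100 (high risk)
  sanctioned: boolean; // assumed flag for sanctions-list matches
}

const RISK_API = "https://risk-provider.example.com/v1/score"; // hypothetical endpoint
const BLOCK_THRESHOLD = 80;                                    // policy choice, not a legal standard

async function screenAddress(address: string): Promise<RiskResult> {
  const res = await fetch(`${RISK_API}?address=${encodeURIComponent(address)}`);
  if (!res.ok) {
    throw new Error(`Risk lookup failed with status ${res.status}`);
  }
  return (await res.json()) as RiskResult;
}

// Called before the front-end enables protocol interactions for a connected wallet.
export async function mayInteract(address: string): Promise<boolean> {
  const result = await screenAddress(address);
  if (result.sanctioned || result.riskScore >= BLOCK_THRESHOLD) {
    // Deny service through the official interface and log the decision off-chain
    // as part of an audit trail demonstrating good-faith controls.
    console.warn(`Blocked interaction for ${address}`);
    return false;
  }
  return true;
}
```

Such a check only constrains the official interface; nothing stops a user from calling the underlying smart contracts directly, which is precisely the enforcement gap regulators are grappling with.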
Authorities worry that criminals, sanctioned nations, or terrorists can freely move funds through DeFi protocols to obscure their origin. Enforcing KYC in this environment has proven difficult: a DeFi platform cannot easily compel every user globally to upload an ID, since most dApps have no customer account creation step. Some projects have tried to implement opt-in KYC or whitelisting – for instance, creating "permissioned pools" in which only verified addresses can participate. Others have introduced blocklists on front-ends, preventing known illicit addresses from interacting through the official website. Regulators increasingly rely on ex post enforcement and chain analytics to trace illicit funds and identify suspects. DeFi founders can mitigate risk by integrating blockchain monitoring tools that flag suspicious flows, cooperating with law enforcement when required, and avoiding any explicit facilitation of money laundering.

Smart Contracts, Anonymity, and Illicit Finance

DeFi's infrastructure offers both transparency (every transaction is public) and anonymity (user addresses are pseudonymous). This combination creates both opportunity and risk: it enables open financial innovation but also invites exploitation by criminals. Bad actors can chain-hop across multiple DeFi protocols, use privacy mixers, and rapidly launder stolen funds. Sanctions evasion becomes easier if no on-ramp checks identity. Fraud and market manipulation (rug pulls, pump-and-dumps, oracle exploits) are common in a space lacking centralized oversight. DeFi founders thus face mounting pressure to proactively mitigate illicit-finance risks. Without a compliance framework, entire protocols can be blacklisted or sanctioned, as the Tornado Cash saga illustrated. Self-regulation through code audits, risk monitoring, KYC gating, or blocklists may become the norm if DeFi wants to operate within the confines of global law. Projects that remain staunchly permissionless risk isolation from regulated finance, as banks and centralized exchanges refuse to interact with them on compliance grounds.

Securities Regulations in DeFi

If a token is deemed a security in a given jurisdiction (such as the U.S.), offering it or facilitating trades in it could require registration, disclosures, and licenses. DeFi blurs these lines because tokens serve multiple roles (governance, utility, investment instrument). Regulators have grown skeptical of superficial decentralization arguments, emphasizing that governance tokens conferring profits or yields are likely securities. Yield farming and liquidity mining may likewise be treated as investment contracts if participants expect profit from a team's efforts. Some DeFi projects try to exclude U.S. IP addresses, use disclaimers, or adopt progressive decentralization to reduce securities risk, yet enforcement actions indicate that disclaimers alone will not suffice. Projects must carefully structure tokenomics and marketing to avoid crossing into regulated territory.

Extraterritorial Impact of Major Regulations

A daunting aspect for global DeFi startups is that the laws of the U.S. (and, to a lesser extent, the EU) can reach far beyond their borders. U.S. regulators have not hesitated to prosecute foreign projects that serve American users, claiming jurisdiction whenever U.S. investors or markets are involved. Likewise, the EU's regulations can affect anyone offering services to users in the EU.
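Because exposure often turns on whether a project actually serves users in those markets, many teams respond at the interface layer by geo-fencing restricted regions. The sketch below shows what such a check might look like; the geolocation endpoint and the restricted-region list are hypothetical placeholders, and which regions (if any) to restrict is a legal decision, not a technical one.

```typescript
// Minimal illustration of a front-end region check before serving a dApp.
// The geolocation service and the restricted-region list are hypothetical
// placeholders; a real deployment would use a vetted IP-intelligence provider
// and a legally reviewed policy.

const GEO_API = "https://geo.example.com/v1/lookup";    // hypothetical endpoint
const RESTRICTED_REGIONS = new Set(["US", "KP", "IR"]); // example policy only

interface GeoLookup {
  countryCode: string; // ISO 3166-1 alpha-2 code, per the assumed service
}

export async function isRegionRestricted(clientIp: string): Promise<boolean> {
  const res = await fetch(`${GEO_API}?ip=${encodeURIComponent(clientIp)}`);
  if (!res.ok) {
    // Fail closed: if the lookup is unavailable, treat the region as restricted.
    return true;
  }
  const geo = (await res.json()) as GeoLookup;
  return RESTRICTED_REGIONS.has(geo.countryCode);
}
```

As with front-end address screening, IP-based geo-fencing is easily circumvented with VPNs, so regulators tend to treat it as one factor among many rather than a safe harbor.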
This extraterritorial reach means that even if a DeFi project is based in a crypto-friendly jurisdiction, it could still face scrutiny from major regulators. The risk of enforcement may lead projects to geo-block certain regions, incorporate offshore, or become genuinely decentralized so that no entity can be targeted. Nonetheless, as DeFi grows, regulators are forging cross-border coalitions to prevent regulatory arbitrage. Founders cannot ignore the biggest markets if they want mainstream adoption, so jurisdictional choices require careful planning that balances legal exposure with business strategy.

Guidance for DeFi Founders & Startups

Balance Decentralization with Compliance from Day One
- Decide early which aspects of your project will be decentralized versus controlled, and plan compliance measures accordingly.
- Incorporate a legal entity for the interface or development company if you anticipate regulatory scrutiny.
- Document a roadmap to progressive decentralization, but maintain compliance for any centralized functions until they are fully decentralized.

Implement "Smart" AML/KYC Measures That Align with the DeFi Ethos
- Use tiered access or feature gating: allow basic permissionless usage for small amounts but require verification for large-volume activities.
- Leverage decentralized identity solutions or zero-knowledge proofs to balance privacy with compliance.
- Integrate real-time risk monitoring tools (e.g., blockchain analytics) to flag suspicious addresses.
- Maintain off-chain documentation or audit trails to show good faith if investigated.

Navigate Securities Law Proactively
- Avoid marketing tokens with explicit profit-sharing or investment-like features; focus on utility and governance.
- Use compliant fundraising routes (e.g., Reg D or Reg S offerings) if you sell tokens to raise capital.
- Conduct ongoing legal reviews as token features evolve, documenting how you minimize securities risk.

Engage Regulators, Auditors, and Advisors Early
- Join regulatory sandboxes or innovation hubs where possible to get feedback on compliant DeFi models.
- Undergo third-party audits (technical security and legal compliance) and keep the reports to show regulators.
- Stay in dialogue with compliance experts who can update you on shifting regulations.

Smart Jurisdiction Choices and Legal Arbitrage
- Incorporate in crypto-friendly jurisdictions (e.g., Switzerland, Singapore, the UAE) that offer clear frameworks.
- Create separate entities for different functions (protocol foundation vs. operating company) to compartmentalize risk.
- Remain flexible: regulations can shift, so be ready to relocate or restructure if your chosen haven becomes hostile.

The regulatory environment is moving fast, and every crypto or DeFi project deserves a clear strategy for staying ahead. Whether you are determining if your platform qualifies for carve-outs or planning a compliant token sale, the right legal guidance can make all the difference. At Prokopiev Law, we blend practical crypto experience with deep legal insight to help you:
- Pinpoint where your project stands under applicable laws – and whether you can leverage any available DeFi exemptions
- Structure your operations, from entity setup to licensing routes, to safeguard your vision
- Create and review essential documentation for token offerings, stablecoin issuance, and more
- Stay informed and compliant as the EU's regulatory framework continues to evolve

Your innovation deserves the strongest legal foundation.
Reach out to Prokopiev Law today to learn how we can protect your ambitions and pave the way for long-term success.

The information provided is not legal, tax, investment, or accounting advice and should not be used as such. It is for discussion purposes only. Seek guidance from your own legal counsel and advisors on any matters. The views presented are those of the author and not any other individual or organization. Some parts of the text may be automatically generated. The author of this material makes no guarantees or warranties about the accuracy or completeness of the information.