- MiCA Comes Fully into Force: MiCA Register Published
The EU's Markets in Crypto-Assets Regulation (MiCA) came into full effect on 30 December 2024, following its initial entry into force on 29 June 2023. MiCA establishes the EU as the first major jurisdiction to regulate crypto-assets comprehensively. It creates a harmonized framework for crypto-asset issuance and services, covering various types of crypto-assets such as asset-referenced tokens (ARTs), electronic-money tokens (EMTs) and other crypto-assets (a blanket category covering utility tokens and other crypto-assets that don't qualify as ARTs or EMTs). The regulation also introduces a pan-European licensing and supervisory system for issuers, platforms, and crypto-asset service providers (CASPs). Notably, Titles III and IV, dealing with ARTs and EMTs, have applied since 30 June 2024.

As of 30 December 2024, the European Securities and Markets Authority (ESMA) is empowered under Articles 109 and 110 of the MiCA Regulation to maintain and publish a central register of crypto-asset white papers, authorized crypto-asset service providers (CASPs), and non-compliant entities. This register is sourced from the relevant National Competent Authorities (NCAs) and the European Banking Authority (EBA). To meet the legal deadline, ESMA has created an interim MiCA register, which will be updated and republished regularly. This interim register, accessible on the MiCA webpage and the Databases and Registers page, will be available as a collection of CSV files until mid-2026, when it will be formally integrated into ESMA's IT systems.
The interim register includes five CSV files, which cover:
- White papers for crypto-assets other than asset-referenced tokens (ARTs) and e-money tokens (EMTs) (Title II)
- Issuers of asset-referenced tokens (Title III)
- Issuers of e-money tokens (Title IV)
- Authorized crypto-asset service providers (Title V)
- Non-compliant entities providing crypto-asset services

Although four of the five files in the interim MiCA register are currently empty, the file related to issuers of EMTs already contains crucial information. As of 6 January 2025, this file lists companies that have obtained authorization to issue e-money tokens under MiCA. Companies such as Membrane Finance Oy, Circle Internet Financial Europe SAS, Société Générale – Forge, Banking Circle S.A., Quantoz Payments B.V., and Fiat Republic Netherlands B.V. are included, with their approval status, relevant white papers, authorization dates, and other key details.

ESMA will update the register on a monthly basis. While information will be reported by competent authorities on a rolling basis, it will not appear in the register immediately. Records in the interim MiCA register will reflect the information provided by the relevant authorities. If an authorization is withdrawn by a competent authority, the record will remain in the register, noting the date when the withdrawal took effect.

With the establishment of the interim MiCA register and its regular updates, the European Union continues to lead the way in creating a transparent and compliant digital finance environment. We will continue to monitor and report on further updates to the MiCA framework and its impact on the crypto industry.

The information provided is not legal, tax, investment, or accounting advice and should not be used as such. It is for discussion purposes only. Seek guidance from your own legal counsel and advisors on any matters.
The views presented are those of the author and not any other individual or organization. Some parts of the text may be automatically generated. The author of this material makes no guarantees or warranties about the accuracy or completeness of the information.
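For readers who want to work with the interim register programmatically, the CSV files can be inspected with standard tooling. The sketch below parses an EMT-issuer file and filters out withdrawn authorizations; the column names (`entity_name`, `authorisation_date`, `withdrawal_date`) and the second sample row are illustrative assumptions, not ESMA's actual schema, so check the real file's header row before relying on them.

```python
import csv
import io

# Sample rows standing in for ESMA's EMT-issuer CSV.
# Column names and "Example Issuer B.V." are hypothetical.
SAMPLE = """entity_name,home_member_state,authorisation_date,withdrawal_date
Membrane Finance Oy,FI,2024-12-30,
Example Issuer B.V.,NL,2024-12-30,2025-03-01
"""

def load_register(fileobj):
    """Read the register into a list of dicts, one per entity."""
    return list(csv.DictReader(fileobj))

def active_issuers(rows):
    """Entities whose authorization has not been withdrawn.

    Withdrawn records stay in the register with a withdrawal date,
    so we keep only rows with an empty withdrawal_date field."""
    return [r["entity_name"] for r in rows if not r["withdrawal_date"]]

rows = load_register(io.StringIO(SAMPLE))
print(active_issuers(rows))
```

Because withdrawn entities are retained rather than deleted, filtering on the withdrawal-date column (whatever it is actually named) is the natural way to distinguish active from revoked authorizations.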
- Cyprus Opens Applications for MiCA Licenses
The Cyprus Securities and Exchange Commission (CySEC) has initiated a preliminary assessment phase for Crypto-Asset Service Providers (CASPs) applying under the forthcoming EU Markets in Crypto-Assets Regulation (MiCA). Effective November 13, 2024, CASPs in Cyprus can submit applications to CySEC in preparation for MiCA's full implementation on December 30, 2024. This step by CySEC aligns with the MiCA framework, a regulation setting standardized rules for crypto-asset markets across the EU.

As part of this preliminary phase, CySEC has made application and notification forms accessible on its website for CASPs and other financial entities authorized under Article 60 of MiCA, including investment firms, UCITS managers, and alternative investment fund managers, to submit notifications or seek authorization under Article 63.

Important points for this preliminary phase:
- CySEC will receive applications from both entities currently regulated under Cyprus' national crypto-asset laws and new market entrants aiming for MiCA compliance.
- While accepting applications early, CySEC retains the discretion to prioritize applications, particularly those from entities already regulated under Cyprus' existing crypto-asset rules.
- Submissions during this preliminary phase will be officially considered upon completion of formalities, including fee payment and verification of information accuracy, by December 30, 2024.
- CySEC will make final decisions on granting or refusing authorization, as well as on the completeness of submitted notifications, after MiCA officially applies to CASPs on December 30, 2024.

As a reminder of the transitional measures and applicability dates, CySEC also points interested parties to a recent announcement regarding MiCA's phased applicability. MiCA became effective for issuers of Asset-Referenced Tokens (ARTs) and E-Money Tokens (EMTs) on June 30, 2024, and will extend to CASPs on December 30, 2024.
Under MiCA's transitional measures, CASPs registered under national rules before December 30, 2024, may continue to provide their services until July 1, 2026, or until CySEC grants or refuses authorization under Article 63, whichever is sooner. Additionally, as of October 17, 2024, CySEC ceased accepting any CASP applications for registration under national rules, in view of MiCA becoming applicable to CASPs on December 30, 2024.

In short, CySEC's early application phase for MiCA helps crypto service providers in Cyprus prepare for the new EU rules, making the transition easier and clearer for everyone involved.
- The EU AI Act: Overview and Key Legal Insights
The European Union's AI Act (Regulation (EU) 2024/1689) introduces a legal framework to regulate artificial intelligence systems across Europe. The AI Act establishes harmonized rules for developing, deploying, and using AI systems to ensure that these technologies are safe, transparent, and respectful of fundamental rights. The regulation takes a risk-based approach, classifying AI systems into four categories: unacceptable risk (prohibited systems), high-risk, limited-risk, and minimal-risk systems. Each classification comes with its own set of obligations, with the most stringent requirements applied to high-risk systems that can significantly affect people's lives, such as those used in critical infrastructure, law enforcement, or education.

Implementation Timeline

2024
- 12 July 2024: The AI Act is officially published in the Official Journal of the European Union.
- 1 August 2024: The AI Act enters into force, but its requirements won't apply immediately; they will gradually be phased in over time.
- 2 November 2024: Member States are required to identify and publicly list the authorities responsible for fundamental rights protection.

2025
- 2 February 2025: Prohibitions on certain AI systems begin to apply, as outlined in Chapters I and II of the Act. These prohibitions concern the use of AI systems that are deemed to pose unacceptable risks to fundamental rights and safety (prohibited AI practices are described below).
- 2 May 2025: By this date, the European Commission must ensure that codes of practice are ready. These codes are expected to provide guidance on complying with various parts of the AI Act, specifically ensuring that AI providers and other stakeholders adhere to the required standards.
- 2 August 2025: Several critical provisions take effect on this date:
  - Rules on notified bodies, General Purpose AI (GPAI) models, and governance (Chapter III, Section 4; Chapter V; Chapter VII) begin to apply.
  - Provisions around confidentiality (Article 78) and penalties (Articles 99 and 100) also start to apply.
  - Providers of GPAI models placed on the market before this date must comply with the AI Act by 2 August 2027.
  - Member States are required to submit their first reports on the financial and human resources of their national competent authorities by this date, and every two years thereafter.
  - Member States must designate national competent authorities responsible for the oversight of AI systems, such as notifying and market surveillance authorities, and make their contact details publicly available.
  - Member States are also expected to establish and implement rules on penalties and fines related to violations of the AI Act.
  - If the codes of practice have not been finalized or are found inadequate by the AI Office, the European Commission can establish common rules for the implementation of obligations for providers of GPAI models via implementing acts.
  - The European Commission will also begin its annual review of the prohibitions and may amend them if necessary.

2026
- 2 February 2026: The European Commission is required to provide guidelines that specify the practical implementation of Article 6, which sets out the classification rules for high-risk AI systems.
- 2 August 2026: The majority of the EU AI Act's provisions begin to apply to AI systems across the EU, with the exception of Article 6(1), which states that an AI system is classified as high-risk if it serves a critical safety function within a product that must undergo third-party assessment to verify its compliance with safety regulations. Operators of high-risk AI systems (other than those covered under Article 111(1)) must comply with the AI Act if their systems were placed on the market or put into service before this date. Member States are required to ensure that they have at least one AI regulatory sandbox operational at the national level by this date.
2027
- 2 August 2027: The obligations outlined in Article 6(1) of the AI Act start to apply. Providers of General Purpose AI (GPAI) models placed on the market or put into service before 2 August 2025 are required to fully comply with the obligations laid out in the EU AI Act by this date.
- AI systems that are components of large-scale IT systems listed in Annex X of the AI Act and were placed on the market or put into service before 2 August 2027 must also comply with the AI Act; they are given until 31 December 2030 for full compliance.

2028
- 2 August 2028: The European Commission is tasked with evaluating the functioning of the AI Office.
- 2 August 2028 (and every three years thereafter): The Commission will evaluate the impact and effectiveness of voluntary codes of conduct related to AI systems. These evaluations will help determine if further regulatory action is needed for voluntary codes to align with the goals of the Act.
- 2 August 2028 (and every four years thereafter): The Commission must evaluate and report on the necessity for amendments in several critical areas:
  - Annex III: the list of AI systems classified as high-risk.
  - Article 50: transparency requirements for certain AI systems.
  - Supervision and governance: the governance and supervision mechanisms are reviewed for potential adjustments or improvements.
- 2 August 2028 (and every four years thereafter): A report will be submitted to the European Parliament and the Council regarding the energy-efficient development of general-purpose AI models. This report aims to ensure that AI models are designed in a sustainable manner.
- 1 December 2028 (nine months prior to August 2029): By this date, the Commission must produce a report on the delegation of powers, as specified in Article 97 of the Act.
2029
- 1 August 2029: The Commission's powers to adopt delegated acts (as defined in various Articles such as 6, 7, 11, 43, 47, 51, 52, and 53) will expire unless extended by the European Parliament or the Council. The default is for these powers to be extended for recurring five-year periods unless opposed.
- 2 August 2029 (and every four years thereafter): The Commission is required to submit a report on the evaluation and review of the AI Act to the European Parliament and the Council.

Beyond 2029
- 2 August 2030: Providers and deployers of high-risk AI systems intended for use by public authorities must comply with the obligations and requirements of the Act by this date.
- 31 December 2030: AI systems that are components of large-scale IT systems (as listed in Annex X) and were placed on the market before 2 August 2027 must be brought into compliance with the Act by this date.
- 2 August 2031: The Commission will assess the enforcement of the AI Act and submit a report to the European Parliament, the Council, and the European Economic and Social Committee.

Purpose of the Regulation
- Harmonized rules for AI systems: The regulation establishes uniform rules for the marketing, operation, and use of AI systems across the EU.
- Prohibited AI practices: The Act outright bans certain AI practices deemed unacceptable, such as manipulative or harmful AI.
- High-risk AI systems: Special provisions apply to AI systems classified as "high-risk," imposing stricter requirements on their development and deployment to mitigate potential harm.
- Transparency requirements: The regulation mandates clear transparency rules for AI systems, particularly those that could significantly affect individuals, such as AI that interacts with humans or collects sensitive data.
- General-purpose AI models: The Act also covers general-purpose AI models, ensuring their safe placement on the market.
- Market monitoring and enforcement: The regulation sets out how AI systems will be monitored and regulated.
- Innovation support: The Act specifically includes measures to foster innovation, with a focus on small and medium-sized enterprises (SMEs) and start-ups.

Entities Covered
- Providers of AI systems: any individual or company that sells or puts AI systems into service within the EU, regardless of whether it is based in the EU or outside it (third countries).
- Deployers of AI systems: entities using AI systems that are based in or located within the EU.
- Third-country providers and deployers: the regulation applies even if AI systems are deployed or provided from outside the EU, if their output is used within the EU.
- Importers and distributors: entities involved in importing or distributing AI systems within the EU.
- Manufacturers: companies that integrate AI systems into their products and market them under their own name or trademark.
- Authorized representatives: if a provider is not established in the Union, the authorized representatives who act on their behalf within the EU must comply with the regulation.
- Affected persons: the regulation includes protections for individuals in the EU affected by AI systems.

An AI system is any machine-based system that operates with varying levels of autonomy. It is capable of receiving inputs, processing data, and producing outputs, which could be predictions, decisions, recommendations, or content. The key feature of AI systems is their ability to influence physical or virtual environments and adapt post-deployment.

A general-purpose AI model is an AI model that is capable of performing a wide range of tasks and is typically trained on a large amount of data. These models exhibit a high degree of adaptability and can be integrated into various downstream applications or systems.
A general-purpose AI system is based on a general-purpose AI model but serves multiple purposes. It can be used directly by end-users or integrated into other AI systems for diverse applications. This term captures AI systems with broader utility beyond a single, specialized function.

A provider is any individual or entity (such as a public authority, agency, or legal body) that develops, or has developed, an AI system or general-purpose AI model and places it on the market under their own name or trademark. This applies whether the AI system is offered for payment or free of charge.

A deployer is any person or organization that uses an AI system under their control. Deployers are different from providers, as they are responsible for using the AI system rather than creating or placing it on the market.

Exemptions

AI systems related to national security, defense, or military purposes are exempt from the regulation, regardless of the type of entity (public or private) developing or using the AI system. The exemption also covers public authorities or international organizations using AI systems in law enforcement or judicial cooperation with the EU or its member states; however, such entities must ensure they offer adequate safeguards for the protection of individual rights and freedoms.

The regulation does not affect the liability provisions for intermediary service providers as outlined in Chapter II of Regulation (EU) 2022/2065 (the Digital Services Act). These providers typically act as platforms or hosts for third-party content or services.

AI systems developed and used solely for scientific research and development are not subject to the regulation. However, this exclusion applies only if these systems are not marketed or used for commercial purposes within the Union.
Real-world testing, though, is not covered by the research exclusion, meaning such systems would still need to comply with relevant rules if tested in live environments.

AI systems or models in the research, testing, or development stages are not covered by the regulation unless they are placed on the market or put into service. However, testing AI in real-world conditions would require compliance with the Act.

Natural persons using AI systems purely for personal and non-professional purposes (e.g., using AI tools at home) are exempt from the regulation.

AI systems released under free and open-source licenses are generally excluded unless they are placed on the market or deemed high-risk AI systems (or systems falling under Articles 5 or 50, which cover prohibited AI practices and transparency requirements).

Prohibited AI Practices

Subliminal or manipulative techniques: AI systems cannot deploy subliminal techniques (i.e., beyond an individual's conscious awareness) to distort behavior in a way that significantly impairs a person's ability to make informed decisions. This could involve psychological manipulation or deception, leading individuals or groups to make decisions they wouldn't have otherwise made, causing harm. The prohibition covers both the intent and the effect of such techniques, especially if they cause significant physical, emotional, or financial harm.

Exploitation of vulnerabilities: AI systems are banned if they exploit the vulnerabilities of certain people or groups based on factors like age, disability, or social and economic circumstances. For instance, using AI to take advantage of an elderly person's possible cognitive decline to push them into harmful decisions would fall under this prohibition. This applies to any situation where the AI distorts behavior and leads to significant harm.
Social scoring: AI systems used to evaluate or classify individuals based on their social behavior or personal characteristics over time, similar to a "social credit" system, are prohibited. Such systems cannot lead to unfair or detrimental treatment of people based on their social behavior, especially if the treatment is unrelated to the context in which the data was collected or if it is disproportionate to the actual behavior.
- Unrelated detrimental treatment: someone is treated unfairly based on social behavior from one context being applied to another, unrelated context (e.g., being denied a service based on past behavior in a different setting).
- Disproportionate treatment: the treatment is excessively negative or unfair given the nature of the behavior being evaluated.

Predicting criminal behavior: AI systems cannot be used solely to predict the risk of someone committing a crime based on profiling or personality traits. However, AI can be used to support human decision-making in criminal investigations as long as it is based on objective facts tied directly to the crime rather than assumptions drawn from personality traits.

Facial recognition database creation: AI systems that build or expand facial recognition databases by scraping images from the Internet or CCTV footage without consent are prohibited. This also applies to the collection of facial images without clear, targeted authorization, especially for law enforcement or surveillance purposes.

Emotion inference in sensitive contexts: AI systems that infer emotions in workplaces or educational settings are generally prohibited unless they serve a medical or safety purpose.

Biometric categorization for sensitive attributes: AI systems are prohibited from categorizing people based on biometric data (like facial features) to infer sensitive personal attributes such as race, political beliefs, or sexual orientation.
However, this prohibition doesn't apply to law enforcement uses where biometric data has been lawfully acquired, such as for filtering or categorizing within law enforcement datasets.

Real-time biometric identification in public spaces: The use of AI for real-time remote biometric identification (such as facial recognition) in public spaces by law enforcement is generally prohibited, except in highly specific cases like:
(i) searching for specific victims of crimes like abduction or trafficking;
(ii) preventing imminent, severe threats like a terrorist attack;
(iii) identifying or locating individuals involved in serious criminal offenses punishable by at least four years in prison.
The deployment of real-time biometric identification by law enforcement in public spaces must be strictly necessary and proportionate to the harm that would occur without its use. It also requires an assessment of the consequences for individual rights and freedoms.

High-Risk AI Systems Criteria

An AI system is classified as high-risk if both of the following conditions are met:
1. AI system as a safety component or product: the AI system is either a critical safety component of a product, or a product itself, subject to Union harmonization legislation (laws that govern product safety in various sectors, such as medical devices or machinery).
2. Third-party conformity assessment: the product or AI system must undergo a third-party assessment to ensure it complies with safety standards. This process is mandatory before the product or system can be sold or used, as required by the harmonization legislation.

The AI Act also lists specific areas where AI systems are automatically considered high-risk due to their direct impact on people's lives and fundamental rights:

Biometrics:
- AI systems used for remote biometric identification (e.g., facial recognition).
- AI categorizing individuals based on sensitive attributes like race, gender, or political beliefs.
- AI for emotion recognition.
Critical infrastructure:
- AI systems involved in managing essential services (e.g., electricity, water, road traffic).

Education and vocational training:
- AI systems used in student admissions, evaluating learning outcomes, or monitoring student behavior.

Employment:
- AI systems for recruitment, performance evaluation, promotion, or monitoring employee behavior.

Access to essential services:
- AI determining eligibility for public services (e.g., healthcare, social benefits).
- AI assessing creditworthiness or evaluating life and health insurance risks.
- AI prioritizing emergency service responses or triaging patients in healthcare.

Law enforcement:
- AI systems used in criminal investigations for risk assessments (e.g., predicting reoffending).
- AI systems assessing evidence reliability or criminal profiling.

Migration and border control:
- AI assessing security or health risks for individuals entering a country.
- AI assisting in asylum or visa applications and evaluating eligibility for immigration services.

Judiciary and elections:
- AI used by courts to assist in legal decision-making or evidence interpretation.
- AI systems influencing election outcomes or voting behavior.

Exceptions to High-Risk Classification

Certain AI systems listed above may not be classified as high-risk if they do not pose a significant risk of harm to health, safety, or fundamental rights. These exceptions apply if the AI system:
- performs narrow procedural tasks (specific, limited functions);
- improves the result of a previously completed human activity (assists but does not replace human decisions);
- detects decision-making patterns but does not influence or replace human judgments without proper review; or
- performs preparatory tasks for assessments but does not directly influence decision-making.

Despite these exceptions, any AI system used for profiling individuals (assessing characteristics like personality traits or behavior) is always classified as high-risk.
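As a rough illustration of the logic above (not a compliance tool), the listed-area test, its four exceptions, and the profiling override can be sketched as a small decision function. The boolean flags are invented for the example; real classification requires case-by-case legal analysis:

```python
from dataclasses import dataclass

# Hypothetical flags summarizing the listed-area test described above.
@dataclass
class SystemProfile:
    in_listed_area: bool             # falls into one of the listed high-risk areas
    profiles_individuals: bool       # profiling overrides every exception
    narrow_procedural_task: bool     # exception: narrow procedural tasks
    improves_prior_human_work: bool  # exception: assists a completed human activity
    detects_patterns_only: bool      # exception: pattern detection with human review
    preparatory_task_only: bool      # exception: preparatory tasks only

def is_high_risk(p: SystemProfile) -> bool:
    """Apply the listed-area test, its exceptions, and the profiling override."""
    if not p.in_listed_area:
        return False
    if p.profiles_individuals:
        # Profiling systems are always high-risk, exceptions notwithstanding.
        return True
    exception_applies = (
        p.narrow_procedural_task
        or p.improves_prior_human_work
        or p.detects_patterns_only
        or p.preparatory_task_only
    )
    return not exception_applies
```

For instance, a listed-area system that merely performs a narrow procedural task would fall outside the high-risk class under this sketch, unless it also profiles individuals.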
If a provider believes an AI system listed in Annex III of the AI Act does not meet the high-risk criteria, they must document their assessment and be prepared to present this justification to the relevant authorities.

Compliance Obligations for High-Risk AI Systems

Compliance should consider:
- Intended purpose: how the AI system is intended to be used.
- State of the art: current technological standards and best practices in AI and related technologies.
- Risk management system: a risk management system must be in place, as described below in detail.

The risk management system is a continuous, iterative process that spans the entire lifecycle of the AI system. It involves regular reviews and updates to adapt to evolving risks and technological changes, and it ensures that all potential risks associated with the AI system are identified, assessed, and mitigated throughout its lifecycle.

Steps involved:
1. Identification and analysis of risks: covering known risks (already identified and understood) and reasonably foreseeable risks (those that can be anticipated based on current knowledge and usage scenarios), with a focus on health, safety, and fundamental rights.
2. Estimation and evaluation of risks: assess the likelihood and impact of identified risks, considering both intended use and conditions of reasonably foreseeable misuse.
3. Evaluation of other risks: analyze additional risks using data from post-market monitoring systems (Article 72 of the regulation), ensuring comprehensive risk assessment beyond the initial identification.
4. Adoption of risk management measures: implement targeted measures to address identified risks and ensure that these measures are appropriate and effective.

Risk management measures should account for how different requirements and measures interact with each other, and should minimize risks effectively, achieving a balance that maximizes risk reduction while maintaining functionality.
After implementing risk management measures, the remaining (residual) risks must be judged acceptable based on established criteria and standards, and the cumulative risk from all residual hazards must remain within acceptable limits.

High-risk AI systems must undergo testing to identify appropriate risk management measures, ensure consistent performance for their intended purpose, and confirm compliance with the specified requirements. Testing procedures may include real-world conditions, providing a practical assessment of AI systems in environments that mimic actual usage scenarios and allowing for the identification of unforeseen risks that may not surface in controlled testing environments. Testing should be conducted throughout the development process and before the system is placed on the market or deployed.

When implementing the risk management system, providers must consider the potential adverse effects on minors (individuals under 18) and on other vulnerable groups that may be more susceptible to harm due to specific characteristics or circumstances.

Data Governance Requirements for High-Risk AI Systems

High-risk AI systems that involve training AI models must be developed using training, validation, and testing data sets that meet specified quality standards. These data sets must follow strict guidelines to ensure the AI system functions accurately and safely. Data governance refers to the set of policies and procedures governing how data is collected, processed, and managed. High-risk AI systems must have data management practices tailored to their specific purposes, including:
(a) Design choices: the AI system's design choices must reflect best practices in data handling and governance.
(b) Data collection and origin: clear documentation of how data is collected, its origin, and the original purpose of collection (especially for personal data).
(c) Data preparation: data processing steps like annotation, labelling, cleaning, and aggregation must be documented.
(d) Assumptions: the assumptions made about the data, especially regarding what it represents or measures, must be articulated.
(e) Data suitability: the quantity, availability, and relevance of the data must be assessed to ensure it is appropriate for the AI system's purpose.
(f) Bias assessment: potential biases in the data that could impact safety or lead to discrimination must be thoroughly examined.
(g) Bias mitigation: measures must be taken to detect, prevent, and correct biases.
(h) Data gaps: any shortcomings or gaps in the data that could hinder regulatory compliance must be identified and addressed.

Data Quality Standards for High-Risk AI Systems

The data sets used for training, validation, and testing must meet several key criteria:
- Relevance: the data must be applicable to the AI system's intended purpose.
- Representativeness: the data should be representative of the population or environment where the system will be deployed.
- Accuracy: data must be as free of errors as possible and complete.
- Statistical properties: data must have suitable statistical characteristics, especially for systems intended to affect particular groups of people.

Data sets must reflect the geographical, contextual, behavioral, or functional settings in which the AI system is intended to operate. This ensures that the AI system performs as expected in its real-world context.

Handling Special Categories of Personal Data by High-Risk AI Systems

Where it is necessary to detect and correct biases, special categories of personal data (e.g., racial or ethnic origin, political opinions, health data) may be processed, but only under strict conditions, such as:
- Necessity: no other data (e.g., anonymized or synthetic data) can achieve the same result.
- Security measures: the data must be protected with state-of-the-art security measures (e.g., pseudonymization, strict access controls).
- Deletion: special-category data must be deleted once biases are corrected or once the data retention period ends.
- Documented justification: a clear record must explain why processing special-category data was necessary, including reasons why alternative data couldn't be used.

If an AI system doesn't rely on training data (e.g., rule-based systems), the governance and management practices described above apply only to testing data sets.

Technical Documentation for High-Risk AI Systems

Before placing a high-risk AI system on the market, providers must create technical documentation. This documentation must be continuously updated to demonstrate the system's compliance with the regulatory requirements. Small and medium-sized enterprises (SMEs) and start-ups may provide simplified versions of the required technical documentation.

Record-Keeping (Logging)

High-risk AI systems must allow for automatic event logging over the entire lifetime of the system. Logs provide an audit trail, helping to monitor the system's performance and trace any malfunctions or issues. The logging capabilities should provide sufficient traceability to help in:
- Risk identification: detecting situations where the system may pose a risk or where substantial modifications have been made.
- Post-market monitoring: supporting ongoing monitoring of the system once it is deployed.
- Operational monitoring: ensuring the system operates as intended during its use.

For AI systems related to biometric identification (e.g., facial recognition), additional logging requirements apply:
- Usage period: record the start and end times of each use of the system.
- Reference database: document the database against which input data is checked.
- Input data: log the data that resulted in a match during the system's use.
Human Verification : Record the identity of any persons involved in verifying the system’s output, to ensure accountability and transparency. Transparency and Provision of Information to Deployers High-risk AI systems must be designed to be transparent enough for deployers (those using the AI system) to understand and interpret the system’s outputs. The degree of transparency should align with the requirements and obligations of both the AI provider (the one who developed the system) and the deployer. AI systems must come with clear, concise, and accurate instructions for use . These should be provided in a digital or suitable format and must be easy for deployers to understand. The instructions must cover the following key areas: (a) Provider’s Information : The name and contact details of the provider and, if applicable, their authorized representative. (b) Performance Characteristics : Purpose : The intended use of the AI system. Accuracy, Robustness, and Cybersecurity : The levels of accuracy and cybersecurity tested and validated, as well as any circumstances that might affect performance. Risks : Any known risks to health, safety, or fundamental rights when the system is used as intended or under foreseeable misuse. Output Explanation : Technical capabilities that allow deployers to understand and explain the system's outputs. Performance with Specific Groups : How well the AI system performs for specific groups or individuals it is intended to serve. Input Data : Information about the data used in training, validation, and testing, particularly if it is relevant to the system's performance. Output Interpretation : Guidance on interpreting the system's output appropriately. (c) Predetermined Changes : Any changes in the system’s performance or design anticipated by the provider. (d) Human Oversight : Measures that help deployers interpret and monitor the system’s outputs effectively.
(e) Resources and Maintenance : Information on the necessary hardware, computational resources, and maintenance schedules, including software updates. (f) Log Collection : Instructions on how to collect, store, and interpret logs. Human Oversight of High-Risk AI Systems High-risk AI systems must be designed with tools that allow natural persons (human operators) to oversee and intervene in the system's operation effectively. The goal of human oversight is to minimize risks related to health, safety, and fundamental rights that might arise during the AI system's use, including risks that could occur despite the application of other regulatory safeguards. Human oversight measures should match the risk level , the system’s autonomy , and its context of use . Oversight can be achieved through: (a) Built-in Measures : Measures integrated directly into the AI system by the provider to facilitate human oversight. (b) Deployable Measures : Measures that the provider specifies for the deployer to implement. The AI system must be designed so that human operators can: (a) Understand System Capabilities and Limitations : Operators should have a clear understanding of the system’s functions and be able to monitor its operation for anomalies or malfunctions. (b) Be Aware of Automation Bias : Operators should avoid over-relying on AI outputs, particularly for high-stakes decisions (e.g., medical diagnoses or legal judgments). (c) Interpret Outputs Correctly : The system should provide interpretation tools to help operators understand and assess the AI's output. (d) Override the System : Operators must be able to disregard or reverse the AI’s output if necessary, ensuring that the system doesn’t make irreversible decisions without human intervention. (e) Interrupt the System : Operators must have access to a ‘stop’ function that allows them to safely halt the system’s operation if required. 
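For readers who prefer code, the override and stop capabilities described above can be sketched as a thin wrapper around a model's decision function. This is purely illustrative: the class and method names (`OverseenSystem`, `override`, `stop`) are the author's own shorthand, not terms defined by the Act.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class OverseenDecision:
    """One AI output awaiting human review (illustrative, not from the Act)."""
    ai_output: str
    overridden_by: Optional[str] = None  # operator id, if overridden
    final_output: Optional[str] = None

    def accept(self) -> str:
        # Operator endorses the AI's proposal.
        self.final_output = self.ai_output
        return self.final_output

    def override(self, operator_id: str, corrected: str) -> str:
        # Operators must be able to disregard or reverse the AI's output.
        self.overridden_by = operator_id
        self.final_output = corrected
        return self.final_output

class OverseenSystem:
    """Decision pipeline with the 'stop' capability described above."""

    def __init__(self, decide: Callable[[str], str]):
        self._decide = decide
        self._stopped = False

    def stop(self) -> None:
        # The required 'stop' function: safely halt further operation.
        self._stopped = True

    def propose(self, case: str) -> OverseenDecision:
        if self._stopped:
            raise RuntimeError("system halted by operator")
        return OverseenDecision(ai_output=self._decide(case))

# Usage: the operator reviews a doubtful output, overrides it, then halts.
system = OverseenSystem(decide=lambda case: "reject")
decision = system.propose("loan-application-42")
decision.override("operator-7", "refer-to-human-review")
system.stop()
```

The point of the shape is that no output becomes final without passing through a human-controlled step, and that halting is always available to the operator.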
Special Oversight for Biometric Systems For biometric AI systems, such as facial recognition, the Regulation requires additional verification steps: Before any action or decision is made based on the AI system’s identification, at least two qualified humans must verify the identification separately. Exceptions to this requirement apply in cases of law enforcement, migration, or border control where such a procedure is deemed disproportionate under national or EU law. Accuracy, Robustness, and Cybersecurity of High-Risk AI Systems High-risk AI systems must be designed to achieve and maintain high levels of accuracy , robustness , and cybersecurity throughout their lifecycle. The systems should be able to perform reliably under varying conditions and resist errors or faults. The accuracy and robustness of the system should be measurable. The system’s level of accuracy, along with relevant accuracy metrics, must be declared in the instructions for use . AI systems must be resilient to errors , faults , or inconsistencies in their environment or interactions with humans or other systems. This can be achieved through measures like technical redundancy , where the system includes backup plans or failsafe mechanisms to ensure continuous operation. For AI systems that continue to learn after deployment, safeguards must be in place to prevent feedback loops, where biased outputs could affect future inputs. Proper mitigation measures are required to avoid such biases. High-risk AI systems must be resilient against attacks or attempts by unauthorized third parties to exploit vulnerabilities, alter outputs, or manipulate system performance. These attacks might include: Data Poisoning : Attempts to corrupt the training data to alter the AI’s behavior. Model Poisoning : Manipulating pre-trained models used by the AI. Adversarial Examples : Feeding the system deceptive input designed to make it fail. 
Confidentiality Attacks : Attempts to exploit weaknesses in the system’s data handling to access sensitive information. Providers must implement measures to prevent, detect, and respond to these security risks, ensuring that the system remains secure and performs reliably throughout its lifecycle. Obligations of Providers of High-Risk AI Systems Compliance : Providers must ensure that their high-risk AI systems meet all regulatory requirements for safety, reliability, and ethical use. Provider Information : Providers must clearly display their name, trade name or trademark, and contact details either on the AI system, its packaging, or in accompanying documentation. Quality Management System (QMS) : Providers must put in place a quality management system, which should include: A clear strategy for following regulations and handling assessments to prove compliance. Methods for system design, control, and verification. Procedures to test and validate the AI system throughout its development. Clear technical specifications and standards to ensure the system functions as required. Comprehensive data management processes for collecting, storing, and analyzing data used in the AI system. A risk management system to identify and mitigate potential risks. Systems to monitor the AI system after it’s released to the market, including reporting any serious incidents. Systems for managing communication with authorities, clients, and other stakeholders. Efficient documentation retention and resource management, including strategies to ensure continuity in the supply chain. A responsibility framework, clearly defining who is accountable within the organization. Documentation Keeping : Providers must retain essential documentation for 10 years after the AI system is made available. This includes technical details, quality management records, and any changes approved by regulatory bodies. If the provider goes out of business, arrangements must be made to keep this documentation accessible to authorities.
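The resilience duties around adversarial examples can be made concrete with a toy perturbation-stability check. This is a crude smoke test under stated assumptions (a stand-in `predict` function, uniform random noise); real adversarial evaluations use dedicated attack tooling, not random perturbations.

```python
import random

def predict(features):
    """Toy stand-in for a classifier: thresholds the feature sum."""
    return "high" if sum(features) > 1.0 else "low"

def perturbation_stability(predict_fn, features, epsilon=0.01, trials=200, seed=0):
    """Fraction of small random perturbations that leave the output unchanged.

    A rough proxy for resilience to adversarial-example-style inputs:
    a low score flags inputs whose classification flips under tiny noise.
    """
    rng = random.Random(seed)
    baseline = predict_fn(features)
    stable = sum(
        1
        for _ in range(trials)
        if predict_fn([x + rng.uniform(-epsilon, epsilon) for x in features]) == baseline
    )
    return stable / trials

# An input far from the decision boundary should be fully stable.
rate = perturbation_stability(predict, [0.4, 0.3])
```

A check like this belongs in the testing and validation procedures of the provider's quality management system, alongside proper defenses against data poisoning and model poisoning.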
Log Keeping : Providers must keep automatically generated logs from their AI systems for at least six months , or longer if required. Conformity Assessment : Before placing the AI system on the market or putting it into use, it must undergo a conformity assessment to ensure it meets the necessary legal and regulatory standards. EU Declaration of Conformity : Providers must create a declaration of conformity , confirming that the AI system complies with the relevant EU rules and standards. CE Marking : Providers must affix the CE marking on the AI system or its packaging. The CE mark shows that the system conforms to EU safety and performance regulations. Registration of AI System : Providers must register the AI system in the EU database before offering it in the market. Corrective Actions : If the AI system is found to be non-compliant or poses any risks, providers must take corrective actions immediately. This could involve fixing, recalling, or disabling the system. They must also notify distributors, clients, and relevant parties about the issue and the actions taken. Cooperation with Authorities : Providers must fully cooperate with national authorities by providing any necessary documentation or access to logs to prove the AI system’s compliance. Accessibility Compliance : High-risk AI systems must be designed to ensure accessibility , meaning they must be usable by people with disabilities in accordance with relevant EU directives. Incident Reporting and Post-Market Monitoring : Providers must monitor the AI system after it’s released to the market. If serious incidents occur, they must report these immediately and investigate any risks. Authorized Representatives of Providers of High-Risk AI Systems Providers of high-risk AI systems that are established in third countries (i.e., outside the EU) must appoint an authorized representative within the EU before making their high-risk AI systems available in the Union market. 
This appointment must be formalized through a written mandate , a legal document that defines the role and tasks of the representative. The representative acts on behalf of the provider and must fulfill the following key responsibilities: (a) Verify Conformity Documentation and Procedures The representative must ensure that: The EU declaration of conformity has been drawn up, which certifies that the high-risk AI system meets the requirements set out in relevant EU regulations. The technical documentation has been prepared, which provides detailed information about the design, development, and functionality of the AI system. The appropriate conformity assessment procedures have been carried out, ensuring the AI system complies with the legal standards before it is made available on the EU market. (b) Retain Documents for 10 Years The representative is responsible for keeping important documents, including: Contact details of the provider. A copy of the EU declaration of conformity. Technical documentation. If applicable, the certificate issued by a notified body (an organization designated to assess conformity). (c) Provide Information to Authorities Upon a reasoned request by a competent authority, the representative must provide all necessary information and documentation to demonstrate that the high-risk AI system is compliant with the relevant requirements. (d) Cooperate with Authorities to Reduce Risks The representative is required to cooperate with authorities if they take any actions to reduce or mitigate risks posed by the high-risk AI system. (e) Ensure Compliance with Registration Obligations The representative must ensure that the high-risk AI system is registered according to the regulations in Article 49(1), which require registration in the EU database for high-risk AI systems. If the provider carries out the registration, the representative must ensure that the information is accurate. 
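Retention duties recur throughout this part of the Act: logs for at least six months, documentation for ten years. A minimal sketch of enforcing such a floor when purging logs follows; the 183-day approximation of six months and all record shapes are the author's illustrative choices, not values from the Regulation.

```python
from datetime import datetime, timedelta, timezone

MIN_LOG_RETENTION = timedelta(days=183)  # "at least six months", approximated

def purge_expired(logs, retention, now):
    """Drop (timestamp, event) records older than the retention period,
    refusing any policy that falls below the six-month floor for logs."""
    if retention < MIN_LOG_RETENTION:
        raise ValueError("retention below the six-month minimum")
    cutoff = now - retention
    return [(ts, ev) for ts, ev in logs if ts >= cutoff]

now = datetime(2025, 1, 6, tzinfo=timezone.utc)
logs = [
    (now - timedelta(days=400), "inference"),  # outside a one-year policy
    (now - timedelta(days=30), "inference"),   # retained
]
kept = purge_expired(logs, timedelta(days=365), now)
```

Note that national or Union law can require longer periods, so the floor here is a minimum, never a target.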
The authorized representative has the right to terminate the mandate if they believe that the provider is acting in violation of the regulations. If this happens, the representative must immediately notify: The market surveillance authority (the body responsible for enforcing compliance with regulations). The notified body , if applicable, which would be involved if a certificate of conformity was issued. Obligations of Importers of High-Risk AI Systems Conformity Before Market Placement : Importers must verify that a high-risk AI system meets the requirements of the EU AI Act before placing it on the market. This includes ensuring the provider has followed the appropriate conformity assessment procedure outlined in Article 43. This procedure involves checking that the system complies with standards set in the regulation. If the provider applies harmonized standards (Article 40) or common specifications (Article 41), the system may undergo internal checks or a third-party evaluation (via a notified body). Technical Documentation : The importer needs to confirm that the provider has prepared the necessary technical documentation as required by Article 11 and Annex IV. Marking and Declaration : Importers must ensure the AI system bears the CE marking, which indicates it complies with EU safety standards, and is accompanied by the EU declaration of conformity as required by Article 47. Authorized Representative : The provider must appoint an authorized representative within the EU to handle regulatory matters if they are based outside the EU. Non-Conformity and Falsified Documentation : If an importer suspects a system is non-compliant or that its documentation is falsified, they must prevent its market entry until it is corrected. In cases where the system poses a risk (as outlined in Article 79), the importer must notify the provider, representative, and market surveillance authorities. 
Importer Identification : The importer must ensure that their name and contact details are visible on the AI system, its packaging, or accompanying documents. This is crucial for traceability. Storage and Transport : Importers are responsible for ensuring that the system's storage or transport conditions don’t compromise its compliance with the regulation. Retention of Documentation : Importers must retain a copy of the EU declaration of conformity, technical documentation, and certificate from the notified body (if applicable) for at least 10 years after the product enters the market. Cooperation with Authorities : Upon request from regulatory authorities, importers must provide all relevant information and documentation demonstrating compliance, including technical details. Obligations of Distributors of High-Risk AI Systems Before a distributor makes a high-risk AI system available on the market, they must ensure: The system has the CE marking —a sign that it complies with EU safety and legal standards. The system comes with a copy of the EU declaration of conformity (Article 47), which confirms that it meets the requirements set by EU regulations. The system has appropriate instructions for use . Both the provider and importer have complied with their responsibilities under the Regulation. If a distributor considers, based on the information available to them, that a high-risk AI system does not comply with the core technical requirements of the Regulation, they are prohibited from making it available until the system meets the necessary standards. If the system poses risks to health, safety, or fundamental rights, the distributor must notify the provider or importer. Distributors must ensure that, while the AI system is under their control (e.g., during storage or transport ), its compliance with the safety and legal requirements is not compromised.
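The distributor's pre-market checklist above reduces to a handful of yes/no gates, which can be sketched as follows. The `SystemDossier` fields are hypothetical names invented for this illustration; they are not terminology from the Act.

```python
from dataclasses import dataclass

@dataclass
class SystemDossier:
    """What accompanies a high-risk AI system (hypothetical field names)."""
    has_ce_marking: bool
    has_eu_declaration_of_conformity: bool
    has_instructions_for_use: bool

def distributor_checks(dossier: SystemDossier) -> list:
    """Return the pre-market checks from the list above that fail.

    A distributor would hold the system back until this comes back empty.
    """
    failures = []
    if not dossier.has_ce_marking:
        failures.append("missing CE marking")
    if not dossier.has_eu_declaration_of_conformity:
        failures.append("missing EU declaration of conformity (Article 47)")
    if not dossier.has_instructions_for_use:
        failures.append("missing instructions for use")
    return failures

issues = distributor_checks(SystemDossier(True, False, True))
```

In practice the fourth condition, that provider and importer have met their own obligations, cannot be reduced to a boolean and calls for documentary verification rather than a flag.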
If the distributor finds, based on information available to them, that a high-risk AI system already placed on the market does not conform to the requirements, they must take the necessary corrective actions. These actions may include: Bringing the system into compliance, Withdrawing it from the market, Recalling it from consumers. If the system poses a risk, the distributor must immediately inform the provider or importer and the relevant authorities, detailing the issue and any corrective measures taken. Upon request from a competent authority , distributors must supply all relevant information and documentation proving their actions regarding the conformity of the high-risk AI system. Distributors must cooperate with relevant authorities in any action they take concerning high-risk AI systems they’ve made available on the market. Assumption of Provider Responsibilities Any distributor, importer, deployer, or other third party can be classified as a "provider" of a high-risk AI system, and thus subject to the obligations of a provider under Article 16 , in the following situations: (a) If they put their name or trademark on an already marketed high-risk AI system, they take on the role of the provider. Even if a contract assigns responsibilities differently, for regulatory purposes, they become the provider. (b) If they make a substantial modification to a high-risk AI system already on the market, such that it remains a high-risk AI system as defined by Article 6 , they are considered the new provider. (c) If they modify the intended purpose of an AI system (including general-purpose systems) that was not classified as high-risk, but as a result of the modification, the system becomes classified as high-risk under Article 6 , they assume the provider role. When one of the above circumstances occurs, the original provider who first placed the AI system on the market is no longer considered the provider for that specific system. 
The original provider must cooperate with the new provider by: Supplying the necessary technical information and access to ensure the new provider can meet their obligations under the regulation, especially for compliance assessments. However, if the original provider has explicitly stated that their AI system should not be modified to become high-risk, they are not obligated to provide this documentation. If a high-risk AI system forms part of a product covered by Union harmonization laws (as listed in Annex I, Section A ), the product manufacturer is considered the provider under these circumstances: (a) The system is marketed together with the product under the manufacturer’s name or trademark. (b) The system is put into service under the manufacturer’s name or trademark after the product is already on the market. This means that the product manufacturer must assume all obligations as the provider of the high-risk AI system. The provider of a high-risk AI system and any third party supplying AI tools, services, components, or processes used in the system must formalize an agreement that: Specifies the necessary information, technical capabilities, and assistance required to comply with the regulation. This rule does not apply to third parties who provide tools or services under a free and open-source license , unless the AI model itself is a general-purpose AI model. Obligations of Deployers of High-Risk AI Systems Compliance with Instructions for Use Deployers of high-risk AI systems must take necessary technical and organizational measures to ensure that the system is used according to the instructions provided by the system's creator or supplier. Human Oversight and Competence Deployers must assign natural persons (humans) to oversee the operation of these systems. These overseers must have the appropriate competence, training, authority, and support to handle the AI system responsibly. 
Control Over Input Data When deployers control the input data used by the AI system, they must ensure that the data is relevant and sufficiently representative for the intended purpose of the AI. Monitoring, Incident Reporting, and Risk Mitigation Deployers are obligated to monitor the operation of the high-risk AI system in line with the instructions provided. If there is any indication that the system may pose risks, they must inform the system provider or distributor and relevant authorities without delay. If a serious incident occurs, the deployer must immediately report it to the provider, importer, distributor, and market surveillance authorities. Log Keeping Deployers must retain automatically generated logs from the AI system for an appropriate period, with a minimum of six months, unless specified otherwise by national or Union law (particularly in data protection legislation). Workplace Information When a high-risk AI system is introduced into the workplace, deployers who are also employers must inform the workers and their representatives that such a system is being used. This transparency should align with relevant labor laws and practices. Registration for Public Authorities Deployers who are public authorities or entities within the Union must ensure that their high-risk AI systems are registered in the EU database referred to in Article 71. If the system is not registered, they cannot use it and must inform the provider or distributor. Data Protection Compliance Deployers must use the information provided under Article 13 of this regulation to comply with data protection impact assessments as required by GDPR (Article 35 of Regulation (EU) 2016/679) or law enforcement directives. Biometric Identification in Law Enforcement When law enforcement deploys a high-risk AI system for post-remote biometric identification (such as facial recognition), they must obtain judicial or administrative authorization before or shortly after its use. 
This authorization must occur within 48 hours, unless it’s for the initial identification of a potential suspect. If authorization is rejected, the use of the biometric system must stop immediately, and any related personal data must be deleted. The system cannot be used indiscriminately for law enforcement purposes without a specific link to a criminal case, investigation, or genuine threat. Law enforcement decisions cannot be based solely on AI output. Each use of such systems must be documented in the relevant police files and made available to market surveillance or data protection authorities upon request, excluding sensitive law enforcement data. Deployers must also submit annual reports on their use of these systems, although aggregated reports can cover more than one deployment. Informing Affected Persons When high-risk AI systems make decisions affecting individuals, deployers must inform the affected persons that they are subject to such AI decisions. For law enforcement use, this must comply with Article 13 of Directive (EU) 2016/680, ensuring transparency and protecting individuals' rights. Cooperation with Authorities Deployers are required to cooperate with competent authorities in any action related to the AI system's operation, helping authorities implement regulations and investigate compliance. Testing High-Risk AI Systems in Real-World Environments, Outside of Regulatory Sandboxes Scope of Testing in Real-World Conditions High-risk AI systems can be tested in real-world conditions, outside of regulatory sandboxes. Providers or prospective providers of these systems can conduct testing, including submitting a real-world testing plan. However, these tests must comply with Article 5 , which may include prohibitions on certain uses of AI (for example, potentially harmful applications). 
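Two deadlines in this part of the framework are simple date arithmetic: the 48-hour window for biometric-identification authorization described above, and the 30-day tacit-approval period for real-world testing plans covered in the conditions that follow. A sketch, with the caveat that where national law rules out tacit approval, explicit authorization is still required and the second helper would not apply:

```python
from datetime import datetime, timedelta, timezone

def authorization_deadline(first_use: datetime) -> datetime:
    """Latest moment to obtain judicial or administrative authorization
    for post-remote biometric identification: within 48 hours of use."""
    return first_use + timedelta(hours=48)

def tacit_approval_date(plan_submitted: datetime) -> datetime:
    """Moment after which a submitted real-world testing plan counts as
    approved absent a response from the market surveillance authority
    (30 days), where national law permits tacit approval."""
    return plan_submitted + timedelta(days=30)

# Example: a system used at 09:00 UTC on 6 January must be authorized
# no later than 09:00 UTC on 8 January.
used = datetime(2025, 1, 6, 9, 0, tzinfo=timezone.utc)
deadline = authorization_deadline(used)
```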
The European Commission will further define what the real-world testing plan should include through "implementing acts" (which are detailed legal measures to implement legislation). National or Union law concerning product testing (e.g., products covered by EU harmonization laws) still applies to these systems. Timing and Conduct of Testing Providers can conduct real-world tests before the AI system is placed on the market or put into service. Testing can be done either by the provider alone or in partnership with deployers (entities or individuals who implement or use the system). The testing must respect any ethical review requirements laid down by national or Union law, ensuring ethical standards are maintained. Conditions for Testing in Real-World Conditions Testing can only proceed if the following conditions are met: (a) Testing Plan Submission : A real-world testing plan must be drawn up and submitted to the market surveillance authority in the country where the testing will occur. (b) Approval by Authorities : The surveillance authority must approve both the testing and the plan. If they don’t respond within 30 days, the plan is considered automatically approved. Some national laws may prevent such "tacit approval," in which case explicit authorization is required. (c) Registration Requirements : Testing must be registered with a Union-wide unique identification number. Specific systems, such as those related to law enforcement, migration, and border control (Annex III points 1, 6, 7), must be registered in a secure non-public section of the EU database for privacy and security reasons. (d) Union-Based Legal Representation : Providers must be established in the EU or appoint a legal representative within the EU. (e) Data Transfers : Data collected during the testing can only be transferred to non-EU countries if they comply with appropriate Union law safeguards. 
(f) Duration of Testing : Testing can last up to six months, with a possible extension of another six months, but only if justified and pre-notified to the market surveillance authority. (g) Protection of Vulnerable Groups : Extra care must be taken to protect individuals belonging to vulnerable groups, such as those with disabilities or age-related vulnerabilities. (h) Deployers' Awareness and Agreement : If testing involves deployers, they must be informed of all relevant details. A formal agreement between the provider and deployer must specify roles and responsibilities, ensuring compliance with applicable laws. (i) Informed Consent : Subjects involved in the testing must give informed consent (unless it's law enforcement-related testing where consent could interfere with the test). In such cases, the test must not negatively impact individuals, and any personal data must be deleted afterward. (j) Oversight : Testing must be overseen by qualified personnel from the provider and deployer, ensuring compliance with testing regulations. (k) Reversibility of AI Predictions : The outcomes of the AI system (predictions, recommendations, decisions) must be capable of being reversed or disregarded. Rights of Subjects in Testing Testing requires obtaining informed consent from individuals participating: Consent must be freely given and informed . Participants must receive clear, concise information about the testing's nature, objectives, and any inconveniences. Participants must be informed about their rights, such as the ability to refuse participation or withdraw without facing any detriment. They must be told how to request a reversal or disregarding of the AI system’s outputs. Consent must be documented, dated, and a copy provided to the participant or their legal representative. Participants in the testing, or their representatives, have the right to withdraw consent at any time without facing any consequences. 
They can also request the deletion of their personal data, but withdrawal does not affect activities already conducted. Incident Reporting Any serious incidents occurring during testing must be reported to the market surveillance authority. Providers must take immediate mitigation measures , or, if necessary, suspend or terminate the testing. Providers must also have a procedure for recalling the AI system in case of such terminations. Notifying Authorities Providers must notify the national market surveillance authority about any suspension or termination of the testing and provide the final outcomes. Fundamental Rights Impact Assessment (FRIA) for High-Risk AI Systems Who is required to perform the FRIA? Deployers of high-risk AI systems referred to in Article 6(2), such as public bodies or private entities providing public services, are obligated to perform an FRIA. High-risk AI systems in areas such as biometrics, education, law enforcement, and administration of justice are specifically targeted. However, certain AI systems, like those used in critical infrastructure (e.g., energy, water, traffic), are exempt. What does the FRIA involve? A description of how the AI system will be used and the context. Identification of the individuals or groups likely to be affected. Evaluation of risks, particularly concerning harm to fundamental rights, and measures for human oversight. A plan for mitigating risks and handling complaints. When must the FRIA be updated? FRIA applies to the first deployment of a high-risk AI system. However, if circumstances change—such as updates to the system or changes in its use—the FRIA must be revised to reflect the new situation. Data Protection Impact Assessments (DPIA) If a Data Protection Impact Assessment (DPIA) has already been conducted under GDPR (which covers data protection rights), the FRIA will complement it, focusing on a broader set of fundamental rights beyond just data protection. 
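The FRIA elements listed above lend themselves to a simple structured record. The field names below are the author's shorthand for those elements, not an official template, and a completeness flag is no substitute for the substantive quality of the assessment.

```python
from dataclasses import dataclass

@dataclass
class FundamentalRightsImpactAssessment:
    """Illustrative container for the FRIA elements summarized above."""
    usage_description: str    # how and in what context the system is used
    affected_groups: list     # individuals or groups likely to be affected
    identified_risks: list    # risks of harm to fundamental rights
    oversight_measures: list  # human oversight arrangements
    mitigation_plan: str      # risk mitigation and complaint handling

    def is_complete(self) -> bool:
        # Every element must be filled in before first deployment.
        return all([self.usage_description, self.affected_groups,
                    self.identified_risks, self.oversight_measures,
                    self.mitigation_plan])

fria = FundamentalRightsImpactAssessment(
    usage_description="Ranking of applications in a public-sector hiring tool",
    affected_groups=["job applicants"],
    identified_risks=["indirect discrimination in ranking"],
    oversight_measures=["human review of every adverse decision"],
    mitigation_plan="quarterly bias audit; complaints routed to a named desk",
)
```

Because the FRIA must be revised when the system or its use changes, a record like this would be versioned rather than written once.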
Notification and template use Once the FRIA is completed, the deployer must notify the market surveillance authority. The European AI Office will provide a template questionnaire to streamline this process for deployers. Conformity Assessment for High-Risk AI Systems Options for Conformity Assessment (Annex III, Point 1) Harmonized Standards (Article 40): Providers must choose between two options if they apply harmonized standards or common specifications (Article 41). Internal Control : This method is described in Annex VI, allowing the provider to internally assess compliance through predefined procedures. Quality Management System (QMS) : If providers opt for this route, it involves a notified body to evaluate the system’s quality management and technical documentation, as detailed in Annex VII. Exceptions : If harmonized standards are unavailable or only partially applied, the provider must follow the procedure in Annex VII, which mandates a third-party notified body to ensure compliance. Conformity for Other High-Risk Systems (Annex III, Points 2 to 8) For AI systems in sectors such as biometrics , education , and law enforcement , the internal control process (as outlined in Annex VI) applies, without the need for external notified bodies. Substantial Modifications and Learning Systems A new conformity assessment is required if the AI system undergoes significant changes. However, if the system continues learning within predefined limits, no additional assessment is necessary. Exceptional Authorization for Public Security or Health Reasons Market Surveillance Authorities can authorize high-risk AI systems to be placed on the market for exceptional reasons (e.g., public security, protection of life, environmental protection, or safeguarding critical infrastructure) within a Member State. This is a temporary measure while the full conformity assessment is completed. 
The derogation is allowed only for a limited time, and the assessment process must proceed without undue delay. In urgent situations, such as an imminent threat to public safety , law enforcement or civil protection authorities can use a high-risk AI system without prior authorization . They must, however, apply for the required authorization either during or immediately after the AI system’s use. If the authorization is denied, the use of the system must cease immediately, and any data or results from its usage must be discarded. Market surveillance authorities can only issue the authorization if they conclude that the high-risk AI system complies with the fundamental requirements of Section 2 of the AI Act, which covers safety and fundamental rights. After granting authorization, the market surveillance authority must notify the European Commission and other Member States about the authorization. Sensitive operational data, particularly from law enforcement, is excluded from this reporting. If no Member State or the Commission objects to the authorization within 15 days , the authorization is considered justified. If a Member State or the Commission objects within 15 days, consultations between the Commission and the Member State that issued the authorization are initiated. The relevant stakeholders, including AI system providers, are allowed to present their views. The Commission then decides if the authorization is justified based on the consultations and informs the relevant parties of its decision. If the Commission finds that the authorization was unjustified, the market surveillance authority of the Member State must withdraw the authorization. EU Declaration of Conformity of High-Risk AI System The provider of a high-risk AI system is required to create a written EU declaration of conformity . This document must be machine-readable , either physical or electronically signed, and retained for 10 years after the system is placed on the market.
The declaration must state that the AI system complies with the requirements in Section 2 of the Act. It should contain the information listed in Annex V and be translated into a language easily understood by the authorities of the Member States where the system is marketed.

If the AI system is also subject to other Union harmonisation legislation, the provider can prepare a single EU declaration of conformity covering all applicable legal frameworks. This streamlines compliance by consolidating all relevant regulatory requirements into one document.

By issuing the EU declaration, the provider assumes full responsibility for ensuring that the AI system meets the compliance standards set out in Section 2. The declaration must be kept up to date, reflecting any changes in the system's status or updates to its compliance.

Registration of AI Systems
Before placing a high-risk AI system on the market, the provider (or their authorized representative) must register themselves and their system in the EU database specified in Article 71. This applies to the high-risk AI systems listed in Annex III, except for those in point 2 of Annex III (critical infrastructure).

Providers who conclude that their AI system is not high-risk (under Article 6(3)) must also register both the system and themselves in the EU database.

For specific high-risk AI systems related to law enforcement, migration, asylum, and border control (Annex III, points 1, 6, and 7), registration takes place in a non-public section of the EU database.

High-risk AI systems listed under point 2 of Annex III (critical infrastructure) must be registered at the national level, rather than in the EU database.

Post-Market Monitoring of High-Risk AI Systems
Providers of high-risk AI systems must create and document a post-market monitoring system.
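Since the declaration must be machine-readable, many providers will keep it as structured data alongside the signed document. The sketch below is purely illustrative: the field names are assumptions, not the official Annex V schema, and the system and signatory are hypothetical.

```python
import json

# Illustrative sketch of a machine-readable EU declaration of conformity.
# Field names are hypothetical; the authoritative content list is Annex V.
declaration = {
    "ai_system": {
        "name": "ExampleRiskScorer",          # hypothetical system name
        "version": "1.4.2",
        "provider": "Example Provider B.V.",  # hypothetical provider
    },
    "statement": "The AI system complies with the requirements of "
                 "Chapter III, Section 2 of Regulation (EU) 2024/1689.",
    "harmonised_standards": ["EN XXXXX:20XX"],  # placeholder reference
    "issued": "2025-01-06",
    "signatory": "Jane Doe, Head of Compliance",
}

# Serialize so the declaration is machine-readable and archivable
# for the 10-year retention period.
print(json.dumps(declaration, indent=2))
```

Keeping the declaration as data also makes the "kept up to date" obligation auditable, since each revision can be versioned and diffed.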
The system needs to be proportionate to the nature of the AI system and its specific risks: the complexity of the monitoring system should match the complexity and risk of the AI system. For example, an AI system used in medical diagnostics may need more detailed and continuous monitoring than one used for less critical tasks like customer service automation.

Key points:
- The monitoring system must actively and systematically collect, document, and analyze relevant data.
- Data can come from deployers (those who actually use the AI system in real-world applications) or from other sources, and this data collection spans the entire lifetime of the AI system.
- Providers must ensure the AI system's continuous compliance with the legal requirements laid out in Chapter III, Section 2 (specific safety, transparency, and ethical standards).
- Monitoring should also include analysis of interactions between the AI system and other AI systems, where relevant.
- There is an exemption for law enforcement authorities: providers do not need to monitor sensitive operational data from these bodies.

The idea is that providers should not simply launch their AI system and forget about it. They need to continuously gather information on how well the system is performing, whether it continues to meet safety and compliance standards, and whether it interacts with other AI systems in ways that could affect safety or performance.

The monitoring system must be backed by a post-market monitoring plan, which is included in the technical documentation (described in Annex IV). The European Commission will adopt an implementing act by 2 February 2026 providing a template for the monitoring plan and listing the elements it must include.
For high-risk AI systems already covered by Union harmonization legislation (other EU laws that require monitoring), providers may integrate their AI monitoring into the existing monitoring frameworks, where possible, provided they achieve an equivalent level of protection. This flexibility also applies to high-risk AI systems used by financial institutions, which are already subject to specific governance and compliance rules under EU financial services law.

Transparency in AI Systems

AI systems interacting with natural persons
Providers of AI systems designed to interact directly with people must ensure that users know they are interacting with an AI system, unless this is obvious to a reasonably well-informed, observant, and cautious person in the circumstances. AI systems used for law enforcement purposes (e.g., detecting, preventing, investigating, or prosecuting criminal offenses) are exempt from this transparency rule, as long as safeguards for individual rights and freedoms are in place. However, if these systems are used to allow the public to report crimes, the obligation to inform users applies.

AI-generated or manipulated content
Providers of AI systems that generate synthetic audio, image, video, or text content must:
- Mark the outputs of these systems as artificially generated or manipulated in a machine-readable format, ensuring they are detectable as such.
- Ensure the marking technology is effective, interoperable, robust, and reliable, considering the nature of the content, cost, and current technological standards.
This obligation does not apply where the AI system performs only minor edits or enhancements (e.g., AI-assisted photo filters) that do not significantly alter the content, nor to AI systems used for law enforcement purposes such as criminal investigations.
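The Act does not prescribe a single marking technology; industry approaches include watermarking and provenance metadata standards such as C2PA. As a toy illustration of the idea only, a provider might attach a machine-readable provenance record to each generated asset. Everything below (the sidecar-file convention, field names, file and model names) is an assumption for illustration, not a compliant marking scheme.

```python
import json
import hashlib
from pathlib import Path

def mark_as_ai_generated(asset_path: str, model_name: str) -> Path:
    """Write a machine-readable provenance sidecar next to a generated asset.

    The sidecar convention and field names are hypothetical; real
    deployments would use an interoperable standard (e.g., C2PA
    manifests or watermarking) rather than this toy format.
    """
    data = Path(asset_path).read_bytes()
    record = {
        "ai_generated": True,
        "generator": model_name,
        "sha256": hashlib.sha256(data).hexdigest(),  # binds record to content
    }
    sidecar = Path(asset_path).with_suffix(".provenance.json")
    sidecar.write_text(json.dumps(record, indent=2))
    return sidecar

# Example: mark a freshly "generated" file (stand-in bytes)
Path("output.png").write_bytes(b"\x89PNG...")
print(mark_as_ai_generated("output.png", "example-model-v1"))
```

A detached sidecar like this does not survive re-encoding or copying of the asset, which is exactly why the Act stresses that marking must be effective, interoperable, and robust.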
Transparency in emotion recognition and biometric categorization systems
Deployers of AI systems that recognize emotions or categorize individuals based on biometric data must inform individuals that these systems are in use. Additionally, they must comply with the relevant data protection laws:
- Regulation (EU) 2016/679 (GDPR)
- Regulation (EU) 2018/1725
- Directive (EU) 2016/680
AI systems used for law enforcement purposes (e.g., criminal investigations) are exempt from this transparency obligation, provided safeguards for rights and freedoms are maintained.

Disclosure of deep fakes and AI-generated text
For AI systems generating or manipulating deep fakes (synthetic or manipulated image, audio, or video content), deployers must disclose that the content is artificially created. Exceptions:
- The obligation does not apply to law enforcement purposes.
- For artistic, creative, satirical, or fictional works (e.g., a film using AI-generated special effects), the transparency obligations are relaxed: some disclosure of AI-generated content is required, but it must not interfere with the artistic experience.
For AI systems generating text meant to inform the public on important matters, deployers must disclose that the text was generated or manipulated by AI unless:
- The text is subject to human review or editorial control, and a person takes legal responsibility for the publication; or
- The system is used for law enforcement purposes.

Timing and clarity of information disclosure
The required information must be communicated clearly and in a manner that is easy to distinguish for the individuals concerned, at the first interaction with or exposure to the AI system or its content. Any accessibility requirements (e.g., for individuals with disabilities) must also be taken into account.
General-Purpose AI Models
A general-purpose AI model is defined as an AI model that:
- Is trained on a large amount of data, often using self-supervision (an approach in which the AI learns from unlabeled data at scale).
- Shows significant generality, meaning it can competently perform a wide range of distinct tasks, regardless of how it is distributed or marketed.
- Can be integrated into various downstream systems or applications, making it highly flexible and adaptable for different uses.
This definition excludes AI models used only for research, development, or prototyping before being placed on the market. General-purpose AI models, such as those behind language models (like GPT), image generators, and recommendation engines, are versatile and can be adapted for numerous applications across industries.

A general-purpose AI system is based on a general-purpose AI model and can serve a variety of purposes directly or be integrated into other AI systems. Such a system could, for example, power customer service chatbots, image recognition systems, or predictive analytics. The key distinction is that a general-purpose AI system is an actual functional system built on a general-purpose AI model, tailoring the model to particular tasks or industries.

Obligations for Providers of General-Purpose AI Models

(a) Technical Documentation
Providers must maintain up-to-date technical documentation of the AI model, including details of the training and testing processes and the results of evaluations. The documentation must meet the requirements laid out in Annex XI and be made available upon request to the AI Office or national authorities. All general-purpose AI model providers must include the following:
General description of the model, covering:
- Tasks the model is designed for and types of AI systems it can be integrated into.
- Acceptable use policies.
- Release date and distribution methods.
- Architecture and number of parameters.
- Input/output modalities (e.g., text, image).
- Licensing information.
Detailed model development information, including:
- Technical means required for integration into other systems.
- Model design specifications, training methodologies, key design choices, and objectives.
- Data used for training, testing, and validation, including its source, characteristics, and measures taken to mitigate biases.
- Computational resources used (e.g., floating-point operations), training time, and other relevant details.
- Known or estimated energy consumption during training.

(b) Information for AI System Providers
Providers must prepare and keep up to date documentation for other providers who intend to integrate the general-purpose AI model into their own AI systems. This documentation should:
- Enable those AI system providers to understand the capabilities and limitations of the AI model.
- Help the AI system providers comply with their own regulatory obligations.
- Contain at least the information required in Annex XII:
  - Tasks the model is designed to perform and types of AI systems it can be integrated into.
  - Acceptable use policies.
  - Release date and distribution methods.
  - Interaction with external hardware or software (if applicable).
  - Relevant software versions (if applicable).
  - Model architecture and number of parameters.
  - Input/output modalities (e.g., text, image) and formats.
  - Licensing information for the model.
  - Technical means (instructions, tools, infrastructure) required for integration.
  - Input and output modalities, formats, and maximum sizes (e.g., context window length).
  - Information on data used for training, testing, and validation, including data type, provenance, and curation methods.
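Much of this Annex XI/XII material lends itself to being kept as structured, versionable data rather than free-form documents. A hypothetical sketch (the field names are illustrative assumptions loosely mirroring the items above, not an official schema, and the model shown is fictional):

```python
from dataclasses import dataclass, field, asdict
from typing import Optional

@dataclass
class ModelDocumentation:
    """Illustrative container for GPAI technical documentation.

    Field names are assumptions loosely mirroring Annex XI items;
    they are not an official schema.
    """
    name: str
    release_date: str
    parameters: int
    modalities: dict                  # e.g., {"input": [...], "output": [...]}
    license: str
    training_flops: float             # known or estimated training compute
    energy_kwh: Optional[float] = None    # known or estimated energy use
    acceptable_use: list = field(default_factory=list)

doc = ModelDocumentation(
    name="example-gpai-7b",           # hypothetical model
    release_date="2025-03-01",
    parameters=7_000_000_000,
    modalities={"input": ["text"], "output": ["text"]},
    license="proprietary",
    training_flops=8.4e23,
    acceptable_use=["no biometric categorization"],
)
print(asdict(doc))
```

Serializing such a record with `asdict` gives a single artifact that can be handed to the AI Office on request and diffed between model releases.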
(c) Copyright Policy
Providers must have a policy in place to comply with EU copyright law, in particular ensuring that their models respect any reservation of rights expressed under Article 4(3) of Directive (EU) 2019/790 on copyright and related rights in the digital single market.

(d) Public Summary of Training Data
Providers must publish a summary of the content used to train the AI model, using a template provided by the AI Office. This summary should give sufficient detail about the training data to provide transparency, while protecting proprietary data where necessary.

Exemptions for Open-Source AI Models
The documentation obligations in (a) and (b) do not apply to AI models released under a free and open-source license, provided the model's parameters, architecture, and usage information are publicly available. Note: this exemption does not apply to general-purpose AI models that pose systemic risks (as defined under Article 51).

Use of Codes of Practice and Harmonized Standards
Providers may rely on codes of practice (Article 56) or European harmonized standards to demonstrate compliance with the obligations under this regulation. Until a harmonized standard is published, providers can use these codes to show conformity. Providers who do not adhere to an approved code or standard must demonstrate compliance through alternative means, subject to Commission review.

Authorized Representatives of Providers of General-Purpose AI Models
Providers of general-purpose AI models established outside the EU must appoint an authorized representative based in the EU before placing their models on the Union market. The provider must empower the authorized representative to perform the tasks set out in the mandate, such as ensuring the model complies with the relevant regulations.
The authorized representative must carry out specific tasks under the mandate:
- Ensure that the technical documentation (Annex XI) is properly drawn up and that all regulatory obligations under Article 53 are fulfilled.
- Keep a copy of the technical documentation for 10 years after the AI model has been placed on the market, along with the provider's contact details.
- Provide documentation to the AI Office or national authorities upon reasoned request to demonstrate compliance with the regulation.
- Cooperate with the AI Office and other competent authorities in any actions related to the AI model, including its integration into downstream AI systems.

The authorized representative can be addressed by the AI Office or authorities, in addition to or instead of the provider, on all issues related to ensuring compliance with this regulation. If the authorized representative believes the provider is acting contrary to its obligations, it must terminate the mandate and immediately inform the AI Office of the reasons for the termination.

The obligation to appoint an authorized representative does not apply to providers of general-purpose AI models released under a free and open-source license, unless the model presents systemic risks.

General-Purpose AI Models with Systemic Risk
An AI model is classified as having systemic risk if it meets one of the following conditions:

(a) High-Impact Capabilities: The model is evaluated using technical tools and methodologies, including benchmarks and indicators, to determine whether it has high-impact capabilities. A high-impact model can affect significant societal areas such as privacy, safety, democracy, or economic systems.
(b) Commission Decision: The European Commission, on its own initiative or after receiving an alert from a scientific panel, can designate an AI model as having systemic risk if it deems the model to have capabilities or impacts similar to those described in point (a). The criteria for this assessment are laid out in Annex XIII of the regulation.

Criteria for Designating General-Purpose AI Models with Systemic Risk

(a) The Number of Parameters of the Model
Parameters are the variables within an AI model that are learned during training; their number is a key indicator of the model's complexity and capacity to learn and process information. Large models, like modern large language models (e.g., GPT-4), can have billions or even trillions of parameters, making them highly powerful and capable of handling a wide variety of tasks. More parameters often mean broader impact, as the model can understand and generate more nuanced or complex outputs.

(b) The Quality or Size of the Dataset
This refers to the dataset used to train the AI model, specifically its size and quality. Size can be measured, for example, in the number of tokens (words or data points) used in training. Quality refers to the relevance, accuracy, and diversity of the data; high-quality data makes a model more effective and versatile. A large, high-quality dataset generally enables the model to generalize better across different tasks, potentially increasing its impact and risk through broader applicability.

(c) The Amount of Computation Used for Training
This criterion looks at the computational resources required to train the AI model, measured in floating-point operations (FLOPs), a standard metric for computational intensity. Other indicators of computational effort include:
- Estimated cost of training: training large AI models often requires significant financial resources.
- Training time: long training periods imply that the model is processing vast amounts of data and computation.
- Energy consumption: training large models can consume enormous amounts of energy, raising concerns about environmental impact.
An AI model is presumed to have high-impact capabilities if the amount of computation used for its training exceeds 10^25 floating-point operations (FLOPs), indicating large-scale computational resources and complexity.

(d) Input and Output Modalities
This criterion focuses on the modalities (types of inputs and outputs) the AI model can handle:
- Text to text: large language models that take text input and generate text output (e.g., GPT models).
- Text to image: models that generate images based on text prompts (e.g., DALL·E).
- Multi-modality: models capable of handling different types of inputs and outputs simultaneously, such as combining text, image, audio, and video processing.
- Biological sequences: specialized AI models that process biological data, such as genetic sequences.

(e) Benchmarks and Evaluations of the Model's Capabilities
This criterion refers to the performance benchmarks used to evaluate the AI model's capabilities, including:
- The number of tasks it can perform without needing further training (versatility and generality).
- Adaptability to new tasks: how easily the model can be fine-tuned or retrained to handle new, distinct tasks.
- Autonomy: the model's ability to operate independently without continuous human oversight.
- Scalability: how well the model's capabilities scale as it is deployed in different environments or across different industries.
- Tool access: access to external tools (e.g., APIs) may further enhance the model's capabilities, increasing its potential impact.

(f) High Impact on the Internal Market
This criterion assesses the model's reach within the European Union, particularly its availability to businesses.
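Whether a model crosses the 10^25 FLOP presumption can be roughly estimated before a training run finishes. A common back-of-the-envelope heuristic for dense transformer models is that training compute ≈ 6 × parameters × training tokens; the heuristic itself is an approximation from the ML literature, not part of the Act, and the model sizes below are hypothetical:

```python
# Rough training-compute estimate using the common "6 * N * D" heuristic
# for dense transformers (an approximation, not an official methodology).
SYSTEMIC_RISK_FLOPS = 1e25  # presumption threshold in the AI Act

def estimated_training_flops(params: float, tokens: float) -> float:
    """~6 FLOPs per parameter per training token (forward + backward pass)."""
    return 6.0 * params * tokens

# Illustrative, hypothetical training runs:
small = estimated_training_flops(7e9, 2e12)      # 7B params, 2T tokens
large = estimated_training_flops(1.8e12, 15e12)  # 1.8T params, 15T tokens

for name, flops in [("small", small), ("large", large)]:
    presumed = flops > SYSTEMIC_RISK_FLOPS
    print(f"{name}: {flops:.2e} FLOPs -> systemic-risk presumption: {presumed}")
```

Under these assumptions the 7B-parameter run lands orders of magnitude below the threshold, while the 1.8T-parameter run exceeds it, which is why the presumption in practice targets only the largest frontier training runs.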
A model will be presumed to have a high impact on the EU's internal market if it has been made available to at least 10,000 registered business users.

(g) The Number of Registered End-Users
The number of end-users is another important factor in assessing the model's overall impact. A large user base indicates extensive reach, meaning the model could affect a broad range of people, businesses, or industries; this widespread adoption heightens the model's potential to create societal or economic risks.

The European Commission is empowered to adopt delegated acts (legally binding acts) to:
- Amend the thresholds (such as the 10^25 FLOP requirement) based on advances in technology, like algorithmic improvements or hardware efficiency.
- Update benchmarks and indicators to ensure that risk assessments keep pace with the evolving capabilities of AI systems.

Procedures for Managing Systemic-Risk AI Models
If a provider develops a general-purpose AI model that meets the high-impact criteria, it must notify the European Commission within two weeks of becoming aware that the criteria have been met. This notification must include evidence that the model meets the high-impact requirement. Additionally, if the Commission learns of a model that meets the criteria but has not been reported, it has the authority to classify it as an AI model with systemic risk on its own initiative.

The provider can present arguments that, despite meeting the technical threshold for systemic risk (e.g., the FLOP threshold), the model does not pose systemic risks due to its specific characteristics. This could be the case, for example, if the model is tightly controlled or used in a manner that mitigates the risk. These arguments must be well substantiated. If the Commission finds the provider's arguments insufficiently convincing, the model remains classified as having systemic risk.
The classification decision is then based on the provider's failure to demonstrate that the model's particular characteristics mitigate the potential risks. Providers can request a reassessment of the systemic-risk designation, but only six months after the initial designation and on the basis of new and objective reasons.

The European Commission will maintain and publish a list of general-purpose AI models classified as having systemic risk. This list will be kept up to date, but its publication must respect intellectual property rights, business confidentiality, and trade secrets, as required by EU and national law.

Obligations for Providers of AI Models with Systemic Risk
Beyond the general duties described above, the specific obligations are:

Model Evaluation with Adversarial Testing
Providers must assess their models using standardized protocols and state-of-the-art tools. This involves testing the model against potential attacks or attempts to manipulate it. For example, an adversarial test might simulate a scenario in which someone tries to trick the model into making incorrect decisions. The goal is to identify vulnerabilities and mitigate the associated risks.

Risk Assessment at Union Level
Providers must assess systemic risks at the EU level, considering the impact the AI model might have within the European Union. These risks could range from disruptions to critical infrastructure (e.g., energy grids) to widespread misinformation or privacy violations.

Incident Reporting
Providers must keep track of serious incidents and report them without delay to the relevant authorities, including the AI Office and, where necessary, national authorities. Serious incidents could involve unexpected failures, malicious use, or significant impacts on public safety. The provider must also document and implement corrective measures to address these incidents.
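Real adversarial evaluation is done with specialized red-teaming suites against production models; as a deliberately simple toy illustration of the idea (the "model" is a hypothetical keyword classifier and the perturbation is a character swap, not a real evaluation protocol):

```python
import random

def toy_classifier(text: str) -> str:
    """Hypothetical stand-in for an AI model: naive keyword matching."""
    return "negative" if "refund" in text.lower() else "positive"

def perturb(text: str, rng: random.Random) -> str:
    """Adversarial-style perturbation: swap two adjacent characters."""
    if len(text) < 2:
        return text
    i = rng.randrange(len(text) - 1)
    chars = list(text)
    chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

def flip_rate(inputs: list, trials: int = 100, seed: int = 0) -> float:
    """Fraction of perturbations that change the model's decision --
    a crude robustness indicator an evaluator might track over time."""
    rng = random.Random(seed)
    flips = total = 0
    for text in inputs:
        baseline = toy_classifier(text)
        for _ in range(trials):
            total += 1
            if toy_classifier(perturb(text, rng)) != baseline:
                flips += 1
    return flips / total

rate = flip_rate(["please refund my order", "great product, thanks"])
print(f"decision flip rate under perturbation: {rate:.2%}")
```

The point of the sketch is the shape of the loop: establish a baseline decision, apply systematic perturbations, and measure how often the decision flips. Production red teaming replaces both the perturbation and the model with far more sophisticated counterparts.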
Cybersecurity Protection
Providers are responsible for ensuring that the AI model and its physical infrastructure (e.g., servers and databases) are adequately protected from cyber threats. This could include encryption, access controls, regular security audits, and intrusion detection systems.

Codes of Practice and Harmonised Standards
Providers can rely on codes of practice (voluntary guidelines or industry standards) to meet their obligations until formal EU-wide harmonised standards are published. Harmonised standards are official, EU-endorsed technical specifications that provide a benchmark for compliance; once published, providers who follow them are presumed compliant with the law. A provider that follows neither a code of practice nor a harmonised standard must demonstrate an alternative way of complying with the requirements, subject to approval by the European Commission.

Additional Information for Models with Systemic Risk
For general-purpose AI models classified as having systemic risk, additional details are required in the technical documentation of the AI model:
- Evaluation strategies: description of evaluation criteria, results, and limitations, using public or internal evaluation methods.
- Adversarial testing: details of internal or external testing (e.g., red teaming) and of model adaptation, alignment, and fine-tuning processes.
- System architecture: explanation of how software components work together within the model.

AI Regulatory Sandboxes

Establishment of AI Regulatory Sandboxes at the National Level
Each Member State is required to set up at least one AI regulatory sandbox by 2 August 2026. These sandboxes provide controlled environments in which AI developers can experiment with new AI systems under regulatory supervision before entering the market. A sandbox may be established jointly with the competent authorities of other Member States.
This collaboration helps smaller states or regions pool resources and knowledge to support AI development. Participation in an existing sandbox is acceptable if it provides national coverage comparable to a standalone sandbox.

Regional and Cross-Border Sandboxes
Beyond the national level, additional sandboxes may be established at regional or local levels, or in cooperation with other Member States.

AI Regulatory Sandboxes for EU Institutions
The European Data Protection Supervisor (EDPS) has the authority to create AI regulatory sandboxes specifically for EU institutions, bodies, offices, and agencies. In these cases, the EDPS can fulfil the roles and tasks of the national competent authorities, ensuring compliance with the AI Act for Union-level entities.

Structure and Purpose of AI Regulatory Sandboxes
AI regulatory sandboxes provide a controlled environment in which AI systems can be developed, tested, and validated under supervision. These tests can involve real-world conditions and aim to foster innovation while identifying and mitigating risks, particularly to fundamental rights, health, and safety. The sandbox operates for a limited time under a plan agreed in advance between the AI provider and the supervising authority.

Documentation and Exit Reports
Upon completing their participation in the sandbox, AI providers receive written proof of the activities carried out, and the authorities issue an exit report detailing the results and lessons learned. Providers can use these documents to demonstrate compliance with the AI Act in the conformity assessment process or other market surveillance activities; the reports may also accelerate regulatory approvals.

Access to Exit Reports and Confidentiality
While exit reports are generally confidential, the European Commission and the AI Board may access them to aid in regulatory oversight.
If both the AI provider and the national authority agree, exit reports may also be made public to promote transparency and share knowledge within the AI ecosystem.

Objectives of the AI Regulatory Sandboxes
The sandboxes aim to:
- Improve legal certainty by helping AI developers understand and comply with the AI Act.
- Foster best-practice sharing among authorities.
- Encourage innovation and strengthen the AI ecosystem within the EU.
- Contribute to evidence-based regulatory learning to improve future AI regulation.
- Facilitate faster market access for AI systems, especially for SMEs and start-ups.

Coordination with Data Protection Authorities
If AI systems in the sandbox involve the processing of personal data or require supervision from other national authorities, data protection agencies and other relevant bodies must be involved in the sandbox to ensure compliance with applicable data protection laws.

Risk Mitigation and Authority Supervision
Competent authorities have the power to suspend sandbox activities if significant risks to health, safety, or fundamental rights are detected, especially if no effective mitigation measures can be implemented.

Liability and Protection for AI Providers
AI providers participating in sandboxes remain liable under applicable Union and national law for any damage caused during sandbox testing. However, providers who follow the agreed sandbox plan and comply with the guidance of the supervising authority will not face administrative fines for infringements related to the AI Act or other laws overseen within the sandbox.

Centralized Platform and Stakeholder Interaction
The European Commission will create a centralized interface for AI regulatory sandboxes. This platform will provide relevant information and allow stakeholders to interact with authorities, seek regulatory guidance, and monitor sandbox activities, streamlining communication and fostering a collaborative environment for AI innovation across the EU.
Uniformity Across the Union
The European Commission will adopt implementing acts to ensure that the setup, operation, and supervision of AI regulatory sandboxes are consistent across all Member States, preventing fragmentation and confusion. Eligibility and selection criteria will be transparent and fair: any provider or prospective provider of an AI system who meets the criteria can apply to participate in a sandbox, and national authorities must inform applicants of their decision within three months, ensuring a predictable timeline.

Broad Access
AI sandboxes will be open to partnerships between providers, deployers, and other relevant third parties, broadening opportunities to collaborate with other stakeholders in the AI ecosystem, such as SMEs, researchers, and testing labs. Importantly, participation in one Member State's sandbox will be mutually recognized across the EU. SMEs and start-ups can participate in the sandbox free of charge, except for any exceptional costs that authorities may recover fairly.

Focus on Testing Tools and Risk Mitigation
The sandboxes will facilitate the development of tools to assess key aspects of AI systems, such as accuracy, robustness, and cybersecurity, as well as other dimensions important for regulatory learning. Authorities will assist in developing measures to mitigate risks to fundamental rights and societal impacts, helping ensure that a system aligns with EU values and safety standards.

If an AI system needs testing in real-world conditions, this can be arranged within the sandbox. Such testing requires specific safeguards, agreed with the national authorities, to protect fundamental rights, health, and safety; cross-border cooperation may also be required to ensure consistent practices in real-world testing.

Supervision of Testing in Real-World Conditions
Surveillance authorities are responsible for ensuring that real-world testing of AI systems is conducted in compliance with this regulation.
When AI systems are tested in regulatory sandboxes (controlled environments for testing new technologies), surveillance authorities ensure compliance with specific rules and may allow certain exceptions during testing. Authorities can suspend, terminate, or modify real-world testing if serious issues are detected or if the testing does not comply with Articles 60 and 61 (concerning testing conditions and risk management). These decisions must be justified and can be challenged by the provider.

Support Measures for Small and Medium-sized Enterprises (SMEs), including Start-Ups

Member State Actions
- Priority access to AI regulatory sandboxes: SMEs, including start-ups, with a registered office or branch in the EU are given priority access to AI regulatory sandboxes, provided they meet the eligibility conditions and selection criteria. This priority does not exclude other SMEs or start-ups from access if they also meet the criteria.
- Awareness raising and training: Member States are tasked with organizing specific awareness campaigns and training programs on how this regulation applies to SMEs and start-ups.
- Communication channels: existing channels, or newly created ones, should be used to facilitate communication between SMEs, start-ups, deployers, and local authorities.
- Standardisation process participation: Member States should help SMEs participate in standardisation processes. Standardisation refers to the creation of uniform technical specifications, which helps ensure that products and services are consistent across the EU, fostering innovation and safety.

Fee Adjustments for SMEs
For conformity assessments (referred to in Article 43 of the regulation), fees are adjusted to account for the specific needs and characteristics of SMEs and start-ups: factors such as company size, market presence, and other relevant indicators are used to reduce the fees proportionately.
AI Office Actions

Standardised Templates: The AI Office should provide standardized templates that help SMEs and others meet their regulatory obligations.

Information Platform: A single, user-friendly information platform should be developed for all operators in the EU.

Communication Campaigns: The AI Office is tasked with raising awareness through campaigns to inform companies about their obligations under the AI regulation.

Public Procurement Best Practices: The office is also responsible for promoting best practices in public procurement processes when it comes to acquiring AI systems.

Simplified Compliance for Microenterprises

A microenterprise, as defined by Recommendation 2003/361/EC, is an enterprise which employs fewer than 10 persons and whose annual turnover and/or annual balance sheet total does not exceed EUR 2 million. These companies are granted a simplified approach to complying with certain elements of the quality management system required by the regulation under Article 17. These microenterprises can adopt a more straightforward version of the quality management system to meet the regulation's requirements. The European Commission will issue guidelines outlining which elements of the system can be simplified, making it easier for smaller businesses to comply without reducing the required protection standards, especially for high-risk AI systems. While microenterprises are allowed to follow a simplified process for certain parts of the quality management system, they are not exempt from other regulatory obligations. Specifically, they must still comply with the following key Articles of the regulation:

Article 9: Relates to the risk management system, requiring companies to identify and mitigate risks associated with AI systems.

Article 10: Concerns the data and data governance requirements that ensure the quality, relevance, and accuracy of the data used to train and test AI systems.
Article 11: Requires providers to draw up and keep up to date the technical documentation for high-risk AI systems.

Article 12: Involves record-keeping requirements, which obligate businesses to maintain logs related to the operation of high-risk AI systems.

Article 13: Discusses transparency and the obligation to provide adequate information to users and deployers of high-risk AI systems.

Article 14: Requires human oversight to ensure that AI systems are used appropriately, especially in high-risk environments.

Article 15: Establishes requirements for accuracy, robustness, and cybersecurity of AI systems.

Articles 72 and 73: Relate to post-market monitoring and the necessary surveillance activities that ensure ongoing compliance of AI systems after they are placed on the market.

European Artificial Intelligence Board

The European Artificial Intelligence (EAI) Board is a key governance mechanism outlined in European Union regulations to ensure consistent and effective oversight of AI technologies across Member States. The EAI Board is created to facilitate coordination and consistency in applying the EU's regulations on artificial intelligence (AI). The Board consists of one representative from each Member State of the EU. Additionally, the European Data Protection Supervisor participates as an observer, and the AI Office attends but does not participate in voting. Other authorities, bodies, or experts from national or EU levels may be invited to meetings on relevant issues, but they do not have voting rights. Each representative is appointed by their Member State for a term of three years, renewable once. These representatives are responsible for ensuring that their country's AI regulations align with the broader EU framework and for coordinating AI activities across national authorities: Representatives must have the skills and authority to contribute to the Board’s work.
Each representative is the primary contact for the Board and possibly for national stakeholders, depending on the Member State’s needs. They are responsible for ensuring consistent application of AI regulations within their country and for gathering necessary data to inform the Board’s activities. The Board operates based on rules adopted by a two-thirds majority vote among the representatives. These rules define the procedures for electing the Chair, setting mandates, voting protocols, and organizing the Board’s activities. The Board establishes two standing sub-groups : Market Surveillance : Acts as an Administrative Cooperation Group (ADCO) , overseeing AI systems' compliance and market regulations. Notifying Authorities : Facilitates coordination among authorities responsible for notifying and certifying AI systems. The Board can create additional standing or temporary sub-groups for specific issues, and representatives from the advisory forum can be invited as observers. The Board's primary function is to assist and advise the European Commission and Member States to ensure the consistent and effective application of AI regulations. Key tasks include: Coordination of National Authorities : Promoting cooperation among national bodies responsible for AI regulation. Expertise Sharing : Gathering and distributing technical and regulatory knowledge across Member States, especially in emerging AI areas. Advice on Enforcement : Offering guidance on enforcing AI-related rules, particularly for general-purpose AI models . Harmonization of Practices : Supporting the alignment of administrative procedures, such as the functioning of AI regulatory sandboxes and real-world testing environments. Recommendations and Opinions : The Board can issue recommendations on any aspect of AI regulation, including codes of conduct, standards, and updates to the regulation itself. 
Promotion of AI Literacy: The Board helps raise awareness of AI risks, benefits, and safeguards among the public and stakeholders.

Cooperation with Other Bodies: Working with other EU institutions, agencies, and international organizations to ensure a unified approach to AI regulation.

An advisory forum is established to provide technical expertise and advice to the Board and the Commission.

Composition: The forum includes a balanced group of stakeholders representing industry (including startups and SMEs), civil society, and academia. It also includes permanent members such as the EU cybersecurity agency ENISA and the European standardisation bodies CEN, CENELEC, and ETSI.

Tasks: The forum advises the Board and the Commission on AI matters and can prepare opinions and recommendations. It meets at least twice a year and may invite experts to its meetings for specific issues.

Governance: The forum elects two co-chairs for a two-year term (renewable), and it can create sub-groups to focus on specific topics. The forum also prepares an annual report on its activities, which is publicly available.

National Competent Authorities

Each Member State is required to designate at least two types of authorities for the purposes of the regulation:

A notifying authority: Responsible for overseeing the conformity and compliance of AI systems that are notified or certified under EU law.

A market surveillance authority: In charge of monitoring and ensuring AI systems in the market comply with regulations, particularly in relation to safety, health, and standards.

These authorities must act independently and impartially, meaning they cannot be influenced by external factors, and they should focus solely on the proper implementation of the regulation. Member States have flexibility in how they organize these authorities.
They can appoint multiple authorities to perform these tasks, or consolidate the responsibilities within one or more authorities, depending on their internal organizational needs, as long as they adhere to the principles of independence and objectivity. By August 2, 2025, Member States are required to make information about these competent authorities and single points of contact publicly available, especially through electronic means. Each Member State must designate a market surveillance authority as the single point of contact for the regulation. This authority will be the central entity responsible for liaising with both the Commission and other stakeholders on AI regulatory matters. The Commission will make this list public, allowing easy access to the designated contact points in each country. By August 2, 2025, and every two years thereafter, Member States must report to the Commission on the financial and human resources available to their national competent authorities. This reporting includes an assessment of whether those resources are adequate. The Commission will then pass this information to the European Artificial Intelligence Board (EAI Board) for review and possible recommendations on how to address any deficiencies.

Market Surveillance and Control of AI Systems in the Union Market

Market surveillance authorities (MSAs) must annually report to the European Commission and national competition authorities about AI market activity that may affect competition law. They also report annually on the prohibited practices they encountered and actions taken. For high-risk AI systems linked to products covered by existing EU harmonization laws, the same authorities designated under those laws will act as surveillance authorities. Member States can assign other authorities to manage AI system surveillance, provided they ensure coordination with sectoral authorities.
If existing sectoral laws already provide adequate safety and surveillance procedures for certain products (such as medical devices), those procedures will apply rather than the new AI-specific regulations. Market surveillance authorities are empowered to carry out remote inspections and enforcement actions to ensure compliance with AI regulations, such as accessing data from manufacturers. Surveillance of high-risk AI used by financial institutions falls under the authority of national financial regulators. Other relevant authorities may also be involved, provided coordination is ensured. For banks involved in the Single Supervisory Mechanism, surveillance findings relevant to financial supervision must be reported to the European Central Bank. High-risk AI systems used in sensitive areas like law enforcement or border management must be supervised by data protection authorities or other relevant authorities. The European Data Protection Supervisor is the surveillance authority for EU institutions, except the European Court of Justice when acting in a judicial capacity. Market surveillance authorities and the European Commission can propose joint investigations or activities to promote AI compliance and identify non-compliance across multiple Member States. The AI Office helps coordinate these efforts. Surveillance authorities must have access to the documentation, training data, and validation datasets used to develop high-risk AI systems, possibly through APIs or other technical means, subject to security measures. In specific cases, where necessary for assessing compliance, authorities can request access to the source code of AI systems after other verification methods have been exhausted.

Procedure for AI Systems that Present a Risk

AI systems presenting a risk are treated as "products presenting a risk" under Article 3, point 19 of Regulation (EU) 2019/1020. These systems are flagged if they endanger health, safety, or fundamental rights.
When a national market surveillance authority (MSA) identifies an AI system as risky, they evaluate whether it complies with EU AI regulations. Special attention is given to systems affecting vulnerable groups , and the authority must cooperate with other relevant national bodies, particularly where risks to fundamental rights are involved. If the AI system does not comply , the authority demands corrective actions (e.g., withdrawal, recall, or compliance adjustments), which must happen within 15 working days or sooner if required by harmonized laws. If non-compliance is not limited to one country, the national MSA must notify the European Commission and other EU Member States about the risk and actions being taken. The operator (entity deploying the AI system) is responsible for taking necessary corrective actions across all markets in the EU if an issue is identified. If the operator fails to do so within the prescribed time, the MSA can implement provisional measures such as prohibiting the sale or use of the AI system within the country. When corrective measures are imposed, the MSA must share detailed information about the AI system's risks with the Commission and other Member States, including data about non-compliance, origin, and the supply chain. The notification must specify whether the non-compliance stems from: Prohibited AI practices (e.g., AI systems manipulating behavior). High-risk AI systems failing to meet obligations (covered in Chapter III, Section 2). Failures in meeting standards for presumed compliance. Breaches of transparency requirements. Other national MSAs will share any additional information they have on the AI system and notify the Commission of their own measures. If they disagree with the initial MSA’s actions, they must raise objections . If no objections are raised within three months , the corrective measures are considered justified and enforced across the EU. 
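The 15-working-day correction window above is a hard deadline, which can be illustrated with a short working-day counter. This is a sketch only: it skips weekends but ignores national public holidays (which would also affect the count in practice), and the function name is illustrative, not taken from the regulation.

```python
from datetime import date, timedelta

def add_working_days(start: date, days: int) -> date:
    """Advance `days` working days from `start`, skipping Saturdays and Sundays.
    Public holidays are deliberately ignored in this sketch."""
    current = start
    remaining = days
    while remaining > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:  # Monday=0 .. Friday=4
            remaining -= 1
    return current

# A corrective-action demand issued on Friday, 3 January 2025:
print(add_working_days(date(2025, 1, 3), 15))  # 2025-01-24
```

A production version would substitute a jurisdiction-specific holiday calendar before relying on such a computation.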
Special Considerations for AI Systems Misclassified as Non-high-risk

If a market surveillance authority believes a system classified as non-high-risk should be considered high-risk, it will evaluate the system against Annex III (which lists the criteria for high-risk AI). If reclassification to high-risk is necessary, the provider is required to take corrective actions to bring the system into compliance with the regulation. The market surveillance authority must inform the Commission and other EU Member States of the results if the reclassification affects AI systems deployed across borders. Providers that intentionally misclassify AI systems to evade high-risk requirements face fines, as outlined below.

Enforcement of General-Purpose AI Model Obligations

The European Commission is the main authority responsible for supervising and enforcing rules related to general-purpose AI models. To handle these tasks, the Commission will delegate responsibilities to a specialized body called the AI Office. This does not interfere with how tasks are divided between the EU and its Member States. If a national market surveillance authority (such as a country's consumer safety body) needs help enforcing the AI rules, it can request that the Commission step in. This is done only where necessary and proportionate to the task. The AI Office is responsible for monitoring whether providers of general-purpose AI models are complying with the AI Act. This includes checking whether they follow approved codes of practice, which are guidelines they voluntarily agree to follow. Any business or individual that uses a general-purpose AI model (referred to as a "downstream provider") can file a complaint if they believe the AI model provider has violated the regulations. A valid complaint must: include the contact details of the AI provider; provide a clear description of the violation; and offer any additional relevant information to support the claim.
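The elements of a valid complaint amount to a checklist that a downstream provider could mirror in an intake form. The following is a minimal sketch under our own assumptions: the data structure and field names are hypothetical, since the regulation prescribes no particular format.

```python
from dataclasses import dataclass, field

@dataclass
class Complaint:
    """Hypothetical structure mirroring the validity conditions described above."""
    provider_contact_details: str = ""
    violation_description: str = ""
    supporting_information: list[str] = field(default_factory=list)

def is_valid(c: Complaint) -> tuple[bool, list[str]]:
    """Return (validity, missing elements).
    Supporting information is treated as optional, since the text asks only
    for 'any additional relevant information'."""
    missing = []
    if not c.provider_contact_details.strip():
        missing.append("contact details of the AI provider")
    if not c.violation_description.strip():
        missing.append("a clear description of the violation")
    return (not missing, missing)

ok, gaps = is_valid(Complaint(provider_contact_details="ACME AI GmbH, Berlin"))
print(ok, gaps)  # False ['a clear description of the violation']
```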
The Commission can ask AI providers to provide documentation and information, such as details about how their models are tested for safety and how they comply with regulations. Before formally requesting information, the AI Office may first engage the provider in a structured dialogue to clarify any concerns or gather preliminary information. When the Commission requests information, it must explain: the legal basis for the request, the purpose of the request, what specific information is needed, the deadline for providing the information, and the penalties for providing incorrect or incomplete information. The AI provider must supply the requested information. If the provider is a legal entity (such as a corporation), its authorized representative or lawyer can handle the submission, but the provider remains responsible for the accuracy. If the information provided by the AI provider is insufficient, or if the AI model is believed to pose a systemic risk, the AI Office can conduct its own evaluation of the AI model to check compliance with the rules. The Commission can hire independent experts (including those from the scientific panel) to conduct the evaluation on its behalf. The Commission can request technical access to an AI model, such as through its APIs (application programming interfaces) or even access to its source code, in order to perform the evaluation. The request for access must include: the legal basis and reasons for the request, the deadline for providing access, and the penalties for non-compliance. As with information requests, the AI provider (or its legal representative) must comply with the access request. The Commission will issue further detailed rules on how these evaluations should take place and how independent experts are involved.
If necessary, the Commission can ask AI providers to take specific corrective actions, such as: ensuring compliance with legal obligations, implementing risk mitigation measures if a serious risk is identified, or removing the AI model from the market if it poses significant risks. If the AI provider offers to implement appropriate measures to mitigate identified risks, the Commission can make these commitments legally binding, and no further action would be necessary.

Penalties

Member States are responsible for setting penalties and enforcement measures for violations of the regulation by AI operators. These measures can include both monetary and non-monetary penalties (such as warnings). Penalties must be effective, proportionate, and dissuasive. This means they should effectively discourage non-compliance without being unnecessarily harsh. Member States should also consider the impact on SMEs, including startups, ensuring that penalties do not disproportionately harm their economic viability. Member States must notify the European Commission about their penalty rules by the time the regulation comes into effect. Any future changes to these rules must also be promptly communicated to the Commission. Non-compliance with the prohibition of AI practices in Article 5 (covering the prohibited AI practices, explained above) can result in administrative fines of up to EUR 35 million or 7% of the violator's total worldwide annual turnover, whichever is higher. Non-compliance with other obligations (e.g., transparency, obligations of providers or distributors, etc.) can lead to fines of up to EUR 15 million or 3% of total worldwide turnover, whichever is higher. Supplying incorrect, incomplete, or misleading information to authorities or notified bodies may result in fines of up to EUR 7.5 million or 1% of global annual turnover. For SMEs (including startups), the maximum fine in each tier is the percentage of turnover or the set amount, whichever is lower.
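The fine ceilings above follow a simple pattern: each tier is capped at the higher of a fixed amount or a percentage of worldwide annual turnover, while for SMEs the lower of the two applies. A quick illustration using the figures quoted in the text (the function and tier names are ours, not the regulation's):

```python
def max_fine(tier: str, worldwide_turnover: float, is_sme: bool = False) -> float:
    """Maximum administrative fine for each tier described above.

    tier: 'prohibited' (Article 5 breaches), 'other_obligations',
    or 'incorrect_info'. Caps and percentages are those quoted in the text.
    """
    tiers = {
        "prohibited": (35_000_000, 0.07),         # EUR 35M or 7% of turnover
        "other_obligations": (15_000_000, 0.03),  # EUR 15M or 3% of turnover
        "incorrect_info": (7_500_000, 0.01),      # EUR 7.5M or 1% of turnover
    }
    fixed_cap, pct = tiers[tier]
    turnover_cap = pct * worldwide_turnover
    # General rule: whichever is higher; for SMEs: whichever is lower.
    return min(fixed_cap, turnover_cap) if is_sme else max(fixed_cap, turnover_cap)

# A company with EUR 1bn turnover breaching Article 5: 7% = EUR 70M > EUR 35M
print(max_fine("prohibited", 1_000_000_000))                # 70000000.0
# An SME with EUR 10M turnover supplying misleading information: 1% = EUR 100k
print(max_fine("incorrect_info", 10_000_000, is_sme=True))  # 100000.0
```

These are statutory ceilings only; as the next paragraph notes, the actual amount depends on factors such as gravity, duration, and cooperation.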
When deciding on fines, authorities will consider factors such as: the nature, gravity, and duration of the infringement; whether the infringement affected people and to what extent; whether the operator cooperated with authorities to remedy the issue; the economic benefit gained from non-compliance; and the intentional or negligent character of the violation. This ensures that penalties are tailored to the specific context of the violation. Depending on the legal system, Member States may allow fines to be imposed by national courts or other bodies. The mechanism used must have the same effect as the fines imposed under this regulation.

* * *

Prokopiev Law Group offers expert guidance on the AI regulatory landscape, ensuring full compliance with complex frameworks like the EU AI Act. With our experience and a global network of partners, we help clients meet AI compliance requirements in every jurisdiction, avoiding costly fines and operational setbacks. Whether you are developing or deploying AI systems across borders, our firm has the expertise to advise on regulatory obligations, from data protection to AI risk management. Contact us today to ensure your business is fully compliant and prepared for the future of AI regulation. The information provided is not legal, tax, investment, or accounting advice and should not be used as such. It is for discussion purposes only. Seek guidance from your own legal counsel and advisors on any matters. The views presented are those of the author and not any other individual or organization. Some parts of the text may be automatically generated. The author of this material makes no guarantees or warranties about the accuracy or completeness of the information.
- Regulation on Markets in Crypto Assets (MiCAR) Implementation
The Regulation on Markets in Crypto Assets (MiCAR) is set to reach a significant milestone on 30 June 2024, when the provisions concerning stablecoins come into effect. This brief explores the recent updates regarding Level 2 and Level 3 measures under MiCAR.

MiCAR Overview

MiCAR, an EU Level 1 legislative measure, establishes and harmonizes the regulatory framework for issuers and offerors of crypto-assets and crypto-asset service providers (CASPs). The regulation is directly effective across the European Union and fills the regulatory gaps not covered by existing EU financial services regimes.

Implementation of MiCAR

The implementation of MiCAR involves multiple EU Level 2 and Level 3 legislative measures, including Regulatory Technical Standards (RTS), Implementing Technical Standards (ITS), and Guidelines.

Level 2 Measures: MiCAR authorizes the European Commission to issue delegated acts autonomously. Additionally, it mandates the European Banking Authority (EBA) and the European Securities and Markets Authority (ESMA), sometimes in collaboration with the European Central Bank (ECB), to develop RTS and ITS for subsequent adoption by the European Commission. These standards provide detailed requirements for the effective application of MiCAR.

Level 3 Measures: Level 3 measures encompass the development of Guidelines by the EBA and ESMA. These Guidelines add clarity and direction to specific aspects of MiCAR.

Key Regulatory Bodies and Their Roles

European Commission: Empowered to create delegated acts and adopt the RTS and ITS developed by the EBA and ESMA.

European Banking Authority (EBA): Responsible for drafting RTS and ITS, particularly in areas requiring technical expertise and financial oversight.

European Securities and Markets Authority (ESMA): Shares responsibility with the EBA in developing technical standards and producing Guidelines.
European Central Bank (ECB): Collaborates with the EBA and ESMA where necessary, particularly on matters impacting financial stability and the broader economic environment.

European Commission Delegated Regulations Supplementing MiCAR

On 30 May 2024, the European Commission published several Delegated Regulations in the Official Journal, which supplement MiCAR. These regulations specify various operational and procedural aspects for the oversight and regulation of crypto-assets, particularly significant asset-referenced tokens (ARTs) and e-money tokens (EMTs). The regulations will become effective on 19 June 2024.

Specific Delegated Regulations

Commission Delegated Regulation (EU) 2024/1503: outlines the fees charged by the European Banking Authority (EBA) to issuers of significant ARTs and EMTs.

Commission Delegated Regulation (EU) 2024/1504: details the procedural rules for the EBA's authority to impose fines or periodic penalty payments on issuers of significant ARTs and EMTs.

Commission Delegated Regulation (EU) 2024/1506: specifies criteria for classifying ARTs and EMTs as significant. The criteria include factors such as market size, transaction volume, and systemic importance.

Commission Delegated Regulation (EU) 2024/1507: outlines the criteria and factors to be considered by the European Securities and Markets Authority (ESMA), the EBA, and competent national authorities (e.g., the Central Bank of Ireland) in relation to their intervention powers.

EBA Final Reports

On 7 May 2024, the European Banking Authority (EBA) published four final reports detailing regulations for market access by issuers of asset-referenced tokens (ARTs) and those seeking significant influence through qualifying holdings.

Final Reports on Market Access

1.
RTS on information required for applicants seeking authorisation to offer and trade ARTs: The Regulatory Technical Standards (RTS) specify the information required from applicants seeking authorization to offer and trade ARTs. Notably, the RTS clarify that: applicants must be legal entities or undertakings established within the EU; the authorization pertains only to public offerings or admissions to trading, not to issuance itself; and only issuers can apply for and be granted authorization. 2. ITS on information required for authorisation application: The Implementing Technical Standards (ITS) provide further details on the information requirements for authorization applications, including standardized forms, templates, and procedural guidelines. 3. RTS on information for assessment of a proposed acquisition of qualifying holdings in issuers of ARTs: These RTS outline the information for assessing proposed acquisitions of qualifying holdings in ART issuers. Required information includes: the identity and background of the acquirer; the financial soundness and past convictions of the acquirer; and the good repute, knowledge, skill, and experience of the acquirer's management body. 4. RTS on the approval process for white papers of ARTs issued by credit institutions: These RTS harmonize the approval process for white papers issued by credit institutions.

Governance, Conflicts of Interest, and Remuneration Reports

On 6 June 2024, the EBA released three final reports addressing governance, conflicts of interest, and remuneration policies for issuers under MiCAR. 1. Guidelines on the minimum content of the governance arrangements for issuers of ARTs: The guidelines specify the minimum content for governance arrangements, emphasizing proportionality and sound risk management, including risks related to money laundering, fraud, cyber threats, and compliance. 2.
RTS on Remuneration Policies: These RTS define the main governance processes and policy elements for the remuneration of significant ART issuers and electronic money institutions. 3. RTS on Conflicts of Interest: The RTS provide detailed policies and procedures for identifying, preventing, managing, and disclosing conflicts of interest, particularly those related to asset reserves. They align with frameworks under Directive 2014/65/EU (MiFID) and Directive 2013/36/EU (CRD), tailored for ART issuers.

Prudential Requirements Reports

On 13 June 2024, the EBA published six further reports covering own funds, liquidity, and recovery plans. 1. Guidelines on Recovery Plans: These guidelines specify the format and content of recovery plans, including governance, recovery options, and communication strategies. 2. RTS on Liquidity Management: These RTS outline the content and procedures for liquidity management policies, drawing from the Basel Standards and adapting them to the crypto-asset context. 3. RTS on Highly Liquid Instruments: These RTS identify financial instruments with minimal market, credit, and concentration risks, incorporating standards from the UCITS Directive and the LCR Delegated Regulation. 4. RTS on Liquidity Requirements for Reserve Assets: These standards specify the liquidity requirements for reserve assets, considering international regulatory frameworks and reports on crypto activities. 5. RTS on Own Funds Adjustment Procedure: These RTS detail the procedures and timeframes for adjusting own funds to 3% of the average reserve assets for significant ART issuers, as outlined in MiCAR Articles 43 and 44. 6. RTS on Stress Testing and Own Funds Requirements: These RTS provide criteria for competent authorities to assess the need for issuers to increase own funds, applying to both ART and EMT issuers.

Next Steps

The EBA’s draft RTS will come into force 20 days after publication in the Official Journal of the European Union.
The Guidelines will apply two months after the publication of all translations on the EBA website.

ESMA Final Reports on MiCAR Implementation

First Final Report on MiCAR (25 March 2024)

On 25 March 2024, ESMA released its first final report on MiCAR, focusing on several key areas to ensure comprehensive regulatory oversight and investor protection. The report includes proposals on: 1. CASP Authorisation: The report outlines the information requirements for CASPs seeking authorization to operate within the EU. This includes criteria that CASPs must meet to obtain and maintain their licenses, ensuring they comply with the necessary regulatory standards. 2. Notification by Financial Entities: Financial entities intending to provide crypto-asset services must notify their intent. The report specifies the notification process, ensuring that these entities provide all necessary information to the relevant authorities before commencing operations. 3. Acquisition of Qualifying Holdings: The report details the assessment criteria for the intended acquisition of qualifying holdings in a CASP. This includes evaluating the financial soundness, reputation, and suitability of the acquirer to maintain the integrity and stability of the crypto-asset market. 4. Complaint Handling by CASPs: The report proposes requirements for CASPs to effectively address and resolve complaints from investors and consumers.

Second Final Report on MiCAR (31 May 2024)

On 31 May 2024, ESMA published its second final report on MiCAR, focusing on rules concerning conflicts of interest for CASPs. This report includes Regulatory Technical Standards (RTS) to provide a clear framework for identifying, managing, and disclosing conflicts of interest. 1. Conflicts of Interest Policies and Procedures: The RTS set forth requirements for the policies and procedures CASPs must implement to identify, prevent, manage, and disclose conflicts of interest.
These requirements take into account the scale, nature, and range of crypto-asset services provided by CASPs, ensuring that all potential conflicts are adequately addressed. 2. Disclosure Methodology: The report outlines the methodology for the content of conflict of interest disclosures. This includes specific details on how CASPs should disclose conflicts to ensure transparency and inform investors and stakeholders about potential issues. Prokopiev Law Group provides extensive legal support to ensure your compliance with MiCAR and other global regulations. Our expertise spans key crypto jurisdictions, including the EU, the US, Singapore, and Hong Kong. We are well-versed in navigating complex regulatory landscapes, covering areas such as CASP authorization, conflict of interest management, and liquidity requirements. With our global network of partners, we ensure your project is compliant worldwide. Contact us for tailored advice on developing a legal strategy for your Web3 project. For more information, write to us today. The information provided is not legal, tax, investment, or accounting advice and should not be used as such. It is for discussion purposes only. Seek guidance from your own legal counsel and advisors on any matters. The views presented are those of the author and not any other individual or organization. Some parts of the text may be automatically generated. The author of this material makes no guarantees or warranties about the accuracy or completeness of the information.
- E-Money and Electronic Money Tokens (EMTs)
How do electronic money (e-money) and electronic money tokens (EMTs) differ, and what are the regulatory frameworks governing them within the European Economic Area (EEA)?

Definition and Regulation of E-Money Tokens (EMTs)

E-Money Tokens (EMTs): EMTs are a specific type of crypto-asset whose value is typically pegged to a single fiat currency, such as the Euro or US Dollar. These crypto-assets represent digital value or rights that can be transferred and stored electronically through distributed ledger technology (DLT) or similar systems. DLT operates as a synchronized information repository shared across multiple network nodes.

Regulatory Framework: The Markets in Crypto-Assets Regulation (EU) 2023/1114 (MiCA) outlines stringent conditions for the issuance of EMTs. Key points include: EMTs can only be issued by credit institutions or Electronic Money Institutions (EMIs) regulated by an EEA regulator; MiCA entered into force in June 2023 and will be fully applicable from December 30, 2024.

Issuer Obligations Under MiCA

Prudential, Organizational, and Conduct Requirements: Issuers must adhere to specific prudential standards, organizational requirements, and business conduct rules, including: issuing EMTs at par value; granting holders redemption rights at par value; and a prohibition on granting interest on EMTs.

White Paper Requirements: Issuers are mandated to publish a white paper with detailed information such as: issuer details (name, address, registration date, parent company if applicable, and potential conflicts of interest); EMT specifics (name, description, and details of developers); public offer details (total number of units offered); rights and obligations (redemption rights and complaints-handling procedures); the underlying technology; and associated risks and mitigation measures.

Significant e-money tokens (EMTs) are subject to higher capital requirements and enhanced oversight by the European Banking Authority (EBA).
Significant EMTs are defined as those which can scale up significantly, potentially impacting financial stability, monetary sovereignty, and monetary policy within the EU. The EBA requires issuers of significant EMTs to hold additional capital: specifically, own funds equal to the higher of €2 million or 3% of the average reserve assets. The EBA monitors these issuers closely, requiring detailed reports on their financial health and risk management practices. Issuers of significant EMTs must also adhere to comprehensive reporting obligations, providing regular updates on their liquidity positions, stress-testing results, and compliance with redemption obligations.

Definition and Regulation of Electronic Money
Electronic Money (E-Money): E-money is defined as electronically or magnetically stored monetary value representing a claim on the issuer. Its characteristics include: it is issued upon receipt of funds for the purpose of making payment transactions; it is accepted by entities other than the issuer; and it is not excluded by Regulation 5 of the European Communities (Electronic Money) Regulations 2011 (EMI Regulations).

Exclusions Under Regulation 5: The EMI Regulations exclude monetary value stored on specific payment instruments with limited use, and monetary value used for specific payment transactions by electronic communications service providers.

Electronic Money Institutions (EMIs): An EMI is an entity authorized to issue e-money under the EMI Regulations; such authorization is necessary for any e-money issuance within the EEA.

Comparative Analysis of E-Money and EMTs
Definition: E-money is electronically stored monetary value represented by a claim on the issuer; EMTs are crypto-assets whose value is usually linked to a single fiat currency.
Issuers: E-money is issued by EMIs upon receipt of funds for making payment transactions; EMTs are issued by EMIs and/or credit institutions.
Legal Regime: E-money is governed by the European Communities (Electronic Money) Regulations 2011; EMTs are governed by MiCA.
Status: E-money is not necessarily an EMT, but can be, depending on how it is transferred and stored; all EMTs are also considered e-money.

To ensure compliance with the latest regulations and navigate the Web3 legal landscape, please contact Prokopiev Law Group. Our expertise in cryptocurrency law, smart contracts, and regulatory compliance, combined with our extensive global network of partners, guarantees that your business adheres to both local and international standards.

The information provided is not legal, tax, investment, or accounting advice and should not be used as such. It is for discussion purposes only. Seek guidance from your own legal counsel and advisors on any matters. The views presented are those of the author and not any other individual or organization. Parts of the text may be automatically generated. The author of this material makes no guarantees or warranties about the accuracy or completeness of the information.
- Regulation on Artificial Intelligence in the European Union
The European Union has enacted a regulation on artificial intelligence (AI) designed to stimulate innovation, ensure the trustworthiness of AI systems, and safeguard fundamental rights (the Regulation or the AI Act). The Regulation provides standardized rules and responsibilities for providers, deployers, and users of AI systems within the EU. It also extends to third-country entities whose AI systems impact the EU market or individuals within the EU. Additionally, the Regulation establishes governance structures, enforcement mechanisms, and penalties for non-compliance at both EU and national levels.

Legal Basis and Scope
The AI Act is established on the foundation of Articles 16 and 114 of the Treaty on the Functioning of the European Union (TFEU). It aims to improve the internal market by creating a legal framework specifically for the development, market placement, and usage of artificial intelligence (AI) systems within the Union.

Uniform Legal Framework
AI systems can be deployed across various sectors and regions and easily circulate throughout the Union. Diverging national rules can fragment the internal market and reduce legal certainty for operators. The AI Act therefore ensures a consistently high level of protection across the Union, promoting trustworthy AI while preventing obstacles to the free circulation, innovation, deployment, and uptake of AI systems.

Complementarity with Existing Laws
The Regulation complements Union laws on data protection, consumer protection, fundamental rights, employment, and product safety. It does not affect the rights and remedies such acts provide, including compensation for damages and social policy laws related to employment and working conditions.

Exclusions
AI systems developed solely for scientific research and development are excluded from the Regulation's scope until market placement or service provision. AI systems used for military, defense, or national security purposes are also excluded.
However, if these systems are used for civilian purposes, they must comply with the AI Act.

Data Protection Compliance
The Regulation complements existing data protection laws, ensuring AI systems processing personal data adhere to the General Data Protection Regulation (GDPR) and other relevant regulations. It does not seek to alter the application of existing Union laws governing personal data processing but rather facilitates the effective implementation and exercise of data subjects' rights and remedies.

Third-Country Entities
The Regulation applies to AI systems that are not placed on the market within the European Union but whose outputs are utilized within the Union. This includes scenarios where: Contractual Agreements: an operator based in the EU contracts services involving AI systems from an operator established in a third country, and the AI system processes data lawfully collected within the EU and transfers the output back to the EU operator for use within the Union. Impact on Individuals: the AI Act applies to AI systems used in a third country that produce outputs affecting individuals within the EU, regardless of the system's physical location or the operator's establishment.

The Regulation does not apply to public authorities of third countries or international organizations when acting within the framework of cooperation or international agreements concluded at the Union or national level for law enforcement and judicial cooperation. These entities are exempted provided they offer adequate safeguards for the protection of fundamental rights and freedoms. This includes: Bilateral Agreements: agreements established between Member States and third countries, or between the EU, its agencies, and international organizations. Adequate Safeguards: the relevant authorities assess whether these agreements include sufficient safeguards for the protection of fundamental rights and freedoms.

Prohibited AI Practices
1. Manipulative Techniques
AI systems that employ subliminal components or other manipulative techniques designed to distort human behavior in a manner that causes or is likely to cause significant harm are strictly prohibited. These manipulative techniques include, but are not limited to, the use of stimuli beyond human perception to nudge individuals towards specific behaviors, significantly impairing their autonomy, decision-making, and free choice.

2. Exploitation of Vulnerabilities
AI systems that exploit the vulnerabilities of specific groups due to their age, disability, or social and economic conditions, resulting in behaviors that materially distort their actions and cause significant harm, are banned. This includes AI systems that exploit individuals' lack of understanding or capacity to resist specific influences, leading to detrimental outcomes.

3. Social Scoring by Public Authorities
AI systems utilized by public authorities for social scoring, which leads to discriminatory outcomes or unjustly limits individuals' access to essential services, are prohibited. For example, systems that evaluate or classify individuals based on their social behavior, personal characteristics, or predicted behavior across various contexts, resulting in detrimental treatment unrelated to the original data context.

4. Remote Biometric Identification in Public Spaces for Law Enforcement
Using real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes is generally prohibited. Exceptions are strictly limited to narrowly defined situations where such use is necessary to achieve a substantial public interest that outweighs the risks. These situations include: locating or identifying missing persons, including victims of crime; and preventing imminent threats to life or physical safety, such as terrorist attacks.
Identifying perpetrators or suspects of serious criminal offenses listed in an annex to the AI Act, where the offense is punishable by a custodial sentence of at least four years. The use of such systems must be subject to prior judicial or independent administrative authorization, except in cases of urgency where obtaining prior authorization is impractical. In such urgent cases, the use must be limited to the minimum necessary duration, and the reasons for not obtaining prior authorization must be documented and submitted for approval as soon as possible. 5. Biometric Categorization and Emotion Recognition AI systems used for biometric categorization, which assign individuals to specific categories based on biometric data, are prohibited if they result in discrimination or harm fundamental rights. Additionally, AI systems intended for emotion recognition in sensitive contexts such as workplaces or educational settings are banned due to their potential for misuse and the significant privacy risks involved. Risk Assessment and Mitigation Providers and deployers of AI systems must conduct risk assessments to ensure their systems do not fall into the prohibited categories. This includes evaluating the potential impact on individuals' autonomy, decision-making, and fundamental rights. Transparency and accountability measures must be in place to ensure compliance with these prohibitions, including maintaining documentation of AI system design, development, and deployment processes, allowing for effective monitoring and enforcement by relevant authorities. High-Risk AI Systems 1. General Criteria for Classification of High-Risk AI Systems An AI system is classified as high-risk if it meets specific conditions relating to safety components and conformity assessments. These conditions are detailed with reference to the Union harmonization legislation listed in Annex I of the Regulation. 
The legislation includes: Regulation (EC) No 300/2008: Concerning the safety and security of civil aviation. Regulation (EU) No 167/2013: Regarding the approval and market surveillance of agricultural and forestry vehicles. Regulation (EU) No 168/2013: Relating to the approval and market surveillance of two- or three-wheel vehicles and quadricycles. Directive 2014/90/EU: On marine equipment, ensuring the compliance of equipment used on EU ships. Directive (EU) 2016/797: On the interoperability of the rail system within the European Union. Regulation (EU) 2018/858: On the approval and market surveillance of motor vehicles and their trailers, and systems, components, and separate technical units intended for such vehicles. Regulation (EU) 2018/1139: Establishing common rules in the field of civil aviation and establishing a European Union Aviation Safety Agency. Regulation (EU) 2019/2144: On type-approval requirements for motor vehicles and their trailers, and systems, components, and separate technical units intended for such vehicles, with a focus on general safety and the protection of vehicle occupants and vulnerable road users. 2. Additional Criteria In addition to the criteria mentioned above, AI systems listed in Annex III are also classified as high-risk. These systems include those used in: Biometrics: Remote biometric identification systems, biometric categorization, and emotion recognition systems. Critical Infrastructure: AI systems used in the management and operation of critical digital infrastructure, road traffic, and the supply of water, gas, heating, or electricity. Education and Vocational Training: Systems determining access or admission to educational institutions, evaluating learning outcomes, and monitoring prohibited behavior during tests. Employment and Workforce Management: AI systems used for recruitment, selection, monitoring, and performance evaluation of employees. 
Essential Services and Benefits: Systems used by public authorities for evaluating eligibility for public assistance, creditworthiness, risk assessment in life and health insurance, and emergency response services. 3. Exemptions An AI system will not be considered high-risk if it does not pose a significant risk of harm to the health, safety, or fundamental rights of natural persons, including not materially influencing the outcome of decision-making. This applies under specific conditions: The AI system is intended to perform a narrow procedural task. It is designed to improve the result of a previously completed human activity. It detects decision-making patterns or deviations without replacing or influencing the human assessment. It performs a preparatory task relevant to the assessment purposes listed in Annex III. However, AI systems referred to in Annex III that perform profiling of natural persons are always considered high-risk. Providers who consider their AI systems, listed in Annex III, as not high-risk must document their assessment before placing the system on the market. These providers are subject to the registration obligation set out in Article 49(2). Upon request, they must provide the assessment documentation to national competent authorities. Compliance and Enforcement 1. General Obligations Providers of high-risk AI systems must ensure that their systems comply with the requirements set out in the AI Act before they are placed on the market or put into service. These obligations include: Risk Management System: Providers must establish and implement a risk management system that identifies, analyzes, and mitigates risks associated with the AI system throughout its lifecycle. This includes both pre-market and post-market activities. Quality Management System: Providers must establish a quality management system that ensures the AI system consistently meets the requirements of the Regulation. 
This system must include documented policies and procedures for design, development, testing, and monitoring. Technical Documentation: Providers must prepare and maintain detailed technical documentation for each AI system. This documentation must include information on the system's design, development, testing, and risk management measures. Conformity Assessment: Providers must ensure that the AI system undergoes the appropriate conformity assessment procedure before it is placed on the market or put into service. This includes ensuring that the system meets all applicable requirements and standards. Post-Market Monitoring: Providers must establish and maintain a post-market monitoring system to continuously assess the AI system's performance and safety. This includes collecting and analyzing data on the system's operation and any incidents or malfunctions. 2. Specific Requirements Providers must also ensure compliance with the following specific requirements for high-risk AI systems: Human Oversight: Providers must design AI systems to enable effective human oversight, ensuring that individuals can intervene in the system's operation and prevent or mitigate potential harm. Accuracy, Robustness, and Cybersecurity: Providers must ensure that the AI system is accurate, robust, and secure. This includes implementing measures to protect the system from cybersecurity threats and ensuring that it can withstand foreseeable operating conditions. Transparency and Traceability: Providers must ensure that the AI system operates transparently, providing clear information on its capabilities, limitations, and decision-making processes. This includes maintaining detailed records to ensure traceability and accountability. Data Governance: Providers must implement data governance measures to ensure the quality and integrity of data used by the AI system. 
This includes procedures for data collection, storage, and processing, as well as measures to protect data privacy and security. 3. Obligations of Importers Importers must ensure that AI systems they place on the market comply with the requirements of the AI Act. This includes: Verification of Conformity: Importers must verify that the provider has conducted the appropriate conformity assessment procedure and that the AI system meets all applicable requirements. Technical Documentation and Information: Importers must ensure that the provider has prepared the necessary technical documentation and made it available upon request by national authorities. Post-Market Monitoring and Reporting: Importers must monitor the performance of AI systems they place on the market and report any incidents or non-compliance to the relevant national authorities. Contact Information: Importers must include their name, registered trade name or trademark, and contact address on the AI system or its packaging, ensuring that end-users and authorities can easily identify and contact them. Storage and Transport: Importers must ensure that the AI system is stored and transported under conditions that do not affect its compliance with the requirements of the AI Act. 4. Obligations of Distributors Distributors must verify that the AI systems they make available on the market comply with the requirements of the AI Act. This includes: Verification of Compliance: Distributors must verify that the provider and importer have fulfilled their obligations under the Regulation, including the completion of the conformity assessment procedure and the availability of technical documentation. Information to Authorities: Distributors must provide relevant information to national authorities upon request and cooperate with them to ensure compliance with the AI Act. 
Storage and Transport: Distributors must ensure that AI systems are stored and transported under conditions that do not affect their compliance with the requirements of the Regulation. Post-Market Monitoring: Distributors must participate in post-market monitoring activities and report any incidents or non-compliance to the relevant national authorities.

Penalties
The Regulation mandates that Member States establish penalties for non-compliance that are effective, proportionate, and dissuasive. Specific measures include:

1. Fines
Non-compliance with the prohibition of AI practices referred to in Article 5 shall result in administrative fines of up to EUR 35,000,000 or, if the offender is an undertaking, up to 7% of its total worldwide annual turnover for the preceding financial year, whichever is higher.
Non-compliance with other provisions related to operators or notified bodies (excluding those laid down in Article 5) shall be subject to administrative fines of up to EUR 15,000,000 or, if the offender is an undertaking, up to 3% of its total worldwide annual turnover for the preceding financial year, whichever is higher. This includes obligations under: Article 16 (obligations of providers); Article 22 (obligations of authorised representatives); Article 23 (obligations of importers); Article 24 (obligations of distributors); Article 26 (obligations of deployers); Articles 31, 33(1), 33(3), 33(4), and 34 (requirements and obligations of notified bodies); and Article 50 (transparency obligations for providers and deployers).
Supplying incorrect, incomplete, or misleading information to notified bodies or national competent authorities in response to a request shall result in administrative fines of up to EUR 7,500,000 or, if the offender is an undertaking, up to 1% of its total worldwide annual turnover for the preceding financial year, whichever is higher.

2. Suspension or Withdrawal
In cases of serious non-compliance, Member States may suspend or withdraw AI systems from the market to prevent further infractions and mitigate any ongoing risks.

3. Corrective Actions
Providers of non-compliant AI systems may be required to undertake mandatory corrective actions to ensure conformity with the AI Act. This may involve updating system functionalities, revising operational processes, or enhancing data protection measures. Formal Non-Compliance Measures: the market surveillance authority of a Member State may require providers to address formal non-compliance such as improper CE marking, an incorrect EU declaration of conformity, failure to register in the EU database, the lack of an authorised representative, or unavailable technical documentation. Persistent non-compliance can lead to further restrictions, prohibition, recall, or withdrawal of the high-risk AI system from the market.

4. Union AI Testing Support Structures
The Commission designates Union AI testing support structures to provide independent technical or scientific advice to market surveillance authorities.

Remedies
The Regulation ensures that individuals and entities affected by non-compliant AI systems have access to appropriate remedies, which include:

1. Complaints
Any natural or legal person who believes there has been an infringement of the Regulation can submit reasoned complaints to the relevant market surveillance authority. These complaints must be considered in the course of market surveillance activities and handled according to established procedures.

2. Judicial Redress
Affected individuals have the right to seek judicial redress for damages caused by non-compliant AI systems. This includes the right to obtain clear and meaningful explanations from the deployer of high-risk AI systems when a decision significantly affects their health, safety, or fundamental rights.

3. Right to Explanation
Individuals significantly affected by decisions based on high-risk AI systems listed in Annex III are, with certain exceptions, entitled to an explanation of the role of the AI system in the decision-making process and the main elements of the decision taken.

Protection of Whistleblowers
Persons reporting infringements of the Regulation are protected under Directive (EU) 2019/1937, ensuring they are safeguarded when reporting such violations.

European Artificial Intelligence Board
The European Artificial Intelligence Board (the Board) is established to support the consistent application of the AI Regulation across the Union. The Board comprises representatives from: national supervisory authorities responsible for the implementation of the Regulation; the European Data Protection Supervisor; and the European Commission, which chairs the Board.
The Board's primary responsibilities include:
Advising and Assisting the Commission: The Board advises and assists the European Commission in matters related to AI regulation, including providing opinions and recommendations.
Promoting Cooperation: The Board promotes cooperation between national supervisory authorities to ensure consistent application and enforcement of the AI Act across Member States.
Issuing Guidelines and Recommendations: The Board issues guidelines, recommendations, and best practices to facilitate the implementation of the Regulation, ensuring a harmonized approach to AI governance.
Facilitating Exchange of Information: The Board facilitates the exchange of information among national authorities, enhancing the effectiveness of supervision and enforcement actions.
The Board operates based on internal rules of procedure, which detail its functioning, including decision-making processes and meeting schedules. The rules of procedure are adopted by a simple majority vote of the Board members. The Board may establish subgroups to address specific issues or tasks.
These subgroups are composed of Board members or external experts as needed. The establishment of subgroups must be approved by the Board. National Supervisory Authorities Each Member State must designate one or more national supervisory authorities responsible for monitoring the application of the AI Act. The responsibilities of national supervisory authorities include: Monitoring and Enforcement: Ensuring that AI systems placed on the market or put into service in their jurisdiction comply with the Regulation. Investigations and Inspections: Conducting investigations and inspections to verify compliance, including the power to access premises and documents. Handling Complaints: Receiving and handling complaints from individuals and entities regarding potential non-compliance with the AI Act. Imposing Penalties: Imposing administrative penalties and corrective measures for non-compliance, as outlined in the Regulation. National supervisory authorities must operate independently and be free from external influence. Member States must ensure that these authorities have adequate resources, including financial, technical, and human resources, to effectively perform their duties. * * * For more information on how the AI Regulation can ensure compliance and foster innovation within the web3 landscape, please reach out to us. Prokopiev Law Group, with its broad global network of partners, ensures your compliance worldwide. Popular legal inquiries in the web3 sector include regulatory compliance for decentralized finance (DeFi), NFT marketplaces, and blockchain gaming platforms. Our team is well-equipped to address these complexities and provide tailored legal solutions to navigate the evolving regulatory environment of web3 technologies. Contact us to ensure your web3 projects align with current legal standards and maximize their potential within the global market. 
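The tiered fine caps set out in the Penalties section above all follow the same pattern: the higher of a fixed amount or a percentage of total worldwide annual turnover. As a minimal numerical sketch (the tiers reflect the article; the function, tier labels, and example turnover figure are illustrative only):

```python
# Fine caps per tier: (fixed cap in EUR, share of worldwide annual turnover).
# Tier labels are our own shorthand, not terms from the AI Act.
FINE_TIERS = {
    "prohibited_practices_art5": (35_000_000, 0.07),
    "operator_or_notified_body_obligations": (15_000_000, 0.03),
    "incorrect_or_misleading_information": (7_500_000, 0.01),
}

def max_fine_eur(tier: str, annual_turnover_eur: float) -> float:
    """Maximum administrative fine for an undertaking:
    the higher of the fixed cap or the turnover percentage."""
    fixed_cap, turnover_share = FINE_TIERS[tier]
    return max(fixed_cap, turnover_share * annual_turnover_eur)

# Hypothetical undertaking with EUR 1bn worldwide turnover:
print(max_fine_eur("prohibited_practices_art5", 1_000_000_000))  # 7% = 70m exceeds the 35m cap
```

For large undertakings the percentage leg dominates, which is what makes the caps scale with the offender's size.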
The information provided is not legal, tax, investment, or accounting advice and should not be used as such. It is for discussion purposes only. Seek guidance from your own legal counsel and advisors on any matters. The views presented are those of the author and not any other individual or organization. Some parts of the text may be automatically generated. The author of this material makes no guarantees or warranties about the accuracy or completeness of the information.
- Generative AI and EUDPR Compliance
The EDPS has issued Orientations on generative AI and personal data protection to provide guidance to EU institutions, bodies, offices, and agencies (EUIs) on processing personal data using generative AI systems. These guidelines aim to ensure compliance with Regulation (EU) 2018/1725 (EUDPR). Although the Regulation does not explicitly mention AI, data protection principles must be interpreted and applied so as to safeguard individuals' fundamental rights and freedoms.

Definition of Generative AI
Generative AI, a subset of artificial intelligence, uses machine learning models to produce outputs such as text, images, and audio. These models, known as foundation models, serve as the core architecture for more specialized models fine-tuned for specific tasks. Foundation models are trained on extensive datasets, including publicly available information, and can handle complex structures like language, images, and audio. Large language models (LLMs) are foundation models trained on vast amounts of text data to generate natural language responses. Applications of generative AI include code generation, virtual assistants, content creation, language translation, speech recognition, medical diagnosis, and scientific research tools.

Use of Generative AI by EUIs
EUIs can develop, deploy, and use generative AI systems for public services, provided they comply with applicable legal requirements and ensure respect for fundamental rights and freedoms. The Regulation applies fully to personal data processing activities involving generative AI, irrespective of the technologies used. EUIs may use generative AI solutions developed internally or procured from external providers; in such cases, they must determine the specific roles (controller, processor, joint controllership) for each processing operation and their implications under the Regulation. Transparency, ethical development, and adherence to a risk-based approach are essential to ensure trustworthy AI.
Identifying Personal Data Processing in Generative AI Systems
Personal data processing can occur at various stages in the lifecycle of a generative AI system, including dataset creation, training, inference, and user interactions. Developers and providers must verify whether personal data is in fact processed at any of these stages, even where anonymized or synthetic data is claimed to be used. The EDPS cautions against web scraping for data collection, as it may violate data protection principles.

Role of Data Protection Officers (DPOs)
Article 45 of the Regulation outlines the tasks of DPOs, including advising on data protection obligations, monitoring internal compliance, and acting as a contact point for data subjects and the EDPS. In the context of generative AI, DPOs must understand the system's lifecycle, including data processing mechanisms, decision-making processes, and the impact on individuals' rights. They should also advise on Data Protection Impact Assessments (DPIAs) and ensure transparency and documentation of processing activities.

Conducting DPIAs for Generative AI Systems
A DPIA is required before processing operations that are likely to involve high risks to individuals' rights and freedoms, particularly when using new technologies like generative AI. The DPIA should assess risks, document mitigation actions, and ensure compliance with the data protection by design and by default principles. Controllers must consult the EDPS if reasonable measures cannot mitigate the risks.

Lawfulness of Personal Data Processing
The processing of personal data in generative AI systems must be based on one of the lawful grounds listed in the Regulation; for special categories of data, an exception under the Regulation must apply. Legal grounds include performing tasks in the public interest or complying with legal obligations. Consent may be used but must meet specific legal requirements. EUIs must ensure that providers comply with data protection principles, especially when using legitimate interest as a legal basis.
Principle of Data Minimization
Data minimization requires that personal data processing be limited to what is necessary for the stated purposes. This principle applies throughout the lifecycle of the AI system. EUIs must use high-quality, well-curated datasets and implement technical procedures to minimize data use.

Data Accuracy
Data controllers must implement measures to ensure data accuracy, including verifying dataset content, regular monitoring, and human oversight. Contractual assurances from third-party providers on data accuracy procedures are necessary. Despite these efforts, generative AI systems may still produce inaccurate results, so outputs require careful accuracy assessment.

Informing Individuals about Data Processing
EUIs must provide clear and comprehensive information to individuals about personal data processing in generative AI systems, including details about data sources, processing activities, and the logic of automated decisions. Transparency policies help mitigate risks and ensure compliance. Data protection notices should be regularly updated to reflect changes in data processing activities.

Automated Decision-Making
Generative AI systems may involve automated decision-making, requiring compliance with Article 24 of the Regulation. EUIs must ensure safeguards for individuals, including the right to human intervention, to express their views, and to contest decisions. The use of AI in decision-making must be carefully considered to avoid unfair, unethical, or discriminatory outcomes.

Ensuring Fair Processing and Avoiding Bias
Bias in generative AI systems can arise from training data, algorithms, or developers. Biases can lead to unfair processing and discrimination, affecting individuals' rights and freedoms. EUIs must ensure datasets are representative and implement accountability mechanisms to monitor and correct biases. Regular testing and validation help identify and mitigate bias.
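One simple form such testing can take is comparing outcome rates across groups (a demographic-parity check). The sketch below is illustrative only — the function names and sample data are hypothetical, and a real bias audit for an EUI would require richer metrics and legal review:

```python
from collections import defaultdict

def positive_rates(records):
    """Rate of favourable outcomes per group, e.g. from an AI screening tool."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for group, outcome in records:
        counts[group][0] += outcome
        counts[group][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

def parity_gap(records):
    """Largest difference in favourable-outcome rates between any two groups."""
    rates = positive_rates(records)
    return max(rates.values()) - min(rates.values())

# Hypothetical screening decisions: (group, 1 = favourable decision)
sample = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
print(parity_gap(sample))  # ~0.33 gap between groups A and B
```

A gap near zero suggests parity on this one metric; a large gap is a signal to investigate the training data and decision logic, not proof of unlawful discrimination on its own.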
Exercising Individual Rights

Generative AI systems present challenges for exercising individual rights, such as access, rectification, erasure, and objection. Proper dataset management and traceability support the exercise of these rights, and data minimization techniques can mitigate the associated risks. EUIs must implement measures to ensure the effective exercise of individual rights throughout the AI system lifecycle.

Data Security

Generative AI systems may pose unique security risks, requiring specific controls and continuous monitoring. EUIs must implement technical and organizational measures to ensure data security, including regular risk assessments and updates. Security measures should address known vulnerabilities and evolving threats.

Conclusion

The EDPS Orientations provide a framework for EUIs to develop, deploy, and use generative AI systems while ensuring compliance with data protection principles under the Regulation. Adherence to data protection by design and by default, transparency, accountability, and continuous monitoring is essential to safeguard individuals' rights and freedoms.

Prokopiev Law Group is well-equipped to ensure your compliance with evolving Web3 regulations, leveraging our extensive global network of partners. We offer expert guidance on issues such as decentralized finance (DeFi) regulations, NFT legal frameworks, smart contract governance, and cross-border crypto-asset reporting standards. Please contact us for comprehensive advice on navigating the complex regulatory landscape of Web3, including matters like the FATF Travel Rule, MiCA in the EU, and on-chain dispute resolution mechanisms. Our expertise spans worldwide jurisdictions, ensuring compliance wherever your operations are based. Please write to us for tailored solutions to your Web3 legal needs. The information provided is not legal, tax, investment, or accounting advice and should not be used as such. It is for discussion purposes only.
Seek guidance from your own legal counsel and advisors on any matters. The views presented are those of the author and not any other individual or organization. Some parts of the text may be automatically generated. The author of this material makes no guarantees or warranties about the accuracy or completeness of the information.
- Marketing Guidelines for Crypto Entrepreneurs
Running a web3 venture means navigating a complex set of crypto-marketing rules worldwide, and founders are often not fully prepared for the regulatory challenges. These guidelines will help mitigate common issues and avoid major risks. This document is not legal advice and does not cover every aspect, but it clearly explains the main rules to follow. Our previous article covered one side of crypto marketing; this document offers more detailed material to guide your marketing efforts further.

General Recommendations

Ensure that all information provided is honest and easy to understand. Avoid complex language; present information in a straightforward manner. Tailor your messages to suit the knowledge level of your audience. Provide enough information for informed decision-making, and never hide critical details.

Explain Risks Clearly

Always present a balanced view of risks and potential returns. When discussing returns, include the associated risks. Do not downplay the risks of dealing with cryptoassets; transparency is crucial.

Avoid Exaggeration

Refrain from making unrealistic claims. All assertions must be backed by verifiable evidence to maintain credibility and trustworthiness.

Include All Fee Information

Clearly state all costs, fees, and charges. If fee structures are complex, provide detailed information to ensure complete transparency.

Use Accurate and Current Information

Ensure that all facts, figures, and statements are up-to-date and correct. Avoid misleading graphics or images, and always include a publication date with any piece of information.

Use of Terms like "Guaranteed" or "Secure"

These terms should only be used if they are accurate and verified. Provide all necessary information to explain them clearly and avoid misunderstandings.

Highlight Informational Nature

Always clarify that the information provided is for informational purposes.
Make it clear that users should perform their own research or consult a financial advisor before making any decisions.

Avoiding Financial Terminology

When a project involves financial activities, investment implications, or similar elements, and is uncertain about the appropriate jurisdiction for licensing or registration, it is crucial to avoid language that could trigger the application of financial or securities laws. To ensure compliance, avoid the following terms and phrases:

Investment Advice

Avoid terms like "investment advice," "investment strategy," or "investment recommendations." Instead, use "informational service" or "educational content." This ensures the information is understood to be for informational purposes only.

Financial Planning

Refrain from using terms like "financial planning," "wealth management," or "financial strategy." Use phrases like "general financial education" or "financial literacy content" to avoid the implications of personalized financial planning.

Securities Implications

Do not use language that suggests the offering of securities, such as "equity," "shares," or "dividends." Instead, describe your offerings as "digital assets" or "utility tokens," if applicable, making it clear that they do not confer ownership or profit-sharing rights.

Guaranteed Returns

Avoid statements that imply guaranteed returns or risk-free investments, such as "guaranteed profit," "risk-free investment," or "secure returns." Use disclaimers and emphasize risks.

Personalized Recommendations

Do not suggest specific actions to individual users, such as "you should invest in this" or "this is the best option for you." Offer general information applicable to a broad audience, like "explore various options based on general criteria" or "consider different strategies based on risk tolerance."

Financial Terminology in Marketing

Ensure that marketing materials do not include terms that could be interpreted as financial promises or advice.
Avoid phrases like "maximize your investment" or "secure your financial future." Stick to neutral language focusing on education and information.

Inducements

Avoid creating inducements when communicating with users. Inducements are steps that persuade or encourage someone to engage in specific activities. Refrain from high-pressure sales tactics that push users into quick decisions, such as "Hurry, invest now, or miss out forever!" Always consider whether your message significantly encourages clients to act. When in doubt, avoid language that directly invites or strongly persuades someone without appropriate disclaimers.

Promoted Materials

Clearly label all promotional content with "Sponsored" or "Advertisement" to inform viewers of the endorsement's nature.

Influencers

Influencers should always disclose paid relationships or conflicts of interest. They must ensure that endorsements are truthful and not misleading, and they should disclose when an endorsement is part of a paid partnership.

Profit and Value Statements

Do not state that a native token will share any profits or upside. Avoid discussing how a native token could increase in value. Do not promise profits.

Discourage Speculative Behavior

Do not encourage "buy low, sell high" behavior. Do not promise any value growth, even indirectly.

Stick to Facts

Only share objective, factual information when a user inquires. Use neutral, informative language. Avoid speculative statements or any language that could be interpreted as promoting an investment.

Conclusion

Adhering to these marketing guidelines will help crypto entrepreneurs navigate the complex regulatory environment and foster trust with their audience. These guidelines are not exhaustive and cover only several core aspects, but they can still serve as basic rules to follow.
Always remember to consult with legal and financial professionals for comprehensive compliance and to stay updated with the evolving regulations in the crypto space. Prokopiev Law Group has a broad global network of partners, ensuring your compliance worldwide. For more information, write to us, and we'll assist you in staying ahead in the dynamic world of Web3 and crypto regulations. The information provided is not legal, tax, investment, or accounting advice and should not be used as such. It is for discussion purposes only. Seek guidance from your own legal counsel and advisors on any matters. The views presented are those of the author and not any other individual or organization. Some parts of the text may be automatically generated. The author of this material makes no guarantees or warranties about the accuracy or completeness of the information.
- Compliance with the EU AI Act
The European Union (EU) has enacted the AI Act, a comprehensive regulation governing artificial intelligence (AI) systems. The Council of the EU approved the AI Act on 21 May 2024, and the Act will affect businesses within the EU as well as those outside the EU with customers in the EU.

Scope of Application

The AI Act applies to: Businesses in the 27 EU Member States. Businesses in Norway, Iceland, and Liechtenstein under the European Economic Area (EEA) arrangements. Non-EU businesses, including those in the UK and USA, with customers in the EU. Any business whose AI system outputs are used within the EEA.

Definition of AI System

According to Article 3 of the AI Act, an "AI system" is a machine-based system operating with varying levels of autonomy that may exhibit adaptiveness after deployment. Such systems generate outputs such as predictions, content, recommendations, or decisions influencing physical or virtual environments.

Implementation Timeline

Late June/Early July 2024: AI Act Becomes Law

The AI Act will be published in the Official Journal and become binding law 20 days after publication. Businesses must begin compliance preparations immediately upon publication.

Late 2024: Prohibitions

Effective six months post-enactment, prohibitions will cover AI applications posing unacceptable risks to health, safety, or fundamental rights, as outlined in Article 5. Prohibited uses include: AI systems employing subliminal or manipulative techniques causing significant harm. AI exploiting vulnerabilities such as age or disability, leading to significant harm. Social scoring based on personal characteristics, resulting in unjustified detrimental treatment. Unconsented expansion of facial-recognition databases. Emotion-inference systems in workplaces or educational settings (exceptions apply). Biometric categorization inferring sensitive characteristics (exceptions for law enforcement). Predictive policing based solely on profiling.
Real-time remote facial recognition in public spaces (exceptions apply). Non-compliance penalties include fines of up to €35 million or 7% of worldwide turnover (Article 99(3)).

Summer 2025: General-Purpose AI Regime

Providers of general-purpose AI models must meet transparency obligations regarding training data and copyright. Additional obligations apply to AI models with "systemic risk," as designated by the European Commission, including model evaluation, risk mitigation, incident reporting, cybersecurity, and energy consumption monitoring. Non-compliance penalties are up to €15 million or 3% of worldwide turnover (Article 101).

Summer 2026: High-Risk AI Regime

High-risk AI includes systems subject to EU product safety regulations (Annex I) and specifically classified high-risk AI (Annex III). Obligations include continuous risk management, technical documentation, transparency, human oversight, accuracy, robustness, and cybersecurity. Data governance for training, validation, and testing must ensure relevance, representativeness, and minimal errors. Penalties for non-compliance are up to €15 million or 3% of worldwide turnover (Article 99(4)).

Summer 2026: Low-Risk Transparency Obligations

Certain AI systems not classified as high-risk must ensure transparency, informing users when they are interacting with AI systems or AI-generated outputs. This applies to chatbots, emotion recognition, biometric categorization, and AI-generated content. Penalties for non-compliance are up to €15 million or 3% of worldwide turnover (Article 99(4)).

Summer 2027: High-Risk Systems under Product Safety Regulation

For AI integrated into products subject to Annex I regulations, the high-risk regime will apply, with compliance requirements and penalties mirroring those for Annex III systems. For further information on ensuring compliance with the AI Act and other emerging regulations, contact Prokopiev Law Group.
With our broad global network of partners, we guarantee comprehensive compliance solutions worldwide. Our expertise extends to current widespread Web3 legal concerns, including decentralized finance (DeFi) regulations, smart contract enforceability, and data privacy in blockchain applications. Please write to us to navigate these legal complexities effectively. The information provided is not legal, tax, investment, or accounting advice and should not be used as such. It is for discussion purposes only. Seek guidance from your own legal counsel and advisors on any matters. The views presented are those of the author and not any other individual or organization. Some parts of the text may be automatically generated. The author of this material makes no guarantees or warranties about the accuracy or completeness of the information.
- Essential Legal Provisions for the SAFT (Simple Agreement for Future Tokens)
Introduction to SAFT

The Simple Agreement for Future Tokens (SAFT) is a contractual mechanism for the future issuance of digital tokens, typically used by blockchain projects to raise funds from investors. It acknowledges the purchase of tokens that will be delivered at a later date, usually linked to a token generation event (TGE).

Legal Provisions to Strengthen SAFT

Definitions and Interpretations: Define all terms and relevant blockchain terminology, such as "Tokens," "Blockchain," "Blockchain Address," "Token Generation Event," and "Vesting Period," to avoid ambiguity.

Token Issuance and Delivery Conditions: Specify the conditions under which tokens will be issued, including any prerequisites such as payment completion and compliance checks. Define the consequences of failing to meet these conditions, including the potential for the agreement to be voided. Clearly state the deadlines for token delivery and the actions to be taken if tokens are not delivered by the specified date, including any rights of the purchaser to terminate the agreement.

Specific Rights and Limitations on Tokens: Tokens may be defined not as securities but as functional digital units that allow interaction within a protocol. The token classification influences their regulatory and legal treatment. The SAFT should specify conditions under which tokens may be considered securities and detail the restrictions on their transferability, including the conditions under which tokens can be sold, resold, or transferred, potentially subject to registration or exemption under securities laws.

Vesting and Cliff Periods: Include vesting schedules that specify when tokens become available to the purchaser after the TGE. Outline any cliff periods during which tokens do not vest, specifying their duration and the conditions under which tokens begin to vest post-cliff.
Rights and Obligations of Parties: Clarify the rights and obligations of both the issuer and the purchaser, particularly relating to the issuance, handling, and potential return of tokens. Ensure the agreement specifies both parties' limitations, especially regarding the transfer and sale of tokens.

Limitation of Liability and Indemnities: Include clauses that limit the company's liability for issues outside its control, such as blockchain malfunctions or cybersecurity breaches. Delineate indemnification provisions protecting the company against breaches of the agreement by the purchaser.

Confidentiality and Data Protection: Incorporate confidentiality clauses to protect the sensitive information of all parties involved. Comply with data protection laws, specifying how personal data will be handled, stored, and protected under the agreement.

Termination and Survival: Define specific conditions under which either party can terminate the agreement. Ensure that key provisions such as confidentiality, liability, and indemnity survive termination.

Representations and Warranties: The company should make representations about its legal authority to enter into the SAFT, the non-infringement of third-party rights, and compliance with relevant laws. The purchaser must represent their eligibility as an accredited or qualified investor and confirm that their token purchase is legally and regulatorily permissible.

Anti-Money Laundering (AML) and Counter Financing of Terrorism (CFT) Compliance: Outline the procedures for verifying purchasers' identities through KYC (Know Your Customer) and KYB (Know Your Business) checks to comply with AML and CFT regulations. Specify requirements for proof of identity, address, and source of funds, as well as checks against sanctions lists and politically exposed persons (PEPs).
Jurisdiction and Dispute Resolution: State the governing law and jurisdiction for the agreement, providing guidelines on how disputes will be resolved. Include arbitration or mediation provisions before, or instead of, pursuing litigation, specifying the rules and location.

Digital Signature and Electronic Delivery: Confirm that digital signatures are recognized as valid and binding, equivalent to traditional handwritten signatures. Detail the process for electronic delivery of notices and other communications, ensuring they are considered received when delivered electronically.

Risk Disclosures for SAFT

The SAFT should highlight critical risks such as irreversible token loss from private key misplacement, financial jeopardy from cyber threats, and token malfunctions due to blockchain failures. Market risks include limited liquidity due to lack of secondary market support and significant price volatility from market and regulatory changes. Regulatory uncertainties could lead to adverse legal and tax implications, requiring investor diligence. Investment risks are heightened by the lack of insurance, potential issuer dissolution, and possible underperformance due to inadequate protocol development. Legal provisions should mandate investor acknowledgment of these risks, include disclaimers about the speculative nature of tokens, and require affirmations of understanding related to regulatory and tax responsibilities.

***

For expertly crafted SAFT templates that address all legalities, look no further than Prokopiev Law Group. Engage with our tailored legal services to ensure your token issuance complies with current regulatory frameworks and secure your future. Contact Prokopiev Law Group today to receive your personalized SAFT template, tailored to meet the unique needs of your blockchain project. The information provided is not legal, tax, investment, or accounting advice and should not be used as such. It is for discussion purposes only.
Seek guidance from your own legal counsel and advisors on any matters. The views presented are those of the author and not any other individual or organization. Some parts of the text may be automatically generated. The author of this material makes no guarantees or warranties about the accuracy or completeness of the information.
- FCA Guidance for Financial Services Promotion
In March 2024, the UK's financial watchdog, the Financial Conduct Authority (FCA), issued a press release announcing new Guidance ("Guidance" or "Guidelines") for financial services promotion. The new Guidelines aim to prevent scams and ensure people make informed decisions about their finances. They apply to all online financial promotions, including the increasingly popular world of memes, TikTok and YouTube videos, and even livestreams. The comprehensive Guidance, which supersedes the previous guidance (FG15/4: Social media and customer communications), is designed to ensure all parties involved in promoting financial products or services online are aware of their responsibilities. This includes authorized firms, social media influencers, and even affiliate marketers; trade bodies representing these groups are also expected to be familiar with the updated guidelines.

All financial promotions must empower consumers to make informed decisions. This means firms need to consider the target audience, the advertised product's complexity, and potential areas of confusion that might arise. The Guidance extends beyond authorized firms to encompass unauthorized persons as well. Social media influencers who promote financial products or services are now responsible for ensuring their communications comply with FCA rules, which may involve obtaining approval from authorized firms for their promotional content. The FCA also clarifies that even communications on "private" social media channels like Discord or Telegram can be considered financial promotions if they encourage investment activity. This broadens the scope of the FCA's regulations and requires firms to be mindful of their communications across all platforms.
Here are the key takeaways from the FCA's updated guidance, broken down by the relevant sections:

Standalone compliance

Each element of a financial promotion, be it a social media post, email, or banner ad, must comply with FCA rules. Promotions for intricate financial products might require additional information or disclaimers to ensure consumer understanding. Firms can use hyperlinks or separate pages for this purpose, but the initial promotion itself must still be clear and informative. For promotions displayed across multiple frames (like Instagram stories), the FCA will assess the overall message and ensure a balanced presentation of both benefits and risks. The level of detail required in a promotion will depend on the target audience's needs, the type of decision involved, and any potential for confusion.

Prominence

Existing FCA regulations on what information must be displayed prominently in financial promotions apply equally to social media as to any other channel, so make sure you understand the relevant rules for your products. When promoting on social media, information required to be prominent should be easy to find and understand. This could involve factors like size, position, font style, or even visuals like graphs or audio-visuals. Don't overwhelm consumers with excessive information; this can be especially problematic on social media, where attention spans are shorter. Consider user testing to check whether your promotion is clear and easy to understand. Burying risk information in captions or relying solely on visuals isn't enough: risks should be presented prominently, following the FCA's Handbook rules. The FCA discourages hiding important information behind click-throughs or other user actions. If truncation (like "...see more") cuts off key details, you'll need to find a way to display as much as possible.
If displaying all information prominently is impossible, consider including it in an accompanying image (but only if the platform allows displaying images alongside text). The FCA reminds firms of their duty to ensure consumer understanding: if a promotion relies on obscured or truncated information on social media, it might not be compliant with FCA regulations.

Suitability of social media for financial promotions

Financial promotions must be fair, clear, and not misleading. This means highlighting both potential benefits and relevant risks to inform consumers. Not all financial products are suited for social media promotion; complex products with intricate features or high risks might be difficult for consumers to understand in a limited format. Consider the platform's limitations: social media with character restrictions might not be ideal for explaining intricate financial products. Social media can be a good tool to direct potential customers towards other channels with more detailed information. Consider using "image advertising" to promote your firm generally, without referencing specific products. The FCA advises debt counseling firms to carefully evaluate whether social media is an appropriate platform for promoting their services, given the complexity of debt solutions. Promotions for debt solutions should be balanced, highlighting both potential benefits and drawbacks, including risks and costs. Promotions for Buy-Now-Pay-Later (BNPL) products must clearly communicate the associated risks, such as the unregulated nature of the agreements, potential debt burden, consequences of missed payments, and fees. Even seemingly lighthearted content like memes can be considered financial promotions and fall under FCA regulations, especially in the cryptoasset sector.
High‑risk investments (HRIs)

Firms promoting investment products must be familiar with the specific marketing restrictions outlined in the FCA's Conduct of Business Sourcebook (COBS) for the products they advertise. Certain HRIs, like non-mainstream pooled investments and speculative illiquid securities, are banned from mass promotion to retail investors on social media. While some HRIs, such as crowdfunding, cryptoassets, and CFDs, can be marketed to retail investors, they are subject to specific restrictions; firms need to ensure their promotions comply with these rules, including prominent risk warnings and bans on investment incentives.

Prescribed risk warnings

Risk warnings for HRIs and high-cost short-term credit (HCSTC) must be displayed prominently and at the same time as the promotion itself, not buried later or hidden in less noticeable areas. Research shows consumers are more likely to understand risk warnings that are concise and clear, so avoid burying them amongst other promotional elements. When a full risk warning is required, firms cannot hide it behind a click-through or another user action. This applies particularly to platforms that truncate text, where the full warning must be visible without needing to click "see more...". If FCA rules allow a shortened warning, ensure the entire shortened phrase is displayed clearly and the full warning is easily accessible through a click-through. Don't drown out risk warnings with flashy visuals or highlight only the benefits of a product; the promotion needs to be fair and clear, presenting both sides of the coin. The FCA offers examples of prominent risk warnings on various social media platforms, and consulting these case studies can help firms ensure compliance. Helpfully, the FCA Guidance includes a variety of useful tables and illustrative examples of compliant and non-compliant promotions, offering firms a clear side-by-side comparison to ensure their social media marketing hits the right note.
For instance, see Table 1 below for illustrative examples of prominent risk warnings across various social media channels.

Compliance with the regime for unregulated non‑UK based entities

The FCA guidance clarifies the rules for overseas entities promoting financial products on social media that might reach UK consumers. Even if a financial promotion originates outside the UK, it can be subject to FCA regulations if it is accessible to UK consumers. Unauthorized overseas firms have several options to comply with FCA rules: getting their promotions approved by a UK authorized person; geo-blocking UK users from accessing their promotions; modifying their content to avoid inviting UK consumers to invest; or implementing controls to prevent UK consumers from engaging with the promotion. Simply stating a promotion is "not for UK consumers" is unlikely to be sufficient for compliance. When authorized and unauthorized entities within a group share social media channels, extra caution is needed: the FCA has seen cases where UK consumers interacted with what they believed to be a UK-regulated firm but were actually connected to an unregulated overseas entity. Groups with both authorized and unauthorized entities using shared social media channels need to ensure that unauthorized entities' promotions comply with FCA rules, have safeguards in place to prevent UK consumers from being directed to unregulated overseas websites, and consider having the UK authorized entity approve all social media promotions. As an alternative, firms can create separate social media accounts specifically for UK consumers; these accounts must be actively managed and not simply empty shells. Finally, if unauthorized overseas entities provide financial services to UK consumers, they may be in breach of separate FCA regulations prohibiting unauthorized regulated activity.
The Consumer Duty

The Duty applies to all social media communications and financial promotions, even where there is no direct customer relationship. Meeting basic standards for fair and clear communication is no longer enough: firms must actively support informed decision-making by consumers. Marketing strategies should consider the target audience and the specific social media platform being used, and testing for clarity and understanding among the target market is encouraged. Confining promotions to a limited target market on social media can be challenging, and simply disclaiming "for professionals only" might not be sufficient; firms need to ensure they can effectively control who sees the promotion to avoid unintended exposure to unsuitable consumers. The FCA warns against bombarding consumers with repeated promotions, especially those exploiting behavioral biases of vulnerable audiences. Regularly reviewing and adapting social media marketing strategies is crucial, especially as platforms evolve and new features emerge. The FCA offers its own research (OP23, OP26) on consumer behavior to help firms understand how best to promote financial products. Firms should also consider the FCA's sector-specific reviews, such as marketing expectations for high-cost lenders, when formulating their social media marketing strategies.

Recipients sharing or forwarding communications, unsolicited promotions, and approval and record‑keeping

Firms remain responsible for breaches in their original communication, even if others share or forward it on social media. Restricting promotions to a limited target market can be difficult on social media, and firms should consider whether it is the right platform for such products. If a firm shares a customer's social media post promoting a financial product, the firm is responsible for compliance, even if it did not create the original content.
The FCA reminds firms of existing rules regarding unsolicited promotions (cold calling) and electronic marketing communications (PECR); following a customer's social media profile does not constitute an established client relationship for exemption from these rules. Understanding the distinction between real-time and non-real-time promotions is crucial, as different FCA rules apply; social media promotions (e.g., tweets) are generally considered non-real-time. Firms must have a system for approving social media communications by qualified senior personnel, as required by the Senior Management Arrangements sourcebook (SYSC). Adequate records of these communications must also be maintained (SYSC 9) to protect consumers and address potential complaints; social media platforms themselves should not be relied upon for record-keeping, given the potential for content deletion.

Influencers and Financial Promotions

Even if a financial promotion is approved by an authorized person, the FCA can still take action if influencers promote it in a non-compliant way. The FCA recognizes various influencer models. Celebrity influencers with large followings but no financial expertise may promote financial products without necessarily understanding their intricacies. "Finfluencers," who may not be FCA-authorized to provide financial advice but offer recommendations on social media, require special attention due to the high level of trust consumers place in them. Online forums and discussion groups can also be used to promote financial products or services, and the FCA is aware of their potential for misuse. The FCA emphasizes that any influencer can be held responsible for communicating illegal financial promotions, regardless of their follower count. The FCA has partnered with the Advertising Standards Authority (ASA) to create an infographic specifically to help influencers understand their obligations when promoting financial products or services.
This resource encourages influencers to consider carefully whether they are a suitable fit for the product and educates them about the legal risks of promoting financial products illegally.

Social Media Platforms and Financial Promotions

Firms and influencers must comply with both FCA regulations and the specific advertising policies of each social media platform they use. Social media platforms themselves have a role to play in preventing illegal financial promotions, including removing such content when identified and considering the suitability of their platform for promoting complex financial products. The Online Safety Act places additional duties on social media platforms to proactively mitigate the risks of illegal content, including illegal financial promotions. The FCA is working with Ofcom (the Office of Communications), the regulator overseeing the Act, to ensure alignment between the Online Safety Act and financial promotion legislation.

***

The Financial Conduct Authority's recent Guidance ensures that every entity involved in online financial promotions, including influencers and social media platforms, adheres to stringent rules to protect consumers. As digital interactions deepen, staying compliant is crucial for your business. At Prokopiev Law Group, we leverage a broad global network of partners to ensure your compliance worldwide, keeping you ahead in the rapidly evolving financial promotion landscape. If you require detailed information or guidance on navigating these regulations, write to us today. Prokopiev Law Group delivers dynamic legal solutions tailored to the digital and blockchain sectors. Our services span DAO legal support, Web3 terms of service, crypto token sale legal advice, and NDAs for blockchain teams. We excel in protecting intellectual property in the Web3 space, ensuring crypto source code protection, and managing trademark registration in blockchain.
Our legal team is adept in formulating litigation strategies for crypto startups, devising robust tax strategies for blockchain projects, and enforcing compliance with token sale regulations. We also offer advanced advisory in insider trading policies for crypto businesses, Web3 legal risks management, and blockchain data protection laws. Whether it's decentralized finance consulting, NFT rights protection, smart contract analysis, or comprehensive blockchain compliance audits, Prokopiev Law Group is your premier partner for navigating the intricate legalities of the digital world. The information provided is not legal, tax, investment, or accounting advice and should not be used as such. It is for discussion purposes only. Seek guidance from your own legal counsel and advisors on any matters. The views presented are those of the author and not any other individual or organization. Some parts of the text may be automatically generated. The author of this material makes no guarantees or warranties about the accuracy or completeness of the information.
- Regulation of Crypto-Asset Activities in Abu Dhabi Global Market (ADGM)
Introduction

The Abu Dhabi Global Market (ADGM), a finance-focused free zone within the Emirate of Abu Dhabi, administers a distinct and comprehensive regulatory framework for crypto-asset activities. Established by the Financial Services Regulatory Authority (FSRA), this framework has governed the operations of entities engaging in crypto-asset spot transactions from within ADGM since June 2018.

Regulatory Framework Overview

Entities wishing to undertake regulated crypto-asset services must comply with regulatory requirements, including Anti-Money Laundering and Counter-Terrorism Financing (AML/CFT) rules, Know Your Customer (KYC) regulations, market surveillance protocols, and specific licensing mandates. Central to this regulatory approach is acquiring a Financial Services Permit (FSP), compulsory for conducting Regulated Activities concerning Virtual Assets in or from ADGM.

Application Process for Financial Services Permit

The process for obtaining an FSP is structured into five principal stages:

· Due Diligence and Initial Discussions: Prospective entities engage in preliminary discussions with FSRA teams, explaining their business models, demonstrating compliance with established regulations, and providing technological demonstrations.

· Formal Application Submission: Entities submit a detailed Virtual Asset Application Form, requisite supporting documentation, a comprehensive launch plan, and the associated fees. The FSRA commences a formal review after receiving these submissions.

· In Principle Approval (IPA): This approval is granted after review of the application and supporting documents, contingent on the applicant's adherence to all regulations. Certain conditions must be fulfilled before final approval.

· Final Approval: Granted conditional upon satisfactory completion of operational testing and capabilities, as well as third-party system verifications where necessary.
· Operational Launch Testing: Particularly crucial for Multilateral Trading Facilities (MTFs) and Virtual Asset Custodians, this phase involves operational testing to ensure compliance with the FSRA's standards, potentially including third-party system verifications.

Corporate Establishment and Location Requirements

To be eligible for an FSP, an entity must establish a corporate presence within ADGM. The regulatory requirements stipulate that an Authorized Person, Recognized Body, or Applicant must have its head office and registered office within ADGM to conduct any Regulated Activities or Activities under a Recognition Order. Applicants can be either a Body Corporate or a Partnership, depending on the nature of the Regulated Activities envisaged.

Specific Eligibility for Regulated Activities

Different types of Regulated Activities under the Virtual Asset Framework demand specific organizational forms: entities aiming to effect or carry out Contracts of Insurance must be incorporated as a Body Corporate; those Accepting Deposits may be either a Body Corporate or a Partnership; and entities acting as the Trustee of an Investment Trust must be a Body Corporate.

Virtual Assets Definition and Regulatory Treatment

In line with the Financial Action Task Force's guidelines, the FSRA defines a "Virtual Asset" in the Financial Services and Markets Regulations (FSMR) as a digital representation of value that can be traded digitally and functions as a medium of exchange, a unit of account, or a store of value, but does not have legal tender status in any jurisdiction. Furthermore, Virtual Assets are neither issued nor guaranteed by any jurisdiction and are recognized solely by agreement within the respective user community, distinguishing them from Fiat Currency and E-money.
Digital Securities are recognized under paragraph 58(2)(b) of the FSMR as securities encompassing digital/virtual tokens with characteristics akin to shares, debentures, and units in a collective investment fund. Entities providing services related to Digital Securities, such as managing investments or advising, require a Financial Services Permission (FSP) and are subject to the regulations applicable to market intermediaries and market operators within ADGM. Conversely, Virtual Assets are classified as commodities; although they are not deemed Specified Investments under the FSMR, market intermediaries handling these assets, such as brokers and custodians, must obtain FSRA approval and an FSP.

Regulatory Approach

The FSRA's regulatory approach delineates the treatment of different categories of digital assets: Derivatives and Collective Investment Funds of Virtual Assets and Digital Securities are regulated as Specified Investments under the FSMR; Utility Tokens, which provide access to a specific product or service on a DLT platform but do not exhibit the features of a regulated investment, are treated as commodities; and Fiat Tokens, digital representations of Fiat Currency, are regulated under the FSMR as Providing Money Services when used as a payment instrument.

Risks and Mitigations

The FSRA outlines risk areas and mitigation strategies within the Virtual Asset Framework:

· AML/CFT/Tax: Compliance with the AML Rulebook is mandatory for all Authorized Persons, alongside reporting obligations under FATCA and the Common Reporting Standard.

· Consumer Protection: The risks associated with Virtual Assets must be transparently disclosed to consumers and must be monitored and updated regularly.

· Technology Governance: Authorized Persons must ensure robust governance over virtual asset wallets, private keys, origin and destination of funds, security, and risk management systems.
· 'Exchange-Type' Activities: MTFs using Virtual Assets must establish market surveillance, fair and orderly trading, settlement processes, transaction recording, a rulebook (or rulebooks), and transparency and public disclosure mechanisms.

· Custody: Providers holding or controlling Virtual Assets or client money (e.g., fiat currencies) must comply with the Safe Custody provisions and the Conduct of Business Rulebook (COBS) under the FSMR.

Activities involving Virtual Assets that are subject to regulation include: Operating a Multilateral Trading Facility (MTF); acting as a Virtual Asset Custodian; Dealing in or Arranging transactions in Virtual Assets; and Managing and Advising on investments in Virtual Assets. Entities performing these functions must adhere to the corresponding regulations and obtain the necessary FSRA approvals.

Capital and Fee Structure

In adherence to ADGM regulations, entities engaging in Virtual Asset activities are subject to specific capital and fee requirements, reflecting the substantial supervisory resources these operations demand.

Capital Requirements

Pursuant to COBS Rule 17.3 and MIR Rule 3.2.1, an Authorised Person dealing with Virtual Assets must maintain regulatory capital in fiat currency. This must equate to at least 12 months' operational expenses for an entity operating a Multilateral Trading Facility (MTF) for Virtual Assets; other Authorised Persons conducting regulated activities related to Virtual Assets must hold capital equivalent to 6 months' operational expenses. If an Authorised Person also engages in regulated activities unrelated to Virtual Assets, the FSRA enforces the higher of these requirements and those mandated by the Prudential – Investment, Insurance Intermediation and Banking Rules (PRU).

Fee Requirements

Fees are imposed on entities within the ADGM performing Virtual Asset services, including authorization and annual supervision fees.
The structure is as follows:

· General Virtual Asset service providers must pay an initial authorization fee of USD 20,000 and an annual supervision fee of USD 15,000.

· Entities operating an MTF for Virtual Assets are subject to an authorization fee of USD 125,000 and an annual supervision fee of USD 60,000.

· A sliding-scale trading levy applies to MTFs handling Virtual Assets, determined by the transactions' Average Daily Value (ADV).

Fees for entities conducting multiple regulated activities are cumulative and adjust according to the specific combination of services provided.

Mandatory Appointments

The General Rulebook (GEN) mandates that every Authorised Person appoint approved individuals to essential roles, including a Senior Executive Officer, Finance Officer, Compliance Officer, and Money Laundering Reporting Officer, all of whom must reside in the U.A.E. Additionally, any Directors of a Body Corporate with its headquarters and registered office within ADGM must be registered as Licensed Directors.

Accounting and Auditing Requirements

The GEN Rulebook requires that financial statements be prepared annually for each Authorized Person and Recognized Body; that the Regulator be notified of Auditor appointments, terminations, or resignations in the prescribed form; and that appropriate steps be taken to ensure the selected Auditor possesses the necessary qualifications to audit the entity's business.

Multilateral Trading Facilities (MTFs)

Defined under the FSMR and related guidance, an MTF is a system that brings together buying and selling interests in investments in a non-discretionary manner, resulting in a contract. Entities operating an MTF or an Organised Trading Facility must adhere to stringent regulations, including maintaining non-discretionary rules and engaging in activities that result in legally binding contracts for financial instruments, Virtual Assets, or spot commodities.
PRU Categorization

The PRU Rulebook categorizes Authorized Persons to determine the applicable provisions. Authorized Persons may conduct regulated activities of a lower category if authorized under their Financial Services Permission.

Categories of Authorised Persons

In the regulatory framework established by the FSRA within ADGM, Authorised Persons are classified into distinct categories based on the regulated activities they are authorized to conduct:

· Category 1: Accepting Deposits and Managing a Profit-Sharing Investment Account (a profit-and-loss-sharing account).

· Category 2: Providing Credit and Dealing in Investments as Principal.

· Category 3: Split into three subcategories (3A, 3B, and 3C), covering activities such as Dealing in Investments as an Agent, Managing Assets, Providing Custody, and Operating a Multilateral Trading Facility.

· Category 4: Arranging Credit, Advising on Investments or Credit, Insurance Intermediation, and other specific activities not included in the higher categories.

· Category 5: Reserved for non-mainstream regulated activities, such as Operating a Private Financing Platform.

Capital Requirements

The FSRA mandates base capital requirements for Authorised Persons operating within ADGM, applicable across all categories as a fundamental component of their financial adequacy. Base capital requirements vary by category:

· Category 1: USD 10 million

· Category 2: USD 2 million

· Category 3A: USD 500,000, rising to USD 2 million where the Authorised Person deals in investments as principal in OTC Leveraged Products with Retail Clients.
· Category 3B: USD 4 million

· Category 3C: USD 250,000, which may be adjusted to USD 150,000 or USD 500,000 depending on the type of fund managed or the provision of Financing Platforms and the holding of Client Assets.

Maintenance and Notification of Capital Resources

Authorized Persons in Categories 3B, 3C, and 4 must at all times maintain capital resources that meet or exceed their capital requirement. If capital resources fall below 120% of the Capital Requirement, the Regulator must be notified proactively.

Capital Calculation for Categories 3B, 3C, and 4

The capital requirement for these categories is the higher of the base capital requirement and the Expenditure Based Capital Minimum. The latter is derived from actual expenses and reflects capital adequacy relative to the entity's operational volume.

Regulated Activity of Providing Money Services

For entities engaged in the regulated activity of Providing Money Services, the capital requirement is the greatest of: the base capital requirement; the Expenditure Based Capital Minimum; or a Variable Capital Requirement applicable to specific activities within the Money Services domain.

Variable Capital Requirement Calculation for Money Remitters

Money Remitters calculate their Variable Capital Requirement as a percentage of their monthly payment volume, using the following tiered structure:

· 1.25% of the first USD 10 million

· 0.5% of the next USD 90 million

· 0.25% of the subsequent USD 150 million

· 0.125% of any further payment volume

The monthly payment volume is determined from the annual funds remitted, averaged per month, or from a combination of actual and projected figures for newer entities.
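To illustrate how the tiered percentages and the "greatest of" rule interact, the calculation can be sketched in Python. This is an informal reading of the rules as summarized above, not an official calculator: the tier boundaries and rates are taken from the text, while the function names and the example payment volume are our own.

```python
# Tiers from the text: (tranche size in USD, rate applied to that tranche).
# float("inf") marks the open-ended final tranche.
MONEY_REMITTER_TIERS = [
    (10_000_000, 0.0125),    # 1.25% of the first USD 10 million
    (90_000_000, 0.005),     # 0.5% of the next USD 90 million
    (150_000_000, 0.0025),   # 0.25% of the subsequent USD 150 million
    (float("inf"), 0.00125), # 0.125% of any further volume
]

def variable_capital_requirement(monthly_volume, tiers):
    """Apply each rate to its tranche of the volume and sum the results."""
    requirement = 0.0
    remaining = monthly_volume
    for tranche_size, rate in tiers:
        tranche = min(remaining, tranche_size)
        requirement += tranche * rate
        remaining -= tranche
        if remaining <= 0:
            break
    return requirement

def money_services_capital_requirement(base_capital, expenditure_minimum,
                                       variable_requirement):
    """Per the text, the requirement is the greatest of the three figures."""
    return max(base_capital, expenditure_minimum, variable_requirement)

# Hypothetical remitter averaging USD 120 million per month:
# 10M * 1.25% + 90M * 0.5% + 20M * 0.25% = 125,000 + 450,000 + 50,000 = 625,000
vcr = variable_capital_requirement(120_000_000, MONEY_REMITTER_TIERS)
```

The same function covers Payment Account Providers by swapping in their rate schedule; only the percentages differ, not the tranche mechanics.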
Variable Capital Requirement for Payment Account Providers

Payment Account Providers calculate their Variable Capital Requirement using a similar tiered percentage structure applied to their monthly payment volume:

· 2.5% of the first USD 10 million

· 1% of the next USD 90 million

· 0.5% of the subsequent USD 150 million

· 0.25% of any additional volume

Guidance for Variable Capital Calculation

The FSRA's guidance on calculating the Variable Capital Requirement emphasizes a tranche-based approach: payment volumes are segmented, a different percentage factor is applied to each tranche, and the results are summed to give the cumulative Variable Capital Requirement.

Substantive Operational Presence in ADGM

The FSRA stipulates that an Authorised Person conducting regulated activities in relation to Virtual Assets must have a substantive operational presence within ADGM. Central to this requirement is establishing the 'mind and management' of the Authorised Person within ADGM to ensure effective control and oversight.

Specific Requirements for Multilateral Trading Facilities

For MTFs engaging with Virtual Assets, the FSRA mandates a physical presence within ADGM. This encompasses involvement in the MTF's operations, including but not limited to:

· Control over the order book

· Management of the matching engine

· Adherence to established rulebook(s)

· Ensuring the facilitation of fair and orderly markets

· Implementing settlement procedures

· Monitoring and prevention of market abuse in line with the Market Infrastructure Rules (MIR) and the Conduct of Business Rulebook (COBS) Chapter 8

For start-up MTFs, the FSRA requires complete regulatory oversight over their entire order book and matching engine functionalities.
Existing virtual asset exchanges with components of their order book or matching engine located outside ADGM must delineate the aspects that will fall under the FSRA's jurisdiction as part of their application to become authorized MTFs within ADGM.

Exclusive Operation of Markets within ADGM

The FSRA asserts that within ADGM's jurisdiction, only authorized MTFs may conduct market operations that involve the matching of orders or aid in price discovery for Accepted Virtual Assets. The scope and degree of the FSRA's regulatory oversight are designed to be comprehensive and may differ significantly from those of other global regulatory bodies.

Trading Pairs on MTFs

In the trading environment of MTFs, the FSRA permits trading pairs consisting exclusively of: exchanges between Fiat Currency (or its equivalent value) and Accepted Virtual Assets; exchanges between Accepted Virtual Assets and Fiat Currency (or its equivalent value); and trades of one Accepted Virtual Asset for another.

Links

Guidance – Regulation of Virtual Asset Activities in ADGM

Other necessary documents to analyze:

· Conduct of Business Rulebook (COBS)

· Fees Rules (FEES)

· General Rulebook (GEN)

· Financial Services and Markets Regulations 2015

· Code of Market Conduct (CMC)

· Market Infrastructure Rulebook (MIR)

· The description of the application process

· ADGM's presentation on regulated activities, fees, and key requirements, including virtual assets

***

Prokopiev Law Group is adept at navigating the complex requirements for crypto licensing, ensuring comprehensive compliance for your operations globally. Our network of partners amplifies our capability to facilitate your adherence to international standards. For entities seeking to engage in crypto-asset activities within the Abu Dhabi Global Market (ADGM) and requiring authoritative guidance on acquiring a Financial Services Permit (FSP), Prokopiev Law Group stands ready to assist.
Our global partnership encompasses all facets of regulatory compliance, from capital requirements to the establishment of a substantive operational presence and effective governance structures. Whether your focus is on Multilateral Trading Facilities (MTFs), Virtual Asset Custodianship, or other regulated activities within the dynamic sphere of Virtual Assets, we ensure that your business is fortified against regulatory uncertainties. Connect with Prokopiev Law Group for tailored solutions that align with the Financial Services Regulatory Authority (FSRA) mandates. Let us be the cornerstone of your successful compliance journey in the burgeoning realm of crypto licensing, both within the ADGM and on a global scale. The information provided is not legal, tax, investment, or accounting advice and should not be used as such. It is for discussion purposes only. Seek guidance from your own legal counsel and advisors on any matters. The views presented are those of the author and not any other individual or organization. Some parts of the text may be automatically generated. The author of this material makes no guarantees or warranties about the accuracy or completeness of the information.