ILLIA PROKOPIEV

GDPR Responsibilities in AI Usage: Joint Controllership

1. Identifying Roles in Data Processing


AI User's Role: It is crucial to identify the aspects of personal data processing around GenAI models for which the company deploying the model, referred to here as the AI user, acts as the controller.


AI Provider's Role: At first glance, the AI provider appears solely responsible for training the AI model. A closer look, however, reveals scenarios in which the AI user also influences that training, particularly with respect to the model's conversational capabilities. This becomes evident when settings allow input data to be reused to improve the general-purpose model. Such reuse serves the AI provider's objectives and benefits all AI users, but it raises the question of shared responsibility for the training process.


2. Insights on “Joint Controllership”


Definition: To qualify as a joint controller, an entity must, together with at least one other party, determine the purposes and means of processing personal data (Art. 26 GDPR).


Case Studies:

  • Jehovah's Witnesses Case: The Court treated the religious community as a joint controller because it organised and directed the data collection carried out by its members.

  • Facebook-Related Cases (Wirtschaftsakademie and Fashion ID): These cases turned on deriving commercial advantages from Facebook's services (establishing the purpose) and on defining the categories of data collected or embedding Facebook's code that transmitted website visitors' data (contributing to the means of processing).

Key Takeaway: Joint controllership is interpreted broadly and does not require the parties involved to bear equal responsibility.


3. AI Provider and Data Processing


AI Provider's Role: The AI provider determines how the data obtained from end users is processed to refine the GenAI model.


AI User's Involvement: When settings are configured to allow input data to be reused for model improvement, AI users supply their end users' data in the knowledge that the AI provider will use it for training to further enhance the GenAI service. Both parties share an interest in a high-quality GenAI service.


Potential Risk: Given this shared commercial benefit, there is a risk that organizations deploying such AI solutions could be treated as joint controllers with the AI provider under Art. 26 GDPR, which would increase their risk exposure. Even though such a broad interpretation may go beyond the wording of Art. 26 GDPR, existing case law makes it advisable to:

  • Avoid settings that allow the AI provider to reuse input data (where this is commercially viable); see the sketch after this list.

  • Deliberately assess and prepare for the possible consequences of joint controllership.
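
To make the first recommendation concrete, here is a minimal, illustrative sketch of how an AI user's own integration code might keep provider-side reuse of input data disabled by default. The names used (GenAIClient, PrivacySettings, allow_training_reuse) are hypothetical and do not correspond to any particular provider's SDK; the actual opt-out mechanism is defined by the provider's documentation and contract.

  # Illustrative only: hypothetical names, no real provider API is called.
  import warnings
  from dataclasses import dataclass

  @dataclass(frozen=True)
  class PrivacySettings:
      # Keep provider-side reuse of input data for training disabled by default.
      allow_training_reuse: bool = False

  class GenAIClient:
      def __init__(self, api_key: str, privacy: PrivacySettings = PrivacySettings()):
          if privacy.allow_training_reuse:
              # Enabling reuse should be a deliberate, documented decision,
              # given the joint-controllership risk discussed above.
              warnings.warn("Training-data reuse enabled: document the legal assessment first.")
          self.api_key = api_key
          self.privacy = privacy

      def complete(self, prompt: str) -> str:
          # Placeholder: wire this to the provider's SDK of your choice.
          raise NotImplementedError

  # The safe default keeps reuse disabled.
  client = GenAIClient(api_key="...")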

A Trend to Watch: A growing number of GenAI offerings, especially enterprise versions, acknowledge this risk and give AI users options to reduce it.


4. Delving into Data Responsibility


Responsibility for Data: The AI user remains responsible, as controller, for the processing of the data.


AI Provider's Position: The AI provider acts as a processor within the meaning of Art. 28 GDPR, processing data on the AI user's instructions.


Required Agreement: A data processing agreement in line with Art. 28 GDPR must be concluded with the AI provider. The agreement sets out the provider's obligations and the conditions under which it handles data, so that processing remains GDPR-compliant.


5. The Framework for Lawful Processing


Foundational Principle: The lawful processing of input and output data, for which the AI user is responsible, depends on compliance with Article 6 GDPR and, where special categories of data are involved, Article 9 GDPR.


Recommendation: Before initiating any data processing activities, it's prudent to confirm alignment with these Articles.


6. Key Contexts of Data Processing and Associated Legal Considerations


Processing of Non-sensitive Data:

  • Internal Deployment: When AI users handle standard, non-sensitive personal data for internal purposes, they may be able to rely on Article 6(1)(f) GDPR (legitimate interests), provided no special categories of personal data are involved.

  • Recommendation: When dealing with non-sensitive data, ensure that the intended AI use genuinely satisfies the conditions of Article 6.

  • In simple terms: Article 6(1)(f) allows a business (or a third party) to process personal data where it has a legitimate interest in doing so, unless that interest is overridden by the interests or fundamental rights of the individual concerned. Where the data subject is a child, this balancing is applied with particular caution.

Handling of Sensitive Data:


Complex Scenarios: Where AI is used to process special categories of personal data, such as health records, genetic data, or biometric identifiers, Article 9 GDPR must also be complied with.


Establishing a Legitimate Basis: It is crucial to verify that the processing falls under one of the exceptions in Article 9(2) GDPR, for example the data subject's explicit consent or the necessity of the processing for health care purposes.


In essence: Article 9 GDPR prohibits the processing of special categories of personal data by default, but it sets out well-defined exceptions under which such processing may take place, always subject to specific conditions and safeguards.


Recommendation: Where sensitive data is involved, AI users must carry out a rigorous assessment to confirm that the processing is covered by an Article 9(2) exception, such as explicit consent or necessity for health-related purposes.
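
Purely as an illustration of this recommendation, the sketch below shows how an AI user's integration might refuse to forward personal data to a GenAI service unless an Article 6 lawful basis has been recorded and, for special-category data, an Article 9(2) exception as well. All names (LawfulBasis, Article9Exception, guard_request) are hypothetical, and the enumerations are deliberately non-exhaustive; this is a compliance-engineering sketch, not a complete mapping of the GDPR.

  # Hypothetical pre-processing gate: block a request unless a lawful basis
  # under Art. 6 GDPR is recorded and, where special-category data is involved,
  # an Art. 9(2) exception as well. Names and enumerations are illustrative.
  from enum import Enum, auto
  from typing import Optional

  class LawfulBasis(Enum):          # Art. 6(1) grounds (non-exhaustive)
      CONSENT = auto()              # Art. 6(1)(a)
      CONTRACT = auto()             # Art. 6(1)(b)
      LEGITIMATE_INTEREST = auto()  # Art. 6(1)(f)

  class Article9Exception(Enum):    # Art. 9(2) exceptions (non-exhaustive)
      EXPLICIT_CONSENT = auto()     # Art. 9(2)(a)
      HEALTH_CARE = auto()          # Art. 9(2)(h)

  def guard_request(contains_special_category: bool,
                    lawful_basis: Optional[LawfulBasis],
                    art9_exception: Optional[Article9Exception]) -> None:
      """Raise before any personal data leaves the AI user's systems."""
      if lawful_basis is None:
          raise PermissionError("No Art. 6 lawful basis recorded for this processing.")
      if contains_special_category and art9_exception is None:
          raise PermissionError("Special-category data requires an Art. 9(2) exception.")

  # Example: non-sensitive internal use relying on legitimate interests.
  guard_request(contains_special_category=False,
                lawful_basis=LawfulBasis.LEGITIMATE_INTEREST,
                art9_exception=None)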


The information provided is not legal, tax, investment, or accounting advice and should not be used as such. It is for discussion purposes only. Seek guidance from your own legal counsel and advisors on any matters. The views presented are those of the author and not any other individual or organization. Some parts of the text may be automatically generated. The author of this material makes no guarantees or warranties about the accuracy or completeness of the information.





