October 3, 2024 – LinkedIn’s AI data policy and the ICO’s response: an overview


In this latest episode of the PrivacyRules #privacyespresso series, Trevor Fenton, partner at Shakespeare Martineau and PrivacyRules UK law firm member, shares insights on a crucial topic at the intersection of AI, data protection, and global regulation: the ICO’s response to Microsoft’s use of LinkedIn data for training AI models.

Microsoft quietly rolled out a feature that uses LinkedIn user-generated data to train its AI models. While the feature was enabled worldwide, it notably excluded the EEA and Switzerland, but not the UK. This raises questions about how global companies perceive post-Brexit UK regulation compared to the EU’s GDPR.

Key points:

– How Microsoft implemented AI data processing without explicitly informing users in certain regions, and why the UK was treated differently from the EEA and Switzerland despite having a similar regulatory framework.

– The ICO’s stance on the matter, emphasizing the high-risk nature of AI processing and the Data Protection Impact Assessment (DPIA) requirements under the UK GDPR.

– The potential reasoning behind Microsoft’s decision, and whether it signals a perception of the UK as having more flexible data protection enforcement post-Brexit.

– The evolving role of data protection authorities, and how companies can better prepare for increased scrutiny on AI and data usage.

This episode is a must-listen for professionals dealing with AI, privacy, and global data protection. We break down the critical elements of the case and offer practical insights into how businesses should approach AI model training while complying with differing regional regulations.

Listen to the episode: https://bit.ly/47OasST