Data Privacy in the AI Era: A Reminder and Practical Guide
I recently listened to the Diary of a CEO podcast with Mo Gawdat (E252) and watched the new Apple Vision Pro video, both of which inspired me to write this article. A big thank you to my old friend and colleague Alan Dahi for peer reviewing it. You're the best!

Confidentiality is a big deal in the legal world, but data privacy is easy to overlook if you're not in that field. That's why I'm here to remind all my non-legal friends and to offer some handy tips to help everyone stay safe in our new AI era.

1-Spread Awareness: Let your colleagues know about the importance of protecting personal data when working with AI. Regular reminders and educational moments can create a privacy-conscious mindset. A practical and easy tip: share this post with them!

2-Keep Data Minimised: Whenever possible, avoid putting personal data into AI systems, particularly until we better understand how the right to be forgotten works in practice; whether information can be effectively deleted once it becomes part of an AI system remains uncertain. If you must include personal data, remove direct identifiers first (see the sketch below), but remember that even indirect personal data can still pose privacy risks. Take a cue from Google's AI terms, which clearly state not to input any personal or sensitive information. The good news is that the recent judgment in Case T-557/20, SRB v EDPS, has made 'what is personal data' assessments a bit more logical: sharing pseudonymised data would fall out of scope as long as the AI provider has no means to re-identify individuals (which, given the scope of data such providers hold, could be tricky to confirm).
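
To make the "remove direct identifiers" step concrete, here is a minimal Python sketch of prompt redaction. The patterns and the redact helper are my own illustrative assumptions, not a complete PII detector, and a regex-only pass will not catch names or indirect identifiers.

```python
import re

# Illustrative patterns only -- a sketch, not a vetted PII detector.
# Real deployments should use a reviewed library or service.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "NATIONAL_INSURANCE": re.compile(r"\b[A-Z]{2}\s?\d{2}\s?\d{2}\s?\d{2}\s?[A-Z]\b"),
}

def redact(text: str) -> str:
    """Replace direct identifiers with placeholder tokens before the
    text is sent to any third-party AI system."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Please summarise: Jane Doe (jane.doe@example.com, +44 7700 900123) raised a complaint."
print(redact(prompt))
# Note that the name "Jane Doe" still slips through -- pair a pass like
# this with human review or a named-entity recogniser.
```

A pass like this is a floor, not a ceiling: names, addresses, and free-text context still leak through, which is exactly why the tip says to avoid personal data entirely where you can.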

3-Be mindful of the data you share publicly online: In the past, sharing data such as voice recordings on the basis of legitimate interests may have passed the balancing test. The landscape has changed, however, and we should take technological developments into account, even for data sharing done in the past. New security risks have emerged, such as scammers using voice-generating AI to deceive people (source: TechSpot).

4-Conduct a Team Review: Get your team involved in identifying where AI is used and what types of data are involved. This helps establish governance processes and uncovers potential risks, such as using AI for automated decisions in People Teams. Remember, individuals have rights around solely automated decision-making, and you're ultimately responsible for how the services you use handle personal data. Think critically before sharing any personal information with AI systems to protect your customers and employees.

5-Transparency and Consent: Update your privacy and people policies to explain transparently how personal data is used and who it is shared with. Where necessary, seek consent.

6-Enhance Security Measures: If you're in the EU, choose AI providers based in the EU or with EU contracting entities to align with privacy requirements and make it easier for data protection authorities to intervene if needed. Consider disabling history functions and regularly deleting data to minimise risks (a minimal deletion sketch follows below). Be cautious when integrating AI directly into systems that typically hold personal data, like Google Docs.
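
As one concrete way to act on "regularly deleting data", here is a minimal sketch of a retention job for locally stored AI interaction logs. The database path, table name, column, and 30-day window are all illustrative assumptions; vendor-side history and retention settings still have to be configured in each tool.

```python
import sqlite3
from datetime import datetime, timedelta, timezone

# Illustrative assumptions: a local SQLite log of AI prompts/responses
# with a `created_at` ISO-8601 timestamp column, and a 30-day window.
DB_PATH = "ai_interactions.db"
RETENTION_DAYS = 30

def purge_old_interactions() -> int:
    """Delete logged AI interactions older than the retention window.
    Returns the number of rows removed."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=RETENTION_DAYS)
    # The `with` block commits on success and rolls back on error.
    with sqlite3.connect(DB_PATH) as conn:
        cur = conn.execute(
            "DELETE FROM interactions WHERE created_at < ?",
            (cutoff.isoformat(),),
        )
        return cur.rowcount

if __name__ == "__main__":
    removed = purge_old_interactions()
    print(f"Purged {removed} interaction(s) past the {RETENTION_DAYS}-day window.")
```

Schedule a job like this with cron or a task runner. Deleting your own copies does not remove whatever the AI provider retains, so check each vendor's history and retention controls as well.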
