AI is transforming the world at unprecedented speed, and yet a growing regulatory storm seems ready to slow it down. Recently, LinkedIn was forced to halt its AI-related data processing in the UK after concerns raised by the Information Commissioner's Office (ICO). But could this be just the tip of the iceberg in a much larger battle between the future of AI and the right to privacy? While it may seem like tech titans are on a collision course with regulations like the GDPR and the AI Act, perhaps the story isn't as simple as tech vs. laws.

What Did LinkedIn Do?

LinkedIn's AI features were designed to use member data, such as posts, profiles, and interactions, to provide personalized services like job recommendations and networking suggestions. The issue, however, was a lack of transparency and insufficient user control over whether that data fed these AI models. Many users were unaware that their information was being processed for AI training, and although an opt-out option existed, it was not clearly communicated.
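To make the mechanism concrete, here is a minimal sketch of what an opt-out-gated training pipeline could look like. This is not LinkedIn's actual implementation; the `Member` class and the `ai_training_opt_out` flag are hypothetical names chosen for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Member:
    member_id: str
    posts: list[str] = field(default_factory=list)
    # Defaulting to False mirrors the opt-out (rather than opt-in) model
    # described above: data is used unless the member says otherwise.
    ai_training_opt_out: bool = False

def collect_training_examples(members: list[Member]) -> list[str]:
    """Gather post text for model training, skipping anyone who opted out."""
    examples: list[str] = []
    for member in members:
        if member.ai_training_opt_out:
            continue  # honor the member's choice; exclude their data entirely
        examples.extend(member.posts)
    return examples

# Example: only Bob's posts are excluded from the training set.
members = [
    Member("alice", posts=["Looking for a data science role"]),
    Member("bob", posts=["My weekend project"], ai_training_opt_out=True),
]
print(collect_training_examples(members))  # ['Looking for a data science role']
```

Note the design choice baked into the default value: silence counts as consent. That is exactly why regulators care so much about how visible the toggle is.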

What Did the ICO Criticize?

The ICO's concerns centered on the lack of clarity and transparency in LinkedIn's practices. The issue wasn't the use of data for AI as such, but the failure to give users clear, simple options to control how their data was processed. The ICO emphasized that LinkedIn needed to make the opt-out process more visible and be more transparent about how member data was used for AI. In response, LinkedIn suspended its AI processing in the UK and other regions, pending further discussions.

Meta recently faced similar regulatory pressure, pausing its AI data processing in Europe after the Irish Data Protection Commission (DPC) and the ICO raised concerns. Meta resumed processing only after introducing clearer data-use mechanisms and giving users a more transparent way to opt out.

The Laws Don’t Stop AI; They Just Demand Transparency

The message is clear: the GDPR and the AI Act are not out to stop AI innovation. They are designed to ensure that users know how their data is used and have straightforward ways to manage it. For tech companies, this doesn't have to be a setback. The rapid growth of AI tools such as ChatGPT, which reportedly surpassed 100 million active users, shows that people are increasingly comfortable with AI when they understand its benefits. LinkedIn's AI-driven features, from personalized job recommendations to content curation, offer real value, and with better transparency, most users will likely appreciate these enhancements.

Why Don’t Companies Just Inform Users Properly from the Outset?

While it may seem simple for companies to be upfront about data use, several challenges complicate this. One is that AI systems and data pipelines are intricate and constantly evolving, making them hard to explain in a clear, digestible way to all users. Another is the fear of mistrust or backlash: companies may worry that too much transparency will make users wary, especially when they don't fully understand the scope of data collection. Even users who enjoy AI-driven tools like job recommendations may feel uncomfortable if they don't grasp the underlying data processes. Balancing transparency with user experience remains a delicate task.

However, the rapid rise of AI adoption shows that when companies are clear and give users control over their data, they can continue to innovate without losing trust. The key is finding the right way to communicate: making data practices accessible and understandable while also empowering users with choices, as in the sketch below.
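One hypothetical way to pair disclosure with control is to put them in the same place: a settings response that explains the data use in plain language and exposes the toggle alongside it. None of the field names or the endpoint path below come from any real platform's API:

```python
def data_use_settings(ai_training_opt_out: bool) -> dict:
    """Return a plain-language disclosure paired with an actionable control.

    The wording, field names, and endpoint path are illustrative only.
    """
    return {
        "description": (
            "We use your posts and profile to power AI features such as "
            "job recommendations. You can turn this off at any time."
        ),
        "ai_training_enabled": not ai_training_opt_out,
        "opt_out_endpoint": "/settings/ai-training/opt-out",
    }

print(data_use_settings(ai_training_opt_out=False))
```

Keeping the explanation and the opt-out together addresses the ICO's core complaint: an opt-out that exists but is buried somewhere else is, in practice, no control at all.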

Regulators Are Becoming More Active: Transparency Will Be Inevitable

As we've seen with LinkedIn and Meta, data protection authorities like the ICO and the DPC are increasingly active in scrutinizing AI data practices. Regulators are pushing back on unclear practices, and companies are being forced to respond with better transparency and more user control. Sooner or later, tech companies will have no choice but to be more transparent if they want to avoid regulatory pushback and maintain user trust.