In recent years, Europe has made decisive efforts to lead companies and people into the digital future. These efforts are ongoing and have triggered a dynamic process at the legislative level, producing a wave of new legislation. This article provides an overview of several significant new pieces of legislation introduced by the EU as part of its digital strategy, namely the Digital Markets Act, the Digital Services Act, the Data Act and the AI Act.
The Digital Markets Act
The Digital Markets Act (DMA) entered into force on 1 November 2022 and became applicable on 2 May 2023. It defines specific, objective criteria for designating a large online platform as a “gatekeeper” and ensures that these platforms operate fairly online, leaving room for competition. Under the criteria established by the European Commission, a company qualifies as a gatekeeper if it has (i) a strong economic position with significant influence on the internal market and activity in multiple EU countries; (ii) a strong intermediation role, connecting a large user base to numerous businesses; and (iii) an entrenched and durable position in the market, i.e. a stable market position over time. Currently, six companies are designated as gatekeepers: Alphabet (Google), Amazon, Apple, ByteDance (TikTok), Meta and Microsoft.
Gatekeepers’ obligations under the DMA include:
- The obligation to allow third parties to interoperate with the gatekeeper’s own services in certain specific situations.
For example: WhatsApp (operated by Meta, one of the designated gatekeepers) might be required to open its platform to other messaging services. This would mean that a user of another messaging service, such as Signal or Telegram, could send messages to a WhatsApp user and vice versa. This interoperability ensures that users are not locked into a single service and promotes competition by allowing new or smaller providers to interact with large, established platforms.
- The obligation to allow business users to access the data that they generate in their use of the gatekeeper’s platform.
For example: Amazon might be required to provide sellers on its platform access to the data generated from their sales activities, such as customer purchasing patterns, feedback, and performance metrics. This access can empower sellers to make informed decisions about their business and improve their competitiveness on Amazon’s marketplace.
- The obligation to provide advertisers and publishers using the gatekeeper’s platform with the tools and information necessary to carry out their own independent verification of the advertisements hosted by the gatekeeper.
- The obligation to give business users the opportunity to promote offers and conclude contracts with customers outside the gatekeeper’s platform.
The Digital Services Act
The Digital Services Act (DSA) entered into force on 16 November 2022 and became applicable to all platforms on 17 February 2024. The DSA imposes extensive obligations on providers of online search engines and online platforms. It distinguishes between (i) very large online platforms, (ii) online platforms, (iii) hosting services, and (iv) intermediary services: the more influence an intermediary service has in the online ecosystem, the more stringent the DSA’s requirements become.
Regulations of the DSA include:
- Transparency obligations: This means that a search engine (e.g. Google) is required to provide information about how results are ranked.
- Regulations for dealing with illegal content: This means that an online platform is required to have procedures in place for quickly removing illegal content, such as hate speech or counterfeit goods.
- Regulations for advertising on online platforms: This means that an online platform is required to provide users with clear information about the advertisements they see.
The Data Act
The Data Act entered into force on 11 January 2024 and will become applicable on 12 September 2025. It defines who can access data generated by smart products and under what conditions, in response to the increasing prevalence of Internet of Things (IoT) technologies and the data they produce. The Act mandates that connected products be designed and manufactured so that users can easily and securely access, use, and share the data they generate.
Benefits for consumers and businesses according to the Data Act include:
- Lower costs for aftermarket services and repairs of their connected devices.
For example, if a Tesla car breaks down, the user should be able to request that a repair service other than Tesla itself, which may be cheaper, be given access to the vehicle’s data.
- New possibilities to utilize services that depend on accessing this data.
For example, if a smart home owner uses devices from various brands, including a thermostat, security cameras, and a smart refrigerator, the homeowner should be able to use a service provider that consolidates data from all these devices to offer personalized insights and recommendations for improving energy efficiency and enhancing security measures.
- Improved access to data collected or generated by a device.
For example, if a bar owner uses a smart coffee machine, not only should the manufacturer of the coffee machine have access to the data generated to improve their product, but the bar owner should also have access to information such as water quantity and temperature, as well as coffee strength, in order to serve better coffee.
The AI Act
The potential applications of AI have greatly expanded over the past two years, driven by significant advances in the adaptability and learning capabilities of AI systems. This development has necessitated a regulatory framework in the form of the AI Act. The special feature of the current AI Act is that it is the first law specifically designed for AI. It aims to address the risks associated with AI and to establish Europe as a global leader in this field.
The AI Act was formally adopted by the European Council on 21 May 2024 and is expected to be published in the Official Journal of the European Union in July. Twenty days later, likely in August 2024, the law will enter into force.
Unlike the GDPR, which is a fundamental rights law focused on the individual’s right to privacy, the AI Act is a product safety law that takes a risk-based approach, classifying AI systems into four levels of risk: (i) minimal risk, (ii) limited risk, (iii) high risk, and (iv) unacceptable risk. While systems posing an unacceptable risk may not be produced or distributed at all, the other risk levels are subject to a comprehensive catalogue of duties and requirements under the AI Act.
The four risk levels under the AI Act, and the obligations attached to producing or distributing systems at each level, are as follows:
- AI system with a minimal risk: Under the AI Act, low-risk AI applications like AI-powered video games or spam filters can be freely used without restrictions.
- AI system with a limited risk: Limited risk refers to the risks arising from a lack of transparency about whether and how AI is being used. The AI Act therefore mandates transparency: people must be informed when they are interacting with an AI system, such as a chatbot, so they can decide whether to proceed. This helps build trust. Providers must also label AI-generated content, whether text, audio, or video, especially when it is published to inform the public; this includes clearly labelling deep fakes.
- AI system with a high risk: High-risk AI systems are subject to a strict catalogue of obligations, such as:
- Risk assessments
- Documentation of technical details
- Transparency and information obligations
- Design of the AI system with the possibility of human supervision (explainability of the processes)
- Design of the AI system with an appropriate level of accuracy, robustness and cybersecurity
- Role-specific obligations, such as quality management obligations for providers of high-risk AI
- Certification obligations
An example of a high-risk AI system is an AI technology used in automated examination of visa applications for migration, asylum, and border control management.
- AI system with an unacceptable risk: These AI systems are considered a clear threat to the safety, livelihoods and rights of people. This includes systems used by governments for social scoring or toys that use voice commands to encourage dangerous behaviour.
Conclusion
Europe’s digital strategy is rapidly evolving through a series of landmark legislative acts aimed at fostering a fair, secure, and innovative digital environment. The Digital Markets Act and the Digital Services Act target large online platforms to ensure competition and regulate online activities responsibly. The Data Act addresses data access and usage rights, particularly in the IoT realm, enhancing consumer and business opportunities while safeguarding privacy. Lastly, the AI Act pioneers comprehensive regulations tailored to different risk levels of AI systems, setting new standards for safety, transparency, and ethical deployment. Together, these initiatives position Europe at the forefront of global digital governance, balancing innovation with regulatory oversight to empower businesses and protect individuals in the digital age.