EU's DSA enforcers send more questions to Snapchat, TikTok and YouTube about AI risks
On Wednesday, the European Union requested more information from Snapchat, TikTok and YouTube about their respective algorithms for content
EU Requests More Information from Snapchat, TikTok, and YouTube on Algorithmic Content Recommendation
The European Union has sent requests for information (RFIs) to Snapchat, TikTok, and YouTube, asking for more details about the design and functioning of the algorithms they use for content recommendation. The move is part of the bloc's efforts to ensure compliance with the Digital Services Act (DSA), a new online governance rulebook that aims to regulate the use of artificial intelligence (AI) in content recommendation.
The EU's online governance framework, which came into effect in late summer, requires very large online platforms (VLOPs) like Snapchat, TikTok, and YouTube to identify and mitigate risks associated with their use of AI-based content recommendation tools. The law stipulates that these platforms must take action to prevent negative impacts on users' mental health, civic discourse, and the dissemination of harmful content.
The Commission's latest RFIs focus on the algorithms the three social media platforms use to recommend content to their users. The EU is seeking detailed information on the AI-based parameters behind these recommendations, as well as the role the algorithms play in amplifying systemic risks, including negative impacts on users' mental health, civic discourse, and the spread of harmful content.
The Commission is also asking Snapchat and YouTube to provide information on the measures they take to mitigate the potential influence of their recommender systems on the spread of illegal content. For TikTok, the EU is seeking more details on anti-manipulation measures deployed to prevent malicious actors from gaming the platform to spread harmful content.
The EU's concerns about the potential negative impacts of algorithmic content recommendation are not new. In February, the bloc opened a formal investigation into TikTok's DSA compliance, citing concerns about the platform's approach to the protection of minors, addictive design, and the management of harmful content risks. That investigation is ongoing.
The Commission has given the three social media platforms until November 15 to provide the requested information. The responses will inform any next steps, including potentially opening formal investigations.
What's at Stake
The EU's efforts to regulate algorithmic content recommendation are aimed at preventing the spread of harmful content, protecting users' mental health, and promoting a healthy online environment. The bloc's online governance framework contains tough penalties for violations, including fines of up to 6% of global annual turnover.
The stakes are high for the three social media platforms. Failure to comply with the EU's requests by the November 15 deadline could result in formal investigations, fines, and reputational damage.
What's Next
The Commission's latest RFIs are part of its ongoing efforts to ensure compliance with the DSA. The bloc has already sent several requests for information to the three social media platforms, including questions about election risks, child safeguarding issues, and content risks related to the Israel-Hamas war.
The EU's investigation into TikTok's DSA compliance is ongoing, and the bloc has yet to conclude any of the several probes it has open on larger platforms. However, in July, the Commission issued preliminary findings in its investigation of X, saying it suspects the social network of breaching the DSA's rules on dark pattern design, data access for researchers, and ad transparency.
As the EU continues to scrutinize the recommender algorithms of major social media platforms, the stakes for these companies are clear: the bloc intends to curb the spread of harmful content, protect users' mental health, and foster a healthier online environment.
Article
https://inleo.io/threads/view/taskmaster4450le/re-leothreads-mepkbmjy?referral=taskmaster4450le