TikTok, Twitter, Facebook, Google, and Amazon are facing rising pressure from European authorities as London and Brussels advanced new rules on Tuesday to curb the power of digital companies.
They’re among the 19 biggest online platforms and search engines that the European Union’s executive arm says must meet extra obligations under the 27-nation bloc’s landmark digital rules taking effect later this year: cleaning up illegal content and disinformation, and keeping users safe.
The UK government, meanwhile, unveiled draft legislation that would give regulators more power to protect consumers from online scams and fake reviews and boost digital competition.
The updates help solidify Europe’s reputation as the global leader in efforts to rein in the power of social media companies and other digital platforms.
TikTok will allow European Commission officials to carry out a “stress test” of its systems to ensure they comply with the Digital Services Act, commissioner Thierry Breton said in an online briefing.
He proposed the idea to TikTok CEO Shou Zi Chew when they met in Brussels earlier this year. “I’m happy that they came back to us saying they are interested,” Breton said, but added that he’s waiting for Chew to provide a date. TikTok did not reply to a request for comment.
Twitter had agreed earlier to a stress test, and Breton said he and his team will travel to the company’s headquarters in San Francisco at the end of June to carry out the voluntary mock exercise. Breton didn’t detail what the test would entail.
Starting August 25, the biggest online platforms will have to give European users more control by making it easier to report illegal content like hate speech and providing more information on why their systems recommend certain content.
There are guardrails for content generated by artificial intelligence, such as deepfake videos and synthetic images, which will have to be clearly labelled when they come up in search results, Breton said. Platforms will have to “completely redesign” their systems to ensure a high level of privacy and safety for children, including verifying users’ ages, Breton said.
Big Tech companies also will have to revamp their systems to “prevent algorithmic amplification of disinformation,” he said, saying he was particularly concerned about Facebook’s content moderation systems ahead of September elections in Slovakia. “Now that Facebook has been designated as a very large online platform, Meta needs to carefully investigate its system and fix it where needed ASAP,” he said.
Facebook’s parent company said it supports the EU’s new Digital Services Act. “We take significant steps to combat the spread of harmful content on Facebook and Instagram across the EU,” Meta said while pointing out its efforts on content moderation and media literacy in Slovakia. “While we do this all year round, we recognise it’s particularly important during elections and times of crisis, such as the ongoing war in Ukraine.”
Violations could result in fines worth up to 6 per cent of a company’s annual global revenue — amounting to billions of dollars — or even a ban on operating in the EU.
The European Commission’s list of very large online platforms is limited to those with at least 45 million users in Europe. It includes Google’s Search, Play, Maps, Shopping and YouTube services; Amazon Marketplace; Apple’s App Store; Microsoft’s Bing and LinkedIn; Meta’s Facebook and Instagram; plus Pinterest, Snapchat, TikTok, Twitter, Wikipedia, Booking.com, China’s Alibaba AliExpress and German ecommerce company Zalando.
Breton said more platforms could be added, and the commission is analyzing “four to five” others that it will decide on in coming weeks.
In Britain, the government’s Digital Markets, Competition and Consumers bill proposed Tuesday would give watchdogs more teeth to counter the dominance of tech companies, backed by the threat of fines worth up to 10 per cent of their annual revenue.
Under the proposals, online platforms and search engines could be required to give rivals access to their data or be more transparent about how their app stores and marketplaces work.
The rules would make it illegal to hire someone to write a fake review, or to allow the posting of online consumer reviews “without taking reasonable steps” to verify they’re genuine. They also would make it easier for consumers to get out of online subscriptions.
The new rules, which still need to go through the legislative process and secure parliamentary approval, would apply only to companies with £25 billion ($31 billion) in global revenue or £1 billion in UK revenue.