LONDON, Nov 28 (Reuters) – Britain will not force tech giants to remove “legal but harmful” content from their platforms after campaigners and lawmakers raised concerns that the move could curtail freedom of speech, the government said on Monday.
The online safety law would instead focus on protecting children and on ensuring companies remove content that is illegal or prohibited by their terms of service, it said, adding that it would not specify what legal content should be censored.
Platform owners, such as Facebook parent Meta and Twitter, would be banned from removing or restricting user-generated content, or suspending or banning users, where there is no breach of their terms of service or the law, it said.
The government had previously said social media companies could be fined up to 10% of turnover or 18 million pounds ($22 million) if they failed to stamp out harmful content such as abuse, even if it fell below the criminal threshold, while senior managers could also face criminal proceedings.
The latest version of the proposed law, which has been beset by delays and rows, removes state influence over how private companies handle legal speech, the government said.
It would also avoid the risk of platforms taking down legitimate posts in order to escape sanctions.
Digital Secretary Michelle Donelan said her aim was to stop unregulated social media platforms from harming children.
“I will bring back to Parliament a strengthened online safety bill that will allow parents to see and act on the dangers sites pose to young people,” she said. “It is also free from any threat that tech companies or future governments could use the law as a license to censor legitimate ideas.”
Britain, like the European Union and other countries, is grappling with the problem of enacting laws to protect users, and especially children, from harmful user-generated content on social media platforms without harming freedom of speech.
The government said the revised Online Safety Bill, which returns to parliament next month, would put the onus on tech companies to remove content in breach of their own terms of service and enforce their user age limits to prevent children from bypassing authentication methods.
If users are likely to encounter content promoting eating disorders, racism, anti-Semitism or misogyny that falls below the criminal threshold, platforms must provide adult users with tools to help them avoid it, it said.
Fines of up to 10% of annual turnover would apply only to platforms that fail to enforce their own rules or to remove illegal content.
Britain said late on Saturday that a new criminal offense of aiding or abetting self-harm online would be included in the bill.
($1 = 0.8317 pounds)
Reporting by Paul Sandle; Editing by Alex Richardson