
LONDON (AP) — The British government has abandoned plans to force tech companies to remove harmful but legal internet content after the proposal drew sharp criticism from lawmakers and civil liberties groups.
The UK government on Tuesday defended its decision to water down the Online Safety Bill, an ambitious but controversial attempt to crack down on online racism, sexual abuse, bullying, fraud and other harmful content.
Similar efforts are underway in the European Union and the United States, but the UK's is the furthest along. In its original form, the bill would have given regulators broad powers to sanction digital and social media companies like Google, Facebook, Twitter and TikTok.
Critics expressed concern that requiring the largest platforms to remove “legal but harmful” content could lead to censorship and undermine freedom of speech.
The Conservative government of Prime Minister Rishi Sunak, who took office last month, has now dropped that part of the bill, saying it could “over-criminalize” online content. The government hopes the change will be enough to get the bill through Parliament, where it has languished for 18 months, by mid-2023.
Digital Secretary Michelle Donelan said the change removed the risk that “tech companies or future governments could use the law as a license to censor legitimate views.”
“It was creating a quasi-legal category between illegal and legal,” she told Sky News. “That is not what the government should be doing. It’s confusing. It would create a different set of rules for the online world than applies offline in the legal sphere.”
Instead, the bill says companies should set clear terms of service and stick to them. Companies will be free to allow adults to post and view offensive or harmful content, as long as it is not illegal. But platforms that promise to ban racist, homophobic or other offensive content and then fail to keep the promise could be fined up to 10% of their annual turnover.
The bill also requires platforms to help users avoid seeing content that is legal but may be harmful — such as the glorification of eating disorders, misogyny and some other forms of abuse — through warnings, content moderation or other means.
Companies must also show how they enforce user age limits designed to prevent children from viewing harmful content.
The bill still criminalizes some online activity, including cyberflashing — sending someone unwanted explicit images — and epilepsy trolling, sending flashing images that can trigger seizures. It also makes it an offense to aid or encourage self-harm, a move which follows a campaign by the family of 14-year-old Molly Russell, who took her own life in 2017 after viewing self-harm and suicidal material online.
Her father, Ian Russell, said he was relieved the stalled bill was finally moving forward. But he said it was “very difficult to understand” why protections against harmful material had been watered down.
Donelan stressed that “legal but harmful” content would only be allowed for adults, and that children would still be protected.
“The content that Molly Russell saw will not be allowed as a result of this bill,” she said.