For the past ten years, the biggest companies in the tech industry have been allowed to effectively mark their own homework. Hiding behind the infamous tech industry adage, "move fast and break things," they have secured their power through extensive lobbying.
Food and beverage companies, the automotive industry, and financial services are all subject to regulation and accountability measures to ensure high levels of ethics, fairness, and transparency. Tech companies, on the other hand, have often argued that any legislation would limit their ability to operate efficiently and profitably, and to keep doing what has made them powerful. There is now a slate of bills and laws around the world that aim to eventually curtail these powers, such as the UK's long-awaited Online Safety Bill. That bill will pass in 2023, but its limitations mean it won't be effective.
The Online Safety Bill has been in the works for several years, and it places a duty of care on platforms to monitor illegal content. It could also impose an obligation on platforms to restrict content that is technically legal but may be deemed harmful, which would set a dangerous precedent for free speech and the protection of marginalized groups.
In 2020 and 2021, research by YouGov and BT (along with Glitch, the charity I run) found that 1.8 million people surveyed said they had been victims of online bullying in the past year. Twenty-three percent of those surveyed were members of the LGBTQIA community, and 25 percent said they had experienced racist abuse online.
In 2023, legislation to tackle some of these harms will come into force in the UK, but it will not go far enough. Campaigners, think tanks, and experts in the field have raised numerous concerns about the effectiveness of the Online Safety Bill as it currently stands. The think tank Demos emphasizes that the bill does not specifically name minority groups, such as women and the LGBTQIA community, even though these communities are disproportionately affected by online abuse.
The Carnegie UK Trust has noted that while the term "significant harm" is used in the bill, there are no specific procedures to define what this is or how platforms will have to measure it. Academics and other groups have raised the alarm over the bill's proposal to drop the previous Article 11 requirement that Ofcom should "encourage the development and use of technologies and systems to control access to [electronic] content." Other groups have raised concerns about the removal of clauses around education and future-proofing, which makes the law reactive and ineffective, as it won't account for potential harm caused by platforms that haven't yet gained prominence.
Platforms have to change, and other countries have passed legislation trying to make this possible. Germany enacted NetzDG in 2017, becoming the first country in Europe to take a stand against hate speech on social networks: platforms with more than 2 million users have a seven-day window to remove illegal content or face a maximum fine of 50 million euros. In 2021, EU lawmakers advanced a package of rules on big tech through the Digital Markets Act, which prevents platforms from giving preferential treatment to their own products, and in 2022 we saw progress on the EU AI Act, which includes something campaigners in the UK are calling for: wider consultation with marginalized groups and civil society organizations to adequately address concerns around the technology. In Nigeria, the federal government issued a new internet code of practice in an effort to address misinformation and cyberbullying, including specific clauses to protect children from harmful content.
In 2023, the UK will pass legislation aimed at tackling similar harms, finally making progress toward a regulatory body for tech companies. Unfortunately, the Online Safety Bill does not include enough measures to keep vulnerable people safe online, and more will need to be done.