LONDON: The UK government has set out new powers that could see tech companies facing fines of up to 18 million pounds or 10 per cent of their annual global turnover, whichever is higher, if they fail to act on harmful online content.
Under the plans for new laws set out by UK digital secretary Oliver Dowden and home secretary Priti Patel, social media sites, websites, apps and other services which host user-generated content or allow people to talk to others online will need to remove and limit the spread of illegal content, such as child sexual abuse, terrorist material and suicide content.
The UK’s communications regulator, the Office of Communications (Ofcom), has been confirmed as the enforcement body and will also have the power to block non-compliant services from being accessed in the country.
“We are giving internet users the protection they deserve and are working with companies to tackle some of the abuses happening on the web,” said Patel.
“We will not allow child sexual abuse, terrorist material and other harmful content to fester on online platforms. Tech companies must put public safety first or face the consequences,” she said.
The new legislation also includes provisions to impose criminal sanctions on senior managers. The government said it will not hesitate to bring these powers into force should companies fail to take the new rules seriously – for example, if they do not respond fully, accurately and in a timely manner to information requests from Ofcom.
“I’m unashamedly pro-tech but that can’t mean a tech free-for-all. Today Britain is setting the global standard for safety online with the most comprehensive approach yet to online regulation,” said Dowden.
“We are entering a new age of accountability for tech to protect children and vulnerable users, to restore trust in this industry, and to enshrine in law safeguards for free speech. This proportionate new framework will ensure we don’t put unnecessary burdens on small businesses but give large digital businesses robust rules of the road to follow so we can seize the brilliance of modern technology to improve our lives,” he said.
Under the rules, tech platforms will be expected to do far more to protect children from being exposed to harmful content or activity such as grooming, bullying and pornography.
The most popular social media sites, with the largest audiences and high-risk features, will need to go further by setting and enforcing clear terms and conditions which explicitly state how they will handle content which is legal but could cause significant physical or psychological harm to adults.
This includes dangerous disinformation and misinformation about coronavirus vaccines, and will help bridge the gap between what companies say they do and what happens in practice.
Dame Melanie Dawes, Ofcom’s chief executive, said: “Being online brings huge benefits, but four in five people have concerns about it. That shows the need for sensible, balanced rules that protect users from serious harm, but also recognise the great things about online, including free expression.
“We’re gearing up for the task by acquiring new technology and data skills, and we’ll work with Parliament as it finalises the plans.”
The government plans to bring the laws forward in an Online Safety Bill next year. The plans, published as the government’s response to the Online Harms White Paper consultation, would see the new powers introduced by Parliament via secondary legislation. The government said it is also progressing work with the Law Commission on whether the promotion of self-harm should be made illegal.
Companies will have different responsibilities for different categories of content and activity, under an approach focused on the sites, apps and platforms where the risk of harm is greatest. A small group of companies with the largest online presences and high-risk features, likely to include Facebook, TikTok, Instagram and Twitter, will be in Category 1.
These companies will need to assess the risk of legal content or activity on their services with “a reasonably foreseeable risk of causing significant physical or psychological harm to adults”. They will then need to make clear what type of “legal but harmful” content is acceptable on their platforms in their terms and conditions and enforce this transparently and consistently.
All companies will need mechanisms so people can easily report harmful content or activity while also being able to appeal the takedown of content. Category 1 companies will be required to publish transparency reports about the steps they are taking to tackle online harms.
Financial harms, including fraud and the sale of unsafe goods, will be excluded from this framework.
Source: Times of India