The UK government has announced it will hand new powers over social media companies to regulator Ofcom, as it aims to crack down on content deemed harmful to users.

Under plans revealed today (12 February), Ofcom would be given authority over sites that enable the sharing of user-generated content. Social media companies offering such services would be required to minimise the risk of harmful content being displayed and to act quickly to remove it if it does appear.

In a statement, the Department for Digital, Culture, Media and Sport said it would take “particularly robust action on terrorist content and online child sexual abuse”. Companies will also be required to define what content and behaviour is acceptable on their platforms.

The government said Ofcom would “hold companies to account if they do not tackle internet harms such as child sexual exploitation and abuse and terrorism”.

Digital secretary Nicky Morgan explained: “We will give the regulator the powers it needs to lead the fight for an internet that remains vibrant and open but with the protections, accountability and transparency people deserve.”

The move follows the government’s opening of an Online Harms consultation in April 2019. A full response, due in the coming months, will offer more detail on the “potential enforcement powers Ofcom may have”.

Several social media companies have already put their own regulations in place to tackle harmful online content.

In December 2019, Instagram began rolling out a feature that alerts users when they may be about to post offensive language, as part of its efforts to combat online bullying.

And Instagram’s parent company, Facebook, last month moved to expand a UK-based team dedicated to building tools to detect and remove harmful content from its platforms.