European Union (EU) plans to combat online material related to terrorism and extremist violence mean platforms including Facebook, YouTube and Twitter will need to step up their efforts to quickly delete such content, or face fines.

Julian King, the EU’s commissioner for security, told the Financial Times (FT) that the EU had “not seen enough progress” on this issue from technology companies and felt there was a need to “take stronger action in order to better protect our citizens”.

“We cannot afford to relax or become complacent in the face of such a shadowy and destructive phenomenon,” he added.

The European Commission (EC) is working on a draft regulation, due to be published next month, which sources say will give social media companies a one-hour deadline to delete content once authorities flag it as inappropriate.

At the moment, the EC operates a self-regulation policy that leaves tech companies to their own devices when it comes to dealing with such material. Earlier this year its guidelines became stricter and encouraged swift action, but there was no mechanism to ensure compliance.

The draft regulation will need approval from the European Parliament and a majority of EU member states.

According to the FT, Google said more than 90 per cent of the terrorist material removed from YouTube was flagged automatically, while Facebook said it had removed almost all of the 1.9 million examples of terrorist propaganda it found at the start of the year.

EU member state Germany recently passed a law whereby companies which fail to remove illegal material within 24 hours face fines of up to €50 million.