Twitter executives detailed refinements to the methods used to review content that violates its policies, as the company provided updates on the effectiveness of efforts made earlier this year to tackle long-standing problems.

Donald Hicks, VP, and David Gasca, senior director of product management and health, said in a blog post the company is improving “technology to help us review content that breaks our rules faster and before it’s reported; specifically those who Tweet private information, threats, and other types of abuse”.

Going forward it wants to “make it easier for people who use Twitter to share specifics when reporting so we can take action faster, especially when it comes to protecting people’s physical safety,” the pair stated.

The post also included a summary of progress made in the opening quarter of the year: 38 per cent of abusive content is now surfaced proactively for human review instead of relying on reports from users.

Between January and March, 100,000 accounts created by previously suspended members were deleted.

Three times more abusive accounts were suspended within 24 hours of being reported compared with the same period in 2018; the company responded 60 per cent faster to appeals requests using a new in-app feature; and 2.5 times more private information was removed through a simplified reporting process.

Twitter has been trying to clean up its act after years of criticism for not responding fast enough in matters involving hate speech, violent content and misinformation.

In February, CEO Jack Dorsey said the company would focus on making Twitter a healthier and more conversational service.