Instagram introduced two new features as part of an anti-bullying campaign, which will flag potentially offensive posts before they are published and allow users to discreetly restrict interactions with aggressors.

The former employs AI to provide notifications of comments that may be considered harmful, a move the company said gives users “a chance to reflect” and remove potentially offensive material before posting. It added that early tests of the feature, which began rolling out this week, encouraged some users to ultimately “share something less hurtful”.

Additionally, a forthcoming option called Restrict will allow users to limit interactions with bullies without the bullies’ knowledge. Once restricted, a person’s comments on a post will be visible only to that person, unless the enforcing user chooses to make those comments public.

The tool will also prevent restricted people from seeing whether a user is active on the platform or has read a direct message.

In a blog post, Instagram chief Adam Mosseri said the changes are part of the app’s effort to “do more to prevent bullying from happening on Instagram and…empower the targets of bullying to stand up for themselves”.

The features build on Instagram’s move in October 2018 to use AI to help platform moderators detect offensive content in photos and comments.