At a time of school shutdowns caused by the Covid-19 (coronavirus) pandemic, parents may be forgiven for allowing their children a little more time than usual on their tablets and phones while they try to get some work done.

Ironically, in normal circumstances, parents face a daily battle to get their little ones away from their screens. But, as we keep being told, this is an unprecedented situation.

Looking at the issue more broadly, the last month aside, recent statistics indicate the youth of today are winning, and the adults losing, as children spend more time than ever glued to their devices.

A recent report by market research agency Childwise found children have increasing access to smartphones, with 47 per cent of five- to ten-year-olds owning a mobile in 2019, up from 38 per cent in 2018, while the figure among seven- to sixteen-year-olds was more than 66 per cent.

Childwise’s Monitor Report 2020 also found children are using their phones for longer periods of time than before. In fact, one in four children spends four to six hours online a day.

Research director Simon Leggett told Mobile World Live (MWL) messaging was the activity children engaged with most on their mobiles, with 75 per cent dedicating time to it in 2019, while 72 per cent played games and 63 per cent used social media and video content apps.

The company ranked video platform YouTube as children’s favourite service in 2019; followed by social media apps Snapchat, Instagram and TikTok; and online games Fortnite, Roblox and Minecraft.

Childwise identified WhatsApp as the most-used app, with more than half of the 932 seven- to sixteen-year-olds surveyed communicating over the Facebook-owned messaging service.

Leggett claimed children’s ownership of a mobile phone poses a challenge for parents or guardians to monitor the content they access online, as “it’s such a private technology that most keep, literally, close to their chest”. In many cases, this leaves adults relying on tech companies to do whatever it takes to ensure online protection for youngsters.

YouTube, which children use on average for nearly two-and-a-half hours a day, has a specific offering designed to provide a safer experience when viewing video content. It strives to protect children from harmful material by blending automated filters, human reviews and user feedback. The Google-owned app, however, admits not all of the content on the child-oriented version is manually reviewed, and pledges to conduct fast reviews when videos are flagged as inappropriate.
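As a rough illustration of how those three signals might be combined, the sketch below triages a video using a classifier score, user flag counts and a human-review queue. Every name and threshold here is hypothetical, not YouTube's actual system.

```python
# Illustrative sketch only: a simplified triage loop blending an automated
# filter score, user flags and a human-review queue. All names and thresholds
# (Video, triage, REVIEW_QUEUE) are hypothetical, not YouTube's actual system.
from dataclasses import dataclass

@dataclass
class Video:
    video_id: str
    flag_count: int = 0            # user feedback: number of "report" flags
    classifier_score: float = 0.0  # automated filter's estimate of harm (0 to 1)

REVIEW_QUEUE: list[str] = []       # videos awaiting human review

def triage(video: Video) -> str:
    """Decide whether a video is blocked, queued for humans or allowed."""
    if video.classifier_score > 0.9:
        return "blocked"                     # high-confidence automated removal
    if video.classifier_score > 0.5 or video.flag_count >= 3:
        REVIEW_QUEUE.append(video.video_id)  # borderline or user-flagged: humans decide
        return "pending_review"
    return "allowed"

print(triage(Video("abc123", flag_count=5, classifier_score=0.4)))  # pending_review
```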

Safety in social media apps
A big issue with harmful content on social media was noted by the Independent Inquiry into Child Sexual Abuse (IICSA), which discovered a rapid increase in cases of online child grooming. Facebook, Instagram and Snapchat were the most frequently cited apps in this regard. So how do parent companies Facebook and Snap tackle the problem?

David Miles, Facebook’s head of safety for EMEA, said the social media giant was an industry leader “in combating this grievous harm” and had made huge investments in sophisticated solutions, “including photo- and video-matching technology so we can remove harmful content as quickly as possible”.

Miles noted child sexual exploitation was an industry-wide issue, and pledged the company would keep developing new technologies, work with law enforcement and specialist experts in child protection.

Facebook said a team of more than 35,000 people is focused on checking content across its platforms and taking down material found to be inappropriate.

The company removed more than 12 million pieces of content from its main site and Instagram between July and September 2019 for violating its policies on child nudity or sexual exploitation, with the majority of the material (95 per cent) deleted before being reported.

It said it has invested in various methods to combat abuse, including hashing technology, which creates a digital fingerprint for harmful content so it can be removed immediately.
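To illustrate the general idea of fingerprint matching, the sketch below checks an upload's hash against a hypothetical database of known-bad hashes. Production systems typically use perceptual hashes (such as PhotoDNA) that also catch near-duplicates; the plain SHA-256 shown here, chosen for simplicity, only matches byte-identical files.

```python
# Minimal sketch of hash matching against a hypothetical set of fingerprints
# of known harmful images. Real deployments typically use perceptual hashes
# (e.g. PhotoDNA) that also match near-duplicates; plain SHA-256 is used here
# only for simplicity and matches byte-identical files.
import hashlib

KNOWN_BAD_HASHES = {
    # Example entry only: this is the SHA-256 digest of the bytes b"test"
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def fingerprint(data: bytes) -> str:
    """Create a digital fingerprint (SHA-256 hex digest) of an uploaded file."""
    return hashlib.sha256(data).hexdigest()

def should_remove(upload: bytes) -> bool:
    """Remove the upload immediately if its fingerprint matches a known-bad entry."""
    return fingerprint(upload) in KNOWN_BAD_HASHES

print(should_remove(b"test"))  # True, because its digest is in the example set
```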

And despite IICSA’s report findings, Snap told MWL it was very difficult for online grooming to be carried out on Snapchat because of built-in safety-by-design processes. Some of these include a default inability to receive messages from people a user hasn’t added as a friend, and a setting which prevents the sharing of a user’s location with anyone other than friends.

Snap also said it was impossible for potential predators to stalk users on its app as there are no public profiles with location, age or photos, and it doesn’t share a user’s list of friends publicly.

The company stated it relies on human moderation for the majority of its operations, but is also developing machine learning tools to identify account behaviours which suggest abuse or other suspicious activity, such as grooming.
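As a hedged sketch of that technique, the example below scores accounts on a few invented behavioural features with a simple logistic regression and routes high-risk accounts to human moderators. Snap has not published its features or models, so everything here is illustrative only.

```python
# Hedged sketch of behaviour-based flagging: a handful of invented account
# features scored with a logistic regression. Snap has not published its
# features or models; this only illustrates the general technique of routing
# suspicious accounts to human moderators.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per account: friend requests sent to strangers per day,
# fraction of those requests rejected, messages sent to non-friends per day.
X_train = np.array([
    [0.2, 0.1, 0.0],    # typical account
    [0.5, 0.2, 0.1],    # typical account
    [30.0, 0.9, 12.0],  # previously flagged abusive pattern
    [25.0, 0.8, 9.0],   # previously flagged abusive pattern
])
y_train = np.array([0, 0, 1, 1])  # 1 = confirmed abusive by moderators

model = LogisticRegression().fit(X_train, y_train)

def flag_for_review(features: list[float], threshold: float = 0.8) -> bool:
    """Queue the account for the Trust and Safety team if the risk score is high."""
    risk = model.predict_proba(np.array([features]))[0, 1]
    return risk >= threshold

print(flag_for_review([28.0, 0.85, 10.0]))  # expected True for this toy model
```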

While it did not disclose the number of people working in its Trust and Safety team, the company said its employees are able to act on concerns raised within 24 hours, and often respond within two hours of a report.

TikTok, another of the most popular apps among children, also relies on a mix of technology and human moderation teams to identify, review and remove dangerous or abusive content from its service.

What else can be done?
While harnessing the strength of technology and specialised safety teams might lead to positive outcomes in the end, it’s important to note these measures are often one step behind content violators and criminal offenders.

IICSA insisted internet companies should make efforts to screen images and videos before they are displayed on their platforms. Whether this will require larger human moderation teams, innovative new technology, or a combination of the two remains to be seen, but as long as a genuine commitment to online safety is present, an environment free of harmful content is imaginable.

The editorial views expressed in this article are solely those of the author and do not necessarily reflect the views of the GSMA, its Members or Associate Members.