Instagram under fire for sexualized images of children


Instagram is not removing accounts that attract hundreds of sexualized comments for posting pictures of children in swimwear or partial clothing, even after they are reported via the in-app reporting tool.

Instagram’s parent company, Meta, claims it takes a zero-tolerance approach to child exploitation. But accounts flagged as suspicious via the in-app reporting tool have been deemed acceptable by automated moderation technology and remain active.

In one case, an account that posted photos of children in sexualized poses was reported by a researcher using the in-app reporting tool. Instagram responded the same day, saying it was unable to view the report “due to the high volume,” but that its “technology determined that this account was likely not in violation of our community guidelines.” The user was advised to block or unfollow the account, or to report it again. The account remained live on Saturday with more than 33,000 followers.

Similar accounts – known as “Tribute Pages” – were also found on Twitter.

An account that posted images of a man performing sexual acts over images of a 14-year-old TikTok influencer was found not to violate Twitter’s rules after being reported using the in-app tools – despite its posts indicating that the user was seeking to connect with others to share illegal material. “I want to trade some younger stuff,” read one of the tweets. The account was removed after the campaign group Collective Shout posted publicly about it.

The findings raise concerns about the platforms’ in-app reporting tools, with critics saying the content appeared to be allowed to stay live because it didn’t reach a criminal threshold — despite being linked to suspected illegal activity.

Often the accounts are used for “breadcrumbing” – where perpetrators post technically legal images but arrange to connect in private messaging groups where other material is shared.

Andy Burrows, head of online safety policy at the NSPCC, described the accounts as a “shop window” for pedophiles. “Companies should be proactively identifying and removing this content themselves,” he said. “But even when it is reported to them, their judgment is that it poses no threat to children and should remain on the site.”

He urged MPs to close “loopholes” in the proposed online safety bill – set to regulate social media companies and due to be debated in Parliament on April 19 – by forcing companies to take action not only against illegal content but also against content that is clearly harmful yet may not meet the criminal threshold.

Lyn Swanson Kennedy of Collective Shout, an Australia-based charity that monitors exploitative content worldwide, said the platforms rely on outside organizations to moderate their content for them. “We are calling on platforms to address some of this very worrying activity, which puts underage girls in particular at serious risk of harassment, exploitation and sexualization,” she said.

Meta said it had strict rules against content that sexually exploits or endangers children and that it removes such content as soon as it becomes aware of it. “We are also focused on preventing harm by banning suspicious profiles, preventing adults from messaging children they are not connected to, and defaulting under-18s to private accounts,” a spokesman said.

Twitter said the accounts reported to it have now been permanently suspended for violating its rules. A spokesman said: “Twitter does not tolerate any material that contains or promotes the sexual exploitation of children. We aggressively fight online CSE and have invested heavily in technology…to enforce our policy.”

Imran Ahmed, executive director of the Center for Countering Digital Hate, a non-profit think tank, said: “Relying on automated detection, which we know is no match for simple hate speech, let alone cunning, determined child exploitation rings, is an abdication of the fundamental duty to protect children.”
