Predatory podcast: Child sex abuse survivors unprotected by social media giants


Australian survivors of child sex attacks are being forced to personally scour the web for images of their abuse — and ask big tech to delete them.

Technology companies don’t have an obligation to search for the material — only to remove it — leaving victims in a horrifying quandary.

Associate Professor Michael Salter of UNSW has helped Australian abuse survivors carry out the grim searches.

“What it means is the victims and survivors whose content is circulating online are in a position where the only way to try and slow the spread of their material is to go looking for it,” Prof Salter said.

“We have a cohort of victims and survivors, both children and adults in Australia and overseas, who will spend time every day trying to find their own illegal content, to report it to the webmaster or the administrator and to see removal of it. If they don’t do that, nobody does it for them – that’s happening in Australia 100 per cent.”

Prof Salter said tech companies should take ownership of the issue and proactively detect and remove known child sexual abuse material (CSAM), using newly available software.

He said it was “not uncommon” for abuse survivors to be sent sex toys in the mail by paedophiles – one woman in the US was forced to move home three times as paedophiles continued to locate her.

Prof Salter told News Corp that governments and the private sector had tolerated increasing levels of child sex abuse material online, with the result that few Australians now make it through their teenage years without experiencing some form of harm online.

LISTEN TO THE PREDATORY PODCAST

“I think it’s simply gotten to a point where both governments and industry have no choice but to act. It’s certainly in the last couple of years that we’ve seen a real sea change there.”

But he warned the tech industry would not give in easily and change would be costly.

eSafety Commissioner Julie Inman Grant said that in August, eSafety sent the first legal notices issued under Australia’s new Basic Online Safety Expectations to Apple, Meta (Facebook and Instagram), WhatsApp, Microsoft, Skype, Snap and Omegle, requiring them to answer detailed questions about how they were tackling child sexual exploitation, including images of abuse.

These notices — the first of their kind in the world — pinpoint what technology companies are and are not doing to protect children and abuse survivors.

“The results reveal a number of shortcomings, including inadequate and inconsistent use of technology to detect child abuse material, livestreaming and grooming, as well as slow response times when this material is flagged by users,” Ms Inman Grant said.

“The continued presence of this material on platforms operated by some of the world’s biggest technology companies is not only morally repugnant, it is also profoundly damaging, including for survivors forced to seek removal themselves, then wait for platforms to act,” she said.

“That is exactly why eSafety works so hard to take material down as soon as it is reported to us via esafety.gov.au – to prevent survivors’ re-traumatisation by their sexual abuse being shared amongst paedophiles and perpetuated across the internet.”

In the past 12 months, eSafety has completed more than 16,000 investigations into child sexual exploitation material, 99 per cent of which were referred to the INHOPE international network for rapid removal action.

“While some companies are making an effort to tackle these issues, others are doing very little. Better detection by the platforms is needed, along with improved response times and more rapid removal. Mechanisms within platforms for reporting online abuse should be easy to find and use,” she said.

In its response to eSafety’s landmark report, Apple said it would continue to invest in technologies that protect children, without giving specifics, while Meta said it had “expanded efforts to detect and remove networks that violate its child exploitation policies” and noted that this work is similar to its “efforts against coordinated inauthentic behaviour and dangerous organisations”.

WhatsApp said users cannot search for unconnected people or groups without a phone number, and that a notification is sent if someone outside their network sends a message.

Microsoft reported that it does not have specific automated tools to detect new child sexual exploitation and abuse (CSEA) material, and does not review all content on Teams and Outlook even after harmful content has been detected.

Skype also said that it does not review all user content when CSEA material is detected.

For more details about the Predatory podcast, go to predatory.com.au

If you have a story to tell, email us at crimeinvestigations@news.com.au


