Australia’s eSafety Commissioner reveals that some industry giants are failing to meet their responsibilities in combating the proliferation of content surrounding child sexual exploitation, sexual extortion, and the live streaming of child sexual abuse.
A report, made publicly available on Monday, October 16, highlights deficiencies in how some companies detect, remove, and prevent child sexual abuse material and grooming. It also points out inconsistencies in how companies handle such material across their various services and significant variations in their response times to public reports.
eSafety Commissioner Julie Inman Grant emphasised that the proliferation of online child sexual exploitation is a growing problem, not only in Australia but globally, and that tech companies bear a moral responsibility to protect children from sexual exploitation and abuse on their platforms.
“We really can’t hope to have any accountability from the online industry in tackling this issue without meaningful transparency, which is what these notices are designed to surface,” said Inman Grant.
The report reveals that X, formerly known as Twitter, and Google did not comply with the notices issued to them. Google received a formal warning due to its generic responses and aggregated information, failing to provide specific answers about its services. In contrast, X’s non-compliance was more severe, as the company either left some sections entirely blank or provided incomplete and inaccurate responses to questions. This included crucial information about its response times to child sexual exploitation reports, detection measures for live streaming, and staff levels following an acquisition.
As a result of its non-compliance, X has been issued an infringement notice with a penalty of US$387,004. The company has 28 days to respond or pay the fine; failure to do so could lead to further action by the Commissioner.
Inman Grant expressed disappointment in X and Google’s non-compliance, particularly as these questions directly relate to the protection of children and combating online harm. She stressed that the industry must back up its commitments to tackling child sexual exploitation with tangible actions.
The report also highlights several key findings, including:
YouTube, TikTok, and Twitch take steps to detect child sexual exploitation in livestreams, while Discord cited prohibitive costs as its reason for not doing so. X failed to provide this information.
TikTok and Twitch use language analysis technology to detect child sexual exploitation and abuse (CSEA) activity across their services, whereas Discord does not employ such technology. Google uses it on YouTube but not on Chat, Gmail, Meet, or Messages.
Google (excluding its search service) and Discord do not block links to known child sexual exploitation material, despite the availability of databases from organisations like the Internet Watch Foundation.
YouTube, TikTok, and Twitch use technology to detect grooming, while X, Discord, and other Google services do not.
Google does not use its technology to detect known child sexual exploitation videos on certain services, including Gmail, Chat, and Messages.
Proactive detection of child sexual exploitation material on X declined following the change in ownership in October 2022, though it improved in 2023.
Discord and Twitch do not automatically notify professional safety staff when a volunteer moderator identifies child sexual exploitation and abuse material.
Response times to user reports of child sexual exploitation material vary significantly: TikTok responds in five minutes for public content, Twitch in eight minutes, and Discord in 13 hours for direct messages, while X and Google did not provide this information.
Content moderators cover different numbers of languages, with X covering only 12 and Discord reporting 29. This can affect the identification of harms like grooming or hate speech that require contextual understanding.