Instagram algorithm boosted ‘vast pedophile network’: report
Instagram’s recommendation algorithms have been found to link and promote a “vast pedophile network” that advertises the sale of illicit “child-sex material” on the social media platform, according to a report by researchers at Stanford University and the University of Massachusetts Amherst published on Wednesday. The report explains that Instagram allowed users to search for hashtags related to child-sex abuse, including graphic terms such as #pedowhore, #preteensex, #pedobait, and #mnsfw, the last of which stands for “minors not safe for work”. These hashtags directed users to accounts offering to sell pedophilic materials via “menus” of content, including videos of children harming themselves or committing acts of bestiality. Some accounts allowed buyers to “commission specific acts” or arrange “meet-ups”. The researchers, who discovered “large-scale communities promoting criminal sex abuse” on the platform, set up test accounts to observe the network and soon began receiving “suggested for you” recommendations for other accounts that promoted pedophilia or linked out to external websites.
“That a team of three academics with limited access could find such a huge network should set off alarms at Meta,” said Alex Stamos, head of the Stanford Internet Observatory and Meta’s former chief security officer, who called on the company to “reinvest in human investigators”.
In response to the report, a spokesperson for Meta, Instagram’s parent company, told The Post that the company has since restricted the use of “thousands of additional search terms and hashtags on Instagram”, is “continuously exploring ways to actively defend against this behavior”, and has “set up an internal task force to investigate these claims and immediately address them”. Meta pointed to its extensive enforcement efforts against child exploitation: it disabled more than 490,000 accounts that violated its child safety policies in January, blocked more than 29,000 devices for policy violations between May 27 and June 2, 2022, and took down 27 networks that spread abusive content on its platforms between 2020 and 2022. The company also said it already hires specialists from law enforcement and collaborates with child safety experts to keep its methods for combating child exploitation up to date.
Instagram also previously showed users a pop-up warning that certain searches on the platform would yield results that “may contain images of child sexual abuse”. According to the report, Meta disabled the option to view those results but has not disclosed why it was ever offered.
The findings, first reported by The Wall Street Journal, come as Meta and other social media platforms face ongoing scrutiny over their efforts to police and prevent the spread of abusive content. In April, a group of law enforcement agencies that included the FBI and Interpol warned that Meta’s plans to expand end-to-end encryption on its platforms could effectively “blindfold” the company from detecting harmful content related to child sex abuse.
FAQs
What was the report about?
The report by researchers at Stanford University and the University of Massachusetts Amherst detailed how Instagram’s recommendation algorithms linked and promoted a “vast pedophile network” that advertises the sale of illicit “child-sex material” on the social media platform. Hashtags related to child-sex abuse directed users to accounts offering to sell pedophilic materials, including videos of children harming themselves or committing acts of bestiality.
What has Meta done in response to the report?
A Meta spokesperson explained that the company has since restricted the use of “thousands of additional search terms and hashtags on Instagram”. The company disabled over 490,000 accounts that violated its child safety policies in January, blocked more than 29,000 devices for policy violations between May 27 and June 2, 2022, and took down 27 networks that spread abusive content on its platforms from 2020 to 2022. Meta has also set up an internal task force to investigate the report’s claims.
What was the response to the report?
The report has drawn alarm and criticism over Instagram’s algorithm, with calls for Meta to hire more human investigators, particularly specialists from law enforcement, to combat child exploitation. The findings, first reported by The Wall Street Journal, underscore the ongoing scrutiny social media platforms face over their efforts to police and prevent the spread of abusive content.
