Real child murder victims recreated by AI in sick deepfake trend
A disturbing trend has emerged on social media platform TikTok, with users creating videos that use generative artificial intelligence to depict grisly murders and other heinous true crimes committed against children.
The resulting video clips, known as modified deepfakes, show young children, sometimes bruised, narrating their chilling real-life experiences with computer-generated voices. These videos have been viewed almost 3 million times, with some clips claiming that AI is used in lieu of real photos as a way of “respecting the family” of the deceased.
Notable murders recounted, in the first person, include that of Elisa Izquierdo, a 6-year-old murdered by her mother, and Carl Newton Mahan, the 6-year-old Kentucky boy who fatally shot an 8-year-old in 1929.
Recreated images of famous deaths—such as those of President John F. Kennedy and George Floyd—have also made the rounds on the social media platform, but those videos have been taken down.
Rolling Stone reported that one account known for posting these videos, @truestorynow, has been banned from TikTok for violating community guidelines.
Some viewers find these videos strange and creepy; others argue they are designed to trigger strong emotional reactions that drive clicks and likes.
Paul Bleakley, an assistant professor in criminal justice at the University of New Haven, warns that this trend has the potential to revictimize people who have already been victimized. He says that legal options for fighting back against these clips represent a “very sticky, murky gray area,” and that for the parents or relatives of the children depicted in these AI videos, watching them could be deeply uncomfortable.
FAQs:
What are modified deepfakes?
Modified deepfakes, unlike traditional deepfakes, do not alter existing images but instead create new ones. These are used to create videos that depict grisly murders and other heinous true crimes committed against children.
What kind of murders are depicted in these videos?
Notable murders recounted, in the first person, include that of Elisa Izquierdo, a 6-year-old murdered by her mother, and Carl Newton Mahan, the 6-year-old Kentucky boy who fatally shot an 8-year-old in 1929.
Why are AI-generated images used instead of real photos?
Some video clips claim that AI is used in lieu of real photos as a way of “respecting the family” of the deceased.
Have any of these accounts been banned from TikTok?
Yes, the account @truestorynow has been banned from TikTok for violating community guidelines.
What are the potential consequences of these videos?
According to Paul Bleakley, an assistant professor in criminal justice at the University of New Haven, these videos have the potential to revictimize people who have already been victimized. Bleakley warns that the legal options for fighting back against these clips represent a “very sticky, murky gray area.”

AI uses sick deepfake trend to recreate real child murder victims
TikTok users are creating modified deepfake video clips using generative artificial intelligence to depict heinous true crimes committed against children. In these videos, young children, sometimes shown bruised, narrate their chilling real-life experiences with computer-generated voices. The videos depict notable murders, including the case of Elisa Izquierdo, a 6-year-old murdered by her mother, and Carl Newton Mahan, the 6-year-old Kentucky boy who fatally shot an 8-year-old in 1929.
These videos have been viewed millions of times, and some claim that AI is used in lieu of real photos to “respect the family” of the deceased. Rolling Stone reported that modified deepfake video clips recreating famous deaths, such as those of President John F. Kennedy and George Floyd, have also made the rounds on the social media platform.
One account with nearly 50,000 followers has been banned from TikTok for violating community guidelines, but many users still find these videos disturbing and upsetting. Paul Bleakley, an assistant professor in criminal justice at the University of New Haven, warns that the practice has the “real potential to revictimize people who have been victimized before.” He explains: “Imagine being the parent or relative of one of these kids in these AI videos…here’s an AI image [based on] your deceased child, going into very gory detail about what happened to them.”
The legal options for fighting back against these videos represent a “very sticky, murky gray area,” Bleakley says, adding that the videos appear “designed to trigger strong emotional reactions, because it’s the surest-fire way to get clicks and likes. It’s uncomfortable to watch, but I think that might be the point.”