Organizations that track the material are reporting a surge in A.I.-generated images and videos that threatens to overwhelm law enforcement.
A new flood of child sexual abuse material created by artificial intelligence is reaching a tipping point of realism, threatening to overwhelm the authorities.
Over the past two years, new A.I. technologies have made it easier for criminals to create explicit images and videos of children. Now, researchers at organizations including the Internet Watch Foundation and the National Center for Missing & Exploited Children are warning of a surge of new material this year that is nearly indistinguishable from imagery of real abuse.
New data released Thursday by the Internet Watch Foundation, a British nonprofit that investigates and collects reports of child sexual abuse imagery, identified 1,286 A.I.-generated videos of child sexual abuse worldwide so far this year, compared with just two in the first half of 2024.
The videos have become smoother and more detailed, the organization’s analysts said, because of improvements in the technology and collaboration among the groups that produce them on hard-to-reach parts of the internet known as the dark web.
The rise of lifelike videos adds to an explosion of A.I.-produced child sexual abuse material, or CSAM. In the United States, the National Center for Missing & Exploited Children said it had received 485,000 reports of A.I.-generated CSAM, including stills and videos, in the first half of the year, compared with 67,000 for all of 2024.
“It’s a canary in the coal mine,” said Derek Ray-Hill, interim chief executive of the Internet Watch Foundation. The A.I.-generated content can contain images of real children alongside fake images, he said, adding, “There is an absolute tsunami we are seeing.”