Police warn of viral AI homeless man TikTok prank sparking false 911 calls nationwide

Police departments across the United States and the United Kingdom are warning residents about a dangerous new social media trend involving an “AI homeless man” prank circulating on TikTok. 

The viral challenge uses generative artificial intelligence to fabricate realistic images of homeless individuals inside people’s homes, prompting alarmed parents to call emergency services and often resulting in unnecessary police responses.

The prank, which has spread rapidly on TikTok and Snapchat, has caused confusion, wasted resources, and raised ethical concerns about the exploitation of unhoused individuals for entertainment. 

Several police departments have now issued public statements urging the public to stop participating in the trend.

The “AI homeless man” prank begins when young social media users generate images of a disheveled person inside their home, usually with the help of generative AI tools such as Snapchat’s AI camera. 

They then send the image to unsuspecting family members, claiming that a homeless person broke in or asked to rest indoors. Many parents, believing the situation to be real, react in panic; some have gone as far as contacting the police.

Videos of these reactions have gained millions of views online under the hashtag #homelessmanprank, which has accumulated more than 1,200 videos. 

The trend appears to have emerged in late September and has since gained traction across multiple platforms, particularly among teenagers experimenting with AI-generated images.

In Texas, the Round Rock Police Department reported receiving at least two emergency calls that turned out to be hoaxes related to the trend. 

“While no one was harmed, making false reports like these can tie up emergency resources and delay responses to legitimate calls for service,” the department said in a statement posted on X.

Similarly, the Oak Harbor Police Department in Washington state responded to a call about an alleged homeless individual on a high school campus, later discovering it was a false alarm triggered by students participating in the same prank.

Experts warn that while the prank might seem harmless to participants, it exposes several troubling social and ethical issues. 

Dr. Amanda Kessler, a sociologist at the University of California, Los Angeles, who studies digital behavior, said the trend highlights the risks of blending AI technology with social media’s viral culture.

“What we’re seeing is the convergence of powerful image generation tools and teenage impulsivity,” Kessler said. 

“The realism of AI imagery can create confusion, and when you mix that with fear and quick sharing online, the results can escalate fast, sometimes to dangerous levels.”

Digital safety analyst Robert Finlay from the nonprofit group CyberTruth noted that pranks involving emergency services are not new but warned that generative AI has made misinformation easier to create and harder to detect. 

“AI-generated hoaxes are a new frontier in digital deception,” Finlay said. “Unlike past pranks using Photoshop, these tools can produce photorealistic results that convince even tech-savvy users.”

Law enforcement agencies are concerned not only about wasted resources but also about the potential for escalation. In an interview with NBC’s Nightly News, Round Rock Police Patrol Commander Andy McKinney said that false reports of intruders often trigger high-level tactical responses.

“Getting a call about an intruder causes a pretty aggressive response for us because we’re worried about safety,” McKinney said. “It could mean clearing a home with guns drawn or even a SWAT deployment.”

While police data specific to the “AI homeless man” prank remains limited, false emergency calls linked to social media trends have been increasing. 

According to the National Emergency Number Association, US emergency dispatchers handle an estimated 240 million calls annually, with roughly 10 percent classified as non-emergency or prank-related.

More broadly, AI-generated misinformation has become a growing global concern. A 2024 Pew Research Center study found that 68 percent of adults in the United States had encountered an AI-generated image or video online that they initially believed was real.

Social media platforms, including TikTok and Snapchat, have faced mounting pressure to improve content moderation and transparency around AI-generated content.

Parents affected by the prank have described moments of genuine fear. “When my daughter sent me that photo, I froze,” said Laura Jennings, a mother from Dallas. “It looked like a man was standing right in our living room. I locked myself in the bedroom and called 911 before realizing it was fake. I felt embarrassed but also angry that something like this could look so real.”

Homelessness advocates have condemned the trend, arguing that it reinforces harmful stereotypes. “Using the image of a homeless person as the punchline of a joke dehumanizes an already vulnerable population,” said Michael Raines, director of the Seattle Housing Outreach Coalition. “It shows how disconnected social media users can become from real-world suffering.”

In Massachusetts, Salem police issued a statement calling the prank dangerous and disrespectful, adding that officers “do not know it’s a prank when responding” and treat such calls as potential break-ins. “It’s not just a waste of resources; it can put both officers and civilians at risk,” the statement read.

Authorities and educators are now calling for digital literacy programs that teach young people about the consequences of AI misuse.

Some school districts have already begun incorporating lessons on ethical technology use. Tech platforms, meanwhile, face renewed scrutiny over how they label or moderate AI generated content.

Dr. Kessler suggested that these incidents could prompt new legislation. “As AI deepfakes and image manipulation become more prevalent, lawmakers will likely move toward clearer regulations governing misuse,” she said. “The challenge is doing so without stifling innovation.”

Snapchat and TikTok have yet to release official responses to the controversy. Experts believe social platforms may introduce stronger content detection tools or disclaimers identifying AI-generated visuals.

What began as a digital prank has evolved into a national concern, blending social media virality, emerging AI technology, and real-world safety risks.

While many of the incidents have ended without injury, police departments continue to warn that the misuse of AI for hoaxes can strain emergency systems and potentially lead to harm.

As law enforcement agencies and digital platforms grapple with the implications of this trend, one message remains clear: not everything seen on screen is real, and sometimes the illusion can carry serious consequences.