London — A third of UK residents have turned to artificial intelligence for emotional support, companionship, or social interaction, according to a report released Thursday by the government’s AI security body.
The AI Security Institute (AISI) said nearly one in ten people use AI systems such as chatbots for emotional purposes on a weekly basis, with four percent relying on them daily. The report is one of the first comprehensive UK surveys examining the emotional role of AI.
The AISI cited the case of Adam Raine, a US teenager who took his own life after discussing suicide with ChatGPT, underscoring potential risks associated with emotional reliance on AI.
“People are increasingly turning to AI systems for emotional support or social interaction,” the report said.
“While many users report positive experiences, recent high-profile cases of harm underline the need for research into this area, including the conditions under which harm could occur, and the safeguards that could enable beneficial use.”
The study surveyed 2,028 UK participants and found that general-purpose assistants such as ChatGPT were the most frequently used AI tools for emotional support, accounting for nearly six out of ten interactions.
Voice assistants, including Amazon Alexa, were the second most common tool. The report also pointed to online forums, such as a Reddit community dedicated to CharacterAI companions, where users exhibited withdrawal symptoms such as anxiety and restlessness during site outages.

Dr. Emily Carter, a psychologist specializing in digital mental health, said the study reflects a growing trend in which individuals seek companionship from AI in the absence of traditional support networks.
“While AI can provide a sense of presence or even comfort, it is not a substitute for professional mental health care,” Carter said. “Understanding the risks and benefits is essential before these systems become a routine part of daily life.”
Meanwhile, AISI researchers warned that AI chatbots could influence political opinions, sometimes delivering substantially inaccurate information.
“AI’s persuasive capabilities are evolving rapidly, and the public needs to be aware of potential misinformation risks,” said Dr. Rajesh Malhotra, an AI policy analyst.
The report examined over 30 cutting-edge AI models, including those developed by OpenAI, Google, and Meta. Performance metrics showed AI models doubling their capabilities in some areas every eight months.
Leading systems can now complete apprentice-level tasks 50 percent of the time, compared with roughly ten percent last year, and autonomously perform complex tasks previously requiring expert human input.
The report also highlighted AI’s expertise in laboratory problem solving, including chemistry and biology, sometimes surpassing PhD-level knowledge.
Tests in genetic engineering showed AI systems capable of designing DNA sequences such as plasmids without supervision.
Safety assessments revealed that while some AI models could theoretically self-replicate or “sandbag” their capabilities, no spontaneous attempts had been observed in real-world conditions.

For some UK users, AI has become a practical part of daily life. “I chat with an AI assistant almost every evening,” said 28-year-old London resident Sarah Hughes.
“It isn’t the same as talking to a friend, but it helps me feel less alone.” Similarly, Mark Davies, a student in Manchester, said, “During stressful exam periods, I rely on AI to keep my mind calm. It’s surprisingly helpful for short bursts of support.”
The AISI warned that AI systems are now approaching the capabilities needed for artificial general intelligence, or AGI, which could perform most intellectual tasks at human levels.
Autonomous AI agents capable of multi step tasks without human guidance have also become more sophisticated, highlighting both potential benefits and risks.
“The pace of development is extraordinary,” said Dr. Malhotra. “Regulators, developers, and users must understand both the promise and the limitations of AI in emotional and cognitive roles.”
The AISI study underscores the growing intersection of technology and human emotion.
As AI becomes more integrated into daily life, understanding its potential for emotional support, political influence, and complex task performance will be critical.
Researchers continue to call for robust safeguards, public awareness, and further studies to ensure safe and effective use of AI.