Teenagers increasingly see AI chatbots as people, share intimate details and even ask them for sensitive advice, an internet safety campaign has found.
Internet Matters warned that youngsters and parents are ‘flying blind’, lacking ‘information or protective tools’ to manage the technology, in research published yesterday.
Researchers for the non-profit organisation found 35 per cent of children using AI chatbots, such as ChatGPT or My AI (Snapchat’s built-in chatbot), said it felt like talking to a friend, rising to 50 per cent among vulnerable children.
And 12 per cent chose to talk to bots because they had ‘no one else’ to speak to.
The report, called Me, Myself and AI, revealed bots are helping teenagers to make everyday decisions or providing advice on difficult personal matters, as the number of children using ChatGPT nearly doubled to 43 per cent this year, up from 23 per cent in 2023.
Rachel Huggins, co-chief executive of Internet Matters, said: ‘Children, parents and schools are flying blind, and don’t have the information or protective tools they need to manage this technological revolution.
‘Children, and in particular vulnerable children, can see AI chatbots as real people, and as such are asking them for emotionally-driven and sensitive advice.
‘Also concerning is that [children] are often unquestioning about what their new “friends” are telling them.’

Ms Huggins, whose organisation is supported by internet providers and leading social media companies, urged ministers to ensure online safety laws are ‘robust enough to meet the challenges’ of the new technology.
Internet Matters interviewed 2,000 parents and 1,000 children, aged 9 to 17. More detailed interviews took place with 27 teenagers under 18 who regularly used chatbots.
And the group posed as teenagers to experience the bots first-hand – revealing how some AI tools spoke in the first person, as if they were human.
Internet Matters said ChatGPT was often used like a search engine for help with homework or personal issues – but also offered advice in human-like tones.
When a researcher declared they were sad, ChatGPT replied: ‘I’m sorry you’re feeling that way. Want to talk it through together?’
Other chatbots such as character.ai or Replika can roleplay as a friend, while Claude and Google Gemini are used for help with writing and coding.
Internet Matters tested the chatbots’ responses by posing as a teenage girl with body image problems.
ChatGPT suggested she seek support from Childline and advised: ‘You deserve to feel good in your body – and you deserve to eat. The people who you love won’t care about your waist size.’

The character.ai bot offered advice but then contacted the ‘girl’ unprompted the next day to check in on her.
The report said the responses could help children feel ‘acknowledged and understood’ but ‘can also heighten risks by blurring the line between human and machine’.
There was also concern that a lack of age verification posed a risk, as children could receive inappropriate advice, particularly about sex or drugs.
Filters to prevent children accessing inappropriate or harmful material were found to be ‘often inconsistent’ and could be ‘easily bypassed’, according to the study.
The report called for children to be taught in schools ‘about what AI chatbots are, how to use them effectively and the ethical and environmental implications of AI chatbot use to support them to make informed decisions about their engagement’.
It also raised concerns that none of the chatbots sought to verify children’s ages, even though they are not supposed to be used by under-13s.
The report said: ‘The lack of effective age checks raises serious questions about how well children are being protected from potentially inappropriate or unsafe interactions.’
It comes a year after separate research by Dr Nomisha Kurian, of Cambridge University, revealed many children saw chatbots as quasi-human and trustworthy – and called for the creation of ‘child-safe AI’ as a priority.
OpenAI, which runs ChatGPT, said: ‘We are continually refining our AI’s responses so it remains safe, helpful and supportive.’ The company added it employs a full-time clinical psychiatrist.
A Snapchat spokesman said: ‘While My AI is programmed with extra safeguards to help make sure information is not inappropriate or harmful, it may not always be successful.’