Children could face a lifetime ban from social media if they share classmates’ nudes, under new proposals.
Ofcom is reportedly urging online platforms to block these classroom cyberbullies from using their sites ever again.
Tech companies will be told to use identity verification systems and internet address tracking to ensure offenders cannot re-register under a different name.
The rules are also likely to apply to school kids who share nude images on group chats.
More than half of 13 to 15-year-old children who have sent nude pictures of themselves online have had the images distributed to unintended recipients, according to Snapchat.
The regulator said those who ‘share, generate, or upload CSEA (child sexual exploitation and abuse) content… should be banned from the service and prevented from returning’, the Telegraph reports.
Ofcom admitted that it was a ‘particularly difficult issue’ to establish whether children should come under such online rules, including young people who were coerced into sending pictures of themselves.
The intention is neither to penalise grooming victims nor those in consensual relationships, the body insisted, but images distributed to a wider group ‘can have a significant impact on victims’.

The body has considered more lenient options including banning children on a case-by-case basis or introducing an appeals process, but it fears taking an easier position against younger people would encourage adults to pretend to be children in a bid to escape punishment.
Sharing or receiving sexual images of children is already against the law but children are not normally prosecuted for the crime if they are a victim or it is seen as non-abusive.
The new rules would also apply to images produced by artificial intelligence.
Some sites have existing zero tolerance policies for those who distribute such pictures but the new plans will enforce these in law.
Tech giants with more than one service, including Meta which owns Facebook, WhatsApp and Instagram, would be required to ban people who break new rules from all of their platforms.
The policy is part of a wider aim to protect children online and prevent illegal content going viral.
The Internet Watch Foundation, Britain’s child abuse imagery hotline, reported receiving an alert every 74 seconds in 2024 – a rise of eight per cent compared with the previous year.
In June it was reported that ministers are considering proposals to hand children a social media curfew under measures to improve online safety.

Technology Secretary Peter Kyle indicated he was considering an ‘app cap’ to restrict how much time youths spend on their phones.
The cap would limit access to apps to two hours a day, outside of school time and before 10pm.
It came as Mr Kyle faced criticism from the father of a teenager who took her own life after viewing harmful content, who warned that ‘sticking plasters’ will not be enough to strengthen online safety measures.
The Online Safety Act has passed into law, and from this year will require tech platforms to follow new Ofcom-issued codes of practice to keep users safe online, particularly children.
But Ian Russell, whose 14-year-old daughter Molly died in 2017, said it was not tough enough and urged the Prime Minister to ‘act decisively’ in toughening legislation to protect young people online.
Mr Russell, who is chairman of the Molly Rose Foundation set up in his daughter’s memory, said: ‘Every day the Government has delayed bringing in tougher online safety laws we’ve seen more young lives lost and damaged because of weak regulation and inaction by big tech.
‘Parents up and down the country would be delighted to see the Prime Minister act decisively to quell the tsunami of harm children face online, but sticking plasters will not do the job.
‘Only a stronger and more effective Online Safety Act will finally change the dial on fundamentally unsafe products and business models that prioritise engagement over safety.’
Hefty fines and site blockages are among the penalties for those caught breaking the rules, but many critics have argued the approach gives tech firms too much scope to regulate themselves.