The British government is taking aim at harmful content on social media platforms like Facebook, Instagram, and TikTok with proposed measures that would require these companies to “tame” their algorithms. The goal is to filter out or downgrade material that could harm children.
Under the plan proposed by regulator Ofcom, tech companies would need to implement more than 40 practical measures under Britain’s Online Safety Act, which became law in October. These include robust age checks to prevent children from accessing harmful content related to suicide, self-harm, and pornography.
Ofcom Chief Executive Melanie Dawes emphasized the importance of holding tech firms accountable for keeping children safe online. She stated, “In line with new online safety laws, our proposed Codes firmly place the responsibility for keeping children safer on tech firms.”
The use of complex algorithms by social media companies to prioritize content and keep users engaged has raised concerns that children may be exposed to harmful material. Technology Secretary Michelle Donelan said that tackling these algorithms and introducing age checks would bring a fundamental change in how children in Britain experience the online world.
Ofcom is expected to publish its final Children’s Safety Codes of Practice within a year, following a consultation period that ends on July 17. Once the codes are approved by parliament, the regulator will begin enforcement, with fines among the possible penalties for non-compliance. The approach is intended to shield children from the harms of dangerous online content before it reaches them.