A change to the Online Safety Bill means that articles that breach a service’s terms and conditions cannot be removed or hidden until the publisher has been notified and any appeal against the platform’s decision has been resolved. The change aims to avoid a repeat of an incident last year when YouTube abruptly banned the digital station TalkRadio from its platform for violating its content guidelines. It was restored 12 hours later.

Culture secretary Nadine Dorries said democracy “depends on people’s access to high-quality journalism”. She added: “We have seen tech companies arbitrarily take down legitimate journalism with a complete lack of transparency and this could seriously affect public debate. These additional protections will prevent that from happening.”

The amendment applies to the biggest tech platforms, such as Twitter, YouTube and Facebook, and is designed to address concerns that the bill’s provisions to protect users from harmful content could encourage tech companies to be over-cautious, removing or downgrading content unnecessarily.

The new provision does not apply to illegal content, known as priority offences in the bill, which covers material such as terrorist content and child sexual abuse and can be removed without appeal. It also does not apply to sanctioned news outlets such as RT and the Kremlin-backed Sputnik.

Other amendments include requiring Ofcom, the communications watchdog, to review the bill’s impact on news publishers’ content within two years of the bill coming into force. There will also be further protections for reader comments on articles. The bill is due to return to parliament next week before being implemented early next year.

The News Media Association, which represents 900 titles across the national, regional and local news industry, said the changes were “essential” to protect press freedom. “These amendments are necessary to protect media freedom and to ensure consumers have access to accurate, timely and reliable news and information online,” said Sayra Tekin, legal director at the NMA. “By ensuring that identified news publisher content cannot be arbitrarily removed from platforms, the Online Safety Bill will help tackle the flood of misinformation and disinformation online.”

The government also announced some of the harmful content it expects tech platforms to tackle. The list includes online abuse and harassment, the promotion of eating disorders and the encouragement of self-harm. Major platforms will have to explain how they will handle this content – including whether it will lead to removal – in their terms and conditions, which will be monitored by Ofcom.