The government has been discussing banning Instagram

    The Health Secretary said social media sites that fail to remove harmful material relating to self-harm and suicide could face serious sanctions.

    It’s not news that spending too much time on social media sites that present a curated, exaggerated view of other people’s lives can be devastating for our mental health. The potential damage comes not just from the tendency to compare ourselves to others, but also from the potentially triggering material hosted on platforms like Instagram and Tumblr, where users in crisis can view content linked to depression, self-harm and suicide with relative ease.

    Now, Health Secretary Matt Hancock has declared war on social media platforms and their lack of commitment to protecting vulnerable users from triggering imagery — and he’s even threatened a blanket ban on social media in the UK if Twitter, Snapchat, Pinterest, Apple, Google, Facebook and Instagram fail to respond to his pleas for change. “We can legislate if we need to,” said the politician, speaking about the issue with Andrew Marr today. “It would be far better to do it in concert with the social media companies, but if we think they need to do things that they are refusing to do, then we can and we must legislate.”

    “Ultimately, parliament does have that sanction.”

    The MP was spurred into action by the death of 14-year-old Molly Russell, who sadly took her own life in 2017 after struggling with her mental health. Speaking out about the dangers social media can pose to those trying to cope with mental health issues, Molly’s dad accused social media platforms of being partly responsible. In fact he claimed Instagram “helped kill my daughter” — Molly having viewed disturbing content about suicide and self-harm before her death.

    In spite of the warning to tech giants, Matt Hancock explained that banning the sites would be a last resort. “It’s not where I’d like to end up, because there’s a great positive to social media too,” he said. “But we run our country through parliament and we will act if we have to.”

Currently, social media platforms do have some safeguards in place against harmful, violent and triggering content. Facebook allows users to flag posts they find worrying, which moderators can then remove. Instagram bans a bizarre array of hashtags, but the list doesn't include those which may be used to promote triggering material. On Tumblr, users searching for triggering hashtags or content which may reflect a mental health crisis are asked if they are struggling, with a particular focus on pro-ana or ‘thinspiration’ blogs. But the Health Secretary believes we’re still not going far enough in protecting those in mental health crisis.

    “I welcome that [social media companies] have already taken important steps and developed some capabilities to remove harmful content,” he said in an open letter. “But I know you will agree that more action is urgently needed. It is appalling how easy it is to still access this content online and I am in no doubt about the harm this material can cause, especially for young people.

    “It is time for internet and social media providers to step up and purge this content once and for all.”

And it seems his words are already having an effect. Instagram has issued a response promising to reduce the amount of harmful content on its platform and to block more hashtags. “Nothing is more important to us than the safety of the people in our community, and we work with experts every day to best understand the ways to keep them safe,” Instagram said.

    “We do not allow content that promotes or encourages eating disorders, self-harm or suicide and use technology to find and remove it. Mental health and self-harm are complex and nuanced issues, and we work with expert groups who advise us on our approach.” The platform said that, in consultation with these groups, they had chosen to redirect those searching for offensive content to support services rather than banning the content itself, but they promised this — among other policies — would now be under review.

    “We are undertaking a full review of our enforcement policies and technologies around self-harm, suicide and eating disorders.”
