Earlier this month, Instagram announced they’d be cracking down on self-harm imagery. Or rather, they’ve doubled down on the sensitive content filters implemented in 2017.
The filters, or “sensitivity screens,” blur out content related to suicide or self-harm, and users can tap the screen to view the image. The idea was to reduce the likelihood of stumbling upon unwanted or harmful content.
Since then, Instagram has been dubbed the worst social media platform for mental health, particularly among young people.
From body dysmorphia to eating disorders, self-harm, and low self-esteem — the platform is an undeniable minefield for all kinds of mental health issues.
The initial filters didn’t ban images of cutting or other forms of self-harm, as Instagram didn’t want to ban posts from those who were struggling.
While most of the dialog surrounding social media and mental health has focused on things like FOMO and body image issues, the conversation has shifted toward what content should be banned in an effort to protect vulnerable users.
Here’s a little more about Instagram’s push toward a healthier platform, and what prompted it.
Why is Instagram increasing controls on sensitive content?
The decision to ramp up controls over self-harm posts came two years after 14-year-old Molly Russell took her own life. Molly’s parents believe that content found on Pinterest and Instagram may have played a role in her death. Her father, Ian Russell, has been vocal about the failure of social media companies to protect their young users.
In the wake of the tragedy, UK health secretary Matt Hancock warned Facebook (which, of course, owns Instagram) that he would take legal action to protect young people from harmful content. In a letter to “social media bosses,” Hancock said it is appalling how easily young people can access content that “leads to self-harm and suicide.”
Adam Mosseri, the head of Instagram, responded by announcing that they would roll out sensitivity screens blocking images that depict self-harm.
“We still allow people to share that they are struggling,” Mosseri stated. The app’s policy only bans images that “glorify” self-harm. The platform does hide posts that reference self-harm or contain non-graphic depictions of it.
As it stands, the app does ban content that promotes self-harm or suicide. However, Instagram has struggled to control what its recommendation algorithm surfaces, allowing graphic imagery to slip past the filters anyway.
Instagram’s algorithm is set up so users receive personalized recommendations based on past activity. So, despite Instagram’s efforts, the platform guides users seeking self-harm content toward other posts/accounts that align with their “interests.”
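To see why this happens, consider a toy interest-based recommender. This is purely an illustrative sketch, not Instagram’s actual algorithm, and the tags and post IDs are made up: it ranks candidate posts by how many tags they share with the user’s past activity, so if that history contains self-harm-related tags, similar content rises to the top.

```python
# Illustrative sketch only -- NOT Instagram's real system.
# A recommender that optimizes for tag overlap with past activity will
# naturally steer users toward more of whatever they engaged with.
from collections import Counter

def recommend(history_tags, candidates, k=2):
    """Rank candidate posts by how many tags they share with the user's history."""
    interests = Counter(history_tags)  # how often each tag appears in past activity
    scored = sorted(
        candidates,
        key=lambda post: sum(interests[t] for t in post["tags"]),
        reverse=True,
    )
    return [post["id"] for post in scored[:k]]

history = ["sad", "selfharm", "sad"]  # hypothetical tags from posts the user lingered on
candidates = [
    {"id": "A", "tags": ["travel", "food"]},
    {"id": "B", "tags": ["sad", "selfharm"]},
    {"id": "C", "tags": ["sad"]},
]
print(recommend(history, candidates))  # ['B', 'C'] -- the most similar posts win
```

Nothing in this scoring function knows that some “interests” are harmful; it simply amplifies engagement, which is the core of the moderation problem described above.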
Mosseri admits in an op-ed that the platform hasn’t done enough to keep these images out of feeds. While the 2017 effort was a step in the right direction, this latest move involves allocating more resources to fighting back — content moderators and engineers.
Is it Instagram’s responsibility to ban sensitive content?
Early reports of the tool in action show that Instagram did censor certain hashtags. Obvious ones like #selfharm or #selfinjury came with the sensitive content label, but users could simply coin new hashtags and circumvent the filters.
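The weakness of that approach is easy to demonstrate. The sketch below is a hypothetical, minimal version of hashtag-blocklist filtering (the blocklist and function name are my own, not Instagram’s): any exact-match filter passes whatever it hasn’t seen before, so a misspelled or newly coined tag sails through.

```python
# Hypothetical sketch of naive blocklist filtering -- not Instagram's code.
BLOCKLIST = {"#selfharm", "#selfinjury"}  # assumed banned tags for illustration

def needs_sensitivity_screen(caption: str) -> bool:
    """Flag a post if any hashtag in its caption exactly matches the blocklist."""
    hashtags = {word.lower() for word in caption.split() if word.startswith("#")}
    return bool(hashtags & BLOCKLIST)

print(needs_sensitivity_screen("feeling low #selfharm"))   # True  -- caught
print(needs_sensitivity_screen("feeling low #selfh4rm"))   # False -- novel variant evades the filter
```

Closing that gap requires fuzzy matching or learned classifiers rather than a fixed list, which is exactly where moderation tools tend to fall behind their users.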
As with some of the biased algorithms we’ve seen in the news recently, the problems aren’t evident until something bad happens. A tool isn’t programmed to have racist inclinations or built-in gender bias, but once it’s out in the wild, dealing with real people, it doesn’t have the “training” to respond appropriately to every situation.
It’s troubling that the platform has been recommending sensitive content to a vulnerable audience. But it’s unclear whether Instagram’s ban will extend to accounts that raise awareness of mental health issues or speak out about their experience with depression.
As we’ve seen with Facebook’s crackdown on fake news and political ads, many accounts may find themselves subject to penalty. But, it’s certainly a step in the right direction.