The prevalence of social media in our world has led to deeper connections across oceans and continents. It has also given abusers an opportunity to promote violence, make threats, and control survivors.

Twitter hopes to address violence on social media by making changes to its conduct policy. Under the existing rules, threats of violence, death, or disease were already considered violations. However, the new rules expand these policies to address abusive and hateful content in usernames and profiles as well.

In particular, Twitter’s new policies prohibit:

  • Accounts affiliated with violent organizations, on or off the social media platform
  • Content that glorifies or celebrates violent acts and violent people
  • Accounts that contain abusive or threatening content in the username, display name, or profile photos
  • Hateful imagery such as logos, symbols, or images used to promote hostility against others

Repercussions for these behaviors range from having media flagged as “sensitive” (meaning the content will not be visible in public searches when “Safe Mode” is on), to users being asked to remove content, to Twitter suspending accounts.

Twitter has recently drawn criticism for granting verification badges to prominent white nationalists who promote violence and for responding poorly to reports of violent conduct. While the new policies aim to address these criticisms, Twitter anticipates a short period of adjustment and revision. In a statement, the company said:

“In our efforts to be more aggressive…we may make some mistakes and are working on a robust appeals process. We’ll evaluate and iterate these changes in the coming days and weeks.”

Regardless of potential mistakes, Twitter sees this as an opportunity to protect users from dangerous activity and to combat abusive behavior online. As the company put it, “We’re making these changes to create a safer environment for everyone.”