In the era of social networks like Facebook, Instagram, and Twitter, a battle rages to balance protecting free speech with promoting civility. Social networks and media have transformed communication in modern society. The ability to reach millions with our words is as close as the smartphone in our pockets.
Along with the explosion of global communication has come the advent of keyboard warriors, internet trolls, and anonymous racists. Tap a few keys and hate speech is as virulent as the flu. Yet, in a society that values free speech, where do we draw the line?
Recent history has shown that the reach of social media is far and wide. Online users take to these platforms to express themselves in ways never dreamed of in the past. For some, the allure of a global audience brings out language and sentiments they would never dream of using in real life. However, the impact on targeted individuals feels as real as if the words were said to their face.
Case in point: the Nextdoor app. In the spring of 2015, the location-based neighborhood application had raised over $100 million in venture capital, funding that catapulted the company to a billion-dollar valuation. Less than a month later, the company was hit with a scandal.
An internet news site reported a growing problem with racial profiling on the service: users took to the app’s crime and safety forum to report “suspicious” activity by people of color, in many cases casting suspicion on people in their own neighborhoods. Since then, the company has made a concerted effort to stop racial profiling on Nextdoor.
The structure of the app’s posts made it difficult to track down the offending content. As a result, five Nextdoor employees were given the arduous task of reading through thousands of user posts.
Even before the company was made aware of the racial profiling issue, app users could report inappropriate posts. The company realized, however, that the existing reporting procedures were inadequate to handle the problem.
At first, the solution came in the form of a new button that let users report posts containing racial profiling. It quickly became clear that the button was not enough: users seemed unclear on its purpose and used it to report numerous incidents that had nothing to do with racial profiling.
Nextdoor continued to work on a solution and eventually settled on a multi-step answer. First, the company provided diversity training for neighborhood operations teams. Second, it redesigned the app’s community guidelines. Finally, the app itself underwent a redesign.
The app redesign did not immediately solve the problem, so the company ran experiments on several versions of the app. Nextdoor divided users into two groups, each seeing a version that differed slightly in wording, questions, and so on. Over time, it became clear which versions worked better.
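Nextdoor has not published its experiment tooling, but the split-testing described above is a standard technique. As a minimal sketch (with hypothetical variant names), users can be bucketed deterministically by hashing their IDs, so each user consistently sees the same version for the whole experiment:

```python
import hashlib

def assign_variant(user_id: str, variants=("control", "new_prompts")) -> str:
    """Deterministically bucket a user into an experiment variant.

    Hashing the user ID (instead of picking randomly on each visit)
    keeps every user in the same group across sessions, which is what
    makes the two groups comparable over time.
    """
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# Split a batch of users, then compare outcomes per group.
users = [f"user-{n}" for n in range(1000)]
groups = {"control": [], "new_prompts": []}
for u in users:
    groups[assign_variant(u)].append(u)
```

With a cryptographic hash the split is close to even, and no per-user state needs to be stored to remember who saw which version.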
After a few months, the company finalized changes to the crime reporting forums. When race was indicated in a report, users were now prompted to provide more descriptive details, such as hair color and clothing.
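Nextdoor's actual form logic is not public, but the prompting rule described above can be sketched in a few lines. In this hypothetical version (the term list and field names are illustrative, not Nextdoor's), a report that mentions race is held back until the user supplies at least two other physical descriptors:

```python
# Illustrative only: real systems would need far more careful text matching.
RACE_TERMS = {"black", "white", "hispanic", "asian"}
DETAIL_FIELDS = ("hair", "clothing", "shoes")

def validate_report(description: str, details: dict) -> list:
    """Return follow-up prompts the user must answer before posting.

    If the free-text description mentions race, require at least two
    additional descriptors, mirroring the prompts described above.
    """
    words = {w.strip(".,!?").lower() for w in description.split()}
    if not words & RACE_TERMS:
        return []  # race not mentioned: no extra prompts needed
    missing = [f for f in DETAIL_FIELDS if not details.get(f)]
    provided = len(DETAIL_FIELDS) - len(missing)
    if provided >= 2:
        return []  # enough identifying detail already given
    return [f"Please describe the person's {f}." for f in missing]
```

The point of the design is friction: a race-only description cannot be posted as-is, nudging the reporter toward details that actually identify an individual rather than a group.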
As a result, the company reported a 75 percent reduction in posts involving racial profiling. Racism was not eliminated from the app, of course, but the changes netted real improvements.
Perhaps there are lessons here for other social media platforms. Nextdoor used data to work toward a solution to a genuine problem. In the end, the threat against people of color was not eliminated but was drastically reduced, and free speech was not curtailed in the process.
Following Nextdoor’s example, a quick reaction to the problem and an open mind may be helpful tools in the fight against online racism. The same strategy could perhaps serve as a starting point against other forms of online bullying without jeopardizing free speech.