Response to: “The Secret Rules of the Internet”

The Verge’s 2016 long-form article, "The Secret Rules of the Internet," by Catherine Buni and Soraya Chemaly, provides a detailed glimpse into what they call "the murky history of moderation, and how it’s shaping the future of free speech." The article opens with the role of YouTube’s early moderators in 2006 – Julie Mora-Blanco and her team, known as the SQUAD. Their job was to scour the site for offensive or malicious material and remove it, thereby protecting the young company’s public image.

In order to understand where the need for moderation originated, we have to backtrack a few years. In the early days of the commercial web, content regulation was minimally practiced across the blossoming tech industry. There were, of course, the then-mainstream websites with curated content; but the sites at the forefront of social media’s initial boom were driven by user-generated content. This idea re-sparked what was formerly known as "The American Dream."

Sounds dramatic, I know. However, the general idea of the American Dream can be loosely defined as an ethos that, according to Wikipedia, incorporates "the set of ideals (democracy, rights, liberty, opportunity and equality)" and extends them to every participant. Although it is most often associated with economic opportunity, it has also served as a beacon of hope for the politically oppressed.

Much has changed in the US, and across the world, since the adoption and seeming abandonment of that dream. Social class allowed some to have their voices heard while others were stifled. The internet, however, gave everyone a voice – for better or for worse – and it was not limited to the US. This new promise of truly uncensored speech, protected by anonymity and carried by an increasingly ubiquitous medium, created opportunities for the atrocities of mankind to be exposed where they might previously have gone unreported. What may not have been expected, though, was that this very same medium would prove equally effective at perpetuating and enabling malicious behavior.

So, what can we do?

When pressed for an association with the phrase "free speech," the First Amendment is likely the first thing a US citizen would name. Many Americans, however, are grossly unaware of what the law actually means and the extent of the protections it provides. The free speech outlined in the Constitution protects people from government-imposed repercussions, barring a few narrow exceptions (incitement, true threats, causing panic in public, child endangerment, etc.) – and, contrary to popular belief, hate speech is not among them. Because of this, the responsibility of deciding what is and is not acceptable expression falls on everyone except the government.

As noted in the article, Section 230 of the 1996 Communications Decency Act relieves media platforms of legal responsibility for the content their users post. So where does that leave us? If the government cannot legislate its way to a solution without trampling on existing rights, and the platforms cannot be held legally responsible for the content they host, then the only viable mechanism that remains is the court of public opinion. This is essentially where we stand today. It is up to the individual end user to take responsibility for their own actions. Should they step out of sync with societal norms, they may be ostracized, but they ultimately go unpunished. That’s a lot of unaccounted-for responsibility to hand to people who may or may not be able to handle it.

The situation this has created reminds me of a joke (read: observation) by the comedian (read: philosopher) George Carlin: "Think of how stupid the average person is, and realize half of ‘em are stupider than that" (NSFW source video).

Companies, driven by profit, avoid addressing the issue unless they suffer its effects directly – a common side effect of capitalism in general. In fact, some sites profit from making the material available. Those that are more image-conscious often outsource their content moderation. Much like other undesirable real-world jobs, the work is pushed onto the less fortunate, who are sometimes economically forced to review the worst content humanity has to offer in order to protect regular end users. Once again, as in other industries, the luxuries enjoyed by the masses come at the cost of unfairly burdening a group of already-struggling individuals.

Online content moderation and regulation is being approached, and sometimes rejected, in the same way traditional regulations have been attempted in other areas:

  • Step 1: Wrongly assume that everyone is good and wants to do “the right thing”
  • Step 2: Reward greed, and those who cheat society, with profits
  • Step 3: Act surprised

Legislation is off the table. Education on a mass scale takes too long. Socio-economic pressure on the companies hosting the platforms is really the only realistic way to deal with today’s situation. Looking ahead, though, companies like Facebook and Google have begun developing AI to moderate content in place of humans. It’s a start, but we’re only at the beginning. A lack of regulation may be a problem, but an overabundance of it comes with its own set of issues. Let’s hope it does not go in that direction either!
