In my humble opinion, the practice of peacefully and fairly sharing and evaluating ideas in a public forum is an undeniable benefit not only to American society but to the rest of the world as well. There is a crucial reason why open discourse is protected under the First Amendment: if both sides of a debate can't be equally expressed, informed decisions, opinions, and social agreements can't be made, and the reign of unexamined, untested ideology is the unhealthy, inevitable result. I feel we've been watching this play out firsthand in the radicalization and abusive manipulation of online communities by both sides of our two-party political system. Unlike (formerly, and still somewhat) community-moderated platforms like Reddit, other platforms, using moderation systems and practices that can at best be described as behind-the-scenes and unaccountable to the majority of users, are responsible (I would argue primarily by accident) for the deterioration of mutually respectful discourse and the distortion of our perceptions of reality.
I’m not directly referring to the problems of botnets, disinformation, hate speech, or social engineering campaigns, although these are all contributing factors. What I’m getting at is Big Tech and its questionable, largely failed attempts to combat these things. Privatized, centralized communication platforms have, for better or worse, become the modern public forum in today’s ever-evolving age of technology. Regardless of political leanings, and even though it’s completely legal, it’s hard to deny that the recent escalation of manual and algorithmic “censorship” and “fact checking,” in combination with curated, targeted search results, poses a significant problem for the future of communication in our society.
Balanced freedom of expression is so highly valued and important because controlling the information in a conversation on a contentious issue is essentially controlling people’s thoughts and perceptions of what’s being discussed (exactly what you DON’T want an overt power to have over people). Add to that how tech platforms have become just as powerful in controlling the modern flow of information as any tyrannical government throughout history, and we have a bit of a predicament. Regarding power and control, is there really any difference between government and tech?
Who do they answer to?
The people? Ad companies? Political parties? Foreign interests? I believe those are the real questions often swept under the rug by distractions like the Section 230 debate.
Mind your P’s and Q’s (Points and Questions).
This is written primarily from a First Amendment/American perspective, and I understand things are different under the Universal Declaration of Human Rights. I’m just trying to spark legitimate conversation about, or insight into, the following observations, opinions, and questions:
P. Private companies are not bound by the First Amendment, and Section 230 protections are crucial to maintaining free expression online for ALL parties. This shouldn’t be up for discussion. Using 230 to threaten major platforms over censorship is almost as nonsensical as the DOJ’s attempts to universally hamstring end-to-end encryption in the name of combating child pornography.
P. A private company should be protected against unwarranted government overreach mandating how it operates.
P. Regarding threats of violence, I’m sure everybody’s on the same page that such things aren’t protected speech and have no place in discourse. As for hate speech, things get a bit fuzzy.
P. Hate speech and intolerance are a serious drawback to the purpose of free expression and should rightly be addressed by platforms. Even though it’s protected by law in the US, hatred contributes nothing worthwhile to any discussion.
P. The paradox of tolerance states that unbridled tolerance of intolerant speech can only end one way: with the complete subjugation of the tolerant by the intolerant.
Q. Does this paradox of tolerance have merit?
Q. How much power are we willing to allow a company to have before we can agree it’s a bit too much for what’s reasonably acceptable? And whose interests are really being served?
Q. How can non-community-based platform moderation legitimately moderate hate speech?
P. Platforms like Twitter, Facebook, and YouTube have simply resigned themselves to hiding behind generic community guidelines and conduct policies, taking the power of discussion and decision-making out of the hands of users and communities and placing it in the control of behind-the-scenes employees and “buzz-word” auto-ban systems.
P. YouTube’s definition of hate speech is anything that “might offend a member of a marginalized community.” Twitter defines it as “promoting violence against or directly attacking or threatening other people on the basis of race, ethnicity, national origin, caste, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease.” For Facebook, it’s any “direct and serious attacks on any protected category of people.”
Q. How do you define hate speech in a moderation algorithm (or manually moderate fairly) when there’s not a universally accepted definition?
Q. In YouTube’s case, how can a moderator possibly gauge a ban-worthy level of offense when people have so many different connections to, and levels of sensitivity about, different subjects?
P. What’s offensive to some isn’t to others. YouTube’s policy essentially bans discussion of any controversial topic a moderator “thinks” is offensive or flags as such. I honestly believe their intentions are just, but this situation is primed for confusion and mistakes, and is generally counterproductive to communication.
P. Attempting to moderate “disinformation” further complicates an already convoluted situation. Twitter’s, Facebook’s, and YouTube’s attempts at this were shown to be haphazard and blatantly politically biased leading up to and during the election (and still are). The result has been anger, frustration, and a mass “flight” to unregulated alternative platforms.
Q. Who are they to tell us what we can or can’t see? I understand it’s within their legal rights, but are we really so incapable that we can’t determine for ourselves what’s legitimate information and what’s not? Should we continue to allow these companies to control the subject matter, or should it be up to the community?
P. With freedom of choice… if you don’t like the way a company operates, you can simply use another.
Q. Is there really a choice though when the majority of society communicates on a certain platform?
Q. Is forcing individuals to do so healthy for the future of communication, or will it have dire consequences against healing an obvious societal divide?
P. Communication platforms have gone beyond censoring hateful content to censoring legitimate political opinions and information.
P. Reddit was once home to completely “hands-off,” community-led content moderation, but this freedom was abused by many users, earning the site a reputation as a “cesspool of racism.” Thousands of subreddits have now been banned under this year’s new content policy.
Q. Some of these communities undoubtedly deserved to be banned, but did all of them?
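Since the “buzz-word” auto-ban systems mentioned above come up repeatedly, here is a minimal, purely hypothetical sketch of why keyword-based moderation misfires. Nothing in it reflects any platform’s actual system; the blocklist and sample posts are invented for illustration:

```python
import re

# Toy illustration (not any platform's real system) of a "buzz-word"
# auto-ban filter. Naive substring matching flags innocent text, a
# failure known informally as the "Scunthorpe problem."
BANNED_WORDS = {"ass", "hell"}  # hypothetical blocklist

def naive_filter(post: str) -> bool:
    """Flag a post if any banned word appears anywhere as a substring."""
    text = post.lower()
    return any(word in text for word in BANNED_WORDS)

def word_boundary_filter(post: str) -> bool:
    """Slightly better: match whole words only, but still blind to context."""
    text = post.lower()
    return any(re.search(rf"\b{re.escape(w)}\b", text) for w in BANNED_WORDS)

# The substring version flags harmless posts:
print(naive_filter("A classic essay"))          # True: "ass" hides in "classic"
print(naive_filter("Run the shell script"))     # True: "hell" hides in "shell"
# Word-boundary matching avoids those false positives:
print(word_boundary_filter("A classic essay"))  # False
```

Even the improved version can’t judge intent, irony, or context, which is exactly the point: a fixed keyword list can’t encode a contested definition of hate speech.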
The conversation on the future of freedom of expression and censorship on American tech platforms will only progress by looking past the political bias and distrust on both sides of the aisle (which have been hyperinflated over the last four years). This should be a mutually shared goal right now, but I’m not sure whether the point of no return has already been passed between censored outcasts and those trapped in curated, algorithm-fed information feeds. It needs to be acknowledged that it’s not only the intolerant who have been silenced. Yes, the most obvious examples of censorship so far have mostly been one-sided, but it would be naïve to think that’s how things will remain if those in control come to favor different ideologies. Playing the blame game over what got us here is an irrelevant waste of time that will lead nowhere productive. The question is how we move forward. If we don’t unite and make decisions as a UNITED States of America, this experiment in independence and self-governance is over.
The “flight” to alternative platforms like Parler, Gab, or BitChute, where there’s little to no moderation of content at all, doesn’t help bridge the expanding ideological divide, but it’s understandable for those who feel disenfranchised of their right to speak. While these platforms are known for being free of moderation, even hosting racist and completely unhinged material, that’s one of the inevitable realities of freedom, and those who use these forums shouldn’t be judged as complicit with such material. With great power comes great responsibility: I wholeheartedly believe it should be up to individuals, not an overt, centralized tech oligarchy, to control the free flow and exchange of ideas and information.
What originally made the internet so amazing, and such a significant part of the modern development of society, was its ability to connect ALL of us in an incredible, unhindered forum where everyone had equal say. I believe it should be a universal responsibility of users, not unnamed moderators, to be intolerant of intolerance. It should be up to these individuals to expose the faults in intolerant ideologies through public discussion, so they can be condemned by those of good conscience. It should be up to individuals to act as their own “fact checkers” and do their own research to determine what’s true and what’s disinformation. In the debate over whether control of conversation and information should fall to Big Tech or to users and communities, one side has faith in the strength of individuals and the virtue of humanity; the other does not.
Free expression grounded in mutual respect and the sharing of ideas, not hate speech or calls to violence, should also be protected by law at all costs in the modern public forum that social media has become. I think these platforms have gained too much power and influence to simply be considered “private companies,” and I know I’m not alone in this. However, I also believe that going after Section 230 to make this happen is idiotic and will cause more harm than good.
I believe communication based on equality and mutual respect between participants is the truest form of free expression and the purpose behind the First Amendment, but I feel that has been lost in ideological translation. What’s currently going on in the US with media platforms can only be described as a well-intentioned hot mess of anti-American division and absurdity, but I think there’s still hope if people will just rise above their superficial differences and start respecting each other again. A lack of communication got us to this point, and we should be focused on the opposite if anything is to change. As for who takes the first steps in the right direction, I guess we’re just going to have to see how things play out.