Out of Time(Line)
Originally published in AMP Magazine at UT Dallas on 4 Dec. 2017.
One prevailing trend of 2017, if not the prevailing trend, is the hovering sense of “What the fuck is happening?” that seems to permeate every layer of existence. A major driver of this is (surprise!) the President of the United States and his seemingly endless Twitter beef with an ever-growing list of celebrities. But what about his cabal, the people he doesn’t vague-tweet about but instead endorses? Many of them run accounts similar to that of Jason Kessler, the main organizer of the “Unite the Right” rally that ran amok in Charlottesville, Virginia, in August.
At the beginning of November, Kessler’s official account was given Twitter’s “verified” check mark, which drew substantial backlash from concerned users. Trying to clean up their self-created shitstorm, Twitter first rescinded the verified status of Kessler and other hate group leaders, like Dallas’s own Richard Spencer, and then, a week later, on November 15, went a step further and banned their accounts altogether, along with over a hundred other users and groups.
When pressed by PBS News for justification for the suspensions, Twitter pointed to their own terms of service, specifically their policy surrounding “hateful conduct.” The legalese is wishy-washy on what does and does not constitute problematic content, but the larger questions remain: How did we even end up here? Why are Twitter and Facebook the new platforms of choice for hate speech? How did bigots manage to use their social media powers for evil, and what the hell do we do now?
First, let’s look back at 2016, to the election discourse that started it all, particularly the influence of Macedonian teenagers and other (usually Russian) purveyors of misinformation and hate speech. In other words, fake news. Right after the election, Facebook was implicated as a main player in the campaign. According to the BBC and Facebook itself, at least 29 million Americans were exposed to posts from the Internet Research Agency, a Kremlin-linked firm that specializes in “trolling” and misinformation, between June 2015 and August 2017. While the engagement statistics are impossible to pin down, the fact that so many members of the voting public were exposed to fake news is worrisome.
Those posts, which served to stoke right-wing fervor, often get tied up with the posts and communications of the “alt-right,” which is itself misunderstood. Think of the annoying guy from your 10th grade English class who read all of his compositions from a black canvas notebook and had a strange glint in his eye; that’s the kind of misunderstood the alt-right is. In the words of the now-offline Daily Stormer, the alt-right, which has been around for the past four years and was born out of troll culture on 4chan, links racist weeaboos with anime-girl Twitter avatars, new KKK recruits, and your Trump-supporting aunt who’s afraid of the new immigrant neighbors, all under the same agenda: to meme.
As surreal as it may seem, memes form the backbone of the alt-right propaganda campaign. But in the context of an internet-driven media landscape, that starts to make sense. The Russians who worked for the Internet Research Agency had to find “inspiration” somewhere, and the memes of young alt-right members fit perfectly. The result was an assault on social media where one person could share a Pepe meme about the dangers of immigration right between a fake news article about the newest deal the President struck and falsified statistics “proving” that white people are the true victims of racism.
This groundwork worked its way up to the top, into the retweets and accounts of alt-right leaders, which eventually got them banned from Twitter. Similar memes ended up on Facebook, but they paled in number next to fake news articles, which Facebook began taking steps to remove in October. These takedowns are the most concrete action taken by any platform since the problem entered the mainstream around last year’s election, and in the eyes of many, they represent a step in the right direction.
While most can agree that shutting up neo-Nazis and others of their ilk is a pretty good thing, it raises questions about the role of censorship on social media and the place of private companies in all of it.
In discussions of censorship, most attention rightfully goes to government-driven censorship. But what’s happening with the alt-right is almost the direct opposite: those in power have come out to support and defend the censored against the platforms booting them.
It’s important to note that Twitter defines hateful conduct as anything that “promote[s] violence against or directly attack[s] or threaten[s] other people” on the basis of various protected statuses. The policy’s last point lays out a range of punishments, which suggests they’ve thought about the magnitude of censorship on their platform; before suspension, they ask users to remove the tweets that violate the terms of service. If users continue to tweet hateful content, or refuse to comply with that minor request, Twitter takes the final step of cutting off access to the platform. Still, the question remains: why verify people who were gaining clout and influence by posting “hateful content” in the first place?
Facebook takes an approach that is, on the surface, more emphatic: they say that organizations or people who promote hate speech “are not allowed a presence on Facebook.” And to a certain extent, that’s kept the site from becoming a direct hotbed of alt-right activity, forcing groups to convene on private chat platforms such as Discord or Skype. The flip side is that most people use Facebook for news and content sharing, in contrast to Twitter’s focus on opinions and personal commentary. Facebook also tries to wash its hands of users who post external content considered hate speech by saying it “expect[s] people to clearly indicate their purpose” when sharing questionable material.
While other platforms have staked out clear positions in this debate (most notably Discord, which received a mix of flak and support for pulling the plug on some large alt-right servers earlier this year), Facebook and Twitter are the most used and most outward-facing platforms occupied by the alt-right, and therefore the ones most worthy of criticism. Their anti-hateful-conduct policies make sense on paper but clearly fall short in practice. Which raises the question: is censorship the best way to fix the alt-right problem?
Frankly, it’s not. But it’s the best solution we have at the moment. Censoring hate speech has been a prickly subject for years, with no easy answer emerging from the traditionally heated dialogue. Within the field of emerging media, it’s especially hard to make a case for censorship, given the internet’s role in democratizing media in the first place. Arne Hintz, a professor of journalism at Cardiff University and author of a June 2016 paper in Critical Discourse Studies, outlined the problems that accompany censorship on social media. The problem that can be extrapolated from Hintz’s analysis of government-led censorship is that once private companies exercise this muscle, messaging can be shaped to corporate will, and the room carved out for dissenting opinions of any kind slowly erodes away.
So is there anything better we can do? Not directly, and considering we’re dealing with hate speech, other courses of action may not be in anyone’s best interest. In the past, the general consensus was that hateful voices would be drowned out by more rational, tolerant ones. But the game has changed, and as the troll farms and meme raids have shown, the hate-slingers are winning. To stifle the proliferation of even more hatred, silencing those voices is the only way forward.
One of the problems with this strategy, aside from the admittedly hard sell of corporate censorship, is that the alt-right is still finding ways to organize and spread its rhetoric. Its own message boards and comment sections have turned into arenas for hateful discussion and planning. And short of cutting off their server access (which has been done: Cloudflare dropped the Daily Stormer, effectively shutting down the criticism it was drawing for hosting the site), nothing can really be done to stop that. But maybe these private forums are the balance point between censorship and free speech: specially carved-out safe spaces where the alt-right can exist among itself without being immediately visible to the general public.

Our focus can’t be hateful ideology itself, because that’s never going away. Even long after the days of Trump and white nationalism, there’ll still be people who believe this shit. The unique problem right now is that those in power sympathize with neo-Nazis and amplify their voices with their influence. So it really isn’t out of line to argue that private corporations have a right, bordering on an obligation, to shut down conversations that have no business happening in the first place. When those conversations permeate the highest reaches of the U.S. government, something must be done. If we continue to let hate groups organize and disseminate information, we normalize them in our society. Silencing them is the first step toward keeping hate speech from poisoning the mainstream.