You Can’t Win
This cartoon scene is so familiar that it barely qualifies as comedy: A stick figure sits at a computer, typing furiously. “Are you coming to bed?” someone outside the panel asks. “I can’t,” our protagonist responds. “Someone is wrong on the Internet.”
It’s a truism that the Web has made it easier than ever to pick fights, giving us immediate access to the opinions of others and—when we want it—the shelter of anonymity from which to snipe at them. But if the Internet encourages argument, it may also be changing the ways we argue. The algorithms that power websites and social media platforms help determine what we see, and how we show things to others. The infrastructure of the Internet may be fragmenting the ways we contend with one another, even or especially when we don’t fully understand it.
These divergences aren’t always immediately apparent, of course. To the contrary, some research seems to suggest that winning arguments online is a simple thing, merely a matter of following a few basic rules. That, at any rate, is one conclusion that you might draw from the work of a team of computer scientists based at Cornell University. In a paper published this month on arXiv, the four identify “patterns of interaction dynamics” that characterize “persuasive arguments” in online conversations. They’re showing, as one Washington Post write-up put it, “How to change someone’s mind, according to science.”
The Cornell researchers developed these conclusions by studying r/ChangeMyView, a subsection of Reddit on which users describe their stance on some controversial issue and then invite others to dissuade them. Recent posts include assertions such as “Bernie Sanders is misrepresenting the Scandinavian Economies” and “Women are not the primary victims of rape culture. Men are.” Such assertions frequently generate dozens, sometimes even hundreds, of replies, but the site allows participants to flag the ones they find most convincing—the ones that actually did change their views. These markers gave the researchers a readily available set of demonstrably persuasive arguments, along with a host of other data points about the ways we argue. Analyzing the resulting information, they identified linguistic patterns and discursive strategies that distinguish successful arguments from unsuccessful ones.
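The paper itself involves far more machinery than any summary can convey, but the flavor of the analysis is easy to gesture at. A minimal sketch, in Python, of the kind of surface features the article goes on to mention—reply length, links to outside evidence, quoting, pronoun choice—might look something like the following. (This is illustrative code written for this article, not the researchers’ own.)

```python
import re
from collections import Counter

# Pronoun sets used to gauge "I" vs. "we" framing (discussed further below).
FIRST_PERSON_SINGULAR = {"i", "me", "my", "mine"}
FIRST_PERSON_PLURAL = {"we", "us", "our", "ours"}


def extract_features(reply_text):
    """Compute a handful of surface features for a single reply."""
    words = re.findall(r"[a-z']+", reply_text.lower())
    counts = Counter(words)
    return {
        "word_count": len(words),                        # longer replies reportedly did better
        "links_outside_evidence": "http" in reply_text.lower(),
        "quotes_the_poster": any(                        # Reddit-style "> quoted" lines
            line.lstrip().startswith(">") for line in reply_text.splitlines()
        ),
        "first_person_singular": sum(counts[w] for w in FIRST_PERSON_SINGULAR),
        "first_person_plural": sum(counts[w] for w in FIRST_PERSON_PLURAL),
    }


if __name__ == "__main__":
    reply = (
        "I take your point, but this survey (http://example.org/study) "
        "suggests the opposite trend."
    )
    print(extract_features(reply))
```

On a corpus of flagged and unflagged replies, features like these could be fed into any standard classifier; the study’s actual models and feature set are, of course, considerably richer.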
The Washington Post’s Caitlin Dewey studied the resulting paper, extracting a handful of useful tips from it. It helps, for example, to cite outside evidence, but it’s better not to quote the person whom you’re arguing with. And while it’s best not to get too intense, longer replies apparently perform better than shorter ones. All of that seems reasonable enough—sensible even—especially if you really are trying to change someone’s opinion on a site designed specifically for that purpose. But is this really the way to win “ANY argument,” as the Daily Mail has it? And are these guidelines really a boon to “Facebook-feuders,” as the Post’s packaging of Dewey’s article implies?
Curious whether the results really were that broadly applicable, I wrote to the Cornell researchers in search of clarification. In its conclusion, their paper hedges on this point, noting, “[O]ther environments, where people are not as open-minded, can exhibit different kinds of persuasive interactions; it remains an interesting problem how our findings generalize to different contexts.” But over email, Chenhao Tan, a Ph.D. candidate in computer science, proposed that many of their findings should indeed be transferable to different contexts. In particular, he pointed to their discovery that “a person [who uses] ‘we’ instead of ‘I’ is less likely to be malleable to changes.” That is, it’s harder to change the minds of those who think that their positions are widely held, a premise that resonated, he told me, with “findings from psychology, where self-affirmation has been found to indicate open-mindedness and make beliefs more likely to yield.”
If this is true, however, it might be harder to have an open debate on some platforms than on others. Though the Internet has spaces for real conversation—r/ChangeMyView may be one of them—sites that provide steady streams of information tend to discourage free discourse. Anecdotally, it seems that remarks on Twitter, for example, are more likely to gain traction when users avoid the first-person singular. It’s easier to retweet a broadly generalizable statement than one that applies to and derives from a single individual, since the former is more likely to seem like public property. Statements that get repeatedly retweeted are more likely to show up in any given user’s feed, creating still more opportunities for retweets. All of this suggests that Twitter’s architecture discourages openness to persuasion—or at least the appearance of malleability.
Like virtually all social Web destinations, Reddit has mechanisms that work along similar lines. If you visited Reddit in 2007, you might have convinced yourself that Ron Paul was sure to win the presidency by a landslide, largely because the site featured so many posts about him—and because they were so aggressively upvoted by the site’s core user base. This was an example of a “filter bubble,” which David Auerbach describes as “a feedback effect that steers similar content toward you and pushes contrasting content away, further reinforcing your beliefs.” These bubbles, often products of algorithms designed to encourage engagement by maximizing enjoyment, may, Auerbach writes, make it harder for us to argue civilly, because they tend to distance us from “others with whom [we] disagree.” Open-minded digital spaces such as r/ChangeMyView are the exception rather than the rule.
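Auerbach’s “feedback effect” is easy to picture as code. The toy ranker below—a hypothetical illustration, not any site’s actual algorithm—boosts whatever topics a user has already engaged with, so each session makes the next one a little more homogeneous:

```python
import random
from collections import Counter


def rank_feed(candidate_posts, engagement_history, k=3):
    """Return the top-k posts, favoring topics the user already engages with."""
    affinity = Counter(post["topic"] for post in engagement_history)
    scored = sorted(
        candidate_posts,
        key=lambda post: affinity[post["topic"]] + 0.1 * random.random(),
        reverse=True,
    )
    return scored[:k]


if __name__ == "__main__":
    # A user who has upvoted five Ron Paul posts sees mostly more of the same.
    history = [{"topic": "ron_paul"}] * 5 + [{"topic": "economics"}]
    pool = [
        {"topic": "ron_paul", "title": "Another Ron Paul rally"},
        {"topic": "ron_paul", "title": "Ron Paul surges in a straw poll"},
        {"topic": "foreign_policy", "title": "A dissenting take on intervention"},
        {"topic": "economics", "title": "A budget explainer"},
    ]
    for post in rank_feed(pool, history):
        print(post["title"])
```

Feed the recommended posts back into the engagement history and the loop closes: contrasting content scores a little lower every round, which is the whole bubble.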
In this techno-cultural climate, to even begin to publicly disagree with someone, you have to learn to engage with them in a way that won’t immediately pop their filter bubble. (This is true even on r/ChangeMyView, which has a set of highly regimented rules for submissions and comments.) Though Facebook has denied the significance of such bubbles, its own representatives have advocated policies that suggest they’re real. Sheryl Sandberg, for example, has praised so-called “like attacks,” in which users like hate speech pages and then fill them up with more affirmative statements. In many cases, this means that they’re initially liking statements with which they vehemently disagree, if only to ensure that their disagreements will be heard. Here, the ordinary rules of engagement have clearly fallen by the wayside. As with other forms of “counter speech,” these efforts demonstrate that the virtual spaces in which we argue increasingly shape the ways that we argue.
Rhetoric has never been platform-agnostic, of course: A good argument looks different on the debate stage than it does in a newspaper editorial. But whether or not we directly acknowledge them, algorithmic conditions—how a site decides what output to show us—shape the kinds of input we feed into them. It’s possible that the Cornell group’s findings might help you grapple with your Trump-touting uncle on Facebook. But in all likelihood, you’d be better off trying to master the basics of algorithmic literacy if you want to get through to him. Learning how Facebook’s news feed actually works, for example, might make it easier to get a word in edgewise. You’re probably better off doing that, anyway, than you are looking out for posts that feature paragraph breaks and bulleted lists—a supposed sign of malleability on r/ChangeMyView that may mean something entirely different on Facebook.
There is, of course, a dark side to this prospect. Algorithms, even the ones we live with every day, are frequently opaque things—proprietary, puzzling black boxes. Even if we did understand the words and phrases most likely to resonate with our fellows, there would be no way to consistently guarantee ourselves an audience. Increasingly, it’s these alien collections of code that are our wiliest opponents. We’ve known how to argue with humans for millennia: In the future, we’ll have to learn how to argue with computers.
This article is part of the algorithm installment of Futurography, a series in which Future Tense introduces readers to the technologies that will define tomorrow. Each month from January through June 2016, we’ll choose a new technology and break it down.
Future Tense is a collaboration among Arizona State University, New America, and Slate. To get the latest from Futurography in your inbox, sign up for the weekly Future Tense newsletter.