Using social media is a double-edged sword. On the one hand, it can connect us to many more people than we would otherwise interact with, which is great. But our choices about who we interact with (often amplified by a platform’s algorithms) narrow our social networks, keeping us tucked within an echo chamber of people who think like us.
For researchers studying partisan divides, there is a lot to untangle here. When people in a group work together to understand information, everyone generally benefits. But the opposite can be true if we instead retreat to our mental fortresses, man the catapults, and prepare the boiling oil. So for a real-world problem like the partisan divide on climate change, how can we tell whether social networking is helping or hurting?
Douglas Guilbeault, Joshua Becker, and Damon Centola at the University of Pennsylvania set out to design an experiment that could test whether simple signals trip our mental defense mechanisms. Rather than gather a bunch of people in a room—where you might size each other up in ways the researchers couldn’t control—they created a web interface that was used by 2,400 people recruited from Amazon’s Mechanical Turk service.
Seeding social networks
The participants were split up into groups of 40. Within those groups, the researchers seeded social networks, with the same number of liberals and conservatives in each network. Control groups had only people of the same political persuasion.
Everyone was shown a graph of NASA Arctic sea ice coverage data between 1978 and 2013 and asked to forecast out to 2025. The data was cut off in 2013 (when sea ice rebounded a bit from a 2012 record low) to test something called “endpoint bias”—the tendency to extrapolate from the last couple of data points instead of the long-term trend.
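To see why that 2013 cutoff matters, consider the difference between fitting the whole series and extrapolating from just the last two points. The numbers below are made up for illustration—they are not NASA's actual sea ice figures—but they mimic the setup: a long decline, a record low in 2012, and a partial rebound in 2013.

```python
def linear_trend(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

years = list(range(1978, 2014))
# Toy series (NOT real data): steady decline, a 2012 dip, a 2013 rebound.
ice = [10.0 - 0.05 * (y - 1978) for y in years]
ice[years.index(2012)] -= 0.8   # record low
ice[years.index(2013)] += 0.4   # partial rebound

# Long-term view: fit the whole series, then project to 2025.
slope, intercept = linear_trend(years, ice)
trend_2025 = slope * 2025 + intercept

# Endpoint bias: only look at the last two points and project to 2025.
end_slope = ice[-1] - ice[-2]
endpoint_2025 = ice[-1] + end_slope * (2025 - 2013)

# The full-series fit still slopes downward, while the endpoint
# extrapolation rides the 2013 rebound and predicts expansion.
print(f"long-term trend forecast for 2025: {trend_2025:.2f}")
print(f"endpoint-biased forecast for 2025: {endpoint_2025:.2f}")
```

This is the trap the researchers were probing: a participant anchoring on the rebound could plausibly forecast ice expansion even though the long-term trend points the other way.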
After they gave an initial forecast, everyone was given two opportunities to revise their answer before finalizing it. This is where the four experimental groups differed. In the control group, there was no interaction between participants at all—an anti-social network. But in the other three groups, individuals were laid out in a sort of grid, and each person was given the average estimate of their four “neighbors” in the network while pondering their revision. The idea is that if you answered that sea ice was trending toward expansion by 2025 and your neighbors estimated a decline, you may reconsider.
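The revision mechanic can be sketched as repeated averaging over a grid. The details below are assumptions for illustration—the article doesn't specify the grid's wrap-around behavior or how much weight people gave their neighbors—so this is a minimal sketch of the idea, not the researchers' actual setup.

```python
import statistics

def neighbors(i, j, rows, cols):
    """Four grid neighbors, wrapping around the edges (an assumed torus layout)."""
    return [((i - 1) % rows, j), ((i + 1) % rows, j),
            (i, (j - 1) % cols), (i, (j + 1) % cols)]

def revise(grid, weight=0.5):
    """One revision round: each participant sees the mean of their four
    neighbors' estimates and moves partway toward it (weight is an assumption)."""
    rows, cols = len(grid), len(grid[0])
    new = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            peer_mean = statistics.mean(
                grid[x][y] for x, y in neighbors(i, j, rows, cols))
            new[i][j] = (1 - weight) * grid[i][j] + weight * peer_mean
    return new

# Toy 3x3 network with divergent initial forecasts; two revision rounds,
# mirroring the two revision opportunities in the experiment.
grid = [[4.0, 8.0, 6.0],
        [5.0, 7.0, 3.0],
        [9.0, 2.0, 6.0]]
for _ in range(2):
    grid = revise(grid)
```

Run this and the estimates pull toward each other: the spread between the highest and lowest forecasts shrinks with every round, which is the basic intuition behind "comparing notes helps."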
For one of the three experimental groups, subjects only received the average number from their network neighbors—nothing else. But another group was also shown their neighbors’ usernames and political affiliations. A conservative participant might think differently about their neighbors’ estimate if they’re all liberal, for example. The final group didn’t see personal information about their neighbors, but they did see a pair of logos next to the estimate—one from the Democratic party and one for the Republican party. This simple visual cue has been shown in the past to be enough to raise the tribal alarm, effectively reminding you that this information might be politically contentious.
In the control group, both conservatives and liberals improved their answers slightly—correctly noting the downward trend—simply because they had a couple of chances to second-guess themselves. But the group that got estimates from neighbors without knowing their politics improved their answers considerably by their third and final try. In fact, while conservatives were significantly more likely to give the wrong answer on their first try, this party gap had disappeared by the end. Simply put, comparing notes helped more people give the correct answer, regardless of their political druthers.
The other two groups didn’t fare quite as well. Telling participants whether their neighbors were conservative or liberal kept the party gap alive—conservatives now did only slightly better than their counterparts in the control group. But surprisingly, the simple act of slapping donkey and elephant logos on the screen was even more detrimental. There, the results from both conservatives and liberals were indistinguishable from the control group. Comparing notes didn’t do a thing.
It is not immediately apparent why party logos were more harmful than party nametags. (Maybe seeing that your neighbors included three liberals and one conservative isn’t as bad as worrying that your neighbors might be liberals?) But the researchers say the overall conclusion of the experiment is clear: bipartisan networks break down barriers, but any reminder that an issue is “political” can spoil the whole thing.
That makes plenty of sense given how important cultural identity has proven to be. The stakes aren’t too high when you’re asked to evaluate a graph you may never have seen before, but the risk of being seen as traitorous by your friends and family is potent. Of course, the graph-interpreting task in this experiment isn’t a perfect stand-in for all conversations about climate science, but it does get at interactions with specific scientific information—like encountering NASA tweets, for example.
Tribal partisanship in the US seems a bit like a sleeping bear. You can go about some business if you tiptoe around it, but you’d better not poke it. Similarly, you may be open to learning something new from your social media neighbor, but your brain might have second thoughts if their avatar is a smiling photo of the wrong political candidate staring you in the face.