New study shows that stripping away anonymity isn’t enough to stop toxic behaviour.
They’re the scourges of the internet, or at least of news forums and social media: online trolls. The quest of the modern era is how to get rid of them, or at least how to temper their behaviour – the obnoxious, toxic abuse that has a habit of souring just about everything good on the internet.
Not so long ago, there was the theory that stripping away anonymity would do much to counter trolling. Many news websites, fed up with their noxious comments sections, turned to the likes of Facebook, which generally requires users to show their real names. Being forced to use real names would cause trolls to think twice, or so the thinking went.
But it hasn’t worked, according to a new study out of the University of Duisburg-Essen in Germany. Researchers Leonie Rösner and Nicole Krämer started a fake soccer news website and published a fake story about how fans would no longer be allowed to stand at games. The issue of standing-only terraces has been a hot one in Germany recently.
From there, they invited visitors to the website to let the comments fly. Half could comment without registering, while the rest had to log in with their Facebook accounts. Depending on the condition, participants saw the other commenters either as anonymous users or with their Facebook profiles on display.
The results, as New Scientist reports, were somewhat surprising:
Rösner and Krämer found that language used by people who were anonymous was not necessarily more aggressive than with people who could be identified. On its own, anonymity is not usually enough to turn people into trolls.

What does seem to make people mean, though, is the behaviour of those around them. The tone set by other commenters was linked to the likelihood that a participant would use aggressive language to support their points.
In other words, people tend to go with the crowd. If the general behaviour in an online forum is civil, users will tend to stay civil too. If it’s hostile and combative, well, then you have YouTube. It doesn’t really matter whether they’re anonymous or in plain sight.
That “civil society” goal is therefore what news organizations and social media platforms should be striving for, according to experts quoted in the New Scientist story.
The only problem is that doing so isn’t exactly easy. Many news outlets and social media companies have indeed been aiming to do just that, but it takes time and a lot of resources, which are generally in short supply (this applies here too).
One other way to potentially neuter online trolls, not suggested in the article or the study, is by introducing actual consequences to online abuse.
As it stands, there’s no real disincentive for an online troll to create mischief. He or she may get booted from the forum or platform in question, but it’s too easy to simply start again with a new identity or account.
Some form of permanent ban, perhaps even at an internet access level, might be considered.
In the real world, saying something nasty to someone’s face can result in real, physical consequences – say, a punch in the nose – which is why we generally don’t do it. It’s becoming clear we need a digital equivalent of that deterrent online.
There was also a time, prior to the internet, when saying something outlandish or insulting about a person in public could result in a slander or libel lawsuit. Libel law still has its place, and lawsuits still happen – as when a major media organization prints something untrue about someone with power or money. But for everyone else, that recourse barely exists.
It might be time to explore the idea of micro-libel – action that can be taken against those minor online aggressions that don’t necessarily cause someone major reputational harm, but which are corrosive in other ways regardless. If we can’t impose physical penalties on someone online, maybe the next best thing – financial ones – will do.
How would this work and could it even be enforced? I’m not sure yet, and I’m admittedly spitballing here, but it’s food for thought.