A machine learning program can find forum rule violators by detecting similar writing styles.
Sock Puppet Comments:
It’s not always a good thing to try to fight the ill effects of technology with more technology, but darn it if researchers at the University of Maryland and Stanford University haven’t come up with a good way to combat online troll commenters.
Srijan Kumar from the University of Maryland and his Stanford colleagues recently announced a new machine learning tool that can detect when people post comments from multiple accounts, a practice that can skew discussions and give certain arguments more weight online.
Such arguments can lead to the promotion of fake news by giving false reports the appearance of more legitimacy.
The tool can detect such so-called “sock puppet” accounts through simple means, such as spotting comments posted from similar IP addresses at similar times. But it can also find them through more advanced signals, such as writing style and post length.
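The simpler of those two signals can be sketched in a few lines. This is a hypothetical illustration, not the researchers' actual method: it flags pairs of accounts that post from the same IP address within a short time window, with the field names and the 10-minute threshold chosen purely for the example.

```python
from collections import defaultdict
from itertools import combinations

def suspicious_pairs(posts, window_seconds=600):
    """Return account pairs that posted from the same IP within the window.

    posts: list of dicts with 'account', 'ip', 'timestamp' (epoch seconds).
    The schema and threshold are illustrative assumptions.
    """
    by_ip = defaultdict(list)
    for p in posts:
        by_ip[p["ip"]].append(p)

    pairs = set()
    for ip_posts in by_ip.values():
        # Compare every pair of posts that share an IP address.
        for a, b in combinations(ip_posts, 2):
            if (a["account"] != b["account"]
                    and abs(a["timestamp"] - b["timestamp"]) <= window_seconds):
                pairs.add(tuple(sorted((a["account"], b["account"]))))
    return pairs
```

A real system would combine a signal like this with the stylistic features the article goes on to describe, since IP overlap alone misfires on shared networks.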
They found that sock puppets contribute poorer quality content, writing shorter posts that are often downvoted or reported by other users. They post on more controversial topics, spend more time replying to other users and are more abusive. Worryingly, their posts are also more likely to be read and they are often central to their communities, generating a lot of activity.
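The behavioral traits above lend themselves to simple per-account features that a classifier could consume. A minimal sketch, assuming a made-up post schema; the feature names are illustrative and not the paper's actual model:

```python
def account_features(posts):
    """Summarize one account's posting behavior as classifier features.

    posts: list of dicts with 'text', 'downvotes', 'is_reply' for one account.
    The schema and feature set are assumptions for illustration only.
    """
    n = len(posts)
    return {
        # Sock puppets tend to write shorter posts.
        "avg_post_length": sum(len(p["text"]) for p in posts) / n,
        # Their posts are more often downvoted or reported.
        "downvote_rate": sum(p["downvotes"] for p in posts) / n,
        # They spend more time replying to other users.
        "reply_fraction": sum(p["is_reply"] for p in posts) / n,
    }
```

Feature dictionaries like this would then be fed to an off-the-shelf classifier trained on known puppet and non-puppet accounts.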
The researchers say their tool can detect if two posting accounts belong to the same person 91 per cent of the time, while a second tool can tell the difference between a real account and a sock puppet 68 per cent of the time.
Taken together, they believe the tools could be useful for discussion moderators in finding and preventing forum-rule violators, which could ultimately lead to more valuable online discourse.
It’s not a silver bullet, but when it comes to fake news and trolls, every little bit helps.