Troll Spray

Golda Velez
5 min read · Dec 23, 2018

Thought of a simple way to build trust networks, without waiting for Twitter to do it:

  1. Find demonstrably false articles that have not been retracted, like on Breitbart or RT.
  2. Use the Twitter API to collect lists of users who retweeted these clearly false articles. Let's call these users 'trolls'. (A rough sketch of this step follows the list.)
  3. Take action in response to trolls. This could be a service to automate blocking of all known trolls, or a rapid response network to post high quality debunking pieces in response to fake news links.
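To make step 2 concrete, here is a minimal sketch, assuming the tweepy library and the 2018-era Twitter API (v1.1). The credentials, article URL, and the collect_sharers helper are placeholders of my own, not part of any existing service:

```python
# Minimal sketch of step 2 with tweepy (Twitter API v1.1).
# Credentials and the article URL are placeholders.
import tweepy

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth, wait_on_rate_limit=True)

FAKE_ARTICLE_URL = "https://example.com/demonstrably-false-article"

def collect_sharers(api, url, max_tweets=500):
    """Return screen names of users who tweeted or retweeted the URL."""
    sharers = set()
    # Search recent tweets that contain the article URL.
    for status in tweepy.Cursor(api.search, q=url, count=100).items(max_tweets):
        sharers.add(status.user.screen_name)
        # retweets() returns up to 100 plain (uncommented) retweets.
        for rt in api.retweets(status.id, count=100):
            sharers.add(rt.user.screen_name)
    return sharers

candidate_trolls = collect_sharers(api, FAKE_ARTICLE_URL)
print(f"collected {len(candidate_trolls)} candidate trolls")
```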

Trolls are welcome to retaliate by blocking regular users, but this would be counter to their purpose of sowing dissension and confusion. Paid trolls don’t want to sit around talking to each other. Regular folks do. Thus, we have an advantage, and we should exploit it.

One weakness with the above method is that legitimate users may want to link to false articles for discussion or satire. Handling this requires a little more manual work, but still allows for a powerful solution. Step two is broken down as follows:

2a. Use the Twitter API to collect lists of users who linked to these clearly false articles.
2b. Manually verify a subset of these users as known trolls: those who linked to the articles with the intent of supporting the falsehoods.
2c. Use the Twitter API to identify more trolls who 'Liked' any tweet of the known trolls, or who retweeted any of their tweets without comment.
2d. To identify further trolls, rinse and repeat with those identified in step 2c, possibly assigning a lower 'risk score'. (A sketch of this propagation follows the list.)
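Here is a hedged sketch of 2c-2d, continuing the tweepy setup above. One caveat: the v1.1 API exposes who retweeted a tweet but not who liked it, so this version propagates through uncommented retweets only. The decay factor, depth, and the propagate_scores name are illustrative choices of mine, not tested values:

```python
# Sketch of 2c-2d: propagate a decaying 'risk score' outward from the
# manually verified trolls. v1.1 has no "who liked this tweet" endpoint,
# so only uncommented retweets are followed here.
DECAY = 0.5      # each hop away from a verified troll halves the score
MAX_DEPTH = 2    # step 2d: number of propagation rounds

def propagate_scores(api, verified_trolls):
    """Return {screen_name: risk_score}, seeded at 1.0 for verified trolls."""
    scores = {name: 1.0 for name in verified_trolls}
    frontier = set(verified_trolls)
    for _ in range(MAX_DEPTH):
        next_frontier = set()
        for name in frontier:
            # Recent tweets by a known troll...
            for status in api.user_timeline(screen_name=name, count=50):
                # ...and the users who retweeted them without comment.
                for rt in api.retweets(status.id, count=100):
                    candidate = rt.user.screen_name
                    new_score = scores[name] * DECAY
                    if new_score > scores.get(candidate, 0.0):
                        scores[candidate] = new_score
                        next_frontier.add(candidate)
        frontier = next_frontier
    return scores
```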

The blocking part of step 3 could be modified as well, to auto-block any troll above a certain 'risk score'. Trolls manually identified by trusted participants would have the highest score, and lower scores would propagate outward. Trusted participants would need to be known personally to the group at a human level.
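Given the scores from the sketch above, the threshold auto-block is only a few lines; the 0.4 cutoff is an arbitrary example, while create_block is tweepy's wrapper for Twitter's real blocks/create endpoint:

```python
# Sketch of the auto-block threshold; 0.4 is an arbitrary example cutoff.
def autoblock(api, scores, threshold=0.4):
    for name, score in scores.items():
        if score >= threshold:
            api.create_block(screen_name=name)
```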

If widely used, the blocking service would cut off trolls' access to a large chunk of Twitter. True, we couldn't keep an eye on them as well, but I think it's the right game-theoretic approach. For one thing, they could no longer inject themselves into conversations as easily, since they wouldn't be able to see normal people's tweets. Anyone who does want to respond to trolls can simply not use the service.

For reference, the smearing of the White Helmets group is a good example of the sort of pernicious fake news that could be used as a seed. The efforts behind these smears have completely taken over the Google search results in some cases, but Wikipedia editors have managed to keep the White Helmets Wikipedia page fact-based. Given the level of effort put into smearing them, this should be an easy way to target and remove large numbers of paid trolls and collaborators.

Of course, this is only the very beginning of a trust network — simply identifying non-trusted individuals, plus one level of propagation. Whatever percentage of trolls it gets, it should at least be useful for individual users who can easily subscribe to autoblock trolls, and the more widely used it is, the more effective it will be.

The bigger problem is how to spread the word and get users to use the system.

Having volunteers rapid-respond to fake news articles is significantly more work than the auto-blocking service, but it would be even more valuable if we can get people to do it. Organizing high quality responses to fake news, and then having real people post them quickly wherever the fake article appears, might be the most valuable approach of all, but it takes a higher level of effort.

I would love to hear both from those with technical critiques and from those with ideas about how to get users. Building it in a vacuum is of limited utility, but if any organization or well-connected individual supports the effort, or enough regular users do, then I feel it would be well worth doing.

For devs:

Also, it may be a good idea to notify trolls that they are being blocked and why, so that false positives (real users accidentally labelled) can contact us and be unblocked or whitelisted. I can see possible issues with this if trolls respond by immediately crying abuse to Twitter, so maybe we should notify Twitter of the action at the same time, if we can; it may depend a little on how Twitter itself responds.
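A hedged sketch of that notify-then-block flow, continuing the same tweepy setup: posting a public @mention sidesteps closed DMs, and the appeal URL is a placeholder of mine:

```python
# Sketch of notify-then-block: post a public @mention explaining the
# block, then block. The appeal URL is a placeholder.
APPEAL_URL = "https://example.com/appeal"

def notify_and_block(api, screen_name):
    api.update_status(
        f"@{screen_name} You are being added to a shared block list for "
        f"spreading a demonstrably false article. False positive? {APPEAL_URL}"
    )
    api.create_block(screen_name=screen_name)
```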

Thanks to Colin Delia for detailed feedback and suggestions! We're looking at how to move forward with this, and have submitted the idea to Ragtag with their encouragement; if anyone would like to get involved, just contact one of us on Twitter or comment here.

Also thanks to Nathan Pitzer, founder of Find, for comments and a pointer to another approach to making truth go viral, from a collection out of Austin's Center for Media Engagement.

Nathan pointed out that trolls may include both those intentionally misleading and those who have been misled and are retweeting sort of 'in good faith'. So maybe we need to send them the evidence against the falsehood when we let them know they have been identified as sharing harmful fake news.

Thinking about this more, maybe the right response is really to respond in kind, and even bot-post in response to posts of fake news, but with a bot that admits it's a bot, or with volunteer networks of fast responders. I think the idea of ID'ing the trolls is a good one, but I'm not sure that blocking them is the right response; that sort of mechanism feels more like censorship than like free speech, even though it's aimed at these kinds of actors. More information might be better than less. Maybe we should discuss with the Austin Center for Media Engagement people.
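For the bot-that-admits-it's-a-bot variant, a sketch along these lines might work; both URLs are placeholders, and Twitter's automation rules would require throttling the reply volume:

```python
# Sketch of a self-identifying debunk bot: reply to new appearances of a
# fake article with a debunking link. Both URLs are placeholders.
import tweepy

FAKE_URL = "https://example.com/demonstrably-false-article"
DEBUNK_URL = "https://example.com/debunking-piece"

def respond_to_fake_news(api, max_tweets=100):
    for status in tweepy.Cursor(api.search, q=FAKE_URL, count=100).items(max_tweets):
        api.update_status(
            f"This article has been debunked: {DEBUNK_URL} "
            "(I'm a bot run by volunteers.)",
            in_reply_to_status_id=status.id,
            auto_populate_reply_metadata=True,
        )
```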

In some ways this feels like an #ArmyofBees effort (the one that got Jamal Khashoggi killed). I think it could be related, especially the rapid response bit. I would love to name it that in his honor.


Golda Velez

Mom, Software Engineer, Tucsonan. Like connection, community, fun and algorithms for increasing opportunity. Also for identifying bullshit. @gvelez17