Making spam fighting on Steem into a game of skill

Can abuse prevention on the Steem blockchain be gamified? I have thoughts.


Introduction


Pixabay License: source

I've been hanging around the Steem blockchain for almost six years now, since July 2016. In that entire time, there have been frequent efforts to fight abuse (plagiarism, spam, etc.), and in my opinion, they have all failed to a large degree. This is something I have spent a great deal of time thinking about, without much in the way of results. One idea I had was to model abuse prevention after quorum sensing, a technique that single-celled bacteria use to coordinate multicellular action. Another was to model the rewards algorithm after a second price auction, a type of auction that better motivates participants to bid their "true value" (here's another Steem article on second price auctions, which I just discovered a moment ago via PreSearch).

Obviously, neither of those ideas got traction, and the community is starting to talk about abuse again. Frankly, I had given the problem up as more or less hopeless. Here are some problems I see with past attempts at abuse prevention:

  • They depend on altruism to avoid the "tragedy of the commons"
  • Identifying abuse is hard work (REALLY HARD)
  • Small stakeholders can't take the risk of getting into a flag war with large stakeholders

In the time I've been here, two efforts come to mind as the closest things I've seen to successful abuse prevention. The first was @cheetah, an automated bot coded by someone who has since left the blockchain. Personally, I thought Cheetah was better than nothing, but it also left a lot to be desired, and as time went on, it seemed to get worse and worse at accomplishing its goal. The second almost-successful effort was @steemflagrewards, which also disbanded. I never learned the details of their operation, but as I understand it, they somehow managed to eliminate part of the need for altruism.

So, I've been silent as the topic came up again recently. As they say, "There's nothing new under the Sun." But then today, I had an idea: what if we could make abuse prevention into a game of skill - one that rewards people for participating, and rewards the best participants the most? In the following sections, I'll propose a method for doing just that. It is beyond my skills and means to implement, but maybe it will stimulate some other ideas.

Origins

Ten or fifteen years ago, there was an Internet game in a genre that was called something like "Gaming for Good". I believe this was the ESP Game, created by Luis von Ahn at Carnegie Mellon; after completing his PhD, he signed on with Google (taking his idea with him), where the game was relaunched as the Google Image Labeler. I could be wrong on the details, but I'm glad to be able to give him credit for the idea.

Anyway, the game went like this:

  • Connect to the web site
  • Get paired randomly with an unknown partner
  • Get shown a photo
  • Type potential keywords for a period of time - maybe a minute(?)
  • If you and your partner typed the same keywords, then you won, otherwise you lost

And here's where the "gaming for good" part came in: when keywords were matched by two independent partners (actually by multiple pairs of partners, I assume), they were used to train an AI system for image recognition.
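For the curious, the core matching rule is simple enough to sketch in a few lines of Python. The case folding and the exact win condition here are my assumptions, not the original game's actual rules:

```python
# A minimal sketch of the matching rule described above. Two players win a
# round when their independently typed keywords overlap; the matched words
# become candidate labels for the image.
def round_result(keywords_a, keywords_b):
    """Return the labels both players agreed on, if any."""
    matches = {k.lower() for k in keywords_a} & {k.lower() for k in keywords_b}
    return sorted(matches)

print(round_result(["dog", "beach", "ball"], ["Beach", "sand", "dog"]))
# -> ['beach', 'dog'], which would be fed to the image-recognition trainer
```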

I spent quite a few late nights playing this game. It was one of those games where you think you're just going to be there for a few minutes, but then... one round at a time, the minutes turn into hours.

Connecting this concept to abuse prevention on Steem

So now, let's imagine how we could apply similar concepts to abuse prevention on Steem. Because of upvote-based incentivization and the need for downvotes, it would have to be run by one or more parties with a sizable stake, but here's what I imagined today:

  1. An automated crawler looks for potentially abusive posts. This could be done by looking for characteristics on the blockchain (e.g. a high reputation author with a low follower count; a high value post with a low follower count; a high value post with no comments; posts with a large ratio between the maximum vote and the median vote; etc.). It could also be done by passing a small portion of the post through the API of a search engine or plagiarism service. (See the crawler sketch after this list.)
  2. Someone puts up a web site that feeds candidate posts from the crawler to anonymous/random pairs (or triplets, etc.?) of players and asks them whether each is abusive content (we need a better term for "abusive content" ;-). For each post, repeat steps 3-6 (outer loop):
  3. Log the answers on the blockchain - but, importantly, without the post that they refer to. That way, there's no fear of flag wars. The web site would know the link, but no one else would. The logged answers would need some sort of anonymized player/game ID, and would also need to be delayed by a random amount of time, in order to prevent players from identifying their partners. (See the logging sketch after this list.)
  4. If the two players' answers match, then the operator upvotes both answers, rewarding the players for their time and effort
  5. If the answers don't match, then no upvote is given (and the mismatch might not even need to be recorded on the blockchain)
  6. Repeat steps 3-5 with as many pairs as desired (inner loop)
  7. If some threshold is met, it signals the large stakeholder to further evaluate the post (using automation or manual inspection)
  8. If the large stakeholder(s) agree(s) that the post is abusive, then a downvote is issued.
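To make step 1 concrete, here is a rough sketch of such a crawler in Python, querying a public Steem RPC node through the condenser API. The specific thresholds are placeholders that would need tuning, and a real crawler would combine more of the heuristics listed above:

```python
# Sketch of step 1: flag candidate posts using on-chain heuristics.
import json
import statistics
import requests

NODE = "https://api.steemit.com"  # any public Steem RPC node

def rpc(method, params):
    """Minimal JSON-RPC helper for the condenser API."""
    payload = {"jsonrpc": "2.0", "method": method, "params": params, "id": 1}
    return requests.post(NODE, data=json.dumps(payload)).json()["result"]

def is_candidate(post):
    """Return True if a post matches any of the suspicious patterns."""
    followers = rpc("condenser_api.get_follow_count",
                    [post["author"]])["follower_count"]
    payout = float(post["pending_payout_value"].split()[0])
    votes = [int(v["rshares"]) for v in post["active_votes"]] or [0]

    high_value_low_followers = payout > 20.0 and followers < 50
    high_value_no_comments = payout > 20.0 and post["children"] == 0
    # A single whale vote dwarfing the median vote is another red flag.
    lopsided_votes = max(votes) > 100 * max(statistics.median(votes), 1)

    return high_value_low_followers or high_value_no_comments or lopsided_votes

# Scan the most recent posts and queue suspicious ones for the game.
recent = rpc("condenser_api.get_discussions_by_created",
             [{"tag": "", "limit": 20}])
for post in (p for p in recent if is_candidate(p)):
    print(f"candidate: @{post['author']}/{post['permlink']}")
```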
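And here is a sketch of the anonymized logging from steps 3-5. The salt, the record format, and the broadcast callback (which would wrap something like a signed custom_json operation) are all illustrative inventions; the point is that nothing published on-chain links an answer back to the post or to the players:

```python
# Sketch of steps 3-5: log matched answers on-chain without revealing
# which post they refer to. Only the operator can reverse the mapping.
import hashlib
import json
import random
import time
import uuid

OPERATOR_SALT = b"private-salt-known-only-to-the-operator"

def anonymized_record(post_link, verdict):
    """Build the public record for one player's answer."""
    digest = hashlib.sha256(OPERATOR_SALT + post_link.encode()).hexdigest()
    return {
        "game_id": str(uuid.uuid4()),  # anonymous per-round player/game ID
        "post_hash": digest,           # unlinkable without the private salt
        "verdict": verdict,            # e.g. "abusive" / "ok"
    }

def log_round(post_link, answer_a, answer_b, broadcast):
    """If the answers match, delay randomly and publish both records."""
    if answer_a != answer_b:
        return False                   # no match, nothing goes on-chain
    for verdict in (answer_a, answer_b):
        # The random delay defeats timing analysis; a real site would use
        # a scheduled job rather than sleeping in-process.
        time.sleep(random.uniform(60, 3600))
        broadcast(json.dumps(anonymized_record(post_link, verdict)))
    return True
```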

Now, let's revisit the three challenges I identified above with previous attempts at abuse prevention:

  • They depend on altruism to avoid the "tragedy of the commons"

If abuse prevention is turned into a game of skill like this, there is no need for altruism. The people who are best at identifying abuse will receive the most rewards, so the effort becomes self-sustaining.

Further, the large stakeholder would be rewarded with curation rewards for their votes, and also (hopefully, but not certainly) with a rising price of STEEM as content quality improves. Of course, the web site owner could also generate revenue from advertising and beneficiary rewards, and the whole thing could be further incentivized with TRC20 tokens.

  • Identifying abuse is hard work (REALLY HARD)

Players can play as much or as little as they'd like. "Many hands make light work", as they say. Also, if my experience with the game described above is any guide, it might actually be fun.

  • Small stakeholders can't take the risk of getting into a flag war with large stakeholders

Since the link is shared only between the large stakeholder and two players who don't know each other, there's no way the players could be drawn into a flag war.

The only risk that occurs to me is that a malicious actor could seek to punish everyone who participates, but the operator could counter those retaliatory downvotes, making the attack prohibitively expensive.

Afterthoughts

For abuse prevention, it's not necessary to identify every abusive post. We just need to find enough to create the friction that leads abusers to self-limit. This means that a whole post wouldn't have to be fed through a plagiarism detector - just some random excerpts. It also means that the large stakeholder could limit their downvote to just a portion of the post's value, perhaps by canceling out the largest vote on the abusive post (which is reminiscent of the second price auction I mentioned above; see the sketch below). Certainly, these sorts of things could be tuned and adjusted as time passes.
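To make that last point concrete, here is one way the downvote sizing could work, sketched in Python. Translating a target rshares value into an actual vote weight depends on the stakeholder's own vesting shares, which I've left out, and the vote values below are made up for illustration:

```python
# Sketch of the "cancel out the largest voter" idea: size the downvote so
# it roughly negates the single largest upvote on the post, rather than
# nuking the whole pending payout.
def counter_vote_rshares(active_votes):
    """Return the (negative) rshares needed to cancel the top upvote."""
    top = max(int(v["rshares"]) for v in active_votes)
    return -top if top > 0 else 0

votes = [{"rshares": "900000000"},   # the whale vote being canceled
         {"rshares": "120000000"},
         {"rshares": "50000000"}]
print(counter_vote_rshares(votes))   # -> -900000000
```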

Conclusion

So there's the idea. Certainly, there's nothing wrong with rekindling the abuse prevention methods that have been tried in the past, but it seems to me that we should think about new ideas, too.

I wish I had the skills and means to implement something like this in a reasonable time frame, but since I don't, all I can do is kick it out to the community and ask for feedback. Thoughts?


Thank you for your time and attention.

As a general rule, I up-vote comments that demonstrate "proof of reading".




Steve Palmer is an IT professional with three decades of professional experience in data communications and information systems. He holds a bachelor's degree in mathematics, a master's degree in computer science, and a master's degree in information systems and technology management. He has been awarded 3 US patents.
