0000000000000111 - Proposal to Change Focus

I've recently been discussing an idea that @remlaps has about how to gamify the battle against abuse on Steemit. The idea is that users can identify an act of plagiarism, spam or another form of abuse on Steemit and submit it to a "system". Other users are then presented with the "case" and use their judgement to determine whether the content is plagiarised or not.

One of the most significant challenges that @endingplagiarism faced was sifting through the 100+ "mentions" received each day and checking whether something untoward had happened. This could have been anything from a line of text that vaguely resembled another website to a full-on copy and paste. Hours of effort were spent checking other people's subjective opinions.

This idea solves that problem.


There were many thoughts shared between us, including a number of challenges that would be faced and below are probably the key difficulties. I'd be interested in community feedback and whether you think that I should prioritise this over the reskin project I have already started working on. The solution could easily merge into a site redesign with a working "Report Post" button in the future.

Challenges

  1. How to distribute rewards anonymously - Anonymity when reporting abuse is important to avoid any form of retribution. Using beneficiaries, account transfers or upvotes from an "anti-plagiarism related" account could draw attention and potentially downvotes. The best solution I could think of was to piggyback on the latest steemcurator04-08 initiative and have upvotes blended into this process, disguising the reason for an upvote among many other upvotes. (This poses a further challenge: players who don't write content themselves can't be rewarded with upvotes, although they could potentially receive rewards via a beneficiary or transfer, since you can't upvote somebody who doesn't write content.) I believe we'd need Steem Team agreement to go down this route.

  2. The Whitelist - I've come across plenty of accounts where users post content on another blog or their own website. This is easily validated by a human, but automating something like this would be more difficult and, if not done sensitively, could cause good users to leave.


This feels to me like the kind of website (or shall we call it a dApp?) that I can write with 3 core components:

1. Submission of Abuse

The user can log in (I want to validate that it's a Steemit user submitting the report so that they can be rewarded, although I don't know how yet; the validation will hopefully also curb spamming of the site) and submit:

  • the Steemit URL where the abuse is happening (comment, post or profile),
  • the type of abuse (plagiarism, spam, etc.),
  • the source (if plagiarism) and any additional details.

This is stored in a database (the login credentials are never saved and only the Posting Key will be accepted).
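As a rough sketch of how a submission might be validated and stored, here's a minimal example. The table layout, field names and set of abuse types are my own assumptions for illustration, not a final design; the key point is that the Posting Key is only checked at login and never written to the database.

```python
import sqlite3

# Illustrative set of abuse types; the real list would come from the design.
ABUSE_TYPES = {"plagiarism", "spam", "other"}

def init_db(conn):
    # Note: no column for credentials - the Posting Key is never stored.
    conn.execute("""CREATE TABLE IF NOT EXISTS reports (
        id INTEGER PRIMARY KEY,
        reporter TEXT NOT NULL,   -- Steemit username, verified at login
        url TEXT NOT NULL,        -- comment, post or profile URL
        abuse_type TEXT NOT NULL,
        source TEXT,              -- original source, if plagiarism
        details TEXT)""")

def submit_report(conn, reporter, url, abuse_type, source=None, details=None):
    if abuse_type not in ABUSE_TYPES:
        raise ValueError(f"unknown abuse type: {abuse_type}")
    if not url.startswith("https://steemit.com/"):
        raise ValueError("not a Steemit URL")
    if abuse_type == "plagiarism" and not source:
        raise ValueError("plagiarism reports need a source")
    conn.execute(
        "INSERT INTO reports (reporter, url, abuse_type, source, details) "
        "VALUES (?, ?, ?, ?, ?)",
        (reporter, url, abuse_type, source, details))
    conn.commit()

conn = sqlite3.connect(":memory:")
init_db(conn)
submit_report(conn, "alice", "https://steemit.com/@bob/some-post",
              "plagiarism", source="https://example.com/original")
```

The validation step doubles as the first line of defence against spam submissions, since malformed URLs and incomplete plagiarism reports are rejected before anything reaches the database.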


2. Review of the Abuse

The users playing the game will also need to log in (for the same reasons as above and to prevent multiple votes on an item) and will be presented with the "abuse case". They will have 3 options to choose from:

  1. Vote Abuse - if they believe it to be abuse
  2. Undecided - if they're undecided whether it's abuse or not
  3. Vote Non-Abuse - if they don't believe it's abuse.

The idea is that each "player" has a rating which weights their vote. For example, a player with reputation 25 (out of 100) who votes "abuse" will increase that post's "abuse score" by 0.25, whereas a player with reputation 50 will increase it by 0.5. A player's reputation can increase or decrease based upon the outcome of each post, along with some "test" cases that are known to be plagiarised or not - i.e. if people blindly vote, their reputation will decrease to 1 and their votes (and therefore rewards) become irrelevant. The reputation and scoring will likely be complex and I've not figured out how it could work yet (this is just my initial thinking) - it will not be linked to Steemit's reputation and will be stored privately to avoid retribution (but could be used to calculate reward distribution).
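To make the weighting concrete, here's a minimal sketch of the scoring described above. The reputation-divided-by-100 weighting matches the examples in the paragraph; the reputation update rule is purely a placeholder assumption, since the real rule hasn't been designed yet.

```python
def vote_weight(reputation):
    # Matches the post's examples: rep 25 -> 0.25, rep 50 -> 0.5.
    return reputation / 100.0

def abuse_score(votes):
    # votes: list of (reputation, choice), where choice is
    # "abuse", "undecided" or "non-abuse".
    score = 0.0
    for rep, choice in votes:
        if choice == "abuse":
            score += vote_weight(rep)
        elif choice == "non-abuse":
            score -= vote_weight(rep)
        # "undecided" leaves the score unchanged
    return score

def update_reputation(rep, voted_abuse, was_abuse):
    # Placeholder rule: nudge reputation by 1 per outcome, clamped to
    # [1, 100], so consistently blind voters drift down to 1.
    step = 1 if voted_abuse == was_abuse else -1
    return max(1, min(100, rep + step))

# Three reviewers: 0.25 + 0.5 - 0.4
score = abuse_score([(25, "abuse"), (50, "abuse"), (40, "non-abuse")])
```

A rule of this shape has the property the post asks for: a player stuck at reputation 1 contributes only ±0.01 to any case, so their votes (and any rewards derived from them) are effectively irrelevant.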


3. Reporting the Abuse

Once enough users have reviewed a post, it will be included in a daily (or bi-daily) post detailing the current status of content that's been submitted as abusive, with its "abuse rating". There will be a threshold for inclusion (e.g. the rating starts at 0 and if it reaches +5, the post is included) - a smaller threshold could also be used to trigger a comment on the post to alert the community moderators (if the post is within a community) that a post within their community has been flagged as suspicious. Any replies to this comment will be included in the game (i.e. the post author or community admins / moderators have the opportunity to present a "defence").
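A minimal sketch of the two-threshold logic described above; the threshold values and the report line format are illustrative assumptions.

```python
# Assumed thresholds: a lower one alerts community moderators, a higher
# one pulls the post into the daily report.
ALERT_THRESHOLD = 3.0
REPORT_THRESHOLD = 5.0

def classify(rating):
    # Returns the actions triggered by a case's current abuse rating.
    actions = []
    if rating >= ALERT_THRESHOLD:
        actions.append("alert-moderators")
    if rating >= REPORT_THRESHOLD:
        actions.append("daily-report")
    return actions

def build_daily_report(cases):
    # cases: list of (url, rating); only those past the report
    # threshold appear in the daily post body.
    lines = [f"{url} | abuse rating {rating:+.2f}"
             for url, rating in cases if rating >= REPORT_THRESHOLD]
    return "\n".join(lines)
```

Keeping the moderator alert threshold below the report threshold gives a community a window to respond (the "defence") before a post appears in the public daily report.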

My intention is to use the @endingplagiarism or @plagiaristpayout account to share the reports within the Mosquito Squishers community and also to post any necessary messages to suspicious posts (for the reason outlined above).

This could also allow automation of a persistent offenders report as opposed to the manual editing of the "Consolidated List of Plagiarists".


Future Possibilities

As I mentioned before, there's potential to include abuse reporting within the reskin I'm working on to streamline Component 1.

There's also potential to expand this idea to include another @remlaps idea of quorum sensing to downvote posts. Instead of relying upon @ac-cheetah to manually visit a post, or any "blind" downvoting, the report could include a link to "register your downvoting interest" which would store your username and posting key; once the necessary threshold is reached, the post would receive a mass of downvotes, avoiding individual retribution. Please read @remlaps' post for a better explanation.


I've spent too much time writing and proof-reading this post and I'm grateful to anybody who's reached this point (assuming you kept reading this far).


What Do You Think?

Does this sound like a good and (importantly) fair approach? Does the idea make sense and will it improve the Steemit platform? Would you submit abuse and "play the game"? What are your thoughts on the challenges of anonymity and "whitelisting"? Should I prioritise this over the reskin?

These are a few questions I thought of but please add your own along with any thoughts on the idea and how it could develop going into the future.
