Who guards the guards that guard the guards ad infinitum?

Steemit.com is a platform that attempts to value everyone’s voice. There are different types of voices… and valuing each of these types of voices relies on certain guarding mechanisms.

Votes are used to measure the value (popularity) of contributions… stake is used to measure the value of votes… and the accumulation of multiple stakes is used to measure the value of individual stakes.

Essentially the final set of guards are those with the largest stakes…

the whales…

Steemit needs guards because the platform issues financial rewards. Wherever rewards systems are in place it is inevitable that certain individuals or groups of individuals will game the system if there are insufficient guarding systems in place.

There has been a lot of discussion since the beginning of the platform about how to guard the guards that guard the guards… essentially, how do you police the whales?

If this were a centralized platform then that would be no problem… the central authority would have the final say and would be the final guard and defender of the rules.

Steemit is hoping to grow into a fully decentralized, censorship resistant platform… it has a way to go in some respects but this is understandable given the infancy of the platform.

The curation rewards and voting system has been changed a number of times in an attempt to find the least “gameable”, “whale gaming” resistant solution. These changes have met with varying success.

Another change is currently under discussion.

One of the biggest challenges is that value on steemit is widely variable and subjective, and very easily gamed by large stakeholders acting independently.

When confronted with complex problems of this nature it is often beneficial to look to existing ecosystems and systems to see if something similar has been encountered before and how it was solved there.

One set of systems that may offer beneficial best practices for our evaluation is found in the artificial intelligence space.

AI is not all that new… it has been used in recognition systems for many years: voice recognition, character recognition, facial recognition, etc.

The way these AI systems have been able to become very effective over the years is quite interesting.

Let’s take Intelligent Character (handwriting) recognition for example:

Multiple recognition engines take their turn to try to recognize each character.

Each then returns its result with a confidence ranking. The votes are tallied, weighted by confidence, and the most recommended character is proposed as the result with a combined confidence ranking.
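The engine-voting step above can be sketched in a few lines. This is a minimal illustration, not any real ICR engine’s code; the engine outputs and confidence values are invented for the example.

```python
from collections import defaultdict

def combine_results(engine_results):
    """Tally each proposed character, weighted by engine confidence.

    engine_results: list of (character, confidence) pairs, one per engine.
    Returns the winning character and its combined confidence
    (the winner's share of the total confidence mass).
    """
    tally = defaultdict(float)
    for char, confidence in engine_results:
        tally[char] += confidence
    winner = max(tally, key=tally.get)
    combined = tally[winner] / sum(tally.values())
    return winner, combined

# Three hypothetical engines read the same handwritten character;
# two say "S" and one says "5":
char, conf = combine_results([("S", 0.90), ("5", 0.60), ("S", 0.75)])
```

Here “S” wins because its combined weight (0.90 + 0.75) outscores “5”, and the combined confidence is that weight as a fraction of the total.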

This is very similar to how Steemit has been designed to identify good/popular content… Many users vote, these votes are weighted by stake and the post value is determined by the combined voting stakes.

What ICR systems do next in a production processing application is very interesting and could be beneficial for application in steemit.

ICR is often used in forms processing. Not all information entered into a form has the same value.

For example, a social security number must be recognized with 100% accuracy, whereas a comments field can still be useful even with low character recognition accuracy.

People running these systems can assign recognition accuracy thresholds to certain recognition fields and if the confidence ranking falls below these thresholds the characters are sent to a verification queue for human verification.

This way the vast majority of the more mundane recognition is accomplished using one set of criteria, and the verification step is only invoked in areas where human intervention is crucial.
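The threshold-and-queue routing just described boils down to a single comparison per field. A minimal sketch, assuming made-up field names and threshold values:

```python
# Per-field recognition accuracy thresholds (values are illustrative):
THRESHOLDS = {
    "ssn": 1.00,       # a social security number must be fully confident
    "comments": 0.50,  # a comments field is useful even at low accuracy
}

def route(field, confidence):
    """Accept the recognition result, or send it for human verification."""
    if confidence >= THRESHOLDS[field]:
        return "accepted"
    return "verification_queue"

# A 97%-confident SSN still goes to a human; a 55%-confident
# comment is accepted as-is:
route("ssn", 0.97)       # -> "verification_queue"
route("comments", 0.55)  # -> "accepted"
```

The point is that the expensive step (human verification) is only triggered where the stakes justify it, which is exactly the property the curation idea below borrows.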

This type of verification principle could be applied to steemit.

Another form of curation could be introduced over and above vote curation… vote curation would continue to function as normal for 95%+ of the content and voters on steemit.

Instead of changing the voting power decay rules, these could remain the same for average users and average-sized votes.

A new verification curation system could be introduced for votes exceeding certain thresholds which would affect probably less than 5% of steemit content…

…but this would be the high value more contentious content.

This could introduce a high-stakes curation type where getting it wrong penalizes the voter’s power decay rate and getting it right removes any penalties…

This would incentivize high-SP accounts and bots to limit their vote power below the thresholds if they don’t want the extra scrutiny, or incentivize regular users to power up to get into the high-stakes category.

A high-stakes vote would then need to be verified by multiple other high-SP voters in order to be considered verified. Getting it wrong could result in lowered voting power for a predefined period… Getting it right could result in greater curation rewards (perhaps the 25% curation reward that currently goes to the author could be channeled to the verification curators).
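To make the proposal concrete, here is a hedged sketch of that decision logic. Everything here is invented for illustration: the SP threshold, the required verifier count, and the outcome labels are placeholders, not anything the Steem blockchain actually implements.

```python
# Hypothetical parameters for the proposed high-stakes curation tier:
HIGH_STAKE_THRESHOLD = 10_000  # SP weight above which a vote needs verification
REQUIRED_VERIFIERS = 3         # other high-SP accounts that must back the vote

def evaluate_vote(vote_weight, verifier_weights):
    """Classify a vote under the proposed scheme.

    Votes at or below the threshold behave as they do today. Above it,
    the vote counts as 'verified' only if enough other high-SP accounts
    back it; otherwise the voter takes a voting-power penalty.
    """
    if vote_weight <= HIGH_STAKE_THRESHOLD:
        return "normal"
    backers = [w for w in verifier_weights if w > HIGH_STAKE_THRESHOLD]
    if len(backers) >= REQUIRED_VERIFIERS:
        return "verified"   # eligible for extra curation rewards
    return "penalized"      # voting power lowered for a set period
```

The collusion countermeasures mentioned below (barring certain accounts from verifying each other, excluding accounts that are powering down) would amount to extra filters on the `verifier_weights` list before counting backers.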

The verification requirements could be parameterized to allow adjustments if whale collusion were to occur… certain accounts could be barred from verifying each other, accounts that were powering down could be excluded, and so on.

This may add extra complexity to the code, but it could add an intriguing additional form of curation for high-SP users.

This could operate in tiers and be an incentive for users to power up to reach higher tiers and curation reward capabilities.

From where I am sitting it seems that additional incentives to power up are sorely needed. On top of that, an extra guarding level emerges for whales and high-SP bots that doesn’t ruin the fun for lower-level users.
