A quick analysis of the Steemit voting system and some questions.
I am presently unsure exactly how the voting system on Steemit works.
However, I have gained enough insight to understand that 'UpVotes' promote content (as one would expect), and that content can also be actively promoted with the dedicated 'promote' button.
Let me be the first to say that is all well and good; let me explain why.
How 'Social Media' information flow works
Any functional system should aim for an 'access to information' bias; let's explain that:
What this means is that any 'idea'-based information system should let any user potentially access ALL the information and then FILTER what they want to see. The best and most efficient way to do that is for the USER to do it individually.
That is to say NO USER decides what OTHER users can access or not access.
(think about that for a minute)
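To make the principle concrete, here is a rough sketch in Python (the post data, author names, and function are all made up for illustration; this is not Steemit's actual code or data model). The point is simply that the platform hands every user the same complete feed, and any filtering happens per user, on the user's own side:

```python
# Minimal sketch of an 'access to information' bias: the platform exposes the
# FULL feed to everyone, and filtering is applied per user, by that user alone.
# All names and data here are hypothetical.

full_feed = [
    {"id": 1, "author": "alice", "title": "Steemit voting analysis"},
    {"id": 2, "author": "spammer42", "title": "BUY CHEAP WIDGETS"},
    {"id": 3, "author": "bob", "title": "Decentralised moderation ideas"},
]

def personal_feed(feed, muted_authors):
    """Each user carves their own view out of the complete feed.

    Nothing is removed from `feed` itself, so another user with an empty
    mute list still sees every post.
    """
    return [post for post in feed if post["author"] not in muted_authors]

# User X mutes the spammer; user Y mutes nobody. Both start from the same data.
print(personal_feed(full_feed, muted_authors={"spammer42"}))  # posts 1 and 3
print(personal_feed(full_feed, muted_authors=set()))          # posts 1, 2 and 3
```

Notice that no user's choice changes what any other user can access.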
Based on that requirement we can derive a 'value' for what makes a 'great' information platform or 'social media' platform, and even give various platforms a 'score'.
I won't score them right now, but needless to say all of the 'popular' 'social media' platforms actively prevent users from accessing information (and some even do this while preaching the opposite).
Some examples of how information access and flow is blocked on 'social media' platforms:
Reddit is a great example as it could literally be considered the ‘Anti-social media’ platform.
After the original dev was driven to suicide / murdered by the US government, the platform was virtually 'taken over' by various VPN-using 'users' who were able to control the content by simply 'downvoting' content they did not want people to see and 'upvoting' content they wanted promoted.
Of the two, the only harmful one is 'downvoting', as it has more 'attached' blocking vectors.
Let's explain:
If content is 'downvoted' the user is inhibited from posting content and eventually blocked from posting any content on the 'Reddit' site.
Further to that, a low 'score' allows 'moderators' (often the same VPN manipulators) to outright censor various users' content.
Reddit is a totally ‘human’ controlled anti-social media platform.
Twitter has centralized servers and all content can be controlled.
The Twitter 'controllers' actively censor or remove 'Tags' that have become popular.
Also (less well known), they can 'mute' a Tweet that has been sent from a user to their followers.
For example, Twitter can block your tweet from reaching all or some of your followers.
Twitter is centrally controlled and has an active motive to censor content, as they have a 'share price' and 'shareholders' (they are owned by other corporations or banks).
Also, recently, notable 'controversial' Twitter users have been banned and kicked from the site for political reasons.
Facebook can do everything Twitter does, except the corporation controls the content in an even more extensive way and forces content onto users.
Plus, personally, Facebook doesn't look like its primary focus is to share information so much as to harvest personal data and mine metadata; the layout makes little to no sense to me.
VOAT
Voat was an attempt by some people to create an alternative to 'Reddit'; however, it was attacked into submission by various groups that were able to drive up the server hosting cost and then force the devs to register the company in the USA (probably for specific reasons).
Since then nearly all development of the Voat platform has ground to a halt.
Before that the devs were genuinely active in trying to prevent content from being blocked: they removed 'moderators' that had been imported from 'Reddit' and converted the subs to 'system' (no 'moderation'), which worked much more effectively.
But what is notably missing is a 'mute user' button, and the new 'controllers' have made sure one will not be added.
It has also been suggested that the reason for Voat to be registered in the USA was so that the content and 'voats' can be manipulated at the source to shape what users see on their front page.
I.e. voats can be 'ticked up', 'paused', or 'ticked down' by some central 'controller'.
All these things were also controlled on Reddit, with suggestions that 'bots' tagged content that was to be 'suppressed', then controlled the votes on that content and suppressed it.
Other spam attacks by VPN users that are relevant to Steemit
Various VPN users (probably agencies) uploaded 'questionable content', or content that was on the edge of legality, then proceeded to 'bubble' it up and promote it,
so that if a casual user searched for 'Voat' on Google, the site was represented as a site of 'questionable content'.
See here:
https://encrypted.google.com/#q=voat
Note 'Young ladies' and 'fatpeoplehate'; others that turned up for a long time included v/niggers.
Also you will note ‘Reddit’ is promoted.
The point of explaining this is to tell the devs and community that, regardless of the 'flag' button, in the future when Steemit is attacked in a similar way (upon becoming popular) it will be presented on the 'front page' of 'the internet' (i.e. a Google search) as whatever the people with the most VPNs and 'power' can present.
Something to keep in mind.
Hence the point of this article:
You will notice a common trend if you read the 'social media' examples above: a central controller always needs an information suppression vector, and for them the more the better.
How does Steemit rate?
How does the ‘flag’ button work?
All the 'promotion' of content is of much less importance than any single 'suppression' vector, and this is why the 'flag' button is interesting to me.
So if users have up-voted content X and then lots of other users try to suppress it, do they just log in from various VPNs and 'flag' the content?
If so, what happens?
On a decentralized platform in a 'trust-less' environment these 'rules' matter much more than before. If a platform can get them correct (and hey, let's face it, one 'fork' will), then that will be the one with the most success.
Let me explain why the ‘flag’ button is not needed:
Because the mute button is present: if I am user X and I find user Y's content inappropriate, I can just 'mute' the user.
This could be extended (if users were confident it couldn't be manipulated) to a 'mute comment' button. I.e. if user X likes all of user Y's content and even follows him, but finds that he just swears a lot in his comments, then user X can mute that particular offending comment and still see the rest of user Y's content.
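As a rough sketch of how that could look (the class, field names, and comment data below are hypothetical, not Steemit's actual data model), a mute list only ever affects the user who owns it, and a 'mute comment' entry is just a finer-grained version of the same thing:

```python
# Hypothetical sketch of per-user muting at two granularities:
# whole accounts, and individual comments. Neither touches the content
# itself or anyone else's view of it.

class UserFilter:
    def __init__(self):
        self.muted_users = set()     # accounts this user never wants to see
        self.muted_comments = set()  # single offending comment ids

    def mute_user(self, account):
        self.muted_users.add(account)

    def mute_comment(self, comment_id):
        self.muted_comments.add(comment_id)

    def visible_comments(self, comments):
        """Filter a post's comment thread for this user only."""
        return [
            c for c in comments
            if c["author"] not in self.muted_users
            and c["id"] not in self.muted_comments
        ]

comments = [
    {"id": "c1", "author": "userY", "body": "Useful reply"},
    {"id": "c2", "author": "userY", "body": "Sweary rant"},
    {"id": "c3", "author": "troll", "body": "Spam link"},
]

x = UserFilter()
x.mute_comment("c2")   # keep following userY, hide just this one comment
x.mute_user("troll")   # hide everything from this account
print(x.visible_comments(comments))  # only c1 remains, and only for user X
```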
Summary:
There is no need for a ‘flag’ button and there is no need for a ‘down vote’ button.
Because all content can be controlled by simply browsing the 'new' tab, finding and following the content you like, and 'muting' the content you are not interested in.
All the 'flag' button seems to do is give a potential vector for manipulators to try to suppress content (or at least make it harder for interested groups to see it).
It is the 'mute' button that allows users to control content, not the 'flag' button.
Common responses and solutions
"The ‘flag’ button is needed to stop spam."
Nope.
Look at the trend in the examples above: 'Steemit' WILL be attacked with spam, and the users can decide how to respond to that attack. The aim of the spammers is to put in place information suppression vectors that are OUTSIDE the control of the user, i.e. more than the 'mute' button.
If 'Steemit' puts in place information control vectors OUTSIDE the control of individual users, the spammers are winning.
A common reply:
“but if all users just mute the spam the only thing that will be seen by new users on the ‘front’ page is spam”
OK, well, that is a design problem, and it has a solution; but the solution is not to put in place information control vectors outside the users' control. One solution is to educate new users that they will need to filter the 'Raw' feed and 'carve' out a useful feed of content.
Not just spam new users with ‘promoted’ content.
If a new user wants freedom they need to carve it out of the 'Raw' feed by blocking the things that are inappropriate, and that includes spam. In cases of extreme illegality, report it to the authorities and mute it.
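To show what that 'carving' might look like in practice, here is one more hypothetical Python sketch (the feed, authors, and spam test are invented for illustration): a new user starts from the raw 'new' feed, follows the authors they like, and mutes the spam, ending up with their own personal feed while the raw feed stays untouched for everyone else.

```python
# Hypothetical sketch of a new user 'carving' a feed out of the raw 'new' tab:
# start from everything, then follow what you like and mute what you don't.

raw_new_feed = [
    {"author": "goodblogger", "title": "How Steemit witnesses work"},
    {"author": "spam_bot_1", "title": "CLICK HERE FOR FREE COINS"},
    {"author": "goodblogger", "title": "Follow-up: voting power maths"},
    {"author": "spam_bot_2", "title": "CLICK HERE FOR FREE COINS"},
]

following, muted = set(), set()

# The user browses the raw feed once and makes their own calls.
for post in raw_new_feed:
    if "CLICK HERE" in post["title"]:
        muted.add(post["author"])      # a per-user mute, not a global flag
    else:
        following.add(post["author"])  # build up a list of authors to follow

carved_feed = [p for p in raw_new_feed if p["author"] not in muted]
print(following)    # {'goodblogger'}
print(carved_feed)  # spam hidden for this user, still visible to everyone else
```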