Voting is a popularity contest

...or how I learned to stop worrying and love the swarm.

There are various discussions going on about the nature of voting, vote rewards, and related topics. In this post I will argue that the main purpose of voting is to be a popularity contest, that treating votes as a measure of quality is misguided, that the current mechanism is largely effective, and that concerns about swarm voting are misplaced.

Voting measures popularity, not quality

Downvotes are widely discouraged except for reporting abuse, which means nearly every non-abusive post with at least one upvote will have an upvote percentage of 100%. Since upvote percentage therefore carries almost no information, nothing other than upvote count can be used to compare posts on the basis of votes. (The effect of weighting by voting power is deliberately ignored here for simplicity.)

Since voting produces only a count as an output, the primary effect of voting is clearly to evaluate popularity. Popularity may, in some cases, be correlated with quality, but in ways that are difficult to characterize. In some contexts, less popularity may even correlate with higher quality. (Example: consider reviews of expensive, excellent products compared with less expensive products of lower, but still good, quality. The less expensive, lower-quality products will likely get more reviews.) Furthermore, as I will cover later, popularity is objective while quality is subjective.

Trying to replace voting on popularity with rating on quality is misguided

A rating system that effectively measures quality, if one could be created, would also need a second method of measuring popularity. Popularity must be the criterion for hot lists, trending lists, and the like; if content that is “high quality” but not popular is placed on hot lists, most people seeing it will dislike it or, perhaps worse, not be interested in it at all.

Any effort to add a measure of quality must therefore take care not to discourage participation in the essential function of measuring popularity, either by distracting from it or by raising the effort required to participate at all. Rating functions should be separate from voting functions, and should probably be featured less prominently in any UI.

Another reason the focus must remain on measuring popularity is that people care—a lot—about popularity, even when measures of quality are also available. The most widely followed statistic on YouTube is view count. On Twitter it is number of followers. Both platforms also have quality measures (up- and down-votes on YouTube and favorited tweets on Twitter), but these get far less attention. In traditional media, books are widely promoted based on their past or present appearance on bestseller lists. Movies that have big opening weekends or break box office records are a big deal.

Popularity, but not quality, is objective

A measure of popularity can always be computed and compared. Some people are more popular than others (more people like them), some music is more popular than other music (more people listen to it, or perhaps, listen to it more), etc. Quality, by contrast, is highly subjective. People can agree on who is more popular while disagreeing in every possible way on who should be more popular.

Of course, in the abstract one would ideally like to measure quality in order to reward quality. But attempts to create aggregate measures of quality will suffer from various perverse outcomes and paradoxes that are well known in the study of rating and voting systems. For example, a majority may prefer A to B but a particular aggregate measure may rank B higher than A. In an aggregated reward system this would mean more money going to the content that the majority would identify as the inferior of the two. This is not a fatal flaw, of course, as no one expects any system to be perfect, but it raises questions.
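To make this concrete, here is a small hypothetical example (the ratings are invented purely for illustration): three of five raters prefer A to B, yet B ends up with the higher mean rating, so a reward system driven by mean ratings would pay more to the content the majority considers inferior.

```python
# Invented ratings, for illustration only: a majority prefers A,
# but the mean rating -- one plausible aggregate "quality" measure -- ranks B higher.
ratings = {
    "A": [5, 5, 5, 1, 1],   # three raters love A, two strongly dislike it
    "B": [4, 4, 4, 4, 4],   # every rater finds B merely good
}

mean = {item: sum(scores) / len(scores) for item, scores in ratings.items()}

# Pairwise majority preference: how many raters score A strictly above B?
prefer_a = sum(a > b for a, b in zip(ratings["A"], ratings["B"]))

print(f"mean(A) = {mean['A']:.1f}, mean(B) = {mean['B']:.1f}")    # 3.4 vs 4.0
print(f"{prefer_a} of {len(ratings['A'])} raters prefer A to B")  # 3 of 5
```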

In all likelihood, the best approach to rewarding subjective quality is tipping. While tipping suffers from the cognitive-load problem that afflicts micropayments, it is very difficult to envision simple aggregated measures serving this function effectively. One approach that could be explored is to use a collaborative filtering (recommendation) engine to identify likely-high-quality new content for a particular user and then automatically “tip” it with a share of that user's pre-allocated tipping fund. This raises several additional questions and complications that are beyond the scope of this post.
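As a rough sketch of what that could look like (everything below is hypothetical: the names, the assumed recommender interface, and the budget rules are mine, not an existing platform feature), the idea is simply to split a user's pre-allocated tipping budget across new posts that the recommender predicts the user would value highly:

```python
# Hypothetical sketch of the automated-tipping idea; not an implementation
# of any existing platform feature.

def auto_tip(user, new_posts, recommender, budget, threshold=0.8):
    """Split a user's pre-allocated tipping budget across new posts that a
    collaborative-filtering engine predicts this user will value highly."""
    # recommender.predict(user, post) is an assumed interface returning a
    # score in [0, 1] estimating how much this user would value the post.
    scored = [(post, recommender.predict(user, post)) for post in new_posts]
    picks = [(post, score) for post, score in scored if score >= threshold]
    if not picks:
        return {}  # nothing predicted to be high quality; keep the budget

    total = sum(score for _, score in picks)
    # Tip each selected post a share of the budget proportional to its score.
    return {post: budget * score / total for post, score in picks}
```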

Popularity is popular

Popularity is highly concentrated in any system. The most popular personalities have far, far more Twitter followers than still-famous-but-less-so personalities. The most popular videos on YouTube get far, far more views than the rest of the top 10%, and so on. Once content becomes popular it will often then become even more popular (often called “going viral”). One might infer, in fact, that popularity itself is popular and that, just as people can be famous for being famous, content can be popular because it is popular. When popular content also serves as a focal point for comments and discussion, that attention can make it more valuable still.
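A toy simulation makes this rich-get-richer dynamic easy to see (it is purely illustrative and not drawn from any platform's data): if each new view is allocated with probability proportional to a post's existing views, a handful of posts pull far ahead even though every post starts out identical.

```python
# Toy "popularity is popular" simulation: attention flows toward what is
# already popular, so views concentrate well beyond an even split.
import random

random.seed(42)
views = [1] * 100          # 100 posts, each seeded with a single view
for _ in range(100_000):   # allocate 100,000 further views one at a time
    post = random.choices(range(len(views)), weights=views)[0]
    views[post] += 1

views.sort(reverse=True)
top_share = sum(views[:10]) / sum(views)
# The top 10 posts end up with well over the 10% an even split would give.
print(f"Top 10 of 100 posts hold {top_share:.0%} of all views")
```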

Conclusions

We should stop trying to attach meaning to votes as a direct measure of quality—they are not one; they are a measure of popularity. Measures of quality could be used as well, but they are secondary to the main focus of voting, which is identifying popular content in a social media system.

The current system of voting and rewards—designed, it seems, both to encourage and reward participation and to reach a consensus on popular content—is largely effective at accomplishing this essential function. It identifies a subset of content as popular and rewards the people who produce this content (even if serendipitously). Swarm voting is not only inevitable but necessary, because it ensures that some content will be highly popular, in turn satisfying the human need to connect and communicate on the basis of commonalities, even fleeting ones.

While nothing forces people, collectively, to choose the best content to make popular (and perhaps no mechanism could), small influences will probably tip the outcome so that generally better content is often chosen over generally worse content. This will tend to reward content of higher quality. More direct mechanisms for rewarding the quality of content include tipping or, possibly, with further development of the idea, automated tipping.

In summary, major changes to the existing voting and rewards mechanisms are neither necessary nor likely to serve any useful purpose that is clearly identifiable at this time. Refinements and tweaks may indeed be needed, but the very limited data currently available from a tiny initial user base is entirely insufficient to identify which changes, if any, would be useful. A rating system to evaluate and reward quality directly might be a useful addition, but designing one that works effectively is a significant challenge.
