What do you believe? The Game Theory of Steem, Part 5

<audience: @biophil! Where have you been??>

Well, I've been all over. I spent the last week in Maastricht, the Netherlands, at the World Congress on Game Theory, where I gave a talk entitled "Optimal Mechanisms for Robust Coordination in Congestion Games," which is about using money to influence social behavior.

This will be the last article in which I talk about analyzing minnow voting behavior; in it, I'll propose a very descriptive yet analytically confounding model. The problem, as I hope you'll see, is that if we make our model too descriptive, it becomes impossible to analyze. If it's impossible to analyze, we can't use it very easily to make predictions about behavior, and we have to ask ourselves why we're wasting our time.

If you're just joining us, I encourage you to check out the rest of the series (Part 1, Part 2, Part 3, and Part 4) to get a bit of background on where we are.

So let's do this - let's take a bit of time and just look into what it would mean to build a more descriptive model. Spoiler alert: I will do exactly no analysis using this complicated model; I just want to go through the exercise of setting it all up so you can see how intractable it becomes.

In Part 4, we thought about curation incentives from the perspective of what's known as Level-K thinking, and discovered that under our assumptions, minnows are incentivized only to vote for the winners. I received several comments about how we might make the model more descriptive. @nkyinkyim suggested we include a term ci in voters' utility functions that represents how badly a voter wants to "vote their conscience"; i.e., how much they want to vote for their favorite even though it wouldn't garner them a financial reward. So, in the interest of descriptiveness, we'll include that in today's model.
It was also mentioned that including whales in the calculation might make a difference, so we'll do that here.
Finally, several commenters wondered if including a more detailed model of voters' beliefs would change our conclusions.

The first two of these are relatively easy. The third, it will turn out, can make things incredibly complicated. Before we do that, we need to fix a flaw in our original model: in Parts 3 and 4, we were still living in the old Beauty Contest world in which only one of the entrants can win the contest. If we're going to claim model descriptiveness, we need to make sure our model captures the fact that multiple posts can "win" in Steemit. So I'm going to make things rather abstract.

I'll just give you a nominal utility function for each voter, and explain what the terms mean afterwards:

ui(a) = vi(a) + ci(ai) + Pi(a)

The argument a is called the outcome of the game; it's basically a list of everybody's votes. The first term, vi(a), is the value that the voter assigns to the list of winners. For example, if you really like travel writing, you might put a large weight on the stuff that @heidi_travels writes. In general, this function could be different for each voter.

The second term, the ci(ai) term, is the "conscience" term of @nkyinkyim; this gives you points if you voted for the things that you like. Note that it only depends on ai, which means that you get points for this one whether your favorites win or lose.

The third term, Pi(a), is the curation reward you get for voting for things that ultimately get a high payout.

So this is where we are:

  • vi is what you get for liking what wins
  • Pi is what you get for voting for what wins
  • ci is what you get for liking what you vote for.

Note that this model is extremely descriptive! If you're a minnow, Pi is usually tiny, so you very well might have a bigger utility by voting your conscience. But if you happen to catch a great post early, Pi might be big enough to sway you in the other direction.

If you're a whale, Pi could be quite large, which might tempt you to vote for winners. On the other hand, if you believe in the system and have a vested interest in seeing Steemit survive long-term, that could be reflected in your vi term, placing very high weight on articles that you think are "healthy" for the system.
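
To make those three terms concrete, here's a minimal Python sketch of the minnow-versus-whale intuition above. Everything in it (the function names, the "winning" cutoff, the taste numbers, and the curation share) is invented for illustration; it is not Steem's actual reward logic.

```python
# Toy sketch of u_i(a) = v_i(a) + c_i(a_i) + P_i(a). All numbers are invented.
# For simplicity the outcome is passed in as fixed vote totals, which is a
# reasonable shortcut for a minnow whose single vote barely moves them.

def utility(my_votes, outcome, my_tastes, curation_share):
    """Nominal utility for one voter.

    my_votes:       the set of posts this voter voted for (a_i)
    outcome:        dict of post -> total vote weight it received (the outcome a)
    my_tastes:      dict of post -> how much this voter likes it
    curation_share: fraction of a winning post's weight paid back as a curation reward
    """
    winners = {post for post, weight in outcome.items() if weight >= 10}  # arbitrary "winning" cutoff

    v = sum(my_tastes.get(post, 0) for post in winners)    # v_i: value of the winners you happen to like
    c = sum(my_tastes.get(post, 0) for post in my_votes)   # c_i: conscience, liking what you vote for
    p = sum(curation_share * outcome[post]
            for post in my_votes if post in winners)       # P_i: curation reward for backing winners
    return v + c + p

# A voter who loves travel writing, facing an outcome where the "whale bait" post wins big:
tastes = {"travel-post": 5.0, "whale-bait-post": 0.5}
outcome = {"travel-post": 2.0, "whale-bait-post": 50.0}

print(utility({"travel-post"}, outcome, tastes, curation_share=0.01))      # vote your conscience
print(utility({"whale-bait-post"}, outcome, tastes, curation_share=0.01))  # minnow chasing the winner
print(utility({"whale-bait-post"}, outcome, tastes, curation_share=0.25))  # whale chasing the winner
```

With these made-up numbers, the tiny curation share leaves the conscience vote ahead, while a large curation share flips the comparison in favor of chasing the winner, which is exactly the minnow/whale tension described above.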

It would seem that we're all set to start modeling, but we're still missing one thing: beliefs. It's all well and good to write down a model of people's valuations, but to do game theory we have to come up with some description of how people choose their votes.

What do we mean by beliefs? The way this is typically treated in game theory is to take our collection of voters and split them apart into different categories (we call them types) on the basis of what their payoffs are and what they believe about other players' payoffs. For instance, let's say that Type 1 voters are defined as

  • Minnows (tells us something about Pi)
  • Who like travel posts (tells us about vi and ci)
  • Who think that
    • Minnows like travel posts
    • Dolphins like game theory posts
    • Whales like posts talking about how great Steem is.
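
Just to make the bookkeeping visible, here's one hypothetical way to encode a type like that in code; the field names and values are mine, chosen only to mirror the list above.

```python
from dataclasses import dataclass, field

@dataclass
class VoterType:
    """One possible encoding of a player type: payoff ingredients plus
    first-order beliefs about what other classes of voters like."""
    stake: str                                    # "minnow", "dolphin", "whale" -- shapes P_i
    tastes: dict = field(default_factory=dict)    # shapes v_i and c_i
    beliefs: dict = field(default_factory=dict)   # stake class -> what that class is believed to like

type_1 = VoterType(
    stake="minnow",
    tastes={"travel": 5.0},
    beliefs={
        "minnow": "travel posts",
        "dolphin": "game theory posts",
        "whale": "posts about how great Steem is",
    },
)
```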

Question: is this sort of list all we would need to describe beliefs? The quick answer is no. Here's the problem: Type 1 voters think that minnows like travel posts, but what do they think that minnows believe about other minnows? For that matter, what do they think about anybody else's beliefs?

So here's problem number 1: To properly encode beliefs, we need to write out everybody's beliefs about everybody else's beliefs about everybody else's beliefs about everybody else's... See the problem? Even to totally encode the beliefs that 2 people have about each other, we'd need to write out an infinite number of possibilities. Not going to happen.

Question: how do we get around this? Well, the standard thing in the game theory literature is something called a common prior. A common prior is economist-speak for saying that everybody has the same beliefs about each other's beliefs. It means that we just associate a probability with each player type, and say that everybody knows all of those probabilities, so they basically know what to expect about who they're going to run into in the world. Does this sound plausible?
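
In code, a common prior is nothing more exotic than a single probability table over types that every player is assumed to share. A minimal sketch, with type names and probabilities pulled out of thin air:

```python
# A common prior: one distribution over types that everybody uses when
# forming expectations. The type names and probabilities are invented.
common_prior = {
    "travel-loving minnow": 0.50,
    "game-theory dolphin": 0.35,
    "Steem-booster whale": 0.15,
}

def expected_payoff(my_action, payoff_against):
    """Expected payoff of my_action when the other player's type is drawn
    from the common prior; payoff_against(action, their_type) -> payoff."""
    return sum(prob * payoff_against(my_action, their_type)
               for their_type, prob in common_prior.items())

# Silly made-up payoff rule, just to show the mechanics:
reciprocity = lambda action, their_type: 1.0 if their_type == "travel-loving minnow" else 0.0
print(expected_payoff("vote for a travel post", reciprocity))   # 0.5 under this prior
```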

I'd argue that it's not supposed to be plausible - nobody really thinks that every single person starts out with the same beliefs about everybody else. It is, however, supposed to be tractable - in other words, it gives us a tool to actually make predictions. Unfortunately, my understanding is that there is ample evidence that in ordinary everyday interactions, people do not really analyze their decisions with this level of mathematical rigor. For an example, see Prospect Theory.

To wrap it all up, what we would do is come up with a common prior (which in itself is a pretty daunting task), and then we could describe an "expected" social outcome with a concept called a Bayesian Nash Equilibrium (BNE), where everybody is happy with their choice of action given their beliefs about other players' types and choices of actions. Remember how I argued that an ordinary Nash Equilibrium isn't always the best predictor of social behavior? Well, a BNE has an additional layer of complexity on top of a normal Nash Equilibrium, which makes me suspect that it is probably an even worse predictor of behavior.
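
If you want to see what "happy with their choice given their beliefs" means mechanically, here's a bare-bones BNE check for a tiny two-type toy game, not the full model above. The payoff numbers are invented and have nothing to do with Steem's actual economics; the point is only the shape of the computation.

```python
import itertools

# Toy Bayesian game: each type picks "conscience" or "chase" (winner-chasing).
# payoff[(my_type, my_action, their_action)] -> my payoff. All numbers invented.
types = ["minnow", "whale"]
actions = ["conscience", "chase"]
prior = {"minnow": 0.8, "whale": 0.2}   # the common prior over types

payoff = {
    ("minnow", "conscience", "conscience"): 5, ("minnow", "conscience", "chase"): 4,
    ("minnow", "chase", "conscience"): 2,      ("minnow", "chase", "chase"): 3,
    ("whale", "conscience", "conscience"): 3,  ("whale", "conscience", "chase"): 2,
    ("whale", "chase", "conscience"): 6,       ("whale", "chase", "chase"): 7,
}

def expected(my_type, my_action, strategy):
    """Expected payoff when the other player's type is drawn from the prior
    and they play according to `strategy` (a dict: type -> action)."""
    return sum(prior[t] * payoff[(my_type, my_action, strategy[t])] for t in types)

def is_bne(strategy):
    """Bayesian Nash Equilibrium: no type can do better by deviating unilaterally."""
    return all(expected(t, strategy[t], strategy) >=
               max(expected(t, a, strategy) for a in actions)
               for t in types)

for profile in itertools.product(actions, repeat=len(types)):
    strategy = dict(zip(types, profile))
    print(strategy, "<- BNE" if is_bne(strategy) else "")
```

With these particular invented numbers, the only profile that passes the check is the one where minnows vote their conscience and whales chase the winners; nudge the payoffs or the prior and the equilibrium moves, which is part of why I don't trust this machinery to predict very much here.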

And this is why I'm not actually going to do any analysis on the complicated model.

I hope you can see from the wall of text I've brought you today that in many ways, game theory starts to bog down when you introduce too much model complexity. I could certainly encode all of this in a mathematical model, but I would never be able to prove a theorem about it. Worse still, any theorem I'd prove about it would almost certainly not accurately describe human decision-making processes!

To conclude, it is my opinion that the descriptive model is

  1. Resistant to analysis, and
  2. Probably not good at predicting behavior.

And before I go, I'll just plug my brother's new article! He's working on some really interesting urban food projects, and he will definitely not bore you with game theory!
