% fortune -ae paul murphy

Prediction markets and the madness of crowds

Last week's discussion of global warming was rather off the topic I wanted to talk about, but extremely interesting anyway. In particular I was struck by an exchange I had with "jtmodel" in which he offered information from realclimate.org and I responded by pointing out that realclimate.org is a cheerleader site for global warming activists.

Basically he considers them credible and I consider them politically motivated, so the obvious question is who's right, but the exchange actually raises two far more interesting questions:

  1. first, how can we know? and,

  2. second, how can damage done by misinformation (either way) be corrected?

These are far more general questions than you might think. At the local level, for example, The Edmonton Journal has a long history of running headlines about cops caught doing something they shouldn't, followed weeks later by a paragraph buried somewhere in the back pages that might as well be headlined "Cops clear cops" - and that never generates the public response the original headlines did.

At the macro scale the right-wing blogosphere now routinely catches both Reuters and the AP quoting fake witnesses while producing and distributing faked photographs attesting to Israeli or American atrocities - but corrections, if offered at all, never have the impact of the original lies.

Think about this for a minute and you'll agree that both the discovery and the disparity issues are fundamentally about information - and that's our business, so is there something we can do?

I have a suggestion, or actually the bare bones of one - something more along the lines of muttering about opportunities than laying out a business concept.

The fundamental idea is this: set up a prediction market betting on the truth or falsehood, value or "misleadingness" of news and opinion sources. Give that market as much visibility as possible and use its results as a counterweight to the madness of crowds.

I talked about prediction markets as a means of monetising open source, away back in July of 2005 - here's the bit explaining how these things work:

What the exchange does is sell people a bundle of possible outcomes, only one of which will actually occur in the specified time frame. People are then free to trade the outcomes making up the bundle among themselves, and the exchange buys back the winning bet at the end.

Imagine, for example, an election with three candidates: A, B, and C. The exchange would sell a bundle of one bet on each candidate at a price of $5.00 and promise to pay off $5.00 on winning bets the day after the election.

Since individual bets can be traded, some people would seek to accumulate bets on candidate A, others on B or C. At the end, a person who bought one bundle, sold the B and C bets at $2.00 each to others, and bought a second "A" bet for $2.00 from someone else would be down $3.00 if A lost and up $7.00 if A won.
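A quick sketch makes the bundle arithmetic concrete. This is a hypothetical trader's ledger, and note that the $3.00/$7.00 figures work out when the trader ends up accumulating two bets on a single candidate - here taken to be A:

```python
# Checks the bundle arithmetic from the example above.
# The exchange sells a bundle of one bet each on A, B and C for $5.00
# and redeems each winning bet at $5.00 the day after the election.

BUNDLE_PRICE = 5.00
PAYOUT = 5.00

def net_result(cash_flows, holdings, winner):
    # Cash spent and received while trading, plus redemption of
    # however many winning bets the trader ended up holding.
    return sum(cash_flows) + holdings.get(winner, 0) * PAYOUT

# Buy one bundle (-$5.00), sell the B and C bets at $2.00 each (+$4.00),
# buy a second A bet for $2.00 (-$2.00): $3.00 out of pocket, two A bets held.
flows = [-BUNDLE_PRICE, 2.00, 2.00, -2.00]
holdings = {"A": 2}

print(net_result(flows, holdings, "A"))  # up $7.00 if A wins
print(net_result(flows, holdings, "B"))  # down $3.00 if A loses
```

Notice that the exchange itself is flat: it took in $5.00 per bundle and pays out exactly $5.00 per bundle, so all the risk is carried by the traders.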

For the most part this kind of thing is actually illegal in the United States - but it's easy to set up anywhere in the world and, in any case, I imagine that a good lawyer could successfully defend an American site by presenting the market as an on-line multi-player game rather than as a financial market.

What prediction markets do best is track how well their users' individual and collective decision making predicts reality.

The general answer might surprise you: individually, people demonstrate the truism that the value of expertise is generally negative with respect to predictions - almost all mutual funds, for example, almost always underperform the market.

In other words, in the long run yes/no predictions made by flipping coins will show a better record than predictions made by experts. However, and this is where the value comes from, the numbers have to balance to zero - a fair coin comes up heads half the time - and, correspondingly, predictions made by averaging across a sufficiently large group of poor predictors will outperform the coin-flipping approach by considerably more than the individual bettors underperform it.

In 2004, for example, nearly 60% of the 2,231 sports enthusiasts who recorded predictions for 267 NFL games did worse than someone who just flipped coins; but, as David Pennock put it:

the "average predictor", who simply reports the average of everyone else's predictions as its own prediction, scores 3371 points, good enough to finish in 7th place out of 2,231 participants.
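A toy simulation shows why this works. The data here is invented, not Pennock's: I assume forecasters who report noisy win probabilities for a season of games, scored by Brier score (mean squared error between forecast and outcome, lower is better). Averaging the crowd's forecasts cancels their independent errors, so the "average predictor" beats the coin flipper even when the typical individual doesn't:

```python
# Toy illustration (invented data, not Pennock's NFL results).
import random

random.seed(0)
N_GAMES, N_PREDICTORS = 500, 500

# True win probabilities and realized outcomes for each game.
truth = [random.uniform(0.2, 0.8) for _ in range(N_GAMES)]
outcomes = [1 if random.random() < p else 0 for p in truth]

def brier(probs):
    # Mean squared error between forecast probabilities and outcomes.
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / N_GAMES

# Each predictor sees the truth through its own independent noise.
def noisy(p):
    return min(0.99, max(0.01, p + random.gauss(0, 0.25)))

forecasts = [[noisy(p) for p in truth] for _ in range(N_PREDICTORS)]

# The "average predictor" just reports the crowd's mean forecast.
average = [sum(f[g] for f in forecasts) / N_PREDICTORS
           for g in range(N_GAMES)]

scores = sorted(brier(f) for f in forecasts)
coin = brier([0.5] * N_GAMES)  # always exactly 0.25

print("median individual:", round(scores[N_PREDICTORS // 2], 3))
print("coin flipper:     ", round(coin, 3))
print("average predictor:", round(brier(average), 3))
```

Under these assumptions the median individual scores worse than the coin flipper, while the averaged forecast scores better than both - the same shape as the NFL result quoted above.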

One of the great things about this is that you can look at the bets at any time before the resolution date and see both which way the experts are going and the (usually opposite) way the averages are tending.

Aside from legal issues, prediction markets have an odd property that actually makes them the perfect tool for debunking media-sponsored myths: the more people share a delusion, the easier the system looks to game, but the harder it really is. If a nutroots organization like the dailykos decided to spend its adherents' money buying credibility in the prediction markets, for example, the effect would simply be to bring out large numbers of willing beneficiaries.

Suppose, therefore, we list the claims made in each of the ten most cited articles appearing on realclimate.org during the first quarter of 2007 and sell a bundle of bets based on how many of these stories will have been substantially discredited by some deadline like April 1st, 2010.

Once we nailed down the details we'd have a bundle of eleven bets, from which I would keep my "all ten" bet, trade my "zeros" to jtmodel for his "tens", and hope to sell enough of the other nine to get as many more "all ten"s as I could.

In the end one of us would make money, the other lose it - and in the meantime anyone could cite the current value of each bet as a pretty good indicator of the site's overall credibility.
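As a sketch of how reading off that indicator might work - the $11.00 bundle price, the trade prices, and the claim counts below are all invented for illustration - each bet's current price divided by the payout is the crowd's implied probability for that outcome, and the whole price vector collapses into a single credibility number:

```python
# Hypothetical eleven-outcome bundle: "0 of the 10 stories discredited"
# through "all 10 discredited". Assume an $11.00 bundle that redeems
# the one winning bet at $11.00 on the deadline date.

BUNDLE_PRICE = 11.00

# Invented last-traded price for each outcome; if these drift far from
# summing to the bundle price, arbitrage against the exchange opens up.
prices = dict(zip(range(11),
    [0.20, 0.30, 0.50, 0.90, 1.40, 1.80, 1.90, 1.60, 1.30, 0.70, 0.40]))

# Price over payout reads as the crowd's probability that exactly
# k of the ten stories get discredited by the deadline.
implied = {k: p / BUNDLE_PRICE for k, p in prices.items()}

# One-number credibility summary: the expected fraction of the ten
# stories still standing at the deadline.
expected_surviving = sum((10 - k) * q for k, q in implied.items()) / 10

print({k: round(q, 3) for k, q in implied.items()})
print(round(expected_surviving, 2))
```

The point of the exercise is that anyone could quote `expected_surviving` - or the implied probability on any single outcome - without betting a dime, which is exactly the "counterweight" role suggested above.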

Paul Murphy wrote and published The Unix Guide to Defenestration. Murphy is a 25-year veteran of the I.T. consulting industry, specializing in Unix and Unix-related management issues.