On November 14th, 2016, just days after the election, the New York Times published an article covering likely Supreme Court nominees from the Trump administration. A day later, CNN published their own piece on the topic. And within the next few weeks, the Wall Street Journal, Washington Post, USA Today, LA Times, and many others had all followed up with their own predictions and profiles. By early December, PredictIt, the real-money prediction market, entered the fray, listing contracts for 25 candidate Justices on its market.
Collectively, pundits and market participants spent thousands of hours researching, interviewing, and writing on the topic. Old contacts were dug up, hunches were whispered over Beltway lunches, and the tea leaves of Trump’s tweets were read. Yet despite this small mountain of effort, none of these early prognostications were right. In fact, Gorsuch was barely mentioned in this early coverage, and even then, often as an “also-ran.”
Well, almost. On November 20th, less than two weeks after the election, FantasyJustice predicted the Gorsuch appointment. And except for a few brief hours on November 23rd, Gorsuch never fell from that lead. In fact, his margin continued to grow right up to the announcement at 8PM last night. Our not-for-money crowd prediction results are shown in the figure below, beginning on November 14th and running up through the night of January 31st.
So how did $50 in Amazon Web Services hosting expenses, and a half-day diversion for one of our developers, beat out hundreds of thousands of dollars in direct investment and opportunity cost by sophisticated media organizations? More importantly, what can we learn from this example?
Three Ways to Predict
There are three ways we predict things: experts, crowds, and algorithms. Experts are best exemplified by pundits, doctors, and lawyers, and for much of recent human history, we have delegated decision-making to solitary specialists like these – the so-called “cult of the expert”. Experts typically rely on tacit knowledge and implicit models. This is a technical way of saying “experienced gut instinct.”
(If you’ve made it this far, do yourself a favor and purchase a copy of Professor Tetlock’s Superforecasting: The Art and Science of Prediction. His career has been dedicated to exploring human judgment, good or otherwise, and his book is an excellent tour through modern research.)
Crowds, on the other hand, are defined by their multiplicity. Books like James Surowiecki’s The Wisdom of Crowds have popularized the idea that a group can be wise even when its individual members are not, and crowds can take many forms. For example, a panel of experts can form a crowd, just as the market of PredictIt users forms another.
Lastly, algorithms are best demonstrated by the progress of Artificial Intelligence or Machine Learning technologies. Can I safely turn right at this intersection in four seconds? Is this borrower likely to repay their mortgage over the next 30 years? Algorithms are systematic approaches based on explicit, data-driven models. While humans can technically execute algorithms without the aid of computers, our general distaste for arithmetic has left this task to the machines.
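The core statistical claim behind the “wisdom of the crowd” is that aggregating many noisy, independent guesses cancels out individual error. A minimal sketch of that effect, using simulated guesses around a hypothetical true value (the numbers here are illustrative, not FantasyJustice data):

```python
import random
import statistics

random.seed(42)  # reproducible demo

TRUE_VALUE = 100.0  # the hypothetical quantity the crowd is guessing

# Simulate 1,000 individually noisy guesses: each person is off,
# on average, by quite a lot (standard deviation of 25).
guesses = [random.gauss(TRUE_VALUE, 25.0) for _ in range(1000)]

# The crowd's estimate is just the mean of all guesses.
crowd_estimate = statistics.mean(guesses)
crowd_error = abs(crowd_estimate - TRUE_VALUE)

# Compare against how far off a typical individual is.
individual_errors = [abs(g - TRUE_VALUE) for g in guesses]
median_individual_error = statistics.median(individual_errors)

print(f"crowd error:             {crowd_error:.2f}")
print(f"median individual error: {median_individual_error:.2f}")
```

Because the individual errors are independent, the averaged estimate lands far closer to the truth than the typical guesser does; this is the same mechanism that lets a crowd of imperfect forecasters outperform its own members.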
While recovering from a brunch in Chicago on Saturday, November 5th, Josh, Tyler, and I applied this three-pronged framework to the upcoming nomination process. Experts – well, the papers were already full of their guesses. Algorithms – without much data to use, we couldn’t train a model. And so, through process of elimination, crowds it was.
In reality, we already spend much of our time helping clients deal with issues just like these. In addition to running FantasyJustice, we’ve run FantasySCOTUS, a Supreme Court prediction tournament, for the last 6 years; and we also offer a legal technology product called LexSemble, used by corporate legal departments and law firms. Will the FTC approve our merger? Should we settle this commercial litigation? How much in damages will the EPA seek? And, most importantly: Which experts, attorneys, and law firms have been right about questions like these in the past?
Armed with this experience, we got to work. We had the site up and running within days. Referrals through Twitter and our own FantasySCOTUS brought us the first few hundred predictions. Josh, iPad in hand, walked the floor of the Federalist Society Conference, collecting votes from many experts (while some of the potential nominees watched!). We even had Russian and Brazilian botnets weigh in with their opinions.
In the end, our process not only got the right answer, but it also did so incredibly quickly. Our crowd of interested parties, many of whom are the “experts” discussed above, provided nearly 4,000 opinions without any offer of reward or compensation. The results of the poll were public and transparent from day one, and we’ll be publishing detailed vote logs (including IP addresses) in the near future.
The moral of the story? The golden era of the “cult of the expert” is over. Armed with science, the judgment of whole groups of people, and technology, we can now do much better than simple reliance on one person’s gut instinct.