
The Wisdom of Crowdsourcing

The big news lately is all about the impending retirement of Supreme Court Justice Anthony Kennedy. The past two weeks have seen a flurry of media activity as so-called “experts” prognosticate about who Trump will select as Kennedy’s successor. These experts certainly have insight worth considering. But do they really have the best insight?

In the fall of 2016, we built a model – called “FantasyJustice” – based on the predictions not just of experts, but of anyone who wanted to try to predict Antonin Scalia’s successor. Thousands of experts predicted who they thought President Trump would nominate. Many of these experts turned out to be wrong. In the end, our crowdsourcing of non-experts in FantasyJustice correctly predicted Neil Gorsuch would be nominated.


FantasyJustice Correctly Predicted Gorsuch, Winter 2016-2017*

*Note also the rate of activity for Thomas Hardiman.

With Kennedy’s retirement, we returned again to FantasyJustice. As of today, July 9th, 2018, well over 2,000 unique predictions have been cast by visitors to FantasyJustice. Tonight at 9pm, according to Trump’s timetable, we will find out for sure. Still, we are confident that lightning may strike twice. Why?


Predicting the Future with Experts and Algorithms

Generally speaking, there are three ways to predict the future: experts, algorithms, and crowds. For most of human history, algorithms didn’t exist, and crowd-based decisions usually led to bloody revolutions. Society looked to experts to attempt to control the uncertainty of the future. These experts met society’s need for forecasts, gradually refining their methods over thousands of years. Experts haven’t always used rigorous scientific techniques, though. In addition, a single expert – or a small group of them – can often fall prey to internal biases.

The computational algorithms invented and developed in the more recent past seek to improve upon old expert-based wisdom. Algorithms power all sorts of forecasts, from the weather, to political outcomes, to sophisticated economic models.

But to be successful, an algorithm needs a lot of data. Algorithms aren’t as good an option for domains with small parameter sets and little historical data. The Supreme Court represents one such limitation. Only slightly more than one hundred Justices have ever served on the Court, and that number drops below 80 when counting from the Judiciary Act of 1869, which established the Court’s contemporary nine-Justice composition. A predictive algorithm can’t do much with such a small sample of past data. Additionally, Supreme Court selection is an idiosyncratic process that unfolds amid never-recurring variables: shifting politics, the zeitgeist, evolving common-law precedents, and the candidates themselves.


Predicting the Future with Crowds

Enter crowdsourcing. Crowdsourcing is the acquisition of information or input from large groups of people. The practice of aggregating data from a wide range of individuals can actually fill the gaps left by experts and algorithms. With the same rigorous standards applied to all scientific research, efforts like the Good Judgment Project have produced substantial, repeatable methods of crowdsourcing all kinds of solutions to the problems of the day.
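The statistical intuition behind this can be sketched in a few lines of Python: when many independent estimates are averaged, their individual errors tend to cancel, so the aggregate lands closer to the truth than the typical participant does. The numbers below are purely illustrative, not drawn from any FantasySCOTUS or Good Judgment Project data.

```python
import random
import statistics

random.seed(42)

TRUE_VALUE = 100.0  # hypothetical quantity the crowd is estimating

# Each participant's guess is the true value plus independent noise.
guesses = [TRUE_VALUE + random.gauss(0, 20) for _ in range(1000)]

# The crowd's estimate is the simple average of all guesses.
crowd_error = abs(statistics.mean(guesses) - TRUE_VALUE)

# Typical individual error: mean absolute error across participants.
individual_error = statistics.mean(abs(g - TRUE_VALUE) for g in guesses)

print(f"crowd error: {crowd_error:.2f}")
print(f"typical individual error: {individual_error:.2f}")
```

Averaging 1,000 noisy guesses shrinks the error by roughly a factor of the square root of the crowd size, which is why the aggregate so often beats any single forecaster.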

We at LexPredict have demonstrated the wisdom of crowds as well. Not only does the FantasySCOTUS crowd demonstrate accuracy at or near 80%, but the FantasyJustice crowd also correctly predicted Neil Gorsuch’s appointment to the Supreme Court, despite the prevailing media narratives at the time.


Everyday Crowdsourcing

But the Supreme Court is a special example. The Court doesn’t hear many cases in a year, and a new Justice is appointed only once every several years. While the Court’s decisions have a great impact on everyday life, wondering who the next Justice will be is, by itself, a rather small question.

There is no one crowdsourcing method, though. All crowdsourcing pools collective knowledge to address problems more efficiently and accurately than individuals can, but this can be accomplished in various ways, for various pursuits. Much depends on the individuals whose knowledge researchers seek to aggregate for their predictive insight. FantasySCOTUS participants, for example, are self-selected; most people would rather play a fantasy sport, or focus their efforts somewhere else. Only people already interested in the Supreme Court, and who possess at least a bit more knowledge about SCOTUS than the average person, choose to participate. Similarly, Philip Tetlock and his colleagues are well aware that the crowds in the Good Judgment Project are there by choice.

Luckily, these are not the only examples of effective crowdsourcing strategies. LexSemble provides another avenue for crowdsourcing, with much more tangible application. LexSemble isn’t designed for the Supreme Court, or for random research queries; it’s designed to work with data from your organization to help your organization make better decisions.

LexSemble builds on knowledge management systems designed for collaboration, so an organization’s collective knowledge and experience can all be brought to bear on its unique problems. LexSemble also uses crowdsourcing to aggregate predictions from across your organization, instead of just from the experts. This way, valuable information on the likelihood of outcomes is as full and complete as possible. LexSemble also has the functionality to produce reports on these and other aspects of your organization’s operations.

Many of the useful crowdsourcing applications of LexSemble involve constructing decision trees. Individuals within an organization – at various levels of expertise – can contribute responses to surveys. LexSemble then uses these surveys to build decision trees that use statistical analysis and expert ranking to determine the best outcome. For a concrete example of what this process looks like in LexSemble, check out our guest post from Christopher Groh.
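LexSemble’s internal scoring isn’t described here, but the basic aggregation step at a single node of such a decision tree – combining survey responses weighted by each respondent’s track record – can be sketched as follows. All names, answers, and weights below are hypothetical, not taken from the product:

```python
from collections import defaultdict

def weighted_vote(responses, weights):
    """Pick the answer with the greatest total weight, where each
    respondent's weight reflects something like past accuracy.
    Respondents missing from `weights` count with weight 1.0."""
    tally = defaultdict(float)
    for respondent, answer in responses.items():
        tally[answer] += weights.get(respondent, 1.0)
    return max(tally, key=tally.get)

responses = {"alice": "settle", "bob": "litigate",
             "carol": "settle", "dave": "litigate"}
weights = {"alice": 0.9, "bob": 0.6, "carol": 0.7, "dave": 0.8}

# settle: 0.9 + 0.7 = 1.6; litigate: 0.6 + 0.8 = 1.4
print(weighted_vote(responses, weights))  # settle
```

Note that an unweighted vote here would be a 2–2 tie; ranking respondents by track record is what breaks it, which is one reason expert ranking matters in this kind of aggregation.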

For more details on LexSemble, and an explanatory video, see our post on intelligent crowdsourcing.


Conclusion

Crowdsourcing is not the niche field that narrowly scoped projects may make it seem. Data is important, and data-driven software tools represent a crucial next step in the evolution of legal organizations. But software tools are not the only avenue of data-driven evolution. The everyday computations in the minds of individuals, aggregated across large groups, can be just as powerful. Streamlining, accessing, and storing organizational knowledge is, in many ways, the key to improving processes and strategies for legal teams. Crowdsourcing is a data-driven approach that humans have practiced before, but its potential to change what we know – and how we know it – has barely been tapped. Research at LexPredict and elsewhere, aided by artificial intelligence algorithms, has only begun to show the potential of crowdsourcing.
