
Thread: Predicting Supreme Court Decisions and other things

  1. #1

    Predicting Supreme Court Decisions and other things
    How computers routed the experts

    By Ian Ayres

    Published: August 31 2007 22:30 | Last updated: August 31 2007 22:30

    Six years ago, Ted Ruger, a law professor at the University of Pennsylvania, attended a seminar at which two political scientists, Andrew Martin and Kevin Quinn, made a bold claim. They said that by using just a few variables concerning the politics of the case, they could predict how the US Supreme Court justices would vote.

    Analysing historical data from 628 cases previously decided by the nine Supreme Court justices at the time, and taking into account six factors, including the circuit court of origin and the ideological direction of that lower court’s ruling, Martin and Quinn developed simple flowcharts that best predicted the votes of the individual justices. For example, they predicted that if a lower court decision was considered “liberal”, Justice Sandra Day O’Connor would vote to reverse it. If the decision was deemed “conservative”, on the other hand, and came from the 2nd, 3rd or Washington DC circuit courts or the Federal circuit, she would vote to affirm.
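The two rules quoted above for Justice O'Connor can be sketched as a simple decision function. This is a hypothetical reconstruction from just the fragment described in the text: the function and variable names are invented, the real model used six factors and a separate flowchart per justice, and the branches not covered here are left unresolved.

```python
# A minimal sketch of the kind of flowchart Martin and Quinn built,
# reconstructed only from the two O'Connor rules quoted in the text.
# Names and the "unknown" fallback are assumptions, not the real model.

def predict_oconnor(lower_court_direction: str, circuit: str) -> str:
    """Predict O'Connor's vote from the lower court's ideology and circuit."""
    affirm_circuits = {"2nd", "3rd", "DC", "Federal"}
    if lower_court_direction == "liberal":
        return "reverse"
    if lower_court_direction == "conservative" and circuit in affirm_circuits:
        return "affirm"
    return "unknown"  # branches the text doesn't describe

print(predict_oconnor("liberal", "9th"))      # reverse
print(predict_oconnor("conservative", "DC"))  # affirm
```

The point of the flowchart form is that each prediction follows from a handful of yes/no questions, with no reading of briefs or precedent at all.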

    Ruger wasn’t buying it. As he sat in that seminar room, he didn’t like the way these political scientists were describing their results. “They actually used the nomenclature of prediction,” he told me. “[But] like a lot of legal or political science research, it was retrospective in nature.”

    After the seminar he went up to them with a suggestion: why didn’t they run the test forward? As the men talked, they decided to run a horse race, to create “a friendly interdisciplinary competition” to compare the accuracy of two different ways to predict the outcome of Supreme Court cases. In one corner stood the predictions of the political scientists and their flowcharts, and in the other, the opinions of 83 legal experts – esteemed law professors, practitioners and pundits who would be called upon to predict the justices’ votes for cases in their areas of expertise. The assignment was to predict in advance the votes of the individual justices for every case that was argued in the Supreme Court’s 2002 term.

    The test would implicate some of the most basic questions of what law is. In 1881, Justice Oliver Wendell Holmes created the idea of legal positivism by announcing: “The life of the law has not been logic; it has been experience.” For him, the law was nothing more than “a prediction of what judges in fact will do”. He rejected the view of Harvard’s dean at the time, Christopher Columbus Langdell, who said that “law is a science, and ... all the available materials of that science are contained in printed books”.

    Many insiders watched with interest as the contest played out during the course of the Court’s term; both the computer’s and the experts’ predictions were posted publicly on a website before each decision was announced, so people could see the results as opinion after opinion was handed down.

    The experts lost. Across the cases argued during the 2002 term, the model correctly predicted 75 per cent of the court’s affirm/reverse results, while the legal experts collectively got only 59.1 per cent right. The computer was particularly effective at predicting the crucial swing votes of Justices O’Connor and Anthony Kennedy. The model predicted O’Connor’s vote correctly 70 per cent of the time, while the experts’ success rate was only 61 per cent.

    How can it be that an incredibly stripped-down statistical model outpredicted legal experts with access to detailed information about the cases? Is this result just some statistical anomaly? Does it have to do with idiosyncrasies or the arrogance of the legal profession? The short answer is that Ruger’s test is representative of a much wider phenomenon. Since the 1950s, social scientists have been comparing the predictive accuracies of number crunchers and traditional experts – and finding that statistical models consistently outpredict experts. But now that revelation has become a revolution in which companies, investors and policymakers use analysis of huge datasets to discover empirical correlations between seemingly unrelated things. Want to hedge a large purchase of euros? Turns out you should sell a carefully balanced portfolio of 26 other stocks and commodities that might include some shares in Wal-Mart.

  2. #2
    Acarson (Member)
    Join Date: Jan 2007
    Location: Homeless in North Carolina
    I am assuming (read: assuming) that the statistical model was a regression analysis. If so, someone had to isolate the variables that would correlate with what you are trying to predict. For a Supreme Court decision, I would guess it would rely heavily on the public policy positions a particular Justice has held over the years. Am I in the right ballpark?
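    For what it's worth, the kind of regression the reply describes can be sketched in a few lines. The data below is purely synthetic, and the study's actual model was a flowchart (a classification tree), not a regression; this only illustrates the poster's idea of fitting weights to case variables and predicting a binary vote.

```python
# Toy logistic regression fit by gradient descent on synthetic data.
# Features: (lower_court_liberal, from_affirm_circuit); label 1 = reverse.
# All data and names here are illustrative assumptions, not the study's.

import math

X = [(1, 0), (1, 1), (0, 1), (0, 0), (1, 0), (0, 1)]
y = [1, 1, 0, 1, 1, 0]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

w = [0.0, 0.0, 0.0]  # bias, weight on x1, weight on x2
lr = 0.5
for _ in range(2000):
    for (x1, x2), target in zip(X, y):
        p = sigmoid(w[0] + w[1] * x1 + w[2] * x2)
        err = target - p
        w[0] += lr * err
        w[1] += lr * err * x1
        w[2] += lr * err * x2

def predict(x1, x2):
    return sigmoid(w[0] + w[1] * x1 + w[2] * x2) > 0.5

print(predict(1, 0))  # liberal lower court -> predicts reverse
```

    The isolation of variables the reply asks about happens in choosing the features; the fitting step just assigns each one a weight.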

