Just because you can attach a number to something doesn't mean it's quantifiable. Or, maybe more accurately, just because something is quantifiable doesn't mean it's objective. Or meaningful.
But that doesn't stop people from trying.
Empirical judicial studies is a hot topic. Apparently, you can tell a lot about the judicial system by labeling judges liberal or conservative (usually based on the party affiliation of whoever appointed them), labeling their opinions liberal or conservative (usually based on which party prevailed), and drawing some kind of statistical association between the two.
Sound like voodoo? Sound like circular reasoning? "Let's assume judges are beholden to the appointing executive's party and results-oriented, and use 'science' to show they are ideologically driven." I'm not sure this is a useful venture.
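To make the circularity concrete, here's a toy sketch in Python of the coding-and-correlating exercise described above. Everything in it is invented for illustration -- the votes, the labels, and the `agreement_rate` helper -- and it is not anyone's actual methodology:

```python
# Toy sketch (hypothetical data, invented names) of the coding-and-
# correlating approach: label each judge by the party of the appointing
# executive, label each vote by the ideological coding of the prevailing
# side, then measure how often the two labels agree.

# Hypothetical votes: (judge's appointing party, coding of the outcome)
votes = [
    ("R", "conservative"), ("R", "conservative"), ("R", "liberal"),
    ("D", "liberal"), ("D", "liberal"), ("D", "conservative"),
]

def agreement_rate(votes):
    """Share of votes where the coding of the outcome matches the
    ideology imputed to the judge from the appointing party."""
    imputed = {"R": "conservative", "D": "liberal"}
    matches = sum(1 for party, outcome in votes if imputed[party] == outcome)
    return matches / len(votes)

print(f"agreement rate: {agreement_rate(votes):.2f}")  # 0.67 on this toy data
```

Notice that the "finding" is baked in from the start: the score can only restate the labeling assumptions used to build it, which is the circularity complaint in a nutshell.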
But again, that doesn't stop people from trying. Here's a California Law Review article -- "How Not to Lie with Judicial Votes" -- that tries to justify the empirical endeavor.
A preview (footnotes omitted):
"[T]his Article synthesizes and unifies the understanding of statistical measures of judicial voting. It provides a guide for how to interpret such measures, clarifies misconceptions, and argues that the extant scores are merely a special case of a general approach to studying judicial behavior with (model-based) measurement.
"[W]e demonstrate how modern measurement methods are useful precisely because they empower meaningful examination, data collection, and incorporation of doctrine and jurisprudence. We argue that existing uses are simply a special case of a much more general measurement approach that works synergistically with the qualitative study of case law. We demonstrate in Part V how such measurement approaches—when augmented with jurisprudentially meaningful data—can advance our understanding of courts . . . ."
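For those wondering what "(model-based) measurement" means in practice, the extant scores the article refers to are of the ideal-point family (Martin-Quinn scores being the best-known example). Here is a minimal, hypothetical sketch of the basic idea -- a one-dimensional logistic ideal-point model with made-up numbers -- not the article's actual model, which is considerably more elaborate:

```python
# Minimal sketch (illustrative only) of a one-dimensional ideal-point
# model: each judge has a latent position x_j and each case a cutpoint
# c_i; the probability of a "conservative" vote rises as x_j moves to
# the right of c_i. All values below are made up.

import math

def p_conservative(x_j, c_i, discrimination=1.0):
    """Probability that a judge with latent position x_j casts the
    'conservative' vote on a case with cutpoint c_i."""
    return 1.0 / (1.0 + math.exp(-discrimination * (x_j - c_i)))

# Hypothetical latent positions (negative = liberal, positive = conservative)
judges = {"Judge A": -1.2, "Judge B": 0.1, "Judge C": 1.5}
case_cutpoint = 0.0  # a case that divides the bench evenly

for name, x in judges.items():
    print(f"{name}: P(conservative vote) = {p_conservative(x, case_cutpoint):.2f}")
```

In real applications the latent positions are estimated from a matrix of observed votes rather than assumed -- which, of course, is exactly where the labeling questions raised above come back in.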