I recently discussed the importance of standardized quality metrics. Here I'll work through how one might define such a metric; this is an exercise, not a definitive solution. As a starting point, I met with Novus Law to see what types of metrics they were using for quality control. They provide rather sophisticated litigation support, from e-discovery through to outlining the case's story. Along the way, they create witness files that include documents used for depositions. In explaining why they felt their quality was better, they pointed out that, in their view, they're very good at identifying the collection of documents likely to be discussed during depositions. But how can they show this through a standardized metric? As they walked me through their process, I suggested a way of measuring that quality. They felt they could compete well using it, so they were happy with it, at least conceptually.
In Information Retrieval (IR), we call documents "relevant" when they relate to a given query. We measure the quality (Q) of a query result set with the notions of precision (P) and recall (R). Precision is the fraction of documents in a result set that are relevant. Recall is the fraction of relevant documents from the entire collection that are contained in the result set. Clearly both P and R range from 0 to 1. Thus, by setting Q = P * R, the highest-quality result set (Q = 1.0) is one in which every document is relevant and all the relevant documents from the collection are included. Can we use this metric to measure the quality of a witness file? Yes.
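To make the computation concrete, here's a minimal Python sketch of Q = P * R. The function name and document IDs are purely illustrative, and "relevant" here stands in for whatever ground-truth set applies (for the witness file, the documents opposing counsel actually used):

```python
# A minimal sketch of Q = P * R using Python sets. All names here are
# hypothetical, not from any particular system.

def quality(result_set: set, relevant: set) -> float:
    """Return Q = precision * recall for a result set."""
    if not result_set or not relevant:
        return 0.0
    hits = result_set & relevant             # relevant documents we retrieved
    precision = len(hits) / len(result_set)  # fraction of the result set that is relevant
    recall = len(hits) / len(relevant)       # fraction of all relevant docs retrieved
    return precision * recall

# Example: a 4-document witness file; opposing counsel used 5 documents,
# 3 of which were in the file.
witness_file = {"doc1", "doc2", "doc3", "doc4"}
used_in_depo = {"doc1", "doc2", "doc3", "doc5", "doc6"}
print(round(quality(witness_file, used_in_depo), 2))  # P = 0.75, R = 0.6, Q = 0.45
```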
Let's define "relevant" in this case as those documents put in front of a witness by opposing counsel during a deposition. The highest-quality witness file would consist of all and only such documents, with a quality score of 1.0. But let's also assume that the witness is too busy to be prepped on every document in the witness file. We need to order the documents in the witness file in some efficient manner so that we prep the witness on the most important documents first (though some documents may best be presented in, say, chronological order).
Assume that any document used by a judge in a court opinion is really important; we'll give it a weight of 3 out of 3. Documents that aren't used in opinions but are used as evidence at trial get a weight of 2. Finally, documents used in neither opinions nor evidence, but put in front of the witness by opposing counsel during deposition, get a weight of 1. The efficiency score, E, is the total of the weights of the documents used in witness prep divided by the optimal total, i.e., the largest total weight a prep of the same size could have drawn from the witness file (keep in mind that the weights can't be assigned until after the case has resolved, so one can't know the efficiency score at the time the witness is prepped). For example, suppose a witness was prepped on 10 documents, of which 3 were "3"s, 4 were "2"s, and 3 were "1"s, for a total weight of 3*3 + 4*2 + 3*1 = 20. However, the witness file contained 5 "3"s and 5 "2"s, so an optimal 10-document prep would have scored 5*3 + 5*2 = 25. Thus, the efficiency score is 20/25, or 0.8.
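Here's a small sketch of E under the same assumptions. The function and variable names are hypothetical, and the "optimal" denominator is taken, per the example above, to be the best total weight any prep of the same size could have achieved from the witness file:

```python
# A minimal sketch of the efficiency score E. Weights (3, 2, 1) follow the
# scheme above; names are illustrative only.

def efficiency(prepped_weights: list, file_weights: list) -> float:
    """E = total weight actually prepped / best possible total for that many docs."""
    budget = len(prepped_weights)  # number of documents there was time to prep
    optimal = sum(sorted(file_weights, reverse=True)[:budget])
    return sum(prepped_weights) / optimal

# The example from the text: 10 documents prepped (three 3s, four 2s, three 1s),
# while the witness file held five 3s, five 2s, and the three 1s that were prepped.
prepped = [3, 3, 3, 2, 2, 2, 2, 1, 1, 1]           # total weight 20
in_file = [3, 3, 3, 3, 3, 2, 2, 2, 2, 2, 1, 1, 1]  # best 10 sum to 25
print(efficiency(prepped, in_file))                # 20 / 25 = 0.8
```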
The perfect witness file and prep would have a Quality-Efficiency (QE) score of Q*E = 1.0*1.0 = 1.0. That would entail a witness file containing all and only the documents brought up by opposing counsel in deposition. Moreover, given the amount of time the witness had for prep, the most important of those documents that were used in the case would have been discussed with the witness.
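As a tiny follow-on sketch, the combined score is just the product of the two values computed above:

```python
# The combined Quality-Efficiency score; 1.0 means a perfect file and perfect prep.
def qe_score(q: float, e: float) -> float:
    return q * e

print(qe_score(1.0, 1.0))          # the ideal case: 1.0
print(round(qe_score(0.45, 0.8), 2))  # the running example values: 0.36
```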
A QE score of 1.0 is improbable, but that's not the point. The point is to set a metric that creates a competitive landscape based on a desired goal. In this case, companies competing on this metric to do the best trial prep work would be arguing that they can anticipate opposing counsel well by selecting the documents of interest to them, that they can anticipate which documents are most likely to be used in court and by a judge, and that they can work efficiently with busy clients. Not only that, but a high score means the time spent on non-relevant documents is minimized. This is good for clients under a billable-hour model, and good for service providers under a flat-fee model.
A good metric incentivizes competition based on desirable goals.
A copy of this article appeared on Thomson Reuters’ Legal Executive Institute on May 4, 2015.