Having been the Product Manager for Search, Analytics, and Review at Clearwell Systems, I had a fairly comprehensive idea of what features were important for completing a defensible review, but when I began as Director of Innovation at Discovia, my focus changed. I was no longer concerned with defining a feature to appeal to many different kinds of users. I now needed to find the best eDiscovery products to solve particular user and workflow problems for a single eDiscovery service provider. I needed to change my focus, and in turn I needed to help them change theirs.

First, everyone knows not to trust vendor claims, right? I’ve been a vendor, so I know. Each vendor tests in an environment best suited to their own software, so of course their software is the fastest and the easiest to use. Chances are very good that the vendor didn’t test in an environment best suited to you. That means you need to do that testing yourself, but how?

You need to decide what you want to accomplish, and how you would like this software to fit into your existing workflow (or how you can modify your existing workflow to include this software). For example, I looked at possible solutions for speeding up the processing of the email and attachments used in cases. Before we began looking at specific software, I asked the Discovia operations team what they considered to be fast; that is, what did we really mean when we said we wanted to speed up processing? The operations team wanted to speed up document ingestion: the time from when the documents arrive on the server hard drive to when their text is searchable in our review tool. They also wanted to compare how much “people time” the software took: how much time an operations engineer needed to spend on setup and configuration. With clear criteria for comparison, I was able to pare the options down to three possibilities, and, using our prepared testing document set, our testing went quickly. No, I won’t tell you which tool won, though I will tell you that one processing tool gave different results each time we processed the EDRM Enron document set. I would call that a major bug.
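If you want to run the same kind of comparison, the measurements are easy to automate. Here’s a minimal sketch in Python of how you might time ingestion and check that a tool is deterministic; the `ingest` callable is a hypothetical stand-in for whatever actually invokes your processing tool (a command line, an API call), not any real product’s interface:

```python
import hashlib
import time
from pathlib import Path


def fingerprint_output(output_dir: Path) -> str:
    """Hash every extracted-text file so two runs can be compared byte for byte."""
    digest = hashlib.sha256()
    for path in sorted(output_dir.rglob("*.txt")):
        digest.update(path.name.encode())
        digest.update(path.read_bytes())
    return digest.hexdigest()


def benchmark_run(ingest, source_dir: Path, output_dir: Path) -> tuple[float, str]:
    """Time one ingestion run (documents in, searchable text out) and fingerprint it."""
    start = time.perf_counter()
    ingest(source_dir, output_dir)  # stand-in for the tool under test
    elapsed = time.perf_counter() - start
    return elapsed, fingerprint_output(output_dir)
```

Run the same prepared document set through each candidate several times: the elapsed times give you the ingestion comparison, and if the fingerprints differ between runs of the same tool on the same input, you’ve found the kind of nondeterminism I just described. (“People time” still has to be measured with a stopwatch and a patient engineer.)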

But processing documents for eDiscovery is very well understood, you say. What about something more controversial, like Technology Assisted Review (TAR), which classifies documents as relevant based on training? I would say the first step is still the same: identify what you want to accomplish, and how the software will fit into (or change) your existing workflow. These tools are designed to take a set of tagged documents as input, then output a ranking of the remaining documents, ordered by how closely each one matches the relevant documents in the tagged set. How do you want to use those ranked documents? You’ll also need to think about process issues: Which document set are you going to test the software with? Who is going to do the training? How will you define success?
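To make that ranking step concrete: commercial TAR engines are proprietary, but the basic shape of the workflow can be sketched with off-the-shelf tools. Here’s a toy version in Python using scikit-learn; it illustrates the general technique only, not what any vendor actually ships, and the documents and labels are invented stand-ins:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Tagged seed set: documents a reviewer has already coded.
tagged_docs = [
    "draft contract renewal terms for the account",
    "fantasy football league standings and picks",
]
tagged_labels = [1, 0]  # 1 = relevant, 0 = not relevant

# The untagged remainder of the collection, to be ranked.
untagged_docs = [
    "questions about renewal pricing on the contract",
    "lunch order for the team meeting",
]

# Turn text into features and fit a classifier on the seed set.
vectorizer = TfidfVectorizer()
model = LogisticRegression().fit(vectorizer.fit_transform(tagged_docs), tagged_labels)

# Score each remaining document and rank by likelihood of relevance.
scores = model.predict_proba(vectorizer.transform(untagged_docs))[:, 1]
for score, doc in sorted(zip(scores, untagged_docs), reverse=True):
    print(f"{score:.3f}  {doc}")
```

The output is exactly the artifact you have to decide how to use: a relevance-ordered list. Whether you review from the top down, cut off at a score threshold, or use the ranking only to prioritize reviewers’ queues is the workflow question to answer before you buy.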

What ultimately matters, when it comes to new software, is whether it works for your purposes and in the way you need it to work. You can’t trust the vendor to have your best interests at heart, so make sure you know exactly what you need.