One of the most interesting technical issues being discussed at LegalTech last week was the question of how to classify, analyze, and review “unstructured” information like the content of emails, text documents, and presentations.
A familiar, simple-sounding answer leaps immediately to mind. Why not just hook all of these documents up to a search engine “crawler,” index all of the words in all of the documents, and then run ordinary key-word searches on the whole set? It’s exactly like running Google searches, except that instead of spanning a big chunk of the entire internet we only have to cover a few terabytes of corporate information – right?
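To make that mental model concrete, here is a minimal sketch of the “crawl, index, key-word search” idea in Python; the documents and query terms are invented purely for illustration and aren’t drawn from any real product.

```python
# A minimal sketch of the "index everything, then keyword-search it" idea.
# The documents dict and the query terms below are made up for illustration.
from collections import defaultdict

documents = {
    "email_001.txt": "Please review the attached merger agreement before Friday.",
    "memo_002.txt": "The merger timeline has slipped; see revised agreement.",
    "deck_003.pptx": "Q3 results summary for the board presentation.",
}

# Build an inverted index: each word maps to the set of documents containing it.
index = defaultdict(set)
for doc_id, text in documents.items():
    for word in text.lower().split():
        index[word.strip(".,;:")].add(doc_id)

# A keyword search returns every document containing all of the query terms.
def keyword_search(terms):
    results = [index.get(t.lower(), set()) for t in terms]
    return set.intersection(*results) if results else set()

print(keyword_search(["merger", "agreement"]))  # {'email_001.txt', 'memo_002.txt'}
```

Even this toy version makes the problems discussed below easier to see: the search returns every document containing the chosen terms, with no notion of which hits are duplicates or which are actually relevant.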
The bad news is that eDiscovery comes with a number of wrinkles that make a pure Google-style search model less than perfect. The good news is that a variety of vendors offer well-conceived solutions designed to take these wrinkles into account. The remainder of this post introduces some of the wrinkles; later posts will look at vendor solutions.
What’s different about eDiscovery from a Search perspective?
In an eDiscovery context, the ideal for a classification and search solution is to allow searchers to identify ALL documents that meet their criteria, not just “the ten most relevant documents” or “at least one document that answers my question,” as is common with Google searches. Imagine running a Google search that returned 20,000 responses. You think this number is too big (it’s overinclusive), but you don’t want to risk missing any relevant documents. Then imagine getting the bill for a team of attorneys, at rates in excess of $100 / hour, reading all of those documents to determine whether, in addition to containing the key words you selected, they are actually relevant to the particular lawsuit at hand.
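To put a rough number on that bill: a back-of-the-envelope calculation, assuming (purely for illustration) five minutes of attorney review per document at the $100-an-hour figure above, looks like this:

```python
# Back-of-the-envelope review cost for an over-inclusive result set.
# The per-document review time is an assumption for illustration only.
documents_returned = 20_000
minutes_per_document = 5          # assumed average review time
hourly_rate = 100                 # dollars; "in excess of $100 / hour"

review_hours = documents_returned * minutes_per_document / 60
estimated_cost = review_hours * hourly_rate
print(f"{review_hours:,.0f} hours, roughly ${estimated_cost:,.0f}")
# ~1,667 hours, roughly $166,667 -- before a single document proves relevant
```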
Another common source of overinclusiveness is that unstructured information repositories such as email accounts frequently contain multiple versions of the same chunks of content. Many documents repeat some content from earlier versions and add some new content. For example, when emails are replied to, forwarded, or sent to multiple recipients, content already in the information pool is duplicated and new information (email headers or comments) is added. With conventional search, all versions of every document fall within the search results and must be manually reviewed, at great cost, to sort out what is important and what is merely redundant.
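One way to picture how that redundancy can be flagged automatically is a near de-duplication check based on overlapping word “shingles.” The sketch below is only illustrative; the sample emails and the similarity threshold are assumptions, not any vendor’s actual settings.

```python
# A minimal sketch of near-duplicate detection via word shingles and Jaccard similarity.
# The 0.6 threshold is an arbitrary illustrative choice, not a vendor's setting.
def shingles(text, size=3):
    words = text.lower().split()
    return {tuple(words[i:i + size]) for i in range(max(1, len(words) - size + 1))}

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

original = "The merger agreement is attached for your review before Friday."
forwarded = ("FW: see below. The merger agreement is attached for your review "
             "before Friday.")

similarity = jaccard(shingles(original), shingles(forwarded))
print(f"similarity = {similarity:.2f}")
if similarity > 0.6:
    print("Likely a near-duplicate: review once, not twice.")
```

Real products use far more sophisticated fingerprinting, but the intuition is similar: flag documents whose content largely overlaps so reviewers read the new material only once.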
Another potential problem involves choosing key words correctly. One could easily choose key words that are logically related to the topic at hand, and that return a large number of relevant documents, but that still miss many documents, or the most important ones, in the document pool (the search results are underinclusive). What if, as in the Enron case, “code” words were adopted by the perpetrators of a scam in an effort to cover their tracks? What if some documents are written in a language the searchers don’t speak, or use words or terms the searchers aren’t familiar with?
The solutions various vendors have devised for these problems include semantic clustering, multi-variate analysis of word positioning and frequency, key words plus associative groupings, near de-duplication processes, and more. Each comes with strengths and weaknesses, of course, to be discussed in future posts.
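As a small preview of the first of those approaches, here is a rough sketch of semantic clustering using TF-IDF vectors and k-means via scikit-learn; the sample documents and the choice of two clusters are assumptions for illustration only, not a description of any vendor’s technology.

```python
# A rough sketch of semantic clustering: group documents by word-usage similarity
# rather than by matching an explicit keyword. Sample texts are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = [
    "Please review the merger agreement and the revised purchase price.",
    "Revised merger terms attached; purchase price unchanged.",
    "Reminder: the holiday party starts at 6pm in the lobby.",
    "Holiday party RSVP due Friday; lobby decorations go up Thursday.",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(docs)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for doc, label in zip(docs, labels):
    print(label, doc[:50])
# Documents about the merger should land in one cluster, party logistics in the other.
```

The appeal for reviewers is that documents end up grouped by what they are about, even when they never use the exact key words a searcher happened to pick.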