Earlier this week I blogged about placing the locus of control for e-discovery decisions in the right hands to ensure that the decisions made pass muster in court. To illustrate the potential impact of moving the locus of control for certain decisions to an outsource partner, let’s compare the document review solutions offered by H5 and Inference Data.
Gold standard counsel or Linguist - who should decide?
Both H5 and Inference enable users to improve results and potentially save vast amounts of money by teaching sophisticated software how to do document review faster and more accurately than human reviewers can. And the more the review process can be reliably automated, the more money is saved down the road because the amount of manual review is reduced. This all assumes that the software is trained correctly, of course. Which frames a locus of control question: Who’s best at training the software?
Last month I attended a webinar presented by H5. One thing that struck me as distinctive about H5 is their standard deployment of a team of linguists to improve detection of responsive documents from among the thousands or millions of documents in a document review. During the webinar I submitted a question asking what it is their linguists do that attorneys can’t do themselves. One of their people was kind enough to answer, more or less saying “These guys are more expert at this query-building process than attorneys.” Ouch.
I’ve long prided myself on my search ability (ask me about the time I deployed a boolean double-negative in a Westlaw search for Puerto Rico “RICO” cases) and I’m sure many of my fellow attorneys are equally proud. However, I know people (or engineers, anyway) who are probably better at search than I am, and I know one or two otherwise blindingly brilliant attorneys who are seriously techno-lagged. More importantly, attorneys typically have a lot on their plates, and search expertise on a nitty-gritty “get the vocabulary exactly right” level is just one of a thousand equally important things on their minds, so it’s not realistically going to be a “core competency.” So I can see the wisdom in H5’s approach, although I wonder how many attorneys are willing to admit right out loud that they are better off outsourcing this competency.
I can see where, depending on a number of different factors, either solution might be better. I encourage anyone facing this choice to make an informed decision about which approach leads to the best results rather than relying on their knee-jerk reaction.
I strongly recommend reading Ron’s post for the benefit of his insights, whether or not you are already familiar with TREC Legal Track. I’d also like to offer my own observations about TREC Legal Track’s finding of low consistency between document classification decisions made by subject matter experts, who are spoken of as “gold standard” reviewers, and ordinary legal document reviewers. (In TREC Legal Track’s study, ordinary reviewers were 2nd and 3rd year law students. In real life the subject matter expert role is played by in-house or outside counsel, while much of the actual review work is performed by contract or outsource attorneys.)
Generally speaking, quality control processes involve benchmarking against some standard. Mechanical processes can be meaningfully benchmarked by physically sampling output (this is the essence of Six Sigma, in particular). For example, as machine parts come off an assembly line, samples can be selected and measured and the variance between their actual size and target size monitored not only to detect defects but to flag the processes responsible for defects. Human processes can also be benchmarked in a variety of ways. (This is in part the province of ITIL, the “Information Technology Infrastructure Library,” and the basis for the idea of “service level agreements”.) For example, those responsible for a customer service center may track the number of issues handled per hour, the type of issues handled, the number of resolutions or escalations per issue, revenue gained or lost per issue, etc.
Unfortunately, “responsiveness” and “privilege” are not only somewhat subjective in document review; the standards for responsiveness and privilege also vary from case to case. For this reason standards need to be developed “on the fly” for each case, and these standards will by necessity be arbitrary (aka subjective) to some degree even if consistently applied. The good news is that the latest generation of document clustering software incorporates tools for developing consistent document review standards on the fly. Through an iterative feedback loop, the humans educate the machines to look for documents with certain characteristics, while the machines force the humans to refine their conception of responsiveness and privilege to a degree that the machine can reliably model. After enough iterations have passed and the machine has reached some measurable standard of consistency, the humans can step back and let the machine do the rest of the review work. The machine does it more consistently than human reviewers could themselves, and at a much lower cost.
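To make that “measurable standard of consistency” a bit more concrete, here is a minimal sketch in Python of the kind of check that could sit at the end of such a feedback loop: compare the machine’s calls against a human-coded validation sample and only let the machine take over once agreement clears an agreed-upon threshold. The document IDs, labels, and the 95% threshold are my own illustrative assumptions, not any vendor’s actual method.

```python
# Minimal sketch: measure agreement between machine predictions and a
# human-coded validation sample, then decide whether to keep iterating.
# Document IDs, labels, and the 0.95 threshold are illustrative assumptions.

human_coding = {        # attorney decisions on a validation sample
    "DOC-001": "responsive",
    "DOC-002": "non-responsive",
    "DOC-003": "responsive",
    "DOC-004": "non-responsive",
}

machine_coding = {      # the trained engine's calls on the same documents
    "DOC-001": "responsive",
    "DOC-002": "non-responsive",
    "DOC-003": "non-responsive",
    "DOC-004": "non-responsive",
}

def agreement_rate(human: dict, machine: dict) -> float:
    """Fraction of validation documents where machine and human agree."""
    matches = sum(1 for doc_id, label in human.items()
                  if machine.get(doc_id) == label)
    return matches / len(human)

THRESHOLD = 0.95   # consistency standard agreed upon for this matter

rate = agreement_rate(human_coding, machine_coding)
if rate >= THRESHOLD:
    print(f"Agreement {rate:.0%}: machine may take over the remaining review")
else:
    print(f"Agreement {rate:.0%}: refine the standard and re-train")
```

A real workflow would use a statistically meaningful sample size and whatever agreement metric the parties settle on for the matter, but the shape of the check is the same.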
With document review the very idea of defining a “gold standard” for classification is less useful than it sounds. For instance, even if a panel of leading legal scholars could be formed for each eDiscovery matter, the mere fact that someone legitimately may be called a leading scholar doesn’t mean that their views will be consistent with anyone else’s — just well reasoned. But a “gold standard” is not what’s important here. What’s important is that in each case the attorneys responsible for responding to a document request do everything they can to carefully define and consistently enforce reasonable document review standards. This is what the current crop of document clustering applications is intended to do. That is the current model, anyway. I don’t pretend to be able to name the vendors who can or cannot deliver on this promise, although I think this will be the number one question in eDiscovery technology before long.
UPDATE: I discuss TREC’s role in formulating new legal procedural rules for e-discovery in a later blog post, Catch-22 for e-discovery standards?
After drafting a blog post about mass data sampling and classification in the “cloud,” I became curious about the potential for reusing categories developed in eDiscovery sampling and classification projects as “seeds” for later projects. For further insight I turned to Richard Turner, Vice President of Marketing at Content Analyst Company, LLC, a document clustering and review provider for eDiscovery.
Bruce: I wonder to what extent document categories that are created using document clustering software when reviewing documents for eDiscovery can be aggregated across multiple document requests and/or lawsuits within the same company. Can previously developed categories or tags be reused to seed, and thus speed up, document review in other cases?
Richard: Regarding the notion of aggregating document categories, etc., it’s something that’s technically very feasible. And it could greatly speed document review if categories could be used to “seed” new reviews, new cases, etc. Here’s the challenge: we have found that most of the “categories” developed by our clients start out case-specific, and are too granular to be valuable when the next case comes along. It also hasn’t seemed to matter whether categorization was being used by a corporate legal department or by outside counsel – they’re equally specific.
The idea itself had merit, so we tossed it around with our Product Solutions Architects, and they came up with several observations. First of all, the categories people develop are driven by their need to solve a specific eDiscovery challenge, i.e. documents that are responsive to the case at hand. Second, when the next issue or case comes along, they naturally start over again, first by identifying responsive documents and then by using those documents to create categories – any “overlap” is purely coincidental. Finally, to develop categories that were really useful across a variety of issues or cases, they would need to be fairly generic and probably not developed with any specific case in mind.
I think that’s very hard to do for a first or even second-level review – it’s not necessarily a natural progression, since people work backwards from the issues at hand. Privilege review, however, could be a different animal. There are some things in any case that invoke privilege because of the particulars of the case – for example, attorney-client conversations, which are likely to involve different individuals in different litigation matters. There are other things that could logically be generic – company “trade secrets,” for example, would almost always be treated as privileged, as would certain normally redacted items such as PII (personally identifiable information). Privilege review is also a very expensive aspect of eDiscovery, since it involves physical “reads” by highly paid attorneys (not something you can comfortably offshore). Could “cloud seeding” have value for this aspect of eDiscovery? It’s an interesting thought.
I recently had the pleasure of speaking with Nicholas Croce, President of Inference Data, a provider of innovative analytics and review software for eDiscovery, following the company’s recent webinar, De-Mystifying Analytics. During our conversation I discovered that Nick is double-qualified as a legal technology visionary. He not only founded Inference, but has been involved with legal technologies for more than 12 years. Particularly focused on the intersection of technology and the law, Nick was directly involved in setting the standards for technology in the courtroom through working personally with the Federal Judicial Center and the Administrative Office of the US Courts.
I asked to speak with Nick because I wanted to pin him down on what I imagined I heard him say (between the words he actually spoke) during the live webinar he presented in mid-March. The hour-long interview and conversation ranged across a number of topics, but was very specific in terms of where Nick sees the eDiscovery market going.
Sure enough, during our conversation Nick confirmed and further explained that he and his team, which includes CEO Lou Andreozzi, the former LexisNexis NA (North American Legal Markets) Chief Executive Officer, have designed Inference with not one, but two models of advanced eDiscovery analytics and legal review in mind.
As total data volume explodes, choosing the right way to sift out responsive documents becomes urgent
In a nutshell, Inference is designed not only to deliver the current model of eDiscovery software analytics, which I have dubbed “Software Queued Review,” but the next generation analytics model as well, which I am currently calling “Statistically Validated Automated Review” (Nick calls it “auto-coding”).
Bruce: In a webinar you presented recently you explained statistical validation of eDiscovery analytics and offered predictions concerning the evolution of the EDRM (“Electronic Discovery Reference Model”).
I have a few specific questions to ask, but in general what I’d like to cover is:
1) where does Inference fit within the eDiscovery ecosystem,
2) how you think statistically validated discovery will ultimately be used, and
3) how you think the left side of the EDRM diagram (which is where document identification, collection, and preservation are situated) is going to evolve?
Nick: To first give some perspective on the genesis of Inference, it’s important to understand the environment in which it was developed. Prior to founding Inference I was President of DOAR Litigation Consulting. When I started at DOAR in 1997, the company was really more of a hardware company than anything else. I was privileged to be involved in the conversion of courtroom technology from wooden benches to the efficient digital displays of evidence we see today. Within a few years we became the predominant provider of courtroom technology, and it was amazing to see the legal system change and directly benefit from the introduction of technology. As people saw the dramatic benefits and started saying “how do we use it?” we created a consulting arm around eDiscovery, which provided the insight to see that this same type of evolution was needed within the discovery process.
This began around 2004-2005 when we started to see an avalanche of ESI (“Electronically Stored Information”) coming, and George Socha became a much needed voice in the field of eDiscovery. As a businessman I was reading about what was happening, and asking questions, and it seemed black and white to me – with existing technology it had become impossible to review everything because of the tremendous volume of ESI. As a result I started developing new technology for it, to not only manage the discovery of large data collections, but to improve and bring a new level of sophistication to the entire legal discovery process.
Inference was developed to help clients intelligently mine and review data, organize case workflow and strategy, and streamline and accelerate review. It’s the total process. But, today I still have to fight “the short term fix mentality” – lawyers who just care about “how do I get through this stuff faster”, which is the approach of some other providers, and which also relates to the transition I see in the EDRM model – I want to see the whole thing change.
Review is the highest dollar amount and the biggest pain; 70% of a corporation’s legal costs are within eDiscovery. People want to, and need to, speed up review. However, we also need to add intelligence back into the process.
Bruce: What differentiates Inference, where does it fit in?
Nick: I, and Inference, went further than just accelerating linear review and said: it has to be dynamic, not just coding documents as responsive / non-responsive. I know this is going to sound cheesy I guess, but – you have to put “discovery” back into Discovery. You need to be able to quickly find documents during a deposition when a deponent says something like “I never saw a document from Larry about our financial statements”, and not just search for “responsive: yes/no”, “privileged: yes/no.”
Inference was, and is, designed to be dynamic – providing suggestions to reviewers, opportunities to see relationships between documents and document sets not previously perceived, helping to guide attorneys – intuitively. Inference follows standard, accepted methodologies, including Boolean keyword search, field and parametric search, and incorporates all of the tools required for review – redaction, subjective coding, production, etc.
In addition to that overriding principle, we wanted the ability to get data in from anyone, anywhere and at any time. Regulators are requiring incredibly aggressive production timelines; serial litigants re-use the same data set over and over; CIOs are trying to get control over searching data more effectively, including video and audio. Inference is designed to take ownership of data once it leaves the corporation, whether it is structured, semi-structured or unstructured data.
Inside the firewall, the steps on the left side of the EDRM model are being combined. Autonomy, EMC, Clearwell, StoredIQ – the crawling technologies – these companies are within inches of extracting metadata during the crawling process, and may be there already. This is where Inference comes in, since we can ingest this data directly. I call it the disintermediation of processing because at that point there are no additional costs for processing.
In the past someone would use EnCase for preservation, then Applied Discovery for processing (using date ranges and Boolean search terms), and you’d then pay for processing at some cost per custodian and per drive. It used to be over $2,500 per gig; now it’s more like $600 to $1,500 per gig, depending on multi-language use and such.
But once corporations automate the process with crawling and indexing solutions, all of the information goes right into Inference without the intermediary steps, which puts intelligence back in the process. You can ask the system to guide you whenever there’s a particular case, or an issue. If I know the issue is a conversation between Jeff and Michele during a certain date range, I can prime the system with that information, start finding stuff, and start looking at settlement of the dispute. But without automation it can take months to do, at much higher costs.
Inference also offers quality control aspects not previously available: after, say, one month, you can use the software to check review quality, find rogue reviewers, and fix the process. You can also ultimately do auto-coding.
Bruce: I think this is a good segue into the next question: how will analytics ultimately be used in eDiscovery?
Nick: The two most basic components of review are “responsive” and “privileged.” I learned from the public testimony of Verizon’s director of eDiscovery, Patrick Oot, some very strong statistics from a major action they were involved in. The first level document review expense was astounding even before the issues were identified. The total cost of responsive and privileged review was something like $13.6 million.
The truth is that companies only do so many things. Pharma companies aren’t generally talking about real-estate transactions or baseball contracts.
Which brings us to auto-coding… sometimes I try to avoid the term “auto-coding” in favor of “computer aided” or “computer recommended” coding. When someone says “the computer did it,” attorneys tend to shut down, but if someone says “the computer recommended it,” then they pay attention.
Basically, auto-coding is applying issue tags to the whole population based on a sampling of documents. The way we do it is very accurate because it is iterative. It uses statistically sound sampling and recurrent models. It uses the same technology as concept clustering, but you cluster a much smaller percentage. Let’s say you create 10 clusters, tag those, then have the computer tag other documents consistently with the same concepts. Essentially, the computer makes recommendations which are then confirmed by an attorney, and the process is repeated until the necessary accuracy level has been achieved. This enables you to look at only a small percentage of the total document population.
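[To make the cluster-then-propagate approach Nick describes concrete, here is my own short Python sketch using off-the-shelf tools. It illustrates the general pattern only, not Inference’s actual algorithm, and the documents and tags are made up: cluster the documents by their text, have an attorney tag one exemplar per cluster, and apply each cluster’s tag to the rest of that cluster.]

```python
# Illustrative cluster-then-propagate sketch (not Inference's algorithm):
# cluster documents by concept, have attorneys tag one exemplar per cluster,
# then apply each cluster's tag to the remaining documents in that cluster.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

documents = [                       # hypothetical document texts
    "Q3 revenue forecast and financial statements",
    "Draft financial statements for auditor review",
    "Lunch order for the team offsite",
    "Fantasy baseball league standings",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(documents)
kmeans = KMeans(n_clusters=2, random_state=0, n_init=10).fit(vectors)

# After looking at one exemplar from each cluster, attorneys decide how to
# tag that cluster; the mapping below is purely illustrative.
attorney_tags = {0: "responsive", 1: "non-responsive"}

for doc, cluster_id in zip(documents, kmeans.labels_):
    print(f"{attorney_tags[int(cluster_id)]:>15}: {doc}")
```

[A real workflow would add the iterative confirmation step Nick mentions, sampling the propagated tags and refining until the accuracy target is met.]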
Bruce: I spoke with one of the statistical sampling gurus at Navigant Consulting last month, who suggests that software validated by statistical sampling can be more accurate than human reviewers, with fewer errors, for analyzing large quantities of documents.
Nick: It makes sense. Document review is very labor intensive and redundant. Think about the type of documents you’re tagging for issues – it doesn’t even need to be conscious: it is an extremely rote activity on many levels which just lends itself to human error.
Bruce: So let’s talk about what needs to happen before auto-coding becomes accepted, and becomes the rule rather than the exception. In your webinar presentation you danced around this a bit, saying, in effect, that we’re waiting for the right alignment of law firms, cases, and a judge’s decision. In my experience as an attorney, including some background in civil rights cases, the way to go about this is by deliberately seeking out best-case-scenario disputes that will become “test cases.” A party who has done its homework stands up and insists on using statistically validated auto-coding in an influential court (here we probably want the DC Circuit, the Second Circuit, or the Ninth Circuit, I suppose). When those disputes result in a ruling on statistical validity, the law will change and everything else will follow. Do you know of any companies in a position to do this, to set up test cases, and have you discussed it with anyone?
Nick: Test cases: who is going to commit to this — the general counsel? Who do they have to convince? Their outside counsel, who, ultimately, has to be comfortable with the potential outcome. But lawyers are trained to mitigate risk, and for now they see auto-coding or statistical sampling as a risk. I am working with a couple of counsel with scientific and/or mathematical backgrounds who “get” Bayesian methods and the benefits of using them. Once precedents are set that establish the use of statistical analysis as reasonable, it will be a risk not to use these technologies. As with legal research: online research tools were initially considered a risk, and now it can be considered malpractice not to use them.
It can be frustrating for technologists to wait, but that’s how it is. Sometimes when we follow up on new installs of Inference we find that six weeks later users have gone back to simple search instead of the advanced analytics tools. But even those people, after using the advanced features for a few months, discover they can no longer live without them.
Bruce: Would you care to offer a prediction as to when these precedents will be set?
Nick: I really believe it will happen; there’s no ambiguity. I just don’t know if it’s 6 months or a year. But general counsel are taking a more active role, because of the cost of litigation and because of the economy, and they are looking at expenses more closely. At some point there will be a GC and outside counsel combination that will make it happen.
Bruce: After hearing my statistician friend from Navigant deliver a presentation on statistical sampling at LegalTech last month I found myself wondering why parties requesting documents wouldn’t want to insist that statistically validated coding be used by parties producing documents for the simple reason that this improves accuracy. What do you think?
Nick: Requesting parties are never going to say “I trust you.”
Bruce: But like they do now, the parties will still have to be able to discuss and will be expected to reach agreement about the search methods being used, right?
Nick: You can agree to the rules, but the producing party can choose a strategy that will be used to manage their own workflow – for example today they can do it linearly, offshore, or using analytics. The requesting party will leave the burden on the producing party.
Bruce: If they are only concerned with jointly defining responsiveness, in order to get a better-culled set of documents — that helps both sides?
Nick: That would be down the road… at that point my vision gets very cloudy, maybe opposing counsel gets access to concept searches – and they can negotiate over the concepts to be produced.
Bruce: There are many approaches to eDiscovery analytics. Will there have to be separate precedents set for each mathematical method used by analytics vendors, or even for each vendor-provided analytical solution?
Nick: I’d love to have Inference be the first case. But I don’t know how important the specific algorithm or methodology is going to be – that is a judicial issue. Right now we’re waiting for the perfect judge and the perfect case – so I’ll hope it’s Inference, rather than “generic” as to which analytics are used. I hope there’s a vendor shakeout – for example, ontology-based analytics systems demo nicely, but “raptor” renders “birds,” which is non-responsive, while “Raptor” is a critical responsive term in the Enron case.
Bruce: Perhaps vendors and other major stakeholders in the use of analytics in eDiscovery, for example, the National Archives, should be tracking ongoing discovery disputes and be prepared to file amicus briefs when possible to help support the development of good precedents.
This blog post is the first of two on the topic of advanced eDiscovery analytics models. My goal is to make the point that lawyers don’t trust or use analytics to the degree that they should, according to scientifically sound conventions commonly employed by other professions, and to speculate about how this is going to change.
In this first post I’ll explain why we arrived where we are today by describing the progression of analytics across three generations of Discovery technology.
The first generation, which I call “The Photocopier Era,” relies on pre-analytics processes that are extremely labor intensive. Some lawyers are still stuck in this era.
The second generation is the current reigning model of analytics and review. I call it “software queued review.” Software queued review intelligently sorts and displays documents to enable attorneys to perform document review more efficiently. At the same time, software queued review allows – or should I say, requires? – attorneys to do more manual labor than is needed to ensure review quality or to ensure that attorneys take personal responsibility for the discovery process.
The third, upcoming generation of analytics is only beginning to provoke widespread discussion in the legal community. I’ll call it “statistically validated automated review.” In it software is used to perform the majority of document review work, leaving attorneys to do the minimum amount of review work. In fact, certain advanced analytics and workflow software solutions can already be calibrated, by attorney reviewers, to be more accurate than human reviewers typically are capable of when reviewing vast quantities of documents.
Because it will radically reduce the amount of hands-on review, the third generation model is currently perceived by many lawyers as a risky break from legal tradition. But when this model is deployed outside of the legal profession it is not considered a giant step, technologically or conceptually. It is merely an application of scientifically grounded business processes.
In subsequent blog posts, including the second post in this series, I will look at what is being done to overcome the legal profession’s reluctance to adopt this more accurate, less expensive eDiscovery model.
The Pre-analytics Generation: Back to the Photocopier Era
Ka-chunk! What a way to spend a week or more.
Please return with me now to olden times of not-so-long-ago, the days before eDiscovery software. (Although even today, for smaller cases and cases that somehow don’t involve electronically stored information, the Photocopier Era is alive and well.)
In the beginning there were paper documents, usually stored within folders, file boxes, and file cabinets. Besides paper, staples, clips, folders, and boxes, photocopiers were the key document handling technology, with ever improving speed, sheet feeding, and collation options.
Gathering documents: When a lawsuit reached the discovery stage, clients following the instructions of their attorneys physically gathered their papers together. Photocopies were made. Some degree of effort was (usually) made to preserve “metadata” which in this era meant identifying where the pieces of paper had been stored, and how they had been labeled while stored.
Assessing documents: In this era every “document” was a physical sheet of paper, or multiple sheets clipped together in some manner. Each page was individually read by legal personnel (attorneys or paralegals supervised by attorneys) and sorted for responsiveness and privilege. Responsive, non-privileged documents were compiled into a complete set and then, individual page after individual page, each was numbered (more like impaled) with a hand-held, mechanical, auto-incrementing ink stamp (I can hear the “ka-chunk” of the Bates Stamp now… ah, those were the days).
Privileged documents were set to one side, and summarized in a typed list called a privilege log. Some documents containing privileged information were “redacted” using black markers (there was an art to doing this in a way so that the words couldn’t be read anyway – an art which even the FBI on one occasion in my experience failed to master).
Finally, the completed document set was photocopied, boxed, and delivered to opposing counsel, who in turn reviewed each sheet of paper, page by page.
The Present Generation of Analytics: Software Queued Review
Fast forward to today, the era of eDiscovery and software-queued review. In the present generation software is used to streamline, and thus reduce, the cost of reviewing documents for responsiveness and privilege.
Gathering documents: Nowadays, still relying on instructions from their attorneys, clients designate likely sources of responsive documents from a variety of electronic sources, including email, databases, document repositories, etc. Other media such as printed documents and audio recordings may also be designated when indicated.
After appropriate conversions are made (for example, laptop hard drives may need to be transferred, printed documents may need to be OCR scanned, audio recordings may need to be transcribed, adapters for certain types of data sources may need to be bought or built) all designated sources are ingested into a system which indexes the data, including all metadata, for review.
Some organizations already possess aggressive records management / email management solutions which provide the equivalent of real time ingestion and indexing of significant portions of their documents. Such systems are particularly valuable in a legal context because they enable more meaningful early case assessment (sometimes called “early data assessment”).
Assessing documents: In the current era attorneys can use tools such as Inference which use a variety of analytical methods and workflow schemas to streamline and thus speed up review. (Another such tool is Clustify, which I described in some detail in a previous blog entry.) Such advanced tools typically combine document analytics and summarization with document clustering, tagging, and support for human reviewer workflows. In other words, tools like Inference start with a jumble of all of the documents gathered from a client, documents which most likely contain a broad spectrum of pertinent and random, off-topic information, and sort them into neat, easy to handle, virtual piles of documents arranged by topic. The beauty of such systems is that all of the virtual piles can be displayed — and the documents within them browsed and marked — from one screen, and any number of people in any number of geographic locations can share the same documents organized the same way. Software can also help the people managing the discovery process to assign groups of documents to particular review attorneys, and help them track reviewer progress and accuracy in marking documents as responsive or not, and privileged or not.
The key benefit of this generation of analytics is speed and cost savings. Similar documents, including documents that contain similar ideas as well as exact duplicates and partial duplicates of documents, can be quickly identified and grouped together. When a group contains similar documents and all of the documents in that group are assigned to the same person or persons, the reviewers can work more quickly because they know more of what to expect as they see each new document. Studies have shown that review can be performed perhaps 70-80% faster, and thus at a fraction of the cost, using these mechanisms.
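As a simple illustration of one piece of this (my own sketch, not a description of any vendor’s implementation), exact duplicates can be grouped by hashing normalized text so that each group of identical documents is routed to a single reviewer. The documents and routing rule below are hypothetical.

```python
# Minimal sketch: group exact duplicates by hashing normalized text so that
# each group can be routed to one reviewer. Documents are hypothetical.
import hashlib
from collections import defaultdict

documents = {
    "DOC-001": "Please review the attached forecast.",
    "DOC-002": "please   review the attached forecast.",   # same content
    "DOC-003": "Minutes from the June board meeting.",
}

def content_hash(text: str) -> str:
    """Hash of case- and whitespace-normalized text."""
    normalized = " ".join(text.lower().split())
    return hashlib.sha1(normalized.encode("utf-8")).hexdigest()

groups = defaultdict(list)
for doc_id, text in documents.items():
    groups[content_hash(text)].append(doc_id)

for n, doc_ids in enumerate(groups.values(), start=1):
    print(f"Group {n} -> reviewer {n}: {doc_ids}")
```

Detecting documents that are merely similar, rather than identical, takes more sophisticated comparisons (concept clustering, near-duplicate scoring), but the payoff for the reviewer is the same.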
Once review is complete, documents can be automatically prepared for transfer to opposing counsel, and privilege logs can be automatically generated. Opposing counsel can be sent electronic copies of responsive, non-privileged documents, which they in turn can review using analytical tools. (Inference is among the tools that are sometimes used by attorneys receiving such document sets, Nick tells me.)
The Coming Generation of Analytics: Statistically Validated Automated Review
The next software analytics model will be a giant leap forward when it is adopted. In this model software analytics intelligence is calibrated by human intelligence to automatically and definitively categorize the majority of documents collected as responsive or not, and as privileged or not, without document-by-document review by humans. In actuality, some of the analytical engines already in existence – such as Inference – can be “trained” through a relatively brief iterative process to be more accurate in making content-based distinctions than human reviewers can.
To adopt this mechanism as standard, and preferred, in eDiscovery would be merely to apply the same best-practice statistical sampling standards currently relied upon to safeguard quality in life-or-death situations such as product manufacturing (think cars and airplanes) and medicine (think pharmaceuticals). The higher level of efficiency and accuracy that this represents is well within the scope of existing software. But while statistically validated automated review has been widely alluded to in legal technology circles, so far as I know it has not been used as a default by anyone when responding to document requests. Not yet. Reasons for this will be discussed in subsequent posts, including the next one.
Gathering documents: The Statistically Validated Automated Review model relies on document designation, ingestion, and indexing in much the same manner as described above with respect to Software Queued Review.
Assessing documents: In this model, a statistically representative sample of documents is first extracted from the collected set. Human reviewers study the documents in this sample, then agree upon how to code them as responsive / non-responsive and privileged / non-privileged. This coded sample becomes the “seed” for the analytics engine. Using pattern matching algorithms, the analytics engine makes a first attempt to code more documents from the collected set the way the human coders did, to match the coding from the seed sample. But because the analytics engine won’t have learned enough from a single sample to become highly accurate, another sample is taken. The human coders correct miscoding by the analytics engine, and their corrections are re-seeded to the engine. The process repeats until the level of error generated by the analytics engine is extremely low by scientific and industrial standards, and lower than the error rate human reviewers typically sustain when coding large volumes of documents.
By way of comparison this assessment process resembles the functioning of the current generation of email spam filters, which employ Bayesian mathematics and corrections by human readers (“spam” / “not spam”) that teach the filters to make better and better choices.
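For readers who like to see the moving parts, here is a hedged sketch of that seed-and-correct loop in Python, using a naive Bayes text classifier as a stand-in for a commercial analytics engine. The documents, the accuracy target, and the way human corrections are simulated are all illustrative assumptions on my part.

```python
# Hedged sketch of the iterative seed-and-correct loop described above.
# A naive Bayes classifier stands in for a vendor's analytics engine;
# documents, labels, and the accuracy target are illustrative.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Hypothetical collected set as (text, attorney's call). In practice the
# attorney's call is only known once a human actually reviews the document.
collection = [
    ("draft merger agreement attached", "responsive"),
    ("revised merger term sheet for review", "responsive"),
    ("financial statements for the merger target", "responsive"),
    ("holiday party rsvp", "non-responsive"),
    ("fantasy football picks", "non-responsive"),
    ("cafeteria menu for next week", "non-responsive"),
    ("merger due diligence checklist", "responsive"),
    ("parking garage closed friday", "non-responsive"),
]

texts = [text for text, _ in collection]
labels = [label for _, label in collection]
X = CountVectorizer().fit_transform(texts)

seed_idx = [0, 3]          # the first human-coded seed sample
TARGET_ACCURACY = 0.95     # illustrative stopping rule

while True:
    engine = MultinomialNB().fit(X[seed_idx], [labels[i] for i in seed_idx])
    remaining = [i for i in range(len(collection)) if i not in seed_idx]
    if not remaining:
        break
    predictions = engine.predict(X[remaining])
    # Human reviewers check a sample of the engine's calls and correct any
    # mistakes; here the stored labels stand in for those corrections.
    correct = sum(pred == labels[i] for pred, i in zip(predictions, remaining))
    accuracy = correct / len(remaining)
    print(f"seed size {len(seed_idx)}: accuracy {accuracy:.0%}")
    if accuracy >= TARGET_ACCURACY:
        break
    seed_idx += remaining[:2]   # re-seed with the newly corrected documents
```

In a real matter the stopping rule would be a statistically defensible sampling plan rather than a simple accuracy cutoff, but the iterative shape (seed, predict, correct, re-seed) is the same.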
After the Next Generation: Real Time Automated Review
It’s not another generation of analytics, but another shift is gradually occurring that will have a significant impact on eDiscovery. The day is approaching when virtually all information that people touch while working will be available and indexed in real time. From the perspective of analytics engines it is “pre-ingested” information. This will largely negate the gathering phase still common in previous generations. Vendors such as Kazeon, Autonomy, CA, Symantec, and others are already on the verge – and in some cases, perhaps, past the verge – of making this a possibility for their customers.
(Full disclosure of possible personal bias: I’m working with a startup with a replication engine that can in real time securely duplicate documents’ full content, plus metadata information about documents, as they are created on out-of-network devices, like laptops, to document management engines….)
The era of Real Time Automated Review will be both exciting and alarming. It will be exciting because instant access to all relevant documents should mean that more lawsuits settle on the facts, in perhaps weeks, after a conflict erupts (see early case assessment, above), rather than waiting for the conclusion of a long, and sometimes murky, discovery process. It’s alarming because of the Orwellian “Big Brother” implications of systems that enable others to know every detail of the information you touch the moment you touch it, and at any time thereafter.
In my next post you’ll hear about my conversation with Nick Croce, including how Inference has prepared for the coming generation of automated discovery and where Nick thinks things are going next.
One of the most interesting technical issues being discussed at LegalTech last week was the question of how to classify, analyze, and review “unstructured” information like the content of emails, text documents, and presentations.
A familiar, simple sounding answer leaps immediately to mind. Why not just hook all of these documents up to a search engine “crawler,” index all of the words in all the documents, and then run ordinary key-word searches on the whole set? It’s exactly like conducting Google searches, except instead of spanning a big chunk of the entire internet we only have to cover a few terabytes of corporate information – right?
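And the simple-sounding part really is simple. A toy inverted index and an AND-style key-word search can be built in a few lines of Python; the documents and search terms below are made up for illustration.

```python
# Toy illustration of "crawl, index, key-word search": build an inverted
# index over a few hypothetical documents and run a simple AND query.
from collections import defaultdict

documents = {
    "mail-001": "Raptor entity financial statements attached",
    "mail-002": "Lunch at noon?",
    "memo-001": "Financial statements need restating before filing",
}

index = defaultdict(set)            # word -> IDs of documents containing it
for doc_id, text in documents.items():
    for word in text.lower().split():
        index[word.strip("?.,")].add(doc_id)

def keyword_search(*terms: str) -> set:
    """Return documents containing every search term (a simple AND query)."""
    hits = [index.get(term.lower(), set()) for term in terms]
    return set.intersection(*hits) if hits else set()

print(keyword_search("financial", "statements"))   # mail-001 and memo-001
```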
The bad news is that there are a number of wrinkles in the territory surrounding eDiscovery that render a pure Google-style search model less than perfect. The good news is that a variety of vendors offer well conceived solutions meant to take these wrinkles into account. The remainder of this post will introduce some of the wrinkles; later posts will be concerned with vendor solutions.
What’s different about eDiscovery from a Search perspective?
In an eDiscovery context the ideal for a classification and search solution is to allow searchers to identify ALL documents which meet their criteria, not just “the ten most relevant documents” or “at least one document that answers my question,” as is common with Google searches. Imagine running a Google search which returned 20,000 responses. You think this number is too big — it’s overinclusive — but you don’t want to risk missing any relevant documents. Then imagine getting a bill for paying a team of attorneys, at rates in excess of $100 / hour, to read all of those documents in order to determine whether, in addition to containing the key words you selected, the documents are actually relevant to the particular lawsuit they are concerned with.
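To put rough numbers on that bill: assuming a reviewer reads on the order of 50 documents an hour (a made-up but not unreasonable rate), 20,000 documents works out to roughly 400 attorney-hours, or about $40,000 at $100 an hour, for that single overinclusive search.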
Another common instance of overinclusiveness arises because unstructured information repositories such as email accounts frequently contain multiple versions of the same chunks of content. Many documents will repeat some content from earlier versions and add some new content. For example, when emails are replied to, forwarded, or sent to multiple recipients, content already in the information pool is duplicated, and new information (email headers or comments) is added. Using conventional search, all versions of every document will fall within the search results and must be manually reviewed, at great cost, to understand what is important and what is merely redundant.
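One generic way to catch this kind of redundancy (a common technique, not a claim about any particular vendor’s product) is to compare overlapping word “shingles” between documents; a high overlap score flags a forwarded or quoted copy as a near-duplicate of its original. A minimal Python sketch:

```python
# Sketch of near-duplicate detection via word shingles and Jaccard overlap.
# The emails below are invented; a high score flags a largely redundant copy.
def shingles(text: str, size: int = 3) -> set:
    """Set of overlapping word n-grams ("shingles") for a document."""
    words = text.lower().split()
    return {tuple(words[i:i + size]) for i in range(len(words) - size + 1)}

def jaccard(a: set, b: set) -> float:
    """Overlap between two shingle sets (1.0 means identical)."""
    return len(a & b) / len(a | b) if a | b else 0.0

original = "Please send the revised forecast to accounting by Friday."
forwarded = ("FW: budget question. Please send the revised forecast to "
             "accounting by Friday.")

score = jaccard(shingles(original), shingles(forwarded))
print(f"similarity: {score:.2f}")   # roughly 0.7 for this pair
```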
Another potential problem involves choosing key words correctly. One could easily choose key words that are logically related to the topic at hand and return a large number of relevant documents, yet still miss many documents, or the most important documents, in the document pool (the search results are underinclusive). What if, as in the Enron case, “code” words were adopted by perpetrators of a scam in an effort to cover their tracks? What if some number of documents are written in a language the searchers don’t speak, or use words or terms not familiar to the searchers?
Solutions to these problems that various vendors have devised include semantic clustering, multi-variate analysis of word positioning and frequency, key words plus associative groupings, near de-duplication processes, and more. Each comes with both strengths and weaknesses, of course — to be discussed in future posts.