Predictive coding explained
By Dr Jan Puzicha
Today’s knowledge-driven business world demands unprecedented access to information and myriad methods of near-real-time communication. As a result, information volumes and diversity have skyrocketed. While this rapid growth of information has allowed knowledge workers to thrive and become far more productive and competitive, it has also produced an overwhelming increase in the volume of electronically stored information (ESI) subject to litigation, regulatory scrutiny and internal investigation — with a wrenching impact on how organisations must manage legal risk. The acceleration of the speed of business is matched only by the speed at which legal teams must now respond to legal proceedings.
Linear document review — where individual reviewers manually review and ‘code’ documents ordered by date, keyword, custodian or some other simple, non-topical criterion — has been the de facto standard within the legal industry for decades. However, linear review has repeatedly been shown to be both inaccurate and costly. And in a business environment where the sea of information — and therefore of potentially relevant ESI — is ever expanding, technology-enhanced methods for increasing the efficiency, consistency and accuracy of review are becoming an ever-more important piece of the e-discovery puzzle.
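The core idea behind predictive coding can be sketched in a few lines: reviewers code a small ‘seed set’ of documents as relevant or not relevant, a statistical model learns from those coding decisions, and the remaining population is ranked by predicted relevance so the likely-relevant documents surface first. The toy corpus, labels and naive Bayes scoring below are illustrative assumptions, not Recommind’s actual method — production systems use far richer features and iterative training rounds.

```python
# Minimal sketch of a predictive-coding-style relevance ranker.
# All documents and labels here are hypothetical illustration data.
import math
from collections import Counter

def tokenize(text):
    return text.lower().split()

def train(seed_set):
    """seed_set: list of (text, label), label 'relevant' or 'not_relevant'."""
    counts = {"relevant": Counter(), "not_relevant": Counter()}
    doc_counts = Counter()
    for text, label in seed_set:
        counts[label].update(tokenize(text))
        doc_counts[label] += 1
    vocab = set(counts["relevant"]) | set(counts["not_relevant"])
    return counts, doc_counts, vocab

def score(text, counts, doc_counts, vocab):
    """Log-odds of relevance (naive Bayes with add-one smoothing)."""
    total = sum(doc_counts.values())
    log_odds = (math.log(doc_counts["relevant"] / total)
                - math.log(doc_counts["not_relevant"] / total))
    for word in tokenize(text):
        if word not in vocab:
            continue  # ignore words the seed set never saw
        p_rel = (counts["relevant"][word] + 1) / (sum(counts["relevant"].values()) + len(vocab))
        p_not = (counts["not_relevant"][word] + 1) / (sum(counts["not_relevant"].values()) + len(vocab))
        log_odds += math.log(p_rel) - math.log(p_not)
    return log_odds

# Step 1: reviewers code a small seed set (hypothetical data).
seed = [
    ("merger agreement draft attached", "relevant"),
    ("board approved the merger terms", "relevant"),
    ("lunch order for friday", "not_relevant"),
    ("office parking reminder", "not_relevant"),
]
counts, doc_counts, vocab = train(seed)

# Step 2: rank the unreviewed population by predicted relevance.
unreviewed = ["revised merger agreement terms", "parking lot closed friday"]
ranked = sorted(unreviewed,
                key=lambda d: score(d, counts, doc_counts, vocab),
                reverse=True)
print(ranked[0])  # the merger-related document ranks first
```

In practice this loop is iterated: reviewers code the model’s top-ranked (or most uncertain) documents, the model retrains, and the ranking sharpens — which is where the efficiency gains over linear review come from.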
Courts have begun to push litigants to accelerate the long-overdue paradigm shift from linear manual review to computer-expedited approaches, including predictive coding. Judge Paul Grimm framed this shift to computer-expedited review perfectly in a recent webinar…