Augmented Legality
Blogs | April 6, 2011
5 minute read

V-discovery: Litigating in Augmented Reality

Mo' technology, mo' problems.

Advances in digital and computing technologies can make litigation, like anything else, more effective and efficient. Lawyers have so many more tools at their disposal for crafting and communicating persuasive arguments than they did 10, or even five years ago.

But all this technology is also giving lawyers a whole lot more to do. Generally, any information that is reasonably likely to reveal evidence that could be admissible in court is fair game for discovery during litigation. Increasingly, the digital data stored and exchanged by the people and companies involved in lawsuits are becoming important to the issues being fought over. That means that lawyers and their staff often have to gather "electronically stored information" (ESI) during the discovery phase, in addition to the paper documents and testimony--the phenomenon we call "e-discovery." Therefore, lawyers end up with even more data to sift through in order to figure out what happened than they used to. A lot more.

"Perhaps no case could be a more monumental example of the reality of modern e-discovery," says a recent article in the ABA Journal, "than the ongoing Viacom copyright infringement lawsuit against YouTube filed back in 2008. In that dispute, the judge ordered that 12 terabytes of data be turned over"-- more than the printed equivalent of the entire Library of Congress.

"Experiences like these," the article continues, "have left law firms and in-house attorneys scrambling to make sense of the new risks associated with the seemingly endless data produced by emerging technologies like cloud computing and social media."

How will law firms and litigants cope, then, when augmented reality becomes mainstream, and digital technology leaps off the computer monitor to overlay the physical world? At least three potential problems come to mind.

The first problem will be one of volume. Companies such as Vuzix and Gotham Eyewear are already working on making AR eyewear available and affordable to the general public. If a site like YouTube can amass enough video footage to make the prospect of reviewing it all seem (quite rightly) ridiculous, what happens when we're all wearing AR eyewear that collects and creates (presumably with the option to record) digital data about more or less everything we look at? Will paralegals be sifting through days' and weeks' worth of mundane, first-person audio and video to find the relevant portions of a litigant's experiences? As more of our reading takes place on digital devices, we're already creating troves of data about our activities in browser caches and RAM. But how much larger will our digital footprints be when everyday physical objects become opportunities (even necessities) for encountering and creating geotagged data?
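
To put rough numbers on that question, here's a back-of-the-envelope sketch. Every figure in it is an assumption made up for illustration--nobody yet knows what AR eyewear will actually record:

```python
# Back-of-the-envelope estimate of daily data from always-on AR eyewear.
# Every figure here is an illustrative assumption, not a measured value.
VIDEO_MBPS = 4.0      # assumed compressed first-person video bitrate (megabits/sec)
AUDIO_MBPS = 0.13     # assumed audio bitrate
SENSOR_MBPS = 0.05    # assumed GPS/orientation/geotag metadata stream

WAKING_HOURS = 16
total_megabits = (VIDEO_MBPS + AUDIO_MBPS + SENSOR_MBPS) * WAKING_HOURS * 3600
gb_per_day = total_megabits / 8 / 1000   # megabits -> gigabytes (decimal)

print(f"~{gb_per_day:.0f} GB per wearer per day")                 # ~30 GB
print(f"~{gb_per_day * 365 / 1000:.0f} TB per wearer per year")   # ~11 TB
# In other words, a single wearer's year of footage approaches the
# 12 terabytes at issue in the entire Viacom v. YouTube production.
```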

The second, and closely related, problem will be locating and collecting all of this data. It's hard enough nowadays to locate data stored in "the cloud," which actually means some remote server farm nestled somewhere in the distant hills. Presumably, that data will be stored in even more diffuse ways in an AR world. Whether or not my eyewear will require a centrally broadcast "signal" or "network" in order to function, it will certainly be interacting with any number of signals sent to and from objects that I physically encounter, leaving digital traces of my physical presence behind.
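
What might one of those traces look like? Here's a purely hypothetical sketch of the kind of record a single eyewear-to-object exchange could leave behind; the field names and schema are invented for illustration:

```python
# A hypothetical trace record that one exchange between AR eyewear and a
# tagged physical object might leave behind. Field names are invented for
# illustration; a real system's schema would surely differ.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ProximityTrace:
    device_id: str     # the wearer's eyewear
    object_id: str     # the tagged object's beacon
    timestamp: datetime
    latitude: float    # where the encounter happened
    longitude: float
    payload_uri: str   # where the exchanged data itself ended up

# One glance at a tagged storefront could leave a record like this on the
# store's server, the AR platform's server, or both:
trace = ProximityTrace(
    device_id="glasses-7f3a",
    object_id="storefront-beacon-221",
    timestamp=datetime(2011, 4, 6, 14, 32, 10),
    latitude=42.3314,
    longitude=-83.0458,
    payload_uri="https://ar-platform.example/payloads/0001",
)
```

Collecting discovery in that world means chasing records like these across every server that happened to witness the encounter.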

We're already halfway there. Consider Color, the social media darling of the moment, which gives you access to other people's photo streams merely by coming into physical proximity to those people. Or Foursquare and other check-in sites, which offer you discounts at businesses near your current, physical location. Once transactions like these become the centerpiece of a lawsuit, will lawyers need to pinpoint where particular people were when they accessed these apps?
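
To make that discovery question concrete, here's a toy sketch of what the pinpointing might look like, assuming (hypothetically) that check-in records with timestamps and coordinates can be subpoenaed into a simple table:

```python
# A toy sketch of the pinpointing question above: given a (hypothetical)
# table of subpoenaed check-in records, where was this user when she
# accessed the app during the disputed hour? Schema and data are invented.
from datetime import datetime

checkins = [
    {"user": "alice", "ts": datetime(2011, 3, 14, 12, 5),
     "lat": 42.331, "lon": -83.046, "venue": "Cafe A"},
    {"user": "alice", "ts": datetime(2011, 3, 14, 18, 40),
     "lat": 42.355, "lon": -83.071, "venue": "Store B"},
]

window = (datetime(2011, 3, 14, 12, 0), datetime(2011, 3, 14, 13, 0))
hits = [c for c in checkins
        if c["user"] == "alice" and window[0] <= c["ts"] <= window[1]]
for c in hits:
    print(f'{c["ts"]}: {c["venue"]} at ({c["lat"]}, {c["lon"]})')
```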

If it becomes relevant in litigation to retrace someone's steps through an augmented reality, how would one do it? Will it be necessary to actually visit those locations? Or might we all be equipped with personal "black boxes" that keep track of our digital experiences--probably all too often for the purpose of uploading them to lifelogs, or whatever social media has by then become?

A third problem is one of triangulation. Today, ESI may take various forms, but it all has one thing in common: it's viewable on a two-dimensional screen. That won't be universally true for much longer. How people perceive augmented reality will depend first on where they are and how they're looking at their physical surroundings. It may not be possible to interpret digital data stored in a server somewhere without knowing exactly where the individual(s) viewing it were located, the direction they were facing, what other data they had open, and so on.
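
One way to picture the problem: unless each piece of AR content is stored alongside the viewer's "pose" at the moment of viewing, it may be unreconstructable later. The record format below is my own invention, not any real platform's schema:

```python
# A minimal, invented record format illustrating the "pose" metadata that
# AR evidence might need to carry before anyone can say what a viewer saw.
from dataclasses import dataclass, field

@dataclass
class ViewingEvent:
    user_id: str
    lat: float
    lon: float
    heading_deg: float                 # compass direction the viewer faced
    pitch_deg: float                   # looking up or down
    platform: str                      # which AR service rendered the scene
    active_layers: list = field(default_factory=list)  # other data open

# Without fields like these, identical server-side content could have
# looked entirely different to two different viewers.
event = ViewingEvent("user-42", 42.3314, -83.0458,
                     heading_deg=270.0, pitch_deg=-5.0,
                     platform="hypothetical-ar-service",
                     active_layers=["ads", "navigation"])
```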

As only one example, suppose there's a trademark infringement lawsuit in which the plaintiff alleges that a virtual version of his trademark was geotagged onto the brick-and-mortar location of his competitor's store, leading confused customers to patronize his competitor instead of his own business. (This is a fairly simple extrapolation of all the lawsuits being filed nowadays over sponsored ads in search engine results.) That plaintiff's claim will rise or fall in part based on how that geotag actually looked to customers. That, in turn, may depend on where the potential customers were when they looked at the logo. Was it visible through the trees, or in the sun? On which AR platforms was it viewable (assuming that there will be multiple service providers)? Did different brands of eyewear render it in the same way? Was it a static display, or did it sense and orient itself toward each individual viewer?
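
Even the threshold question--was the disputed geotag in front of a given customer at all?--turns into geometry. Here's a deliberately simplified sketch, with flat-earth math, invented coordinates, and an assumed 90-degree field of view:

```python
# Simplified geometry for the threshold question: was the disputed geotag
# within a given customer's field of view at all? Flat-earth approximation,
# invented coordinates, assumed 90-degree horizontal field of view.
import math

def bearing_deg(viewer, target):
    """Approximate compass bearing from viewer to target ((lat, lon) pairs)."""
    dlat = target[0] - viewer[0]
    dlon = (target[1] - viewer[1]) * math.cos(math.radians(viewer[0]))
    return math.degrees(math.atan2(dlon, dlat)) % 360

def in_view(viewer_pos, viewer_heading, target_pos, fov_deg=90.0):
    """True if the target falls within the viewer's horizontal field of view."""
    diff = abs(bearing_deg(viewer_pos, target_pos) - viewer_heading) % 360
    return min(diff, 360 - diff) <= fov_deg / 2

geotag = (42.3320, -83.0450)            # alleged position of the virtual mark
customer_pos, customer_heading = (42.3314, -83.0458), 40.0

print(in_view(customer_pos, customer_heading, geotag))  # True: it was in front of her
```

And that check says nothing about trees, sunlight, or how each brand of eyewear rendered the mark--every one of which layers more complexity on top.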

These are just a few of the potential issues; rest assured, there will be others. It all comes with a silver lining, however. Just a few minutes contemplating the complexities of virtual (or "v-") discovery makes the current fuss over e-discovery seem not so bad after all.