Publications | August 17, 2023

Brian Wassom Authors Crain’s Detroit Business Op-Ed Article on Artificial Intelligence

Warner Norcross + Judd LLP Partner Brian D. Wassom’s op-ed article, “The sobering reality of what AI actually is,” was published online in Crain’s Detroit Business. The article argues that the information supplied by AI-powered tools is better understood as indifferent to truth than as occasional “hallucination.”

Crain’s Detroit Business subscribers can read Wassom’s article on the publication’s website. The full text is also reproduced below.

Commentary: The sobering reality of what AI actually is
By Brian D. Wassom

Society is currently somewhere around the fourth-date phase of its relationship with generative artificial intelligence: still infatuated, but beginning to sober up enough to acknowledge that the object of our affections may not prove to be entirely worthy of our trust. All but the most ardent of enthusiasts have realized we can’t take for granted that AI applications tell us the truth.

ChatGPT unblushingly delivers fact and fiction with equal gusto in response to our queries. The prevailing description of these flights of fancy is “hallucination” — an anthropomorphizing term that implies the app truly desires to be honest with us but is inadvertently held back by an occasional schizophrenic break with reality.

This is dangerously misleading vocabulary. Of course, to speak of AI as if it were something more personal than the complex interaction of 1s and 0s is our first mistake. AI programs act intelligently, if by that we mean “able to adapt.” The very thing that sets AI apart from garden-variety software is its ability to adapt in response to accumulated feedback. We call these processes “machine learning” or “deep learning,” and the more advanced AI designs are known as “neural nets” — all terms patterned after the way human minds operate.
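To make “adapt in response to accumulated feedback” concrete, here is a minimal sketch in Python; the data and numbers are invented for illustration. The program nudges a single number to shrink its measured error. It adapts, but nothing in it understands what that number means.

```python
# A minimal sketch of "machine learning": a program adjusting a number
# in response to feedback (error), with no understanding of what the
# number represents. All values here are invented for illustration.

# Toy data: inputs paired with the outputs we want the model to match.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

weight = 0.0          # the single adjustable parameter
learning_rate = 0.05  # how strongly feedback nudges the parameter

for step in range(200):
    for x, target in data:
        prediction = weight * x              # the model's output
        error = prediction - target          # feedback: how wrong it was
        weight -= learning_rate * error * x  # adjust to reduce the error

print(round(weight, 3))  # converges toward 2.0; the program never "knows" why
```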

But these terms are more analogical than accurate. AI is not — cannot be — “intelligent” in the same way a human brain is, because software lacks a mind. Merriam-Webster more thoroughly defines “intelligence” as “the skilled use of reason” and “the ability to apply knowledge ... or to think abstractly.” Machines can do none of this. If an AI program generates data that happens to correspond to reality, that is the happy result of its human coder’s effort. These programmers are like civil engineers who design a complex array of subterranean pipes precisely enough to ensure that water flows only to the desired location. But an AI program does not know it is telling the truth any more than a pipe knows it is delivering water correctly, because it has no mind with which to ascertain what reality is.

At least AI cannot lie to us. A liar is one who recognizes the truth and chooses to deceive the listener into believing something different. AI cannot “know” the truth, so it cannot lie.

But neither can it “hallucinate.” To hallucinate is still to perceive a reality, just not the actual one. So, if we’re going to continue speaking of AI in anthropomorphized terms, we should at least be more precise. Lulling ourselves into a mistaken understanding of how the software functions sets us up to misunderstand both its limitations and its true potential utilities. To that end, I propose that the most accurate way to describe the output of generative AI programs is — to use the abbreviation — “BS.”

I’m serious. In a 2005 book that became his most popular publication, philosopher Harry G. Frankfurt set out to define this oft-used but ill-understood term. Titled On Bullsh*t, the book posits that “the essence of [BS] is not that it is false but that it is phony.” Unlike a liar, who is actively “attempting to lead us away from a correct apprehension of reality,” the BS artist “does not care whether the things he says describe reality correctly. He just picks them out, or makes them up, to suit his purpose.”

That describes ChatGPT’s conversations with us. The only difference between a chatbot and a human BS artist is that the person does it intentionally. AI has no intention. But that just cements the description. As Frankfurt says, “[BS] is unavoidable whenever circumstances require someone to talk without knowing what he is talking about.” Since ChatGPT cannot know what it is talking about, it cannot say anything other than BS.
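A minimal sketch of how a generative language model picks its next word may make the point concrete. The probabilities below are invented for illustration; a real model derives them from billions of learned parameters. Notice what the code has no concept of: whether the chosen word is true.

```python
import random

# A made-up sketch of next-token selection in a language model.
# The probabilities are invented; a real model computes them from
# learned parameters. Note what is absent: any check of truth.

context = "The capital of Australia is"
next_word_probs = {
    "Canberra":  0.55,  # statistically likely, and happens to be true
    "Sydney":    0.35,  # statistically plausible, but false
    "Melbourne": 0.10,  # also plausible, also false
}

words = list(next_word_probs)
weights = list(next_word_probs.values())

# The model "picks them out ... to suit his purpose" (Frankfurt):
# it samples what is probable, not what is verified.
choice = random.choices(words, weights=weights, k=1)[0]
print(context, choice)
```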

This is easiest to see in the context of a chatbot, but the same principle applies to all manifestations of AI. Humans relying on AI-generated information as if it were reliable advice have already caused disastrous outcomes in numerous industries; lawyers, to take one widely reported example, have been sanctioned for filing briefs that cited court cases a chatbot simply invented.

AI-powered tools will be useful for an increasingly broad array of applications that inform and assist human decision-making, but they must always be wielded by humans exercising independent judgment. AI solutions must be implemented within guardrails restrictive enough to ensure that the program’s final output is sufficiently likely to correlate to reality as to be useful (after human-led review and revision) for its intended purpose. The acceptable parameters will vary by context. But to the extent we allow ourselves or our businesses to uncritically rely on generative AI as a source of truth rather than the sometimes-useful BS that it is, we will come to regret it.
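What such a guardrail might look like in code: the sketch below (the function names are hypothetical stand-ins, not any particular vendor’s API) treats every AI output as a draft that cannot be used until a human reviews and approves it.

```python
# A minimal sketch of a "guardrail": AI output is treated as a draft
# that a human must review and approve before it is used. The function
# names here are hypothetical, not from any particular library.

def generate_draft(prompt: str) -> str:
    # Stand-in for a call to a generative AI service.
    return f"[AI-generated draft responding to: {prompt}]"

def human_approves(draft: str) -> bool:
    # Stand-in for human-led review and revision.
    answer = input(f"Approve this draft?\n{draft}\n[y/N] ")
    return answer.strip().lower() == "y"

def produce_output(prompt: str) -> str:
    draft = generate_draft(prompt)
    if not human_approves(draft):
        raise RuntimeError("Draft rejected: human judgment overrides the AI.")
    return draft
```

The point of the design is that the program’s only path to a final output runs through human judgment, not around it.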

About the author:

Brian D. Wassom is a partner at Warner Norcross + Judd LLP, where he is the practice leader for content, branding and media litigation and chair of the firm’s Emerging Media and Technologies Industry Group. He holds a certificate in artificial intelligence from the Wayne State University College of Engineering and advises clients from Fortune 500 companies to local businesses on AI issues. Brian is also a globally recognized pioneer in the legal aspects of augmented reality and other cutting-edge digital fields.