the danger of human exceptionalism

Generative AI (gAI) has created a ton of hype along with a broad spectrum of opinions, from “it will change everything” to “it isn’t really that good.” Like most things in life, I think the truth is somewhere in-between. But those in the latter camp often exhibit what I believe fits the concept of “human exceptionalism.” Playing off the American Exceptionalism moniker, this variation of human exceptionalism is believing that humans are “special” and AI is just “math” and a poor impersonation of intelligence.

The problem is that humans aren’t nearly as special as we think we are, and machines have been able to do things we can’t for some time. Ahh, but the “higher tasks” and “critical thinking” are still only within the realm of humans, right? Maybe, but even though we’re in the early stages of gAI, I would argue that for every example of poor AI performance, there are ten – or a hundred – or a thousand – examples of acceptable performance. Humans often hold machines to a higher standard than other humans, which is OK as long as you realize that you’re doing it. One example is autonomous cars: we tolerate regular fatalities on the road at the hands of human drivers, but a single accident caused by an algorithm has huge ramifications. That is a double standard, which again is OK, as long as we realize it is a double standard.

At the end of the day, if gAI creates things that are “good enough,” then humans will adopt it. It doesn’t matter whether it is as good as human output, and it certainly doesn’t matter whether it is suspect with regard to accuracy, biases, etc. We already know that humans either can’t tell or don’t care about problematic content.

I am far from being a gAI cheerleader. This post isn’t to laud how good gAI is, but rather to point out that assuming the impact of gAI will be limited because it “isn’t as good as humans” is short-sighted and will lead to bad outcomes. We were very laissez-faire about regulating the internet, and arguably that helped create the cesspool of reddit (others’ words, not mine). If the focus of critical analysis of gAI is finding the examples where it falls short of humans in order to support the concept of human exceptionalism, then we’ll be blind to the things it does “good enough” to drive widespread adoption, and the things it does “really well” (e.g. protein folding) that will drastically change entire verticals.

Yes, humans are special. And humans created gAI, which is trained on human outputs. Downplaying its capabilities is likely more dangerous than all the hype we’re currently seeing.