revisionist history

via Todd Richmond

RAND has been looking at “Truth Decay” for some time now, with a variety of analyses and reports exploring the diminishing role of facts in decision making. And history is always, on some level, biased and revisionist, as the human lens of time typically recontextualizes events and gives them new meaning or insights.

But the move to digital has enabled new capabilities that raise the stakes considerably. Forgery has been around for centuries, but until the late 20th century, creating a fake or manipulating an existing artifact usually required significant skill (and often resources). Digital fundamentally changed the game, as anything made up of 0s and 1s can conceivably be recreated or manipulated by changing those 0s and 1s. In practice it is much more complex, but digital has also allowed the tools to become easier to use, more powerful, and in some cases, highly automated.

Man walked on the moon 51 years ago. We saw it on TV, so it must be true (though to this day there are conspiracy theorists who maintain the entire thing was a hoax). What if things had played out differently? Nixon’s speechwriter William Safire had prepared a statement for the president in case the mission went wrong. Of course, in “real history,” the Apollo 11 crew made it to the moon and returned to Earth as heroes.

What if you weren’t alive to see the landing “live”? Presumably you’d watch a video if you were interested, but would you be clicking on the original footage or on something manufactured? Just to prove the point, a lab at MIT created a “deepfake” video that shows President Nixon announcing the death of the Apollo 11 astronauts. The MIT piece took a lot of work, but if emerging technology has taught us anything, it is that something expensive and difficult will eventually become easier and cheaper – particularly if there is an appetite for the capability. Well, except maybe for flying cars…

Which brings us to the world in 2020. “Fake news” has become a catchphrase, and the lines between fact and fiction increasingly blur. While spin has perhaps been elevated to high art – or perhaps just efficacy through sheer volume from all sides – the ability of citizens to engage with and embrace evidence is in peril. It used to be that crude forgeries were easy to spot, and even good fakes could be discerned by experts. In a digitally driven world, new tools, particularly machine learning (ML) algorithms used for audio and visual generation, run the risk of moving us into a “post-evidence” world, where we can’t trust anything we see.

“A picture is worth a thousand words” – the saying illustrates the power of visual imagery. When “Photoshop” became a verb, it changed the relationship between image and viewer. While still images retain power, their value in “documenting” events has become suspect. Video carries more gravitas, and recent social justice movements have been driven by moments captured by the moving image. Cameras, in the hands of citizens, are helping to effect change from the ground up.

But considering where tech is heading, will ML and deepfakes soon diminish the power of video documentation? Will deepfakes become so good that they defy identification even by experts? Will society reach a point where the first instinct upon seeing any video is to question its veracity rather than to click the share button? While the answers to those questions remain open, history does tell us that if we move forward with technological advances without considering policy and ethics, society is almost assured of bad outcomes. Bad for someone, at least, and that brings up a whole other set of questions worth asking.

Share your thoughts