narrative entropy

An interesting piece in Quanta magazine discussing Claude Shannon’s 1948 paper, “A Mathematical Theory of Communication.” While this was written long before social media disinformation, deep fakes, and truth decay, it does lay the groundwork for conceptual models that might help make progress against these existential threats.

Shannon took the concept of entropy from physics and applied it to communication – specifically, how much data is needed to convey a message. His work has seen application in compression algorithms, among other places, and shows that there is a lower limit on the amount of data required to encode a message such that it can still be understood by someone else.

Today, Shannon entropy serves as a yardstick in many applied settings, including information compression technology. That you can zip a large movie file, for example, owes to the fact that pixel colors follow statistical patterns, the way English words do. Engineers can build probabilistic models for patterns of pixel colors from one frame to the next. With such a model, the Shannon entropy is calculated by assigning a probability to each pattern and summing, over all the possible ways the pixels could appear, each probability times the logarithm of its inverse. That value tells you the limit of “lossless” compression – the absolute most the movie can be compressed before you start to lose information about its contents.
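To make the idea concrete, here is a minimal sketch of that calculation for a string of text rather than movie pixels, assuming the simplest possible model: each character drawn independently from the message’s own frequency distribution (a “zeroth-order” model, far cruder than the frame-to-frame models engineers actually use).

```python
from collections import Counter
from math import log2

def shannon_entropy(message: str) -> float:
    """Average bits per symbol: sum of p * log2(1/p) over observed symbols."""
    counts = Counter(message)
    total = len(message)
    return sum((n / total) * log2(total / n) for n in counts.values())

# A string of one repeated character carries no information per symbol...
print(shannon_entropy("aaaa"))            # 0.0 bits/char
# ...while two equally likely characters need exactly 1 bit each.
print(shannon_entropy("abababab"))        # 1.0 bits/char
# For an arbitrary message, entropy * length lower-bounds the
# losslessly compressed size under this simple model.
msg = "abracadabra"
print(f"{shannon_entropy(msg):.3f} bits/char")
```

Under this model, no lossless encoder can average fewer bits per character than the entropy – which is exactly the yardstick the article describes for video.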

But one has to wonder whether this concept could be useful for thinking about stories and how they are passed from person to person, generation to generation – particularly when considering disinformation and life in a post-evidence world. Is “narrative entropy” a thing, and can it be measured? It appears so, though there are likely different flavors of it. If that is the case, can studying narrative entropy help us combat disinformation and learn to navigate a post-evidence world? Only if we start prototyping…
