by Eddie López
Whether one thinks of Congressional testimonies or Joe Biden’s 2022 State of the Union Address, it’s clear that there are a lot of concerns in Washington around social media. I’m talking about misinformation and disinformation campaigns, user interfaces, algorithms – all things whose effects have been felt in the real world via elections, mental health, and social relations.
Now, this piece is not here to dive into each of these issues. Better sources than I have articulated them at length and in far greater depth. Instead, I want to talk about the solution. Currently, our government does not seem to have one, with discussions still ongoing in our congressional chambers. And while I don’t have the solution either, per se, I think I have stumbled upon a path to getting us there.
The answer is casuistry. The rest of this piece walks through just that.
What is Casuistry?
Casuistry is a 500-year-old philosophy developed by St. Ignatius of Loyola. For a more in-depth explanation, you can check out RAND Board of Trustees member Malcolm Gladwell’s podcast episode on it. As a brief introduction, casuistry is a philosophy for dealing with newly occurring problems. Instead of applying a given set of principles and then acting on the problem, casuistry invites the user to understand the problem first. In doing so, one can take the situation, compare it to an understood problem of similar context, and then try to use that problem’s solution to solve the situation at hand.
This is the very process I tried to apply to social media regulation. I posed the question: is there any problem out there that we have already dealt with that compares to our current social media regulation problem? The answer is yes… and this is where things got interesting.
Plots and Comparisons
When thinking of how to contextualize social media, I found two primary factors that really define our problem. First, there is the integration aspect: how is social media regulated, and at what level of government? Second, there is the necessity aspect. Social media provides a hefty pillar of our societal structure and, at least for the foreseeable future, is not something we will simply get rid of altogether.
Using these ideas as relative metrics, I wanted to see how other policies might compare. To do so, I created a plot, using three different policy areas as primary examples. This is shown below:
Figure 1: The scaling of laws.
As for why these policy areas: I chose these three because I think they show the three primary forms we might find on such a graph. Such forms include:
- Policy areas that are equally integrated and necessary. For this, I chose pollution regulation, as it is highly integrated in society via the federal government, and is also seen as a high necessity given the amount of emissions the U.S. outputs.
- Policy areas that are skewed to one side more than the other. For this, I chose employment laws: they are of high necessity, but their integration ranges pretty widely based on local-level policies.
- Policy areas that span across both axes. For this, I chose IP law: its necessity varies with things like employment, and its integration can vary based on local-level policies.
Now, can other policy areas be added to this? Absolutely. However, for the purposes of this piece, the plot is merely a structural proof of concept, a potential medium for how we might compare policy solutions in our search for a social media regulation solution.
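For concreteness, the framework above can be sketched in a few lines of Python. The integration and necessity scores below are hypothetical placeholders I invented for illustration (not measured values), and the threshold is arbitrary:

```python
# Hypothetical, illustrative scores on a 0-10 scale; not measured values.
policy_areas = {
    "pollution regulation": {"integration": 9, "necessity": 9},
    "employment law": {"integration": 4, "necessity": 9},
    "ip law": {"integration": 5, "necessity": 5},
}

def classify(scores, high=7):
    """Label a policy area by which of the three forms it takes on the plot."""
    integrated = scores["integration"] >= high
    necessary = scores["necessity"] >= high
    if integrated and necessary:
        return "equally integrated and necessary"
    if integrated != necessary:
        return "skewed to one side"
    return "spans both axes"
```

Plugging in new policy areas is then just a matter of adding entries to the dictionary, which is really all the plot asks of us.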
Beyond establishing a framework, one other interest of mine was establishing an implementation proof of concept. What would cross-mapping another policy area actually look like? Would we learn anything? Is it useful? To answer this, I tested out “pollution law” from our plot… and it actually turned out quite well.
Pollution Policy: Crosswalking to Social Media Solutions
When I think of pollution policy, I first think of the Pollution Prevention Act of 1990; in many ways, it serves as the philosophical basis for why pollution laws needed to be implemented. Correspondingly, with pollution policy as our comparison, I wanted to see if I could derive a similar philosophical basis for social media regulation. Using “misinformation and disinformation” as a replacement for the act’s use of “pollution,” I find the reasoning and courses of action to be uncannily similar:
Table 1: Comparing language and substitutions.
| Original Pollution Prevention Act of 1990 Text | Inserting Social Media Language into the Act |
| --- | --- |
| “The United States of America annually produces millions of tons of pollution and spends tens of billions of dollars per year controlling this pollution.” | The United States of America annually produces millions of misinformation and disinformation posts and spends X dollars per year controlling this misinformation and disinformation. |
| “There are significant opportunities for industry to reduce or prevent pollution at the source through cost-effective changes in production, operation, and raw materials use.” | There are significant opportunities for industry to reduce or prevent misinformation and disinformation at the source through cost-effective changes in production, operation, and raw materials use. |
| “The opportunities for source reduction are often not realized because existing regulations, and the industrial resources they require for compliance, focus upon treatment and disposal, rather than source reduction.” | The opportunities for source reduction are often not realized because existing regulations, and the industrial resources they require for compliance, focus upon treatment and disposal, rather than source reduction. |
| “Source reduction is fundamentally different and more desirable than waste management and pollution control.” | Source reduction is fundamentally different and more desirable than misinformation and disinformation management and control. |
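The substitution in Table 1 is mechanical enough that it can be sketched in a few lines of Python. The term mapping below is my own illustration of the swap described above, not an established tool; longer phrases are matched first so that multi-word substitutions win over single words:

```python
import re

# Excerpt from the Pollution Prevention Act of 1990 (findings section).
act_text = (
    "Source reduction is fundamentally different and more desirable "
    "than waste management and pollution control."
)

# Illustrative term mapping for the thought experiment.
substitutions = {
    "waste management and pollution control":
        "misinformation and disinformation management and control",
    "pollution": "misinformation and disinformation",
}

def crosswalk(text, mapping):
    """Replace each policy term with its social media counterpart."""
    pattern = re.compile(
        "|".join(re.escape(k) for k in sorted(mapping, key=len, reverse=True)),
        flags=re.IGNORECASE,
    )
    return pattern.sub(lambda m: mapping[m.group(0).lower()], text)
```

Running `crosswalk(act_text, substitutions)` reproduces the final row of the table, which is the point: the crosswalk is a simple mapping exercise, and the interesting work is choosing the mapping.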
With similar logic in hand, does this mean we have found the right way to approach social media regulation? Not necessarily. However, I think the similar language shows that there is a potential solution here, presuming we apply it correctly. Case in point: looking just at the vision, could misinformation and disinformation source reduction not be better than misinformation and disinformation management and control? Absolutely. As for how we do this, that is where a lot of our questions and brainpower will need to come into play. To start us off, perhaps the most basic example would be verifying all accounts on a platform. Establishing this kind of legitimacy could reduce misinformation and disinformation spread via bots, and the rest would go from there.
Moving outside of philosophical underpinnings, I think another avenue for solution finding is simply looking at current regulatory implementations. With regard to pollution policy, I find it hard to think of a more prominent example than cap-and-trade. Here, we can ask yet another question: is this implementation transferable? Could we use a cap-and-trade style model for misinformation and disinformation management?
Specifically for cap-and-trade, the answer remains unclear. In large part, this has to do with feasibility issues: the standard we hold social media companies to for identifying misinformation and disinformation, the metric we choose for measuring it, what the cap would be, and so on. However, if we can fill in those gaps, would it not be a policy solution with a similar pros and cons list to normal cap-and-trade pollution policy?
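To make the mechanics concrete, here is a toy sketch of what a cap-and-trade ledger for flagged misinformation and disinformation might look like. The platform names, allowance units, and cap values are all hypothetical; this illustrates only the trading mechanic borrowed from pollution policy, not a workable policy design:

```python
class MisinfoCapAndTrade:
    """Toy cap-and-trade ledger: each platform starts with an allowance of
    flagged-content units; a platform that overshoots its cap must buy spare
    credits from a platform that is under its cap. Purely illustrative."""

    def __init__(self, allowances):
        # platform name -> remaining credits (negative means over the cap)
        self.allowances = dict(allowances)

    def record_flagged(self, platform, units):
        """Deduct flagged misinformation/disinformation units from a platform."""
        self.allowances[platform] -= units

    def trade(self, seller, buyer, units):
        """Transfer spare credits from an under-cap seller to an over-cap buyer."""
        if self.allowances[seller] < units:
            raise ValueError("seller lacks spare credits")
        self.allowances[seller] -= units
        self.allowances[buyer] += units
```

The hard questions from the paragraph above (what counts as one unit, who sets the cap) live entirely outside this sketch, which is exactly why they are where the brainpower is needed.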
The Way Forward
Ultimately, this was just a thought experiment to see how other policy areas might provide a solution to social media regulation. Overall, I think the process works relatively well… and I’m excited to continue crosswalking solutions from other areas. Maybe in doing so, we will find a better framing for our social media regulation solution. Maybe it was right in front of our eyes all along.