Lo-fi Wildfire Modeling for Augmented Reality Prototyping

by Tim Gulden and Tim Marler

The TNL is gearing up to do some experiments in using augmented reality (AR) to improve situational awareness during wildfires.  Our goal is to superimpose information about fire locations, fire model predictions, response personnel and equipment, evacuation routes, etc. onto a smartphone camera view – letting untrained people get a better sense of what a wildfire report really means for them.  Technology like this might also be useful to fire managers in making use of the latest fire modeling results and keeping track of people and equipment in chaotic, low-visibility environments.

We are working on this in partnership with Lawrence Livermore National Laboratory, which gives us access to high-fidelity, physics-driven models that run on supercomputers.  But for this early prototyping stage, we just need a fast, lightweight, low-fidelity fire model that we can apply to terrain near our location – because testing AR capabilities will require us to go out and inhabit the reality that we want to augment!

To fill this gap, we spun up a very simple wildfire model that allows us to roughly simulate fires in the Santa Monica Mountains, within a few miles of RAND’s Santa Monica location.  We used the NetLogo modeling package, beginning from the “Fire” model in the NetLogo sample library.  We used ArcGIS to preprocess USGS elevation data and GAP/LANDFIRE landcover data to produce 100m-resolution datasets for both elevation and generalized fire risk covering the eastern half of the Santa Monica Mountains, then used the NetLogo GIS extension to pull these layers, along with a road layer, into NetLogo.

We then reworked the logic of the sample model to calculate a chance of ignition for each pixel adjacent to a burning pixel as a function of slope (fire tends to go uphill more than downhill) and wind (fire tends to travel downwind more than upwind, and stronger winds matter more).  We also included a general dryness factor that is applied to the whole landscape, raising or lowering the overall fire risk.
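To make the idea concrete, here is a minimal Python sketch of how such a per-pixel ignition probability might be composed.  This is an illustration of the general approach, not the actual NetLogo code; the function name, parameters, and coefficients are all hypothetical.

```python
def ignition_probability(slope, wind_speed, wind_alignment, dryness,
                         base_prob=0.3):
    """Chance that an unburned cell next to a burning cell ignites this tick.

    slope: rise/run from the burning cell to this cell
           (positive = uphill, which makes spread more likely).
    wind_speed: wind strength in arbitrary units (0 = calm).
    wind_alignment: cosine of the angle between the wind direction and the
                    direction of spread (+1 = directly downwind, -1 = upwind).
    dryness: landscape-wide multiplier (>1 = drier, riskier).
    base_prob: spread chance on flat ground with no wind.
    """
    slope_factor = 1.0 + max(slope, -0.9)   # uphill boosts, downhill damps
    wind_factor = 1.0 + 0.1 * wind_speed * wind_alignment
    p = base_prob * slope_factor * max(wind_factor, 0.0) * dryness
    return min(max(p, 0.0), 1.0)            # clamp to a valid probability
```

Note that stronger winds scale the alignment term, so a strong tailwind raises the probability more than a light one – matching the qualitative behavior described above.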

The result is a model that should meet our prototyping needs.  It has reasonable fire dynamics, it is simple, and it is fast (the runs shown above are slowed down by orders of magnitude so that we can see what is happening).  This will let us do thousands of runs for a given fire in a given state to calculate a risk zone, and to generate output that bears a meaningful resemblance to a real fire on the real terrain where we want to do our exploration.
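The "thousands of runs" step amounts to a Monte Carlo aggregation: run the stochastic spread model many times from the same ignition point, then compute the fraction of runs in which each cell burned.  A minimal Python sketch, simplified to a static per-cell ignition probability and 4-neighbor spread (the real model conditions that probability on slope and wind direction), might look like this.  All names here are hypothetical.

```python
import random

def run_fire(risk, ignition, steps, seed):
    """One stochastic run; returns the set of cells that burned.

    risk: dict mapping (x, y) -> ignition probability for that cell.
    ignition: starting cell; fire spreads to the 4-neighbors of burning cells.
    """
    rng = random.Random(seed)
    burned = {ignition}
    frontier = {ignition}
    for _ in range(steps):
        new_frontier = set()
        for (x, y) in frontier:
            for nbr in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if nbr in risk and nbr not in burned and rng.random() < risk[nbr]:
                    burned.add(nbr)
                    new_frontier.add(nbr)
        frontier = new_frontier
        if not frontier:        # fire went out
            break
    return burned

def risk_zone(risk, ignition, steps=20, runs=1000):
    """Fraction of runs in which each cell burned: the Monte Carlo risk map."""
    counts = {}
    for seed in range(runs):
        for cell in run_fire(risk, ignition, steps, seed):
            counts[cell] = counts.get(cell, 0) + 1
    return {cell: n / runs for cell, n in counts.items()}
```

Because each run is seeded independently, the resulting map converges toward the true burn probabilities as the number of runs grows – and because the model is lightweight, a thousand runs take seconds rather than supercomputer hours.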

As George Box famously said: “All models are wrong, but some are useful.”  This is a great example of such a model.  It will likely be useful for our needs – but it is important to remember the ways in which it is wrong.  One of its most significant limitations is that it does not model the speed at which a fire spreads.  At each step, the model looks at the neighbors of a burning pixel and decides whether they also catch fire.  This means that the flame front advances by one pixel per tick – unless it goes out!  This holds regardless of the wind or slope conditions; they only affect the probability that the fire spreads, not its speed.  In reality, of course, we know that in dry, windy conditions, a fire can move up a slope with alarming rapidity.  With more time and effort, we could incorporate this and the thousand other things that one might want to know about a fire into the model – making it wrong about fewer things.  It would then be useful for different things, and the fire science community has those things covered far better than we could ever cover them.  But this model is useful for exploring integration with AR in ways that the bigger, more sophisticated models are not!  Sometimes worse is better.
