Standardised and transparent model descriptions for agent-based models
Last month saw initial publication (although officially it is a May publication!) of the paper that came out of the agent-based modelling workshop in which I participated at iEMSs 2014.
Birgit Müller put in some great work to summarise our discussion and bring together the paper which addresses how we describe agent-based models. Standardised and transparent model descriptions for agent-based models: Current status and prospects has several highlights:
We describe how agent-based models can be documented with different types of model descriptions.
We differentiate eight purposes for which model descriptions are used.
We evaluate the different description types on their utility for the different purposes.
We conclude that no single description type alone can fulfil all purposes simultaneously.
We suggest a minimum standard by combining particular description types.
To present our assessment of how well different purposes are met by alternative description types we produced the figure below. In the figure, light grey indicates limited ability, medium grey indicates medium ability and dark grey indicates high ability (an x indicates not applicable). Full details of this assessment are given in the version of the diagram in an online supporting appendix.
Citation and abstract for the paper below. Any questions, or for a reprint, get in touch.
Müller, B., Balbi, S., Buchmann, C.M., de Sousa, L., Dressler, G., Groeneveld, J., Klassert, C.J., Quang Bao Le, Millington, J.D.A., Nolzen, H., Parker, D.C., Polhill, J.G., Schlüter, M., Schulze, J., Schwarz, N., Sun, Z., Taillandier, P. and Weise, H. (2014). Standardised and transparent model descriptions for agent-based models: Current status and prospects. Environmental Modelling & Software, 55, 156-163.
Abstract
Agent-based models are helpful to investigate complex dynamics in coupled human–natural systems. However, model assessment, model comparison and replication are hampered to a large extent by a lack of transparency and comprehensibility in model descriptions. In this article we address the question of whether an ideal standard for describing models exists. We first suggest a classification for structuring types of model descriptions. Secondly, we differentiate purposes for which model descriptions are important. Thirdly, we review the types of model descriptions and evaluate each on their utility for the purposes. Our evaluation finds that the choice of the appropriate model description type is purpose-dependent and that no single description type alone can fulfil all requirements simultaneously. However, we suggest a minimum standard of model description for good modelling practice, namely the provision of source code and an accessible natural language description, and argue for the development of a common standard.
Keywords
Agent-based modelling; Domain specific languages; Graphical representations; Model communication; Model comparison; Model development; Model design; Model replication; Standardised protocols
Back in February last year I wrote a blog post describing my initial work using agent-based modelling to examine spatial patterns of school choice in some of London's education authorities. Right at the start of this month I presented a summary of the development of that work at the IGU 2013 Conference on Applied GIS and Spatial Modelling (see the slideshare presentation below). And then this week I had a full paper with all the detailed analysis accepted by JASSS - the Journal of Artificial Societies and Social Simulation. Good news!
One of the interesting things we show with the model, which was not readily apparent at the outset of our investigation, is that parent agents with above average but not very high spatial mobility fail to get their child into their preferred school more frequently than other parents - including those with lower mobility. This is partly due to the differing aspirations of parents to move house to ensure they live in appropriate neighbourhoods, given the use of distance (from home to school) to ration places at popular schools. In future, when better informed by individual-level data and used in combination with scenarios of different education policies, our modelling approach will allow us to more rigorously investigate the consequences of education policy for inequalities in access to education.
I've pasted the abstract below and because JASSS is freely available online you'll be able to read the entire paper in a few months when it's officially published. Any questions before then, just zap me an email.
Millington, J.D.A., Butler, T. and Hamnett, C. (forthcoming). Aspiration, Attainment and Success: An agent-based model of distance-based school allocation. Journal of Artificial Societies and Social Simulation.
Abstract
In recent years, UK governments have implemented policies that emphasise the ability of parents to choose which school they wish their child to attend. Inherently spatial school-place allocation rules in many areas have produced a geography of inequality between parents that succeed and fail to get their child into preferred schools based upon where they live. We present an agent-based simulation model developed to investigate the implications of distance-based school-place allocation policies. We show how a simple, abstract model can generate patterns of school popularity, performance and spatial distribution of pupils which are similar to those observed in local education authorities in London, UK. The model represents ‘school’ and ‘parent’ agents. Parental ‘aspiration’ to send their child to the best performing school (as opposed to other criteria) is a primary parent agent attribute in the model. This aspiration attribute is used as a means to constrain the location and movement of parent agents within the modelled environment. Results indicate that these location and movement constraints are needed to generate empirical patterns, and that patterns are generated most closely and consistently when schools agents differ in their ability to increase pupil attainment. Analysis of model output for simulations using these mechanisms shows how parent agents with above-average – but not very high – aspiration fail to get their child a place at their preferred school more frequently than other parent agents. We highlight the kinds of alternative school-place allocation rules and education system policies the model can be used to investigate.
The main aim of visiting Assoc. Profs. David O'Sullivan and George Perry was to continue on from where we left off with our recent work on agent-based modelling (including that published in Geoforum on narrative explanation and in the ABM of Geographical Systems book chapter). The paper on narrative explanation was actually initiated in a previous trip I made to Auckland in 2005 - takes a while for these things to come to fruition (but in my defence I was busy with other things for several years and there were other outcomes from that trip). Hopefully, such a concrete outcome as a publication from our modelling and discussions won't be so long in coming this time around! In particular, we'll continue to examine the idea that, just as we fail to maximise the value of spatial models by not using spatial analysis of their output, we fail to maximise the value of agent-based models by not using agent-based analysis of their output. Identifying means of understanding how agent interactions and attributes influence path dependency in system dynamics seems an interesting place to start...
While in Auckland I also made good progress on the manuscript I'm writing with John Wainwright on the value of agent-based modelling for integrating geographical understanding (which I mentioned previously). I presented the main ideas from this manuscript in a seminar to members of the School of Environment and got some useful feedback. The slides from my presentation are below and I'm sure I'll discuss that more here in future.
Another area I made progress on with George is the continuing use of the Mediterranean disturbance-succession modelling platform developed during my PhD. We think there are some interesting questions we can use an enhanced version of the original model to investigate, including examining the controls on Mediterranean vegetation competition and succession during the Holocene. One of the most sensitive aspects of the original model was the importance of soil moisture for succession dynamics and I've started on updating the model to use the soil-water balance model employed in the LandClim model. Enhancing the model in this way will also improve its applicability to explore fire-vegetation interactions with human activity and to explore questions regarding fire-vegetation-terrain interactions (i.e., ecogeomorphology).
So, lots to be going on with and hopefully I'll be able to visit again in another few years time.
Last week I was in Los Angeles for my first ever Association of American Geographers Annual Meeting. I think I hadn't been before because the US-IALE annual meeting is around the same time of year and attending that has made more sense in the last few years given my work on forest modelling in Michigan. As I'd heard previously, the meeting was huge - although not quite as crazy as it could have been.
Most of my participation at the meeting was related to the Land Systems Science Symposium sessions (which ran across four days) and the Agent-Based and Cellular Automata Model for Geographical Systems sessions. It was good to discuss and meet new people wrestling with similar issues to those in my own research. Unfortunately, the ABM sessions were scheduled for the last day which meant it was only late in the conference that I got to properly meet people I'd encountered online (e.g., Mike Batty, Andrew Crooks, Nick Magliocca) and others. Despite being scheduled for the last day there was a good turnout in the sessions and my presentation (below) seemed to go down well. Researchers from the group at George Mason University were best represented, with much of their work using the MASON modelling libraries (which I'm going to have to look into more to continue the work initiated during my PhD).
It's hard to concentrate on 20-minute paper sessions continuously for five days though, and I found the discussion panels and plenaries a nice relief, allowing a broader picture to develop. For example, David O'Sullivan (whom I'm currently visiting at the University of Auckland) chaired an interesting panel discussion on ABM for Land Use/Cover Change. Participants included Chris Bone, who discussed the need for better representation of model uncertainty from multiple simulations (via temporal variant-invariant analysis - coming soon in IJGIS); Dan Brown, who suggested we're missing mid-level models that are neither abstract 'toys' nor beholden to mimetic reproduction of specific empirical data (e.g., where are the ABM equivalents of von Thunen and Burgess type models?); and Moira Zellner, who highlighted problems of using ABM for decision-making in participatory approaches (Moira's presentation in the ABM session was great, discussing the 'blow-up' in her participatory modelling project when the model got too complicated and stakeholders no longer wanted to know what the model was doing under the hood).
I also really enjoyed Mike Goodchild's Progress in Human Geography annual lecture, in which he reviewed the development of GIScience through his long career and where he thought it should go next ('Old Debates, New Opportunities'). Goodchild argued (I think) that Geography cannot (and should not) be an experimental science in the mold of Physics, and that rather than attempting to identify laws in social (geographical) science, we should aim to find things that can be deemed to be 'generally true' and used as a norm for reducing uncertainty. This is possible because geography is 'neither uniform nor unique', but it is repeating. Furthermore, he argued it was time for GIScience to rediscover place and that a technology of place is needed to accompany the (existing) technology of space. This technology of place might use names rather than co-ordinates, hierarchies of places rather than layers of coverages, and produce sketch maps rather than planimetric maps. The substitution of names of places for co-ordinates of locations is particularly important here, as names are social constructs and so multiple (local) maps are possible (and needed) rather than a single (global) map. Goodchild exemplified this using Google Maps, which differs depending on which country you view it from (e.g., depending on what the State views as its legitimate borders). He talked about loads of other stuff, including critical GIS, but these were the points I found most intriguing.
Another way to break up the constant stream of 20-minute project summaries would have been organised fieldtrips around the LA area. However, unlike the landscape ecology conference there is no single time set aside for fieldtrips, and while there are organised trips they're scheduled throughout the week (simultaneous with sessions). Given such a large conference I guess it would be hard to fit all the sessions into a single week if time were set aside. I didn't make it to any of the formal fieldtrips, but with Ben Clifford (check out his new book, The Collaborating Planner?) and Kerry Holden I did manage to find time to hit the beach for some sun. It was a long winter in the UK after all! Now I'm in Auckland it's warm but stormy; an update about activities here to come in May.
This week I visited one of my former PhD advisors, Prof John Wainwright, at Durham University. We've been working on a manuscript together for a while now and as it's stalled recently we thought it time we met up to re-inject some energy into it. The manuscript is a discussion piece about how agent-based modelling (ABM) can contribute to understanding and explanation in geography. We started talking about the idea in Pittsburgh in 2011 at a conference on the Epistemology of Modeling and Simulation. I searched through this blog to see where I'd mentioned the conference and manuscript before, but to my surprise, before this post I hadn't.
In our discussion of what we can learn through using ABM, John highlighted the work of Kurt Gödel and his incompleteness theorems. Not knowing all that much about that stuff I've been ploughing my way through Douglas Hofstadter's tome 'Gödel, Escher, Bach: An Eternal Golden Braid' - heavy going in places but very interesting. In particular, his discussion of the concept of recursion has caught my attention, as it's something I've been identifying elsewhere.
The general concept of recursion involves nesting, like Russian dolls, stories within stories (like in Don Quixote) and images within images.
Computer programmers take advantage of recursion in their code, calling a given procedure from within that same procedure (hence their love of recursive acronyms like PHP ['PHP: Hypertext Preprocessor']). An example of how this works is Saura and Martinez-Millan's modified random clusters method for generating land cover patterns with given properties. I used this method in the simulation model I developed during my PhD and have re-coded the original algorithm for use in NetLogo [available online here]. In the code the grow-cover_cluster procedure is called from within itself, allowing clusters of pixels to 'grow themselves'.
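To illustrate the general idea (this is a simplified Python sketch of recursive cluster growing, not the NetLogo grow-cover_cluster procedure or the original algorithm; all function and parameter names here are my own):

```python
import random

def grow_cluster(grid, marked, row, col, cluster_id):
    """Recursively grow a cluster: label this cell, then call this same
    procedure on any 4-neighbour cells that were marked but not yet labelled."""
    grid[row][col] = cluster_id
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        r, c = row + dr, col + dc
        if (0 <= r < len(grid) and 0 <= c < len(grid[0])
                and marked[r][c] and grid[r][c] is None):
            grow_cluster(grid, marked, r, c, cluster_id)  # the recursive call

def cluster_map(size, p, seed=None):
    """Mark each cell independently with probability p, then label each
    connected group of marked cells as one cluster."""
    rng = random.Random(seed)
    marked = [[rng.random() < p for _ in range(size)] for _ in range(size)]
    grid = [[None] * size for _ in range(size)]
    next_id = 0
    for r in range(size):
        for c in range(size):
            if marked[r][c] and grid[r][c] is None:
                grow_cluster(grid, marked, r, c, next_id)
                next_id += 1
    return grid
```

Each call labels one cell and then calls itself on the marked, unlabelled neighbours, so a whole cluster 'grows itself' from a single starting cell. (On very large grids a recursive version like this can hit Python's recursion limit; an iterative stack would then be the practical choice.)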
However, rather than get into the details of the use of recursion in programming, I want to highlight two other ways in which recursion is important in social activity and its simulation.
The first is in how society (and social phenomena) has a recursive relationship with the people (and their activities) composing it. For example, Anthony Giddens's theory of structuration argues that the social structures (i.e., rules and resources) that constrain or prompt individuals' actions are also ultimately the result of those actions. Hence, there is a duality of structure which is:
"the essential recursiveness of social life, as constituted in social practices: structure is both medium and outcome of reproduction of practices. Structure enters simultaneously into the constitution of the agent and social practices, and 'exists' in the generating moments of this constitution". (p.5 Giddens 1979)
Another example comes from Andrew Sayer in his latest book 'Why Things Matter to People', which I'm also currently working through. One of Sayer's arguments is that we humans are "evaluative beings: we don't just think and interact but evaluate things". For Sayer, these day-to-day evaluations have a recursive relationship with the broader values that individuals hold, values being 'sedimented' valuations, "based on repeated particular experiences and valuations of actions, but [which also tend], recursively, to shape subsequent particular valuations of people and their actions". (p.26 Sayer 2011)
However, while recursion is often used in computer programming and has been suggested as playing a role in different social processes (like those above), its examination in social simulation and ABM has not been so prominent to date. This was a point made by Paul Thagard at the Pittsburgh epistemology conference. Here, it seems, is an opportunity for those seeking to use simulation methods to better understand social patterns and phenomena. For example, in an ABM how do the interactions between individual agents combine to produce structures which in turn influence future interactions between agents?
Second, it seems to me that there are potentially recursive processes surrounding any single simulation model. For if those we simulate should encounter the model in which they are represented (e.g., through participatory evaluation of the model), and if that encounter influences their future actions, do we not then need to account for such interactions between model and modelee (i.e., the person being modelled) in the model itself? This is a point I raised in the chapter I helped John Wainwright and Dr Mark Mulligan re-write for the second edition of their edited book "Environmental Modelling: Finding Simplicity in Complexity":
"At the outset of this chapter we highlighted the inherent unpredictability of human behaviour and several of the examples we have presented may have done little to persuade you that current models of decision-making can make accurate forecasts about the future. A major reason for this unpredictability is because socio-economic systems are ‘open’ and have a propensity to structural changes in the very relationships that we hope to model. By open, we mean that the systems have flows of mass, energy, information and values into and out of them that may cause changes in political, economic, social and cultural meanings, processes and states. As a result, the behaviour and relationships of components are open to modification by events and phenomena from outside the system of study. This modification can even apply to us as modellers because of what economist George Soros has termed the ‘human uncertainty principle’ (Soros 2003). Soros draws parallels between his principle and the Heisenberg uncertainty principle in quantum mechanics. However, a more appropriate way to think about this problem might be by considering the distinction Ian Hacking makes between the classification of ‘indifferent’ and ‘interactive’ kinds (Hacking, 1999; also see Hoggart et al., 2002). Indifferent kinds – such as trees, rocks, or fish – are not aware that they are being classified by an observer. In contrast humans are ‘interactive kinds’ because they are aware and can respond to how they are being classified (including how modellers classify different kinds of agent behaviour in their models). Whereas indifferent kinds do not modify their behaviour because of their classification, an interactive kind might. This situation has the potential to invalidate a model of interactive kinds before it has even been used. For example, even if a modeller has correctly classified risk-takers vs. 
risk avoiders initially, a person in the system being modelled may modify their behaviour (e.g., their evaluation of certain risks) on seeing the results of that behaviour in the model. Although the initial structure of the model was appropriate, the model may potentially later lead to its own invalidity!" (p. 304, Millington et al. 2013)
The new edition was just published this week and will continue to be a great resource for teaching at upper levels (I used the first edition in the Systems Modeling and Simulation course I taught at MSU, for example).
More recently, I discussed these ideas about how models interact with their subjects with Peter McBurney, Professor in Informatics here at KCL. Peter has written a great article entitled 'What are Models For?', although it's somewhat hidden away in the proceedings of a conference. In a similar manner to Epstein, Peter lists the various possible uses for simulation models (other than prediction, which is only one of many) and also discusses two uses in more detail - mensatic and epideictic. The former function relates to how models can bring people around a metaphorical table for discussion (e.g., for identifying and potentially deciding about policy trade-offs). The other, epideictic, relates to how ideas and arguments are presented and leads Peter to argue that representing real world systems in a simulation model can force people to "engage in structured and rigorous thinking about [their problem] domain".
John and I will be touching on these ideas about the mensatic and epideictic functions of models in our manuscript. However, beyond this discussion, and of relevance here, Peter discusses meta-models. That is, models of models. The purpose here, and continuing from the passage from my book chapter above, is to produce a model (B) of another model (A) to better understand the relationships between Model A and the real intelligent entities inside the domain that Model A represents:
"As with any model, constructing the meta-model M will allow us to explore “What if?” questions, such as alternative policies regarding the release of information arising from model A to the intelligent entities inside domain X. Indeed, we could even explore the consequences of allowing the entities inside X to have access to our meta-model M." (p.185, McBurney 2012)
Thus, the models are nested with a hope of better understanding the recursive relationship between models and their subjects. Constructing such meta-models will likely not be trivial, but we're thinking about it. Hopefully the manuscript John and I are working on will help further these ideas, as does writing blog posts like this.
Selected Reference
McBurney (2012): What are models for? Pages 175-188, in: M. Cossentino, K. Tuyls and G. Weiss (Editors): Post-Proceedings of the Ninth European Workshop on Multi-Agent Systems (EUMAS 2011). Lecture Notes in Computer Science, volume 7541. Berlin, Germany: Springer.
Millington et al. (2013). Representing human activity in environmental modelling. In: Wainwright, J. and Mulligan, M. (Eds.) Environmental Modelling: Finding Simplicity in Complexity (2nd Edition). Wiley, pp. 291-307. [Online] [Wiley]
"The model simulates the initial height of the tallest saplings 10 years following gap creation (potentially either advanced regeneration or gap colonizers), and grows them until they are at least 7 m in height when they are passed to FVS for continued simulation. Our approach does not aim to produce a thorough mechanistic model of regeneration dynamics, but rather is one that is sufficiently mechanistically-based to allow us to reliably predict regeneration for trees most likely to recruit to canopy positions from readily-collectable field data."
In the model we assume that each forest gap contains space for a given number of 7m tall trees. For each of these spaces in a gap, we estimate the probability that it is in one of four states 10 years after harvest:
occupied by a 2m or taller sugar maple tree (SM)
occupied by a 2m or taller ironwood tree (IW)
occupied by a 2m or taller tree of another species (OT)
not occupied by a tree 2m or taller (i.e., empty, ET)
To estimate the probabilities of these states for each of the gap spaces, given different environmental conditions, we use regression modelling for composition data:
"The gap-level probability for each of the four gap-space states (i.e., composition probabilities) is estimated by a regression model for composition data (Aitchison, 1982 and Aitchison, 1986). Our raw composition data are a vector for each of our empirical gaps specifying the proportion of all saplings with height >2 m that were sugar maple, ironwood, or other species (i.e., SM, IW, and OT). If the total number of trees with height >2 m is denoted by t, the proportion of empty spaces (ET) equals zero if t > n, otherwise ET = (n − t)/n. These raw composition data provide information on the ratios of the components (i.e., gap-space states). The use of standard statistical methods with raw composition data can lead to spurious correlation effects, in part due to the absence of an interpretable covariance structure (Aitchison, 1986). However, transforming composition data, for example by taking logarithms of ratios (log-ratios), enables a mapping of the data onto the whole of real space and the use of standard unconstrained multivariate analyses (Aitchison and Egozcue, 2005). We transformed our composition data with a centred log-ratio transform using the ‘aComp’ scale in the ‘compositions’ package (van den Boogaart and Tolosana-Delgado, 2008) in R (R Development Core Team, 2009). These transformed data were then ready for use in a standard multivariate regression model. A centred log-ratio transform is appropriate in our case as our composition data are proportions (not amounts) and the difference between components is relative (not absolute). The ‘aComp’ transformation uses the centred log-ratio scalar product (Aitchison, 2001) and worked examples of the transformation computation can be found in Tolosana-Delgado et al. (2005)."
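For readers unfamiliar with the centred log-ratio (clr) transform described in that passage, a minimal sketch of the computation (illustrative Python only; the analysis itself used the 'compositions' package in R, which also handles subtleties such as zero components that this sketch does not):

```python
import math

def clr(composition):
    """Centred log-ratio transform: the log of each part divided by the
    geometric mean of all parts. Parts must be strictly positive."""
    g = math.exp(sum(math.log(p) for p in composition) / len(composition))
    return [math.log(p / g) for p in composition]
```

Because each part is divided by the geometric mean of all parts, the transformed values always sum to zero, which is what maps the constrained proportions onto the whole of real space for standard multivariate analysis. Note that zeros in the raw data (e.g., when ET = 0) must be replaced or otherwise handled before taking logarithms.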
One of the things I'd like to highlight here is that the R script I wrote to do this modelling is available online as supplementary material to the paper. You can view the R script here and the data we ran it on here.
If you look at the R script you can see that, for each gap, the proportions of gap-spaces in the four states predicted by the regression model are interpreted as the probability that a gap-space is in the corresponding state. With these probabilities we predict the state of each gap space by comparing a random value between 0 and 1 to the cumulative probabilities of the states estimated for the gap. Table 1 in the paper shows an example of this.
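That cumulative-probability draw can be sketched as follows (a Python illustration of the general technique only - the actual implementation is the R script linked above; the state labels follow the four gap-space states listed earlier and the function name is my own):

```python
import random

def sample_state(probs, states=("SM", "IW", "OT", "ET"), rng=random):
    """Draw one gap-space state by comparing a uniform random number in
    [0, 1) to the running cumulative sum of the predicted probabilities."""
    u = rng.random()
    cumulative = 0.0
    for state, p in zip(states, probs):
        cumulative += p
        if u < cumulative:
            return state
    return states[-1]  # guard against floating-point shortfall in the sum
```

Calling this once per gap space, with the probabilities estimated for that gap, stochastically assigns each space to sugar maple, ironwood, other species, or empty in proportion to the regression model's predictions.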
With this model setup we ran the model for scenarios of different soil conditions, deer densities, canopy openness and Ironwood basal area (the environmental factors in the model that influence regeneration). The results for these scenarios are shown in the figure below.
Hopefully this gives you an idea about how the model works. The paper has all the details of course, so check that out. If you'd like a copy of the paper(s) or have any questions just get in touch (email or @jamesmillington on twitter)
Millington, J.D.A., Walters, M.B., Matonis, M.S. and Liu, J. (2013). Filling the gap: A compositional gap regeneration model for managed northern hardwood forests. Ecological Modelling, 253, 17-27.
doi: 10.1016/j.ecolmodel.2012.12.033
Regeneration of trees in canopy gaps created by timber harvest is vital for the sustainability of many managed forests. In northern hardwood forests of the Great Lakes region of North America, regeneration density and composition are highly variable because of multiple drivers that include browsing by herbivores, seed availability, and physical characteristics of forest gaps and stands. The long-term consequences of variability in regeneration for economic productivity and wildlife habitat are uncertain. To better understand and evaluate drivers and long-term consequences of regeneration variability, simulation models that combine statistical models of regeneration with established forest growth and yield models are useful. We present the structure, parameterization, testing and use of a stochastic, regression-based compositional forest gap regeneration model developed with the express purpose of being integrated with the US Forest Service forest growth and yield model ‘Forest Vegetation Simulator’ (FVS) to form an integrated simulation model. The innovative structure of our regeneration model represents only those trees regenerating in gaps with the best chance of subsequently growing into the canopy (i.e., the tallest). Using a multi-model inference (MMI) approach and field data collected from the Upper Peninsula of Michigan we find that ‘habitat type’ (a proxy for soil moisture and nutrients), deer density, canopy openness and basal area of mature ironwood (Ostrya virginiana) in the vicinity of a gap drive regeneration abundance and composition. The best model from our MMI approach indicates that where deer densities are high, ironwood appears to gain a competitive advantage over sugar maple (Acer saccharum) and that habitat type is an important predictor of overall regeneration success. Using sensitivity analyses we show that this regeneration model is sufficiently robust for use with FVS to simulate forest dynamics over long time periods (i.e., 200 years).
Millington, J.D.A., Walters, M.B., Matonis, M.S. and Liu, J. (2013). Modelling for forest management synergies and trade-offs: Northern hardwood tree regeneration, timber and deer. Ecological Modelling, 248, 103-112.
doi: 10.1016/j.ecolmodel.2012.09.019
In many managed forests, tree regeneration density and composition following timber harvest are highly variable. This variability is due to multiple environmental drivers – including browsing by herbivores such as deer, seed availability and physical characteristics of forest gaps and stands – many of which can be influenced by forest management. Identifying management actions that produce regeneration abundance and composition appropriate for the long-term sustainability of multiple forest values (e.g., timber, wildlife) is a difficult task. However, this task can be aided by simulation tools that improve understanding and enable evaluation of synergies and trade-offs between management actions for different resources. We present a forest tree regeneration, growth, and harvest simulation model developed with the express purpose of assisting managers to evaluate the impacts of timber and deer management on tree regeneration and forest dynamics in northern hardwood forests over long time periods under different scenarios. The model couples regeneration and deer density sub-models developed from empirical data with the Ontario variant of the US Forest Service individual-based forest growth model, Forest Vegetation Simulator. Our error analyses show that model output is robust given uncertainty in the sub-models. We investigate scenarios for timber and deer management actions in northern hardwood stands for 200 years. Results indicate that higher levels of mature ironwood (Ostrya virginiana) removal and lower deer densities significantly increase sugar maple (Acer saccharum) regeneration success rates. Furthermore, our results show that although deer densities have an immediate and consistent negative impact on forest regeneration and timber through time, the non-removal of mature ironwood trees has cumulative negative impacts due to feedbacks on competition between ironwood and sugar maple. 
These results demonstrate the utility of the simulation model to managers for examining long-term impacts, synergies and trade-offs of multiple forest management actions.
I was hoping to make my first blog post of the year about the latest paper to come out of my work in Michigan. The paper is entitled, Filling the gap: A compositional gap regeneration model for managed northern hardwood forests and is forthcoming in Ecological Modelling. Unfortunately, despite being accepted for publication by the editors some time before Christmas, the manuscript seems to have got lost in the production system and has been delayed. If all goes to plan the paper will be out in time for February's blog post. Instead, today I'll highlight some other recent activities.
Between Christmas and New Year I took a bit of time to finish off a paper I was invited to submit to a special issue of Ecology and Society. The special issue will be entitled, Exploring Feedbacks in Coupled Human and Natural Systems (CHANS) and will bring together multiple different approaches for accounting for feedbacks in CHANS modelling and applications. The CHANS research framework emphasizes the importance of reciprocal human-nature interactions and the need for holistic study of humans and nature. Feedback loops can be formed in CHANS when information about one system component produces a change in a second component, which in turn provides information which produces a change in the original component.
Feedback loops between human and natural components of coupled systems are a primary reason that humans and nature must be investigated together to properly understand their temporal dynamics. However, as a geographer I'm also interested in the role space plays in system dynamics. It seems that there haven't been any broad overviews or analyses of spatial feedbacks for CHANS, so I set out to produce one with the goal of improving understanding about the issue.
After a couple of drafts with very useful comments from the editors of the special issue and colleagues George Perry and David O'Sullivan, I arrived at a manuscript entitled, Three types of spatial feedback loop in coupled human and natural systems. As the title suggests, after identifying some of the key characteristics of feedbacks, I conceptualize and describe three types of spatial feedback loop. These three types address the areal growth of system entities, the importance of transport costs across space, and how spatial patterns can create feedback loops with spatial spread processes.
I won't go into the details of these now as the manuscript is still under peer review (I think it's a bit of a Marmite manuscript - they'll either love it or hate it). However, I will highlight some of the simple spatial simulation models I used to help me conceptualize the feedbacks and which should be useful to help readers do the same (along with the real world examples I used). You can play with the simulation models yourself as they are freely available online. Download the models and their source code for use with NetLogo from http://www.openabm.org/models/eschansfeedback/tag, or use them online without downloading NetLogo from http://modelingcommons.org/tags/one_tag/166. I think these simple spatial simulations should be far more helpful for understanding spatio-temporal dynamics - inherent to spatial feedbacks - than the figures I present in the paper (like that below). See what you think. We'll find out whether the reviewers love it or hate it in a month or two.
Since the New Year, I've spent most of my time working on undergraduate modules I'll be teaching later this term. In particular, I'm developing a new module named Spatial Data and Mapping for the Principles of Geographical Inquiry course. In the module I'll introduce students to some of the methods, tools and technologies available to collect and present spatial data. These include GPS and remote sensing (e.g., orthophotos) on the collection side and EDINA Digimap and ArcMap on the presentation side of things. Alongside lectures, there will be plenty of opportunity for students to use these tools as they will collect their own data from London's Southbank which they will then use to create a digital map. It's the first time running the module so there may be some teething issues, but hopefully the students will find it interesting and useful for their future studies.
I'm also teaching a PhD-level short course for the KISS-DTC entitled, Social Simulation. The course will provide an introduction to the use of computer simulation methods - notably agent-based modelling - for questions germane to social scientists. I won't go into detail on that now, maybe in future.
Finally, I'll just highlight some new Urlist lists I've been making as resources for myself and students (and maybe you?). Urlist is a collaboration tool to collect, organize and share lists of links which I've found quite handy. I've started lists on Open Data (freely available for analysis), Spatial Data and Geodata resources and tools, and Valuation of Ecosystem Services. The Open Data list is collaborative so anyone can contribute relevant links - if you know good Open Data sources online that aren't listed there please feel free to add!
Nearing the end of 2012 and the total number of posts on this blog has been even fewer this year than in 2011. At least I have been tweeting a bit more of late. Here's a quick round-up of activities and publications since my last post with a look at some of what's going on in 2013.
The Geoforum paper on narrative explanation of simulation modelling is now officially published, as is the first of two Ecological Modelling papers on the Michigan forest modelling work. Citations and abstracts for both are below, and are included on my updated publications list. I'll post more details and info on each in the New Year (promise!). I'll likely wait to summarise the Michigan paper until the second paper of that couplet is published - hopefully that won't be too long as it's now going through the proofs stage.
The proceedings for the iEMSs conference I attended in Leipzig, Germany, this summer are now online. That means that the two papers I presented there are also available. One paper was on the use of social psychology theory for modelling farmer decision-making, and the model I discuss in that paper is available for you to examine. The other paper was a standpoint contribution to a workshop on the place of narrative for explaining decision-making in agent-based models. From that workshop we're working on a paper to be published in Environmental Modelling and Software about model description methods for agent-based models. More on that next year too hopefully.
In one of my earlier posts this year I talked about agent-based modelling of spatial patterns of school choice (I'll get the images for that post online again soon... maybe). I've managed to write up the early stages of that work and have submitted it to JASSS. We'll see how that goes down. I hope to continue on that work in the new year also, possibly while in New Zealand at the University of Auckland. I'll be in Auckland visiting and working with George Perry and David O'Sullivan, with whom I published the recent Geoforum paper (highlighted above). On the way to New Zealand I'll be stopping off in Los Angeles for the Association of American Geographers conference which I haven't been to previously and which should be interesting.
So that's it for 2012. A New Year's resolution for 2013 - post at least once every month on this blog! Especially from Down Under.
Happy Holidays!
Abstracts
Millington, J.D.A., O'Sullivan, D. and Perry, G.L.W. (2012). Model histories: Narrative explanation in generative simulation modelling. Geoforum, 43, 1025-1034.
The increasing use of computer simulation modelling brings with it epistemological questions about the possibilities and limits of its use for understanding spatio-temporal dynamics of social and environmental systems. These questions include how we learn from simulation models and how we most appropriately explain what we have learnt. Generative simulation modelling provides a framework to investigate how the interactions of individual heterogeneous entities across space and through time produce system-level patterns. This modelling approach includes individual- and agent-based models and is increasingly being applied to study environmental and social systems, and their interactions with one another. Much of the formally presented analysis and interpretation of this type of simulation resorts to statistical summaries of aggregated, system-level patterns. Here, we argue that generative simulation modelling can be recognised as being ‘event-driven’, retaining a history in the patterns produced via simulated events and interactions. Consequently, we explore how a narrative approach might use this simulated history to better explain how patterns are produced as a result of model structure, and we provide an example of this approach using variations of a simulation model of breeding synchrony in bird colonies. This example illustrates not only why observed patterns are produced in this particular case, but also how generative simulation models function more generally. Aggregated summaries of emergent system-level patterns will remain an important component of modellers’ toolkits, but narratives can act as an intermediary between formal descriptions of model structure and these summaries. Using a narrative approach should help generative simulation modellers to better communicate the process by which they learn so that their activities and results can be more widely interpreted. 
In turn, this will allow non-modellers to foster a fuller appreciation of the function and benefits of generative simulation modelling.
Millington, J.D.A., Walters, M.B., Matonis, M.S. and Liu, J. (2013). Modelling for forest management synergies and trade-offs: Northern hardwood tree regeneration, timber and deer. Ecological Modelling, 248, 103-112.
In many managed forests, tree regeneration density and composition following timber harvest are highly variable. This variability is due to multiple environmental drivers – including browsing by herbivores such as deer, seed availability and physical characteristics of forest gaps and stands – many of which can be influenced by forest management. Identifying management actions that produce regeneration abundance and composition appropriate for the long-term sustainability of multiple forest values (e.g., timber, wildlife) is a difficult task. However, this task can be aided by simulation tools that improve understanding and enable evaluation of synergies and trade-offs between management actions for different resources. We present a forest tree regeneration, growth, and harvest simulation model developed with the express purpose of assisting managers to evaluate the impacts of timber and deer management on tree regeneration and forest dynamics in northern hardwood forests over long time periods under different scenarios. The model couples regeneration and deer density sub-models developed from empirical data with the Ontario variant of the US Forest Service individual-based forest growth model, Forest Vegetation Simulator. Our error analyses show that model output is robust given uncertainty in the sub-models. We investigate scenarios for timber and deer management actions in northern hardwood stands for 200 years. Results indicate that higher levels of mature ironwood (Ostrya virginiana) removal and lower deer densities significantly increase sugar maple (Acer saccharum) regeneration success rates. Furthermore, our results show that although deer densities have an immediate and consistent negative impact on forest regeneration and timber through time, the non-removal of mature ironwood trees has cumulative negative impacts due to feedbacks on competition between ironwood and sugar maple. 
These results demonstrate the utility of the simulation model to managers for examining long-term impacts, synergies and trade-offs of multiple forest management actions.
This week on the SIMSOC listserv was a request from Annie Waldherr & Nanda Wijermans for modellers of social systems to complete a short questionnaire on the sort of criticism they receive. The questionnaire is only two short questions, one asking what field you are in and the other asking you to 'Describe the criticism you receive. For instance, recall the questions or objections you got during a talk you gave. Feel free to address several points.'
Here was my quick response to the second question:
1) Too many 'parameters' in agent-based models (ABM) make them difficult to analyse rigorously and make it hard to fully appreciate their uncertainty (although I think this kind of statement highlights the misunderstanding some have of how ABM can be structured - often models of this type rely more on rules of interaction between agents than on individual parameters).
2) The results of models are seen as being driven more by the assumptions of the modeller than by the state of the real world. That is, modellers may learn a lot about their models but not much about the real world (see the similar point made by Grimm [1999] in Ecological Modelling 115).
I think it would have been nice to have a third question offering an opportunity to suggest how we can, or should, respond to these criticisms. Here's what I would have written if that third question was there:
To address point 1) above we need to make sure that we:
i) document our models comprehensively (e.g., via ODD) so that others understand model structure and can identify likely important parameters/rules and assumptions;
ii) show that the model parameter space has been widely explored (e.g., via use of techniques like Latin hypercube sampling).
To address 2) we need to make sure that:
iii) when documenting our models (see i) we fully justify the rationale of our models, hopefully with reference to real world data;
iv) we acknowledge and emphasise that the current state of ABM means that usually they can be no more than metaphors or sophisticated analogies for the real world but that they are useful for providing alternative means to think about social phenomena (i.e., they have heuristic properties).
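To illustrate point ii), here's a minimal sketch of Latin hypercube sampling written in plain Python. The parameter names in the usage line are hypothetical ABM parameters, not taken from any particular model; in practice you'd likely use a library implementation (e.g. in SciPy), but the idea is just that each parameter's range is divided into equal strata and each stratum is sampled exactly once:

```python
import random

def latin_hypercube(n_samples, bounds, seed=42):
    """Latin hypercube sample: each parameter range in `bounds`
    (a list of (lo, hi) pairs) is split into n_samples equal strata,
    and the strata are shuffled independently for each parameter so
    that every stratum is visited exactly once per dimension."""
    rng = random.Random(seed)
    dims = len(bounds)
    samples = [[0.0] * dims for _ in range(n_samples)]
    for d, (lo, hi) in enumerate(bounds):
        strata = list(range(n_samples))
        rng.shuffle(strata)
        width = (hi - lo) / n_samples
        for i, stratum in enumerate(strata):
            # one point drawn uniformly within this sample's stratum
            samples[i][d] = lo + (stratum + rng.random()) * width
    return samples

# e.g. ten parameter sets over three hypothetical ABM parameters
# (movement rate, interaction radius, memory length)
params = latin_hypercube(10, [(0.0, 1.0), (1.0, 5.0), (1.0, 20.0)])
```

Each of the ten parameter sets would then be run through the model (usually with replicates), giving much better coverage of the parameter space than varying one parameter at a time.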
If you're working in this area go and share your thoughts by completing the short questionnaire, or leaving comments below.
A couple of weeks ago I visited King's Department of Education to give a seminar entitled Agent-based simulation for distance-based school allocation policy analysis. The aim was to introduce agent-based modelling to those unfamiliar with it and hopefully open a debate on how it might be used in future education research. This all came about as I've been working on modelling the drivers and consequences of school choice with Profs Chris Hamnett and Tim Butler here in King's Geography Department.
In their recent research, Chris and Tim looked at the role geography plays in educational inequalities in East London. Many UK local education authorities (LEAs) use spatial distance as a key criterion in their policy for allocating school places: people who live closer to a school get allocated to it before those who live farther away. This is necessary because it's often the case that more people want to send their children to a school than there are places available at it. For example, you can read about the criteria the Hackney LEA uses in their brochure for 2012.
Using data from several LEAs, Chris and Tim showed empirically how this distance criterion is related to school popularity. School popularity is indicated for example by the ratio of school applicants to the number of places available at the school (A:P) - some schools have very high ratios (e.g. up to 8 applications per place) and others very low (e.g. down to around one application per place). Furthermore, this spatial allocation criterion is an important influence on parents’ strategies for school applications, dependent on the location of their home relative to schools and their ability to move home.
These allocation rules, combined with parents' strategies, produce patterns and relationships between schools' GCSE achievement levels, A:P ratio and the maximum distance that allocated pupils live from the school. In Barking, for example, we see in the figure below that more popular schools have higher percentages of pupils achieving five GCSEs with grades A* - C, and that these same popular schools also have the smallest maximum distances (i.e. pupils generally live very close to the school).
This spatial pattern can also be seen when we look at maps of the locations of successful and unsuccessful applicants to popular and less popular schools in Hackney. For example, looking at the figure below (found in Hamnett and Butler 2011) we can see how successful applicants to The Bridge Academy (a popular school) are more tightly clustered around it than those for Clapton Girls' Technology College (not such a popular school).
The geography of this school allocation policy, combined with differences in parents’ circumstances, suggests this issue is a prime candidate for study using agent-based modelling. Agent-based simulation modelling might be useful here because it provides a means to represent interactions between individual actors with different attributes (in this case schools and parents) across space and time. Once the simulation model structure (e.g. rules of interactions between agents) has been established, it can then be used to examine the potential effects of things like opening or closing schools (i.e. changes in external conditions) or changes in school allocation policy rules or parents’ application strategies (i.e. internal model relationships and rules).
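To give a flavour of how such a rule might look in code, here's a minimal one-round sketch in Python of a distance-based allocation: each (simulated) pupil applies to their nearest school, and oversubscribed schools admit the closest applicants first. This is my own hypothetical simplification for illustration - real LEA criteria (sibling links, catchments, appeals) and the full model's house-moving and cohort dynamics are not represented:

```python
import math

def distance(a, b):
    """Straight-line distance between two (x, y) locations."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def allocate_places(schools, pupils):
    """One-round distance-based allocation (hypothetical sketch).
    schools: {name: {'loc': (x, y), 'places': int}}
    pupils:  {name: (x, y)}
    Returns {pupil: school_name or None if unallocated}."""
    # each pupil applies to their nearest school
    applications = {}
    for p, loc in pupils.items():
        choice = min(schools, key=lambda s: distance(loc, schools[s]['loc']))
        applications.setdefault(choice, []).append(p)
    # oversubscribed schools admit the closest applicants first
    allocation = {p: None for p in pupils}
    for s, applicants in applications.items():
        applicants.sort(key=lambda p: distance(pupils[p], schools[s]['loc']))
        for p in applicants[:schools[s]['places']]:
            allocation[p] = s
    return allocation
```

Even this stripped-down rule generates the clustering of successful applicants around popular schools described above; the interesting dynamics come from letting parents anticipate the rule and adjust their locations and strategies in response.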
I developed an initial 'model' as a proof of concept, which you can try out yourself. Things have progressed from that proof of concept model, and the model now represents changes in cohorts of school applicants and pupils through time, including the potential for parents to move house to be more likely to get their child into a desired school.
In the seminar with the Department of Education guys I presented some output from the recent modelling. I showed how the abstract model, with relatively few and simple assumptions, can start from random conditions and reproduce empirical spatial patterns in school applications and attainment outcomes like those described above (see the figure below).
I also presented early results from using the simulation model to explore implications of potential policy alternatives (such as closing failing schools). These ideas were generally welcomed in the seminar but there were some interesting questions about what the model assumptions might entail for maintaining existing policy assumptions and intentions (what we might term the rhetoric of modelling).
I'm exploring some of these questions now, including for example issues of how we define a 'good' school and how parents' school application strategies might change as allocation rules change. These will feed into a research manuscript that I'll continue to work on with Chris and Tim.
Although I've been working on new ideas since leaving Michigan and returning to London about a year ago, there's still lots to do examining alternative forest management strategies.
Several years ago we set out to develop a simulation model that could be used to investigate the effects of interactions between timber harvest and deer browse disturbances on economic productivity and wildlife habitat. We've already published several papers on this work, but just before Christmas we submitted a manuscript to Ecological Modelling entitled 'Modelling for forest management synergies and trade-offs: Tree regeneration, timber and wildlife'. In the manuscript we report error analyses of the full simulation model (which uses the USFS Forest Vegetation Simulator) and use the model to investigate scenarios of different timber and deer management actions. Our results indicate that greater harvest of commercially low-value ironwood and lower deer densities significantly increase sugar maple regeneration success over the long term.
I expect we'll also report some of these results at the Fourth Forest Vegetation Simulator (FVS) Conference to be held in April this year in Fort Collins, CO. Our abstract, entitled 'Investigating combined long-term effects of variable tree regeneration and timber management on forest wildlife and timber production using FVS', has been accepted for oral presentation. It would be great to be there myself to present the paper and discuss things with other FVS experts, but I'm not sure if that will be possible. If it's not, Megan Matonis will present as, handily, she's currently doing her PhD in that neck of the woods at Colorado State University.
In the meantime, Megan and I are in the process of finishing off a different manuscript describing the mesic conifer planting experiment we did in Michigan. In that experiment we planted seedlings of white pine (Pinus strobus), hemlock (Tsuga canadensis), and white spruce (Picea glauca) in northern hardwood stands with variable deer densities and then monitored browse on the seedlings over two years. We found that damage to pine and hemlock seedlings decreased with increasing snow depth, and our data suggest a positive relationship between hemlock browse and deer density. These results suggest that hemlock restoration efforts will not be successful without protection from deer. Hopefully we'll submit the manuscript, possibly to the Northern Journal of Applied Forestry, in the next month or so.
All of this work has been pursued with management in mind, so it was nice this week to receive a call from Bob Doepker, a manager at the Michigan Department of Natural Resources with whom we worked to co-ordinate data collection and establish key research questions. Bob had some questions about the details and implications of our previous findings for deer habitat, tree regeneration and how they should be managed. It was good to catch up, and no doubt our ongoing work will continue to contribute to contemporary management understanding and planning.
So term is drawing to an end. There's lots been going on since I last posted here and I'll write a full update of that over the Christmas break. I'll just highlight here quickly that the agent-based modelling book I contributed to has now been published.
Agent-Based Models of Geographical Systems, is edited by Alison Heppenstall, Andrew Crooks, Linda See and Mike Batty and presents a comprehensive collection of papers on the background, theory, technical issues and applications of agent-based modelling (ABM) in geographical systems. David O'Sullivan, George Perry, John Wainwright and I put together a paper entitled 'Agent-based models – because they’re worth it?' that falls into the 'Principles and Concepts of Agent-Based Modelling' section of the book. To give an idea of what the paper is about, here's the opening paragraph:
"In this chapter we critically examine the usefulness of agent-based models (ABMs) in geography. Such an examination is important because although ABMs offer some advantages when considered purely as faithful representations of their subject matter, agent-based approaches place much greater demands on computational resources, and on the model-builder in their requirements for explicit and well-grounded theories of the drivers of social, economic and cultural activity. Rather than assume that these features ensure that ABMs are self-evidently a good thing – an obviously superior representation in all cases – we take the contrary view, and attempt to identify the circumstances in which the additional effort that taking an agent-based approach requires can be justified. This justification is important as such models are also typically demanding of detailed data both for input parameters and evaluation and so raise other questions about their position within a broader research agenda."
In the paper we ask:
Are modellers agent-based because they should be or because they can be?
What are agents? And what do they do?
So when do agents make a difference?
To summarise our response to this last question we argue:
"Where agents’ preferences and (spatial) situations differ widely, and where agents’ decisions substantially alter the decision-making contexts for other agents, there is likely to be a good case for exploring the usefulness of an agent-based approach. This argument focuses attention on three model features: heterogeneity of the decision-making context of agents, the importance of interaction effects, and the overall size and organization of the system."
Hopefully people will find this, and the rest of the book useful! You can check out the full table of contents here.
These last few days I've been up in Edinburgh visiting folks at the Forestry Commission's Northern Research Station to discuss the socio-ecological modelling of potential woodland creation I've been working on recently. I also got to talk with Derek Robinson at the University of Edinburgh about some of these issues. Everyone seemed interested in what I've been doing, particularly with the ideas I've been bouncing around relating to the work Burton and Wilson have been doing on post-productivist farmer self-identities, how these self-identities might change, how they might influence adoption of woodland planting and how we might model that. For example, I think an agent-based simulation approach might be particularly useful for exploring what Burton and Wilson term the 'temporal discordance' in the transition towards a post-productivist agricultural regime. And I also think there's potential to tie it in with work like that my former CSIS colleague Xiaodong Chen has been doing using agent-based approaches to model the effects of social norms on enrollment in payments for ecosystem services (such as woodland creation).
I was away on holiday for a couple of weeks after the RGS. On returning, I've been preparing for King's Geography tutorials with the incoming first year undergraduates. The small groups we'll be working in will allow us to discuss and explore critical thinking and techniques about issues and questions in physical geography. Looking forward to a busy autumn term!
I just updated the Philosophy of Modelling page on my website. It's not anything too detailed but I was prompted to add something by my activities over the last few weeks. I've been working on both making progress with my 'modelling narratives' project and a paper I've started working on with John Wainwright exploring the epistemological roles agent-based simulation might play beyond mathematical and statistical modelling (expected to appear in the new-ish journal Dialogues in Human Geography).
Model Histories: The generative properties of agent-based modelling
Fri 2 Sept, Session 4, Skempton Building, Room 060b
James Millington (King's College London), David O'Sullivan (University of Auckland, New Zealand), George Perry (University of Auckland, New Zealand)
Novels, Kundera has suggested, are a means to explore unrealised possibilities and potential futures, to ask questions and investigate scenarios, starting from the present state of the world as we observe it – the “trap the world has become”. In this paper, we argue that agent-based simulation models (ABMs) are much like Kundera’s view of novels, having generative properties that provide a means to explore alternative possible futures (or pasts) by allowing the user to investigate the likely results of causal mechanisms given pre-existing structures and in different conditions. Despite the great uptake in the application of ABMs, many have not taken full advantage of the representational and explanatory opportunities inherent in ABMs. Many applications have relied too much on 'statistical portraits' of aggregated system properties at the expense of more detailed stories about individual agent context and particular pathways from initial to final conditions (via heterogeneous agent interactions). We suggest that this generative modelling approach allows the production of narratives that can be used to i) demonstrate and illustrate the significance of the mechanisms underlying emergent patterns, ii) inspire users to reflect more deeply on modelled system properties and potential futures, and iii) provide a means to reveal the model building process and the routes to discovery that lie therein. We discuss these issues in the context of, and using examples from, the increasing number of studies using ABMs to investigate human-environment interactions in geography and the environmental sciences.
Trees, Birds and Timber: Coordinating Long-term Forest Management
Fri 2 Sept, Session 4, Skempton Building, Room 060b
James Millington (King's College London), Megan Matonis (Colorado State University, United States), Michael Walters (Michigan State University, United States), Kimberly Hall (The Nature Conservancy, United States), Edward Laurent (American Bird Conservancy, United States), Jianguo Liu (Michigan State University, United States)
Forest structure is an important determinant of habitat use by songbirds, including species of conservation concern. In this paper, we investigate the combined long-term impacts of variable tree regeneration and timber management on stand structure, bird occupancy probabilities, and timber production in the northern hardwood forests of Michigan's Upper Peninsula. We develop species-specific relationships between bird occupancy and forest stand structure from field data. We integrate these bird-forest structure relationships with a forest model that couples a forest-gap tree regeneration submodel developed from our field data with the US Forest Service Forest Vegetation Simulator (Ontario variant). When simulated over a century, we find that higher tree regeneration densities ensure conditions allowing larger harvests of merchantable timber and reduce the impacts of timber harvest on bird forest-stand occupancy probability. When regeneration is poor (e.g., 25% or less of trees succeed in regenerating), timber harvest prescriptions have a greater relative influence on bird species occupancy probabilities than on the volume of merchantable timber harvested. Our results imply that forest and wildlife managers need to work together to ensure tree regeneration and prevent detrimental impacts on timber output and habitat for avian species over the long-term.
Where tree regeneration is currently poor (e.g., due to deer herbivory), forest and wildlife managers should pay particularly close attention to the long-term impacts of timber harvest prescriptions on bird species.
Since I last posted, THREE of the papers I've mentioned here previously have become available online! Here are their details, abstracts are below. Email me if you can't get hold of them yourself.
Millington, J.D.A., Walters, M.B., Matonis, M.S., Laurent, E.J., Hall, K.R. and Liu, J. (2011). Combined long-term effects of variable tree regeneration and timber management on forest songbirds and timber production. Forest Ecology and Management, 262, 718-729. doi: 10.1016/j.foreco.2011.05.002
Millington, J.D.A. and Perry, G.L.W. (2011). Multi-model inference in biogeography. Geography Compass, 5(7), 448-530. doi: 10.1111/j.1749-8198.2011.00433.x
Millington, J.D.A., Demeritt, D. and Romero-Calcerrada, R. (2011). Participatory evaluation of agent-based land use models. Journal of Land Use Science, 6(2-3), 195-210. doi: 10.1080/1747423X.2011.558595
Millington, J.D.A. et al. (2011). Combined long-term effects of variable tree regeneration and timber management on forest songbirds and timber production. Forest Ecology and Management, 262, 718-729.
Abstract
The structure of forest stands is an important determinant of habitat use by songbirds, including species of conservation concern. In this paper, we investigate the combined long-term impacts of variable tree regeneration and timber management on stand structure, songbird occupancy probabilities, and timber production in northern hardwood forests. We develop species-specific relationships between bird species occupancy and forest stand structure for canopy-dependent black-throated green warbler (Dendroica virens), eastern wood-pewee (Contopus virens), least flycatcher (Empidonax minimus) and rose-breasted grosbeak (Pheucticus ludovicianus) from field data collected in northern hardwood forests of Michigan’s Upper Peninsula. We integrate these bird-forest structure relationships with a forest simulation model that couples a forest-gap tree regeneration submodel developed from our field data with the US Forest Service Forest Vegetation Simulator (Ontario variant). Our bird occupancy models are better than null models for all species, and indicate species-specific responses to management-related forest structure variables. When simulated over a century, higher overall tree regeneration densities and greater proportions of commercially high value, deer browse-preferred, canopy tree Acer saccharum (sugar maple) than low-value, browse-avoided subcanopy tree Ostrya virginiana (ironwood) ensure conditions allowing larger harvests of merchantable timber and had greater impacts on bird occupancy probability change. Compared to full regeneration, no regeneration over 100 years reduces merchantable timber volumes by up to 25% and drives differences in bird occupancy probability change of up to 30%.
We also find that harvest prescriptions can be tailored to affect both timber removal volumes and bird occupancy probability simultaneously, but only when regeneration is adequate. When regeneration is poor (e.g., 25% or less of trees succeed in regenerating), timber harvest prescriptions have a greater relative influence on bird species occupancy probabilities than on the volume of merchantable timber harvested. However, regeneration density and composition, particularly the density of Acer saccharum regenerating, have the greatest long-term effects on canopy bird occupancy probability. Our results imply that forest and wildlife managers need to work together to ensure tree regeneration density and composition are adequate for both timber production and the maintenance of habitat for avian species over the long-term. Where tree regeneration is currently poor (e.g., due to deer herbivory), forest and wildlife managers should pay particularly close attention to the long-term impacts of timber harvest prescriptions on bird species.
Millington, J.D.A. and Perry, G.L.W. (2011) Multi-model inference in biogeography. Geography Compass 5(7), 448-530. Abstract Multi-model inference (MMI) aims to contribute to the production of scientific knowledge by simultaneously comparing the evidence data provide for multiple hypotheses, each represented as a model. With roots in the method of ‘multiple working hypotheses’, MMI techniques have been advocated as an alternative to null-hypothesis significance testing. In this paper, we review two complementary MMI techniques – model selection and model averaging – and highlight examples of their use by biogeographers. Model selection provides a means to simultaneously compare multiple models to evaluate how well each is supported by data, and potentially to identify the best supported model(s). When model selection indicates no clear ‘best’ model, model averaging is useful to account for parameter uncertainty. Both techniques can be implemented in information-theoretic and Bayesian frameworks and we outline the debate about interpretations of the different approaches. We summarise recommendations for avoiding philosophical and methodological pitfalls, and suggest when each technique is best used. We advocate a pragmatic approach to MMI, one that emphasises the ‘thoughtful, science-based, a priori’ modelling that others have argued is vital to ensure valid scientific inference.
Millington, J.D.A., Demeritt, D. and Romero-Calcerrada, R. (2011) Participatory evaluation of agent-based land use models. Journal of Land Use Science 6(2-3), 195-210. doi:10.1080/1747423X.2011.558595 Abstract A key issue facing contemporary agent-based land-use models (ABLUMs) is model evaluation. In this article, we outline some of the epistemological problems facing the evaluation of ABLUMs, including the definition of boundaries for modelling open systems. In light of these issues and given the characteristics of ABLUMs, participatory model evaluation by local stakeholders may be a preferable avenue to pursue. We present a case study of participatory model evaluation for an agent-based model designed to examine the impacts of land-use/cover change on wildfire regimes for a region of Spain. Although model output was endorsed by interviewees as credible, several alterations to model structure were suggested. Of broader interest, we found that some interviewees conflated model structure with scenario boundary conditions. If an interactive participatory modelling approach is not possible, an emphasis on ensuring that stakeholders understand the distinction between model structure and scenario boundary conditions will be particularly important.
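The model selection technique reviewed in the multi-model inference paper above is commonly implemented via Akaike weights, which convert AIC scores for competing models into relative levels of support. A minimal sketch (the AIC scores below are hypothetical, for illustration only):

```python
import math

def akaike_weights(aics):
    """Convert a list of AIC scores (one per candidate model) into
    Akaike weights: the relative support the data give each model."""
    best = min(aics)
    deltas = [a - best for a in aics]          # delta-AIC relative to the best model
    rel = [math.exp(-d / 2) for d in deltas]   # relative likelihoods
    total = sum(rel)
    return [r / total for r in rel]

# Hypothetical AIC scores for three competing models; the weights sum to 1
weights = akaike_weights([100.0, 102.0, 110.0])
```

When no single weight dominates, the paper's second technique, model averaging, combines predictions across models using these weights.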
Changing the 'Targets and Timetables' Climate Change Narrative
Earlier this week I was in Leipzig, Germany, to meet the Ecological Modelling research group at the Helmholtz Centre for Environmental Research (UFZ) and one of my PhD supervisors, Dr. George Perry. While there I was lucky to meet and talk with some renowned ecological modellers: Thorsten Wiegand, whose work includes spatial point process modelling (although some of his discussion with George about that was a bit technical for me!); Volker Grimm, proponent of the 'Pattern-Oriented Modelling' approach (look out for a new review of this in Phil. Trans. of the Royal Society in the near future); and Andreas Huth, notable forest dynamics modeller.
At UFZ I gave a presentation entitled "Future Forests: Managing and Creating Forests for Biodiversity, Recreation, Timber and Carbon" in which I talked about some of the work I did in Michigan and the new project I'm working on now in the UK. The talk seemed to go down well and the research group had some very good questions, both about technical aspects of the modelling and the issues it is applied to (i.e. forest ecosystem management and woodland creation, including the Woodland Carbon Code). Thanks to Juergen Groeneveld for organising this (and his hospitality at UFZ).
Another interesting activity at UFZ was hearing Roger Pielke Jr. talk about the need to 'change the climate change narrative'. In his talk he suggested that all carbon policy can be boiled down to a single sentence:
'people engage in economic activity that uses energy from carbon emitting generation'.
He emphasised that he thinks the "Targets and Timetables" approach to reducing anthropogenic carbon emissions is flawed. As an example, he used the case of the UK and the Climate Change Act of 2008, which set the aim of an 80% cut in the country's carbon emissions by 2050 compared to 1990 levels, with an intermediate target of 34% by 2020. However, Pielke argues that given the 'iron law' of climate policy (that we cannot mitigate emissions by reducing GDP, both because people will pay only so much for mitigation now, and because increasing GDP is seen as a virtue by way of its effects on poverty reduction) we cannot hit these types of targets.
Previous decarbonisation of the UK economy has been achieved by replacing the contribution to GDP from high-emitting manufacturing with low-emitting financial services. He wonders how long this can go on, and presented his estimate that for the UK to actually hit its 2020 target it would have to build more than 40 nuclear power stations in the next 10 years. In this context, he suggested that the building of a third runway at Heathrow was an insignificant concern (in terms of the new emissions it would generate) when there are still 1.5 billion people globally who do not have access to electricity. His argument is that we do not know how to achieve the targets and timetables we have set ourselves.
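Pielke's one-sentence summary of carbon policy is essentially a prose rendering of the Kaya identity, which decomposes emissions into population, GDP per capita, energy intensity of GDP and carbon intensity of energy. The arithmetic behind the 'iron law' argument can be sketched as below; note the growth factors are purely illustrative assumptions, not actual UK figures:

```python
def required_intensity_cut(emissions_cut, pop_growth, gdp_growth):
    """Fractional cut needed in the technology terms of the Kaya identity
    (energy/GDP * CO2/energy), given a target fractional emissions cut and
    assumed multiplicative growth in population and GDP per capita."""
    remaining = (1 - emissions_cut) / (pop_growth * gdp_growth)
    return 1 - remaining

# Illustrative only: an 80% emissions cut alongside 10% population growth
# and 50% GDP-per-capita growth would require roughly an 88% cut in the
# combined energy and carbon intensity of the economy.
cut = required_intensity_cut(0.80, 1.10, 1.50)
```

This is why the iron law bites: if GDP and population cannot be asked to shrink, the whole of the cut must come from the intensity terms, that is, from transforming how energy is generated and used.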
Pielke argues that we must change the climate change narrative from
"We need to use less energy and fossil fuels are cheap"
to
"We need more energy and fossil fuels are too expensive".
This would allow those 1.5 billion people to access the electricity they aspire to whilst driving the growth of alternative, cleaner sources of energy. I like this argument - and his one about making small steps towards these changes to reach bigger ones - but it seems to run counter to his point about the insignificance of another runway at Heathrow (which, by increasing capacity for flights, would continue the narrative of cheap fossil-fuelled energy). Opening a third runway but only allowing non-fossil-fuelled aeroplanes to use it would be more consistent with the change in narrative he argues for.
And of course, while at UFZ, George Perry and I took the opportunity to discuss past, current and ongoing work over beers and dinner. Mainly we discussed ideas surrounding the narrative properties of generative simulation models, on which I plan to submit a manuscript to a journal for publication soon. But we also thought about other areas of research, including land use modelling (continuing our work in Spain) and landscape disturbance-succession modelling (including the use of the LFSM I've developed with paleo-estimates of wildfire regimes).
All-in-all a very interesting and productive trip!