Beven begins with a general introduction to the modeling process, covering parameters, variables, and boundary conditions; problems of scale and incommensurability; model spaces and model ensembles; and the uses of models in prediction and simulation. He argues for the importance of dealing with uncertainty and surveys the different kinds of uncertainty and methods for estimating them.
Chapter two is something of an epistemological and philosophical digression. Beven argues for a kind of pragmatic realism, or a "critical realism" following Bhaskar. He explores problems with model validation, falsification, and Bayesian confirmation (evaluating predictions against observations). He explains why adding explanatory depth to a model is not always good (over-parameterisation). And he suggests that, though unexpected extreme events pose a challenge for models, they are also opportunities, since we learn the most when our models fail.
For simulations without historical data to condition them, "the results [of any sensitivity or uncertainty analysis] will always be conditional on the assumptions made". Independence is often assumed in practice, but we may have to deal with covariation between parameters and variables. Beven presents some ways of sampling the model space (Monte Carlo, Latin hypercube, and copula sampling), of doing sensitivity analysis, and of using simplified models to explore parameter space and interpolate between results. Where no probabilities can be assigned to uncertainties, the only option may be to model multiple scenarios separately.
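To make the sampling idea concrete (my sketch, not an example from the book), here is a minimal Latin hypercube sample of a hypothetical two-parameter model space using SciPy's `qmc` module; the parameter names and ranges are invented for illustration.

```python
import numpy as np
from scipy.stats import qmc

# Latin hypercube sample of a hypothetical two-parameter model space:
# a roughness coefficient and a storage capacity (ranges invented).
sampler = qmc.LatinHypercube(d=2, seed=42)
unit_sample = sampler.random(n=100)    # 100 points in the unit square

lower = [0.01, 10.0]                   # [roughness, storage_mm]
upper = [0.10, 200.0]
params = qmc.scale(unit_sample, lower, upper)

# Each row is one candidate parameter set to run through the model.
print(params[:5])
```

Unlike plain Monte Carlo sampling, the Latin hypercube stratifies each dimension, so even a modest number of runs covers the parameter ranges evenly.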
When historical data are available, there arises the "inverse problem" of estimating model parameters. There may still be no "right" answer here, but "we can use the data available to refine and hopefully constrain our estimates of the uncertainty associated with any model predictions". Statistical methods here include non-linear regression and formal Bayesian methods. The latter seem an obvious choice, but they require a formal likelihood measure and, when model structural errors are inseparable from residual errors, can result in over-conditioning; they also lack any mechanism for rejecting models outright.
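As a toy illustration of the formal Bayesian route (my construction, not Beven's): estimating a single recession parameter from noisy observations under a Gaussian error model, with a flat prior evaluated on a grid. The model and all numbers are invented.

```python
import numpy as np

# Hypothetical model: exponential recession y(t) = y0 * exp(-k * t).
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
obs = np.array([10.0, 7.2, 5.3, 3.9, 2.7])   # invented observations
y0, sigma = 10.0, 0.3                        # known initial value, error sd

# Grid over the unknown recession constant k with a flat prior.
k_grid = np.linspace(0.05, 0.6, 500)
dk = k_grid[1] - k_grid[0]
log_like = np.array([
    -0.5 * np.sum(((obs - y0 * np.exp(-k * t)) / sigma) ** 2)
    for k in k_grid
])
post = np.exp(log_like - log_like.max())
post /= post.sum() * dk                      # normalised posterior density

k_mean = np.sum(k_grid * post) * dk
print(f"posterior mean k = {k_mean:.3f}")
```

The hard part in practice, as Beven stresses, is that the Gaussian error assumption baked into the likelihood may be wrong in ways that cannot be separated from errors in the model structure itself.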
Alternative tools include fuzzy sets and what Beven calls Generalised Likelihood Uncertainty Estimation (GLUE). The latter involves a (possibly informal) likelihood measure for evaluating model runs, criteria for rejecting non-behavioural models outright, and the generation of random realisations of models consistent with the chosen parameter and input variable distributions. With a formal measure this is equivalent to the Bayesian approach, but Beven argues that there is a case for "common-sense model evaluations and informal likelihood methods"; GLUE can help avoid over-conditioning and has, over formal Bayesian methods, "the advantage that the equifinality of models as hypotheses, non-stationarities in the residual errors, and model failures are more clearly revealed".
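A bare-bones sketch of the GLUE recipe as I read it (illustrative only; the synthetic model, the Nash-Sutcliffe efficiency as informal likelihood, and the 0.7 behavioural threshold are all assumptions of the example): sample parameter sets, score each run, reject non-behavioural runs, and weight the survivors to form prediction bounds.

```python
import numpy as np

rng = np.random.default_rng(1)

def run_model(k, t, y0=10.0):
    """Stand-in for an environmental model (invented for the example)."""
    return y0 * np.exp(-k * t)

t = np.linspace(0.0, 4.0, 20)
obs = run_model(0.3, t) + rng.normal(0.0, 0.3, t.size)  # synthetic "observations"

# 1. Monte Carlo sample of the parameter space.
k_samples = rng.uniform(0.05, 0.6, 2000)
sims = np.array([run_model(k, t) for k in k_samples])

# 2. Informal likelihood: Nash-Sutcliffe efficiency of each run.
nse = 1.0 - ((sims - obs) ** 2).sum(axis=1) / ((obs - obs.mean()) ** 2).sum()

# 3. Reject non-behavioural runs (the threshold is a modelling choice).
behavioural = nse > 0.7
weights = nse[behavioural] / nse[behavioural].sum()

# 4. Likelihood-weighted prediction bounds from the behavioural ensemble.
finals = sims[behavioural, -1]
order = np.argsort(finals)
cdf = np.cumsum(weights[order])
lo = finals[order][np.searchsorted(cdf, 0.05)]
hi = finals[order][np.searchsorted(cdf, 0.95)]
print(f"90% GLUE bounds on final value: [{lo:.2f}, {hi:.2f}]")
```

The equifinality Beven emphasises shows up directly here: many distinct parameter values survive the behavioural cut, and all of them contribute to the prediction bounds.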
When forecasting the near future, some form of data assimilation is needed so that new observations can be used to update model predictions. One approach is to model the residual error, but it's also possible to update the core model parameters. Certain lead times may be critical, in which case it makes sense to minimise the error variance at those times. Beven explains the Kalman filter, the ensemble Kalman filter, and a Monte Carlo method called particle filtering, with examples of their application to flood models. He also looks at variational methods (3D-Var and 4D-Var data assimilation) and ensemble methods in numerical weather forecasting.
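For concreteness, here is a minimal scalar Kalman filter (a textbook sketch, not code from the book): a random-walk state observed with noise, where each new observation is blended with the forecast in proportion to the Kalman gain. All the variances are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Scalar Kalman filter: random-walk state observed with noise (toy example).
q, r = 0.05, 0.5          # process and observation error variances (assumed)
x_est, p = 0.0, 1.0       # initial state estimate and its variance

truth = np.cumsum(rng.normal(0.0, np.sqrt(q), 50))   # synthetic true state
obs = truth + rng.normal(0.0, np.sqrt(r), 50)        # noisy observations

for z in obs:
    # Forecast step: random-walk model, so variance grows by q.
    p = p + q
    # Update step: blend forecast and observation using the Kalman gain.
    gain = p / (p + r)
    x_est = x_est + gain * (z - x_est)
    p = (1.0 - gain) * p

print(f"final estimate {x_est:.2f} vs truth {truth[-1]:.2f}")
```

The ensemble Kalman filter and particle filter that Beven covers generalise this idea to high-dimensional, non-linear models by propagating a sample of states instead of a single mean and variance.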
Decision making under uncertainty poses another set of problems. Here Beven surveys tools and methods such as Bayesian Belief Networks, Evidential Reasoning, decision support systems, and Info-Gap decision theory (which uses concepts of robustness and opportuneness to deal with non-probabilistic uncertainties). One problem is evaluating the benefits of investing in additional information to reduce uncertainty; another practical challenge is explaining uncertainty to diverse groups of stakeholders.
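To give a flavour of the info-gap notion of robustness (a toy construction of mine, not an example from the book): for each candidate decision, find the largest horizon of uncertainty within which performance is still guaranteed acceptable, with no probabilities assigned to the uncertain quantity. The levee scenario and numbers are invented.

```python
# Toy info-gap robustness: choose a levee height h given a nominal flood
# level estimate, with no probability distribution over the error.
nominal_flood = 3.0          # metres, best estimate (invented)
heights = [3.2, 3.5, 4.0]    # candidate levee heights (invented)

def worst_case_flood(alpha):
    """Worst flood level at horizon of uncertainty alpha (interval model)."""
    return nominal_flood + alpha

def robustness(h, step=0.01):
    """Largest alpha for which the levee still contains the worst case."""
    alpha = 0.0
    while worst_case_flood(alpha + step) <= h:
        alpha += step
    return alpha

for h in heights:
    print(f"height {h:.1f} m -> robust to {robustness(h):.2f} m of uncertainty")
```

Higher levees buy more robustness at greater cost; info-gap theory frames the decision as a trade-off along exactly that curve rather than as an expected-value calculation.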
A final chapter touches on a mix of ideas. Unknowability and uncertainty are not just the result of poor models. We need to accept an uncertain future and approach modeling as a way of learning about places and about model structures. And the reluctance of decision-makers to address uncertainty may lead them to pass responsibility on to modelers, who in turn prefer to pass it on to their models.
The more mathematical material in Environmental Modelling is segregated into twelve "boxes", which are basically appendices at the end of chapters, taking up about a quarter of the text in total. These range from basic material such as "Simple operations with probability-distributed variables" to more advanced topics such as "Kalman filter methods for data assimilation". Beven also provides some information about software packages for particular applications.
Beven is a hydrologist and most of his examples come from flood and groundwater models, with others taken from climate science, ecology, and fire management. (There's no attempt to compare the treatment of uncertainty in environmental models with its treatment in other domains, say finance.) Environmental Modelling will be useful for anyone working with environmental models, though it's not a source of quick answers to practical questions or solutions to specific problems. It will also appeal to anyone who has to deal with scientific uncertainty more generally.
February 2011