Published papers

Models on the move: migration and imperialism (forthcoming)
with Karim Thébault. Studies in the History and Philosophy of Science A

Abstract: We introduce ‘model migration’ as a species of cross-disciplinary knowledge transfer whereby the representational function of a model is radically changed to allow application to a new disciplinary context. Controversies and confusions that often derive from this phenomenon will be illustrated in the context of econophysics and phylogeographic linguistics. Migration can be usefully contrasted with the concept of ‘imperialism’, which has been influentially discussed in the context of geographical economics. In particular, imperialism, unlike migration, relies upon extension of the original model via an expansion of the domain of phenomena it is taken to adequately describe. The success of imperialism thus requires expansion of the justificatory sanctioning of the original idealising assumptions to a new disciplinary context. By contrast, successful migration involves the radical representational re-interpretation of the original model, rather than its extension. Migration thus requires ‘re-sanctioning’ of new ‘counterpart idealisations’ to allow application to an entirely different class of phenomena. Whereas legitimate scientific imperialism should be based on the pursuit of some form of ontological unification, no such requirement is needed to legitimate the practice of model migration. The distinction between migration and imperialism will thus be shown to have significant normative as well as descriptive value.

Modelling Inequality (forthcoming)
with Karim Thébault and Alexander Reutlinger. British Journal for the Philosophy of Science

Abstract: Econophysics is a new and exciting cross-disciplinary research field that applies models and modelling techniques from statistical physics to economic systems. It is not, however, without its critics: prominent figures in more mainstream economic theory have criticised some elements of the methodology of econophysics. One of the main lines of criticism concerns the nature of the modelling assumptions and idealisations involved, and particular targets are the ‘kinetic exchange’ approaches used to model the emergence of inequality within the distribution of individual monetary income. This paper will consider such models in detail, and assess the warrant of the criticisms, drawing upon the philosophical literature on modelling and idealisation. Our aim is to provide the first steps towards informed mediation of this important and interesting interdisciplinary debate, and our hope is to offer guidance with regard to both the practice of modelling inequality, and the inequality of modelling practice.
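The kind of kinetic exchange model at issue can be sketched in a few lines. The following is a minimal illustration (not taken from the paper) of one simple variant: agents start with equal money, repeatedly meet in random pairs, and split their pooled money at random. Total money is conserved, yet the distribution relaxes towards a highly unequal, roughly exponential shape. All names and parameter values here are illustrative assumptions.

```python
import random

def kinetic_exchange(n_agents=1000, n_steps=100000, seed=0):
    """Simulate random pairwise money exchange with total money conserved."""
    rng = random.Random(seed)
    money = [100.0] * n_agents  # everyone starts with the same amount
    for _ in range(n_steps):
        i, j = rng.randrange(n_agents), rng.randrange(n_agents)
        if i == j:
            continue
        pot = money[i] + money[j]
        share = rng.random()  # random split of the pooled money
        money[i], money[j] = share * pot, (1 - share) * pot
    return money

money = kinetic_exchange()
# Total money is conserved, but the distribution is now markedly unequal.
print(round(sum(money)))  # -> 100000
```

The idealisations that draw criticism are visible at a glance: agents have no production, no preferences and no strategy, and the exchange rule is pure chance.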

Are objective chances compatible with determinism? (2017)
Philosophy Compass, 12:8

Abstract: We review the question of whether objective chances are compatible with determinism. We first outline what we mean by chance and what we mean by determinism. We then look at the alleged incompatibility between those concepts. Finally, we look at some ways that one might attempt to overcome the incompatibility.

Non-classical probability and convex hulls (2017)
Erkenntnis, 82:1, 87–101

Abstract: It is well known that the convex hull of the classical truth value functions contains all and only the probability functions. Work by Paris and Williams has shown that this also holds for various kinds of non-classical logics. This note summarises the formal details of this topic and extends the results slightly.
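The classical direction of this result is easy to check concretely in a finite setting. The following sketch (an illustration of the general fact, not code from the paper) takes the four classical valuations generated by two atomic sentences and verifies that a convex mixture of the corresponding truth value functions is normalised and additive; the particular weights are arbitrary assumptions.

```python
from itertools import product

# Two atomic sentences p, q give four classical valuations (possible worlds).
worlds = list(product([0, 1], repeat=2))  # truth values for (p, q)

def truth_value(world, event):
    """Classical truth value function: 1 if the world makes the event true."""
    return 1 if world in event else 0

# An arbitrary convex mixture of the four truth value functions.
weights = {(0, 0): 0.125, (0, 1): 0.125, (1, 0): 0.25, (1, 1): 0.5}

def mixture(event):
    return sum(wt * truth_value(world, event) for world, wt in weights.items())

# The mixture behaves like a probability function: normalised and additive.
everything = set(worlds)
p_true = {w for w in worlds if w[0] == 1}
q_true = {w for w in worlds if w[1] == 1}
print(mixture(everything))  # -> 1.0 (normalisation)
print(mixture(p_true | q_true) ==
      mixture(p_true) + mixture(q_true) - mixture(p_true & q_true))  # -> True
```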

Constraints on rational theory choice (2017)
British Journal for the Philosophy of Science, 68:3, pp. 617–638

Abstract: In a recent paper, Samir Okasha presented an argument that suggests that there is no rational way to choose among scientific theories. This would seriously undermine the view that science is a rational enterprise. In this paper I show how a suitably nuanced view of what scientific rationality requires allows us to avoid Okasha’s conclusion. I go on to argue that making further assumptions about the space of possible scientific theories allows us to make scientific rationality more contentful. I then show how such a view of scientific rationality fits with what Thomas Kuhn thought.

Vague chance (2016)
Ergo, 3:20

Abstract: If there are events that are both vague and chancy, then those chances might not satisfy the axioms of probability. I provide an example of such vague chances, and demonstrate that whether or not ‘chance-probabilism’ is true depends on your view on the logic of vagueness.

Can free evidence be bad?: Value of information for the imprecise probabilist (2016)
with Katie Steele. Philosophy of Science, 83:1, pp. 1–28

Abstract: This paper considers a puzzling conflict between two positions that are each compelling: (A) it is irrational for an agent to pay to avoid ‘free’ evidence before making a decision, and (B) rational agents may have imprecise beliefs and/or desires. Indeed, we show that Good’s theorem (Good 1967) concerning the invariable choice-worthiness of free evidence does not generalise to the imprecise realm, given the plausible existing decision theories for handling imprecision. A key ingredient in the analysis, and a potential source of controversy, is the general approach taken for resolving sequential decision problems – we make explicit what the key alternatives are and defend our own approach. Furthermore, we endorse a resolution of the aforementioned puzzle – we privilege decision theories that merely permit avoiding free evidence over decision theories for which avoiding free evidence is uniquely admissible. Finally, we situate this particular result about free evidence within the broader ‘dynamic-coherence’ literature.

How to choose among choice functions (2015)
Proceedings of ISIPTA 2015

Abstract: If one models an agent’s degrees of belief by a set of probabilities, how should that agent’s choices be constrained? In other words, what choice function should the agent use? This paper summarises some suggestions, and outlines a collection of properties of choice functions that can distinguish between different functions.

Imprecise Probabilities (2014)
Stanford Encyclopedia of Philosophy

Abstract: It has been argued that imprecise probabilities are a natural and intuitive way of overcoming some of the issues with orthodox precise probabilities. Models of this type have a long pedigree, and interest in such models has been growing in recent years. This article introduces the theory of imprecise probabilities, discusses the motivations for their use and their possible advantages over the standard precise model. It then discusses some philosophical issues raised by this model. There is also a historical appendix which provides an overview of some important thinkers who appear sympathetic to imprecise probabilities.

Laplace’s Demon and the Adventures of his Apprentices (2014)
with Roman Frigg, Hailiang Du and Leonard A. Smith. Philosophy of Science, 81:1, pp. 31–59

Abstract: The sensitive dependence on initial conditions (SDIC) associated with nonlinear models imposes limitations on the models’ predictive power. We draw attention to an additional limitation that has been under-appreciated, namely structural model error (SME). A model has SME if the model dynamics differ from the dynamics in the target system. If a nonlinear model has only the slightest SME, then its ability to generate decision-relevant predictions is compromised. Given a perfect model, we can take the effects of SDIC into account by substituting probabilistic predictions for point predictions. This route is foreclosed in the case of SME, which puts us in a worse epistemic situation than SDIC.

Should subjective probabilities be sharp? (2014)
with Katie Steele. Episteme, Vol. 11, pp. 277–289

Abstract: There has been much recent interest in imprecise probabilities, models of belief that allow unsharp or fuzzy credence. There have also been some influential criticisms of this position. Here we argue, chiefly against Elga (2010), that subjective probabilities need not be sharp. The key question is whether the imprecise probabilist can make reasonable sequences of decisions. We argue that she can. We outline Elga’s argument and clarify the assumptions he makes and the principles of rationality he is implicitly committed to. We argue that these assumptions are too strong and that rational imprecise choice is possible in the absence of these overstrong conditions.

Uncertainty, learning and the ‘problem’ of dilation (2014)
with Katie Steele. Erkenntnis, Vol. 79, pp. 1287–1303

Abstract: Imprecise probabilism, which holds that rational belief/credence is permissibly represented by a set of probability functions, apparently suffers from a problem known as dilation. We explore whether this problem can be avoided or mitigated by one of the following strategies: a) modifying the rule by which the credal state is updated, b) restricting the domain of reasonable credal states to those that preclude dilation.
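The phenomenon of dilation can be shown in a standard toy example (assumed here for illustration; it is not necessarily the paper's own case). Every member of a credal set agrees that a coin toss X is fair, but the members disagree about how X is correlated with a second toss Y. Before learning anything, credence in X=1 is a precise 1/2; after conditionalising on Y=1, it dilates across the whole range of assumed correlations.

```python
# A credal set over two coin tosses X, Y: every member agrees that X is
# fair, but they disagree about the correlation a = P(Y = X).
def joint(a):
    """Joint distribution over (x, y) with P(X=1) = 1/2 and P(Y=X) = a."""
    return {(x, y): 0.5 * (a if y == x else 1 - a)
            for x in (0, 1) for y in (0, 1)}

credal_set = [joint(a) for a in (0.0, 0.25, 0.5, 0.75, 1.0)]

def prob(p, event):
    return sum(v for k, v in p.items() if event(*k))

def cond(p, event, given):
    return prob(p, lambda x, y: event(x, y) and given(x, y)) / prob(p, given)

# Before learning: every member assigns P(X=1) = 1/2 -- a precise credence.
priors = {prob(p, lambda x, y: x == 1) for p in credal_set}
print(priors)  # -> {0.5}

# After conditionalising each member on Y=1, the credence dilates.
posteriors = sorted(cond(p, lambda x, y: x == 1, lambda x, y: y == 1)
                    for p in credal_set)
print(posteriors)  # -> [0.0, 0.25, 0.5, 0.75, 1.0]
```

The alleged problem is visible in the output: learning the (apparently innocuous) fact Y=1 turns a precise credence into a maximally imprecise one.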

Laplace’s Demon and Climate Change (2013)
with Roman Frigg, Hailiang Du and Leonard A. Smith. Grantham Institute on Climate Change discussion paper 103

Abstract: This is a longer version of the ‘Model imperfection is a poison pill’ paper. It includes some new computational work as well.

Probabilistic forecasting: why model imperfection is a poison pill (2013)
with Roman Frigg, Reason L. Machete and Leonard A. Smith. New Challenges to Philosophy of Science, Edited by Hanne Andersen, Dennis Dieks, Gregory Wheeler, Wenceslao Gonzalez and Thomas Uebel. Springer

Abstract: Foretelling the future is an age-old human desire. Among the methods to pursue this goal, mathematical modelling has gained prominence. Many mathematical models promise to make probabilistic forecasts. This raises the question of exactly what these models deliver: can they provide the results as advertised? The aim of this paper is to urge some caution. Using the example of the logistic map, we argue that if a model is non-linear and if there is only the slightest model imperfection, then treating model outputs as decision-relevant probabilistic forecasts can be seriously misleading. This casts doubt on the trustworthiness of model results. This is nothing short of a methodological disaster: probabilistic forecasts are used in many places all the time, and the realisation that probabilistic forecasts cannot be trusted pulls the rug from underneath many modelling endeavours.

Dutch book arguments and imprecise probabilities (2012)
Probabilities Laws and Structures, Edited by Dennis Dieks, Wenceslao Gonzalez, Stephan Hartmann, Michael Stoeltzner and Marcel Weber. Springer

Abstract: I give a detailed proof of the Dutch book theorem and a couple of interestingly weaker results. Please note that the proof given here is flawed. Stand by for an addendum.

Scientific uncertainty: A user’s guide (2011)
Grantham Institute on Climate Change discussion paper 56

Abstract: There are different kinds of uncertainty. I outline some of the various ways that uncertainty enters science, focusing on uncertainty in climate science and weather prediction. I then show how we cope with some of these sources of error through sophisticated modelling techniques, and how we maintain confidence in the face of error.

A literary approach to scientific practice (2011)
Metascience 20, pp. 363-367

Abstract: An essay review of R.I.G. Hughes’ book Theoretical Practices of Physics: Philosophical Essays

In preparation

Criteria of adequacy for an imprecise decision theory

There is currently much discussion about how decision making should proceed when an agent’s degrees of belief are represented by a set of probability functions. I draw on Adam Elga’s recent discussion of decision making with sets of probabilities to inform what I take to be the implicit methodology in this area. I then apply the methodology to two further kinds of problematic decision problems. I also draw an important distinction between strong and weak interpretations of the outcome of some decision procedure. I argue that no decision rule is satisfactory with respect to all three decision problems on the strong interpretation, and only permissive decision rules are acceptable on the weak interpretation.

Confidence from robustness: A cautionary tale

In climate science, the models used to make projections of future climate change are idealised in a great many ways, and there are many parameters underconstrained by the data we have. Some important climatological processes are poorly understood. Given the time scales at issue, there are no predictive successes of the models to give us confirmation that the models are approximately true. In short, the standard sources of confirmation (predictive success, fit with fundamental theories…) are unavailable. In the light of this, climate scientists have turned to other sources of confidence for their models. Among these sources is robustness, or agreement among models. In this paper I give an example of a model result that was somewhat robust. This robustness was commented on as a source of confidence in the IPCC’s Assessment Report 4 from 2007. Since then, further modelling work has shown that the result is not as robust as previously thought. I offer a diagnosis of the reasons for this spurious convergence of model results and conclude that appeals to robustness in climate models should be viewed with suspicion.

Conditionalisation, carefully

The aim of this paper is to outline and critically assess the various arguments that have been offered for conditionalisation as a diachronic norm of rationality. Several arguments, each built around a mathematical theorem, are discussed, and the theorems proved carefully. This done, the reasonableness of each argument is assessed.