Current projects

Most of my research revolves around the central question of what it takes to be rational. The bulk of my work so far has focused on the constraints on rational belief, but I am also working on the constraints on rational choice. Further afield, I am interested in the rationality of science, and specifically in what you should rationally believe on the basis of scientific evidence from models and simulations. Beyond the research topics mentioned below, I have a keen interest in the history and philosophy of mathematics: a topic I worked on as a master's student and would like to return to.

De-idealising Bayesianism

Reasoning with uncertainty is important in a number of disciplines. An important first step in reasoning about uncertainty is representing uncertainty and uncertain beliefs. A standard and powerful method of representing and reasoning about uncertainty is Bayesianism. Broadly construed, Bayesianism involves using the mathematical theory of probability to represent uncertainty. This method is used in decision theory, game theory and logic; statistics and information theory; psychology and economics; and machine learning and artificial intelligence. Within philosophy, these kinds of models of belief are of direct interest to epistemologists and decision theorists. But they are also of indirect interest to, for instance, philosophers of science, since Bayesian measures of confirmation are ultimately built out of degrees of belief.

The model is powerful and backed by a large corpus of illuminating mathematical results. However, it involves a great many idealisations, and an enduring criticism of Bayesian methods is that they are too idealised. This project explores ways to de-idealise the Bayesian approach to belief.

In my PhD dissertation, I outlined and defended a de-idealised version of Bayesianism known as “imprecise probabilism”. The basic idea is to represent an agent’s belief state by a set of probability measures, rather than by a single such measure. My main focus was on a static, synchronic model of belief, with a view to setting up the basics of a decision theory for imprecise probabilities. Some of this material went into the Stanford Encyclopedia of Philosophy article on Imprecise Probabilities that I wrote.
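As a rough illustration of the representational idea (the outcomes, numbers and function names below are a toy example of my own, not anything from the dissertation), a set of probability measures yields interval-valued rather than point-valued degrees of belief:

```python
# A toy credal set: each measure assigns probabilities to three
# mutually exclusive outcomes (e.g. "warmer", "same", "cooler").
credal_set = [
    {"warmer": 0.6, "same": 0.3, "cooler": 0.1},
    {"warmer": 0.7, "same": 0.2, "cooler": 0.1},
    {"warmer": 0.5, "same": 0.3, "cooler": 0.2},
]

def lower_probability(event, credal_set):
    """Lower envelope: the least probability any measure in the set assigns."""
    return min(p[event] for p in credal_set)

def upper_probability(event, credal_set):
    """Upper envelope: the greatest probability any measure in the set assigns."""
    return max(p[event] for p in credal_set)

# The agent's degree of belief in "warmer" is the interval [0.5, 0.7],
# rather than a single number.
print(lower_probability("warmer", credal_set), upper_probability("warmer", credal_set))
```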

Since finishing the PhD, I have explored a number of other topics related to imprecise probabilities. In collaboration with Katie Steele (LSE), I have looked at updating imprecise probabilities on the basis of new evidence, and at the problem of decision making with imprecise probabilities. This collaboration has produced three papers so far.

There is, however, much work still to be done on these topics.

There are many other respects in which one might consider the orthodox Bayesian approach too idealised, and each idealisation suggests an exciting area for research. For example, Bayesian agents are often assumed to be logically omniscient. Relaxing this assumption is not trivial, but it is arguably worth doing, in order to allow a Bayesian treatment of mathematical knowledge and mathematical learning.

The orthodox Bayesian story of learning has it that agents learn by conditionalising on some proposition of which they have become certain. A slight modification (Jeffrey conditionalisation) allows updating on a new probability distribution over some partition of the space of propositions. But learning goes far beyond these kinds of updates. An agent can learn that some new propositions, not previously in her algebra, are relevant to the current task. An agent can learn that two propositions are independent, or that some particular dependence holds between them. An agent might learn that some proposition is much more likely than some other. All of these are difficult to capture in the standard framework. The Objective Bayesian is better placed in this respect: most of the above kinds of update can be accommodated as constraints on the set of probability functions to which the MaxEnt procedure is applied.
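For reference, the two orthodox update rules just mentioned can be stated as follows; these are the standard textbook formulations rather than anything specific to the papers discussed here:

```latex
% Strict conditionalisation on evidence $E$ (requires $P(E) > 0$):
\[
  P_{\mathrm{new}}(A) \;=\; P(A \mid E) \;=\; \frac{P(A \wedge E)}{P(E)}
\]

% Jeffrey conditionalisation on a partition $\{E_1,\dots,E_n\}$ whose
% new probabilities are $q_1,\dots,q_n$ (with $\sum_i q_i = 1$):
\[
  P_{\mathrm{new}}(A) \;=\; \sum_{i=1}^{n} P(A \mid E_i)\, q_i
\]

% The MaxEnt procedure then selects, from the probability functions
% satisfying whatever constraints have been learned, the one that
% maximises the entropy
\[
  H(P) \;=\; -\sum_{\omega} P(\omega) \log P(\omega).
\]
```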

Weak rationality

Rationality is typically seen as something that guides or determines your beliefs or choices. I argue instead that rationality is merely a constraint on choice or belief. In Constraints on rational theory choice I argue that this more nuanced understanding of rationality gives us a reasonable picture of the extent to which science can be considered a rational enterprise.

I also use this understanding of rationality to argue for a careful and somewhat quietist stance on what we can get from rationality in the case of imprecise probabilities. I do this in a number of papers.

Climate science and philosophy

The future of the climate system bears on a great many current projects. Decisions costing hundreds of millions of pounds must be made on the basis of evidence that is less certain than we would like. The broad outlines of what will happen to the climate are fairly certain: we know that the planet is very likely to get warmer. But how much the planet will warm, and how this warming will affect local climate systems and other climate variables (like rainfall), is less certain. These local factors are what will ultimately determine the success of various decisions that must be made now. So there is a great deal of interest in how we might improve our predictions of climate change, and in how to extract the best information we can from our models of the climate.

With Roman Frigg, Leonard Smith and others, I have been exploring the question of what information we can reasonably extract from climate models, given that the models are imperfect. This has led to a number of papers, including a recently accepted paper in Philosophy of Science.

One aspect of climate models that is sometimes taken to give us confidence in their results is the property of robustness. I am exploring the extent to which this gain in confidence is reasonable given some case studies from recent climate science research.

PhD Project

It is important to have an adequate model of uncertainty, since decisions must be made before the uncertainty can be resolved. I use climate decisions as a case study. For instance, flood defences must be designed before we know the future distribution of flood events. Making these decisions on the basis of a “best guess” forecast would obviously be a bad idea. So modellers attempt to offer probabilistic forecasts of future climate change. There is reason to be sceptical that the model probabilities offered really do reflect the chances of future climate change, at least at regional scales and long lead times.

Indeed, scientific uncertainty is multi-dimensional and difficult to quantify. I argue that probability theory is not an adequate representation of the kinds of severe uncertainty that arise in some areas of science, and that this requires us to look for a better framework for modelling uncertainty. I start by outlining the myriad kinds of uncertainty that arise in science, in particular in modelling non-linear dynamical systems like the climate.

I criticise some arguments for the claim that probability theory is an adequate model of uncertainty. In particular, I critique Dutch book arguments, representation theorems, accuracy-based arguments and Cox’s theorem.

Then I put forward my preferred model: imprecise probabilities. These are sets of probability measures. I offer several motivations for this model of uncertain belief, and suggest a number of interpretations of the framework. I also defend the model against some criticisms, including the so-called problem of dilation.
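For readers unfamiliar with dilation, here is a standard textbook-style illustration of the phenomenon (a common example in the literature, not a result specific to the dissertation): two events each receive probability one half on every measure in the credal set, but their dependence is left entirely open.

```latex
% A credal set $\mathcal{P}$ in which every $p \in \mathcal{P}$ satisfies
% $p(H_1) = p(H_2) = \tfrac{1}{2}$, while the dependence between
% $H_1$ and $H_2$ is left completely unconstrained.

% Unconditionally, belief in $H_2$ is a precise point:
\[
  \underline{P}(H_2) \;=\; \overline{P}(H_2) \;=\; \tfrac{1}{2}
\]

% But conditional on learning $H_1$ (and likewise on learning $\neg H_1$):
\[
  \underline{P}(H_2 \mid H_1) \;=\; 0,
  \qquad
  \overline{P}(H_2 \mid H_1) \;=\; 1
\]

% Whichever answer is learned, conditioning dilates belief in $H_2$
% from the point $\tfrac{1}{2}$ to the whole unit interval.
```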

I apply this framework to decision problems in the abstract. I discuss some decision rules from the literature, including Levi’s E-admissibility and the more permissive rule favoured by Walley, among others. I then point towards some applications to climate decisions. My conclusions are largely negative: decision making under such severe uncertainty is inevitably difficult.
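To make the flavour of these rules concrete, here is a minimal sketch of E-admissibility over a finite toy credal set and an invented utility table (all names and numbers are illustrative only, and Walley’s maximality rule is not shown): an act is E-admissible just in case it maximises expected utility relative to at least one member of the credal set.

```python
# Toy setup: two states of the world, a small credal set of probability
# functions over them, and utilities for three acts.
credal_set = [
    {"flood": 0.2, "no_flood": 0.8},
    {"flood": 0.7, "no_flood": 0.3},
]

utilities = {
    "build_high_defence": {"flood": 9, "no_flood": 1},
    "build_low_defence": {"flood": 5, "no_flood": 6},
    "do_nothing": {"flood": 0, "no_flood": 7},
}

def expected_utility(act, p):
    """Expected utility of an act relative to one probability function."""
    return sum(p[state] * utilities[act][state] for state in p)

def e_admissible(utilities, credal_set):
    """Acts that maximise expected utility under at least one member of the set."""
    admissible = set()
    for p in credal_set:
        evs = {act: expected_utility(act, p) for act in utilities}
        best = max(evs.values())
        admissible.update(act for act, ev in evs.items() if abs(ev - best) < 1e-9)
    return admissible

# Here the low defence is best if flooding is unlikely, the high defence
# is best if flooding is likely, and doing nothing is never optimal, so
# the E-admissible set is {"build_low_defence", "build_high_defence"}.
print(e_admissible(utilities, credal_set))
```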