Here’s something I’ve been thinking about: what consequences follow from relaxing the standard assumption that Bayesian agents are logically omniscient?
Bayesian epistemology and decision theory typically assume that ideal agents are logically omniscient. That is, they assume that if $\phi$ and $\psi$ are logically equivalent, then you should believe them to the same degree. (Borrowing a practice from I.J. Good, I refer to the putative ideal agent as “you”.)
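The omniscience requirement can be made concrete with a toy sketch (all names here are hypothetical, chosen for illustration): if credences are computed from a probability distribution over possible worlds, then any two propositions true in exactly the same worlds automatically receive the same credence.

```python
from itertools import product

# Propositions as functions from a truth assignment over atoms p, q to a Boolean.
phi = lambda p, q: p or q                 # p ∨ q
psi = lambda p, q: not (not p and not q)  # ¬(¬p ∧ ¬q), equivalent by De Morgan

def logically_equivalent(a, b):
    """True iff a and b agree on every truth assignment."""
    return all(a(p, q) == b(p, q) for p, q in product([True, False], repeat=2))

# An arbitrary probability distribution over the four possible worlds.
world_prob = {(True, True): 0.4, (True, False): 0.3,
              (False, True): 0.2, (False, False): 0.1}

def credence(prop):
    """Credence = total probability of the worlds where the proposition holds."""
    return sum(pr for world, pr in world_prob.items() if prop(*world))

assert logically_equivalent(phi, psi)
assert credence(phi) == credence(psi)  # equal by construction
```

A world-based credence function like this satisfies omniscience trivially, since it never "sees" the syntactic difference between $\phi$ and $\psi$; modelling a logically non-omniscient agent would require credences defined over sentences rather than sets of worlds.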