Epistemic Rationality
Epistemic, from Greek epistḗmē, "knowledge" 1
Epistemic rationality: systematically improving the accuracy of your beliefs2
To begin with, I want to dispel some confused but popular notions of "reality" and "truth," or at least contrast them with the notions used in the theory of epistemic rationality I'll be discussing.
In an ordinary sense, when we make concrete statements like "I'm sitting in a chair," there's a very real, if seldom salient, provisionality about perceptions and statements like these. Indeed, we experience and believe such things in dreams, and people sometimes have stable, coherent hallucinations that they later disavow. Even when the truth or reality of such perceptions and statements seems mundanely undeniable, they remain just shy of absolute solidity.
That's of course the extreme case. The point here is that our epistemology needs to account for both fundamental human fallibility and fundamental uncertainty in any possible knowledge. (Perhaps I should mention that this is the other side of refrains like "physics is just a theory"—yes! Even in the orthodox theories of knowledge underpinning modern science, all human knowledge is uncertain; we should nonetheless have high confidence in this or that theory.)
The sense sense of "belief" that is used in the context of Bayesian theories of epistemology is very much like this. We gather observations, learn, and become more confident in this hypothesis over that one. In Bayesian epistemology, there is no basis for making absolute claims about reality. In this sense, there isn't an utter solidity even to claims like "I am sitting in a chair," rather these are regarded as beliefs with extremely high credence, or equivalently, beliefs which make predictions with extreme probabilities. This is the sense in which we can build back up to ordinary ways of speaking plainly about things like chairs and warm breezes and mountaintops, and also of claims about physics and evolutionary biology.
Bayesianism is somewhat opinionated
For what it's worth, my understanding is that Bayesianism is not accepted by all statisticians or probabilists, though I'm very poorly read here.
In practice I think this distinction is basically academic; I don't believe that other approaches actually generate falsifiable predictions in contradiction with Bayesianism.
In the jargon, one updates one's priors (essentially, learns and revises one's beliefs) on the basis of each piece of new information or evidence. "Updating" and "priors" may sound obtusely technical, but this is quite ordinary. Every moment we're learning, implicitly, quietly, just a bit more: a growing sense of what a friend is feeling, some particular detail about the life of a Japanese emperor, or just a bit better sense of the streets and shops in this part of town.
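The mechanics of a single update can be made concrete with Bayes' rule. Here is a minimal sketch with made-up numbers (the hypothesis, evidence, and all three probabilities are illustrative assumptions, not anything from the text):

```python
# A single Bayesian update, with hypothetical numbers.
# Hypothesis H: "it will rain today." Evidence E: "the sky is overcast."

prior = 0.30              # P(H): initial credence that it will rain
p_e_given_h = 0.80        # P(E|H): chance of overcast skies if it rains
p_e_given_not_h = 0.40    # P(E|~H): chance of overcast skies if it doesn't

# Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E),
# where P(E) = P(E|H) * P(H) + P(E|~H) * P(~H)
p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
posterior = p_e_given_h * prior / p_e

print(round(posterior, 3))  # prints 0.462: the 0.30 prior rises on the evidence
```

The posterior then becomes the prior for the next piece of evidence, which is all that "updating" amounts to: each observation nudges the credence up or down according to how much more likely it is under one hypothesis than the other.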
"Improving the accuracy of our beliefs" is then concerned with how to arrive at beliefs about the world which are highly predictive, as in, that they predict events with a high degree of accuracy and consistency.
This gets into what are known as subjective probabilities. I think most people find the concept of a probability basically intuitive when thinking about a literal dice roll, but when an event can only happen once, what can a probability even mean? In Bayesian probability theory, a probability is a measure of confidence that an event will occur. This works out the same for highly regular but hard-to-predict objects like dice or jars of marbles, but it also applies to whether there will be rain tomorrow. Suppose we assign a 50% probability to an event, e.g. the outcome of an election. In this frame, that probability means that, if we're calibrated, we expect predictions made with the same confidence to come true 50% of the time, on average.
Perhaps, imagine: your boss sends a message which is phrased in a way that could read as angry, or could just be hurried and poorly written. Whether your boss meant that text in a passive-aggressive way and is in fact mad at you is not a matter of chance, but you can still be uncertain about it. If you assign a 15% probability that he's actually mad, this means you expect the boss to actually be mad in about 15% of the situations where you feel similarly uncertain.
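Calibration in this sense can be checked directly: collect predictions together with whether they came true, group them by stated confidence, and compare each group's stated probability to its observed frequency. A toy sketch, with entirely hypothetical prediction data:

```python
# Toy calibration check on hypothetical (stated probability, outcome) pairs.
predictions = [
    (0.5, True), (0.5, False), (0.5, True), (0.5, False),  # 50%-confidence calls
    (0.9, True), (0.9, True), (0.9, True), (0.9, False),   # 90%-confidence calls
]

# Group outcomes by the confidence level at which they were predicted.
buckets = {}
for prob, happened in predictions:
    buckets.setdefault(prob, []).append(happened)

# Compare stated confidence to the observed frequency in each bucket.
for prob, outcomes in sorted(buckets.items()):
    freq = sum(outcomes) / len(outcomes)
    print(f"stated {prob:.0%} -> observed {freq:.0%} over {len(outcomes)} predictions")
```

In this made-up data, the 50% predictions came true half the time (well calibrated), while the 90% predictions came true only 75% of the time (overconfident). Real calibration checks work the same way, just over many more predictions per bucket.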
I want to stress here how the technical (and perhaps robotic-sounding) ideas of updates, priors, and predictions are actually mundane and natural. One makes updates whether one wants to or not, one can be more or less calibrated, and we live with the consequences of these every day, regardless of how we think about them. Some domains are comprehensible without any explicit theory of knowledge, while in others one has to move carefully to come to correct beliefs, and that's what all of this is for.
Next we'll discuss what principles underlie changing one's mind well and poorly, according to this frame.