Among my many forms of cobbled-together self-employment, I provide specialized tutoring to graduate students in ancient history and philosophy around the world. That is rewarding in lots of ways, one of which is when a student ends up correcting an error of mine. That’s when you know you are a successful teacher: they are starting to surpass you in knowledge and acumen. I’ve actually been excited to report on this, and correct the record. Gratitude goes to Nick Clarke.
The short of it is that in a comments thread on my blog many years ago I was incorrect in my analysis of Gettier Problems. I was on to the right solution, but I made the mistake of assuming an unsound conclusion could not be considered justified (without realizing that’s what I was doing). Gettier Problems rely on false premises to reach true conclusions. I was right about that. But I wasn’t right that this was grounds to dismiss them.
Backstory is required.
The Definition of Knowledge
All the way since ancient times the most popular definition of “knowledge” has been “justified true belief.” The notion is credited to Plato, although it’s doubtful he was the first to float the idea. That knowledge is a belief is obvious (clearly if you don’t believe x, it can’t be said that you know x is true). And that it’s only knowledge if it’s true is almost as obvious (although there are philosophers who would challenge this requirement). But we generally also require that it be justified. And the catch hangs on just what that means. What counts as a justified belief? Usually philosophers settle on justification being a conclusion reached by a valid line of reasoning, although that’s not the only way to resolve the matter.
Justification can more broadly be defined as “deriving a belief by a reliable means,” the term reliable allowing for some error (since absolute certainty is impossible), so we say we are justified in believing our car is in the garage if our means of determining that our car is in our garage is reliable enough to make it highly probable that our car really is in our garage, given all the information we have access to at the time. But of course, we could be wrong. For all we really know, our car could have been stolen, or quantum mechanically dissolved. But those are very unlikely to have occurred given all we do know (e.g. the garage is well nigh impenetrable without our knowledge; the QM probability of spontaneous dissolution is vanishingly small; and so on). So our belief can still be justified. But a belief can certainly be justified and false. That’s why knowledge is usually required to be both justified and true.
A problem arises, however, when philosophers choose to define “justified” as being conclusions reached by valid reasoning. Because conclusions reached by valid reasoning can still be reached by unsound reasoning. In technical parlance a conclusion is valid when it follows logically from the premises, but it is only sound when the premises are all true (see Validity and Soundness). This is typically meant, though, of deductive reasoning, and most reasoning is actually inductive (see Deduction vs. Induction). That my car is in my garage is not a conclusion of deductive logic. It could be, but only if stated as a probability, as in fact it should be (the point I was getting at in my original discussion of this). In fact, when we start rephrasing knowledge claims as claims about probabilities, everything changes.
The fact that in reality we can only ever know the probabilities of things, and not the things themselves (see Proving History, pp. 23-26), I think belies a fundamental mistake in common philosophical thinking about the nature of knowledge. (Indeed, even an omniscient God is in the same boat, since there will always be some nonzero probability that He is being tricked into thinking he is omniscient and infallible by a Cartesian Demon, so even for the greatest conceivable God all knowledge would still be probabilistic.) But let’s suppose we could reduce all knowledge claims to claims about probabilities that are arrived at by deductive logic. I think they could be, although most would doubt it. But here I’m just asking you to suppose it for the sake of argument, since it would entail the best possible state of knowledge: all conclusions are the products of deductive logic.
What has been shown is that even if that were the case, the definition of knowledge still has a serious flaw.
This was cleverly demonstrated by Edmund Gettier in 1963. His method is highly convoluted and involves esoteric aspects of deductive logic that have even at times been questioned, but you can learn all about this elsewhere (see Gettier Problems for a really good treatment). For I can demonstrate the same point he was making using a much more straightforward approach (below). The key to being able to do this is the brilliant analysis of Linda Zagzebski, in “The Inescapability of Gettier Problems,” The Philosophical Quarterly 44.174 (1994), pp. 65-73. Zagzebski demonstrates that Gettier problems actually just illustrate (and thus reduce to) a broader and more basic insight: that it is possible to have “justified true belief” entirely by chance coincidence. And that seems to chafe at philosophers (or indeed most people), since it seems strange to say you know something, when really it’s only by sheer luck that what you claim to know happens to be correct.
Now, some philosophers don’t have a problem with this. They are happy to allow that accidental knowledge is knowledge. It’s true, after all. So what’s the big deal? Most philosophers, however, are intuitively disturbed by the idea that accidental knowledge can be credited as knowledge. Although when knowledge is stated as a knowledge of a probability this is less disturbing. If I say I know there is a 99.99% chance my car is in my garage, and it just happened to be the case that my car was not in my garage, I have not actually been contradicted. My belief was still true. Because my actual belief already entailed a 0.01% chance that my car is not in the garage, so its not being there is already a possibility fully included in my belief. What has been contradicted is my expectation that my car is in my garage. So my prediction that it is there is falsified by its not being there, but my prediction that it only had a small chance of not being there is not falsified by its not being there.
By similar logic, if I use a belief-forming method that is highly reliable, let’s say it produces conclusions with 99.99% certainty, I can say I know that any x produced by that method has a 99.99% chance of being true. Yet if that method had a flaw–a flaw that rarely affects its conclusions (hence the method’s high reliability)–which resulted in my believing, for flawed reasons, that my car was in my garage when in fact it was indeed in my garage, I can still legitimately say I know there is a 99.99% probability that my car is in my garage. Because that is entailed by the method’s reliability, which measure of reliability already includes a full accounting of any such flaws (like being accidentally correct). This is less disturbing.
But philosophers often confuse “I know x is probably true” with “I know x is true” (even layfolk often confuse those two statements), even though the latter, in all practical respects, is either saying the same thing as the former, or else can never actually be true–because it then would translate as “I know the probability of x is 100%,” which is knowledge no one ever has about anything (other than immediate uninterpreted experiences, but that’s not what we usually need to know). So when we get knowledge right, as a belief in epistemic probabilities, Gettier Problems already look a lot less problematic.
The Impending Death of Socrates the Lizard
I will illustrate this much more straightforwardly than Gettier did (and here I am indebted to Zagzebski, although she does not use this approach herself). The stock example of the most basic deductive reasoning is this:
- P1. Socrates is a man.
- P2. All men are mortal.
- C1. Therefore, Socrates is mortal.
This is a valid argument. If the premises are true, then it is also sound. In traditional parlance, my belief that Socrates is mortal would be justified as long as I arrive at it by a valid means, and here all that requires is the two premises P1 and P2. It does not require those premises to be true. They could be false, and my conclusion would still be valid. It just wouldn’t be sound. I would then have a justified false belief that Socrates was mortal.
Of course usually a little more than that is required. No philosopher would consider my belief justified if I knew (or even so much as strongly suspected) that either premise was false. So generally a belief is only considered justified if our belief in the premises is also justified. And so on all the way down the ladder of reasoning (which ladder ends in the basement of raw immediate uninterpreted experience: see Epistemological End Game). But it’s still possible to have a justified belief in the premises (to be really sure they are true, by a very reliable means) and those premises still be false.
For example, suppose I overhear a conversation about a certain Socrates. The nature of the conversation is such that the parties involved are “obviously” talking about a man named Socrates. I therefore have a very reliable belief that this Socrates is a man. I also have a very reliable belief that all men are, at least currently, mortal (given my grasp of such things as biology). And I can validly reach the conclusion that this Socrates is mortal from those two premises. So I have a justified belief that Socrates is mortal.
But suppose, lo and behold, those people were actually talking about a pet lizard named Socrates. It’s highly improbable that anyone would speak the way they did about a lizard, but alas that’s what happened. Unbeknownst to me. And I certainly can’t be expected to have known that. I can’t even have been expected to know it was in any way probable. To the contrary, I know (like the improbably undetected theft of my car) that it’s very improbable, which is why my mistaken belief that this Socrates is a man remained justified. But this means the deductive reasoning I am engaging in, P1 + P2 = C1, is valid and unsound. Because P1 happens to be false. But P1 being false does not make my belief in C1 unjustified, because justified beliefs can be false. My belief in P1 was justified, and C1 justifiably follows from P1 (and P2), so my belief in C1 is justified as well.
But guess what? All lizards also happen to be mortal. So in fact C1 is true. I just reached it by assuming a false premise. The fact that my belief in C1 is true is simply an accident. I got lucky–the mistake I made (believing P1 is true when it’s not) didn’t change the conclusion I reached. This is the same situation Gettier Problems aim to demonstrate, although by a far more esoteric and circuitous route. We can have a justified true belief…that we derived from false beliefs!
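To see the structure at a glance, here is a minimal sketch in Python (the world model and labels are my own illustrative assumptions, not anything from Gettier or Zagzebski) of a valid inference from a justified-but-false premise landing on a true conclusion anyway:

```python
# Minimal sketch of the Socrates-the-lizard case (illustrative assumptions only).

# Facts of the world, unknown to the believer:
facts = {
    "Socrates is a man": False,      # "Socrates" actually names a pet lizard
    "All men are mortal": True,
    "All lizards are mortal": True,  # the lucky coincidence
}

# The believer's premises, each justified by a normally reliable method
# (overhearing a conversation that "obviously" concerned a man):
premises = ["Socrates is a man", "All men are mortal"]

# Validity: the form "x is F; all F are G; therefore x is G" guarantees the
# conclusion IF the premises are true, so reasoning validly from justified
# premises yields a justified conclusion, whatever the facts turn out to be.
justified_conclusion = "Socrates is mortal"

# Soundness: validity plus all premises actually being true.
sound = all(facts[p] for p in premises)                  # False: P1 is false in fact

# Truth of the conclusion in fact: Socrates the lizard is mortal anyway.
conclusion_true_in_fact = facts["All lizards are mortal"]

print("Conclusion justified:", True)                     # valid reasoning from justified premises
print("Argument sound:      ", sound)                    # False
print("Conclusion true:     ", conclusion_true_in_fact)  # True -- by accident
```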
So is my belief that this Socrates is mortal knowledge? In other words, can I honestly say I know that Socrates the lizard is mortal? Even though in fact what I believed was that a certain Socrates the man was mortal? I had no belief involving a lizard. Yet my belief that this Socrates is mortal is true. And also, as it happens, justified. One might object that if I have a belief about Socrates the man, I do not have a belief about Socrates the lizard, except via a masked man fallacy. But by the same token, in Gettier Problems, the rule of disjunction introduction is used, where “if P, then P or Q,” and that requires that the first term (P) actually be true, or else any deduction relying on it is unsound (the inference is still valid, but it proceeds from a false premise). And yet being mistaken that P is true is just the product of another fallacy somewhere down the line (it can even be a masked man fallacy, though it does not have to be).
Philosophers are troubled by the idea that I can “know” Socrates is mortal when in fact I sort of kind of really don’t, since although my belief that Socrates is mortal is both true and justified, I really only came by it by accident. It’s just a lucky coincidence that lizards are also mortal. In Gettier Problems, the lucky coincidence each example hinges on is even more improbable than that.
Redefining Knowledge
This is actually easily solved. Of course, semantically, it solves itself. If knowledge simply is “justified true belief,” then the beliefs generated in Gettier cases (like my belief that Socrates the lizard is mortal) are simply knowledge. Full stop. The objection to that cannot be semantic, because justified true belief is justified true belief, whether arrived at accidentally or not. The objection therefore is really at root pragmatic: philosophers don’t want accidental knowledge to be knowledge, so the fact that their definition of knowledge allows that is something that annoys them. More charitably, we can say it creates a problem for us when we want to distinguish knowledge reached non-accidentally from knowledge reached accidentally.
But that problem is solved by changing the definition of knowledge accordingly. Zagzebski overlooked the obvious way to do that: just say knowledge is “justified true belief not arrived at accidentally.” This simply brackets away all Gettier cases. Since they reach “justified true belief” only by accident, if we simply declare that any belief reached by accident is not knowledge, then all Gettier cases are eliminated. As is my knowledge that Socrates the lizard is mortal: I do not really know that, because its being true is simply a coincidence, and I am not aware of that. In other words, I do not know “it’s true only by coincidence,” therefore I do not really know it’s true. (Of course, I will go on believing it’s knowledge, but that’s true of all justified false beliefs.)
Zagzebski demonstrates that what Gettier Problems really show is that “since justification does not guarantee truth, it is possible for there to be a break in the connection between justification and truth, but for that connection to be regained by chance,” so any theory of knowledge that accounts for chance error (both the false positive and the false negative) will solve the Gettier problem. Her recommended solution is thus to do that (pp. 72-73), although she doesn’t appear to realize that her suggested solution can be reduced to a very simple redefinition of knowledge as justified true belief not arrived at accidentally. Her conclusion is that any “truth + x” redefinition of knowledge is vulnerable to Gettier Problems, but that’s not the case when the “random chance” element is made a disqualifier. You might say that’s a “truth – x” redefinition of knowledge, but if we define x as “not generated by random chance” then it’s actually a “truth + x” redefinition of knowledge. Zagzebski’s conundrum is resolved. Her conclusion was incorrect after all. But in a way she came close to discerning this herself.
Conclusion
It’s worth noting that disagreement on this is perfectly legitimate because all we are really talking about is how we want to define the word “knowledge” (and cognate terms like “to know”), which is actually not a question of objective fact but simply a cultural or practical choice about what symbols to assign to what concepts and when. We can define “knowledge” any way we want. So philosophers who disagree about how to define it are really just doing nothing more than making pragmatic proposals about which definition is most useful in practice. So we have Jonathan Weinberg, Shaun Nichols, and Stephen Stich in “Normativity and Epistemic Intuitions,” Philosophical Topics 29 (2001), pp. 429-60, and Stephen Hetherington in “Actually Knowing,” Philosophical Quarterly 48 (1998), pp. 453-69, arguing that Gettier knowledge, indeed any lucky knowledge, simply is knowledge, so get over it already. And they’re right. So long as you don’t mind the consequences of that definition, there can be no objection to it.
Philosophers who oppose that outcome are merely saying that they find that definition confusing, because its consequences muddy up what they usually want to use the word “knowledge” for, and so they’d rather, for convenience, restrict “knowledge” to beliefs that aren’t arrived at by mere luck. And they are welcome to do that. Because again, we can define words any way we want to. As long as our audience knows how we have defined them. Which can present a problem when the most popular linguistic convention is not the definition you are using. If you want to speak or write without having to teach your audience a new language, but instead just rely on the lexicons already in their brains, you have to actually rely on the lexicons already in their brains. Which is what dictionaries attempt to empirically document, for example. I discuss these issues in more requisite detail in Sense and Goodness without God (II.2, pp. 27-48, esp. 2.2.1, pp. 35-37). For the present point, there is no English convention on whether “knowledge” vocabulary is inclusive of lucky knowledge, so the philosophical debate over which to prefer can’t be resolved by appealing to how the words are used in practice. Most people never even think about it. And those who do are divided on the matter.
So you can choose to be content with either. And that’s fine. You can agree that my accidental knowledge about the mortality of Socrates the lizard is knowledge (by one definition, that being the most popular and traditional definition in the formal systematic study of knowledge), or you can say that it is not knowledge because when you use the word “knowledge” you mean to exclude true beliefs formed by accident (my reformed definition of knowledge). Either is correct. You just have to explain which it is. When it matters. Which is rarely.
Before diving in, I just want to say that I read you irregularly, but I need to change that.
Yes. This is my criticism of the field of gender-studies-teaching (or, if you prefer, gender-studies-pedagogy, though I’m referring to praxis not theory and teaching will emphasize that). We want to use definitions of important concepts like gender and sex that are more precise than those in general parlance and because of their precision permit us to discuss things that would otherwise be hopelessly muddled (e.g. transsexuality – just **try** to talk about transsexuality if gender and sex are the same thing). But, frankly, we in gender-studies-teaching are terrible at this.
I had one teacher at one point give her beginning-of-the-term “this course is the most important one you will ever take because” speech and, as a way of emphasizing the importance of understanding gender, tell us that it impacts our lives in myriad ways, that we, among other things, “buy dresses because we’re female”.
Serious, serious fail here.
I am much less versed in philosophy (though I have *some* knowledge of ethics), but I do pay attention to how teachers (and writers) define concepts. And if a teacher defines a word in some way but then uses it in an apparently inconsistent way, I’m going to either a) be very, very confused, or b) think the teacher is an idiot. Neither of those is useful from the teacher’s point of view.
Which means that I’m constantly open to new definitions an expert wishes to provide – I just assume that they will be useful in what I’m about to learn, so why not embrace that new tool? – but frequently frustrated by the less-skilled in their failure to embrace their own innovations.
I assume that philosophers who get tenure have practice in using their own definitions carefully, but I wonder if they might not be as careless as gender studies professors, and your quote makes me worry that that might be true.
I share all the same sentiments and concerns.
Particularly as even I will be inadvertently inconsistent with word uses now and again (as I’m sure everyone is), although I’m keen to correct that if I need to, or be clearer about context demarcation.
Which reminds me to add, just so no one misunderstands us: I’m sure you agree that words can also be multivalent and thus change meaning by context, so a teacher can use the same word “inconsistently” if they are using it in two different contexts, which isn’t really being inconsistent. It can still be confusing, though, if the students don’t know the context has changed, or that the change of context changes the application and thus the meaning of the word, and the teacher hasn’t made that clear themselves.
Also, I should add that the pragmatics of definition changing is a complicated business, e.g. “it’s easier to wear shoes than to pave the earth with leather” is a proverb many a would-be definition-changer fails to properly heed. Some attempts at changing a word’s meaning in popular use are simply too quixotic to put any energy into. Sometimes you have to have more humble aims, e.g. either (1) carving out a context-dependent meaning in a specialized sub-field, although I am usually an advocate of doing that by qualification rather than change if at all possible, since if you use the same words outsiders use but in a different way, outsiders can’t read your insider literature and know what you’re actually saying (or worse, you try to insist outsiders are using it wrong, when in fact you are the one using it weirdly–such an argument will peg you straight away as cut off in the ivory tower from the rest of the world, e.g. debating what “racism” means), or (2) trying to explore what the conventional meaning actually is in practice and adapting it to a reductive scientific understanding of what it’s actually describing (e.g. debating what “free will” means).
Agreed.
In gender studies, for instance, saying “persons assigned a sex” and “persons having sex” clearly puts the word sex in two different contexts, and that’s fine and understandable. It’s perfectly proper to use the same word to indicate separate meanings, …
Yes. And if the teacher fails to make this clear on rare occasions, it’s a subject for a quick clarifying question. But if a teacher is failing to make clear that the definition changes with context, or failing to make clear what contextual cues will signal a change in definition, or failing to keep the contexts separate (meaning from context 1 is used in an unusual situation in context 2, without calling out that context 2 does not shift the definition in this case), and is failing in that task one or more times every lecture, 3 or more times a week, that can be enough to really mess with someone’s understanding of the material.
I think we’re exactly on the same page.
As a pedagogical concern, however, let me make clear that I believe that in the field of gender studies these types of confusing usages typically happen more than once a lecture, more than 10 or 12 times a week – in other words, so often that it’s hard to say that the definition given by the instructor at the beginning of the course is even the definition the instructor holds.
=====
The funny thing is that I think a college course – undergraduate or graduate – and the covers of a book are two of the easiest places in the world to set up temporary definitions.
I don’t think there’s any reason to pave the earth with leather. I guess this is more a matter of laying down a floor for your well-used home: then it doesn’t matter if people don’t have shoes, and they aren’t expected to take your floor with them when they go, but we can all stand with the same security on the floor provided by our educator.
I’m sure we’re all on the same page with this.
Yes. And nice extension of the shoes analogy. I like that. (I will steal it in future. ;-))
“One might object that if I have a belief about Socrates the man, I do not have a belief about Socrates the lizard”
I am stuck here. Don’t suppose you can help me out? I can’t get past simply agreeing with this, despite reading up on the rule of disjunction introduction. In the argument, “Socrates” is the noise made to refer to a particular lizard. At no point are you ever referring to the particular lizard; you are simply making the same noise to refer to something else (the man). You can only have knowledge of the lizard when referring to that lizard. The fact that the same noise/word is used seems to me to be irrelevant.
Epistemology is my favourite subject in philosophy, but I am still new to it and have no formal training, so apologies if I am tripping up on something elementary.
Oh man, just realised my problem … sorry, it’s late and I should be sleeping. Anyone who happened to be confused similarly: just replace the name “Socrates” with “Thing A”, or at least that’s how I did it. Thank you for the excellent post, and sorry for the confused earlier reply.
Some Gettier-type problems strike me as being based on an ambiguous use of language which could be better avoided.
There’s the classic original where:
Smith believes Jones will get the job (he’s overheard the company VP saying so).
Jones has ten coins in his pocket and Smith knows this.
So Smith believes a person with ten coins will get the job.
But in fact Smith gets the job.
Smith has ten coins in his pocket.
So Smith was right for the wrong reasons…
Here the relevant claim is that “Smith believes a person with ten coins will get the job” – which is ambiguous. It could mean either (a) Smith believes that some person or other, with ten coins in their pocket, will get the job, or (b) Smith believes that a specific person, who has ten coins in their pocket, will get the job. When Smith forms the belief he’s forming it in sense B – it regards Jones specifically. At the end when we say “Smith was right to believe that a person with ten coins in their pocket will get the job”, we’re using the phrase in sense A – a belief that Smith never actually held. We seem to be confusing the string of words with the belief that Smith actually formed.
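To make the two readings explicit (a formalization of my own for illustration, where j names Jones, C(x) means “x has ten coins in their pocket,” and G(x) means “x will get the job”):

```latex
% Reading (a), general: some person or other with ten coins will get the job
\exists x\, \bigl( C(x) \land G(x) \bigr)

% Reading (b), singular: this specific person (Jones), who has ten coins, will get the job
C(j) \land G(j)
```

Smith’s evidence supports the singular reading (b); the belief that comes out true at the end of the story is only the general reading (a).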
In your Socrates-the-lizard example, you’re claiming a correct belief that Socrates is mortal. But you don’t in fact have any belief about Socrates the lizard, because you don’t know of any Socrates the lizard – your beliefs concern Socrates the (nonexistent in this case) man, and you didn’t form a belief of the form “Socrates – be he man or lizard or rhino – is mortal”.
Maybe an approach which focusses more on the contents of people’s models of the world would help here – I think there are different beliefs that aren’t being uniquely identified by the strings of words we use to describe them.
Right. In both cases a fallacy has occurred (as I did note). And this was the point I kept trying to hammer home several years ago on my blog. It’s just that Gettier uses a more convoluted logical maneuver to try and get around it.
It’s possible to construct the logic so that “there is a man with ten coins in his pocket, and that man will get the job” can be a justified belief merely from the justified belief that Jones will get the job (believe me, tons of papers have been written on this); it might be more obvious when you rephrase it as a gambling problem and bet on the claim “whoever gets the job will have ten coins in his pocket,” and you place a big bet based on your justified belief that Jones will get the job, but he doesn’t, yet you win anyway: you thus displayed justified confidence in a win, yet that confidence was satisfied by accident (“winning” here corresponding to “true belief” in the analogy, so we have justified true belief…by accident).
But your point has not gone overlooked. See this article in Philosophy Now; and the same point was also made by Don Levi in “The Gettier Problem and the Parable of the Ten Coins,” Philosophy 70 (1995), pp. 5-25 (so philosophers did beat you to it). But it’s important to read Zagzebski, because she fixes these problems to show there really is still a problem (and it’s the one I note here, and quote her on).
If you want to see a formal logical demonstration that the Jones-coins case does produce justified true belief (by merging two separate disjuncts, each of which is justified but false, but thereby producing premises the conclusion of which is also justified but, by accident, true), see Sergei Artemov, “The Logic of Justification,” The Review of Symbolic Logic 1 (2008), pp. 477-513.
Or when you just get the words wrong. 😉 I watched your whole North Carolina “Why I Think Jesus Didn’t Exist” presentation on YouTube last night, and I found it extremely informative and helpful, but two or three times you used the word “disavow” when you meant “disabuse.” You’re so erudite in so many other ways that it’s both amusing and kind of endearing when you trip over a vocabulary item like that.
Case in point. 😉
When I found out that RC had some videos of this type up, I watched 2 – the NC video you mention, and a debate with Mike Licona (okay, really I listened to it, but I don’t think I missed much w/o the visuals)
I thought they were informative, though I heard RC using “specified complexity” which, as I understand it, is completely undefined (or, rather, multiply defined in conflicting ways), so I’m not sure how RC came up with numbers here. It might be that he used Kolmogorov Complexity, but I don’t know.
But my biggest gripe is not getting the Q&A after the lecture. I has a sad.
I’d need to know which talk and what numbers, in order to answer your specific question. But in general, specified complexity is defined in No Free Lunch. It is essentially Kolmogorov Complexity.
I wholeheartedly agree that we should change the definition of knowledge to be claims about probabilities.
The definition of knowledge as “justified true belief” seems to me to be some combination of circular, meaningless and useless; because how do you determine if a belief is true if not with more knowledge?
In your example above, how do you know that they were “really” talking about a lizard? You must have made that determination based on some sort of evidence. Does that make it 100% certain to be true? For that matter, you claim C1 to be a justified true belief, but how are you 100% certain that it is true?
I really want to add an impressive sentence here involving the word Bayesian, but I haven’t really wrapped my head around it yet.
You’re right, of course. It’s actually Bayes Theorem all the way down. But that’s a whole other discussion.
I’m probably misunderstanding, but is your syllogism valid under these conditions? I thought that this was considered the fallacy of the undistributed middle. The example I remember from school is:
Nothing is better than eternal bliss.
A ham sandwich is better than nothing.
∴A ham sandwich is better than eternal bliss.
The syllogism avoids that fallacy if we allow a masked man fallacy under the formation of P1 instead. So it’s one fallacy or the other–only the masked man fallacy occurs before the syllogism (as it underlies the justification for P1), and thus is harder to detect. Same as Gettier problems (only Gettier buries the fallacy much deeper in the convoluted path his problems take).
“No philosopher would consider my belief justified if I knew (or even so much as strongly suspected) that either premise was false.”
Not to be pedantic, but this isn’t true of all epistemologists. Reliabilists generally consider a belief justified if the belief is formed in a reliably truth-tracking manner, regardless of the subject’s beliefs about the method. So as long as S believes p because of a reliably truth-tracking method of getting beliefs M, and p is true, p counts as knowledge regardless of S’s beliefs about M, or whether S even knows anything about M at all!
This is a generalization about reliabilists, of course – as sure as I say “reliabilists hold…,” someone will find a soi-disant reliabilist who doesn’t hold some part of my definition. But many notable writers in that vein are indifferent to the subject’s beliefs about the knowledge-forming process or, indeed, whether the subject even realizes he or she has knowledge at all!
Zagzebski herself is a sort of reliabilist, by way of her virtue theory, which I think is hopelessly flawed. But that’s another argument.
Nice article, though – thanks! Mind if I assign it to my Intro students? You have a nice way of explaining things that is clear and easy-to-access.
It occurs to me that in our everyday practice we use the term knowledge to mean “Justified belief”, because it’s the justifications on which we rely when we claim something is true… which suggests cutting the knot rather brutally by dropping the “true” requirement entirely 🙂 Too much?
As I note, we can define knowledge any way we want. It’s just that the consequences might be problematic. Case in point: if you have a justified belief that the earth is flat, would it be correct to say you know the earth is flat?
If it’s ok to redefine knowledge, isn’t it also ok to redefine flat? (after all, the original concept of “flatness” arose from the surface of a calm sea, which is really just part of a sphere of radius about 4000 miles)
Semantic tricks can’t escape propositional logic. Redefining “flat” would simply avoid answering the question, by pretending I asked a different question (one using “flat” in your new way, instead of the way I was using it) and then answering that one, instead of answering the question I actually asked.
I didn’t think I was disagreeing with you – just giving another example of a natural-seeming redefinition whose “consequences might be problematic”. But I will actually agree with sawells’ (tongue-in-cheek?) suggestion if I am allowed to assert that one is never justified in believing a falsehood.
In fact, I do think that the definition of “justified” is the crux of the problem of knowledge and agree with Ayer’s claim that it “is to be decided, if at all, on grounds of practical convenience”. The right to be sure “may be earned in various ways; but even if one could give a complete description of them it would be a mistake to try to build it into the definition of knowledge”. (And it was seven years before Gettier that Ayer also wrote “If a witness is unreliable, his unsupported evidence may not enable us to know that what he says is true, even in a case where we completely trust him and he is not in fact deceiving us.”)
Would it be correct to say that your definition of knowledge as “justified true belief not arrived at accidentally” is equivalent to changing the definition of “justified” to mean “based on sound reasoning” (rather than just “based on valid reasoning”)?
Not really. Because “based on sound reasoning” is simply synonymous with “valid and true” reasoning (since “sound” simply means “validly based on true premises”) and thus you are just replacing “justified true belief” with “true belief.” Which fails to work because you can’t tell which beliefs are true or not without justification, and as soon as you add back in “justification” as a separate thing from truth, you reintroduce the possibility of accidental knowledge. This is what Zagzebski demonstrates. Thus, we have to specify that accidental knowledge won’t be counted as knowledge. This does mean that what we accidentally know will still be true, but when we discover that we “knew” it only accidentally, we will say we never really knew it, that we only knew it accidentally. The alternative is to simply admit that we did know it. That it was accidentally justified and true then can be said to make no difference. And the choice between them is largely arbitrary, a matter of either taste, or what you want to pragmatically do with the words “know” and “knowledge.”
OK. That’s good. Indeed you are right that, since p=>p, “true belief based on sound reasoning” includes just “true belief” itself as a special case (though I don’t think they are exactly equivalent). So I guess I will have to think harder about what you mean by “not arrived at accidentally”.
Hi Richard,
Interesting read. You suggest two possible ways to reply to Gettier cases. I have worries for both, so the following will be the reasons I think your proposals don’t work (or are at least unsatisfying to this philosophy student).
Your first suggestion is to simply argue that justified true belief is knowledge, full stop. You suggest that when one has a belief, one forms a belief about probabilities. You appear to imply, if I understand you, that knowledge claims are reducible to claims about probabilities.
You also say that philosophers tend to confuse “I know that x is probably true” with “I know that x is true.” And then state, “even though the latter, in all practical respects, is either saying the same thing as the former, or else can never actually be true–because it then would translate as “I know the probability of x is 100%,” which is knowledge no one ever has about anything (other than immediate uninterpreted experiences, but that’s not what we usually need to know).”
Okay, I have two problems with this suggestion. Firstly, I think philosophers do not generally confuse the two. There are reasons some philosophers don’t like to reduce knowledge claims to claims about probabilities, but these reasons are separate from worries about Gettier cases. What are their reasons? Because of well known (perhaps notorious) lottery cases. Suppose I buy a lottery ticket and the next day I form the belief that the lottery ticket is a loser (even though I haven’t heard the news about whether it was a winner or loser yet). Now, I form this belief based on the probability that it will be a loser (and what a high probability it is!). If your thesis is correct and knowledge claims are simply reducible to claims about probabilities, then I should know that my lottery ticket is a loser. After all, high probability would be sufficient for an epistemic agent to know some proposition. But, as many philosophers agree, nobody knows their lottery ticket is a loser, even if there is a high probability that it is a loser and the ticket actually ends up a loser. And if I can’t know that my lottery ticket is a loser despite that high probability, then knowledge cannot be reduced to claims about probability.
Timothy Williamson also has a brilliant analysis regarding assertion and knowledge and makes this same point. He argues that when we assert something, we are claiming to know something, but knowledge is not reducible to reasoning based on high probabilities (because of lottery cases).
So, according to Williamson, when I properly assert, “your car is in the garage,” I am expressing an epistemic state: that I know that your car is in the garage. Given your proposal about knowledge, and if we assume Williamson is right, then when I assert, “your car is in the garage,” I am expressing a belief about probabilities. But, intuitively, I’m not expressing a belief about probabilities. There is a world of difference between what I am expressing when I assert “your car is in the garage,” and what I am expressing when I assert “the probability that your car is in the garage is x.” But if your proposal is correct, then I’m asserting the exact same thing. That’s a little bizarre. So, if we accept Williamson’s analysis of assertion and knowledge, I suspect there will arise some difficulties for your proposal.
Secondly, the Gettier cases are generally understood as good counterexamples to any analysis of knowledge that argues that knowledge is justified true belief. Or, to present the JTB theory of knowledge more formally,
S knows that p if and only if S believes that p, S is justified in believing that p, and p is true.
The reason this is so is simply because one can have a justified true belief, but still not have knowledge, since one’s justified true belief has an element of luck to it. So, justified true belief is not sufficient for knowledge. Hence, the analysis fails. Now, Gettier himself seems to imply that one should just come up with some other non-trivial condition or definition for knowledge, but most have failed.
Now, your other suggestion was to simply change the definition, or add an additional condition that one can’t be epistemically lucky, or the condition that the belief is “not generated by random chance.” But this seems trivial and circular. It’s non-informative, and doesn’t really tell us anything interesting. I think most philosophers agree that there is a condition like that, but they aren’t sure what exactly that means, and what sort of factors would eliminate a belief being generated by random chance.
So, the worry is this: your first proposal seems false or at least has some pretty significant arguments and problems against it, and your second proposal seems trivial.
I also have some small quibbles with the cognitive thesis that we have entire lexicons in our brains, but I will leave that for now.
Cheers,
Ray
The irony here is that you just did what you claimed I was wrong to say philosophers do: you just conflated “probably” with “certainly.” It is plainly false that a lottery ticket “is” a loser when it has a non-trivial probability of winning. It doesn’t matter how low that probability is–unless it’s so low the probability of there ever being any winning ticket in the lifetime of anyone holding any ticket for that draw is very small (as would be the case for a lottery ticket that is media-reported to have lost…I am not aware of that ever occurring in the history of contemporary lotteries). But of course we design lotteries specifically to have winners. So they actually have a very high probability of generating a winner.
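To put rough numbers on that (the figures are illustrative assumptions of my own, not any real lottery’s odds): each individual ticket almost certainly loses, yet the draw as a whole is very likely to produce a winner, which is exactly why “this ticket is a loser” is only ever known as a high probability, never a certainty.

```python
# Illustrative lottery figures (assumptions, not real odds).
p_win = 1 / 10_000_000          # chance any single ticket wins
tickets_sold = 30_000_000       # tickets in play for the draw

# What the ticket-holder knows about their own ticket: high, but not 1.
p_my_ticket_loses = 1 - p_win

# What the draw as a whole is likely to do: very probably produce a winner.
p_at_least_one_winner = 1 - (1 - p_win) ** tickets_sold

print(f"P(my ticket loses)       = {p_my_ticket_loses:.7f}")     # 0.9999999
print(f"P(someone wins the draw) = {p_at_least_one_winner:.2f}") # ~0.95
```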
Thus you do not “know” a ticket “is” a loser in this scenario. You only know it has a low probability of winning. If you play poker (and philosophers who screw up gambling analogies like you just did usually don’t) then you know the decision you have to make when you have a low probability of having a winning hand is not (A) “I’m going to lose, so I fold,” but (B) “can I afford to lose and still remain a reckonable player in this game, and if no, can I afford to go all in anyway,” because there is a chance you’ll win (otherwise the only rational choice would be (A) not (B)). So you calculate risk. That’s complicated. But it’s partly a function of the probability of losing and the amount you will lose (relative to what you have to lose). Thus, poker players holding “lottery tickets” don’t assume they have already lost. They assume they might yet win, and the only question is, is it worth the risk to find out. Which is entirely compatible with the belief “it is very improbable I will win.”
Case in point: I once was near the end of a poker game and faced very long odds against winning. Several players were showing extraordinary cards and I didn’t have much (just a low pair, not even enough to beat what was showing). But I could afford to lose and buy back in, so I went all in. I won. The other players had been bluffing, and my last draw gave me three of a kind. Obviously I did not do that believing I would lose. Why would I just surrender all my money knowing I was going to lose it? Obviously, though I knew I was almost certainly going to lose, I also knew there was a non-trivial chance I’d win, and I could afford to find out.
Now that we know how gambling knowledge actually functions in the real world, let’s see what happens to your argument…
Which should teach you to stop being certain about things, not to treat probabilities as certainties.
You thus learned exactly the opposite lesson from this fact than you should have.
Everything is a question of risk management. Propositions you can afford to be false you don’t need to have sterling-high probabilities for; propositions you can’t afford to be false you do need sterling-high probabilities for; and so on in between. Thus, I am comfortable saying I “know” things about ancient history when the probability is 95% or up. But I would never say I “know” my car is safe to drive when it had a 5% chance of exploding.
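As a minimal sketch of that risk calculus (the dollar figures and probabilities here are my own illustrative assumptions), the same 5% chance of being wrong is tolerable or intolerable depending entirely on what being wrong costs:

```python
def worth_acting(p_true, cost_if_wrong, benefit_if_right):
    """Act as though x is true iff the expected benefit outweighs the expected loss."""
    return p_true * benefit_if_right > (1 - p_true) * cost_if_wrong

# Asserting a historical claim at 95%: being wrong costs little (a correction).
print(worth_acting(p_true=0.95, cost_if_wrong=1, benefit_if_right=1))          # True

# Driving a car with a 5% chance of exploding: being wrong is catastrophic.
print(worth_acting(p_true=0.95, cost_if_wrong=1_000_000, benefit_if_right=1))  # False
```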
See the difference?
Our assertions of certainty are really declarations of risk (the probability is high enough to make the risk low enough for us to operate as though x is true). They are not actual assertions of certainty (that x will happen 100% of the time). It is philosophers (and ordinary people, too) who confuse the one for the other. As you keep doing, notably.
Notice how this conclusion only follows when you confuse probability assertions with certainty assertions–you thus just proved my point, in the very attempt to refute it.
Your example actually proves knowledge is an assertion of probability. The philosopher who claims to “know” he will lose the lottery is simply making a false statement. He has no such knowledge. Only a philosopher who claims to know he will probably lose the lottery is making a true statement.
Therefore, knowledge is reduced to claims about probability. By your own example!
Only because you haven’t learned how to think like a gambler or a risk assessor. Gamblers and risk assessors don’t think this is bizarre at all. In fact, they understand that’s how everyone should be thinking–and how in actual practice they do think (by observing their behavior, the true tell–what people claim to be thinking or even think they’re thinking, psychologists long ago discovered, is often not true).
It’s everyone else whose intuitions are wrong. And as a philosopher, you should know never to trust your intuitions when there are experts in the matter to consult first. Talk to real world gamblers and risk assessors about your silly lottery cases and you’ll learn an important lesson: philosophy from the armchair is useless. You need to go talk to people who actually have the knowledge you lack before forming opinions about how things work.
The problem with this is that such knowledge predominantly can never exist. S can never be justified in believing that p when p has any nonzero probability of being false, as almost all propositions about anything substantive do. Therefore, you just defined knowledge out of existence. The only things you can ever know by this definition are things that have no chance of being false, and that’s limited to present uninterpreted experience (“I see colors” vs. “I see a tree ten feet from me”).
So what’s the use of defining knowledge as something no one pretty much ever has or can have? That’s simply useless. Better to define it as something we do commonly have, and more importantly, as what people really mean when they go about their lives (by, again, observing them: hence you need to pay attention to the science of human behavior and decision making before making pronouncements about what they mean when they use words like “know”).
Why do you think that matters?
Imagine a world where the only way you ever acquire knowledge about anything is complete accident. Maybe you drank Harry Potter’s Felix Felicis potion and all lucky accidents go your way. You would quickly learn that what you know is nevertheless typically true. So you would call that knowledge and rely on it in every decision you make.
It simply does not matter how you come by knowledge, so long as it’s usually true. Thus, knowledge by accident is still knowledge.
The worry you may have is that we don’t live in a Felix Felicis world, so beliefs gained by accident will usually not be true, so you operate on the assumption that it won’t be, but that’s just another probability: it may have a low probability of being true, but that is still not zero. So it may yet be true. Why is it such a bad thing to simply admit that sometimes it’s true, and any decision we make based on a mistaken belief that it is true will be correct? You can’t say “because probably it won’t be true” because in Gettier cases it actually has a high probability of being true, because we followed a procedure that reduces the rate of error to something very low. We are therefore not trusting all accidental beliefs to be true. We are only trusting accidental beliefs to be true when they are very unlikely to be false–as when we follow a method that makes the probability of being wrong very low. As has to be the case in Gettier examples (e.g. we have to have a very low probability of being wrong about any false belief used as a premise in a Gettier case, otherwise we would not be justified in believing those premises, and thus neither the conclusion).
So I don’t see any problem here. It’s really just a matter of arbitrary taste or convenience whether you include Gettier knowledge as knowledge.
All definitions are circular. By definition.
Definitions never inform you of anything other than what a term will refer to when you use it. My definition does that. And being the only information a definition can ever convey, it cannot be accused of being non-informative.
And yet it does tell you something interesting: that when I use that definition, I am including accidental true beliefs as knowledge, and more generally, that an accidental true belief can exist, and English speaking people can sometimes call it knowledge. That’s three very interesting things.
You don’t ever eliminate a belief being generated by random chance. What you do is reduce its probability. As long as you follow any reliable method, which by definition is a method that makes your chances of being wrong very low (as low as you need to accept the consequent risk of being wrong), you accept the product as knowledge (“x is very probably true”). That sometimes the method will fail and x will be true anyway is irrelevant to that conclusion. Indeed, it actually entails that a proposition is slightly more likely to be true than what any reliable method tells you it is, because some of the probability space occupied by failures is actually taken up by successes (accidentally true cases), so that when the probability that x is true is determined to be 99% on method z because z fails only 1% of the time, the probability that x is true is actually slightly greater than 99%, when we account for adding the very small number of cases when x will be true even when z fails. But when we exclude accidental true beliefs from knowledge, we simply stick with the 99% determined by z, and don’t add the very small number of cases when x will be true even when z fails. Even though, on a straightforward utility equation, we should. Because true is true, and the consequences of x being true will be the same regardless of how we came by believing it.
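Spelled out as arithmetic (with illustrative numbers of my own): if method z succeeds 99% of the time, and even when it misfires the belief it delivers still happens to be true some small fraction of the time, the total probability that the belief is true comes out a touch above the method’s stated 99%.

```python
# Illustrative figures only.
p_method_succeeds = 0.99       # method z delivers a properly grounded true belief
p_true_if_method_fails = 0.10  # when z misfires, the belief is still true by accident

# Law of total probability:
p_belief_true = p_method_succeeds + (1 - p_method_succeeds) * p_true_if_method_fails

print(p_belief_true)  # 0.991 -- slightly above the 99% reliability of z itself
```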
And that is very interesting indeed.
Of course, you should not take that as saying we have alphabetized codices in our brains. Our brains are organized differently than book technology. But if we know what any word means, that means there is a brain circuit tying a sound (the heard word) to a network of other circuits that code for what the word can refer to, which is the meaning of the word (everyone’s codebook will be a little different, e.g. what I imagine a generic tree to be will be different than you, what kinds of things I can recollect or creatively construct as a tree will differ, etc., but overlapping enough to make communication possible: see my discussion of semantics in Sense and Goodness without God, esp. II.2, pp. 27-48). If there are no reference circuits tied into a heard sound circuit, we don’t know what the word means–it’s not in our “lexicon.” If the heard sound is only tied into completely different circuits (e.g. the Chinese word that sounds like “lee” means nothing at all like what the English word “lee” means), then we do not have the same lexicons in our brains. And so forth. Learning a language involves trying to tune our lexical reference circuits to be as similar to our peers’ lexical circuits as possible, and the more dissimilar they are, the less able we will be to communicate with them (we will often say things they misunderstand as something else, and they will often say things we misunderstand as something else; the goal is to reduce those communication errors, and the only way to do that is to try and get the reference circuits as close as possible, which of course our brains evolved to do autonomically, although there are actions we can consciously take to help–hence that technological invention called a language immersion course).
I think you’re wrong. I think Bayesianism legitimately solves Gettier problems.
Your Bayesian logic:
Justification: Socrates is probably a man (90%)
Justification: A man is probably mortal (90%)
Belief: Socrates is probably mortal (81%)
My Bayesian logic:
Justification: Socrates is probably a man (90%)
Justification: A man is probably mortal (90%)
Belief: Socrates is probably a mortal man (81%)
Justification: Socrates is probably not a non-man (10%)
Justification: A non-man might be mortal (50%)
Belief: Socrates is probably not a mortal non-man (5%)
Justification: Socrates is probably a mortal man (81%)
Justification: Socrates is probably not a mortal non-man (5%)
Belief: Socrates is probably mortal (86%)
You lost me at a lizard having a 50% chance of being immortal.
That’s because you’re trying to match my logic up to yours. A non-man might be a lizard (mortal), or it might be a pet rock (immortal). The point is, I took the possibility of Socrates being a lizard into account, when I took into account the possibility of Socrates being a non-man. Lizards, rocks, amoebas, computer programs, EVERYTHING that isn’t a man is a “non-man”, and all those possibilities together compose the 10% possibility space in which Socrates is not a man. And I arbitrarily postulated, for the sake of argument, that 50% of that possibility space is composed of mortals.
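Spelled out as a total-probability calculation (with my arbitrary illustrative numbers):

```python
# My illustrative numbers, summed over the two possibility spaces.
p_man = 0.90
p_mortal_given_man = 0.90
p_nonman = 1 - p_man            # 0.10: lizards, rocks, amoebas, programs, ...
p_mortal_given_nonman = 0.50    # the arbitrary split of the non-man space

p_mortal = p_man * p_mortal_given_man + p_nonman * p_mortal_given_nonman
print(p_mortal)  # 0.86 -- the 81% + 5% = 86% given above
```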
Rocks are hardly immortal. But I see your point.