A correspondent (code name emergence) sent me a letter asking an interesting question about epistemology…
Hey Dr. Carrier, I realize that this might be kind of off-topic, but I was talking about philosophy in the Thunderdome on Pharyngula, and you were recommended to me as the guy to go to for philosophy questions. I understand if you’re busy, but I wanted to ask a few questions about something that’s been bothering me. I’ve been thinking about a lot of famous epistemological dilemmas lately, like the regress argument, Descartes’s evil deceiver, or the problem of induction. There are a few other minor ones, but those are the main 3. In trying to understand them, I think that I’ve been able to figure out some solutions to some of them.
I want to talk about the infinite regress first. This argument seems to have several different facets, and I’ve noticed that it might bring up a few other ideas that not many people have touched on. There’s even a bit of overlap with the idea of the evil deceiver. For now, I just want to present my main solution to the issue, and ask a few questions about it.
This is my general argument:
- The foundation of the logical chain starts with my own thoughts and perceptions. Specifically, my experiences of them. This is where the issue sort of overlaps with the evil deceiver. Even if I’m just a brain in a tank, I’m still experiencing something. There are still images and sounds that I am experiencing, even if they don’t correspond to anything real. The reason that I know that I am experiencing them is purely because I am, in fact, directly experiencing them. I don’t infer that I am experiencing my own thoughts or perceptions from anything other than my experiences of them. This is also how I know that I exist. I have a subjective experience of existence.
- I know that abstract concepts like “existence”, “non-existence”, “location”, “quantity”, etc. exist because they are generalizations and idealizations that I use to describe what I perceive. For example, I have a concept of “existence” because I use that term to describe something that I perceive. If I perceive something to “be”, I call that “existence”. It doesn’t matter that what I’m perceiving could be an illusion. I still have the abstract concept inside of my mind.
- I know that the basic rules of logic (non-contradiction, identity, excluded middle, etc.) are true because of my understanding of the abstract concepts that they describe. For example, I know that “something is itself” is true because of what I understand the terms “something”, “is”, and “itself” to mean.
Now on to what I’m still iffy about:
- In my first point, I say that I justify my belief that I experience thoughts and perceptions through my experience of them. Is this circular reasoning?
- The second point that I make seems to rely on the idea that I can know that the abstract concept of something exists purely by experiencing a perception of that thing. For example, I can know that the abstract concept of a circle exists purely from experiencing a perception of a circular object, even if that perception is an illusion that doesn’t represent anything in objective reality. Are there any problems with this?
- Is there something wrong with the fact that this line of argument is so complicated? I feel like the fact that this idea is so long somehow disqualifies it from solving the regress argument. I also feel like all of the side-explanations might need to be justified as well.
- Am I overthinking this issue?
I’m sorry if this was too long, or anything else like that. I just really want to get some answers to these questions.
The short answer? No, no, no, and…drum roll…no.
For those who want to catch up to this point in the conversation, I tackle the regress problem in the context of my epistemology in Epistemological End Game. Relevant discussions of my epistemology (in application as well as construction) are in Sense and Goodness without God (Chs. II, III.6, III.9) and Proving History (Ch. 2; plus see the index, “gerrymandering (a theory)”).
My answer in more detail is, in reverse order:
- [i.e. 4] To do good philosophy, it’s not overthinking it to ask any question that you don’t already know the obvious answer to. A grounded epistemology in fact requires overthinking, so you can be sure it’s all been thought out. You don’t have to dwell on the overthought. But it’s good to have thought it through.
- [i.e. 3] The machine you are trying to explain (a consciousness-producing mind, ostensibly floating through a whole universe, attempting to produce knowledge of both) is (and indeed must necessarily be) vastly complex. So you should expect any explanation of how that mind mediates its access to itself and the outside world so as to construct perception and understanding will not likely be a simple one. If it were simple, that would be spooky. Remember, we are starting with the output of a machine (perception events). We are not starting with anything actually ontologically simple (like a photon or space-time). And we are trying to ascertain what connection those outputs (perception events) have to anything apart from them (an external world, which also includes the machine generating those outputs we are observing). Analogously, watching a movie, the flow of the images on a wall looks really simple (it’s just colored light, reforming effortlessly). But if you ask what is producing it, you should expect, and will discover, it’s a really complicated machine, combined with a really complicated causal history involving an even more complex social system (e.g. the Hollywood Industrial Complex).
- [i.e. 2] Abstractions are just names for sets of perception events with a common characteristic. So you are creating them in your concept-space, out of the raw undeniable data presented to you in your consciousness. They are therefore as undeniably certain as the perception events of which they are assembled. There really isn’t any problem explaining abstraction, which is just “information processing” the data of perception. Philosophers who think abstraction has to be “hyper-real” in some independent sense for commonalities to exist outside the mind are the ones off the rails. Once circles are capable of existing, nothing else needs to exist to explain why they have common features, or why we can assign a code word to designate those common features. See Defining Naturalism for my discussion of mathematical abstractions as an example. This also comes up at many points in my Critique of Reppert. And its application to rule out solipsistic and rule in physicalist explanations of our experiences is outlined in my Critique of Rea.
- [i.e. 1] If the existence of a perception event is literally undeniable, that means there is no logically possible way it can be false to say it exists (when experienced). That’s not a fallacy of circular argument. Because circular arguments only exist when something that needs proof remains unproven. But logically necessary truths are allowed to be circular because they are incapable of being false (unlike a fallacious circular argument, whose conclusion is capable of being false, that’s why it’s a fallacy). They can’t be false because by definition they are necessarily true. In fact, by definition, all tautologies are necessarily true. That’s actually a really handy feature of them, epistemically speaking. It just sucks that most of the things we want to know are not tautologies. But we start at those things on a foundation of tautologies: facts that are incapable of being false. Pointing out that the proposition “I am having x experience right now” is incapable of being false is not a circular argument. It’s just a tautological description of the data.
Since Sense and Goodness was published I have come to conclude that Bayesian epistemology is correct (The Gettier Problem, Two Bayesian Fallacies, etc.) and that is reflected in Proving History (especially in Chs. 4 and 6). This allows a potential solution to the problem of induction, especially when using Laplacean reasoning. The upshot is: regress ends with facts that cannot be false (raw uninterpreted present experience); what we can infer from those facts is probabilistically true by virtue of deductively certain logic (so induction does not require a circular presumption of the future resembling the past); and Cartesian demons can be ruled out because, though they are the only alternative hypothesis with a competitive consequent probability of producing the same evidence, they are necessarily vastly more complex than explanations lacking them, and therefore have vastly smaller prior probabilities (as I explain in Defining the Supernatural vs. Logical Positivism).
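The Laplacean reasoning mentioned here can be sketched with Laplace's rule of succession. The snippet below is my own illustrative sketch (the function name and the 10,000-day figure are invented for the example), assuming only a uniform prior over the unknown frequency, so no circular appeal to "the future resembling the past" is made:

```python
# Laplace's rule of succession: after observing s successes in n trials,
# the probability the next trial succeeds is (s + 1) / (n + 2). This
# follows deductively from probability theory plus a uniform prior over
# the unknown frequency -- probability in, probability out, no induction
# presumed.

from fractions import Fraction

def laplace_rule(successes: int, trials: int) -> Fraction:
    """P(next trial succeeds | s successes in n trials), uniform prior."""
    return Fraction(successes + 1, trials + 2)

# E.g. the sun has risen every one of, say, 10,000 observed days:
p_sunrise = laplace_rule(10_000, 10_000)
print(p_sunrise)         # 10001/10002
print(float(p_sunrise))  # ~0.9999
```

Note the output is itself only a probability, which is the point: the deduction is certain, but what it certifies is a probability, not the future event.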
That Cartesian demons are vastly improbable is a fact deductively certain from the data undeniably present to us and therefore incapable of being false. The data, that is, is incapable of being false; the deduction could still be false, e.g. if we are making a mistake in the logic somewhere unbeknownst to us, but that is simply one more consideration of the probability of such a mistake entailed by the undeniable data. In other words, the data we have entails the probability that we are wrong about this is small.
An illustration of that is: the weakest Cartesian Demon is your friends pranking you (a somewhat more convoluted and thus more complex explanation of what you are experiencing than that you are just experiencing the true state of the world right now); a much stronger but still weak Cartesian Demon is The Truman Show (which is far more complicated in the system required to realize it); a far better Cartesian Demon is The Matrix (which is far more complicated still, in the system required to realize it); so to get even better than that (even all the way to a perfect Cartesian Demon) requires a vastly more complex hypothesis than even that (necessarily, as discussed in The God Impossible). Just to construct and describe its powers and motives and how and why it has them. Worse, you then also have to still propose a whole extra universe on top of the Cartesian Demon anyway, in which the Cartesian Demon can exist and which makes its powers realizable. Just skipping the vast added complexity of the demon and sticking with that universe you need in the explanation anyway is by definition far simpler.
So, Cartesian Demons are simply too improbable to credit on present undeniable evidence (particularly the undeniable present experience of all these logical facts). That’s epistemically improbable, of course. We may well be brains in vats. But we have no reason at all to believe that that’s in any way likely. Mediated perception of an external reality more or less accurately modeled by our brain is just a far simpler and thus epistemically more likely explanation of how these perception events are occurring as they are. And that conclusion requires no infinite regress, no circular argument.
What do you think about the Nick Bostrom idea that there are likely more simulated worlds than real worlds, since real worlds will necessarily generate a plethora of simulated worlds, and thus your prior probability of being in a simulated, Matrix-like world is actually greater than of being in a real world?
Also along those lines, what do you think about combining a hypothetical strong conclusion on abiogenesis coming out of a solved Drake equation with the probability of a technological singularity generating what we would have to call a god? And wouldn’t that necessarily make polytheism (potentially anyway, depending on the numbers) probably true?
I addressed that in The God Impossible. The original has hyperlinks, but in brief:
You’ve also argued that teleological arguments more rightly point to a creator predominantly interested in black holes. So it stands to reason many alien races would be interested in better understanding physics and would be running tons of such simulated worlds that just so happen to have a side of life they don’t know or care about. So the gross ad hoc evil can merely be reframed as incidental amorality.
It’s extremely unlikely anyone doing that (indeed, anyone technologically capable of doing that) would not know that’s what was happening. And if they know, then they are sociopathic monsters. Not just that, but somehow, contradictorily, sociopathic monsters who don’t get annoyed by their paradise sims being denied bandwidth because of nutters running nightmare sims on the mainframe. I don’t think such a species is likely even to exist, and certainly vastly unlikely to ever survive long enough to be able to convert the whole universe into a mainframe running sims.
“I am having x experience right now” is incapable of being false
That still doesn’t mean you have knowledge of the experience that you are having. You could be experiencing an illusion or delusion; you could be high or dreaming. When you are having a dream you are experiencing the dream not the experience you are dreaming.
This is intuitively obvious, if you’ve ever had the experience of thinking you knew something, but later discovered you were wrong. You (a) didn’t “know” what you thought you did, (b) experienced exactly the same sensation you would have had if you had been correct in your knowledge, and (c) only later were able to tell the two experiences apart.
It seems contradictory to be building an epistemology in which being wrong (not knowing something) is as valid as knowing something experientially. That is the challenge Sextus Empiricus raises, and it does not appear to me that you have answered it.
“You could be experiencing an illusion or delusion…” — That’s what we mean by raw uninterpreted experience. That element is already covered in the materials linked and referenced here.
You seem not to have read any of that. So you don’t know that what we are doing is taking undeniables (raw uninterpreted present experiences) and testing hypotheses that explain them. The result is usually a hypothesis that is more probable than the others. And that is what we call knowledge (most particularly, we have knowledge that P(h) is x given the undeniables, not knowledge that h is certainly true; and, BTW, our knowledge that P(h) is x given the undeniables is deductively certain, or can be if we take the trouble to deductively verify it, usually through an a fortiori proof).
“It seems contradictory to be building an epistemology in which being wrong (not knowing something) is as valid as knowing something experientially.” — I don’t know how you are using the word “valid” in this sentence. It doesn’t seem to make any coherent sense if you mean the formal definition of validity. A valid conclusion is a conclusion that follows deductively from the premises (i.e. without fallacy). Valid conclusions can easily be false. That’s why conclusions have to be valid and sound (or just “sound,” since saying both is technically redundant). The premises need to be probable, not merely possible.
And those probabilities come from the undeniables. For example, it is undeniable that the being-mistaken scenario you describe presents as rare, which by definition means it is improbable for any given item of what we claim to know. That’s already built into Bayesian epistemology. If, let’s say, 5% of our knowledge is mistaken, then everything we claim to know, we are claiming to know that it is 95% likely to be true. The cases that are mistaken are included in that remaining 5%. Thus there is no contradiction. Sextus had very little knowledge of probability theory, and none of Bayes’ Theorem. But it is notable that his solution is very similar (contrary to what many who read him think, he actually asserted a lot of knowledge to be highly probable and not 50/50, based on a cruder form of the same reasoning).
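The 95%/5% bookkeeping above is simple enough to make explicit. A toy sketch, with numbers invented purely for illustration:

```python
# If our track record shows about 5% of confident knowledge claims turn
# out mistaken, then each claim should be asserted at ~95% probability,
# and the mistaken cases fall inside the remaining 5% -- so later
# discovering some errors contradicts nothing we claimed.

error_rate = 0.05   # observed frequency of being mistaken (illustrative)
claims = 1000       # confident knowledge claims made (illustrative)

expected_correct = claims * (1 - error_rate)
expected_mistaken = claims * error_rate

# The mistaken cases are already accounted for in what we asserted:
assert expected_correct + expected_mistaken == claims
print(expected_correct, expected_mistaken)  # 950.0 50.0
```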
Now, you might want to try and suggest we are greatly deceived in all this, and that all knowledge claims are mistakes (everything we claim to know, we only falsely believe we know). But that hypothesis fails to explain the data well at all (how it is we navigate our environment successfully so often without dying or starving; how we do so well at reading other people’s thoughts; etc.) and any gerrymandering of it to fix that, vastly reduces its prior probability (so you just move the same improbability around in the equation, getting you nowhere).
What you are then proposing is just another Cartesian Demon. It falls to the same analysis. Which hypothesis explains the data better with fewer assumptions? Not that we are dreaming etc. or that we are falsely believing we know things all the time. That we are interacting with a real world, and right about most things most of the time, requires fewer presupposed continuing amazing coincidences and posited additional contents of the world. It is therefore vastly simpler. Until, of course, evidence arises to the contrary. Then we wake up or go to the doctor (or, should it be the case, realize we are in a permanent dream of our own making etc., a la Vanilla Sky).
Regarding the phrase in the title, ‘Epistemology without insurmountable regress,’ I accept the phrase, but feel it needs a caveat. I don’t think we really disagree; I just wanted to emphasize something different.
The infinite regress argument is correct: universally deductive proofs of useful facts about reality are impossible. This is only seen to be a problem, however, when the concept of knowledge is badly understood, as with useless phrases, such as ‘justified true belief.’ (It’s not that the phrase represents a necessarily empty set, but that we can never know what propositions fit in that category.)
Some statements are indeed necessarily true, but they achieve that status by being sufficiently vague to be essentially uninformative. E.g. ‘something exists,’ or ‘I (something) just experienced something.’
Nontrivial and informative facts always need to be inferred inductively, and probability theory is open ended. For example, for simplicity, I might fit my data set with a linear function, and leave it at that. If I wanted to be more careful, I might try two fits, one linear and another quadratic, and adjudicate between them using model comparison (the quadratic, with its higher degrees of freedom, is guaranteed to give a fit with a likelihood no lower, and usually higher, but due to its larger number of parameters, has lower prior probability). We could keep on adding polynomials with higher and higher complexity. We could add yet another layer to the inductive onion, by supposing that up to some point in time the data evolved according to some polynomial with a fixed set of parameters, then after that point, evolved in accord with some other set of parameters. Formally, we wouldn’t know this model was unnecessary, until we tried it out – it might give the best fit. There is no limit to the number of levels of sophistication we can add to our model. Maybe my calculator is malfunctioning? Maybe the whole experiment was just a dream? etc. etc. …
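The linear-versus-quadratic comparison described here can be run computationally. Below is a toy sketch (my own code and data, not from the discussion), using BIC as a standard large-sample stand-in for full Bayesian model comparison: the quadratic never fits worse, but its extra parameter costs it in the complexity penalty.

```python
# Toy Bayesian model comparison via BIC: lower BIC is better; the
# penalty term k*ln(n) plays the role of the lower prior probability of
# the more complex (higher-parameter) model.

import math

def fit_poly(xs, ys, degree):
    """Least-squares polynomial fit via normal equations (Gaussian elim.)."""
    n = degree + 1
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    for col in range(n):  # forward elimination with partial pivoting
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coeffs = [0.0] * n
    for r in reversed(range(n)):  # back substitution
        coeffs[r] = (b[r] - sum(A[r][c] * coeffs[c]
                                for c in range(r + 1, n))) / A[r][r]
    return coeffs

def bic(xs, ys, degree):
    """BIC = n*ln(RSS/n) + k*ln(n); lower is better."""
    coeffs = fit_poly(xs, ys, degree)
    rss = sum((y - sum(c * x ** i for i, c in enumerate(coeffs))) ** 2
              for x, y in zip(xs, ys))
    n, k = len(xs), degree + 1
    return n * math.log(rss / n) + k * math.log(n)

# Data generated from y = 2x + 1 plus small fixed "noise" (invented):
xs = [0, 1, 2, 3, 4, 5, 6, 7]
noise = [0.1, -0.2, 0.15, -0.1, 0.05, -0.15, 0.2, -0.05]
ys = [2 * x + 1 + e for x, e in zip(xs, noise)]

print(bic(xs, ys, 1) < bic(xs, ys, 2))  # True: the simpler model wins
```

The quadratic’s raw fit is (slightly) better here, exactly as the comment predicts; it loses only on the complexity penalty.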
This is the problem of theory ladenness, or as I call it elsewhere, the calibration problem. These are only problems, however, if what one is expecting is deductive certainty. As soon as one gets over that fantasy, then it’s business as usual.
Certainly.
Although “universally deductive proofs of useful facts about reality are impossible” is not strictly true, since we can be deductively certain of the statistical facts of present experience and thus deductively certain of what probabilities they entail. But that only gets you to warranted belief not certitude. So “it is very probable that x” can be deductively proven from undeniable facts of present experience; but that’s the same point you are making: though the buck stops somewhere (regress is not infinite), it stops at some degree of uncertainty.
I’ve noted elsewhere, BTW, that this is necessarily true even of God, should he exist: he also cannot be deductively certain he is not the victim of a Cartesian Demon or any of the other albeit improbable possibilities that would generate the same data persuading him that he is a god. Notably, that’s true, even if God is, as theists desperately hope, a logically necessary being. Because any “god” would still then not be able to know with deductive certainty that s/he is that god.
Thanks for considering my comment.
To nit-pick a little, it seems to me that while “it is very probable that x” can be proven in the mathematical sense, as you say, its proof is contingent upon some selected probability model, and it is therefore not a property of the universe, but an abstract construct of the mind.
We can have deductive proofs concerning mathematical entities, but these have no reality. We can have rationally supported extremely high confidence in (non-trivial) propositions about the universe, but not deductive proofs.
“…its proof is contingent upon some selected probability model, and it is therefore not a property of the universe, but an abstract construct of the mind.” — That construct is a model of the universe, based on facts that are undeniable (the requisite probability theory follows deductively from logic, via Willard Arithmetic and the singular undeniable premise that distinctions exist). Therefore it is a hypothesis about the universe that cannot be false (as if it were, we would not be experiencing it).
“We can have deductive proofs concerning mathematical entities, but these have no reality.” — Numbers and relations are hypothetical models of physically realizable states. That is not the same thing as having no reality. Aristotle was right about this (for a bibliography of recent philosophers who agree, see Sense and Goodness without God, Chapter III.5.4-5, pp. 124-34; more have been published since).
Richard,
I doubt we’ll reach accord on this topic (I seem to recall we had a long discussion before, which left us both completely unconvinced of the other’s position), but in the hope of making steps in the right direction (and in the spirit of sharpening my own skills in philosophical discourse):
To perform any probability calculation, a hypothesis space has to be formed, but there is no correct hypothesis space. We cannot say it is an undeniable fact that H_1 belongs in the search space, while H_2 does not. If it were possible to make such statements, then there would be no need for probability, and there would only be one hypothesis. (If there was more than one proposition in the search space, then at least one must be false, in which case we have to ask what makes one false hypothesis a good one to evaluate, and another false hypothesis not – in what sense could we say that the correctness of our chosen hypothesis space is an undeniable fact?)
Sure, we can argue that H_1897634 is far too implausible to consider including in the hypothesis space, but in doing so, we are informally evaluating that hypothesis, using some technique that is valid only to the extent that it succeeds in mimicking the output of Bayes’ theorem. But we can’t do this for all hypotheses – at some point, we draw the line and say, ‘I have to stop calculating here, otherwise I’ll never get anywhere useful.’ The point at which we make that decision, and the exact route to that position are governed to some extent by the data, but also by intuition and personal taste. There is nothing undeniably factual about them.
“We can not say it is an undeniable fact that H_1 belongs in the search space, while H_2 does not.” — This confuses real space with epistemic space. Hypotheses we don’t know whether they belong (or don’t know of at all) are not in b (background knowledge, a condition in every term of a Bayesian equation), therefore they are irrelevant epistemically. They become relevant only when discovered, and then they are in b, causing an update. Remember, Bayes’ Theorem tells us what P(h) is only given what we have presented to us (in b). It is therefore a conditional probability, one that is necessarily true (within an a fortiori error margin) given the undeniables accessible to us at that time. The question then is: how often are we wrong about that. For which we have data…
Among those undeniables will be data relating to the condition we are in. For example, on previous occasions in our experience (hence present as recollections now; and, by iteration transformation, likewise data reported from others who have come before us, putatively centuries worth) statistically after a poor search of the possibility space we end up buying into too many hypotheses that fail, but after a diligent search of the possibility space, we far more often end up with successful hypotheses rather than overlooked ones (which we know from their success in practice, which is massively improbable as a coincidence: e.g. the convergence and predictive power principles in Sense and Goodness, Chapter II.3.1, pp. 51-53). So we know that if we perform a diligent search of the possibility space, we will not commonly overlook the correct hypothesis and buy into a false one. (This is true even for the Newton-Einstein case, both of whom discovered correct hypotheses, the one simply extending the other’s into new conditions; likewise Classical Mechanics is an emergent property of Quantum Mechanics.)
If it helps, you can do this in two stages: instead of working from a real-world hypothesis right away, just assume you are a brain in a vat, and what you are trying to figure out are the rules used by the external computer to feed you information (including recollected memories). You will end up with a set of rules that will be undeniably true to a high probability. Then, once you’ve sussed out all of that you can, you can then test the hypothesis “a computer is doing this” against “this is actually all being caused by a physical world that acts that way” and find the latter is informationally much less complex and thus has a much higher prior. Indeed, if the computer is eternally infallible, then it wouldn’t even matter: the physical world then does exist…as the output of a computer (SaG, Chapter II.2.1.2, pp. 31-32). And if it’s not, there will be information eventually that gives it away (and here, the closer you imagine the computer to being eternally infallible, so as to increase the rarity of give-aways, the more complex and specified that computer has to be, and thus the more complex the hypothesis, and thus the lower its prior probability).
“The point at which we make that decision, and the exact route to that position are governed to some extent by the data, but also by intuition and personal taste.” — That’s not epistemic, that’s pragmatic. We apportion research and vetting time to risk, and risk varies to values (some people care more about some risks than others). So you are here confusing the true probabilities (the ones we could in principle deductively arrive at if we could be absolutely thorough with all undeniable data) with the ones we actually generate, which are a fortiori (probabilities far lower than actual, but just high enough to risk acting on, or too low to).
Bayesian epistemology is all about a fortiori reasoning. See Proving History, index, “a fortiori.” So, for example, although I could determine nearly the actual probability of my apartment building being destroyed by a meteorite tomorrow (the data is available), all I need to know is that it is unlikely enough for me to worry about, so I can run a quick and crude a fortiori calculation: one divided by the number of days I’m sure this spot has not been hit by meteorites (let’s say, 3,650,000 days, or 10,000 years; I could do better with a Laplacean formula, but the difference doesn’t matter, this is a fortiori). I then know for a fact that the probability of my apartment being destroyed by a meteorite tomorrow is less than (and I can show also simply that it is very, very much less than) 1 in 3,650,000, which is too unlikely to care about or take any precautions against. I could, if I wanted, walk that calculation all the way down to the undeniables of my direct experience, as the foundation for all the data I used to run that top calculation. Thus, I can be deductively certain that the probability of my apartment being destroyed by a meteorite tomorrow is less than 1 in 3,650,000. Without ever knowing (or even caring) what the actual probability is (death by meteorite in general is closer to 1 in 75 million). And this is how we live every part of our lives. All knowledge works this way.
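The meteorite bound above can be run as a few lines of arithmetic. This sketch uses the post’s own numbers, with a Laplacean form for the bound:

```python
# A fortiori bound on P(apartment destroyed by meteorite tomorrow):
# the spot has not been hit in ~10,000 years of days, so a crude
# Laplacean bound (0 impacts in n days gives (0 + 1) / (n + 2)) is
# just under 1 in 3,650,000. Since the true rate is known to be far
# lower still, any decision safe at this bound is safe a fortiori
# at the actual probability.

days_without_impact = 10_000 * 365   # ~10,000 years of no impacts

p_bound = 1 / (days_without_impact + 2)

print(days_without_impact)       # 3650000
print(p_bound < 1 / 3_650_000)   # True: safely below the stated bound
```

The point of the exercise: the decision never requires knowing the actual probability, only a provable upper bound on it.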
This is also how we eliminate fringe hypotheses like H_1897634. We don’t need to know their prior. We only need to know that it is lower than some value that removes it from our math because it washes under the margins of error we are already working with. Meanwhile, our being wrong about that is already included in the converse probability to the one we assign the hypothesis we conditionally decide to believe is warranted: e.g. if we choose H1 and say it is 95% likely, then there is a 5% chance some other hypothesis is true, and that could well be H_1897634; since we are already saying so, no contradiction is generated, and since our P(H1) is conditional on b, our P(H1) is deductively certain (potentially actually; but typically a fortiori), until the contents of b change, e.g. we discover something we overlooked about H_1897634. All knowledge statements (beyond the undeniables) actually reduce to “the P of h is x given all we know right now.” Not “the P of h is x” or even strictly speaking “x is true.”
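The “washing under the margin of error” point can be illustrated with toy numbers (all invented for the example): bolting a fringe hypothesis with a negligible prior onto the calculation moves the posterior of the hypothesis we care about by less than the margin of error we were already working with.

```python
# Toy illustration: adding a fringe hypothesis whose prior is far
# below our working error margin barely shifts the posterior of the
# serious hypothesis, so it can be logically disregarded until b changes.

def posterior(priors, likelihoods, target):
    """P(H_target | e) by Bayes' theorem over an exhaustive partition."""
    total = sum(p * l for p, l in zip(priors, likelihoods))
    return priors[target] * likelihoods[target] / total

# Two serious hypotheses, working to a +/-1% margin of error:
priors = [0.6, 0.4]
likes = [0.9, 0.3]
p_before = posterior(priors, likes, 0)

# Add fringe hypothesis "H_1897634" with prior 1e-6 (below the margin),
# renormalizing the others -- and even granting it a perfect likelihood:
eps = 1e-6
priors2 = [p * (1 - eps) for p in priors] + [eps]
likes2 = likes + [1.0]
p_after = posterior(priors2, likes2, 0)

print(abs(p_before - p_after) < 0.01)  # True: shift is below the margin
```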
Regarding your first paragraph in the above reply, you cannot say that the scope of the hypothesis space is epistemically irrelevant. To evaluate a hypothesis, you must compare its performance to that of some set of rivals. This is what Bayes’ theorem does, and the output of Bayes’ theorem will depend on the exact choice of that set.
You take b, the ‘background knowledge,’ to be something that appears fully formed from somewhere, and claim that if a hypothesis isn’t in b, then we don’t need to worry about it. But where does b come from? Concerning the part that determines the hypothesis space, it comes from your own imagination, and what seems to you reasonable. There is nothing deductively provable about its contents. The problem is not one of simply examining b to see what hypotheses are inside it (what would I use for that, a microscope?), but rather one of estimating what hypotheses we should consider.
You say that new hypotheses can enter b, once they are discovered. But how are such discoveries made? By applying methods that, again, at least approximate Bayes’ theorem. This process must involve some form of random search, tentatively trying out novel propositions, to see how they perform. The difference between some hypothesis being outside b at some point and then later inside b is simply that up to that point in time, you had never tried the thought experiment. To say that the contents of b are an undeniable fact is to misunderstand the process.
Respectfully,
Tom
“Regarding your first paragraph in the above reply, you cannot say that the scope of the hypothesis space is epistemically irrelevant. To evaluate a hypothesis, you must compare its performance to that of some set of rivals. This is what Bayes’ theorem does, and the output of Bayes’ theorem will depend on the exact choice of that set.” — Of course. But if in b there is no knowledge that set Y of hypotheses even cumulatively has a prior high enough to not be washed out by your margins of error, you can logically disregard them. That’s my point. Only when information enters b (by the iterative method, beginning as a novel e) indicating that one of those hypotheses should be assigned a non-negligible prior is there a logical warrant to consider it. Everything else is already subsumed under the probability you are wrong, which will be the probability you are wrong given b.
“Concerning the part that determines the hypothesis space, it comes from your own imagination, and what seems to you reasonable. There is nothing deductively provable about it’s contents.” — It also comes from logic, set theory, present information, processed information, etc., most of which is deductively provable. For example, that a certain experience-set is recollected as rare with regard to its converse is an undeniable fact of present experience. Which logically entails conclusions in probability (rare = improbable). Likewise the binary fact that either your recollection in that respect is accurate to within a chosen margin of error, or it is not; giving you a foundational h and ~h to compare with each other, and accumulate evidence to test, with respect to trust in reliability of memory to provide data.
“This process must involve some form of random search, tentatively trying out novel propositions, to see how they perform.” — Hardly any truth of the world has been discovered by a random search. There is a reason scientists and historians zoom in weirdly fast on valid hypotheses. It’s not because they are psychic or spirits are guiding them or they swallowed a Red Dwarf Luck Virus. It’s because they use information available to ascertain the most probable hypotheses right from the start and then begin testing them, and then revising with feedback. That this works is itself data. (Incidentally, data that makes “external world” hypotheses increasingly more probable than Cartesian Demons.) Of course, most of this was done automatically by the brain interacting with the world for its first five or so years (hence children begin with antirealism, then learn realism is more likely, e.g. by discovering object permanence).
Epistemology aside… I’d like to hear more about this Red Dwarf Luck Virus. Is it FDA approved? If not can I still get it on Amazon? If not, is it available through a reliable merchant you can recommend on the Dark Web?
If you can legitimately say that the integral over sub-space Y is negligible, then it means you have already evaluated these hypotheses. I.e. they are already part of your hypothesis space. What I’m talking about is the infinity of propositions that are not in the hypothesis space.
No hypothesis space can contain the proposition ‘the truth is something outside this hypothesis space.’ It is mathematically impossible. What we might do, as a crude method, is to define a hypothesis, H_x, ‘something other than all these other hypotheses,’ but then, strictly speaking, there is no way to assign a prior distribution, or to calculate all the likelihoods. The final cover-your-back proposition is too vague to permit mathematical analysis. We can stick in some approximate probability for H_x, using e.g. empirical frequencies with which the hypothesis space is found to be too narrow, but again, this would be an estimate, contingent upon some higher-level probability model.
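The “crude method” just described can be made concrete with a toy sketch. Everything here is a hypothetical illustration: the 5% frequency, the named hypotheses, and their priors are invented numbers, not an actual estimate from anywhere.

```python
# Sketch of reserving a catch-all H_x ("something other than all the
# named hypotheses"), with a prior set from a hypothetical empirical
# frequency with which spaces like this one have proven too narrow.

freq_space_too_narrow = 0.05            # made-up empirical estimate
named = {"H1": 0.7, "H2": 0.3}          # priors among named hypotheses

# Rescale the named priors so that, together with H_x, they sum to 1.
priors = {h: p * (1 - freq_space_too_narrow) for h, p in named.items()}
priors["H_x"] = freq_space_too_narrow

print(priors)
```

Note that this only relocates the problem, as the comment says: the 5% is itself an estimate made under some higher-level probability model, and no likelihoods can be computed for H_x, so it functions as a reserved error term rather than a testable hypothesis.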
As long as any part comes from non-deductive inference, then you cannot claim deductive status for the conclusion. Under my limited hypothesis space, H_1 might be genuinely very highly probable. But another might arbitrarily consider a different hypothesis space that happens to contain the true hypothesis (while mine does not). In this case, he may very well find my H_1 improbable. That H_1 is probable under my model would be a deductively true fact. But it is a fact about a conjunction of some proposition about the world AND my assumed probability model. I.e., a construction of my mind. The existence of other models giving contrary conclusions concerning H_1 is enough to demonstrate that the very highly probable status of H_1 is not a deductively provable fact about external reality.
I hope I’ve succeeded in making my point more clearly.
Unless you know any of those has a non-negligible prior, it has a negligible prior. Knowledge cannot be born from ignorance.
And we know this because we aren’t constantly surprised by all our hypotheses failing because we failed to assign a non-negligible prior to something we didn’t think of.
That only happened when we used lousy methods. Since the Scientific Revolution, we discovered that with good methods we are so good at detecting hypotheses with non-negligible priors that we rarely miss them. The only time we do miss them is when we know we are: i.e. when we don’t know what the correct explanation is for something.
It has strictly zero prior probability, because it is not in the hypothesis space. But that is just a matter of choice.
The reason that our hypotheses are not regularly failing is that our experiences are produced by a mechanism that exhibits symmetry (e.g. as yesterday, so today). What we can only make intelligent guesses at, however, is the nature and origin of that symmetry. A perfect example is the birth of quantum theory. For your statements about the logical undeniability of the hypothesis space to be valid, there would need to be a clearly defined threshold where the accumulated evidence suddenly makes the inclusion of the quantum-mechanical model in the hypothesis space appropriate. No such threshold can be deduced – any attempt would be analogous to the fabled 0.05 alpha level in null-hypothesis significance testing: entirely arbitrary.
Perhaps it would help you to try to think like a computer scientist. How would you actually build an algorithm to do probability analyses? Where would the content of your background knowledge, b come from? How would you convert raw experiences into numbers?
“It has strictly zero prior probability, because it is not in the hypothesis space. But that is just a matter of choice.” — Now I don’t know what you are talking about. Only the logically impossible can even possibly have a zero prior. Everything else has a nonzero prior. Most, a vanishingly small one. We can ignore most hypotheses not because their priors are zero, but because their priors, even in sum, are near enough to zero.
The rest of your comment is unintelligible. It does not sound like you actually understand what Einstein showed with regard to classical mechanics that defined quantum mechanics. Or that you are even aware of how a fortiori reasoning works.
I wonder if that could be used to counter a slippery slope argument. It seems like a similar argument, just in the other direction.
My use of “valid” was an error; I think it was probably the equivalent of saying “um…” in the middle of a sentence. Sorry about that.
I’ve read some/most of those references at various times but I haven’t committed them to memory and I am not entirely convinced. Your argument (that Bayesian probabilities help resolve mistakes where we think we know something but actually believe something incorrect) seems reasonable; it’s similar to what Cziko is putting forward in “Without Miracles” – a sort of “wrong knowledge eventually dies” thing. I like it. I imagine, though, someone like – let’s call him Dick Cheney – who’s incredibly wrong about things because he’s established a self-reinforcing body of “knowledge” that is untrue in certain respects. He’ll look at conditional probabilities of new observations in light of his past extremely wrong beliefs, and conclude that he “knows” something wrongly. You’re right that’d be another Cartesian demon and that it’s vastly less likely; your explanation is convincing. Thank you!
“He’ll look at conditional probabilities of new observations in light of his past extremely wrong beliefs, and conclude that he “knows” something wrongly.” — Well, to be accurate, that does not require adding a Cartesian Demon. In that scenario (which I think is actual reality), the Cartesian Demon is himself. Certainly correct conclusions cannot follow when you ignore data present to you. Cheney ignores logic and facts galore. So yes, he will be convinced of a shitload of stuff that’s false. That’s how delusion works. Philosophers, of course, are interested in what protects you from that fate. And they discovered the cure: empiricism, logic, and the self-examined life.
Very thought-provoking.
As a physicist, though, I have to say that I cannot think of a sense in which either photons or spacetime are “ontologically simple.” Actually, the serious quantum field theory folks will tell you that when spacetime is curved, particle number becomes a tricky concept, and we retreat from the idea that photons are objects. Instead, we have to talk about photon events in our particle detectors, and expectation values for “operators” associated to different regions of spacetime. We slide, it seems, from the mental image of a photon as a tiny BB with some numbers attached, to the notion of having experiences with a photon-ish character.
Of course, nothing is clear-cut in the philosophy of physics, and a great many physicists get by without caring about any of this stuff, but when you find physicists themselves saying things like, “the notion of ‘particle’ is highly nontrivial and problematic in this setting and is to be understood in a metaphorical sense” — well, something odd is going on! For my own part, I’m willing to say that we’re wrapping around to the idea of perception events as undeniable raw data or experiences, but many physicists would doubtless disagree. David Mermin writes,
Yes, it’s a fair note that photons and space-time are not the simplest things imaginable; just far simpler than a machine generating a coherent intelligent consciousness.