This is a request to all fans of Bayes’ Theorem out there: I’m looking for the best blogs and websites substantially devoted to discussing all things Bayesian.
Of course I know about Less Wrong, the brainchild of Eliezer Yudkowsky, which often discusses Bayesian reasoning and is a fabulous website for learning about human reasoning, cognitive biases and how to overcome them, and other related subjects (it should be regular reading for most people keen on those subjects). But I also just discovered the awesome blog Maximum Entropy by Tom Campbell-Ricketts (I found it when he asked me about the famous Laplace anecdote, “Sir, I have no need of that hypothesis,” which might be apocryphal, though I directed him to what evidence there is for it). This blog is a Bayesian paradise of great posts, often quite advanced (so not for beginners or mathphobes), but a fun resource for people getting into the groove of these kinds of things.
The Wikipedia article on Bayes’ Theorem has already become too advanced to recommend to beginners. The Stanford Encyclopedia of Philosophy entry isn’t any better that way, but at least it discusses the application of the theorem to philosophy (epistemology in particular) and has a more extensive bibliography. My own Bayesian Calculator page (which is continually in development) will perhaps be more helpful, with more plain English explanation and some actual calculators you can fiddle with to see what happens. And total beginners should start with my Skepticon video Bayes’ Theorem: Lust for Glory! (that blog article gives the links plus additional resources about the video). Lots of good links are also assembled at Alexander Kruel’s A Guide to Bayes’ Theorem.
But none of these are blogs or websites that regularly produce discussion and articles about Bayesian reasoning. And I’m looking for the best of the latter. I’m looking for more stuff like Less Wrong or Maximum Entropy. If there is any. It can be basic intro level stuff, or advanced, but it should be good reading either way, the kind of place a general Bayesian might want to visit monthly to see what’s going down. So if anyone reading this has recommendations, please plop them in the comments section!
[I should add that I think all Bayesians should also familiarize themselves with the lists of cognitive biases and logical fallacies at Wikipedia, to contemplate how these can model misuses of Bayes’ Theorem or be corrected or avoided by using Bayes’ Theorem. FallacyFiles also has a useful taxonomy of logical fallacies. But I’m also interested in lists or sites dedicated to common errors or fallacies in reasoning about probability specifically.]
Limited Comments Policy: Because this post is a resource request, only comments that supply relevant hyperlinks (or names of websites) will be posted. Everything else will be deleted. Comments on other subjects should be posted within an appropriate blog thread (see the topic index for my blog down the right side of this page).
http://deusdiapente.blogspot.com has a few Bayesian posts.
http://deusdiapente.blogspot.com/search?q=bayes
Thanks. Good call. Deus Dia Pente, by J. Quinton. Though that’s not mainly a Bayesian blog, it has a lot of great Bayesian content, mostly from low to low-moderate difficulty, so a great one for new folk to poke around in. Since he writes on so much else, Bayesians should link to and keep their eye only on that search list you provide. I shall call it: Bayes @ Deus Dia Pente.
Do you know about SpamBayes? See: http://spambayes.sourceforge.net/ It is a spam-filtering tool that uses Bayesian analysis of words in email to calculate the probability that it is spam. You might also want to check out the related (or perhaps seminal) article by Paul Graham, called A Plan for Spam: http://www.paulgraham.com/spam.html
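If anyone is curious how the trick works, here is a minimal sketch of the word-probability idea behind Graham’s essay (my own toy Python with made-up numbers, not SpamBayes’ actual code):

```python
from functools import reduce

# Hypothetical per-word probabilities P(spam | word), as if learned from a corpus
word_spam_prob = {"viagra": 0.98, "free": 0.85, "meeting": 0.10, "thesis": 0.05}

def spam_score(words):
    """Combine per-word spam probabilities, naively assuming the words are independent."""
    probs = [word_spam_prob[w] for w in words if w in word_spam_prob]
    if not probs:
        return 0.5  # no evidence either way
    p_spam = reduce(lambda a, b: a * b, probs)
    p_ham = reduce(lambda a, b: a * b, [1 - p for p in probs])
    return p_spam / (p_spam + p_ham)

print(round(spam_score(["free", "viagra"]), 3))     # ~0.996: very probably spam
print(round(spam_score(["meeting", "thesis"]), 3))  # ~0.006: very probably not
```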
As we are in the political season, I think it would be pretty cool to apply BT to political claims in the same way you apply it to historical claims in Proving History. Instead of endless arguing and the usual copious application of fallacies, we could put all the available evidence for/against political claims (including where particular similar policies have been implemented in other places, etc.) into the formula and stop running around in circles with bad argumentation. …no, I don’t know any site that does this; I was just fantasizing.
(1) I assumed all spam software was already Bayesian.
(2) I share your fantasy. 🙂
You can delete this, of course, but: It’s spelled Eliezer Yudkowsky.
(For those who don’t know what Joshua means, I had Eliezar instead of Eliezer, two common spellings of that name; now corrected. Thanks, Josh!)
I follow the blogs of two Bayesian statistics professors. Andrew Gelman’s blog is Statistical Modeling, Causal Inference and Social Science; Christian Robert’s blog is Xi’an’s Og.
I found out that you were looking for links to Bayesian stuff here.
Thanks!
Xi’an’s Og is too advanced for most, and mostly a “news of the field” site rather than a “learn about Bayes” site. Statistical Modeling, Causal Inference and Social Science is also too advanced for most, and though it at least occasionally comes close to talking about Bayesian epistemology, it’s mostly a tech blog for working scientists who want to explore research methods and discuss the coding of Bayesian software, or discuss the advanced mathematical particulars of the debate between Bayesians and Frequentists at the level of applied methodology (rather than underlying epistemology or semantics). It looks like it will certainly be of interest to people in its niche demographic, but not most laymen.
Richard, massive thanks for mentioning my blog.
Another resource you might like is Allen Downey’s Probably Overthinking It, which has a lot of excellent examples of Bayesian analysis and isn’t too technical. (Also has a recent series on secularization in America.)
One small point about my own series: I have put the easiest stuff in the earliest articles, and my genuine hope is that any determined beginner will be able to follow those, and gradually build up the understanding needed for the later stuff. (I’ve had some good feedback in this respect.) Mathphobes definitely won’t get much out of it, though.
Good to know. If you ever do a “round-up” post that guides newbies on which posts to read first and in what order (doesn’t have to cover all links, just the essentials that can be used to figure out anything else on your site), please post a link to that here, too.
P.S. Downey’s site is fascinating. I’d say, moderate to advanced, but his pages often mix both forms of discourse so people can get the point from the moderate-advanced discussion without having to understand the advanced parts. Which is a good feature. But his site is mostly stats (applied, and therefore interesting), and though he is a Bayesian and writes entries about that, he doesn’t subject-tag his posts so there is no way to zero in on them. That’s not a good feature. And there are too few posts in that category that I could locate. So, interesting, but doesn’t fit the bill.
For a non-technical historical perspective on and introduction to Bayes Theorem check out The Theory That Would Not Die, subtitled: “How Bayes’ Rule Cracked the Enigma Code, Hunted Down Russian Submarines, and Emerged Triumphant from Two Centuries of Controversy”.
I cite that in my own book Proving History.
Theory is really great for the history (indeed, eminently readable, and there is no equal), but almost useless for understanding the theorem or its use in practice.
For a basic introduction I suggest udacity.com Intro to Statistics (st101) Part 2, Units 8-11. Units 8 and 9 teach you the basics of probability, while unit 10 introduces you to Bayes’ rule and unit 11 challenges you to program it using Python.
All units make you solve lots of problems on your own. You can submit your solution to be verified.
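The programming exercise in unit 11 is essentially to turn the theorem into a function; something like this toy version (mine, not the course’s code) captures the idea:

```python
def bayes(prior, p_e_given_h, p_e_given_not_h):
    """Return P(h|e) given the prior and the two consequent probabilities."""
    numerator = prior * p_e_given_h
    return numerator / (numerator + (1 - prior) * p_e_given_not_h)

# Illustrative numbers only: 1% prevalence, a test that is 90% sensitive
# with a 5% false-positive rate.
print(round(bayes(0.01, 0.90, 0.05), 3))  # 0.154: a positive result is far from conclusive
```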
A similar introduction is available via Khan Academy. Especially the videos “Probability (part 7)” and “Probability (part 8)” introduce you to Bayes’ Theorem.
P.S. I really have to overhaul the list of links you linked to in your post. But the two links mentioned in this comment followed or accompanied by Visualizing Bayes’ theorem and Eliezer Yudkowsky’s An Intuitive (and Short) Explanation of Bayes’ Theorem should get you started.
Hi Richard,
I had a go at putting together a lecture on Bayes’ Theorem as part of a series I’m doing on science in general:
(The Bayes’ Theorem stuff starts after recap and discussion at around 08:51 if you don’t want to listen to the whole lot).
TBH (as with most projects) I see quite a few flaws in the presentations now I look back over them, but I guess that’s just how the learning process goes – I’ve been using Bayes’ Theorem since University, but I’ve only ever had to explain it to Undergrads before, not to non-specialists. I hope it’s clear. I don’t think any of my stuff is watertight to the satisfaction of a decent philosopher but it’s probably good enough for an interested amateur.
Cheers,
Col
Cambridge University “Professor of the Public Understanding of Risk” and Bayesian statistician David Spiegelhalter has a website and blog that might be of interest to you.
Oh, that is really great. Thanks! Although Bayes’ really only comes up occasionally there in any explicit way (Understanding Uncertainty). But even as a site primarily devoted to risk assessment and fallacies of risk assessment it’s a valuable blog to frequent. He does a good job of explaining in lay terms how risk assessment is done (see, for example, his discussion of the claim that eating meat kills you).
Heavily seconded! He gave a marvellous public lecture in Oxford last year on the understanding of risk and uncertainty, and was very good at showing how probability is really about how we update our knowledge and beliefs given the evidence.
http://andrewgelman.com
Thanks. See below.
Andrew Gelman is a professional statistician (of a decidedly Bayesian bent) who has a blog here: http://andrewgelman.com/
Gelman is a good example of someone who is both philosophically reflective about the foundations of statistics and probability and also very familiar with cutting edge applications of Bayesian techniques. The content on his blog usually presumes a pretty advanced knowledge of statistics.
Active blog: http://doingbayesiandataanalysis.blogspot.com/
Dormant blog, but what’s there is interesting/amusing: http://bayesianstats.com/
Thanks. Unfortunately, the first is too advanced and mostly tech; while the second would have been perfect if it were still active.
I have a website that discusses BT at times. If people want to focus specifically on those articles (mostly aimed at an introductory level) they can use this link:
http://foxholeatheism.com/tag/bayes/
This is not a blog/website but a free course that is periodically run by Coursera: Probabilistic Graphical Models https://www.coursera.org/course/pgm . I can vouch for the high quality of the teaching (I am currently taking the Machine Learning course). It is probably on the more advanced side but the FAQ suggests that it requires fairly little background.
As an aside there is another Coursera course which may be of interest to your readers: “Think Again: How to Reason and Argue”.
From Richard Martin:
Nicholas Covington has just started some Bayesian blogging at Hume’s Apprentice. Could become a good collection.
Just ran across a post on the Freakonomics blog singing the praises of Bayes:
http://www.freakonomics.com/2012/09/20/beware-the-weasel-word-statistical-in-statistical-significance/
And it reminded me of this post of yours. I did a quick search on their site, and Bayes doesn’t seem to come up very often, but I thought you might be interested nevertheless.
Cheers!
How about a Venn Pie Chart?
A pie chart with overlapping sectors does help in understanding Bayes’ theorem.
http://oracleaide.wordpress.com/2012/12/26/a-venn-pie/
That’s a good and useful link, but just one page. So not the sort of thing I was looking for. Nevertheless, I appreciate you providing it here. Some might find it of use.
I have tried using the same technique to teach Bayes’, but in practice students find it more confusing rather than less–until they “get it.” So it should always be used in conjunction with other ways of communicating the same ideas. I use the technique a few times in Proving History.
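For anyone who wants to tinker with the picture rather than just look at it, here’s a rough sketch of the same idea in Python/matplotlib (my own made-up numbers, not the oracleaide code):

```python
import matplotlib.pyplot as plt
from matplotlib.patches import Wedge

# Made-up probabilities: P(A) = 0.5, P(B) = 0.4, P(A and B) = 0.2
p_a, p_b, p_ab = 0.5, 0.4, 0.2

fig, ax = plt.subplots()
# A's sector spans 360 * P(A) degrees of the pie
ax.add_patch(Wedge((0, 0), 1.0, 0, 360 * p_a, color="tab:blue", alpha=0.4, label="A"))
# B's sector is placed so it overlaps A by 360 * P(A and B) degrees
b_start = 360 * (p_a - p_ab)
ax.add_patch(Wedge((0, 0), 1.0, b_start, b_start + 360 * p_b,
                   color="tab:orange", alpha=0.4, label="B"))
ax.set_xlim(-1.1, 1.1)
ax.set_ylim(-1.1, 1.1)
ax.set_aspect("equal")
ax.set_axis_off()
ax.legend()
plt.show()

# Conditioning on B means restricting attention to B's sector:
print("P(A|B) =", p_ab / p_b)  # the overlap as a fraction of B = 0.5
```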
Over the New Year holidays I built an interactive Venn Pie Chart using HTML5
http://dl.dropbox.com/u/133074120/venn_pie.html
It was a fun New Year project.
This request is a bit off topic but looking through the categories I couldn’t find a better thread.
Does anybody know of a Bayesian analysis of the Anthropogenic part of AGW?
I have struggled at length with the science and I accept there has been recent global warming and that CO2 inhibits the dissipation of heat to space at night; I just can’t clearly see that the differential in CO2 levels caused by man is a significant driver. The scientific arguments between believers and deniers/skeptics seem to go to the skeptics by a narrow margin.
The main reason I don’t fully endorse the general educated skeptic position is that they are significantly in the minority. We all know that choosing between competing scientific theories by a show of hands is not very satisfying.
Has anybody seen a Bayesian analysis that supports or refutes that CO2 produced by mankind is heating the planet? Could a Bayesian analysis clarify this?
I don’t understand that statement. How can you think that, after looking at what we know of the relative heating effects of CO2 (and, remember, methane) quantity and the actual documented quantity shift that human industry (and agriculture) has caused? Scientists have looked for all other possible sources and already accounted for them, too.
As to a Bayesian analysis, I wouldn’t be surprised if it’s already been done, but I haven’t looked around. Certainly if anyone knows any, do cite them here.
One group I am certain has done it is insurance companies (which have been run on Bayesian modeling for decades now). But their calculations are proprietary (and thus won’t be accessible to the public–they don’t want competitors to benefit from their work in calculating risk).
Someone is going to slap me down for the simplicity of this.
My understanding is that CO2 does slow infra red radiation from the earth and therefore the temperature is higher because of its presence. However it absorbs radiant heat in a very narrow band of frequencies (usually charted as wavelengths) and for those specific frequencies the atmosphere is nearly opaque already. So increasing quantities of CO2 will trap very little extra heat as most of the heat that can be trapped by CO2 is already being retained.
Water vapour traps heat across a much wider spectrum, and variations of water vapour concentrations make a relatively huge difference to the amount of heat radiated into space and hence difference in temperature. Deserts get cold at night because the local air has so little water vapour. Climate models are very weak regarding water vapour.
Plus there are other major warming or cooling drivers, albedo, heavy particle pollution, fluctuations in solar radiation etc. Compared to other drivers, man made CO2 is orders of magnitude less significant.
So tinkering with the man made contributions to atmospheric CO2 levels will cause minimal changes to global warming. It’s too little, too late and the economic costs fall too heavily on the poor.
I do not have a comprehensive understanding of the various climate models, nor do I have advanced degrees in atmospheric physics, meteorology or climatology. But I have read a lot of the arguments from the better proponents of both sides and can get my head around much of it.
My personal scientific conclusion is that we are barking up the wrong tree when we concentrate on CO2.
However this puts me in the same camp as people I generally don’t like (not a major problem but uncomfortable), and goes against the consensus view (potentially a larger problem but there are well credentialed proponents on both sides).
I am hoping that there might be a Bayesian analysis (or several) that will get me closer to the truth.
Sorry for double posting. I just feel that I did not address your reply specifically.
How can you think that, after looking at what we know of the relative heating effects of CO2 (and, remember, methane) quantity and the actual documented quantity shift that human industry (and agriculture) has caused?
I do accept that humans have increased CO2 concentrations by approx 50%, from about 0.025% to 0.039% (from memory). My understanding is that the difference in the heating effects at these two concentrations is small and further increases in CO2 levels will cause proportionally smaller increases in heating effects.
Scientists have looked for all other possible sources and already accounted for them, too.
This statement is not up to your usual very high standard. “All other?” Even the strongest qualified proponents of AGW admit major difficulties modelling water vapour. There are several strong hypotheses for alternative primary global warming drivers.
Read the NASA reports (cited earlier). Variances for uncertainty are not great enough to matter. You seem to be buying into the antiscientific dogma that a margin of error in exact degree of effect merits doubt of any effect. That’s a non sequitur. The fact that my prediction of when you will die has a margin of error does not warrant your concluding you are immortal.
What you are saying as to the greenhouse effect of CO2 has been directly refuted in laboratory experiments (and paleogeology confirms it: remember, earth has been here before–high CO2 and global temps way beyond what we will produce, a correlation that we can document in the geologic record). So I’m not sure where you are getting that from. Moreover, manmade greenhouse gases are not just CO2 but methane et al. It’s just that we generate so much CO2 that it is vastly out-pacing the effect of increases in manmade methane (and other pollutants), even though methane is massively more greenhousing than CO2. Our control of ozone destruction over the last forty years shows how easily human industry can fuck with global climate, and how effective doing something about it can be. CO2 is slower in effect, but is being dumped into the air in vastly greater quantities.
Read up on the NASA science here and here. Summary here.
As they show, without CO2, earth would be a frozen planet. We need global warming. Life has depended on it for billions of years. What we don’t need is too much of it. So the idea that CO2 doesn’t warm the planet is right up there with cavemen riding dinosaurs. The fact that clouds and vapor have a greater warming effect is irrelevant since those aren’t increasing. What’s causing the problem is the dial we are turning up: CO2. The same thing ended the ice age. So the question is what’s causing the increase now. All possible sources have been ruled out but human (as the main precipitator–which also happens to be the only one we can actually control).
But the “too late” conclusion may be true. Now it’s just mitigation (will we make it worse, or less bad). A couple of degrees difference can make a huge difference in global effect, so it is not futile. We can make a difference even now. But at the same time, excessive apocalypticism is equally silly. Life on earth has thrived with vastly higher temps and CO2 levels before. It’s just that life would really super suck for humans if we went back to any of those phases in earth’s history (for reasons already becoming evident, but illustrated more completely in such films as Soylent Green, which depict end results we might not reach, but which may have been reached at different points in earth’s past).
TRIPLE posting… OMFSM! (Oh my flying spaghetti monster)
I found a post on Stackexchange where the poster tried to apply Bayes theory to whether “humans have caused global warming.” He got a revised probability of 34%, but did not accept his own result. Probably with good cause.
Here’s the post: http://math.stackexchange.com/questions/223681/application-of-bayes-theorem
There were three different commentators with a bit of back and forward with the original poster.
It could be that the real problem is how to assign a reasonable value to the prior probability.
It could be that Bayes’ Theorem is not applicable to this problem.
It could be that Bayes’ Theorem has to be applied iteratively to each new paper published.
It could be none of the above and something completely different.
Sigh.
There is another Bayesian discussion at landshape.org
http://landshape.org/enm/david-evans-on-greenhouse-gas/
The post by David Stockwell begins, “David Evans, aka ‘rocket scientist’, shared his progression from a believer in anthropogenic global warming (AGW) to skeptic in response to new evidence. Bayesian updating is a way of modeling rational changes of mind. I want see if DE is a rational, thinking person.”
David Evans’ arguments are usually dismissed by AGW believers primarily using ad hominem attacks because he refers to himself as a rocket scientist though his degree is in engineering.
Stockwell’s analysis concludes that applying Bayes’ Theorem vindicates Evans’s transition from AGW believer to skeptic based on the unfolding scientific evidence.
I can’t ascertain if Stockwell is biased in his selection of the unfolding evidence; though I am familiar with every element of the evidence he used and think it is adequately documented.
Also, I don’t know if Stockwell is just defending a previously held skeptical position using Bayes as a form of rhetoric.
Unfortunately, the applications of Bayes’ Theorem in the cases I tracked down haven’t increased, or decreased, my 60-65% confidence that AGW, primarily from CO2, is mostly bogus.
The third is necessarily true (and therefore should not even be in dispute: BT has two terms, e and b, which must collectively subsume all knowledge available to you…you can’t leave things out).
The second is impossible (as I prove in Proving History, pp. 106-14, if BT doesn’t apply to an empirical question, no argument logically can).
The first is obviously part of what’s going on here. Why a prior for AGW of 0.05? That makes no sense.
A proper BT analysis would start from effective ignorance, and thus a prior of 0.5. You would then begin adjusting that as you start looking at evidence of what causes GW and the relative frequency of those causes in the past (thus introducing b, or background evidence: requires consulting the results of paleoclimatology), allowing for any new causes that didn’t exist in previous eras (such as human and astrophysical; these can be included by comparing them to previous causes in proximate effect, e.g. CO2 production vs. frequency of relevant astrophysical phenomena). Then you would look at coincident data (the fact that GW has started now, right when humans started increasing CO2 etc., and how likely it is that evidence of all the other causes could have been overlooked by now) and that would then generate consequent probabilities, and an updated result.
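To make the mechanics of that concrete (the likelihood ratios here are placeholders, not real climate data), sequential updating in odds form looks like this:

```python
def update(prior, likelihood_ratio):
    """One Bayesian update in odds form: posterior odds = prior odds * likelihood ratio."""
    odds = (prior / (1 - prior)) * likelihood_ratio
    return odds / (1 + odds)

# Start from effective ignorance, then feed in items of evidence one at a time.
# Each ratio stands for P(this evidence | AGW) / P(this evidence | ~AGW); the
# values below are invented purely to show how the arithmetic accumulates.
p = 0.5
for ratio in [3.0, 5.0, 2.0]:
    p = update(p, ratio)
    print(round(p, 3))   # 0.75, then 0.938, then 0.968
```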
Doing it properly, I see no likelihood of getting the result he does.
But he doesn’t only do a boner on the prior (arbitrarily and inexplicably starting absurdly low); he doesn’t make any sense in his assessment of consequents. He doesn’t evaluate any evidence for GW except “what scientists say.” That’s okay for a layman who doesn’t have time or concern to delve further, but it’s not okay if you are getting results against what scientists say–as that would require delving further to find out why all scientists are wrong and you are not (a circumstance that itself has a low prior, not the other way around).
But in any event, even on this method he doesn’t know what he’s doing. He defines one of the consequent probabilities as “Y = probability of humans causing global warming, given scientist evidence,” but that’s not Bayesian. In BT, the probability you want is the probability scientists say there is GW, given that there is GW; and then, in ratio to the probability that scientists say there is GW, given that there is no GW. Instead, he treats the evidence of what scientists say as if he is constructing a prior, not a ratio of consequents (in which case his prior should be 0.9, not 0.05, and he should be looking at what evidence is convincing him scientists are wrong). And if we treated it as a ratio of consequents, surely the probability that thousands of climate scientists are wrong is not 1 in 10. It is vastly lower (see * below).
Conclusion: since he doesn’t even understand the rudiments of how BT works, that post is just tinfoil hat.
[* in order for the probability that “1000 climate scientists are all wrong” to be 10% (0.10), the average rate of error for an individual climate scientist must be 99.77% (0.9977). Because for all of them to err, the probability is 0.9977^1000 = 0.1. You can calculate that in reverse: 0.1^0.001 = 0.9977. When you include AGW deniers among them the math gets much more complicated, but as long as the ratio is 1000:1 in expert opinion, I don’t think you’ll get a much different outcome. Are scientists that error prone? They are wrong over 99% of the time? I doubt it.]
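The arithmetic in that footnote is easy to verify, on its assumption of independent errors:

```python
# Solve x ** 1000 = 0.1 for x, the per-scientist error rate needed for
# 1000 independent experts to all be wrong with 10% probability.
x = 0.1 ** (1 / 1000)
print(round(x, 4))           # 0.9977
print(round(x ** 1000, 4))   # 0.1, recovering the joint probability
```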
I could not find the David Evans article to evaluate it. But it appears to operate on bogus evidence (the claim that some papers in 1999 and 2007 “refuted” AGW; I’m unaware of any such thing; sounds like creationists claiming recent papers in biology journals “refute” evolution, like they claim every year).
Thanks very much Richard for taking the time to comment in such detail.
I agree with you that assigning a prior probability for AGW of 0.05 seems ridiculously low. The blog that you couldn’t open assigned a prior of 0.95 (admittedly to the more specific question of AGW causing recent temp increases in the Arctic) which seemed way too high.
You also wrote, “but as long as the ratio is 1000:1 in expert opinion.” (I don’t know how to bring quotes down in the nice shaded box.)
This presents two problems:
1. How do you get the 1000:1 figure from phrases like “overwhelming consensus?”
2. How do we define “expert?”
Compared to the general population, members of a pentecostal mega church in Missouri could claim to be New Testament experts. But I would not for a moment assign a higher probability to their shared conclusions than yours.
Many recently graduated “climate scientists” have negligible math, physics or chemistry training, but have graduated with a climate science degree. I spoke to one of them who had come back from a 10 day research expedition to study the effect of climate change on the central Australian bilby (a marsupial rodent). They hadn’t been able to sight an actual bilby. But they had brought home several samples of scat which contained “four papers worth” of data. Four papers worth!! What a fabulous unit of measurement. And from a quarter of a shoe box of dried shit.
I have also spoken in a family social setting to a deputy-head of a state meteorology bureau. He said that he didn’t accept CO2 was the key issue, but swore me to secrecy because it would be detrimental to his position and prospects if this was generally known.
I will follow the links you provided to see what’s there. Thank you for them.
I wrote: It could be that Bayes’ Theorem has to be applied iteratively to each new paper published.
You replied: The third is necessarily true (and therefore should not even be in dispute: BT has two terms, e and b, which must collectively subsume all knowledge available to you…you can’t leave things out).
Which unfortunately leaves me with an impossibly lengthy task. I have a more than passing interest in AGW; but that’s all. It’s just an interest.
Thanks again for your lengthy replies. I don’t want to draw you away from your main work to become my tutor on AGW.
I didn’t. I was just guessing at a ratio by way of example. If someone wanted to do a proper equation, they’d have to check the data and get something more concrete. For example, checking now, the actual ratio appears to be 40:1 (cf. consensus). That linked paper defines what a relevant expert is.
Yes, level of detail can be modulated to risk. Lower risk, less effort is needed to check. Risk includes financial and other losses and costs (including loss of time), not just death and injury. This should be a standard rule in skepticism. We don’t have to check further if the cost of being wrong is lower than the cost of continuing to check.
In this case, when 97.5% of qualified experts say AGW is a thing, you should simply agree AGW is a thing, unless you deem the cost of being wrong to be higher, then you check further, until your effort is sufficient to assuage your concern about risk (whatever that concern may be). (Of course, risk management isn’t the only motivator; you can check further just because it’s fun or you find it interesting or just want to know, etc., in which case the cost in effort to apply to investigating equals whatever you deem the result to be worth to you.)
In that process, you don’t have to do a scientific-level Bayesian analysis, revising with every publication, etc. It would be nice if someone else did, whose work you could then consult (and I assumed that was what you were originally asking for; it still does not appear that anyone has done that–at least, as I said at the start, I don’t know of any). But lacking that, you need only do a “status quaestionis” analysis at whatever level of focus you want to bother with.
The simplest prima facie test is to take the scientific consensus itself as the prior, and thus 0.975 is not “way too high” when you are entering the question at this stage (post-consensus). You can only start with 0.5 if you actually intend to run the analysis on present data to update it. I won’t bother you with the math, but when you do that, and enter your first datum as the 0.975 rate of agreement, you end up with an updated prior of 0.975 anyway. Unless you have actual data that shows that those in the relevant reference class (the experts being counted) are (not might be, but are) more prone to type I errors than type II errors. Then you would adjust accordingly.
But I’m not aware of any such evidence. That leaves us with an updated prior of 0.975. If you want to check further, you need to examine whatever evidence there is that 97.5% of actual experts are wrong, so that you can get the probability that they would affirm AGW when ~AGW, and weigh it against the probability that they would affirm AGW when AGW. The latter is harder to guess at, so an odds form would be better, where you just estimate the ratio of the two consequents, i.e. how much more likely is it that they would affirm AGW when ~AGW than that they would affirm AGW when AGW…and based on what? (See Proving History, pp. 284-85 and index, “Bayes’s Theorem, odds form”)
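For readers without Proving History to hand, the odds form referred to is just: P(h|e.b) / P(~h|e.b) = [P(h|b) / P(~h|b)] × [P(e|h.b) / P(e|~h.b)], i.e. the posterior odds equal the prior odds times the ratio of the consequents, so you only have to estimate how many times more (or less) likely the evidence is on h than on ~h, not the absolute value of either consequent.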
In short, you need some reason to distrust climate scientists as a whole. And it has to be a very good reason. That’s why AGW deniers, to avoid cognitive dissonance, have to convince themselves a vast international conspiracy is afoot. They just aren’t smart enough to realize what that does to their priors. Their motivation thus tends to be desire rather than evidence. And that’s a big no-no for an honest skeptic.
Latest updates: see Bayesian Atheism and Bayesian Atheism Even Lowder
Richard, thank you for your comprehensive answers.
I suspect that you and I might be the only ones currently reading this thread because many people feel very strongly about AGW yet nobody else has pitched in.
Also AGW could well be considered off topic from the blog’s main themes of very early Christianity and Atheism.
I am becoming more concerned about my own confirmation biases. Why the hell do I lap this
http://judithcurry.com/2012/01/02/on-the-dangerous-naivete-of-uncritical-acceptance-of-scientific-consensus/ up instead of rejecting it as rhetorical nonsense?
I feel I could be like a new Earth creationist who continually finds “scientific proof” supporting his ideas. Yet I do find enough of the science understandable that the appeal to accept the consensus view doesn’t sway me. Shit!
Thanks again for your responses.
Of course questioning consensus is always worthwhile. It’s just rejecting consensus based on wildly implausible conspiracy theories, stock fallacies, and ginned up evidence that’s no longer rational behavior. You have to have a good reason to reject an expert consensus. Not a bad one. Thinking in terms of probabilities (and yes, correcting for verification bias, but also fallacies of all sorts) is part of the cure.
I was ready to let this drop, but I don’t agree that questioning consensus is always worthwhile. It’s rarely worthwhile.
Take Einstein’s Special Relativity as an example. I find it quite difficult to even understand comprehensively. I am very comfortable relying on the consensus view of very good physicists. I studied physics honours at an academically strong university, but was in the middle of the pack of that elite group.
Similarly that the Earth is 4.54 billion years old (margin of error 1%). I can’t see any value in questioning that consensus. First I don’t know enough about how that conclusion was reached. Second I don’t really care if subsequent scientific discoveries move that consensus view to 6 billion or 3 billion years.
You then wrote: It’s just rejecting consensus based on wildly implausible conspiracy theories, stock fallacies, and ginned up evidence that’s no longer rational behavior.
Unarguably true. But are you implying that all AGW skeptics are doing this? It seems that you are.
AGW true believers usually hold to their position with a semi-religious fervour. They are quick to imply all deniers suffer from character defects or are irrational. Most of them have an extremely weak understanding of the science. The scientific arguments go right over their heads. The same is true in reverse for most AGW skeptics.
The wildly implausible conspiracies seem either irrelevant or to balance each other out.
I am strongly influenced by the fact that a very large majority of scientists publicly support AGW. But a review of the best scientific arguments from both sides leaves me adopting a skeptical position on the balance of probabilities. The fact that a guy like Nathan Myhrvold is also not fully convinced by conventional AGW theory leads me to consider that my position is reasonable. Not necessarily correct; but reasonable.
What I cannot comprehend is why most people, including you, are so certain that their position is correct.
It seems it is extremely difficult to get clarification on AGW using Bayes’ Theorem because of the complications in assigning values to the three crucial numbers. You correctly argue that Bayes is extremely useful for the study of history because it focuses the argument on the values assigned to just three numbers. But with AGW all the usual points of contention are just transferred to the assignment of these values prior to a Bayesian calculation.
True, if by “questioning consensus” you mean only “doubting the consensus” rather than “wanting to know on what the consensus is based or whether it has been arrived at by a process you can trust” (i.e. a process with a very low rate of error over time) before trusting a consensus.
I was thinking of the latter, i.e. not just gullibly accepting consensus but making sure you should accept it. Looking at the error rates of the group in question on questions of a relevantly similar kind is one way to do that without having to become an expert yourself. Another way is to look at what the consensus is based on and whether the conclusion non-fallaciously follows from the premises (and then if it does, you can ask the same of the case made for each premise, and so on down the ladder, until you reach the point where, as I noted above, the cost of being wrong is less than the cost of finding out). I discuss exactly this notion in Proving History, pp. 19-20.
In the examples you cite, you can explore both aspects (the prior likelihood that the entire physics/geophysics community would be wrong about a question like that; and read up on what evidence they based it on, why, and how they verified that evidence) and even go on to examine critiques of the consensus and compare their evidence and logic with that of the expert consensus (e.g. as when a creationist makes some claim about polonium spheres against the age of the earth and you look into it and discover that they are misrepresenting the science and relying on obvious logical fallacies, whereas geophysicists do not appear to be doing either).
The end result will be: you will understand the science better and the reasons for the consensus better (than you did before) and you will have more experience testing and exposing the tactics of the other side. Indeed, you will have added one more datum to the list of examples of which side of that debate can be trusted more often over the other.
So, I see a lot of that as worthwhile. Although as I said, it comes down to a cost-benefit analysis.
As to the question of “certainty,” I am talking about very high probabilities. The reason I assign one here is that I have never seen an AGW-denying argument that wasn’t demonstrably fallacious or misrepresenting the facts (whereas the number of climate specialists supporting it is huge). Now, obviously, being a good Bayesian, new evidence can adjust my probabilities. So if you know of any AGW-denying argument that you think does neither, link to it here. Otherwise, your objection to my certainty (which really just means my high posterior) is not Bayesian-valid.
The first time I started to doubt the Greenhouse Effect of increasing CO2 concentrations was when I ran across this simple fact way back in the mid 1990s: There is a logarithmic decrease in heat absorption with increased CO2 concentrations.
This is not my original source, but it does cite peer reviewed material:
http://joannenova.com.au/2010/02/4-carbon-dioxide-is-already-absorbing-almost-all-it-can/
I just found this website today. I think you’ll like it. This page is good to start on; each topic expands to a full post with peer reviewed sources.
http://joannenova.com.au/tag/evidence-agw-disproved/
The page above has lots of evidence that could be new to you and isn’t demonstrably fallacious or misrepresenting the facts.
There are bucket loads of strong science at http://wattsupwiththat.com/ Anthony Watts is a very fair moderator, he does not delete posts just because they present strong counter arguments to his positions.
You might be a fan of Nate Silver. He achieved national prominence recently setting odds for Obama’s victory. He might well be a fan of yours too. He should be.
Nate Silver has begun turning his attention to climate science and global climate change. He dedicates a chapter to climate science and its predictions in his book “The Signal and the Noise.”
An apparently balanced view of Nate Silver by Jason Kemp referencing Bayes
http://www.dialogcrm.com/blog/2013/01/13/nate-silver-on-climate-change-skepticism/
Michael Mann’s refutation of Nate Silver
http://www.huffingtonpost.com/michael-e-mann/nate-silver-climate-change_b_1909482.html
Dana Nuccitelli refutes Nate Silver
http://thinkprogress.org/climate/2012/10/08/970541/nate-silvers-climate-chapter-and-what-we-can-learn-from-it/
Since 99% (yeah, I made up that figure) of people who accept AGW have a minimal understanding of the hard science, they can only be holding that view logically due to an acceptance of the “overwhelming scientific consensus.” A Google search of “AGW consensus” produces about 282,000 results. The figures are all over the place. One site claimed 13+k peer reviewed papers in favour versus 24 against – 99.82%.
I found this discussion on consensus valuable:
http://wattsupwiththat.com/2012/04/30/consensus-argument-proves-climate-science-is-political/
But, as I stated at the outset, my aim was to use Bayes to clarify my thinking on AGW; not to attempt to change yours. So thanks again, Richard, for your detailed and helpful responses.
Since 97.5% of people who have exactly the relevant expertise accept AGW (and unlike you I did not make up that figure), that observation is a fallacy called a red herring.
So you can see where we’re going with this.
It is ridiculous to think, for example, that climate science experts are not taking into account the actual warming curve of CO2. Why do you think they are doing that? Can you cite any paper by any AGW scientist that makes that mistake? If not, then why bring it up? And why were you fooled into thinking it was relevant? Are you that easily tricked by hand waving fallacies? Seriously, think about that.
The links you produce are full of fallacious arguments. Yet you claim they don’t contain any.
They also contain factual errors. Indeed, in some cases egregious ones, e.g. your last link claims that “0.6°C [with] an error of ±0.2°C…is scientifically meaningless,” yet even a fourth grader can do the math on that and discover that that’s not even remotely meaningless but a confirmed net positive; it is also false: the IPCC report preceding his article by five years concluded the rise was 0.74 °C and shows why the old estimate of 0.6°C was incorrect–so evidently this guy expects you not to check the fact that he is using an outdated report over ten years old. His claim, likewise, that “the earth is not warming any more” is creation science 101, the same scientifically illiterate argument as “increasing snow storms refutes global warming”; I’ll let you check the merit of this claim yourself, since it’s more another fallacy than a factual error, per se. Moreover, this statement is expressing a doubt that GW exists, not a doubt that humans are causing it. So you are way out in crank land citing this as a sensible critique of AGW. This is like citing a creationist denying dinosaurs even existed as a reputable argument against guided evolution.
Look closely at all your own cited resources, and you have yet to present any actual evidence that roughly 98% of published climate experts are wrong.
One should look at one’s own process and see if it is even in principle a reliable way to form beliefs. Your process is not. I think you would do well to look at the articles you link to from the mindset of doubt that they are reasoning correctly, and then look for where their fallacies are. Likewise, any fact claims that seem impossible for a thousand climate scientists to miss or overlook: you should be deeply suspicious of those claims and check into whether they are correct, or even in fact relevant, or how in fact actual climate scientists addressed them. I think that’s the only way you will eventually come to see the fallacies in these articles.
Then you’ll be on the road to where I am. Every ex-creationist had to follow the same road.
I’d never heard of Jo Nova until a few days ago.
She has written a free downloadable 16 page booklet The Skeptic’s Handbook available here: http://jonova.s3.amazonaws.com/sh1/the_skeptics_handbook_2-3_lq.pdf You’ll probably simultaneously love and hate it.
It hits most of the points that lead me to question AGW. She either cites her sources in the booklet or has them at her website http://joannenova.com.au
If I were to do a Bayesian analysis of new evidence for CO2 driving global warming now, I’d start with a prior probability of 0.15. Well down from my approximation of 0.35 to 0.4 a couple of weeks ago.
I’ll truncate Kurt Vonnegut, “Read it and weep!”
The full quote is: “History! Read it and weep!”
I am amused by the hubris of your final line, “Then you’ll be on the road to where I am. Every ex-creationist had to follow the same road.”
First, I genuinely appreciate the time you have spent helping me use Bayes to clarify my thinking on CAGW (catastrophic anthropogenic global warming). My analysis at the beginning of this led me to feel roughly 60% confident it was an incorrect theory. I would now assign a prior probability of 95% to CAGW being a hoax.
Second, nothing you have written has diminished my respect for you as an eminent, credible historian and as a leading atheist philosopher, thinker and speaker. I often recommend your work to my friends and leaders in positions of influence in Australia. I will continue to do so; and hopefully you will soon be receiving all-expenses-paid invitations to conferences in Australia.
However (you knew it was coming, didn’t you), Joanne Nova, David Evans and I were all CAGW true believers and are now CAGW “atheists.” So let me suggest that, if you start to do more independent analysis of the science behind the current consensus, you’ll be on the road to where I am.
What element of CAGW would need to be conclusively proven scientifically wrong, and proven to be the product of a conspiracy between scientists actively trying to deceive their peers, the governments and the general public, for you to regard the whole edifice of CAGW as extremely shaky? What would be the AGW equivalent of the Resurrection of JC to Christians?
Let me suggest Michael Mann’s famous Hockey Stick graph (co-authors are Bradley and Hughes). It shows relatively stable temperatures and CO2 levels for the last 1,000 years with an uptick starting at the industrial revolution. It fits the bill nicely. It was the centrepiece of Al Gore’s “An Inconvenient Truth” and was included in at least the last two IPCC reports. Most AGW true believers know of it and regard it as a foundational truth. It is one of the main pillars of the AGW myth. (Yeah, mixed metaphors: foundation – pillar.)
If the Hockey Stick graph was proven to be deceitful bullshit, would you begin to doubt AGW?
I’m not suggesting that you merely Google “Hockey Stick Graph.”
I’m not suggesting you research Steve McIntyre. He is a retired statistician from Canada. In a series of scientific papers and later on his blog, Climate Audit, McIntyre took issue with the novel statistical procedures used by the hockey stick’s authors. He writes here: http://climateaudit.org/
I’m not suggesting that you read a comprehensive but mathematically simplified version of the whole Mann v McIntyre saga at http://bishophill.squarespace.com/blog/2008/8/11/caspar-and-the-jesus-paper.html
You might enlighten yourself if you do all or any of those things.
I am suggesting a bet. Let’s make a one dollar bet (more if you are feeling horny) that the next IPCC report, due in Sept this year, does not include that graph as part of its support for CAGW.
When it comes to AGW I think you are like William Lane Craig regarding Christianity. Quite well, though inadequately, informed; able to argue your faithfully believed position well; but not open to persuasion.
I am not requesting you to comment further. Just take or decline the bet; and name your preferred stake…
I’m not sure you know what hubris means. Or else you are exhibiting it when using it, which would be ironic.
Why? Are 95% of all scientific conclusions hoaxes?
In other words, what is your reference class, by which you find nineteen times more hoaxes than legitimate results?
Is it that you are now distinguishing AGW (the actual scientific consensus) from CAGW (which is not a scientific consensus) and thus worried about the claim “catastrophic” rather than the more measured claim “problematic”? Then why would a “hoax” be a hypothesis, when no hoax is needed for a subset of unreasonable people to reach apocalyptic conclusions from actual AGW?
I just don’t get what you are trying to say here.
Except that I demonstrated their misrepresentations of the facts and evidence. You have not reciprocated.
That establishes which one of us is anchoring their beliefs in reality and an honest evaluation of source reliability.
Let me suggest you actually make an effort to understand what you are talking about before repeating distortionist propaganda.
You instead only link me to conspiracy theorists who, like Fox News, just echo what you want to hear, rather than actually objectively looking at the facts of the case and exposing which side of these debates is actually misrepresenting the evidence.
With the “hockey stick graph” you are acting like a Creationist attacking the writings of Darwin and concluding there is no evidence for evolution.
The more so as I already warned you against citing outdated papers and instead referring to the latest science.
Please don’t make me have to tell you that again.
Don’t you agree you should not be making this same error any more?
PS I notice that you did not take the bet I offered. We would have a definite answer in Sept this year.
Recent peer reviewed paper on overwhelming consensus.
Science or Science Fiction? Professionals’ Discursive Construction of Climate Change
Lianne M. Lefsrud, University of Alberta, Canada
http://oss.sagepub.com/content/33/11/1477.full
Looks like a Bayesian prior probability on the magnitude of the “consensus” should be very approximately 50%.
Continuing to cite crank science as if that didn’t confirm exactly every point I’ve been making here is not helping your case.
For more advanced interests in Bayesian blogging, the following suggestion was also made: the thinkbayes site on programming Bayesian statistical analyses in practical terms.
I have put together a fun series of videos on YouTube entitled “Bayes’ Theorem for Everyone”. They are non-mathematical and easy to understand. And I explain why Bayes’ Theorem is important in almost every field. Bayes’ Theorem sets the limit for how much we can learn from observations, and how confident we should be about our opinions. And it points out the quickest way to the answers, identifying irrationality and sloppy thinking along the way. All this with mathematical precision and foundation. Please check out the first one: Bayes’ Theorem 101.
Nice series Nat. Thanks. I watched all 6 videos. I was expecting a bit more math, but still found quite a lot of valuable info. A very good series to recommend to math-phobic friends.
Another suggestion: The Bayesian Biologist.
Better late than never – I’ve released a page of links to material on Maximum Entropy that takes the novice through all the basics of probability, up to a reasonably advanced level (if they are interested), in what I hope is a logical order. Link is here. It is reasonably complete, though a work in progress. One thing I feel is missing is a set of simple examples of basic forward problems. Any other comments welcomed – I’m not immune to good suggestions! (Comments are open, so consider double-posting there – that way I definitely won’t miss it.)
As well as serving interested novices, it’s my hope that some who are more mathematically developed may find it a useful reference.
The links draw heavily on a glossary that I’ve also just unveiled, which lays out concisely many of the key elements of Bayesian philosophy (also a work in progress). Both resources have been announced, here.
That’s pretty cool. Thanks.
Not helpful for learning Bayes, but still an interesting curiosity: Google ngram viewer on use of the words “Bayes” and “Bayesian” in the English language: http://books.google.com/ngrams/graph?content=Bayes%2BBayesian&year_start=1700&year_end=2008&corpus=15&smoothing=3&share=
Dr. Carrier:
I believe I have come across a material error in the Employing Bayes’ Theorem section of your beginners’ tutorial found here: http://www.richardcarrier.info/CarrierDec08.pdf
It is concerning the instructional sample problem of the library in Jerusalem, Section 10.
In this example, P(e|h.b) appears to be incorrect. It should be assigned the probability of 0.6, not 1.
If you use the Odds Method formulation of the theorem, you will see that you take the prior ratio (54:46 for) and multiply by the Bayes’ factor (60:40 for). This gets you a net result of 3240:1840 for. Translating that ratio back into a probability, the answer is 0.638.
If you plug P(e|h.b) = 0.6 into your own Bayesian calculator, you will receive the same result. This is because both formulations of Bayes’ Theorem are mathematically equivalent.
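Whatever the merits of the proposed correction (the commenter withdraws it below), the arithmetic of the two formulations agreeing is easy to check:

```python
def posterior(prior, p_e_h, p_e_nh):
    """Standard form of Bayes' Theorem."""
    return prior * p_e_h / (prior * p_e_h + (1 - prior) * p_e_nh)

# Standard form with the commenter's values
print(round(posterior(0.54, 0.6, 0.4), 3))   # 0.638

# Odds form: prior odds 54:46 times a Bayes factor of 60:40
odds = (54 / 46) * (60 / 40)
print(round(odds / (1 + odds), 3))           # 0.638, the same result
```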
If you would like an intuitive thought experiment to prove why P(e|h.b) = 0.6, rather than the currently assigned value of 1, consider the following:
What are the consequent probabilities P(e|h.b) and P(e|~h.b) for a piece of evidence completely unrelated to the library in Jerusalem? Take for example the evidence that I am writing to you here now. The fact that this post exists has no bearing whatsoever on the existence of a library in Jerusalem, either for or against.
The consequent probabilities will be 0.5 for both. That means equally likely for both cases, i.e. totally meaningless. The Bayes factor here would be 1:1 – again totally meaningless.
If you plug 0.5 for both into your Bayesian calculator, you will see that the final probability does not change. It remains 0.54. This is because the consequent probabilities cancel each other out. This is also intuitive. A meaningless piece of evidence should have no effect whatsoever on the prior probability.
Now put P(e|h.b) = 1 and P(e|~h.b) = 0.5 into your Bayesian calculator. You will see that you have now increased the final result to 0.701. But this makes no sense because meaningless evidence cannot increase our confidence from the prior probability. If Bayes’ Theorem worked in this way, you could confirm any hypothesis merely by stringing together multiple pieces of meaningless evidence. Thus you can see this is a mathematical error.
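A quick check of both cases just described (the same arithmetic as above, repeated here so it stands alone):

```python
prior = 0.54
for p_e_h, p_e_nh in [(0.5, 0.5), (1.0, 0.5)]:
    post = prior * p_e_h / (prior * p_e_h + (1 - prior) * p_e_nh)
    print(round(post, 3))   # 0.54 with the 0.5/0.5 consequents; 0.701 with 1.0/0.5
```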
May I also suggest that you consider removing the 90% chance of having the evidence in the first place from the problem, to avoid confusion. The calculation above (and, I believe from your side note in that section, the simpler one you meant to do) assumes a 100% chance of having the evidence in the first place.
To solve for the 90% chance, we must discount the Bayes’ factor by the appropriate amount. An easy way to think of this is that the Bayes’ factor will be 60:40 for 9 times, and then the 10th time it will be 50:50 (again, meaningless because the evidence is not real in that case). So averaging it out and simplifying the ratio down, we have a weighted Bayes’ factor of 1.45:1. This again makes intuitive sense since 1.45:1 is less than the unweighted Bayes’ factor above of 1.5:1. This reflects that a 90% chance of having evidence confirms our prior probability less than a 100% chance of having such evidence would.
The solution, including the 90% chance of evidence part, is then 54:46 for multiplied by 1.45:1 for. Simplifying again, this comes out to be 1.7:1 for. Converting back to a probability, the answer is now 0.63.
This is, as expected, slightly less than the answer of 0.638 if we had 100% confidence in the existence of the evidence.
I hope this explanation will help you refine your beginners’ tutorial on Bayes’ theorem.
Thank you for your time.
Err…scratch the above. I found my error rather quickly. I am sure it is obvious to most. I should have checked my work a bit more carefully before sending that.
Please delete sir. Thank you for providing the tutorial!