Thursday, December 20, 2012

Of Atheists and Divisions


Dan Merica of CNN posted an article recently about how Christmastime brings to light divisions within the atheist movement over how to deal with religion. Hemant Mehta beat me to this article and has already responded, and I agree with much of what he says in his post. But the CNN piece came on the heels of another article I ran across, from Psychology Today, about discrimination against atheists. I feel that the topics raised in these articles are tied together, and I have found it difficult to agree with anyone, atheist or theist, who has chosen to speak on atheism as it relates to yuletide religiosity this year.
When I first read Dan Merica's post, I was slightly amused. Though he doesn't use this terminology, what he is really writing about is the ever-famous confrontation vs accommodation debate that has been going on in the atheist community for years. Merica's post cites the words of two atheist activists, David Silverman and Greg Epstein. These two are known for their seemingly diametrically opposed views on how religion should be addressed. Silverman has a very forward, in-your-face style of criticizing religion, especially religion in the public square. Epstein, Harvard's Humanist chaplain, believes that religion itself doesn't need to be opposed, as believers and non-believers can peacefully work together. I have my sympathies and concerns about both of these views. My real criticism lies with the confrontation vs accommodation debate itself. I do not deny that the debate exists; I understand that one can be judged very harshly for choosing one side or the other in different spheres of the atheist community. As with most dichotomies, however, confrontation vs accommodation is a false one.

I could easily write an entire post about this single point, but I will try to sum up my objections in the context of the "war on Christmas" issue. David Silverman and American Atheists make some good points in their rhetoric. Religion has long dominated the national conversation and has taken for granted its ability to trample over unbelievers. This is a problem that should certainly be addressed. The problem is that going on Fox News and arguing with Bill O'Reilly about nativity scenes doesn't do much to solve it. Putting up obnoxious billboards doesn't help either; it makes us look like jerks. Silverman has argued that putting up signs helps nonbelievers realize that they're not alone and that it's ok for them to come out and be proud. But the confrontational, poorly thought-out vitriol we see from American Atheists is not necessary for this aim. The Center for Inquiry put up billboards that simply said "you don't need God...", speaking directly to the nonbeliever and directing them to a website. While these billboards still drew controversy, it was clear that their intention was to show compassion for atheists rather than to give the finger to the religious.

All of that being said, I am equally disillusioned with the accommodationist "why can't we all get along?" camp. Greg Epstein mentions in the article some charity work that his group did with religious groups for the holidays. Such collaboration is all well and good, and I have, with my own SSA group, worked with religious groups on events. The problem with pointing to the ability of the nonreligious and the religious to get along is that it misses the point entirely. Men can work with women, white people can work with black people, etc. This does not mean that one side of each of those pairings is not underprivileged. I brought up the Psychology Today article about atheist discrimination for exactly this reason. We live in a society where it is perfectly acceptable (or at least, goes largely unchallenged) for Mike Huckabee to twice blame the godless in our country for the horrendous tragedy in Connecticut. When someone blames homosexuals for disasters, sensible citizens shake their heads and condemn the accuser as discriminatory. How often do unfair blanket accusations against atheists elicit the same reaction? The US is far from the worst country in which to disbelieve, but it is certainly no paradise for us. Ignoring this fact and devoting our time to accommodation and coexistence doesn't make the situation better.

Allow me to speak in blunt terms: the "war on Christmas" is hogwash. It's a made-up conflict hyped by the likes of Fox News and helped along by atheists who feel the need to argue directly with the clowns on that network. We should not be using it as a jumping-off point for dredging up the old confrontation vs accommodation argument. We really need to move beyond that debate; as I see it, both sides are wrong. We should all sit back, eat a holiday cookie, and decide what actions we're going to take to really strengthen the community of atheists and secularists in the coming year.

(Originally posted on The Humble Empiricist)

Tuesday, December 18, 2012

Pat Michaels Misleads his Readers on IPCC Draft

As a skeptic, one of the goals most important to me is making sure that pseudoscience and scientific falsehoods are adequately answered and shut down. Evolution and cosmogony are the battlegrounds best known in the atheist/free-thought community, because of the very strong creationist push against them, the very strong push to legislate intelligent design into our schools. For me, even more important, and, I lament, less often discussed, is accurate science on climate change.

Forbes has taken it upon itself to do just the opposite of educating its readers. The outlet published a "story" (written by Pat Michaels) regarding the leaked IPCC AR5 second draft, a story that started at WattsUpWithThat? (I will not link to that website), claiming that Figure 1.4 of the draft (below) shows that the IPCC's models have overestimated the warming that we have actually observed.

(Figure 1.4 from IPCC AR5, second draft)

The different colored bars are the IPCC's First Assessment Report (FAR, 1990), Second Assessment Report (SAR, 1995), Third Assessment Report (TAR, 2001), and Fourth Assessment Report (AR4, 2007).  Each report used climate models of different complexity (generally increasing in time, as computing power is wont to do) and a given input scenario to predict temperature increase based on the current physical understanding of the Earth's climate.  AR4, at least, 'hindcasted' the first decade or so using real greenhouse gas/solar/volcanic data (etc.), and the true 'projection' starts in 2000.

In this graph, each projection was shifted to have the same value in 1990. We can see that the observations since 1990 (more specifically, since the mid-2000s) appear to have fallen below the projections from each scenario collection. Why?

The uncritical, like Pat Michaels, say that it's because the models are wrong.  Michaels also thinks that the IPCC will remove this Figure, because apparently there was some change made during AR4 to a draft report Figure that he didn't like.

What happened in the drafting of AR4, we probably won't know. Michaels probably won't either, since he didn't appear to do much research into the matter, and that pattern continues into AR5. We have already been shown, for instance, that the models do match the observations (such as here, here, and here). We already know, for instance, that ocean heat content (a much BIGGER number than atmospheric heat content) continues to rise unabated.

(Ocean heat content data from Levitus et al. (2012))

So, that should leave us wondering, what is up with Figure 1.4 from AR5?  Well, maybe we should actually look at the Figure in context.  The leaked draft is available online and to maintain at least a bit of dignity I won't link to it, but if you want to find it you probably can.  For starters, the caption reads:
"Figure 1.4: [PLACEHOLDER FOR FINAL DRAFT: Observational datasets will be updated as soon as they become  available] Estimated changes in the observed globally and annually averaged surface temperature (in °C) since 1990 compared with the range of projections from the previous IPCC assessments. Values are aligned to match the average observed value at 1990. Observed global annual temperature change, relative to 1961–1990, is shown as black squares  (NASA (updated from Hansen et al., 2010; data available at; NOAA (updated from Smith et al., 2008; data available at; and the UK Hadley Centre (Morice et al., 2012; data available at reanalyses). Whiskers indicate the 90% uncertainty range of the Morice et al. (2012) dataset from measurement and sampling, bias and coverage (see Appendix for methods). The coloured shading shows the projected range of global annual mean near surface temperature change from 1990 to 2015 for models used in FAR (Scenario D and business-as-usual), SAR (IS92c/1.5 and IS92e/4.5), TAR (full range of TAR Figure 9.13(b) based on the GFDL_R15_a and DOE PCM parameter settings), and AR4 (A1B and A1T). The 90% uncertainty estimate due to observational uncertainty and internal variability based on the HadCRUT4 temperature data for 1951-1980 is depicted by the grey shading. Moreover, the publication years of the assessment reports and the scenario design are shown."
This actually isn't that interesting for the discussion, but we get some more details on how the Figure was made and some basic context behind it.  This description is available with the Figure, but the Figure was added to the end of the draft: the draft does not place each Figure where it goes in the report, but has them collected at the end of the PDF.  We need to go a level deeper, straight to the source of the discussion on this Figure, Chapter 1.3.1.  The section has this to say about the Figure (my emphasis):
"Even though the projections from the models were never intended to be predictions over such a short time scale, the observations through 2010 generally fall well within the projections made in all of the past assessments. Note that before TAR the climate models did not include natural forcing, and even in AR4 some models did not have volcanic and solar forcing, and some also did not have aerosols. The projections are all scaled to give the same value for 1990. The scenarios considered for the projections from the earlier reports (FAR, SAR) had a much simpler basis than the SRES scenarios used in the later assessments. In addition, the scenarios were designed to span a broad range of plausible futures, but are not aimed at predicting the most likely outcome. There are several additional points to consider about Figure 1.4: (1) the model projections account for different emissions scenarios but do not fully account for natural variability; (2) the AR4 results for 1990–2000 account for the Mt. Pinatubo volcanic eruption, while the earlier assessments do not; (3) the TAR and AR4 results are based on MAGICC, a simple climate model that attempts to represent the results from more complex models, rather than the actual results from the full three-dimensional climate models; and (4) the bars on the side represent the range of results for the scenarios at the end of the time period and are not error bars. The AR4 model results that include effects of the 1991 Mt. Pinatubo eruption agree better with the observed temperatures than the previous assessments that did not include those effects. Analyses by Rahmstorf et al.(2012; submitted) show that accounting for ENSO events and solar cycle changes would enhance the comparison with the AR4 and earlier projections. In summary, the globally-averaged surface temperatures are well within the uncertainty range of all previous IPCC projections, and generally are in the middle of the scenario ranges. 
However, natural variability is likely the dominating effect in evaluating these early times in the scenario evaluations as noted by Hawkins and Sutton (2009)."
We see no discussion in Michaels' article about this section, and the reason is clear: it undermines his message.

The scenarios for each model are predictions of what CO2 will be, what aerosols will be (though, as the section notes, only some models could handle those), what other greenhouse gases will be, what solar activity may be, and so on. They are inputs; they have nothing to do with the models themselves. If the input is incorrect, then the output will be incorrect too. A mismatch does not tell you that the model is wrong; it tells you that your scenario was. The hindcast for the AR4 models matches observations quite nicely, and that should indicate to us that the models do a pretty good job of taking accurate input and giving you the Earth's temperature, because they're based on our physical understanding of the Earth's climate.
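To make the input-vs-model distinction concrete, here is a deliberately trivial sketch. This is a hypothetical zero-dimensional "model" (warming = sensitivity × forcing), not anything the IPCC uses; the point is only that identical "physics" fed two different forcing scenarios gives two different answers, so a projection/observation mismatch can come entirely from the scenario.

```python
# Toy illustration: a "correct" model fed the wrong input scenario
# still produces the wrong output. NOT a real climate model, just a
# zero-dimensional sketch: delta_T = sensitivity * forcing.

def toy_model(forcings, sensitivity=0.8):
    """Map a list of radiative forcings (W/m^2) to warming (deg C)."""
    return [sensitivity * f for f in forcings]

# Hypothetical forcing scenario assumed when the projection was made...
assumed_forcing  = [0.5, 1.0, 1.5, 2.0]
# ...versus what "actually" happened (e.g., extra aerosols, quiet sun).
realized_forcing = [0.5, 0.9, 1.2, 1.4]

projected = toy_model(assumed_forcing)
actual    = toy_model(realized_forcing)

# The model physics are identical in both runs; the growing mismatch
# comes entirely from the scenario, not from the model.
mismatch = [p - a for p, a in zip(projected, actual)]
print(projected)
print(actual)
print(mismatch)
```

The same logic holds for the real thing: evaluating a model requires feeding it the realized forcings, not the scenario guessed at years earlier.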

What the section tells us is that the scenarios did not properly account for natural variability, such as ENSO (El Nino - Southern Oscillation), solar activity, and aerosol radiative forcing.  We do know that these played a large role in the last decade:

• 1998 was a very strong El Nino year, while 2011 and 2012 were La Nina years (El Nino causes surface warming, La Nina causes surface cooling);
• there was a prolonged solar minimum during 2008/2009, so again there is a cooling bias on the end of the time series;
• aerosol pollution (aerosols reflect and scatter incoming sunlight, so they cool the planet) increased somewhat over the past several years, likely due to China's booming, coal-fueled economy.

Kaufmann et al. (2011) helps to explain this in more detail.

The scenarios largely omit natural variability, so when we actually do take it into account and remove it from the observations, as Rahmstorf et al. (2012) do, how do the observations compare to the climate model runs? Well:
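The spirit of that adjustment can be sketched in a few lines: regress temperature on natural-variability indices plus a linear trend, then subtract the fitted natural terms. Everything below is synthetic and greatly simplified compared to the actual analysis (which uses real indices such as MEI for ENSO, total solar irradiance, and volcanic aerosol optical depth); it only shows the shape of the method.

```python
# Minimal sketch of a Foster & Rahmstorf-style adjustment on
# SYNTHETIC data: fit temp ~ const + trend + ENSO + solar, then
# remove the fitted ENSO and solar contributions.
import numpy as np

rng = np.random.default_rng(0)
n = 120                              # e.g., 120 months
t = np.arange(n) / 12.0              # time in years
enso = rng.standard_normal(n)        # stand-in ENSO index
solar = np.sin(2 * np.pi * t / 11)   # stand-in 11-year solar cycle

# Synthetic "observed" temperature: trend + natural variability + noise
true_trend = 0.02                    # deg C per year (illustrative)
temp = (true_trend * t + 0.1 * enso + 0.05 * solar
        + 0.02 * rng.standard_normal(n))

# Ordinary least squares via the normal equations solver
X = np.column_stack([np.ones(n), t, enso, solar])
coef, *_ = np.linalg.lstsq(X, temp, rcond=None)

# Subtract the fitted natural-variability terms; the underlying
# trend is left intact and much easier to see.
adjusted = temp - coef[2] * enso - coef[3] * solar
print("fitted trend (deg C/yr):", coef[1])
```

Once the ENSO and solar wiggles are removed, the residual series scatters much more tightly around the trend line, which is exactly the visual effect in Rahmstorf et al.'s Figure 1.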

(Figure 1 from Rahmstorf et al. (2012))

Would you imagine that?

Now, one thing Michaels might be right about is that the IPCC could indeed change this graphic before the final draft is released, and it's clear why: people like Michaels are adamant about taking it out of context and lying about it. But it wouldn't be because Michaels was ever right about, well, anything.

This post heralds my time as an author for this blog, and foreshadows the topic of many of my posts to come: bringing into the spotlight the pseudoscience surrounding climate change "skepticism."

Saturday, December 8, 2012

Book Drive Win!

For the past week, we've been wandering through neighborhoods dropping off flyers and asking for donations of books. We decided to donate to the PTO Thrift store, a local nonprofit shop that exists to raise money for the Ann Arbor Public Schools. Today was the last day of the book drive and we collected all of the books and dropped them off at PTO. We got to donate almost 100 books. Thank you to everyone who made donations and to all the cool SSA members who went out in the cold to collect books. You guys are awesome.

Wednesday, December 5, 2012


Hey Secularists!

Welcome to the homepage and blog for the University of Michigan Secular Student Alliance! Feel free to explore our site and learn more about our group! Below are posts by our members about religion, philosophy, current events and other topics of interest to us.

If you are a member of the Michigan SSA and you would like to contribute to this blog, send us an email at secularstudents-owner [at] umich [dot] edu and we will send you an author invite. Also, make sure you are following us on Twitter and Facebook for the latest updates and check out our latest events in the sidebar.

Have a great and godless day, everyone!

Soul, ill-defined therefore unfalsifiable

(Originally posted by Jason in 8/2012)

There is a big campaign going on campus reaching out to students, courtesy of Harvest Mission Community Church. Since I happened to wear my most blatant "Michigan Atheist" shirt today (8/28), I was invited to join some of their students for dinner at Chipotle. Inevitably, familiar topics popped up, and one of my new friends asked, "Has the soul been disproven?"
On the spot, I decided to ask why he believes in the soul, and why he believes he has ONE soul instead of two, three, or multiple souls specialized for certain tasks (it's not as ridiculous as it sounds: according to ancient Chinese belief, souls are composed of hun and po, with 三魂七魄, "three hun and seven po," as one prominent dogma). And of course, the answer can only be the Christian Bible, which, as I pointed out, was written by authors extremely uninformed by modern standards.
A little conversational fun aside, those of us familiar with the scientific method know that the problem is falsifiability: nothing observable can disprove the existence of the soul. But at the bottom of it, what makes such a concept unfalsifiable?
It seems to me that an unfalsifiable factual claim is usually ill-defined, and vice versa. What are the details of the soul? What is it composed of? Where and when was your soul created or made? Did the common ancestor of humans and chimps have a soul? If not, from which generation onward did we start to have souls? None of these questions can be answered according to the Christian faith. When I asked whether H. neanderthalensis, H. erectus, or H. habilis had souls, all they could answer was "if they are human." Unfortunately, ill-definedness is contagious: now the term "human" is ill-defined.
The same applies to claims like "God exists" and "There is a fire-breathing dragon in my garage," as neither the god nor the dragon is well-defined here. As people try to fill in the details or connect an unfalsifiable claim to reality, it either becomes demonstrably false or imaginary/subjective: as long as it has nothing to do with reality, one can dream up all kinds of things. In the end, "please elaborate" may be a more effective approach than invoking the fancy scientific method.

Of consciousness and death

(Originally posted by Jason in 8/2012)

At the end of last month, the John Templeton Foundation generated some buzz by awarding $5 million to University of California, Riverside philosophy professor John Martin Fischer to lead the "Immortality Project," which will investigate questions such as (quoted from the website):
  • whether and in what form(s) persons survive or could survive bodily death
  • whether and to what extent persons’ beliefs about immortality influence their behavior, attitudes, and character
  • why and how persons are (at least pre-reflectively) disposed to believe in post-mortem survival
  • whether it is in some sense irrational to desire immortality
  • and more besides.
While these questions sound innocent, there is no shortage of cause for concern, some of it from statements by the project lead himself (careful documentation as a valid approach to determine whether near-death experiences offer plausible glimpses of the afterlife? theology as a way to bring reason to beliefs about religion?). Most importantly, if you don't have a clear idea about the conscious life of Homo sapiens, how can you meaningfully talk about an afterlife? (This last sentence, of course, paraphrases Confucius' take on the topic.)
Unfortunately, understanding consciousness requires progress in neuroscience, which doesn't generate news every day (optogenetics -- Method of the Year in 2010 -- deserves some attention). We can, however, anticipate what we expect to find. For some of us, that may in fact suffice.
What do we expect to find? We expect human consciousness to be an emergent phenomenon, fully described by the underlying physical system. This position is variously referred to as "scientific materialism," "physicalism," or "mechanism," but it really shouldn't be considered merely a "position" or "school." People have looked very hard for things they don't fundamentally understand, going so far as to build a machine kilometers in diameter to find phenomena they can't describe, and have so far ended up only validating what they hypothesized. While there is still hope that something new will be discovered at the LHC, and what we know so far shouldn't be considered perfect knowledge, it's no longer rational to bet on something mysterious being at play in our consciousness. We expect consciousness to be fully explained by the function of your brain and (to a lesser degree) your entire body, as surely as we expect the sun to rise tomorrow morning, as long as you live far from the polar regions. (In case you are wondering, this is not induction so much as Bayesian inference: we are estimating probability given imperfect knowledge.)
A few conclusions immediately follow. Since the human body occupies a finite volume and contains a finite amount of energy (there are some unfortunate outliers of the distribution, but they can only get so large and still stay alive for long...), the set of possible "human states" is not only countable but finite in the most precise sense of the word, due to quantum mechanics. As you may have guessed, this set of possible human states is unimaginably large: we are trying to describe a system that typically reaches 60~80 kg using quantum mechanics, which describes individual electrons and photons. However, this set is further constrained by our knowledge of biology, and many human states are identical for practical purposes even though they are, strictly speaking, physically distinguishable. For example, the human genome has already been sequenced, and it turns out one's genome can be compressed and sent as an old-fashioned e-mail attachment (~4 MB): your genome can only differ so much from the reference and still qualify as Homo sapiens. The molecules in your body that do not participate in the flow of matter and energy are in thermodynamic equilibrium with their environment, at around body temperature; for the ones that do, the bulk of the free energy used comes from glycolysis and the citric acid cycle. Since no biochemical reaction reaches the energy scale of gamma rays, their occasional presence due to radioactive isotopes or cosmic/solar radiation is of no relevance except for the possibility of cellular damage, and so on.
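The e-mail-attachment claim is easy to sanity-check with rough, commonly cited numbers (all figures below are approximate ballpark values, not exact measurements):

```python
# Back-of-the-envelope check of the "genome as e-mail attachment"
# claim. All numbers are rough, commonly cited figures.

genome_length = 3.2e9            # base pairs in a human genome
naive_bits = genome_length * 2   # 2 bits per base (A/C/G/T)
naive_mb = naive_bits / 8 / 1e6
print(f"raw genome: ~{naive_mb:.0f} MB")

# But any individual differs from the reference at only ~0.1% of sites.
variant_sites = genome_length * 0.001   # ~3 million variants
bytes_per_variant = 5                   # position delta + substitution, roughly
diff_mb = variant_sites * bytes_per_variant / 1e6
print(f"diff vs reference: ~{diff_mb:.0f} MB")
```

Storing only the few million differences from the reference lands in the tens-of-megabytes range, and generic compression of the variant list pushes it further down toward the ~4 MB figure quoted above.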
What does all this mean?
  1. While the number of possible human states is without doubt still vast, each individual is no longer unqualifiedly unique. We can consider each of us, at a given instant, as occupying a specific human state within an intrinsically shaped "human phase space" constituted by all of the possible human states. Our lives can be considered trajectories through this human phase space: by most measures of distance, we start very close to each other as fertilized eggs and then drift apart (continuing the theme of Confucius). We enter the subspace of self-aware humans around age 2, roughly follow the developmental program with environmental influences, and finally drift out of the human phase space, i.e., death.
  2. Subjective experience is in fact replicable in principle, and such replication goes as follows. The initial preparation is the easiest: all you need is a sufficiently identical egg (effectively a fertilized egg with the same genome, the same epigenetic markers, the same number of mitochondria with the same DNA content as the original, perhaps approximately the same amounts of glucose and ATP, and other relevant variables). Then you have to follow up with a sufficiently identical environment: in utero, in childhood, and beyond. The resulting subjective experience of the replica would be the same except for the inherent uncertainty of the human state and variation in preparation. Some of the attempts may end up very different, but with a sufficient number of attempts some are bound to end up eerily similar or, for all practical purposes, identical. If we could run an ensemble of these replicas as computer simulations, we might even be able to apply a particle filter algorithm to localize the one closest to the original. (Disclaimer: the above is intended as a thought experiment. In reality such an experiment could be cruel to the replicas and prohibitively expensive compared to whatever it might accomplish.)
  3. In the real world, the closest example is "identical" twins. If you have an identical twin, your twin is not merely similar to you: with the soul or any other mystical element out of the picture, you can consider your twin genuinely close to you in the human phase space, or even "almost you" if your upbringings are sufficiently similar.
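Since the thought experiment invokes a particle filter, here is a minimal bootstrap particle filter on a toy problem (a hidden 1-D random walk with noisy observations, nothing biological). An ensemble of candidate "replicas" is repeatedly propagated, weighted by how well each matches the observations, and resampled, which concentrates the ensemble near the trajectory most consistent with the data.

```python
# Minimal bootstrap particle filter on a toy 1-D tracking problem.
import math
import random

random.seed(1)

def particle_filter(observations, n_particles=500, proc_std=1.0, obs_std=1.0):
    particles = [0.0] * n_particles
    for z in observations:
        # Propagate each particle through the (random-walk) dynamics
        particles = [p + random.gauss(0, proc_std) for p in particles]
        # Weight each particle by its Gaussian observation likelihood
        weights = [math.exp(-0.5 * ((z - p) / obs_std) ** 2) for p in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        # Resample: keep the candidates closest to the observed data
        particles = random.choices(particles, weights=weights, k=n_particles)
    return sum(particles) / n_particles  # posterior mean estimate

# Hidden truth drifts upward; observations are noisy versions of it
truth = [0.5 * k for k in range(20)]
obs = [x + random.gauss(0, 1.0) for x in truth]
estimate = particle_filter(obs)
print(estimate, truth[-1])
```

The final estimate lands close to the true hidden state even though no single observation is accurate, which is the sense in which such a filter "localizes the closest replica."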
Up to this point, these conclusions and implications should be at least technically true. It is, however, up to each of us how we take them. Personally, I actually feel somewhat relieved to realize that I am not responsible for something immaterial, intrinsically unique, and irreplaceable: there is no such thing as a "soul." I will never be reincarnated into, say, an insect being eaten alive from the inside by a parasitoid, or some kind of gruesome being called a preta. I suppose I may try to extend my life in the future if I feel like it, but honestly I don't like everything about myself unconditionally (I doubt many do), nor many aspects of the environment I experienced. I may try to "edit out" those aspects of myself and those environmental influences in the future, but exactly how worthwhile is such self-preservation and self-improvement? After a certain point, how much continuity is left between now and such a future (interestingly, we apparently evolved a partial break in our stream of consciousness as we develop from infant into adult)? Might it not be more meaningful to make a clean break and start over, with (possibly genetically engineered) offspring and a vastly superior upbringing?
I suspect each of us would have a different take on this, so please leave yours as a comment (as long as you are a self-aware being, not a spambot!). If you run into my replica, though, don't bother asking: he would say almost the same thing :D
P.S. Oh, Pascal's Wager, you ask? That's really beyond moot. I suppose there is a vanishingly small chance that some kind of super-intelligent and technologically advanced being is keeping track of us, scoring us along the way, and waiting to initiate two systems: one constantly inflicting excruciating pain upon the (unfortunately) chosen, replicated, and most likely modified human states, and the other constantly providing maximal bliss to the (fortunately) chosen, replicated, and modified human states. But why should I be personally concerned? I suspect that if such a dude showed up at this point, most of us wouldn't welcome Him, Her, It, or Whatever.

How should an ideal modern constitution treat religion?

(Originally posted by Jason in 8/2012)

In all honesty, this post is motivated by a recent conversation between Christian and atheist members of our group regarding Chinese policy towards religion. Apparently, we are in agreement here: people should have the freedom to practice religion without state interference. Comparing this with the current situation in the US, however, it seems to me that neither country's approach is ideal. It is in fact arguable that the US Constitution is not completely secular.
The current US Constitution can best be described as agnostic. In particular, the First Amendment reads:
Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.
In its historical context, this was probably the best approach it could take, as the original US Constitution and the subsequent Bill of Rights were written before most modern scientific knowledge was established, predating On the Origin of Species by decades. Looking back with what we know now, however, such an agnostic approach constitutes false balance: it doesn't assert, declare, or promote one position over another even in the presence of evidence. As we collectively accumulate knowledge, gain power through technology, and render more phenomena predictable, what the public considers "fact" or "truth" only becomes more important over time. Essentially leaving knowledge and the empirical approach out of the constitution leaves a nation too susceptible to anti-intellectualism and to religious enterprises that have no interest in empirical knowledge or fact.
There are many possible ways to put that into a constitution, and it doesn't even have to mention science. For example (this probably fits best in the preamble): "We value subjective experiences but seek and believe in empirical, verifiable facts. We hold it as self-evident that no human had, has, or will ever have inherently unique access to truth over other humans..." Such an approach may be considered scientific, but it is not even inherently atheist. If it turns out that super-intelligent beings created the first cell on Earth and they finally decide to show themselves, under such a constitution we should consider them our Creator, of sorts. The same goes if it turns out that this universe is a simulation, or that some super-intelligent beings (at a totally different level) created this universe and picked some kind of extended combination of the Standard Model and General Relativity as its physical laws.
All the subsequent problems aside, taking no position with respect to truth and knowledge is without doubt one secular approach. But while the US Constitution is not allied with any particular religion, the First Amendment goes out of its way to grant a special kind of freedom to all religions, and religions collectively do not deserve such an exalted position over other schools of thought or enterprises. As long as the constitution protects other freedoms, like freedom of speech and freedom of assembly, religious practices would already be as protected as they should be. Going out of its way to grant special freedom to religion shields dubious practices, which probably wouldn't be allowed under other circumstances, from public discourse: if parents wanted to cut off a part of their child without medical reason, I doubt many people would tolerate such a practice in general. Yet in the context of religious freedom, such a position is suddenly controversial: as the Wall Street Journal puts it, The German Judge vs. Genesis 17:10. The same shielding effect also emboldens parents to force their children to attend religious school, a situation that really violates the child's freedom.
We are essentially out of places on Earth to create a new country, but countries rise and fall all the time on a historical scale. Along the lines of Paul Gauguin and E.O. Wilson's new book, perhaps one day there really will be a new country whose constitution opens with a kind of Emancipation Proclamation for Homo sapiens:
We value subjective experiences but seek and believe in empirical, verifiable facts. We hold it as self-evident that no human had, has, or will ever have inherently unique access to truth over other humans...
...We know where we come from, we know who we are, and we will decide where we are going together.

Village of the damned: real world trolley problem

(Originally posted by Jason in 4/2012)

Until now, I had been somewhat dismissive of "unrealistic" thought experiments like the trolley problem. It turns out I was wrong. I had been wrong the whole time.
Panama: Village of the damned (per Aljazeera English)
They are not unrealistic -- at least not all of them. They just don't happen around us in our daily lives. Had I not been interested in issues regarding indigenous peoples, I would never have noticed what's going on in Panama, or a similar situation regarding the Belo Monte Dam in Brazil.
For now, let's give the decision makers the maximal benefit of the doubt. Say the dam is indeed for the good of the many, not a bunch of shady, corrupt business. Let's also assume that indigenous people do not unconditionally own their land, river, and forest (to be honest, a fairly strong assumption already). Even then, the issue is not a few people affecting "the rights of the rest of the people," as minister Jorge Ricardo Fabrega tries to portray it, or "the needs of the many outweigh the needs of the few," as one commentator put it. It's simply not true that everyone is affected equally in this case. The indigenous people would permanently lose their habitat, along with their way of life. Unlike modern people, who can live essentially anywhere as long as they have money and speak the language, relocating indigenous people alone could be difficult, and relocating their culture could be outright impossible: that is the degree of harm to the few. The degree of benefit to the many would be more plentiful and cheaper energy (again, assuming that the benefit does transfer to the general public). What would that mean in practice? Would it be more like a premature baby getting proper neonatal care, or more like people enjoying air conditioning 24/7 in the summer instead of a fan? Moreover, if Panamanians knew the human cost behind the cheap electricity, would they still want it, along with the guilt?
I do not exclude the possibility that, after a proper cost-benefit analysis, dam construction is indeed the right thing to do. But even in that case, it's obvious that the indigenous people should be heavily compensated and accommodated. Here I don't much mind the indigenous people talking about dios/deus (Spanish/Portuguese for God). What really makes me wince is when that minister talks about "rights". Too many people are flinging around all kinds of rhetoric, while too few have a proper framework for thinking about these issues.

The danger—and necessity—of moral theorizing

(Originally posted by Pat in 4/2012)

“Lying is wrong in all circumstances, even to save someone's life.”
“We may eat retarded orphans.”
“The problem with Stalin was his inefficiency.”
“Taxation is slavery.”
“Anyone who doesn't give the majority of their income to UNICEF is a murderer.”
“Under patriarchy, all heterosexuality is tantamount to rape.”
“Accelerating the Singularity is far more important than saving lives today.”
These are just a sample of the weird, extremist, and even appalling conclusions that I have heard people draw as a result of moral theorizing. It's remarkably easy to start with premises that seem entirely plausible, carry them down a chain of reasoning that seems logically unassailable, and come out with something that is completely absurd and immoral. (Indeed, the real challenge seems to be carrying through a moral argument that doesn't come out absurd and immoral.) Additionally, we have the finding that ethics books are the books most likely to be stolen from libraries, and of course the various works of Jonathan Haidt suggesting that many of our moral attitudes are largely impervious to reason. (Haidt takes this so far as to say that all of our moral attitudes are completely impervious to reason, and there he is wrong. But it's hard to listen to the rantings of a racist, a misogynist, or a global-warming denier and think, “Humans are so rational and sensitive to evidence!”)
It would be tempting, therefore, to abandon moral theorizing entirely. We could just rely on our intuitive judgments and never have to think about the underlying theory. Indeed, it could be argued that we already have done so, if whenever our arguments come out to something counter-intuitive we abandon those arguments. Doesn't that mean we are really slaves to our intuitions?
But no, we cannot afford to do this. We owe the greatest achievements in human history precisely to moral theorizing. It is because of moral theorizing that we now let women and racial minorities vote; it is because of moral theorizing that we abandoned theocratic monarchy and replaced it with representative democracy. (Saudi Arabia and Iran did not get the memo.) It is because of moral theorizing that gay rights is now becoming mainstream and we are on the verge of a new paradigm shift in animal rights—in a few generations it will be as unthinkable for most of us to eat meat as it is unthinkable to us now to sell Black people as property. Nothing could ever be more important than knowing and doing what is right.

Were these ideas counter-intuitive when they were first proposed? Yes, they were. And people made arguments to defend them—typically very bad arguments, but arguments nonetheless. In hindsight, many of these arguments feel more like excuses for what was already being done without any kind of rational justification. But at the time, people took them seriously as if they were real arguments. And even if they were excuses, people still bought into them. People today will present Pascal's Wager as if it's a brilliant argument for Christianity. There are still scientific racialists like Charles Murray trying to argue that the reason Blacks and Hispanics don't do as well in education is that they are genetically stupid. There are still people defending male circumcision on the grounds that it protects against diseases like AIDS (note that AIDS is about 30 years old and circumcision is 3,000 years old), and other people defending female circumcision on the grounds that we have no right to judge other cultures. There are people who think that “religious freedom” means the freedom not to cover contraception on health insurance because the Pope says contraception is bad. There are people who think that global warming isn't happening, or if it's happening it must not be bad, or if it's bad it must not be our fault, or if it's our fault there must not be anything we can do about it. (This slipperiness speaks volumes about what the real objective is: not to understand the truth about global warming, but to make sure that no matter what, we never have to do anything, especially not anything involving government.) There are people who think that illegal immigrants are the cause of economic problems, and other people who think that the only reason markets ever crash is that governments intervene too much.
Yet, there is something I notice about all these cases, something really quite important. Almost all of them (not quite all—and in a moment I'll get to the exceptions) are actually debates about facts. Global warming is an empirical hypothesis—it can be verified or falsified by scientific evidence, and has been strongly supported thus far. The claim that some races of people are genetically stupider than others is a scientifically testable one (and when tested it does not fare very well). If circumcision really does protect against disease, that needs to be factored into a cost-benefit analysis (and due to the high risk of complications, routine circumcision fails that test). The impact of immigration on an economy can be studied (and has been—it's almost always positive). The causes of market crashes are the subject of scientific research—in this one case, we don't have clear answers yet, though we do know certain things: It's not always the government's fault (though occasionally it is), the market is unstable far beyond our standard risk models, deregulation makes it worse, not better.
If we go back to the cases of slavery, monarchy, and theocracy, these too fail for largely factual reasons. We know that people from Africa aren't so stupid as to be mindless automatons, robots that may be used for whatever we wish. We know that there is no God directing the fates of states, there are only people making political decisions.
This means that moral progress can often be, and typically is, the result of scientific progress. The more we know about the world, the more we know about how to act in the world in order to achieve our goals, and since most of our goals are pretty much the same—happiness, peace, prosperity—that largely solves the problem.
But what about the cases where it doesn't? What if some people really believe that deference to God is more important than the happiness of human beings on Earth? Then we can show them all the statistics we want about how religion is correlated with poverty rates and abortion rates and so on, and they won't care. They will dig in their heels and say that no matter how much suffering theocracy may cause, it is still right because it is what God wants, and that's all that matters.
In fact, I think such people are rarer than we imagine. Why is it controversial (despite being scientifically almost unassailable) that poverty is correlated with religion? Because even religious people can see that's a mark against religion. This is why the controversy over the effect of religion on crime rates (which actually is somewhat unclear scientifically) is so fierce; if it does turn out that religion is overall bad for crime rates, that's another reason to doubt that God is watching over us. And if religion does reduce crime, that's to some extent an argument in its favor (though I must say there are far too many arguments in the other direction for it to change my mind).
But there do seem to be some people who really have shielded their moral beliefs so carefully against evidence that they are unlikely to be persuaded by any sort of scientific data. Moral relativists come to mind; it's pretty obvious that some cultures do better than others, and it's pretty obvious that modern Western culture does better than everything else. It's only if you're a priori convinced that all cultures are equal that you would try to argue that, say, life expectancy, infant mortality, and median income are completely irrelevant to the welfare of a society. Some religious fundamentalists fall in this category as well (but clearly not all); there do seem to be folks who think that no matter what happens, we should always serve God.
What do we do with such people? We need moral theorizing. We need to better understand just what it is that makes some actions right and others wrong, some policies good and others bad. We need to root out the flaws in our intuitions, and as much as possible, repair them.
Indeed, there's one very obvious flaw in our intuitions: They often contradict themselves. By slightly reframing the exact same situation, you can make people's intuitive judgments change. A simple matter of “save 100 people or save 200 people” versus “let 100 people die or let 0 people die” can completely change the way people decide a moral dilemma. There are some interesting psychological questions here as to why this happens—but it clearly shows that we can't trust our unaided intuition in all situations.
Another very serious flaw in human intuition is the tendency to defer to authority and conform with the group. Most of the really horrible atrocities in human history were done not by malicious psychopaths, not by fanatical true believers, but by ordinary people going along with the crowd, doing what they were told (often orders given by malicious psychopaths or fanatical true believers). Milgram actually did much more careful study of this phenomenon than most people give him credit for; it's actually not the case that most people obey direct orders. Instead, they obey social pressures, and will obey orders if the orders are associated with goals and authorities believed to be legitimate.
But this does raise the very difficult problem of what we can trust—if both theorizing and intuition sometimes give wrong answers, what can we do? I rather like the proposal Rawls offers of “reflective equilibrium”—use whatever means we have, but use each means to check and challenge all the others. This clearly helps, but I must wonder if it actually gets us all the way. Many systems in nature have multiple stable equilibria; what if morality is like that too? Or worse, what if there is no stable equilibrium? What then?
One thing we cannot do—must not do—is give up on the project of making morality rational. We must not listen to the Haidts of the world who say that there is no way to persuade people; we must not listen to the subjectivists and the relativists who tell us there is no truth to be found. As hard as it may be to change people's minds, we have done it before, and we must do it again. As hard as it may be to come up with wise moral theories, we have no other choice.

Theism among different types of scientists

(Originally posted by PJ in 4/2012)

I have suspected for some time that theism is more prevalent among certain types of scientists than others. Just from a layman's standpoint, different science fields involve different modes of thinking. One field may involve heavy memorization and careful hands when performing experiments, while another may be concerned with raw logic more than anything else. Assuming that most scientists work in the fields most compatible with the way they think, this should translate into different rates of theism across the sciences as well.
With this thought nagging me, I decided it was time to look at existing research to see if my conclusion has some basis in truth. Intuitively, I believed that (among the natural sciences, at least) chemists would have high rates of theism compared to other scientists. I came to this conclusion because chemistry generally doesn't concern itself with anything that bears directly on the philosophical, which I believed would keep a religious chemistry researcher's beliefs from conflicting with their work. (Also, anecdotally, I've come across a higher percentage of devout Christians working in chemistry than in other sciences.)
So enough of my speculation, and on to the articles I looked through. My search wasn't exhaustive by any means (I've only found three studies so far). The second study listed is behind a paywall, so if you want to read that paper, let me know and I will send you a link. And feel free to ask about anything that needs clarification.
The first study I know of which discusses this is by James H. Leuba: The Belief in God and Immortality: A Psychological, Anthropological and Statistical Study, published in 1921. It doesn't deal with the topic at hand as thoroughly, since the book was a study of Americans in general, but it does show that even then there was enough of a difference in theism among different types of scientists to be worth mentioning. 1,000 of the ~5,500 men listed in American Men of Science were contacted with questions about belief in God. They were split roughly at random into two divisions of 500. Those in each division were separated into 'Lesser' and 'Greater' scientists, and also into the 'Biological Sciences' or 'Physical Sciences'. Here were the results from the second division:
Physical Scientists:
Lesser - 49.7% Believers
Greater - 34.8% Believers
Total - 43.9% Believers
Biological Scientists:
Lesser - 39.1% Believers
Greater - 16.9% Believers
Total - 30.5% Believers
There are pie charts giving a visual of the other percentages for 'disbelievers' and 'agnostics & doubters', but the actual numbers are only described for a few cases, so they will not be listed.
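As a quick sanity check on these figures, if we assume each 'Total' is a sample-size-weighted average of the 'Lesser' and 'Greater' percentages, we can back out the implied share of 'Lesser' scientists in each group. This sketch is purely illustrative (the function name and the weighted-average assumption are mine, not Leuba's):

```python
def implied_lesser_share(p_lesser, p_greater, p_total):
    """Solve w * p_lesser + (1 - w) * p_greater = p_total for w,
    the implied fraction of 'Lesser' scientists in the sample."""
    return (p_total - p_greater) / (p_lesser - p_greater)

# Physical scientists: 49.7% / 34.8% / 43.9% believers
w_phys = implied_lesser_share(49.7, 34.8, 43.9)  # ~0.61
# Biological scientists: 39.1% / 16.9% / 30.5% believers
w_bio = implied_lesser_share(39.1, 16.9, 30.5)   # ~0.61
```

Both groups come out to roughly 61% 'Lesser' scientists, so the reported totals are at least internally consistent with a single sampling scheme.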
A more recent study is Religion among Academic Scientists: Distinctions, Disciplines, and Demographics by Elaine Howard Ecklund and Christopher Scheitle, published in Social Problems Vol. 54, No. 2, in May of 2007. In this study, 2,198 faculty members from 21 elite universities were selected, and 1,646 responded. Three subfields represented the Natural Sciences (Physics, Chemistry, and Biology), and four represented the Social Sciences (Sociology, Economics, Political Science, and Psychology). On belief in God, respondents were given six options:
The first was an option for atheists: "I do not believe in God."
Biology 41.0%
Physics 40.8%
Sociology 34.0%
Psychology 33.0%
Economics 31.7%
Political Science 27.0%
Chemistry 26.6%
The second was an option for agnostics: "I do not know if there is a God and there is no way to find out."
Economics 33.3%
Political Science 32.5%
Sociology 30.7%
Biology 29.9%
Physics 29.4%
Chemistry 28.6%
Psychology 27.8%
The third was an option for those who believed in a higher power but not God. The remaining three options were for theists. For weak theists, there was "I believe in God sometimes". For strong theists, there was "I have no doubts about God's existence". Here were the combined results for those who believed in God or a higher power:
Chemistry 44.8%
Political Science 40.5%
Psychology 39.1%
Sociology 35.4%
Economics 35.0%
Physics 29.9%
Biology 29.2%
A third study I came across was a survey by the Pew Forum, Scientists and Belief, which breaks the data down by gender, age, and field.

Let me know your thoughts on this.

Courage and Computer Screens

(Originally posted by Rodion in 3/2012)

In America, we live in a society dominated by screens. Everything is now online. If you want a pizza, order it online. If you want a video game, order it online. If you want a Russian bride, we have those online too. Our lives revolve more and more around virtual interactions and wireless relationships, and we are starting to lose the ability to communicate with each other. Now, don't get me wrong: I think the internet is an incredible thing. It has changed our lives for the better in so many ways. We can access information so easily that we basically have encyclopedias in our pockets. We socialize, work, and play online as well. It is a wonderful tool that is constantly pushing human society forward.
On the other hand, people forget that the internet is a place where consequences exist. The internet is changing; anonymity is becoming harder and harder to maintain. That fact has pros and cons that I don't wish to get into here. But the truth is, we are becoming more and more responsible for the things we say and do online. Whether you believe it or not, the things you say online will have consequences. They will be read by other people, and depending on what you say, your reputation may become tarnished or even destroyed. For some reason, people tend to be much braver behind a computer screen than they are in reality. The things people say to someone from behind a computer screen are things they would never have the courage to say in real life. Again, there are pros and cons to this. As I get older, I realize that I am accountable for everything that comes out of my mouth, whether in a conversation with someone else or in a simple Facebook status. I also realize that everyone else is accountable for their statements. As an adult, if I have a problem with another adult, I address it directly and in person. I value passive aggression the way I value the tooth fairy: not at all. So, the moral of the story: be very careful of what you say on the internet, especially if you are saying harmful things, because eventually someone will read those statements and they will not be happy. We are all adults here; we should not be saying things on the internet that we would not say in real life. I just want people to stop hiding behind computer screens and come out and talk to each other. You know, like the good old days!
I hope you enjoyed my first blog post!!!

The Reason Rally was, overall, a success.

(Originally posted by Pat in 3/2012)

It was not a total success, I would say. The bus was remarkably cheap, but you get what you pay for—not nearly enough legroom, a schedule that didn't allow us any time in DC aside from the rally, no wifi access, and a temperature control system that made the front of the bus cold while the back was hot. The result was mass sleep deprivation; by the power vested in me by diphenhydramine I was able to get at least some sleep, but still by the time I got home the one thing I most wanted to do was sleep (and I did so, for about 8 hours). Buses are also quite a bit harder to sleep on than airplanes, because roads are full of bumps, lights, and competing vehicles while airspace is typically clear and smooth.
The rally itself was pretty good. The rain caused a few problems, but wasn't nearly as bad as it might have been. (If Thor frowned upon our proceedings, he's getting lazy in his old age.) Depending on your individual tolerance for sogginess, you could have watched most of the rally without an umbrella or poncho. News outlets have estimated the attendance at about 30,000 people, which is respectable but not particularly impressive.
We didn't have a schedule—indeed, no printed schedules were made, only an app available for smartphones. In principle this seems ecologically sound; in practice a lot of people don't yet have phones with the requisite capabilities. If I'd thought ahead, I would have brought my own printed copy of the schedule posted online. Even worse, the rally didn't strictly follow the schedule; it started out well aligned and gradually deviated over the course of the day. This is to be expected to some extent; but as the whole rally ran from 10 to 6 with no breaks, and our bus arrived at 10 and left at 7, this meant that either you never ate or visited DC, or you missed part of the rally without really knowing which parts you were going to miss.

There's actually an interesting little moral problem embedded in that temperature issue: If you have control over some social variable V, and some number of people N_1 want V at a particular value V_1, while some other portion of the population N_2 want V at another value V_2, what is the decision procedure for setting V that is socially optimal? To really do it right, I think you need to know the utility functions of everyone in the population—how bad is it to be too hot versus too cold?—and then add them up and find the value of V that maximizes the sum. To approximate this, you could conduct a range vote between the two groups, hoping that people would not strategically exaggerate their utility functions—in real life they probably would, though in a worst-case scenario that just turns the range vote into a simple majority vote. We of course did nothing of the sort: Rodion came from the back of the bus to the front and asked the driver to turn on the AC; there were a few groggy objections from other people in the front, which were ignored, and then the AC was turned on. This may or may not be the right outcome (thanks to layered clothing and the drop in body temperature required for sleep, the utility of cold is quite a bit higher than the utility of hot), but the decision procedure is a terrible one. I've noticed a systematic trend here, actually: Since I left office, the group has spent less and less effort trying to devise genuinely fair decision procedures, instead preferring fast heuristics like majority vote or executive-board decision that seem democratic enough. I regret that I haven't voiced my objections more when decisions are made by such biased methods. They may be better than unilateral autocracy—but only marginally. Democracy is about the will of the people; if you're not matching the will of the people, whatever you're doing isn't democracy.
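To make the thermostat problem concrete, here's a minimal sketch of the utility-summing procedure next to a range vote. All the numbers and names here are hypothetical illustrations, not anything we actually measured on the bus:

```python
def social_optimum(utilities, candidates):
    """Choose the value of V that maximizes the sum of everyone's utility."""
    return max(candidates, key=lambda v: sum(u(v) for u in utilities))

def range_vote(ballots):
    """Range voting: every ballot scores every candidate; highest total wins."""
    totals = {}
    for ballot in ballots:
        for candidate, score in ballot.items():
            totals[candidate] = totals.get(candidate, 0) + score
    return max(totals, key=totals.get)

# Hypothetical bus: three riders whose ideal temperature is 18 °C,
# two whose ideal is 26 °C, each with quadratic disutility away from it.
ideals = [18, 18, 18, 26, 26]
utilities = [lambda v, i=i: -(v - i) ** 2 for i in ideals]

best = social_optimum(utilities, [18, 22, 26])  # picks the compromise, 22
```

Note how the utilitarian answer differs from the majority answer: three of five riders prefer 18 °C, so a bare majority vote between 18 and 26 picks 18, but the total-utility maximum is the compromise at 22 °C. A range vote over honest ballots built from the same utilities would agree with the compromise; only strategic exaggeration collapses it back into majority rule.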

I regret missing three speakers mainly: Adam Savage, Eddie Izzard, and Lawrence Krauss. I was particularly disappointed to miss Savage, because I wasn't even sure what he planned to speak about! Izzard no doubt regaled us with his comedy, Krauss probably talked about science and tried to stay away from the reasons he's become controversial lately—but what does a Mythbuster have to say to a crowd of atheists? I may never know.
I never got the chance to see any of the many museums and monuments in DC, other than the time we rushed through the National Gallery of Art to get to an overpriced cafeteria in the basement, or the time Ewan and I used the Smithsonian Museum of Natural History as a meeting spot with our friend Eoin so we'd have a chance to get dinner together.
The best speeches, in my opinion, were Sean Faircloth's and James Randi's. Faircloth was more eloquent than usual, and it really seemed like exactly the right place and time for what he was talking about: atheism as a political movement, rationalism as public policy and not merely personal belief, a new social movement. Randi's speech was a warning about how quickly irrationality can poison a society—how eternally vigilant we must be to prevent a relapse into old ways of thinking. This is a point I have trouble getting across to apathetic atheists and agnostics; so often I hear “What's the harm?” and I want to just shake them and say, “Have you heard of the Dark Ages?” 40% of Americans think the Earth is 6,000 years old. If you don't think that's a problem, I don't know what else to say to you.
Tim Minchin was fun but not all that substantive (“And I... will always... love boobs!”), which is pretty much what we expected. Bill Maher was pretty good, but he wasn't actually there (it was just a video), and frankly I can watch videos of him anytime. The rally was also notable because it was the first time I can remember really strongly disagreeing with Richard Dawkins. Most of his speech was good, though I'd heard a lot of it before. But there was one part in particular that jarred me: He told us to inquire deeply into the details of people's religious beliefs (which so far I think is right), and then, when someone openly admits that they really do believe in something as bizarre as transubstantiation or reincarnation, to do what? To publicly ridicule them to their face. Suddenly I can see what all the “Don't be a dick” people are talking about—no, I'm sorry, that's rude, even cruel. If you want to ridicule the ideas, or publicly criticize the leaders of religious organizations, I agree with that. (One place Minchin and I definitely agree: Fuck the Pope.) But individual laypeople are as much victims of religion as they are perpetrators, and you're never going to get people to like you if, when they open up to you about what they believe, your first response is to make fun of them. People shouldn't make their religious beliefs so central to their sense of identity, but the fact is, they do—and unless you account for that, people are going to hate you and be fairly well justified in doing so.
I in fact don't do this, though I am sometimes accused of it. My mother believes in transubstantiation, and my cousin is the worst kind of Young-Earth Creationist. I've met people who believe in alien abduction, and vast numbers of people who profess belief in things like scientific anti-realism and moral relativism. When they get very stubborn and irrational in arguing with me, yes, I will get angry and frustrated, and I will raise my voice and point out the stupidity of their arguments. But there's a very important difference between that and what Dawkins seemed to be suggesting—I never make fun of anyone personally, I do my best to avoid ad hominem arguments, and I never start aggressively. I've had hour-long discussions with my Creationist cousin that never involved anyone raising their voice. (I did feel like facepalming a few times, though.) There is a world of difference between “How do you know that's true?” or “Don't you see how that sounds weird to someone from the outside?” or “Come on; you've got to see that's a bad argument” (as I might say), and “You moron! How can you believe something so stupid?” (what Dawkins seems to be recommending—though I note he doesn't usually do this himself).
In fact, I'm thinking I may want to rethink my own approach, especially in my online persona, simply to differentiate it more strongly from what Dawkins is talking about. I think a better model is Dennett, who bends over backwards to be polite but refuses to give religion special treatment that other ideas don't get. PZ isn't a bad example either; he rants on his blog, but in person he's a teddy bear. It's important to remember: Religious people are not mentally ill, they are not idiots, they are not retarded. (In fact, even if they were, the proper response to mental illness or retardation is pity, not anger. Such people would need your help, not your condemnation.) There are far too many religious people for that sort of theory to be plausible. These are normal, mentally healthy people who believe these incredibly bizarre things—and while we are right to point out how bizarre the ideas are, we must also be careful to keep in mind that these are normal people believing them.
I was particularly unimpressed by the music performances (other than Minchin), and the entire speech delivered in Spanish was pretty weird (as far as I can tell, there weren't even subtitles). There were maybe a hundred Christian counter-protesters—I note I didn't see any Muslim, Jewish, or Hindu counter-protesters—gathered in a clump off to one side of the rally, as well as your typical street-corner preachers all around the general area. I collect this sort of paraphernalia (I'm especially happy when people give out Bibles, as I've been trying to build a Bible collection), so I have a DVD now that I plan to watch in MST3K style. (It's called 180 and it plugs itself as “30 minutes that will rock your world!”)
A lot of the speeches were about how the Reason Rally could be a turning point in the atheist movement. Maybe I was simply too exhausted from sleep deprivation followed by standing in the rain for hours, but such things rang a bit hollow for me. 9/11 was a turning point; The God Delusion was a turning point. This rally, at least at the time, didn't feel like a turning point.
We did get a fair amount of media attention: The Washington Post, The Examiner, Huffington Post, The Blaze, and even Fox News put out stories on us. After being somewhat supportive at first, Fox News remembered its bias. Yahoo News described the rally as “lacking passion”—which is one thing it certainly wasn't; frankly, I was made a bit uncomfortable by the chanting of “Richard! Richard!” when Dawkins came up to speak. It seemed so, for lack of a better word, groupthink. USA Today and The Christian Post latched onto the same concerns I had about Dawkins's speech; I'm sure they won't quote people like me digesting and criticizing it.
I guess this is a problem for any social movement: our most radical voices will always draw the most attention, while more nuanced ideas actually motivate the real change behind the scenes. Yet this may not be so bad, for in our case even the “radicals” at the rally were far more reasonable than those of most political movements; the worst-case scenario would be an atheist as rude as Rush Limbaugh. There were no threats of violence, no calls for bloody revolution. A few speakers and signs didn't make a strong enough distinction between “Religion is stupid” (which is true) and “Religious people are stupid” (which is not). I didn't hear them myself, but a few others on the bus recounted some really tasteless jokes. If that's “militant atheism”, we're still miles above any other ideological movement. We don't even glitterbomb people (which, as assaults go, is pretty benign). Militant socialism was the October Revolution; militant Christianity was the Crusades. Even feminists—hardly known for their violence—have said things far more appalling than the worst I've heard from atheists. (Catharine MacKinnon may not have said “all sex is rape” in so many words, but this is a direct quote: “Men who are in prison for rape think it's the dumbest thing that ever happened... It isn't just a miscarriage of justice; they were put in jail for something very little different from what most men do most of the time and call it sex. The only difference is they got caught. That view is nonremorseful and not rehabilitative. It may also be true.” Also, in my Women's Studies class I heard people say “under patriarchy, all heterosexuality is rape” unabashedly, and one of the instructors strawmanned evolutionary studies of rape as “boys will be boys”.)

In all, I think the rally will be a force for good. It may or may not be a significant turning point in the atheist movement, but it does make clear that our movement has a lot of supporters who aren't going away. The last ten years or so have shown poll numbers gradually shifting with regard to religion; for the first time in decades, a statistically significant plurality of Americans think there is too much religion in politics rather than too little. The Reason Rally should only accelerate this process, and that can only be a good thing.
Still, this whole rally business is really wearing me out. I spend money I can't really afford in order to go on long, harrowing bus rides and stand in enormous crowds? There's got to be a better way to get political messages across.

Religion, Labels, and Predictability

(Originally posted by Ryen in 3/2012)

Why don't people trust atheists? Clearly, you might say, they have some mistaken notion that only the fear of God can be a foundation of good behavior. Atheists, as a whole, are about as trustworthy as any other group of people (perhaps more so). Surely it's just ignorance of the facts? But I submit that the situation is more complex.
It is not the case that the religious only trust people of their own "in-group" and distrust equally atheists and people who practice other religions. In general, the religious trust the religious (almost) regardless of which religion. Just so long as you have a religion and a God, you are deemed trustworthy. (Islam may be the exception in the US.) Think of what not-so-bright ultraconservatives sometimes say about freedom of religion: "You are free to practice whatever religion you want, so long as you have a religion. We are one nation under God, after all. You can believe in whatever God you want, but atheists are not welcome." The point is that anyone who labels himself with a religion of some sort, regardless of which religion, is more trusted by other people of any religion. It seems that specific doctrines of belief are secondary to the fact of belief itself. ("Belief in belief," as Daniel Dennett calls it.)
So what is it about people who identify as atheists that makes the religious (and possibly even other atheists) wary? I have a theory. It's all about information transfer and predictability. Consider the following two statements: "I am a Christian." "I am an atheist." They sound quite similar, but there is, in fact, a great asymmetry between them. The former statement packs a lot of information into one label: "Christian". This one word conveys a long list of doctrines about God and morality, which are supposedly believed and committed to. When you know that a person is Christian, in other words, you have a better chance at predicting their behavior (or at least you think you do). You can say to yourself, "would this man steal from me when I'm not looking? Nah, probably not - he is Christian, after all! He has made a commitment to the moral doctrines of the Bible; out of his love for God, I don't expect him to steal anything." But what does the latter statement convey? Nothing more than a lack of belief in a deity. There is no list of doctrines that comes with being an atheist, and as a result, the term conveys very little information. "So this man is an atheist. Would he steal from me? Well, it seems very possible. After all, he could be a nihilist, or a hedonist, a moral relativist ... perhaps he doesn't have any morals at all!"
Imagine that you have a dangerous disease. Do you try to cure yourself with the FDA-approved drug or the new, untested, highly experimental drug? Most people would go for the former - the FDA label ensures, in your mind, that the drug is trustworthy. You are given a list of things that you can expect to happen when you take it. The drug is predictable: no need for anxiety. Label yourself with a religion, and you bear an authoritative stamp of trustworthiness; label yourself as an atheist, and people don't know what they're getting into. It is a natural reaction to fear the unknown and the unpredictable. I submit that some sort of conscious or subconscious consideration of these facts contributes to why the religious so mistrust us nonbelievers.
So what can we do about this? Perhaps call ourselves something with more conveying power, and begin defining ourselves more in terms of what we affirm to be true, rather than what we disbelieve. How about the term "humanist"? This is better, but it is still not nearly as cohesive or well-defined (in the public mindset) as any religion. (Another idea, also from Dan Dennett, is to use the term "bright" instead of "atheist", but I'm not going to get into that here). "Humanist" presupposes only a vague set of propositions about morality and the goals of humanity. It does not have set-in-stone commandments that can be paraded around as assurance of right belief and good behavior. But perhaps it should (to a very small extent), so that the term conveys more information and is better received by the public.
No, we should not have our own "ten commandments." I'm saying that we should be more assertive about the basic, thoroughly-verified values we hold to be true. Humanists come from all walks of life, but at some point, in order to be humanists, we must all agree on a few basic ideas. We attack the dogma of religion, but the fact is, some measure of "dogma" is unavoidable. We despise excessive, unjustified dogma - but there are some things which cannot be doubted away. Is it wrong to kidnap, rape, torture, and murder a ten-year-old girl for your own pleasure? I think we can be dogmatic and say that yes, this is unequivocally wrong. Shall we say to ourselves, "maybe, but I might be wrong, and it might be morally acceptable"? This is about as productive as doubting the existence of the chair you are sitting on. It might be true that in some extraordinary universe, a situation could arise in which an act like this is the most ethical option. But until we find ourselves in that universe, we can off-handedly dismiss the act as wrong, without much, if any, further justification. (Perhaps "dogma" is the wrong word here - but it is something very similar).
We ought not to have our own "commandments", per se, but I think that we humanists ought to write out and agree on, explicitly, three or four strong (and minimal) guiding principles that we all hold to be true. Perhaps then we can begin to gain more respectability among the religious - they may not agree with our philosophy, but at least they'll know what sort of behaviors they can expect from a humanist.
Has anyone tried something like this before? Yes, to some extent. According to the Wikipedia article on "Secular Humanism",
Humanism rejects dogma, and imposes no creed upon its adherents except the International Humanist and Ethical Union's Minimum Statement on Humanism. All member organisations of the IHEU are required by bylaw 5.1 to accept the Minimum Statement on Humanism:
"Humanism is a democratic and ethical life stance, which affirms that human beings have the right and responsibility to give meaning and shape to their own lives. It stands for the building of a more humane society through an ethic based on human and other natural values in the spirit of reason and free inquiry through human capabilities. It is not theistic, and it does not accept supernatural views of reality."
The IHEU has also devised some broad statements about the humanist outlook:
  • Need to test beliefs – A conviction that dogmas, ideologies and traditions, whether religious, political or social, must be weighed and tested by each individual and not simply accepted by faith.
  • Reason, evidence, scientific method – A commitment to the use of critical reason, factual evidence and scientific methods of inquiry in seeking solutions to human problems and answers to important human questions.
  • Fulfillment, growth, creativity – A primary concern with fulfillment, growth and creativity for both the individual and humankind in general.
  • Search for truth – A constant search for objective truth, with the understanding that new knowledge and experience constantly alter our imperfect perception of it.
  • This life – A concern for this life (as opposed to an afterlife) and a commitment to making it meaningful through better understanding of ourselves, our history, our intellectual and artistic achievements, and the outlooks of those who differ from us.
  • Ethics – A search for viable individual, social and political principles of ethical conduct, judging them on their ability to enhance human well-being and individual responsibility.
  • Building a better world – A conviction that with reason, an open exchange of ideas, good will, and tolerance, progress can be made in building a better world for ourselves and our children.
All well and good, but this still seems rather vague, unparsimonious, and not widely known. I've drafted my own "minimum statement on humanism", consisting of three propositions (subject to peer review), which I have tried to make as concise and precise as possible. We ought to wear them on our sleeves, almost like Christians proclaim the ten commandments, instead of leaving them neglected in some Wikipedia article.
1. The Ethical Imperative. A desire for maximal human well-being is the foundation of morality. Murder, torture, rape, theft, lying, fraud, and other similar deeds are simply wrong, in and of themselves, because in essentially all cases they reduce overall human well-being. A good deed done for fear of consequences is merely a rational expedient, but a good deed done because of love and empathy for the other is a righteous act. A moral system which depends on faith and punishment is bankrupt. We shall be good to others because we care about others.
2. The Justificatory Imperative. Any new proposition shall be judged in accordance with the amount and credibility of evidence that supports it. We shall not be quick to adopt positions of great consequence. The scientific method and peer-review shall be thoroughly applied wherever they can be applied. And we must not let our beliefs be set in stone - we must tread humbly down the path, continually questioning ourselves, "Is this right? Is this true? Why?" By this method, those beliefs for which there is overwhelming evidence will quickly pass this test; others, we may find, must be discarded.
The above two principles should be fairly uncontroversial. It's the third principle that separates the humanists from the boys.
3. The Liberal Imperative. We shall not impose our beliefs on others by force, except as an absolute last resort when under threat. Anyone is allowed to openly criticize our position. Personal liberty under law shall be maximized, so long as one person's liberty does not (noticeably) interfere with someone else's. Moreover, we must agree on a certain core of social issues. We must agree that the government should not be a dictator of the social contracts and private activities of consenting adults, that it should not dictate the sorts of substances a person chooses to put in their body (unless this will greatly harm others, which, in some cases, it might), that a woman has the right to privacy to make reproductive decisions, that any able-bodied and able-minded citizen shall be able to volunteer for military service (without discrimination on the basis of sexual orientation or religion), and that government shall not affiliate itself with any religion.
This is essentially a set of doctrines from secularist, liberal political philosophy. Why have I done this? It seems as if I've gone too far, adding too much dogma to an allegedly non-dogmatic belief system. Why make believing these things necessary in order to be a humanist, excluding so many people? And can't all of these just be reduced to a function of the first two principles? By principle (2), I must try to justify myself.
Principles (1) and (2) are still somewhat trivial - they are more methods of thinking, rather than results of thinking. They still don't convey exactly what a person believes. It is important to explicitly spell out a certain set of nontrivial but evidentially-well-established ground rules (more subject to future alteration, of course, than the first two principles). We must thoroughly define the ways that we, and government, should interact with other people, even and especially the people we dislike. Principle (3) carries a lot of weight, but this also means that when you meet a self-identified humanist on the street, you know immediately whether or not he agrees with you on a certain set of very important issues. You may not know what else he believes, but you will know that you can be comfortable around him.
Note that these three principles don't necessarily exclude theists from being humanists, as the IHEU does. I think this is a step in the right direction. Could a Christian be a humanist? I would say yes, potentially. (If that sounds too oxymoronic to you, then perhaps the above three principles can go under a different label - "humanist" was just my suggestion). I envision a day when people can say, "Hi, I'm a humanist and a Christian. You?" "Me? I'm a humanist and an atheist." The previous asymmetry of information between the two statements is gone, and an important gap is bridged, allowing those of vastly different religious belief to unite on earth under one goal.

What is “intelligence”? Might computers already qualify?

(Originally posted by Pat in 3/2012)

A review of Godel, Escher, Bach by Douglas Hofstadter

Like I Am A Strange Loop only more so, Godel, Escher, Bach is a very uneven work.

On the one hand, Hofstadter is a very brilliant man, and he makes connections between formal logic, artificial intelligence, cognitive science, and even genetics that are at once ground-breaking and (in hindsight) obviously correct. GEB makes you realize that it may not be a coincidence that DNA, Godel's theorems, and the Turing test were discovered in the same generation—indeed, it may not simply be that technology had reached a critical point, but rather that there is a fundamental unity between formal logic, computers, and self-replication, such that you will either understand them all or understand none of them.
On the other hand, GEB is filled with idiotic puns and wordplay that build on each other and get more and more grating as the book goes on (“strand” backwards becomes “DNA rapid-transit system”, etc.), and it often digresses into fuzzy-headed Zen mysticism (the two are combined when “MU-system monstrosity” becomes “MUMON”). Worst of all, between each chapter and the next there is a long, blathering dialogue between absurd, anachronistic characters that is apparently supposed to illuminate the topics of the next chapter, but in my experience only served to bore and frustrate. (Achilles is at one point kidnapped by a helicopter; that should give you a sense of how bizarre these dialogues become.) Hofstadter loves to draw diagrams, and while a few of them are genuinely helpful, most of them largely serve to fill space. He loves to talk about different levels of analysis, different scales of reduction (and so do I); but then in several of his diagrams he “illustrates” this by making larger words out of collections of smaller words. If he did this once, I could accept it; twice, I could forgive. But this happens at least five times over the course of the book, and by then it's simply annoying.

Much of what Hofstadter is getting at can be summarized in a little fable, one which has the rare feature among fables of being actually true.
There was a time, not so long ago, when it was argued that no machine could ever be alive, because life reproduces itself. Machines, it was said, could not do this, because in order to make a copy of yourself, you must contain a copy of yourself, which requires you to be larger than yourself. A mysterious elan vital was postulated to explain how life can get around this problem.
Yet in fact, life's solution was much simpler—and yet also much more profound. Compress the data. To copy a mouse, devise a system of instructions for assembling a mouse, and then store that inside the mouse—don't try to store a whole mouse! And indeed this system of instructions is what we call DNA. Once you realize this, making a self-replicating computer program is a trivial task. (Indeed, in UNIX bash I can write it in a single line: make an executable script called copyme that contains the one command cp copyme copyme$$, where $$ appends the ID of the current process, making each copy's name unique.) Making a self-replicating robot isn't much harder, given the appropriate resources. These days, hardly anyone believes in elan vital, and if we don't think that computers are literally “alive”, it's only because we've tightened the definition of “life” to limit it to evolved organics.
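The copyme one-liner can be made concrete as a short, runnable demo. This is my own sketch, not verbatim from the book: I use "$0" (the path of the running script) in place of the hard-coded name, so that each copy can in turn replicate itself, and "$$" (the current process ID) to keep the filenames unique.

```shell
#!/bin/sh
# Sketch of a self-replicating script, a variant of the book's one-liner.
# Create the script: "$0" expands to the path of whichever file is being
# executed, and "$$" to the PID of the process running it, so each run
# leaves behind a uniquely named, byte-identical copy of itself.
cat > ./copyme <<'EOF'
#!/bin/sh
cp "$0" "$0.$$"
EOF
chmod +x ./copyme

# Run it once: this leaves a copy named something like ./copyme.12345,
# which is itself executable and replicates in exactly the same way.
./copyme
ls ./copyme.*
```

Each copy contains the same two lines, so running any copy produces yet another copy; the data is "compressed" in precisely the sense described above: the file stores instructions for copying itself, not a copy of itself.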
Hofstadter also points out that we often tighten the definition of “intelligence” in a similar way. We used to think that any computer which could beat a competent chess player would have to be of human-level intelligence, but now that computers regularly beat us all at chess, we don't say that anymore. We used to say that computers could do arithmetic, but only a truly intelligent being could function as a mathematician; and then we invented automated theorem-proving. In this sense, we might have to admit that our computers are already intelligent, indeed for some purposes more intelligent than we are. To perform a 10-digit multiplication problem, I would never dream of using my own abilities; computers can do it a hundred times faster and be ten times as reliable. (For 2 digits, I might well do it in my head; but even then the computer is still a bit better.) Alternatively, we could insist that a robot be able to do everything a human can do, which is presumably only a matter of time.
Yet even then, it seems to me that there is still one critical piece missing, one thing that really is essential to what I mean by “consciousness” (whether it's included in “intelligence” is less clear; I'm not sure it even matters). This is what we call sentience, the capacity for first-person qualitative experiences of the world. Many people would say that computers will never have this capacity (e.g. Chalmers, Searle); but I wouldn't go so far as that. I think they very well might have this capacity one day—but I don't think they do yet, and I have no idea how to give it to them.
Yet, one thing troubles me: I also have no idea how to prove that they don't already have it. How do I know, really, that a webcam does not experience redness? How do I know that a microphone does not hear loudness? Certainly the webcam is capable of distinguishing red from green, no one disputes that. And clearly the microphone can distinguish different decibel levels. So what do I mean, really, when I say that the webcam doesn't see redness? What is it I think I can do that I think the webcam cannot?
Hofstadter continually speaks, in GEB and in Strange Loop, as if he is trying to uncover such deep mysteries—but then he always stops short and swaps the deep question for a simpler one. “How does a physical system achieve consciousness?” becomes “How does a program reference itself?”; this is surely an interesting question in its own right—but it's just not what we were asking. Of course a computer can attain “self-awareness”, if self-awareness means simply the ability to use first-person pronouns correctly and refer meaningfully to one's internal state—indeed, such abilities can be achieved with currently-existing software. And we could certainly make a computer that would speak as if it had qualia; we can write a program that responds to red light by printing out statements like “Behold the ineffable redness of red.” But does it really have qualia? Does it really experience red?
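A program that speaks as if it had qualia really is that trivial to write. Here is a minimal sketch (the function name and the non-red wording are my own invention) that makes the gap vivid, since nobody would suspect these few lines of experiencing anything:

```shell
#!/bin/sh
# A program that *talks about* redness without, presumably, seeing it.
# report_color produces a qualia-style report for "red" and a flat
# acknowledgement for anything else: pure behavior, no inner life required.
report_color() {
  if [ "$1" = "red" ]; then
    echo "Behold the ineffable redness of red."
  else
    echo "I detect $1, but feel nothing in particular."
  fi
}

report_color red
report_color green
```

The open question is whether any elaboration of this, however sophisticated, ever crosses over from reporting red to experiencing it.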
If you point out I haven't clearly defined what I mean by that, I don't disagree. But that's precisely the problem; if I knew what I was talking about, I would have a much easier time saying whether or not a computer is capable of it. Yet one thing is clear to me, and I think it should be clear to you; I'm not talking about nothing. There is this experience we have of the world, and it is of utmost importance; the fact that I can't put it into words really is so much the worse for words.
In fact, if you're in the Less Wrong frame of mind and you really insist upon dissolving questions into operationalizations, I can offer you one: Are computers moral agents? Can a piece of binary software be held morally responsible for its actions? Should we take the interests of computers into account when deciding whether an action is moral? Can we reward and punish computers for their behavior—and if we can, should we?
This latter question might be a little easier to answer, though we still don't have a very good answer, and even if we did, it doesn't quite capture everything I mean to ask in the Hard Problem. It does seem like we could make a robot that would respond to reward and punishment, would even emulate the behaviors and facial expressions of someone experiencing emotions like pride and guilt; but would it really feel pride and guilt? My first intuition is that it would not—but then my second intuition is that if my standards are that harsh, I can't really tell if other people really feel either. This in turn renormalizes into a third intuition: I simply don't know whether a robot programmed to simulate all the expressions of guilt would actually be feeling it. I don't know whether it's possible to make a software system that can emulate human behavior in detail without actually having sentient experiences.
These are the kinds of questions Hofstadter always veers away from at the last second, and it's for that reason that I find his work ultimately disappointing. I have gotten a better sense of what Godel's theorems are really about—and why, quite frankly, they aren't important. (The fact that we can say within a formal system X the sentence “this sentence is not a theorem of X” is really not much different from the fact that I myself cannot assert “It's raining but Patrick Julius doesn't know that” even though you can assert it and it might well be true.) I have even learned a little about the history of artificial intelligence—where it was before I was born, compared to where it is now and where it needs to go. But what I haven't learned from Hofstadter is what he promised to tell me—namely, how consciousness arises from the functioning of matter. It's rather like my favorite review of Dennett's Consciousness Explained: “It explained a lot of things, but consciousness wasn't one of them!”
Godel, Escher, Bach is an interesting book, one probably worth reading despite its unevenness. But one thing I can't quite figure out: why did this book, of all books, become a Pulitzer Prize-winning, bestselling magnum opus?
I guess that's just another mystery to solve.