Saturday, September 13, 2014

A Defense of Atheism Plus

Starting about a month ago, the atheist blogosphere became pretty wrapped up in a controversy over a video made by YouTube star Jaclyn Glenn. If you aren't aware of it, the normally pretty loopy Richard Carrier has done an excellent analysis here. In short, Glenn posted a video creating a strawman of the Atheism Plus movement, mocking them as divisive, irrational, and "pussies," as she puts it in the video's description. Gender politics issues like this one have given me concern about the atheist movement, a community I grew up in and am intimately involved with.

Friday, June 20, 2014

Matt Rogers Can't Math

Another fake climate skeptic has published a misleading article, disappointingly this time in the Washington Post.  Matt Rogers says that there has been a deceleration in surface warming.  To show this, he gives the first difference graph of both surface temperature data from NASA (GISTEMP) and NOAA (NCDC).

His claim is false.  The standard errors of the trends in the first difference graphs are each greater than their respective trends, meaning you can't say even at 1-sigma that the trends are significantly different from zero (and you most certainly may not make a claim at 2-sigma).  For NOAA's data, the trend and standard error are -0.0043 and 0.0055 respectively; for NASA's data, the trend and standard error are -0.0055 and 0.0068 respectively.
NCDC: -0.0043 ± 0.0110 (trend ± 2-sigma, units ˚C/year²)
GISTEMP: -0.0055 ± 0.0137

It is also unclear what data he is using for the GISTEMP dataset.  The 2001−2000 difference should not be as low as it is in his graph; the one above is correct.  It's interesting that even with that correction, the trend is still nowhere near significant.

Rogers also tries to shield himself from accusations of cherry picking by saying that he could have picked 1998 as a start year.  1998 was a very warm year, and Rogers thinks starting there would have amplified the trend line.  For a fake skeptic, it is par for the course to try to guard against accusations of cherry picking that dreaded year.

This claim is also false, and embarrassingly so.  If you start with a warm year like 1998 (as opposed to a cold year like 2000), and then do first differencing, you're going to start with a very low datapoint.  The result is not an amplified negative trend, but in fact a more positive trend.  Anyone who had actually graphed out the data would know that.  Anyone who can do basic math would know that, in fact.
NCDC: 0.0030 ± 0.0100
GISTEMP: 0.0023 ± 0.0126
These are also not statistically significant.  In fact, no first-difference trend is statistically significant for start years going back at least a couple of decades.
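To make the 1998-versus-2000 point concrete, here is a minimal sketch using synthetic data (a flat anomaly series with a warm spike in 1998; these are not the real NCDC/GISTEMP values): starting the first-difference series at the warm year begins it with one very low point, which pulls the fitted trend upward, not downward.

```python
import numpy as np

# Synthetic annual anomalies (NOT real data): flat at 0.4 C, warm spike in 1998.
years = np.arange(1996, 2014)
temps = np.where(years == 1998, 0.7, 0.4)

def first_diff_trend(start_year):
    """OLS trend of the year-over-year first differences, from start_year on."""
    sel = years >= start_year
    y, t = years[sel], temps[sel]
    diffs = np.diff(t)                      # first differences, ~C/year
    return np.polyfit(y[1:], diffs, 1)[0]   # slope, ~C/year^2

# Starting at the warm year yields the MORE positive first-difference trend:
print(first_diff_trend(1998) > first_diff_trend(2000))  # True
```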

Rogers doesn't know what he is talking about.  And this is a rather funny way of illustrating that fake skeptics in general don't know how to handle 1998.

[Edit: The extra attention this post seems to be getting has encouraged me to quickly add trend lines to these graphs.  Hopefully that helps make things a bit clearer!  I've also added indented "tables."  Thanks to Tom Di Liberto for the plug.]

Saturday, May 31, 2014

Philosophy of Mind, Part 1: What would a scientific theory of consciousness look like?

Recently, mathematician and physicist Max Tegmark proposed, in a highly technical paper, that consciousness can be thought of as a state of matter with particular informational properties. Although Tegmark has by no means solved the mystery of mental states, papers like this are a giant step in the right direction. At this point in human history, we know enough about the world to glimpse, if not the theory itself, at least the shape of a potential theory of consciousness. We can find our way down the path all the more easily if we have a sense of the destination.

Wednesday, May 28, 2014

Adiabatic Lapse Rate (Greenhouse v. Gravity, Part 1)

Take a box of gas*:
and crush it (increase pressure by a lot):
(*ideal) It's not really easy to show "pressure" visually, but two things happen to the box of air: it will shrink (that part is easy to show), and its internal temperature will increase.

The first effect may seem obvious, but why the second one?  In the particular action we took of crushing the box, we performed work on the box (so we did something that changed its internal energy) but did not add or remove heat from the contents.  And, if we assume that the box is a thermal insulator, then we know that the gas cannot respond to being compressed by radiating away energy.

Thermodynamically, a change of a box of air's internal energy is related to the amount of heat that is transferred into or out of it, and the work performed on the box to change its volume.

dU = δQ − δW    [1]
Where U is the internal energy of the box, Q is the heat that is added to the box, and W is a work term that is equal to PdV, such that an increase in volume is seen as a loss of internal energy (the box expends energy to push its walls out).  Conversely, pushing on the box to make it smaller is an addition to the internal energy.  Broadly speaking, if you do work on the box (shrink it) without letting it radiate, then its internal energy will increase; and internal energy is a function of the temperature of the gas.  This type of action on the box—this type of transformation—is called an adiabatic transformation.  "Adiabatic" means "without transfer of heat."
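As a quick numeric sketch (my numbers, not the post's): for an ideal diatomic gas like air, an adiabatic transformation keeps T·V^(γ−1) constant, with γ ≈ 1.4, so crushing room-temperature air to half its volume raises its temperature by roughly 90 K without any heat being added.

```python
# Adiabatic compression of an ideal diatomic gas: T * V**(gamma - 1) is constant.
gamma = 1.4          # heat capacity ratio c_p/c_v for a diatomic gas such as air
T1, V1 = 288.0, 1.0  # initial temperature (K) and volume (arbitrary units)
V2 = 0.5 * V1        # crush the box to half its volume

T2 = T1 * (V1 / V2) ** (gamma - 1)  # no heat exchanged, yet temperature rises
print(round(T2, 1))  # about 380.0 K
```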

Why does this matter?

Consider the atmosphere: due to the weight of the air above you, there is higher atmospheric pressure at the surface of the planet than there is, say, 10 kilometers above us.  If we are very sloppy in our treatment of what we just learned, we might conclude that the very fact that pressure at the surface is higher means that the temperature at the surface is higher.

And believe it or not, this is something that is used (again, sloppily and erroneously) by some to deny that the greenhouse effect causes warming on the surface of our, and frankly any, planet.  Because if gravity can cause pressure change and high pressure is associated with high temperature, then who needs a greenhouse effect, right?

The problem comes from a fundamental misunderstanding of what the above equation represents.  The above equation (and all of the ones I will soon include to derive the "adiabatic lapse rate" of the title) describes changes in the variables of a box, or parcel, of air that is undergoing a transformation.  It does not describe a static system, but the pressure gradient caused by gravity does describe a static system (it actually holds in a variety of dynamic systems as well).

The article linked above tries to make an argument that the temperature profile of the atmosphere with height, which we will call the environmental temperature lapse rate (the rate at which temperature lapses, or falls, with height), can be described by gravity alone.  How?  By assuming that because the adiabatic lapse rate, the rate at which a parcel of air will cool as it rises adiabatically (i.e. as it goes through a pressure change adiabatically), can be so described, the environmental lapse rate can be too.  But these are not the same thing!

To derive the adiabatic lapse rate (skip to equation [14] if you wish), consider our above thermodynamic equation, and plug in the work equivalence:

dU = δQ − P dV    [2]
Let's define a term that we'll call "enthalpy," H, as:

H = U + P V    [3]
so a small change in enthalpy is:

dH = dU + P dV + V dP    [4]

dH = (δQ − P dV) + P dV + V dP    [5]

dH = δQ + V dP    [6]
We can also define the heat capacity of our system as the amount of heat we need to add to (remove from) a system in order to increase (decrease) its temperature by a certain amount.  Typically, we would have to constrain certain parameters of our system in order to measure such a heat capacity, for instance the pressure of the system.  If our system is at constant pressure (dP = 0), and we define the constant-pressure heat capacity as below:

C_P = δQ/dT  (at constant pressure)    [7]
then it follows

dH = C_P dT    [8]
And so, in an adiabatic lift of a parcel of air, where heat exchange is zero (δQ = 0):

C_P dT = V dP    [9]
If we divide by the mass of the system, we can obtain the specific heat capacity and the specific volume, which are, respectively, the amount of heat needed to cause a temperature change per unit of mass, and the volume a unit of mass occupies (the inverse of the density).  These variables are written in lowercase, versus the uppercase above.  And finally, we can use the hydrostatic equation to finish our derivation of the adiabatic lapse rate:

c_p dT = v dP    [10]

dP = ρ g dz    [11]

c_p dT = v ρ g dz    [12]

c_p dT = g dz  (since v = 1/ρ)    [13]

dT/dz = g/c_p    [14]
Here, g is a negative quantity (which I find far more appropriate than treating it as a positive variable with a negative sign attached, as in the Wikipedia link above).  On Earth, this adiabatic lapse rate value is about –9.8˚C per kilometer of height.  In other words, for every kilometer you adiabatically raise a parcel of air, it will cool by 9.8˚C.
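A one-liner check of that number, using the standard specific heat of dry air at constant pressure (c_p ≈ 1004 J/(kg·K), a textbook value assumed here, not given in the post):

```python
g = -9.8       # m/s^2, kept negative per the sign convention above
c_p = 1004.0   # J/(kg K), specific heat of dry air at constant pressure

lapse_rate = g / c_p * 1000.0  # dT/dz = g/c_p, converted to K per km
print(round(lapse_rate, 2))    # about -9.76 K/km
```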

Pay close attention to how the equations still describe the parcel of air, within a framework of literally moving a "piece" of air through a medium that has reached a pressure equilibrium with gravity.  We do not know anything about the temperature distribution in this medium; I never had to reference it.  We only know (or at least presumed) that it is stable.

It is also completely worth pointing out that if this were indeed the environmental lapse rate here on Earth, then our environmental lapse rate should equal –9.8˚C/km, no?  But it does not: the environmental lapse rate is instead roughly –6.5˚C/km, up until you hit the tropopause.  This is not a simple "well, they're close, it's just an error between measurement and theory"; no, theory actually dictates that in our atmosphere the environmental lapse rate cannot be as negative as the adiabatic lapse rate.  While I will not go into that in particular in this part, allow me to show how a "shallow" environmental lapse rate is still completely compatible with a "steep" adiabatic lapse rate.


Equation [14] describes how the temperature of an air parcel will change when it rises to a particular height.  Consider what will happen once it does: at that height it will have the same pressure as the air around it, and one of three things will happen.

• The air parcel will wind up being colder than the surrounding air, which means it is denser, and thus will sink.  This is a condition where the environmental lapse rate is "shallower" (lower in magnitude) than the adiabatic lapse rate, a condition of stability where vertical motion is hampered.  A stable atmosphere will stay the way it is.
• The air parcel will reach the exact same temperature as the surrounding air, which means its density is equal, and thus it won't experience a force stopping its motion (but also not helping it).  The environmental lapse rate and the adiabatic lapse rate are equal, and this is a condition of neutrality.  Neutral atmospheres are "stable" in that when you move an air parcel adiabatically, you are not convecting heat from one location to another.  So, the environmental lapse rate will not change.
• The air parcel will be warmer than the surrounding air, which means it is less dense and will accelerate upward further.  The environmental lapse rate is "steeper" than the adiabatic lapse rate, and the atmosphere is unstable.  In an unstable atmosphere, these adiabatically rising air parcels are carrying hot air upward—this will lead to warming higher up, which makes the environmental lapse rate more "shallow".  It will work its way to a stable condition.
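The three cases above reduce to comparing the magnitudes of the two lapse rates; here is a small sketch of that comparison (the function and its names are mine):

```python
def stability(env_lapse_rate, adiabatic_lapse_rate=-9.8):
    """Classify an atmosphere by comparing lapse-rate magnitudes (deg C/km, both negative)."""
    if abs(env_lapse_rate) < abs(adiabatic_lapse_rate):
        return "stable"      # lifted parcel ends up colder and denser: it sinks back
    elif abs(env_lapse_rate) == abs(adiabatic_lapse_rate):
        return "neutral"     # parcel matches its surroundings: no restoring force
    else:
        return "unstable"    # parcel ends up warmer and less dense: it keeps rising

print(stability(-6.5))  # Earth's average environmental lapse rate is "shallow": stable
```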

This implies an important point: an atmosphere with a very shallow environmental lapse rate is stable and can coexist with a steeper adiabatic lapse rate.  In fact, an atmosphere that has no greenhouse gases, or in other words no gases that can interact with thermal radiation, will be isothermal, with no environmental lapse rate at all.  This is again something I will not explain in this part.

The next statement necessarily follows: the fact that the pressure is higher at the surface does not dictate that the temperature will be higher at the surface.  (You need to have a radiatively-interactive atmosphere, one with greenhouse gases, in order to have temperatures higher at the surface.)

If you're still not convinced, allow me to derive the temperature-dependent pressure profile of the atmosphere.  In other words, the pressure at a given height, given the temperature at that height.  That temperature will depend on the environmental lapse rate.

Starting with the hydrostatic equation, and soon using the ideal gas law:

dP/dz = ρ g    [15]

P = ρ R T    [16]

dP/P = (g/(R T)) dz    [17]

∫ dP′/P′ = (g/R) ∫ dz′/T(z′)  (integrating from the surface, where P = P₀ and z = 0)    [18]

ln(P/P₀) = (g/R) ∫ dz′/T(z′)    [19]

ln(P/P₀) = (g/(R τ)) ln((T₀ + τ z)/T₀)    [20]

P = P₀ (1 + τ z/T₀)^(g/(R τ))    [21]
In these equations, in particular our final one, tau is our environmental lapse rate (see the substitution from equation [19] to equation [20]), and variables that have zero subscripts denote values at the surface of the planet (or any surface, so long as that surface remains the same in the problem).

The real question: if we give our tau variable a value very, very close to zero, does that make our pressure profile very wonky?  In particular, does it imply that our pressure profile won't be "high at surface, low up above"?  The graph below shows that the answer is no.  In fact, the pressure profile corresponding to a near-zero environmental lapse rate (–0.01˚C/km) is very close to the pressure profile corresponding to an environmental lapse rate near our adiabatic lapse rate (–9.8˚C/km).  For this graph, the surface temperature and pressure in each scenario are the same: 14˚C and 100,000 Pa respectively.
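Here is a sketch of that comparison, using the profile the derivation above arrives at, P = P₀(1 + τz/T₀)^(g/(Rτ)), with g negative and the dry-air gas constant R = 287 J/(kg·K) assumed:

```python
def pressure(z, tau, T0=287.15, P0=1.0e5, g=-9.8, R=287.0):
    """Pressure (Pa) at height z (m) for a linear temperature profile T = T0 + tau*z."""
    return P0 * (1.0 + tau * z / T0) ** (g / (R * tau))

# Near-isothermal vs. near-adiabatic environmental lapse rates (deg C per meter):
shallow = pressure(10_000, -0.01e-3)  # -0.01 C/km
steep   = pressure(10_000, -9.8e-3)   # -9.8 C/km

# Both profiles are "high at surface, low up above":
print(round(shallow), round(steep))  # roughly 30000 Pa and 23000 Pa at 10 km
```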


So not only does the adiabatic lapse rate fail to describe a static system (it instead describes the change an individual air parcel experiences when you move it up or down), but the suggestion that a pressure gradient must cause a temperature gradient is unfounded as well, and has many mathematical counterexamples.  In the next post, I will offer up a couple theoretical examples to direct how we should think about energy transfer in a simple atmosphere, and why radiative interaction (i.e. greenhouse gases) is needed for the convection that drives our actual environmental lapse rate.  I'll also briefly discuss some of the published science on many of these scenarios.

Thursday, May 22, 2014

Cook et al 2013

A couple years ago I played a small part in collecting emails of researchers who had published climate change-related papers, papers that would end up being the object of study in the since-published Cook et al 2013 "consensus project" paper, which has gained a fair amount of due popularity.  Since then I have had very little time to devote to the Skeptical Science author team (especially compared to how involved I was during my last stint, as merely a freshman), but from time to time I like to dust off my old coat and wear it around for a couple days while I explore some other climate-related subject.

The Cook et al paper's conclusion of a ~97% consensus in the scientific literature on the topic of anthropogenic global warming (AGW; that is, the question of whether humans are causing climate change) is important in its own right (very important, in particular, toward debunking the "there is no consensus" myth).  I'd like to comment on some of the finer details of the data collected, in particular comparisons between the ratings that the "citizen science" community at Skeptical Science gave each paper based on the wording of its abstract, and the ratings that the papers' own authors gave.

Thursday, April 17, 2014

Ask-an-Atheist Day: Slightly pointless in Ann Arbor, but no regrets!

Today was a beautiful, sunny day - perfect weather for Ask-an-Atheist Day despite the intermittent and slightly annoying 2 mph winds.

We had a number of long discussions, and also a handful of people who needed directions to buildings (a lot of campus day groups). Almost all the non-lost people who stopped by our table were fellow atheists. I guess that's what happens when you live in Ann Arbor!

I know we probably didn't achieve our original goal of "working together to defeat stereotypes about atheism and encourage courteous dialogue between believers and nonbelievers alike" because there is already a large population of atheists on campus, BUT it was a fun day nonetheless and the weather was gorgeous. Some of us even went and got ice cream. (:

Also, check out my new ink.
Best part of tabling is tabling supplies. ;)

Monday, March 24, 2014

On Subtraction

Perchance you've by now stumbled upon a new method of subtraction that is allegedly being taught in elementary school, as part of the new "Common Core" recommendations:

For anyone used to the "borrow one" method (the "old fashioned" way above), the "new" way might seem confusing at first.  Subtraction through addition?

This is not an entirely scandalous idea, if one is somewhat familiar with how, for instance, computers perform subtraction (using the method of complements).  This particular method (our "Common Core" method, though I won't say it's officially named such, because I've yet to see a good link to a recent textbook that teaches it) works by counting upward from the lower number (the "subtrahend") to the higher number (the "minuend"), in easy-to-grasp intervals.  We add to get to a multiple of 5 because we like working in 5s, we add to get to a multiple of 10 for the same reason, and we keep adding until we reach our destination.  All of those numbers we added will get us to the final answer; their sum is the difference between the minuend and the subtrahend, our answer.

This has the benefit of not actually having to teach subtraction.  Subtraction is, after all, just a different way of adding.  It does seem somewhat longer, though: in the particular example above, the borrow one method completes the task in 2 simple steps.  The Common Core method takes around 7 (it could take 5 if we add directly to get to 10s instead of 5s).  Of course, there are counterexamples that are more efficient the latter way: 100-99 would take a couple "borrows" to complete, while the new method would take 1.

The new method is not particularly intriguing or difficult.  But, some, like Hemant Mehta, contend that it is actually easier.  I disagree.

The basic concept of subtraction is not extremely difficult to grasp: Joe has 11 apples; if he eats 3, he has (11 • 10 • 9 • 8) 8 apples left.  If you can count up, you can count down.

The contesting difficulties in these methods are what it means to "borrow", and the number of steps it takes to solve a given subtraction problem with each algorithm.  We've given the new Common Core method its turn at being explained and justified; we can do what Hemant unfortunately did not do, and explain why the old borrow one method makes sense.  And we don't need to make it very difficult, either.

Our number system is base-10: we have single characters to represent each number up until we reach ten, and then we start forming larger groups, each of size ten.  The number of those groups, we can count using our original numbers.

If I have, say, twenty-three objects, I can represent them by themselves as twenty-three objects:

• • • • • • • • • • • • • • • • • • • • • • •

and maybe I make up a symbol for twenty-three, maybe Œ.  Or, what I can do is group them:

(• • • • • • • • • •) (• • • • • • • • • •) • • •

and so I have 2 groups of ten, 1 group of 3, and I represent that by an ordered pair of numbers: 23.  23 is the same as 2 of ten, and 3.

This concept will have to be learned at some point in school, and it is not a very difficult concept to grasp.  It is, in fact, adding.  And if I was to say that I could break apart one of my groups into individual pieces,

(• • • • • • • • • •) • • • • • • • • • • • • •

we'd all know what I was saying.

So how does the borrow one method work?  Well, you start taking away like-groups from like-groups, with 1s first.  If you have 8 ones and want to remove 3 of them, easy!  Then you move on.  But if you have 3 ones and want to remove 8, what do you do?  Well, again each larger "group" is merely a collection of a certain number of smaller groups.  Break one apart, like we did to the 2 groups of ten: we now have 3 and ten objects, more than 8, so we subtract.  And then we move on to the next group size, remembering that we just broke one apart.

The act of breaking a group apart is borrowing.  Does everyone think that they would be able to explain this now?

Again, this is not particularly difficult.  This is how kids have been taught how to subtract for a long time; that does not make it the best way of course, but has anyone really had a problem with it?  Has any math teacher been unable to explain the method?

Whether or not simple subtraction is easier than simple addition, we can all appreciate that a method that takes fewer steps to implement has an appeal to it.  Now, the example given in Hemant's blog of 3000-2999 is quite damning toward the borrow one method, isn't it?  You have to borrow several times, keep track of extra numbers, add to your groups, and so on.

The thing about picking out individual examples because you like them (as you would pick, if you will, cherries) is that you don't really get a good idea of what the general case is.  Is the new Common Core method always better?  Has anyone taken the time to answer that question?

Fear not, I have!  In fact I've also taken the time to figure out how many different calculations you would need to complete subtraction on two numbers using the borrow one method, and the method of complements to boot.  I've completed this work in Excel (I found it more useful than, say, Matlab or R in keeping visual track of the steps of the algorithms, and the conditionals).  My graphs below show the results.

To briefly explain how each algorithm is carried out:

Borrow One: you subtract the ones-digit of the subtrahend from the ones-digit of the minuend.  If the subtrahend's digit is the smaller one, easy-peasy, one step; if it is larger, you (1) subtract one from the next digit of the minuend, (2) add 10 to the current digit, and (3) subtract.  You then move on.  Trivial subtraction is counted as well (so, it takes a step to subtract 0 from 6, or 5 from 5).
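That step count can be sketched in code (my implementation of the rule as stated: one step per digit, plus two extra whenever a borrow is needed; note it counts only up to the longer number's digits, with no padding to a fixed width):

```python
def borrow_one_steps(minuend, subtrahend):
    """Count steps for the borrow one method; trivial subtractions count too."""
    steps, borrow = 0, 0
    while minuend or subtrahend:
        top = minuend % 10 - borrow
        bottom = subtrahend % 10
        if top < bottom:
            steps += 3   # borrow from the next digit, add ten, then subtract
            borrow = 1
        else:
            steps += 1   # a single straightforward subtraction
            borrow = 0
        minuend //= 10
        subtrahend //= 10
    return steps

print(borrow_one_steps(100, 99))  # two borrows plus one plain step: 7
```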

Common Core: you add to the right-most non-zero digit to increase the next digit by one; you do this until the digits to the left of your current digit in the subtrahend match those in the minuend.  Then you add going to the right, to increase to the minuend's digits.  Then you add all of those numbers you added (if you added n numbers, you perform n–1 more additions).  An example, 84680–59391:

59391 (then add 9)
59400 (then add 600)
60000 (then add 20000)
80000 (then add 4000)
84000 (then add 600)
84600 (then add 80)

then add 9 + 600 + 20000 + 4000 + 600 + 80: 11 steps altogether.
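The procedure above can be sketched as code (my implementation, following the worked example: round up through successively larger powers of ten, then close the remaining gap digit by digit; the multiples-of-5 variant is skipped here):

```python
def counting_up_addends(minuend, subtrahend):
    """Return the numbers added when counting up from subtrahend to minuend."""
    addends, cur = [], subtrahend
    # Phase 1: round up to the next multiple of the power of ten just above
    # the right-most non-zero digit, while that stays at or below the minuend.
    while 0 < cur < minuend:
        k, t = 0, cur
        while t % 10 == 0:
            t //= 10
            k += 1
        step = 10 ** (k + 1)
        nxt = (cur // step + 1) * step
        if nxt > minuend:
            break
        addends.append(nxt - cur)
        cur = nxt
    # Phase 2: close the remaining gap from its most significant digit down.
    rem = minuend - cur
    place = 10 ** (len(str(rem)) - 1) if rem else 1
    while rem:
        chunk = rem // place * place
        if chunk:
            addends.append(chunk)
        rem -= chunk
        place //= 10
    return addends

print(counting_up_addends(84680, 59391))  # [9, 600, 20000, 4000, 600, 80]
```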

Method of Complements: you first find the 9s-complement of your subtrahend (if your subtrahend XXXX has 4 digits, the 9s-complement is 9999–XXXX), which is 4 steps here (again, trivial subtraction is counted; this also technically uses the borrow one method, but you never have to borrow since you're starting with 9 each time).  Then you add that complement to your minuend (here, 4 steps more, or 5 since you'll have to carry), then knock off the leading 1 (the same as subtracting 10000 here; 1 step), then add one to the result.  To demonstrate generally why this works, define your complement C, subtrahend S, minuend M, and difference D:

C = 9999 – S
S = 9999 – C

D = M – S
D = M – 9999 + C
D = M + C + (1 – 10000)
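The derivation translates directly into code (a sketch; the `digits` parameter is mine and fixes the working width, four here to match the example):

```python
def subtract_by_complements(minuend, subtrahend, digits=4):
    """Subtract via the nines' complement: D = M + C + 1 - 10**digits."""
    nines = 10 ** digits - 1
    complement = nines - subtrahend   # never needs a borrow: every digit starts at 9
    return minuend + complement + 1 - 10 ** digits

print(subtract_by_complements(3000, 2999))  # 1
```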

So which is more efficient?  If we're working with 5-digit numbers (leading zeros count), and draw a sufficiently large sample of pairings (10000 here), subtracting the smaller of each pair from the larger, then we can see that the borrow one method is much more efficient in general:
The Common Core method has a wider spread, with a very few pairings needing only 1 step; the minimum for the borrow one method is 5.  At the same time, the borrow one method has an average step count of ~9, whereas the average step count for the Common Core method is about 14-15, with a heavy skew toward higher numbers (15/17).  It is much less efficient.

But these kids are only barely learning subtraction, of course!  Let's keep it to numbers with only 2 or fewer digits.  This is a list we can fully exhaust; there are only 5050 unique pairings.  The results change somewhat: the method of complements is now unequivocally the least efficient:
Still, the new method is inefficient, though it does have advantages for the simplest problems (the ones where it's actually kind of silly to think in terms of "algorithms" anyway).

This method isn't wrong by any means; I personally find it to be rather ad hoc (why, for instance, the movement to multiples of 5, and then multiples of 10?) and lengthy, and not really that much easier to grasp than the idea of simple subtraction or powers of ten (which students will have to learn anyway if they want a good understanding of decimals, non-linear equations, and so on).  And I hope that I've helped to explain why.

I find Hemant's commentary funny:
"But none of that matters to the people who would rather complain about the "new" math without taking a second to understand what they're even looking at."
I guess I agree with that, yes.