Thursday, January 31, 2013

Taylor Polynomials

When attempting to solve certain problems, it is often useful to approximate a function as a polynomial. Polynomials are generally easier to deal with in calculus problems than other functions, and many functions can be approximated as polynomials to arbitrary precision.

The general method for producing a polynomial approximation of a function is as follows. First, choose the point that you want your approximation to be centered around. (When you’re solving a problem, you should expand your approximation around a point within the region that you’re most interested in, because Taylor Polynomial approximations are most accurate near their center, but for simplicity’s sake I’ll just pick x = 0 as the center for now, and give a more general formula later.) Once you've chosen the point that you want to expand about (in our case, x = 0), you can start writing down polynomial approximations. The simplest (and least accurate) polynomial we could use to approximate a function would just be a constant, equal to the value of the function at x = 0 (i.e. P₀(x) = f(0), where "P" is a polynomial function and the subscript indicates the order of the polynomial).
Zero-Order Taylor Polynomial (magenta) for f(x) (blue)
As you can probably tell, a zero-order polynomial usually won't be sufficient to provide a useful approximation of a function. It is, of course, exactly equal to the function at x = 0, but it quickly diverges from the function as we move away from x = 0, since the function is changing its value while our approximation is not. We can do better by using a polynomial that not only has the same value as the function at x = 0, but also changes at the same rate (i.e. has the same value of its first derivative). We do this by setting P'(x) = f'(0) and then integrating:
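Writing P₁ for this first-order polynomial, the integration step gives:

\[
P_1'(x) = f'(0)^{\,*} \quad\Longrightarrow\quad P_1(x) = f(0) + f'(0)\,x
\]

where the constant of integration is fixed by requiring P₁(0) = f(0).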
*Remember that the derivative of f evaluated at a single point (x = 0) is a constant, not a function of x.
And our approximation gets a whole lot better:
First-Order Taylor Polynomial (magenta) for f(x) (blue)
Of course, f(x) isn't linear, so our first-order approximation still diverges from the function eventually. If we only need to consider small values of x, this approximation may be sufficient, but if the situation calls for accuracy at larger values of x, we need to go deeper...
To the second dream lev- ...err, derivative.
The linear approximation eventually diverges from f because, while P₁ has a constant slope, the slope of f is changing. To get a better approximation, we need a polynomial that not only has the same value and slope as f at x = 0, but also changes its slope at the same rate as f(x) (i.e. P₂''(x) = f''(0)).
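Setting P₂''(x) = f''(0) and integrating twice gives:

\[
P_2(x) = f(0) + f'(0)\,x + \frac{f''(0)}{2}\,x^2
\]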
As before, we find the constants of integration by setting the nth derivative of P equal to the nth derivative of f at x = 0.

Second-Order Taylor Polynomial (magenta) for f(x) (blue)
For increasingly better approximations, we repeat this process of setting the nth derivative of P equal to the nth derivative of f at x = 0 and then integrating, for ever-increasing values of n, so that:
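In summation notation (with f⁽ᵏ⁾ denoting the kth derivative):

\[
P_n(x) = f(0) + f'(0)\,x + \frac{f''(0)}{2!}\,x^2 + \cdots + \frac{f^{(n)}(0)}{n!}\,x^n = \sum_{k=0}^{n}\frac{f^{(k)}(0)}{k!}\,x^k
\]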
In fact, if we let n go to infinity, and if f(x) is an analytic function, then the Taylor Series is exactly equivalent to the function (within the series' radius of convergence):
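Taking that limit term by term:

\[
f(x) = \sum_{k=0}^{\infty}\frac{f^{(k)}(0)}{k!}\,x^k = f(0) + f'(0)\,x + \frac{f''(0)}{2!}\,x^2 + \frac{f'''(0)}{3!}\,x^3 + \cdots
\]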
Earlier I promised I'd give a generalized formula to allow for the Taylor Polynomial to be centered around any point. To do this, just replace "0" with "c," and "x" (= x - 0) with "x - c":
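With the center moved from 0 to c, the same series reads:

\[
f(x) = \sum_{k=0}^{\infty}\frac{f^{(k)}(c)}{k!}\,(x-c)^k
\]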
Obviously, you're never actually going to calculate an infinite number of terms, but luckily in many cases you can get a pretty good approximation of a function with just a few terms.


Now, you might be wondering when you'd ever want to approximate a function as a polynomial. It turns out that in certain problems, either the function that you're working with isn't explicitly known, or it's just more difficult to work with than a polynomial. For example, if you're only dealing with small angles, it'll often make it easier on you to approximate sin x as just x, and cos x as 1 - x²/2 (it's this small-angle approximation that allows us to describe the motion of a pendulum bob as simple-harmonic, as long as the maximum displacement angle is small).
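Concretely, keeping only the first term or two of each series:

\[
\sin x = x - \frac{x^3}{3!} + \cdots \approx x, \qquad \cos x = 1 - \frac{x^2}{2!} + \cdots \approx 1 - \frac{x^2}{2} \qquad (|x| \ll 1)
\]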

Tuesday, January 15, 2013

Combinations, and the issue of repetition

(note - I'll try my best to keep the explanations easy to understand and will provide visuals where needed, but this post will end up being pretty technical nonetheless.  I also divorce myself from any responsibility for resulting boredom on your part from here on out.)

A combination is a set of objects that can be identified by its constituents.  An easy way to think about combinations is simply by thinking of words: the word "PARALLEL" is a combination of letters, those letters being 1 P, 2 As, 1 R, 3 Ls, and 1 E.  Any anagram of "PARALLEL" (such as "ALL PEARL") still contains those exact letters and numbers of those respective letters; it's a different order – a different permutation – but not a different combination, where order does not matter.

In the Intro to Probability course I'm taking right now, we learned how to calculate the number of combinations of size k that you can make from n objects, when all of those objects are distinct (such as 7-letter 'words' from the letters A, B, C, D, E, F, G, H, I, J, and K).  To do this we first need to know how many permutations we can form: how many different orderings of our choices can be made?  That way we can then divide out all of the choices that are just rearrangements of the same letters.

In our example we have 11 letters, and we want to choose 7 of them.  I have 11 options to start, for the first letter in my word.  Once I make that choice, I now have 10 letters left to choose from; I would then have 9, then 8, etc.  And then I'm all out of letter space in my word; my total number of possible outcomes – of possible permutations – is equal to 11*10*9*8*7*6*5.  The general equation for permutations is n factorial (n! = n*(n–1)*(n–2)*...*2*1) divided by (n – k) factorial:
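For our example, n = 11 and k = 7:

\[
\frac{n!}{(n-k)!} = \frac{11!}{(11-7)!} = \frac{11!}{4!} = 11 \cdot 10 \cdot 9 \cdot 8 \cdot 7 \cdot 6 \cdot 5 = 1{,}663{,}200
\]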


To find the number of combinations, we divide out the repeats: the number of different orders in which each set of 7 letters can be arranged.  This is equal to 7 factorial (k!), and so more generally:
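Dividing the permutation count by the k! orderings of each chosen set gives the familiar binomial coefficient:

\[
\binom{n}{k} = \frac{n!}{k!\,(n-k)!}, \qquad \binom{11}{7} = \frac{11!}{7!\,4!} = \frac{1{,}663{,}200}{5040} = 330
\]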


This is an easy equation to use in the case where the n objects we're choosing from are all distinct. However, it becomes much more complicated when there is repetition in the objects.
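If you want to sanity-check those numbers, here's a quick sketch in Python (math.perm and math.comb need Python 3.8 or later):

    from math import comb, factorial, perm

    n, k = 11, 7  # 7-letter 'words' from 11 distinct letters

    # Permutations: ordered choices of k letters out of n
    permutations = factorial(n) // factorial(n - k)   # 11*10*9*8*7*6*5
    print(permutations, perm(n, k))                   # 1663200 1663200

    # Combinations: divide out the k! orderings of each chosen set
    combinations = permutations // factorial(k)
    print(combinations, comb(n, k))                   # 330 330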

Thursday, January 3, 2013

Perihelion

A little over a day ago, the Earth passed the point in its orbit closest to the Sun, called perihelion.  This means that the Earth was getting more sunlight than on any other day of the year (and to close approximation, still is).  This is a peculiar fact for us northern hemisphere denizens, since this is also our winter, and we don't quite get the most sunlight at all right now thank you very much.

No, if we wanted the most intense sunlight, we would want to go to our opposing Tropic, the Tropic of Capricorn, at 23.5° S latitude.  There, solar insolation (power per square meter) at the top of the atmosphere (TOA) reaches 1413 W/m^2 at its peak during the day, the highest of any latitude on Earth.  This is reduced somewhat as the light goes through our atmosphere and is reflected by clouds and particulates, but otherwise Antofagasta in Chile, São Paulo (São Paulo) in Brazil, Rockhampton (Queensland) in Australia, Polokwane in South Africa – these are our tanning destinations!
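As a rough check: assuming a solar constant of about 1366 W/m^2 and a perihelion distance of about 0.983 AU (numbers consistent with the peak figure above), the inverse-square law gives

\[
S_{\text{peri}} \approx S_0\left(\frac{1\ \text{AU}}{0.983\ \text{AU}}\right)^{2} \approx 1366 \times 1.035 \approx 1413\ \text{W/m}^2
\]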

Now I'm not much of a tanning person myself (for several reasons, one being more genetically-oriented), but I'm as big a fan of free energy as anyone else.  If I wanted a winter vacation home and cared only about how much energy I could produce using solar panels, where should I go?  While peak insolation is highest at the Southern Tropic at this time of year, total daily insolation is not.  If you thought that perihelion during the northern winter was peculiar, you probably don't know this: to gather the most sunlight I need to go to the South Pole:


The above graph shows average 24-hour TOA insolation at different solar declinations ("delta"), for all latitudes, and is corrected for changing Earth-Sun distance throughout the year (perihelion and aphelion are roughly concomitant with the solstices, off by ~11 days).

The main reason that solar insolation increases toward the poles, after you hit the Polar Circles, is that there's no longer any nighttime.  The light that the poles get while the rest of the Earth is in shadow outweighs the difference in insolation during the day.
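A rough back-of-the-envelope comparison at the December solstice shows the size of the effect (assuming S₀ ≈ 1366 W/m^2 and δ ≈ 23.5°, and ignoring the small distance correction): at the South Pole the Sun circles the sky at a constant elevation equal to the declination, while the equator gets only a 12-hour day with the Sun peaking 23.5° from the zenith, so

\[
\bar{Q}_{\text{pole}} = S_0\sin\delta \approx 545\ \text{W/m}^2 \qquad \text{vs.} \qquad \bar{Q}_{\text{equator}} = \frac{S_0}{\pi}\cos\delta \approx 399\ \text{W/m}^2
\]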

A winter vacation home here would have the added bonus of increased efficiency too, since solar panels tend to work better in cold temperatures.  If, however, you're not the type that likes winter, if you're instead one of those "tanning" types that likes to have as much Sun as possible, then you'd be better off just living closer to the equator.  In fact, what this graph would imply is that you should live somewhere just south of the equator, around the 2–3° S latitudes.

In this respect, the above graph is actually wrong.  This is because it is a single-day snapshot, which implicitly treats summer and winter as equally long.  If we define the split between summer and winter as the days where the Earth-Sun line is perpendicular to the line through perihelion and aphelion (i.e. the two days in the year when the Earth receives the same insolation in both hemispheres), the northern hemisphere's winter is shorter than the southern hemisphere's.  The flip side of that coin is that our summer is longer – we have a longer summer while we're farther away, and the southern hemisphere has a shorter summer while it is closer.  This is because the Earth moves faster in its orbit while it is closer to the Sun, near perihelion, and slower while it's farther away.
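By Kepler's second law the Earth sweeps out equal areas in equal times, so its angular speed around the Sun scales as 1/r².  With perihelion at about 0.983 AU and aphelion at about 1.017 AU:

\[
\frac{\omega_{\text{peri}}}{\omega_{\text{aph}}} = \left(\frac{r_{\text{aph}}}{r_{\text{peri}}}\right)^{2} \approx \left(\frac{1.017}{0.983}\right)^{2} \approx 1.07
\]

so the Earth moves through its orbit roughly 7% faster at perihelion than at aphelion.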

The countering time and distance effects cancel each other out more or less completely.  If you wanted to bask in the most sun throughout the year, you'd want to live right on the equator.

Wednesday, January 2, 2013

Radiocarbon dating just got more accurate

Radiocarbon dating is useful for determining the ages of formations, objects, and events that are up to several tens of thousands of years old, but accurate dating can only be done in a time range where we have a record of atmospheric 14C levels.  That way, the measurements can be properly calibrated.
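More concretely: with a half-life of about 5,730 years, the age follows from the decay law (roughly, setting aside the details of calibration) as

\[
t = \frac{5730\ \text{yr}}{\ln 2}\,\ln\!\left(\frac{N_0}{N}\right)
\]

where N is the 14C/12C ratio measured in the sample and N₀ is the atmospheric ratio at the time the organism died – which is exactly what the calibration record supplies.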

Formerly, tree rings were best for this calibration: we know how old trees are because each year they grow a new ring.  However, tree rings only go back so far, a little over ten thousand years.  Now, Ramsey et al. (2012) in Science use sediments from Lake Suigetsu, Japan, to develop a longer record – organic sedimentation in the lake is seasonally distinguishable, with plant matter that drifts to the bottom of the lake being lighter colored in winter than in the summer.  This record goes back to almost 53,000 years B.P. (before present, which is 1950 by convention).

This isn't really that interesting from a new science perspective – it's not new science at all.  I find this particularly cool because I've spent the longest time arguing with young earth creationists, who have a deep loathing for radiometric dating, especially radiocarbon dating.  One of the common "arguments" against the method is, perplexingly, that we don't know how much 14C was in the atmosphere in the past (and so how can the calculations be accurate?).  But... of course we know!  That's what the tree rings are for.  And now we know how much 14C there was for a much longer period of time (middle graph; correct spelling is "Suigetsu," typo as appears in paper):

(Figure 3 from Ramsey et al. (2012), larger image here)

Climate science deniers are pretty bad, but creationists are, to steal Vizzini's words, "brainless, helpless, hopeless!"

This little "ha!" moment feels so good right now.