Friday, January 29, 2010

limits and turf

Sometimes we computer science folk get into jurisdictional disputes with our neighbours in mathematics (partly because our neighbours used to be us, depending on how you reckon the heritage of computer science).

A standard dispute is over whether zero is a natural number. From a computer science perspective it seems pretty innocuous: zero is often a useful case to consider in the realm of whole numbers of a non-negative flavour, and it's certainly possible to count things starting from zero (and we do, in many programming languages, with list indices and such). In calculus, though, they insist on excluding this completely innocuous (not to mention small) element from the natural numbers.

Now I find myself a little impatient with the way at least one calculus text presents limits. I think of finite limits (what value does a function "get close" to as its argument approaches some constant) and infinite limits (what does it mean for a function to "get close" to infinity as its argument gets close to some value) as part of the same topic. After finding that these two topics seemed hugely different to my fall semester students, I began checking my calculus texts. Courant and Spivak get around to limits of sequences, and limits that are unbounded, toward the end of their treatment of limits. Most of my students use Salas, Hille, and Etgen, where finite limits are treated in the first hundred pages, and the remaining limit-related topics are spread over hundreds more pages (and months of the course).

My motive for discussing limits is twofold. They provide a good example of mixing statements involving "for all" with statements involving "exists". They are also an ideal starting point for the computer science topic of asymptotics: how do you express the idea that one function grows qualitatively faster than another?

The mixed-quantifier value of limits takes my students back to calculus. For every positive real number epsilon, there is a positive real number delta such that, for every real number x, if x is within delta of a, then x-squared is within epsilon of a-squared. A true statement, since you get to choose the delta based on the epsilon. However, if you swap "for every positive real number epsilon" with "there is a positive real number delta" you get a falsehood: no single delta works for every epsilon.
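A quick sketch of the dependence of delta on epsilon, for f(x) = x-squared at a. The names (delta_for, spot_check) are mine, and the spot check is random sampling, not a proof; the delta formula comes from the standard bound |x² − a²| = |x − a| · |x + a|.

```python
import random

def delta_for(a, epsilon):
    # One workable choice: if |x - a| < delta <= 1, then
    # |x + a| <= |x - a| + 2|a| < 2|a| + 1, so
    # |x^2 - a^2| = |x - a| * |x + a| < delta * (2|a| + 1) <= epsilon.
    return min(1.0, epsilon / (2 * abs(a) + 1))

def spot_check(a, epsilon, trials=10_000):
    # Sample x values within delta of a and confirm x-squared lands
    # within epsilon of a-squared.  Evidence, not proof.
    delta = delta_for(a, epsilon)
    return all(
        abs(x * x - a * a) < epsilon
        for x in (a + random.uniform(-delta, delta) for _ in range(trials))
    )
```

Notice that delta_for takes epsilon as an argument: that is exactly the "for every epsilon, there exists a delta" quantifier order. Reversing the order would demand one delta chosen before seeing epsilon, and no such delta survives every epsilon.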

The same reasoning, with appropriate modifications, works for infinite limits. For every positive real number epsilon, there is a positive real number delta such that, for every real number x, if x is within delta of infinity, then x-squared is within epsilon of infinity. What does "within delta" mean in this context? It means more than delta to the right of zero --- being close to infinity is synonymous with being far from zero.
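The same sketch adapts to the infinite case; here "within epsilon of infinity" means "bigger than epsilon", and the delta that works is just the square root (function names are again my own):

```python
import math, random

def delta_for_infinite(epsilon):
    # To push x-squared past epsilon, push x past sqrt(epsilon):
    # x > sqrt(epsilon) implies x^2 > epsilon.
    return math.sqrt(epsilon)

def spot_check_infinite(epsilon, trials=10_000):
    # Sample x values beyond delta and confirm x-squared exceeds epsilon.
    delta = delta_for_infinite(epsilon)
    return all(
        x * x > epsilon
        for x in (delta + random.uniform(0.001, 100.0) for _ in range(trials))
    )
```

Structurally nothing changed: delta is still a function of epsilon, and "within delta of" just acquired a new meaning.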

Computer scientists want to talk about how exp(x) grows faster than x-squared. One slick way to do this (so slick that we don't allow our students to use it for a few weeks) is to consider the ratio exp(x)/x-squared. If, for every positive real number epsilon, there is a positive real number delta such that, for every real number x, when x is within delta of infinity (bigger than delta), exp(x)/x-squared is within epsilon of infinity (bigger than epsilon), then exp(x) grows faster than x-squared (it does).
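A small sketch of hunting for that delta numerically (the name witness is mine, and the search relies on the calculus fact that exp(x)/x² is increasing for x > 2, so one crossing point serves as a delta for everything beyond it):

```python
import math

def ratio(x):
    return math.exp(x) / (x * x)

def witness(epsilon):
    # The derivative of exp(x)/x^2 is exp(x) * (x - 2) / x^3, which is
    # positive for x > 2, so the ratio is increasing there.  Step forward
    # until the ratio clears epsilon; every larger x clears it too, so
    # the x we stop at works as a delta.
    x = 3.0
    while ratio(x) <= epsilon:
        x += 1.0
    return x
```

For x-squared over exp(x) the analogous search would never terminate, which is the numerical shadow of the limit being zero rather than infinity.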

To us CS folk, the same notion of limit is at work in all these cases. It's a bit odd that our students' calculus texts keep them so far apart.
