Calculus! #20: Early Transcendentals 2.4

This is the first section where you might get knocked on your ass by the math. But, here’s the thing with all math proofs – once you break them into discrete, logical steps, each step is simple. Once you understand each step, you can try to appreciate the whole.

This chapter begins with a rigorous definition of limits. What do I mean by “rigorous?” Well, usually when you refer to how rigorous an explanation is, you’re talking about where it lies on the spectrum between a rough sketch of an idea, and complete mathematical proof. So far, you’ve gotten the basic idea of a limit, but you don’t have a definition phrased in the precise symbols of mathematics. That is, you don’t yet have a rigorous definition.

So, let’s dig in.

The book starts with an example, but we’ve had plenty of those. I think you get the basic idea, so I’m going to talk directly about the proof here.

When thinking about a limit, you’re dealing with two values: (1) What value does x approach? (2) What does f(x) approach as x approaches that value? Another way to think about it would be this:

Say x approaches A and f(x) approaches B. If the distance between x and A is given by |x – A|, and the distance between f(x) and B is given by |f(x) – B|, how does the x distance change in relation to the f(x) distance?

So, let’s take the example of y=x^2 as x approaches 4. When x is 0.1 from 4 (i.e. when x=3.9), y=15.21. In this case, we already know the actual limit as x goes to 4 is 16. So, we know that when x is off by 0.1, y is off by 0.79. Let’s add a few more figures.

|4-3.99| = 0.01, |16-15.9201| = 0.0799

|4-3.999| = 0.001, |16-15.992001| = 0.007999

|4-3.9999| = 0.0001, |16-15.99920001| = 0.00079999

Since x and f(x) are related by the function f(x) = x^2, we know the two distances are related. And, we can see how.

So long as |4 – x| < 0.01, |16 – x^2| < 0.0801. (Why 0.0801? Because |16 – x^2| = |4 – x|·|4 + x|, and |4 + x| is less than 8.01 when x is that close to 4.) That’s essentially a restatement of the second line up there, and you could do the same for the other rows. And, in fact, it’s clear that as you keep making the x distance smaller, the f(x) distance will get smaller as well.

Follow?
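If you’d like to watch those numbers roll out yourself, here’s a quick sketch in Python (just the example function from above, nothing official from the book):

```python
# Watch the two distances shrink together as x approaches 4 for f(x) = x^2.
def f(x):
    return x * x

LIMIT_POINT = 4   # the value x approaches
LIMIT_VALUE = 16  # the value f(x) approaches

for x in [3.9, 3.99, 3.999, 3.9999]:
    x_dist = abs(LIMIT_POINT - x)
    y_dist = abs(LIMIT_VALUE - f(x))
    print(f"|4 - x| = {x_dist:.4f},  |16 - x^2| = {y_dist:.8f}")
```

Each time the x distance shrinks by a factor of 10, the f(x) distance shrinks by about a factor of 10 too.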

Now, here’s the mathematically precise version of the general case of what we just said:

The Limit:

Let f be a function defined on some open interval that contains the number a, except possibly for a itself. Then we say that the limit of f(x) as x approaches a is L, and we write:

\lim_{x \to a} f(x) = L

if for every number ϵ > 0, there is a number δ > 0, such that

if 0 < |x – a| < δ then |f(x) – L| < ϵ

Let’s unpack that a little, shall we? First off, don’t be intimidated by the Greek. They’re just the Greek letters Epsilon and Delta. The dumbest kid in Greece isn’t scared of them, and you shouldn’t be either. By convention, ϵ and δ are used for this definition. They’re just symbols, same as a, b, c, and d. Don’t let them put you off.

In any case, the first sentence is just saying that f(x) is defined for the x values in some open interval around a, though it’s possible f(x) is not defined at a itself.

Then, we say there’s a limit where as x goes to a, f(x) goes to L.

Lastly, we say that this is only true if, for every positive number ϵ (no matter how small), you can find a positive number δ such that:

1) Whenever there is a non-zero distance between x and a,

2) and that distance is smaller than δ,

3) the distance between f(x) and L is smaller than ϵ.

Note the order: ϵ gets picked first, and δ has to be found in response to it. That dependence is the whole trick of the definition.

In other words, no matter how small an ϵ you demand, there’s a δ small enough to deliver it. By bringing x closer to its final destination, we bring f(x) as close as we like to L. Got it?

The book provides a geometric interpretation, but I’m not sure it really helps with understanding. If you’re still confused, though, it’s worth a look.

Furthermore, in practice, you can substitute in real values for δ and ϵ. So, you can ask “how small does δ have to be to keep f(x) within this particular ϵ of L?” That is, how close do we have to take x to a before f(x) is within a certain value of L. The book provides some nice examples of this, and shows you how to use the definition of a limit to prove things about limits. Neat!
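To make that concrete, here’s a sketch (my example, not the book’s) using a standard delta choice for our x^2 example: since |x^2 – 16| = |x – 4|·|x + 4|, and |x + 4| < 9 whenever |x – 4| < 1, taking δ = min(1, ϵ/9) always works:

```python
# Epsilon-delta check for lim_{x->4} x^2 = 16.
# Given eps, delta = min(1, eps/9) guarantees |x^2 - 16| < eps
# whenever 0 < |x - 4| < delta.
def delta_for(eps):
    return min(1.0, eps / 9.0)

def works(eps, delta, samples=10_000):
    # Sweep x across (4 - delta, 4 + delta) and confirm |x^2 - 16| < eps
    # for every sample with 0 < |x - 4| < delta.
    for i in range(1, samples):
        x = 4 - delta + (2 * delta) * i / samples
        if 0 < abs(x - 4) < delta and not abs(x * x - 16) < eps:
            return False
    return True

for eps in [1.0, 0.1, 0.01, 0.001]:
    print(eps, delta_for(eps), works(eps, delta_for(eps)))
```

However tiny an ϵ you hand it, `delta_for` hands back a δ that keeps f(x) inside the ϵ window.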

Next, we get the definitions of the right- and left-hand limits. These may seem obvious, but they’re worth looking at to further elucidate what exactly is going on in the definition of a limit, and in the idea of a delta-epsilon proof.

Left-Hand Limit:

\lim_{x \to a^-}f(x) = L

if for every number ϵ > 0, there is a number δ > 0 such that

if a – δ < x < a then |f(x) – L| < ϵ

The term after the “then” there is the same as the old definition. It’s saying “the distance between f(x) and L is less than ϵ.”

The term before the then may look a bit different, but it’s basically the same idea. The old definition was concerned with the absolute distance between x and a, given by |x – a|. Now, we want the distance to the left of a. So, imagine each of the 3 terms there (a - δ, x, and a) are plotted on a line. Rightmost is a. To the left of a is x. And to the left of x is a - δ. That is, x is some distance to the left of a, but that distance is less than δ. If you moved a to the left by δ units, you’d move past x. x is somewhere between δ units left of a and a itself. Got it?

So, essentially, we’re saying that x is approaching a, but it is approaching it from a lower value than a (also known as approaching from the left).

Knowing that, I bet you can guess the right-hand limit. We’re just going to define x as being somewhere to the right of a. So…

The Right-Hand Limit

\lim_{x \to a^+} f(x) = L

if for every number ϵ > 0, there is some number δ > 0, such that

if a < x < a + δ then |f(x) – L| < ϵ

Once again, we’re just defining a limit where x is forced to be on a certain side of a. In this case, it’s forced to be less than some distance δ to the right of a.
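Here’s a little numerical sketch of why the one-sided distinction matters, using f(x) = |x|/x (my pick, not the book’s): its left- and right-hand limits at 0 disagree, so the two-sided limit doesn’t exist.

```python
# f(x) = |x|/x is -1 for every negative x and +1 for every positive x.
def f(x):
    return abs(x) / x

a = 0
delta = 1e-6

# x values in (a - delta, a): approaching a from the left
left = [f(a - delta * k / 10) for k in range(1, 10)]
# x values in (a, a + delta): approaching a from the right
right = [f(a + delta * k / 10) for k in range(1, 10)]

print(set(left), set(right))  # every left sample is -1.0, every right sample is 1.0
```

The left-hand limit is -1 and the right-hand limit is +1, no matter how tiny you make delta.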

Now that you have these definitions, the book gives you a nice little proof of the Sum Law for limits, which I think is pretty intuitive once you get the idea of limits per the above definitions. Going through it in the blog would be a bit too redundant with the book, I think, but you should read over it and try to understand. It’s good to get a sense of how proofs work. The really tight simple ones like this one always feel like they have a circular quality to me. But, given that the proof of the second thing is in a sense contained within the first thing, this is no surprise.

Infinite Limits

Lastly, we get the definition of an infinite limit. This is basically a modification of the regular old limit, but instead of saying f(x) approaches a circumscribed value, we say it keeps growing forever. Here’s the mathy version:

The Infinite Limit:

Let f be a function defined on some open interval that contains the number a, except possibly for a itself. Then we say that the limit of f(x) as x approaches a is \infty, and we write:

\lim_{x \to a} f(x) = \infty

if for every positive number M, there is a positive number δ, such that

if 0 < |x – a| < δ then f(x) > M

Compare this to our regular old limit definition, and you’ll see it’s similar, except we’ve eliminated ϵ. Recall that ϵ capped the distance between f(x) and the value L it approaches. But now there is no finite value f(x) approaches. We therefore replace the ϵ condition with M (which is short for MWAHAHAHAHAHAHA!), which represents any positive number, however large. So, we’re basically saying this: By making the distance between x and a arbitrarily small, you can make f(x) get larger and larger – larger, in fact, than any particular number you can come up with. In other words, bring x toward a and you can get f(x) as big as you like.
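For a concrete version (again my example, not the book’s): take f(x) = 1/x^2, which blows up at 0. Given any M, the choice δ = 1/√M does the job, since 0 < |x| < δ forces x^2 < 1/M:

```python
import math

# Infinite limit: lim_{x->0} 1/x^2 = infinity.
# Given M > 0, delta = 1/sqrt(M) works: 0 < |x| < delta implies 1/x^2 > M.
def f(x):
    return 1.0 / (x * x)

for M in [10.0, 1_000.0, 1_000_000.0]:
    delta = 1.0 / math.sqrt(M)
    # sample x values inside (0, delta) and confirm f(x) clears the bar M
    xs = [delta * k / 100 for k in range(1, 100)]
    print(M, all(f(x) > M for x in xs))
```

Whatever M the villain names, there’s a δ that pushes f(x) above it.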

This leads us to our last definition for this section:

The Negative Infinite Limit

Let f be a function defined on some open interval that contains the number a, except possibly for a itself. Then we say that the limit of f(x) as x approaches a is - \infty, and we write:

\lim_{x \to a} f(x) = - \infty

if for every negative number N, there is a positive number δ, such that

if 0 < |x – a| < δ then f(x) < N

So, basically the same deal except we changed M to N (short for NYAHAHAHAHAHA!), made N negative, and said that f(x) is less than N. In other words, by taking x closer to a, f(x) drops below any negative number you name. That is, you go to negative infinity, as in the schoolyard rhyme I just made up “You’re so pretty, times negative infinity.”
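Same sketch as before, flipped (my example again): f(x) = -1/x^2 heads to negative infinity at 0, and given any negative N, δ = 1/√(-N) works:

```python
import math

# Negative infinite limit: lim_{x->0} -1/x^2 = -infinity.
# Given N < 0, delta = 1/sqrt(-N) works: 0 < |x| < delta implies -1/x^2 < N.
def f(x):
    return -1.0 / (x * x)

for N in [-10.0, -1_000.0, -1_000_000.0]:
    delta = 1.0 / math.sqrt(-N)
    # sample x values inside (0, delta) and confirm f(x) dips below N
    xs = [delta * k / 100 for k in range(1, 100)]
    print(N, all(f(x) < N for x in xs))
```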

And that’s all for 2.4. Next we’re taking a detour through continuity, then onto limits at infinity (as opposed to infinite limits), and then we shall defeat The Mighty Derivative.


This entry was posted in Autodidaction.

3 Responses to Calculus! #20: Early Transcendentals 2.4

  1. Logan Stokols says:

    I think it’s important to note that YOU decide an arbitrary value for epsilon, and then some math decides what the corresponding delta is. If the aforementioned “some math” is always well defined, then the limit exists

  2. jiitee says:

    “1) There is always a non-zero distance between x and a

    2) There is a non-zero number (we’re calling it δ) that is bigger than that distance

    3) If delta exists, there’s some other number (we’re calling it ϵ) that is bigger than the distance between f(x) and L.”

    Here I don’t quite follow what you’re trying to say (with 2. and 3. ), because:

    1) ok

    2) true, but such a number always exists (just take 2 times the distance of x and a).

    3) like 2) this is also always true – regardless of the function (choose epsilon to be 2 times the distance of f(x) and L, or 1 if the distance happens to be 0).

    “Recall that ϵ was the value that f(x) approaches.” – should have L instead of ϵ.

    It certainly helps the intuition, but you don’t really need to assume that M is positive and N is negative (if by a “number” you mean at least the integers here). Simplifications like this are one factor that make math elegant and beautiful (at least to me) – i.e. assume just what is the absolute necessity.

    “you get a bigger and bigger negative number” – isn’t a big negative number something close to 0? (-1 is bigger than -2, -2<-1).

    ( Math is also fun: "Rigorously testing the right and left hand limits" :p )

  3. jiitee says:

    Seems I went a bit out of context with my previous remark on 3) as you are obviously considering every x within a certain range here, not just one x at a time. So in this context it is of course not true regardless of the function. It is true for any function that is bounded (and in such a case the limit still does not need to exist). But boundedness (near a) is a necessary condition for the limit to exist – so in this sense you are right in saying “Lastly, we say that this is only true if:”

    I think what’s confusing with 3) is that the existence of delta should depend on epsilon, not the other way around.
