I left my book out after the last blog and my cat hairballed directly onto the definition of an infinite limit. Truly, the gods are attempting to stop us on our path to mathematical glory. But, we shall not waver!

**Infinite Limits**

An infinite limit, quite simply, is a limit that goes to infinity.

To be more precise: If you take a limit as x goes to some value, and you find that f(x) goes to infinity, then you’ve got yourself an infinite limit. Here’s a simple example:

Say you have lim_{x->0} 1/x. So, x goes to zero, and f(x) = 1/x. Clearly, the closer x gets to zero, the larger f(x) gets. When this is the case, we say that the limit goes to infinity.

This is a bit of a funky way to say it, since the phrasing implies infinity exists somewhere on the number line. A better way to say it would be “the closer x gets to 0, the larger f(x) gets.” Here’s the book definition:

lim_{x->a} f(x) = ∞ means that the value of f(x) can be made arbitrarily large by taking x sufficiently close to a, but not equal to a.
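You can watch this happen numerically. Here's a quick sketch of my own (not from the book), probing 1/x from the right-hand side to keep the values positive:

```python
# Probe f(x) = 1/x as x shrinks toward 0 from the right.
for x in [0.1, 0.01, 0.001, 0.0001]:
    print(f"x = {x:>6}, 1/x = {1/x:>8.0f}")
# Whatever bound you pick, a small enough x pushes 1/x past it.
```

That's the "arbitrarily large" part of the definition in action: name any bound, and a small enough x beats it.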

Other than that you’re going to infinity, a lot of the regular stuff applies. You can approach a limit from the right-hand or left-hand side, and if the function plunges downward instead of up, the limit can go to “negative infinity.” Again, this is a bit of a weird term, and it would be better to say that you can make f(x) an arbitrarily low number.

Lastly, we get what is essentially a way to think of infinite limits graphically: infinite limits are vertical asymptotes. This makes sense, since f(x) is pretty much always plotted on the vertical axis. By our definition, there is some x value that, as you approach it, produces an arbitrarily large y value. So, the f(x) value zooms up infinitely high without the curve ever reaching that x value. This is also known as a vertical asymptote.

The chapter closes with the particular example of going to zero from the right-hand side of the natural log function. Or, mathematically:

lim_{x->0+} ln(x) = -∞
You can look up the graph to confirm it visually, but let’s pick it apart to understand why ln(x) goes to negative infinity when x goes to zero.

Remember, when we say “y = ln(x)” it’s equivalent to saying x = e^y, where *e* is Euler’s number (about 2.718). Or, as I like to think of it, we’re asking “to what power would I have to raise *e* in order to get x?” In this case, we want to know to what power we’d have to raise *e* in order to get something close to 0.

Imagine we’re just trying to guess what y should be. We know it’s not 1 or higher, because that would give *e* or some larger value. We might guess that it’s some very small number, like .00001. But, remember, even if y is zero, e^y equals 1. Negative exponents, though, flip everything: e^{-y} = 1/e^y. That being the case, if y is negative, the larger we make its magnitude, the larger the denominator gets, which means the expression gets closer and closer to 0.
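Here’s a quick numerical sketch of that back-and-forth (my own, using just the standard math module):

```python
import math

# As y heads down the negatives, e**y = 1/e**(-y) collapses toward 0 ...
for y in [-1, -5, -10, -20]:
    print(f"y = {y:>3}, e**y = {math.exp(y):.3e}")

# ... so, read the other way, ln of a tiny positive x is hugely negative.
for x in [0.1, 0.001, 1e-9]:
    print(f"x = {x:>5}, ln(x) = {math.log(x):.2f}")
```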

In other words, as we get x closer and closer to 0, y gets lower and lower down the vertical axis. That is, it is a negative infinite limit.

Normally I don’t like really nitpicky comments, but lim_{x->0} 1/x doesn’t exist. This is actually a pretty important point (logically there’s no reason to prefer a left-hand limit over a right-hand one) that shows up a lot further on in calculus; for example, to even define the derivative of a complex function you need to demand that the limit exists from every direction, and this requirement leads to the super-important Cauchy-Riemann equations. Happily, 1/x^2 or 1/|x| would work fine.

Another nitpick, but your definition of having a limit of infinity is not strong enough. Take for instance the function sin(1/x)/x, defined on ℝ+ (excluding 0; I can’t remember how that is noted). As x approaches 0 (through positive values, since the function is only defined on ℝ+), I can make that function take values as large as I like… but also negative values as low as I like. So does it have a limit of both plus and minus infinity? Of course not (it has no limit at all, for the record). I believe the formal definition goes like this:

A function f(x) is said to have a limit of (or tend towards) infinity as x goes to a if and only if: whatever k you take, there exists an ε_k > 0 such that, for all y ≠ a in [a-ε_k, a+ε_k] where f(y) is defined, f(y) > k.

(For right-hand and left-hand limits, replace the range by ]a, a+ε_k] and [a-ε_k, a[, respectively.)
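The oscillation claim above is easy to check numerically. A sketch of my own: the trick is choosing x so that 1/x lands exactly on a peak or a trough of sine.

```python
import math

# sin(1/x)/x near x = 0+ takes arbitrarily large positive AND negative values.
# Pick x so that 1/x hits a peak (sin = +1) or a trough (sin = -1).
n = 1000
x_peak   = 1 / (math.pi / 2 + 2 * math.pi * n)      # sin(1/x) = +1
x_trough = 1 / (3 * math.pi / 2 + 2 * math.pi * n)  # sin(1/x) = -1

print(math.sin(1 / x_peak) / x_peak)      # large and positive
print(math.sin(1 / x_trough) / x_trough)  # large and negative
```

Crank n higher and both values grow without bound, which is exactly why the naive "can be made arbitrarily large" wording isn't enough on its own.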

Now let us prove that this holds for -ln(x) as x goes towards 0 from the right-hand side. For any k, take ε_k = e^-(k+1). If 0 < y <= e^-(k+1), then ln(y) is the number such that e^ln(y) = y, so e^ln(y) <= e^-(k+1). Since one of the known properties of the exponential function is that it’s strictly increasing, it follows that ln(y) <= -(k+1), and therefore -ln(y) >= k+1 > k. QED
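That argument can be sanity-checked numerically (my own sketch): plugging x_k = e^-(k+1) into -ln(x) should land right at k+1, just above the bound k.

```python
import math

# For any bound k, x_k = e**-(k+1) gives -ln(x_k) = k + 1 > k,
# and any smaller positive y only pushes -ln(y) higher still.
for k in [1, 10, 100]:
    x_k = math.exp(-(k + 1))
    print(f"k = {k:>3}, -ln(x_k) = {-math.log(x_k):.1f}")
```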

Of course, for functions that are strictly monotonic, your book definition works (though a function doesn’t need to be strictly monotonic to have a limit of infinity, e.g. 1/x + 2sin(1/x) as x goes towards 0 from the right).
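That last example is worth a numeric poke (my own sketch, approaching from the right): the sine term makes the function wobble, but the 1/x term drags it upward anyway, since the whole expression is always at least 1/x - 2.

```python
import math

# 1/x + 2*sin(1/x) is not monotonic (the sine term wobbles), but it still
# tends to infinity as x -> 0+, because it is bounded below by 1/x - 2.
def f(x):
    return 1 / x + 2 * math.sin(1 / x)

for x in [0.1, 0.01, 0.001]:
    print(f"x = {x:>5}: f(x) = {f(x):10.1f} (>= {1/x - 2:.0f})")
```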