## Why is the limit of the sequence 1/n equal to 0?

We know that one of the basic facts of mathematical analysis is $\lim_{n\to\infty}\frac{1}{n}=0$. My question is: why does the sequence $\frac{1}{n}$, which is $>0$ for every $n$, finally equal $0$ as $n\to\infty$?

Yes, that’s the definition of limits. What I will say here is just to help you understand this definition.
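For reference, the definition can be written out precisely; this is the standard $\varepsilon$-$N$ statement (not specific to this post), specialized to $a_n = 1/n$:

```latex
\lim_{n\to\infty} a_n = L
\quad\Longleftrightarrow\quad
\forall \varepsilon > 0 \;\; \exists N \;\; \forall n > N : \; |a_n - L| < \varepsilon .

% For a_n = 1/n and L = 0: given any \varepsilon > 0, pick N > 1/\varepsilon.
% Then for all n > N,
%   |1/n - 0| = 1/n < 1/N < \varepsilon ,
% so the limit is 0 -- even though 1/n > 0 for every single n.
```

Note that the definition never asks the terms of the sequence to *reach* $0$; it only asks them to get arbitrarily close.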

We should always remember that a limit is the result of a process, and in that process something can change qualitatively. When we say “let $n$ go to infinity”, that is a process; the notation $\infty$ itself is not a number but a process. Let’s try to understand this process using the previous example $\lim_{n\to\infty}\frac{1}{n}=0$. Imagine a computer program which calculates the values of $\frac{1}{n}$ for $n=1,2,\cdots$. For example, in C:

```c
#include <stdio.h>

int main(void)
{
    int n = 0;
    while (++n)                    /* in exact arithmetic, this would never stop */
        printf("%f\n", 1.0 / n);
    return 0;
}
```


Ideally, this program would never stop. However, all real computers have only a limited number of bits, such as 32 or 64… We can imagine the output of the program: “0.1, …, 0.001, …, 0.00001, …” and then “0.000…”, which is $\approx 0$ although still $>0$. When the number of zeros exceeds what our computer’s bits can represent, the machine will simply ‘think’ the result is 0. So we can see that a limit is a kind of approximation, or an irreversible compression, which suffers from approximation error. That’s why a sequence’s limit can be 0 even though each member is always bigger than 0.

Sometimes the approximation error of a limit can bring trouble. For example, let a random variable $X\in R^{1}$ be degenerate at the point $0$, that is $P(X=0)=1$, and let $X_{n}\in R^{1}$ be degenerate at the point $\frac{1}{n}$. The distribution function of $X_{n}$ is $F_{X_{n}}(x)=I_{[1/n,\infty)}(x)$, and that of $X$ is $F_{X}(x)=I_{[0,\infty)}(x)$. Then $F_{X_{n}}(x)\to F_{X}(x)$ for all $x$ except $x=0$, because for $x=0$ we have $F_{X_{n}}(0)=0\nrightarrow F_{X}(0)=1$. The concept of continuity can deal with this situation: note that $x=0$ is not a continuity point of $F_{X}(x)$, and we still have $X_{n}\stackrel{\mathcal{L}}{\to}X$ because convergence in law only requires convergence at the continuity points of $F_X(x)$.

(Update: 2011/December/09) Another “annoying” thing caused by the “approximation error” occurs when taking limits of inequalities. For example: the limit of $|x|\leq1+\frac{1}{n}$ is $|x|\leq1$, and the limit of $|x|<1-\frac{1}{n}$ is $|x|<1$, but the limit of $|x|<1+\frac{1}{n}$ is $|x|\leq1$, and the limit of $|x|\leq1-\frac{1}{n}$ is $|x|<1$. To get the right result, we should first write down the limit of both sides of the inequality, and then we need an additional fix: check whether equality can hold, and change the inequality sign accordingly. In real analysis, this “approximation error” explains the following “annoying” results:

1. The union of a finite (not arbitrary) number of closed sets is closed; (the intersection of an arbitrary collection of closed sets is closed.)

2. The intersection of a finite (not arbitrary) number of open sets is open; (the union of an arbitrary collection of open sets is open.)
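The inequality examples above are really statements about intersections and unions of sets; a short check (standard real-analysis facts, written out here for illustration):

```latex
\bigcap_{n=1}^{\infty}\left\{ x : |x| < 1 + \tfrac{1}{n} \right\}
  = \{ x : |x| \le 1 \},
\qquad
\bigcup_{n=1}^{\infty}\left\{ x : |x| \le 1 - \tfrac{1}{n} \right\}
  = \{ x : |x| < 1 \}.

% If |x| = 1, then |x| < 1 + 1/n holds for every n, so x survives the
% intersection; but |x| <= 1 - 1/n holds for no n, so x is missing from
% the union. Hence an infinite intersection of open sets can be closed,
% and an infinite union of closed sets can be open.
```

This is exactly why the finiteness restriction in results 1 and 2 cannot be dropped.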

Here are two more code examples showing the “approximation error”.
In C++:

```cpp
#include <iostream>
using namespace std;

int main()
{
    float n = 1.0f;     /* use float: an int would overflow long before 1/n reaches 0 */
    float in = 0.0f;

    for (int i = 0; i < 10; i++)
    {
        n *= 10000.0f;       /* n eventually exceeds float's range and becomes inf */
        in = 1.0f / n;
        cout << in << endl;  /* once n is inf, 1/n prints as exactly 0 */
    }

    bool temp = (in == 0.0f);
    cout << temp << endl;    /* prints 1: the computed "limit" equals 0 */
    return 0;
}
```


(Don’t have a C++ compiler? You can compile the code online and see the result.)

A more illuminating result in R:

```r
n <- 1
for (i in 1:100)
{
    n <- n * 10000      # n eventually overflows to Inf
    print(1/n == 0)     # FALSE at first, TRUE once 1/n is exactly 0
}
```


(No R installed? Just copy the above code and run it online to see the result here:
http://pbil.univ-lyon1.fr/Rweb/)