We know that one of the basic facts in mathematical analysis is $\lim_{n\to\infty} \frac{1}{n} = 0$. My question is: why does the sequence $\{\frac{1}{n}\}$, which satisfies $\frac{1}{n} > 0$ for any $n$, finally equal $0$ as $n \to \infty$?

Yes, that’s the definition of the limit. What I will say here is just to help you understand this definition.

We should always remember that a limit is the result of a process. In the process of taking the limit, some mutations can happen. When we say “let $n$ go to infinity”, it is a process, and the notation $n \to \infty$ itself denotes not a number but a process. Let’s try to understand this process using the previous example $\lim_{n\to\infty} \frac{1}{n} = 0$. Imagine that there is a computer program which calculates the values of $\frac{1}{n}$ for $n = 1, 2, 3, \dots$ For example, in C:

```c
#include <stdio.h>

int main(void) {
    int n = 0;
    while (++n)
        printf("%f\n", 1.0 / n);  /* print 1/n for n = 1, 2, 3, ... */
    return 0;
}
```

Ideally, this program would never stop. However, all real computers have only a limited number of bits, such as 32, 64… We can imagine the output of the program will be “0.1, …, 0.001, …, 0.00001, …” and then “0.000…”, even though $\frac{1}{n} > 0$ for every $n$. When the number of zeros exceeds what the bits of our computers can represent, those machines will simply ‘think’ the result is $0$. So we can see that a limit is a kind of approximation, or an irreversible compression, which suffers from approximation error. That’s why a sequence’s limit can be $0$ even though each member is always bigger than $0$.

Sometimes the approximation error of a limit can bring trouble. For example, let a random variable $X_n$ be degenerate at the point $\frac{1}{n}$, that is, $P(X_n = \frac{1}{n}) = 1$, and let $X$ be degenerate at the point $0$. The distribution function of $X_n$ is $F_n(x) = 0$ for $x < \frac{1}{n}$ and $F_n(x) = 1$ for $x \ge \frac{1}{n}$, and that of $X$ is $F(x) = 0$ for $x < 0$ and $F(x) = 1$ for $x \ge 0$. Then $F_n(x) \to F(x)$ for all $x$ except $x = 0$, because for $x = 0$ we have $F_n(0) = 0$ for every $n$ while $F(0) = 1$. The concept of continuity can deal with this situation. Note that $0$ is not a continuity point of $F$. We still have $X_n \xrightarrow{d} X$ because convergence in law only requires convergence at $F$’s continuity points.

(Update: 2011/December/09) Another “annoying” thing caused by the “approximation error” occurs when taking the limit of inequalities. For example: $\frac{1}{n} > 0$ for every $n$, but $\lim_{n\to\infty} \frac{1}{n} = 0$; similarly, $\frac{1}{n} < \frac{2}{n}$ for every $n$, but $\lim_{n\to\infty} \frac{1}{n} = \lim_{n\to\infty} \frac{2}{n} = 0$. So a strict inequality between sequences only guarantees a non-strict inequality between their limits. To get the right result, we should first write down the limits of both sides of the inequality, and then we need an additional fix – check whether equality can hold and change the inequality sign accordingly. In real analysis, this “approximation error” explains the following “annoying” results:

1. The union of a **finite** (not arbitrary) number of closed sets is closed; (the intersection of an arbitrary collection of closed sets is closed.)

2. The intersection of a **finite** (not arbitrary) number of open sets is open; (the union of an arbitrary collection of open sets is open.)
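A standard pair of counterexamples (my own addition, but consistent with the discussion above) shows why “finite” cannot be dropped:

```latex
\bigcap_{n=1}^{\infty} \left(-\tfrac{1}{n}, \tfrac{1}{n}\right) = \{0\}
\quad \text{(an intersection of open sets that is not open),}
\qquad
\bigcup_{n=1}^{\infty} \left[\tfrac{1}{n}, 1\right] = (0, 1]
\quad \text{(a union of closed sets that is not closed).}
```

In both cases the point $0$ behaves differently in the limit than at every finite stage – exactly the “approximation error” described above.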

Two more code examples showing the “approximation error”:

In C++:

```cpp
#include <iostream>
using namespace std;

int main() {
    float n = 1.0f;
    float in = 0.0f;
    for (int i = 0; i < 10; i++) {
        n *= 10000.0f;     // n eventually becomes "inf" because it is out of range
        in = 1.0f / n;     // ... and 1/inf is exactly 0
        cout << in << endl;
    }
    bool temp = (in == 0.0f);
    cout << temp << endl;  // prints 1: "in" compares equal to 0
    return 0;
}
```

(Don’t have a C++ compiler? You can compile online and see the result here:

http://codepad.org/vgCsH0Pd)

A more illuminating result in R:

```r
n <- 1
for (i in 1:100) {
  n <- n * 10000     # n eventually overflows to Inf
  print(1/n == 0)    # ... and 1/Inf is exactly 0, so this becomes TRUE
}
```

(No R installed? Just copy the above code and run it online to see the result here:

http://pbil.univ-lyon1.fr/Rweb/)