(Previous post on John Gabriel: Calculus 101 (Convergence and Derivatives))

Okay, now that we know what sequences are and what it means for a sequence to converge to some limit, we can finally start talking about real numbers:

## Irrational Numbers and Cauchy Sequences

Basically the *whole point* of real numbers is to make the rational numbers *complete with respect to convergence,* in a very specific sense which will become apparent later. The numbers we get which we *didn’t* already have in the set of rational numbers are called *irrational numbers.* The first irrational number that was historically encountered is *the square root of \(2\)*, as shown by possibly the *most well-known proof in history:*

Assume the square root of \(2\) is a *rational number* \(\frac ab\). We can safely assume that \(a\) and \(b\) are coprime, meaning that we can’t simplify the fraction \(\frac ab\) any further (otherwise, just simplify the fraction and call the resulting two numbers \(a\) and \(b\)). Then:

\[\left(\frac ab\right)^2=\frac{a^2}{b^2}=2\;\Longrightarrow\;a^2=2b^2\]

From this we can conclude that \(a^2\) needs to be divisible by \(2\), and hence that \(a\) needs to be divisible by \(2\) as well. So let \(a = 2c\) for some whole number \(c\), then:

\[\frac{(2c)^2}{b^2}=\frac{4c^2}{b^2}=2\;\Longrightarrow\;4c^2=2b^2\;\Longrightarrow\;2c^2=b^2\]

From this we can conclude that \(b^2\) needs to be divisible by \(2\), and hence \(b\) needs to be divisible by \(2\) as well. But that just means that we could have simplified the original fraction \(\frac ab\) by dividing both \(a\) and \(b\) by \(2\) – something we explicitly assumed to be already taken care of. That’s a contradiction, hence the square root of \(2\) is irrational.
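The proof covers *all* fractions at once, which no amount of computation can do – but if you want to see the underlying arithmetic fact \(a^2\neq2b^2\) in action, here’s a minimal finite sanity check (a Python sketch of my own, not anything from the proof):

```python
from math import isqrt

# Finite sanity check (NOT a proof -- the proof above covers all fractions):
# for every denominator b up to a bound, no numerator a satisfies a^2 = 2*b^2,
# i.e. no fraction a/b squares to exactly 2.
for b in range(1, 10_000):
    a = isqrt(2 * b * b)  # integer part of sqrt(2)*b -- the only candidate
    assert a * a != 2 * b * b
print("no fraction a/b with b < 10000 squares to 2")
```

(`isqrt` returns the floor of the square root, so if \(a^2=2b^2\) had a solution, `isqrt(2*b*b)` would hit it exactly.)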

So what does this *mean* now? The square root of \(2\) (as Gabriel seems to sort-of believe) *does not exist?* Well, the thing is… here’s the graph of the function \(f(x)=x^2\):

I mean… I can see it *clearly* crossing the line at \(y=2\) *some*where… are we supposed to say that there is *no specific point* on the \(x\)-axis where the function takes on the value \(2\)? I can even tell you that it’s approximately in the ballpark around \(1.41421356237\) – in fact, I can get *arbitrarily close* (*nudge, nudge, wink, wink*) to that *“number”* which possibly does or doesn’t exist. So what exactly could we *mean* when we talk about the square root of \(2\) as a number?
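To make the *“arbitrarily close”* claim concrete, here’s a sketch (Python; the helper `trunc` is my own naming, not anything official) computing, for each \(n\), the largest fraction with denominator \(10^n\) whose square stays below \(2\) – the error \(2-q^2\) provably drops below \(3\cdot10^{-n}\):

```python
from fractions import Fraction
from math import isqrt

def trunc(n):
    """Largest fraction with denominator 10^n whose square is still below 2."""
    # isqrt(2 * 10^(2n)) is the integer part of sqrt(2) * 10^n
    return Fraction(isqrt(2 * 10 ** (2 * n)), 10 ** n)

for n in range(1, 6):
    q = trunc(n)
    print(q, float(q))   # 7/5 1.4, 141/100 1.41, ...

# the squares approach 2 as closely as we like:
for n in range(1, 40):
    q = trunc(n)
    assert 0 < 2 - q * q < Fraction(3, 10 ** n)
```

Note that every `trunc(n)` is a perfectly ordinary *rational* number – only the thing they’re all crowding around is missing.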

The answer will, of course, involve *a specific kind of sequence,* namely a *Cauchy sequence*. The annoying thing about our *definition* of convergence (and in fact, Gabriel agrees with me here – *Who would have thought!*) is that it is intrinsically coupled to *a specific limit*, one which we demanded to be a rational number. It doesn’t tell us when a sequence *“converges”* (whatever that means), it only tells us (however, very specifically!) when a sequence converges *to a specific (rational) number*. Cauchy sequences try to fix that annoyance:

**Definition:** A sequence \((a_n)\) is called a **Cauchy sequence** if for every arbitrarily small \(\epsilon>0\), there is some index \(n_\epsilon\in\mathbb N\) such that for any subsequent indices \(k,\ell>n_\epsilon\) the distance between \(a_k\) and \(a_\ell\) is smaller than \(\epsilon\). In logical notation:

\[\forall\epsilon>0\;\exists n_\epsilon\in\mathbb N\;\forall k,\ell>n_\epsilon\; \mid a_k-a_\ell \mid < \epsilon\]
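The definition asks us to *produce a witness index* \(n_\epsilon\) for every \(\epsilon\). As a toy illustration (a Python sketch; the example sequence \(a_n=\frac1n\) and the helper name are my own choices), for \(a_n=\frac1n\) any \(n_\epsilon\geq\frac1\epsilon\) works, since the whole tail past \(n_\epsilon\) fits inside an interval of width \(\frac1{n_\epsilon}\):

```python
from fractions import Fraction

def witness(eps: Fraction) -> int:
    """A valid n_eps for the sequence a_n = 1/n: any integer >= 1/eps."""
    return -(-eps.denominator // eps.numerator)  # ceiling of 1/eps

eps = Fraction(1, 1000)
n_eps = witness(eps)  # 1000
# spot-check the Cauchy condition on a sample of index pairs k, l > n_eps:
for k in range(n_eps + 1, n_eps + 40):
    for l in range(k + 1, n_eps + 40):
        assert abs(Fraction(1, k) - Fraction(1, l)) < eps
```

The code can of course only *spot-check* finitely many pairs – the actual Cauchy property is the universally quantified statement above.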

This definition doesn’t mention limits anywhere. And yes, it turns out that all convergent sequences are Cauchy sequences. Normally I would say “proof left as exercise”, but I feel generous right now, and also it probably makes sense to see at least one proof about convergence, just to get a better feel for the whole shebang:

Let \((a_n)\) be a convergent sequence with some limit \(a\) and \(\epsilon>0\) arbitrarily small. Since \(\lim a_n=a\) we know by definition that (choosing \(\frac\epsilon2\)) there exists some index \(n\) such that for any subsequent indices \(k,\ell>n\) we have \(\mid a_k-a\mid<\frac\epsilon2\) and \(\mid a_\ell-a\mid<\frac\epsilon2\). Now we need to show that the distance between any arbitrary \(a_k\), \(a_\ell\) (with \(k,\ell>n\)) is smaller than \(\epsilon\), which is just a quick calculation:

\[ \mid a_k - a_\ell \mid = \mid a_k-a_\ell + \underbrace{a - a}_{=0} \mid = \mid (a_k-a)-(a_\ell-a) \mid \leq \underbrace{\mid a_k-a\mid}_{<\frac\epsilon2} + \underbrace{\mid a_\ell-a \mid}_{<\frac\epsilon2} < \epsilon, \]

hence \( (a_n)\) is a Cauchy sequence.

Now, conversely, are all *Cauchy sequences* also *convergent* (in the rational numbers)? Unfortunately no. We can prove this by constructing a sequence that “converges to” the square root of \(2\) – and we’ve already shown that this is not a rational number:

We define three sequences \( (a_n)\), \( (b_n)\) and \( (s_n)\) simultaneously via recursion, by letting \( a_1:=1\), \( b_1:=2\) and \( s_1:=1.5\).

For \( k>1\), let (*hellooooo arithmetic mean, old friend!*) \( s_k:=\frac{b_{k-1}+a_{k-1}}{2}\). If \( s_k^2>2\) then let \( a_k:=a_{k-1}\) and \( b_k:=s_k\), if \( s_k^2<2\), then let \( a_k:= s_k\) and \( b_k:=b_{k-1}\).

Now the sequence \( (s_n)\) is a Cauchy sequence and approaches the square root of \( 2\) arbitrarily close (proof left as exercise).

Here’s what the three sequences look like for the first 6 elements:
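If you want to recompute those first elements yourself, here’s a short sketch of the recursion in Python (using exact rational arithmetic via `Fraction`; I fold the seed \(s_1=1.5\) into the loop, since it is just the first midpoint anyway):

```python
from fractions import Fraction

# The bisection recursion from above: s_k is the midpoint of [a, b];
# depending on whether s_k^2 overshoots 2, we shrink the interval
# from the right or from the left.
a, b = Fraction(1), Fraction(2)
s_values = []
for _ in range(6):
    s = (a + b) / 2                  # hellooooo arithmetic mean
    s_values.append(s)
    if s * s > 2:
        b = s                        # overshoot: s is the new right endpoint
    else:
        a = s                        # undershoot: s is the new left endpoint

print([float(s) for s in s_values])
# [1.5, 1.25, 1.375, 1.4375, 1.40625, 1.421875]
```

Each step halves the interval \([a_k,b_k]\), so eventually the elements of \((s_n)\) can’t stray farther than \(2^{-n}\) from each other – which is essentially the Cauchy property.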

…if you’re annoyed by my usage of *“the square root of \( 2\)”* here, acting like that was *indeed* an existent number even though it isn’t, or at least we don’t know whether it is, yet – *fair enough.* But if you want everything I said here to be *more rigorous* in that regard, just replace every usage of *“the square root of \( 2\)”* by *“some number \( q\) with the property, that \( q^2=2\)”*. That way, the previous proof becomes an *actual proof, that the sequence doesn’t converge*, because now we’re *not* proving that it *“converges”* to a number which doesn’t exist, but instead prove that *if* the sequence *were* convergent, the limit *would have* some property which we *already proved no rational number can have*. Hence, by contradiction, the sequence doesn’t converge. Point being: *We don’t need to assume the existence of the real number \( \sqrt2\) for the previous proof to work.* We can simply substitute a couple of phrases and we get a proof without that *“illegal”* assumption, at the cost of a certain amount of clarity in the proof (in my opinion).

But the *important* thing here is: *All Cauchy sequences are like that*, in that they *seem* to converge to *some point*, which may or may not be a rational number. And we can pretty much point *exactly to where that number would be* on the number line, if it *were* a number! So lastly, let’s try to capture when the *“limits”* of two Cauchy sequences are *equal* – preferably without referring to the limits at all, so we can still use that notion for the *non-convergent* ones:

**Definition:** We call two Cauchy sequences \((a_n)\) and \((b_n)\) **equivalent**, and write \((a_n)\equiv(b_n)\), if and only if \(\lim (a_n-b_n)=0\).

Note that two sequences can be equivalent *even if neither of them converges* – it’s only the sequence of their *element-wise differences* that needs to converge. The point being that we want to be able to think of equivalent sequences as *having-the-same-limit* – and in the case of the *convergent* ones, that already works out perfectly: *Two convergent sequences are equivalent if and only if they have the same limit* (proof left as exercise).
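Here’s a sketch of exactly that situation (Python; the helpers `lower`/`upper` are my own illustrative choices): two Cauchy sequences aiming at *“the square root of \(2\)”* from below and from above – neither converges in the rationals, yet they are equivalent:

```python
from fractions import Fraction
from math import isqrt

# lower(n): largest multiple of 10^-n whose square is below 2;
# upper(n): the next multiple of 10^-n up, whose square is above 2.
def lower(n):
    return Fraction(isqrt(2 * 10 ** (2 * n)), 10 ** n)

def upper(n):
    return lower(n) + Fraction(1, 10 ** n)

# the element-wise differences form the sequence (10^-n), which -> 0:
for n in range(1, 25):
    assert lower(n) ** 2 < 2 < upper(n) ** 2
    assert upper(n) - lower(n) == Fraction(1, 10 ** n)
```

So \((\texttt{lower}(n))\equiv(\texttt{upper}(n))\), even though their common *“limit”* is nowhere to be found among the rationals.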

So, to summarize:

- We can define *sequences* on rational numbers, and what it means for a sequence to *converge to a specific number*.
- There are *some* sequences (namely the *non-convergent Cauchy sequences*) that *seem* to converge to a specific number, but when we try to find the limit, *it doesn’t exist* (in the rationals).
- However, given such a sequence, we *can approximate* its non-existent *“limit”* to an *arbitrary degree of accuracy* with rational numbers (that’s exactly what Cauchy sequences do, after all).

So, *what are these non-existent limits of non-convergent Cauchy sequences?* Are they *numbers?* Are they… *something else?* Do they simply *not exist*? But… I mean, *we know where they are,* don’t we? The answer is of course, that they are *real numbers*. So, let’s make our way towards getting a grip on these weird beasts by talking about *decimal expansions*:

## The Decimal Number System

Just so that nobody runs into the danger of confusing decimal expansions with whatever-real-numbers-are, I will define them separately and completely detached from either rational or real numbers, just so that we have a clear picture of what I’m talking about when I say “decimal expansion”, and so I can use them freely without anyone accusing me of using real numbers before I defined them in the first place. This might seem needlessly over the top, but, you know, *this guy claims that \(0.333\ldots\neq\frac13\),* so… yeah, you can imagine that I need to explain even how *writing down numbers* works, just to make absolutely sure that we all agree on that.

A sequence of digits like \(1995\) is, *at first*, not *itself a number*. By which I mean: it is a *sequence of symbols* that **represents** a number, namely the sequence (“1”, “9”, “9”, “5”). I emphasize that, because I could easily choose a *different representation*, e.g. roman numerals, for *the same number*: \(MCMXCV\). Just like \(\frac 12\) and \(0.5\) are *different representations* of *the same number*. And just like there’s a (somewhat) well-defined system behind roman numerals: \[ \underbrace{\underbrace{M}_{1000}}_{1000+}\underbrace{\underbrace{C}_{100}\underbrace{M}_{1000}}_{(1000-100) +}\underbrace{\underbrace{X}_{10}\underbrace{C}_{100}}_{(100-10)+}\underbrace{\underbrace{V}_5}_5=1000 + 900 + 90 + 5 \]

…so there is a (definitely!) well-defined system behind the decimal notation (and luckily a much more convenient and intuitive one!): \[ \underbrace{1}_{1\cdot1000+}\underbrace{9}_{9\cdot100+}\underbrace{9}_{9\cdot10+}\underbrace{5}_{5\cdot1} = 1\cdot10^3 + 9\cdot10^2 + 9\cdot10^1 + 5\cdot10^0 \]

The digits represent the multiples of the different powers of \(10\). Why exactly \(10\)? Because we have *ten digits, duh*. And, of course, for finitely many digits after the decimal point we can just continue the spiel with negative exponents:

\[\underbrace{7}_{7\cdot100+}\underbrace{2}_{2\cdot10+}\underbrace{8}_{8\cdot1+}.\underbrace{4}_{4\cdot\frac{1}{10}+}\underbrace{1}_{1\cdot\frac{1}{100}+}\underbrace{3}_{3\cdot\frac{1}{1000}} = 7\cdot10^2+2\cdot10^1+8\cdot10^0+4\cdot10^{-1}+1\cdot10^{-2}+3\cdot10^{-3}\]

We can express this in mathspeak as:

Let \(d=z_0\ldots z_m\; .\; a_1\ldots a_n\) be a decimal expansion. Then define \(\textsf{rat}(d):=\textsf{int}(d)+\textsf{float}(d)\), where

\begin{align*}

\textsf{int}(d) &:= z_m+z_{m-1}\cdot10+z_{m-2}\cdot10^2+\ldots+z_0\cdot10^m &&=\sum_{i=0}^mz_{m-i}\cdot10^i &\text{ and }\\

\textsf{float}(d) &:= a_1\cdot10^{-1}+a_2\cdot10^{-2}+\ldots+a_n\cdot10^{-n} &&=\sum_{i=1}^na_i\cdot10^{-i} &

\end{align*}
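In code, this is only a few lines (a Python sketch; I rename the functions to `int_part`/`float_part`, since `int` and `float` are already taken in Python, and use `Fraction` for exact rational arithmetic):

```python
from fractions import Fraction

def int_part(z_digits):
    """int(d): the digits before the decimal point, read in base 10."""
    result = 0
    for z in z_digits:
        result = 10 * result + z     # shift left one decimal place, add digit
    return Fraction(result)

def float_part(a_digits):
    """float(d): a_1*10^-1 + a_2*10^-2 + ... as an exact rational."""
    return sum(Fraction(a, 10 ** i) for i, a in enumerate(a_digits, start=1))

def rat(z_digits, a_digits):
    """rat(d) = int(d) + float(d)."""
    return int_part(z_digits) + float_part(a_digits)

# the 728.413 example from above:
assert rat([7, 2, 8], [4, 1, 3]) == Fraction(728413, 1000)
```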

*Meaning*: \(\textsf{rat}\) is the function that maps *two finite (period-separated) sequences of digits* to the rational number *they actually represent*. Alright? Can I assume that *we all agree this is what decimal numbers mean?* Great, then now for a proper definition and the infinite case:

**Definition:** A **decimal expansion** is a pair \(((z_0,\ldots,z_m),(d_n))\) such that \(z_0,\ldots,z_m\) are (finitely many) digits and \((d_n)\) is a sequence of digits, i.e. \(z_0,\ldots,z_m,d_i \in \{ 0, \ldots,9 \}\) for all \(i\in\mathbb N\).

A **proper decimal expansion** is a decimal expansion \(((z_0,\ldots,z_m),(d_n))\) where there is *no index* \(k\) such that for all subsequent elements \(d_i\), \(i>k\), we have \(d_i=9\). In other words: every element in the sequence has some successor \(\neq9\).

The idea being that we represent a (not yet existing) number such as \(314.141111\ldots\) as the *pair consisting of \((3,1,4)\) and the sequence \((d_n)=(1,4,1,1,1,1,1,\ldots)\)*. The point of the *“proper decimal expansion”*-definition is to exclude those that end with \(999\ldots\) repeating. We don’t need them anyway, because e.g. \(0.999\ldots\) is just \(1\), as everyone except John Gabriel knows. But we will get to that. Why *specifically the digit \(9\)?* Again – because we have 10 digits, \(9\) being the largest one. When e.g. adding two decimal expansions, it’s the digit \(9\) that “flips over” when we increase it, impacting the previous digits. If we were to use a different number system, e.g. base \(8\) (instead of base \(10\)), we would exclude a different digit from repeating indefinitely – e.g. in base \(8\) the digit \(7\).

*Technically,* decimal expansions as I defined them can only represent *positive numbers.* To fix that, we can e.g. define them instead as *triples* \((p,(z_0,\ldots,z_m),(d_n))\), where \(p\in\{+,-\}\) – i.e. we just add the sign separately. But I will ignore negative numbers in the rest of this post for the sake of clarity, assuming that it’s clear to everyone that (and how) we can extend everything that follows to cover negative numbers as well.

Anyway, note how my definition of decimal expansions a) *does not require real (or even rational) numbers* and b) still makes expressions like \(3.3333\ldots\) *well-defined objects.* So far, \(3.3333\ldots\) just happens to *not be a number* (it’s just a pair of sequences of digits, after all), so we can’t do *“number stuff”* with it (add, multiply, whatever), but that we will do later on.

The reason why I’m doing this is that we can now pose and (given some more work) answer the question *what “number” a decimal with infinitely many decimal places is even supposed to be or represent*. After all, we can’t e.g. settle or even meaningfully *ask* the question whether \(0.999\ldots=1\), as long as we’re not *absolutely clear on what the hell the expression \(0.999\ldots\) is supposed to mean exactly.* And that’s the *one, crucially important* question that all the cranks who claim \(0.999\ldots\neq1\) (or as in Gabriel’s case – \(0.333\ldots\neq\frac13\)) etc. *never seem to answer or even ask* – or even realize that it *is a question that needs answering* first.

So, *in what sense* can we consider decimal expansions with infinitely many digits to be *numbers?* To answer that question let’s first only consider *those* decimal expansions that represent *rational* numbers. Obviously, every rational number has some decimal expansion, which we can simply compute via long division (for an explanation of long division I defer to wikipedia). Let’s denote the decimal expansion that corresponds to the rational number \(q\) as \(\textsf{de}(q)\):

**Definition:** Let \(q=\frac ab\in\mathbb Q\). The **decimal expansion \(\textsf{de}(q)\) of \(q\)** is *that* decimal expansion \(((z_0,\ldots,z_m),(a_n))\) that results from doing long division on \(a\div b\); where \(z_0,\ldots,z_m\) are the digits of the resulting integer part and \((a_n)\) the sequence of digits after the decimal point (if finite, extended by infinitely many zeros).

As an example: \(62\div 5=12.4\), hence \(\textsf{de}(\frac{62}5)=((1,2),(4,0,0,0,\ldots))\). And \(1\div3\), when doing long division, results in \(0.333\ldots\) (*yes, Gabriel, it does.* I *know* you don’t believe that \(0.3333\ldots=\frac13\), but *for fucks sake*, you *can* do long division, *can’t you*? Also, *that* video will be dealt with *later*), hence \(\textsf{de}(\frac13)=((0),(3,3,3,3,\ldots))\).

I *could* give an actually *rigorous* definition of \(\textsf{de}\) (obviously, because long division is a simple and clear algorithm) instead of just deferring to long division, but honestly, it would be rather ugly and distract from the fact that *all we’re doing* is long division while separating the *integer part* from the *digits after the decimal point*, and considering both just as sequences of digits. And we all learned long division in school at some point.
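For what it’s worth, phrased as code the “rather ugly” rigorous version is only a few lines – a sketch (my own naming; positive \(a,b\) only, in line with ignoring signs above):

```python
def de(a, b, n_digits=10):
    """Long division of a by b (positive integers): returns the digits of the
    integer part and the first n_digits digits after the decimal point."""
    z = [int(c) for c in str(a // b)]  # integer part, as a list of digits
    r = a % b                          # the running remainder
    digits = []
    for _ in range(n_digits):
        r *= 10
        digits.append(r // b)          # next digit: how often b fits
        r %= b
    return z, digits

# the two examples from above:
assert de(62, 5) == ([1, 2], [4, 0, 0, 0, 0, 0, 0, 0, 0, 0])
assert de(1, 3) == ([0], [3, 3, 3, 3, 3, 3, 3, 3, 3, 3])
```

Of course this only computes a finite prefix of the (infinite) digit sequence \((a_n)\) – the function itself *is* the algorithm that determines the whole sequence.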

However, note that even in referring to long division and their results as expressions of the form \(0.333\ldots\), I *still* haven’t assumed that decimal expansions are *numbers*, or that e.g. \(0.3333\ldots\) is a number. I’m referring to long division purely as a procedure for generating sequences of digits from two *integers* (the numerator and denominator of the rational number expressed as a fraction). *See how fucking nitpicky this guy pushes me to be?*

Anyway, so we can now assign a unique (and even proper) decimal expansion to each rational number, and we can assign a unique rational number to each finite decimal expansion (i.e. those with only trailing zeros – which, just as an aside, also happen to be proper). All we need to do is fill in the rest.

## Series and (finally!) Real Numbers

Okay, so now that we all know (and hopefully agree) what numbers we (as a running example) mean by \(3.33\), \(3.333\), \(3.3333\) etc., we only need to clarify what we mean by \(3.333\ldots\) exactly. In the finite case, the strings of digits \(3.33\), \(3.333\), \(3.3333\) etc. represent the rational numbers \(3+3\cdot10^{-1}+3\cdot10^{-2}+\ldots+3\cdot10^{-k}\) (for some maximal index \(k\)). Expressed differently: \[3.\underbrace{3\ldots3}_{k\text{ times}}=\sum_{i=0}^k3\cdot10^{-i}\] – each such sequence of digits represents *a finite sum*.

Now obviously, we’ll want the *infinite* decimal expansions to represent *“infinite sums”* – i.e. we want

\[3.333\ldots = \sum_{i=0}^{\color{red}{\infty}}3\cdot10^{-i}\]

to hold – but here’s that pesky symbol \(\infty\), and it is *in no sense clear* what an infinite sum is supposed to *mean* exactly – a *finite* sum I can just compute in a finite amount of time, but an *infinite sum?* But, of course, by now we already have all the tools we need to answer that once and for all: nobody would disagree that what the sum *should* mean is *“the result from consecutively adding all the addends of the sum”* – which of course perfectly corresponds to a *sequence*, and we know what it means for a *sequence* to converge. Sequences resulting from summation are called series:

**Definition:** Let \((a_n)\) be a sequence of rationals and \(\displaystyle s_n:=\sum_{i=1}^na_i\) the sum of the first \(n\) elements of the sequence. We call the *sequence of finite partial sums* \((s_n)\) the **series over \((a_n)\)** – denoted as \(\sum a_n\). If \(\sum a_n\) converges, we denote the limit as \(\displaystyle \sum_{n=1}^\infty a_n\). In short:

\[\sum a_n := \left(\sum_{i=1}^na_i\right)_{n\in\mathbb N} \qquad\sum_{n=1}^\infty a_n := \lim_{n\to\infty}\left(\sum_{i=1}^n a_i\right)\]

Now there are three things to mention here:

- Because *Gabriel,* I will explicitly distinguish between a *series* \(\sum a_n\) and its *limit* \(\sum_{n=1}^\infty a_n\) (if the limit exists). My notations for this are *not standard*. Of course, if we know and have proven that a series converges, we – for all practical purposes – don’t *need* to distinguish between the two. Context is, as often, everything. But I want to be absolutely precise here.
- Note that the *order* of the sequence \((a_n)\) – and *each single element of that sequence* – matters when we turn it into a series. Whereas the limit of a sequence is uniquely determined by any of its end segments or infinite subsequences, and we can freely rearrange the summands in a *finite sum*, we can’t do that with *series:* when we change the sequence \((a_n)\), its *series will look different*, because the sequence of partial sums that it represents will be different as well, and *possibly have a different limit – or none*. I just wanted to mention that.
- There’s this somewhat stupid meme going around about \(1+2+3+\ldots=-\frac1{12}\) – and since I know where I get some of my (negligibly few, but still strictly positive amount of) readers from, I’m pretty sure if I don’t go into that now I’m going to regret it. So here’s the thing: if you put forward an expression like \(1+2+3+\ldots\), all that you can *possibly mean by that* (at least to a mathematician) is the series \(\sum n\), i.e. the sequence of its partial sums. This sequence is strictly increasing and unbounded and hence has no limit. If you put forward the claim that this series “equals” some number, what most mathematicians will assume – without further context – is that the series converges to that number, which in the case of the series \(\sum n\) would just be plain wrong.

The origin of this meme are *analytic extensions* – various methods of assigning definite values even to *divergent series,* usually in such a way that they agree with the *classical* notion of convergence for those series that *actually do* converge. There’s a *point* to that, and these analytic extensions are *interesting* and *well-defined* and all that, but they are *not what most mathematicians without further context will mean by the limit of a series.* Consequently, I think people should *either explicitly or implicitly* make sure that it’s unambiguously clear *which* analytic extension (if any) is used when proclaiming that some series *“equals”* some number. So let me be clear on that: I will *only talk about classical convergence as defined here* when I use an expression like \(\sum_{n=1}^\infty a_n = L\).

Okay? Great, then I hope it’s now clear how to turn each decimal expansion into a series:

**Definition:** Let \(d=((z_0,\ldots,z_m),(d_n))\) be a decimal expansion. We associate \(d\) with the series

\[\textsf{se}(d):=\textsf{int}(d)+\sum (d_n\cdot10^{-n}),\]

where \(\textsf{int}\) is the integer part as defined above.

So we have \(\textsf{de}(\frac13)=0.333\ldots\) and now conversely \(\textsf{se}(0.333\ldots)=0+\sum 3\cdot10^{-n}\), and \(\sum_{n=1}^\infty3\cdot10^{-n}=\lim\textsf{se}(0.333\ldots)=\frac13\). *How nice.* And it turns out that for *proper* decimal expansions \(d\) we have that \(\lim\textsf{se}(d)=q\) if and only if \(\textsf{de}(q)=d\) – the two functions \(\textsf{se}\) and \(\textsf{de}\) are *inverses of each other* – on proper decimal expansions whose series converge, at least. *Non-proper* decimal expansions (meaning – their associated series) converge anyway, since (exemplary): \[\lim\textsf{se}(0.999\ldots)=\lim\left(0+\sum 9\cdot10^{-n}\right)=\sum_{n=1}^\infty9\cdot10^{-n}=1\]

(general proof left as exercise)
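Numerically, the partial sums make this quite vivid – a Python sketch (the helper `se_partial` is my own naming) computing elements of the series \(\textsf{se}(0.333\ldots)\) and \(\textsf{se}(0.999\ldots)\), with exactly computable error terms:

```python
from fractions import Fraction

def se_partial(int_part, digit, n):
    """n-th element of se(d) for a constant digit sequence (digit, digit, ...):
    int(d) + sum_{i=1}^{n} digit * 10^-i."""
    return int_part + sum(Fraction(digit, 10 ** i) for i in range(1, n + 1))

for n in [1, 5, 10, 20]:
    s_n = se_partial(0, 3, n)   # partial sums of se(0.333...)
    t_n = se_partial(0, 9, n)   # partial sums of se(0.999...)
    # the distances to 1/3 resp. 1 are exactly 1/(3*10^n) resp. 10^-n:
    assert Fraction(1, 3) - s_n == Fraction(1, 3 * 10 ** n)
    assert 1 - t_n == Fraction(1, 10 ** n)
```

The error terms shrink below any \(\epsilon>0\), which is precisely the claim \(\sum_{n=1}^\infty3\cdot10^{-n}=\frac13\) and \(\sum_{n=1}^\infty9\cdot10^{-n}=1\).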

So we’re left wondering what to do with decimal expansions whose associated series *do not* converge. But, having done all we have done so far, it turns out that all of the following statements are easily provable:

1. For any decimal expansion \(d\), the series \(\textsf{se}(d)\) is a *Cauchy sequence*.
2. We can conversely compute from each Cauchy sequence \((a_n)\) a *proper decimal expansion* \(\textsf{de}((a_n))\) such that \((a_n)\equiv\textsf{se}(\textsf{de}((a_n)))\) – i.e. going back and forth between sequences and decimal expansions preserves equivalence.
3. Two Cauchy sequences \((a_n),(b_n)\) have the same associated decimal expansion *if and only if* they are equivalent: \((a_n)\equiv(b_n)\;\Leftrightarrow\;\textsf{de}((a_n))=\textsf{de}((b_n))\).
4. The (element-wise) *sum, product, difference and quotient* of two Cauchy sequences yield again Cauchy sequences, and all of them *preserve equivalence*; meaning: if \((a_n)\equiv(a'_n)\) and \((b_n)\equiv(b'_n)\), then \((a_n+b_n)\equiv(a'_n+b'_n)\), and the same holds for products, differences and quotients. (In the case of quotients: assuming that the divisor sequence *neither converges to \(0\) nor has any \(0\) in it*.)

…and I’ll even show you how to prove all this stuff (ugly details left as exercise):

**Proof of 1.:** Let \(d\) be any decimal expansion and consider the sequence \[\textsf{se}(d)=\textsf{int}(d)+\sum (d_n\cdot10^{-n})=\left(\textsf{int}(d)+\sum_{i=1}^nd_i10^{-i}\right)_{n\in\mathbb N}.\]

We need to show that (definition of Cauchy sequences:) for any \(\epsilon>0\) there is an index \(n_\epsilon\) such that for any \(k,\ell>n_\epsilon\) we have \(\mid \textsf{se}(d)_k- \textsf{se}(d)_\ell\mid<\epsilon\). So assume we’re given some arbitrarily small \(\epsilon>0\). We choose \(n_\epsilon\) such that \(10^{-n_\epsilon}<\epsilon\). Now for any arbitrary \(k,\ell>n_\epsilon\) (without loss of generality, let’s say \(k>\ell\)) we have:

\begin{align*}

\mid \textsf{se}(d)_k- \textsf{se}(d)_\ell \mid &= \left| \left( \textsf{int}(d)+\sum_{i=1}^kd_i10^{-i}\right) - \left( \textsf{int}(d)+\sum_{i=1}^\ell d_i10^{-i}\right) \right| \\

&= \left| \underbrace{\textsf{int}(d)-\textsf{int}(d)}_{=0} + \underbrace{ \left( \sum_{i=1}^kd_i10^{-i}\right) - \left( \sum_{i=1}^\ell d_i10^{-i}\right) }_{=\sum_{i=\ell+1}^kd_i10^{-i}} \right| \\

&= \left| \sum_{i=\ell+1}^kd_i10^{-i} \right| < 10^{-n_\epsilon} < \epsilon

\end{align*}

QED.

**Proof of 2.:** We will define \(\textsf{de}((a_n))\) the following way: if the sequence \((a_n)\) converges to some rational number \(q\), we just take the decimal expansion \(\textsf{de}(q)\) as defined above (via long division – remember?), so we can assume that \((a_n)\) does not converge. Now I’ll show how we can compute an arbitrary element of our intended decimal expansion by showing how to compute its first \(m\) digits (for arbitrary \(m\)):

Since \((a_n)\) is a Cauchy sequence, there is some index \(k\) such that the distance between all subsequent elements is smaller than \(10^{-(m+1)}\). Now consider the decimal expansion of \(a_k\) up to the first \(m\) digits. If the \(m+1\)st digit is neither a \(9\) nor a \(0\), then we know that the first \(m\) digits are fixed in the sense that all subsequent elements in the sequence will have the same first \(m\) digits. Why? Because all subsequent elements differ at most by \(10^{-(m+1)}\), which means the \(m+1\)st digit can change at most by \(\pm1\), and if it’s not \(0\) or \(9\), the previous digits can’t be impacted by that anymore.

So, what happens if the \(m+1\)st digit is \(0\) or \(9\)? Well – that depends: is the sequence increasing or decreasing from \(a_{m+1}\)? If it is increasing, we pick the index of the next digit \(\neq9\) as a new \(m\) and continue from there. If it’s decreasing, we pick the next digit \(\neq0\). In both cases, we always find such an index (since the sequence doesn’t converge, hence can’t end with \(999\ldots\) or \(000\ldots\)) – and the sequence will ultimately always be larger or always smaller than \(a_{m+1}\), since \(a_{m+1}\) is rational and by assumption \((a_n)\) doesn’t converge, hence it particularly doesn’t converge to \(a_{m+1}\).

In either case, we end up with the first \(m\) digits of a decimal expansion. And notice how we constructed this decimal expansion – namely by basically scanning the sequence until the first \(m\) digits stay fixed, which we do by picking an \(\epsilon<10^{-m}\). The same way we can prove that the resulting series of the decimal expansion actually is equivalent to the original sequence \((a_n)\) – let an arbitrary \(\epsilon\) be given, choose some \(m\) with \(10^{-m}<\epsilon\), find an index such that the first \(m\) digits of both sequences’ decimal expansions stay fixed, then the differences \((a_k-\textsf{se}(d)_k)\) between all later elements will be \(<\epsilon\), hence the differences converge to \(0\), QED.

**Proof of 3.:** Well, that two sequences with the same decimal expansions are equivalent is almost exactly the last part of the previous proof, so that’s fine. The converse – that two equivalent sequences have the same decimal expansion – follows from the way we defined \(\textsf{de}((a_n))\), again by a similar argument.

**Proof of 4.:** Exemplary, I’ll show this for addition: let \((a_n),(b_n)\) be two Cauchy sequences. The sum of two sequences is just element-wise addition, hence we need to show that \((a_n+b_n)\) is a Cauchy sequence. So let \(\epsilon>0\) be arbitrarily small; then, since \((a_n),(b_n)\) are Cauchy, there exist for \(\frac\epsilon2\) indices \(n_a\) and \(n_b\) such that for all \(k,\ell>n_a\) we have \(\mid a_k-a_\ell\mid<\frac\epsilon2\), and for all \(k,\ell>n_b\) we have \(\mid b_k-b_\ell\mid<\frac\epsilon2\). Let \(n_\epsilon=\max\{n_a,n_b\}\); then for any \(k,\ell>n_\epsilon\):

\[\mid (a_k+b_k)-(a_\ell+b_\ell)\mid \leq \underbrace{\mid(a_k-a_\ell)\mid}_{<\frac\epsilon2}+\underbrace{\mid(b_k-b_\ell)\mid}_{<\frac\epsilon2} <\epsilon,\]

hence the sum is a Cauchy sequence.

Now for equivalence: assume \((a_n)\equiv(a'_n)\) and \((b_n)\equiv(b'_n)\). We need to show \((a_n+b_n)\equiv(a'_n+b'_n)\), meaning that the following sequence converges to \(0\):

\begin{align*}

((a_n)_{n\in\mathbb N}+(b_n)_{n\in\mathbb N}) - ((a'_n)_{n\in\mathbb N}+(b'_n)_{n\in\mathbb N})&=(a_n+b_n)_{n\in\mathbb N}-(a'_n+b'_n)_{n\in\mathbb N}\\

&=(a_n+b_n-a'_n-b'_n)_{n\in\mathbb N}\\ &=\underbrace{(a_n-a'_n)_{n\in\mathbb N}}_{\to0}+\underbrace{(b_n-b'_n)_{n\in\mathbb N}}_{\to0},

\end{align*}

hence the whole sequence converges to \(0\), QED.
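To see statement 4 in action on a concrete (if admittedly boring, i.e. convergent) example – a sketch with two pairs of equivalent sequences, hypothetical choices of my own:

```python
from fractions import Fraction

# (a_n) = 1/n and (a'_n) = -1/n are equivalent (their difference 2/n -> 0),
# as are (b_n) = 3 + 1/n^2 and (b'_n) = 3. By statement 4, the element-wise
# sums (a_n + b_n) and (a'_n + b'_n) must then be equivalent as well:
def diff(n):
    a, a2 = Fraction(1, n), Fraction(-1, n)
    b, b2 = 3 + Fraction(1, n * n), Fraction(3)
    return (a + b) - (a2 + b2)

# the element-wise differences shrink to 0 -- here exactly 2/n + 1/n^2:
for n in [1, 10, 100, 1000]:
    assert diff(n) == Fraction(2, n) + Fraction(1, n * n)
```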

*Gosh golly,* it *sure* looks *a lot* like decimal expansions represent *actual numbers,* doesn’t it? I mean – converting to sequences and back we can *add* them, *multiply them,* they contain (representations of) all rational numbers… but, you know, apart from their representations, what really *are* the *“real numbers”* now?

And you’ll be surprised to learn that at this point *it doesn’t even matter anymore*. *Seriously.*

I mean, don’t get me wrong; I can give you at least *three different ways* of formally constructing well-defined sets with well-defined arithmetic operations on them that we can point to and declare to be *“the real numbers”*:

- Let \(\mathbb R\) be the set of all proper decimal expansions, with addition, multiplication etc. defined via their associated series.
- Let \(\mathbb R\) be a representative system of the equivalence classes on rational Cauchy sequences. Meaning: we associate each Cauchy sequence \((a_n)\) with its equivalence class \([(a_n)]\) – i.e. the set of all equivalent Cauchy sequences. Pick one representative from each equivalence class; the result is *“the real numbers”*. Since all operations (as shown/left as exercise) preserve equivalence, this is well-defined independent of the specific representative system – which basically means we could just take the equivalence classes directly. In any case, this is just a slightly technical way to rigorously state that *the real numbers are exactly the Cauchy sequences, where we consider two sequences to be equal if they are equivalent.*
- *Dedekind cuts*: Let \(\mathbb R\) be the set of all subsets \(A\subseteq\mathbb Q\) with the property that if \(a\in A\) and \(b<a\), then \(b\in A\) (think of them as the sets of rational numbers strictly smaller than some real number). Or, if you prefer, the same using \(b>a\) instead of \(b<a\). So basically, there are two ways of using Dedekind cuts.

But, you know, all of them are *equivalent anyway*, so for all practical purposes, which one we choose (if any) is *completely irrelevant!* What *matters* – and this is what all mathematicians agree on when it comes to *“the” real numbers* – is that the following axioms all hold:

- The real numbers \(\mathbb R\) form an *ordered field*. That means:
  - There are elements \(0,1\in\mathbb R\) and operations \(+,\cdot:\mathbb R\times\mathbb R\to\mathbb R\) such that for all \(r\in\mathbb R\) we have \(r+0=r\) and \(r\cdot1=r\).
  - For all \(r\in\mathbb R\) there is some \(-r\in\mathbb R\) such that \(r+(-r)=0\).
  - For all \(r\in\mathbb R\) with \(r\neq0\) there is some \(r^{-1}\in\mathbb R\) such that \(r\cdot(r^{-1})=1\).
  - Both \(+\) and \(\cdot\) are *associative* and *commutative*, i.e. \(r+s=s+r\), \(r+(s+t)=(r+s)+t\), \(r\cdot s=s\cdot r\) and \(r\cdot(s\cdot t)=(r\cdot s)\cdot t\).
  - The *distributive law* holds: \(r\cdot(s+t)=r\cdot s + r\cdot t\).
  - The elements of \(\mathbb R\) are *totally ordered*; meaning there is a *reflexive, anti-symmetric, transitive* (never mind what exactly that means, except that it behaves like a proper ordering) relation \(\leq\) such that for all \(r,s\) we have either \(r\leq s\) or \(s\leq r\) (and we have both if and only if \(r=s\)). Furthermore, this order is compatible with addition and multiplication in the usual way (\(a>b\) and \(c>d\) implies \(a+c>b+d\) etc.).

- There is an embedding \(\mathbb Q\to\mathbb R\) that agrees with addition, multiplication, subtraction and \(\frac 1x\) (meaning: I can consider rational numbers to *“be”* real numbers).
- The real numbers are *archimedean*: for each number \(r\in\mathbb R\) there is some natural number \(n\in\mathbb N\) with \(r<n\) (using the embedding from the previous point).
- The real numbers are *topologically complete*; meaning: every Cauchy sequence of real numbers converges to some real number.

And, of course, all of the above methods of defining the real numbers satisfy all of these properties.

So *what are the real numbers?* Well, *any set that satisfies all of these* can be considered *“the real numbers” –* as long as they do, they’re *all fine*. But the point is, whatever you call *“the real numbers”*; they *have to satisfy these axioms*. If they *don’t, they’re not the real numbers;* at least not what *any mathematician will mean by that.*

And I hope I’ve been detailed and clear enough that it’s obvious *why* we choose to define real numbers this way – it’s the *most natural way* to interpret decimal numbers with infinitely many digits after the decimal point, and it agrees with what real numbers are intuitively *supposed to be:* they allow us to e.g. take the square root of any positive number, all Cauchy sequences actually converge (instead of just looking like they do), they contain the rationals, they *don’t* contain infinitely small or large *“numbers”*… so now that we *know* all of this, we can continue dissecting Gabriel.

*You know, next time.*

(Next post on John Gabriel: “Cauchy’s Kludge”)