(Previous post on John Gabriel: The Dunning-Krüger Effect in a Nutshell)

Few people can ever begin to match my intelligence and depth of insight. I am not arrogant or deluded. — John Gabriel

Yeah. That’s an actual quote. I’ve been made aware of Gabriel’s LinkedIn page, where he wrote hilarious posts about his new calculus and his axioms for arithmetic. And (as someone on Mathematical Mathematics Memes pointed out), it’s becoming increasingly plausible that this guy has some mild form of mental illness, or at least a personality disorder. I mean that without a hint of irony – the narcissism and ignorance of this guy dwarf even Donald Trump’s. Here are just some further choice quotes:

“After Euclid and before me, not a single mathematics academic, ever understood what is a number. That’s quite a big statement, but I have proved it.”

 

“Georg Cantor, whom I consider one of the greatest fools in mathematics and the reason so many have problems with math.”

 

“I loathe mainstream academia and it’s hard for me to restrain myself. My tolerance for stupid people has long ceased to exist.”

 

“I realised many years later, they rejected my discoveries for several reasons, but the one that stood out is the fact that they did not like me personally. Truth or proofs had little to do with the rejection. They decided to libel and defame me, rather than study my ingenious work which is worthy not of one Abel prize, but of ten Abel prizes.”

 

“One would think that given I am helping future generations of aspiring young mathematicians, they would be grateful and welcome this new knowledge I reveal. But no, my life has all but been destroyed by the efforts and attacks of the most vile scum in mainstream academia.”

 

“The NC is the first and only rigorous formulation of calculus in human history. That is an incredible accomplishment given that no one before me was able to do this – not even the so-called greats such as Archimedes, Newton or anyone else. It is no longer debatable, but proven fact.”

 John Gabriel – Jesus, Aristotle, Newton and Einstein all rolled into one. Praise him.

Yeah. Verbatim, people, verbatim. And I don’t think he’s a troll either – he’s been doing this for years, if not decades, and he takes every piece of criticism as a personal attack. He really seems to think he is God’s gift to humanity. So, let’s continue to take him down a notch.

In the second video on John Gabriel’s YouTube channel, he starts ranting about how calculus (unlike his new calculus, which is perfect in every way!) is wrong, which means I might as well use this opportunity to explain why it’s not and in general how this stuff actually works. Unfortunately (or fortunately, depending on your aesthetics) that means getting into serious math territory – many things that Gabriel gets wrong have to do with the fundamental definitions of e.g. convergence, the real numbers etc. However, if we want to see how wrong Gabriel really is, we first need to make sure that we all agree what the “official” (i.e. “right”) definitions of all those concepts are, what motivates these definitions and what their implications are.

Disclaimer: I will assume that we all know and somewhat agree that rational numbers are, like, a thing – that is, numbers that can be expressed as fractions of integers \pm\frac ab. The set of all rational numbers is denoted as \mathbb Q, the set of all natural numbers – i.e. the numbers 1,2,3,\ldots – as \mathbb N. I mention this, because I will have to talk about what “real numbers” really are in modern mathematics – something that Gabriel really doesn’t seem to grasp. Also: Usually I prefer to have 0 to be a natural number, but I specifically exclude it from \mathbb N here, just for convenience – it allows me to e.g. define a sequence (\frac1n) without needing to worry about the case n=0.

Second disclaimer: I’m not a historian. I might get some, many or all of the historical details wrong. I’m writing this pretty much off the top of my head. The same holds for all definitions, proofs etc. With respect to the historical stuff, it doesn’t even matter – after all, almost everything to do with actual mathematics has changed since then, and what’s important is the motivation behind this stuff, not the precise historical development, which is why I can’t be bothered to fact-check this in detail. With respect to the actual math: It’s waaay more fun to redevelop all the concepts off the top of my head, rather than looking everything up in textbooks. So don’t believe anything, check everything for yourself and see whether it works out. I’m still, like, 90% sure that all my definitions are either standard or equivalent to standard definitions, so don’t reject everything I say out of hand either.


The Origins of Calculus

Calculus was developed by Isaac Newton and Gottfried Leibniz. It’s not quite clear who invented it first; it’s not unlikely that they invented it independently of each other, inspired by similar problems. What we do know is that Leibniz published his calculus first and it’s his notations that we still use today. Newton (of course) claimed he invented it first, and he used it to prove that an inverse square law like the one in his theory of gravity would in fact imply elliptical planetary orbits. It’s an astonishing feat of intellect – this guy basically came up with a working, mathematical theory of gravity to explain planetary orbits, and invented completely new mathematics just to prove that it works.

Calculus is (to quote Wikipedia) the mathematical study of continuous change. Its basic objects of interest are continuous functions on the real numbers (often described as “functions whose graph can be drawn in one stroke without lifting the pen”) and its most important notions (besides continuity) are derivatives and (basically the inverse to derivatives) integrals. Nowadays, we define these using limits of sequences, and those limits we define using \epsilon\delta-criteria, which we have to thank Augustin-Louis Cauchy and Karl Weierstrass for.

However, in Newton’s and Leibniz’ times the “limit of a sequence” wasn’t yet a well-defined notion; instead, they used infinitesimal numbers in the development of their theories. So here’s approximately their thought process:

Assume we have some continuous function f. As an example, let’s say f(x)=\frac{1}{10}x^2. Its graph looks like this:

Question: What is the slope of that function at the point x=4? I mean, obviously the function is increasing to the right, but how fast is it increasing? Obviously it’s not increasing “at the same speed” everywhere – otherwise the graph would just be a straight line. So, how can we find out “how fast” the function is increasing at the specific point x=4 – and what does that even mean?

Well, let’s look at two points instead: e.g. x_1=4 and x_2=6. How fast does the function grow in the interval from x_1 to x_2? Now this we can answer: we know f(x_1)=\frac{1}{10}\cdot 4^2=\frac85 and f(x_2)=\frac{1}{10}\cdot 6^2=\frac{18}{5}. So the function has grown by f(x_2)-f(x_1)=\frac{10}{5}=2. That’s an absolute growth of 2 in the interval of length x_2-x_1=2.

Which means: on average, the function grows at a rate of \frac{f(x_2)-f(x_1)}{x_2-x_1}=1 in that interval:

That’s how we measure speeds in practice: note the time x_1 at which e.g. a car passes a fixed point y_1, note the time x_2 at which it passes a second point y_2, and divide the distance y_2-y_1 by the elapsed time x_2-x_1. This will give you the average speed in the time period from x_1 to x_2.

But of course, it doesn’t give you the exact slope at the singular point x_1=4. But it might give you an idea how to get there: If we decrease the distance between x_1 and x_2 (assuming the function doesn’t do weird stuff in between), we will be somewhat closer to the exact slope. For example, if we pick x_2=5, then f(x_2)=\frac{5}{2} and thus the average growth is \frac{f(x_2)-f(x_1)}{x_2-x_1}=\frac{9}{10}.
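To see this shrinking-interval idea in action, here’s a minimal Python sketch (the function names are mine, purely for illustration) that reproduces the two computations above and then keeps shrinking the interval:

```python
def f(x):
    """The example function f(x) = x^2 / 10."""
    return x**2 / 10

def average_slope(f, x1, x2):
    """Average rate of change of f over the interval from x1 to x2."""
    return (f(x2) - f(x1)) / (x2 - x1)

print(average_slope(f, 4, 6))  # 1.0, as computed above
print(average_slope(f, 4, 5))  # 0.9, as computed above

# Shrinking the interval towards the single point x = 4:
for x2 in (4.5, 4.1, 4.01, 4.001):
    print(x2, average_slope(f, 4, x2))  # the values settle ever closer to some fixed number
```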

And here’s Newton’s and Leibniz’ mental leap: If we decrease the distance between x_1 and x_2 to the point where it is infinitesimally small, then we will get the exact slope of f at the point x_1 (or x_2 – the difference between the slopes will also be infinitesimally small)!

So, let’s assume we have some infinitesimally small \partial x (whatever that means), then the derivative f'(x) of f (i.e. the slope of f at the point x) is given by \displaystyle\frac{f(x+\partial x)-f(x)}{\partial x}.

For our function, that means:

\displaystyle f'(x)=\frac{f(x+\partial x)-f(x)}{\partial x}=\frac{\frac{1}{10}(x+\partial x)^2-\frac{1}{10}x^2}{\partial x}=\frac{1}{5}x + \frac 1{10}\partial x

…and (so the reasoning goes) since \partial x is just an infinitesimally small number and hence ultimately negligible, we can ignore it and get f'(x)=\frac 15 x, and hence we finally get the exact value f'(4)=\frac 45.
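Numerically you can watch that leftover term vanish – a quick sketch of mine evaluating the difference quotient for ever smaller (but ordinary, non-infinitesimal) values standing in for \partial x:

```python
def f(x):
    return x**2 / 10

x = 4
for dx in (1.0, 0.1, 0.01, 0.001):
    quotient = (f(x + dx) - f(x)) / dx
    # Algebraically this equals x/5 + dx/10 = 0.8 + dx/10,
    # so the dx-dependent leftover shrinks together with dx.
    print(dx, quotient)
```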


Obviously, there are problems with that reasoning: What the hell are those “infinitesimal numbers” that are suddenly introduced, that I can apparently add and multiply and divide by (I mean – I can’t divide by zero, but I can divide by something that’s “infinitely close” to zero?), but then in the end I just ignore them? What’s that all about? Is this supposed to make sense? And if \partial x is “infinitesimally small”, shouldn’t that mean that \frac 1{\partial x} would have to be infinitely large? Does that still make sense? What’s going on here? Aaaaaaaah!

Well… the thing is… it sort-of works. At least for relatively simple functions like the one I used as an example, it yields meaningful results, regardless of how weird the reasoning to justify the method is. But infinitesimals were never quite satisfactory, which is why Cauchy and Weierstrass tried to put the whole thing on a more solid basis.

Interestingly enough, this whole infinitesimal stuff was actually formally grounded in a rigorous way in the 20th century (and resurrected as “non-standard analysis”). But the way “standard” mathematicians interpret and think about calculus and real numbers in general is in terms of Cauchy sequences, limits and \epsilon\delta-criteria, so let’s explain the modern foundation for calculus now.


Sequences, Limits and Differentiability

Definition: A sequence of rationals is simply a function a:\mathbb N\to\mathbb Q – i.e. a function that maps each natural number n\in\mathbb N to some rational number a(n)\in\mathbb Q.

Sequences are usually denoted as (a_n)_{n\in\mathbb N} (or in short just (a_n)) and the individual elements as a_i (instead of a(i) – i.e. we just write the function argument as an index).

So, why are sequences interesting? Consider the following two examples (see the short code sketch right after them):

  1. (a_n) = (1,2,3,4\ldots) (i.e. a(n):=n) and
  2. (b_n)=(1,\frac12,\frac13,\frac14,\ldots) (i.e. b(n):=\frac1n).
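In code terms, a sequence really is just a function on the natural numbers – here’s a tiny sketch of the two examples (names a and b as in the list above), using Python’s exact rational numbers since we’re dealing with sequences of rationals:

```python
from fractions import Fraction

def a(n):
    """The sequence a_n = n."""
    return Fraction(n)

def b(n):
    """The sequence b_n = 1/n."""
    return Fraction(1, n)

print([str(a(n)) for n in range(1, 6)])  # ['1', '2', '3', '4', '5']
print([str(b(n)) for n in range(1, 6)])  # ['1', '1/2', '1/3', '1/4', '1/5']
```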

There’s something fundamentally different about the two: Obviously, if we increase n, the first sequence (a_n) will strictly increase as well, while the second one (b_n) strictly decreases. Okay, that’s not too interesting, but if we look closer, we notice that the first sequence is also unbounded: Pick an arbitrarily large number r – at some point the first sequence will grow larger than r (just pick any natural number k larger than r, then a_k=k>r). For the second sequence however, we can give a lower bound; e.g. -1. Even though (b_n) strictly decreases, it will never become smaller than -1.

But of course, we can give a “better” lower bound than -1 – namely 0. This is also a lower bound, because all the elements of (b_n) are strictly positive; hence no element will ever be \leq0. In fact, 0 is the largest lower bound (or infimum) of the sequence, and the larger a natural number k we choose, the closer the sequence element b_k will be to 0.

It’s consequently not completely absurd to suggest that the sequence (b_n) approaches 0 in such a way that we may meaningfully say that 0 is the limit of the sequence (b_n). In contrast, (a_n) does not seem to have such a limit – the sequence just gets larger and larger with no bound in sight (we could say that the limit of the sequence is “infinity”, but infinity is not a number per se, and infinities are – without a careful formal treatment! – problematic anyway). We say the sequence (b_n) converges towards 0, and the sequence (a_n) diverges. Now let’s properly define those two terms:

Definition: Let (a_n) be a sequence of rationals. Assume there is some rational number L such that the following holds:

For any arbitrarily small rational number \epsilon>0 there is some index n_\epsilon\in\mathbb N such that for any index k>n_\epsilon the distance \mid a_k - L\mid is smaller than \epsilon. In logical notation:

    \[\forall\epsilon>0\;\exists n_\epsilon\in\mathbb N\;\forall k>n_\epsilon\; \mid a_k - L\mid <\epsilon\]

Then we say the sequence (a_n) converges to L and write \displaystyle \lim_{n\to\infty}a_n=L or \lim a_n=L.

If no such L exists, we say the sequence diverges.

Okay, this looks a bit complicated, so let’s explain it in more detail: We say a sequence (a_n) converges to some number L, if we can get “arbitrarily close” to L by making the index n of our sequence larger. This “arbitrarily close” we can express formally by thinking about it as a kind of game: You tell me how close to L you want to be, by giving me an (arbitrarily small) distance \epsilon\in\mathbb Q. Then I’ll give you an index n_\epsilon in return, such that all subsequent elements in the sequence are closer to L than your chosen distance \epsilon – i.e. for all subsequent indices k>n_\epsilon, we have \mid a_k-L\mid<\epsilon. If I can always give you such an initial index, no matter how small a distance \epsilon you choose, then I can adequately say that the sequence converges towards L.
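For the sequence (b_n) from above and L=0, the game is easy to win explicitly: given any \epsilon, every index beyond \frac1\epsilon works. A small sketch of that strategy (the helper name n_epsilon is mine), assuming b_n=\frac1n:

```python
import math
from fractions import Fraction

def n_epsilon(eps):
    """A winning move for b_n = 1/n and L = 0: for every k > ceil(1/eps)
    we have |1/k - 0| = 1/k < eps."""
    return math.ceil(1 / eps)

eps = Fraction(1, 1000)
n = n_epsilon(eps)
print(n)  # 1000
# Spot-check a few of the (infinitely many) indices k > n_epsilon:
assert all(abs(Fraction(1, k) - 0) < eps for k in range(n + 1, n + 100))
```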

Alright? So far, so good. Now we can use limits of sequences to define the limit of a function f at a point p_0. Why should we? Well, look at the function f(x)=\frac{x^2-1}{x-1}, for example. This function is not well-defined at x=1, because then the denominator would be 0 – i.e. f(1) “doesn’t exist”. But, you know, here’s what this function looks like:

In fact, the function is equal to the function x+1 everywhere except at x=1! Annoying, but if we build a sequence (a_n) that converges to 1 (for example the sequence (1+1,1+\frac{1}{2},1+\frac{1}{3},1+\frac{1}{4},\ldots)), then we can define f(1) as the limit of the sequence (f(a_n)) (the resulting, now everywhere-defined, function is called the continuous extension of f), which happens to work out nicely and give us f(1)=2. Problem solved!
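A quick numerical sanity check of that claim (my own sketch; exact rational arithmetic conveniently sidesteps the undefined point x=1):

```python
from fractions import Fraction

def f(x):
    """f(x) = (x^2 - 1)/(x - 1); not defined at x = 1."""
    return (x**2 - 1) / (x - 1)

for n in (1, 2, 10, 100, 1000):
    x_n = 1 + Fraction(1, n)   # the sequence element a_n = 1 + 1/n
    print(n, f(x_n))           # f(a_n) = 2 + 1/n, marching towards 2
```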

…eeeexcept, of course, that this only makes sense if the sequence (f(a_n)) converges at all, and – more notably – that the limit does not depend on the specific sequence (a_n). So instead of defining the limit of a function using sequences, we will use another \epsilon\delta-criterion:

Definition: Let f:A\to\mathbb Q be a function on rationals (i.e. A\subseteq\mathbb Q) and p_0\in\mathbb Q. If there is some number L such that the following holds:

For every arbitrarily small rational number \epsilon>0, there exists some \delta_\epsilon>0 such that for every x\in A with \mid x-p_0\mid<\delta_\epsilon we have \mid f(x)-L\mid<\epsilon. In logical notation:

    \[\forall\epsilon>0\;\exists\delta_\epsilon>0\;\forall x\in A\; (\mid x-p_0\mid<\delta_\epsilon \;\Longrightarrow\; \mid f(x)-L\mid<\epsilon).\]

Then we call L the limit of f at p_0 and write \displaystyle \lim_{x\to p_0}f(x)=L.

The idea being a similar game as in the definition of convergence for sequences: You tell me any arbitrarily small distance \epsilon\in\mathbb Q to the (supposed) limit L you want to have, and in return I will give you a distance \delta_\epsilon, such that if any x\in A is closer to p_0 than \delta_\epsilon, then f(x) will be closer to L than \epsilon. If I can always give you such a \delta, no matter which \epsilon you pick, then I win and L is indeed the limit of f at p_0.
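For our example f(x)=\frac{x^2-1}{x-1} at p_0=1 with L=2, I can even state my winning strategy in closed form: for x\neq1 we have \mid f(x)-2\mid=\mid x+1-2\mid=\mid x-1\mid, so \delta_\epsilon=\epsilon always works. A sketch (helper name mine, illustration only):

```python
def f(x):
    """f(x) = (x^2 - 1)/(x - 1), defined for x != 1."""
    return (x**2 - 1) / (x - 1)

def delta_epsilon(eps):
    """Since |f(x) - 2| = |x - 1| for x != 1, delta = eps wins the game."""
    return eps

eps = 1e-6
delta = delta_epsilon(eps)
# Spot-check some x within delta of p0 = 1 (but x != 1):
for x in (1 + delta / 2, 1 - delta / 3, 1 + delta / 10):
    assert abs(f(x) - 2) < eps
print("delta =", delta, "works for eps =", eps)
```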

Alright, and now we can finally define derivatives using function limits – the idea being that instead of picking an “infinitesimal number”, we take the function limit of the difference quotients:

Definition: Let f:A\to\mathbb Q be a function on rationals and p_0\in A. If the limit

    \[\lim_{x\to p_0}\frac{f(x)-f(p_0)}{x-p_0}\]

exists, we call f differentiable at p_0. If f is differentiable at every point in A we call f differentiable, and the function

    \[ f' : A \to \mathbb Q\qquad f'(x):=\lim_{y\to x}\frac{f(y)-f(x)}{y-x}=\lim_{h\to 0}\frac{f(x+h)-f(x)}h\]

the derivative of f.
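If you’d rather not do the limit algebra by hand, a computer algebra system can evaluate the defining limit directly – a sketch assuming sympy is installed (my own illustration, not something from the post):

```python
import sympy as sp

x, h = sp.symbols('x h')
f = x**2 / 10                  # our running example f(x) = x^2/10

# The limit of the difference quotient as h -> 0:
derivative = sp.limit((f.subs(x, x + h) - f) / h, h, 0)
print(derivative)              # x/5
print(derivative.subs(x, 4))   # 4/5, i.e. the f'(4) we computed earlier
```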

You’ll note that this is exactly what Newton and Leibniz did, except that we got rid of those pesky infinitesimals and only used notions that are formally and rigorously defined – there’s no room for ambiguity anymore. Furthermore, all of this works perfectly and beautifully – for example, all of the following highly desirable properties (assuming all the occurring limits exist and, in the case of 7., are nonzero) hold and can be easily proven using the above definitions (left as an exercise; a quick numerical sanity check follows the list):

  1. The limit of a convergent sequence is unique,
  2. \displaystyle \left(\lim a_n\right) + \left(\lim b_n\right) = \lim (a_n + b_n)
  3. \displaystyle \left(\lim a_n\right) \cdot \left(\lim b_n\right) = \lim (a_n \cdot b_n)
  4. \displaystyle \left( \lim_{x\to x_0} f(x) \right) + \left( \lim_{x\to x_0} g(x) \right) = \lim_{x\to x_0} (f(x) + g(x))
  5. \displaystyle \left( \lim_{x\to x_0} f(x) \right) \cdot \left( \lim_{x\to x_0} g(x) \right) = \lim_{x\to x_0} (f(x) \cdot g(x))
  6. \lim a_n = \lim b_n if and only if \lim (a_n - b_n) = 0
  7. \displaystyle \frac 1{\lim a_n} = \lim\frac{1}{a_n} \qquad \frac{1}{\lim_{x\to x_0}f(x)}=\lim_{x\to x_0}\frac 1{f(x)}
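And here’s the promised sanity check for properties 2 and 3 – a numerical sketch of mine (emphatically not a proof): take a_n = 2+\frac1n with limit 2 and b_n = 3+\frac1{n^2} with limit 3:

```python
from fractions import Fraction

def a(n):
    return 2 + Fraction(1, n)       # lim a_n = 2

def b(n):
    return 3 + Fraction(1, n**2)    # lim b_n = 3

for n in (10, 1000, 100000):
    print(n, float(a(n) + b(n)), float(a(n) * b(n)))
# The sums approach 2 + 3 = 5 and the products approach 2 * 3 = 6,
# just as properties 2 and 3 demand.
```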

…and we didn’t even touch the real numbers yet!

(Next post on John Gabriel: Calculus 102 (Cauchy Sequences and the Real Numbers))