(Previous post on John Gabriel: The Dunning-Krüger Effect in a Nutshell)
Few people can ever begin to match my intelligence and depth of insight. I am not arrogant or deluded.
John Gabriel
Yeah. That’s an actual quote. I’ve been made aware of Gabriel’s LinkedIn page, where he wrote hilarious posts about his new calculus and his axioms for arithmetic. And (as someone on Mathematical Mathematics Memes pointed out), it’s becoming increasingly plausible that this guy has some mild form of mental illness, or at least a personality disorder. I mean that without a hint of irony – the narcissism and ignorance of this guy dwarf even Donald Trump’s. Here are just some further choice quotes:
“After Euclid and before me, not a single mathematics academic, ever understood what is a number. That’s quite a big statement, but I have proved it.”
“Georg Cantor, whom I consider one of the greatest fools in mathematics and the reason so many have problems with math.”
“I loathe mainstream academia and it’s hard for me to restrain myself. My tolerance for stupid people has long ceased to exist.”
“I realised many years later, they rejected my discoveries for several reasons, but the one that stood out is the fact that they did not like me personally. Truth or proofs had little to do with the rejection. They decided to libel and defame me, rather than study my ingenious work which is worthy not of one Abel prize, but of ten Abel prizes.”
“One would think that given I am helping future generations of aspiring young mathematicians, they would be grateful and welcome this new knowledge I reveal. But no, my life has all but been destroyed by the efforts and attacks of the most vile scum in mainstream academia.”
“The NC is the first and only rigorous formulation of calculus in human history. That is an incredible accomplishment given that no one before me was able to do this – not even the so-called greats such as Archimedes, Newton or anyone else. It is no longer debatable, but proven fact.”
John Gabriel – Jesus, Aristotle, Newton and Einstein all rolled into one. Praise him.
Yeah. Verbatim, people, verbatim. And I don’t think he’s a troll either – he’s been doing this for years, if not decades, and he takes every piece of criticism as a personal attack. He really seems to think he is god’s gift to humanity. So, let’s continue to take him down a notch.
In the second video on John Gabriel’s YouTube channel, he starts ranting about how calculus (unlike his new calculus, which is perfect in every way!) is wrong, which means I might as well use this opportunity to explain why it’s not and in general how this stuff actually works. Unfortunately (or fortunately, depending on your aesthetics) that means getting into serious math territory – many things that Gabriel gets wrong have to do with the fundamental definitions of e.g. convergence, the real numbers etc. However, if we want to see how wrong Gabriel really is, we first need to make sure that we all agree what the “official” (i.e. “right”) definitions of all those concepts are, what motivates these definitions and what their implications are.
Disclaimer: I will assume that we all know and somewhat agree that rational numbers are, like, a thing – that is, numbers that can be expressed as fractions of integers \(\pm\frac ab\). The set of all rational numbers is denoted as \(\mathbb Q\), the set of all natural numbers – i.e. the numbers \(1,2,3,\ldots\) – as \(\mathbb N\). I mention this because I will have to talk about what “real numbers” really are in modern mathematics – something that Gabriel really doesn’t seem to grasp. Also: Usually I prefer \(0\) to be a natural number, but I specifically exclude it from \(\mathbb N\) here, just for convenience – it allows me to e.g. define a sequence \((\frac1n)\) without needing to worry about the case \(n=0\).
Second disclaimer: I’m not a historian. I might get some, many or all of the historical details wrong. I’m writing this pretty much off the top of my head. The same holds for all definitions, proofs etc. With respect to the historical stuff, it doesn’t even matter – after all, almost everything to do with actual mathematics has changed since then, and what’s important is the motivation behind this stuff, not the precise historical development, which is why I can’t be bothered to fact check this in detail. With respect to the actual math: It’s waaay more fun to redevelop all the concepts off the top of my head, rather than looking everything up in textbooks. So don’t believe anything, check everything for yourself and see whether it works out. I’m still, like, 90% sure that all my definitions are either standard or equivalent to standard definitions, so don’t reject everything I say out of hand either.
The Origins of Calculus
Calculus was developed by Isaac Newton and Gottfried Leibniz. It’s not quite clear who invented it first; it’s not unlikely that they invented it independently of each other, inspired by similar problems. What we do know is that Leibniz published his calculus first, and it’s his notation that we still use today. Newton (of course) claimed he invented it first, and he used it to prove that an inverse square law like the one in his theory of gravity would in fact imply elliptical planetary orbits. It’s an astonishing feat of intellect – this guy basically came up with a working, mathematical theory of gravity to explain planetary orbits, and invented completely new mathematics just to prove that it works.
Calculus is (quote Wikipedia) the mathematical study of continuous change. Its basic objects of interest are continuous functions on the real numbers (often described as “functions whose graph can be drawn in one stroke without lifting the pen”) and its most important notions (besides continuity) are derivatives and (basically the inverse of derivatives) integrals. Nowadays, we define both of these using limits, and limits we define using \(\epsilon\)-\(\delta\)-criteria, which we have to thank Augustin-Louis Cauchy and Karl Weierstrass for.
However, in Newton’s and Leibniz’ times the “limit of a sequence” wasn’t yet a well-defined notion; instead, they used infinitesimal numbers in the development of their theories. So here’s approximately their thought process:
Assume we have some continuous function \(f\). As an example, let’s say \(f(x)=\frac{1}{10}x^2\). Its graph looks like this:
[Graph of \(f(x)=\frac{1}{10}x^2\)]
Question: What is the slope of that function at the point \(x=4\)? I mean, obviously the function is increasing to the right, but how fast is it increasing? Obviously it’s not increasing “at the same speed” everywhere – otherwise the graph would just be a straight line. So, how can we find out “how fast” the function is increasing at the specific point \(x=4\) – and what does that even mean?
Well, let’s look at two points instead: e.g. \( x_1=4\) and \( x_2=6\). How fast does the function grow in the interval from \(x_1\) to \(x_2\)? Now this we can answer: we know \( f(x_1)=\frac{1}{10}\cdot 4^2=\frac85\) and \( f(x_2)=\frac{1}{10}\cdot 6^2=\frac{18}{5}\). So the function has grown by \( f(x_2)-f(x_1)=\frac{10}{5}=2\). That’s an absolute growth of 2 in the interval of length \( x_2-x_1=2\).
Which means: on average, the function grows at a rate of \( \frac{f(x_2)-f(x_1)}{x_2-x_1}=1\) in that interval:
[Graph of \(f\) with the secant line through the points at \(x_1=4\) and \(x_2=6\)]
That’s how we measure speeds in practice: note the time \( x_1\) at which e.g. a car passes a fixed point \( y_1\), the time \( x_2\) at which it passes a second point \( y_2\), and divide the distance \( y_2-y_1\) by the time it took, i.e. \( x_2-x_1\). This will give you the average speed in the time period from \( x_1\) to \( x_2\).
Of course, this doesn’t give you the exact slope at the single point \( x_1=4\). But it might give you an idea how to get there: If we decrease the distance between \( x_1\) and \( x_2\) (assuming the function doesn’t do weird stuff in between), we will be somewhat closer to the exact slope. For example, if we pick \( x_2=5\), then \( f(x_2)=\frac{5}{2}\) and thus the average growth rate is \( \frac{f(x_2)-f(x_1)}{x_2-x_1}=\frac{9}{10}\).
And here’s Newton’s and Leibniz’ mental leap: If we decrease the distance between \( x_1\) and \( x_2\) to the point where it is infinitesimally small, then we will get the exact slope of \( f\) at the point \( x_1\) (or \( x_2\) – the difference between the slopes will also be infinitesimally small)!
So, let’s assume we have some infinitesimally small \( \partial x\) (whatever that means), then the derivative \( f'(x)\) of \( f\) (i.e. the slope of \( f\) at the point \( x\)) is given by \( \displaystyle\frac{f(x+\partial x)-f(x)}{\partial x}\).
For our function, that means:
\( \displaystyle f'(x)=\frac{f(x+\partial x)-f(x)}{\partial x}=\frac{\frac{1}{10}(x+\partial x)^2-\frac{1}{10}x^2}{\partial x}=\frac{1}{5}x + \frac 1{10}\partial x\)
…and (so the reasoning goes) since \( \partial x\) is just an infinitesimally small number and hence ultimately negligible, we can ignore it and get \( f'(x)=\frac 15 x\), which finally gives us the exact value \( f'(4)=\frac 45\).
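Just to see this algebra happen mechanically, here’s a minimal sketch in Python using sympy (my choice of tool, and obviously nothing Newton or Leibniz had): we treat \( \partial x\) as an ordinary symbol, expand the difference quotient, and only at the very end let it go to \(0\):

```python
# A minimal sketch of the computation above, using sympy (assuming you
# have it installed). We treat the "infinitesimal" dx as an ordinary
# symbol, expand the difference quotient, and only at the very end
# let dx go to 0.
import sympy

x, dx = sympy.symbols('x dx')

def f(t):
    return t**2 / 10  # our example function f(x) = x^2/10

quotient = sympy.expand((f(x + dx) - f(x)) / dx)
print(quotient)                  # dx/10 + x/5

# "Ignoring" the negligible term amounts to taking the limit dx -> 0:
derivative = sympy.limit(quotient, dx, 0)
print(derivative)                # x/5
print(derivative.subs(x, 4))     # 4/5
```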
Obviously, there are problems with that reasoning: What the hell are those “infinitesimal numbers” that are suddenly introduced, that I can apparently add and multiply and divide by (I mean – I can’t divide by zero, but I can divide by something that’s “infinitely close” to zero?), but then in the end I just ignore them? What’s that all about? Is this supposed to make sense? And if \( \partial x\) is “infinitesimally small”, shouldn’t that mean that \( \frac 1{\partial x}\) would have to be infinitely large? Does that still make sense? What’s going on here? Aaaaaaaah!
Well… the thing is… it sort-of works. At least for relatively simple functions like the example I used, it yields meaningful results, regardless of how weird the reasoning used to justify the method is. But infinitesimals were never quite satisfactory, which is why Cauchy and Weierstrass tried to put the whole thing on a more solid basis.
Interestingly enough, this whole infinitesimal stuff was actually formally grounded in a rigorous way in the 20th century (and resurrected as “non-standard calculus”). But the way “standard” mathematicians interpret and think about calculus and real numbers in general is in terms of Cauchy sequences, limits and \( \epsilon\)-\( \delta\)-criteria, so let’s explain the modern foundation for calculus now.
Sequences, Limits and Differentiability
Definition: A sequence of rationals is simply a function \( a:\mathbb N\to\mathbb Q\) – i.e. a function that maps each natural number \( n\in\mathbb N\) to some rational number \( a(n)\in\mathbb Q\).
Sequences are usually denoted as \( (a_n)_{n\in\mathbb N}\) (or in short just \( (a_n)\)) and the individual elements as \( a_i\) (instead of \( a(i)\) – i.e. we just write the function argument as an index).
So, why are sequences interesting? Consider the following two examples:
- \( (a_n) = (1,2,3,4\ldots)\) (i.e. \( a(n):=n\)) and
- \( (b_n)=(1,\frac12,\frac13,\frac14,\ldots)\) (i.e. \( b(n):=\frac1n\)).
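Since a sequence is literally just a function \( \mathbb N\to\mathbb Q\), we can write these two examples down directly. Here’s a minimal sketch in Python, using the fractions module so that we get exact rational arithmetic:

```python
# Minimal sketch: a sequence of rationals is literally a function N -> Q.
# Python's fractions module gives us exact rational arithmetic, so there
# are no floating-point approximations involved.
from fractions import Fraction

def a(n: int) -> Fraction:
    return Fraction(n)        # a_n = n

def b(n: int) -> Fraction:
    return Fraction(1, n)     # b_n = 1/n

print([a(n) for n in range(1, 5)])   # 1, 2, 3, 4, ...
print([b(n) for n in range(1, 5)])   # 1, 1/2, 1/3, 1/4, ...
```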
There’s something fundamentally different about the two: Obviously, if we increase \( n\), the first sequence \( (a_n)\) will strictly increase as well, while the second one \( (b_n)\) strictly decreases. Okay, that’s not too interesting, but if we look closer, we notice that the first sequence is also unbounded: Pick an arbitrarily large number \( r\) – at some point the first sequence will grow larger than \( r\) (just pick any natural number \( k\) larger than \( r\), then \( a_k=k>r\)). For the second sequence however, we can give a lower bound; e.g. \( -1\). Even though \( (b_n)\) strictly decreases, it will never become smaller than \( -1\).
But of course, we can give a “better” lower bound than \( -1\) – namely \( 0\). This is also a lower bound, because all the elements of \( b_n\) are strictly positive; hence no element will ever be \( \leq0\). In fact, \( 0\) is the largest lower bound (or infimum) of the sequence, and the larger a natural number \( k\) we choose, the closer the sequence element \( b_k\) will be to \( 0\).
It’s consequently not completely absurd to suggest that the sequence \( (b_n)\) approaches \( 0\) in such a way that we may meaningfully say that \( 0\) is the limit of the sequence \( (b_n)\). In contrast, \( (a_n)\) does not seem to have such a limit – the sequence just gets larger and larger with no bound in sight (we could say that the limit of the sequence is “infinity”, but infinity is not a number per se, and infinities are – without a careful formal treatment! – problematic anyway). We say the sequence \( (b_n)\) converges towards \( 0\), and the sequence \( (a_n)\) diverges. Now let’s properly define those two terms:
Definition: Let \( (a_n)\) be a sequence of rationals. Assume there is some rational number \( L\) such that the following holds:
For any arbitrarily small rational number \( \epsilon>0\) there is some index \( n_\epsilon\in\mathbb N\) such that for any index \( k>n_\epsilon\) the distance \( |a_k - L|\) is smaller than \( \epsilon\). In logical notation: \[\forall\epsilon>0\;\exists n_\epsilon\in\mathbb N\;\forall k>n_\epsilon\;\; |a_k - L| <\epsilon\]
Then we say the sequence \( (a_n)\) converges to \( L\) and write \( \displaystyle \lim_{n\to\infty}a_n=L\) or \( \lim a_n=L\).
If no such \( L\) exists, we say the sequence diverges.
Okay, this looks a bit complicated, so let’s explain it in more detail: We say a sequence \( (a_n)\) converges to some number \( L\), if we can get “arbitrarily close” to \( L\) by making the index \( n\) of our sequence larger. This “arbitrarily close” we can express formally by thinking about it as a kind of game: You tell me how close to \( L\) you want to be, by giving me an (arbitrarily small) distance \( \epsilon\in\mathbb Q\). Then I’ll give you an index \( n_\epsilon\) in return, such that all subsequent elements in the sequence are closer to \( L\) than your chosen distance \( \epsilon\) – i.e. for all subsequent indices \( k>n_\epsilon\), we have \( |a_k-L|<\epsilon\). If I can always give you such an index, no matter how small a distance \( \epsilon\) you choose, then I can adequately say that the sequence converges towards \( L\).
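We can actually play this game for our sequence \( (b_n)=(\frac1n)\) and its limit \( 0\): whatever \( \epsilon\) you pick, the index \( n_\epsilon=\lceil\frac1\epsilon\rceil\) wins, since \( k>\frac1\epsilon\) implies \( \frac1k<\epsilon\). A minimal sketch (again with exact fractions):

```python
# The convergence "game" for b_n = 1/n and the limit L = 0, as a sketch:
# given any epsilon > 0, we answer with n_eps = ceil(1/epsilon); then
# every k > n_eps satisfies |b_k - 0| = 1/k < epsilon.
from fractions import Fraction
from math import ceil

def n_eps(epsilon: Fraction) -> int:
    return ceil(1 / epsilon)

for eps in [Fraction(1, 10), Fraction(1, 1000), Fraction(1, 10**6)]:
    n = n_eps(eps)
    # spot-check the next 100 elements after the returned index:
    assert all(Fraction(1, k) < eps for k in range(n + 1, n + 101))
    print(f"eps = {eps}: n_eps = {n}")
```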
Alright? So far, so good. Now we can use limits of sequences to define the limit of a function \( f\) at a point \( p_0\). Why should we? Well, look at the function \( f(x)=\frac{x^2-1}{x-1}\), for example. This function is not well-defined at \( x=1\), because then the denominator would be \( 0\) – i.e. \( f(1)\) “doesn’t exist”. But, you know, here’s what this function looks like:
[Graph of \(f(x)=\frac{x^2-1}{x-1}\): the straight line \(x+1\) with a hole at \(x=1\)]
In fact, the function is equal to the function \( x+1\) everywhere except at \( x=1\)! Annoying, but if we build a sequence \( (a_n)\) that converges to \( 1\) (for example the sequence \( (1+1,1+\frac{1}{2},1+\frac{1}{3},1+\frac{1}{4},\ldots)\)), then we can define \( f(1)\) as the limit of the sequence \( (f(a_n))\) (the resulting, now everywhere-defined, function is called the continuous extension of \( f\)), which happens to work out nicely and give us \( f(1)=2\). Problem solved!
…eeeexcept, of course, that this only makes sense if the sequence \( (f(a_n))\) converges at all, and – more notably – that the limit does not depend on the specific sequence \( (a_n)\). So instead of defining the limit of a function using sequences, we will use another \( \epsilon\)-\( \delta\)-criterion:
Definition: Let \( f:A\to\mathbb Q\) be a function on rationals (i.e. \( A\subseteq\mathbb Q\)) and \( p_0\in\mathbb Q\). If there is some number \( L\) such that the following holds:
For every arbitrarily small rational number \( \epsilon>0\), there exists some \( \delta_\epsilon>0\) such that for every \( x\in A\) with \( |x-p_0|<\delta_\epsilon\) we have \( |f(x)-L|<\epsilon\). In logical notation: \[\forall\epsilon>0\;\exists\delta_\epsilon>0\;\forall x\in A\; (|x-p_0|<\delta_\epsilon \;\Longrightarrow\; |f(x)-L|<\epsilon).\]
Then we call \( L\) the limit of \( f\) at \( p_0\) and write \( \displaystyle \lim_{x\to p_0}f(x)=L\).
The idea being a similar game as in the definition of convergence for sequences: You tell me any arbitrarily small distance \( \epsilon\in\mathbb Q\) to the (supposed) limit \( L\) you want to have, and in return I will give you a distance \( \delta_\epsilon\), such that if any \( x\in A\) is closer to \( p_0\) than \( \delta_\epsilon\), then \( f(x)\) will be closer to \( L\) than \( \epsilon\). If I can always give you such a \( \delta\), no matter which \( \epsilon\) you pick, then I win and \( L\) is indeed the limit of \( f\) at \( p_0\).
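For our example function \( f(x)=\frac{x^2-1}{x-1}\) at \( p_0=1\), this game is easy to win: for \( x\neq1\) we have \( f(x)=x+1\), so \( |f(x)-2|=|x-1|\), and the choice \( \delta_\epsilon=\epsilon\) always works. A minimal sketch:

```python
# The epsilon-delta "game" for f(x) = (x^2-1)/(x-1) at p0 = 1, L = 2.
# For x != 1 we have f(x) = x + 1, hence |f(x) - 2| = |x - 1|, so
# answering with delta_eps = epsilon always wins.
from fractions import Fraction

def f(x: Fraction) -> Fraction:
    return (x**2 - 1) / (x - 1)   # note: undefined at x = 1 itself

def delta_eps(epsilon: Fraction) -> Fraction:
    return epsilon                # works for this particular f, p0 and L

eps = Fraction(1, 1000)
delta = delta_eps(eps)
for x in [1 - delta / 2, 1 + delta / 2, 1 + delta / 3]:
    assert abs(f(x) - 2) < eps    # closer to p0 than delta => closer to L than eps
print(f"delta = {delta} works for eps = {eps}")
```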
Alright, and now we can finally define derivatives using function limits – the idea being that instead of picking an “infinitesimal number”, we take the function limit of the difference quotients:
Definition: Let \( f:A\to\mathbb Q\) be a function on rationals and \( p_0\in A\). If the limit \[\lim_{x\to p_0}\frac{f(x)-f(p_0)}{x-p_0}\]
exists, we call \( f\) differentiable at \( p_0\). If \( f\) is differentiable at every point in \( A\) we call \( f\) differentiable, and the function \[ f' : A \to \mathbb Q\qquad f'(x):=\lim_{y\to x}\frac{f(y)-f(x)}{y-x}=\lim_{h\to 0}\frac{f(x+h)-f(x)}h\]
the derivative of \( f\).
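To see the definition in action, here’s a minimal sketch that evaluates the difference quotient of our example function \( f(x)=\frac{1}{10}x^2\) at \( x=4\) for smaller and smaller \( h\) – with exact rational arithmetic, so nothing is hidden by rounding:

```python
# Sketch: the difference quotient (f(x+h) - f(x))/h for f(x) = x^2/10
# at x = 4, with exact rational arithmetic. As h shrinks, the quotient
# visibly approaches f'(4) = 4/5 -- no infinitesimals required.
from fractions import Fraction

def f(x: Fraction) -> Fraction:
    return x**2 / 10

x0 = Fraction(4)
for k in range(1, 6):
    h = Fraction(1, 10**k)
    print(h, (f(x0 + h) - f(x0)) / h)   # equals 4/5 + h/10: 81/100, 801/1000, ...
```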
You’ll note that this is exactly what Newton and Leibniz did, except that we got rid of those pesky infinitesimals and only used notions that are formally and rigorously defined – there’s no room for ambiguity anymore. Furthermore, all of this works perfectly and beautifully – for example, all of the following highly desirable properties (assuming all the occurring limits exist and, for the last one, are nonzero) hold and can be easily proven using the above definitions (left as an exercise – though see the numerical sanity check after the list):
- The limit of a convergent sequence is unique,
- \( \displaystyle \left(\lim a_n\right) + \left(\lim b_n\right) = \lim (a_n + b_n)\)
- \( \displaystyle \left(\lim a_n\right) \cdot \left(\lim b_n\right) = \lim (a_n \cdot b_n)\)
- \( \displaystyle \left( \lim_{x\to x_0} f(x) \right) + \left( \lim_{x\to x_0} g(x) \right) = \lim_{x\to x_0} (f(x) + g(x))\)
- \( \displaystyle \left( \lim_{x\to x_0} f(x) \right) \cdot \left( \lim_{x\to x_0} g(x) \right) = \lim_{x\to x_0} (f(x) \cdot g(x))\)
- \( \lim a_n = \lim b_n\) if and only if \( \lim (a_n - b_n) = 0\)
- \( \displaystyle \frac 1{\lim a_n} = \lim\frac{1}{a_n} \qquad \frac{1}{\lim_{x\to x_0}f(x)}=\lim_{x\to x_0}\frac 1{f(x)}\)
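These are theorems, so a numerical check proves nothing – but it’s still reassuring to watch one of them at work. A minimal sketch of the sum rule, with the example sequences \( a_n=\frac1n\to0\) and \( b_n=\frac{n+1}n\to1\) (my choice, purely for illustration):

```python
# Sketch: watching the sum rule at work with a_n = 1/n -> 0 and
# b_n = (n+1)/n -> 1; the sum a_n + b_n should then converge to 0 + 1 = 1.
from fractions import Fraction

def a(n: int) -> Fraction:
    return Fraction(1, n)

def b(n: int) -> Fraction:
    return Fraction(n + 1, n)

for n in [10, 1000, 10**6]:
    s = a(n) + b(n)
    print(n, s, abs(s - 1))   # the distance to 1 shrinks like 2/n
```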
…and we didn’t even touch the real numbers yet!
(Next post on John Gabriel: Calculus 102 (Cauchy Sequences and the Real Numbers))
Hello Crank!
I see you started with Euler’s Blunder but you said nothing about it – just like you’ve pretty much said nothing about anything else that follows.
Do you or don’t you agree that Euler wrote this in his Elements of Algebra?
Oh come on now hippy! Humour me. Chuckle.
Why would I care what Euler wrote? It doesn’t matter. Euler’s writings don’t define what is or isn’t mathematics. Euler was certainly capable of being mistaken; that doesn’t invalidate any modern mathematics, let alone the formal foundations of calculus, which aren’t based on Euler anyway – they’re largely based on ideas developed by Weierstrass. And Weierstrass too probably made mistakes at some points. Which is why we base mathematics on *ideas* and not on some famous guy’s authority.
@Jazzpirate
If you knew or even understood the very mainstream mathematics you claim to defend, you would know that it is due to Euler that you still peddle nonsense like 1/3 = 0.333…
Euler knew more algebra than you or any of the fools in academia the last 200 years and his Elements of Algebra very much influenced the way ALL mainstream mathematicians think or don’t think.
The formal “foundations” of mainstream mythmatics are a joke and based on ill-formed concepts which is something one like you would not be able to grasp due to your limited intellectual capacity.
Mainstream calculus is based on a bogus formulation in more ways than one. My free eBook which is the most important mathematics book ever written debunks everyone of your delusional claims:
https://drive.google.com/file/d/1CIul68phzuOe6JZwsCuBuXUR8X-AkgEO/view
“it is due to Euler that you still peddle nonsense like 1/3 = 0.333…” – No. It is entirely due to the definition of the floating point representation of numbers. There’s no sensical definition of what number “0.333…” is even supposed to represent, that preserves basic arithmetic laws and the euclidean axiom and does NOT entail that it’s equal to 1/3.
“Euler knew more algebra than you” – No. I know more algebra than even existed at Euler’s times. Because it evolved massively over the last 100 years. Also, arithmetics is not algebra.
“based on ill-formed concepts” – They’re not ill-formed. They are sufficiently well-defined to be axiomatizable and proofs on them are entirely computer-verifiable. Fragments are even decidable. Can you claim the same about your “axioms”?
Counter question, since you disagree:
– Do you agree that “0.333…” *means* (the result of evaluating) the series \sum_{i=1}^\infty 3*10^{-i}? If not, what else would it mean?
– Do you agree that an infinite series should evaluate to the limit of the sequence of its partial sums? If not, what else would an infinite series denote?
– Do you agree that the limit of a sequence should be *the* number that can be arbitrarily closely approximated by going along the sequence (assuming such a number exists)? If not, how do you define the limit of a sequence?
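(For the record, here’s what the first two questions cash out to computationally – a minimal sketch computing the partial sums \(\sum_{i=1}^n 3\cdot10^{-i}\) exactly; the distance to \(\frac13\) is \(\frac1{3\cdot10^n}\), which is precisely the sense in which the limit – and hence, by the standard definition, the series – is \(\frac13\).)

```python
# Sketch: the partial sums of sum_{i=1}^n 3*10^(-i), i.e. 0.3, 0.33,
# 0.333, ..., computed exactly. The distance to 1/3 is 1/(3*10^n),
# so the limit of the partial sums -- and hence, by the standard
# definition, the value of the series -- is exactly 1/3.
from fractions import Fraction

def partial_sum(n: int) -> Fraction:
    return sum(Fraction(3, 10**i) for i in range(1, n + 1))

for n in [1, 2, 5, 10]:
    s = partial_sum(n)
    print(n, s, Fraction(1, 3) - s)
```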
“it is due to Euler that you still peddle nonsense like 1/3 = 0.333…” – No. It is entirely due to the definition of the floating point representation of numbers.
Nonsense, Floating point representation (fpr) is just another way of representing rational numbers. fpr does NOT represent any bogus “real” number.
“There’s no sensical definition of what number “0.333…” is even supposed to represent,”
You don’t even know what your own mainstream theory claims. According to your flawed mainstream theory, 0.333… IS the LIMIT.
” that preserves basic arithmetic laws and the euclidean axiom and does NOT entail that it’s equal to 1/3.”
All nonsense. There are NO axioms or postulates in Euclid’s Elements. Stupid people like you did not understand and this is why they decided to believe (axiomatize).
“Euler knew more algebra than you” – No. I know more algebra than even existed at Euler’s times.
Crank! If you did, then you wouldn’t be arguing. You don’t even know half of what Euler knew.
” Because it evolved massively over the last 100 years.”
False. Classic algebra has not advanced even ONE iota past Euler. If you are talking about “abstract algebra”, well, this is NOT algebra but something entirely different.
“Also, arithmetics is not algebra.”
So? Who said these are the same?
“based on ill-formed concepts” – They’re not ill-formed.
They are ill-formed concepts as I have proved beyond any shadow of doubt.
” They are sufficiently well-defined to be axiomatizable and proofs on them are entirely computer-verifiable.”
Look stupid, FOL(first order logic) is based on bogus axioms, There are NO axioms in geometry or sound mathematics. There is no place for “belief” or “religion” in rational thought, That nonsense belongs to your flawed mainstream theory. The word “axiom” and “postulate” appear NOWHERE in the Elements of Euclid, but you have never studied the same so you don’t have a clue.
” Fragments are even decidable. Can you claim the same about your “axioms”?”
Non-sequitur. There are no axioms in the New Calculus, only in your bogus and dysfunctional mythmatics.
“Counter question, since you disagree:
– Do you agree that “0.333…” *means* (the result of evaluating) the series \sum_{i=1}^\infty 3*10^{-i}? ”
No idiot. I have never agreed. There is NO such thing as \sum_{i=1}^\infty 3*10^{-i}. Infinity is a JUNK concept. You cannot sum an infinite series. You can only find its limit if it converges and the limit happens to be a RATIONAL NUMBER. The limit of the series 0.3+0.03+… is 1/3.
“If not, what else would it mean?”
It is nonsense that YOU preach. Perhaps you should ask yourself what it means, because to my super intelligent mind, it is clearly syphilitic thinking.
“– Do you agree that an infinite series should evaluate to the limit of the sequence of its partial sums?”
NO. The limit of a CONVERGENT series is a RATIONAL NUMBER or it is an INCOMMENSURABLE MAGNITUDE (NOT an irrational number because there is NO such thing. A number by definition describes the measure of a magnitude or size. There are NO numbers that describe the measure of pi, e, sqrt(2), etc.)
Furthermore, since there is NO such thing as an “infinite series”, your question is misdirected. The limit in any case doe NOT care if the terms are all in the series or even there at all. Moreover, it is fairly easy to prove that even if 0.3+0.03+… hypothetically could be summed, that the sum WILL NEVER be 1/3 because 1/3 has no MEASURE in base 10.
” If not, what else would an infinite series denote?”
It’s nonsense because “infinite series” is a MISNOMER. A series consists only of partial sums and possible an ellipsis at the end to denote there is no last term.
“– Do you agree that the limit of a sequence should be *the* number that can be arbitrarily closely approximated by going along the sequence (assuming such a number exists)?”
No. Because there may be NO number describing the “limit”. As for arbitrary closer, well that is just syphilitic and meaningless nonsense.
“If not, how do you define the limit of a sequence?”
It can only be defined for a CONVERGENT sequence. If it is measurable, then a RATIONAL NUMBER exists which describes it as in the case of 0.3+0.03+… If it is not measurable, then the limit exists as some quantity whose measure cannot be determined – not even by the gods!
“Floating point representation (fpr) is just another way of representing rational numbers” – and how is the representation DEFINED? What does it mean to put “…” after 0.333?
“There are NO axioms or postulates in Euclid’s Elements.” – First of all: Yes, there are. Second of all, the “Euclidean axiom” is not in Euclid’s Elements. It is a name for the axiom “For each element (of some field) a there exists a natural number n such that n>a”. This is *called* the Euclidean Axiom.
“If you are talking about “abstract algebra”, well, this is NOT algebra but something entirely different.” – *sigh*. Then you need to clarify what you mean by “algebra” if you don’t mean what everyone else means.
“They are ill-formed concepts as I have proved beyond any shadow of doubt.” – you haven’t. However, every computer implementation of the axioms proves beyond any shadow of a doubt that they are well-defined.
“Infinity is a JUNK concept.” – then what is “0.333…” supposed to mean?
“It can only be defined for a CONVERGENT sequence. If it is measurable, then a RATIONAL NUMBER exists which describes it as in the case of 0.3+0.03+… If it is not measurable, then the limit exists as some quantity whose measure cannot be determined – not even by the gods!” – this is not a definition. It is entirely incomprehensible mumbling. I asked for a definition. What does “the limit of a sequence” mean in your world?
1. No, there are NO axioms or postulates in Euclid’s Elements. Whether you like this or not, it is a FACT.
2. I don’t give a crap about what you call the “Euclidean axiom”. What I know beyond any shadow of doubt is that there are NO axioms in Euclid’s Elements. I read Greek and I am a mathematician. What are you? Chuckle.
3. “It is a name for the axiom “For each element (of some field) a there exists a natural number n such that n>a”. This is *called* the Euclidean Axiom.”
That statement is true but it is not called an axiom, never mind Euclidean axiom – anywhere in the original Elements.
“They are ill-formed concepts as I have proved beyond any shadow of doubt.” – you haven’t.
Actually, I have. You are simply not intellectually capable of understanding. Hardly surprising given the morons who taught you. Rather than argue with you, I invite others to read my free eBook:
https://drive.google.com/file/d/1CIul68phzuOe6JZwsCuBuXUR8X-AkgEO/view
“However, every computer implementation of the axioms proves beyond any shadow of a doubt that they are well-defined.”
Poppycock! You don’t even have a clue what that means, much less the fact that there are no axioms.
“Infinity is a JUNK concept.” – then what is “0.333…” supposed to mean?
You tell me idiot! I don’t subscribe to nonsense created by that Swiss moron Euler.
“It can only be defined for a CONVERGENT sequence. If it is measurable, then a RATIONAL NUMBER exists which describes it as in the case of 0.3+0.03+… If it is not measurable, then the limit exists as some quantity whose measure cannot be determined – not even by the gods!” – this is not a definition. It is entirely incomprehensible mumbling. I asked for a definition. What does “the limit of a sequence” mean in your world?
It is an explanation you idiot. This discussion is over because you are evidently not able to comprehend even the simplest concepts.