Alright, now that we’ve tackled the basic definitions and theorems regarding calculus, we can start looking at Gabriel’s first video on calculus.

Before we start, let’s play a little game. The above image is a screenshot of Gabriel’s video, and there are *two incredibly annoying formal errors in it.* They might be nitpicky, and the first one might be either due to the software he uses (is that GeoGebra? No idea…) or might just be due to convenience/laziness – but the *second* one I think definitely points to Gabriel *not knowing how to even use logical notation* properly. Or at least in a way that *makes any sense.* Can you find the two errors?

**Ding** *– time’s up!* So here’s the first one:

*What the hell is that supposed to mean? “As one fixed number approaches another”?* Fixed numbers *can’t approach anything!* Only *variables* can. And if it’s not the *actual limit* for an *actual variable,* then the $\lim$ in front is just *plain wrong!*

But okay, it might be that this just annoys me because I’m a logician and I’ve been somewhat conditioned to have an eye out for formalities like that – after all, he’s just trying to demonstrate how the derivative results from taking the limit of secant lines. And he’s actually not doing a bad job at it – or rather, I assume, his software isn’t. So let’s assume that this weird notation is also due to the software. The second error, however…

Here’s Gabriel’s definition of a function limit:

Compare that to my definition of a function limit:

$$\lim_{x\to c} f(x) = L \quad :\Longleftrightarrow\quad \forall \varepsilon > 0\ \exists \delta_\varepsilon > 0\ \forall x \in D:\ |x - c| < \delta_\varepsilon \Rightarrow |f(x) - L| < \varepsilon$$

Now, *why* did I put a lower index $\varepsilon$ after the $\delta$? Well, a $\forall\varepsilon\,\exists\delta$-formula translates to *“For all $\varepsilon$, there exists some $\delta$, such that…”* – meaning that the *exact value* of the $\delta$ whose existence is posited by the formula *may depend on the specific value of $\varepsilon$.* You can think of it as a *function* $\varepsilon \mapsto \delta_\varepsilon$, that maps *each $\varepsilon$ to some specific value $\delta_\varepsilon$* (in fact: Functions like that, that are the “result of” a $\forall\exists$-formula, are called *Skolem functions*). Point being: the value of $\delta$ may depend on your choice of $\varepsilon$. This makes sense if you think back to the “game interpretation” of the formula: You give me some arbitrarily small number $\varepsilon$, and *in response* I choose a value $\delta_\varepsilon$, depending on your $\varepsilon$, such that the formula holds.
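The game interpretation can be sketched in a few lines of Python – here for the (illustrative, not-from-the-video) example $\lim_{x\to 3} x^2 = 9$, where $\delta_\varepsilon = \min(1, \varepsilon/7)$ happens to be a winning Skolem function:

```python
# The epsilon-delta "game" for lim_{x -> 3} x^2 = 9 (an illustrative
# example, not one from the video). You hand me an epsilon; my Skolem
# function delta(eps) answers with a delta that wins the round:
# whenever 0 < |x - 3| < delta, we get |x^2 - 9| < eps.

def f(x):
    return x * x

def delta(eps):
    # |x^2 - 9| = |x - 3| * |x + 3| < 7 * |x - 3| whenever |x - 3| < 1,
    # so min(1, eps / 7) suffices.
    return min(1.0, eps / 7.0)

for eps in [1.0, 0.1, 0.001]:
    d = delta(eps)
    for k in range(1, 100):          # sample points with 0 < |x - 3| < d
        x = 3 + d * k / 100.0
        assert abs(f(x) - 9) < eps
```

Any function $\varepsilon \mapsto \delta_\varepsilon$ that makes the assertion hold would do – that’s exactly the “there exists some $\delta$, depending on $\varepsilon$” part.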

** Example:** The formula says:

So, what does Gabriel want to tell us with $\varepsilon(\delta)$? I have no idea. The only thing that I can think of is, that he wants to tell us that the value of $\varepsilon$ may depend on the specific value of $\delta$ – but that makes *no sense whatsoever,* given that

- the $\delta$ comes *after* the $\varepsilon$ – at the point of the $\varepsilon(\delta)$, the $\delta$ that this refers to *hasn’t even been introduced,*

- the $\varepsilon$ is *universally quantified* anyway – meaning: The $\forall$ already says “For all $\varepsilon$“, hence I may freely choose any value for $\varepsilon$ anyway. To note that some value may *depend on* some other value only makes sense in the case of *existential quantifiers*, i.e. $\exists$, and

- The $\delta$ may depend on the $\varepsilon$, as I explained – hence $\varepsilon$ can’t also depend on $\delta$, otherwise you get a circular dependency. How is that game going to work? You choose an $\varepsilon$ based on my not-yet-made choice of $\delta$, and then I pick my $\delta$ based on your $\varepsilon$ and you choose an $\varepsilon$ based on my $\delta$ and… *huh?*

So, *what’s going on here?* If you have any idea (or potentially, where he got this specific formula from), let me know, because I don’t. All I can conclude from this is that Gabriel *doesn’t know what he’s talking about.* Not surprising, but definitely annoying.

Now that we got the formal nitpicking errors out of the way, let’s go for the actual… errm… “content” of that screenshot. Bullshit is colored red:

In order to use the ill-defined limit definition, one must know the value of the limit $L$. However, in order to find $L$, one must use the ill-defined limit definition.

Now you see $h$:

and now you don’t!

(insert stupid magician-potato-thingy…… here) …what the hell is that thing?

Cauchy’s Kludge (3:57)

Holy shit, that guy… Okay, one by one:

1. I love how he calls the limit definition “ill-defined” and then gives an (apart from the error above) *rigorous, formal definition in first-order logic*. Seriously, you can’t make something more well-defined than by using first-order logic. I mean, that’s kind of *what first-order logic is for!* You can even input this definition into an automated theorem prover and see that it checks out. Just for fun, I did exactly that in our own MMT system:

Tadaa, the formula type-checks. Awesome.

2. No, in order to find $L$ one does *not* have to use the “ill-defined” (which is in fact perfectly well-defined, see previous post for details) “limit definition”. In fact, you can use *whatever method you want* to find $L$. You can *conjure it up from the entrails of a chicken,* for all I care. It doesn’t matter one bit where you get your $L$ from, as long as you can afterwards use the above definition to prove, that the $L$ you found *is in fact the limit.*

Actually: One of the things that annoy many (especially undergrad) math students is that often, the limits for the more complicated sequences, series etc. *seem to just fall from the sky.* The professor announces the limit to be some *seemingly arbitrary* value, and then he proves that that value *really is the limit!* And as a student, you’re just left pondering how anyone came up with that value. Probably by reading the entrails of a chicken, who knows. (Of course, because some incredibly smart guy took an incredibly educated guess which happened to work out, or by deriving it by some incredibly elaborate method way too complicated to demonstrate in class.)

And if you prefer an example *specifically for derivatives:* The derivative of the *natural logarithm* $\ln(x)$ is $\frac{1}{x}$. I could prove it to you; it’s not even that hard if you know the trick (although you need partial integration, which is why I’m skipping it here). But my point is: I have *no idea how to derive that.* Seriously. I haven’t tried in a while, it might actually be rather easy, but in class we only proved that it *is* in fact the derivative, not how anyone *realized* that it is.

*3. “Now you see $h$, now you don’t”* – errrm, yeah, because you substituted the *thing-containing-$h$* by the *thing-that-by-definition-is-the-previous-thing-containing-$h$*. Are you *seriously* surprised that, once you *define a symbol* as a specific term and you *replace the term by the symbol that by definition is the same thing as that term*, the term is gone? That’s the *whole point of definitions, you daft moron!* The $h$ didn’t disappear, it’s still there, you just need to expand the $f'(x)$ by its definition!

But also: $h$ is not, like, a specific value or something; it’s a bound variable of the limit. It is a purely syntactic element of the definition; splitting the body from the binder in front gives the whole thing a very different semantics. It’s like the difference between $\lim_{h\to 0}\frac{f(x+h)-f(x)}{h}$ and $\frac{f(x+h)-f(x)}{h}$ (not surprisingly – if you expand the $\lim$ definition you get a $\forall h$) – the latter needs some specific assignment for the variable $h$ to be a well-formed statement in the first place, while the former *is already a well-formed statement.* *Just sayin’.*

4. Apparently definitions are magic tricks for Gabriel. *Awesome,* apparently I’m a magician then.

But still – does Gabriel *maybe have a point* here? I mean – the definition of a function limit *“requires” some limit* to exist, so if we use the definition of the derivative, which contains a function limit, to *compute the limit* – isn’t that circular reasoning? Well, let’s just expand all the definitions to see whether that works out:

We have $f'(x_0) = \lim_{h\to 0}\frac{f(x_0+h)-f(x_0)}{h}$. Now, expanding the definition, we get: $f'(x_0) = L$ if and only if:

$$\forall \varepsilon > 0\ \exists \delta_\varepsilon > 0\ \forall h \in D:\ |h| < \delta_\varepsilon \Rightarrow \left|\frac{f(x_0+h)-f(x_0)}{h} - L\right| < \varepsilon$$

(Note: the $D$ is not cheating; I used an extra set $D$ for the *domain of the function* in my definition with the intention, that $D$ does not include the approached value (in this case $0$) itself; Gabriel instead uses an explicit $0 < |h| < \delta$. Both are fine and amount to “the same thing” in this context (since the fraction is in fact *not defined* for $h = 0$, hence $0 \notin D$), even if they’re not strictly equivalent in a logical sense)

Okay, now note that in the term $\frac{f(x_0+h)-f(x_0)}{h}$, we have $h \neq 0$ as a prerequisite; hence having $h$ in the divisor of the fraction is unproblematic. Meaning, we can easily “compute” this fraction, without changing the truth value of the formula – after all, “computing” just means: We’re changing the expression in such a way that *equality is preserved.* Let’s do this with the function $f(x) = x^2$ as an example:

$$\frac{(x_0+h)^2 - x_0^2}{h} = \frac{x_0^2 + 2x_0 h + h^2 - x_0^2}{h} = \frac{2x_0 h + h^2}{h} = 2x_0 + h$$

Now we can (since they’re *equal*) substitute the resulting term in our original formula, yielding:

$$\forall \varepsilon > 0\ \exists \delta_\varepsilon > 0\ \forall h \in D:\ |h| < \delta_\varepsilon \Rightarrow |2x_0 + h - L| < \varepsilon$$

Now remember, this formula says: *“For all $\varepsilon$, there is some $\delta_\varepsilon$ such that for any $h \in D$ with $|h| < \delta_\varepsilon$ we have $|2x_0 + h - L| < \varepsilon$“.* Our goal is to find an $L$ that makes this formula true. And now it’s obvious that if such an $L$ exists, i.e. if the formula is supposed to be true, it has to be $L = 2x_0$, because then we have $|2x_0 + h - L| = |h|$. To show that this indeed works out, we just need to be able to give a $\delta_\varepsilon$ such that whenever $|h| < \delta_\varepsilon$, then $|h| < \varepsilon$. This is trivially true for $\delta_\varepsilon = \varepsilon$. Hence we have proven, that

$$f'(x_0) = \lim_{h\to 0}\frac{(x_0+h)^2 - x_0^2}{h} = 2x_0$$

and since we didn’t make any assumptions about $x_0$ (i.e. the proof works for any $x_0$), the derivative $f'(x) = 2x$ *is indeed a well-defined function. Hooray!*
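The key fact – that the difference quotient of $f(x) = x^2$ equals *exactly* $2x_0 + h$, so that $\delta_\varepsilon = \varepsilon$ wins the limit game – can even be checked with exact rational arithmetic (a sketch; the sample points are arbitrary):

```python
# The difference quotient of f(x) = x^2 equals 2*x0 + h *exactly*, so
# |quotient - 2*x0| = |h| and delta := eps wins the limit game.
from fractions import Fraction

def diff_quotient(x0, h):
    return ((x0 + h) ** 2 - x0 ** 2) / h

x0 = Fraction(3, 2)  # an arbitrary sample point
for h in [Fraction(1, 10), Fraction(-1, 1000), Fraction(1, 10 ** 6)]:
    q = diff_quotient(x0, h)
    assert q == 2 * x0 + h             # the simplification from above
    assert abs(q - 2 * x0) == abs(h)   # so delta = eps does the job
```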

And *now* note, that (as people do in practice) *this whole convoluted reasoning can be done way shorter by simply doing the following:*

$$\lim_{h\to 0}\frac{(x+h)^2 - x^2}{h} = \lim_{h\to 0}\frac{2xh + h^2}{h} = \lim_{h\to 0}\,(2x + h) = 2x$$

*This* seems to be what Gabriel is annoyed about and what he claims is *“ill-defined”* – but note, that what we’re doing by computing is really just a shorthand, convenient way to derive a term in such a way, that we can *easily extract a proof* that the result of our derivation *is in fact the limit* we’re looking for according to the formal, rigorous definition of a function limit.

*Nothing is ill-defined here!* We can, if in doubt, always convert any such derivation to an actual formal proof. Except, of course, if we made a mistake. But then try getting through peer-review…

Now, the question remains why Gabriel would think the definition of a function limit / derivative would be ill-defined. As often with Gabriel, it’s hard to find out what exactly his problem is, but the following might be a clue:

There are a lot of problems with mainstream calculus; it’s flawed for several reasons, as I’ll explain shortly, but one of the main reasons is that in order for this function here to be differentiable at this point here – essentially what Cauchy’s definition is saying is, that it needs to have a derivative at every point in this short interval here.

Cauchy’s Kludge (2:13)

And Gabriel is *half-right* here (as it turns out, not even that). He’s “wrong” (or at least somewhat inaccurate) in that *“this short interval”* he’s pointing at is in no way significant. But he’s ~~right~~ also wrong, in that in order for a function to be differentiable *at some point*, the function will *also* **not** need to be differentiable *at the surrounding points*.

All these limit-definitions and $\varepsilon$-$\delta$-style definitions have one thing in common: They always talk about what happens when we get *arbitrarily close* to some value of interest – i.e. they define a property of some point by *how this point relates to its neighborhood.* It’s not a property that can be established for a point *in isolation* – only *in relation to* surrounding points. Which makes sense, if you think about it: If you want to “approach” a limit, the very word “approach” implies that you’re in some sense covering the surrounding neighborhood of that limit, and that often happens to work because of the same property holding for those points you’re covering.

This is basically the very thing that one expects from ** continuous** (i.e.

Anyway, what I’m trying to say is: To say *“the function is differentiable at point $x$“* is [**often, but not always**] actually the same as saying that it is differentiable *in some neighborhood around* $x$ as well.

One last thing: The bottom of the screenshot shows the following quote:

Cauchy had stated in his Cours d’analyse that irrational numbers are to be regarded as the limits of sequences of rational numbers. Since a limit is defined as a number to which the terms of the sequence approach in such a way that ultimately the difference between this number and the terms of the sequence can be made less than any given number, the existence of the irrational number depends, in the definition of limit, upon the known existence, and hence the prior definition, of the very quantity whose definition is being attempted.

The History of Calculus and its Conceptual Development (p. 281), Carl B. Boyer

That is, one cannot define the number ‘square root of 2’ as the limit of the sequence 1, 1.4, 1.41, 1.414,… because to prove that this sequence has a limit one must assume, in view of the definitions of limits and convergence, the existence of this number as previously demonstrated or defined. Cauchy appears not to have noticed the circularity of the reasoning in this connection, but tacitly assumed that every sequence converging within itself has a limit.

There are two things to say about this:

*“are to be regarded as”* is not the same thing as *“are defined as”.* I don’t know if Cauchy actually meant *“Irrational numbers are **defined** as the limits of sequences”* here. If he did, he indeed used circular reasoning. In that case, Cauchy was just wrong. People tend to be that quite often. However, I’m actually fine with saying *“irrational numbers can be **regarded** as limits of sequences”.* They can be defined in such a way that this makes sense and is not circular (see my last post, where I give three different but equivalent definitions of the real numbers).

*Who cares?* Cauchy is *not the last word on anything to do with Calculus.* Nowadays, we *don’t* define real numbers as *“limits” of Cauchy sequences* – we can rather just define them as (equivalence classes of) *Cauchy sequences directly.* There’s no circularity there, and if you don’t like that, define them as decimal expansions, or as Dedekind cuts, or axiomatically, or…

To use this quote to cast doubt on modern mathematics (and it seems clear, that this is what Gabriel wants to do) is *typical creationist logic:* *“Hey, I found something in a book by Cauchy that’s flawed, hence Calculus is wrong”* is exactly like saying *“I found an error in Darwin’s book, hence the modern theory of Evolution is wrong”.* Why do cranks always think, that *one single book* is the *infallible foundation* of a whole modern scientific field?

Well okay, in the case of creationists that’s just projection, I assume. They have an infallible book, hence the opposition must have one as well. Gabriel, it seems, regards Plato’s works (as we’ll see) as his infallible bible, hence… modern mathematicians must use Cauchy’s works as their infallible bible? Is that the reasoning here? Who knows…

Okay, now that we know what sequences are and what it means for a sequence to converge to some limit, we can finally start talking about real numbers:

Basically the *whole point* of real numbers is to make the rational numbers *complete with respect to convergence,* in a very specific sense which will become apparent later. The numbers we get which we *didn’t* already have in the set of rational numbers are called *irrational numbers.* The first irrational number that was historically encountered is *the square root of $2$*, as shown by possibly the *most well-known proof in history:*

Assume the square root of 2 is a *rational number* $\frac{p}{q}$. We can safely assume that $p$ and $q$ are coprime, meaning that we can’t simplify the fraction any further (otherwise, just simplify the fraction and call the resulting two numbers $p$ and $q$). Then:

$$2 = \frac{p^2}{q^2}, \quad\text{i.e.}\quad p^2 = 2q^2$$

From this we can conclude, that $p^2$ needs to be divisible by $2$, and hence that $p$ needs to be divisible by $2$ as well. So let $p = 2k$ for some whole number $k$, then:

$$2q^2 = p^2 = (2k)^2 = 4k^2, \quad\text{i.e.}\quad q^2 = 2k^2$$

From this we can conclude, that $q^2$ needs to be divisible by $2$, and hence $q$ needs to be divisible by $2$ as well. But that just means, that we could have simplified the original fraction by dividing both $p$ and $q$ by $2$ – something we explicitly assumed to be already taken care of. That’s a contradiction, hence the square root of $2$ is irrational.
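For the skeptics: the proof rules out *all* fractions, but a brute-force search makes the claim tangible (a sketch; the bound 1000 is arbitrary):

```python
# Brute-force companion to the proof: no fraction p/q with p, q up to
# 1000 satisfies p^2 = 2 * q^2. (The proof above shows this for ALL p, q;
# the search just makes the claim tangible.)
solutions = [(p, q)
             for q in range(1, 1001)
             for p in range(1, 1001)
             if p * p == 2 * q * q]
assert solutions == []
```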

So what does this *mean* now? The square root of $2$ (as Gabriel seems to sort-of believe) *does not exist?* Well, the thing is… here’s the graph of the function $f(x) = x^2$:

I mean… I can see it *clearly* crossing the line $y = 2$ at *some* $x$… are we supposed to say, that there is *no specific point* on the $x$-axis, where the function takes on the value $2$? I can even tell you, that it’s approximately in the ballpark around $1.41421356237$ – in fact, I can get *arbitrarily close* (*nudge nudge, wink wink*) to that *“number”* which possibly does or doesn’t exist. So what exactly could we *mean* when we talk about the square root of $2$ as a number?

The answer will, of course, involve *a specific kind of sequence,* namely a *Cauchy sequence*. The annoying thing about our *definition* of convergence (and in fact, Gabriel agrees with me here – *Who would have thought!*) is, that it is intrinsically coupled to *a specific limit*, one which we demanded to be a rational number. It doesn’t tell us when a sequence *“converges”* (whatever that means), it only tells us (however, very specifically!) when a sequence converges *to a specific (rational) number*. Cauchy sequences try to fix that annoyance:

**Definition:** A sequence $(a_n)_{n\in\mathbb{N}}$ is called a **Cauchy sequence**, if for every arbitrarily small $\varepsilon > 0$, there is some index $N$ such that for any subsequent indices $m, n \geq N$ the distance between $a_m$ and $a_n$ is smaller than $\varepsilon$. In logical notation:

$$\forall \varepsilon > 0\ \exists N \in \mathbb{N}\ \forall m, n \geq N:\ |a_m - a_n| < \varepsilon$$
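Translated directly into Python, the definition asks for a witness index $N$ for every given $\varepsilon$ – here spot-checked for the (illustrative, not-from-the-post) sequence $a_n = \frac{1}{n}$:

```python
# The definition, translated directly: given eps, produce a witness index
# N -- here for the sequence a_n = 1/n (an illustrative example).
import math

def a(n):
    return 1.0 / n

def witness_N(eps):
    # |a_m - a_n| <= 1/N for all m, n >= N, so any N > 1/eps works
    return math.ceil(1.0 / eps) + 1

for eps in [0.5, 0.01, 0.0001]:
    N = witness_N(eps)
    for m in range(N, N + 50):
        for n in range(N, N + 50):
            assert abs(a(m) - a(n)) < eps
```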

This definition doesn’t mention limits anywhere. And yes, it turns out that all convergent sequences are Cauchy sequences. Normally I would say “proof left as exercise”, but I feel generous right now, and also it probably makes sense to see at least one proof about convergence, just to get a better feel for the whole shebang:

Let $(a_n)$ be a convergent sequence with some limit $a$ and $\varepsilon > 0$ arbitrarily small. We know by definition that (choosing $\frac{\varepsilon}{2}$) there exists some index $N$ such that for any subsequent indices $m, n \geq N$ we have $|a_m - a| < \frac{\varepsilon}{2}$ and $|a_n - a| < \frac{\varepsilon}{2}$. Now we need to show, that the distance between any arbitrary $a_m$, $a_n$ (with $m, n \geq N$) is smaller than $\varepsilon$, which is just a quick calculation:

$$|a_m - a_n| = |a_m - a + a - a_n| \leq |a_m - a| + |a - a_n| < \frac{\varepsilon}{2} + \frac{\varepsilon}{2} = \varepsilon$$

hence $(a_n)$ is a Cauchy sequence.

Now, conversely, are all *Cauchy sequences* also *convergent* (in the rational numbers)? Unfortunately no. We can prove this by constructing a sequence that “converges to” the square root of $2$ – and we’ve already shown that this is not a rational number:

We define three sequences $(a_n)$, $(b_n)$ and $(c_n)$ simultaneously via recursion, by letting $a_1 = 1$ and $b_1 = 2$.

For each $n$, let $c_n = \frac{a_n + b_n}{2}$ (*hellooooo arithmetic mean, old friend!*). If $c_n^2 < 2$ then let $a_{n+1} = c_n$ and $b_{n+1} = b_n$; if $c_n^2 > 2$, then let $a_{n+1} = a_n$ and $b_{n+1} = c_n$.

Now the sequence $(c_n)$ is a Cauchy sequence and approaches the square root of $2$ arbitrarily close (proof left as exercise).

Here’s what the three sequences look like for the first 6 elements:
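The recursion is easy to carry out with exact rational arithmetic – a sketch:

```python
# The interval-halving sequences from the construction above, carried
# out exactly with rational arithmetic.
from fractions import Fraction

a, b = Fraction(1), Fraction(2)
cs = []
for _ in range(6):
    c = (a + b) / 2          # helloooo arithmetic mean
    cs.append(c)
    if c * c < 2:
        a = c                # the square root of 2 lies in the upper half
    else:
        b = c                # the square root of 2 lies in the lower half

print([float(c) for c in cs])
# -> [1.5, 1.25, 1.375, 1.4375, 1.40625, 1.421875]
```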

…if you’re annoyed by my usage of *“the square root of $2$“* here, acting like that was *indeed* an existent number even though it isn’t, or at least we don’t know whether it is, yet – *fair enough.* But if you want everything I said here to be *more rigorous* in that regard, just replace every usage of *“the square root of $2$“* by *“some number $x$ with the property, that $x^2 = 2$“*. That way, the previous proof becomes an *actual proof, that the sequence doesn’t converge*, because now we’re *not* proving that it *“converges”* to a number which doesn’t exist, but instead prove that *if* the sequence *were* convergent, the limit *would have* some property which we *already proved no rational number can have*. Hence, by contradiction, the sequence doesn’t converge. Point being: *We don’t need to assume the existence of the real number for the previous proof to work.* We can simply substitute a couple of phrases and we get a proof without that *“illegal”* assumption, at the cost of a certain amount of clarity in the proof (in my opinion).

But the *important* thing here is: *All Cauchy sequences are like that*, in that they *seem* to converge to *some point*, which may or may not be a rational number. And we can pretty much point *exactly to where that number would be* on the number line, if it *were* a number! So lastly, let’s try to capture when the *“limits”* of two Cauchy sequences are *equal* – preferably without referring to the limits at all, so we can still use that notion for the *non-convergent* ones:

**Definition:** We call two Cauchy sequences $(a_n)$ and $(b_n)$ **equivalent**, and write $(a_n) \sim (b_n)$, if and only if $\lim_{n\to\infty}(a_n - b_n) = 0$.

Note, that two sequences can be equivalent *even if neither of them converge* – it’s only the sequence of their *element-wise differences* that needs to converge. The point being, that we want to be able to think of equivalent sequences as *having-the-same-limit* – and in the case of the *convergent* ones, that already works out perfectly: *Two convergent sequences are equivalent if and only if they have the same limit* (proof left as exercise).
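As a concrete illustration: the interval-halving sequence from above and the classical Heron iteration $x_{n+1} = \frac{1}{2}\left(x_n + \frac{2}{x_n}\right)$ (my example, not from the post) both “aim at” the square root of $2$, and their element-wise differences visibly die out – so the two sequences are equivalent, even though neither has a rational limit:

```python
# Two Cauchy sequences "aiming at" the square root of 2: the interval-
# halving sequence (c_n) from above, and the Heron/Newton iteration
# x_{n+1} = (x_n + 2/x_n) / 2 -- an illustrative choice, not from the post.
# Their element-wise differences tend to 0, so the sequences are equivalent.
from fractions import Fraction

def bisect_seq(n):
    a, b, out = Fraction(1), Fraction(2), []
    for _ in range(n):
        c = (a + b) / 2
        out.append(c)
        if c * c < 2:
            a = c
        else:
            b = c
    return out

def heron_seq(n):
    x, out = Fraction(1), []
    for _ in range(n):
        out.append(x)
        x = (x + 2 / x) / 2
    return out

diffs = [float(c - x) for c, x in zip(bisect_seq(15), heron_seq(15))]
assert abs(diffs[0]) == 0.5   # they start half a unit apart...
assert abs(diffs[-1]) < 1e-3  # ...and the differences die out
```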

So, to summarize:

- We can define *sequences* on rational numbers, and what it means for a sequence to *converge to a specific number*.

- There are *some* sequences (namely the *non-convergent Cauchy sequences*), that *seem* to converge to a specific number, but when we try to find the limit, *it doesn’t exist* (in the rationals).

- However, given such a sequence, we *can approximate* its non-existent *“limit”* to an *arbitrary degree of accuracy* with rational numbers (that’s exactly what Cauchy sequences do, after all).

So, *what are these non-existent limits of non-convergent Cauchy sequences?* Are they *numbers?* Are they… *something else?* Do they simply *not exist*? But… I mean, *we know where they are,* don’t we? The answer is of course, that they are *real numbers*. So, let’s make our way towards getting a grip on these weird beasts by talking about *decimal expansions*:

Just so that nobody runs into the danger of confusing decimal expansions with whatever-real-numbers-are, I will define them separately and completely detached from either rational or real numbers, just so that we have a clear picture of what I’m talking about when I say “decimal expansion”, and so I can use them freely without anyone accusing me of using real numbers before I defined them in the first place. This might seem needlessly over the top, but, you know, *this guy claims that ,* so… yeah, you can imagine that I need to explain even how *writing down numbers* works, just to make absolutely sure that we all agree on that.

A sequence of digits like $1995$ is, *at first*, not *itself a number*. By which I mean: It is a *sequence of symbols* that **represent** a number, namely the sequence (“1″,”9″,”9″,”5”). I emphasize that, because I could easily choose a completely different system of symbols to represent the same number:

…so there is a (definitely!) well-defined system behind the decimal notation (and luckily a much more convenient and intuitive one!):

The digits represent the multiples of the different powers of $10$. Why exactly $10$? Because we have *ten digits, duh*. And, of course, for finitely many digits after the decimal point we can just continue the spiel with negative exponents:

$$1995.25 = 1\cdot 10^3 + 9\cdot 10^2 + 9\cdot 10^1 + 5\cdot 10^0 + 2\cdot 10^{-1} + 5\cdot 10^{-2}$$

We can express this in mathspeak as:

Let $d_1 d_2 \ldots d_n\,.\,e_1 e_2 \ldots e_m$ be a decimal expansion. Then define

$$N(d_1 \ldots d_n\,.\,e_1 \ldots e_m) := \sum_{i=1}^{n} d_i \cdot 10^{n-i} + \sum_{j=1}^{m} e_j \cdot 10^{-j},$$

where the $d_i$ and $e_j$ are digits.
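As a sanity check, this “meaning” function is straightforward to implement with exact rational arithmetic (the name `meaning` is mine):

```python
# A sketch of the "meaning" function: two finite sequences of digits in,
# the rational number they represent out -- no real numbers anywhere.
from fractions import Fraction

def meaning(int_digits, frac_digits):
    value = Fraction(0)
    for d in int_digits:               # multiples of 10^k for k >= 0
        value = value * 10 + d
    for j, e in enumerate(frac_digits, start=1):
        value += Fraction(e, 10 ** j)  # multiples of 10^(-j)
    return value

assert meaning([1, 9, 9, 5], []) == 1995
assert meaning([1], [5]) == Fraction(3, 2)     # "1.5"
assert meaning([0], [2, 5]) == Fraction(1, 4)  # "0.25"
```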

*Meaning*: $N$ is the function that maps *two finite (period-separated) sequences of digits* to the rational number *they actually represent*. Alright? Can I assume, that *we all agree this is what decimal numbers mean?* Great, then now for a proper definition and the infinite case:

**Definition:** A **decimal expansion** is a pair $\left((d_1,\ldots,d_n),\ (e_j)_{j\in\mathbb{N}}\right)$ such that $d_1,\ldots,d_n$ are (finitely many) digits and $(e_j)_{j\in\mathbb{N}}$ is a sequence of digits, i.e. $d_i, e_j \in \{0, 1, \ldots, 9\}$ for all $i, j$.

A **proper decimal expansion** is a decimal expansion where there is no index $N$ such that $e_j = 9$ for all $j \geq N$ – i.e. one that does not end with the digit $9$ repeating indefinitely.

The idea being, that we represent a (not yet existing) number as the *pair consisting of its digits before the decimal point and the (infinite) sequence of its digits after it*. The point of the *“proper decimal expansion”*-definition is to exclude those, that end with $9$ repeating. We don’t need them anyway, because e.g. $0.999\ldots$ is just $1$, as everyone except John Gabriel knows. But we will get to that. Why *specifically the digit $9$?* Again – because we have 10 digits, $9$ being the largest one. When e.g. adding two decimal expansions, it’s the digit that “flips over” when we increase it, impacting the previous digits. If we were to use a different number system, e.g. base $2$ (instead of base $10$), we would exclude a different digit from repeating indefinitely – e.g. in base $2$ the digit $1$.

*Technically,* decimal expansions as I defined them can only represent *positive numbers.* To fix that, we can e.g. define them instead as *triples* $\left(s, (d_i), (e_j)\right)$, where $s \in \{+, -\}$ – i.e. we just add the sign separately. But I will ignore negative numbers in the rest of this post for the sake of clarity, assuming that it’s clear to everyone that (and how) we can extend everything that follows to cover negative numbers as well.

Anyway, note how my definition of decimal expansions a) *does not require real (or even rational) numbers* and b) still makes expressions like $0.999\ldots$ *well-defined objects.* So far, $0.999\ldots$ just happens to *not be a number* (it’s just a pair of sequences of digits, after all), so we can’t do *“number stuff”* with it (add, multiply, whatever), but that we will do later on.

The reason why I’m doing this is, that we can now pose and (given some more work) answer the question, *what “number” a decimal with infinitely many decimal places is even supposed to be or represent*. After all, we can’t e.g. settle or even meaningfully *ask* the question, whether $0.999\ldots = 1$, as long as we’re not *absolutely clear on what the hell the expression $0.999\ldots$ is supposed to mean exactly.* And that’s the *one, crucially important* question, that all the cranks who claim $0.999\ldots \neq 1$ (or whatever Gabriel’s variant of that claim is) *never seem to answer or even ask* – or even realize that it *is a question that needs answering* first.

So, *in what sense* can we consider decimal expansions with infinitely many digits to be *numbers?* To answer that question let’s first only consider *those* decimal expansions that represent *rational* numbers. Obviously, every rational number has some decimal expansion, which we can simply compute via long division (for an explanation of long division I defer to wikipedia). Let’s denote the decimal expansion that corresponds to the rational number $q$ as $\operatorname{dec}(q)$:

**Definition:** Let $q \in \mathbb{Q}$. The **decimal expansion of $q$**, written $\operatorname{dec}(q)$, is the decimal expansion we obtain from $q$ via long division.

As an example: $\frac{3}{2} = 1.5$, hence $\operatorname{dec}(\frac{3}{2}) = \left((1), (5, 0, 0, \ldots)\right)$. And $\frac{1}{3}$, when doing long division, results in $0.333\ldots$ (*yes, Gabriel, it does.* I *know* you don’t believe that $\frac{1}{3} = 0.333\ldots$, but *for fucks sake*, you *can* do long division, *can’t you*? Also, *that* video will be dealt with *later*), hence $\operatorname{dec}(\frac{1}{3}) = \left((0), (3, 3, 3, \ldots)\right)$.

I *could* give an actually *rigorous* definition of $\operatorname{dec}$ (obviously, because long division is a simple and clear algorithm) instead of just deferring to long division, but honestly, it would be rather ugly and distract from the fact that *all we’re doing* is long division while separating the *integer part* from the *digits after the decimal point*, and considering both just as sequences of digits. And we all learned long division in school at some point.

However, note that even in referring to long division and its results as expressions of the form $\operatorname{dec}(q)$, I *still* haven’t assumed that decimal expansions are *numbers*, or that e.g. $0.333\ldots$ is a number. I’m referring to long division purely as a procedure for generating sequences of digits from two *integers* (the numerator and denominator of the rational number expressed as a fraction). *See how fucking nitpicky this guy pushes me to be?*
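And since long division came up, here is what I mean by “a procedure for generating sequences of digits from two integers” – integers in, digits out, no real numbers anywhere (a sketch; the function name is mine):

```python
# Long division as a pure digit-producing procedure: given two integers
# p and q (with 0 <= p < q), it spits out the first n digits after the
# decimal point of "p/q" -- integers in, sequences of digits out.
def long_division_digits(p, q, n):
    digits, r = [], p
    for _ in range(n):
        r *= 10
        digits.append(r // q)  # next digit of the quotient
        r %= q                 # next remainder
    return digits

assert long_division_digits(1, 3, 6) == [3, 3, 3, 3, 3, 3]  # yes, Gabriel.
assert long_division_digits(1, 4, 6) == [2, 5, 0, 0, 0, 0]
assert long_division_digits(1, 7, 6) == [1, 4, 2, 8, 5, 7]
```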

Anyway, so we can now assign a unique (and even proper) decimal expansion to each rational number, and we can assign a unique rational number to each finite decimal expansion (i.e. those with only trailing zeros – which, just as an aside, also happen to be proper). All we need to do is fill in the rest.

Okay, so now that we all know (and hopefully agree) what numbers we (as a running example) mean by $0.3$, $0.33$, $0.333$ etc., we only need to clarify what we mean by $0.333\ldots$ exactly. In the finite case, the strings of digits $0.3$, $0.33$, $0.333$ etc. represent the rational numbers $\frac{3}{10}$, $\frac{33}{100}$, $\frac{333}{1000}$, i.e. the sums $\sum_{j=1}^{n}\frac{3}{10^j}$ (for some maximal index $n$). Expressed differently:

$$0.d_1 d_2 \ldots d_n = \sum_{j=1}^{n} d_j \cdot 10^{-j}$$

– each such sequence of digits represents *a finite sum*.

Now obviously, we’ll want the *infinite* decimal expansions to represent *“infinite sums”* – i.e. we want

$$0.d_1 d_2 d_3 \ldots = \sum_{j=1}^{\infty} d_j \cdot 10^{-j}$$

to hold – but here’s that pesky $\infty$ symbol, and it’s *in no sense clear*, what an infinite sum is supposed to *mean* exactly – a *finite* sum I can just compute in a finite amount of time, but an *infinite* sum? But, of course, by now we already have all the tools we need to answer that once and for all: Nobody would disagree, that what the sum *should* mean is *“the result from consecutively adding all the addends of the sum”* – which of course perfectly corresponds to a *sequence*, and we know what it means for a *sequence* to converge. Sequences resulting from summation are called series:

**Definition:** Let $(a_n)$ be a sequence of rationals and $s_n := \sum_{i=1}^{n} a_i$ the sum of the first $n$ elements of the sequence. We call the *sequence of partial finite sums* $(s_n)_{n\in\mathbb{N}}$ the **series over $(a_n)$** – denoted as $\sum a_n$. If $\sum a_n$ converges, we denote the limit as $\sum_{n=1}^{\infty} a_n$. In short:

$$\sum_{n=1}^{\infty} a_n := \lim_{N\to\infty} s_N \quad \text{(if it exists)}$$

Now there are two things to mention here:

- Because *Gabriel,* I will explicitly distinguish between a *series* and its *limit* (if the limit exists). My notations for this are *not standard*. Of course, if we know and have proven that a series converges, we – for all practical purposes – don’t *need* to distinguish between the two. Context is, as often, everything, but I want to be absolutely precise here.

- Note that the *order* of the sequence – and *each single element of that sequence* – matter, when we turn it into a series. Whereas the limit of a sequence is uniquely determined by any of its end segments or infinite subsequences, and we can freely rearrange the summands in a *finite sum*, we can’t do that with *series:* when we change the sequence $(a_n)$, its *series will look different*, because the sequence of partial sums that it represents will be different as well, and *possibly have a different limit – or none*. I just wanted to mention that.

- There’s this somewhat stupid meme going around about $1 + 2 + 3 + \ldots = -\frac{1}{12}$ – and since I know where I get some of my (negligibly few, but still strictly positive amount of) readers from, I’m pretty sure if I don’t go into that now I’m going to regret it. So here’s the thing: If you put forward an expression like $1 + 2 + 3 + \ldots$, all that you can *possibly mean by that* (at least to a mathematician) is the series $\sum n$, i.e. the sequence of its partial sums. This sequence is strictly increasing and unbounded and hence has no limit. If you put forward the claim that this series “equals” some number, what most mathematicians will assume – without further context – is, that the series converges to that number, which in the case of the series $\sum n$ would just be plain wrong.

The origin of this meme are *analytic extensions* – various methods of assigning definite values even to *divergent series,* usually in such a way that they agree with the *classical* notion of convergence for those series that *actually do* converge. There’s a *point* to that, and these analytic extensions are *interesting* and *well-defined* and all that, but they are *not what most mathematicians without further context will mean by the limit of a series.* Consequently, I think people should *either explicitly or implicitly* make sure, that it’s unambiguously clear *which* analytic extension (if any) is used when proclaiming that some series *“equals”* some number. So let me be clear on that: I will *only talk about classical convergence as defined here* when I use an expression like $\sum_{n=1}^{\infty} a_n$.

Okay? Great, then I hope it’s now clear how to turn each decimal expansion into a series:

**Definition:** Let $D = \left((d_1,\ldots,d_n), (e_j)_{j\in\mathbb{N}}\right)$ be a decimal expansion. We associate with $D$ the series

$$\operatorname{ser}(D) := N(d_1 \ldots d_n) + \sum e_j \cdot 10^{-j},$$

where $N(d_1 \ldots d_n)$ is the integer part as defined above.

So we have $\operatorname{dec}$, mapping rational numbers to decimal expansions, and now conversely $\operatorname{ser}$, mapping decimal expansions to series – and thereby, if the series converges, back to rational numbers. *How nice.* And it turns out that for *proper* decimal expansions $D$ we have that $\operatorname{ser}(D)$ converges to $q$ if and only if $D = \operatorname{dec}(q)$ – the two functions $\operatorname{dec}$ and $\operatorname{ser}$ are *inverses of each other* – on proper decimal expansions whose series converge, at least. *Non-proper* decimal expansions (meaning – their associated series) converge anyway, since (exemplary):

$$\sum_{j=1}^{n} 9 \cdot 10^{-j} = 1 - 10^{-n} \;\xrightarrow{\;n\to\infty\;}\; 1$$

(general proof left as exercise)
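The exemplary computation can be done exactly, digit by digit: the $n$-th partial sum of $0.999\ldots$ misses $1$ by exactly $10^{-n}$ (a sketch):

```python
# The n-th partial sum of the series associated with 0.999... misses 1
# by exactly 10^(-n) -- computed with exact rational arithmetic.
from fractions import Fraction

partial = Fraction(0)
for n in range(1, 8):
    partial += Fraction(9, 10 ** n)  # add the digit 9 at place 10^(-n)
    assert 1 - partial == Fraction(1, 10 ** n)
print("1 - s_7 =", 1 - partial)  # -> 1 - s_7 = 1/10000000
```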

So we’re left wondering, what to do with decimal expansions whose associated series *does not* converge. But, having done all we have done so far, it turns out that all of the following statements are easily provable:

1. For any decimal expansion $D$, the series $\operatorname{ser}(D)$ is a *Cauchy sequence*,

2. We can conversely compute from each Cauchy sequence $(a_n)$ a *proper decimal expansion* $\operatorname{dec}((a_n))$ such that $\operatorname{ser}(\operatorname{dec}((a_n))) \sim (a_n)$ – i.e. going back and forth between sequences and decimal expansions preserves equivalence.

3. Two Cauchy sequences have the same associated decimal expansion *if and only if* they are equivalent: $\operatorname{dec}((a_n)) = \operatorname{dec}((b_n)) \Leftrightarrow (a_n) \sim (b_n)$.

4. The (element-wise) *sum, product, difference and quotient* of two Cauchy sequences yield again Cauchy sequences, and all of them *preserve equivalence*; meaning: If $(a_n) \sim (a'_n)$ and $(b_n) \sim (b'_n)$, then $(a_n + b_n) \sim (a'_n + b'_n)$, and the same holds for products, differences and quotients. (In the case of quotients: assuming that the divisor sequence *neither converges to $0$ nor has any $0$ in it*).

…and I’ll even show you how to prove all this stuff (ugly details left as exercise):

**Proof of 1.:** Let be any decimal expansion and consider the sequence

We need to show that (this is the definition of a Cauchy sequence) for any there is an index such that for any we have . So assume we’re given some arbitrarily small . We choose such that . Now for any arbitrary (without loss of generality, let’s say ) we have:

QED.
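This proof sketch can be sanity-checked numerically – the index is exactly the witness with 10^-N < ε from the argument (the digit list below is my own arbitrary choice):

```python
from fractions import Fraction

def partial_sums(digits):
    """The sequence of partial sums of a decimal expansion's series."""
    s, out = Fraction(0), []
    for k, d in enumerate(digits):
        s += Fraction(d, 10 ** (k + 1))
        out.append(s)
    return out

def cauchy_index(eps):
    """The proof's witness: the smallest N with 10^-N < eps."""
    N = 0
    while Fraction(1, 10 ** N) >= eps:
        N += 1
    return N

digits = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8]   # an arbitrary expansion
sums = partial_sums(digits)
eps = Fraction(1, 500)
N = cauchy_index(eps)
# past index N, any two partial sums are closer together than eps
ok = all(abs(sums[m] - sums[n]) < eps
         for n in range(N, len(sums)) for m in range(n, len(sums)))
print(N, ok)
```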

**Proof of 2.:** We will define the following way: If the sequence converges to some rational number , we just take the decimal expansion as defined above (via long division – remember?), so we can assume that does not converge. Now I’ll show how we can compute an arbitrary element of our intended decimal expansion by showing how to compute its first digits (for arbitrary ):

Since is a Cauchy sequence, there is some index such that the distance between all subsequent elements is smaller than . Now consider the decimal expansion of up to the first digits. If the st digit is neither a nor a , then we know that the first digits are fixed in the sense that all subsequent elements in the sequence will have the same first digits. Why? Because all subsequent elements differ at most by , which means the st digit can change at most by , and if it’s not or , the previous digits can’t be impacted by that anymore.

So, what happens if the st digit is or ? Well – that depends: is the sequence increasing or decreasing from ? If it is increasing, we pick the next index of the next digit as a new and continue from there. If it’s decreasing, we pick the next digit . In both cases, we always find such an index (since the sequence doesn’t converge, hence can’t end with or ) – and the sequence will ultimately always be larger or always smaller than , since is rational and by assumption doesn’t converge, hence it particularly doesn’t converge to .

In either case, we end up with the first digits of a decimal expansion. And notice how we constructed this decimal expansion – namely by basically scanning the sequence until the first digits stay fixed, which we do by picking an . The same way we can prove that the resulting series of the decimal expansion actually is equivalent to the original sequence – let an arbitrary be given, choose some with , find an index such that the first digits of both sequences’ decimal expansions stay fixed, then the differences between all later elements will be , hence the differences converge to , QED.

**Proof of 3.:** Well, that two sequences with the same decimal expansions are equivalent is almost exactly the last part of the previous proof, so that’s fine. The converse – that two equivalent sequences have the same decimal expansion – follows from the way we defined , again by a similar argument.

**Proof of 4.:** As an example, I’ll show this for addition: Let be two Cauchy sequences. The sum of two sequences is just element-wise addition, hence we need to show that is a Cauchy sequence. So let be arbitrarily small; then, since both are Cauchy, there exist indices and such that for all we have and . Let , then for any .

hence the sum is a Cauchy sequence.

Now for equivalence: assume and . We need to show , meaning that the following sequence converges to 0:

hence the whole sequence converges to , QED.
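The ε/2 trick from the addition proof can be watched in action numerically – here with the stand-in sequences 1/n and (-1)^n/n (my own choice of examples, not from the post):

```python
from fractions import Fraction

a = lambda n: Fraction(1, n)            # a Cauchy sequence
b = lambda n: Fraction((-1) ** n, n)    # another one

eps = Fraction(1, 100)

# For a: any N_a with 1/N_a < eps/2 works; for b: |b_i - b_j| <= 1/i + 1/j,
# so any N_b with 2/N_b < eps/2 works. These play the roles of the proof's indices.
N_a = next(n for n in range(1, 10 ** 4) if Fraction(1, n) < eps / 2)
N_b = next(n for n in range(1, 10 ** 4) if Fraction(2, n) < eps / 2)
N = max(N_a, N_b)

s = lambda n: a(n) + b(n)               # the element-wise sum
# past N, any two elements of the sum differ by less than eps/2 + eps/2 = eps
ok = all(abs(s(i) - s(j)) < eps for i in range(N, N + 50) for j in range(N, N + 50))
print(N, ok)
```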

*Gosh golly,* it *sure* looks *a lot* like decimal expansions represent *actual numbers,* doesn’t it? I mean – converting to sequences and back we can *add* them, *multiply them,* they contain (representations of) all rational numbers… but, you know, apart from their representations, what really *are* the *“real numbers”* now?

And you’ll be surprised to learn that at this point *it doesn’t even matter anymore*. *Seriously.*

I mean, don’t get me wrong; I can give you at least *three different ways* of formally constructing well-defined sets with well-defined arithmetic operations on them that we can point to and declare to be *“the real numbers”*:

- Let be the set of all proper decimal expansions, with addition, multiplication etc. defined via their associated series.
- Let be a representative system of the equivalence classes on rational Cauchy sequences. Meaning: we associate each Cauchy sequence with its equivalence class – i.e. the set of all equivalent Cauchy sequences. Pick one representative from each equivalence class; the result are *“the real numbers”*. Since all operations (as shown/left as exercise) preserve equivalence, this is well-defined independently of the specific representative system – which basically means we could just take the equivalence classes directly. In any case, this is just a slightly technical way to rigorously state that *the real numbers are exactly the Cauchy sequences, where we consider two sequences to be equal if they are equivalent.*
- *Dedekind cuts*: Let be the set of all subsets with the property that if and , then (think of them as the sets of rational numbers strictly smaller than some real number). Or, if you prefer, the same using instead of . So basically, there are two ways of using Dedekind cuts.

But, you know, all of them are *equivalent anyway*, so for all practical purposes, which one we chose (if any) is *completely irrelevant!* What *matters* – and this is what all mathematicians agree on when it comes to *“the” real numbers* – is that the following axioms all hold:

- The real numbers form an *ordered field*. That means:
  - There are elements and operations such that for all we have and .
  - For all there is some such that .
  - For all with there is some such that .
  - Both and are *associative* and *commutative*, i.e. , , and .
  - The *distributive law* holds: .
  - The elements of are *totally ordered*; meaning there is a *reflexive, anti-symmetric, transitive* (never mind what exactly that means, except that it behaves like a proper ordering) relation such that for all we have either or (and we have both if and only if ). Furthermore, this order is compatible with addition and multiplication in the usual way ( and implies etc.).
- There is an embedding that agrees with addition, multiplication, subtraction and (meaning: I can consider rational numbers to *“be”* real numbers).
- The real numbers are *Archimedean*: for each number there is some natural number with (using the embedding from the previous point).
- The real numbers are *topologically complete*; meaning: every Cauchy sequence of real numbers converges to some real number.

And, of course, all of the above methods of defining the real numbers satisfy all of these properties.

So *what are the real numbers?* Well, *any set that satisfies all of these* can be considered *“the real numbers” –* as long as they do, they’re *all fine*. But the point is, whatever you call *“the real numbers”*; they *have to satisfy these axioms*. If they *don’t, they’re not the real numbers;* at least not what *any mathematician will mean by that.*

And I hope I’ve been detailed and clear enough that it’s obvious *why* we choose to define real numbers this way – it’s the *most natural way* to interpret decimal numbers with infinitely many digits after the decimal point, and it agrees with what real numbers are intuitively *supposed to be:* They allow us to e.g. take the square root of any positive number, all Cauchy sequences actually converge (instead of just looking like they do), they contain the rationals, they *don’t* contain infinitely small or large *“numbers”*… so now that we *know* all of this, we can continue dissecting Gabriel.

*You know, next time.*

(Next post on John Gabriel: “Cauchy’s Kludge”)

Few people can ever begin to match my intelligence and depth of insight. I am not arrogant or deluded.

– John Gabriel

Yeah. *That’s an actual quote.* I’ve been made aware of Gabriel’s LinkedIn page, where he wrote hilarious posts about his new calculus and his axioms for arithmetic. And (as someone on Mathematical Mathematics Memes pointed out), it’s becoming increasingly plausible that this guy has some mild form of mental illness, or at least a personality disorder. *I mean that without a hint of irony* – the narcissism and ignorance of this guy dwarf even Donald Trump’s. Here are just some further choice quotes:

“After Euclid and before me, not a single mathematics academic ever understood what is a number. That’s quite a big statement, but I have proved it.”

“Georg Cantor, whom I consider one of the greatest fools in mathematics and the reason so many have problems with math.”

“I loathe mainstream academia and it’s hard for me to restrain myself. My tolerance for stupid people has long ceased to exist.”

“I realised many years later, they rejected my discoveries for several reasons, but the one that stood out is the fact that they did not like me personally. Truth or proofs had little to do with the rejection. They decided to libel and defame me, rather than study my ingenious work which is worthy not of one Abel prize, but of ten Abel prizes.”

“One would think that given I am helping future generations of aspiring young mathematicians, they would be grateful and welcome this new knowledge I reveal. But no, my life has all but been destroyed by the efforts and attacks of the most vile scum in mainstream academia.”

“The NC is the first and only rigorous formulation of calculus in human history. That is an incredible accomplishment given that no one before me was able to do this – not even the so-called greats such as Archimedes, Newton or anyone else. It is no longer debatable, but proven fact.”

John Gabriel – Jesus, Aristotle, Newton and Einstein all rolled into one. Praise him.

Yeah. *Verbatim*, people, *verbatim*. And I don’t think he’s a troll either – *he’s been doing this for years*, if not decades, and he takes every piece of criticism as a personal attack. He really seems to think he is god’s gift to humanity. So, let’s continue to *take him down a notch.*

In the second video on John Gabriel’s YouTube channel, he starts ranting about how calculus (unlike his *new calculus*, which is perfect in every way!) is *wrong,* which means I might as well use this opportunity to explain why it’s not and in general *how this stuff actually works*. Unfortunately (or *fortunately*, depending on your aesthetics) that means getting into *serious math territory* – many things that Gabriel gets wrong have to do with the *fundamental definitions* of e.g. convergence, the real numbers etc. However, if we want to see *how wrong Gabriel really is,* we first need to make sure that we all agree what the “official” (i.e. *“right”*) definitions of all those concepts *are,* what *motivates* these definitions and what their *implications* are.

**Disclaimer:** I will assume that we all know and somewhat agree that *rational numbers* are, like, *a thing* – that is, numbers that can be expressed as fractions of integers . The set of all rational numbers is denoted as , the set of all natural numbers – i.e. the numbers – as . I mention this because I *will have to talk about what “real numbers” really are* in modern mathematics – something that Gabriel really doesn’t seem to grasp. Also: usually I prefer to have be a natural number, but I specifically exclude it from here, just for convenience – it allows me to e.g. define a sequence without needing to worry about the case .

**Second disclaimer:** I’m not a historian. I might get some, many or all of the historical details wrong. I’m writing this pretty much off the top of my head. The same holds for all definitions, proofs etc. With respect to the historical stuff, it doesn’t even matter – after all, almost everything to do with actual mathematics has changed since then, and what’s important is the motivation behind this stuff, not the precise historical development, which is why I can’t be bothered to fact-check this in detail. With respect to the actual math: it’s waaay more fun to redevelop all the concepts off the top of my head rather than looking everything up in textbooks. So don’t believe anything, check everything for yourself and see whether it works out. I’m still, like, 90% sure that all my definitions are either standard or equivalent to standard definitions, so don’t reject everything I say out of hand either.

Calculus was developed by Isaac Newton and Gottfried Leibniz. It’s not quite clear who invented it first; it’s not unlikely that they invented it independently of each other, inspired by similar problems. What we *do* know is that Leibniz *published* his calculus first and it’s *his notations* that we still use today. Newton (of course) claimed *he* invented it first and he used it to prove that an inverse square law like the one in his theory of gravity *would in fact imply elliptical planetary orbits*. It’s an astonishing feat of intellect – this guy basically came up with a *working, mathematical theory of gravity* to explain planetary orbits, and *invented completely new mathematics just to prove that it works.*

Calculus is (to quote Wikipedia) *the mathematical study of continuous change*. Its basic objects of interest are *continuous functions* on the *real numbers* (often described as *“functions whose graph can be drawn in one stroke without lifting the pen”*) and its most important notions (besides continuity) are *derivatives* and (basically the inverse of derivatives) *integrals*. Nowadays, we define the latter using *limits of sequences*, and those we define using ––*criteria*, which we have to thank Augustin-Louis Cauchy and Karl Weierstrass for.

However, in Newton’s and Leibniz’ times the “limit of a sequence” *wasn’t yet a well-defined notion*; instead, they used* infinitesimal numbers* in the development of their theories. So here’s approximately their thought process:

Assume we have some continuous function . As an example, let’s say . Its graph looks like this:

Question: *What is the slope of that function at the point* ? I mean, obviously the function is increasing to the right, but *how fast* is it increasing? Obviously it’s not increasing “*at the same speed*” everywhere – otherwise the graph would just be a straight line. So, how can we find out *“how fast”* the function is increasing at the specific point – and *what does that even mean*?

Well, let’s look at* two* points instead: e.g. and . How fast does the function grow *in the interval from to *? Now *this* we can answer: we know and . So the function has grown by . That’s an *absolute growth* of 2 in the interval of length .

Which means: *on average* the function grows by a factor of in that interval:

That’s how we measure speeds in practice: Note at which time e.g. a car passes a fixed point , at which time it passes a second point and divide the distance by the time it took, i.e. . This will give you the *average speed* in the time period from to .

But of course, it doesn’t give you *the exact slope* at the singular point . But it might give you an idea how to get there: If we decrease the distance between and (assuming the function doesn’t do weird stuff in between), we will be somewhat closer to the exact slope. For example, if we pick , then and thus the average growth is .

And here’s Newton’s and Leibniz’ mental leap: If we decrease the distance between and to the point where it is *infinitesimally small*, then we will get the *exact slope* of at the point (or – the difference between the slopes will *also* be infinitesimally small)!

So, let’s assume we have some *infinitesimally small* (whatever that means); then the *derivative of* (i.e. the slope of at the point ) is given by .

For our function, that means:

…and (so the reasoning goes) since is just an *infinitesimally small number* and hence ultimately *negligible*, we can ignore it and get , and hence we finally get the exact value .
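The “ignore the leftover” step can be watched numerically. A sketch in exact arithmetic, taking f(x) = x² purely as a stand-in example (my choice for illustration):

```python
from fractions import Fraction

def f(x):
    return x * x        # f(x) = x^2, purely as a stand-in example

def avg_slope(x, h):
    """Average growth of f over the interval [x, x+h]."""
    return (f(x + h) - f(x)) / h

x = Fraction(1)
for h in (Fraction(1, 10), Fraction(1, 1000), Fraction(1, 10 ** 6)):
    # for f(x) = x^2 this is exactly 2x + h: the leftover h IS the "infinitesimal"
    print(h, avg_slope(x, h))
```

The quotient simplifies exactly to 2x + h, so the only thing standing between the average slope and the “true” slope 2x is that single h the old-school reasoning waves away.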

Obviously, there are problems with that reasoning: What the hell are those *“infinitesimal numbers”* that are suddenly introduced, that I can apparently add and multiply and divide by (I mean – I *can’t* divide by zero, but I *can* divide by something that’s *“infinitely close” to zero*?), but then in the end *I just ignore them? What’s that all about? Is this supposed to make sense?* And if is “infinitesimally small”, shouldn’t that mean that would have to be *infinitely large*? Does *that* still make sense? *What’s going on here? Aaaaaaaah!*

Well… the thing is… it *sort-of works*. At least for relatively simple functions such as the example I used, it *yields meaningful results*, regardless of how weird the reasoning to justify the method is. But infinitesimals were never quite satisfactory, which is why Cauchy and Weierstrass tried to put the whole thing on a *more solid basis*.

Interestingly enough, this whole infinitesimal stuff was actually formally grounded in a rigorous way in the 20th century (and resurrected as “non-standard calculus”). But the way *“standard”* mathematicians interpret and think about calculus and real numbers in general is in terms of *Cauchy sequences, limits and –-criteria*, so let’s explain the modern foundation for calculus now.

**Definition:** A **sequence** of rationals is simply a function – i.e. a function that maps each natural number to some rational number .

Sequences are usually denoted as (or in short just ) and the individual elements as (instead of – i.e. we just write the function argument as an index).

So, why are sequences interesting? Consider the following two examples:

- (i.e. ) and
- (i.e. ).

There’s something fundamentally different about the two: Obviously, if we increase , the first sequence will *strictly increase* as well, while the second one *strictly decreases*. Okay, that’s not too interesting, but if we look closer, we notice that the first sequence is also *unbounded*: Pick an arbitrarily large number – at some point the first sequence will grow larger than (just pick any natural number larger than , then ). For the second sequence however, we can give a *lower bound*; e.g. . Even though strictly decreases, it will never become smaller than .

But of course, we can give a “better” lower bound than – namely . This is also a lower bound, because all the elements of are strictly positive; hence no element will ever be . In fact, is the *largest lower bound* (or *infimum*) of the sequence, and *the larger* a natural number we choose, *the closer the sequence element will be to .*

It’s consequently not completely absurd to suggest that the sequence *approaches* in such a way, that we may meaningfully say that is the *limit* of the sequence . In contrast, does not seem to have such a limit – the sequence just gets larger and larger with no bound in sight (we *could* say that the limit of the sequence is *“infinity”*, but infinity is not a number per se, and infinities are – *without a careful formal treatment!* – problematic anyway). We say *the sequence converges towards *, and *the sequence diverges*. Now let’s properly define those two terms:

**Definition:** Let be a sequence of rationals. Assume there is some rational number such that the following holds:

For any arbitrarily small rational number there is some index such that for any index the distance is smaller than . In logical notation:

Then we say the sequence **converges** **to** and write or .

If no such exists, we say the sequence **diverges**.

Okay, this looks a bit complicated, so let’s explain it in more detail: We say a sequence converges to some number , if we can get *“arbitrarily close”* to by making the index of our sequence larger. This *“arbitrarily close”* we can express formally by thinking about it as a kind of game: You tell me *how close* to you want to be, by giving me an (arbitrarily small) distance . Then I’ll give you an index in return, such that *all subsequent elements* in the sequence are *closer to* than your chosen distance – i.e. *for all subsequent indices* , we have . If I can *always* give you such an initial index,* no matter how small* a distance you choose, then I can adequately say that the sequence converges towards .
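The game can be played literally in code. A minimal sketch for the stand-in sequence aₙ = 1/n with limit 0 (my choice of example): you hand me an ε, and `respond` computes my winning index N.

```python
from fractions import Fraction

def respond(eps):
    """My move in the convergence game for a_n = 1/n with limit 0: an index N
    such that all later sequence elements are closer to 0 than eps."""
    N = 1
    while Fraction(1, N) >= eps:
        N += 1
    return N

for eps in (Fraction(1, 10), Fraction(1, 1000)):
    N = respond(eps)
    # every element from index N on really is within eps of the limit 0
    print(eps, N, all(Fraction(1, n) < eps for n in range(N, N + 100)))
```

No matter how small an ε you pick, the loop terminates and hands back a valid N – which is precisely what the formula asserts.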

Alright? So far, so good. Now we can use limits of sequences to define the *limit of a function at a point *. Why should we? Well, look at the function , for example. This function is *not well-defined* at , because then the *denominator would be * – i.e. *“doesn’t exist”.* But, you know, here’s what this function looks like:

In fact, the function is equal to the function *everywhere except at* ! Annoying, but if we build a sequence that converges to (for example the sequence ), then we can define as the limit of the sequence (the resulting, now everywhere-defined, function is called the *continuous extension* of ), which happens to work out nicely and give us . *Problem solved!*

*…eeeexcept,* of course, that this only makes sense if the sequence *converges at all,* and – more notably – that the limit* does not depend on the specific sequence* . So instead of defining the limit of a function using sequences, we will use another –-criterion:

**Definition:** Let be a function on rationals (i.e. ) and . If there is some number such that the following holds:

For every arbitrarily small rational number , there exists some such that for every with we have . In logical notation:

Then we call *the limit of at * and write .

The idea being a similar game as in the definition of convergence for sequences: You tell me any arbitrarily small distance to the (supposed) limit you want to have, and in return I will give you a distance , such that if any is closer to than , then will be closer to than . If I can always give you such a , no matter which you pick, then I win and is indeed the limit of at .
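Here is the same game in code, with a deliberately trivial toy function f(x) = 3x at x₀ = 2 (my own example, chosen so the winning move is transparent): since |f(x) − 6| = 3·|x − 2|, answering δ = ε/3 always wins.

```python
from fractions import Fraction

def f(x):
    return 3 * x          # a deliberately trivial toy function

x0, L = Fraction(2), Fraction(6)

def respond(eps):
    """My winning move: |f(x) - L| = 3*|x - x0|, so delta = eps/3 suffices."""
    return eps / 3

eps = Fraction(1, 100)
delta = respond(eps)
# spot-check some x closer to x0 than delta (excluding x0 itself)
xs = [x0 + delta * Fraction(k, 10) for k in range(-9, 10) if k != 0]
ok = all(abs(f(x) - L) < eps for x in xs)
print(delta, ok)
```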

Alright, and now we can finally define derivatives using function limits – the idea being, that instead of picking an “infinitesimal number”, we take the function limit of the quotients:

**Definition:** Let be a function on rationals and . If the limit

exists, we call **differentiable at** . If is differentiable at every point in we call

the **derivative of** .

You’ll note that this is exactly what Newton and Leibniz did, except that we got rid of those pesky infinitesimals and only used notions that are formally and rigorously defined – there’s no room for ambiguity anymore. Furthermore, all of this works perfectly and beautifully – for example, all of the following highly desirable properties (assuming all the occurring limits exist) hold and can be easily proven using the above definitions (left as an exercise):

- The limit of a convergent sequence is unique,
- if and only if

…and we didn’t even touch the real numbers yet!
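As a sanity check on the derivative definition, here are difference quotients closing in on the function-limit value for an illustrative f(x) = x³ at x = 2 (my own choice of example, not from the post):

```python
from fractions import Fraction

def f(x):
    return x ** 3       # an illustrative choice; the derivative should be 3x^2

def diff_quotient(x, h):
    """The quotient whose function limit (h -> 0) defines the derivative."""
    return (f(x + h) - f(x)) / h

x = Fraction(2)         # expected limit: 3 * 2^2 = 12
for h in (Fraction(1, 10), Fraction(1, 1000), Fraction(1, 10 ** 6)):
    print(h, diff_quotient(x, h))
```

In exact arithmetic the quotient works out to 12 + 6h + h², so you can see the distance to the limit shrinking with h, exactly as the ε–δ definition demands.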

(Next post on John Gabriel: Calculus 102 (Cauchy Sequences and the Real Numbers))

But I suspect that’s because to be a crank you either have to flat-out lie to people (and what would be the point with math?) or

- not know enough about the subject to realize you’re wrong, while at the same time
- think you know enough to boldly proclaim your wrongness to the public.

I imagine that’s easier with e.g. physics, where people can read popular books dumbed down for a lay audience (and I don’t mean that in a derogatory way – I love pop science!) and come away thinking that they now know all the important stuff and can start drawing their own conclusions on the subject matter (*Spoiler alert:* No, you can’t. If you can’t solve a Schrödinger equation, you’re simply not qualified when it comes to quantum physics, period.) But with math I can imagine it being a lot harder to both *think* you understand something well enough to pontificate about it while at the same time *not* understanding it enough to realize your pontifications make *no damn sense*.

John Gabriel manages to do both, and it’s fascinatingly weird. He’s the perfect embodiment of the Dunning-Kruger effect on steroids: He understands so little about modern mathematics that he doesn’t even realize how little he understands, and instead thinks he’s the only one who really gets how math works. In typical crank fashion he rails against “stupid academia” who get so hung up on useless concepts like “*reason*” or “*making any sense whatsoever*” that they just don’t realize what a genius he is.

Or it could be that he’s just wrong and makes no fucking sense. It’s a toss-up.

John, let me recite Potholer’s Trichotomy to you:

If something in science doesn’t make sense to you, you have to conclude that either

- Research scientists are all incompetent, or
- they’re all in on a conspiracy to deceive you, or
- they know something you don’t, and you need to find out what that is.
Hint: Try option three first. – Potholer54

Interestingly enough, I had read about Gabriel before – years ago on Good Math, Bad Math, where he ended up arguing with Mark Chu-Carroll about Cantor’s second diagonal argument. That article is from 2010, but apparently about a year ago Gabriel started a YouTube channel, presumably in the hopes of bringing more people over to his more enlightened (i.e. nonsensical) side and to proclaim the fact that he invented a new calculus!

That’s right, he has reinvented calculus, and his version is much better and simpler and it’s easy to understand for anyone open enough to abandon sense and rigor, unlike all those stupid academics.

And given that I’ve just been made aware of his existence again, I figured I’d give it a go and dissect that guy’s videos, because

- it’s fun (at least to me) and
- it’s as good a reason as any to explain some of the stuff he gets wrong in some more detail, and any attempt to explain math to people is time well spent in my opinion.

So let’s start with his first video:

This is just a short video on the *arithmetic mean* – i.e. the “average”. This isn’t as cranky as his other stuff, but it already gives a fascinating glimpse into the way Gabriel thinks. Now, as I said, the arithmetic mean is just the average of a bunch of numbers. We all know how to compute it, we all know why it’s useful – we all remember computing or getting told the average grade in exams, for example. And there is *absolutely no reason* why I mention *that particular example*. Here’s what Gabriel’s video description says about it:

The arithmetic mean is one of the most important concepts in mathematics. While just about anyone knows how to construct an arithmetic mean, almost no one understands it.

Right… the average of a bunch of numbers is really hard to grasp. I remember struggling with it in elementary school as well… no, wait, I didn’t. Maybe that’s just because I didn’t realize how awfully complicated it in fact is – after all, almost no one understands it. But Gabriel does, of course.

To compute the arithmetic mean of a bunch of numbers, we just add them all up and divide the sum by how many numbers we had. In mathspeak:

**Definition:** The *arithmetic mean* of a finite sequence of real numbers is given by

We’ve all done that for grades: Add up all the grades of all the students in an exam, divide the result by how many students there are and you get the average grade in that exam. Here, by contrast, is Gabriel’s “definition” (and yes, he means definition):
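In code, the standard definition really is just “sum and divide” (the grades below are made up):

```python
from fractions import Fraction

def arithmetic_mean(xs):
    """Add them all up, divide by how many there are."""
    return Fraction(sum(xs), len(xs))

grades = [1, 2, 2, 3, 5]     # made-up exam grades
print(arithmetic_mean(grades))   # 13/5
```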

An arithmetic mean or arithmetic average is that value which would represent all the elements of a set, if those elements are made equal through redistribution.

…now I don’t know about you, but… is that even a sentence? What does that mean? “*That value which would represent all the elements of a set*“? “*If those are made equal…*” …well, then the set only has one element, doesn’t it? (Sets have no multiplicity – either a number is in a set or it isn’t.) OK, at least then I can guess what he means by “represent”. But “*through redistribution*“? What does “redistribution” mean in this context?

*This is not a definition.* This is at best *a clumsy attempt at explaining* a definition. But *he actually calls this a definition*, and he runs with it. So here’s a beautiful example of why definitions *fucking matter.*

He goes on to explain that you can compute the arithmetic mean by drawing squares. He demonstrates this with three sets of squares, the first one having one square, the second two, the third three. He moves one square from the last set to the first so that every set has two squares, thus “making them equal”, hence the arithmetic mean is two.

Now at least one can understand what his so-called “definition” was supposed to mean, but the immediate problem now is: *What if the total number of squares isn’t divisible by the number of sets you have?* Then his “redistribution” attempt fails, so according to his definition *there is no arithmetic mean* in that case. But he also shows us how to compute it using “*algebra*“, by which he *means arithmetic* (pun intended – and yeah, he can’t even get *that* right) – i.e. summing up and dividing the result according to the definition I stated above. But that’s not what *his* definition says.
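A two-line sketch of the problem: his “redistribution” only yields a value when the total divides evenly, while the ordinary mean always exists (the pile sizes are my own example):

```python
from fractions import Fraction

def redistribute(piles):
    """Gabriel-style 'redistribution': move whole squares around until all
    piles are equal -- only possible if the total divides evenly."""
    total, n = sum(piles), len(piles)
    return total // n if total % n == 0 else None

print(redistribute([1, 2, 3]))          # his squares example: 2
print(redistribute([1, 2, 4]))          # one extra square: no 'redistribution'
print(Fraction(sum([1, 2, 4]), 3))      # the ordinary mean exists anyway: 7/3
```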

See what I mean when I say this guy makes no sense? But yeah, he runs with it:

A useful arithmetic mean is one where it makes sense to redistribute the values.

Example: Three friends each need $2 to buy lunch. They decide to pool their money because one of the friends may not have enough. If the total they have is $6, then it’s evident there is enough money for all three to buy lunch.

Redistribution is accomplished by sharing the money.

A useless arithmetic mean is one where it makes no sense to redistribute the values.

Example: The arithmetic mean of student grades in a given class is a senseless calculation because students cannot share their marks.

Redistribution cannot be accomplished by sharing grades.

…Yup. First, notice how *no arithmetic mean* appears in his first example. *Anywhere*. Something costs $2, three friends pool their money, they need at least $6. The conclusion I’m left to draw is that a “*useful arithmetic mean*” is one which *isn’t even used*, despite the name. Quite counter-intuitive.

However, the *prime example* for an average – namely the average grade in an exam, something *everyone has seen hundreds of times in school* – is, to him, a “*useless arithmetic mean*“, because students can’t share grades. *How does that even make sense?* And *don’t think* “useless” is just a neutral term he’s introducing – he *does* mean the word in a *literal sense*. Listen to the *derision in his voice* when he talks about the “*senseless computation*“.

*Of course it makes sense* to compute the average grade – it gives you a good baseline to *compare* your own grade to, a sense of how well you did *in comparison to the others* without needing to know everyone’s specific results (which are confidential, after all). It gives you a sense of how difficult the exam was, or how leniently it was graded. But no, that’s all meaningless because students can’t share grades.

But also, *why does this matter*? Math is *abstract*, it *doesn’t care how* you apply it, *what* you apply it to and whether the result of that application still has any meaningful interpretation *in the real world*!

Yeah, this is how Gabriel works in a nutshell:

- He takes a mathematical concept with a proper definition which he either doesn’t know, like or understand (or any non-empty subset of the three),
- he visualizes or interprets it in some vague way (“*making things equal through redistribution*“),
- he insists on his ill-defined vague interpretation being the *actual definition* (even though it’s hand-wavy, vague nonsense),
- he labels everything outside of his vague interpretation as “*meaningless*” and therefore void and draws absurd conclusions from his “definition”,
- he proclaims that he has found the *ultimate real meaning* of the mathematical concept and rails against *stupid academia*.

It’s glorious in its arrogance and ignorance.

(Next post on John Gabriel: Calculus 101 (Convergence and Derivatives))

The goal in a SAT problem is to find an *assignment* that satisfies a set of propositional formulae , or a proof that none exists, i.e. that the set is unsatisfiable. DPLL is closely related to the *resolution calculus*, and as with resolution we first transform to a set of *clauses* , where each clause is a set of *literals* , and a literal is either a propositional variable or the negation of a propositional variable. We write if is the propositional variable and if it is the negation of . Then corresponds to the formula

in conjunctive normal form (CNF).

The basic algorithm takes a clause set and a *partial assignment* (where is the set of propositional variables) as arguments (starting with the empty assignment) and returns either a partial assignment that satisfies or if none exists. It proceeds as follows:

- **Unit Propagation**: If contains a unit, that is a clause with just one literal, then we extend our current assignment by that literal; meaning if , then we let , and if , we let . Then we *simplify* (more on that later) with our new assignment and repeat.
- When contains no units anymore, we check whether is now empty; if so, we are done and we return our current assignment . If not, we check whether now contains an empty clause; if so, we return .
- **Splitting**: So now is not empty, contains no empty clauses and contains no units. We’re left with guessing an assignment, so we pick one propositional variable on which our assignment is currently undefined, let , simplify with our new assignment and recurse. If returns an assignment, that means we guessed correctly and we return that. If it returns , we guessed wrong and instead let , simplify and recurse with that.
- By **simplify** I mean: If in our current assignment we have (or respectively), then we delete from all clauses that contain the literal (respectively ). In the remaining clauses, we eliminate the literal (respectively ).

Now why does this work? Let’s start with unit propagation: If contains a unit literal , then the formula corresponding to our clause set looks like this: and obviously, for this formula to be satisfied, we need to set . Conversely, if we have the unit literal , then the formula looks like this: and we obviously need to set to satisfy . Now for simplification: If we assign , then any clause that contains the literal will be satisfied by , since clauses are interpreted as the *disjunction* of their literals. Consequently, we can ignore these clauses from now on. The remaining clauses however will still need to be satisfied, and if we have , then we know that the literal will be false anyway, which means we can ignore it in any disjunction (i.e. clause) containing it.
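
For concreteness, the procedure above can be sketched in a few lines of Python. Clauses are sets of literals and literals are strings like `"P"` and `"-P"` – the encoding and all names are just illustrative, not taken from any particular DPLL implementation:

```python
def neg(lit):
    """Negation of a literal."""
    return lit[1:] if lit.startswith("-") else "-" + lit

def simplify(clauses, lit):
    """Drop clauses satisfied by lit; remove neg(lit) from the rest."""
    result = []
    for c in clauses:
        if lit in c:
            continue                     # clause is satisfied, ignore it
        result.append(c - {neg(lit)})    # neg(lit) is false, eliminate it
    return result

def dpll(clauses, assignment=()):
    # Unit propagation: repeatedly assign the literals of unit clauses.
    while True:
        unit = next((c for c in clauses if len(c) == 1), None)
        if unit is None:
            break
        lit = next(iter(unit))
        assignment = assignment + (lit,)
        clauses = simplify(clauses, lit)
    if not clauses:
        return assignment                # every clause satisfied: done
    if any(len(c) == 0 for c in clauses):
        return None                      # empty clause: contradiction
    # Splitting: guess a value for some still-undefined variable.
    var = next(iter(next(iter(clauses)))).lstrip("-")
    for guess in (var, neg(var)):
        result = dpll(simplify(clauses, guess), assignment + (guess,))
        if result is not None:
            return result
    return None
```

For instance, `dpll([frozenset({"P", "Q"}), frozenset({"-P"})])` propagates the unit `-P`, which turns the first clause into the unit `Q`, and returns the assignment `("-P", "Q")`.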

**Example:** Let . Then proceeds like this:

- We have no unit clauses, our assignment is empty, no clause is empty and isn’t empty, so we can only split. We pick the variable :
- We let and simplify, yielding . We have no unit clauses, no clause is empty and isn’t empty, so we can only split. We pick the variable :
- We let and simplify, yielding . We have a unit clause, so we do unit propagation by letting and simplifying. Unfortunately, that yields containing the empty clause, so we return .
- Since the former branch failed, we let and simplify, yielding . We have a unit clause, so we do unit propagation by letting and simplifying. Again, that yields containing the empty clause, so we return

- The former branch failed, so we let and simplify, yielding . is empty, so we return the current assignment .

- We let and simplify, yielding . We have no unit clauses, no clause is empty and isn’t empty, so we can only split. We pick the variable :

Apart from unit propagation and simplification, all that does is guessing assignments and backtracking if it fails. As such, it can very easily happen that the algorithm tries the same failing partial assignment again and again; e.g. because somewhere along the way it splits on a propositional variable that has nothing to do with why it failed before. A nice example of how this can happen is if we have basically independent clauses that don’t share any propositional variables in the first place:

**Example:** We take the clause set from before and pointlessly extend it by the independent clauses and . Obviously, these are basically two independent SAT problems that don’t share any propositional variables. But as we’ve seen before, if we start with our original clause set will necessarily fail, but the algorithm will only notice once (and ) have been assigned as well. If our run starts with , but (in the worst case) then continues with first, it will try the same assignments for and again and again:

This is where **clause learning** comes into play: *If* we fail, we want to figure out *why* we failed and make sure we don’t make the same mistake all over again. One way to do this is with **implication graphs**, which systematically capture which chosen assignments lead to a contradiction (or imply which other assignments via unit propagation). An implication graph is a directed graph whose nodes are literals and the symbols (representing the clause becoming empty, i.e. a failed attempt). We construct such a graph during runtime of :

- If at any point a clause becomes the *unit clause* , then for each literal with we *add an edge from to* (where is the *negation* of the literal ). The intuition behind this is that when we do unit propagation, we’re using the fact that the formula is logically equivalent to . The new edge is (sort of) supposed to capture this implication.
- If at any point a clause becomes empty, then for each literal we *add an edge from to* . Here we are using the fact that the formula is logically equivalent to .

Before we answer the question *why* we’re doing this, let’s go back to our previous example and construct the implication graph up to the first contradiction. To keep track of which clause a literal is coming from, let’s name them:

**Example:** Let again where , , and . Then proceeds like this:

- We start by splitting at the variable :
- We let and simplify, yielding . Then we split at the variable :
- We let and simplify, yielding .
*is now a unit clause*. Apart from , the original also contained the literals and , so we add to our graph two new edges toΒ , one from and one from (remember: from the*negations*of the original literals!).Now we do unit propagation , resulting in becoming the empty clause. The original contained the literalsΒ ,Β andΒ , so we add edges fromΒ ,Β andΒ to and return .

- We let and simplify, yielding .

- We let and simplify, yielding . Then we split at the variable :

The resulting graph looks like this:

So, how does that help? Well, the graph tells us pretty directly which decisions led to the contradiction, namely , and . But more than that, it tells us that wasn’t actually a decision; it was implied by the other two choices. So in our next attempt, we need to make sure we don’t set and to . The easiest way to do that is to add a new clause for , which is . *Hooray, we learned a new clause!* Before we examine how exactly we construct clauses out of graphs, let’s continue our example with this newly added clause:

**Example continued:**

- –
- –
- –
- Since the former branch failed, we add our new clause and simplify, yielding . Now became a unit clause, so we add an edge from to , do unit propagation by letting and simplify, yielding .

Now became a unit, so we need to add edges from and to . We do unit propagation again and becomes the empty clause, adding the edges from , and to . Finally, we return .

- –

The resulting graph:

This time, we can immediately see that the bad choice was . We can learn from this by adding the new clause , which of course is a unit and thus immediately becomes part of our assignment.

So, how *exactly* do we get from implication graphs to clauses? That’s simple: starting from the contradiction we arrived at, we take *every ancestor of the -node with no incoming edges* – i.e. the roots. Those are the literals that were chosen during a split-step in the algorithm, and the other ancestors are all implied (via unit propagation) necessarily by those choices. These choices (i.e. their conjunction) led to the problem, so we take their negations and add them as a new clause: In the first example graph, the roots were the nodes and , so we added the new clause . In the second graph, the only root was , so we added the clause .
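
This extraction step can be sketched in a few lines of Python, with the implication graph given as a plain list of edges. The encoding (literals as strings, a special `"CONFLICT"` node) is illustrative, not taken from any particular solver:

```python
def neg(lit):
    """Negation of a literal."""
    return lit[1:] if lit.startswith("-") else "-" + lit

def learn_clause(edges, conflict="CONFLICT"):
    """edges: (source, target) pairs of the implication graph."""
    preds = {}
    for src, tgt in edges:
        preds.setdefault(tgt, []).append(src)
    # Walk backwards from the conflict node to collect all ancestors.
    ancestors, stack = set(), [conflict]
    while stack:
        for p in preds.get(stack.pop(), []):
            if p not in ancestors:
                ancestors.add(p)
                stack.append(p)
    # Roots are ancestors with no incoming edges: the split decisions.
    roots = [a for a in ancestors if a not in preds]
    # The learned clause forbids exactly this combination of decisions.
    return frozenset(neg(r) for r in roots)
```

With a toy graph where the decisions `P` and `Q` imply `R`, and `R` together with `Q` produces the conflict, the learned clause is the negated pair of decisions, i.e. `{-P, -Q}`.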

Admittedly, in this example adding the learned clauses didn’t actually help much – but imagine bringing the variables back into the mix. Where we had hundreds of almost identical branches before, with clause learning the search tree of the algorithm will look more like this:

Two failures are now enough to conclude that setting was already a mistake, and we can *immediately backtrack to the beginning*.

In its (not actually, but for our purposes) most abstract form, a **logical system** consists of three things:

- a *language*, i.e. a set of *formulae*,
- a class of *models*, which we use to assign a truth value to a formula, and finally
- a **logical entailment** relation between models and formulae.

We don’t demand anything more specific. The formulae and models may look however we want them to; as long as we have a relation between models and formulae, we have a logical system. There’s nothing that forbids us from defining , and if is divisible by . It’s *pointless* as a logical system, but it is a logical system.

We interpret to mean *the formula is true in the model* – whatever we mean by that. If a formula is true in *every* model, we call a **tautology** (or *valid*).

Let’s look at propositional logic to see how this formalism works in practice: We start with some set of propositional variables .

is simply the set of all *propositional formulae*, i.e. the set defined by the following rules:

- Every variable is a formula, i.e. .
- If is a formula, then .
- If are formulae, then so are , , and .

The way we assign truth values to propositional formulae is via *assignments*: Functions that assign to each propositional variable either 1 (true) or 0 (false). We can then extend an assignment to a function on all formulae inductively over the recursive definition of formulae:

- For every variable , we let .
- For any formula , we let if and only if .
- For any formulae , we let if and only if and .

- We let if and only if either or (or both).
- We let if and only if or (or both).
- And finally, we let if and only if .

Since we can extend *every* assignment (on ) to all formulae this way, we don’t actually need to distinguish between assignments and their extensions – once we assign 0 or 1 to each variable, the above rules tell us exactly which value to assign to any arbitrary formula. Now, our set of models is simply the set of assignments, and the relation we define by letting if and only if . So if we have an assignment with and , we have , , … Furthermore, we can immediately see that e.g. is a tautology – for every model (i.e. assignment) we have . Why? Because either (and thus by definition of the extension we have ) or and thus by definition of our extension and hence .
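
The inductive extension of an assignment translates directly into a small evaluator. Formulae are encoded as nested tuples and assignments as dictionaries mapping variables to 0 or 1 – the encoding is just for illustration:

```python
def evaluate(formula, assignment):
    """Extend an assignment (dict: variable -> 0/1) to arbitrary formulae."""
    op = formula[0]
    if op == "var":
        return assignment[formula[1]]
    if op == "not":
        return 1 - evaluate(formula[1], assignment)
    if op == "and":
        return min(evaluate(formula[1], assignment), evaluate(formula[2], assignment))
    if op == "or":
        return max(evaluate(formula[1], assignment), evaluate(formula[2], assignment))
    if op == "implies":  # false only when the premise holds and the conclusion fails
        return max(1 - evaluate(formula[1], assignment), evaluate(formula[2], assignment))
    if op == "iff":
        return 1 if evaluate(formula[1], assignment) == evaluate(formula[2], assignment) else 0
    raise ValueError(f"unknown connective: {op}")

# A ∨ ¬A evaluates to 1 under every assignment, i.e. it is a tautology.
taut = ("or", ("var", "A"), ("not", ("var", "A")))
```

Evaluating `taut` under both possible assignments of `A` yields 1, mirroring the tautology argument above.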

It should be noted that what a model *is* depends heavily on the specific logic. In propositional logic they are just assignments; however, in *first-order logic*, models are *structures* with a universe and various functions and relations on (depending on the *signature* of the language), and in *modal logic* they are sets of worlds with a visibility relation between them… models can get seriously weird, and they get weirder with the expressive strength of the language. This is why the definition of a logical system is so broad – we want to be able to express *all logics*, no matter how weird, in this framework – even if it’s a nonsensical, pointless logic like the one with natural numbers.

By the **semantics** of a logic we mean its models and the logical entailment relation. On the other hand we have the

Let be a set of formulae. We say is **a model of** (and write ) if and only if for every . We say

This way, we can talk about the truth of a formula *relative to a set of axioms/assumptions* . Again, we call a *set* of formulae **satisfiable** if it has a model; i.e. if there is a model in which every formula of the set is true.

Let’s look at a couple of examples again: In propositional logic, we have e.g. – after all, *every* assignment that satisfies *both and* also *has to* satisfy . Also, the set is *satisfiable*, as any assignment with and proves. An example for a *non-satisfiable* set would be – even though both formulae on their own are satisfiable, *no* assignment can set *both and* to 1.

For our pointless divisibility logic, we already know that 0 is not satisfiable. As a result, any set of numbers containing 0 is also not satisfiable. But we also know that every number (except 0) divides 0, which means 0 is a model for *every set of natural numbers* (not containing 0) – which makes every set that *doesn’t* contain 0 satisfiable. This is boring, so let’s get rid of 0 as a model by letting . Now only *finite*(!) sets of numbers that don’t contain 0 are satisfiable. To prove that, we take an arbitrary finite set of numbers that doesn’t contain 0. We now need to find a model for this set, i.e. a number that is divisible by every one of the . But that’s easy: We just multiply them together, so we have , hence is satisfiable. On the other hand, infinite sets of numbers can’t be satisfiable, since infinite sets of natural numbers are unbounded, i.e. contain arbitrarily large numbers. A number divisible by all of them would have to be larger than any natural number (or 0, which we got rid of). So the satisfiable sets are now exactly the finite sets not containing 0.
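
The finite case of this argument is easy to check mechanically – here is a small Python sketch of the toy logic’s satisfaction relation (the names are illustrative):

```python
from math import prod

def is_model(m, numbers):
    """In this toy logic, m is a model of the number n iff n divides m;
    a model of a set must be divisible by every element. We exclude 0,
    as above."""
    return m != 0 and all(m % n == 0 for n in numbers)

S = {5, 6, 14}
m = prod(S)  # 420, divisible by 5, 6 and 14 – so S is satisfiable
```

The product of any finite set of nonzero numbers is a model of that set, exactly as claimed.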

Of course a statement still belongs squarely in the realm of *semantics* even if it doesn’t explicitly mention models anymore – whether the statement is true or not still depends on the models, after all. So what we want is a *second* relation that is definable *completely without* models, purely on the basis of *syntax*, but happens to express (at least approximately) *the same thing*; i.e. ideally we want to be true if and only if . So let’s use *proof calculi* to define such a relation:

A **proof calculus** for a language consists of a set of *axioms* and a set of *inference rules*. A formula is then *provable* from a set of assumptions if and only if one of the following holds:

- (i.e. is an assumption/axiom), or
- (i.e. is a -axiom of the calculus), or
- there are formulae with (for every ) and an inference rule such that (i.e. is derivable from already proven formulae via some inference rule).

We write short for , i.e. if is provable in without any additional axioms. Note that a calculus is defined directly on the formulae; *purely syntactically*. Our definition doesn’t mention models anywhere. Now, given some proof calculus , what matters to us (and what we are interested in) is how that calculus relates to the *semantics* of our logical system – i.e. how similar and are. The most obvious property a calculus should have in that regard is *correctness*:

A calculus is called **correct** or **sound** if and only if whenever (for some set of formulae and some formula ) we have . In other words: if everything that is *provable* is also *true*.

In other words again: If a calculus *isn’t* sound, we can prove things that *aren’t true*. In that case it is *completely useless*.

Let’s again look at propositional logic as an example. Without using a specific existing calculus, it should be clear that the axioms of the calculus should be tautologies (e.g. or …), since they are always provable, no matter the assumptions. Consequently, they need to be always true if we want our calculus to be sound, hence tautologies. Inference rules are usually written in the following style:

…meaning the inference rule maps the formulas *above* the bar to the formula *below* the bar. We can read this as *“If are all provable, then I can also derive/prove “*. Typical inference rules are e.g.:

- Modus Ponens / Implication-Elimination
- Conjunction Introduction
- Implication Introduction

The last one here is interesting: Whereas in the other examples, the premises (i.e. the stuff above the bar) are all just formulae, in the implication introduction the premise is itself a demand that some formula is provable from some other one.

The converse to soundness is *completeness*:

A calculus is called **complete** if and only if whenever (for some set of formulae and some formula ) we have . In other words: if everything that is *true* is also *provable*.

Whereas soundness is not just (as good as) required, but also often quite easy to ensure (or prove), completeness is a much more difficult property to achieve (or prove). Let’s take our pointless logical system as an example and try to come up with a sound and complete calculus for it:

A set of axioms/assumptions is just a set of numbers (not necessarily finite, but for this example let’s assume it is). A “formula” (i.e. number) is entailed by if any “model” (i.e. number) that is divisible by all the is also divisible by . As we already mentioned, 1 is the only “tautology” (since it divides every number, i.e. is “true” in every “model”), so we can take that as the only axiom of a calculus: i.e. . And we already have a sound calculus: The only provable “formulae” (i.e. numbers) so far are 1 and the ones we assume, so everything we can prove in this calculus also happens to be true. But it’s certainly not complete, since e.g. – every number that is divisible by 5 and 6 is also divisible by 10 after all. But the only “formulae” that are provable from the set are 5 and 6 themselves and 1. So if we assume 5 and 6, then 10 is true but not provable.

We already know that 1 is the only possible axiom of our calculus – since axioms are always provable (no matter the assumptions), they need to be tautologies (assuming we want to stay correct!), and 1 is the only tautology in our pointless logical system. So the only way we can extend our calculus to something more useful is to add inference rules.

I’ll choose the following: if we know that is divisible by , then is also divisible by . We can express that as , which tells us: “If every number that is divisible by is also divisible by , and we know/assume that some number is divisible by , then it is also divisible by ”. With this inference rule we can e.g. prove that :

We trivially have . With this and the assumption we can use our inference rule to infer that . And from this and the fact that we can infer . Similarly we can show that . So this new inference rule allows us to infer all divisors of our axioms/assumptions. That’s great, and since our inference rule is obviously correct (i.e. it only allows inferring valid formulae, provided the premises are valid), our calculus with this new inference rule is also still sound. But it’s still not complete: As before, I still cannot infer that – since this new rule only lets us infer numbers that are *smaller* than the premises, and 10 is larger than both 5 and 6.

For completeness we need one more rule: the fact that if a number is divisible by two numbers , then it is also divisible by the *least common multiple* of those two numbers, i.e.:

…and with this additional rule we can finally prove (since and ). In fact, with this additional rule, the calculus is finally complete (proof left as an exercise).

MMT used to stand for *Module System for Mathematical Theories*, but by now Florian prefers the acronym to stand for

MMT consists of two parts: a formal language (*OMDoc/MMT*) and corresponding software and an API for managing documents in that language. I’ll first explain how the former works and then what the latter is capable of.

(A slightly more technical introduction to MMT can be found here)

**OMDoc/MMT** is a *description language* based on *OpenMath*, a simple XML-based language to describe mathematical expressions with respect to their semantics. For example, the expression in OpenMath would look like this:

<OMOBJ>
  <OMA>
    <OMS name="sin" cd="transc1"/>
    <OMV name="x"/>
  </OMA>
</OMOBJ>

The `OMA` stands for an *application* of a symbol, `OMS` for a previously declared *symbol* in some *content dictionary* (i.e. a document declaring new symbols, given by the `cd=` attribute) and `OMV` represents a *variable*. Expressions like these are called *objects* in OpenMath (hence the `OMOBJ` tag).

The advantage of OpenMath when compared to other description languages like *LaTeX* (which of course is solely for presentation and thus not directly comparable) is that it distinguishes between symbols with the *same notation*. For example, the composition of two functions is usually denoted by – however, so is the composition of two arbitrary elements of some general monoid/group etc. In OpenMath, function composition and monoid composition are two distinct symbols, even though they are denoted the same way. By contrast, in LaTeX both are just expressed as `f\circ g`, with no information on what `f`, `g` and `\circ` actually *mean*.

Furthermore, the content dictionaries can (and should) not just declare the symbols, but also provide their definitions and additional information (such as examples). This is what the entry for the *logarithm* looks like in its OpenMath content dictionary:

<CDDefinition>
  <Name> log </Name>
  <Description>
    This symbol represents a binary log function; the first argument is
    the base, to which the second argument is log'ed. It is defined in
    Abramowitz and Stegun, Handbook of Mathematical Functions, section 4.1
  </Description>
  <CMP> a^b = c implies log_a c = b </CMP>
  <FMP>
    <OMOBJ>
      <OMA>
        <OMS cd="logic1" name="implies"/>
        <OMA>
          <OMS cd="relation1" name="eq"/>
          <OMA>
            <OMS cd="arith1" name="power"/>
            <OMV name="a"/>
            <OMV name="b"/>
          </OMA>
          <OMV name="c"/>
        </OMA>
        <OMA>
          <OMS cd="relation1" name="eq"/>
          <OMA>
            <OMS cd="transc1" name="log"/>
            <OMV name="a"/>
            <OMV name="c"/>
          </OMA>
          <OMV name="b"/>
        </OMA>
      </OMA>
    </OMOBJ>
  </FMP>
</CDDefinition>

…where `CMP` is the *defining mathematical property* of the symbol in natural language and `FMP` the same property expressed as an OpenMath object. That way, every usage of a symbol is intrinsically linked to its definition, and (if possible) the definition itself is expressed formally in OpenMath – this is a big step towards linking mathematical expressions to their actual *semantics*.

Michael then extended OpenMath to not just cover mathematical *formulae*, but actual mathematical *documents* – the result was OMDoc, which adds features such as definitions, theorems, proofs, examples etc. and additional *narrative* structure: it (in theory) allows for describing the content and structure of arbitrary formal documents such as papers, textbooks, lecture notes etc. OMDoc introduces the following three-leveled knowledge structure:

- **Object level:** formulae (as in OpenMath) are OMDoc *objects*.
- **Statement level:** OMDoc *statements* are named objects of a certain type (*definition*, *theorem*, …).
- **Theory level:** OMDoc statements are collected in theories, which are backwards compatible with OpenMath content dictionaries.

Additionally, OMDoc adds various ways to *connect* theories via *morphisms*, giving rise to the notion of a **theory graph**. For example, theories can simply *import* other theories (making the contents of the imported theory available to the importing theory). Other morphisms can act like imports but *change* names or other properties of the imported symbols, or simply *map* statements in one theory to mathematical expressions over the contents of another theory:

The above theory graph shows the development of the theory of *rings* – a ring has two ingoing morphisms and , the former for the *additive group* of the ring, the latter for its *multiplicative monoid*. Additionally, the theory of groups inherits (i.e. imports) from the theory of monoids, which in turn extends the theory of semigroups.

Now, why are theory graphs useful? Because they allow us to build theories according to the *little theories* approach. The inheritance arrows between theories allow us to also inherit provable results, without the need to re-prove them – just as we do in mathematical practice. If I know that a ring is a group, I know that every result that holds for groups in general also holds for rings. If I know a group is a monoid, I know everything that holds for monoids also holds for groups. It allows me to state (and prove) everything on the most general level.

Finally, MMT/OMDoc is just a slight adaptation of OMDoc; most prominently, it introduces the notion of a *constant* on the statement level. A constant is a named symbol with (optionally) a *“type”* object, a *“definition”* object and additional information such as its *notation* and what *role* the constant plays in a larger context (e.g. *equality* or *simplification rule*).

However, MMT/OMDoc still does not fix any *meaning* to the words *type* or *definition* – it is still just a description language without any inherent rules. The same holds for theories – they are just collections of declarations without any fixed semantics. In Florian’s terms, MMT is *foundation independent*. That’s because MMT/OMDoc strives to be general enough to capture the (abstract) *syntax*, *semantics* and *proof theory* of *any arbitrary formal language* (in addition to the *narrative elements* of documents).

Of course, nobody would want to write OMDoc files by hand – which is why MMT adds a surface language that allows for creating MMT/OMDoc documents more easily. As an example, the following is a valid document in MMT’s surface syntax:

namespace http://cds.omdoc.org/example ❚

theory Foo : ur:?LF =
  Nat : type ❘ # ℕ ❙
  plus : ℕ ⟶ ℕ ⟶ ℕ ❘ # 1 + 2 ❙
❚

“Compiled” into OMDoc, the above code would yield this:

<omdoc>
  <theory name="Foo" base="http://cds.omdoc.org/example"
          meta="http://cds.omdoc.org/urtheories?LF">
    <constant name="Nat">
      <type><OMOBJ>
        <OMS base="http://cds.omdoc.org/urtheories" module="Typed" name="type"/>
      </OMOBJ></type>
      <notations>
        <notation dimension="1" fixity="mixfix" arguments="ℕ"/>
      </notations>
    </constant>
    <constant name="plus">
      <type><OMOBJ>
        <OMA>
          <OMS base="http://cds.omdoc.org/urtheories" module="LambdaPi" name="arrow"/>
          <OMS base="http://cds.omdoc.org/example" module="Foo" name="Nat"/>
          <OMS base="http://cds.omdoc.org/example" module="Foo" name="Nat"/>
        </OMA>
      </OMOBJ></type>
      <notations>
        <notation dimension="1" fixity="mixfix" arguments="1 + 2"/>
      </notations>
    </constant>
  </theory>
</omdoc>

A more comprehensive description of both MMT’s surface language as well as the abstract language of OMDoc/MMT can be found here.

The nice thing about MMT’s surface syntax is that – as in the example above – we can attach *notations* to our constants that mirror the notations actually used in mathematical practice, and which we can then use to write formulae. Notations also allow for *implicit arguments*, which again mirror mathematical practice and are quite convenient.

Suppose we wanted to give an operator for declaring some function to be *injective*. The type of such an operator would be something like

It takes the domain and codomain of a function, the function itself, and returns the proposition that is injective. The operator has to take domain and codomain as arguments, because otherwise I can’t even state the (required) type of the argument function . But if I know the type of , I also know the domain and codomain – stating them each time I want to declare a function as injective would be most annoying, so I can give a notation `# 3 is injective`, which only uses the third argument and thus leaves the first and second ones implicit (to be inferred by the MMT system, ultimately). Now if I have a function around, I can just write `f is injective` and I’m done.

So the question remains what we can do with OMDoc/MMT documents – after all, just having the language itself is rather pointless. This is where the MMT system comes into play.

At its core, MMT is an API written in Scala (which compiles to .class files and hence is “backwards compatible” with Java) – i.e. a library of classes and functions related to OMDoc content. At its center, it has an (executable) controller that provides various user interfaces and can be extended by arbitrary plugins providing additional features and commands. This plugin architecture makes MMT highly generic and extensible, and its core classes are implemented modularly and at various levels of abstraction, to make adding new features and support for new systems and languages as easy and convenient as possible. A comprehensive description can be found here, so I’ll restrict myself to a list of things MMT provides and what they can be (and are) used for:

- A **backend** that manages OMDoc/MMT libraries and stores/reads them into memory from various sources (files, databases etc.).
- A **build system** that takes documents in some format and converts them to (primarily, but not necessarily) OMDoc files, separating their contents into the actual (semi-formal) content, the narrative structure of the original document and all relational information (i.e. how all the modules relate to each other via theory morphisms like inclusions).
- This is managed via an abstract class **Importer**, which combines a **parser** and an (optional) **type checker** for any arbitrary formal language and returns well-formed OMDoc elements; the build manager then handles all the input and output, the archive structure, keeping track of dependencies, the communication with the backend etc. Standard implementations of a parser are the one for MMT’s *surface syntax* and one for the *Twelf* syntax, but by now we also have ones for (mostly XML exports of) the Mizar, HOL Light and PVS systems. The type checker is likewise implemented as generically as possible, so that all a user needs to do is implement the checking *rules* and none of the algorithmic “boilerplate” – for example, we have a checker for the logical framework LF, whose implementation (basically) just consists of a Scala object for each of the inference rules of the -calculus. I implemented extensions of the latter by standard type constructors (-types, coproducts, finite types, …) up to a logical framework based on *homotopy type theory* – in each case by just implementing the rules alone as separate Scala objects. Building a logical framework / foundational logic couldn’t be more convenient.
- A **theorem prover**, which similarly to the type checker can be supplied with foundation-dependent rules. It’s not even remotely comparable to state-of-the-art theorem provers (which, I guess, it can’t reasonably be expected to be), but it works surprisingly well for “trivial” things, and can of course – as a general class – be used as a basis to implement your own algorithm.
- A **shell**, which can be used as a frontend to issue commands (e.g. building) to the MMT system.
- An **IDE** in the form of a jEdit plugin, which offers syntax highlighting, auto-completion and access to the shell.
- A **server**, which can be used to browse and present the contents of available archives.
- A **presenter** class that outputs available content as e.g. text, HTML, LaTeX… (the HTML presenter is e.g. used by the server, the text presenter by the shell).

So basically, the MMT system can be used as a generic *“engine”* to conveniently and quickly implement arbitrary formal systems, without the developer having to care about all the boilerplate stuff that *isn’t part of the purely formal specification* of the system. Everything works via *plugins*, so adding new features, importers or instances of any of the above abstract components is intended to be *as convenient as possible*.

Furthermore, it can serve as a **common framework** for various different systems – if you consider my Ph.D. topic, you’ll notice that this is pretty much

Which ultimately is exactly what MMT wants to be – in every aspect **as generic as possible**.

Obviously, I’m not a philosopher, but Plantinga argues specifically against naturalism (which I espouse), and I currently happen to be reading *The Big Picture – On the Origins of Life, Meaning, and the Universe Itself* by Sean Carroll. Carroll is one of the very few people whom I find myself agreeing with on almost every point – in this book, he defends his worldview of *poetic naturalism* using *epistemological Bayesianism*, both of which I find incredibly attractive, so I thought I’d have some fun and give my two cents on Plantinga from those points of view.

The elevator pitch version and three talks on poetic naturalism can be found here – roughly put, it claims that everything that exists is ultimately described by physics alone:

“Naturalism is a philosophy according to which there is only one world – the natural world, which exhibits unbroken patterns (the laws of nature), and which we can learn about through hypothesis testing and observation. In particular, there is no supernatural world – no gods, no spirits, no transcendent meanings.” – Sean Carroll (here)

However, in **poetic** naturalism we *are* justified in saying that other things “exist”, if that is a useful way to talk about the world *on a higher level of abstraction*. If I talk about e.g. some person, describing them on the basis of fundamental particles/fields/whatever simply isn’t appropriate. Talking about their *intentions, free will, desires* etc. on the other hand is – it allows us to form hypotheses about people, derive predictions – hence psychology, sociology, etc. So even though human behavior *ultimately* is the result of the laws of physics acting on purely physical entities, it is *infinitely more useful* to talk about them in terms of human characteristics, hence we’re perfectly justified in saying that e.g. we have *free will* – it’s just that whatever we *mean* by free will is the result of purely deterministic natural laws.

By *Bayesianism* I mean the notion that the *degree* to which we *believe* and place confidence in some proposition should be the *probability of that proposition being true* given the available information. This is a significant point, because even if we (in most practical cases) can’t assign *definite* values to those probabilities, the sheer fact that they *are* probabilities means they are governed by the *mathematical laws* of *probability theory*. In particular, this tells us a lot about how they are influenced by *new information* – i.e. *how we should update our beliefs and the confidence we put in them*, based on (new or preexisting) *evidence*.
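As a toy illustration (all numbers here are invented, purely for demonstration), the single-update case looks like this: a hypothesis H under which some evidence E is likely gains confidence when E is actually observed, by Bayes’ theorem alone:

```python
# Toy Bayesian update with invented numbers: how confidence in a
# hypothesis H should change after observing evidence E.
def update(prior, p_e_given_h, p_e_given_not_h):
    """Return P(H | E) via Bayes' theorem."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Start undecided (prior 0.5). H predicts E with probability 0.9,
# while under not-H the evidence would be unlikely (0.1).
posterior = update(0.5, 0.9, 0.1)
print(round(posterior, 3))  # 0.9 - observing E shifts belief strongly toward H
```

Note that if both hypotheses make E equally likely, the update changes nothing – evidence only counts insofar as the hypotheses *disagree* about it.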

The thesis Plantinga presents and defends in his talk is the following:

“I just wanna talk about evolutionary theory here. Contemporary evolutionary theory. […] I want to comment on the question whether or not that’s compatible with theistic belief. Belief in God. […] Contemporary evolutionary theory is not incompatible with theistic belief, belief in God. And I want to argue that the main anti-theistic arguments involving evolution, together with other premises, also fail. Then I want to argue thirdly, […] that there is, then, a science and religion, or science quasi-religion conflict, but it’s between naturalism and science, and not between theistic religion, Christianity, let’s say, and science.” – Alvin Plantinga (06:20)

…and we’re already off to a weird start. Naturalism (pretty much by definition) accepts *every result that the scientific method has led to* and is inherently *based on* accepting the scientific method as a reliable methodology to discover truths about the world. Conversely, the scientific method entails *pragmatic* naturalism (i.e. science excludes – not in principle, but pragmatically – anything that is not subject to empirical observation, i.e. anything *outside* of the natural world). If you look into any scientific paper, whether about physics, chemistry, biology or psychology, you won’t find *any reference to non-naturalistic forces or beings*.

It didn’t have to be that way – Newton invoked god regularly in his scientific writing. It just so happens that, over time, naturalism *has won* in science, at the *very least pragmatically*. It would be seriously problematic if one of the *universal, principal, pragmatic assumptions* used in science were *incompatible* with it, but okay, let’s just see where he’s going with that (spoiler: his evolutionary argument against naturalism, of course).

But the other problem is that merely showing that theism and science/evolution are *compatible* is **completely pointless** (and that’s what he’s going to do). *Everything* observable can be ad hoc rationalized to be compatible with theistic beliefs – I can always take one step back and proclaim “yes, god made it look like that on purpose”, and it is with that kind of reasoning that Plantinga dismisses perfectly valid arguments against (the *plausibility* of) theism, as we will see later on. But *this doesn’t get us anywhere*: mere compatibility implies nothing, it allows for no testable predictions and it doesn’t mean that theism is even remotely *plausible*. Nothing as vaguely defined as theistic belief can ever be excluded with metaphysical certainty. Neither can the existence of unicorns, Russell’s teapot orbiting earth or the flying spaghetti monster. So just showing that science and theism are not *perfectly contradictory* is an exercise in futility – it’s *trivially true*.

Here’s how Plantinga talks about “evolutionary theory”:

“Evolution covers a multitude of theses: […]

- First of all, the ancient Earth thesis, that the Earth is maybe four billion years old. Who knows exactly how old. […] The universe itself is, maybe 13 billion years old. […]
- Second, the thesis of descent with modification, where the idea is that all the vast variety of flora and fauna […] all came to be by virtue of offspring differing, ordinarily in rather small respects, from their parents. And these differences proliferate and spread out, and as a result you get this enormous variation. […]
- And then third, there’s the common ancestry thesis. The idea that if you pick […] any two living creatures […] and trace their ancestry far enough back, you’ll run into a common ancestor. […]
- And then the fourth one, Darwinism, in some ways, the most important one. […] The claim that the principal mechanism driving this process of descent with modification is natural selection winnowing or working on random genetic mutation. […] These random mutations occur periodically. Some of the, most of them are lethal, but some, some are in fact, not merely, not lethal, but adaptive.”
Alvin Plantinga (08:50)

Now, I’ll happily grant Plantinga his terms if he wants to call all of those “evolution”, but it’s worth pointing out that

- the age of the earth (let alone the universe) is not part of evolutionary theory, even if it is corroborated and entailed by what we know about evolution,
- descent with modification *itself* is not a thesis but an observable fact – only the claim that all of the biological variation is *due to* descent with modification is part of the *theory*,
- natural selection is not the *only* driver of evolutionary changes (Wikipedia lists e.g. natural selection, biased mutation, genetic drift, genetic hitchhiking and gene flow),
- most mutations are not lethal but for all practical purposes *neutral*, and mutations are not the only source of variation in the gene pool (again – Wikipedia lists e.g. mutation, sex and recombination (not surprisingly), and gene flow),
- what Plantinga calls *“Darwinism”* is either horribly outdated (if we take it exactly as he describes it) and thus irrelevant, or – augmented by the other mechanisms driving evolution – better called the *modern evolutionary synthesis*, which Wikipedia describes as *“a 20th-century synthesis of ideas from several fields of biology that provides an account of evolution which is widely accepted as the current paradigm in evolutionary biology, and reflects the consensus about how evolution works”*.

So how does Plantinga define theism/Christianity?

“When I think of Christianity, I’m thinking of something like the intersection of the great Christian creeds, […] what they have in common, […] you might call this *mere Christianity*.” – Alvin Plantinga (12:10)

Okay, so broadly speaking – Christianity is whatever all Christians (that are not at the very fringe) agree on. Now, Plantinga doesn’t actually tell us what this entails specifically, but I don’t think it would be unfair to say that this should include at least the following propositions:

- There is some thing we call “god” that has at least certain characteristics of *personhood*: an awareness, intentions, (possibly) desires and certain abilities,
- this god is separate from / outside of / above the laws of physics and the purely natural world (in fact, often claimed to be the cause of them), but is at least within certain limits capable of influencing or controlling the natural, physical world,
- this god has certain special intentions for, interests in and a particular awareness of life on earth in general and human beings in particular.

Of course, you can be a *theist* without accepting e.g. point 3, but it’s difficult for me to see how it would make sense to call yourself a *Christian* (except maybe in a cultural sense) without accepting all three of those, so I will assume Plantinga holds these views at least in some form.

“And when, and if you take a look […] at these four theses [of Evolution], the first three are pretty obviously compatible with that.” – Alvin Plantinga (12:48)

Well, yeah, *compatible* – but again, that’s *completely useless*. We can’t *disprove the existence* of something with *absolute metaphysical certainty* – that’s like *Philosophy 101*! On the other hand – **if** Christianity were true,

- the earth (and the universe) *could* be a mere 6000 years old,
- Adam and Eve *could* have been real people,
- humans and other animals would not *have* to be related,
- all species *could* have been created separately and instantaneously, *without the need* for a gradual, step-by-step process behind it.

On *naturalism*, however, there would **have to be** some kind of natural, unguided process (ultimately reducible to the laws of physics) that gives rise to the enormous biological complexity we observe today. Christianity and evolution might not be *logically inconsistent*, but we have no reason to *expect* a mechanism like evolution under theism. No Christian (as far as I know) predicted an evolutionary process based on their Christian beliefs (until Darwin came along with evidence, and then the churches were the *first* to denounce his thesis vehemently), whereas naturalists like Lamarck already proposed evolutionary explanations before Darwin, and even ancient Greek “proto-naturalist” philosophers (such as Anaximander of Miletus) proposed ideas similar to evolution way before Darwin came along – opposing philosophers like Aristotle, who repeatedly explains biological matters by invoking divine design.

Naturalism *predicts* the existence of natural, piece-by-piece processes behind any complex system, whether that’s atoms, all of chemistry, the first replicating molecules or biological creatures, up to phenomena like the human mind. This means (from a Bayesian point of view), evolution *is strong evidence for* naturalism, it *raises the probability* that naturalism is true, and hence by necessity it *reduces the probability* that *theism* is true. *That’s how evidence works*.

If theism is true, why do we find that the world looks *exactly the way it needs to look* if *naturalism* is true?

So, what about point 4 then?

“Is Darwinism incompatible with theistic religion? And a large number of people think so. A very large number of people say that it is. People both weigh to the left on the theological spectrum, and also to the right on the theological spectrum.” – Alvin Plantinga (13:40)

Yes, because for all practical purposes, “Darwinism” is again *strong evidence against theism*. We simply wouldn’t *expect* something like the modern evolutionary synthesis to flawlessly work, if theism were true. That’s why even people on the “left on the theological spectrum” have difficulty grappling with that – as does Plantinga, as we will see. He claims:

“If God, if we have in fact come to be by virtue of evolution, by way of evolution, then it would’ve been by God’s guiding this whole process. Directing it, orchestrating it, okay? So what’s not consistent with Christian belief is the claim that evolution and Darwinism are *unguided*.” – Alvin Plantinga (18:36)

This… doesn’t make *any sense*. *Darwinism* – if Plantinga means the *modern evolutionary synthesis* – *is the process that does the guiding*. That is the whole *point* of the theory.

Plantinga quotes Dawkins on this point:

“All appearances to the contrary, the only watchmaker in nature is the blind forces of physics, albeit deployed in a very special way. A true watchmaker has foresight; he designs his cogs and springs and plans their interconnections with a future purpose in his mind’s eye. Natural selection, the blind, unconscious, automatic process which Darwin discovered, and which we now know is the explanation for the existence and apparently purposeful form of all life, has no purpose in mind. It has no mind, and no mind’s eye. It does not plan for the future. It has no vision, no foresight, no sight at all. If it can be said to play the role of watchmaker in nature, it is the blind watchmaker.” – Alvin Plantinga quoting Richard Dawkins (20:24)

That’s the argument: We *know* the guiding processes of evolution and these processes can be demonstrated to occur and perfectly explain all the diversity in biology, and they are completely *not what we would expect* if there was an additional *intentional* guiding force behind them. The processes themselves have no goal, no purpose, no intention. This is *strong evidence* for naturalism and against theism.

Instead of replying to Dawkins’ point directly, Plantinga goes off on a tangent on *irreducible complexity*, the proposition put forward by creationist Michael Behe that certain features in biology are too complex and interdependent to have come about by gradual, unguided evolutionary processes like those described in the modern evolutionary synthesis. They cannot be reduced to prior developmental stages that would be necessary for an unguided evolutionary process. Classic examples of **supposedly** irreducibly complex organs are e.g. the blood clotting system, the immune system or the bacterial flagellum. And of course, naturalism predicts that those *can’t* be irreducibly complex, which means their individual components *must* have come about individually and to the advantage of the organism.

So here’s Ken Miller, a *Catholic* evolutionary biologist who’s *a firm believer in guided evolution*, showing exactly how all these systems *can* be reduced, completely dissecting all proposed examples of irreducible complexity and exposing it as the creationist propaganda that it is:

It’s the only argument people opposed to the theory of evolution have ever really had, since the scientific consensus was established: “X is too complex to have evolved”. Biologists have taken them up on their claims and time and again, they have been shown to be false. Every testable prediction the modern evolutionary synthesis makes has been confirmed. Plantinga says:

“Michael Behe has talked about irreducible complexity. […] Well, one of the things that Dawkins does is to try to show that these arguments don’t really work. And sometimes he’s reasonably successful, and other times, seems to me he’s not very successful at all.” – Alvin Plantinga (24:15)

Not very successful? Care to give examples? No? Are you actually going to invoke lousy creationist arguments and not even try to back them up?

“Another thing is he does thirdly, is to make suggestions as to how it did, in fact, happen. How it could be that these and other organic systems have developed by unguided evolution.” – Alvin Plantinga (24:40)

“Suggestions”. Well, yes, that’s all he *possibly could* do – we don’t have a gapless perfect record of the complete evolutionary ancestry of every single species, and we never will – but these suggestions are backed by *evidence*: comparative anatomy, genetics, fossils… Dawkins is not just throwing vague theses around. And *irreducible complexity* claims these features *couldn’t possibly have evolved* without intentional guidance – so a good suggestion whose feasibility is established by evidence is *enough to refute that claim*.

“But the principle form of argument for the conclusion, the basic conclusion here, is that evolution reveals a universe without design. […] We know of no other premise [than]: “We know of no irrefutable objections to its being biologically possible that all of life has come to be by way of unguided Darwinian processes”. All right? That’s the premise. Therefore, here’s the conclusion: “All of life has come to be by way of unguided Darwinian processes”. All right? If you think about that argument […] it is a horrifyingly lousy argument.” – Alvin Plantinga (25:06)

…the *fundamental principle behind the scientific method* is a *“horrifyingly lousy argument”*?

If *the only way* to defend your position is not just to point out that science isn’t perfect and doesn’t know everything with 100% certainty (which is of course totally true, albeit a pointless argument), but instead to claim that the *fundamental principle behind the scientific method* is *“horrifyingly lousy”*… I’m at a loss for words. *Why is Plantinga considered to be a serious philosopher?*

Still on Dawkins refuting intelligent design:

“It basically goes like, ‘It hasn’t been proven impossible, therefore it’s true.’” – Alvin Plantinga (26:25)

Oh, the irony…

“So for example, suppose last year I come to the Chairman of the Philosophy Department and said, ‘The President wants me to receive a $50,000 a year raise.’ Well, naturally the Chairman might want to know why I thought that was true. You know? He’d say, ‘Really? What makes you think that’s true?’ I’d say, ‘Nobody has proven it impossible.’” – Alvin Plantinga (26:30)

Yes, and if you *had actually tried* to prove it impossible, and tried *every conceivable way* (for example by just asking the president, obtaining his emails to HR etc.), you’d be *fully justified* in believing that he *does* want you to get a raise, up to the point where assuming the contrary would be *absurd*. This is what the proponents of irreducible complexity tried: disproving the modern synthesis. And they failed *every time*, as did *every other attempt* to falsify the theory – of which there have been a lot, because it actually makes *predictions that can be tested*. Therefore, the theory is most likely true, up to the point where denying it becomes *absurd*. That’s how science works, and it works that way because that’s how *probability* works.

Why do I feel like I need to explain that to a *supposedly serious philosopher*?

So next he instead tries to argue that maybe mutations aren’t actually random in a way that contradicts Christianity. Of course, under naturalism they *aren’t* random in the sense that they are perfectly determined by the laws of physics. But that’s not what we *mean* by random. What matters with respect to the question here is whether we can find any kind of *direction, intention* or otherwise *discernible pattern* in mutations. And we have found some patterns, but obviously they are due to purely chemical reasons and show no signs of divine (or other) intention. So in what sense *could* mutations be non-random such that we had a sign of intelligent guidance? How about this:

“Ernst Mayr says, “When it’s said that mutation or variation is random, the statement simply means that there is no correlation between the production of new genotypes and the adaptational needs of an organism in a given environment.” […]

[Elliott Sober] says, “There is no physical mechanism, either inside organisms or outside of them, that detects which mutations would be beneficial and then causes those mutations to occur.” […] If these genetic mutations are random in that sense, that’s perfectly compatible with their having been caused by God. So the point is that a mutation accruing to an organism is random just if neither the organism nor its environment contains such a mechanism. Okay. So, as far as I can see, the claim that evolution demonstrates that human beings and other living creatures have not, contrary to appearances, […] been designed [is] not a part or a consequence of the scientific theory of evolution just as such.” – Alvin Plantinga (28:10)

**Yes, it is.** The “scientific theory of evolution” is concerned with *any and all mechanisms guiding evolution*, none of which show *any kind of goal, intention, design or other signs of an intelligence*. And this is incompatible with theism in the sense that it is *not what we would very likely expect to see* under theism. Therefore, again, *it is strong evidence for naturalism*, because naturalism *requires* that mutations be random – in the above sense – and show no signs of intent or goal. So **why** doesn’t Plantinga think that this is incompatible with god? He doesn’t say. I assume god just guides evolution in a way indistinguishable from exactly what we would expect from the purely natural laws of physics, because, I don’t know, free will or something.

Again, yeah, you can play that game. They’re not incompatible from a strictly logical perspective. But what you’re doing then is a posteriori reasoning, rationalizing away a fact that would not have been predicted by theism and is necessarily entailed by naturalism. The probabilities *still shift massively in favor of the competing hypothesis*.
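The shift can be made vivid with the odds form of Bayes’ theorem (again with invented, purely illustrative numbers): a hypothesis that merely *permits* each observation keeps losing ground against one that *predicts* it, and independent pieces of evidence compound multiplicatively:

```python
# Odds form of Bayes' theorem, with invented numbers: H1 predicts each
# observation (P = 0.9), H2 merely permits it (P = 0.3).
def posterior_odds(prior_odds, likelihood_ratios):
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr  # each independent observation multiplies the odds
    return odds

# Start with odds 1:4 *against* H1; five observations, each favoring
# H1 by a likelihood ratio of 0.9 / 0.3 = 3.
odds = posterior_odds(0.25, [3.0] * 5)
prob_h1 = odds / (1 + odds)
print(round(prob_h1, 3))  # 0.984 - the "merely compatible" hypothesis is buried
```

Even a prior strongly favoring the “merely compatible” hypothesis gets overwhelmed after a handful of observations that only its rival actually predicts.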

And then:

“As a scientific theory, it doesn’t address such questions as whether or not evolution has been guided by God, for example.” – Alvin Plantinga (29:40)

**Yes, it does.** The theory of evolution encompasses *any and all mechanisms guiding evolution* – that is the whole *point* of the theory. If we found any kind of intentional goal or other sign of intelligence in mutation rates, that would impact the current modern evolutionary synthesis and it would have to be improved and modified accordingly (basically to include god’s intentions, whatever they might be). Irreducible complexity *really would disprove* the theory of evolution.

Now, to be fair, Plantinga points out that it’s not really a clear-cut matter which exact propositions are part of “the scientific theory of evolution” and which aren’t. But I would say that the *mechanisms guiding* evolution *certainly* should be – that’s what e.g. *natural selection* is, after all. Also, Plantinga wants to argue specifically for *Christianity* as opposed to *naturalism*. Naturalism *definitely* does address the question, and the success of evolutionary theory is *strong evidence* in its favor.

And now we’re at the point where Plantinga completely jumps the shark. When discussing the result of polls that show that roughly half of the US population does not accept evolution, Plantinga finds someone to blame:

“Well, why is this? I think the reason why this is, is that we are regularly told by the experts, Dawkins, Dennett, Gould, Simpson, Ayala, and others, that current scientific evolutionary theory asserts or implies that the living world is not designed, and that the whole evolutionary process is unguided. When the experts tell us this over and over again. The National Association of Biology Teachers, until 10 years ago, officially described evolution as, on their website, they described it as, “An unsupervised, impersonal, unpredictable, and natural process.” Unsupervised, impersonal, not supervised, not orchestrated by God or anyone else. Okay? If we’re regularly told by the experts that in fact the theory is a theory of unguided evolution, it’s no wonder that many Christians believe that. The experts are, after all, experts. And if they do believe it, then it’s not surprising that they don’t want it to be taught as a sober truth in public schools.” – Alvin Plantinga (33:15)

Evolution, according to the modern evolutionary synthesis, **is** an unsupervised, impersonal process, not orchestrated by god or anyone else. We do not see *any sign* of orchestration or supervision anywhere, and the theory **is** concerned with *exactly the processes that guide evolution*. But the insane part is that he seems to think *scientists and naturalists* are the ones who went out of their way to hammer Christians with the claim that religion and evolution are incompatible. No, they’re not – *nothing* is strictly incompatible with theism; if you want to imagine some guiding force on top of the known, natural evolutionary processes, go ahead. But the 40-odd percent of Americans doubting evolution *certainly* didn’t get the idea “religion vs. evolution” from reading Dawkins, Gould or Dennett et al. – most of these authors only became vocal about religion *precisely because* of the alarming number of creationists in the US and their attempts to remove evolution from classrooms!

It’s the *ministers* over there that preach that god created the earth in 7 days, 6000 years ago, finishing with Adam and Eve in the garden of Eden, and that therefore evolution is a lie, leads to eugenics and abortions and contradicts the bible, which is the infallible word of god. Ken Miller seems to wholeheartedly agree about where the problems lie, if you look at his above talk. He *actually examines* the claims made by creationists – and they’re not being made by the kind of Christian who would *accept a guided form of evolution*. Without creationists, I doubt Gould and Dawkins would have become as prominent as enemies of religion as they have – in fact, Gould is the one who proposed science and religion as *non-overlapping magisteria* with different domains of applicability and thus *necessarily* compatible. And most of Dawkins’ work is explicitly battling creationism and explaining evolution to people whose only knowledge about evolution comes from their ministers’ malicious misrepresentations thereof.

To put the blame on naturalists is to completely reverse the causal arrow here.

“Clearly, there are questions of justice here. Would it be just to teach in public schools positions that go contrary to the religious beliefs of most of those who pay for those schools? The answer seems to me, fairly clear. That would not be just. That would not be proper. And therefore, in a way, these people like Dawkins, Dennett, Ayola, Ayala, and the rest, who trumpet the incompatibility of current evolutionary theory with religious belief, with Christian belief, as far as that goes, with Jewish and Muslim belief as well. In a way, they’re doing science a real disservice.” – Alvin Plantinga (34:30)

So let me get this straight: if scientific findings and conclusions *contradict* commonly held religious beliefs, then we should *not* teach those findings in science classes and scientists should *not* point that out? If creationists believe in a young earth, then science class is *not supposed to teach them* the contrary? That would not be “*proper*“? Then * what’s the point of science class at all*, if not to teach science?

*How can anyone take Plantinga seriously?*

I would seriously urge Plantinga to *take some advice from his fellow Christian* Ken Miller. In the above video, he not only explains why it’s an incredibly bad idea to take evolution out of the school curriculum, he also argues beautifully for why *pragmatic naturalism* is a **necessary aspect** of the scientific method.

So we finally come to Plantinga’s actual argument against naturalism:

“I want to argue that in fact there is conflict between naturalism and evolution. […] It’s not that it’s logically impossible that they both be true, not that kind of conflict. They could both be true. It’s rather that one can’t sensibly believe them both.” – Alvin Plantinga (36:50)

Oh, so in the case of Christianity,* mere compatibility* is good enough, but in the case of naturalism we *broaden our definition* of incompatibility. Noted.

“I’m going to use the letter *N* as an abbreviation for naturalism, and *E* as an abbreviation for the thought that we human beings and all of our faculties and parts have come to be by virtue of the processes pointed out or mentioned in current evolutionary theory. And *R* is the proposition that our cognitive faculties are reliable. Here, by cognitive faculties, I just mean things like memory and perception, […] that of logical or a priori intuition, […] sympathy, whereby you can tell what somebody else is thinking and feeling. […] For them to be reliable is for them to produce in us, for the most part, true beliefs. […] They don’t have to be 100 percent reliable to be reliable, but […] as a general overall figure, maybe three out of four beliefs have to be true for the whole battery of cognitive faculties to be reliable. […]

Okay, so then, then we’ve got **premise one**: The probability of *R*, given *N* and *E*, [is] low.

The **second premise** is, if you accept *N* and *E*, and you also see that [premise] one is true, then you have a defeater for your belief in *R*, […] where a defeater for a belief is some other belief you’ll acquire, such that as long as you hold the second belief, you can no longer rationally accept the first. […]

Then the **next premise** is, one who has a defeater for *R*, for the proposition that one’s cognitive faculties are reliable, has a defeater for any belief she takes to be produced by her cognitive faculties. And of course, that would be all of her beliefs, right? One’s beliefs that are produced by one’s cognitive faculties. […] And of course, one of those beliefs is *N* and *E* itself, right?

And hence, if you believe *N* and *R*, *N* and *E*, you’ve now got a defeater for *N* and *E*, right? Well then, it looks as if *N* and *E* is self-defeating. It provides the defeater for itself. […] It is self-referentially incoherent. But the bottom line would be that it’s not rationally acceptable, you can’t rationally accept it. You can’t sensibly believe both *N* and *E*.” – Alvin Plantinga (37:50)

Now this argument has several flaws, some of which are given here on Wikipedia. The immediate responses that come to mind are:

- Our cognitive faculties *are* unreliable, and here is a list of documented, well-studied and known ways in which human cognition is typically error-prone.
- That’s why science places such a *high* emphasis on **empiricism**, **replication** and **predictions** and has such a *low* regard for *anecdotes, eye-witness accounts and memory*. The scientific method allows us to *compensate for* our cognitive shortcomings.
- The probability of **R** (being *generally* sufficiently reliable to be trusted, even if prone to errors and unless we have reason to think otherwise) specifically for our *reasoning* apparatus given **N** and **E** is still *extremely high*, for the simple reason that accuracy is its *only advantage*. If it *didn’t* allow us to draw by-and-large accurate conclusions about our surroundings (especially in predicting the behavior of others), it would confer *no advantage and wouldn’t be selected for*. If our cognitive faculties caused us to mostly come to wrong conclusions, we wouldn’t be able to survive for long. That’s why psychiatry is a thing.
- More generally – if we wanted to be *accurate*, we’d have to *decompose* **R** into a whole *list* of individual cognitive faculties, all of which have *different* probabilities of being correct, some of them rather large and some of them smaller – not surprisingly, some of our mental faculties are rather reliable (facial recognition, simple causal reasoning, intuitive physics) and some of them are hardly reliable at all (intuition for probabilities, hyperactive agency detection, …). Also not surprisingly, things like facial recognition, causal reasoning and an intuition for simple Newtonian physics are *extremely* advantageous, whereas e.g. estimating probabilities *isn’t exactly a natural thing to do* – as opposed to going with our *gut instincts*, which are tuned for survival “in the wild” and *terrible* in modern situations, e.g. when estimating your chances in gambling.
- Every imaginable hypothesis about cognitive faculties being optimized for *other* advantages (happiness, ethics, …) than *being accurate* would be *just as valid under theism* – if they’re advantageous, why *shouldn’t* a god design our mental faculties to confer *those* advantages instead of accuracy? If you want to claim god prefers accuracy above all else, why do we have all of these *stupid cognitive biases*?
- Just *pragmatically*, the argument gets us nowhere. Even assuming we can’t trust our mental faculties to (in general and in specific domains) come to the right conclusions – all we *can* do, if we want to discern truth from falsehood in any way, is to *look at what’s more probable*. And the plain fact is that our faculties *seem to work*. Even **if** the probability of **R** given **N** and **E** were *incredibly low* – it wouldn’t lead to a defeater, since no matter what epistemology we follow, *some sufficient version of* **R** *has to be true from the start*, or we can’t form *any beliefs whatsoever*. It would “only” be strong evidence against **N** and **E**.
- In particular, without **R** we couldn’t believe Plantinga’s argument in the first place. And if we accept that **R** is true, and we seem to **have to**, its probability given **N** and **E** *without* **R** as an assumption doesn’t matter anymore with respect to **R**, as long as it’s not 0 – and I *don’t* think Plantinga wants to claim *that*. If **R** is an axiom, or for other reasons true *a priori*, its probability is *exactly 1*.

So in conclusion: Plantinga’s arguments for why evolution and theism are compatible are either trivial (under *one* definition of “compatible”) or invalid/ignorant/missing the point (under another definition). He gives us no reason why theism should be *plausible* (not to mention *more plausible than naturalism*), fails to counter any one of the discussed arguments for why theism is *implausible*, and his argument for why naturalism is supposed to be incompatible with evolution doesn’t work – and applies just as well to theism. In the process he shows a tendency towards creationism and towards eliminating evolution from the school curriculum, and betrays ignorance about the scientific method and evolutionary theory.

*And yet people take him seriously. Why?*

I figured I might try going a bit more into detail, in case anyone’s interested – I’m not sure why they should be, but who knows.

The title **Bananifold** is a stupid math joke I like that started here – it was kicked off by the quote

Classification of mathematical problems as linear and nonlinear is like classification of the Universe as bananas and non-bananas

which was followed by

Let a *bananifold* be an object that locally resembles a banana.

A *manifold* is a subset of a vector space that (more or less) *locally* resembles a lower-dimensional vector space. A beautiful example is the surface of a sphere: obviously, a sphere is a three-dimensional object, but if you zoom in enough, the surface looks sufficiently two-dimensional that we can e.g. draw *maps* of parts of it. Any map of the whole earth is always inadequate in some way (which is why there are so many different ways to map the whole globe – each trading accuracy in *some* aspect for *in*accuracy in some other one), but if you just look at e.g. a single city, you can draw a map that is sufficiently accurate that it’s difficult (if not impossible) to see that it’s **not** actually flat – the curvature doesn’t matter anymore.

This is of course only an analogy – the actual definition is given via homeomorphisms (“maps”) on neighborhoods (“local areas”).
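For the curious, the formal version looks roughly like this (stated here for a topological space and $\mathbb{R}^n$, and glossing over the Hausdorff and second-countability conditions that are usually imposed as well):

```latex
% M is an n-dimensional manifold iff every point has a neighborhood
% homeomorphic to an open subset of R^n ("every point has a map"):
\forall p \in M \;\; \exists U \ni p \text{ open} \;\;
\exists V \subseteq \mathbb{R}^n \text{ open} \;\;
\exists \varphi \colon U \to V \text{ a homeomorphism}
```

The homeomorphisms $\varphi$ are the “maps”, and the neighborhoods $U$ are the “local areas” from the analogy.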

The song kicks off with the chorus, which is in 10/4 and based on the chord progression Cm – Bb – Am – Ab – Gm. Here’s the guitar riff for it:

The verse (in 7/4) is kicked off (and continuously accompanied by) the following clean guitar part. The “melody” notes are emphasized, since those will show up later again:

It starts off in Gm, but as soon as the lead guitar enters, the primary chord is an Ebj7. This is to emphasize the Lydian scale as the primary mode (I love Lydian – it’s a major scale, but it doesn’t sound as trivially happy as the “normal” major scale), shortly switching back to Gm, which basically serves as a subdominant, following an AABA structure:

The tapping part of the verse is in G major:

The bridge / solo section starts off with the clean guitar playing this:

Note how the upper notes are the exact same melody as the emphasized notes in the previous clean part, but this time played straight in 4/4. However, the rhythm guitar plays an 11/8 over it, which runs through the solo section:

I’m generally a big fan of polyrhythms, and this one might be one of my better ones… The chords (Gm – Cm) during the solo change with the 11/8, thus emphasizing the rhythm guitar, whereas the drums and clean guitar stay in 4/4, which still yields a classic rock feel (with an additional 2/4 after every five bars to resolve properly) – it should “fit” well enough not to sound too chaotic, but also not sound too predictable. I hope that worked out.
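Just to sanity-check the arithmetic of that resolution (counting everything in eighth notes, based on the bar counts described above): five bars of 4/4 plus one bar of 2/4 is exactly as long as four bars of 11/8, so both parts line up again at that point.

```python
# Lengths measured in eighth notes:
bar_11_8 = 11          # one bar of 11/8
cycle_4_4 = 5 * 8 + 4  # five bars of 4/4 (8 eighths each) plus one bar of 2/4

print(cycle_4_4)             # 44 eighth notes per cycle
print(cycle_4_4 % bar_11_8)  # 0 -> the 11/8 riff fits exactly...
print(cycle_4_4 // bar_11_8) # ...four times, so everything resynchronizes
```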

Those are the most “interesting” (for lack of a better word) parts of the song. The full tabs can be found here (gp5 format).

One of our undergrad students recently finished his B.Sc. thesis on a serious game intended to teach the player to apply maths to real-world problems (this is probably going to be worth its own blog post once the paper is publicly available somewhere), which got me thinking about other ways to teach math using video games. It occurred to me that magic systems in RPGs might be one way to do this.

Magic systems in video games may contain any of the following components:

- Magic spells need to be learned,
- they might take magic ingredients to be cast,
- they might require a specific sequence of actions to be performed during/when casting spells (gestures, magic words etc.),
- more powerful spells require more powerful ingredients that are harder to come by.

So I got the idea that it might be possible to use formal logic to come up with a new magic system for RPGs – the idea being (and these are only very superficial thoughts):

- *Magic ingredients* might correspond to *symbols* of a formal syntax,
- *spells* might correspond to *propositions* of the logic,
- *gestures / magic words* needed to cast / activate a spell might correspond to *inference rules* – a spell is *activated* by performing the right sequence of gestures / magic words, which corresponds to *proving the associated proposition*.

If one manages to spin these analogies further – i.e. give **a full RPG magic semantics to the syntax and proof theory** of some logical system – this might be a way to coerce and teach the player to actually **think mathematically without them even noticing** that’s what they’re doing. In an unrealistically ideal world, one could even imagine that giving rise to new interesting theorems and proofs (think of things like Foldit, Quantum Moves and similar crowdsourcing research games).
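As a very rough sketch of what the mechanics could look like (everything here is hypothetical – the names, the tuple encoding, and the restriction to a single inference rule are just for illustration): already-proven propositions are the learned spells, and a “gesture” applies an inference rule – here only modus ponens – to activate a new one.

```python
# Toy sketch: atoms are strings, an implication A -> B is the tuple ("->", A, B).
def implies(a, b):
    return ("->", a, b)

class SpellBook:
    """Tracks which spells (propositions) the player has already proven."""
    def __init__(self, axioms):
        self.known = set(axioms)  # learned spells / axioms

    def gesture_modus_ponens(self, a, conditional):
        """One 'gesture': from A and A -> B, activate the new spell B."""
        if (a in self.known and conditional in self.known
                and isinstance(conditional, tuple)
                and conditional[0] == "->" and conditional[1] == a):
            b = conditional[2]
            self.known.add(b)
            return b
        return None  # invalid gesture: nothing happens

# The player starts out knowing "fire" and the axiom "fire -> fireball":
book = SpellBook({"fire", implies("fire", "fireball")})
print(book.gesture_modus_ponens("fire", implies("fire", "fireball")))  # fireball
```

A real system would of course need more rules (and a richer logic), but the point is that “casting” is literally proof search.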

Progress in the game might be coupled with guiding the player to learn new spells/theorems needed to finish storyline quests. Once a spell/theorem has been activated/proven, the player would only need the required ingredients/symbols to cast it again (you don’t want to force people to prove the same thing over and over again).

The most difficult part would be (I imagine) to assign *actual effects* to the spells (i.e. theorems) in such a way that there’s an incentive to continually come up with *new* proofs/spells, and such that more advanced proofs correspond to more powerful spells. Ideally, the more *interesting* or *meaningful* a proposition in the logical system is, the more useful the corresponding spell should be – but you don’t want to do that only for predefined propositions – ideally, you want the player to be able to come up with their own as well. There certainly isn’t an *ideal* solution to that problem (it seems like an AI-complete problem to me), but there might be a partial solution that’s *“good enough”*. For example, different metrics (length of a proposition in some normal form, number of logical connectives, quantifier switches in prenex form etc.) could conceivably correspond to strengths of different effects (fire/cold damage, healing, shield, stun enemy…)
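A minimal sketch of one such metric (the encoding and the damage formula are made up purely for illustration): count the logical connectives in a proposition’s syntax tree and scale the spell’s effect with that count.

```python
def connective_count(prop):
    """Number of logical connectives in a proposition.
    Atoms are strings; compound propositions are tuples (connective, *subformulas)."""
    if isinstance(prop, str):
        return 0
    return 1 + sum(connective_count(sub) for sub in prop[1:])

def spell_power(prop, base=10):
    # Hypothetical mapping: each connective makes the spell stronger.
    return base * (1 + connective_count(prop))

print(spell_power("fire"))                                    # 10
print(spell_power(("->", ("and", "fire", "wind"), "storm")))  # 30: two connectives
```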

So, I’ve started an Authorea document here to collect thoughts, ideas etc. Do you have something to contribute? Are there already projects in that direction? Please share any thoughts and ideas directly there. Maybe we can actually manage to come up with a useful system.
