This post is primarily directed at students in the AI class at University Erlangen to explain the basics, and in particular the formalism we use in the lectures, of abstract logic.

In its (not actually, but for our purposes) most abstract form, a logical system consists of three things:

  1. A language \(\mathbb L\), i.e. a set of formulae,
  2. a class \(\mathbb M\) of models, which we use to assign a truth value to a formula, and finally
  3. a logical entailment relation \(\models\) between models and formulae.

We don’t demand anything more specific. The formulae and models may look however we want them to; as long as we have a relation \(\models\) between models and formulae, we have a logical system. There’s nothing that forbids us from defining \(\mathbb L:=\mathbb N\), \(\mathbb M:=\mathbb N\) and \(n\models m\) if \(n\) is divisible by \(m\). It’s pointless as a logical system, but it is a logical system.

We interpret \(M\models \varphi\) to mean the formula \(\varphi\) is true in the model \(M\) – whatever we mean by that. If a formula \(\varphi\) is true in every model, we call \(\varphi\) a tautology (or valid) and write \(\models\varphi\). In the pointless logical system above, a tautology would be a number that divides every other number, so the only tautology is \(1\). If there is some model \(M\) with \(M\models\varphi\), we say \(\varphi\) is satisfiable. In the pointless system, every number (except \(0\)) is satisfiable, since every multiple of it is a model in which that number is “true” – furthermore, since \(0\) is divisible by every other number, we have \(0\models n\) for every natural number \(n\) except \(0\). On the other hand, there is no number that is divisible by \(0\), so \(0\) is (as a “formula”) not satisfiable. Hey, this isn’t actually as pointless as I thought…
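
To make the toy system concrete, here is a minimal Python sketch of it (the function name and the `m != 0` guard, which encodes the convention above that nothing is divisible by \(0\), are my own choices):

```python
def models(n: int, m: int) -> bool:
    """n |= m  iff  the "model" n is divisible by the "formula" m."""
    return m != 0 and n % m == 0

# 1 is a tautology: every model satisfies it (checked here up to a bound).
assert all(models(n, 1) for n in range(100))

# The model 0 satisfies every nonzero formula, but the formula 0 is unsatisfiable.
assert models(0, 7)
assert not models(5, 0)
```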

Let’s look at propositional logic to see how this formalism works in practice: We start with some set of propositional variables \(V= \{ A_0 , A_1 , A_2 , \ldots \}\).

\(\mathbb L\) is simply the set of all propositional formulae, i.e. the set defined by the following rules:

  1. Every variable \(A_i \in V\) is a formula, i.e. \(V\subset\mathbb L\).
  2. If \(\varphi\in\mathbb L\) is a formula, then \(\neg\varphi\in\mathbb L\).
  3. If \(\varphi_1,\varphi_2\in\mathbb L\) are formulae, then so are \((\varphi_1\wedge\varphi_2)\), \((\varphi_1\vee\varphi_2)\), \((\varphi_1\to\varphi_2)\) and \((\varphi_1\leftrightarrow\varphi_2)\).

The way we assign truth values to propositional formulae is via assignments: Functions \(f: V \to \{ 0 , 1 \}\) that assign to each propositional variable either \(1\) (true) or \(0\) (false). We can then extend an assignment \(f\) to a function \(F:\mathbb L\to \{ 0 , 1 \}\) on all formulae inductively over the recursive definition of formulae:

  1. For every variable \(A_i\in V\), we let \(F(A_i) = f(A_i)\).
  2. For any formula \(\varphi\in\mathbb L\), we let \(F(\neg\varphi)=1\) if and only if \(F(\varphi)=0\).
  3. For any formulae \(\varphi_1,\varphi_2\in\mathbb L\), we let \(F((\varphi_1\wedge\varphi_2))=1\) if and only if \(F(\varphi_1)=1\) and \(F(\varphi_2)=1\).
    We let \(F((\varphi_1\vee\varphi_2))=1\) if and only if either \(F(\varphi_1)=1\) or \(F(\varphi_2)=1\) (or both).
    We let \(F((\varphi_1\to\varphi_2))=1\) if and only if \(F(\varphi_1)=0\) or \(F(\varphi_2)=1\) (or both).
    And finally, we let \(F((\varphi_1\leftrightarrow\varphi_2))=1\) if and only if \(F(\varphi_1)=F(\varphi_2)\).
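
The inductive clauses above translate directly into a recursive evaluator. A sketch in Python, with formulae encoded as nested tuples (this encoding and the names are my own, not part of the lecture formalism):

```python
# A variable is a string; compound formulae are ("not", phi),
# ("and", p, q), ("or", p, q), ("imp", p, q), ("iff", p, q).
def F(f, phi):
    """Extend the assignment f (a dict: variable -> 0/1) to all formulae."""
    if isinstance(phi, str):              # case 1: a propositional variable
        return f[phi]
    if phi[0] == "not":                   # case 2: negation flips the value
        return 1 - F(f, phi[1])
    a, b = F(f, phi[1]), F(f, phi[2])     # case 3: binary connectives
    return {"and": a & b,
            "or":  a | b,
            "imp": (1 - a) | b,
            "iff": int(a == b)}[phi[0]]

f = {"A0": 1, "A1": 1}
assert F(f, ("and", "A0", "A1")) == 1
```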

Since we can extend every assignment (on \(V\)) to all formulae this way, we don’t actually need to distinguish between assignments and their extensions – once we assign \(0\) or \(1\) to each variable, the above rules tell us exactly which value to assign to any arbitrary formula. Now, our set of models \(\mathbb M\) is simply the set of assignments, and the relation \(\models\) we define by letting \(f \models \varphi\) if and only if \(f(\varphi)=1\). So if we have an assignment \(f\) with \(f(A_0)=1\) and \(f(A_1)=1\), we have \(f\models A_0\), \(f\models A_1\), \(f\models (A_0\wedge A_1)\)… Furthermore, we can immediately see that e.g. \((A_0\vee\neg A_0)\) is a tautology – for every model (i.e. assignment) \(f\) we have \(f\models (A_0\vee\neg A_0)\). Why? Because either \(f(A_0)=1\) (and thus by definition of the extension we have \(f((A_0\vee\neg A_0))=1\)) or \(f(A_0)=0\) and thus by definition of our extension \(f(\neg A_0)=1\) and hence \(f((A_0\vee\neg A_0))=1\).
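
Since a formula over finitely many variables only has finitely many relevant assignments, tautology checking can be brute-forced. A small sketch, encoding formulae as Python functions from assignments to truth values (an encoding chosen here purely for brevity):

```python
from itertools import product

def is_tautology(phi, variables):
    """Check phi (a function: assignment dict -> 0/1) under every assignment."""
    return all(phi(dict(zip(variables, values)))
               for values in product([0, 1], repeat=len(variables)))

# (A0 v ~A0) holds under both assignments of A0:
excluded_middle = lambda f: f["A0"] | (1 - f["A0"])
assert is_tautology(excluded_middle, ["A0"])

# A0 alone is satisfiable but not a tautology:
assert not is_tautology(lambda f: f["A0"], ["A0"])
```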

It should be noted that what a model is depends heavily on the specific logic. In propositional logic, models are just assignments; in first-order logic, however, models are structures with a universe \(U\) and various functions and relations on \(U\) (depending on the signature of the language); in modal logic, they are sets of worlds with a visibility relation between them… Models can get seriously weird, and they get weirder with the expressive strength of the language. This is why the definition of a logical system is so broad – we want to be able to express all logics, no matter how weird, in this framework – even a nonsensical, pointless logic like the one with natural numbers.

By the semantics of a logic we mean its models and the logical entailment relation. On the other hand we have the syntax of a logic, meaning how formulae are constructed. Syntax tells us what a (“grammatically” correct) formula is, semantics tells us what a formula means (by telling us when/where it is true). The point of a proof calculus is to try and reduce the truth of a formula purely to its syntax. Before we rigorously define what a calculus (and what a proof) is, note that proofs always start with assumptions, i.e. statements, i.e. formulae. I prove a statement (i.e. formula) \(\varphi\) by stating things that I know (or for the sake of the proof assume) to be true, logically derive various statements (i.e. formulae) from these assumptions and (if I succeed) end with a derivation of the formula \(\varphi\) I set out to prove. All of this is – in a sense – just things-I-do-with-formulae, hence purely syntactical. So let’s reflect this in the following definitions:

Let \(\mathcal H\subset\mathbb L\) be a set of formulae. We say \(M\in\mathbb M\) is a model of \(\mathcal H\) (and write \(M\models\mathcal H\)) if and only if \(M\models\psi\) for every \(\psi\in\mathcal H\). We say \(\mathcal H\) entails a formula \(\varphi\) (and write \(\mathcal H\models\varphi\)) if \(\varphi\) is true in every model of \(\mathcal H\).

This way, we can talk about the truth of a formula relative to a set of axioms/assumptions \(\mathcal H\). Again, we call a set of formulae satisfiable if it has a model; i.e. if there is a model in which every formula in the set is true.

Let’s look at a couple of examples again: In propositional logic, we have e.g. \(\{A_0, A_1\}\models(A_0\wedge A_1)\) – after all, every assignment that satisfies both \(A_0\) and \(A_1\) also has to satisfy \((A_0\wedge A_1)\). Also, the set \(\{ A_0,A_1\}\) is satisfiable, as any assignment with \(f(A_0)=1\) and \(f(A_1)=1\) proves. An example for a non-satisfiable set would be \(\{ A_0,\neg A_0\}\) – even though both formulae on their own are satisfiable, no assignment can set both \(A_0\) and \(\neg A_0\) to \(1\).
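
Both of these example checks can be brute-forced over all assignments. A sketch, encoding formulae as Python functions of the assignment (an encoding of my own choosing):

```python
from itertools import product

def assignments(variables):
    for values in product([0, 1], repeat=len(variables)):
        yield dict(zip(variables, values))

def entails(H, phi, variables):
    """H |= phi: phi holds in every assignment satisfying all of H."""
    return all(phi(f) for f in assignments(variables)
               if all(psi(f) for psi in H))

A0 = lambda f: f["A0"]
A1 = lambda f: f["A1"]

# {A0, A1} |= (A0 /\ A1):
assert entails([A0, A1], lambda f: A0(f) & A1(f), ["A0", "A1"])

# {A0, ~A0} is unsatisfiable: no assignment satisfies both.
assert not any(A0(f) and (1 - A0(f)) for f in assignments(["A0"]))
```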

For our pointless divisibility logic, we already know that \(0\) is not satisfiable. As a result, any set of numbers containing \(0\) also is not satisfiable. But we also know that every number (except \(0\)) divides \(0\), which means \(0\) is a model for every set of natural numbers (not containing \(0\)) – which makes every set that doesn’t contain \(0\) satisfiable. This is boring, so let’s get rid of \(0\) as a model by letting \(\mathbb M:=\mathbb N\setminus \{ 0 \}\). Now only finite(!) sets of numbers that don’t contain \(0\) are satisfiable. To prove that, we take an arbitrary finite set of numbers \(\mathcal H=\{ n_0,\ldots,n_k\}\) that doesn’t contain \(0\). We now need to find a model for this set, i.e. a number \(m\) that divides every one of the \(n_i\). But that’s easy: We just multiply them together, so we have \(\prod_{i=0}^kn_i \models \mathcal H\), hence \(\mathcal H\) is satisfiable. On the other hand, infinite sets of numbers can’t be satisfiable, since infinite sets of natural numbers are unbounded, i.e. contain arbitrarily large numbers. A number divisible by all of them would have to be larger than any natural number (or \(0\), which we got rid of). So the satisfiable sets are now exactly the finite sets not containing \(0\).
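
The product construction from the proof above is easy to check mechanically; a small sketch (the function name is mine):

```python
from math import prod

def find_model(H):
    """For a finite set H of nonzero numbers, the product of its elements
    is divisible by every element of H, hence a model of H."""
    m = prod(H)
    assert all(m % n == 0 for n in H)
    return m

assert find_model({5, 6}) == 30
```

Any common multiple of \(\mathcal H\) would do; the product is just the easiest one to write down.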

Of course a statement \(\mathcal H\models \varphi\) still belongs squarely in the realm of semantics even if it doesn’t explicitly mention models anymore – whether the statement is true or not still depends on the models, after all. So what we want is a second relation \(\vdash\) that is definable completely without models, purely on the basis of syntax, but happens to express (at least approximately) the same thing; i.e. ideally we want \(\mathcal H\vdash \varphi\) to be true if and only if \(\mathcal H\models \varphi\). So let’s use proof calculi to define such a relation:

A proof calculus \(\mathcal C\) for a language \(\mathbb L\) consists of a set \(\mathbb A\subseteq\mathbb L\) of \(\mathcal C\)-axioms and a set \(\mathbb I\) of inference rules that allow us to derive new formulae from ones already derived. For a set of formulae \(\mathcal H\) we say a formula \(\varphi\) is \(\mathcal C\)-provable in \(\mathcal H\), and write \(\mathcal H\vdash_{\mathcal C}\varphi\), if and only if one of the following cases applies (again a recursive definition):

  1. \(\varphi\in\mathcal H\) (i.e. \(\varphi\) is an assumption/axiom), or
  2. \(\varphi\in\mathbb A\) (i.e. \(\varphi\) is a \(\mathcal C\)-axiom of the calculus), or
  3. there are formulae \(\psi_1,\ldots,\psi_n\) with \(\mathcal H \vdash_{\mathcal C}\psi_i\) (for every \(i\)) and an inference rule \(I\in\mathbb I\) such that \(I(\psi_1,\ldots,\psi_n)=\varphi\) (i.e. \(\varphi\) is derivable from already proven formulae via some inference rule).
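
One way to realize this recursive definition mechanically is forward chaining: start from cases 1 and 2 and repeatedly close under the inference rules (case 3). A bounded sketch in Python – the encoding of rules as arity/function pairs is my own simplification, and rules whose premises are themselves provability statements do not fit this simple shape:

```python
from itertools import product

def derivable(hypotheses, axioms, rules, rounds=3):
    """Everything provable in at most `rounds` rule applications.
    Cases 1 and 2: assumptions and calculus axioms are derivable outright.
    Case 3: apply each inference rule to already-derived formulae."""
    derived = set(hypotheses) | set(axioms)
    for _ in range(rounds):
        new = set()
        for arity, rule in rules:   # rule: premise tuple -> conclusion or None
            for premises in product(derived, repeat=arity):
                conclusion = rule(*premises)
                if conclusion is not None:
                    new.add(conclusion)
        derived |= new
    return derived

# Toy run: conjunction introduction over string formulae.
conj = (2, lambda p, q: f"({p}&{q})")
assert "(A&B)" in derivable({"A", "B"}, set(), [conj], rounds=1)
```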

We write \(\vdash_{\mathcal C}\varphi\) short for \(\emptyset\vdash_{\mathcal C}\varphi\), i.e. if \(\varphi\) is provable in \(\mathcal C\) without any additional axioms. Note that a calculus is defined directly on the formulae, purely syntactically. Our definition doesn’t mention models anywhere. Now, given some proof calculus \(\mathcal C\), what matters to us (and what we are interested in) is how that calculus relates to the semantics of our logical system – i.e. how similar \(\models\) and \(\vdash_{\mathcal C}\) are. The most obvious property a calculus should have in that regard is correctness:

A calculus \(\mathcal C\) is called correct or sound if and only if whenever \(\mathcal H\vdash_{\mathcal C}\varphi\) (for some set of formulae \(\mathcal H\) and some formula \(\varphi\)) we have \(\mathcal H\models\varphi\). In other words: if everything that is provable in \(\mathcal C\) is also true.

In other words again: If a calculus isn’t sound, we can prove things that aren’t true. In that case it is completely useless.

Let’s again look at propositional logic as an example. Without committing to a specific calculus, it should be clear that the axioms of any calculus should be tautologies (e.g. \(A\to A\) or \(A \vee\neg A\)…): since they are always provable, no matter the assumptions, they need to be always true if we want our calculus to be sound – hence tautologies. Inference rules are usually written in the following style:

\[\dfrac{\psi_1,\quad\ldots,\quad\psi_n}{\varphi}\]
…meaning the inference rule maps the formulas above the bar to the formula below the bar. We can read this as “If \(\psi_1,\ldots,\psi_n\) are all provable, then I can also derive/prove \(\varphi\)”. Typical inference rules are e.g.:

  - Modus Ponens / Implication-Elimination: \(\dfrac{(\varphi_1\to\varphi_2),\quad \varphi_1}{\varphi_2}\)
  - Conjunction Introduction: \(\dfrac{\varphi_1,\quad \varphi_2}{(\varphi_1\wedge\varphi_2)}\)
  - Implication Introduction: \(\dfrac{ \{ \varphi_1 \} \vdash_{\mathcal C} \varphi_2 }{(\varphi_1\to\varphi_2)}\)

The last one here is interesting: Whereas in the other examples, the premises (i.e. the stuff above the bar) are all just formulae, in the implication introduction the premise is itself a demand that some formula is provable from some other one.

The converse to soundness is completeness:

A calculus \(\mathcal C\) is called complete if and only if whenever \(\mathcal H\models\varphi\) (for some set of formulae \(\mathcal H\) and some formula \(\varphi\)) we have \(\mathcal H\vdash_{\mathcal C}\varphi\). In other words: if everything that is true is also provable in \(\mathcal C\).

Whereas soundness is not just (as good as) required, but also often quite easy to ensure (or prove), completeness is a much more difficult property to achieve (or prove). Let’s take our pointless divisibility logic as an example and try to come up with a sound and complete calculus for it:

A set of axioms/assumptions is just a set of numbers \(\mathcal H=\{ n_1,\ldots,n_i \}\) (not necessarily finite, but for this example let’s assume it is). A “formula” (i.e. number) \(k\) is entailed by \(\mathcal H\) if any “model” (i.e. number) \(m\) that is divisible by all the \(n_i\) is also divisible by \(k\). As we already mentioned, \(1\) is the only “tautology” (since it divides every number, i.e. is “true” in every “model”), so we can take that as the only axiom of a calculus: i.e. \(\mathbb A = \{ 1 \}\). And we already have a sound calculus: The only provable “formulae” (i.e. numbers) so far are \(1\) and the ones we assume, so everything we can prove in this calculus also happens to be true. But it’s certainly not complete, since e.g. \(\{ 5,6 \} \models 10\) – every number that is divisible by \(5\) and \(6\) is also divisible by \(10\) after all. But the only “formulae” that are provable from the set \(\{ 5,6 \}\) are \(5\) and \(6\) themselves, and \(1\). So if we assume \(5\) and \(6\), then \(10\) is true but not provable.

We already know that \(1\) is the only possible axiom of our calculus – since axioms are always provable (no matter the assumptions), they need to be tautologies (assuming we want to stay correct!), and \(1\) is the only tautology in our pointless logical system. So the only way we can extend our calculus to something more useful is to add inference rules.

I’ll choose the following: if \(n\) is divisible by \(m\), then \(n+m\) is also divisible by \(m\). We can express that as \(\dfrac{ \{ n \}\vdash_{\mathcal C}m,\quad n+m}{m}\), which tells us: “If every number that is divisible by \(n\) is also divisible by \(m\), and we know/assume that some number is divisible by \(n+m\), then it is also divisible by \(m\)”. With this inference rule we can e.g. prove that \(\{6\}\vdash_{\mathcal C}2\):

We trivially have \(\{2\}\vdash_{\mathcal C} 2\). With this and the fact that \(4=2+2\), our inference rule lets us infer \(\{4\}\vdash_{\mathcal C} 2\). And from this and the fact that \(6=4+2\) we can infer \(\{6\}\vdash_{\mathcal C} 2\). Similarly we can show that \(\{6\}\vdash_{\mathcal C} 3\). So this new inference rule allows us to infer all divisors of our axioms/assumptions. That’s great, and since the rule is obviously correct (it only lets us infer true formulae from true premises), our calculus with this new inference rule is also still sound. But it’s still not complete: as before, I still cannot infer that \(\{ 5,6 \} \vdash_{\mathcal C} 10\) – this new rule only lets us infer numbers that are smaller than the premises, and \(10\) is larger than both \(5\) and \(6\).

For completeness we need one more rule: The fact that if a number is divisible by two numbers \(n,m\), then it is also divisible by the least common multiple \(\textsf{lcm}(n,m)\) of those two numbers, i.e.:

\[\dfrac{n,\quad m}{\textsf{lcm}(n,m)}\]

…and with this additional rule we can finally prove \(\{ 5,6 \} \vdash_{\mathcal C} 10\) (since \(\{6\}\vdash_{\mathcal C}2\) and \(\textsf{lcm}(5,2)=10\)). In fact, with this additional rule, the calculus is finally complete (proof left as an exercise 😉 ).
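
As a sanity check, derivability in the finished calculus can be computed via a semantic shortcut: assuming the completeness claim holds, the provable numbers are exactly the divisors of the least common multiple of the assumptions. A sketch (this shortcut stands in for an actual derivation search):

```python
from functools import reduce
from math import lcm

def provable(H, k):
    """H |-_C k: the lcm rule combines assumptions into their least common
    multiple, and the n = m + (n - m) rule descends to every divisor; so,
    granting completeness, k is provable iff k divides lcm(H u {1})."""
    return reduce(lcm, H, 1) % k == 0

assert provable({5, 6}, 10)               # the example from the text
assert provable({6}, 2) and provable({6}, 3)
assert not provable({5, 6}, 7)            # 7 does not divide lcm(5, 6) = 30
```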