Knights, knaves, and Program Construction

Posted on November 11, 2010 by Tommy McGuire
Labels: theory, software development, notation, toy problems, books, Dijkstra, math, programming language
I have a confession to make: I was raised in the land of Dijkstra. Although I have posted frequently about exploring Haskell and other functional languages, the lambda calculus is comparatively new to me. Denotational semantics, category theory, generalized algebraic data types: these are all foreign to the approach I first learned, what I learned to call axiomatic semantics. So, when I finally dragged out a copy of Roland Backhouse's Program Construction: Calculating Implementations from Specifications and started reading (I am currently up to chapter five), I was brought back to the long-lost days of my youth, days of Phil. 313k, CS 336, Ham Richards, David Gries, ....

Anyway, chapter five is on calculational logic, using the properties of equality and negation to calculate the logical consequences of propositions. What I found especially entertaining is Backhouse's use of examples from Raymond Smullyan's knights and knaves puzzles to illustrate the principles. The cool part is that so many of the puzzles are so easy to solve using so simple a box of tools: negation, equality, and equality's properties of transitivity, symmetry, and associativity.

For example


Briefly, these puzzles involve you as a traveler on an island inhabited only by knights, who only tell the truth, and knaves, who only tell lies. Consider the following question from To Mock a Mockingbird:

Question 1: Is it possible for any inhabitant of this island to claim that he is a knave?

Backhouse points out, "Smullyan's proofs invariably involve detailed case analysis." Case analysis is a very useful technique (in both logic and programming), but it quickly becomes complex and tedious, which makes it the less attractive alternative here. Instead, Backhouse proposes something like the following approach (the example here is mine; don't blame Backhouse):
\[
\begin{array}{lll}
& A \equiv \neg A & \\
\equiv & & \{ \textrm{Def. negation} \}\\
& A \equiv A \equiv \textbf{false} & \\
\equiv & & \{ \textrm{Assoc., def. const. true} \}\\
& \textbf{true} \equiv \textbf{false} & \\
\equiv & & \{ \textrm{Simpl.} \}\\
& \textbf{false} &
\end{array}
\]

The initial formalization of the question begins with a native named A; the logical proposition \(A\) stands for "A is a knight", and the formalization of any statement by A begins \(A \equiv \ldots\), because the entire statement will be true if and only if A is a knight. Therefore, the original equation can be read as "A is a knight if and only if A is a knave", or as "The statement 'A is a knave' is true if and only if A is a knight." The final statement, \(\textbf{false}\), indicates that no inhabitant of this island can claim he is a knave. In Smullyan's reasoning, that statement would be a lie coming from a knight and the truth coming from a knave, and neither of those is possible.
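Since the point of all this is reasoning about programs, the question can also be settled by brute force. Here is a small Haskell sketch of my own (the name claimsKnave and the encoding of "is a knight" as True are my choices, not Backhouse's):

    -- Encode "X is a knight" as True and "X is a knave" as False.
    -- claimsKnave a formalizes the equation above: the claim "I am a
    -- knave" holds exactly when the speaker is a knight.
    claimsKnave :: Bool -> Bool
    claimsKnave a = a == not a

    -- No type of inhabitant satisfies the equation:
    -- filter claimsKnave [False, True]  ==>  []

The empty result is the calculational \(\textbf{false}\): neither a knight nor a knave can make the claim.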

(There is a small stack of essentially identical operators here: equality \(=\), equivalence \(\equiv\), and bi-implication \(\leftrightarrow\) (a.k.a. "if and only if", or just "iff"). The first two indicate the same thing, that the thing on the left is the same as the thing on the right, but are kept separate because equality applies to any two things: 4, true, a puppy, ..., while equivalence is limited to boolean values. This prevents confusion in statements like \(4 = 4 = \textbf{true}\), which would mean something completely different if parenthesized as \(4 = (4 = \textbf{true})\). "Iff" has the same logical meaning as equivalence, but sometimes reads more easily.)
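To see the associativity concretely, here is another little check of mine, using Haskell's (==) on Bool as the equivalence operator (Haskell's fixity rules force the parentheses, but the two groupings always agree):

    -- Equivalence on booleans is associative: both groupings of
    -- p == q == r give the same answer under every assignment.
    equivAssoc :: Bool
    equivAssoc = and [ ((p == q) == r) == (p == (q == r))
                     | p <- [False, True]
                     , q <- [False, True]
                     , r <- [False, True] ]
    -- equivAssoc  ==>  True

(And, echoing the point about types, an expression like 4 == (4 == True) does not even type-check in Haskell, since both sides of (==) must have the same type.)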

Here is another example:

Question 4: [Suppose an inhabitant A says about himself and his brother B]: "Exactly one of us is a knave." What type is A and what type is B?

I would formalize the calculation as:
\[
\begin{array}{lll}
& A \equiv A \not\equiv B & \\
\equiv & & \{ \textrm{Def. not equiv.} \}\\
& A \equiv A \equiv \neg B & \\
\equiv & & \{ \textrm{Def. negation} \}\\
& A \equiv A \equiv B \equiv \textbf{false} & \\
\equiv & & \{ \textrm{Def. const. true} \}\\
& \textbf{true} \equiv B \equiv \textbf{false} & \\
\equiv & & \{ \textrm{Assoc., simpl.} \}\\
& B \equiv \textbf{false} & \\
\equiv & & \{ \textrm{Def. negation} \}\\
& \neg B &
\end{array}
\]

The answer to the question is that B is a knave, but the statement gives no information about A.
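The same brute-force check confirms this calculation; again the encoding (True for "is a knight") and the formalization a == (a /= b) of "exactly one of us is a knave" are my own sketch, not Backhouse's.

    -- Keep only the (A, B) assignments consistent with A's statement.
    -- (/=) on Bool plays the role of "not equivalent": exactly one of
    -- the brothers is a knave precisely when their types differ.
    question4 :: [(Bool, Bool)]
    question4 = [ (a, b) | a <- [False, True]
                         , b <- [False, True]
                         , a == (a /= b) ]
    -- question4  ==>  [(False,False),(True,False)]

B is False (a knave) in every consistent assignment, while A shows up once as a knave and once as a knight, matching the calculated \(\neg B\).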

The astute reader will notice that I skipped a few questions there. I had to because, after a good bit of thought, I have no idea how to formalize "...to claim that he and his brother are both knaves" or "at least one of us is a knave" using only equivalence and negation. On the other hand, they can easily be formalized using conjunction and disjunction. I propose the following reason for my difficulties: it is impossible to express all logical propositions using equivalence and negation. In particular, it is not possible to express the proposition \(p \wedge q\).

Define a symmetric proposition to be one where the truth table for the proposition has an even number of true values, where the truth table has one row for each possible combination of truth assignments to the variables of the proposition. Define an asymmetric proposition to be one with an odd number of true values.

Considered over the two variables \(p\) and \(q\), the propositions \(p\), \(q\), \(\textbf{true}\), and \(\textbf{false}\) are all symmetric, and both the negation of a symmetric proposition and the equivalence of two symmetric propositions are again symmetric; so anything built from equivalence and negation alone is symmetric.

However, \(p \wedge q\) is asymmetric; it is only true in the one case where both \(p\) and \(q\) are true. An asymmetric proposition cannot be built from only symmetric components. (This argument could be formalized by induction on the structure of propositions.)
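The parity argument can also be played with mechanically. Here is a sketch of my own (an illustration, not the induction proof): a tiny proposition type over \(p\) and \(q\), an evaluator, and a count of the rows of the two-variable truth table on which a proposition is true.

    -- Propositions over the two variables p and q.
    data Prop = P | Q | T | F
              | Not Prop
              | Equiv Prop Prop
              | And Prop Prop

    -- Evaluate a proposition given truth values for p and q.
    eval :: Bool -> Bool -> Prop -> Bool
    eval p q prop = case prop of
      P         -> p
      Q         -> q
      T         -> True
      F         -> False
      Not x     -> not (eval p q x)
      Equiv x y -> eval p q x == eval p q y
      And x y   -> eval p q x && eval p q y

    -- Count the rows of the four-row truth table on which the
    -- proposition is true.
    trueRows :: Prop -> Int
    trueRows prop =
      length [ () | p <- [False, True], q <- [False, True], eval p q prop ]

    -- trueRows (Equiv P (Not Q))  ==>  2   (even: symmetric)
    -- trueRows (Not (Equiv P Q))  ==>  2   (even: symmetric)
    -- trueRows (And P Q)          ==>  1   (odd: asymmetric)

In this little language, every proposition that avoids And comes out even, which is exactly the symmetry claim above; only propositions that use And (or anything that needs it, such as disjunction) can come out odd.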

This matters...why?


This stuff, besides being an opportunity to play with MathJax, is valuable for a number of reasons. On the one hand, programs are proofs. Quite beyond the Curry-Howard correspondence, programs and formal proofs have a great deal in common, including the amount of detail required and the advantages of abstraction. Further, and more specifically, axiomatic semantics is fundamentally how I understand programs, particularly imperative programs in languages such as Java. (Axiomatic semantics is somewhat less useful in Haskell or Prolog, which probably accounts for the difficulties I have had with them over the years.)

On the other hand, the ability to reason closely is important to a programmer. When I started, I did not know that conjunction could not be expressed using equivalence and negation; given the proof sketch I gave above, I am still not sure. I would bet that someone has proved it one way or another, but I am not familiar with that proof and could not easily find it. However, I have convinced myself, and have done so using basically the same techniques I used when I was fooling with a weird string fiddling problem in Java last weekend.

Expect more faffing about with proofs soon.