An exchange on Twitter yesterday led me to this post by Paco Jariego summarising this post by Jay Stanley of the ACLU, in which Stanley discusses the concept of a “Gödel’s incompleteness theorem for the law,” and the problems that this creates when embedding legal principles in Internet of Things devices:

no matter how detailed a set of rules is laid out, no matter how comprehensive the attempt to deal with every contingency, in the real world circumstances will arise that will break that ruleset. Applied to such circumstances the rules will be indeterminate and/or self-contradictory.

Stanley makes some good points on how the law cannot be treated like an algorithm, and how human judgement is always going to be necessary – judgement that is hard to build into the type of automated, embedded systems that will increasingly surround us. But is this really the same thing as a “Gödel’s incompleteness theorem for the law”?

Gödel’s incompleteness theorem is one of those concepts that gets widely used as a metaphor – along with those other triumphs of early 20th century mathematics and physics, the theory of relativity and quantum mechanics (Schrödinger’s Cat etc.). In its metaphorical use, Gödel’s incompleteness theorem is usually taken to mean that there are areas of knowledge that are necessarily “fuzzy” (“indeterminate and/or self-contradictory”). However, that misunderstands the significance of what Gödel was saying.

What Gödel’s incompleteness theorem actually asserts is subtly different. An earlier post by Paco Jariego quotes Wikipedia’s summary:

For any self-consistent recursive axiomatic system powerful enough to describe the arithmetic of the natural numbers (for example Peano arithmetic), there are true propositions about the naturals that cannot be proved from the axioms.

In other words, what Gödel’s theorem states is that for any such axiomatic system – consistent, and powerful enough to describe the arithmetic of the natural numbers – there are propositions that are *true*, but cannot be *proved* from the axioms. The point that is often overlooked is this: a “Gödel proposition” is not “fuzzy”, or “indeterminate and/or self-contradictory”; it is *true*. In fact, if you can show that a proposition is a Gödel proposition then you have proved that it is *true* – you just haven’t proved it from the axioms you started with.

Another subtlety of Gödel’s result is that you can’t plug the gap in your axiomatic system by appending your Gödel proposition as an extra axiom. Gödel’s theorem will continue to apply to *that* system, so that there must be another proposition which cannot be proved from the expanded set of axioms.

So, what would a “Gödel’s incompleteness theorem for the law” actually look like? It wouldn’t simply mean a legal proposition that is “indeterminate and/or self-contradictory”; it would have to mean a legal proposition that is (in some appropriate sense) *true*, but which cannot be proved from the existing “axioms” of the law.

What are the “axioms” of the law? In English law, we might take this to mean statute law together with the fundamental principles of common law. A “theorem” of the law would then be “proved” when a court applies the existing “axioms” to produce new case law applicable to the circumstances before it. (The point made above about appending propositions to the axioms means we needn’t be too precise in drawing the line between “axioms” and “theorems”, though.)

What Jay Stanley’s post argues (correctly, in my view), is that the above analogy is deeply flawed, because the law *isn’t* an axiomatic system in which “theorems” are “proved” from “axioms” by specific rules of inference. It is a more organic system, in which human judgement is always necessary.

But if we keep running with the analogy for now, then we are left with the question of what would be a “true” legal proposition that cannot be “proved” from the axioms (i.e. existing case law and statutes).

The best example that comes to mind is what we might call “textbook law”: the legal principles that have never actually been litigated (and thus never actually the subject of a definitive judgment), but which are widely accepted on the authority of leading legal textbooks. “Chitty on Contracts says this…” – that type of thing. Although even then, arguably a closer analogy would be to unproved mathematical *conjectures* (such as Fermat’s Last Theorem was, before Andrew Wiles ruined everyone’s fun by proving it).

But if anyone has any better ideas, then the comment box is open…

**Edit:** a tweet from Jay Stanley reminds me that another example I considered was the role of equity, whose historical roots can be seen as lying in “filling in the gaps” left by the common law. However, judges applying equity today – even Lord Denning at his most, ah, “creative” – at least *pretend* to be following established principles and precedent; to be “proving” a “theorem” from existing “axioms”, to use the analogy discussed above, rather than simply discovering new legal “truths”.

Hi John

First of all, thank you for your post. I’m glad to see someone interested in this debate.

Second, as you rightly note, the comparison with Gödel is clearly metaphorical, not formal. I am not a lawyer myself, but to the best of my knowledge there is no point in a formal comparison with law, given that our laws do not emanate from an axiomatic system. Furthermore, Jay is not using “law” to mean part of a legal body, but in the sense of “a general relation proved or assumed to hold between mathematical or logical expressions”. When computers and machines enter the equation, under our current computing paradigm, there is a stronger basis for pursuing the comparison further; but even in that case I don’t think Jay intends any formal comparison, and neither do I – just making it clear that we must be cautious about the implicit assumptions we make about the behaviour of computer-based systems. We can very easily run into trouble. There are many examples of problematic (paradoxical) use cases, not necessarily Gödel propositions.

Third, regarding your comments on Gödel’s theorem itself: an undecidable proposition is one that is neither provable nor refutable within the given axiomatic system. If you add the proposition as a new axiom, then of course, as you say, the system will continue to contain other undecidable propositions. My remark about adding new axioms to the system does not imply that I think this is the way to close the gap; rather, it would be a way to evolve the system. And yes, it seems to me also similar (metaphorically) to how the legal system expands through new court decisions.

Sorry for the poetic license.

Thanks for your comment! I completely appreciated the poetic licence that both you and Jay were taking: I was just seizing the opportunity to pick up the analogy and run with it a bit further – and to share what I think is the really interesting and exciting thing about Gödel’s theorem, which is not that there are parts of reality that are fuzzy and uncertain (true though that is), but that there are things which are solidly *true* but which can’t be proved from the rules of the system which begat them. That, perhaps, is where the true analogy between maths/logic and the law can be found: that both are refreshingly open-ended and impossible to summarise finally and entirely by a single set of rules – as that wonderful quotation from Freeman Dyson in the second of your posts which I linked expresses so well.

following the above lines, i am still somewhat astonished by the “fear” of crossing disciplines and combining different insights. for me, interdisciplinary thought experiments/“denkspiele” mark one of the differences between mechanical thinking and freedom of thought, which is a great human invention. following the tradition of the beginning of the 20th century, i tend to think that it is contradiction that brings new insight, not the lack of it.

in this regard i like the line of luitzen brouwer, who stated somewhere/somehow that freedom of thought means more than following the construction of a given rule set. it also means questioning the rule set and feeling free to add to it.

this is one reason why i am triggered by this discussion as i take it as a nice exercise in exactly that: the freedom to combine, to pick one line, but not necessarily all.

however, i used the trigger to go back to the archives and papers collected by the gödel society. my thoughts are influenced, the interpretations are my own.

as we know already from lawrence lessig: “code is law”. so i personally don’t make that distinction between different rule sets any longer. more importantly, i am eager to learn how to make progress in spite of laws, which i like to understand as a set of agreements within a given living society.

gödel brings the insight that derivability is not as strong as hilbert wanted it to be. neither law nor the arithmetic of the natural numbers can be seen as an end in itself: there is a formula which is true, but whose derivability cannot be proved within the system itself. whether that is what makes a formula true is imho a different story – nobody can prove it. however, i found an interesting hint on how to deal with that insight, as there is much more to be found in the work of gödel in order to question mechanical thinking. thanks to this discussion, i found this interesting source: a script written by the gödel society for a tv show about gödel. (google books: kurt gödel und die mathematische logik, engl. translation starts at book page 111, spanish at p. 160)

in the script, peter weibel and werner depauli-schimanovich set a pointer to the question: is it possible to write a program which can check the correctness of an arbitrary program without itself ending in an infinite loop? gödel’s answer is no. ergo, the human mind can’t be mechanised. turing, they argue, took gödel’s problem with formal systems and replaced provability with computability.

in the script they quote robin gandy: “gödel had shown that a particular form of this problem (about what can be done by routines) could not be done by routines. and i think that was the starting point, that suggested to turing one should be able to characterise what can be done and then to show that there are these things that can’t be done.”
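The program-checking question above is essentially Turing’s halting problem, and the diagonal argument behind the “no” answer can be sketched in a few lines of Python. This is only a toy illustration of the idea; the names `make_diagonal` and the lambda “deciders” are my own, not from the script being quoted:

```python
def make_diagonal(halts):
    """Given any claimed total halting-decider, build a program it misjudges."""
    def diagonal():
        if halts(diagonal):
            while True:      # decider said "halts" -> loop forever
                pass
        # decider said "never halts" -> halt immediately
    return diagonal

# No decider survives the diagonal:
always_yes = make_diagonal(lambda program: True)   # claims every program halts
always_no = make_diagonal(lambda program: False)   # claims no program halts

# always_no() halts immediately, contradicting its decider's "never halts";
# always_yes() would loop forever, contradicting its decider's "halts".
always_no()
```

Whatever verdict the decider gives about the diagonal program, the program does the opposite – which is why no general, always-terminating correctness checker can exist.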

i think that in the current debate about what technology and automation/sensors can or should not do, we have lost this insight from the early days of computing: that humans are capable to think bigger than words or rule sets allow. yet this implies that thoughts are not just there to be taken and implemented in a program, but can be used to grasp what should not be done.

there has been a tendency in computing over the last couple of years to re-invent ideas that have already been here since the 1950s, at least. (mechanical thinking as a desirable goal has been with us much longer.) the difference: i don’t assume that back in the 1950s the ideas were discussed in order to re-install the rule set of the middle ages within society, but to build something new. today we should be able to judge the early enthusiasm: the world of computers is not a perfect world. it is not even provably rigid, yet today it tends to ignore new combinations which could be seen as new possibilities/extensions for society. instead of making use of recursive functions, we ignore in both worlds – law (i.e. civil rights, geneva conventions) and computing (useful extensions) – what we have achieved since then, in regard to differences in mindsets.

Thanks for this. Your point about how “humans are capable to think bigger than words or rule sets allow” reminds me of Roger Penrose’s argument in The Emperor’s New Mind, which I’m currently re-reading (the emphasis on Gödel propositions as being *true* is one I learnt from Penrose): that what we experience as “insight” shows that the human mind cannot simply be algorithmic “software” running on the “hardware” of our brains; something else must be going on. But as you observe, mechanistic/computational metaphors for human thought remain very strongly embedded in discussions on these issues, despite Gödel and Turing having shown their limitations before electronic computers even existed.

the argument that a system’s freedom from inconsistency can’t be shown within the system itself is an interesting one. it reminds me of the old theme: who controls the controller? instead of tackling this problem, shadow courts and ministries acting outside an accepted rule set are installed. once upon a time defined for secret services, that appears today as the most desirable thing for bureaucrats and businesses (trade secrets defined as intellectual property), and as the answer to all problems for a certain class. however, this appears to be inconsistent with the system of a modern society or state after the 20th century. invalid assumption? or is my system so different from theirs that i might be able to deliver the proof of inconsistency?

The gist of Gödel’s theorem is that there is a statement which essentially says “I’m unprovable”.

This is actually quite simple to transfer to law. (Later mathematical systems were built to be resistant to self-reference, and it took some serious effort to force it; in particular, Gödel’s incompleteness theorem doesn’t apply to just a single formal system, but to *every* formal system that can encode arithmetic.) Take the statement A: “A is not the conclusion of any valid legal argument.” This is a true statement about our legal system, and yet our legal system is not capable of proving it. If A were referred to in a contract, the court could not rely on the fact that it was true.

This is where the “human” element comes in: The judge can simply throw out any case involving statement A.
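The self-referential trick in statement A can be sketched with a deliberately crude Python model, in which a “legal system” is nothing more than a fixed set of derivable conclusions. The `axioms` set and `provable` helper here are hypothetical illustrations of mine, not anything from the comment above:

```python
# Toy model: a "legal system" is a fixed set of derivable conclusions,
# and "proving" a statement just means the system derives it.
axioms = {
    "contracts require consideration",
    "theft is an offence",
}

def provable(statement, system=axioms):
    """A conclusion is 'provable' iff the system derives it."""
    return statement in system

A = "A is not the conclusion of any valid legal argument"

# A is TRUE of this system: the system does not derive it...
print(provable(A))                 # False -> A's claim about itself holds
# ...but if we try to close the gap by adding A as an axiom, A becomes
# derivable and therefore FALSE -- the self-reference bites back.
print(provable(A, axioms | {A}))   # True -> A is now derivable, hence false
```

The second call shows why, as discussed earlier in the post, the gap cannot be plugged simply by appending the troublesome proposition as a new axiom: in this toy version, doing so makes the proposition false rather than provable-and-true.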