Propositions Are Types, Proofs Are Programs
In the 1930s, Haskell Curry noticed something strange. He was working on combinatory logic, a system for manipulating abstract functions, and he realized that the rules governing his combinators looked identical to the rules of a logical system called intuitionistic propositional logic. It was as if he’d found two different maps of the same territory.
Three decades later, William Howard found the same thing in a richer setting. He showed that the simply typed lambda calculus, the foundation of functional programming, corresponds precisely to natural deduction, a standard system of logical proof. Every type corresponds to a proposition. Every program corresponds to a proof. Every function corresponds to an implication.
This is the Curry-Howard correspondence, and it is not a metaphor or a loose analogy.
# What Is Mathematics?
Mathematics looks like calculation from the outside: multiply these numbers, solve for x, compute an integral. But calculation is to mathematics what typing is to programming: a basic skill, not the thing itself.
Mathematics is the study of what must be true given a set of starting assumptions. You pick your axioms, the statements you accept without proof (like “zero is a natural number” or “every natural number has a successor”). Then you apply rules of inference to derive new statements. If you follow the rules exactly, your conclusions are guaranteed to be true within that system. Not probably true. Not true according to experiment. Logically, necessarily true.
A proof is what makes this work. A proof is a chain of steps, each one justified by a rule, starting from axioms and ending at the thing you want to show. Think of it like a board game. You don’t argue about whether a chess move is legal. Either the piece can go there or it can’t. A proof is a complete record of legal moves from the starting position to the final claim.
This has a remarkable consequence: proofs are mechanical. You don’t need intuition or genius to check a proof. You just need to verify that each step follows from the rules. A machine could do it. In fact, that’s exactly what interactive theorem provers do. Systems like Lean, Coq, and Agda let you write mathematical proofs as code. If the code compiles, the proof is valid. If it doesn’t, you made a mistake, and the system tells you where.
These tools are not toys. Lean’s math library has over 200,000 formally verified mathematical declarations. Coq was used to build CompCert, a verified C compiler used in aerospace. The seL4 microkernel, verified in Isabelle/HOL, runs in military helicopters. HACL*, a verified cryptographic library, ships in Firefox and Linux.
- Propositions are statements that are either true or false: “1 + 1 = 2”, “every even number greater than 2 is the sum of two primes”
- Proofs are step-by-step derivations that a proposition is true
- Axioms are the starting propositions you accept without proof
- Rules of inference tell you how to derive new truths from existing ones (like: if you know A and you know “A implies B,” you can conclude B)
- A proof is valid when every step follows from the rules, and checking that is mechanical
Mathematics is a rule-following game precise enough for a machine to verify.
# What Is a Type, Really?
Most programmers first meet types as labels: `int`, `string`, `bool`, `List<User>`. That is useful, but it hides the interesting part. A type is not just a category. It is a claim about what values are allowed and what operations make sense on them.
When you write `x: int`, you are making a claim: “`x` will always be an integer.” The compiler holds you to it. A type is a constraint the compiler verifies before your code runs.
Simple types express simple constraints: `int` means “this is an integer.” More complex types express more complex constraints:
- `List<String>` means “a list where every element is a string”
- `(String) -> Int` means “a function that takes a string and returns an integer”
- In Rust, `&'a str` means “a string reference that is guaranteed to be valid for lifetime `'a`”
The compiler checks all of these before the program runs. If the constraints do not hold, the code will not compile.
Every time the compiler accepts your code, it has proven something. When Rust accepts a program that uses no `unsafe` blocks, it has proven there are no dangling pointers, no data races, no use-after-free bugs. These are real theorems about your program’s behavior, verified at compile time. The Rust compiler is a specialized theorem prover that does not call itself one.
But not every type system can express every claim. Rust’s types can talk about memory ownership and lifetimes, but they can’t state “every even number greater than 2 is the sum of two primes.” TypeScript’s types can talk about object shapes, but not about sorting correctness. These languages expose fragments of a deeper idea: types are machine-checked claims. Proof assistants like Lean take that idea all the way, so the claims can talk about mathematics itself, not just strings, lists, or lifetimes.
Behind this sits a formal model of computation, just like logic sits behind mathematical proof. The one that matters most here is the lambda calculus: variables, functions, and function application. When you write ordinary code, you are already working in descendants of that tradition. Curry-Howard matters because it connects that world of typed programs to the world of formal proofs.
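The whole calculus fits in a few lines of everyday code. A sketch in Python (variable names invented for illustration):

```python
# The three ingredients of the lambda calculus, in ordinary Python:
identity = lambda x: x        # abstraction: build a function
apply = lambda f, a: f(a)     # application: use a function
result = apply(identity, 42)  # variables, abstraction, application: that's all
print(result)
```

Everything else in a functional language is built from these three moves.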
# Two Formal Worlds
So we have two formal worlds:
| | Mathematics | Computation |
|---|---|---|
| Objects | Propositions | Types |
| Evidence | Proofs | Programs (values) |
| Rules | Rules of inference | Typing rules |
| Checking | Proof verification | Type checking |
| Foundation | Formal logic | Lambda calculus |
Both are rule-following games. Both have precise foundations from the 1930s. Both can be checked mechanically.
Curry-Howard says they are the same game.
# The Correspondence
The table above is the high-level view. Curry-Howard makes each row precise and mechanical:
| Logic | Programming |
|---|---|
| A statement that might be true or false | A type |
| A proof that a statement is true | A value of that type |
| “If A then B” | A function from type A to type B |
| “Both A and B” | A pair (A, B) |
| “Either A or B” | A tagged union / sum type |
| “For all x, P(x) is true” | A generic function that works for any x |
| “False” (a contradiction) | A type with no values |
This is not logic borrowing programming syntax, or programming borrowing logical notation. It’s a structural equivalence: the rules that make proofs valid and the rules that make programs type-check are the same rules, discovered independently in two different fields.
## Statements Are Types
The math statement “1 + 1 = 2” corresponds to a specific type. If you can create a value of that type by following the rules, the statement is true. If no such value can exist, the statement is false.
In a proof assistant like Lean, you’d write the type as `1 + 1 = 2` directly. In the tiny prover we’ll build in the next article, it will look more explicit and more mechanical. The notation differs. The idea is the same: the proposition is the type.
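In Lean 4, for instance, the statement and its proof fit on one line (a sketch; `rfl` proves equalities that hold by computation):

```lean
-- The proposition "1 + 1 = 2" is a type; `rfl` is a value of that type.
example : 1 + 1 = 2 := rfl
```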
## Proofs Are Values
In programming, you prove a type “exists” by constructing a value of it. `42` proves that the type `Int` is inhabited. `[1, 2, 3]` proves that `List<Int>` is inhabited. Similarly, if you can construct a value of a proposition-type using only legitimate rules, you’ve proven the proposition.
This is why the correspondence is exact, not approximate. A proof isn’t a story about why something is true. It’s a concrete object, a term that the type checker can inspect, decompose, and verify mechanically.
## Functions Are Implications
A function from type A to type B says: “give me evidence of A, and I’ll produce evidence of B.” That’s exactly what “if A then B” means in logic. The function body is the derivation.
If you have a function `f: A -> B` and you have a value `a: A`, then `f(a)` gives you a value of type `B`. In logic: if you know “A implies B” and you know “A,” then you know “B.” That’s modus ponens, one of the oldest rules of inference, and it’s literally function application.
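Here is that reading made concrete in Python, with two hypothetical propositions modeled as empty classes (the names are invented for illustration):

```python
class ItRained:
    """Proposition A: evidence that it rained."""

class GroundIsWet:
    """Proposition B: evidence that the ground is wet."""

# "A implies B" is a function from A to B; the body is the derivation.
def rain_wets_ground(evidence: ItRained) -> GroundIsWet:
    return GroundIsWet()

# Modus ponens is function application: from A -> B and A, conclude B.
proof_of_a = ItRained()
proof_of_b = rain_wets_ground(proof_of_a)  # a value of type GroundIsWet
```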
## Pairs Are Conjunctions
A pair (a, b) where a: A and b: B is evidence that both A and B hold. To construct it, you need evidence of each. To use it, you can project out either component. This matches exactly how conjunction (logical “and”) works: to prove “A and B,” prove both; given “A and B,” you can conclude either.
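A Python sketch, with the component types fixed to `int` and `str` for concreteness:

```python
from typing import Tuple

# To prove "A and B", construct the pair: evidence for both is required.
def and_intro(a: int, b: str) -> Tuple[int, str]:
    return (a, b)

# Given "A and B", conclude "A" (or symmetrically "B") by projection.
def and_elim_left(p: Tuple[int, str]) -> int:
    return p[0]

both = and_intro(7, "evidence")
left = and_elim_left(both)
```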
## Sum Types Are Disjunctions
A tagged union (like Rust’s enum or Haskell’s Either) that holds either a value of type A or a value of type B is evidence that at least one of A or B holds. To construct it, you need evidence of one. To use it, you must handle both cases. This is logical “or”: to prove “A or B,” prove one of them; given “A or B,” reason by cases.
## Generic Functions Are Universal Statements
“For all n, P(n) is true” just means: write a function that takes any n and returns a value of type P(n). If the function compiles for all inputs, you’ve proven the universal statement. No special quantifier machinery needed. It’s just generic programming, the same thing you do when you write a function that works on any list regardless of element type.
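The simplest universal statement, “for all A, A implies A,” is proven by the generic identity function. A Python sketch:

```python
from typing import TypeVar

T = TypeVar("T")

# "For all types T, T implies T": one definition, proven for every T at once.
def identity(x: T) -> T:
    return x

at_int = identity(3)        # instantiated at int
at_str = identity("hello")  # instantiated at str
```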
## False Is an Empty Type
A proposition that can never be proven corresponds to a type with no values. You can write down the type (you can state the proposition), but you can never construct a value of it (you can never prove it). “Not P” then means “a function from P to the empty type”: if assuming P lets you produce a value of an uninhabited type, something went wrong, and P must be false. This is proof by contradiction encoded as a function.
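A Python sketch of an uninhabited type (a hypothetical `Void` class; Python can only enforce emptiness at runtime, where a proof assistant enforces it statically):

```python
class Void:
    """An empty type: no value of this type can ever be constructed."""
    def __init__(self) -> None:
        raise TypeError("Void has no values")

class P:
    """Some proposition P, left abstract here."""

# "Not P" is a function from P to Void: if evidence of P could yield a
# value of an uninhabited type, P must have been false all along.
def not_p(evidence: P) -> Void:
    # Unreachable in a sound system; in Python it simply raises at runtime.
    return Void()
```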
# Why This Matters
## Compilers Are Already Proving Theorems
This means type-checking, the thing compilers already do when they reject `"hello" + 3`, is a form of proof verification. When Rust accepts your code, it has proven memory safety. When Haskell type-checks code with GADTs, it has proven that your invariants hold. When a verified smart contract passes its type checker, properties like “this function never transfers more tokens than the balance” are established at compile time.
The mechanism is the same across all these systems. The difference is scope. Rust proves a fixed set of memory properties. Lean proves whatever you ask it to.
## Proof Assistants Are Programming Languages
Lean, Coq, and Agda are not separate from the programming world. They are programming languages where the type system is expressive enough to state arbitrary mathematical propositions. “Prove this theorem” means “construct a value of this type.” The tools, tactics, and automation are all about making that construction easier, but underneath, it’s the same activity as writing a program that type-checks.
## The Boundary Between Programming and Mathematics Is Artificial
The distinction between programming and proving is artificial. A sorting function that returns a value of type “a sorted list plus a proof that it’s a permutation of the input” is doing both at once: computing and proving. Dependently typed languages like Lean, Agda, and Idris let you write programs that carry their correctness proofs with them.
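Python cannot carry such a proof in its types, but a runtime sketch shows the shape of the idea: the function returns the sorted list together with checkable evidence (here a mere boolean standing in for the static proof object a dependently typed language would demand):

```python
from collections import Counter
from typing import List, Tuple

def sort_with_evidence(xs: List[int]) -> Tuple[List[int], bool]:
    """Return a sorted copy of xs plus evidence that the result is
    ordered and a permutation of the input."""
    ys = sorted(xs)
    in_order = all(a <= b for a, b in zip(ys, ys[1:]))
    is_permutation = Counter(ys) == Counter(xs)
    return ys, in_order and is_permutation

result, evidence = sort_with_evidence([3, 1, 2])
```

In Lean or Idris, the boolean would instead be a proof term checked once at compile time, and the checks would cost nothing at runtime.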
# Scope and Limits
The correspondence is exact in the right formal systems (the simply typed lambda calculus corresponds to intuitionistic propositional logic, System F corresponds to second-order propositional logic, and so on). But real programming languages are messy. Most have features that break the correspondence:
- General recursion. If a language lets you write infinite loops, you can “prove” anything: a non-terminating function of type `A -> B` technically has the right type but produces no evidence. Proof assistants require termination to keep the logic sound.
- Exceptions and effects. Throwing an exception from inside a function of type `A -> B` means you never actually produce a `B`. Side effects similarly weaken the logical interpretation.
- Unsound escape hatches. Rust’s `unsafe`, Haskell’s `unsafePerformIO`, TypeScript’s `any` all break the correspondence by letting you bypass the type checker.
This is why proof assistants are strict: no general recursion without termination proofs, no uncontrolled effects, no escape hatches. The restrictions that make them harder to program in are exactly the restrictions that keep the logic sound.
Everyday programming languages expose fragments of Curry-Howard. Proof assistants are what you get when you take it all the way.
# Where to Go From Here
The next article builds a tiny theorem prover from scratch in Python. You’ll see the exact architecture: terms, types, a checker, and a trust boundary you can point at with one finger. After that, the Julia article shows the same ideas embedded inside a host language’s type system, where the compiler itself becomes the proof checker.
# Further Reading
- Fede’s Guide to Type Systems. If you want more background on how type systems work in practice, from generics to dependent types, before diving into the proofs.
- “Propositions as Types” by Philip Wadler. The best overview of the correspondence, its history, and why it keeps showing up. Available free online.
- “Types and Programming Languages” by Benjamin Pierce. The standard textbook. Chapters 9-11 cover the simply typed lambda calculus; chapter 30 covers the correspondence.
- “The Little Typer” by Friedman and Christiansen. A gentle, Socratic introduction to dependent types, where Curry-Howard becomes the central design principle.
- “Software Foundations” (free online). An interactive, proof-based introduction using Coq.