Strong Normalization of the Dependently Typed SK Combinators in Lean
In this repository, I define a dependent typing for the SK combinators, and prove strong normalization of this combinator calculus in Lean.
See the book for more.
Table of Contents
I aim for this repository to be relatively accessible to anyone who knows Lean or functional programming. As such, I assume little theoretical background knowledge. My hope is that this book gives you a good understanding of what strong normalization is, how it's proven, and how I've proven it for the dependently typed SK combinators--all in Lean.
- Background - provides an overview on typing, lambda calculus, and strong normalization
- Existing Work
- Type Discipline - in-depth technical details of how I represent dependent typings for the SK combinators
- Strong Normalization Proof - the proof
- Dependent SK - example demonstrating that these combinators are dependent
Typed and Untyped Lambda Calculus
Untyped Lambda Calculus
Expressions
The lambda calculus is a widely-known model of computation based on variable substitution within anonymous functions.
A lambda expression is defined by the grammar:
$$ e ::= (\lambda x.e)\ |\ e\ e\ |\ x $$
\(\lambda x.e\) represents an anonymous function with body \(e\) and a bound variable \(x\). \(e\ e\) represents function application. \(x\) represents a variable.
A variable is said to be "bound" if a corresponding binder in a lambda abstraction is in scope. A variable is said to be free if no such binder exists.
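For example, in \(\lambda x.x\ y\), the variable \(x\) is bound by the enclosing binder \(\lambda x\), while \(y\) is free.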
We can encode a lambda expression in Lean like so:
import Mathlib.Tactic
inductive Expr where
| var : String → Expr
| app : Expr → Expr → Expr
| abstraction : String → Expr → Expr
deriving Repr
Evaluation
An expression can be evaluated using the "beta reduction" rule:
$$ ((\lambda x.M)\ N) =_{\beta} M[x := N] $$
This rule denotes that a function application is "beta equivalent" to the body of the function being called, with \(N\) substituted in for every instance of \(x\) within \(M\).
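For example:
$$ ((\lambda x.x\ x)\ y) =_{\beta} y\ y $$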
This system of computation is powerful, and known to be Turing-complete. However, expressing infinitely reducible expressions is quite easy:
$$ (\lambda x.x\ x)(\lambda x.x\ x) $$
In Lean, such potentially non-terminating computation is not allowed. Let us demonstrate this by defining substitution and evaluation for our AST:
def substitute (for_var : String) (in_expr : Expr) (with_expr : Expr) : Expr :=
match in_expr with
| .var v =>
if v == for_var then
with_expr
else
.var v
| .app lhs rhs =>
let lhs' := substitute for_var lhs with_expr
let rhs' := substitute for_var rhs with_expr
.app lhs' rhs'
| .abstraction v body =>
let body' := (substitute for_var body with_expr)
.abstraction v body'
#eval substitute "x" (Expr.abstraction "x" (Expr.var "x")) (Expr.var "y")
-- => Expr.abstraction "x" (Expr.var "y")
Our substitution function appears to work as expected.
However, we will run into issues with substitution in functions where variable names collide, as the example below shows. De Bruijn indices provide a natural solution to this problem.
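For instance, substituting y for x in \(\lambda y.x\) should not let the binder capture the new y, but our naive function does exactly that:
#eval substitute "x" (Expr.abstraction "y" (Expr.var "x")) (Expr.var "y")
-- => Expr.abstraction "y" (Expr.var "y")
-- The substituted y is now (incorrectly) bound by the inner binder.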
De Bruijn Indices
De Bruijn indices avoid variable-name collisions by referring to bound variables by their position relative to the binder. For example, the identity function:
$$ (\lambda 1) e =_{\beta} e $$
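More generally, each variable is written as the number of binders between it and the \(\lambda\) that introduces it (counting from 1). For example, \(\lambda x.\lambda y.x\) becomes \(\lambda\lambda 2\).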
We can revise our Lean definition to utilize De Bruijn indices like so:
inductive Expr' where
| var : ℕ → Expr'
| app : Expr' → Expr' → Expr'
| abstraction : Expr' → Expr'
deriving Repr, BEq, DecidableEq
We will also have to update substitute to reflect this change.
A simple algorithm for variable substitution in De Bruijn-indexed expressions increments a "depth" counter at each binder until it reaches a variable with \(n = \text{depth}\), then replaces that variable. However, free variables need to be accounted for: for every level deeper in the tree at which the substituted expression is inserted, its free variables must be incremented by 1.
-- Shifts free variable de bruijn indices by the specified amount
-- each layer deeper, the shift becomes larger
def with_indices_plus (in_expr : Expr') (shift_by : ℕ) : Expr' :=
match in_expr with
| .abstraction body =>
.abstraction (with_indices_plus body shift_by.succ)
| .var n =>
if n > shift_by then
.var (n + shift_by)
else
.var n
| .app lhs rhs =>
.app (with_indices_plus lhs shift_by) (with_indices_plus rhs shift_by)
def substitute' (in_expr : Expr') (n : ℕ) (with_expr : Expr') : Expr' :=
match in_expr with
| .abstraction body =>
.abstraction (substitute' body n.succ with_expr)
| .var n' => if n == n' then with_indices_plus with_expr n else .var n'
| x => x
Evaluation with Unfounded Recursion
We may now attempt to write an evaluation function. Evaluation should continually perform substitution until no function applications remain at the outermost layer of the expression.
partial def eval (e : Expr') : Expr' :=
match e with
| .app (.abstraction body) rhs =>
eval (substitute' body 0 rhs)
| .app lhs rhs =>
eval (.app (eval lhs) rhs)
| x => x
Unfortunately, this will not compile without the keyword "partial" in Lean, as it could potentially run forever (i.e., its recursion cannot be shown to be well-founded). As a result, proving properties about this function is impossible in Lean, as Lean cannot reason about partial functions in a manner consistent with its type system.
Simply-Typed Lambda Calculus
The simply-typed lambda calculus prevents infinitely reducing expressions from being expressed by assigning types to expressions:
$$ \tau ::= \tau \rightarrow \tau\ |\ T $$
where \(T\) is the type of a constant value (a "base type"), and \(\alpha \rightarrow \beta\) is the type of a function with a bound variable of type \(\alpha\) and a body of type \(\beta\). We can then type the bound variable of a lambda abstraction like so:
$$ (\lambda x : \text{Nat}.x + 1) $$
We can then encode typed lambda expressions in Lean like so:
inductive Base where
| int : Base
| nat : Base
deriving DecidableEq, BEq, Repr
inductive Ty where
| base : Base → Ty
| arrow : Ty → Ty → Ty
deriving DecidableEq, BEq, Repr
open Base
open Ty
For example, the type of the function \((\lambda x : \text{Nat}.x + 1)\) would be Nat → Nat, or:
#check arrow (base nat) (base nat)
In order to prevent ill-formed expressions from being constructed, we can define typing judgement rules that verify the correctness of a typing.
We will follow these judgement rules:
- If a variable is bound by an abstraction with a binder of type \(\alpha\), then the variable is of type \(\alpha\).
- If a constant \(c\) is assigned base type \(T\), then it is of type \(T\).
- If a lambda abstraction's binder is of type \(t\) and its body of type \(t'\), it is of type \(t \rightarrow t'\).
- In an application \(e_{1}\ e_{2}\), the left hand side expression must be of a type of the form \(\alpha \rightarrow \beta\). The application argument \(e_{2}\) must have type \(\alpha\). The entire expression is of type \(\beta\).
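For example, under these rules, if \(f\) has type \(\text{Nat} \rightarrow \text{Int}\) and \(x\) has type \(\text{Nat}\), then the application \(f\ x\) has type \(\text{Int}\).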
We will also define a mapping from constant values to types:
inductive Cnst where
| num : ℕ → Cnst
| int : ℤ → Cnst
deriving DecidableEq, BEq, Repr
def type_of_cnst : Cnst → Base
| .num _ => .nat
| .int _ => .int
We will extend Expr to include our constant values and binder (\(x : \alpha\)) typings.
inductive Expr'' where
| cnst : Cnst → Expr''
| var : ℕ → Expr''
| app : Expr'' → Expr'' → Expr''
| abstraction : Ty → Expr'' → Expr''
deriving BEq, Repr, DecidableEq
In order to represent the types of variables, we will maintain a stack of types of the nearest abstraction binders (e.g., \(\lambda x : \alpha.x\)). We can determine the type of a variable by looking up its type in this "context."
-- Indexed relative to de bruijn index of variable being type-checked
abbrev Ctx := List Ty
def type_of (ctx : Ctx) : Expr'' → Option Ty
| .cnst c => some (.base (type_of_cnst c))
-- Expression variables are 1-indexed, the context list is 0-indexed
| .var (n + 1) => ctx[n]?
| .var _ => none
-- Lhs must eventually evaluate to an abstraction
-- and rhs must eventually evaluate to the type of the bound variable in lhs
| .app lhs rhs => do
let t_lhs ← type_of ctx lhs
let t_rhs ← type_of ctx rhs
match t_lhs with
| arrow in_t out_t =>
if in_t != t_rhs then
none
else
pure (out_t)
| _ => none
| .abstraction bind_ty body => do
let ctx' := bind_ty :: ctx
some (.arrow bind_ty (← type_of ctx' body))
Let's try out our type inference function on a few functions:
- Identity function for nats: \(\lambda x : \mathbb{N}.x\)
#eval type_of [] (.abstraction (.base .nat) (.var 1))
-- => some (Ty.arrow (Ty.base (Base.nat)) (Ty.base (Base.nat)))
- Application of identity function with a nat: \((\lambda x : \mathbb{N}.x) 1\) => 1
#eval type_of [] (.app (.abstraction (.base .nat) (.var 1)) (.cnst (.num 1)))
-- => some (Ty.base (Base.nat))
Nice.
Now that our expressions are typed, we may be able to rewrite our eval function in a verifiably terminating way that Lean accepts. In doing so, we must prove strong normalization of our simply-typed lambda calculus.
In the next chapter, I elaborate more on strong normalization.
Strong Normalization
In the previous chapter, we saw that:
- Lambda expressions can be evaluated using the "beta-reduction" rule
- Two lambda expressions can be said to be "beta-equivalent" (\(=_{\beta}\)) if they are syntactically equivalent after some number of beta-reductions
- Some untyped expressions can potentially evaluate forever, which Lean doesn't like, and which prevents us from proving properties of our language, and programs written in our language.
Termination: Constantly Decreasing Measure
In order to convince Lean that our evaluation function does not run forever, we need to provide a constantly-decreasing measure whose value eventually causes termination. For example, we could provide a "steps" argument to our evaluation function which dictates a maximum bound on the number of steps in computing the expression:
import SkLean.Lc
import Mathlib.Tactic
First, we will extend our substitution function to our typed Expr'':
def with_indices_plus' (in_expr : Expr'') (shift_by : ℕ) : Expr'' :=
match in_expr with
| .abstraction bind_ty body =>
.abstraction bind_ty (with_indices_plus' body shift_by.succ)
| .var n =>
if n > shift_by then
.var (n + shift_by)
else
.var n
| .app lhs rhs =>
.app (with_indices_plus' lhs shift_by) (with_indices_plus' rhs shift_by)
| x => x
def substitute'' (in_expr : Expr'') (n : ℕ) (with_expr : Expr'') : Expr'' :=
match in_expr with
| .abstraction bind_ty body =>
.abstraction bind_ty (substitute'' body n.succ with_expr)
| .var n' => if n == n' then with_indices_plus' with_expr n else .var n'
| x => x
We will naively terminate once we run out of steps:
def eval_once (e : Expr'') : Expr'' :=
match e with
| .app (.abstraction _ body) rhs =>
substitute'' body 0 rhs
| .app lhs rhs =>
.app (eval_once lhs) rhs
| x => x
def eval''' (e : Expr'') (n : ℕ) : Option Expr'' :=
match n with
| .zero => none
| .succ .zero => eval_once e
| .succ n => eval''' (eval_once e) n
Assume that evaluation of a given expression \(e\) terminates. This is only true if there is some value for steps which can be provided to eval'''. Thus, we can say that:
lemma exists_steps_terminates (terminates : Expr'' → Prop) :
∀ e, terminates e → ∃ n_steps, (eval''' e n_steps).isSome := sorry
We can define "termination," or more strictly, "strong normalization" inductively as such:
inductive Sn : Expr'' → Prop
| trivial e : eval_once e = e → Sn e
| hard e : Sn (eval_once e) → Sn e
If the expression evaluates to itself after one step of evaluation, execution is complete, and it is strongly normalizing. In the recursive case, if its one-step reduction is strongly normalizing, the expression is also strongly normalizing.
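As a quick sanity check (an illustrative example, in the same unfold/simp style used in the proofs below), a constant is strongly normalizing via the base case:
example : Sn (.cnst (.num 1)) := by
  apply Sn.trivial
  unfold eval_once
  simp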
Given this definition, we can refine our lemma. The proof is actually pretty easy:
- In the base case, the expression can be evaluated in one step.
- In the hard case:
  - If e is strongly normalizing, then so is eval_once e (and vice versa): by induction on the definition of Sn.
  - Induction: take the successor of the steps obtained for e'.
First, we will prove (1):
In the reflexive case, we can prove the statement trivially by the definition of Sn.
lemma eval_rfl_imp_sn_iff : ∀ e, eval_once e = e → (Sn (eval_once e) ↔ Sn e) := by
intro e h_eq
constructor
rw [h_eq]
simp
rw [h_eq]
simp
Otherwise, e ≠ eval_once e, but clearly in all cases besides application, eval_once e = e.
In the application case, we prove each direction of the biconditional separately. Sn (eval_once e) → Sn e is trivially true per the .hard constructor of Sn.
Sn e → Sn (eval_once e) can be proven by pattern matching on Sn e.
lemma sn_bidirectional : ∀ (e : Expr''), Sn (eval_once e) ↔ Sn e := by
intro e
match e with
| .var n =>
have h : eval_once (.var n) = (.var n) := by
unfold eval_once
simp
apply eval_rfl_imp_sn_iff
exact h
| .cnst c =>
have h : eval_once (.cnst c) = (.cnst c) := by
unfold eval_once
simp
apply eval_rfl_imp_sn_iff
exact h
| .app lhs rhs =>
constructor
intro h
apply Sn.hard
exact h
intro h
cases h
case mpr.trivial a =>
rw [a]
apply Sn.trivial
exact a
case mpr.hard a =>
exact a
| .abstraction bind_ty body =>
have h : eval_once (.abstraction bind_ty body) = (.abstraction bind_ty body) := by
unfold eval_once
simp
apply eval_rfl_imp_sn_iff
exact h
Using the fact that Sn is bidirectional (3), we can prove that strong normalization implies termination (2). We simply obtain n_steps - 1 via induction given (3), and use its successor.
lemma sn_imp_exists_steps : ∀ e, Sn e → ∃ n_steps, n_steps > 0 ∧ (eval''' e n_steps).isSome := by
intro e h_sn
induction h_sn
use 1
unfold eval''' at *
simp
case hard e' h h₂ =>
have ⟨n, ⟨h_n_pos, h⟩⟩ := h₂
use n.succ
rw [eval''']
simp [h]
simp_all
exact Nat.pos_iff_ne_zero.mp h_n_pos
Termination Proof
We have shown that every strongly normalizing expression terminates. Obviously, not all lambda calculus expressions are strongly normalizing (e.g., \((\lambda x.x\ x)(\lambda x.x\ x)\)). However, it has been extensively demonstrated that well-typed expressions in the simply-typed lambda calculus (STLC) are strongly normalizing, and thus, terminate.
If we can prove that all expressions that are well-typed, given our inference rules in chapter 1, are strongly normalizing, then we have successfully proven that all well-typed expressions terminate. Using our lemmas, we can extract a value for steps, and run our evaluation function successfully.
Termination proofs have been exhaustively enumerated for the STLC in the literature, even in Lean. However, proofs of strong normalization of other typed lambda calculi are comparatively sparse.
In this paper, I will be considering a particular class of type systems based on "polymorphic types" (a la System F).
SK Combinators
The SK combinator calculus is a Turing-complete system of computation / logic without variables or scopes. SK programs are composed of two "combinators": \(S\) and \(K\). The semantics of \(S\) and \(K\) are given by:
$$ K x y = x \\ S x y z = x z (y z) $$
The \(K\) combinator can be thought of as a "constant function maker." In lambda calculus, the \(K\) combinator is defined as:
$$ (\lambda a.(\lambda b.a)) $$
The \(S\) combinator can be thought of as a substitution function. It copies the argument \(z\), applies it to \(x\) and \(y\), then applies the results to each other.
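For example:
$$ S\ K\ K\ x = K\ x\ (K\ x) = x $$
so \(S\ K\ K\) behaves as the identity combinator \(I\).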
The SK combinator calculus has been demonstrated to be Turing-complete, and equivalent to the lambda calculus.
Lean Examples
We can encode the combinators as dependent functions in Lean:
import Mathlib.Tactic
def K {α : Type} {β : Type} (x : α) (_y : β) := x
def S {α : Type} {β : Type} {γ : Type} (x : α → β → γ) (y : α → β) (z : α) := x z (y z)
#eval K 0 1
-- => 0
#eval S (λx y => x + y) Nat.succ 1
-- => 3
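As an illustrative check of the \(S\ K\ K = I\) identity above (the explicit type arguments on the inner K are only there so Lean can infer the otherwise-unconstrained β):
-- S K K n reduces to K n (K n) = n
#eval S K (@K Nat Nat) (0 : Nat)
-- => 0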
The SK combinators are Turing-complete. Dynamic evaluation of the combinators in a dependently typed language invokes unfounded recursion, just as in the lambda calculus.
Strong normalization of the typed SK combinators has been comparatively under-studied compared to the simply typed lambda calculus. Furthermore, proofs of strong normalization of the dependently-typed SK combinators have yet to be mechanized in a language like Lean.
In this paper, I detail my interpretation of the combinators as dependently-typed functions, my representation of them in Lean, and my proof of their strong normalization in Lean.
Typed SK Combinators & Strong Normalization
Definition of the SK Combinators
-
Schonfinkel, M. - On the building blocks of mathematical logic
- Schonfinkel defines \(C\) and \(S\) combinators:
$$ S x y z = x z (y z) \\ C x y = x $$
- Schonfinkel demonstrates that the \(I\) combinator (\(I x = x\)) can be constructed using only \(K\) (which he names \(C\)) and \(S\).
- Schonfinkel demonstrates that every logical formula can be expressed using the combinators \(S\) and \(C\) (also known as \(K\)).
Strong Normalization of the SK Combinators
- Abel, A. - Strong normalization for simply-typed combinatory algebra using Girard's reducibility candidates, formalized in Agda (https://www.cse.chalmers.se/~abela/agda/cr-sk.pdf).
- A formal proof of strong normalization of the simply-typed SK combinators is provided in Agda.
- The proof utilizes Girard's reducibility candidates.
Common Typings of the Combinators
Simple Typing
- Tarau, P. - On Synergies between Type Inference, Generation and Normalization of SK-combinator Trees
- The standard definitions of \(S\) and \(K\) are utilized.
- A simple-typing of the \(SK\) combinators is formalized in Prolog.
Dependent Typing
-
Altenkirch, T. - Towards dependent combinator calculus
- A dependent typing of \(S\) and \(K\) is given with type universes:
$$ U : Set \\ EI : U \rightarrow Set \\ u : U \\ K : \{A : U\}\ \{B : EI\ A \rightarrow U\} \rightarrow (a : EI\ A) \rightarrow EI\ (B\ a) \rightarrow EI\ A \\ S : \{A : U\}\ \{B : EI\ A \rightarrow U\}\ \{C : (a : EI\ A) \rightarrow EI\ (B\ a) \rightarrow U\} \\ \rightarrow (f : (a : EI\ A) \rightarrow (b : EI\ (B\ a)) \rightarrow EI\ (C\ a\ b)) \\ \rightarrow (g : (a : EI\ A) \rightarrow EI\ (B\ a)) \\ \rightarrow (a : EI\ A) \rightarrow EI\ (C\ a\ (g\ a)) $$
- No formal proof of strong normalization is provided.
Gap
As of yet, it remains to be explored whether the dependent \(SK\) combinators are provably strongly normalizing. Furthermore, mechanizing such a proof in a language like Lean is of particular interest. In this paper, I do so, building off existing dependent typings of the SK combinators and Girard's reducibility candidates.
AST
SK Polymorphic Typing
I interpret the SK combinators as polymorphic functions of the form:
$$ K\ (\alpha : \text{Type}\ m)\ (\beta : \text{Type}\ n) : \alpha \rightarrow \beta \rightarrow \alpha \\ S\ (\alpha : \text{Type}\ m)\ (\beta : \text{Type}\ n)\ (\gamma : \text{Type}\ o) : (\alpha \rightarrow \beta \rightarrow \gamma) \rightarrow (\alpha \rightarrow \beta) \rightarrow \alpha \rightarrow \gamma $$
where \(K\ \mathbb{N}\ \mathbb{R}\) produces a function of type \(\mathbb{N} \rightarrow \mathbb{R} \rightarrow \mathbb{N}\).
Borrowing from the calculus of constructions, the types of \(K\) and \(S\) are explicitly given by the \(\forall\) expression:
$$ K : (\forall \alpha : \text{Type}\ m, \beta : \text{Type}\ n, x : \alpha, y : \beta . \alpha) \\ S : (\forall \alpha : \text{Type}\ m, \beta : \text{Type}\ n, \gamma : \text{Type}\ o, \\ x : (\forall x : \alpha, y : \beta . \gamma), y : (\forall x : \alpha . \beta), z : \alpha . \gamma) $$
Typing judgements on function application are derived from substitution on \(\forall\). For example, the type of \(K\ \mathbb{N}\) is derived from substitution of \(\mathbb{N}\) into the type of \(K\). That is, \(\alpha\) is replaced with \(\mathbb{N}\), and the type of the expression is said to be equal to the body of the \(\forall\) in which the value was substituted.
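For example:
$$ K\ \mathbb{N} : (\forall \beta : \text{Type}\ n, x : \mathbb{N}, y : \beta . \mathbb{N}) $$
obtained by substituting \(\mathbb{N}\) for \(\alpha\) in the type of \(K\) and dropping the outermost binder.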
The \(K\) and \(S\) combinators can be encoded in AST form in Lean using De Bruijn indices like so:
import Mathlib.Tactic
A BindId n refers to the value of the variable binder in the nth-up \(\forall\) expression.
structure BindId where
toNat : ℕ
deriving BEq, Repr, DecidableEq
namespace BindId
def succ (id : BindId) : BindId := ⟨id.toNat.succ⟩
instance : LT BindId where
lt (slf : BindId) (other : BindId) : Prop :=
slf.toNat < other.toNat
instance : LE BindId where
le (slf : BindId) (other : BindId) : Prop :=
slf.toNat ≤ other.toNat
instance : DecidableRel (@LT.lt BindId _) :=
fun a b => inferInstanceAs (Decidable (a.toNat < b.toNat))
instance : Preorder BindId where
le_refl (slf : BindId) : slf ≤ slf := by
unfold LE.le
unfold instLE
simp
le_trans (a b c : BindId) : a ≤ b → b ≤ c → a ≤ c := by
intro a b
unfold LE.le at *
unfold instLE at *
simp_all
linarith
lt_iff_le_not_le (a b : BindId) : a < b ↔ a ≤ b ∧ ¬b ≤ a := by
unfold LE.le
unfold LT.lt
unfold instLT
unfold instLE
simp
intro a
unfold toNat at *
linarith
end BindId
For example, the expression \(\forall x : \alpha.x\) can be rewritten using De Bruijn indices to \(\forall \alpha.1\).
An expression is one of: \(K\), \(S\), \(\text{Prop}\), \(\text{Ty}\ n\), \((\forall e_{1}.e_{2})\), \(e_{1}\ e_{2}\), or a variable \(n\).
For convenience and legibility, I create a separate definition of each expression:
inductive K where
| mk : K
deriving DecidableEq, BEq, Repr
inductive S where
| mk : S
deriving DecidableEq, BEq, Repr
inductive Prp where
| mk : Prp
deriving DecidableEq, BEq, Repr
Stratified type universes.
inductive Ty where
| mk : ℕ → Ty
deriving DecidableEq, BEq, Repr
mutual
inductive Call where
| mk : SkExpr → SkExpr → Call
deriving DecidableEq, BEq, Repr
Var uses a 1-indexed De Bruijn position.
inductive Var where
| mk : BindId → Var
deriving DecidableEq, BEq, Repr
A \(\forall\) carries the type of its binder \(x : t\), and a body \(e\).
inductive Fall where
| mk : SkExpr → SkExpr → Fall
inductive SkExpr where
| k : K → SkExpr
| s : S → SkExpr
| prp : Prp → SkExpr
| ty : Ty → SkExpr
| fall : Fall → SkExpr
| call : Call → SkExpr
| var : Var → SkExpr
deriving DecidableEq, BEq, Repr
end
Some convenience accessor methods:
namespace Ty
def n : Ty → ℕ
| .mk n => n
end Ty
namespace Call
def lhs : Call → SkExpr
| .mk lhs _ => lhs
def rhs : Call → SkExpr
| .mk _ rhs => rhs
end Call
namespace SkExpr
Whenever a value is substituted into a \(\forall\) expression, all of its free variables must be incremented in order to prevent capture by the binders of the new surrounding expression.
def with_indices_plus (in_expr : SkExpr) (shift_by : BindId) (at_depth : ℕ) : SkExpr :=
match in_expr with
| .fall (.mk bind_ty body) =>
.fall (.mk (bind_ty.with_indices_plus shift_by at_depth.succ) (body.with_indices_plus shift_by at_depth.succ))
| .var (.mk n) =>
if n.toNat > at_depth then
var (.mk ⟨n.toNat + shift_by.toNat⟩)
else
var (.mk n)
| .call (.mk lhs rhs) =>
call (.mk (lhs.with_indices_plus shift_by at_depth) (rhs.with_indices_plus shift_by at_depth))
| x => x
end SkExpr
namespace Fall
def bind_ty : Fall → SkExpr
| .mk bind_ty _ => bind_ty
def body : Fall → SkExpr
| .mk _ body => body
def substitute (in_fall : Fall) (with_expr : SkExpr) : Fall :=
let rec substitute_e (in_expr : SkExpr) (n : BindId) (with_expr : SkExpr) : SkExpr :=
match in_expr with
| .fall (.mk bind_ty body) =>
.fall (.mk (substitute_e bind_ty n.succ with_expr) (substitute_e body n.succ with_expr))
| .var (.mk n') => if n = n' then with_expr else .var (.mk n')
| .call (.mk lhs rhs) => .call (.mk (substitute_e lhs n with_expr) (substitute_e rhs n with_expr))
| x => x
match in_fall with
| .mk bind_ty body =>
.mk (substitute_e bind_ty ⟨1⟩ (with_expr.with_indices_plus ⟨1⟩ 0)) (substitute_e body ⟨1⟩ (with_expr.with_indices_plus ⟨1⟩ 0))
end Fall
I give a few test cases of substitution and index shifting. See the preservation chapter for more complicated, rigorous lemmas.
example : ((Fall.mk (.ty (.mk 0)) (.var (.mk ⟨1⟩)))).substitute (.var (.mk ⟨2⟩)) = (Fall.mk (.ty (.mk 0)) (.var (.mk ⟨3⟩))) := by
simp [Fall.substitute]
simp [Fall.substitute.substitute_e]
simp [SkExpr.with_indices_plus]
example : (Fall.mk (.ty (.mk 0)) (.var (.mk ⟨1⟩))).substitute (.fall (.mk (.ty (.mk 0)) (.var (.mk ⟨4⟩)))) = (Fall.mk (.ty (.mk 0)) ((.fall (.mk (.ty (.mk 0)) (.var (.mk ⟨5⟩)))))) := by
simp [Fall.substitute]
simp [Fall.substitute.substitute_e]
simp [SkExpr.with_indices_plus]
One-step evaluation is only defined for \(K\ \alpha\ \beta\ x\ y\) and \(S\ \alpha\ \beta\ \gamma\ x\ y\ z\).
namespace Call
def eval_once : Call → SkExpr
| (.mk (.call (.mk (.call (.mk (.call (.mk (.k .mk) _)) _)) x)) _) => x
| (.mk (.call (.mk (.call (.mk (.call (.mk (.call (.mk (.call (.mk (.s .mk) _)) _)) _)) x)) y)) z) => .call (.mk (.call (.mk x z)) (.call (.mk y z)))
| (.mk x y) => (.call (.mk x y))
end Call
Dsl
I define a DSL to convert human-readable input with named variables into De Bruijn-indexed AST expressions.
import SkLean.Ast
De Bruijn and "named variable" expresions are mixed in the same tree.
inductive NamedSkExpr where
| k : NamedSkExpr
| s : NamedSkExpr
| prp : NamedSkExpr
| ty : ℕ → NamedSkExpr
| fall : String → NamedSkExpr → NamedSkExpr → NamedSkExpr
| call : NamedSkExpr → NamedSkExpr → NamedSkExpr
| var : String → NamedSkExpr
| e : SkExpr → NamedSkExpr
namespace NamedSkExpr
De Bruijn indices are assigned greedily based on the location of the nearest bound matching variable.
def to_sk_expr (names : List String) : NamedSkExpr → SkExpr
| .k => .k .mk
| .e e' => e'
| .s => .s .mk
| .prp => .prp .mk
| .ty n => .ty (.mk n)
| .call lhs rhs => .call (.mk (to_sk_expr names lhs) (to_sk_expr names rhs))
| .var name => .var $ (.mk ⟨Nat.succ <| (names.findIdx? (. = name)).getD 0⟩)
| .fall name bind_ty body => .fall (.mk
(to_sk_expr (name :: names) bind_ty)
(to_sk_expr (name :: names) body))
end NamedSkExpr
The DSL is used like so:
SK[K]
SK[(K (Type 0))]
SK[∀ α : Type 0, ∀ β : Type 0, ∀ x : α, ∀ y : β, α]
declare_syntax_cat skexpr
syntax "K" : skexpr
syntax "S" : skexpr
syntax "Prop" : skexpr
syntax "Type" num : skexpr
syntax "Type" ident : skexpr
syntax "∀" ident ":" skexpr "," skexpr : skexpr
syntax "(" skexpr skexpr ")" : skexpr
syntax "(" skexpr ")" : skexpr
syntax ident : skexpr
syntax "#" ident : skexpr
syntax skexpr "→" skexpr : skexpr
syntax " ⟪ " skexpr " ⟫ " : term
macro_rules
| `(⟪ ($e:skexpr) ⟫) => `(⟪ $e ⟫)
| `(⟪ K ⟫) => `(NamedSkExpr.k)
| `(⟪ S ⟫) => `(NamedSkExpr.s)
| `(⟪ Prop ⟫) => `(NamedSkExpr.prp)
| `(⟪ Type $n:num ⟫) => `(NamedSkExpr.ty $n)
| `(⟪ Type $v:ident ⟫) => `(NamedSkExpr.ty $v)
| `(⟪ #$var:ident ⟫) =>
`(NamedSkExpr.var $(Lean.quote var.getId.toString))
| `(⟪ $var:ident ⟫) => `(NamedSkExpr.e $var)
| `(⟪ $e₁:skexpr → $e₂:skexpr ⟫) => `(⟪ ∀ x : $e₁, $e₂ ⟫)
| `(⟪ ∀ $var:ident : $e_ty:skexpr , $body:skexpr ⟫) =>
`(NamedSkExpr.fall $(Lean.quote var.getId.toString) (⟪ $e_ty ⟫) (⟪ $body ⟫))
| `(⟪ ($e₁:skexpr $e₂:skexpr)⟫) => `(NamedSkExpr.call ⟪ $e₁ ⟫ ⟪ $e₂ ⟫)
syntax "SK[ " skexpr " ] " : term
macro_rules
| `(SK[ $e:skexpr ]) => `(NamedSkExpr.to_sk_expr [] ⟪ $e ⟫)
Variables in body position must be prefixed with # so as not to collide with definitions in scope.
#eval SK[∀ α : Type 0, ∀ β : Type 0, ∀ x : #α, ∀ y : #β, #α]
Type Inference
Type judgements are relatively obvious, except in the case of application and the \(\forall\) expression.
I make use of a De Bruijn-indexed context corresponding to the bound type of a variable \(n\) in an expression.
Indexes are offset by one: BindId 1 refers to the current \(\forall\) expression, while ctx[0] refers to the most recently bound variable's type.
import SkLean.Ast
import SkLean.Dsl
import Mathlib.Tactic
open SkExpr
abbrev Ctx := List SkExpr
I make use of a DSL for convenience and legibility. See DSL for more. The types of K and S are fixed, although they are dependent on a universe level provided at the meta-level. It is not immediately clear that type universes are required. I plan on elaborating more in the dependent typing examples chapter.
def ty_k {m n : ℕ} := SK[∀ α : Type m, ∀ β : Type n, #α → #β → #α]
def ty_k_fall {m n : ℕ} := (Fall.mk (.ty (.mk m)) (.fall (Fall.mk (.ty (.mk n)) (.fall (Fall.mk (.var (.mk ⟨3⟩)) (.fall (Fall.mk (.var (.mk ⟨3⟩)) (.var (.mk ⟨4⟩)))))))))
lemma ty_k_def_eq {m n : ℕ} : @ty_k m n = (.fall (@ty_k_fall m n)) :=
rfl
def ty_s {m n o : ℕ} := SK[∀ α : Type m, ∀ β : Type n, ∀ γ : Type o, (#α → #β → #γ) → (#α → #β) → #α → #γ]
Closedness
Note that this typing of the \(S\) and \(K\) combinators implies that free variables are inexpressible in this calculus. This simplifies typing judgements significantly. I prove this in the preservation chapter. I still make use of a context for a readable typing judgement.
Beta equivalence is defined as equality after some sequence of evaluations. Expressions are certainly \(=_{\beta}\) if they are definitionally equivalent. An expression is beta equivalent to another if its one-step reduct is beta-equivalent to the other expression. Symmetry and transitivity are included as rules.
inductive beta_eq : SkExpr → SkExpr → Prop
| trivial e₁ e₂ : e₁ = e₂ → beta_eq e₁ e₂
| hard e₁ e₂ : beta_eq e₁ (e₂.eval_once) → beta_eq e₁ (.call e₂)
| symm e₁ e₂ : beta_eq e₂ e₁ → beta_eq e₁ e₂
| trans e₁ e₂ e₃ : beta_eq e₁ e₂ → beta_eq e₂ e₃ → beta_eq e₁ e₃
- K expression: t is a valid judgement if it is equivalent to ty_k at the same universe level.
- S expression: t is a valid judgement if it is equivalent to ty_s at the same universe level.
- e₁ e₂ expression: t is a valid judgement if t_rhs is a valid judgement for the right-hand side, t_lhs is a valid judgement for the left-hand side of the form ∀ x : t_rhs, b, and \(t = b[x := rhs]\) under some context.
- ∀ x : bind_ty . body expression: t is a valid judgement if t_body is a valid judgement for body and t = t_body.
- Type n expression: t is a valid judgement if t = ty (n + 1).
- Prop expression: t is a valid judgement if t = ty 0.
- var n expression: t is a valid judgement if the nth nearest-bound variable in the context is of type t.
- t is a valid judgement for e if some t' is beta equivalent to it, and t' is a valid judgement for e.
-- valid_judgement e t
inductive valid_judgement : Ctx → SkExpr → SkExpr → Prop
| k ctx k m n : valid_judgement ctx (.k k) (@ty_k m n)
| s ctx s m n o : valid_judgement ctx (.s s) (@ty_s m n o)
| call ctx (call : Call)
(t_lhs : Fall) :
valid_judgement ctx call.lhs (.fall t_lhs) →
valid_judgement ctx call.rhs (t_lhs.bind_ty) →
valid_judgement ctx (.call call) (t_lhs.substitute call.rhs).body
| fall ctx (fall : Fall) t_bind_ty t_body :
valid_judgement (fall.bind_ty :: ctx) fall.bind_ty t_bind_ty →
valid_judgement (fall.bind_ty :: ctx) fall.body t_body →
valid_judgement ctx (.fall fall) t_body
| ty ctx (ty_e : Ty) : valid_judgement ctx (.ty ty_e) (.ty (.mk ty_e.n.succ))
| prp ctx (prp : Prp) : valid_judgement ctx (.prp prp) (.ty (.mk 0))
| beta_eq ctx e t t₂ : beta_eq t t₂ → valid_judgement ctx e t₂ → valid_judgement ctx e t
| var ctx n (h_pos : n > ⟨0⟩) (h_in_bounds : (n.toNat - 1) < ctx.length) : valid_judgement ctx (.var (.mk n)) (ctx[n.toNat - 1])
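As a small sanity check of these rules (an illustrative example, assuming only the definitions above), the K combinator can be assigned ty_k at any pair of universe levels:
-- K is well-typed with ty_k at universe levels 0, 0
example : valid_judgement [] (.k .mk) (@ty_k 0 0) :=
  valid_judgement.k [] .mk 0 0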
Strong Normalization Definition
I define strong normalization inductively as:
- Termination in one step of evaluation
- Strong normalization of the next step of evaluation
I assume strong normalization of terms that are not computable (e.g., Prop, Ty, etc.).
Unsubstituted variables are assumed to be strongly normalizing, as the inductive definition of Sn for call evaluation does not produce unsubstituted variables, and free variables are inexpressible.
import SkLean.Ast
import SkLean.Dsl
import SkLean.Typing
open SkExpr
inductive Sn : SkExpr → Prop
| trivial (call : Call) : call.eval_once = (.call call) → Sn (.call call)
| hard (call : Call) : Sn call.eval_once → Sn (.call call)
| prp (prp : Prp) : Sn (.prp prp)
| ty (t : Ty) : Sn (.ty t)
| fall (f : Fall) : Sn (.fall f)
| var (v : Var) : Sn (.var v)
I reuse the previous lemmas about SN bidirectionality (preservation of SN).
lemma eval_rfl_imp_sn_iff : ∀ (call : Call), call.eval_once = (.call call) → (Sn (call.eval_once) ↔ Sn (.call call)) := by
intro e h_eq
constructor
rw [h_eq]
simp
rw [h_eq]
simp
lemma sn_bidirectional : ∀ (call : Call), Sn (call.eval_once) ↔ Sn (.call call) := by
intro e
constructor
intro h
apply Sn.hard
exact h
intro h
cases h
case mpr.trivial a =>
rw [a]
apply Sn.trivial
exact a
case mpr.hard a =>
exact a
We thus define our primary theorem as follows. In the following chapters, I prove this statement by defining reducibility candidates, proving that all well-typed expressions are reducibility candidates, and that all reducibility candidates are strongly normalizing.
def all_well_typed_sn (ctx : Ctx) := ∀ (e : SkExpr) (t : SkExpr), valid_judgement ctx e t → Sn e
Reducibility Candidates
I extend existing proofs of strong normalization of the STLC and simply-typed SK combinators by defining reducibility candidates.
I define reducibility candidates as expressions of type \(t\) which are hereditarily terminating:
- In the case of noncomputable expressions \(e : t\), the expression is trivially hereditarily terminating and a reducibility candidate for type \(t\).
- In the case where \(t\) is of the form \(\forall x:\alpha.\beta\), an expression \(e\) of type \(t\) is a reducibility candidate for \(t\) if all of its one-step reducts are reducibility candidates for \(\beta\). The set of one-step reducts is given by one-step evaluation of the applications \(e\ arg\), where \(arg\) is a reducibility candidate for \(\alpha\).
import SkLean.Ast
import SkLean.Dsl
import SkLean.Typing
inductive is_candidate_for_t : Ctx → SkExpr → SkExpr → Prop
| ty (e : Ty) (t : Ty) :
valid_judgement ctx (.ty e) (.ty t) → is_candidate_for_t ctx (.ty e) (.ty t)
| prp (e : Prp) (t : Ty) :
valid_judgement ctx (.prp e) (.ty t) → is_candidate_for_t ctx (.prp e) (.ty t)
| fall (e : Fall) (t : SkExpr) :
valid_judgement ctx (.fall e) t → is_candidate_for_t ctx (.fall e) t
| fn (e : SkExpr) (t : Fall) :
valid_judgement ctx e (.fall t) →
-- All one-step reducts are reducibility candidates
∀ arg, is_candidate_for_t ctx arg t.bind_ty →
is_candidate_for_t ctx ((Call.mk e arg).eval_once) t.body →
is_candidate_for_t ctx e (.fall t)
For simplicity in further proofs, we will prove that reducibility candidates for t are of type t:
lemma candidate_for_t_imp_judgement_t (ctx : Ctx) : ∀ (e : SkExpr) (t : SkExpr), is_candidate_for_t ctx e t → valid_judgement ctx e t := by
intro e t h
cases h
case ty h =>
exact h
case prp h =>
exact h
case fall h =>
exact h
case fn _ h _ =>
exact h
In the next chapter, I prove preservation of typing judgements, and preservation of membership in the candidate set for type \(t\) for \(e : t\).
Preservation of SN and Typing Judgements
In order to prove inclusion of all well-typed expressions in our reducibility candidate set, we must first prove:
- That typing judgements are preserved after one-step evaluation
- That membership in the candidate set is preserved after evaluation
import SkLean.Ast
import SkLean.Dsl
import SkLean.Typing
import Mathlib.Tactic
import Init.Data.List.Lemmas
Preservation of typing judgements
For unevaluable expressions, the judgement trivially holds.
Preservation of typing under evaluation of K is proven given:
- Evaluation of K α β (x : α) y = x
- The typing of the reduct, (K α β x y).eval_once : α (by x : α and .eval_once = x), then
- Preservation of K α β x y : α holds
lemma k_def_eq : ∀ α β x y, (Call.mk SK[(((K α) β) x)] y).eval_once = x := by
intro α β x y
unfold Call.eval_once
repeat unfold NamedSkExpr.to_sk_expr
simp
lemma s_def_eq : ∀ α β γ x y z, (Call.mk SK[(((((S α) β) γ) x) y)] z).eval_once = SK[((x z)(y z))] := by
intro α β x y γ
unfold Call.eval_once
repeat unfold NamedSkExpr.to_sk_expr
simp
Assuming x : α is the easy case, but does not prove preservation alone.
lemma k_x_judgement_holds_eval_once : ∀ α β x y, valid_judgement [] x α → valid_judgement [] (Call.mk SK[(((K α) β) x)] y).eval_once α := by
intro α β x y h_t_x
unfold Call.eval_once
repeat unfold NamedSkExpr.to_sk_expr
simp
exact h_t_x
Proving preservation in the general case of evaluation requires proving K α β x y : α → (K α β x y).eval_once : α.
Substitution
This lemma becomes tricky once substitution is required. I prove that substituting at index n into a variable other than var n is a no-op. I also prove that substituting at index 1 into var 1 yields rhs.
lemma substitute_free_noop : ∀ rhs v n, (Var.mk n) ≠ v → Fall.substitute.substitute_e (.var v) n rhs = (.var v) := by
intro rhs n h_n_gt_1
unfold Fall.substitute.substitute_e
simp
intro h
match h : SkExpr.var n with
| .var (Var.mk n') =>
simp_all
| .k _ => contradiction
| .s _ => contradiction
| .prp _ => contradiction
| .ty _ => contradiction
| .fall _ => contradiction
| .call _ => contradiction
lemma substitute_bound_1 : ∀ rhs, Fall.substitute.substitute_e (.var (.mk ⟨1⟩)) ⟨1⟩ rhs = rhs := by
intro rhs
unfold Fall.substitute.substitute_e
simp
lemma substitute_prp_noop : ∀ e rhs n, e = SkExpr.prp (.mk) → Fall.substitute.substitute_e e n rhs = e := by
intro e rhs n h_e_not_var
unfold Fall.substitute.substitute_e
rw [h_e_not_var]
lemma substitute_ty_noop : ∀ u e rhs n, e = SkExpr.ty (.mk u) → Fall.substitute.substitute_e e n rhs = e := by
intro u e rhs n h_e_not_var
unfold Fall.substitute.substitute_e
rw [h_e_not_var]
lemma substitute_k_noop : ∀ e rhs n, e = SkExpr.k (.mk) → Fall.substitute.substitute_e e n rhs = e := by
intro e rhs n h_e_not_var
unfold Fall.substitute.substitute_e
rw [h_e_not_var]
lemma substitute_s_noop : ∀ e rhs n, e = SkExpr.s (.mk) → Fall.substitute.substitute_e e n rhs = e := by
intro e rhs n h_e_not_var
unfold Fall.substitute.substitute_e
rw [h_e_not_var]
lemma n_eq_imp_bound_rhs : ∀ v n rhs, (.mk n) = v → Fall.substitute.substitute_e (.var v) n rhs = rhs := by
intro v n rhs h_v_n_eq
unfold Fall.substitute.substitute_e
match v with
| .mk n' =>
simp_all
Combinator evaluation type preservation
Now, I prove that valid_judgement SK[K α β x y] α → valid_judgement SK[K α β x y].eval_once α by proving the call is well-typed with type α.
Judgements of function application are determined to be true or false by substitution of the ∀ type of the left-hand side of the function call. This would seem to complicate the lemma once an argument to K like ty_k is provided. ty_k is dependent, and contains variables. However, all arguments to K are well-typed and closed. I demonstrate this later.
We can use this fact in our larger lemma.
However, the judgement rule allowing beta_eq t t' → valid_judgement e t' → valid_judgement e t further complicates matters. We defer to the later SN proof with sorry for most lemmas.
A variable is bound if ctx[(var n).n - 1] = some t.
inductive is_bound : Ctx → SkExpr → Prop
| var ctx n : n.toNat > 0 → ⟨(n.toNat - 1)⟩ < (BindId.mk ctx.length) → is_bound ctx (.var (.mk n))
| app ctx c : is_bound ctx c.lhs → is_bound ctx c.rhs → is_bound ctx (.call c)
| fall ctx f : is_bound (f.bind_ty :: ctx) f.bind_ty → is_bound (f.bind_ty :: ctx) f.body → is_bound ctx (.fall f)
| k ctx k : is_bound ctx (.k k)
| s ctx s : is_bound ctx (.s s)
| prp ctx prp : is_bound ctx (.prp prp)
| ty ctx ty : is_bound ctx (.ty ty)
lemma all_well_typed_var_bound_iff (ctx : Ctx) : ∀ (n : BindId), (∃ t, valid_judgement ctx (.var (.mk n)) t) ↔ is_bound ctx (.var (.mk n)) := by
intro v
constructor
intro ⟨t_var, h_t_valid⟩
cases h_t_valid
case mp.beta_eq => sorry
case mp.var h_pos h_in_bounds =>
apply is_bound.var ctx
if h : v = { toNat := ctx.length } then
simp_all
else
simp_all
rw [LT.lt] at h_pos
simp [BindId.instLT] at h_pos
simp_all
rw [LT.lt]
simp [BindId.instLT]
exact h_in_bounds
case mpr =>
intro h_bound
cases h_bound
use ctx[v.toNat - 1]
apply valid_judgement.var
simp [GT.gt]
rw [LT.lt]
simp [BindId.instLT]
simp_all
I generalize this lemma to all expressions by induction on the definition of being bound.
lemma all_well_typed_e_bound_iff (ctx : Ctx) : ∀ e, (∃ t, valid_judgement ctx e t) → is_bound ctx e := by
intro e h_typed
cases e
case k =>
simp [is_bound.k]
case s =>
simp [is_bound.s]
case prp =>
simp [is_bound.prp]
case ty =>
simp [is_bound.ty]
case fall f =>
obtain ⟨t, h_valid_t⟩ := h_typed
match f with
| .mk bind_ty body =>
cases h_valid_t
case fall t_bind_ty h_bind_ty h_body =>
simp [Fall.bind_ty] at h_bind_ty
simp [Fall.body] at h_body
apply is_bound.fall
apply all_well_typed_e_bound_iff
use t_bind_ty
simp [Fall.bind_ty]
exact h_bind_ty
apply all_well_typed_e_bound_iff
use t
simp [Fall.body]
exact h_body
case beta_eq => sorry
case call c =>
obtain ⟨t, h_valid_t⟩ := h_typed
match c with
| .mk lhs rhs =>
cases h_valid_t
case call t_lhs h_t_lhs h_t_rhs =>
simp [Call.lhs] at h_t_lhs
simp [Call.rhs] at h_t_rhs
apply is_bound.app
apply all_well_typed_e_bound_iff
use (.fall t_lhs)
exact h_t_lhs
apply all_well_typed_e_bound_iff
use t_lhs.bind_ty
exact h_t_rhs
case beta_eq => sorry
case var v =>
obtain ⟨t, h_valid_t⟩ := h_typed
match v with
| .mk n =>
cases h_valid_t
case var h_pos h_bound =>
apply is_bound.var
simp [GT.gt] at h_pos
rw [LT.lt] at h_pos
simp [BindId.instLT] at h_pos
exact h_pos
rw [LT.lt]
simp [BindId.instLT]
exact h_bound
case beta_eq => sorry
lemma shift_indices_bound_noop : ∀ v_n shift_by (depth : ℕ), v_n.toNat ≤ depth → (SkExpr.var (.mk v_n)).with_indices_plus shift_by depth = (SkExpr.var (.mk v_n)) := by
intro v_n shift_by depth h_is_bound
simp [SkExpr.with_indices_plus]
simp_all
lemma all_e_well_typed_bound_shift_noop (ctx : Ctx) : ∀ (e : SkExpr) t shift_by, valid_judgement ctx e t → e.with_indices_plus shift_by ctx.length = e := by
intro e t shift_by h_judgement_t
match h : e with
| .k _ =>
simp [SkExpr.with_indices_plus]
| .s _ =>
simp [SkExpr.with_indices_plus]
| .prp _ =>
simp [SkExpr.with_indices_plus]
| .ty _ =>
simp [SkExpr.with_indices_plus]
| .fall (.mk bind_ty body) =>
unfold SkExpr.with_indices_plus
simp
cases h_judgement_t
case fall t_bind_ty h_t_bind_ty h_t =>
simp [Fall.bind_ty] at h_t_bind_ty
simp [Fall.body] at h_t
have h := all_e_well_typed_bound_shift_noop (bind_ty :: ctx) bind_ty t_bind_ty shift_by h_t_bind_ty
simp_all
have h := all_e_well_typed_bound_shift_noop (bind_ty :: ctx) body t shift_by (by simp [Fall.bind_ty] at h_t; exact h_t)
simp_all
case beta_eq => sorry
| .call (.mk lhs rhs) =>
unfold SkExpr.with_indices_plus
simp
cases h_judgement_t
case call t_lhs h_t_lhs h_t_rhs =>
simp [all_e_well_typed_bound_shift_noop ctx lhs (.fall t_lhs) shift_by h_t_lhs]
simp [all_e_well_typed_bound_shift_noop ctx rhs t_lhs.bind_ty shift_by h_t_rhs]
case beta_eq => sorry
| .var (.mk n) =>
have h₀ := h_judgement_t
cases h_judgement_t
case beta_eq => sorry
case var h_idx_valid =>
have h_bound := (all_well_typed_var_bound_iff ctx n).mp (by
use ctx[n.toNat - 1]
)
cases h_bound
case var h_pos h_bound =>
rw [LT.lt] at h_bound
simp [BindId.instLT] at h_bound
exact shift_indices_bound_noop n shift_by ctx.length (by rw [← Nat.pred_eq_sub_one] at h_bound; exact Nat.le_of_pred_lt h_bound)
Using the above lemma, we can prove that substitution in an expression whose variables are all bound is a no-op.
lemma substitute_bound_noop (ctx : Ctx) : ∀ e with_expr, is_bound ctx e → Fall.substitute.substitute_e e ⟨ctx.length + 1⟩ with_expr = e := by
intro e with_expr h_bound
cases e
case k =>
simp [Fall.substitute.substitute_e]
case s =>
simp [Fall.substitute.substitute_e]
case prp =>
simp [Fall.substitute.substitute_e]
case ty =>
simp [Fall.substitute.substitute_e]
case fall f =>
match f with
| .mk bind_ty body =>
cases h_bound
case fall h_bind_ty_bound h_body_bound =>
simp [Fall.substitute.substitute_e]
constructor
exact substitute_bound_noop (bind_ty :: ctx) bind_ty with_expr h_bind_ty_bound
exact substitute_bound_noop (bind_ty :: ctx) body with_expr h_body_bound
case call c =>
match c with
| .mk lhs rhs =>
cases h_bound
case app h_bound_lhs h_bound_rhs =>
simp [Fall.substitute.substitute_e]
constructor
exact (substitute_bound_noop ctx lhs with_expr h_bound_lhs)
exact (substitute_bound_noop ctx rhs with_expr h_bound_rhs)
case var v =>
match h : v with
| .mk n =>
cases h_bound
case var h_bound =>
unfold Fall.substitute.substitute_e
simp
have h_bound' : (BindId.mk n.toNat) ≤ { toNat := ctx.length } := by
rw [LT.lt] at h_bound
simp [BindId.instLT] at h_bound
rw [LE.le]
simp [BindId.instLE]
omega
have h_bound'' : (BindId.mk n.toNat) < { toNat := ctx.length + 1 } := by
simp_all
rw [LT.lt]
simp [BindId.instLT]
rw [LE.le] at h_bound'
simp [BindId.instLE] at h_bound'
linarith
have h_bound''' : n < { toNat := ctx.length + 1 } := by
simp_all
simp_all
intro h
rw [h] at h_bound'''
exfalso
rw [LT.lt] at h_bound'''
simp [BindId.instLT] at h_bound'''
Using the fact that all variables that are well-typed are bound, we can say that with_indices_plus preserves the values of the variables. This concludes our lemma of preservation of K α β x y : α.
Inconsistency
It should be noted that the judgement (K α β x y).eval_once : α is obviously true, as evaluation of the K combinator requires no substitution, and leaves α unaltered. However, (K α β x y) : α can potentially perform substitution, if α contains free variables. K and S, though, are closed and contain no free variables.
Closed expressions do not contain free variables, and as a result, are well-typed at their roots with the same type under any context.
TODO: Clean this lemma up a ton.
lemma judgement_holds_closed_any : ∀ ctx ctxxs e t, valid_judgement ctx e t → valid_judgement (ctx ++ ctxxs) e t := by
intro ctx ctxxs e t
intro h_closed
cases e
case k a =>
cases h_closed
case k =>
simp [valid_judgement.k]
case beta_eq => sorry
case s a =>
cases h_closed
case s =>
simp [valid_judgement.s]
case beta_eq => sorry
case prp a =>
cases h_closed
case prp =>
simp [valid_judgement.prp]
case beta_eq => sorry
case ty a =>
cases h_closed
case ty =>
simp [valid_judgement.ty]
case beta_eq => sorry
case fall f =>
match f with
| .mk bind_ty body =>
cases h_closed
case fall t_bind_ty h_t_bind_ty h_t_body =>
simp [Fall.body] at *
simp [Fall.bind_ty] at *
apply valid_judgement.fall (ctx ++ ctxxs) (.mk bind_ty body) t_bind_ty t
sorry
case beta_eq => sorry
case call c =>
match c with
| .mk lhs rhs =>
cases h_closed
case call t_lhs h_t_lhs h_t_rhs =>
apply valid_judgement.call (ctx ++ ctxxs) (.mk lhs rhs) t_lhs
simp [Call.lhs] at *
have h := (judgement_holds_closed_any ctx ctxxs lhs (SkExpr.fall t_lhs)).mp h_t_lhs
exact h
have h := (judgement_holds_closed_any ctx ctxxs rhs t_lhs.bind_ty).mp h_t_rhs
exact h
case beta_eq => sorry
case var _ =>
sorry
This is a massive lemma. We ought to modularize it.
lemma k_judgement_x_imp_judgement_call {m n : ℕ} : ∀ α β x y, valid_judgement [] α SK[Type m] → valid_judgement [] β SK[Type n] → valid_judgement [] x α → valid_judgement [] y β → valid_judgement [] SK[((((K α) β) x) y)] α := by
intro α β x y t_α t_β t_x t_y
simp [NamedSkExpr.to_sk_expr] at t_α
simp [NamedSkExpr.to_sk_expr] at t_β
have h : valid_judgement [] SK[((((K α) β) x) y)] ((Fall.mk β α).substitute y).body := (by
apply valid_judgement.call [] (Call.mk SK[(((K α) β) x)] y) (.mk β α)
simp [NamedSkExpr.to_sk_expr] at *
simp [Call.lhs]
have h : (valid_judgement [] (SkExpr.call (Call.mk (SkExpr.call (Call.mk (SkExpr.call (Call.mk (SkExpr.k «K».mk) α)) β)) x))
((Fall.mk α (SkExpr.fall (Fall.mk β α))).substitute
(Call.mk (SkExpr.call (Call.mk (SkExpr.call (Call.mk (SkExpr.k «K».mk) α)) β)) x).rhs).body) := (by
apply valid_judgement.call [] (Call.mk (SkExpr.call (Call.mk (SkExpr.call (Call.mk (SkExpr.k «K».mk) α)) β)) x) (.mk α (.fall (.mk β α)))
simp [Call.lhs]
have h := (valid_judgement.call [] (.mk SK[(K α)] β) (.mk SK[Type n] (.fall (.mk α (.fall (.mk (.var (.mk ⟨3⟩)) α))))) (by
simp [Call.lhs, NamedSkExpr.to_sk_expr]
have h : (valid_judgement [] SK[(K α)] (.fall (.mk SK[Type n] (.fall (.mk α (.fall (.mk (.var (.mk ⟨3⟩)) α))))))) := by
simp [NamedSkExpr.to_sk_expr]
have h := valid_judgement.call [] (.mk SK[K] α) (@ty_k_fall m n) (by
simp [Call.lhs]
rw [← ty_k_def_eq]
simp [NamedSkExpr.to_sk_expr, valid_judgement.k]
) (by
simp [Call.rhs]
simp [ty_k_fall, Fall.bind_ty]
exact t_α
)
simp [Fall.body, Call.rhs] at h
rw [ty_k_fall] at h
simp [NamedSkExpr.to_sk_expr, Fall.substitute, Fall.substitute.substitute_e, BindId.succ] at h
have h_shift_noop := all_e_well_typed_bound_shift_noop [] α SK[Type m] ⟨1⟩ (by
simp [NamedSkExpr.to_sk_expr]
exact t_α
)
simp [NamedSkExpr.to_sk_expr, List.length] at h_shift_noop
rw [h_shift_noop] at h
exact h
exact h
) (by
simp [Call.rhs, Fall.bind_ty]
exact t_β
))
simp [NamedSkExpr.to_sk_expr] at h
simp [Fall.substitute] at h
simp [Call.rhs] at h
simp [substitute_ty_noop] at h
simp [Fall.substitute.substitute_e, BindId.succ, Fall.body] at h
have h_α_sub_noop : Fall.substitute.substitute_e α { toNat := 3 } (β.with_indices_plus { toNat := 1 } 0) = α := by
have h_sub_noop := substitute_bound_noop [α, SK[Type m]] α (β.with_indices_plus { toNat := 1 } 0)
have h_bound : is_bound [α, SK[Type m]] α := by
apply all_well_typed_e_bound_iff
use SK[Type m]
exact (judgement_holds_closed_any List.nil [α, SK[Type m]] α SK[Type m]).mp t_α
simp [NamedSkExpr.to_sk_expr] at h_bound
simp [h_bound, NamedSkExpr.to_sk_expr] at h_sub_noop
exact h_sub_noop
simp [h_α_sub_noop] at h
have h_α_sub_noop : Fall.substitute.substitute_e α { toNat := 2 } (β.with_indices_plus { toNat := 1 } 0) = α := by
have h_sub_noop := substitute_bound_noop [SK[Type m]] α (β.with_indices_plus { toNat := 1 } 0)
have h_bound : is_bound [SK[Type m]] α := by
apply all_well_typed_e_bound_iff
use SK[Type m]
exact (judgement_holds_closed_any List.nil [SK[Type m]] α SK[Type m]).mp t_α
simp [NamedSkExpr.to_sk_expr] at h_bound
simp [h_bound, NamedSkExpr.to_sk_expr] at h_sub_noop
exact h_sub_noop
simp [h_α_sub_noop] at h
have h_shift_noop := all_e_well_typed_bound_shift_noop [] β SK[Type n] ⟨1⟩ (by
simp [NamedSkExpr.to_sk_expr]
exact t_β
)
simp at h_shift_noop
simp [h_shift_noop] at h
exact h
simp [Call.rhs]
simp [Fall.bind_ty]
exact t_x
)
simp [Fall.substitute, Fall.substitute.substitute_e, Call.rhs, BindId.succ, Fall.body] at h
have h_sub_noop_h := substitute_bound_noop [α] β (x.with_indices_plus { toNat := 1 } 0) (by
apply all_well_typed_e_bound_iff
use SK[Type n]
exact (judgement_holds_closed_any List.nil [α] β SK[Type n]).mp t_β)
simp at h_sub_noop_h
rw [h_sub_noop_h] at h
have h_sub_noop_h := substitute_bound_noop [SK[Type m]] α (x.with_indices_plus { toNat := 1 } 0) (by
apply all_well_typed_e_bound_iff
use SK[Type m]
exact (judgement_holds_closed_any List.nil [SK[Type m]] α SK[Type m]).mp t_α)
simp at h_sub_noop_h
rw [h_sub_noop_h] at h
exact h
simp [Fall.bind_ty]
simp [NamedSkExpr.to_sk_expr, Call.rhs]
exact t_y
)
simp [NamedSkExpr.to_sk_expr] at *
simp [Fall.substitute] at h
simp [Fall.body] at h
have h_sub_alpha_noop := substitute_bound_noop [] α (y.with_indices_plus { toNat := 1 } 0) (by
cases α
case k =>
simp [is_bound.k]
case s =>
simp [is_bound.s]
case prp =>
simp [is_bound.prp]
case ty =>
simp [is_bound.ty]
case fall f =>
match f with
| .mk bind_ty body =>
cases t_α
case fall t_bind_ty h_t_bind_ty h_t_body =>
apply is_bound.fall
simp [Fall.bind_ty]
apply all_well_typed_e_bound_iff
use t_bind_ty
simp [Fall.bind_ty] at h_t_bind_ty
exact h_t_bind_ty
apply all_well_typed_e_bound_iff
use (SkExpr.ty (Ty.mk m))
case beta_eq =>
sorry
case call c =>
match h_c : c with
| .mk lhs rhs =>
-- call lhs rhs is surely of type Ty n
-- this implies lhs's type is of the form ∀
apply all_well_typed_e_bound_iff
use (.ty (Ty.mk m))
case var v =>
match v with
| .mk n =>
apply (all_well_typed_var_bound_iff [] n).mp
use (.ty (Ty.mk m))
)
simp at h_sub_alpha_noop
simp [h_sub_alpha_noop] at h
exact h
I do the same for \(S\).
lemma judgement_holds_eval_once (ctx : Ctx) : ∀ (call : Call) (t : SkExpr), valid_judgement ctx (.call call) t ↔ valid_judgement ctx call.eval_once t := by
intro c t
unfold Call.eval_once
split
case h_1 α β x y =>
constructor
intro h
cases h
case mp.call =>
sorry
lemma eval_once_candidate_iff (ctx : Ctx) : ∀ (call : Call) (t : SkExpr), is_candidate_for_t ctx (.call call) t ↔ is_candidate_for_t ctx (call.eval_once) t := by
intro e t
constructor
case mp =>
intro h
match h with
| .fn e' t' t_judgement arg h_t_arg h_redux => sorry
| .ty e' t' ht' =>
have h_judgement_t := candidate_for_t_imp_judgement_t ctx (.call e) t h
-- Since
sorry
| .prp _ _ _ => sorry
| .fall _ _ _ => sorry
case mpr =>
intro h
sorry
Examples of Dependent Types using Definitions
Combinator Definitions
import Mathlib.Tactic
abbrev K₀.{m, n} := ∀ (α : Type m) (β : Type n) (_x : α) (_y : β), α
def K₁ : K₀ := fun _α _β x _y => x
abbrev S₀.{m, n, o} := ∀ (α : Type m) (β : Type n) (γ : Type o) (_x : α → β → γ) (_y : α → β) (_z : α), γ
def S₁ : S₀ := fun _α _β _γ x y z => x z (y z)
\(M\) combinator
Normal \(\Pi\) typing:
\(\Pi (x : A), B x\)
M is basically the I combinator, but it allows reflection at runtime.
We can derive a few useful higher-typing combinators using this base combinator:
$$ M\ (e : t) = t \\ M_{\circ}\ (e : t) = e\ t = e\ (M\ e) = S\ (SKS)\ M\ e \\ M_{s}\ (e : t)\ arg = e\ arg\ (t\ arg) = S\ e\ t\ arg = S\ e\ (M\ e)\ arg = S\ S\ M\ e\ arg $$
Notably, this allows us to fully eliminate binders in all types.
$$ K : K \\ (K\ \mathbb{N} : K\ (M\ \mathbb{N})) $$
However, we quickly run into issues. We shouldn't be able to produce an expression of type False.
$$ false := SK \\ ? : false \\ false : (M S) (M K) \rightarrow false : S K \rightarrow false : false \\ \textbf{contradiction.} $$
However, if we introduce type universes, we easily get around this paradox:
$$ K_{0} : K_{1} \\ M\ K_{0} = K_{1} $$
mutual
inductive M where
| mk : ℕ → M
deriving DecidableEq, Repr, BEq
inductive S where
| mk : ℕ → S
deriving DecidableEq, Repr, BEq
inductive K where
| mk : ℕ → K
deriving DecidableEq, Repr, BEq
inductive Call where
| mk : Expr → Expr → Call
deriving DecidableEq, Repr, BEq
inductive Expr where
| m : M → Expr
| k : K → Expr
| s : S → Expr
| call : Call → Expr
deriving DecidableEq, Repr, BEq
end
namespace Call
def lhs : Call → Expr
| .mk lhs _ => lhs
def rhs : Call → Expr
| .mk _ rhs => rhs
end Call
DSL
We make use of a small DSL for legibility.
declare_syntax_cat skmexpr
syntax "K" ident : skmexpr
syntax "S" ident : skmexpr
syntax "M" ident : skmexpr
syntax "K" num : skmexpr
syntax "S" num : skmexpr
syntax "M" num : skmexpr
syntax "(" skmexpr skmexpr ")" : skmexpr
syntax ident : skmexpr
syntax "(" skmexpr ")" : skmexpr
syntax " ⟪ " skmexpr " ⟫ " : term
macro_rules
| `(⟪ K $u:ident ⟫) => `(Expr.k (.mk $u))
| `(⟪ S $u:ident ⟫) => `(Expr.s (.mk $u))
| `(⟪ M $u:ident ⟫) => `(Expr.m (.mk $u))
| `(⟪ K $u:num ⟫) => `(Expr.k (.mk $u))
| `(⟪ S $u:num ⟫) => `(Expr.s (.mk $u))
| `(⟪ M $u:num ⟫) => `(Expr.m (.mk $u))
| `(⟪ $e:ident ⟫) => `($e)
| `(⟪ ($e:skmexpr) ⟫) => `(⟪$e⟫)
| `(⟪ ($e₁:skmexpr $e₂:skmexpr) ⟫) => `(Expr.call (.mk ⟪ $e₁ ⟫ ⟪ $e₂ ⟫))
syntax "SKM[ " skmexpr " ] " : term
syntax "SKC[" skmexpr skmexpr "]" : term
macro_rules
| `(SKM[ $e:skmexpr ]) => `(⟪ $e ⟫)
macro_rules
| `(SKC[ $e₁:skmexpr $e₂:skmexpr ]) => `(Call.mk ⟪ $e₁ ⟫ ⟪ $e₂ ⟫)
namespace Expr
def max_universe (e : Expr) : ℕ :=
match e with
| SKM[(K n)] => n
| SKM[(S n)] => n
| SKM[(M n)] => n
| SKM[(lhs rhs)] =>
(max_universe lhs) + (max_universe rhs)
def min_universe (e : Expr) : ℕ :=
match e with
| SKM[(K n)] => n
| SKM[(S n)] => n
| SKM[(M n)] => n
| SKM[(lhs rhs)] =>
min (max_universe lhs) (max_universe rhs)
inductive valid_universes : Expr → Prop
| k : valid_universes SKM[(K n)]
| s : valid_universes SKM[(S n)]
| m : valid_universes SKM[(M n)]
| call : lhs.max_universe > rhs.max_universe
→ valid_universes lhs
→ valid_universes rhs
→ valid_universes SKM[(lhs rhs)]
end Expr
Judgment Rules
K, M, and S must be well-typed on their own. We need some expression that can represent the type of K. One option is the \(\rightarrow\) type constructor from the STLC or System F. The typings of \(S\) and \(K\) are fairly obvious. However, the typing of \(M\) is not. It's not immediately clear that we need typing judgment rules for K, M, or S alone. There is no way to explicitly assign a type to K, M, or S. They just are. However, there are emergent rules for application. The typical rules for application in the SK combinator calculus are as follows:
$$ K : \alpha \rightarrow \beta \rightarrow \alpha \\ S : (\alpha \rightarrow \beta \rightarrow \gamma) \rightarrow (\alpha \rightarrow \beta) \rightarrow \alpha \rightarrow \gamma \\ M : hmm $$
Typing rules quickly become complicated. We would prefer to avoid binders in our type discipline. We can create an "emergent" typing based on function application.
$$ \frac{ \Gamma \vdash e_{1} : T }{ \Gamma \vdash K\ e_{1}\ e_{2} : T } $$ $$ \frac{ \Gamma \vdash f_{1}\ \text{arg}\ (f_{2}\ \text{arg}) : T }{ \Gamma \vdash S\ f_{1}\ f_{2}\ \text{arg} : T } $$
But what is the type of \(K\)? It's not really obvious. Clearly, we need some way to represent the type of \(K\) that avoids binders. Recall that:
$$ Kxy = x $$
The type of \(K\) is typically \(\alpha \rightarrow \beta \rightarrow \alpha\). Its type is of the form of the \(K\) combinator. We can thus define a typing as such:
$$ K : K $$
This typing is coherent under an evaluation rule for the \(M\) combinator:
$$ M\ (x : t)\ \text{arg} = \text{arg}\ x\ (\text{arg}\ t) \\ M\ \mathbb{N}\ K = K\ \mathbb{N}\ (K\ \text{Type}) = \mathbb{N} \\ M\ K\ K = K\ K\ (K\ K) = K $$
mutual
inductive is_eval_once : Expr → Expr → Prop
| k x y n : is_eval_once SKM[((K n x) y)] x
| s x y z n : is_eval_once SKM[(((S n x) y) z)] SKM[((x z) (y z))]
| m e t n : valid_judgment e t
→ is_eval_once SKM[((M n) e)] t
| left : is_eval_once lhs lhs'
→ is_eval_once SKM[(lhs rhs)] SKM[(lhs' rhs)]
| right : is_eval_once rhs rhs'
→ is_eval_once SKM[(lhs rhs)] SKM[(lhs rhs')]
| k_rfl : is_eval_once SKM[K n] SKM[K n]
| m_rfl : is_eval_once SKM[M n] SKM[M n]
| s_rfl : is_eval_once SKM[S n] SKM[S n]
inductive is_eval_once_weak : Expr → Expr → Prop
| k x y n : is_eval_once_weak SKM[((K n x) y)] x
| s x y z n : is_eval_once_weak SKM[(((S n x) y) z)] SKM[((x z) (y z))]
| m e t n : valid_judgment_weak e t
→ is_eval_once_weak SKM[((M n) e)] t
| left : is_eval_once_weak lhs lhs'
→ is_eval_once_weak SKM[(lhs rhs)] SKM[(lhs' rhs)]
| right : is_eval_once_weak rhs rhs'
→ is_eval_once_weak SKM[(lhs rhs)] SKM[(lhs rhs')]
| k_rfl : is_eval_once_weak SKM[K n] SKM[K n]
| m_rfl : is_eval_once_weak SKM[M n] SKM[M n]
| s_rfl : is_eval_once_weak SKM[S n] SKM[S n]
inductive beta_eq : Expr → Expr → Prop
| rfl : beta_eq e e
| eval : is_eval_once e₁ e₂ → beta_eq e₁ e₂
| left : beta_eq lhs lhs' → beta_eq SKM[(lhs rhs)] SKM[(lhs' rhs)]
| right : beta_eq rhs rhs' → beta_eq SKM[(lhs rhs)] SKM[(lhs rhs')]
| symm : beta_eq e₂ e₁ → beta_eq e₁ e₂
| trans : beta_eq e₁ e₂ → beta_eq e₂ e₃ → beta_eq e₁ e₃
inductive valid_judgment : Expr → Expr → Prop
| k n : valid_judgment SKM[K n] (.k (.mk n.succ))
| s n : valid_judgment SKM[S n] (.s (.mk n.succ))
| m n : valid_judgment SKM[M n] (.m (.mk n.succ))
| call lhs rhs : lhs.max_universe > rhs.max_universe
→ valid_judgment lhs (.call (.mk (Expr.m (.mk lhs.max_universe.succ)) lhs))
→ valid_judgment rhs (.call (.mk (Expr.m (.mk rhs.max_universe.succ)) rhs))
→ valid_judgment SKM[(lhs rhs)]
(.call (.mk
(.call (.mk
(Expr.m (.mk lhs.max_universe.succ))
lhs
))
(.call (.mk
(Expr.m (.mk rhs.max_universe.succ))
rhs
))
))
| beta_eq e t₁ t₂ : valid_judgment e t₁ → beta_eq t₁ t₂ → valid_judgment e t₂
inductive valid_judgment_weak : Expr → Expr → Prop
| k n : valid_judgment_weak SKM[K n] (.k (.mk n.succ))
| s n : valid_judgment_weak SKM[S n] (.s (.mk n.succ))
| m n : valid_judgment_weak SKM[M n] (.m (.mk n.succ))
| call lhs rhs : lhs.max_universe > rhs.max_universe
→ valid_judgment_weak lhs (.call (.mk (Expr.m (.mk lhs.max_universe.succ)) lhs))
→ valid_judgment_weak rhs (.call (.mk (Expr.m (.mk rhs.max_universe.succ)) rhs))
→ valid_judgment_weak SKM[(lhs rhs)]
(.call (.mk
(.call (.mk
(Expr.m (.mk lhs.max_universe.succ))
lhs
))
(.call (.mk
(Expr.m (.mk rhs.max_universe.succ))
rhs
))
))
end
inductive is_normal_n : ℕ → Expr → Expr → Prop
| one : is_eval_once e e → is_normal_n 1 e e
| succ : is_eval_once e e' → is_normal_n n e' e_final → is_normal_n n.succ e e_final
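As a quick illustration of these relations (a hedged sketch, using only the constructors above): M applied to K n steps to the type that valid_judgment.k assigns to K n, and a self-stepping term is normal in one step.
-- `M` computes types through the judgment: `(M 1) (K n)` steps to `K n.succ`.
example (n : ℕ) : is_eval_once SKM[((M 1) (K n))] (.k (.mk n.succ)) :=
  is_eval_once.m _ _ _ (valid_judgment.k n)
-- A term that steps to itself is in normal form after one step.
example : is_normal_n 1 SKM[K 0] SKM[K 0] :=
  is_normal_n.one is_eval_once.k_rfl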
mutual
partial def type_of (e : Expr) : Option Expr :=
match e with
| SKM[(K n)] => pure $ (.k (.mk n.succ))
| SKM[(S n)] => pure $ (.s (.mk n.succ))
| SKM[(M n)] => pure $ (.m (.mk n.succ))
| SKM[(lhs rhs)] => do
if rhs.max_universe ≥ lhs.max_universe then
none
else
eval_unsafe (.call (.mk (← eval_unsafe (.call (.mk (.m (.mk lhs.max_universe.succ)) lhs))) (← eval_unsafe (.call (.mk (.m (.mk rhs.max_universe.succ)) rhs)))))
partial def eval_unsafe (e : Expr) : Option Expr :=
match e with
| SKM[(((K _n) x) _y)] => eval_unsafe x
| SKM[((((S _n) x) y) z)] => eval_unsafe SKM[((x z) (y z))]
| SKM[(M _n e)] => do eval_unsafe (← type_of e)
| SKM[(lhs rhs)] => do
let lhs' ← eval_unsafe lhs
if SKM[(lhs' rhs)] = SKM[(lhs rhs)] then
some SKM[(lhs rhs)]
else
eval_unsafe SKM[(lhs' rhs)]
| x => some x
partial def eval_once (e : Expr) : Option Expr :=
match e with
| SKM[(((K _n) x) _y)] => x
| SKM[((((S _n) x) y) z)] => SKM[((x z) (y z))]
| SKM[(M _n e)] => type_of e
| SKM[(lhs rhs)] => do
let lhs' ← eval_once lhs
some SKM[(lhs' rhs)]
| x => some x
end
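A quick smoke test of the executable semantics (the expected outputs are what the definitions above should produce; treat them as a sketch rather than verified output):
-- `K` applied to two arguments returns the first; `S 0` is already a normal form.
#eval eval_unsafe SKM[(((K 1) (S 0)) (K 0))]
-- expected: some (Expr.s (S.mk 0))
-- The type of `K 1` is `K 2`, one universe up.
#eval type_of SKM[(K 1)]
-- expected: some (Expr.k (K.mk 2))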
Strong Normalization
A strong proof of consistency involves proving that every well-typed expression terminates. I do so here, using the typical reducibility-candidates strategy.
inductive sn : Expr → Prop
| trivial : is_eval_once e e → sn e
| hard : is_eval_once e e' → sn e' → sn e
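For example, the base combinators are trivially sn, since they step to themselves under the *_rfl rules:
-- Bare combinators only step to themselves, so they satisfy `sn` immediately.
example : sn SKM[K 0] := sn.trivial is_eval_once.k_rfl
example : sn SKM[S 0] := sn.trivial is_eval_once.s_rfl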
Note that we define evaluation as a relation on expressions. This is due to eval_once's dependence on the type of e. This may appear problematic and confusing; however, we can still prove membership in R (the set of reducibility candidates) for all well-typed expressions.
lemma k_eval_sn : ∀ n x y, sn x → sn SKM[((K n x) y)] := by
intro n x y sn_x
apply sn.hard
apply is_eval_once.k
exact sn_x
lemma s_eval_sn : ∀ n x y z, sn SKM[((x z) (y z))] → sn SKM[(((S n x) y) z)] := by
intro n x y z sn_eval
apply sn.hard
apply is_eval_once.s
exact sn_eval
Type Preservation
To prove preservation, we prove that we can derive typings for (K (x : α) y : α) from our base typing rules using M. Note that our typing rules make extensive use of the M combinator for "reflection." We can use an S-transformation rule like so:
$$ (e_{1}\ e_{2} : M\ (e_{1}\ e_{2})) \implies (e_{1}\ e_{2} : (M\ e_{1})\ (M\ e_{2})) $$
This shortens our type preservation proof significantly by allowing us to collapse expressions of the form \(M\ (K\ \text{arg})\) into reducible \((M\ K)\ (M\ \text{arg})\) expressions.
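For instance, instantiating \(e_{1} := K\ \mathbb{N}\) and \(e_{2} := \text{Real}\):
$$ (K\ \mathbb{N}\ \text{Real} : M\ (K\ \mathbb{N}\ \text{Real})) \implies (K\ \mathbb{N}\ \text{Real} : (M\ (K\ \mathbb{N}))\ (M\ \text{Real})) $$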
lemma valid_judgment_imp_m : ∀ n, valid_judgment e t → valid_judgment e SKM[((M n) e)] := by
intro n h
apply valid_judgment.beta_eq
exact h
apply beta_eq.symm
apply beta_eq.eval
apply is_eval_once.m
exact h
lemma congr_m : valid_judgment a t
→ valid_judgment b t
→ beta_eq a b
→ beta_eq (Expr.call (.mk (.m (.mk a.max_universe.succ)) a)) (.call (.mk (.m (.mk b.max_universe.succ)) b)) := by
intro h_t_lhs h_t_rhs h
simp [Expr.max_universe]
cases h_t_lhs
cases h_t_rhs
cases h
apply beta_eq.rfl
apply beta_eq.rfl
apply beta_eq.rfl
apply beta_eq.rfl
cases h
apply beta_eq.rfl
case k.beta_eq.eval rhs h₂ h₃ h₄ h₅ =>
simp [Expr.max_universe] at *
apply beta_eq.trans
apply beta_eq.eval
apply is_eval_once.m
apply valid_judgment.k
simp
apply beta_eq.symm
apply beta_eq.trans
apply beta_eq.eval
apply is_eval_once.m
exact h₃
exact h₄
case k.beta_eq.symm rhs h₂ h₃ h₄ h₅ =>
simp [Expr.max_universe] at *
apply beta_eq.trans
apply beta_eq.eval
apply is_eval_once.m
apply valid_judgment.k
simp
apply beta_eq.symm
apply beta_eq.trans
apply beta_eq.eval
apply is_eval_once.m
exact h₃
exact h₄
case k.beta_eq.trans rhs h_rhs h₂ h₃ h₄ h₅ =>
apply beta_eq.trans
apply beta_eq.right
exact h₄
apply beta_eq.trans
apply beta_eq.right
exact h₅
simp [Expr.max_universe]
apply beta_eq.trans
apply beta_eq.eval
apply is_eval_once.m
exact h_rhs
apply beta_eq.symm
apply beta_eq.trans
apply beta_eq.eval
apply is_eval_once.m
exact h_rhs
apply beta_eq.rfl
case s =>
apply beta_eq.trans
apply beta_eq.right
exact h
simp [Expr.max_universe]
apply beta_eq.symm
apply beta_eq.trans
apply beta_eq.eval
apply is_eval_once.m
exact h_t_rhs
simp
apply beta_eq.symm
apply beta_eq.trans
apply beta_eq.eval
apply is_eval_once.m
exact h_t_rhs
simp
exact beta_eq.rfl
case m =>
apply beta_eq.trans
apply beta_eq.right
exact h
simp [Expr.max_universe]
apply beta_eq.symm
apply beta_eq.trans
apply beta_eq.eval
apply is_eval_once.m
exact h_t_rhs
simp
apply beta_eq.symm
apply beta_eq.trans
apply beta_eq.eval
apply is_eval_once.m
exact h_t_rhs
simp
exact beta_eq.rfl
case call lhs rhs h_u h_t_lhs h_t_rhs =>
simp [Expr.max_universe] at *
apply beta_eq.trans
apply beta_eq.right
exact h
apply beta_eq.symm
apply beta_eq.trans
apply beta_eq.eval
apply is_eval_once.m
exact h_t_rhs
apply beta_eq.symm
apply beta_eq.trans
apply beta_eq.eval
apply is_eval_once.m
exact h_t_rhs
apply beta_eq.rfl
case beta_eq t' h_t' h_eq =>
have h_t_lhs'' := valid_judgment.beta_eq _ _ _ h_t' h_eq
have h_t_rhs'' := valid_judgment.beta_eq _ _ _ h_t_rhs (beta_eq.symm h_eq)
apply beta_eq.trans
apply beta_eq.right
exact h
apply beta_eq.trans
apply beta_eq.eval
apply is_eval_once.m
exact h_t_rhs''
apply beta_eq.symm
apply beta_eq.eval
apply is_eval_once.m
exact h_t_rhs''
lemma eval_preserves_judgment : ∀ e e' t, valid_judgment e t → is_eval_once e e' → valid_judgment e' t' → valid_judgment e' t := by
intro c e' t h_t h_eval h_t'
cases h_eval
case k y n =>
apply valid_judgment.beta_eq
apply valid_judgment_imp_m (e'.max_universe.succ)
exact h_t'
apply beta_eq.trans
apply beta_eq.eval
apply is_eval_once.m
exact h_t'
apply beta_eq.symm
apply beta_eq.trans
apply beta_eq.symm
apply beta_eq.eval
apply is_eval_once.m
use 0
exact h_t
apply beta_eq.trans
apply beta_eq.eval
apply is_eval_once.right
apply is_eval_once.k
apply beta_eq.eval
apply is_eval_once.m
exact h_t'
case s x y z n =>
apply valid_judgment.beta_eq
apply valid_judgment_imp_m (SKM[((x z) (y z))].max_universe.succ)
exact h_t'
apply beta_eq.trans
apply beta_eq.eval
apply is_eval_once.m
exact h_t'
apply beta_eq.symm
apply beta_eq.trans
apply beta_eq.symm
apply beta_eq.eval
apply is_eval_once.m
use 0
exact h_t
apply beta_eq.trans
apply beta_eq.eval
apply is_eval_once.right
apply is_eval_once.s
apply beta_eq.eval
apply is_eval_once.m
exact h_t'
case m e'' n h =>
apply valid_judgment.beta_eq
apply valid_judgment_imp_m (e''.max_universe.succ)
exact h_t'
apply beta_eq.trans
apply beta_eq.eval
apply is_eval_once.m
exact h_t'
apply beta_eq.symm
apply beta_eq.trans
apply beta_eq.symm
apply beta_eq.eval
apply is_eval_once.m
use 0
exact h_t
apply beta_eq.trans
apply beta_eq.eval
apply is_eval_once.right
apply is_eval_once.m
exact h
apply beta_eq.eval
apply is_eval_once.m
exact h_t'
case left lhs lhs' rhs h_eq =>
apply valid_judgment.beta_eq
apply valid_judgment_imp_m
use 0
exact h_t'
apply beta_eq.trans
apply beta_eq.right
apply beta_eq.left
apply beta_eq.symm
apply beta_eq.eval h_eq
apply beta_eq.eval
apply is_eval_once.m
exact h_t
case right rhs rhs' lhs h_eq =>
apply valid_judgment.beta_eq
apply valid_judgment_imp_m
use 0
exact h_t'
apply beta_eq.trans
apply beta_eq.right
apply beta_eq.right
apply beta_eq.symm
apply beta_eq.eval h_eq
apply beta_eq.eval
apply is_eval_once.m
exact h_t
case k_rfl =>
exact h_t
case s_rfl =>
exact h_t
case m_rfl =>
exact h_t
lemma valid_judgment_call_imp_n_bounds : valid_judgment_weak SKM[(lhs rhs)] t → lhs.max_universe > rhs.max_universe := by
intro h
cases h
case call _ _ h_u =>
exact h_u
lemma valid_judgment_weak_imp_valid_judgment : valid_judgment_weak e t → valid_judgment e t := by
intro h
cases h
apply valid_judgment.k
apply valid_judgment.s
apply valid_judgment.m
apply valid_judgment.call
case call.a lhs rhs h_u _ _ =>
exact h_u
case call.a lhs rhs h_u h_t_lhs h_t_rhs =>
apply valid_judgment_weak_imp_valid_judgment
exact h_t_lhs
case call.a lhs rhs h_u h_t_lhs h_t_rhs =>
apply valid_judgment_weak_imp_valid_judgment
exact h_t_rhs
lemma valid_judgment_call_imp_judgment_lhs_rhs_easy : valid_judgment_weak SKM[(lhs rhs)] t → (∃ t_lhs, valid_judgment_weak lhs t_lhs) ∧ (∃ t_rhs, valid_judgment_weak rhs t_rhs) := by
intro h_t
cases h_t
case call h_t_lhs h_t_rhs h_u =>
constructor
use (.call (.mk (.m (.mk lhs.max_universe.succ)) lhs))
use (.call (.mk (.m (.mk rhs.max_universe.succ)) rhs))
lemma valid_judgment_call_imp_judgment_lhs_rhs : valid_judgment SKM[(lhs rhs)] t → (∃ t_lhs, valid_judgment lhs t_lhs) ∧ (∃ t_rhs, valid_judgment rhs t_rhs) := by
intro h_t
cases h_t
case call h_t_lhs h_t_rhs h_u =>
constructor
use (.call (.mk (.m (.mk lhs.max_universe.succ)) lhs))
use (.call (.mk (.m (.mk rhs.max_universe.succ)) rhs))
case beta_eq t₂ h_t₂ h_t_beq =>
have h_t₃ := valid_judgment.beta_eq _ _ _ h_t₂ h_t_beq
constructor
cases h_t₂
case left.call =>
use (.call (.mk (.m (.mk lhs.max_universe.succ)) lhs))
case left.beta_eq =>
sorry
lemma valid_judgment_imp_valid_universes : valid_judgment_weak e t → e.valid_universes := by
intro h_t
cases h_t
apply Expr.valid_universes.k
apply Expr.valid_universes.s
apply Expr.valid_universes.m
case call lhs rhs h_u h_t_lhs h_t_rhs =>
apply Expr.valid_universes.call
exact h_u
apply valid_judgment_imp_valid_universes at h_t_lhs
exact h_t_lhs
apply valid_judgment_imp_valid_universes at h_t_rhs
exact h_t_rhs
lemma weakening : valid_judgment_weak e t → valid_judgment e t := by
intro h
cases h
case k =>
apply valid_judgment.k
case s =>
apply valid_judgment.s
case m =>
apply valid_judgment.m
case call lhs rhs h_u h_t_lhs h_t_rhs =>
apply valid_judgment.call
exact h_u
apply weakening at h_t_lhs
exact h_t_lhs
apply weakening at h_t_rhs
exact h_t_rhs
lemma eval_preserves_judgment_hard : ∀ e e' t, valid_judgment_weak e t → is_eval_once e e' → valid_judgment_weak e' t := by
intro e e' t h_t h_eval
cases h_t
case k n =>
cases h_eval
apply valid_judgment_weak.k
case s =>
cases h_eval
apply valid_judgment_weak.s
case m =>
cases h_eval
apply valid_judgment_weak.m
case call lhs rhs h_u h_t_lhs h_t_rhs =>
match h₀ : h_eval with
| .k lhs' rhs' n =>
cases h_t_lhs
| .s x' y' z' n =>
cases h_t_lhs
| .left h_eval' =>
cases h_t_lhs
| .right h_eval' =>
cases h_t_lhs
| .m e t n h_t =>
cases h_t_lhs
lemma eval_preserves_judgment_hard' : ∀ e e' t, valid_judgment e t → is_eval_once e e' → valid_judgment e' t := by
intro e e' t h_t h_eval
cases h_t
case k n =>
cases h_eval
apply valid_judgment.k
case s =>
cases h_eval
apply valid_judgment.s
case m =>
cases h_eval
apply valid_judgment.m
case call lhs rhs h_u h_t_lhs h_t_rhs =>
match h_e'_eq : e' with
| .m (.mk n) =>
apply valid_judgment.beta_eq
apply valid_judgment.m
apply beta_eq.symm
have h_eval_beq := beta_eq.eval h_eval
apply beta_eq.trans
apply beta_eq.left
apply beta_eq.eval
apply is_eval_once.m
exact h_t_lhs
apply beta_eq.symm
apply beta_eq.trans
apply beta_eq.symm
apply beta_eq.eval
apply is_eval_once.m
use n
apply valid_judgment.m
apply beta_eq.trans
apply beta_eq.right
exact (beta_eq.symm h_eval_beq)
apply beta_eq.trans
apply beta_eq.eval
apply is_eval_once.m
apply valid_judgment.call
exact h_u
exact h_t_lhs
exact h_t_rhs
apply beta_eq.rfl
| .k (.mk n) =>
apply valid_judgment.beta_eq
apply valid_judgment.k
apply beta_eq.symm
have h_eval_beq := beta_eq.eval h_eval
apply beta_eq.trans
apply beta_eq.left
apply beta_eq.eval
apply is_eval_once.m
exact h_t_lhs
apply beta_eq.symm
apply beta_eq.trans
apply beta_eq.symm
apply beta_eq.eval
apply is_eval_once.m
use n
apply valid_judgment.k
apply beta_eq.trans
apply beta_eq.right
exact (beta_eq.symm h_eval_beq)
apply beta_eq.trans
apply beta_eq.eval
apply is_eval_once.m
apply valid_judgment.call
exact h_u
exact h_t_lhs
exact h_t_rhs
apply beta_eq.rfl
| .s (.mk n) =>
apply valid_judgment.beta_eq
apply valid_judgment.s
apply beta_eq.symm
have h_eval_beq := beta_eq.eval h_eval
apply beta_eq.trans
apply beta_eq.left
apply beta_eq.eval
apply is_eval_once.m
exact h_t_lhs
apply beta_eq.symm
apply beta_eq.trans
apply beta_eq.symm
apply beta_eq.eval
apply is_eval_once.m
use n
apply valid_judgment.s
apply beta_eq.trans
apply beta_eq.right
exact (beta_eq.symm h_eval_beq)
apply beta_eq.trans
apply beta_eq.eval
apply is_eval_once.m
apply valid_judgment.call
exact h_u
exact h_t_lhs
exact h_t_rhs
apply beta_eq.rfl
| .call (.mk lhs' rhs') =>
sorry
lemma all_sn_eval_once : ∀ e, sn e → ∃ e', is_eval_once e e' := by
intro e h
cases h
case trivial h =>
use e
case hard e' h_sn h_step =>
use e'
lemma is_eval_once_rfl : is_eval_once e e → e = e := by
intro h
cases h
case m =>
rfl
case left =>
rfl
case right =>
rfl
case k_rfl =>
rfl
case s_rfl =>
rfl
case m_rfl =>
rfl
theorem all_well_typed_weak_sn (e : Expr) (t : Expr) : valid_judgment_weak e t → sn e := by
intro h_t
match e with
| SKM[((K n x) y)] =>
apply sn.hard
apply is_eval_once.k
apply all_well_typed_weak_sn
apply eval_preserves_judgment_hard
exact h_t
apply is_eval_once.k
| SKM[(((S n x) y) z)] =>
apply sn.hard
apply is_eval_once.s
cases h_t
case a.call h_t_lhs h_t_z h_u =>
cases h_t_lhs
| SKM[(M n e')] =>
cases h_t
case call h_t_m h_t_e' h_t_u =>
apply sn.hard
apply is_eval_once.m
apply weakening at h_t_e'
exact h_t_e'
simp_all
cases h_t_m
| SKM[(lhs rhs)] =>
cases h_t
case call h_t_lhs h_t_rhs h_u =>
cases h_t_lhs
| .k (.mk n) =>
apply sn.trivial
exact is_eval_once.k_rfl
| .s (.mk n) =>
apply sn.trivial
exact is_eval_once.s_rfl
| .m (.mk n) =>
apply sn.trivial
exact is_eval_once.m_rfl
lemma sn_reverse_execution : sn e' → is_eval_once e e' → sn e := by
intro h_sn_eval h
apply sn.hard
exact h
exact h_sn_eval
lemma sn_imp_n_steps_eval_normal (e : Expr) : sn e → ∃ n e', is_normal_n n e e' := by
intro h_sn
induction h_sn
case trivial e h =>
use 1
use e
apply is_normal_n.one
exact h
case hard e'' e''' h_eval' h_sn h_is_step =>
have ⟨n_sub_1, e'''', h_eval⟩ := h_is_step
use n_sub_1.succ
use e''''
apply is_normal_n.succ
cases h_eval
exact h_eval'
exact h_eval'
exact h_eval
def is_candidate_for_weak (e : Expr) (t : Expr) : Prop := valid_judgment_weak e t ∧ e.valid_universes
def is_candidate_for (e : Expr) (t : Expr) : Prop := valid_judgment e t ∧ e.valid_universes
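For instance (a small sketch relying only on the base rules above), K n is a candidate for the type assigned to it by valid_judgment_weak.k:
-- `K n` is weakly well-typed at `K n.succ` and its universes are valid,
-- so it is a reducibility candidate for that type.
example (n : ℕ) : is_candidate_for_weak SKM[K n] (.k (.mk n.succ)) :=
  ⟨valid_judgment_weak.k n, Expr.valid_universes.k⟩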
lemma k_eval_def_eq : is_eval_once SKM[(K n)] e → e = SKM[(K n)] := by
intro h
cases h
rfl
lemma s_eval_def_eq : is_eval_once SKM[(S n)] e → e = SKM[(S n)] := by
intro h
cases h
rfl
lemma m_eval_def_eq : is_eval_once SKM[(M n)] e → e = SKM[(M n)] := by
intro h
cases h
rfl
lemma membership_candidate_preserved : valid_judgment_weak e t → is_candidate_for_weak e t → is_eval_once e e' → is_candidate_for_weak e' t := by
intro h_t h_candidate h_eval
have h_candidate₀ := h_candidate
unfold is_candidate_for_weak at h_candidate
have ⟨h_t_e, h_candidate_e⟩ := h_candidate
cases h_candidate_e
case k n =>
apply k_eval_def_eq at h_eval
rw [h_eval]
cases h_t_e
case k =>
exact ⟨valid_judgment_weak.k n, by apply Expr.valid_universes.k⟩
case s n =>
apply s_eval_def_eq at h_eval
rw [h_eval]
cases h_t_e
case s =>
exact ⟨valid_judgment_weak.s n, by apply Expr.valid_universes.s⟩
case m n =>
apply m_eval_def_eq at h_eval
rw [h_eval]
cases h_t_e
case m =>
exact ⟨valid_judgment_weak.m n, by apply Expr.valid_universes.m⟩
case call lhs rhs h₁ h_u_lhs h_u_rhs =>
have h_t_e' := eval_preserves_judgment_hard SKM[(lhs rhs)] e' t h_t_e h_eval
unfold is_candidate_for_weak
constructor
exact h_t_e'
cases e'
case right.m m =>
match m with
| .mk n =>
apply Expr.valid_universes.m
case right.s s =>
match s with
| .mk n =>
apply Expr.valid_universes.s
case right.k k =>
match k with
| .mk n =>
apply Expr.valid_universes.k
case right.call c =>
match c with
| .mk lhs' rhs' =>
have ⟨⟨t_lhs, h_t_lhs'⟩, ⟨t_rhs, h_t_rhs'⟩⟩ := valid_judgment_call_imp_judgment_lhs_rhs_easy h_t_e'
apply Expr.valid_universes.call
apply valid_judgment_call_imp_n_bounds at h_t_e'
exact h_t_e'
simp_all
apply valid_judgment_imp_valid_universes
exact h_t_lhs'
apply valid_judgment_imp_valid_universes
exact h_t_rhs'
theorem all_well_typed_candidate : valid_judgment_weak e t → is_candidate_for_weak e t := by
intro h_t
cases h_t
case k =>
unfold is_candidate_for_weak
constructor
apply valid_judgment_weak.k
apply Expr.valid_universes.k
case s =>
unfold is_candidate_for_weak
constructor
apply valid_judgment_weak.s
apply Expr.valid_universes.s
case m =>
unfold is_candidate_for_weak
constructor
apply valid_judgment_weak.m
apply Expr.valid_universes.m
case call lhs rhs h_u h_t_lhs h_t_rhs =>
unfold is_candidate_for_weak
constructor
apply valid_judgment_weak.call
exact h_u
exact h_t_lhs
exact h_t_rhs
apply Expr.valid_universes.call
exact h_u
apply valid_judgment_imp_valid_universes
exact h_t_lhs
apply valid_judgment_imp_valid_universes
exact h_t_rhs
theorem sum_universes_decrease_normal_form : valid_judgment_weak SKM[(lhs rhs)] t → is_normal_n n SKM[(lhs rhs)] e' → SKM[(lhs rhs)].max_universe > e'.max_universe := by
intro h_t h_normal
cases h_normal
case one =>
contradiction
case succ e_next n h_step h_normal =>
simp_all
if h_n_eq : n = 1 then
simp_all
cases h_normal
contradiction
contradiction
else if h_n_eq : n > 2 then
have h_t_e'' := eval_preserves_judgment_hard SKM[(lhs rhs)] e_next t h_t h_step
have ⟨⟨t_lhs, h_t_lhs⟩, ⟨t_rhs, h_t_rhs⟩⟩ := valid_judgment_call_imp_judgment_lhs_rhs_easy h_t
simp_all
cases e'
case m m =>
simp [Expr.max_universe] at *
cases h_t_e''
cases h_t
cases h_t
cases h_t
case call h_n lhs' rhs' h_u h_t_lhs h_t_rhs =>
cases h_t_lhs
case k =>
simp [Expr.max_universe] at *
cases h_t_e''
cases h_t
cases h_t
cases h_t
case call h_n lhs' rhs' h_u h_t_lhs h_t_rhs =>
cases h_t_lhs
case s =>
simp [Expr.max_universe] at *
cases h_t_e''
cases h_t
cases h_t
cases h_t
case call h_n lhs' rhs' h_u h_t_lhs h_t_rhs =>
cases h_t_lhs
case call =>
simp [Expr.max_universe] at *
cases h_t_e''
cases h_t
cases h_t
cases h_t
case call h_n lhs' rhs' h_u h_t_lhs h_t_rhs =>
cases h_t_lhs
else
simp_all
cases h_step
simp [Expr.max_universe]
contradiction
simp [Expr.max_universe]
contradiction
simp [Expr.max_universe]
contradiction
simp [Expr.max_universe]
contradiction
simp [Expr.max_universe]
contradiction
theorem sum_universes_decrease_normal_form_hard : valid_judgment_weak e t → is_normal_n n e e' → (e.max_universe > e'.max_universe ∨ is_normal_n 1 e e') := by
intro h_t h_normal
induction h_normal
case one e'' h_eval =>
right
apply is_normal_n.one
exact h_eval
case succ e_next e_next_next e_final h_n h_eval_step h_norm h =>
have h_t' := eval_preserves_judgment_hard _ _ _ h_t h_eval_step
simp [h_t'] at h
cases h
case inl h =>
left
cases h_t
cases h_t'
simp_all
cases h_t'
simp_all
cases h_t'
simp_all
case call lhs rhs h_u h_t_lhs h_t_rhs =>
simp [Expr.max_universe] at *
apply lt_trans
exact h
apply sum_universes_decrease_normal_form
apply valid_judgment_weak.call
exact h_u
exact h_t_lhs
exact h_t_rhs
cases h_norm
case hbc.a.one h_eval_next_next =>
apply is_normal_n.succ
exact h_eval_step
apply is_normal_n.one
exact h_eval_next_next
case hbc.a.succ e' n h_eval h_eval' =>
apply is_normal_n.succ
exact h_eval_step
apply is_normal_n.one
contradiction
case inr h =>
have h' := is_normal_n.succ h_eval_step h_norm
cases h_t
cases h_t'
right
exact h
cases h_t'
right
exact h
cases h'
cases h_t'
right
exact h
case m.succ e'' h_eval h_norm =>
cases h_eval_step
simp_all
case call lhs rhs h_u h_t_lhs h_t_rhs =>
simp_all
cases h_t_lhs
theorem all_candidates_sn (e : Expr) : is_candidate_for_weak e t → sn e := by
intro h
unfold is_candidate_for_weak at h
have ⟨h_t, h_u⟩ := h
match h_e_eq : e with
| .k _ =>
apply sn.trivial
cases h_t
apply is_eval_once.k_rfl
| .m _ =>
apply sn.trivial
cases h_t
apply is_eval_once.m_rfl
| .s _ =>
apply sn.trivial
cases h_t
apply is_eval_once.s_rfl
| SKM[(((K n) x) y)] =>
cases h_t
case call h_t_lhs h_t_rhs h_u =>
have x_sn : sn x := by
simp_all
apply valid_judgment_call_imp_judgment_lhs_rhs_easy at h_t_lhs
have ⟨_, ⟨t_x, h_t_x⟩⟩ := h_t_lhs
apply @all_candidates_sn t_x x
unfold is_candidate_for_weak
constructor
exact h_t_x
apply valid_judgment_imp_valid_universes at h_t_x
exact h_t_x
apply sn.hard
apply is_eval_once.k
exact x_sn
| SKM[(lhs rhs)] =>
-- We know that lhs and rhs are both typed.
-- We know as a consequence that both are candidates
-- and by extension, both are sn
have ⟨⟨t_lhs, h_t_lhs⟩, ⟨t_rhs, h_t_rhs⟩⟩ := valid_judgment_call_imp_judgment_lhs_rhs_easy h_t
have candidate_lhs := all_well_typed_candidate h_t_lhs
have candidate_rhs := all_well_typed_candidate h_t_rhs
have sn_lhs := all_candidates_sn _ candidate_lhs
have sn_rhs := all_candidates_sn _ candidate_rhs
-- However, evaluation could potentially produce a new term which is not SN.
-- We know that since the lhs is sn, it has a normal form
-- which is smaller in size than its predecessor
-- or it is rfl
-- We also know that the rhs has a normal form which is smaller.
-- We can use the is_eval_once rules to establish that the left-hand-side
-- evaluates to the next-step in finding a normal form
-- This isn't necessarily smaller in size.
-- This means we will have to use the beta_eq sn rule
-- to "jump" to the normal form
have ⟨n_eval_lhs, lhs', h_eval_lhs⟩ := sn_imp_n_steps_eval_normal lhs sn_lhs
have ⟨n_eval_rhs, rhs', h_eval_rhs⟩ := sn_imp_n_steps_eval_normal rhs sn_rhs
have h_terminal_lhs := sum_universes_decrease_normal_form_hard h_t_lhs h_eval_lhs
have h_terminal_rhs := sum_universes_decrease_normal_form_hard h_t_rhs h_eval_rhs
cases h_terminal_lhs
case inl h =>
cases h_terminal_rhs
case inl h' =>
cases h_eval_lhs
case one h_eval_lhs' =>
contradiction
case succ h_eval_lhs' =>
contradiction
case inr h' =>
contradiction
case inr h =>
contradiction
theorem all_well_typed_sn_weak : ∀ e t, valid_judgment_weak e t → sn e := by
intro e t h_t
have h_candidate := all_well_typed_candidate h_t
have h_sn := all_candidates_sn _ h_candidate
exact h_sn
theorem all_types_decideable : ∀ e t, valid_judgment e t → ∃ e, valid_judgment_weak e t := by
intro e t h_t
cases h_t
case k n =>
use SKM[K n]
apply valid_judgment_weak.k
case s n =>
use SKM[S n]
apply valid_judgment_weak.s
case m n =>
use SKM[M n]
apply valid_judgment_weak.m
case call lhs rhs h_u h_t_lhs h_t_rhs =>
apply all_types_decideable at h_t_lhs
apply all_types_decideable at h_t_rhs
have ⟨lhs', h_t_lhs'⟩ := h_t_lhs
have ⟨rhs', h_t_rhs'⟩ := h_t_rhs
use SKM[(lhs' rhs')]
cases h_t_lhs'
case beta_eq t' h_t' h_eq =>
cases e
case m =>
sorry
case k =>
sorry
case s =>
sorry
case call c =>
match c with
| .mk lhs rhs =>
sorry
theorem all_well_typed_sn : ∀ e t, valid_judgment e t → sn e := by
intro e t h_t
cases h_t
case k =>
apply sn.trivial
apply is_eval_once.k_rfl
case s =>
apply sn.trivial
apply is_eval_once.s_rfl
case m =>
apply sn.trivial
apply is_eval_once.m_rfl
case call lhs rhs h_u h_t_lhs h_t_rhs =>
apply all_well_typed_sn_weak
sorry
Ramblings
M_id (e : t) = t := M
We want:
(K (Nat : Typea) Real : Typea)
(K Nat Real : K Typea Typeb)
(K Nat Real : M K)
(K Nat Real : M
we can quickly see that we have a more specific version of M that becomes useful:
M_1 (e : t) arg = arg t
We can actually derive this from the existing combinators. I will do this later.
(K Nat Real : K (M_1 Nat) (M_1 Real))
I define SR = S (S (K K) (S K S)) (K (S K S K))
SR M_1 Nat Real = (M_1 Nat) (M_1 Real)
K (SR M_1 Nat Real) = (M_1 Nat) (M_1 Real)
So, we define the typing judgment:
(x : t) (y : t) : x (M_1 x)
Eh, this doesn't generalize well. Also, let's take an example: the identity function.
((S : S) (K : K) (S : S)) : S
#check 1 = 1
Made by Dowland Aiello.