Top 3 applications I use the most

Here’s a list of the applications I spend most of my time in daily (both work and non-work related):

  1. Terminal – This should not come as a surprise. vim is my main tool for editing code, and I spend a lot of time in it. I also spend a lot of time using grep to search for things, or running PHP/Coq/etc. from the command line to quickly test some code. git is also there.
  2. Chrome – After I write code, I need to test it. The browser is mostly for testing, but also for social media, this blog, and other interesting reading.
  3. Notes – I don’t think about this application much, but the fact is I spend a lot of time in it (both mobile and desktop). You can write just about anything in Notes, but mine mostly falls into these categories:
    1. TODOs
    2. Schedules, appointments
    3. Temporary information (what work I’ve done for the week, random thoughts, ideas (e.g. this post))
    4. Personal information (card IDs, PIN codes, bike lock codes)
    5. Short math proofs

That’s my list. What applications do you use on a daily basis? Feel free to share some in the comments section.

Associativity of the Elvis operator

I had written a piece of code of the form a ?: (b ?: c) and was wondering whether I could omit the brackets. In other words, we need to figure out whether this operator is associative.

That is, we need to see if we can show that (a ?: b) ?: c = a ?: (b ?: c).

Its definition is:
a ?: b = a (if a is not empty)
a ?: b = b (otherwise)

So that means we can consider these cases:

  1. Suppose a is empty.
    Now we are left with b ?: c = b ?: c, which is true by reflexivity.
  2. Suppose a is not empty.
    Then (a ?: b) evaluates to a, so the LHS becomes a ?: c, which again evaluates to a (since a is not empty). By the same reasoning, the RHS a ?: (b ?: c) also evaluates to a. Both sides reduce to a, so the equality holds.

Bonus: here’s the same proof in Coq:

(* Data type with a value case and an empty case *)
Inductive value {a : Type} : Type :=
  | Value : a -> value
  | Empty : value.

(* Definition of Elvis operator *)
Definition elvis {x : Type} (x1 : @value x) (x2 : @value x) : (@value x) :=
  match x1 with
  | Empty => x2
  | Value _ => x1
  end.

(* Fancy notation for Elvis operator *)
Notation "x ?: y" := (elvis x y) (at level 50).

Theorem elvis_assoc : forall (X : Type) (x1 : @value X) (x2 : @value X) (x3 : @value X),
  x1 ?: (x2 ?: x3) = (x1 ?: x2) ?: x3.
Proof.
  intros X x1 x2 x3.
  case x1.
  - (* x1 is not empty *) simpl. reflexivity.
  - (* x1 is empty     *) simpl. reflexivity.
Qed.
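
Before concluding, here’s a quick sanity check of the definitions above (a minimal sketch; the concrete numbers are just for illustration):

Compute (Value 1 ?: Value 2).           (* = Value 1 *)
Compute (Empty ?: Value 2).             (* = Value 2 *)
Compute ((Empty ?: Empty) ?: Value 3).  (* = Value 3 *)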

Now that both considered cases hold, we’ve shown that ?: is associative and you can skip the brackets in your code 🙂

Calculus of Constructions

As an additional extension to the hierarchy of logical systems we’ve discussed, we’ll consider type theory. In type theory, every “term” has a “type” and operations are restricted to terms of a certain type.

Type theory was created to avoid paradoxes in a variety of formal logics and rewrite systems. For example, Russell’s paradox cannot even be stated, since the type system rules out the self-reference it relies on: a term cannot be applied to itself.
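
As a quick illustration in Coq (a minimal sketch; Fail is a Coq command that succeeds exactly when the command after it is rejected):

(* Self-application is ill-typed, so the type checker rejects it. *)
Fail Check (fun x => x x).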

The most powerful concept in Coq is not specific to Coq; it’s the calculus of constructions. Coq is just a lot of awesome automation machinery on top of it.

The Calculus of Constructions is a type theory based on intuitionistic logic and the simply typed lambda calculus. Together with dependent types, inductive types, and recursion, it provides a neat way to do proofs.

To demonstrate why it is powerful, let’s consider one of its inference rules as an example:
\frac{\Gamma,\, x : A \vdash B : K \qquad\qquad \Gamma,\, x : A \vdash N : B}{\Gamma \vdash (\lambda x : A .\, N) : (\forall x : A .\, B) : K}

This rule says that if the judgments above the line hold, then so does the judgment below the line.

In this case, K is either a Prop or a Type. For simplicity, assume K is Type. Translated, the rule means that:
1. If, in a context where x has type A, B is a type (B : K),
and
2. if, in the same context, the term N has type B,
then we can deduce that
(\lambda x : A . N) has type (\forall (x : A) . B), and the term (\forall (x : A) . B) itself has type Type.

In other words, this is the rule that ties lambda abstraction to universal quantification. It also says that a universal quantification is itself a type.

This rule is the reason why this flows nicely:

(* Make sure you have enabled "Display all basic low-level contents" *)
Definition test (s : Set) : bool := true.

Check test.                 (* test : forall _ : Set, bool *)

Check forall _ : Set, bool. (* forall _ : Set, bool : Type *)

Other data types can be encoded in this bare calculus, much like Church numerals encode the natural numbers in the lambda calculus (a sketch follows).
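
Here’s a minimal sketch of Church numerals in Coq (the names are illustrative): a numeral n is a function that applies f to a starting value x exactly n times.

(* The type of Church numerals. *)
Definition church : Type := forall (X : Type), (X -> X) -> X -> X.

Definition czero : church := fun X f x => x.
Definition cone  : church := fun X f x => f x.

(* Successor: apply f one extra time. *)
Definition csucc (n : church) : church := fun X f x => f (n X f x).

(* Addition: apply f n times on top of m applications. *)
Definition cplus (n m : church) : church := fun X f x => n X f (m X f x).

Compute (cplus cone cone) nat S 0. (* = 2 : nat *)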

The interesting thing about this system is that it has the strong normalization property, which means that every sequence of rewrites eventually terminates.

While this rules out general-purpose IO, it is very useful for proofs: checking them always terminates.
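
To see the termination requirement in action, here’s a minimal sketch (again using Fail, which succeeds only when the command after it is rejected):

(* Rejected: the recursive call is not on a structurally smaller
   argument, so this definition could loop forever. *)
Fail Fixpoint loop (n : nat) : nat := loop n.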

In any case, we can still take advantage of IO by extracting our code to other languages, as seen in Coq to Haskell below.

Coq to Haskell

Lately I’ve spent some time learning Coq. It is a very useful programming language, but guess what? It has no IO!

So, once we prove our theorems, what do we do? We extract our code!

Let’s pick a language that has IO, for example Haskell.

For fun, let’s try to prove the following and extract the definitions to Haskell:
The sum of a list containing natural numbers is always greater than or equal to 0.

But, in order to prove this, we must define everything on a core level, that is:
1. We need to define what a list is
2. We need to define what a natural number is

We could, but we won’t, since both are already defined in Coq’s standard library. So we’ll just go ahead and use “list” and “nat” in our code.

Here’s the definition of our recursive function that will calculate the sum of all the elements in a list of natural numbers:

(* List provides list and its notations; Arith provides the ordering lemmas used below. *)
Require Import List Arith.
Import ListNotations.

Fixpoint my_sum (l : list nat) : nat :=
  match l with
  | nil     => 0
  | x :: l' => x + my_sum l'
  end.

It’s a pretty straightforward definition. Let’s write some examples:

Example my_sum1 : my_sum [1 ; 2 ; 3] = 6.
Proof.
  simpl. reflexivity.
Qed.

Example my_sum2 : my_sum [] = 0.
Proof.
  simpl. reflexivity.
Qed.

Example my_sum3 : my_sum [1] = 1.
Proof.
  simpl. reflexivity.
Qed.

And now, for the proof of our theorem:

(* Sum of a list of naturals will always produce a result greater than or equal to 0 *)
Theorem my_sum_nat_gt_0 : forall l : list nat, 0 <= my_sum l.
Proof.
  intros l.
  induction l.
  - (* base *) simpl. reflexivity.
  - (* i.h. *) simpl. case a.
    + (* zero *) simpl. exact IHl.
    + (* S n  *) intros n.
      (* 0 <= my_sum l by the IH, and my_sum l <= S n + my_sum l, so chain them. *)
      apply Nat.le_trans with (m := my_sum l).
      * exact IHl.
      * exact (Plus.le_plus_r (S n) (my_sum l)).
Qed.

Cool! Now for the final part, which is the extraction:

Require Extraction.
Extraction Language Haskell.
Recursive Extraction my_sum.

And finally, our Haskell code generated by the commands above:

data Nat =
   O
 | S Nat

data List a =
   Nil
 | Cons a (List a)

add :: Nat -> Nat -> Nat
add n m =
  case n of {
   O -> m;
   S p -> S (add p m)}

my_sum :: (List Nat) -> Nat
my_sum l =
  case l of {
   Nil -> O;
   Cons x l' -> add x (my_sum l')}
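
As a side note, if you’d rather have the extracted code reuse Haskell’s native types instead of re-defining Nat and List, Coq ships extraction hints for that. A sketch, assuming the standard-library modules below are available in your Coq version:

(* Map bool/list/option/... onto native Haskell types, and nat onto Int.
   Extracting nat to Int is convenient but unsafe for very large numbers. *)
Require Import ExtrHaskellBasic ExtrHaskellNatInt.
Recursive Extraction my_sum.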

We can now use my_sum, a function for which we know our theorem holds, in a programming language that supports IO.

Pretty cool stuff, right? 🙂

Bonus: You can’t do this kind of proof in Haskell itself, since it has no support for dependent types (we’ll cover that in a future post).

Curry–Howard correspondence

For most of my schooling (primary school, high school), I had a private math teacher thanks to my parents. I believe this has shaped my career path for the better.

I’ve always wanted to be a programmer, and I used to be interested in nothing else. For the past few years I’ve been lucky to be doing exactly what I’ve always wanted: programming.

Back in primary school, one of my math teachers kept reminding me: “Study math, it is very closely related to programming”. Now I think I really understand what that statement means.

In any case, I’ve recently started digging into the Lean Theorem Prover by Microsoft Research. Having some Haskell experience, and experience with mathematical proofs, I find the tutorial mostly easy to follow.

I don’t have much experience with type theory, but I do know a few things about types from playing with Haskell. I’ve heard about the Curry–Howard correspondence a bunch of times, and it kind of made sense, but I had never really understood it in depth. By following the Lean tutorial, I finally got properly introduced to it.

An excerpt from Wikipedia:

In programming language theory and proof theory, the Curry–Howard correspondence (also known as the Curry–Howard isomorphism or equivalence, or the proofs-as-programs and propositions- or formulae-as-types interpretation) is the direct relationship between computer programs and mathematical proofs.

In simpler words, a proof is a program, and the formula it proves is the type for the program.

Now, as an example, consider the neat function that swaps the two components of a product type:

swap :: (a, b) -> (b, a)
swap (a, b) = (b, a)

What the Curry–Howard correspondence says is that this function has an equivalent mathematical proof.

Although it may not be immediately obvious, think about the following proof:
Given P and Q, prove that Q and P.
To prove it, you use and-introduction and and-elimination.

How does this proof relate to the swap code above? To answer that, we can now consider these theorems within Lean:

variables p q : Prop
theorem and_comm : p ∧ q → q ∧ p := fun hpq, and.intro (and.elim_right hpq) (and.elim_left hpq)

variables a b : Type
theorem swap (hab : prod a b) : (prod b a) := prod.mk hab.2 hab.1

Lean is so awesome it has this #check command that can tell us the complete types:

#check and_comm -- and_comm : ∀ (p q : Prop), p ∧ q → q ∧ p
#check swap     -- swap     : Π (a b : Type), a × b → b × a

Now the shapes are apparent.

We now see the following:

  • and.intro is equivalent to prod.mk (making a product)
  • and.elim_left is equivalent to the first element of the product type
  • and.elim_right is equivalent to the second element of the product type
  • forall is equivalent to the dependent pi-type
    A dependent type is a type whose definition depends on a parameter. For example, consider the polymorphic type List a: it depends on the parameter a, so List Int and List Bool are both well-defined types.

    More formally, if we’re given A : Type and B : A -> Type, then B is a family of types indexed by A.
    That is, B gives a type B a for each a : A.
    A function whose result type varies this way has the dependent function type, denoted Pi a : A, B a (see the sketch below).
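
To make that concrete, here’s a minimal sketch in Coq, tying back to the earlier posts (the names B and f are illustrative): a type family over bool, and a function whose result type depends on its argument.

(* B is a type family indexed by bool: it maps each value to a type. *)
Definition B (b : bool) : Type :=
  if b then nat else bool.

(* f is a dependent function: its result type B b depends on the argument b. *)
Definition f : forall b : bool, B b :=
  fun b => match b as b' return B b' with
           | true  => 0
           | false => false
           end.

Check f. (* f : forall b : bool, B b *)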

To conclude, it’s interesting that the commutativity of logical AND is isomorphic to the swap function on product types, right? 🙂