Structural induction

Structural induction is a generalization of mathematical induction.

To see how, consider the definition of naturals (which mathematical induction is based on):

Inductive nat : Set :=
  | O : nat
  | S : nat -> nat.

So, generally what we do is prove the base case P(O), and then prove the inductive step P(S n) by assuming P(n).

Structural induction applies more generally, to any inductively defined structure, say lists.

Inductive natlist : Set :=
  | nil  : natlist
  | cons : nat -> natlist -> natlist.

In this case we prove the base case P(nil), and then prove the inductive step: assuming P(l), we prove P(l'), where l' = cons(x, l) for an arbitrary x.

So as an example, let’s try to prove that map(id) = id.

First, let’s look at the definitions for map and id:

map _ []     = []
map f (x:xs) = f x : map f xs
id x         = x

We use structural induction on the list.

Base case: map(id, []) = [] = id([])

Inductive step: Assume that map(id, xs) = id(xs), and prove that map(id, (x:xs)) = id(x:xs).

map(id, (x:xs)) = id(x) : map(id, xs) = [by the I.H.] = id(x) : id(xs) = x : xs = id(x : xs).

Since map(id, l) = id(l) for every list l, extensionality gives the point-free equation map(id) = id.

Q.E.D.
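The claim is also easy to sanity-check in GHC by re-defining map and id exactly as above (primed names, so they don't clash with the Prelude):

```haskell
-- map and id as defined above, under primed names
map' :: (a -> b) -> [a] -> [b]
map' _ []     = []
map' f (x:xs) = f x : map' f xs

id' :: a -> a
id' x = x
```

Evaluating `map' id' [1, 2, 3]` indeed yields `[1, 2, 3]`, matching both the base and inductive cases of the proof.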

Prime check

Here is a tiny proof of why, when testing q for primality, it is enough to check divisors only up to sqrt(q) instead of q:

Let n be a divisor of q, with 1 < n < q.

If n < sqrt(q) < q
then n < sqrt(q) < q/n < q

q/n < q is trivial, since n > 1.

For sqrt(q) < q/n, divide both sides by q:
1/sqrt(q) < 1/n

and taking reciprocals flips the inequality back to n < sqrt(q), which is exactly our assumption.

Alternatively

If sqrt(q) < n < q
then q/n < sqrt(q) < n < q

Here q/n < sqrt(q) follows the same way from the assumption sqrt(q) < n.

So the nontrivial divisors of q come in pairs (n, q/n), and each pair has a member at most sqrt(q). Hence if q has any nontrivial divisor at all, it has one that is ≤ sqrt(q), and checking up to sqrt(q) suffices.
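The pairing argument can also be checked empirically with a couple of lines of Haskell (a throwaway sketch; pairsUp is just a name I made up):

```haskell
-- for every nontrivial divisor n of q, the pair (n, q `div` n)
-- has a member that is at most sqrt q
pairsUp :: Int -> Bool
pairsUp q = all ok [n | n <- [2 .. q - 1], q `mod` n == 0]
  where
    ok n = min n (q `div` n) <= floor (sqrt (fromIntegral q :: Double))
```

Running `all pairsUp [2 .. 500]` returns True.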

Implementation of the algorithm in Haskell:

primetest :: Int -> Bool
primetest x = primetest' x $ (floor . sqrt . fromIntegral) x
  where
    primetest' 1 _ = False
    primetest' _ 1 = True
    primetest' x y = if x `mod` y == 0 then False else primetest' x (y - 1)

The part where I do (floor . sqrt . fromIntegral) may look magical, but what it does is:
1. Convert x from Int to something that sqrt accepts
2. Calculate its square root
3. Round down back to an integer

(Note that ceiling instead of floor would wrongly report 2 as composite, since the first divisor tried would be ceiling (sqrt 2) = 2, i.e. x itself; by the argument above, checking up to floor (sqrt x) suffices.)
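For comparison, the same bound can be expressed more idiomatically with a list comprehension (isPrime and isqrt are my own names, not library functions):

```haskell
-- trial division up to floor (sqrt n), written as a list comprehension
isPrime :: Int -> Bool
isPrime n = n > 1 && null [d | d <- [2 .. isqrt n], n `mod` d == 0]
  where
    isqrt = floor . sqrt . (fromIntegral :: Int -> Double)
```

For example, `map isPrime [1 .. 10]` gives `[False, True, True, False, True, False, True, False, False, False]`.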

Recursive types in Haskell

I was playing around today with attempting to re-define Haskell’s list type. The built-in list is written using the symbols [ and ], and comes with operators such as : and ++.

For example, if we have a list [1, 2, 3] we can prepend a number (add it to the front) by doing 0:[1, 2, 3], or we can append a number (add it to the end) by doing [1, 2, 3] ++ [4].

The difference between these two operations is that the former takes only constant time, since it attaches a value to the front of the list, while the latter takes O(n) time, because it first needs to traverse the whole list and then attach the one-element list [4] at its end.

So anyway, back to playing with recursive definitions. My definition of the list was:

data List a = Nil | Cons a (List a) deriving (Show)

What this says is that the type List a has 2 constructors, Nil and Cons. Nil marks the end of the list, while Cons holds a value (e.g. Cons 3) followed by another “List a” (which is either Nil or another, recursive, Cons).

So this recursive definition allows us to define a list, and it is essentially how lists are implemented in LISP. For example, if we wanted to represent [1, 2, 3] using our Cons/Nil representation, we would have:

Cons 1 (Cons 2 (Cons 3 Nil))

You can play around with it as much as you like; what I did was only implement the ++ operator:

add' :: List a -> List a -> List a
add' Nil ys = ys
add' (Cons x xs) ys = Cons x (add' xs ys)

Example usage:


*Main> add' (Cons 1 (Cons 2 (Cons 3 Nil))) (Cons 4 Nil)
Cons 1 (Cons 2 (Cons 3 (Cons 4 Nil)))
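To make playing with it less painful, it also helps to convert back and forth between List a and the built-in list; fromList and toList below are hypothetical helpers of my own, not part of any library:

```haskell
-- the List type from above
data List a = Nil | Cons a (List a) deriving (Show)

-- build a List a from a built-in list: foldr replaces (:) with Cons and [] with Nil
fromList :: [a] -> List a
fromList = foldr Cons Nil

-- and flatten it back
toList :: List a -> [a]
toList Nil         = []
toList (Cons x xs) = x : toList xs
```

Now `fromList [1, 2, 3]` produces exactly the `Cons 1 (Cons 2 (Cons 3 Nil))` from above.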

Recursion in Haskell and LC

When we speak of recursion in Haskell, we can define a fixed point combinator like this:

Prelude> let fix f = f (fix f)
Prelude> :t fix
fix :: (t -> t) -> t

We can see from its type that it is well typed: the argument must be a function that takes a t and returns a t.

So now consider we have factorial defined as:
Prelude> let fact n = if n == 0 then 1 else n * fact(n-1)

So we can use it like:
Prelude> (fix (\x -> fact)) 4
24

This is great!
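Note, though, that \x -> fact ignores its argument, so fix is not doing any real work there; fact already recurses on its own. The nicer trick is to factor the recursion out and let fix tie the knot (fact' and rec are names of my own choosing):

```haskell
fix :: (a -> a) -> a
fix f = f (fix f)

-- factorial with no explicit recursion: rec stands for "the function itself",
-- and fix feeds the function back to itself
fact' :: Int -> Int
fact' = fix (\rec n -> if n == 0 then 1 else n * rec (n - 1))
```

Here `fact' 4` evaluates to 24, without fact' ever mentioning its own name.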

So, in lambda calculus, recursion is obtained from the Y combinator:
Y := λg.(λx.g (x x)) (λx.g (x x))

But, if we tried to do that in Haskell, we’d get something like:

Prelude> \g -> (\x -> g (x x)) (\x -> g (x x))
:2:35:
    Occurs check: cannot construct the infinite type: t1 = t1 -> t0
    In the first argument of `x', namely `x'
    In the first argument of `g', namely `(x x)'
    In the expression: g (x x)

So this is Haskell warning us about an infinite type. So why does that definition work in lambda calculus?

Well, the reason is that lambda calculus is untyped (unless we are talking about typed lambda calculus, which we aren’t :-)), and Haskell is typed.

So in the world of typed languages, that lambda expression is not possible.
Consider the self-application x x. To apply x to something, x must have a function type, say x :: a -> b, and the argument must have type a. So, to apply x to itself, we also need x :: a. But that means a == a -> b: an infinite type.

But hey, we still have fixed point combinators!
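Indeed: the occurs check only forbids the naked self-application. Wrapping it in a recursive newtype (Rec is my own name for it) smuggles the "infinite" type past the type checker, giving a typed Y combinator:

```haskell
-- Rec a is a value that, once unwrapped, can be applied to itself
newtype Rec a = Rec { unRec :: Rec a -> a }

-- Y := λg.(λx.g (x x)) (λx.g (x x)), with Rec/unRec marking the knot
y :: (a -> a) -> a
y g = (\x -> g (unRec x x)) (Rec (\x -> g (unRec x x)))

-- factorial with the recursion factored out, to exercise y
factF :: (Int -> Int) -> Int -> Int
factF rec n = if n == 0 then 1 else n * rec (n - 1)
```

Thanks to laziness, `y factF 5` evaluates to 120 rather than looping forever.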

Lambda calculus

Lambda calculus is a theoretical programming language (with the same expressive power as a Turing machine). It is interesting because there is no state, and its entire syntax consists of just abstraction (λ) and application.

If you want to learn more, visit the Wikipedia article http://en.wikipedia.org/wiki/Lambda_calculus as this will be only a short introduction.

So, since we have no state, we have no built-in data types, not even numbers; everything is encoded with functions. Numbers are defined like:

1 := λf x. f x
2 := λf x. f (f x)

In other words, we can define 1 as f(x), and 2 as f(f(x)) (we get to define f and x ourselves).

Expressions in lambda calculus fall into two categories (besides bare variables): abstraction and application.

Abstraction is when we define a function and apply no arguments to it.
Application is when we apply arguments to some abstraction (already defined function).

SUCC (successor) example:

SUCC is defined as

SUCC := λnfx. f (n f x)

So, doing SUCC 1 we get

(λnfx. f (n f x)) (λf x. f x) =
λf x. f ((λf x. f x) f x) = λf x. f (f x) = 2
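The same computation can be transcribed into Haskell, if we fix a concrete type for the Church numerals (Church, one, two, succC and toInt are all names of my own choosing):

```haskell
-- Church numerals specialized to Int -> Int, to keep the types simple
type Church = (Int -> Int) -> Int -> Int

one, two :: Church
one f x = f x
two f x = f (f x)

-- SUCC := λn f x. f (n f x)
succC :: Church -> Church
succC n f x = f (n f x)

-- read a numeral back by counting applications of (+1) starting at 0
toInt :: Church -> Int
toInt n = n (+ 1) 0
```

As expected, `toInt (succC one)` and `toInt two` both give 2.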