Chapter 3

Type System

Posted by Dino on December 27, 2020

Type

Below is a problematic Racket function.

(define (f x)
  (+ x (first x)))

The above function will fail in all cases, since the parameter x cannot simultaneously be a number (to be a valid argument of +) and a list (to be a valid argument of first). Calling the function will raise a type error.

The aim of Chapter 3 is to introduce the type systems of programming languages.

Type: A set of values together with a set of behaviours on those values. Although the values belonging to a type can differ among languages, the core idea is the same: if an expression is declared to have type A, the language constrains both the possible values of the expression and the behaviours allowed on it.

Type System: A set of rules governing the semantics of types in the language. This includes how types are defined, the syntax rules governing where and how types can be written, and how types affect the semantics of the language.

Strong Typing & Weak Typing

Strong Typing: Every value has a fixed type during the execution of a program. Another way of putting it is that, without explicit casting, an expression is always bound to the same type.

Weak Typing: A value can be implicitly interpreted at runtime as having a completely different type than what was originally intended.

Type Coercion: When the operands of an operator have different types, one operand is converted to an ‘equivalent’ value of the other operand’s type. (Autoboxing in Java is also an example of type coercion.)

    // In JavaScript
    "5" + 6
    "56"
    // The 6 is implicitly cast to the string "6", and + on strings concatenates them

    // Also in JavaScript
    "5" == 5
    true
    // This is also due to type coercion
    // Thus, if you want to compare whether two values are exactly the same in JS, you have to use ===

In fact, type coercion cannot be used to determine whether a language is strongly or weakly typed: type coercion also exists in strongly typed languages. For instance, it happens when you add an integer to a floating-point number, and essentially no modern language prohibits that.

Finally, strong/weak typing is not a binary property, but a set of nuanced rules about what types can be coerced and when.

A side note here is that whether C is a strongly typed language has been an open discussion for a while.

Dynamic Typing & Static Typing

Static Typing: The type of every expression is determined directly from the source code, before any code is actually executed. In languages that permit mutation, static typing requires that even as variables change values, they can only change to values within the same type.

Dynamic Typing: Type-checking is not performed until the program is run. In such languages, a type error is a runtime error.

Informally, languages that require explicit type declarations (for instance Java and C) are statically typed, whereas most interpreted languages, like Python and JavaScript, that do not require explicit type declarations are dynamically typed. Note that this is not true in all cases: Haskell, for instance, is a statically typed interpreted language that does not require explicit type declarations.

Mostly, dynamically typed languages allow more expressive programs than statically typed ones.

On the other hand, statically typed languages assure better safety during execution.

Haskell Type System

In Haskell, the type of any expression can be examined using :t. You can examine the type of a function even if you did not write a type annotation. The ability of the compiler to determine types without explicit type declarations is known as Type Inference.

    :t True
    True :: Bool
    
    a = \x -> not x
    :t a
    a :: Bool -> Bool
    -- Bool -> Bool means that the function takes a Bool and returns a Bool
    
    b = \x y -> x && y
    :t b
    b :: Bool -> Bool -> Bool
    -- Note that in Haskell -> is right-associative, so this literally means
    -- b :: Bool -> (Bool -> Bool)
    -- In Haskell this is interpreted as:
    -- b is a unary function that takes a Bool and returns another function, which takes a Bool and returns a Bool
    -- This is known as currying
    -- In Haskell every multi-parameter function is broken down into a nested composition of unary functions
    -- Thus, the real body of b in Haskell is
    -- b = \x -> \y -> x && y

Haskell provides a special syntax for partial application of infix operators: you give only one of the arguments to the infix operator, and the result is a function that takes the remaining argument and places it on the other side of the operator. This is known as sectioning.

    f = (&& True) -- This is equivalent to f = \x -> x && True
    
    g = (True &&) -- This is equivalent to g = \x -> True && x
    
    -- Example with a higher-order function
    f x lst = map (\y -> x + y) lst
    
    -- With sectioning
    f x lst = map (+ x) lst
    
    -- With currying
    f x = \lst -> map (+ x) lst
    
    -- With sectioning
    f x = map (+ x)

Algebraic Data Type

In Haskell, there are two major kinds of user-defined types.

The first kind is known as the Product Type, which is similar to the struct in C.

The grammar used to create a Product type is

` data <TypeName> = <Constructor> <values> ... `

The constructor is also a function, which takes values of the component types and returns a value of the new type.

    -- Here we define a new type named Point
    -- The P is the constructor used to create a Point
    data Point = P Float Float
    a = P 3 4
    
    :t a
    a :: Point
    
    :t P
    P :: Float -> Float -> Point 

To actually operate on a Point, pattern matching is required in the function body. In fact, we can apply pattern matching to any value constructor. (A side note: the cons operator (:) is also a value constructor, for lists, which is the reason why you can apply pattern matching to a list.)
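For instance, since (:) and [] are the list's value constructors, a list function can be written with one pattern-match clause per constructor. A minimal sketch (the name len is ours, not a built-in):

```haskell
-- One clause per value constructor of the list type
len :: [a] -> Int
len []       = 0
len (_ : xs) = 1 + len xs
```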

    distance :: Point -> Point -> Float
    distance (P x1 y1) (P x2 y2) = 
        let square = (^ 2) in
        sqrt (square (x1 - x2) + square (y1 - y2))
        
    a = P 3 4
    
    b = P 1 2
    
    distance a b
    2.828427

The second kind is known as the Union Type (or sum type). This is similar to an enum in other languages.

The grammar for creating a union type is

data <TypeName> = <Constructor> <values> ... | <Constructor> <values> ... ... 

An example of the sum type is

    data Day = Monday | Tuesday | Wednesday | Thursday | Friday | Saturday | Sunday
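A function over this sum type dispatches on the constructors by pattern matching; a small sketch (the isWeekend name is ours, and the Day definition is repeated so the snippet stands alone):

```haskell
data Day = Monday | Tuesday | Wednesday | Thursday | Friday | Saturday | Sunday

-- One clause per constructor we care about; the wildcard covers the rest
isWeekend :: Day -> Bool
isWeekend Saturday = True
isWeekend Sunday   = True
isWeekend _        = False
```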

The algebraic data type is the data type formed by any combination of sum and product types.

    data Shape = Circle Point Float -- Centre Point & Radius
               | Rectangle Point Point -- Top Left Point & Bottom Right Point
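A function over Shape then needs one clause per constructor, and can pattern-match through the nested Point product type in the same step. A minimal sketch (the area function is ours, with Point and Shape repeated so the snippet stands alone):

```haskell
data Point = P Float Float
data Shape = Circle Point Float      -- centre point & radius
           | Rectangle Point Point   -- top-left point & bottom-right point

-- One clause per constructor of the sum type;
-- the nested (P x y) patterns destructure the product type
area :: Shape -> Float
area (Circle _ r)                    = pi * r * r
area (Rectangle (P x1 y1) (P x2 y2)) = abs (x2 - x1) * abs (y2 - y1)
```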

Polymorphism

Polymorphism: The ability of an entity to have valid behaviour in contexts of different types.

A constraint on the use of lists in Haskell is that every element of a list has to be the same type. Another way of looking at this is that, as long as we adhere to putting only values of one type into a list, we can create lists of any type.

An interesting question here is: since we can create a list of any type, what is the return type of a list-related function like head?

The answer is that it could be anything:

    :t (head [True, False])
    (head [True, False]) :: Bool
    
    :t (head "abc")
    (head "abc") :: Char
    
    :t head
    head :: [a] -> a

The a in [a] -> a is known as a type variable. A type variable is an identifier in a type expression that can be instantiated to any type.

An example of type checking with a type variable is :t (head [True, False]):

  1. Haskell determines that the type of argument list is [Bool].

  2. Haskell matches the argument type [Bool] against the parameter type [a], and instantiates a = Bool in the function type.

  3. Haskell takes the return type a, with a instantiated to Bool, to recover the final concrete type Bool for the return type, and therefore the overall type of the function call.

To enable lists that can be formed of any type, a type variable is included in the definition.

The concrete definition of List in Haskell is

data [] a = [] | (:) a ([] a)

The [] in the definition plays two roles:

  1. The [] on the left of the vertical bar represents the constructor for an empty list

  2. The [] on the right of the vertical bar is a type constructor: applied to the type variable a, [] a denotes the list type [a]

    :t (:) -- The type of cons
    (:) :: a -> [a] -> [a] -- Takes a value of type a and a list of type a, and returns another list of a
    
    :t ([]) -- The type of the empty list
    ([]) :: [a] -- The empty list can be used with any element type

Generic Polymorphism

Generic Polymorphism (Generics): An entity (function or class) behaves in the same way regardless of the type context.

List in Haskell is a generic container that can store values of any type, since how lists are constructed ([] & (:)) is the same regardless of what type of element is being stored.

Also, every built-in list function is generic, meaning that it operates on its input list regardless of what the list contains. Thinking of it another way, a generic function only cares about the outer shape of the container, without revealing the details of the inner elements.

i.e.) The container is a sequence in a linear order, without knowing the type of each element in the container.
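As a sketch, a user-defined list function stays generic as long as it only rearranges the outer shape (the name myReverse is ours):

```haskell
-- Generic: it rearranges the container's shape but never inspects
-- the elements, so it works for lists of any element type
myReverse :: [a] -> [a]
myReverse = foldl (flip (:)) []
```

Its type [a] -> [a] carries no constraint at all, which is exactly what makes it generic rather than ad hoc.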

Java supports the <> syntax for introducing generics. The convention in Java is that the T in a generic stands for Type; T is the type variable in Java. Almost all of the Java collections support generics. Thus, you can use all those collections to add an element or find the length of the collection without worrying about the concrete element type. That is the point of Java providing generics, and it is also the aim of generic polymorphism.

Ad hoc Polymorphism

Type Class: A set of types plus a set of functions that must be able to operate on those types, similar to an interface in Java. We say that a type is a member of a type class if the type class’s functions have been implemented for that type.

Haskell provides a type class named Show which is used to convert values to a string representation.

    class Show a where
        show :: a -> String

The a after Show is still a type variable, with a slightly different meaning.

It stands for:

‘’A type a belongs to the Show class if it provides an implementation of the show function’’

There are usually two approaches to making a type a member of a type class.

  1. Providing the implementation of the required function(s)
    • You can provide a custom implementation of the required function
  2. Using the deriving clause
    • The compiler will automatically generate an instance of the class for the type.

    • But not all classes can be automatically generated

An example of making Point a member of Show:

    -- Approach 1
    data Point = P Float Float
    instance Show Point where
        show (P x y) = "(" ++ (show x) ++ "," ++ (show y) ++ ")"
    a = P 1 2
    > a
    (1.0,2.0)
    
    -- Approach 2
    data Point = P Float Float deriving (Show)
    a = P 1 2
    > a
    P 1.0 2.0

The show function is a function with multiple implementations, one for each type that is a member of Show.

Recall the type signature of the show

` show :: (Show a) => a -> String ` The (Show a) before the => is a type class constraint, meaning that the type variable a can only be instantiated to a member of the Show type class.

Ad-hoc Polymorphism: The ability of an entity to have different behaviours depending on the type context. In Haskell, ad-hoc polymorphism is achieved using type classes, whereas in many object-oriented programming languages, overloading is used to achieve it.

For instance, the toString method in Java or the __str__ method in Python can have different implementations depending on the concrete class that implements the method.

Some built-in type classes

  1. Eq: Members of this type class support the (==) and (/=) functions to test for equality. Only (==) needs to be implemented; (/=) is given a default implementation in terms of (==)

  2. Ord: Members can be ordered using (<), (<=), (>), and (>=). Members must also be members of the Eq type class; we say that Ord is a subclass of Eq. Only (<=) (or compare), plus (==) from Eq, needs to be implemented.

  3. Num: Members provide basic arithmetic operations such as (+), (-), and (*). Notably, it doesn’t provide a division function, as division is handled differently by its subclasses. It also provides fromInteger.

  4. Integral: Represents integers, and provides the div (integer division) and mod functions.

  5. Fractional: Represents non-integral numbers, and provides the standard division operator (/), among others.
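Class constraints let us write functions that are more flexible than any single concrete type, yet still restricted to members of a class. A small sketch (the total and allEqual names are ours):

```haskell
-- Works for any member of Num: Int, Float, Double, ...
total :: Num a => [a] -> a
total = foldr (+) 0

-- Works for any member of Eq
allEqual :: Eq a => [a] -> Bool
allEqual []       = True
allEqual (x : xs) = all (== x) xs
```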

An interesting point is that numeric literals are also impacted by type classes.

    :t 1
    1 :: Num p => p

Higher-order type classes

Beyond lists, there are many other “containers” in Haskell. Suppose we want all of those containers to support map. The simplest naive approach is to implement one map per container and dispatch on the container’s kind, as the following Racket code does.

    (define (mapContainer f container)
        (cond
            [(list? container) (map f container)]
            [(set? container) (set-map container f)]
            [(vector? container) (vector-map f container)]))

The problem with this approach is that we lose the polymorphism of the code. The solution Haskell proposes is to make map part of a type class. The “mappable” type class in Haskell (as in category theory) is called Functor.

    class Functor f where
        -- The f stands for the Functor / the mappable container
        -- The type annotation says: this takes a function mapping type a to type b,
        -- and a Functor/container that stores elements of type a,
        -- and returns a Functor/container storing elements of type b
        fmap :: (a -> b) -> f a -> f b

    fmapContainer :: Functor f => f Int -> f Int
    fmapContainer items = fmap (+1) items
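Any other container becomes “mappable” by giving it its own Functor instance. A sketch with a hypothetical binary tree (the Tree and toList names are ours):

```haskell
data Tree a = Leaf | Node (Tree a) a (Tree a)

-- fmap preserves the tree's shape and applies f to every stored element
instance Functor Tree where
    fmap _ Leaf         = Leaf
    fmap f (Node l x r) = Node (fmap f l) (f x) (fmap f r)

-- Flatten the tree in order, for inspection
toList :: Tree a -> [a]
toList Leaf         = []
toList (Node l x r) = toList l ++ [x] ++ toList r
```

With this instance, code written against Functor, like the fmapContainer above, works on a Tree Int without any change.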

Representing failing computation

Error handling is one of the essential jobs in programming language design. In some languages, errors are handled by using exceptions, or by checking each step to see if the returned value indicates an error. Both of these strategies often introduce extra runtime risk into our program, because the compiler cannot always determine whether (or what kind of) an exception will be raised, or distinguish between valid and erroneous return values, leaving it to the programmer to remember to make the check.

Haskell’s type system provides a statically checkable approach with a type constructor, Maybe.

    data Maybe a = Nothing | Just a
        -- The a is the type variable
        -- Nothing stands for the failing result
        -- The success result is encapsulated / wrapped into Just

Note that Maybe Int is not the same type as Int.

Some example functions wrapped into safe versions:

    safeDiv :: Int -> Int -> Maybe Int
    safeDiv _ 0 = Nothing
    safeDiv x y = Just (div x y)

    safeHead :: [a] -> Maybe a
    safeHead [] = Nothing
    safeHead (x:_) = Just x

    safeTail :: [a] -> Maybe [a]
    safeTail [] = Nothing
    safeTail (_:xs) = Just xs
    
    assert :: (a -> Bool) -> a -> Maybe a
    assert pred x = 
            if pred x
            then
                Just x
            else
                Nothing
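The payoff is that callers are forced by the type to handle both outcomes. A small usage sketch (the describe function is ours, with safeDiv and safeHead repeated so the snippet stands alone):

```haskell
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (div x y)

safeHead :: [a] -> Maybe a
safeHead []      = Nothing
safeHead (x : _) = Just x

-- Pattern matching on the Maybe makes the failure case impossible to forget
describe :: Maybe Int -> String
describe Nothing  = "failed"
describe (Just n) = "ok: " ++ show n
```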

Lifting pure functions

Suppose we want to combine safeHead with a simple function which just adds 1 to its result. A faulty implementation is

    safeAdd1ToHead :: Num a => [a] -> Maybe a
    safeAdd1ToHead items = safeHead items + 1

The reason the above implementation is wrong is that it doesn’t type-check. Recall that + requires both operands to be Nums, whereas safeHead returns a Maybe a. The + cannot add a Maybe a to 1.

The simplest approach is to write a Maybe version of it, like

    add1Maybe :: Num a => Maybe a -> Maybe a
    add1Maybe Nothing = Nothing
    add1Maybe (Just x) = Just (x + 1)

The drawback of this simple approach is that it does not scale. The ideal is a single solution that applies to all functions, without writing a separate implementation for each type. The way to do this in Haskell is a higher-order function which takes any pure a -> b function and turns it into one that works on Maybe as instead. This function is called lift.

    lift :: (a -> b) -> (Maybe a) -> (Maybe b)
    lift f Nothing = Nothing
    lift f (Just x) = Just (f x)

Now, we can correct safeAdd1ToHead by using lift: ` safeAdd1ToHead items = lift (+ 1) (safeHead items) `

Technically, Maybe is a member of Functor, and the signature of lift is the same as fmap with the type variable f instantiated to Maybe. Thus, another way of writing safeAdd1ToHead is

    -- The built-in implementation of Maybe's fmap
    instance Functor Maybe where
        fmap f (Just x) = Just (f x)
        fmap f Nothing = Nothing

    safeAdd1ToHead items = fmap (+ 1) (safeHead items)

Composing failing computations

A step beyond safeHead is safeSecond, a safe version of returning the second element of a list. Again, a simple but faulty implementation is

    safeSecond :: [a] -> Maybe a
    safeSecond items = safeHead (safeTail items)

The problem is that safeTail returns a Maybe [a] while safeHead is expecting a [a]. The types do not match.

Well, using lift or fmap is incorrect too.

    safeSecond :: [a] -> Maybe a
    safeSecond items = fmap safeHead (safeTail items)
    -- The type of fmap is (a -> b) -> f a -> f b
    -- Let us substitute the f with Maybe
    -- The type of fmap is now (a -> b) -> Maybe a -> Maybe b
    -- The type of safeHead is [a] -> Maybe a
    -- The type of safeTail is [a] -> Maybe [a]
    -- Matching safeHead against (a -> b) instantiates a to [a] and b to Maybe a
    -- The instantiated type is ([a] -> Maybe a) -> Maybe [a] -> Maybe (Maybe a)
    -- So this safeSecond returns a Maybe (Maybe a), not the Maybe a we wanted

A simple approach is to unwrap the value in the Maybe using pattern matching, just like in lift:

    safeSecond :: [a] -> Maybe a
    safeSecond items = 
        let xs' = safeTail items
        in 
            case xs' of
            Nothing -> Nothing
            Just xs'' -> safeHead xs''

The drawback of this approach is that, as we try to access later indices, the block becomes extremely nested.

For instance, suppose we want to access the fourth element of the list

    safeFourth :: [a] -> Maybe a
    safeFourth items = 
        let xs' = safeTail items
        in 
            case xs' of
            Nothing -> Nothing
            Just xs1 ->
                let xs1' = safeTail xs1
                in
                    case xs1' of
                    Nothing -> Nothing
                    Just xs2 ->
                        let xs2' = safeTail xs2
                        in
                            case xs2' of
                            Nothing -> Nothing
                            Just xs3 -> safeHead xs3

To save our lives, we can move the pattern matching into another function, namely andThen. (Spoiler alert: andThen will appear again in the later part of the notes when we talk about Monads.)

    andThen :: Maybe a -> (a -> Maybe b) -> Maybe b
    andThen Nothing _ = Nothing
    andThen (Just x) f = f x

The new version of safeSecond is

    -- Version 1
    safeSecond :: [a] -> Maybe a
    safeSecond items = 
            let xs' = safeTail items
            in
                andThen xs' safeHead
    
    -- Version 2 - Using backticks to put andThen in the infix position
    safeSecond :: [a] -> Maybe a
    safeSecond items = 
            let xs' = safeTail items
            in
                xs' `andThen` safeHead
    
    -- Version 3 - Simplifying version 2
    safeSecond items = safeTail items `andThen` safeHead

We can also rewrite the safeFourth as

    safeFourth items = 
        safeTail items `andThen`
        safeTail `andThen`
        safeTail `andThen`
        safeHead

Error Reporting

One limitation of using Maybe is that chained computations don’t preserve information about where or why the computation failed. In practice, the Either type is often used instead of Maybe: data Either a b = Left a | Right b

By convention, Left is used to store information about the failing computation, while Right encodes the successful result, similar to Just in Maybe.

    safeDiv2 :: Int -> Int -> Either String Int
    safeDiv2 _ 0 = Left "Zero Division Error"
    safeDiv2 x y = Right (x `div` y)
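Either chains exactly like Maybe. A sketch of an Either version of andThen (our name, not a library function), which threads the first Left through unchanged:

```haskell
safeDiv2 :: Int -> Int -> Either String Int
safeDiv2 _ 0 = Left "Zero Division Error"
safeDiv2 x y = Right (x `div` y)

-- The first Left short-circuits the whole chain, preserving its message
andThenE :: Either e a -> (a -> Either e b) -> Either e b
andThenE (Left err) _ = Left err
andThenE (Right x)  f = f x

-- (100 `div` 5) `div` 2
chained :: Either String Int
chained = safeDiv2 100 5 `andThenE` \x -> safeDiv2 x 2
```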

Modeling Mutation

One of the controversies between imperative and functional languages is mutation. The way imperative languages implement mutation is through state. The idea of state also exists in functional languages: consider the usage of fold in Lisp or reduce in JS/Python. fold takes the input accumulator as a state and applies the update function to the accumulator to produce the new state. By chaining such computations, fold achieves stateful operations just like an imperative language.

There are two essential operations for computations with state: read and write, also called get and put.

    -- Both get and put need to return a state, but to distinguish the returned value from the state,
    -- the actual return is a tuple in the order (<return item>, <new state>)
    -- get takes the current state and returns it as the result, while the new state is still the old state
    get :: s -> (s, s)
    get state = (state, state)
    
    -- put takes an item and a state, and writes the item to be the new state
    -- put does not return anything; in Haskell this is represented by ()
    put :: s -> s -> ((), s)
    put item state = ((), item)

The State type in Haskell is used to wrap an operation which takes the current state and produces the result and a new state.

The general form of such operation is \s -> (r, s_new)

The Haskell State definition is data State s a = State (s -> (a, s))

Running and combining stateful computations

Although we have made stateful operations possible, a problem similar to Maybe’s is that the operation is wrapped “inside” the State, and we need to extract it before using it. Haskell provides the runState function to do this.

    -- The built-in implementation
    runState :: State s a -> (s -> (a, s))
    runState (State operation) = operation 
    
    -- Import the State module
    import qualified Control.Monad.State as State
    
    -- We need to call the functions inside the State module
    (State.runState State.get) 5
    (5, 5)
    
    -- The reason for the parentheses is to extract the function wrapped inside get,
    -- and then apply the unwrapped function to the input to produce the output

A simple example of applying stateful operations

    -- Source code in python
    def f():
        x = 10
        x = x * 2
        return str(x)
    
    -- In Haskell
    f :: State.State Int String
    f = State.state (\s0 -> 
                    let (_, s1) = (State.runState (State.put 10)) s0 
                        (x, s2) = (State.runState State.get) s1
                        (_, s3) = (State.runState (State.put (x * 2))) s2
                        (x', s4) = (State.runState State.get) s3
                    in 
                        (show x', s4)
                    )

To simplify the code, we can add a State version of andThen, which takes a State operation and a function that takes the result of that operation and returns a new State operation.

Recall the andThen in the Maybe the type signature is

andThen :: Maybe a -> (a -> Maybe b) -> Maybe b

Similarly, the andThen in the State will have a similar signature that is

andThen :: State s a -> (a -> State s b) -> State s b

The implementation is

    andThen :: State s a -> (a -> State s b) -> State s b
    andThen op1 transitionFun = State.state (\state0 -> 
                        let (x1, state1) = (State.runState op1) state0
                            op2 = transitionFun x1
                            (x2, state2) = (State.runState op2) state1
                        in
                            (x2, state2))

The andThen in the State does the following:

  1. Run its first State computation op1

  2. Use the result, and the transition function to create a second State computation

  3. Run the newly-created State computation, and return the result
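The three steps can be traced with a self-contained miniature State, defined locally here instead of imported, so all the plumbing is visible (a sketch mirroring the definitions above, not the library's actual implementation):

```haskell
data State s a = State (s -> (a, s))

runState :: State s a -> (s -> (a, s))
runState (State op) = op

get :: State s s
get = State (\s -> (s, s))

put :: s -> State s ()
put item = State (\_ -> ((), item))

-- Step 1: run op1; Step 2: build the second computation from its result;
-- Step 3: run the second computation on the updated state
andThen :: State s a -> (a -> State s b) -> State s b
andThen op1 transitionFun = State (\state0 ->
    let (x1, state1) = runState op1 state0
    in runState (transitionFun x1) state1)

-- x = 10; x = x * 2; return str(x)
f :: State Int String
f = put 10 `andThen` \_ ->
    get `andThen` \x ->
    put (x * 2) `andThen` \_ ->
    get `andThen` \x' ->
    State (\s -> (show x', s))
```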

The new implementation is

    f :: State.State Int String
    f = State.put 10 `andThen` \_ -> 
        State.get `andThen` \x ->
        State.put (x * 2) `andThen` \_ ->
        State.get `andThen` \x' ->
        State.state (\state_ -> (show x', state_))

The final State value is special, since it simply “produces” a value (show x’) without any interaction with the underlying state at all. We’ll call this type of computation a “pure” one. (Spoiler alert: we will talk about pure again in the Monad section.)

    pure :: a -> State.State s a
    pure item = State.state (\state_ -> (item, state_))

Also, we can replace that final expression with pure, such that

    f :: State.State Int String
    f = State.put 10 `andThen` \_ -> 
        State.get `andThen` \x ->
        State.put (x * 2) `andThen` \_ ->
        State.get `andThen` \x' ->
        pure (show x')

Another example

    -- Source code in Python
    i = 0
    def fib_count(n):
        global i
        i += 1
        return n if n < 2 else fib_count(n - 1) + fib_count(n - 2)
    
    -- Converted to Haskell
    fibCount 0 = State.state $ \count -> (0, count + 1)
    fibCount 1 = State.state $ \count -> (1, count + 1)
    fibCount n = fibCount (n - 1) `andThen` \f1 ->
                 fibCount (n - 2) `andThen` \f2 ->
                 State.state $ \c -> (f1 + f2, c + 1) 

Impure I/O in a pure functional world

Up to now, the notes have focused on pure functions, which are functions that do not cause any side effects. In reality, a language requires the capability to interact with the outside world. Haskell provides the IO type for I/O operations.

Similar to get and put in State, IO has two basic functions to read input and write output.

    -- Function to read input
    getLine :: IO String
    
    -- Function to write output
    putStrLn :: String -> IO ()

Finally, a Hello World example can be written in Haskell:

    helloWorld :: IO ()
    helloWorld = putStrLn "Hello, World"
    > helloWorld
    Hello, World

An interesting problem raised by Haskell is: how can we produce consecutive I/O actions at once? In most imperative languages this is done by writing multiple print statements, but Haskell centers on expressions, not statements, and simply writing another expression after the first will not work.

Thus, to solve this problem, we need to chain I/O actions just like we did with State. IO provides two versions of andThen; both can be used as infix operators.

    -- Discard the result produced by the first
    -- By convention, called "then"
    (>>) :: IO a -> IO b -> IO b
    
    -- Using the result of the first to create the second
    -- By convention, called "bind"
    (>>=) :: IO a -> (a -> IO b) -> IO b

Then, we can apply them to chain multiple I/O actions

    greet :: IO ()
    greet = 
            putStrLn "Hello, World" >>
            putStrLn "Second Line"
    
    prompt :: IO ()
    prompt = 
            putStrLn "What's your name" >>
            getLine >>= \userName -> 
            putStrLn ("Hello " ++ userName)

Monad

This chapter ends with the Monad, which is another level of abstraction.

In Maybe, State, and IO, a common function appeared in each of them: andThen. Let us take a look at the type signatures of andThen.

    andThen :: Maybe a -> (a -> Maybe b) -> Maybe b
    andThen :: State s a -> (a -> State s b) -> State s b 
    (>>=) :: IO a -> (a -> IO b) -> IO b

The signatures look very similar, except that each mentions a specific type. If we use a type variable m to replace those concrete types, we obtain a single type signature:

` andThen :: m a -> (a -> m b) -> m b`

It is nice to see that we can remove the duplicated concrete types within the type signatures with a type variable. This is again ad-hoc polymorphism! Recall that in Haskell we achieve ad-hoc polymorphism using type classes. In fact, Maybe, State, and IO are all members of a type class named Monad.

    class Monad m where
        return :: a -> m a -- This is essentially the pure function we mentioned in State
        (>>) :: m a -> m b -> m b
        (>>=) :: m a -> (a -> m b) -> m b
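Because Maybe is a member of Monad, the generic (>>=) plays exactly the role of our hand-written andThen. A small sketch (safeDiv repeated so the snippet stands alone):

```haskell
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (div x y)

-- (>>=) on Maybe behaves like andThen: a Nothing short-circuits the chain
good :: Maybe Int
good = Just 10 >>= safeDiv 100

bad :: Maybe Int
bad = safeDiv 1 0 >>= safeDiv 100
```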

Do-notation

Haskell provides syntactic sugar for monads: the do notation. Consider the following monadic code.

    greetingLambda :: IO ()
    greetingLambda = putStr "What is your first name? " >>
                     getLine >>= \ first -> 
                     putStr "What is your last name? " >>
                     getLine >>= \ last ->
                     let full = first ++ " " ++ last
                     in putStrLn ("Pleased to meet you, " ++ full ++ "!")

We can convert the above code to imperative-style code by using do notation.

    greetingDo :: IO ()
    greetingDo = do
                 putStr "What is your first name? "
                 first <- getLine
                 putStr "What is your last name? "
                 last <- getLine
                 let full = first ++ " " ++ last
                 putStrLn ("Pleased to meet you, " ++ full ++ "!")
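do-notation is not specific to IO: it desugars to (>>=) for any Monad, so the earlier Maybe chains can be written the same way. A sketch (safeHead and safeTail repeated so the snippet stands alone):

```haskell
safeHead :: [a] -> Maybe a
safeHead []      = Nothing
safeHead (x : _) = Just x

safeTail :: [a] -> Maybe [a]
safeTail []       = Nothing
safeTail (_ : xs) = Just xs

-- Desugars to: safeTail items >>= \xs -> safeHead xs
safeSecond :: [a] -> Maybe a
safeSecond items = do
    xs <- safeTail items
    safeHead xs
```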