Simple Functions in Haskell

I wasn’t quite sure how to start this off. I finally decided to just dive right in with a simple function definition, and then give you a bit of a tour of how Haskell works by showing different ways of implementing it.


So let’s dive right in and take a look at a very simple Haskell definition of the factorial function:
fact n = if n == 0
         then 1
         else n * fact (n - 1)
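If you load that definition into an interpreter, it behaves the way you’d expect (the GHCi prompt shown here is just for illustration):
Prelude> fact 5
120
Prelude> fact 0
1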
This is the classic implementation of the factorial. Some things to notice:
1. In Haskell, a function definition uses no keywords. Just “`name params = impl`”.
2. Function application is written by putting things side by side. So to apply
the factorial function to `x`, we just write `fact x`. Parens are only used for
managing precedence. The parens in “`fact (n - 1)`” aren’t there because the
function parameters need to be wrapped in parens, but because otherwise,
the default precedence rules would parse it as “`(fact n) - 1`”.
3. Haskell uses infix syntax for most mathematical operations. That doesn’t mean that
they’re special: they’re just an alternative way of writing a binary function
invocation. You can define your own mathematical operators using normal function
definitions (there’s a small sketch of this right after this list).
4. There are no explicit grouping constructs in Haskell: no “{/}”, no “begin/end”. Haskell’s
formal syntax uses braces, but the language defines how to translate indentation changes
into braces; in practice, just indenting the code takes care of grouping.
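As a small illustration of point 3, here’s a sketch of a user-defined operator; the operator name `.+.` and its behavior are invented purely for this example:
-- An invented operator, defined just like any other function:
(.+.) :: Integer -> Integer -> Integer
x .+. y = x + 2 * y
-- It can be used infix, or as an ordinary prefix function by wrapping it in parens:
seven     = 1 .+. 3      -- 7
alsoSeven = (.+.) 1 3    -- 7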
That implementation of factorial, while correct, is not how most Haskell programmers would implement
it. Haskell includes a feature called pattern matching, and most programmers would use
pattern matching rather than the `if/then/else` expression for a case like that. Pattern matching separates
things and often makes them easier to read. The basic idea of pattern matching in function
definitions is that you can write what *look like* multiple versions of the function, each of which
uses a different set of patterns for its parameters (we’ll talk more about what patterns look like in
detail in a later post); the different cases aren’t just separated by something like a conditional expression, they’re completely separate at the definition level. The case that matches the values at the point of call is the one that is actually invoked. So here’s a more stylistically correct version of the factorial:
fact 0 = 1
fact n = n * fact (n - 1)
In the semantics of Haskell, those two are completely equivalent. In fact, the deep semantics of Haskell are based on pattern matching – so it’s more correct to say that the if/then/else version
is translated into the pattern matching version than vice-versa! Indeed, both will basically expand to the Haskell pattern-matching primitive called the “`case`” expression, which selects from
among a list of patterns: *(Note: I originally screwed up the following, by putting a single “=” instead of a “==” inside the comparison. Stupid error, and since this was only written to explain things, I didn’t actually run it. Thanks to Pseudonym for pointing out the error.)*
fact n = case (n==0) of
    True -> 1
    False -> n * fact (n - 1)
Another way of writing the same thing is to use Haskell’s list type. Lists are a very fundamental type in Haskell. They’re similar to lists in Lisp, except that all of the elements of a list must have the same type. A list is written between square brackets, with the values in the list written
inside, separated by commas, like:
[1, 2, 3, 4, 5]
As in Lisp, the list is actually formed from pairs, where the first element of the pair is
a value in the list, and the second value is the rest of the list. Pairs are *constructed* in Haskell using “`:`”, so the list could also be written in the following ways:
1 : [2, 3, 4, 5]
1 : 2 : [3, 4, 5]
1 : 2 : 3 : 4 : 5 : []
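Typed into GHCi, any of those forms prints back as the same list:
Prelude> 1 : 2 : 3 : 4 : 5 : []
[1,2,3,4,5]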
If you want a list of integers in a specific range, there’s a shorthand for it using “`..`”. To generate the list of values from `x` to `y`, you can write:
[ x .. y ]
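For example:
Prelude> [ 3 .. 8 ]
[3,4,5,6,7,8]
(Leaving the upper bound off entirely, as in “`[1 ..]`”, gives the list of *all* the integers from 1 up – an infinite list, which will make sense once we get to laziness below; it shows up in the `factlist` example near the end.)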
Getting back to our factorial function, the factorial of a number “n” is the product of all of the
integers from 1 to n. So another way of saying that is that the factorial is the result of taking the list of all integers from 1 to n, and multiplying them together:
listfact n = listProduct [1 .. n]
But that doesn’t work, because we haven’t defined `listProduct` yet. Fortunately, Haskell provides
a ton of useful list functions. One of them, “`foldl`”, takes a function *f*, an initial value *i*,
and a list *[l1, l2,…,ln]*, and basically computes *f(…f(f(i, l1), l2)…, ln)* – that is, it starts with *i*, combines it with *l1*, combines that result with *l2*, and so on down to *ln*. So we can use `foldl` with the multiply
function to multiply the elements of the list together. The only problem is that multiplication is written as an infix operator, not a function. In Haskell, the solution to *that* is simple: an infix operator is just fancy syntax for a function call; to get the function, you just put the operator
into parens. So the multiplication *function* is written “`(*)`”. So let’s add a definition of `listProduct` using that; we’ll do it using a *`where`* clause, which allows us to define
variables or functions that are local to the scope of the enclosing function definition:
listfact n = listProduct [ 1 .. n ]
  where listProduct lst = foldl (*) 1 lst
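A quick sanity check of both the fold and the wrapper (again, just a GHCi-style sketch, assuming the definitions above are loaded):
Prelude> foldl (*) 1 [1, 2, 3, 4]
24
Prelude> listfact 5
120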
I need to explain one more list function before moving on. There’s a function called “`zipWith`” for performing an operation pairwise on two lists. For example, given the lists “`[1,2,3,4,5]`” and “`[2,4,6,8,10]`”, “`zipWith (+) [1,2,3,4,5] [2,4,6,8,10]`” would result in “`[3,6,9,12,15]`”.
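And zipWith isn’t limited to addition; it works with any binary function:
Prelude> zipWith (*) [1,2,3] [4,5,6]
[4,10,18]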
Now we’re going to jump into something that’s going to seem really strange. One of the fundamental
properties of how Haskell runs a program is that Haskell is a *lazy* language. What that means is
that no expression is actually evaluated until its value is *needed*. So you can do things like
create *infinite* lists – since no part of the list is computed until it’s needed. So we can do
things like define the fibonacci series – the *complete* fibonacci series, using:
fiblist = 0 : 1 : (zipWith (+) fiblist (tail fiblist))
This looks incredibly strange. But if we tease it apart a bit, it’s really pretty simple:
1. The first element of the fibonacci series is “0”
2. The second element of the fibonacci series is “1”
3. For the rest of the series, take the full fibonacci list, and line up the two copies
of it, offset by one (the full list, and the list without the first element), and add the
values pairwise.
That third step is the tricky one. It relies on the *laziness* of Haskell:
Haskell won’t compute the *nth* element of the list until you *explicitly*
reference it; until then, it just keeps around the *unevaluated* code for computing the part of the list you haven’t looked at. So when you try to look at the *n*th value, it will compute the list *only up to* the *n*th value. The actual computed part of the list is always finite – but you can treat the list *as if* it really were infinite, and retrieve any value from it that you want. Once a value has been referenced, the list up to that point is concrete – those computations *won’t* be repeated. But the last tail of the list will always be an
unevaluated expression that generates the *next* pair of the list – and *that* pair will always consist of the next element of the list, plus another *unevaluated* expression for the pair after it.
Just to make sure that the way that “`zipWith`” is working in “`fiblist`” is clear, let’s look
at a prefix of the parameters to `zipWith`, and the result. (Remember that those three are all
actually the same list! The diagonals from bottom left moving up and right are the same list elements.)
fiblist      = [ 0  1  1  2  3  5  8 … ]
tail fiblist = [ 1  1  2  3  5  8 13 … ]
zipWith (+)  = [ 1  2  3  5  8 13 21 … ]
Given that list, we can find the *n*th element of the list very easily; the *n*th element of a list *l* can be retrieved with “`l !! n`”, so, the fibonacci function to get the *n*th fibonacci
number would be:
fib n = fiblist !! n
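Trying it out in GHCi:
Prelude> take 10 fiblist
[0,1,1,2,3,5,8,13,21,34]
Prelude> fib 10
55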
And using a very similar trick, we can do factorials the same way:
factlist = 1 : (zipWith (*) [1..] factlist)
factl n = factlist !! n
The nice thing about doing factorial this way is that the values of all of the factorials *less than* *n* are also computed and remembered – so the next time you take a factorial, you don’t need to
repeat those multiplications.
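For example (again assuming the definitions above are loaded into GHCi):
Prelude> factl 10
3628800
Prelude> take 8 factlist
[1,1,2,6,24,120,720,5040]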

Comments on “Simple Functions in Haskell”

  1. Mark C. Chu-Carroll

    JY:
    I love that link, it’s absolutely hysterical. And I have to admit to having passed through a couple of those phases (although I’ll *never* admit which ones!).

  2. Justin

    Nice start. I have been learning Haskell myself over the last 10 weeks, and I’m interested to see where you go. You may have introduced a little too much syntax at once (for example, ‘where’) and I know I didn’t get ‘foldl’ for a long time, so I’m not sure beginners will grok it here. The description of the arguments to foldl is great, but maybe you should make the accumulator aspect more obvious.
    One thing you may also consider is making these posts literate Haskell, so they could be pasted directly into an editor and run in an interpreter. Bird notation (that is, preceding each code line with “>”) is the easiest to use on a blog, IMHO.
    Thanks for this series, I’m excited to read the rest!

  3. Coin

    And using a very similar trick, we can do factorials the same way:
    factlist = 1 : (zipWith (*) [1..] factlist)
    factl n = factlist !! n
    The nice thing about doing factorial this way is that the values of all of the factorials less than n are also computed and remember – so the next time you take a factorial, you don’t need to repeat those multiplications.

    How fast is the “!! n” access? That is to say, is the !! operator smart enough to somehow just jump to the nth entry of the list, or will it have to iterate over the first n entries of a stored linked list (an operation that offhand seems like it could be much, much slower than just calculating the factorial manually again, due to n memory accesses being probably slower than n multiplications)?
    Also, wouldn’t this eventually create memory problems? Like, let’s say that I call the factorial once in the entire program, ever, at the beginning. And let’s say I foolishly ask for element 2,403,324. Do I understand correctly that haskell will then calculate and preserve a list of all the factorials 1..2,403,324, and there will be no way to reclaim that memory ever?
    It’s fascinating the language lets you do that, but this sounds like a dangerous feature.

  4. Mark C. Chu-Carroll

    Coin:
    There’s no single answer to your question; it’s up to the compiler to decide. If it’s a frequently used function, there’s a good chance that the compiler is memo-izing the
    results, which will make it very fast. If not, then it needs to traverse the list, which is potentially slow. Also remember that the compiler doesn’t need to promise that the results will *always* be available without computation. It can generate code that allows the list to be discarded on GC and replaced by the thunk that computes it, so that the next time you reference it, it’ll be recomputed, so it doesn’t necessarily use the memory forever. (And of course, if the
    compiler can show that the list won’t be referenced again after some point, it can be GCed as normal.)
    For the memory issue: it’s one of those cases where you, as the developer, need to make an intelligent choice about what’s appropriate for your system. If the function is going to be used rarely, and the way that it gets used will cause the infinite-list version to use huge amounts of space, then it’s not an appropriate implementation technique.
    Frankly, I would never use a factorial function that was written like that in real code. But there *are* definitely
    places where the infinite-list/indefinite-list approach is
    the correct approach in real code. And it’s a good example
    of what the laziness property of Haskell execution means.

  5. Koray

    Coin, the indexed access is as slow as it is for lists in other languages. There are other data structures better suited for that kind of access. This is just a simple example.
    Haskell implementations have proper GCs, so the generated list will be reclaimed at some point.

  6. Daniel Martin

    Note that the standard binary representation of the integer (2,403,324)! will itself require approximately 5.66 MB of memory to store. The base 10 representation of that number is over 14 million digits long. Each additional multiplication near the end of that computation will require over a million 32-bit integer multiplications.
    I know that GHC links in some impressive bignum libraries, and I know that some wicked fast stuff can be done with vector operations, but we may have other problems here aside from running out of memory.

  7. Mark C. Chu-Carroll

    Daniel:
    Just for fun, I decided to see how far Hugs can go with the infinite-list factorial function… On my MacBook, it can compute the factorials up to somewhere a little above 15000; trying to call it for numbers larger than that results in a control stack overflow; calling it for numbers somewhere over 45000 turns into an illegal instruction that crashes hugs.
    GHCI can compute the factorial of 100,000 (!!!), but at a cost of several hundred megabytes. But it can’t get much beyond that without dying.
    So I don’t think we need to worry much about the factorial of 2 million. 🙂

  8. Mark C. Chu-Carroll

    Actually, using the *recursive* version, ghci can compute the factorial of 500,000! (At a cost of approximately 180 megabytes!) But beyond that, it hits a stack overflow. And it manages to run both cores of my CPU at close to 100% for several minutes doing it!

  9. Cale Gibbard

    Coin: While it is true as others have pointed out that the performance of (!!) is up to the compiler, in practice, xs !! n takes O(n) time, because lists really are linked lists. Haskell has arrays with O(1) access time if you want them, but of course, they can’t be infinite. Also, because factlist never goes out of scope, it really will be memoised — it’s easy to prevent that by making it a part of the definition of factl, which would result in it being thrown away after each use. Of course, which of these options you want in any given case is dependent on your problem. Personally, I’d tend to write
    factorial n = product [1..n]
    Another thing I’d like to mention is that foldl is not the fold that you most commonly use in Haskell. This is counterintuitive to those who are coming from a strict functional language background, because they know that foldl is tail-recursive, and hence can be optimised into a loop by the compiler. While this is true, due to the nature of lazy evaluation, this loop does little more than building a large expression to evaluate.

    foldl f z [] = z
    foldl f z (x:xs) = foldl f (f z x) xs

    Lazy evaluation is outermost-first. (Together with the provision that values which come from the duplication of a single variable will only be evaluated once and the results shared.) As an example of how lazy evaluation proceeds with foldl:

    foldl (+) 0 [1,2,3]
    = foldl (+) (0 + 1) [2,3]
    = foldl (+) ((0 + 1) + 2) [3]
    = foldl (+) (((0 + 1) + 2) + 3) []
    = ((0 + 1) + 2) + 3
    = (1 + 2) + 3
    = 3 + 3
    = 6

    So you can see that most of the time saved in implementing the foldl as a loop will be lost in the reduction of the expression it builds. You can also see the O(n) space usage.
    Looking at the code for foldl above, since it does nothing but call itself until it reaches the end of the list, foldl also won’t work well with infinite lists.
    So most of the time, what you really want in Haskell is foldr:

    foldr f z [] = z
    foldr f z (x:xs) = f x (foldr f z xs)

    You can think of foldr f z as replacing each (:) in the list with f, and the [] at the end with z.
    Note that in the second case, foldr calls f immediately. If for any reason, f doesn’t end up needing its second parameter, the rest of the foldr is never computed — this often occurs when not all of the result is demanded by the rest of the code. So foldr interacts nicely with laziness.
    There are still a few cases where that’s not really what you want — those cases when the result can’t be partially evaluated (like in taking the sum or product of a list of numbers).
    These remaining cases (I’d say ~5%, but at worst ~25%) are mostly covered by a variant of foldl called foldl'. The difference between foldl and foldl' is that foldl' forces the evaluation of the result being accumulated before recursing (it’s strict). So in that case, you get back the efficiency that foldl has in strict languages; foldl' can still be implemented as a tight loop, and will operate in constant space.
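    For concreteness, here is a sketch of how such a strict left fold can be written, using seq to force the accumulated value before recursing (an illustrative definition in the same style as the folds above, not necessarily the exact library source):

    foldl' f z [] = z
    foldl' f z (x:xs) = let z' = f z x in z' `seq` foldl' f z' xs
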
    As an aid to figuring out what the various fold and scan functions in Haskell are doing, I’ve produced some diagrams here: http://cale.yi.org/index.php/Fold_Diagrams

  10. Cale Gibbard

    Mark: In the case where you got a stack overflow, try foldl' (*) 1 [1..n]. The stack overflow is being caused because the intermediate calculations of actually multiplying the numbers are not being done until the very end, and you’re running out of stack by then. I would expect foldl' (*) 1 [1..n] to work for all cases until the numbers involved start getting too large to hold in memory. It also runs in log(n) space.

  11. Don Stewart

    The illegal instruction after stack overflow in hugs is fixed in the newest hugs. Also, did you look at compiling the code with GHC, and using the compiled version in the GHCi interpreted environment? GHCi doesn’t optimise the code, in order to have fast turn around.
    Interpreting A.hs as bytecode:
    $ ghci A.hs
    *M> :t facts
    facts :: [Integer]
    Compiling A.hs to native code, with optimisations:
    $ ghc -c -O A.hs
    $ ls A.o
    A.o
    $ ghci A.hs
    Prelude M> :t facts
    facts :: [Integer]

  12. Pseudonym

    Mark, there’s a bug in your case code. It should read:
    case n == 0 of
    not:
    case (n=0) of
    It’s similar to the difference between = and == in C, except that in both cases, they really mean equality, not assignment. The former is “strong equality”, the latter is “semantic equality”.

  13. Pseudonym

    Oh, one more thing. You can compute factorials beyond 500,000 using only slightly more sophisticated code.
    There are two problems with the recursive factorial function as presented.
    1. It uses O(n) stack space to compute fact n.
    2. It performs O(n^2) operations. The reason for this is not obvious, but it’s to do with the complexity of large integer multiplication. It’s less expensive if you try to arrange to multiply only numbers of similar magnitude, because then you can use faster multiplication algorithms, like Karatsuba or FFT.
    Try this on for size:

    factBinary n = recursiveProduct 1 n
    recursiveProduct m n
      | n - m < 5 = product [m..n]
      | otherwise = recursiveProduct m mid * recursiveProduct (mid+1) n
      where mid = (m+n) `div` 2

    O(log n) stack space, and something like O(n * log n * log log n) operations.

  14. Pseudonym

    Killed by HTML markup again. That second function should read:

    recursiveProduct m n
      | n - m < 5 = product [m..n]
      | otherwise = recursiveProduct m mid * recursiveProduct (mid+1) n
      where mid = (m+n) `div` 2

  15. Mark C. Chu-Carroll

    Pseudonym:
    Ack. Yes, I did blow the case statement; the *one* thing that I didn’t bother to test, because it was just for explanatory purposes, and that’s the one where I make a stupid mistake!
    Blerg.

  16. Thomas Sutton

    Are you sure that the definition of fact using pattern matching is sugar for that particular case expression? It seems strange that the compiler would translate a pattern into something that requires that the type being matched be a member of Eq and would require nested expressions with more than two patterns (rather than just more branches on the one case expression).
    I’d expect it to transform to something like:
    fact n = case (n) of
        0 -> 1
        _ -> n * fact (n - 1)

  17. Pseudonym

    Kobold: GMP won’t use the fast multiplication algorithms effectively if you give it a couple of bad numbers to multiply.
    Incidentally, I should plug my own little library of Haskell code, which implements a couple of the fast factorial algorithms.

  18. Pseudonym

    Thomas Sutton, the Haskell language report states that this is an identity:

    case v of { k -> e; _ -> e' } = if (v==k) then e else e'

    where k is a numeric, character or string literal. This works because the built-in Eq instances for numeric, character and string types all do the right thing.
    So while you’re right that it probably gets translated into a case expression, Mark’s translation is completely equivalent.

  19. Don Stewart

    To find out what GHC actually does to your code, add -ddump-simpl -O to the command line, and we see:

    fact 0 = 1
    fact n = n * fact (n - 1)

    compiles (yay for compilation via transformation!), to:

    fact n = case n of
        0 -> 1
        _ -> case fact (n - 1) of m -> m * n

    (modulo a worker/wrapper split)

  20. ittay

    > factlist = 1 : (zipWith (*) [1..] factlist)
    won’t that produce a list [1 1 2 6 …] ? then factl 2 gives you 1

  21. Masklinn

    > won’t that produce a list [1 1 2 6 …] ? then factl 2 gives you 1
    That will produce a list [1,1,2,6,24,..] indeed, but Haskell lists are 0-indexed, so (factlist !! 2) will yield 2, and (factlist !! 20) will yield 2432902008176640000 which should be correct.

    Prelude> take 20 factlist
    [1,1,2,6,24,120,720,5040,40320,362880,3628800,39916800,479001600,6227020800,87178291200,1307674368000,20922789888000,355687428096000,6402373705728000,121645100408832000]

  22. Daniel Martin

    Masklinn, for people familiar with other languages and not with Haskell, you might point out that the expression

    fact = product.enumFromTo 1

    means this:

    fact = (product) . (\x -> enumFromTo 1 x)

    And that . is the Haskell way of saying “function composition”.
    It’s just that currying (especially the automatic currying that Haskell does) is a very uncommon language feature, and many languages use . as a syntax feature to access properties that are part of a larger structure, and not as an operator with relatively low binding. This makes your rephrasing of factorial particularly cryptic. (I must say, I’ve never understood the “point free” school of Haskell programming that says “avoid formal arguments whenever possible”)

  23. r.g.

    I did the following in python 2.3.4:
    def f(x): return reduce(lambda a, b: a*b, xrange(1, x+1))
    Calculating the factorial of 100,000 took 3.5 CPU minutes on an Intel(R) Pentium(R) D CPU 3.20GHz.

