Functional Programming Comes to the Macintosh! Introducing Swift!

So, Apple just recently announced a new programming language! For people like me, that’s big news. (In case you’re a relatively new reader of this blog, by background I’m not a mathematician. I’m a computer scientist, and I did my PhD in programming language design and implementation. I’m utterly obsessed with programming languages. Last time I set out to count how many languages I’d learned, it was over 120, and I’ve at least read the specifications of another 30 or so since then. I don’t claim to be able to program in all of those today – but they’re all things that I have written something in at one time or another. Like I said, this is an obsession for me!)

Before I go into describing Swift, I’m going to go through a bit of history. Swift is a hybrid language, in a way that reflects its origins.

Where it came from: a step back into Swift’s history

To understand Swift, you need to know a bit about where it came from.

Since the dawn of OSX, the bulk of software development on the Macintosh has been done in a language called Objective-C. Objective-C is a lovely programming language (for its time), which never really caught on outside of OSX and its predecessor NeXTStep. Swift is a hybrid language, combining object-oriented programming and functional programming; Objective-C, its ancestor, was a hybrid between old-fashioned procedural imperative programming and object-oriented programming.

Objective-C dates back to around 1984 or so. Back then, the main programming languages that were used for developing applications were predominantly procedural imperative languages, like Pascal and C. What the “imperative” part of that means is that they’re languages where you specify what you want the computer to do using a sequence of very precise steps. The procedural part means that the main structural principle in those languages is dividing code up into functional elements called procedures or subroutines.

For example, suppose you wanted to implement a binary search tree. In a procedural language, you’d write something like:

typedef struct BinarySearchTreeNode {
    char *value;
    struct BinarySearchTreeNode *left;
    struct BinarySearchTreeNode *right;
} BinarySearchTree;

/* Create a new tree containing a single value. */
BinarySearchTree *create_tree(char *str) { ... }

/* Insert a value into an existing tree. */
void add_value(BinarySearchTree *tree, char *str) { ... }

/* Internal helper -- but nothing stops outside code from calling it. */
void _rebalance(BinarySearchTree *tree) { ... }

Even though you’re implementing a data structure, the fundamental code structure doesn’t reflect that. You’ve got a declaration of a datatype (the struct), and you’ve got a collection of functions. The functions aren’t associated with the declaration, or with each other. The code is organized around the functions, and any connection between the functions and the data structures – or between different functions – isn’t reflected in the structure of the code.

If you think in terms of abstractions: the only abstraction in the procedural version is that one procedure can’t access local variables defined in other procedures. All of the details of the BinarySearchTree data structure are open and accessible to anyone who feels like messing with them. Even operations like the _rebalance function, which is intended to be used only in the implementation of other BinarySearchTree functions, are callable by anyone who wants to call them.

In the late 1980s, that started to change. The idea of object orientation came onto the scene. In object-oriented programming, you’re still writing imperative code, but the overall organization is based on data structures. You implement a data structure, and associated with it are the operations on that structure. In the object-oriented system, the operations on a data structure effectively became part of the structure itself. Re-writing our example above in an object-oriented way, the code is similar, but the principal organizational structure changes. Instead of a data type declaration and a set of functions that are only related to one another by our understanding, the functions (now called methods) are declared and implemented as part of the data structure.

@interface BinarySearchTree: NSObject
- (id)initWithKey: (NSString *)key value: (id)value;
- (void)addKey: (NSString *)key withValue: (id)value;
@end

@implementation BinarySearchTree {
  NSString *key;
  id value;
}
- (id)initWithKey: (NSString *)key value: (id)value { ... }
- (void)addKey: (NSString *)key withValue: (id)value { ... }
- (void)rebalance { ... }
@end

Only the implementor of the BinarySearchTree can actually see the internals of the implementation. The rebalance method isn’t declared in the public interface of the class, so users of the BinarySearchTree can’t call it.

There’s more to object-orientation than this – in particular, there’s an idea called inheritance. Inheritance lets you declare a new data structure as being based on another:

@interface BinaryHeap: BinarySearchTree { }
- (NSString *)pop;
- (void)push: (NSString *)s;
@end

Inheritance makes it easy to reuse code. To implement a heap, we don’t need to re-implement the binary search tree maintenance methods. We just implement the new methods of BinaryHeap using the BinarySearchTree operations that it inherits.

Object-orientation seems obvious in retrospect, but back in the late 80s, it was absolutely revolutionary! All of us who’d been programming in old languages were entranced by this new idea, and we wanted to try it!

But there was a problem. When object-oriented programming started becoming popular, most of us were programming in C. We couldn’t just abandon C for some new language, because we relied on tons of existing code, all of which was written in C. For example, back then, I was doing most of my programming on an Amiga. The Amiga UI libraries were all written in C, and if I wanted to write an Amiga program, I needed to use those libraries. I couldn’t just grab Smalltalk and start coding, because Smalltalk couldn’t call the libraries I needed to use.

The way that we started to write object-oriented code was with hybrid languages based on C – languages that contained the familiar C that could interface with our existing systems, but which also had extensions for writing object-oriented programs. There were two main hybrids for C programmers, based on the two main schools of object-oriented programming. There was C++, based on the Simula model of object-orientation, and Objective-C, based on the Smalltalk model. With C++ or Objective-C, you could use object-oriented programming for all of your new code, while still being able to use non object-oriented libraries.

C++ is much more familiar to most people. In the mainstream community, Objective-C never caught on, but C++ became the dominant systems programming language. But Apple, because of OSX’s roots in NeXTStep, embraced Objective-C. If it weren’t for them, Objective-C would likely have completely disappeared – outside of the Apple world, no one really used it.

Objective-C was an absolutely lovely language for its time. But it was invented around 1984 – it’s an old programming language. Lots of new ideas have come along since the days of Objective-C, and with its roots in C, it’s really hard to make any big changes to Objective-C without breaking existing code.

Now, we’re at the cusp of another big change in programming. Functional programming seems to be becoming the newest rage. The way that functional programming is catching on is very similar to the way that object-orientation did it before: by hybrids. If you look at many of the new, trendy programming languages – such as Scala, Clojure, and Rust – what you find is basically a subset which is the old familiar object-oriented language, and a bunch of functional stuff integrated into it.

With Objective-C really starting to look pretty crufty, Apple did the obvious thing: they made a break, and introduced a new hybrid functional/object-oriented language, which they named Swift.

Introducing Swift

In the announcement, they described Swift as “Objective-C without the C”, but I’d describe it more as “Objective-C meets SML”. Swift is a hybrid language in the tradition of Objective-C, but instead of being a hybrid of procedural and object-oriented languages, it’s a hybrid of object-oriented and functional languages.

Let’s start with a simple example. In most places, it’s traditional to start with “hello world”, but I think that that’s both boring, and not particularly enlightening. I like to start with the factorial function.

func fact(n: Int) -> Int {
  if n == 0 {
    return 1
  } else {
    return n * fact(n - 1)
  }
}

let x = fact(10)
println("The factorial of 10 is \(x)")

It’s pretty standard for a modern-ish language. The only thing that’s at all unusual is that if you look inside the println, you can see that Swift does string interpolation: if you put \(expression) into a string, then the result of that expression is converted into a string, and inserted into the enclosing string. That’s been common in scripting languages for a long time, but it hasn’t been a standard thing in systems programming languages like Swift. It’s a small thing, but it’s a good one.

The first big surprise in Swift is how you can test that function. In Objective-C, you would have had to create a full program, compile it, and run the executable. That’s not the case in Swift. In Swift, you can open a playground, which is a fully interactive scripting environment. It’s not that Swift comes with an interpreter – a playground is a workspace with full access to the compiler, and it can compile and evaluate expressions on the fly! The playground isn’t limited to just text – you can create UI elements, Interface Builder-based UIs, interactive games – just put the code into a workspace, and edit it. As the code changes, the moment it becomes syntactically valid, the playground will dynamically compile it using the LLVM backend, and execute the generated code for you!

All I can say is: It’s about time! I’ve been seeing toy versions of this kind of interactivity since I was an undergrad in the 1980s. It’s about time that we got it in a full-blown systems language. Just this one little feature makes me really want to like Swift. And the rest of the language, while not being exceptional, is good enough to make me want to start coding in it.

In terms of syntax, Swift is mostly pretty straightforward. It’s got a clear C heritage, but with a whole lot of cleanups.

The Type System

People like me really care about type systems. Type systems can be controversial – some people love to program with strong types, and others really hate it. I’m very much in the camp that loves it. In my experience, when I’m programming in a language like SML or OCaml or Scala, which have really powerful, expressive type systems, my code very frequently ends up running correctly the first time I execute it. That’s not because I’m some kind of superhuman genius programmer, or because the languages I like somehow prevent me from making mistakes. It’s because in a strong type system, the particular kinds of mistakes that I’m most likely to make are exactly the kinds of things that will show up as a type error. I make just as many mistakes in a strongly statically typed language as I would in a dynamically typed one, but the type system catches them up front, before I can ever run my program. For my way of programming, I like that – I get all of those stupid errors out of the way quickly, so that I can spend my time tracking down the hard stuff.

Swift has a strong type system, in roughly the vein of the SML family of languages. You have the usual array of simple types (String, Int, Float, Character, etc.). In addition, you can create your own data types, and those types can be generic, with type parameters: in any declaration, you can include type parameters in angle brackets. For example, if you wanted to implement a generic list type, you could declare it as:

class List<T> {
  func insert(v: T) { ... }
  ...
}

T is a type parameter. When you declare a variable that can store a list, you need to specify the value of that type parameter:

  var l: List<Int> = ...;

You can also declare constraints on type parameters. A constraint is a way of saying “You can only use a type parameter that meets these requirements”. For example, if you wanted to implement a generic binary search tree, you’d need to be able to compare elements. If you just wrote BinarySearchTree<T>, you wouldn’t be able to use the less-than operator, because not all types can be compared that way. So you need to declare a constraint:

class BinarySearchTree<T: Comparable> { ... }

Now, if you were to try to create BinarySearchTree<Dictionary<String, String>>, you’d get a compile error. You can’t use the less-than operator on dictionaries, so they don’t satisfy the constraints.

The constraints are built using the object oriented features of Swift. Swift allows you to define something called a protocol, which is equivalent to an interface in Java, or a trait in Scala. A protocol just defines a set of methods; when you declare a class, you can declare that it implements a protocol, and the compiler will check that it implements the needed methods. When you write a type T: P, it just means that the type T implements the protocol P.
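To make that concrete, here’s a minimal sketch of a protocol, a class that adopts it, and a generic function constrained by it. (The names here are my own inventions, not anything from Apple’s libraries.)

protocol Describable {
  func describe() -> String
}

class Point: Describable {
  let x: Int
  let y: Int
  init(x: Int, y: Int) {
    self.x = x
    self.y = y
  }
  // This method satisfies the Describable protocol.
  func describe() -> String {
    return "(\(x), \(y))"
  }
}

// A generic function whose type parameter is constrained to
// types that implement Describable:
func describeAll<T: Describable>(items: Array<T>) -> Array<String> {
  var result: Array<String> = []
  for item in items {
    result.append(item.describe())
  }
  return result
}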

The main innovation in the type system (at least for a systems programming language) is optional types. In Swift, when you declare a variable using var foo, the variable foo will always contain a value. It can’t be nil. If you want a variable to potentially be empty, you need to explicitly say so, by declaring it with an optional type, which is written as a type-name followed by a question mark, like “var foo: String?“.

When a variable or parameter has an optional type, you can’t just access the value in that variable directly. You need to do something to check if it’s got a value in it or not. The easiest way to do that is using a conditional:

let opt_value: String? = some_function_call()
if let value = opt_value {
  value.do_something()
}

In many cases, you can make the code even simpler, by using option chaining:

let opt_value: String? = some_function_call()
opt_value?.do_something()

The question-mark in an expression does an automatic test-and-dereference. If the value is present, then the following operation is executed. If it’s absent, then the expression returns nil. The thing after the question mark doesn’t have to be a method call – it can also be a subscript, or a field reference. And you can (as the name suggests), chain these things together:

    thing_which_returns_option?.my_foo?.do_something()?[42]

Finally, you can force a dereference of a value directly using !. If you know that an optional value is present, and you don’t want to write out a conditional to test it, you can append an exclamation point to the end of the expression. It will generate a run-time nil-dereference error if the optional value is empty.
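For example, reusing the hypothetical opt_value from above:

let opt_value: String? = some_function_call()
let definite: String = opt_value!  // runtime error here if opt_value is nil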

Handling optionality this way works a lot like what I said about static typing before. It doesn’t make errors go away – the errors that would have turned up as null pointer dereferences can still happen in your code. But most of them will be caught in advance by the compiler; and the places where a nil dereference can happen that won’t be caught by the compiler will be easy to find, because they’ll be explicit dereferences of an optional, using “!”. (I’ve written about this idea before; see here for details.)

Object-Orientation in Swift

Let’s move on a bit. I spent a lot of time talking about data structures and object-orientation. Swift’s version of object-orientation is pretty standard for a modern programming language. For example, here’s a sketch of an implementation of a simple tree-based dictionary. (This doesn’t compile yet; it should, but right now, it crashes the compiler.)

class BinarySearchTreeDictionary<T: Comparable, S> {
  let key: T
  let value: S
  var left: BinarySearchTreeDictionary<T, S>?
  var right: BinarySearchTreeDictionary<T, S>?

  init(key: T, value: S) {
    self.key = key
    self.value = value
    self.left = nil
    self.right = nil
  }

  func insert(key: T, value: S) {
    if key < self.key {
      if let l = self.left {
        l.insert(key, value: value)
      } else {
        self.left = BinarySearchTreeDictionary<T, S>(key: key, value: value)
      }
    } else {
      if let r = self.right {
        r.insert(key, value: value)
      } else {
        self.right = BinarySearchTreeDictionary<T, S>(key: key, value: value)
      }
    }
  }
}

If you’ve been doing much programming in any modern language, this should mostly look pretty familiar. There are a couple of things that might need explanation.

  • Two of the fields of our class are declared with “let”, and two with “var”. As I said in the introduction, Swift is a hybrid between functional and object-oriented programming. The two declaration types reflect that: identifiers declared with “let” are constants, and identifiers declared with “var” are variables. In this binary search tree, we’re not going to try to do in-place rebalancing, so we don’t need the key or the value to be mutable.
  • The type of the left and right fields are option types, as we saw in the previous section.
  • Inside of the implementation of insert, you can see one of the effects of the optional type declaration. You can’t just use an optional value, because it might be nil. There are several ways of using optionals. In this case, we use conditional unwrapping, where you put a constant declaration into the condition of the if statement. If the optional value is present, then the constant is bound, and the body of the conditional is executed.

In Swift, classes are values that are always passed by reference. If you want to implement a class that’s passed by value, you can use a struct. Structs are declared the same way as classes, except that they (obviously) use the struct keyword instead of class. In a struct, by default, all methods are constant: they’re not allowed to modify the structure in any way. If you want a method to be able to modify the structure, you need to explicitly declare that using the mutating keyword. If you declare a method as mutating, you can change anything about it that you want – up to and including entirely replacing it by assigning a new structure value to self!

struct SearchTreeNode<Key: Comparable, Value> {
  // (Field declarations elided; left and right are optional subtrees.)
  mutating func pivot_right() {
    var tmp = self
    self = self.left!       // the left child becomes the new root
    tmp.left = self.right   // the new root's old right subtree moves under the old root
    self.right = tmp        // and the old root becomes the new root's right child
  }
}
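To make the value/reference distinction concrete, here’s a tiny sketch of my own:

struct PointStruct { var x = 0 }
class PointClass { var x = 0 }

var s1 = PointStruct()
var s2 = s1        // assignment copies the whole struct
s2.x = 5           // s1.x is still 0

let c1 = PointClass()
let c2 = c1        // assignment copies only the reference
c2.x = 5           // c1.x is now 5: both names refer to the same object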

Swift even goes as far as incorporating a bit of aspect-oriented programming in its object system. Aspect-oriented programming is a crazy thing that mostly grew out of object-orientation. The idea of it is that sometimes, you’d like to be able to inject behaviors into your code to add functionality to things like assignments. The way that you do that is by specifying that there’s some action you want to augment, and providing a function containing the code that you want to inject into that action.

For a pretty typical example, you might want to provide a public field in your class, but you want to make sure that no one can assign an invalid value to it, so when someone assigns a value, you provide code to check it for validity.

In Swift, for any field or global variable, there’s a pair of implicit functions that you can define, called willSet and didSet. willSet is called before the assignment happens, and is passed the incoming value; didSet is called after the assignment, and its parameter (if you declare one) is the old value, since the new value is already stored in the variable.

For example, if you were implementing a volume control, and you wanted to make sure that no one set the volume higher than a user defined maximum, you could write:

struct VolumeControl {
  var max: Int
  var current: Int {
    didSet {
      // didSet runs after the assignment; if the new value is too big, clamp it.
      // (Re-assigning current inside didSet doesn't re-trigger the observer.)
      if current > max {
        current = max
      }
    }
  }
}

There’s yet another type of object or data structure that you can create with Swift. They’re called enums. From the name, you might think that enums are enumerations. They can certainly be used to implement enumerations, but they’re useful for much more than that. They’re what programming language geeks like me call algebraic types, or what Scala calls case classes. An enum is any data type whose value can take multiple forms.

Enum types are a bit more limited than I might have liked, but they’re a really nice thing to have. They’re intended to be used for simple, lightweight values that could take several different forms. For example, a few weeks ago, I posted my parser combinator library. In the parser combinators, a parse result could be one of three things: a successful parse (which includes a product value and a parser input containing the unconsumed inputs), a failed parse (which includes nothing, and indicates that the parse failed without an error), or an error. To implement that in Swift, I’d use an enum:

enum ParseResult<In, Out> {
  case Success(Out, ParserInput<In>)
  case Failure
  case Error(String)
}

The way that you use enums usually comes down to pattern matching:

  let e: ParseResult<In, Out> = parser.parse(original_in)
  switch e {
    case .Success(let result, let rest): return other.parse(rest)
    case .Failure: return .Failure
    case .Error(let msg): return .Error(msg)
  }

Finally, there’s one more kind of data structure: tuples. Tuples are light-weight structures which don’t require a separate declaration of the type. You can just write tuple-values directly, and the type is inferred:

  let tuple = (1, 3, "hi there")

Tuples are a wonderful addition. The main use for tuples is allowing functions to return multiple values. You no longer need to muck about with bunching stuff into untyped lists, or stuffing some of your results into out-parameters. You can just return multiple values.
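For example, here’s a sketch (my own, not from the Swift book) of a function that returns both the smallest and largest element of an array in one shot:

func minmax(values: Array<Int>) -> (Int, Int) {
  // Note: this sketch assumes a non-empty array.
  var lo = values[0]
  var hi = values[0]
  for v in values {
    if v < lo { lo = v }
    if v > hi { hi = v }
  }
  return (lo, hi)
}

let (smallest, largest) = minmax([3, 1, 4, 1, 5])  // (1, 5)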

Functional Programming

The really new stuff in Swift is all functional programming stuff. Functional programming is the new hot thing in programming, for good reason. There are two main facets to Swift’s version of functional programming: managing mutability, and first-class functions and closures.

Managing Mutability

In programming, you can’t avoid mutable state. It’s a fact. Most of the time, it’s the reason that we’re using a computer. For example, I’m using a program called Atom to write this blog post. There wouldn’t be any point in using Atom if I couldn’t modify the file I’m writing.

But mutable state makes things harder. In a large, complex software system, code which avoids mutating things is usually easier to understand, less prone to errors, less prone to unexpected side effects, and generally easier to interact with.

That’s the driving force behind the new hybrid languages: we want to be able to write functional code when it’s possible. But programming in a purely functional language is frequently really awkward, because you need state – so, in a functional language, you need to find a way to fudge the state in, using monads, streams, or something else.

The key to using functional programming in a mutable-state world is controlling mutability. That is, you want to be able to glance at some code, and say “This is not going to change anything”, or “This might change stuff”. It’s less about making it impossible to change stuff than it is about making it clear just where stuff could be changed, and making sure that it can’t change unless you specifically declared its mutability. I’m a big fan of keeping things functional whenever it’s reasonable, as I wrote about here.

Swift does a bunch of things to create that guarantee:

  1. Identifiers are declared with either “let” or “var”. If they’re declared with “let”, then the identifier names a constant which cannot be altered by assignment. If the value of a constant identifier is a struct or enum, its fields cannot be altered by assignment either.
  2. Methods on structs cannot alter the underlying struct unless the method is specifically annotated as “mutating”. Without that annotation in the declaration, the object cannot be altered by any of its methods.
  3. Function parameters are, by default, immutable. To allow a parameter to be changed, you need to specifically annotate it with var or inout in its declaration. If it’s a var, then changes to the parameter will not affect the original value in the caller; they will be made on a local copy.
  4. Structs and enums are passed by value. That means that the structure is (conceptually) copied when it’s passed as a parameter to a function. Unless you annotate the function parameter as an output parameter, the parameter cannot be modified; even if you call a mutating method, the mutation will be on the local copy of the value, not on the caller. (The reason for the “conceptually” up there is that the object is copied lazily; if you try to alter it, the copy happens at the point of the first alteration, so passing complex structures by-value doesn’t incur a copy cost unless you modify it.)
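Here’s a small sketch of my own showing how the first two of those rules play out:

struct Counter {
  var count: Int = 0
  // No "mutating" annotation, so this method can't change the struct.
  func current() -> Int {
    return count
  }
  // Assigning to count requires the mutating annotation.
  mutating func increment() {
    count += 1
  }
}

var mutable = Counter()
mutable.increment()    // fine: mutable was declared with var

let frozen = Counter()
// frozen.increment()  // compile error: can't mutate a struct bound with let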

Functions and Closures

The other side of the functional programming support isn’t about restricting things, but about enabling new things. And here Swift really shines. Swift has support for first-class functions (functions as parameter and return values), anonymous functions, curried functions, and full closures.

Swift’s support for first-class functions means that functions are just values, like any other value type. A Swift function can be assigned to a variable, passed to a function, or returned as the result type of a function.
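For instance, a trivial sketch of my own:

func double(n: Int) -> Int { return n * 2 }

let f: Int -> Int = double  // store a function in a constant
f(21)                       // returns 42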

For example, suppose you wanted to write a generic sort function – that is, a sort function that didn’t just compare values using the standard less-than operator, but which could take any function that a user wanted to do comparisons. In Swift, you could write:

  func sort_generic<T>(list: Array<T>, comparator: (T, T) -> Bool) -> Array<T> {
    ...
    if comparator(v, w) { ... }
    ...
  }

This is something that those of us with experience with Lisp have been absolutely dying to have in a mainstream language for decades.

Closures are a closely related concept. A closure is a function value with one really interesting property. It closes over the environment in which it was declared. To understand what I mean, let’s look at a really simple example, copied from the Swift specification.

func makeIncrementor(amount: Int) -> () -> Int {
  var runningTotal = 0
  func incrementor() -> Int {
      runningTotal += amount
      return runningTotal
  }
  return incrementor
}

This function returns a value which is, itself, a function. The interesting thing is that the function can use any variable defined in any of the scopes enclosing its declaration. So the function incrementor can access the amount parameter and the runningTotal variable, even after the makeIncrementor function has returned. Since those are local to the invocation, each time you call makeIncrementor, it creates new variables, which aren’t shared with other invocations.

So let’s look at what happens when you run it:

let f = makeIncrementor(2)
let g = makeIncrementor(5)
f()   // returns 2
f()   // returns 4
g()   // returns 5
f()   // returns 6
g()   // returns 10

Anonymous functions make it really easy to work with first-class functions. You don’t need to write a function declaration and then return it the way the example above did. Any time you need a function value, you can create it in-place as an expression.

func makeAnonIncrementor(amount: Int) -> () -> Int {
  var runningTotal = 0
  return {
    runningTotal += amount
    return runningTotal
  }
}

If the anonymous function takes parameters, you declare them before the function body, separated from the body by “in”:

  sort_generic(mylist, { (x: Int, y: Int) -> Bool in return x > y})

Currying, finally, is a shorthand way of writing function values. The idea is that if you have a two-parameter function, you can rewrite it as a one-parameter function that returns another one-parameter function. That sounds confusing until you see it:

  func add_numbers(x: Int, y: Int) -> Int {
      return x + y
  }

  func curried_add_numbers(x: Int) -> Int -> Int {
    return { (y: Int) -> Int in return x + y }
  }

If I want to add 3+2, I can either call add_numbers(3, 2), or curried_add_numbers(3)(2): they do the same thing.

Swift provides a special way of declaring curried functions:

  func better_curried_add_numbers(x: Int)(y: Int) -> Int {
    return x + y
  }
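The payoff of currying is partial application: you can supply the first argument now, and get back a function that’s waiting for the second. For instance, using the curried_add_numbers function from above:

  let add_three = curried_add_numbers(3)  // a function of type Int -> Int
  add_three(4)                            // returns 7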

Pattern Matching

Pattern matching isn’t, strictly speaking, a functional language thing; but it’s an idea that was introduced to most programmers by its inclusion in functional programming languages. The idea is simple: you can write assignments or conditionals that automatically decompose a data structure by matching the structural pattern of that data structure.

As usual, that becomes a lot clearer with an example.

    let (x, y, z) = (1, 2.7, "three")
  

The right-hand side of that assignment is a tuple with three values. The first one is an integer, the second is a floating point number, and the third is a string. The left-hand side of the assignment has exactly the same structure – so the Swift compiler will match the pieces. That’s equivalent to:

    let x = 1
    let y = 2.7
    let z = "three"
  

That’s particularly useful for multiple return values. Strictly speaking, Swift doesn’t really have multiple return values: every function returns exactly one value. But that one value may be a tuple containing multiple values. So you can write things like:

  let (result, error_code) = my_function(parameters)

Pattern matching also happens in switch statements, where each branch of the switch can be a different pattern, as we saw earlier in the ParseResult example.
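Patterns in a switch aren’t limited to enum cases; you can also match on tuple structure, binding pieces as you go. A quick sketch:

let point = (0, 3)
switch point {
case (0, 0):
  println("at the origin")
case (0, let y):
  println("on the y axis at \(y)")
case (let x, 0):
  println("on the x axis at \(x)")
case (let x, let y):
  println("at (\(x), \(y))")
}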

All of this stuff means that I can write really beautiful functional code in Swift. And that’s a really, really good thing.

The Warts

For the most part, I’ve been saying nice things about Swift. But it’s not all goodness. There are some problems, and some of them are pretty big.

The biggest thing is: no concurrency, no threading, no locks, no message passing. There’s absolutely no mention of concurrency at all. These days, that’s downright shocking. My phone has 4 CPU cores. Every machine that this language is intended to program, from the iPhone to the iPad to the Macintosh, has multiple CPUs, and programs for them need to deal with concurrency. But there’s not a shred of support in the language. That is, to put it mildly, absolutely insane. You can, of course, hack concurrency in via libraries, but most recent languages – Rust and Go come to mind – make concurrency a fundamental goal for a really good reason: concurrency is hard to get right, and concurrency hacks come at a significant cost in efficiency, correctness, and optimizability. (Even older languages like Java have concurrency deeply embedded in the structure of the language for exactly that reason; leaving it out of Swift is just shocking.)

Another big one is exception handling. Again, there’s absolutely nothing in Swift. This is particularly surprising because Objective-C had an exception handling system retrofitted into it. The libraries that Swift needs to be able to interact with use an ugly macro-based exception handling system – why is there nothing for it in Swift?

There are also a fair number of smaller issues:

  • The syntax is decidedly odd at times. Not a big deal. But there are some particular syntactic warts. There’s a range expression, which is fantastic. In fact, there are two, and they’re really hard to tell apart. 1..4 is a shorthand for (1, 2, 3); 1...4 is a shorthand for (1, 2, 3, 4). I predict much pain from off-by-one errors caused by using the wrong one.
  • The different ways of declaring objects have seemingly arbitrary rules. A struct is allowed to have class variables; a class can’t. Why? I have no idea; the limitation is simply stated in the docs as a fact.
  • Optionals aren’t as powerful as they could be. If you look at a language like Scala, optionals are implemented using the same type semantics as anything else: they’re a parametric type, Option[T]. That makes it possible to do a bunch of really nice stuff without needing to have any special cases wired into the language. In Swift, they’re a special case, and limits in the type system make it much harder to do many things. As a result, there’s a bunch of special case syntax for things like “option chaining” and forced dereferencing. This isn’t a huge deal, but it’s frustrating.

And the big problem at the moment: the implementation is incredibly buggy. Not just a little bit buggy: virtually every code example in this article crashed the compiler. Obviously, it’s very early days for Swift, so it shouldn’t surprise anyone that the implementation is immature and buggy, but the degree of bugginess is astonishing. Examples from Apple’s own book on Swift crash the compiler! They’re claiming that with the release of OSX Yosemite this fall, you should be able to start writing real applications using Swift. They’ve got a long way to go, and not a lot of time if that’s going to be true.

Conclusion

This article, despite being ridiculously long, still only really scratches the surface of Swift. I haven’t talked about how Swift does arrays and dictionaries, which is really interesting. I haven’t talked much about syntax. I barely touched on pattern matching and options. I didn’t mention the module system at all. I didn’t talk about named parameters, which are a bit weird, but nice. There’s so much more to Swift than I could possibly talk about here.

The thing to take away, then, is that overall, Swift is very exciting. Functional/Object-Oriented hybrid programming, with a strong parametric type system, and first-class interactivity in a systems programming language! I’m really looking forward to using it. I’m desperately hoping that Apple will open-source the compiler, because I want to be able to use Swift for lots of things, not just Macintosh programming. I don’t have super-high hopes for it, but it’s definitely possible, and I expect that open-source people are probably already working on a GCC front-end.

As an experiment, I’ve been trying to port my parser combinators to Swift. But at the moment, the Swift compiler chokes on the code. When I manage to debug my code/Apple manages to debug their compiler enough to make it work, I’ll post an update with that code, and more info about what it’s really like to write Swift code.

35 thoughts on “Functional Programming Comes to the Macintosh! Introducing Swift!”

    1. markcc Post author

      That’s fantastic! From the book, I got the idea that they weren’t just sugar for an ordinary type, and I’ve had so much trouble getting *anything* through the compiler that I didn’t even bother to try.

      I’m delighted to hear that I’m wrong about that!

  1. Trevor

    “1..4 is a shorthand for (1, 2, 3); 1...4 is a shorthand for (1, 2, 3, 4).”

    They cleverly made it the opposite of Ruby for maximum confusion.

    1. MPR

      I would say Apple’s version makes more sense, using two dots for a smaller range, three dots for a larger one.

  2. testing2

    “Swift has a strong type system, in roughly the vein of the SML family of languages. ”

    I’m more interested in the type inference. It seems they use local type inference like Scala. But what about the hairy details? (Reification, type refinement, value restriction, monomorphic restriction, et cetera.) It also seems, with all the bugs you’ve experienced, that maybe those details aren’t hashed out.

    1. markcc Post author

      Yeah – the documentation isn’t clear about exactly what the type inference semantics are, and the implementation is so hopelessly flakey at the moment that there’s no way that I could experiment to try to figure it out.

  3. pzmyers

    Grumble, grumble…Object Pascal! It used to be THE language on Mac systems, way way back when.

    Oh, well, this was a useful summary to help me make a transition.

  4. Kay-Uwe Kirstein (@KayUweKirstein)

    Thanks for the nice summary! I am afraid that Swift will be restricted to the Apple world like Objective-C (there is no real library/framework support for Windows or Linux). So, on Windows & Linux we have to stick with Rust, which is not a bad choice at all…

  5. Leon H

    In the .NET world we have F#. Very mature and tremendously powerful hybrid language. I recommend it to everyone who programs on Windows.

    1. markcc Post author

      I’ve heard very good things about F#, but since I haven’t had anything that runs Windows in over 8 years, I haven’t had a chance to try it out.

    1. markcc Post author

      Yeah, but to use it, I’d have to use windows. I think that missing out on F# is a fair trade for staying away from that mess :-).

  6. Franklin Chen

    In any case, the real story is that ML (which I still remember programming in for Sun SPARCstations in the mid-1990s, in the form of Standard ML of New Jersey and Caml Light) is finally winning. Meanwhile, it’s interesting that while Microsoft and Apple have come out with dialects of ML, Google has not entered this language space.

  7. Vilx-

    “This is something that those of us with experience with Lisp have been absolutely dying to have in a mainstream language for decades.” – I don’t get it. Javascript and C# – two VERY mainstream languages – have had this for over a decade. What am I missing here?

    1. markcc Post author

      In terms of those two:

      I don’t think of C# as mainstream. It probably has to do with the kind of thing that I do for a living; but it’s a proprietary language that only works on Windows, which is a platform that I have zero access to. Over the last 10 years, I’ve worked for four different companies – and at all of them, I’ve had a Mac laptop, and Linux servers. I forget how unusual that is. But the fact remains that out of all of my coworkers at IBM, Google, Foursquare, and Twitter, I think I’ve known one guy who did anything real in C# – and he was a former Microsoft eng. So it doesn’t feel mainstream to me.

      As far as Javascript… I should have said mainstream systems programming language. I’m not a Javascript fan – I prefer strong typing, it’s got the most brain-dead broken object system I’ve ever seen, I hate the way it handles numbers, and I don’t like the syntax – but aside from the underlying object stupidity, which can be worked around, it’s a very powerful language with a lot to recommend it – in particular, closures. But one thing it clearly is not is a systems language.

      1. wds

        I guess Java doesn’t technically count as a systems language. C++ templates allow something similar though, no?

        1. markcc Post author

          Java didn’t have closures until just a couple of months ago, and even so, they’re pretty damned broken.

          C++ templates are not closures. Not by any reasonable stretch of the imagination! If you work hard at it, you can make them behave almost like anonymous functions, but they’re missing the “close” part of “closure”: they don’t close over environments, and they can’t really, because C++ doesn’t have any way to do that.

          1. Vilx-

            Well, C# has been in the top-10 of any programming language popularity poll for I don’t know how many years already. So I’d say it’s pretty mainstream. It’s not used as a systems language, true, but in the Enterprise sector (which really is a huge chunk of the market) it’s been a de-facto standard for the last decade (Java too, and maybe PHP).

            In the web development world however closures are old news. Most popular Javascript libraries today (like jQuery) have embraced them, so any web developer who is worth their salt is familiar with them. I do believe that this has also been the driving force to implement them in other languages (C# might be popular, but it’s not very influential on other programming languages).

            By the way – C++11 has closures, doesn’t it? I don’t really know the language (though it fascinates me, I’ve never had the time to study it properly), but I do read the news and there was definitely something about it… Googling for it also reveals lots of incriminating evidence.

            I agree though – one’s working environment determines a lot about what languages you get to see in everyday use. For example, I don’t know anyone who would use C/C++/Objective-C. 🙂

          2. wds

            Hmm, I didn’t really read that section like that. You only write the part about closures after the sort example, to which the comment was attached. I was only thinking of that example, i.e.

            func sort_generic<T>(list: Array<T>, comparator: (T, T) -> Bool) -> Array<T> {

              if comparator(v, w) { … }

            }

            I think it’s just a bit of a weak example. C has functions as parameters of course (well, pointers to…) and C++ surely has the necessary support to implement typed comparators. I guess it’s not a first-class function and as such, is not quite as succinct, but as far as examples of where first-class functions shine, this one seems fairly weak to me. I think it would be more accurate to say that we’ve been waiting for a simple, easy, strongly-typed way to do that. Okay, nitpicking.

            Thanks for the great overview BTW, I really like a lot of the language, but seems to me it’s going to end up having a compat-breaking version 2 at some point.

          3. markcc Post author

            Sorry the example wasn’t great; the post was already so ridiculously long, trying to put in a good example that demonstrated the use of closures in function parameters just didn’t seem like a good idea.

  8. David Starner

    It may make the iPhone a nicer environment to program, but it doesn’t seem to be anything that I’d want to switch from Scala for if I had a choice.

    I’ve been a bit curious why no one has produced a good pure functional programming language for the JVM or .NET. Why make a hybrid when you can do the heavy lifting in Haskell (or the like) and rely on Java for access to the outside world?

    From another direction, I’ve read the SPARK (provable Ada subset) documentation and I’ve poked at the Coq book recently. Neither of them are tools I’d love to work with yet, but the idea that you write your code in a language and if it compiles, it simply can’t crash like that, is very tempting. SPARK actually wouldn’t be hard for most programmers to pick up, but it’s so limited (no pointers, no unbounded containers) that it would be hard to program in. I guess the compiler isn’t bootstrapping so not really a nitpick on Swift.

    Oh, and the book is available for iPhones, Macs, etc. Silly me, thinking if you’re giving away for the new language for your system, you might want to make it available on other systems to lure new developers in. Whatever.

    1. markcc Post author

      I remain highly skeptical of “provable” programming.

      I’ve heard lots of people talk about how functional languages are great, because they let you prove correctness – but I’ve never seen a real-world functional program that anyone actually bothered to prove correct. In fact, I haven’t ever seen a useful definition of correctness for a large system.

      The problem with correctness proofs is that in real-world systems, we don’t have a formal definition of correctness. Things are sufficiently complicated that descriptions of correct behavior are as complex as the code we use to implement that behavior – which means that the definition of correctness is as prone to bugs as the code!

      In terms of your comment about the book: It’s not yet clear whether Apple has any intention of making Swift available outside of the Mac. With Objective-C, they never did anything more to share their changes with the non-Apple world than their licenses legally required. I think from their perspective, having Swift available on non-Apple platforms isn’t a priority, if they care at all.

      1. David Starner

        If you’re willing to write in as limited a language as SPARK (a subset of Ada with the power of Modula-2 minus the pointers and exceptions), if it passes the SPARK compiler, it will never divide by zero, attempt to access outside the bounds of an array, overflow an integer variable, or barring recursion, run out of memory. A function with preconditions will never be called with arguments that violate those preconditions, and a function with postconditions will never return values that violate them. That’s a pile of properties that may be far from a formal definition of correctness, but still prevent your program from misbehaving or crashing in certain ways.

        Coq is abstract and complex, and apparently has successfully been used to make a bug-free C compiler that does some moderate optimizations. SPARK is concrete and simplistic, and is apparently used in avionics and other high-safety fields.

        I thought it might be interesting to think about going into iPhone programming. I was thinking that even if the language wasn’t leaving Apple hardware, the books explaining it might.

      2. wds

        People are doing correctness proofs at the function/method level in languages like C too. It’s a useful thing when you’re doing the kind of byte-bit fiddling that human eyes are horrible at finding bugs in. I believe some linux kernel developers have found some value in doing that (in the I/O drivers?).

        I guess people are doing formal proofs of things like smartcard programs where the binary is guaranteed to be fixed in a ROM for a long time and the cost of bugs is extremely high, but I doubt there are any other areas in which you could justify the cost. Basically formal proofing is akin to writing the software twice and will still miss bugs.

        1. David Starner

          SPARK is widely used for aeronautics, where the cost of bugs is measured in human lives and specifications are incredibly tight. I understand some projects in Ada are so expensive because both code and the output assembly have to be checked to prove they aren’t doing bad things.

          Everything misses bugs. But we use Java (and co.) because array overflows and use-after-frees have cost literally millions, maybe billions of dollars in damages after security holes have been exploited. In Java, the heartbeat bug would have thrown an exception; in SPARK it would have failed to compile unless the programmer lied to the compiler and told it to trust him that an array overflow wouldn’t happen.

          Declarative programming is easier than imperative programming, so a proof is going to be easier than writing the code in the first place. Pre- and postconditions are basically making the code self-testing, and I’d argue that Ada 2012 and Eiffel pulling them front and center as part of programmer-programmer communication is a good thing, formal proving or not. SPARK is basically taking those pre- and postconditions, and forcing you to add assertions until it can prove all those pre- and postconditions hold (including those implicitly involved in division and array accesses.)

          Besides that, in neither SPARK nor Coq do you have to prove any property about your code that you don’t want to. The author of the Coq book proves his red-black trees are binary trees, but doesn’t prove they’re balanced, for example.

          To me, Coq is arcane and slow, and even if I could use it, finding another programmer who uses it would be hard. SPARK is easy; I wouldn’t want to work with a programmer who can’t figure out its Ada-subset substrate, and I don’t believe finding what assertions you need to convince the compiler is a huge deal. It’s just that no one really wants to write Modula-2 level languages any more.

          I think, hope, that formal proving will creep in through greater uses of pre- and postconditions, where it becomes normal for a compiler to say “Error: Assertion at line 450 will fail if function f (starting at line 445) is passed (1, 0, 2147483647, NULL).” Sort of like how pure functional languages are still a rarity, but functional programming elements are getting more common.

          (My apologies to Mark, as this seems to be getting long for such a tangent.)

          1. wds

            I’d like to add that nobody really constructs large C code bases anymore without some form of static and dynamic analysis, basically weak versions of what those languages can do. Coverity and valgrind-type tools catch a surprising amount of bugs. I think out of simple necessity of having to manage complexity, a certain amount of proofing is necessary, i.e. in network programming it should really be possible to easily be able to tell if you can or can’t overflow an input buffer. It’s unfortunate that the language of choice often makes this extremely easy to mess up.

      3. Joker_vD

        Well, a correct system would do good things, and not do bad things. I.e., it would process your e-mails as instructed, and it won’t send everyone’s password in response to a malformed e-mail.

        That’s, I suspect, is why many people are interested in proofs of correctness: they actually want proofs of security. That some actions will not happen no matter what inputs are given. You can’t test for that, can you?

        And with current approaches to testing, test suites become de-facto program specifications… well, I guess that explains why many people still dislike unit testing—it’s essentially writing a spec.

  9. Jonathan Badger

    Maybe it is just because they are both new languages taking cues from currently trendy ideas, but Swift reminds me a lot of Julia, the mathematical language which was the topic of the Wired article you disliked so much.

    They both have the idea of a language that “feels” like a scripting language while actually being efficient and lower level (this is what the claim you disliked about “not needing slow languages” is all about; if you can get interactivity from compiled languages, you don’t need interpreted scripting languages and their slowness)

    And they both are really into inferred types, which I know has been a feature of many functional languages since ML but only got (somewhat) mainstream with Scala.

    1. markcc Post author

      As I said then: it’s not an issue of compilation technology, but of language design.

      Swift’s playgrounds are absolutely lovely. But I would never use Swift for the things that I currently do in Python.

      You could retrofit Python with an LLVM backend. But it would still be slow as all hell compared to Swift. The very dynamic language semantics, which a lot of scripting work can depend on, make many optimizations impossible. The effort of building an LLVM backend for Python wouldn’t have enough payoff to make it worth doing: it would be faster than the current CPython – but compared to C++ or OCaml or Swift, or even Java, it would remain extremely slow. But for scripting, that’s fine.

      That was my point back then, and my opinion hasn’t changed. Different languages for different purposes, and you match the technology that you use to implement a language to its intended purpose.

      1. Jonathan Badger

        The problem is that scripting languages *aren’t* in general “used for their intended purpose” (which was historically to help system administrators automate small tasks on UNIX systems). Instead they become the standard language for *everything* by self-taught programmers.

        In my field of genomics, where many people come from a biological rather than CS background, people traditionally process data using Perl (and other scripting languages to a degree). This isn’t because scripting languages are particularly well suited to the task, but that they seem less intimidating to non-CS types (in earlier decades they probably would have used BASIC for similar reasons). These days scripts can be a significant bottleneck in data processing. But you aren’t going to get a biology postdoc to learn C and deal with segmentation faults and the like. But you just *might* get them to use a compiled language that seems as friendly as a scripting language.


