Innovation isn’t just hardware!

I’m a bit late to the party on this, but hey, such is life! I work on infrastructure at Twitter, and we’ve been going crazy trying to get stuff deployed in time to be ready for the World Cup. So I haven’t had time to write before now!

Anyway, late last week, a professor known for stupid self-promoting stunts announced that the Turing test had been passed! This was, according to said professor, a huge thing, a really big deal, a historic event!

(Perhaps almost as big a deal as the time he had an RFID chip implanted in his arm, and announced that now he was the world’s first cyborg!)

Lots of people have written about the stupidity of the claim. The alleged “winner” was a program that pretended to be a teenaged kid with ADD who wasn’t a native English speaker. It didn’t even attempt to simulate intelligence; it just tried to mislead the judges by providing excuses for its incoherence.

But that’s not what I wanted to comment on. Like I said, I’m late to the game, and that means that the problems with the alleged winner of the competition to pass the Turing test have been covered many times already. What I wanted to comment on was a theme I saw in several of the explanations. Here’s a typical example, taken from an article in the New Yorker:

Here’s what Eugene Goostman isn’t: a supercomputer. It is not a groundbreaking, super-fast piece of innovative hardware but simply a cleverly-coded piece of software, heir to a program called ELIZA that was first developed—as a joke—in the nineteen-sixties.

This is an example of what I call “IBM Thinking”. I used to work for IBM, and one of the (many) frustrations there was a deep-seated belief that “innovation” and “technology” meant hardware. Software is just the silly unimportant stuff that runs on top of hardware; hardware is what matters.

According to this attitude, any advance in technology, any new achievement, must happen because someone created better hardware that made it possible. A system that beats the Turing test? If it’s real, it must be a new supercomputer! If it’s not, then it’s not interesting!

This is, in a word, nonsense.

Hardware advances. But so does software. And these days, the big advances are more likely to be in software than in hardware.

There’s a mathematical law, called Church’s thesis, which we’ve known about for a long time. Put simply, it says that there’s an upper limit on what can be computed. All computing devices, no matter how they’re designed or built, are ultimately, at best, equivalent to one another. It doesn’t matter whether you’re a Turing machine, or a PC, or a supercomputing cluster – the set of problems that can be solved by computation is fixed. One device may be able to solve a given problem faster than another, but no device can solve a problem that is truly unsolvable.
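
To make that “equivalence by simulation” idea concrete, here’s a minimal sketch, in plain Python, of a program that simulates a Turing machine. The particular machine, and every name in the code, is made up for illustration – it just increments a binary number. The point is that an ordinary PC program can step through a Turing machine’s rules exactly, which is the sense in which the different models of computation are interchangeable.

```python
# A toy sketch of "equivalence by simulation": an ordinary Python program
# stepping through a Turing machine's rules. The machine below (made up for
# illustration) increments a binary number written on its tape.

def run_tm(tape, head, state, rules, blank="_"):
    """Run a Turing machine until it reaches the 'halt' state."""
    cells = dict(enumerate(tape))             # sparse tape: position -> symbol
    while state != "halt":
        symbol = cells.get(head, blank)
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += -1 if move == "L" else 1
    return "".join(cells.get(i, blank)
                   for i in range(min(cells), max(cells) + 1)).strip(blank)

# Binary increment, starting with the head on the rightmost bit: turn trailing
# 1s into 0s (the carry), then turn the first 0 (or blank) into a 1 and halt.
rules = {
    ("carry", "1"): ("0", "L", "carry"),
    ("carry", "0"): ("1", "L", "halt"),
    ("carry", "_"): ("1", "L", "halt"),
}

print(run_tm("1011", head=3, state="carry", rules=rules))  # prints 1100
```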

We’ve gotten to the point where we can build incredibly fast computers. But those computers are all built on the same basic model of computing that we’ve been using for decades. When a problem that seemed unsolvable before becomes solvable now, most of the time that’s not because some flaw in the old hardware made the problem unsolvable – it’s because people figured out how to write software that solves it. In fact, a lot of recent innovations in hardware became possible not because of some fundamental change in the hardware itself, but because people worked out clever software to help design and lay out circuits on silicon.

To take one example that’s very familiar to me (because my wife is one of the technical leads of the project), consider IBM’s Watson – the computer that beat the two best human Jeopardy! players. IBM has stressed, over and over again, that Watson was a cluster of machines built on IBM’s Power architecture. But the only reason they used Power was marketing. What made Watson special was that a team of brilliant researchers solved a very hard problem with clever software. Watson wasn’t a supercomputer. It was a cluster of off-the-shelf hardware. It could easily have been built from a collection of ultra-cheap PC motherboards stacked together with a high-speed network. The only thing that made Watson special – the thing that made Watson possible – had nothing to do with hardware. It’s just a bunch of cleverly-coded software. That’s it.

To get even closer to home: I work on software that lets Twitter run its system on a cluster of thousands and thousands of cheap machines. The hardware is really unimpressive, except for its volume. It’s a ton of cheap PC motherboards, mounted in rack after rack, with network cables connecting the racks. Google has the same thing, but with even more machines. Amazon has the same thing, except that they might have even more machines than Google! If you handed me a ton of money, I – an idiot when it comes to hardware – could build a datacenter like Twitter’s. It would be a huge amount of work – but there’d be nothing inventive about building a new cluster. In fact, that’s the whole point of cluster-based computing: it’s both cheaper and easier to buy a couple of thousand cheap machines and distribute your work among them than it is to buy and program one huge machine capable of doing it all by itself.
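
To illustrate the shape of that “spread the work over lots of cheap workers” approach, here’s a minimal Python sketch. It’s a single-machine stand-in (processes instead of networked boxes), and the word-counting “work”, the data, and the worker count are all made up for illustration – but the structure is the same one a cluster uses: split the job, hand the pieces to whatever workers are available, and combine the results.

```python
# A toy, single-machine stand-in for "lots of cheap workers": the work function,
# the data, and the worker count are all made up for illustration.
from concurrent.futures import ProcessPoolExecutor

def count_words(chunk):
    # The "work": something embarrassingly parallel, done on one small piece.
    return len(chunk.split())

if __name__ == "__main__":
    corpus = ["the quick brown fox", "jumps over", "the lazy dog"] * 1000
    # Split the job across many cheap workers instead of one big machine.
    with ProcessPoolExecutor(max_workers=8) as pool:
        total = sum(pool.map(count_words, corpus))
    print(total)   # 9000 words, counted a chunk at a time
```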

What makes those clusters special isn’t the hardware. It’s all software. How do you take a thousand, or ten thousand, or a hundred thousand, or a million computers, and make them useful? With software. It’s just clever software – two systems, called Mesos and Aurora, which take that collection of thousands upon thousands of machines, and turn it into something that we can easily program, and easily share between thousands of different programs all running on it.
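
To give a feel for what that software actually does, here’s a deliberately naive sketch of the core bookkeeping a cluster scheduler performs: track what’s free on each machine, and place each task on a machine that can fit it. This is only an illustration of the idea, not the real Mesos or Aurora API, and every machine and task name in it is hypothetical.

```python
# A deliberately naive sketch of a cluster scheduler's bookkeeping: track each
# machine's free resources and place tasks wherever they fit. All names and
# numbers here are hypothetical; this is not the real Mesos or Aurora API.

machines = {
    "rack1-node01": {"cpu": 16, "ram_gb": 64},
    "rack1-node02": {"cpu": 16, "ram_gb": 64},
}

tasks = [
    {"name": "web",          "cpu": 4,  "ram_gb": 8},
    {"name": "search-index", "cpu": 12, "ram_gb": 48},
    {"name": "cron-job",     "cpu": 2,  "ram_gb": 4},
]

placements = {}
for task in tasks:
    for node, free in machines.items():
        if free["cpu"] >= task["cpu"] and free["ram_gb"] >= task["ram_gb"]:
            free["cpu"] -= task["cpu"]        # claim the resources
            free["ram_gb"] -= task["ram_gb"]
            placements[task["name"]] = node
            break

print(placements)
# {'web': 'rack1-node01', 'search-index': 'rack1-node01', 'cron-job': 'rack1-node02'}
```

Real schedulers layer a great deal on top of this – failure handling, preemption, fair sharing between teams – but resource accounting along these lines is the heart of turning a pile of machines into a single programmable pool.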

That doesn’t take away from the accomplishment. It just puts the focus where it belongs. “Clever software” isn’t some kind of trick. It’s the product of hard work, innovation, and creativity. It’s where many – or even most – of the big technological advances that we’re watching today are coming from. Innovation and advance aren’t just something that happens in hardware.

12 thoughts on “Innovation isn’t just hardware!”

  1. Kyle Szklenski

    Like this post, and agree that much of the time, emphasis is placed on innovation in hardware rather than in software because people mistakenly believe that’s the only place it can happen.

    My question for you is: Do you think there’s a similar limit to Church’s thesis, only for the human ability to solve problems? I know it’s a rather abstract and possibly unanswerable question, but it’s things like that which keep me up at night and writing inane puns on my blog.

    1. markcc Post author

      I think that questions like that are, for the moment, best left to theologians.

      On one level: If we’re really just biochemical computing devices, then we’re subject to the Turing limits, like any other computing device.

      More deeply, I think it really depends on whether free will actually means anything. Are we really free, or are we just puppets on strings, acting out the completely predictable process of entropy?

      1. John Fringe

        Well, if I’m not wrong on this, Church’s thesis is essentially a definition of computability, so we have something to work with. An incredibly useful definition which has stood up to a century of math. But definitions do not set limits 🙂

        So I will take those limits with a grain of salt.

        Any comments about the idea of more powerful models of computation? They could make a very beautiful (and difficult) post.

  2. Ingvar Mattsson

    I think neither Church, Turing, nor Tarski says anything explicit on the subject of what is “computable by a human”; they carefully limit themselves to what is computable by algorithm.

  3. m

    Turing explicitly argues in §9 of “On Computable Numbers” that his automata can compute everything a human computer can compute – and IMHO does so quite convincingly. But the question raised by K.Sz. then just becomes: is there information processing/problem solving/mental activity beyond computation? Well… leave it to the philosophers.

    1. John Fringe

      I too believe that Church and Turing are probably right, and I am pretty convinced by their arguments, but in the end it’s just an assumption, an axiom. I believe the study of alternative models of computation is a legitimate mathematical enterprise that no philosopher should ever touch, please XD XD XD

      We shouldn’t confuse the realm of math with the physical world. In this sense, there are no right axioms, just useful ones.

      I would not be especially surprised if tomorrow we somehow discovered a physical realization of an alternative model of computation. It would be great to know its properties beforehand!

  4. Pingback: Visto nel Web – 135 | Ok, panico

  5. PaulC

    “This is an example of what I call “IBM Thinking”. I used to work for IBM, and one of the (many) frustrations there was a deep-seated belief that “innovation” and “technology” meant hardware.”

    Maybe so, but in Silicon Valley, I mostly find myself wanting to remind people “Innovation isn’t just software.” From my perspective, the shiny new user features usually get most of the buzz, when the basic ideas behind them aren’t necessarily new, and only appear to be new because the requisite CPU speeds, storage capacity, and network bandwidth weren’t available until now.

    It’s not that I’m biased towards hardware. I’m a computer scientist with a background in algorithms, and I’m more than happy to take credit (when due) for an asymptotic speedup that no amount of hardware improvement will supply. (Not that I get to do that much in my day to day work.) And universality, sure, but there is a wide gulf between “will complete in finite time” and “will complete before it’s too late to be useful.”

    So, IBM thinking aside, it seems to me that today’s hardware engineers are doing the grunt work of innovation, and usually deserve more, not less, credit from the media (as if that’s an important thing).

    “The hardware is really unimpressive, except for its volume. It’s a ton of cheap PC motherboards, mounted in rack after rack, with network cables connecting the racks. Google has the same thing, but with even more machines.”

    Not sure about Twitter in particular, but there is nothing unimpressive about a large-scale data center such as one of Google’s, and you’re fooling yourself if you think you could build one if you just had “a ton of money” (except by using it to hire people who know how to, and even then you’d need the management skills). The individual components are indeed cheap, though I’m not sure how much of Google’s hardware is off-the-shelf anymore. But even getting the cooling system needed to make it cost-effective is a specialty. You don’t get this by throwing money into the wind. You need smart engineers who come up with new solutions to tricky problems. That is called “innovation.”

    It’s not a “ton of cheap PC motherboards” any more than the human brain is a big cluster of eukaryote cells. I mean, it is that too, but that kind of misses the point.

    So in short, I agree that innovation isn’t just hardware, but I’m not sure if many people in 2014 think it is. I also don’t think you need to undermine the importance of hardware innovation to make the point that software innovation matters.

  6. Vilx-

    While I do agree with the general idea of the article, I just wanted to add that in *this particular* case there is a reason to throw in the phrase “this wasn’t a hardware innovation after all”.

    The Turing Test has always been seen as “a milestone for artificial intelligence, when a computer finally becomes as smart as a human (or close enough to be mistaken for one)”. This isn’t what it really is (as nicely demonstrated by this recent “victory”), but that’s what the general public thinks about it. And the main obstacle to AIs becoming human-level smart has so far been insufficient hardware. It’s not that the algorithms are unknown – neural networks are plenty understood – but we just don’t have the hardware to perform the gazillions of parallel computations necessary. And here comes something that did it on a couple of desktop PCs. So, yeah, it’s a bit of a surprise for the layperson.

    1. markcc Post author

      I couldn’t disagree with you more!

      There are two main levels to my disagreement.

      (1) Neural networks aren’t the solution to artificial intelligence. They’re one very limited method for implementing specific kinds of machine learning. They’re called neural networks, but they’ve really got very little to do with an accurate simulation of how neurons work. With a datacenter like Google’s, we could easily get a system of neural networks comparable in size to a human brain – but even if the artificial neurons were behaviorally comparable to real ones, we have no idea how to string them together.

      (2) Neural networks aren’t a hardware thing at all. We implement them entirely in software. If we knew how to build an actual thinking machine, hardware would make it possible to run it faster, but software is what would make it “intelligent”.
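
      To make that second point concrete, here’s a tiny sketch of one neural network layer’s forward pass, with made-up inputs and weights – nothing but multiplication, addition, and a squashing function, all in ordinary software:

      ```python
      # A toy one-layer "neural network": weighted sums pushed through a sigmoid.
      # The inputs, weights, and biases below are made up for illustration; the
      # point is that it's all ordinary arithmetic, implemented entirely in software.
      import math

      def layer(inputs, weights, biases):
          outputs = []
          for neuron_weights, bias in zip(weights, biases):
              total = sum(w * x for w, x in zip(neuron_weights, inputs)) + bias
              outputs.append(1.0 / (1.0 + math.exp(-total)))  # sigmoid activation
          return outputs

      # Two inputs feeding three artificial "neurons".
      print(layer([0.5, -1.0],
                  weights=[[0.1, 0.4], [-0.3, 0.2], [0.7, -0.6]],
                  biases=[0.0, 0.1, -0.2]))
      ```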

      1. Vilx-

        Well… I guess that goes to show what I know. 🙂 But consider – the average layperson knows even less. Though, on second thought, I concede that this is more of an explanation than a justification.

  7. Sean Holman

    I have been told that software, or more generally algorithmic, innovations are given less emphasis by businesses and/or investors because it is difficult to claim intellectual property rights on them. Math is not patentable.

