Basics: Parallel, Concurrent, and Distributed

This came up in a question in the post where I started to talk about π-calculus, but I thought it was an interesting enough topic to promote it up to a top-level post. If you listen to anyone talking about computers or software, there are three words you’ll constantly hear: parallel, concurrent, and distributed. At first glance, they sound like they mean the same thing, but in fact they’re three different things, and the differences are important.

The connection between them is that they’re all terms that describe systems made up of computers and software that are doing more than one thing at a time. The differences are in why and how they do it.

Parallelism, or parallel code, or parallel systems, is about how to take a given system and make it run faster by breaking it into pieces that can run simultaneously. So suppose you want to do something really complicated. It’s got three steps: A, B, and C. A and B each prepare things for C, but they don’t interact with each other at all. You can run that by running A, waiting until it’s done, then running B, waiting until it’s done, and then running C. Or, if you’ve got a spare processor, you can run A and B simultaneously, and then, when they’re done, run C. When you’re making a program run parts at the same time in order to make it run faster, you’re doing parallelism.
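
To make that concrete, here’s a minimal sketch of the A/B/C example in Go (stepA, stepB, and stepC are invented placeholders for the real work): A and B each run on their own goroutine, and C runs only after both have finished.

```go
package main

import (
	"fmt"
	"sync"
)

// stepA, stepB, and stepC are placeholders for the three steps in
// the example: A and B are independent, and C needs both results.
func stepA() int              { return 1 }
func stepB() int              { return 2 }
func stepC(a, b int) int      { return a + b }

func main() {
	var a, b int
	var wg sync.WaitGroup
	wg.Add(2)

	// Run A and B simultaneously, each on its own goroutine.
	go func() { defer wg.Done(); a = stepA() }()
	go func() { defer wg.Done(); b = stepB() }()

	wg.Wait()                // wait until both A and B are done,
	fmt.Println(stepC(a, b)) // then run C on their results
}
```

The program computes exactly what the sequential version would; the only difference is that A and B can now occupy two processors at once.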

Concurrency talks about systems that have multiple parts that are designed with the explicit goal of taking place simultaneously, not because it makes it faster, but because it’s a necessary part of the functionality of the system. The backend system that we use for ScienceBlogs handles lots of concurrency, because it’s designed to simultaneously support thousands of readers viewing pages at the same time, as well as allowing us to write and publish new posts without interfering with the operation of the system. If things happening at the same time is a necessary part of the semantics of your system, then you’re doing concurrency.
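
As a toy model of that kind of system, here’s a sketch in Go; the Store type and its methods are invented for illustration. The point isn’t speed: the readers and the writer are all part of the system’s normal operation, and the locking exists so that their simultaneous activity stays safe.

```go
package main

import (
	"fmt"
	"sync"
)

// Store is a stand-in for the blog backend: many readers view
// posts while an author publishes, all at the same time.
type Store struct {
	mu    sync.RWMutex
	posts []string
}

// Publish takes the write lock, briefly excluding everyone else.
func (s *Store) Publish(p string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.posts = append(s.posts, p)
}

// Count takes a read lock, which any number of readers can share.
func (s *Store) Count() int {
	s.mu.RLock()
	defer s.mu.RUnlock()
	return len(s.posts)
}

func main() {
	var s Store
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(2)
		// Publishing and reading happen concurrently, by design.
		go func(i int) { defer wg.Done(); s.Publish(fmt.Sprintf("post %d", i)) }(i)
		go func() { defer wg.Done(); _ = s.Count() }()
	}
	wg.Wait()
	fmt.Println(s.Count(), "posts published")
}
```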

Distribution is about systems that are made up of multiple physical parts connected by a communication network. It’s fundamentally a study of how to build systems where the system itself is broken into physical pieces, which may be located in different places, have a variety of constraints on communication, and so on. If your system is specifically designed to be run as multiple programs running simultaneously on many different pieces of hardware, but behaving in some sense as a single system, then you’re building a distributed system.
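
Here’s a minimal sketch of that in Go: two pieces that cooperate purely by exchanging messages over a network connection. For convenience both ends run in a single process here; in a real distributed system, the two ends of the connection would live on different machines.

```go
package main

import (
	"bufio"
	"fmt"
	"net"
)

func main() {
	// Listen on a free port on the loopback interface.
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		panic(err)
	}

	// The "server" piece: accept one connection, read a request,
	// send back a reply. It knows nothing of the client's internals.
	go func() {
		conn, err := ln.Accept()
		if err != nil {
			return
		}
		defer conn.Close()
		req, _ := bufio.NewReader(conn).ReadString('\n')
		fmt.Fprintf(conn, "got: %s", req)
	}()

	// The "client" piece: connect, send a message, print the reply.
	conn, err := net.Dial("tcp", ln.Addr().String())
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	fmt.Fprintln(conn, "hello")
	reply, _ := bufio.NewReader(conn).ReadString('\n')
	fmt.Print(reply)
}
```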

So, an example of each:

  1. Weather forecasting software is usually parallel code. Doing the computational fluid
    dynamics work to generate accurate predictions requires an enormous amount of computation,
    and dividing it up among many processors makes it run at a (more) reasonable rate.
  2. Database systems are often built for concurrency. The idea is that there’s a huge database
    somewhere, and it’s being pounded with lots and lots of queries. When one user starts
    a query, the system doesn’t shut down and stop doing anything else until that query
    is done. It allows multiple users to be running queries all at the same time. And most
    databases even guarantee that if one user is performing an update query, then concurrent
    queries by other users will return consistent results representing the state of the
    database either before or after the update, but never a mixture of the two. (There’s
    a sketch of what that guarantee looks like in code just after this list.)
  3. An example of a distributed system would be a piece of software like Writely, which is
    a word processor that runs inside of a web browser. In Writely, you can edit a document
    in your web browser, and you can share editing with multiple people – so you can have
    three or four web browsers all editing the same document. In terms of the system, the
    web browsers are each running little Java applications that talk to the server and each
    other by messaging; and they have absolutely none of the code for actually changing the
    document. The server has no code for doing things like rendering the UI, but it
    communicates with the clients to receive and process edit commands, and to send out
    updates so that all of the UIs are rendering the same thing. The entire design of the
    system is built around the idea that there are these multiple pieces, each running on
    different machines.
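
Here’s the sketch promised in the database example, using Go’s standard database/sql package. It compiles, but it isn’t runnable as-is: the accounts table is an invented placeholder, and actually connecting would need a concrete driver, so treat it as a sketch under those assumptions. The essential move is that both reads happen inside one transaction at an isolation level that provides a stable snapshot; how strictly a given database honors that varies.

```go
// Package dbsketch is a compilable sketch, not a runnable program:
// using it requires a real SQL driver and a real "accounts" table,
// both of which are assumed here.
package dbsketch

import (
	"context"
	"database/sql"
)

// SnapshotCounts runs both reads inside one read-only transaction
// at REPEATABLE READ isolation, so a concurrent update by another
// user shows up in both counts or in neither, never a mixture.
func SnapshotCounts(db *sql.DB) (open, closed int, err error) {
	ctx := context.Background()
	tx, err := db.BeginTx(ctx, &sql.TxOptions{
		Isolation: sql.LevelRepeatableRead,
		ReadOnly:  true,
	})
	if err != nil {
		return 0, 0, err
	}
	defer tx.Rollback() // no-op after Commit; cleans up error paths

	if err = tx.QueryRowContext(ctx,
		`SELECT count(*) FROM accounts WHERE open`).Scan(&open); err != nil {
		return 0, 0, err
	}
	if err = tx.QueryRowContext(ctx,
		`SELECT count(*) FROM accounts WHERE NOT open`).Scan(&closed); err != nil {
		return 0, 0, err
	}
	return open, closed, tx.Commit()
}
```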

5 thoughts on “Basics: Parallel, Concurrent, and Distributed”

  1. Harald Korneliussen

    “And most databases even guarantee that if one user is performing an update query,” …
Most, but not all. Quote from my network database’s reference manual:
    “Locking requirements: Allowed with no locks, but may read old or deleted data. Use read lock on *** to ensure current data.”

  2. Mark C. Chu-Carroll

    Harald:
    That’s why I said most 🙂
I’m all too familiar with the weirdnesses of different databases about concurrency. A few years ago, I was the lead of a project called Stellation, which was an open-source code management system that stored everything in an RDB. We tried to support multiple RDBs, and had an unbelievable nightmare doing it. Databases are supposedly standard, and they supposedly all use the same standard language for reading and writing database records. But in reality, they’re astonishingly inconsistent. We wound up supporting Derby (under its earlier name, which I can’t remember), Postgres, DB2, and MySQL. And we needed to invent all sorts of tricks, because no two of those interpret SQL in exactly the same way.
    And SQL was the easy part… The differences in concurrency management between the different DBs were even worse.

  3. rodrigo

And, just because my masters was done in this: there’s also ‘Mobile computing’, which covers special kinds of distributed systems where it’s assumed that things move around or get disconnected and such. This is important because most distributed systems try to hide the distributedness, and make things look as though they were ‘just’ concurrent. Mobile computing systems are those where you simply can’t try that.
    Anyone interested? Search for Luca Cardelli’s Ambient Calculus.

  4. ArtK

    FYI: The proprietary version of Derby is Cloudscape.
    My work is with SQL federation — making heterogeneous data sources, including other RDBMS, look like one particular RDBMS to the user. Mapping isolation levels is one of the ugliest things we have to do — it gets even worse when you’re updating multiple sources at the same time and have to use two-phase commit.
I spent some time on the ANSI SQL committee — it always amazes me that something that is so standardized still leaves room for issues like this. But I shouldn’t complain too much. It keeps me employed.

  5. Jonathan Vos Post

I wrote an 800-page book for the U.S. Air Force in 1979-1980 on Distributed Computing. Unfortunately, my only hardcopy of it, as a Boeing technical report, was literally thrown in the dumpster by a vice president who later admitted not reading the “this is my only copy, please return!” post-it. It was, as I understand, actually published as a contract deliverable, and copies are probably stacked in that warehouse next to the Ark of the Covenant.
    This vanished megalith predicted the rise of HTTP and the military consequences of what came to be called the Web, introduced the Pentagon to the term “virtuality” and featured my invention of and algorithm for computing the Moment of Inertia of geographically distributed networks with time-varying bandwidths, and Mobile computing systems. The computing power of each node is the Mass, you see, and the bandwidth of the link between two nodes is a distance, and so…
I took Parallel Processing in grad school, 1973 or 1974, from Prof. Caxton Foster, who alleges that he was the first person in the world to teach such a course, and that his original course notes are in the Smithsonian. My M.S. thesis was on Automated Theorem Proving with a clever parallelized algorithm and database, which ran great on simulations, because we had none of what came to be called Massively Parallel Processors. Thinking Machines, Inc., was aware, via Feynman, of my parallelized implementation in 1974-1976 of the Genetic Algorithm for successfully evolving working source code. Mine was in APL (the evolved code, I mean), Danny Hillis’ Thinking Machines was in LISP.
    Concurrency, Petri nets, neuromorphic computing — don’t even get me started.
    This is deep stuff. Very cool.
