I came across a link to an excellent article that provides an example of one of my professional bugaboos: the truly awful way that we often design software in terms of how the implementer thinks of it, instead of how the user will think of it.
Take a look at that link to see what I mean. The short version: Xerox produces a copy machine that includes a billing system. Attached to the copier is a little card reader, and you can’t use the machine without inserting a card telling it who should pay for the paper and toner you use. The card reader’s software is implemented as if it were a separate machine: it provides prompts in terms of its own state as an independent device. So when you walk up to the copier, the card reader display says “READY”. That means the card reader is ready to read a card. But the copy machine is not ready. The copier isn’t ready to copy until the display stops saying “READY”. In other words, the glowing “READY” sign attached to the copier means that the copy machine is not ready.
To the user of the copy machine, it’s one machine. The user walks up to it, and does whatever they need to do to make their copies. They don’t think of it as “A copier and a card reader that communicate with one another”. They think of it as “A copier with a card reader for billing”.
But the designers of the machine didn’t think of it that way. To the guys implementing the card reader, the reader was a separate machine, and they designed its displays and user interactions around the idea that it’s a separate machine. So the reader says “READY” when it’s ready to do its job – never mind that its job isn’t a separate task in the user’s mind.
This kind of thing happens constantly in software. In my own area of specialization – software configuration management – virtually every tool on the market presents itself to users in terms of the horribly ugly and complicated concepts of how the SCM system is implemented. Look at popular SCM systems and you’ll see an endless parade of implementation concepts: branches, merges, VOBs, WOBs, splices, gaps, configurations, version-pattern-expressions. To use the systems as they’re presented to you, you need to learn whatever subset of those concepts your system uses. But those concepts are all completely irrelevant to you as a user of the system. What you’re trying to do is use a tool that preserves the history of how your system was developed, and that lets you share changes with your coworkers in a manageable way. What does a VOB or a VPE have to do with that?
I’m not trying to claim that I’m perfect. I spent the majority of my time working in SCM building a system with exactly those flaws. I’m as guilty as anyone else. And I didn’t realize the error of doing things that way by myself; it had to be pointed out to me by someone who’s a lot smarter than I am. But once he made me aware of it, I started seeing it as a ubiquitous problem. It doesn’t just happen in things like embedded systems (the Xerox card reader) and SCM systems. It’s in word processors and spreadsheets, file browsers, web browsers, desktop shells, cell phones, music players…
Software developers – like me – need to learn that users don’t view systems the same way that developers do, and that the right way to build a system is to focus on the user’s view. That copy machine should not say “READY” until it’s ready to copy: the user doesn’t give a damn that the card reader is ready. My SCM system should let me say “I want to share my changes with the guy at the desk next to me”, not “Create a new branch derived from the latest integration baseline containing the set of changes in my workspace, and then tell me the name of that new branch so that I can email it to my neighbor”. As a user of an SCM system, I don’t care that you needed to create a new branch. I don’t want to know what the root of that new branch is. I don’t want to know about the internal identifiers of branches. What I want to do is share my changes with my coworker – and the system was built to let me do that. So why is it designed to make that so unnatural and confusing? Because the developer was focused on “How do I implement the capability to do that?”, and then presented it to the users in terms of how they built it, not in terms of how the user was going to make use of it.
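To make the contrast concrete, here’s a minimal sketch of the same operation exposed both ways. Every class and method name here is invented for illustration – none of this corresponds to any real SCM tool’s API. The point is only that the branch-and-baseline bookkeeping can live behind an interface that speaks the user’s language:

```python
# Hypothetical sketch: one SCM operation, two interfaces.
# All names here are made up for illustration purposes.

class ScmBackend:
    """Implementation-centric interface: exposes branches and baselines,
    exactly the vocabulary the user shouldn't have to learn."""

    def __init__(self):
        self._branch_counter = 0

    def latest_integration_baseline(self):
        return "baseline-42"

    def create_branch(self, baseline, changes):
        # Creates a branch rooted at the given baseline and returns
        # its internal identifier.
        self._branch_counter += 1
        return "branch-%d" % self._branch_counter

class Workspace:
    """User-centric interface: expresses intent, hides the mechanics."""

    def __init__(self, backend, changes):
        self._backend = backend
        self._changes = changes

    def share_changes_with(self, coworker):
        # The branch/baseline bookkeeping still happens -- it just
        # happens here, invisibly, instead of in the user's head.
        baseline = self._backend.latest_integration_baseline()
        self._backend.create_branch(baseline, self._changes)
        return "Sent %d change(s) to %s" % (len(self._changes), coworker)

ws = Workspace(ScmBackend(), changes=["fix-typo.diff"])
print(ws.share_changes_with("alice"))
```

Both interfaces do the same work; the difference is whose mental model the public one reflects. The user calls `share_changes_with` and never sees a branch name or a baseline identifier.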