Bad Software Design: Getting the Level Wrong

I came across a link to an excellent article that provides an example of one of my professional bugaboos: the truly awful way that we often design software in terms of how the implementer thinks of it, instead of how the user will think of it.

Take a look at that link to see what I mean. The short version of it is: Xerox produces a copy machine that includes a billing system. Attached to the copier is a little card reader. You can’t use the machine without inserting a card into the reader telling it who should pay for the paper/toner you use. The card reader’s software is implemented as if it’s a separate machine. It provides prompts in terms of its state as an independent machine. So when you walk up to the copier, the card reader display says “READY”. The fact that it says “READY” means that the card reader is ready to read a card. But the copy machine is not ready. In fact, the copy machine isn’t ready to copy until the card reader display stops saying “READY”: the glowing “READY” sign attached to the copier actually means that the copy machine is not ready.

To the user of the copy machine, it’s one machine. The user walks up to it, and does whatever they need to do to make their copies. They don’t think of it as “A copier and a card reader that communicate with one another”. They think of it as “A copier with a card reader for billing”.

But the designers of the machine didn’t think of it that way. To the guys implementing the card reader, the reader was a separate machine, and they designed its displays and user interactions around the idea that it’s a separate machine. So the reader says “READY” when it’s ready to do its job – never mind that its job isn’t a separate task in the mind of the user.
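
To make the leveling error concrete, here’s a toy sketch (all of the names are mine, not Xerox’s actual design): the only difference between the two designs is which machine’s state the display reports.

```python
class CardReader:
    """The component the implementers were focused on."""
    def __init__(self):
        self.has_valid_card = False

    def ready_to_read(self):
        # This is the state the real machine's display reported:
        # "READY" meaning "ready to accept a card".
        return not self.has_valid_card


class CopierEngine:
    """The rest of the machine, as the user thinks of it."""
    def __init__(self, warmed_up=True):
        self.warmed_up = warmed_up

    def ready_to_copy(self):
        return self.warmed_up


def display_message(reader, engine):
    # Wrong level: return "READY" whenever reader.ready_to_read() is true.
    # Right level: report the state of the one machine the user sees.
    if reader.ready_to_read():
        return "INSERT BILLING CARD"
    if not engine.ready_to_copy():
        return "PLEASE WAIT"
    return "READY TO COPY"
```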

This kind of thing happens constantly in software. In my own area of specialization – software configuration management – virtually every tool on the market presents itself to users in terms of the horribly ugly and complicated concepts of how the SCM system is implemented. Looking at popular SCM systems, you’ll constantly run into implementation concepts: branches, merges, VOBs, WOBs, splices, gaps, configurations, version-pattern-expressions. To use the systems as they’re presented to you, you need to learn whatever subset of those concepts your system uses. But those concepts are all completely irrelevant to you, as a user of the system. What you’re trying to do is to use a tool that preserves the history of how your system was developed, and that lets you share changes with your coworkers in a manageable way. What does a VOB or a VPE have to do with that?

I’m not trying to claim that I’m perfect. I spent the majority of my time working in SCM building a system with exactly those flaws. I’m as guilty as anyone else. And I didn’t realize the error of doing things that way by myself. I had to have it pointed out to me by someone who’s a lot smarter than I am. But once he pointed it out, I started seeing this as a ubiquitous problem. It doesn’t just happen in things like embedded systems (the Xerox card reader) and SCM systems. It’s in word processors and spreadsheets, file browsers, web browsers, desktop shells, cell phones, music players…

Software developers – like me – need to learn that users don’t view systems the same way that developers do, and the right way to build a system is by focusing on the view of the user. That copy machine should not say ready until it’s ready to copy: the user doesn’t give a damn that the card reader is ready. My SCM system should allow me to say “I want to share my changes with the guy at the desk next to me”, not “Create a new branch derived from the latest integration baseline containing the set of changes in my workspace and then tell me the name of that new branch so that I can email it to my neighbor”: as a user of an SCM system, I don’t care that you needed to create a new branch. I don’t want to know what the root of that new branch is. I don’t want to know about the internal identifiers of branches. What I want to do is share my changes with my coworker – and the system was built to let me do that. Why is it designed to make that so unnatural and confusing? Because the developer was focused on “How do I implement the capability to do that?”, and then presented it to the users in terms of how they built it, not in terms of how the user was going to make use of it.
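
To sketch the level I’m talking about (toy code with invented names, not the API of any real SCM tool): the command the user issues is “share my changes with my neighbor”; the branch that implements it is a detail the tool can manage on its own.

```python
import itertools

class ToyRepo:
    """A stand-in for the SCM back end; the branch machinery lives here."""
    def __init__(self):
        self._ids = itertools.count(1)
        self.branches = {}   # branch name -> list of changes
        self.inbox = {}      # user -> branches shared with them

    def create_branch(self, base, changes):
        name = f"share-{next(self._ids)}"
        self.branches[name] = list(base) + list(changes)
        return name

    def notify(self, user, branch):
        self.inbox.setdefault(user, []).append(branch)


def share_changes(repo, my_changes, coworker, baseline=()):
    """The operation the user actually asked for."""
    branch = repo.create_branch(base=baseline, changes=my_changes)  # detail
    repo.notify(coworker, branch)   # the coworker's tool picks it up
    return branch                   # the name is still there for experts


repo = ToyRepo()
share_changes(repo, ["fix_parser.diff"], coworker="alice")
print(repo.inbox["alice"])          # ['share-1']
```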

41 thoughts on “Bad Software Design: Getting the Level Wrong”

  1. Ron

    One of the most humbling experiences in my professional career was a week watching a usability study from behind a two-way mirror while folks struggled to use our software. After months agonizing over every little user interface detail, it drove home that programmers are the least qualified people to design systems for users.

  2. krisztian pinter

    i’ve been communicating this for years, including running a blog, partially about such ergonomics issues. in my experience, most programmers don’t even understand what i’m talking about. i feel like a martian.
    but if i think about it in more depth, i realize that they make bad software unaware of what they do, while i make bad software completely aware that i’m doing it. we simply have no time to do it right, and the market does not pay for better quality.
    still, i will not stop spreading ideas about user centered thinking. even if we don’t do it in practice, sometimes it is better just to know how it *could* be done better.
    be honest: how many of you developers knew that tabbed dialogs are far superior to their big-button alternatives? tabbed dialogs are near 100% understandable to users, while the “big button bar” page selection method is something like 30%(!). not much of a science, but what a difference it makes!

  3. MaxPolun

    The only SCM system I’ve seen that makes sense from a user point of view is darcs. I think this is because the developer used a “calculus of patches” inspired by quantum physics operators rather than by just developing ad-hoc solutions to various problems he (or his users) ran into (well probably it was a bit of both).
    so good math -> good programming

  4. Miko

    On the flip side, if you design the system with the ignorant user in mind, you’ll frustrate to no end the user who actually does want to create a new branch derived from the latest integration baseline.
    As a major case in point, I’ve found the Windows operating system progressively more difficult to use with each successive version: one example is Vista’s file explorer. Earlier versions of Windows introduced the back/forward arrows, and I totally ignored them because I knew where I wanted to navigate and could do just fine with the drop-down box and the “Up to parent folder” button. Now I have to deal with a file explorer that has removed the “Up to parent” button entirely and that has replaced the drop-down box of folder views with a list of “recent” weblinks. Can I still do what I want to do? Yes. Did attempts to make it easier for general users just create problems for those who knew what they were doing? Again yes.

  5. Mark C. Chu-Carroll

    Miko:
    I’m not talking about dumbing down the interface, but about presenting it in the correct way.
    The windows thing you mentioned is an example of what I’m talking about. The developers stick details of the filesystem in your face. People complain about that, and instead of understanding what it is about it that people have a problem with, they add a bunch of bells and whistles that seem *to a professional developer* like the kind of thing that makes it easier.
    The pro developer is using corporate websites for all kinds of stuff- so to them, making that more transparent is good. But they do it at the expense of the stuff that the normal people were using to navigate.
    (Sorry for the garbled stuff… I’m using a blackberry, and it’s doing weird stuff.)

  6. Jonathan Vos Post

    The first time that I tried to help a PC user use a Mac, in the days of the 128K Mac, I started to drag his diskette icon into the trashcan icon, saying: “This is how you pop out your diskette, unless you want to use a paperclip.”
    He grabbed my mouse-hand, yelling: “But I don’t want to throw away everything on that disk!”
    After 41 years of writing software (yes, I started in high school in 1966) I have a semi-infinite supply of bad design stories. Indeed, I gave a talk at one of those big Software Engineering conferences (circa 1980, maybe IEEE) entitled “How Not To Engineer Software” which started with me tearing into little pieces a one dollar bill, and throwing the pieces into the front-row audience.
    In the example you gave, I can imagine that the head of the programming team could have been a freshly-minted PhD in Computer Science, thinking to himself that a compiler-compiler is “ready” when it is prepared to input a meta-language description of the compiler, only after which it is then ready to input the first virtual machine input…
    I did that in 120 lines of APL in 1974, but that’s another story.
    See also: ANTLR, Coco/R, CUP, GNU bison, Eli, FSL, META 5, MUG2, Parsley, Pre-cc, SableCC, JavaCC and MixedCC, yacc.
    What is the fixed point of compiler-compiler, compiler-compiler-compiler,
    compiler-compiler-compiler-compiler, …?
    I think I’ve told the story here of reverse engineering DOL (Day-Of-Launch) software for the Space Shuttle, when I was in the Software Engineering Department of the Space Shuttle Division of Rockwell International. In a sense, when it said “ready” it meant “ready to input trajectory, known atmospheric conditions from sounding rockets and balloons, and any patches to the engineering model” rather than “ready to launch.” The output of this huge and almost incomprehensible package was a single bit, designated as “green/red.”

  7. Benjamin Franz

    Back around 1994 I was working in a University computer lab that had about 100 IBM compatible PCs and 25 Macintosh computers.
    The brand new top of the line Macintoshes came with a small round power button mounted directly below one corner of the floppy drive slot. It looked remarkably like a disk eject button for a PC floppy drive.
    One guess what happened almost every day when students decided to eject a disk so they could put a different one in to save something.

  8. ancientTechie

    I am fortunate to work with graphic designers who design software interfaces that make sense to them. Writing code to implement those interfaces is certainly not easy, but it is worth the trouble. I can usually envision ways to implement software that would make life easier for me and, as far as I can tell, work just as well for end users. Experience tells me, though, that the graphic artists’ approaches lead to better usability.

  9. Kevin

    all you need is a piece of paper that says:
    “Ready does not mean ready. Follow instructions”

  10. Andy

    all you need is a piece of paper

    People never read the pieces of paper, and they shouldn’t have to anyway.

  11. Mark C. Chu-Carroll

    Kevin:
    The point isn’t that you can work around the bad design. Of *course* you can work around badly designed stuff. You can always find a way to work around a poorly designed, stupid system. That doesn’t make it any less poorly designed or stupid.
    It’s *stupid*, lousy system design to have a part of a copy machine that only says “Ready” when the copy machine is *not* ready. It’s stupidity on the part of the system designer who was focused on the one narrow little thing he was building, without thinking about what it was part of, or how it was going to be used.

  12. Anonymous

    Just like Windows – where the “Start” button does everything from starting programs to shutting down the computer. Why didn’t they name it “Computer” or something…

  13. Mark C. Chu-Carroll

    Anonymous:
    Yes, exactly. That’s one that I’ve had some grief with trying to help family on the phone… “Yeah, you need to turn off the computer and then restart it. To turn it off, go to the button labelled start, and then pick shut down”. What idiot decided that the right place to put “Stop the computer” was on a menu labelled “start”?
    If it’s called a “Start” menu, then it should damn well be how you *start* things. Having a big green button which is used to start things is a good idea: it makes it clear where you go to start things. But to have a big green button labelled “start” which doesn’t just start things, but starts things, stops things, finds things, rearranges things, sets things up… That’s inexcusable.
    If it’s a start menu, it should damn well be a *start* menu. Not a general purpose menu with 5 submenus *one* of which actually starts things. If it’s the general purpose menu with 5 different kinds of things on it, then label it “Windows Menu”, or “Main Menu”, or just put a windows icon on it.

  14. MSR

    I agree with what you have to say about bad design and the need to build design around what the user actually needs. I can tell you, though, that in my experience this is not an easy thing to do. I work on the development of ground system software for instruments on the Hubble and James Webb Space Telescopes. It is very important to us not to have these kinds of bad design features. (Basically we are constantly reminding ourselves that at some point ten years in the future we’ll be woken up at 3:30 in the morning and have to figure out why there was a telescope anomaly. Needless to say, having a good design aimed at the user is important.) But doing this is very, very hard. It takes a lot of time and effort to keep all the different parties in communication so that the system works in a manner that is comprehensible to the user. I don’t disagree that better design, design with the user in mind, is called for. But it is also hard to do.
    On the matter of the Windows Start button, I actually find this kind of amusing. Basically you are right, a different name would be better. On the other hand, personally, when I go to turn off my car, I use the ignition key. I’ve never found that to be excessively confusing. Furthermore, I also use the same ignition key to, say, set the car so that a passenger can use the radio while I run into a store, even though absolutely nothing is ignited in that case. This is a sub-optimal design, but it should not get quite as much grief as it does. Other parts of the Windows system should get that grief.

  15. ArtK

    Ah… welcome to my version of hell. Brilliantly written software that’s completely unusable. The solution in big corporate software seems to be to throw more process at the problem, but I’ve never seen a process that can address this, whether you call it User Centered Design or Outside-In Design, or Fred’s Design.
    Favorite UI story: I was looking at the GUI for a system that had a component called a COmmunications Processor (or COP for short.) The icon to access the configuration and status for the COP? An image of a western policeman with his hand up in the traditional “stop” gesture. Confusing to western users and completely nonsensical to anyone else.
    I’m working on a new product now and I hope that we can produce something usable — our goal is a near-zero touch admin, but I don’t have a lot of hope. We’re in the concept design phase and people are coming up with configuration parameters by the dozen. *sigh*
    BTW, I worked with that SCM system with the VOBs back in its earliest incarnation and work with it today. It stinks, but it’s still the best and most powerful that I’ve ever found, IMO. Just don’t get your configuration spec wrong or you’ll never get things to build right.
    MSR: Yeah, but your ignition key doesn’t have a big sign on it that says “ignition key,” reminding you, every time you use it, of some developer’s idiocy. I don’t think of it as an “ignition key,” just “the key.”

  16. Susan B.

    My favorite (recent) example of bad usability is the controls for a fan that I bought last summer. It has a semicircle of five buttons on the top, each with a nearly incomprehensible symbol (with some effort I was able to figure them out) and a row of LEDs of various shapes meant to display the current settings. The worst part (though I got a kick out of it) is the display lights for the timer. The fan has a timer that can be off, or set to anything up to 7.5 hours, in 30 min. increments. There are four little LEDs, labeled with 4, 2, 1, and 1/2, and as you push the timer button, the lights go on and off in a (to the untrained eye) completely random pattern. It turns out the lights are indicating the time remaining in BINARY! How many non-computer/math folks are familiar with binary, would recognize it in a completely unexpected context, and can easily translate it into decimal?
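    (For the curious, a little sketch of the decoding, assuming the four LEDs are simply the binary digits of the remaining half-hours:)

```python
def timer_leds(hours_remaining):
    """Which of the 4 / 2 / 1 / 1/2 LEDs are lit for a timer value
    between 0 and 7.5 hours, in half-hour steps."""
    half_hours = int(round(hours_remaining * 2))       # 0..15
    return {label: bool(half_hours & bit)
            for label, bit in [("4", 8), ("2", 4), ("1", 2), ("1/2", 1)]}

print(timer_leds(5.5))   # lights "4", "1" and "1/2": 4 + 1 + 0.5 = 5.5
```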
    The same fan also has a remote control with the same five control buttons, except with nice clear text labels instead of incomprehensible symbols. The only problem is, each label corresponds to the button above it, but the labels are actually much closer to the buttons below them, with the result that I spent the entire summer hitting the “off” button when I meant to change the speed.
    Sigh. At least the fan cooled the room very well.

  17. Phaedrus

    We computer programmers have always been geeks. We’ve never really done the jobs that the people who use our stuff do. We’ve never really met or talked to the normal people that use our stuff. And we don’t really want to. We spend our evenings reading blogs written by a guy who loves math… I mean, come on.

  18. Chris' Wills

    Coming from a slightly different angle.
    I am presently making a living trying to correct (make suitable for use by end users and allow easy input and extraction of useful data) a really bad ERP implementation.
    First off, the ERP package is excellent and well structured, and the documentation for standard use is logical and easy to implement. I have seen it implemented and working well in single-factory operations and, literally worldwide, at hundreds of sites for multinationals.
    Why the problem?
    Those who implemented it tried to be clever and switched on functions that aren’t required in this particular business, reformatted standard forms for no good reason, hid parts of the menu, didn’t leave intelligible documentation and also wrote some additional code rather than use standard patches.
    The functions force manual intervention to process/release objects – not a big deal if you do it once or twice, but a royal pain when you have to do it for thousands every week (the initial implementers said “it only requires two clicks”; in some cases it actually requires four. They didn’t seem to understand that it didn’t add value).
    They had milestones to meet and so threw the data in without actually checking that it met minimum data quality standards (they hadn’t even set quality standards).
    They made life easy for themselves and hard for the end user (low capital cost, high operational cost).
    In this case a large part of the problem is having “experts” who have passed a 1 month course and are now certified. No business experience and no interest in actually learning about the business they are installing the system in.
    I don’t expect programmers/implementers to actually know about the business.
    The fault lies squarely with the consultancy’s experts for not being honest with the clients (e.g. Client “will it grant me access to paradise?”, consultant “of course it will; now sign the contract”) and for not bothering to learn the business processes and clearly explaining the requirements to the programmers in their language.
    Now, because the ERP isn’t meeting the businesses needs, rather than correct the ERP (a very large and time consuming job requiring that the VP IT admit his error) new software is being installed to act as an interface. Oh well, keeps me employed.

  19. Brian

    This kind of thing is a problem, but it’s not always caused by technical people who “don’t understand that they have to look at it from the user’s perspective.” Every time this issue comes up it turns into a giant tech-bashing party, and I just want to point out that there are other causes:
    * The technical people often don’t have enough information about the users and context of use. It’s possible that someone designed this message without even knowing the reader would be attached to a copier. They might even have copied a status bit definition from the spec sheet of a reader component directly to the screen because they had no other information at all.
    * The technical people are often not allowed to spend time figuring out what the user thinks. It is usually not an official part of their job.
    * Figuring out what the user thinks can be hard or impossible. They don’t all think the same way, and their background can be just as inscrutable to outsiders as any technology.
    And many more . . .

  20. Curt Sampson

    Personally, I have more trouble with software that hides and distorts the internal model it’s using, so that I can’t figure out how to do things, or what it’s really doing when I give it various commands. Hiding things like branch behaviour in an SCM system seems to me a recipe for disaster. On the one hand, you’re going to have a guy wondering why the guy next door to him isn’t seeing new changes to the material, because he didn’t know that when he shared his stuff he was creating a branch and sharing that; on the other, you have a guy wondering how to create a branch, and having to remember or figure out that the “share” command creates a branch.
    A perfect example is drag-and-drop: I always find myself wondering if it’s going to move the item, copy the item, or create a shortcut or link to the item.
    The software I find the most usable has a clean, consistent internal model and exposes it as simply as possible to the user. If that means that the user has to learn new concepts in order to be able to use the software, so be it; it’s still simpler than trying to work out funny behaviour from software that’s pretending to be something that it isn’t.

  21. Curt Sampson

    Interesting. I had my e-mail address in angle-brackets at the end of the last comment, and it was entirely stripped out. Here’s another example of where I’m wondering whether I’m supposed to be entering text, HTML, or something else here. There’s no way to tell, except by mucking about with the system and remembering what works and what doesn’t.
    cjs@cynic.net

  22. Mark C. Chu-Carroll

    Curt:
    When it comes to SCM, I’m not talking about *hiding* the internals, but *presenting* them differently.
    For a really simple example of the kind of thing I mean, try comparing an SCM system to a spreadsheet. In a spreadsheet, you’ve got tables full of numbers. The average user of the spreadsheet doesn’t know how the numbers in the spreadsheet are represented. They know that, say, C5 contains 270.50. But they *don’t* know whether that’s actually stored as a string of characters, a BCD number, a fixed point number, an IEEE single precision float, or an IEEE double precision float. They don’t know, and they don’t care. They know that when they change the number in C5, the number two rows down in C7 is going to change, because it’s a formula. How the spreadsheet ends up implementing the update of the formula cell C7 when they change C5, they don’t know, and they don’t care. And if you gave them a spreadsheet where they actually needed to know which number representation they needed, and where they needed to work out the update process by hand for every formula cell, well, they’d be seriously pissed, and you’d find people using spreadsheets a whole lot less; and even when they used them, they wouldn’t do nearly as much with them, because it would be too hard to set up. (And, of course, when someone pointed out how stupid it was to force users to manually describe the update flow, you’d have tons of people popping up and saying “But if you hid that, then people wouldn’t be able to make it work, because they wouldn’t have control when they needed it…”)
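    To make the “they don’t know, and they don’t care” point concrete, here’s a toy sketch of the automatic-update-order part (nothing like how a real spreadsheet engine is built, just the idea):

```python
from graphlib import TopologicalSorter   # Python 3.9+

values   = {"C5": 270.50, "C6": 10.0}
formulas = {"C7": (lambda v: v["C5"] + v["C6"], ["C5", "C6"]),
            "C8": (lambda v: v["C7"] * 2,       ["C7"])}

def recompute(values, formulas):
    # The update order falls out of the formulas' dependency graph;
    # the user never has to describe it by hand.
    deps = {cell: prereqs for cell, (_, prereqs) in formulas.items()}
    for cell in TopologicalSorter(deps).static_order():
        if cell in formulas:
            fn, _ = formulas[cell]
            values[cell] = fn(values)
    return values

values["C5"] = 300.0                  # the user just edits a cell...
print(recompute(values, formulas))    # ...and C7, C8 update in the right order
```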
    SCM is in that situation. Everything is exposed, down to the last ugly little detail. And users of SCM systems are typically forced to understand those details in order to use their systems. But most users don’t understand the details of the SCM system. So they *don’t* take advantage of what it could do for them.
    Just look how many people end up doing things like emailing patch sets back and forth to share changes without munging “HEAD”. That’s something that the SCM system could do: but when it’s presented in terms of every horrible implementation detail, figuring out *how* to do the equivalent of “knock off a patch set and email it to the guy next door” is so much work that they *don’t* do it. And if they try, odds are, something goes wrong, and the lesson they learn is “it’s too hard to use the SCM system for that”.
    Just look at how many guides to CVS there are that say “Don’t use branching if you can avoid it”. That’s just *wrong*. As a developer in a team, branches are your friend: they give you the ability to do manageable isolation and sharing, which are incredibly important. But to make it work in CVS, you need to do *something* like make a branch, create a pair of tags, assign the tags to the point of branching in main and the root of the new branch, and work in the branch. Then when you merge, do the merges relative to the tags marking the point of the last merge, and then update the tags in both parent and child after you’re done.
    What I just described is a formula for disaster. If you don’t get those tags set up right, or you ever forget to update them, or you get the update *wrong* because someone else did a checkin between the time you did the merge and the time you got the tags updated, then you’ll get a horrendous string of merge errors next time you need to do the merge – and that will probably convince you to never use a branch ever again.
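    Roughly (I’m writing this from memory, so treat the exact cvs invocations as approximate), this is the sort of bookkeeping that ought to be one command instead of a recipe the user has to get right every single time:

```python
import subprocess

def cvs(*args):
    # assumes cvs is installed and we're inside a checked-out working copy
    subprocess.run(["cvs", *args], check=True)

def merge_branch_into_trunk(branch, merge_tag):
    """One user-level command that hides the merge-point tag bookkeeping."""
    # Merge only what's new on the branch since the last recorded merge point:
    cvs("update", "-j", merge_tag, "-j", branch)
    cvs("commit", "-m", f"merge {branch}")
    # Advance the merge-point tag; forgetting this step is what produces the
    # horrendous string of merge errors on the next merge.
    cvs("tag", "-F", "-r", branch, merge_tag)
```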
    The right abstractions give you the same capabilities as the primitives, but they do it in terms of how you’re going to use the system. There’s no loss of power in a spreadsheet because it computes the update orders automatically based on a dependency graph. The spreadsheet is still, typically, Turing complete, and in the odd case where the default recompute order isn’t what you want, you can program around it. But 999 times out of 1000, it’s doing the right thing, because it’s presenting itself to the user in terms that make sense. That one time in 1000, you *can* make it do what you want – and it will be slightly harder to do that odd one case in 1000, but it’s vastly easier the other 999 times.
    SCM is the same kind of story: you can re-arrange the abstractions of an SCM system so that it does the right thing without any manual intervention 999 times out of 1000, and lets you do the manual thing the one time in 1000 that you need to. And that’s a better way of doing it than trying to force you to do it the manual way 1000 times out of 1000.
    To be concrete… Look at ClearCase. For most of its history, everything has had to be done manually in terms of configuration expressions, which are a horribly obtuse and error-prone abstraction. Then a few years ago, Rational came out with something called UCM – Unified Change Management. UCM was *very* restrictive. But working in ClearCase with UCM, you didn’t need to write configuration expressions yourself. The effect of UCM was that most developers ended up being able to do *more* of their own configuration management – even though they were working in a much more restrictive environment, most people perceived it as an *increase* in flexibility. Because in the configuration expression days, they needed to get the CC admin to set up anything they wanted to do; in the UCM environment, they could do it themselves. UCM, which frankly sucked, definitely couldn’t do everything that configuration expressions could. And there were definitely times when it got in the way, and you needed to do something the old CE way. But when that happened, you just went and edited the CEs yourself, instead of letting UCM do it automatically. So in the rare case where you needed to do the primitives, you could. But almost all of the time, you didn’t – and by having the right abstractions, you were able to do it right yourself.

  23. blf

    My own particular hobbyhorse about software interfaces is what happens when something does not work. Since I myself am a developer, I tend to concentrate on error messages, both from the programs I use, and from those I am developing/supporting/whatevering.
    My canonical example is a program which says something along the lines of “cannot open file”. And that’s it. That isn’t helpful to anyone: Not the user, not the support people, not the developer, not the admins, and not the boss.
    Just what does need to be said depends on context; but typically would be something like “PROGRAM cannot open file NAME to read BLAH because GIBBER”. A concise statement in the active voice providing feedback of the input and a synopsis of the result. In some contexts, you might need to add something like “This is not serious. No data has been lost. Check if the name is correct …”. Arguably, this example is also an illustration of Mark’s point; i.e., it hints at the underlying design. On the other hand, it also hints at what to do to fix the problem, which is perhaps just as, if not more, important?
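    In code, the shape I mean is something like this (all the names here are invented for illustration):

```python
import os, sys

PROGRAM = os.path.basename(sys.argv[0]) or "printtool"

def open_for_reading(name, purpose):
    try:
        return open(name, "rb")
    except OSError as e:
        # Feed back the input (name), the intent (purpose), and the reason.
        sys.exit(f"{PROGRAM}: cannot open file '{name}' to read {purpose} "
                 f"because {e.strerror or e}. No data has been lost; check "
                 f"that the name is spelled correctly.")

# e.g. open_for_reading("report.pdf", "the print job")
```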
    I usually suggest everyone–designers, developers, testers, reviewers, etc.–think of messages (again, here I tend to focus on error messages) as documentation. It needs to explain, to be a reference, and to be understandable.
    Earlier today, I wasn’t able to print a file, allegedly because the file didn’t exist (that was the GIBBER). Naturally, it did exist. However, a close study of the error message showed that the é in the file’s name had, somewhere, been “converted” into the gibberish %E9 (as per the NAME). Since that wasn’t the name of the file, it could not be opened and queued for printing (the BLAH). Easy to work around – just rename the file – but note I would have been lost had the bogus NAME not been fed back to me.

  24. Graham Douglas

    In (partial) defence of the “Start” button, I think its original intent was as a “Start Here” point, rather than a “Start Things”. As in: “You want to run a program? Start here.” or “You want to shut down the computer? Start here.” The problem being that a secondary interpretation took over, one that was inconsistent with the intent.
    On the design side, I spent nearly 20 years working on a CAD program for the chemical industry. The guy in charge of the project was an engineer who knew nothing about programming. His philosophy was to tell us how engineers wanted to do things, and to give no ground whatsoever if we wanted to implement something that was easier for us to do, but not the way he wanted it to happen. It kept us honest…

  25. Sara

    I’m part of a study of accessibility issues of various types of webpages for people with disabilities. As part of this, we did usability studies of several institutions’ websites.
    The biggest surprise? On many tasks, our “control” group was just as stuck as the blind users and users with reading disabilities. The sites were designed to make sense to people inside the institutions, but were meant to be *used* by people new to the organization. Thus, terminology issues and structure issues became huge stumbling blocks – and in some cases, the blind users had an advantage in that they were able to access just the text without all the distracting add-ons.

  26. Amy

    This is a good (but not novel) point to make. Thank you for helping bring the idea to more people.
    But isn’t SCM source control management? Software configuration management — never heard of it.

  27. trrll

    It can be surprising how different the user mindset and the developer mindset can be. One of my first encounters with this was writing a data-entry routine. It seemed eminently sensible to include an “Is this information correct?” check at the end of each block of data.
    Then I tried to use my own software. And discovered that since I made errors only occasionally, confirming that the data was correct rapidly became an automatic reaction, so that by the time that I realized that I had made an error, my fingers had already entered the confirmation. The idiot who wrote the routine (me) only succeeded in adding an additional step that achieved nothing worthwhile.

  28. Mark C. Chu-Carroll

    trrll:
    That’s actually a really good example. It often seems like having people verify that things are correct is a good idea. And in some applications, it’s definitely right – for example, I just bought my mom a new cellphone, and amazon showed me the new shipping address I’d entered to make sure it was correct.
    But you need to think about how the user is going to be using your system. If you have someone entering records a dozen at a time, putting in a confirmation after each one is *worse* than putting in no confirmation at all! If they knew that you didn’t confirm at all, the user would tend to be a bit more careful about entry. (For example, I tend to keep “one-click” off on Amazon. When I used it, I did tend to be more careful about what I clicked in Amazon. I didn’t like having to be that careful, so I turned one-click off so that I’d get a confirmation before I ordered anything.)
    But with the confirm, they’re naturally going to be less careful; and yet, at the same time, by making the confirmation such a frequent and easily-dismissed distraction, you make dismissing the confirmation into an automatic, thoughtless part of finishing the entry of a record!
    The right thing in situations like that is to not put in a confirmation after every single element of a long sequence of actions. There are a lot of different strategies that you can use for trying to prevent entry errors; you need to look at the possible strategies, and pick one that will do the best job of avoiding or catching errors – confirm-after-each-entry is almost never the right one.
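    For example (purely a sketch, not a prescription), one such strategy is to collect the whole batch and review it once at the end, where a confirmation actually gets read:

```python
def enter_records(read_record, confirm_batch):
    """Collect a batch with no per-record prompt, then review once."""
    batch = []
    while True:
        record = read_record()
        if record is None:        # user signals end of entry
            break
        batch.append(record)      # no reflexive "Is this correct?" here
    # One review step, where spotting and fixing an error is still cheap:
    return batch if confirm_batch(batch) else []

# usage, with simple console I/O:
# records = enter_records(
#     lambda: input("record (blank to finish): ") or None,
#     lambda b: input(f"keep these {len(b)} records? [y/n] ").lower() == "y")
```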

  29. Jud

    Wonderful thread.
    My war story comes from the engineering section of a company I worked for years ago. I was reviewing a contract for some equipment, and a description of the equipment was attached to the contract. There was a drawing of the control panel. The panel had lights of several different colors, but only two white ones, which were located together in a vertical line. One white light indicated the equipment was on and ready to run. The other indicated a ground fault on the control panel, i.e., if you touched the metal control panel there was a good chance you’d be electrocuted.
    I suggested to one of the fellows working on the project that, though I wasn’t an engineer and thus not an expert, they might want to consider changing the warning light setup.

  30. Confused

    So what happens? One loudmouth manager and another one fight for more power and more money from the really big one who is playing golf most of the time. So salesmen (err managers) come up with this brilliant idea of combining a credit card reader with the copy machine. They get great bonuses for the idea but the big guy doesn’t want to favor any of his puppies, plus, like any good Stalin or Hitler he likes subordinates to fight among themselves. Then corporate middle-management comes up with a corporate process where, at the bottom of the food chain, two separate groups of peons who don’t collaborate (because they are not supposed to since managers must keep their turf under the tight control), have to do something.
    Implementors, naturally, don’t give a damn about the stupidity of the whole project. It is just one of many self-serving stupidities and internal top-of-the-food-chain fights which invariably end up with managers taking home millions while implementors continue taking home a mediocre salary which is actually lower than it was 10 years ago, if adjusted for inflation.
    The end product, naturally, just reflects all that went into it.
    What do the geeks do? Throw ash on themselves and *their* software/firmware/hardware design!

  31. Chris' Wills

    …What do the geeks do? Throw ash on themselves and *their* software/firmware/hardware design!
    Posted by: Confused | May 8

    That is because geeks (do geeks include nerds?) have some things that the managers, in your scenario, do not.
    Professionalism, self worth, integrity, honesty etc…
    As we age and become more cynical we can sometimes look around and think “why bother”, but the problems will still occur and guess who will be tasked with resolving them.
    Out of interest, does anyone know how much of the IT software/wetware business is spent correcting problems?
    Would it actually be to our benefit to get it right first time?
    I suspect that a large number of people make a fair living out of improving (correcting mistakes in) installations/configurations/designs etc, so getting it right first time could reduce employment opportunities.

  32. Torbjörn Larsson, OM

    My own favorite example of design improvement is from when I revised an in-house laptop test program for a piece of measurement equipment that needed updating.
    [The original developer was gone, and my development and implementation of measurement algorithms was ended, so I was tasked with it.]
    Originally, the interface was character commands – input a character from a looooong menu and see or change a valve position, et cetera.
    Turned out our technicians really needed a simple context sensitive GUI with real time sensor readings to efficiently test and tune the equipment.
    [It was more fun and educational, too. A win-win, I guess. And of course, the original developer had to hurry, so that design was intentional – my development was “off line”.]

    they might want to consider changing the warning light setup.

    Bad indicator or actuator design is common. I have one obvious example that is sort of the getting-the-level-wrong type as well: the key sets in some GSM and DECT phones.
    At one time I had to have both (private and professional). Since the on/off switches were interchanged, it took me quite some time until I learned not to routinely disconnect one or the other instead of accepting an incoming call.
    And would you believe the phones happened to have the same manufacturer? Couldn’t they take one step back and up a level and think of users with several products? I mean, they must assume they would sell a few. 😐

  33. Michael Swart

    I wanted to weigh in on the ‘Start’ button topic. Its real purpose is so well understood that the word could have been ‘Hippo’ and users would use it as effectively as they do today. Maybe in recognizing this, the button name was specified by marketing (That Rolling Stones tune is going through my head as I type).
    About the two-way mirror usability study… I totally sympathize.

  34. Mark C. Chu-Carroll

    Michael:
    I’ve got to disagree with you on the “Start” button issue. Among *proficient* computer users, or people who’ve been using windows for a long time, it’s true that the meaning of “Start” is well understood. In fact, for those users, the label on the button could be renamed pretty much anything without experienced users even *noticing* that it had been changed.
    But among novices – which are the people that labels like “Start” are intended for – it remains a problem. (And this is another one of those places where levelling errors occur. There’s a constant problem with software that forgets that there’s more than one kind of user. In the pre-eclipse days, I spent time with UI designers of a programming environment who argued that misleading UI buttons (things like the start button) are irrelevant, because the users know what it’s for (meaning “not a problem because the users are proficient”); and then argued for things like 4-page tabbed dialog windows to create a simple Java class (because the user is too naive to understand how to do things by themselves). The end result was, IMHO, a total disaster: an IDE which wasn’t comprehensible for beginners, but which was *always* getting in the way for experienced developers.)
    Back to the start button thing… I’ve been on the wrong end of a phone call with my parents after they bought a windows machine a few years ago, trying to explain things. (“Yeah, the printer driver crashed, so you’ll need to turn the machine off and restart it. Go to the ‘Start’ menu, and pick ‘Shut down’.” “But I don’t want it to start anything.” “Don’t worry, it’ll do the right thing, it’s just a silly name for the menu.” “But if I click it, it’s going to start doing something, right?” After a few exchanges, this degenerates into “Yeah, you’re going to tell it to *start* turning off the computer, OK? See, it’s going to start something.”) For a genuine neophyte – someone who really doesn’t know what they’re doing – the *labels* on things matter. They’re looking very closely at things because they want to make sure they don’t do the wrong thing. The experienced people pretty much don’t look at labels *at all*: they know where things are. So when you’re picking things like labels in the UI, you need to keep in mind *who* is going to be using them, and *how* they’re going to get used. A misleading label is going to screw up your new users: a good system needs to be careful to make sure that it doesn’t provide “help” for new users that’s going to be more of a hindrance. And you also need to keep in mind that the new-user phase is of limited duration: providing easy stuff is great, but *forcing* people to use the “easy” stuff is at least as bad as not including the easy stuff at all.
    Again, using my experience with family: my parents recently got a Mac to replace the old PC. There’s no big green “Start” button on their Mac. But there is an apple in the top-left corner of the screen, and it’s easy to get them to understand that things like configuration, preferences, logout, and shutdown are all on the apple menu. Using the little blue apple icon makes things less misleading. And getting them to know that the way to start a program is to click the smiley face for the finder, and then select “Applications” for the list of apps, is actually much easier than getting them to understand how to navigate the windows “Start” menu. (I found that latter thing surprising. My intuition would be that a menu of programs would be easier than opening a finder window and finding the app in the filesystem through the “Applications” shortcut. But it’s not: the finder is something that they got used to using within the first hour of sitting in front of the computer, and opening a program from the applications directory is just using their understanding of how to look at things; whereas the start menu is a *different* kind of interaction than anything else they’re doing – instead of just more file navigation, it’s a totally different kind of menu, which has gotten progressively more complicated with each successive release of windows.)

  35. Torbjörn Larsson, OM

    I have had the same problem with explaining “Start” to neophytes.
    But the thing I remember most is when I got to use a Sun workstation. I had migrated from Mac to Windows, and three button operation didn’t sit well for me right then. I had a hard time finding correct menus.
    Hunting down Help I saw that I could turn it off, but I couldn’t find the menu. Right, because it was hidden by the action of the three button operation, and IIRC the Help was based on the two button menu tree. Took me half an hour before I stumbled on the correct menu and could turn the damn thing off.
    Sort of the same thing as when I tested Japanese on my cellular….

  36. Anonymous

    If I’m not mistaken, the card reader devices are a system built, designed and manufactured by another company, such as HID; Xerox just provides the spec for the interface for such a thing to work.

  37. GeorgeT

    Actually you can’t start the computer from the Windows Start button. You’ve got to use the power switch. For that matter, a quick press of the power switch starts the shutdown procedure, too.

  38. IanD

    Mark,
    There’s a great book called “About Face: The Essentials of User Interface Design” by Alan Cooper (who is sometimes called “The Father of Visual Basic”). He made me aware of some really stupid things I was doing by making me reappraise my design practices.
    One of his bug-bears is the error message that says something like “I’m just about to delete (or lose) your data” and follows that with a single [OK] button. The point being that, no it’s not okay for you to delete my data, but you haven’t given me any options. And why should that menu item be called “File” when all it does is print and exit?
    The book is well worth buying … or at least borrowing.

