Liars: No Information Allowed

Bad from the Bad Ideas Blog sent me a link to some clips from Ben Stein’s new Magnum Opus, “Expelled”. I went and took a look. Randomly, I picked one that looked like a clip from the movie rather than a trailer – it’s the one titled “Genetic Mutation”.

Care to guess how long it took me to find an insane, idiotic error?

4 seconds.

It’s the old “evolution can’t create information” scam.

The clip is Ben Stein interviewing a guy named “Maceij Giertych”,
who is allegedly a population geneticist. (I say allegedly because looking
the guy up, he appears to be an agricultural biologist studying tree-growth
patterns in forests.) Said gentleman is not currently a working scientist,
but a politician. He’s a leader of a right-wing nationalist political party
in Poland, and is currently a member of the European Parliament representing Poland.

What Giertych has to say is the usual dishonest claptrap:

Giertych: “Evolution does not produce new genetic information, and for evolution …”

Stein: “there has to be new genetic information. And where is the new genetic information going to come from?”

Giertych: “Well, that is the big question. Darwin(ists?) assume that the
information comes from natural selection. But natural selection reduces genetic
information, and we know this from all of the genetic population studies that we
have.”

It continues in the same vein for the remainder of the interview. I’ve written
about this particular line of bullshit so many times that I’m downright bored with
it. But it’s pure rubbish.

Before I get to the math part, there’s one obvious and obnoxious scam
in the above that doesn’t take much effort to notice. “Darwinists assume that
the information comes from natural selection”. No, Darwinists don’t assume that.
Natural selection is an information reducer: not all individuals survive,
and the ones carrying less adaptive traits are the ones that tend not to
survive, so you’re pruning the information for the non-adaptive traits out of the
population.

But evolution is not just natural selection. Evolution is change plus natural selection. Natural selection chooses from the
varieties that exist; change produces the new varieties. Change comes
from many places – basic mutations can be the result of copying errors,
of radiation or chemical agents altering the genetic code, of recombination,
etc. That’s the change – and that does introduce new information.
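
To make that concrete, here’s a minimal sketch of the point (a toy four-letter alphabet and a completely made-up fitness rule, so don’t read any biology into the numbers): the selection step below only ever discards variants; every new variant in the pool got there through a copying error.

    import random

    random.seed(1)
    BASES = "ACGT"

    def mutate(genome, rate=0.05):
        # copying errors: each base has a small chance of being miscopied
        return "".join(random.choice(BASES) if random.random() < rate else b
                       for b in genome)

    population = ["AAAAAAAAAA"] * 20          # start with zero variation
    for generation in range(50):
        # reproduction with copying errors: this is where the novelty comes from
        offspring = [mutate(g) for g in population for _ in range(2)]
        # a toy "fitness": keep the genomes with the most G's (purely arbitrary)
        offspring.sort(key=lambda g: g.count("G"), reverse=True)
        population = offspring[:20]           # selection prunes; it never invents

    print(len(set(population)), "distinct genomes in the surviving population")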

And that brings us to the mathematical part.

Never do they bother to define information. There’s a good reason
for that: every scientific definition of information absolutely
defeats their argument. Copy something? You’ve got more information because
you copied it. Change something? The changed version is different information
than the original. So if you’ve got a population, and you measure the amount of
information in its genome; and then a new individual is born with a change
in one gene – there’s now more information in that population’s
genome.

They rely on a simple confusion: information versus meaning. As human
beings, we’re naturally quite obsessed with meaning. And there’s nothing wrong
with that. But we tend to attribute our intuitive notion of meaning
to things. Our intuitive idea of meaning doesn’t have any necessary relationship
to information – or even to an actual definition of meaning! Our intuitive notion of meaning is based on language and writing – on the way that we communicate with one another, and the properties of what we’re trying to express using that
information.

One good way of thinking about the relationship between information and meaning is Shannon’s information theory. Shannon’s
definition of information is based on the idea of reduction in uncertainty. That is, you pass a message to someone over some medium: the
quantity of information is the amount of uncertainty that you’ve eliminated
by sending the message. The meaning is the set of possibilities
that have been eliminated.
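
A trivial worked example of that idea (the numbers are invented, but the arithmetic is Shannon’s): suppose the receiver starts out with eight equally likely possibilities, and the message narrows that down to two.

    from math import log2

    prior_uncertainty = log2(8)       # eight equally likely possibilities: 3 bits
    posterior_uncertainty = log2(2)   # the message leaves two possibilities: 1 bit

    # the information in the message is the amount of uncertainty it eliminated
    print(prior_uncertainty - posterior_uncertainty, "bits")   # 2.0 bits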

Anything can have information. Anything can ultimately have meaning. But recognizing whether something has meaning can be extremely difficult. For
example… Look at the two strings below.

M'XL("!0@]4<``V9O;RYT>'0`=53+;MLP$+SK*[8G7Q+GGAHIVMZ;0W+I<26M
MY$4E4N$CJOZ^LY0,RTT+&+!-S,,]34_T0]XE4.LIG66AVN,K4/)TDO&I
QTOX;*H/J(.2[/2[Q'?80DaQ+A4/"*1HPC#%2AR/JC_3=3P#SHZ2SNOX+_?3Y
M*N@XQRT06[@-@;5XaC4U?E)I2WBS*Z7/0A:H::LM@2VT8HU4JM=AQY=VJ-6

One of those two strings is an excerpt from the result of
compressing this post up to the word “below”. The other is random. Which
one has meaning? How can you tell?

I can tell you that by one very naive view of information, they’ve
got exactly the same amount of information. (That’s using compressibility
via the LZW-2 encoding as the measure of information.) But one of them
has meaning as English text, and the other is random noise.
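
If you want to try that comparison yourself, here’s a rough sketch of the same experiment, using zlib as a stand-in compressor (the “English” is just a repeated snippet of this post, and the noise comes from os.urandom): already-compressed English and random bytes both come out essentially incompressible, which is exactly why a compressibility measure can’t tell them apart.

    import os, zlib

    english = (b"Never do they bother to define information. There is a good "
               b"reason for that: every scientific definition of information "
               b"absolutely defeats their argument. " * 20)

    compressed_english = zlib.compress(english)          # meaningful, but now opaque
    random_noise = os.urandom(len(compressed_english))   # genuinely meaningless

    print(len(english), "->", len(compressed_english))   # plain English shrinks a lot
    print(len(zlib.compress(compressed_english)))        # barely shrinks any further
    print(len(zlib.compress(random_noise)))              # about the same size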

What people like Ben Stein want you to do is assume that
your intuition about information and meaning is correct. And then
they want to play on that – using scare tactics about mutation –
to make you think that mutations can’t generate new information – because
what mutation produces doesn’t look like information from your intuitive perspective.

But then, neither does the genetic code of a living thing. How
much meaning does “CCA GCA TGC TGA GGT TCA” have to you? Does it look like
a meaningful string? How about “GAA GTT GTC ATT TTA TAA ACC TTT”?

One of the two strings right up there is an actual excerpt from
a human gene. The other is random stuff that I made up. Which one has
more meaning? Which one has more information?

Stein and Giertych won’t answer those questions. They can’t. And
they really don’t want you to know that they can’t. Because their argument
is a fraud. And if they answered questions like the ones I posed above, or
if they even bothered to actually articulate a definition of what
they mean by “genetic information”, they’d have to admit that it’s all
a fraud.

Comments on “Liars: No Information Allowed”

  1. J-Dog

    Thank you for re-posting. As much as I read blogs here, and yours as often as I can, I can not remember all the information (or even understand it all). However, I do find that as I get older, repetition helps the information sink in, so although it might bore the heck out of you, it’s good for dummies like me.
    And since Stein is a new type of crazy, it’s good to have this as a reference tool also.

  2. ERV

    AAAAAAAAAHAHAHAHAHAHAHAHA!
    THE ONE CALLED ‘EXPELLED: A CELL” HAS THE ‘TOTALLY NEW AND ORIGINAL’ ANIMATION! AAAAAAHAHAHAHAHAHAHA ITS STOLEN WHOLE-SALE FROM ‘INNER LIFE’!!! AAAAAAAAAAAAHAHAHAHAHAHA!!!
    THEY JUST REMADE IT LOL!! SCORE!

  3. Frank Quednau

    I don’t think I understood your passage on Shannon very well. Either way, both examples you show have in common that they are a working source for some kind of process that will produce something else. By “working” I mean that some expectations placed upon the product, the target, will be met for one input while not for the other. I am wondering whether, for the inputs you called random, a process and expectations can be devised that classify the input as “working”. Then indeed, everything basically contains information.
    On my own blog I mused about whether evolution itself should be considered intelligent, based on some similarities to our own intelligence, like trial and error, the ability to classify a “design” as successful as well as the ability to memorize successful designs for later retrieval. I am wondering how far this goes. Is new information in the genome generated in a purely random fashion or are mechanisms in place that ensure that a “working” information is obtained faster than just by natural selection? Could a difference between the two be measured? And so on, and so on…definitely an interesting subject to talk about and I’m always keen to hear more well-founded information about this.

  4. spudbeach

    Having read and appreciated Shannon’s 1948 paper on information as well as many other pioneering papers, I’m impressed by how consistently real scientists start out with very clear definitions of what they are talking about. After all, how can we investigate and discuss what we haven’t even defined?
    Shannon defined information in a possible message as the log of the number of different possible messages (assuming that all messages are equally likely).
    Applying that to DNA, we can define DNA information in a gene pool as the log of the number of individuals in the gene pool. That is a definition, possibly not the best one, but one that can be used. This definition, like any other definition of “information”, makes the idea of “evolution can’t increase information” ludicrous. Every time a child is born with a mixture of genes from her parents or a teensy mutation, information has increased.
    It’s so obvious that only someone without a definition of information can fail to see it. Gotta love the ability of some to be blinded against science.
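    Spelled out with toy numbers for the birth-with-a-mutation case (a made-up pool of four short genomes, purely illustrative):

        from math import log2

        gene_pool = ["AAGT", "AAGC", "AAGA", "AAGT"]    # four individuals
        print(log2(len(gene_pool)))                      # 2.0 bits

        gene_pool.append("ATGT")      # a child is born carrying a new mutation
        print(log2(len(gene_pool)))   # ~2.32 bits: the information went up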

  5. Roger Witte

    I agree with you about the scam
    I think that Shannon’s definition is quite a good definition of information (although it isn’t unique; there are other workable definitions)
    But I think of meaning quite differently
    I think ‘meaning’ has to do with interpretation of a message by a receiver.
    Thus when you hear a speech and understand it in a particular way you attribute a meaning to it. Someone else may interpret it differently ie attribute a different meaning to it. The transmitter of the information may have intended a different meaning altogether.
    By and large, genetic code has no meaning to humans (there may be the odd biologist who can interpret it directly for all I know). On the other hand, a ribosome has no difficulty attributing meaning to genetic code (even though it is not an ‘intelligent’ system per se). It attributes meaning to the code by assembling proteins.
    Similarly, neither of the strings you present has meaning to a human. One (or possibly both) has meaning to the relevant decompression program. It attributes meaning to any string that it can successfully decompress into a different string. In the case of one of the two strings you present, the resulting string will have meaning when read by an English speaker (but if you only speak Latvian, it still won’t mean much without further translation).

  6. Michael Fridman

    Thanks for clarifying — I’ve always found this part of the argument the most ridiculous in the whole range of ID tricks (and there are plenty to choose from). The whole claim that mutations can’t create information is complete BS since a single mutation WILL create information if you’re talking about the standard definition of information I remember from computing lectures (in terms of entropy). And this interpretation of information is the only one that makes even the smallest amount of sense of their rantings.

  7. MartinM

    ITS STOLEN WHOLE-SALE FROM ‘INNER LIFE’!!!

    Having seen neither until now, I’d have to say; yes, absolutely. Seems pretty damn blatant.

  8. Bad

    Thanks for responding to my solicitation to irritate yourself by watching it: I almost feel bad. I don’t think I’ll further demand that you watch the other clips as well, even though they feature a gay with a delightful accent explaining how bricks cannot construct a house.
    Berlinski’s computer analogy always slays me. When you get right down to it, computer code is just about the worst model to use when thinking about genes. Biology in a very real sense is about chains of causality: physical interactions that cascade off of each other. No one writes computer programs even remotely like the way genes work to construct function, and the reason really seems to be that genes.
    I’m a little interested in hearing more about natural selection as purely an information reducer though. While I agree that it is indisputably so from the frame of reference and definition of information you are using, couldn’t it also be said to increase it in a different frame of reference? Let’s say I have an audiotape with white noise, and very selectively I cut away various frequencies until a distinct musical theme emerges. Can’t pruning, in this fashion, create a signal from noise?
    If you think of a gene pool as being noisy (i.e. lots of traits cropping up here and there), then natural selection is in a very real sense imprinting the nature of the environment on that gene pool over time: so much so that we can look at a creature and know a lot about what the demands of its environment/environments were. A species introduced into a completely new environment has genes that are basically random with respect to the needs of that environment. But over time, we’d expect to see distinct functions carved out of that randomness: cats from one environment, bears from another, both originally descended from basal carnivores.

  9. QrazyQat

    It SHOULD be a gay, but they wouldn’t allow that, would they? I mean not an out-of-closet gay anyway. 🙂

  10. James F

    I knew that name sounded familiar! He wrote a letter to the editor at Nature (444:265, 2006) entitled “Creationism, evolution: nothing has been proved.” He invoked the Global Darwinist Conspiracy™ as follows: “I believe that, as a result of media bias, there seems to be total ignorance of new scientific evidence against the theory of evolution.”
    I’ve never seen a letter provoke so many responses, including one from the director of the Institute of Dendrology of the Polish Academy of Sciences in Kórnik, Gabriela Lorenc-Plucińska (“Creationist views have no basis in science,” Nature 444:679, 2006).

  11. Felix Salmon

    Just got back from a screening of the film. FYI, the genetic mutation clip did NOT make it into the final cut (or at least the cut I saw today). In fact, they had pretty much zero in the way of attacking evolution in the film.

  12. Gilbert

    Roger,
    I too was a bit perturbed by Mark’s use of “meaning”. It seems to me that Shannon’s information (while a very useful notion of “information”) is not a very good way of thinking about meaning. That is, I agree that meaning requires an act of interpretation.
    I was thinking, perhaps such a notion could be formalized; perhaps as a function or operator on a piece of information. Thus, the pairing of an interpreter with information produces meaning. However, I’m not sure whether such a definition would be powerful enough to capture common uses of the word. As an alternative, perhaps we could think of the function as a transformer with state. (sadly I do not know enough about Programming Languages or logic to properly name what I mean) For instance, if I read this post and responded “hmm!”, certainly the meaning was more than “hmm!” to me. In this case, its effect on my internal state would have to be part of the meaning of the article to me.

  13. Kyle

    My wife just had what I think amounts to a pretty good analogy.
    Put a straight line on a piece of paper. Copy that line using a copier. Now copy the copy. Continue this maybe 10 more times. By the time you’re done, is the resulting copy still a straight line? Does the last copy not count as something new in the universe of shapes, i.e. new information?
    Now, it’s true that it may not be anything which is predictable (you may try it again and end up with something completely different), and probably not generating anything which would be considered useful. But that’s irrelevant, because the universe we’re talking about is simply the shapes on a piece of paper. You have copied it, and added new information to the pool. And maybe here’s the real killer: consider if we had a copier that combined two versions of a so-called line and used that as the resulting line-figure. Now, instead of lines, you started with (say) a circle and a square, and we combine them and generate more. I think we can see this getting complicated really fast, not unlike some of Mark’s fractal posts. 🙂
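    (A quick way to watch this happen on a computer, treating the “line” as nothing but a list of y-values and the copier as something that adds a little noise on every pass; all of the numbers here are invented:)

        import random

        random.seed(0)
        line = [0.0] * 50                 # a perfectly straight line

        for copy in range(10):            # ten generations of photocopying
            line = [y + random.gauss(0, 0.1) for y in line]

        # the tenth-generation copy is no longer straight: a new shape, new information
        print(min(line), max(line))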

  14. Torbjörn Larsson, OM

    I’m a little interested in hearing more about natural selection as purely an information reducer though. While I agree that it is indisputably so from the frame of reference and definition of information you are using, couldn’t it also be said to increase it in a different frame of reference?

    Bad, this is my understanding as well. Information is AFAIU relative to a system and a chosen observer.
    I have related it repeatedly here before, but as it is relevant, evolutionary biologist Richard Dawkins has written an essay on information in the genome based on an analogy to Shannon information:

    Shannon wanted to capture this sense of information content as “surprise value”. It is related to the other sense – “that which is not duplicated in other parts of the message” – because repetitions lose their power to surprise. Note that Shannon’s definition of the quantity of information is independent of whether it is true. The measure he came up with was ingenious and intuitively satisfying. Let’s estimate, he suggested, the receiver’s ignorance or uncertainty before receiving the message, and then compare it with the receiver’s remaining ignorance after receiving the message.

    Mutation is not an increase in true information content, rather the reverse, for mutation, in the Shannon analogy, contributes to increasing the prior uncertainty. But now we come to natural selection, which reduces the “prior uncertainty” and therefore, in Shannon’s sense, contributes information to the gene pool. In every generation, natural selection removes the less successful genes from the gene pool, so the remaining gene pool is a narrower subset.

    This is analogous to the definition of information with which we began: information is what enables the narrowing down from prior uncertainty (the initial range of possibilities) to later certainty (the “successful” choice among the prior probabilities). According to this analogy, natural selection is by definition a process whereby information is fed into the gene pool of the next generation.
    If natural selection feeds information into gene pools, what is the information about? It is about how to survive. Strictly it is about how to survive and reproduce, in the conditions that prevailed when previous generations were alive. [My emphasis.]

    AFAIU this analogy focuses on biologists’ frame of reference for the genome: statistics over populations and change over generations to fixation (evolution). It seems to suggest that the genome learns about the environment by Bayesian inference.
    This, I think, can be made more precise than the loose analogy above suggests, as I hear that some population genetics models of asexual populations look Bayesian. An analogy with trial-and-error learning would be alleles as hypotheses and those retained after selection representing knowledge of what works.
    And in the ev program by research biologist Thomas Schneider you can study how selection produces Shannon information in the chosen reference frame by switching it on and off.

    This control experiment shows that when the ev program is run without selection there is no information increase. Therefore we can attribute the information increases observed with selection on entirely to that selection. In other words, an evolutionary algorithm does far better (almost 13 standard deviations!) than ‘pure chance’ which is the situation when there is no selection.
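    (For anyone who wants to poke at that idea without running the real ev program, here is a toy version of the same on/off-selection comparison. It has nothing to do with Schneider’s actual model; the target and all parameters are arbitrary. It just measures summed per-position Shannon information in a population of binary “sites” after many generations, with selection switched off and on.)

        import random
        from math import log2

        random.seed(0)
        L, N, TARGET = 16, 64, [1] * 16

        def info_bits(pop):
            # summed per-position information: one bit minus the entropy at each site
            total = 0.0
            for i in range(L):
                p = sum(ind[i] for ind in pop) / len(pop)
                h = 0.0 if p in (0.0, 1.0) else -(p * log2(p) + (1 - p) * log2(1 - p))
                total += 1.0 - h
            return total

        def next_generation(pop, select):
            kids = [[b ^ (random.random() < 0.01) for b in ind]    # rare bit-flips
                    for ind in random.choices(pop, k=N)]
            if select:
                kids.sort(key=lambda ind: sum(a == b for a, b in zip(ind, TARGET)),
                          reverse=True)
                kids = kids[:N // 2] * 2                           # better half reproduces
            return kids

        for select in (False, True):
            pop = [[random.randint(0, 1) for _ in range(L)] for _ in range(N)]
            for _ in range(200):
                pop = next_generation(pop, select)
            print("selection on:" if select else "selection off:", round(info_bits(pop), 2))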

  15. Torbjörn Larsson, OM

    ITS STOLEN WHOLE-SALE FROM ‘INNER LIFE’

    Um, is it? I find scenes here that I don’t find in Harvard’s Inner Life.
    But it is absolutely a cheap ripoff, look at the similarity between initial sequences.

    I knew that name sounded familiar!

    IIRC he is also the guy who held a creationist seminar announced on the European Parliament website.

  16. Bad

    Felix Salmon:”Just got back from a screening of the film. FYI, the genetic mutation clip did NOT make it into the final cut (or at least the cut I saw today). In fact, they had pretty much zero in the way of attacking evolution in the film.”
    Er, what? Everyone I’ve heard of who’s seen the film, be it scientist or creationist, has listed out lots and lots of different and specific accusations and allegations made about the weakness and perfidiousness of evolutionary theory, and how that all ties together with how scientists supposedly must suppress dissent to even get it to hold up.

  17. RBH

    Frank Quednau wrote

    On my own blog I mused about whether evolution itself should be considered intelligent, based on some similarities to our own intelligence, like trial and error, the ability to classify a “design” as successful as well as the ability to memorize successful designs for later retrieval. I am wondering how far this goes.

    One commenter, a theist, was actually banned from Uncommon Descent for making just that argument, her argument being based explicitly on Dembski’s definition of “intelligence.”

  18. Christophe Thill

    Maciej Giertych is a well known crackpot. In the European Parliament, he fights for creationism as hard as he can. His son Roman Giertych is the leader of a radical right party called the League of Polish Families (with a name like that, you just know that anti-gay hate plays a big part in their ideas) and was the education minister in the conservative Kaczynski government. His antics played some part in this government’s loss of confidence and eventual fall. Father and son are of one mind on all things pertaining to religion, science, society, morality etc. They’re such a shame to their country that some scientific luminaries (such as the great paleontologist Zofia Jaworowska-Kielan) felt compelled to speak out against them.
    Well, Ben Stein sometimes travels far to find his allies… but he seems to have some sort of inverted Golden Compass that always leads him to the worst cranks!

  19. mj

    Maciej Giertych is a well-known idiot who has spent his time in the European Parliament not only pursuing a wildly anti-scientific agenda but also publishing a booklet with anti-semitic content and railing against homosexuals. His son, Roman, the present leader of the party LPR, was minister of education in Poland before the last elections, when the party was luckily voted out of parliament. That the “Expelled” people give time to a kook like Giertych is a clear sign of desperation.

  20. Ian Calvert

    “Never do they bother to define information.”
    That doesn’t matter. They’re wrong unless the definition states that everything has the same amount of information.
    A quick proof (disclaimer: not exactly formal/rigorous/well written 🙂 )
    Consider 3 genetic operators, point mutation, deletion and insertion.
    Any genetic sequence A can be changed using these to any other possible sequence.
    For their statement to be true, the resulting sequence must have less information.
    This new sequence B can be converted into any other sequence, again with a loss of information.
    Since the set of all possible sequences includes A, we get:
    information(A) > information(B) > information(A)
    Unless all possible sequences contain the same amount of information, they are wrong.
    If all possible sequences have the same amount of information, then evolution can happily turn one creature into another without upsetting the information amount, thus invalidating the rest of their argument.
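    (The reversibility the proof leans on is easy to demonstrate. A throwaway sketch with a made-up sequence, showing the three operators carrying A to some other sequence B and then straight back, so no measure can strictly decrease at every step:)

        def point_mutation(seq, i, base):
            return seq[:i] + base + seq[i+1:]

        def insertion(seq, i, base):
            return seq[:i] + base + seq[i:]

        def deletion(seq, i):
            return seq[:i] + seq[i+1:]

        A = "GATTACA"
        B = point_mutation(A, 0, "C")    # CATTACA
        B = deletion(B, 3)               # CATACA
        B = insertion(B, 6, "G")         # CATACAG

        # ...and the same three operators take B straight back to A
        C = deletion(B, 6)               # CATACA
        C = insertion(C, 3, "T")         # CATTACA
        C = point_mutation(C, 0, "G")    # GATTACA
        assert C == A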

  21. David Marjanović

    Maciej

    Just for the record: this is the correct spelling.

    Zofia Jaworowska-Kielan

    Kielan-Jaworowska.

    Since the set of all possible sequences includes A, we get:
    information(A) > information(B) > information(A)

    This falls under “how stupid of me not to have thought of this myself”.

  22. SteveM

    Since the set of all possible sequences includes A, we get:
    information(A) > information(B) > information(A)
    Unless all possible sequences contain the same amount of information, they are wrong.

    Wait, I may not be the sharpest tack in the pail, but does it matter how much information there is? Didn’t you just “prove” a contradiction, therefore invalidating at least one of the assumptions of the argument? Specifically that your 3 genetic operators only reduce the amount of information in a gene? Even if all possible sequences had the same amount of information, your conclusion would still be a contradiction since there does not exist an A where A>A.
    Actually “all sequences must have the same information content” is not a valid conclusion for another reason, since it would imply that all genetic operations are null operators. Your proof only shows that at least one of your assumptions is false, not that they all are.

  23. SteveM

    BTW, the “Bad from Bad Ideas Blog” hyperlink at the start of the article just points here.

  24. MartinM

    Didn’t you just “prove” a contradiction, therefore invalidating at least one of the assumptions of the argument?

    No; typically, the claim is that mutations cannot produce new information, and so the >’s should be >=’s.
    I’ve tried this line of argument myself on many occasions. The problem is that most of the creationists don’t have the faintest clue what they’re talking about anyway, and so they usually utterly fail to grasp the point.

  25. xebecs

    A simple linguistic example proves them wrong.
    “to” is different from “too”, yet the only change is a duplication of the ‘o’.
    I would think that even a slow lay person could grasp that example.

  26. ZaBong

    To SteveM: Calvert’s proof seems to be correct. Our creationist friends assume that mutations never increase information. But as soon as you have mutations whose effect can invert that of earlier mutations – and we know there are – we get a contradiction, *regardless of how information is defined*, unless all sequences have the same amount of information.
    So either mutations can increase information, or all sequences must have the same information.
    I find this result quite beautiful.

  27. Mark C. Chu-Carroll

    Anonymous:
    Actually, I don’t use QWERTY. I use dvorak. Due to some rather painful ulnar-nerve RSI, I find it less painful to type on a dvorak keyboard – there’s less of the pinky extension that causes me pain in the dvorak layout.

  28. Torbjörn Larsson, OM

    So either mutations can increase information, or all sequences must have the same information.

    Or it is selection that really does the job of increasing information, this time depending on how information is defined (for the individual or, evolutionary, for the population).
    But I agree, it is a d’oh! derivation.

  29. Ian Calvert

    SteveM, I’m going to have to call in my disclaimer of “not well written” :).
    I tried to point out that reducing information with each mutation would be entirely contradictory, followed by an afterthought that my proof doesn’t work if you allow the information content to remain constant with any change. Then an after-afterthought that even this was bollocks.
    Cheers for the clarification though.
    “Actually “all sequences must have the same information content” is not a valid conclusion for another reason, since it would imply that all genetic operations are null operators.”
    Null operators with regards to information content, yes. With any even vaguely sensible measure (anything other than basically information(sequence A)=1), yeah their argument implies that mutation does not exist.
    Of course, we all know how this one continues:
    “But have you ever seen it happen!”
    “Yes we have *points to experiments*”
    “They were done in a lab, by INTELLIGENCE (or darwinismists LOLOLOLOLOLOLOL) !”
    “Firstly: bollocks. Secondly, *points to field experiments*”
    “But that didn’t increase the complex-information-Dembski content”
    “That measure isn’t defined properly and contains glaring err-”
    “Show me a cup changing into a whale. You can’t can you? HA”
    “How do you not fall down more?”

  30. trrll

    On my own blog I mused about whether evolution itself should be considered intelligent, based on some similarities to our own intelligence, like trial and error, the ability to classify a “design” as successful as well as the ability to memorize successful designs for later retrieval. I am wondering how far this goes

    There is reason to suspect that all human intelligence might be based upon randomization/selection mechanisms that are analogous to natural selection on at least some levels. The basic machinery for this is clearly present. Gerald Edelman wrote a book on this topic entitled “Neural Darwinism.”
    The usual difference cited between human intelligence and natural selection is that human design is (sometimes) goal directed. But this can be viewed in terms of applying randomization/selection discovery within a neural simulation of reality, and then “replaying” in reality the sequences of actions that yielded positive results in the simulation. Moreover, randomization/selection can be used to construct such a simulation, by a selection algorithm that compares the output of the “simulator” to reality.

  31. Bob Munck

    there’s less of the pinky extension that causes me pain in the dvorak layout.

    What part of your body is your “dvorak layout?”

  32. wrymouth

    I’m more of a statistician than a geneticist, but I think, sometimes, that those who don’t understand the evolutionary model think of great buildings, but fail to consider all the scaffolding, etc., that was around the buildings as they went up, before the scaffolding was taken away to reveal the new functioning edifice.
    I’m not saying this very well, because I am in a hurry, but I hope a shadow of the point I want to make gets across.

  33. Mark Chu-Carroll

    Bob:
    Ha, ha.
    Seriously though, the Dvorak keyboard layout is good for RSI. There are a lot of false claims about Dvorak. It’s not faster than Qwerty. It’s not easier to use than Qwerty. But one of its advantages is that its key arrangement allows you to type with less extension than Qwerty.
    Contrary to the common urban legend, Qwerty wasn’t designed to slow you down. In fact, it became dominant as a keyboard because it was the fastest of the various proposed keyboards when typewriters were being invented. But it did have a mechanical constraint – to be able to be typed on quickly, it had to be designed to avoid jams between the levers for the keys.
    The main effect of that, in terms of how the keyboard works, is that some keys get spread apart on the keyboard. Your hands need to move more on a Qwerty keyboard – and that particularly affects the pinkies. Dvorak works out to reduce finger extension by around 30% in routine typing. For programmers, who use a lot of strange symbol characters, it’s slightly less than that. So it’s a modest change, but when you’re in pain, every little bit helps.

  34. Bob Munck

    Re: Dvorak
    I’ve been interested in alternatives for text entry since seeing Doug Engelbart’s chord keyboard in 1968. In the late 70’s I bought something called the WriteHander, a 3″ gray hemisphere with buttons for the four fingers and four pairs of buttons for the thumb. Never got very good with it; I messed it up by trying to combine it with a mouse, at the level of duct-taping the two together. Now I’m playing around with the Fitaly touch-screen/stylus keyboard, incidentally developed by Jean Ichbiah of Ada fame. Of course, one finger or stylus typing is very different from touch typing.

  35. Charlie B.

    I use Dvorak too. I’d seen the speed claims, I’d seen the finger extension claims. But what finally did it for me was that I couldn’t touch type on a QWERTY, so by changing my keyboard layout I was forced to learn to touch type.
    And it’s a bloody good security feature – I’ve returned from the water cooler a number of times to find a coworker sitting at my desk going “What’s wrong with your keyboard?” and looking very puzzled…

