Monthly Archives: September 2006

Manual Calculation: Using a Slide Rule (part 1)

Several people in the geekout thread asked me to explain how a slide rule works, and I’ve been meaning to write a couple of articles about manual computing devices. So I thought I’d do it. There’s a nice slide-rule simulator at [Derek’s Virtual Slide Rule Gallery][sr], which is what I used to generate the images in this article.

I know a lot of people think that the idea of learning to use something like a slide rule is insane in an age of computers and calculators, and that this is a silly thing to post about. But I really love slide rules, and not just because I’m a geek. Slide rules make math tactile. Using a slide rule makes you understand how certain kinds of math work; and not just a theoretical understanding, but an understanding on a very concrete, physical level. My dad taught me to use one not because I needed to know (I’m not that old!), but because he loved it and thought it was cool; my slide rule is the one that he used in college. He gave it to me when I was in high school. It’s a beautiful K&E log-log duplex decitrig.

There are a couple of things to be said about slide rules up front. They’re beautiful things, and the guy who invented them was an incredible genius. But they’re not a tool for the faint of heart. Using a slide rule isn’t like using an electronic calculator. You actually need to do an approximation of the calculation in your head, because the slide rule doesn’t do powers of ten; you have to handle those yourself! Also, in general, the slide rule is used for the “hard stuff”: multiplication and division, logarithms, exponents, and trigonometry. Addition and subtraction you do by yourself, either in your head or on paper.

The basic idea of the slide rule comes from logarithms, in particular this fundamental identity: x × y = b^(log_b(x) + log_b(y)). That is: adding logarithms is equivalent to multiplying numbers. The slide rule places numbers onto a ruler on a *logarithmic* scale; so the distance from “1” to a number “n” on the rule is the logarithm of “n”. That’s the whole fundamental trick that makes it work.
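You can check the identity itself with a couple of lines of Python (just a sketch of the math; the rule does this same addition mechanically, with distances instead of code):

```python
import math

# Multiplying by adding logarithms: x * y = 10**(log10(x) + log10(y))
x, y = 22.5, 3.7
print(10 ** (math.log10(x) + math.log10(y)))   # ~83.25
print(x * y)                                   # 83.25, the same thing (up to float rounding)
```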

Let’s take a look at a slide rule. This is a picture of a Pickett Microline slide rule. That’s a very simple rule, which is easy to see on the computer, but it’s relatively wimpy. It doesn’t have a lot of scales (which is equivalent to a calculator with very few buttons), and it’s really only good for 2 to 2.5 significant digits. (Personally, I’m not a Pickett fan; I prefer the big old K&Es, but that’s just because they’re what I’m used to.)

rule.jpg

For multiplication and division, we only need two scales: the D scale, which is the top row of the lowest third of the rule; and the C scale, which is the bottom row of the moving slide in the center. C and D use the same logarithmic scale. We’ll also use the *cursor*, which is the vertical line on the transparent view slide.

Let’s say we wanted to multiply 22.5 by 3.7. Since the rule doesn’t track powers of ten, 22.5 is entered as 2.25. We move the center slide so that “1” on the C scale lines up with 2.25 on the D scale below it:

mult-step1.jpg

Now – since adding logarithms is multiplying numbers, and the position of a number on the C and D scales is determined by the same logarithm, that means that “3.7” on the C scale is in the same position as “2.25×3.7” on the D scale. So what’s on the D scale under the 3.7 on C? We slide the cursor over (both to mark the position, and to make it easier to read), and find that it’s at 8.3.

mult-result.jpg

So the answer is 8.3 times 10 to the something. The rule doesn’t tell us what. So we do it approximately in our heads. It’s about 20 times 3 and a half, which is around 80. So the answer is 83. (The exact answer is 83.25, but this rule isn’t big enough for us to see that.)

See? Simple. Now, if we wanted to multiply that by, say, 18, we’d slide the “1” over so that it lined up with the cursor… except that then the answer is off the end of the rule. But no problem! There’s *also* a 1 on the *other* end of the C scale. We can slide the C scale so that its *right-hand* 1 is over our 83 (8.3 on the D scale), where we’ve left the cursor. Now we slide the cursor down to 1.8 on the C scale:

mult2.jpg

And you can see it’s sitting at about 1.49. But since we only used two digits, we can only read two digits, so we say 1.5. Now we need to do our powers of ten: it’s about 20 times 80, which is 1600. So it’s 1.5×10³, or about 1500. (Exact result is 1494.)
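Here’s a little sketch in Python of the bookkeeping we just did (a toy model of the procedure, not how a real rule is built): the scales take care of the digits, and the powers of ten get tracked separately, “in your head”.

```python
import math

def slide_rule_multiply(x, y, places=2):
    """Toy model of slide-rule multiplication: the scales handle the
    significant digits; the powers of ten are tracked separately."""
    # Reduce each factor to a mantissa in [1, 10) plus a power of ten.
    ex, ey = math.floor(math.log10(x)), math.floor(math.log10(y))
    mx, my = x / 10**ex, y / 10**ey
    # The rule adds the log-distances of the two mantissas...
    m = 10 ** (math.log10(mx) + math.log10(my))
    e = ex + ey
    # ...and if the result runs off the end of the D scale, fold it back
    # (that's what using the right-hand index does).
    if m >= 10:
        m, e = m / 10, e + 1
    # We can only read the scale to two or three significant digits.
    return round(m, places), e

print(slide_rule_multiply(22.5, 3.7))   # ~(8.32, 1): read 8.3, so about 83
print(slide_rule_multiply(83.0, 18))    # ~(1.49, 3): about 1500
```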

Division is almost the same thing done backwards: x/y = a^(log_a(x) − log_a(y)). So, to divide x by y, we put “y” on the C scale over “x” on the D scale, and slide the cursor over to 1 on C. For example, let’s take π/2. Most rules have a specific mark for π to make that easy. So we slide 2 on C to line up with π on D:

div-setup.jpg

And slide the cursor to one on C:

div-result.jpg

And our answer is: about 1.57. (The cursor is about halfway between the marks for 1.56 and 1.58; and π is positioned to three significant digits.) We need to do the powers of ten for division too, but that’s easy; we know π/2 is between 1 and 2, so it’s 10⁰, and the answer is just 1.57.

What’s the real answer? About 1.5708.
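The same kind of check works for the division (again, just the logarithm identity in Python, which is all the sliding is really doing):

```python
import math

# Dividing by subtracting logarithms: x / y = 10**(log10(x) - log10(y))
print(10 ** (math.log10(math.pi) - math.log10(2)))   # 1.5707963... = pi/2
```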

See? Isn’t that cool?

[sr]: http://www.antiquark.com/sliderule/sim/

The Geekoff Intensifies

Orac is refusing to surrender and acknowledge the obvious fact that he simply *is not* as much of a geek as I am. So I am obligated to point out several further facts in my attempt to make him surrender the crown of geekiness.
———-
First: compare our professions. Orac is a cancer surgeon: a person whose professional life is dedicated to *saving people’s lives*. There are people living today who would be dead but for the efforts of Orac. It is an honorable profession, deserving of nothing but respect.
In contrast, I am a software engineering researcher; aka a professional computer geek. I spend my life designing and writing software (most of which will never be used) for other people to use to write software.
———–
Back in high school, I spent most of a year saving up money to buy a super-cool pocket calculator that was programmable in BASIC. Then after I got it and used it for a while, I decided to switch. To a slide rule.
Because it’s *faster*.
I still own a [K&E log-log duplex decitrig slide rule][sliderule]. (And know how to use *all* of the scales.) It’s a beauty. I look forward to teaching my children to use it. (Using a slide rule gives you a tactile sense of how a lot of things fit together in simple math.) Here’s a pic I found of the same model that I have:
0098-ke4181-3-01-front-left.jpg
Mine’s a lot more beaten up than this one; it’s thoroughly yellowed up the full length; the view slide is a bit scraped up and missing the top-left screw. But it’s still in great working condition.
————–
Let us, for a moment, consider my name. Mark Chu-Carroll. Where do you suppose “Chu-Carroll” came from?
Obviously, it’s a combination of the last names of me and my wife before we got married. But why “Chu-Carroll” rather than “Carroll-Chu”? Is it for aesthetics? No. The real reason is *far* geekier than anything like mere aesthetics.
No. The real reason why we chose “Chu-Carroll” is… Bibliographies.
When we were married, my wife had more publications than I did. And so we decided to use “Chu-Carroll” so that people doing literature searches for *her* name would be more likely to find her papers, because “Jennifer Chu-Carroll” would appear immediately after “Jennifer Chu” in any bibliographic listing likely to contain her work; whereas “Jennifer Carroll-Chu” would be separated by some distance, and would be more likely to be missed.
So my last name was chosen based on how it would be alphabetized in bibliographies.
————————
Let’s take a look at genetics for a moment. My parents recently went on vacation, and brought back gifts for my children. One of the gifts was a set of pens with their names on them. Give a new pen and a stack of paper to a three year old boy, and what do you *think* that he would do?
*My* three-year-old son took the pen apart. He’d never seen a “click” pen before, and he wanted to know how it worked.
[sliderule]: http://sliderule.ozmanor.com/rules/sr-0098-ke4181-3-01.html

Shapes, Boundaries, and Interiors

When we talk about topology, in general, the way we talk about it is in terms of *shapes*: geometric objects and spaces, surfaces, bodies that enclose things, etc. We talk about the topology of a *torus*, or a *coffee mug*, or a *sphere*.
But the topology we’ve talked about so far doesn’t talk about shapes or surfaces. It talks about open sets and closed sets, about neighborhoods, even about filters; but we haven’t touched on how this relates to our *intuitive* notion of shape.
Today, we’ll make a start on the idea of surface and shape by defining what *interior* and *boundary* mean in a topological space.

Continue reading

Friday Random Ten: The "What a Geek" Edition

Haven’t done one of these in a while. In light of the “Geek-off” this week, I made a playlist out of what I think of as my “geekier” music, and let iTunes assemble a random list from that playlist.
1. **Elizabeth and the Catapult, “Waiting for the Kill”**. E&tC is a NYC band that plays what they call “baroque pop”: pop music with heavy jazz and classical influence. I heard them interviewed on the local NPR station, and immediately grabbed their first album – it’s just an EP, but it’s fantastic. This is the best track.
2. **Flook, “The Tortoise and the Hare”**. The world’s greatest trad Irish flute-based band. Flook is really unbelievable: so full of energy, it’s impossible to *not* like them.
3. **Frank Zappa, “Drowning Witch”**. Old stuff from Zappa; incredibly goofy, and yet pretty darn cool musically.
4. **Genesis, “Here Comes the Supernatural Anaesthetist”**. A very strange track off of Genesis’ masterpiece from their Peter Gabriel days, “The Lamb Lies Down on Broadway”.
5. **Gordian Knot, “Komm Susser Tod, Komm Sel’ge”**. Bach, performed on the electric touch bass guitar.
6. **Mogwai, “Acid Food”**. Another one of those “post-rock ensembles” that I’m so fascinated by. Mogwai is simply amazing; a bit louder than the Clogs or the Dirty Three, but brilliant.
7. **Moxy Fruvous, “King of Spain”**. My wife’s favorite MF song. MF is a Canadian band that specializes in goofy a cappella. “Once I was the King of Spain, now I eat humble pie; I’m telling you I was the King of Spain, now I vacuum the turf at SkyDome”.
8. **Steve Reich & Maya Beiser, “Cello Counterpoint”**. An amazing composition by Steve Reich. It’s all performed by Maya Beiser on cello – there are *16* tracks of Maya, all overlaid. Unbelievable. She performs it live with a recording of 15 of them, and plays the 16th live.
9. **Thinking Plague, “Blown Apart”**. Another post-rock ensemble. By far the strangest of the PREs that I listen to. Thinking Plague often goes totally atonal; and even when they don’t, they have a strange sound. One fascinating thing about them is that the vocalist treats her voice as just another instrument in the band. She’s in no way a “lead vocalist” like you’d find in a traditional band; she’s just another instrument in the mix. Her voice is as likely to be part of the background rhythm supporting the guitarist as it is to be singing a melody.
10. **Philip Glass, “Train 1” from “Einstein on the Beach”**. A small piece of Glass’s strange but brilliant opera. The opera is about four hours long, with no intermission. This section is formed from arpeggios played by saxophone and keyboard, plus a chorus singing a pulsing counterpoint. Other parts of the opera consist of the voices chanting numbers. It’s strange, and not the easiest thing to listen to, but it’s worth it.

Pathological Programming: The World’s Smallest Programming Language

For today’s dose of pathological programming, we’re going to hit the world’s simplest language: a Turing-complete programming language with exactly *two* characters, no variables, and no numbers. It’s called [Iota][iota]. And rather than bothering with the rather annoying Iota compiler, we’ll just use an even more twisted language called [Lazy-K][lazyk], which can run Iota programs and [Unlambda][unlambda] programs, as well as programs in its own syntax.
[unlambda]: http://scienceblogs.com/goodmath/2006/08/friday_pathological_programmin_3.php
[lazyk]: http://esoteric.sange.fi/essie2/download/lazy-k/
[Iota]: http://ling.ucsd.edu/~barker/Iota/
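To give a taste of what a two-character language looks like, here’s a toy Iota evaluator in Python – just a sketch, not the Lazy-K implementation – assuming the usual encoding, where `*` means “apply” and `i` is the single combinator λx. x S K:

```python
# A toy Iota evaluator (a sketch, not the Lazy-K implementation).
# Iota's whole alphabet: '*' is application, 'i' is the single
# combinator iota = \x. x S K, built from the classic S and K.
S = lambda x: lambda y: lambda z: x(z)(y(z))
K = lambda x: lambda y: x
iota = lambda x: x(S)(K)

def parse(src):
    """Parse an Iota string, returning (value, remaining input)."""
    if src[0] == '*':
        f, rest = parse(src[1:])
        a, rest = parse(rest)
        return f(a), rest
    return iota, src[1:]          # src[0] == 'i'

# '*ii' (iota applied to itself) reduces to the identity combinator:
identity, _ = parse('*ii')
print(identity(42))               # prints 42
```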

Continue reading

Neighborhoods (Updated)

The past couple of posts on continuity and homeomorphism actually glossed over one really important point. I’m actually surprised no one called me on it; either you guys have learned to trust me, or else no one is reading this.
What I skimmed past is what a *neighborhood* is. The intuition for a neighborhood is based on metric spaces: in a metric space, the neighborhood of a point p is the set of points that are *close to* p, where *close to* is defined in terms of the distance metric. But not all topological spaces are metric spaces. So what’s a neighborhood in a non-metric topological space?
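(For comparison, in a metric space the answer is trivial to write down: the ε-neighborhood of a point p is just the open ball of radius ε around it,

$$N_\varepsilon(p) = \{\, x : d(x, p) < \varepsilon \,\}$$

and it’s exactly that distance function d that a general topological space doesn’t have.)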

Continue reading

Obnoxious Answers to Obnoxious Questions

A few of my recent posts here appear to have struck some nerves, and I’ve been getting lots of annoying email containing the same questions, over and over again. So rather than reply individually, I’m going to answer them here in the hope that either (a) people will see the answers before sending the questions to me, and therefore not bother me; or (b) people will conclude that I’m an obnoxious asshole who isn’t worth the trouble of writing to, and therefore not bother me. I suspect that (b) is more likely than (a), but hey, whatever works.
Answers beneath the fold.

Continue reading

Topological Equivalence: Introducing Homeomorphisms

With continuity under our belts (albeit with some bumps along the way), we can look at something that many people consider *the* central concept of topology: homeomorphisms.
A homeomorphism is what defines the topological concept of *equivalence*. Remember the clay mug/torus metaphor from my introduction: in topology, two topological spaces are equivalent if they can be bent, stretched, smushed, or twisted to form the same shape *without* tearing or gluing.
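For reference, the formal version is short: a homeomorphism is a continuous bijection whose inverse is also continuous,

$$f : \mathbf{S} \to \mathbf{T} \text{ bijective}, \quad f \text{ and } f^{-1} \text{ both continuous},$$

and two spaces are topologically equivalent exactly when such an f exists.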
The rest is beneath the fold.

Continue reading

Back to Topology: Continuity (CORRECTED)

*(Note: in the original version of this, I made an absolutely **huge** error. One of my faults in discussing topology is scrambling when to use forward functions, and when to use inverse functions. Continuity is dependent on properties defined in terms of the *inverse* of the function; I originally wrote it in the other direction. Thanks to commenter Dave Glasser for pointing out my error. I’ll try to be more careful in the future!)*
Since I’m back, it’s time to get back to topology!
I’m going to spend a bit more time talking about what continuity means; it’s a really important concept in topology, and I don’t think I did a particularly good job at explaining it in my first attempt.
Continuity is a concept of a certain kind of *smoothness*. In non-topological mathematics, we define continuity with a very straightforward algebraic idea of smoothness. A standard intuitive definition of a *continuous function* in algebra is “a function whose graph can be drawn without lifting your pencil”. The topological idea of continuity is very much the same kind of thing – but since a topological space is just a set with some additional structure, the definition of continuity has to be generalized to the structure of topologies.
The closest we can get to the algebraic intuition is to talk about *neighborhoods*. We’ll define them more precisely in a moment, but first we’ll just talk intuitively. The distance-based idea of a neighborhood only exists in metric spaces, since it’s defined in terms of distance; but it will give us the intuition that we can build on.
Let’s look at two topological spaces, **S** and **T**, and a function f : **S** → **T** (that is, a function from *points* in **S** to *points* in **T**). What does it mean for f to be continuous? What does *smoothness* mean in this context?
Suppose we’ve got a point *s* ∈ **S**. Then f(*s*) ∈ **T**. If f is continuous, then points *close to* *s* get mapped to points *close to* f(*s*). What does *close to* mean? Pick any distance – any *neighborhood* N(f(*s*)) in **T** – no matter how small; there will be a corresponding neighborhood M(*s*) around *s* in **S** so that every point of M(*s*) is mapped by f into N(f(*s*)). If that’s a bit hard to follow, a diagram might help:
continuity.jpg
To be a bit more precise: let’s define a neighborhood. A neighborhood N(p) of a point p is a set of points that are *close to* p. We’ll leave the precise definition of *close to* open, but you can think of it as being within a real-number distance in a metric space. (*close to* for the sake of continuity is definable for any topological space, but it can be a strange concept of close to.)
The function f is continuous if and only if for all points f(s) ∈ **T**, for all neighborhoods N(f(s)) of f(s), there is some neighborhood M(s) in **S** so that f(M(s)) ⊆ N(f(s)). Note that this is for *all* neighborhoods of *all* points in **T** mapped to by f – so no matter how small you shrink the neighborhood around f(s), the property holds – and it implies that as the neighborhood in **T** shrinks, so does the corresponding neighborhood in **S**, until you reach the single points f(s) and s.
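If you specialize this to metric spaces, with the neighborhoods taken to be the usual ε-balls, you get back exactly the ε-δ definition of continuity from analysis:

$$\forall \varepsilon > 0 \;\, \exists \delta > 0 : \; d_{\mathbf{S}}(x, s) < \delta \implies d_{\mathbf{T}}(f(x), f(s)) < \varepsilon$$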
Why does this imply *smoothness*? It means that f can’t tear things apart: points that were close together in **S** will still be close together after being mapped by f into **T**. f won’t pull apart things that were together originally. *(This paragraph was corrected to be more clear based on comments from Daniel Martin.)*
For a neat exercise: go back to the category theory articles, where we defined *initial* and *final* objects in a category. There are corresponding notions of *initial* and *final* topologies on a set. The definitions are basically the same as in category theory – the arrows from the initial object are the *continuous functions* from the topological space, etc.