According to this, the shapes of the numerals were derived from a notation in which each numeral contains its own number of angles. It’s a really interesting idea, and it would be really interesting if it were true. The problem is, it isn’t.
Look at the numerals in that figure. Just by looking at them, you can see quite a number of problems with them.
For a couple of obvious examples:
You don’t even need to notice stuff like that to see that this is rubbish. We actually know quite a lot about the history of arabic numeral notation. We know what the “original” arabic numerals looked like. For example, this wikipedia image shows the standard arabic numerals (this variant is properly called the Bakshali numerals) from around the second century BC:
It’s quite fascinating to study the origins of our numeric notation. It’s true that we – “we” meaning the scholarly tradition that grew out of Europe – learned the basic numeric notation from the Arabs. But they didn’t invent it – it predates them by a fair bit. The notation originally came from India, where Hindu scholars, who wrote in an alphabet derived from Sanskrit, used a sanskrit-based numeric notation called Brahmi numerals (which, in turn, were derived from an earlier notation, Kharosthi numerals, which weren’t used quite like the modern numbers, so the Brahmi numerals are considered the earliest “true” arabic numerals). That notation moved westward, and was adopted by the Persians, who spread it to the Arabs. As the Arabs adopted it, they changed the shapes to work with their calligraphic notations, producing the Bakshali form.
In the Brahmi numerals, the numbers 1 through 4 are written in counting-based forms: one is written as one horizontal line; 2 as two lines; 3 as three lines. Four is written as a pair of crossed lines, giving four quadrants. 5 through 9 are written using sanskrit characters: their “original” form had nothing to do with counting angles or lines.
The real history of numerical notations is really interesting. It crosses through many different cultures, and the notations reform each time it migrates, keeping the same essential semantics, but making dramatic changes in the written forms of individual numerals. It’s so much more interesting – and the actual numeral forms are so much more beautiful – than you’d ever suspect from the nonsense of angle-counting.
One of the dishes we had was Tsukune, which is a standard at Izakayas. It’s basically a meatball made from ground chicken, and served drizzled with a sauce that’s a lot like a thick, rich teriyaki. These suckers were the tenderest, most succulent meatballs I’ve ever eaten. I was strongly motivated to try to reproduce them.
So I did some research. And it turned out that there are two tricks to how really good tsukune come out that way.
One part is what goes in: pretty much any part of the bird that can get finely minced. Breast, thigh, neck, legs, wings. If the cartilage is soft enough to be minced, it goes right in with the meat.
The other part is how it’s treated. The meat has the living hell beaten out of it. It’s finely minced, and then the ground meat is kneaded and pounded until it’s all soft and firmly stuck together. That process both binds the meat without adding any extra binding agents like eggs, and tenderizes it.
I really don’t like working with cartilage. I’ve tried it in some recipes in the past, and I find it really unpleasant. It’s slimy, and it sometimes forms sharp edges that can cut you. Getting cut by raw meat is not something I like doing.
So I decided to try to reproduce as much of the flavor as I could, using ingredients that I had on hand. And I tried a couple of tricks to get the tender texture. It’s not as tender as a true Izakaya tsukune, but it was awfully good.
I relied on three things to get the tenderness I wanted.
The result was, honestly, not very much like the tsukune from the Izakaya. But it was really, really delicious. I mean really, seriously delicious – like some of the best meatballs I’ve ever eaten. Not quite as good as the Izakaya ones, and quite different – but so, so good.
It’s not a difficult dish to make, but it is time consuming. Not really that different from other meatball recipes I’ve made: meatballs are often the kind of dish that cooks all day.
The sauce cooks for a while on its own, and then cooks more with the meatballs in it. But the chicken needs to marinate for an hour or two – so you really need to start both around the same time.
The sauce is basically just a slightly amped up teriyaki sauce – but homemade is so much better than the crap you get out of a bottle, and it’s really easy to make.
Serve this on a bed of simple steamed rice, and some stir fried veggies on the side. Home cooking doesn’t get better than this.
φ is the so-called golden ratio. It’s the number that is a solution for the equation (a+b)/a = a/b. The reason that that’s interesting at all is because it’s got an interesting property when you draw it out: if you take a rectangle where the ratio of the lengths of the sides is 1:φ, then if you remove the largest possible square from it, you’ll get another rectangle whose sides have the ratio φ:1. If you take the largest square from that, you’ll get a rectangle whose sides have the ratio 1:φ. And so on.
The numeric value of it is (1+sqrt(5))/2, or about 1.618033988749895.
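Both the defining equation and the shrinking-rectangle property are easy to check numerically. A quick sketch (the variable names are mine):

```python
import math

# The golden ratio: the positive solution of (a+b)/a = a/b.
phi = (1 + math.sqrt(5)) / 2

# Check the defining equation with a = phi, b = 1.
a, b = phi, 1
assert math.isclose((a + b) / a, a / b)

# Rectangle property: removing a 1x1 square from a 1 x phi
# rectangle leaves a 1 x (phi - 1) rectangle with the same ratio.
long_side, short_side = phi, 1
remainder = long_side - short_side          # phi - 1 = 1/phi
assert math.isclose(short_side / remainder, phi)

print(phi)  # 1.618033988749895
```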
The problem with φ is that people are convinced that it’s some kind of incredibly profound thing, and find it all over the place. The problem is, virtually all of the places where people claim to find it are total rubbish. A number that’s just a tiny bit more than 1 1/2 is really easy to find if you go looking for it, and people go looking for it all over the place.
People claim it’s in all sorts of artwork. You can certainly find a ton of things in paintings whose size ratio is about 1 1/2, and people find it and insist that it was deliberately done to make it φ. People find it in musical scales, the diatonic and pentatonic scales, and the indian scales.
People claim it comes up all over the place in nature: in beehives, ant colonies, flowers, tree sizes, tree-limb positions, size of herds of animals, litters of young, body shapes, face shapes.
People claim it’s key to architecture.
And yet… if you take any of those claims and actually start to look at them in detail? The φ isn’t there. It’s just a number that’s kinda-sorta in the 1 1/2 range.
One example of that: there’s a common claim that human faces have proportions based on φ. You can see a bunch of that nonsense here. The thing is, the “evidence” for the claim consists of rectangles drawn around photographs of faces – and if you look closely at those rectangles, what you find is that the placement of the corners isn’t consistent. When you define, say, “the distance between the eyes”, you can measure that as distances between inner-edges, or between pupils, or between outer edges. Most of these claims use outer edges. But where’s the outer edge of an eye? It’s not actually a well-defined point. You can pick a couple of different places in a photo as “the” edge. They’re all close together, so there’s not a huge amount of variation. But if you can fudge the width a little bit, and you can fudge other facial measurements just a little bit, you’ve got enough variation that if you’re looking for two measurements with a ratio close to φ, you’ll always find one.
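Here’s a toy illustration of just how much room that fudging leaves. The “measurements” and the ±3% endpoint slop are made-up numbers, purely hypothetical:

```python
import math

phi = (1 + math.sqrt(5)) / 2

# Hypothetical face "measurements" in arbitrary units, with a
# nominal ratio of about 1.58 -- not phi.
width, spacing = 9.5, 6.0
nominal = width / spacing   # ~1.583

# If fuzzy endpoints let each measurement slide 3% either way,
# the achievable ratios span a wide interval:
lo = (width * 0.97) / (spacing * 1.03)   # ~1.49
hi = (width * 1.03) / (spacing * 0.97)   # ~1.68
print(lo, hi)

# phi sits comfortably inside the interval, even though the
# nominal ratio isn't phi at all.
assert lo < phi < hi
```

With just 3% of wiggle on each end, any nominal ratio anywhere near 1.6 can be “found” to be exactly φ.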
Most of the φ nonsense is ultimately aesthetic: people claiming that the golden ratio has a fundamental beauty to it. They claim that facial features match it because it’s intrinsically beautiful, and so people whose faces have φ ratios are more beautiful, and that that led to sexual-selection which caused our faces to embody the ratio. I think that’s bunk, but it’s hard to make a mathematical argument against aesthetics.
But then, you get the real crackpots. There are people who think φ has amazing scientific properties. In the words of the crank I’m writing about today, understanding φ (and the “correct” value of π derived from it) will lead humanity to “enter into a veritable Space Age”.
I’m talking about a guy who calls himself “Jain 108”. I’m not quite sure what to call him. Mr. Jain? Mr. 108? Dr 108? Most of the time on his website, he just refers to himself as “Jain” (or sometimes “Jain of Oz”), so I’ll go with “Jain”.
Jain believes that φ is the key to mathematics, science, art, and human enlightenment. He’s a bit hard to pin down, because most of his website is an advertisement for his books and seminars: if you want to know “the truth”, you’ve got to throw Jain some cash. I’m not willing to give money to crackpots, so I’m stuck with just looking at what he’s willing to share for free. (But I do recommend browsing around his site. It’s an impressive combination of newage scammery, pomposity, and cluelessness.)
What you can read for free is more than enough to conclude that he’s a total idiot.
I’m going to focus my mockery on one page: “Is Pi a Lie?”.
On this page, Jain claims to be able to prove that the well-known value of π (3.14159265….) is wrong. In fact, that value is wrong, and the correct value of π is derived from φ! The correct value of π is 4/√φ, or about 3.144605511029693.
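The original formula didn’t survive in this copy, but his decimal matches 4/√φ (my reconstruction from the digits he gives), which is easy to compare against the real π:

```python
import math

phi = (1 + math.sqrt(5)) / 2

# "JainPi": 4 divided by the square root of phi.
jain_pi = 4 / math.sqrt(phi)
print(jain_pi)        # about 3.1446
print(math.pi)        # about 3.1416

# The two differ in the third decimal place -- roughly 0.1%.
assert jain_pi - math.pi > 0.003
```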
For reasons that will be soon explained, traditional Pi is deficient because historically it has awkwardly used logical straight lines to measure illogical curvature. Thus, by using the highest level of mathematics known as Intuitive Maths, the True Value of Pi must be a bit more than anticipated to compensate for the mysterious “Area Under The Curve”. When this is done, the value, currently known as JainPi, = 3.144… can be derived, by knowing the precise Height of the Cheops Pyramid which is based on the Divine Phi Proportion (1.618…). Instead of setting our diameter at 1 unit or 1 square, something magical happens when we set the diameter at the diagonal length of a Double Square = 2.236… which is the Square Root of 5 (meaning 2.236… x 2.236… = 5). This is the critical part of the formula that derives Phi , and was used by ancient vedic seers as their starting point to construct their most important diagram or ‘Yantra’ or power-art called the Sri Yantra. With a Root 5 diameter, the translation of the Phi’s formula into a geometric construct derives the royal Maltese Cross symbol, concluding that Phi is Pi, that Phi generates Pi, and that Pi must be derived with a knowledge of the Harmonics of Phi. When this is understood and utilized, we will collectively enter into a veritable Space Age.
How did we get the wrong value? It’s based on the “fact” that the computation of π is based on the use of “logical” straight lines to measure “illogical” curvature. (From just that one sentence, we can already conclude that Jain knows nothing about logic, except what he learned from Mr. Spock on Star Trek.) More precisely, according to Jain:
In all due good respects, we must first honour Archimedes of Syracuse 2,225 years ago, who gave the world his system on how to calculate Pi, approximated to 22÷7, by cutting the circle into say 16 slices of a pizza, and measuring the 16 edge lengths of these 16 triangular polygons (fig 3), to get a good estimate for the circumference of a circle. The idea was that if we kept making the slices of pizza smaller and smaller, by subsequently cutting the circle into 32 slices, then 64, then 128 then 256 slices, we would get a better and more accurate representation for the circumference. The Fundamental Flawed Logic or Error with Archimede’s Increasing Polygon Method was that he failed to measure The Area Under The Curve. In fact, he assumed that The Area Under The Curve, just magically disappeared. Even in his time, Archimedes admitted that his value was a mere estimate!
This explanation does a beautiful job of demonstrating how utterly ignorant Jain is of math. Archimedes may have been the first person from the western tradition to have worked out a mechanism to compute a value for π – and his mechanism was a good one. But it’s far from the only one. But let’s ignore that for a moment. Jain’s supposed critique, if true, would mean that modern calculus doesn’t work. The wedge-based computation of π is a forerunner of the common methods of calculus. In reality, when we compute the value of almost any integral using calculus, our methods are based on the concept of drawing rectangles under the curve, and narrowing those rectangles until they’re infinitely small, at which point the “area under the curve” missed by the rectangles becomes zero. If the wedge computation of π is wrong because it misses area under the curve, then so is every computation using integral calculus.
Gosh, think we would have noticed that by now?
Let’s skip past that for a moment, and come back to the many ways that π comes into reality. π is the ratio of a circle’s circumference to its diameter. Because circles are such a basic thing, there are many ways of deriving the value of π that come from its fundamental nature. Many of these have no relation to the wedge-method that Jain attributes to Archimedes.
For example, there is Viete’s product:
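The formula image didn’t survive in this copy; Viète’s product is:

```latex
\frac{2}{\pi} \;=\; \frac{\sqrt{2}}{2} \cdot \frac{\sqrt{2+\sqrt{2}}}{2} \cdot \frac{\sqrt{2+\sqrt{2+\sqrt{2}}}}{2} \cdots
```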
Or there’s the Gregory-Leibniz series:
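That’s the series π/4 = 1 − 1/3 + 1/5 − 1/7 + ⋯. A few lines of Python (a quick sketch) show its partial sums closing in on 3.14159…, nowhere near 3.1446…:

```python
import math

# Gregory-Leibniz series: pi/4 = 1 - 1/3 + 1/5 - 1/7 + ...
approx = 4 * sum((-1) ** k / (2 * k + 1) for k in range(1_000_000))

print(approx)  # converges (slowly) toward 3.14159...
assert abs(approx - math.pi) < 1e-5
assert abs(approx - 3.144605511) > 0.002   # not "JainPi"
```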
These have no relation to the wedge-method – they’re derived from the fundamental nature of π. And all of them produce the same value – and it’s got no connection at all to φ.
As supportive evidence for the incorrectness of π, Jain gives two apocryphal stories about NASA and the moon landings. First, he claims that the first moon landing was off by 20 kilometers, and that the cause of this was an incorrect value of π: that the value of π used in computing trajectories was off by 0.003:
NASA admitted that when the original Mooncraft landing occurred, the targeted spot was missed by about 20km?
What could have been wrong with the Calculations?
NASA subsequently adjusted their traditional mathematical value for Pi (3.141592…) by increasing it in the 3rd decimal by .003!
Let’s take just a moment, and consider that.
It’s a bit difficult to figure out how to address that, because he’s not mentioning what part of the trajectory was messed up. Was it the earth-to-moon transit of the full apollo system? Or was it the orbit-to-ground flight of the lunar lander? Since he doesn’t bother to tell us, we’ll look at both.
π does matter when computing the trajectory of the earth-to-moon trip – because it involves the intersection of two approximate circles – the orbit of the earth around the sun, and the orbit of the moon around the earth. (Both of these are approximations, but they’re quite useful ones; the apollo trajectory computations did rely on a value for π.)
Let’s look at earth-to-moon. I’m going to oversimplify ridiculously – but I’m just trying to give us a ballpark order-of-magnitude guess at just how much of a difference Mr. Jain’s supposed error would cause. The distance from the earth to the moon is about 384,000 kilometers. If we assume that π is a linear factor in the computation, then a difference in the value of pi of around 1 part in 1000 would cause a difference in distance computations of around 384 kilometers. Mr. Jain is alleging that the error only caused a difference of 20 kilometers. He’s off by nearly a factor of 20. We can hand-wave this away, and say that the error that caused the lander to land in the “wrong” place wasn’t in the earth-moon trajectory computation – but we’re still talking about the apollo unit being in the wrong place by hundreds of kilometers – and no one noticing.
What if the problem was in the computation of the trajectory the lander took from the capsule to the surface of the moon? The orbit was a nearly circular one at about 110 kilometers above the lunar surface. How much of an error would the alleged π difference cause? About 0.1 kilometer – that is, about 100 meters. Less than what Jain claims by a factor of 200.
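The two ballpark figures above can be redone in a few lines, with the same ridiculous oversimplification of treating π as a linear factor:

```python
import math

# Jain's alleged correction: pi off by 0.003, i.e. about 1 part in 1000.
relative_error = 0.003 / math.pi            # roughly 0.001

# Earth-to-moon trip: ~384,000 km.
earth_moon_error = 384_000 * relative_error
print(earth_moon_error)                     # hundreds of km, not 20 km

# Lander descent from a ~110 km lunar orbit.
descent_error = 110 * relative_error
print(descent_error)                        # ~0.1 km, about 100 meters
```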
The numbers don’t work. These aren’t precise calculations by any stretch, but they’re ballpark. Without Jain providing more information about the alleged error, they’re the best we can do, and they don’t make sense.
Jain claims that in space work, scientists now use an adjusted value of π to cover the error. This piece I can refute by direct knowledge. My father was a physicist who worked on missiles, satellites, and space probes. (He was part of the Galileo team.) They used good old standard 3.14159 π. In fact, he explained how the value of π actually didn’t need to be that precise. In satellite work, you’re stuck with the measurement problems of reality. In even the highest precision satellite work, they didn’t use more than 4 significant digits of precision, because the manufacturing and measurement of components was only precise to that scale. Beyond that, it was always a matter of measure and adjust. Knowing that π was 3.14159265358979323 was irrelevant in practice, because anything beyond “about 3.1416” was smaller than the errors in measurement.
Mr. Jain’s next claim is far worse.
Also, an ex-Engineer from NASA, “Smokey” admitted (via email) that when he was making metal cylinders for this same Mooncraft, finished parts just did not fit perfectly, so an adjusted value for Pi was also implemented. At the time, he thought nothing about it, but after reading an internet article called The True Value of Pi, by Jain 108, he made contact.
This is very, very simple to refute by direct experience. This morning, I got up, shaved with an electric razor (3 metal rotors), made myself iced coffee using a moka pot (three round parts, tight fitted, with circular-spiral threading). After breakfast, I packed my backpack and got in my car to drive to the train. (4 metal cylinders with 4 precisely-fitted pistons in the engine, running on four wheels with metal rims, precisely fitted to circular tires, and brakes clamping on circular disks.) I drove to the train station, and got on an electric train (around 200 electric motors on the full train, with circular turbines, driving circular wheels).
All those circles. According to Jain, every one of those circles isn’t the size we think it is. And yet they all fit together perfectly. According to Jain, every one of those circular parts is larger than we think it should be. To focus on one thing, every car engine’s pistons – every one of the millions of pistons created every year by companies around the world – requires more metal to produce than we’d expect. And somehow, in all that time, no one has ever noticed. Or if they’ve noticed, every single person who ever noticed it has never mentioned it!
It’s ludicrous.
Jain also claims that the value of e is wrong, and comes up with a cranky new formula for computing it. Of course, the problem with e is the same as the problem with π: in Jain’s world, it’s really based on φ.
In Jain’s world, everything is based on φ. And there’s a huge, elaborate conspiracy to keep it secret. And Jain will share the secret with you, showing you how everything you think you know is wrong. You just need to buy his books ($77 for a hard-copy, or $44 for an ebook.) Or you could pay for him to travel to you and give you a seminar. But he doesn’t list a price for that – you need to send him mail to inquire.
That’s it. Best damned steak ever.
I keep harping on this, but it’s the heart of type theory: type theory is all about understanding logical statements as specifications of computations. Or, more precisely, in computer science terms, they’re about understanding true logical statements as halting computations.
In this post, we’ll see the ultimate definition of truth in type theory: every logical proposition is a set, and the proposition is true if the set has any members. A non-halting computation is a false statement – because you can never get it to resolve an expression to a canonical value.
So remember as we go through this: judgements are based on the idea of logical statements as specifications of computations. So when we talk about a predicate P, we’re using its interpretation as a specification of a computation. When we look at an expression 3+5, we understand it not as a piece of notation that describes the number 8, but as a description of a computation that adds 3 to 5. “3+5” is not the same computation as “2*4” or “2+2+2+2”, but as we’ll see, they’re equal because they evaluate to the same thing – that is, they each perform a computation that results in the same canonical value – the number 8.
In this post, we’re going to focus on a collection of canonical atomic judgement types:
The definition of the meanings of judgements is, necessarily, mutually recursive, so we’ll have to go through all of them before any of them is complete.
When I say that A and B are equal sets, that means:
The only tricky thing about the definition of this judgement is the fact that it defines equality as a property of a set, not of the elements of the set. By this definition, it’s possible for two expressions to be members of two different sets, and to be equal in one, but not equal in the other. (We’ll see in a bit that this possibility gets eliminated, but this stage of the definition leaves it open!)
This nails down the problem back in set equivalence: since membership equivalence is defined in terms of evaluation as canonical values, and every expression evaluates to exactly one canonical expression (that’s the definition of canonical!), then if two objects are equal in a set, they’re equal in all sets that they’re members of.
Truth in type theory really comes down to membership in a set. This is, subtly, different from the predicate logic that we’re familiar with. In predicate logic, a quantified proposition can, ultimately, be reduced to a set of values, but it’s entirely reasonable for that set to be empty. I can, for example, write a logical statement that “All dogs with IQs greater than 180 will turn themselves into cats.” A set-based interpretation of that is the collection of objects for which it’s true. There aren’t any, because there aren’t any dogs with IQs greater than 180. It’s empty. But logically, in terms of predicate logic, I can still say that it’s “true”. In type theory, I can’t: to be true, the set has to have at least one value, and it doesn’t.
In the next post, we’ll take these atomic judgements, and start to expand them into the idea of hypothetical judgements. In the type-theory sense, that means statements require some other set of prior judgements before they can be judged. In perhaps more familiar terms, we’ll be looking at type theory’s equivalent of the contexts in classical sequent calculus – those funny little Γs that show up in all of the sequent rules.
A couple of notes on ingredients:
In the last couple of posts, we looked at Martin-Löf’s theory of expressions. The theory of expressions is purely syntactic: it’s a way of understanding the notation of expressions. Everything that we do in type theory will be written with expressions that follow the syntax, the arity, and the definitional equivalency rules of expression theory.
The next step is to start to understand the semantics of expressions. In type theory, when it comes to semantics, we’re interested in two things: evaluation and judgements. Evaluation is the process by which an expression is reduced to its simplest form. It’s something that we care about, but it’s not really a focus of type theory: type theory largely waves its hands in the air and says “we know how to do this”, and opts for normal-order evaluation. Judgements are where things get interesting.
Judgements are provable statements about expressions and the values that they represent. As software people, when we think about types and type theory, we’re usually thinking about type declarations: type declarations are judgements about the expressions that they apply to. When you write a type declaration in a programming language, what you’re doing is asserting the type theory judgement. When the compiler “type-checks” your program, what it’s doing in type theory terms is checking that your judgements are proven by your program.
For example, we’d like to be able to make the judgement A set – that is, that A is a set. In order to make the judgement that A is a set in type theory, we need to know two things:
To understand those two properties, we need to take a step back. What is a canonical instance of a set?
If we think about how we use predicate logic, we’re always given some basic set of facts as a starting point. In type theory, the corresponding concept is a primitive constant. The primitive constants include base values and primitive functions. For example, if we’re working with lispish expressions, then cons(1, cons(2, nil)) is an expression, and cons, nil, 1 and 2 are primitive constants; cons is the head of the expression, and 1 and cons(2, nil) are the arguments.
A canonical expression is a saturated, closed expression whose head is a primitive constant.
The implications of this can be pretty surprising, because it means that a canonical expression can contain unevaluated arguments! The expression has to be saturated and closed – so its arguments can’t have unbound variables, or be missing parameters. But it can contain unevaluated subexpressions. For example, if we were working with Peano arithmetic in type theory, succ(2+3) is canonical, even though “2+3” hasn’t been evaluated.

In general, in type theory, the way that we evaluate an expression is called normal order evaluation – what programming language people call lazy evaluation: that’s evaluating from the outside in. Given a non-canonical expression, we evaluate from the outside in until we get to a canonical expression, and then we stop. A canonical expression is considered the result of a computation – so we can see succ(2+3) as a result!
A canonical expression is the evaluated form of an expression, but not the fully evaluated form. The fully evaluated form is when the expression and all of its saturated parts are fully evaluated. So in our previous example, the saturated part 2+3 wasn’t evaluated, so it’s not fully evaluated. To get it to be fully evaluated, we’d need to evaluate 2+3, giving us succ(5); then, since succ(5) is saturated, it’s evaluated to 6, which is the fully evaluated form of the expression.
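Here’s a minimal sketch of that distinction, representing expressions as nested tuples. The representation and the tiny evaluator are mine, purely illustrative:

```python
# An expression is either an int (a primitive constant) or a tuple
# (head, *args). 'succ' and '+' are the only primitives here.

def step(expr):
    """Normal-order evaluation: evaluate from the outside in, only
    until the head is a primitive constructor (canonical form)."""
    if isinstance(expr, int):
        return expr
    head, *args = expr
    if head == '+':
        return step(args[0]) + step(args[1])
    # 'succ' already has a primitive-constant head: the expression is
    # canonical, even though its argument is unevaluated.
    return expr

def full(expr):
    """Full evaluation: evaluate all saturated subexpressions too."""
    if isinstance(expr, int):
        return expr
    head, *args = expr
    if head == '+':
        return full(args[0]) + full(args[1])
    if head == 'succ':
        return full(args[0]) + 1
    return expr

e = ('succ', ('+', 2, 3))
print(step(e))   # ('succ', ('+', 2, 3)) -- already canonical, so unchanged
print(full(e))   # 6 -- the fully evaluated form
```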
Next post (coming Monday!), we’ll use this new understanding of canonical expressions, and start looking at judgements, and what they mean. That’s when type theory starts getting really fun and interesting.
We live in a culture that embodies a basic conflict. On one hand, racism and sexism are a deeply integrated part of our worldview. But on the other hand, we’ve come to believe that racism and sexism are bad. This puts us into an awkward situation. We don’t want to admit to saying or doing racist things. But there’s so much of it embedded in every facet of our society that it takes a lot of effort and awareness to even begin to avoid saying and doing racist things.
The problem there is that we can’t stop being racist/sexist until we admit that we are. We can’t stop doing sexist and racist things until we admit that we do sexist and racist things.
And here’s where we hit the logic piece. The problem is easiest to explain by looking at it in formal logical terms. We’ll look at it from the viewpoint of sexism, but the same argument applies for racism.
The key statement that I want to get to is: We believe that people who do bad things are bad people: ∀x: DoesBadThing(x) ⇒ BadPerson(x).
That means that if you do something sexist, you are a bad person: DoesSexistThing(x) ⇒ BadPerson(x).
We know that we aren’t bad people: I’m a good person, right? So we reject that conclusion. I’m not bad; therefore, I can’t be sexist, therefore whatever I did couldn’t have been sexist.
This looks shallow and silly on the surface. Surely mature adults, mature educated adults couldn’t be quite that foolish!
Now go read this.
If his crime was to use the phrase “boys with toys”, and that is your threshold for sexism worthy of some of the abusive responses above, then ok – stop reading now.
My problem is that I have known Shri for many years, and I don’t believe that he’s even remotely sexist. But in 2015 can one defend someone who’s been labeled sexist without a social media storm?
Are people open to the possibility that actually Kulkarni might be very honourable in his dealings with women?
In an interview a week or so ago, Professor Shri Kulkarni said something stupid and sexist. The author of that piece believes that Professor Kulkarni couldn’t have said something sexist, because he knows him, and he knows that he’s not sexist, because he’s a good guy who treats women well.
The thing is, that doesn’t matter. He messed up, and said something sexist. It’s not a big deal; we all do stupid things from time to time. He’s not a bad person because he said something sexist. He just messed up. People are, correctly, pointing out that he messed up: you can’t fix a problem until you acknowledge that it exists! When you say something stupid, you should expect to get called on it, and when you do, you should accept it, apologize, and move on with your life, using that experience to improve yourself, and not make the mistake again.
The thing about racism and sexism is that we’re immersed in it, day in and day out. It’s part of the background of our lives – it’s inescapable. Living in that society means that we’ve all absorbed a lot of racism and sexism without meaning to. We don’t have to like that, but it’s true. In order to make things better, we need to first acknowledge the reality of the world that we live in, and the influence that it has on us.
In mathematical terms, the problem is that good and bad, sexist and not sexist, are absolutes. When we render them into pure two-valued logic, we’re taking shades of gray, and turning them into black and white.
There are people who are profoundly sexist or racist, and that makes them bad people. Just look at the MRAs involved in Gamergate: they’re utterly disgusting human beings, and the thing that makes them so despicably awful is the awfulness of their sexism. Look at a KKKer, and you find a terrible person, and the thing that makes them so terrible is their racism.
But most people aren’t that extreme. We’ve just absorbed a whole lot of racism and sexism from the world we’ve lived our lives in, and that influences us. We’re a little bit racist, and that makes us a little bit bad – we have room for improvement. But we’re still, mostly, good people. The two-valued logic creates an apparent conflict where none really exists.
Where do these sexist/racist attitudes come from? Picture a scientist. What do you see in your mind’s eye? It’s almost certainly a white guy. It is for me. Why is that?
In short, in all of my exposure to science, from kindergarten to graduate school, scientists were white men. (For some reason, I encountered a lot of women in math and comp sci, but not in the traditional sciences.) So when I picture a scientist, it’s just natural that I picture a man. There’s a similar story for most of us who’ve grown up in the current American culture.
When you consider that, it’s both an explanation of why we’ve got such a deeply embedded sexist sense about who can be a scientist, and an explanation how, despite the fact that we’re not deliberately being sexist, our subconscious sexism has a real impact.
I’ve told this story a thousand times, but during the time I worked at IBM, I ran the internship program for my department one summer. We had a departmental quota of how many interns each department could pay for. But we had a second program that paid for interns who were women or minorities – so they didn’t count against the quota. The first choice intern candidate of everyone in the department was a guy. When we ran out of slots, the guy across the hall from me ranted and raved about how unfair it was. We were discriminating against male candidates! It was reverse sexism! On and on. But the budget was what the budget was. Two days later, he showed up with a resume for a young woman, all excited – he’d found a candidate who was a woman, and she was even better than the guy he’d originally wanted to hire. We hired her, and she was brilliant, and did a great job that summer.
The question that I asked my office-neighbor afterwards was: why didn’t he find the woman the first time through the resumes? He had gone through the resumes of all of the candidates before picking the original guy. The woman he eventually hired had a resume that was clearly better than the guy’s. Why did he pass over her resume and settle on the guy? He didn’t know.
That little story demonstrates two things. One, it demonstrates the kind of subconscious bias we have. We don’t have to be mustache-twirling black-hatted villains to be sexists or racists. We just have to be human. Two, it demonstrates the way that these low-level biases actually harm people. Without our secondary budget for women/minority hires, that brilliant young woman would never have gotten an internship at IBM; without that internship, she probably wouldn’t have gotten a permanent job at IBM after graduation.
Professor Kulkarni said something silly. He knew he was saying something he shouldn’t have, but he went ahead and did it anyway, because it was normal and funny and harmless.
It’s not harmless. It reinforces that constant flood of experience that says that all scientists are men. If we want to change the culture of science to get rid of the sexism, we have to start by changing the deep attitudes that we aren’t even really aware of, but that influence our thoughts and decisions. That means that when we do something sexist or racist, we need to be called on it. And when we get called on it, we need to admit that we did something wrong, apologize, and try not to make the same mistake again.
We can’t let the black and white reasoning blind us. Good people can be sexists or racists. Good people can do bad things without meaning to. We can’t allow our belief in our essential goodness to prevent us from recognizing it when we do wrong, and making the choices that will allow us to become better people.
Anyway, Daniel sent me a link to this article, and asked me to write about the error in it.
The real subject of the article involves a recent twitter-storm around a professor at Boston University. This professor tweeted some things about racism and history, and she did it in very blunt, not-entirely-professional terms. The details of what she did aren’t something I want to discuss here. (Briefly, I think it wasn’t a smart thing to tweet like that, but plenty of white people get away with worse every day; the only reason that she’s getting as much grief as she is is because she dared to be a black woman saying bad things about white people, and the assholes at Breitbart used that to fuel the insatiable anger and hatred of their followers.)
But I don’t want to go into the details of that here. Lots of people have written interesting things about it, from all sides. Just by posting about this, I’m probably opening myself up to yet another wave of abuse, but I’d prefer to avoid as much of that as I can. Instead, I’m just going to rip out the introduction to this article, because it makes a kind of incredibly stupid mathematical argument that requires correction. Here are the first and second paragraphs:
There aren’t too many African Americans in higher education.
In fact, black folks only make up about 4 percent of all full time tenured college faculty in America. To put that in context, only 14 out of the 321—that’s about 4 percent—of U.S. astronauts have been African American. So in America, if you’re black, you’ve got about as good a chance of being shot into space as you do getting a job as a college professor.
Statistics and probability can be a difficult field of study. But… a lot of its everyday uses are really quite easy. If you’re going to open your mouth and make public statements involving probabilities, you probably should make sure that you at least understand the first chapter of “probability for dummies”.
This author doesn’t appear to have done that.
The most basic fact about comparing pretty much anything numeric in the real world is that you can only compare quantities that have the same units. You can’t compare 4 kilograms to 5 pounds, and conclude that 5 pounds is bigger than 4 kilograms because 5 is bigger than 4.
That principle applies to probabilities and statistics: you need to make sure that you’re comparing apples to apples. If you compare an apple to a grapefruit, you’re not going to get a meaningful result.
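To make the kilograms-versus-pounds point concrete, here's a trivial sketch: convert to a shared unit first, then compare. (The conversion factor is the standard definition of the avoirdupois pound; the function name is just for illustration.)

```python
# Convert to a common unit before comparing - the raw numbers alone mislead.
KG_PER_POUND = 0.45359237  # exact, by definition of the avoirdupois pound

def pounds_to_kg(lb: float) -> float:
    return lb * KG_PER_POUND

mass_a_kg = 4.0                # 4 kilograms
mass_b_kg = pounds_to_kg(5.0)  # 5 pounds, converted: about 2.27 kg

# Comparing the bare numbers (5 > 4) gets it backwards;
# comparing in a shared unit gets it right.
print(mass_a_kg > mass_b_kg)  # True: 4 kg is heavier than 5 lb
```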
The proportion of astronauts who are black is 14/321, or a bit over 4%. That means that out of every 100 astronauts, you’d expect to find four black ones.
The proportion of college professors who are black is also a bit over 4%. That means that out of every 100 randomly selected college professors, you’d expect 4 to be black.
So far, so good.
But from there, our intrepid author takes a leap, and says “if you’re black, you’ve got about as good a chance of being shot into space as you do getting a job as a college professor”.
Nothing in the quoted statistic tells us anything about anyone’s chances of becoming an astronaut. Nothing at all.
This is a classic statistical error, and one that’s very easy to avoid. It’s a unit error: he’s comparing two quantities with different units. The short version of the problem: he’s comparing the fraction of astronauts who are black (black people per astronaut) with the chance of a black person becoming an astronaut (astronauts per black person).
You can’t derive anything about the probability of a black person becoming an astronaut from the ratio of black astronauts to astronauts.
Let’s pull out some numbers to demonstrate the problem. These are completely made up, to make the calculations easy – I’m not using real data here.
Suppose that:

- there are 120,000,000 black people in the population;
- there are 50,000 college professors, 2,000 of whom are black – 4 percent;
- there are 50 astronauts, 2 of whom are black – also 4 percent.
In this scenario, as in reality, the percentage of black college professors and the percentage of black astronauts are equal. What about the probability of a given black person being a professor or an astronaut?
The probability of a black person being a professor is 2,000/120,000,000 – or 1 in 60,000. The probability of a black person becoming an astronaut is just 2/120,000,000 – or 1 in 60 million. Even though the probability of a random astronaut being black is the same as the probability of a random college professor being black, the probability of a given black person becoming a college professor is 1,000 times higher than the probability of a given black person becoming an astronaut.
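The arithmetic above can be sketched in a few lines. These are the made-up numbers from the scenario, with the professional totals (50,000 professors, 50 astronauts) assumed so that both 4 percent figures hold:

```python
# Made-up illustrative numbers - not real demographic data.
black_professors = 2_000
black_astronauts = 2
black_population = 120_000_000
total_professors = 50_000  # assumed, so that 2,000/50,000 = 4%
total_astronauts = 50      # assumed, so that 2/50 = 4%

# Ratio 1: the fraction of each profession that is black
# (this is what the article actually quotes).
prof_share = black_professors / total_professors   # 0.04
astro_share = black_astronauts / total_astronauts  # 0.04

# Ratio 2: the chance that a given black person holds each job.
p_professor = black_professors / black_population  # 1 in 60,000
p_astronaut = black_astronauts / black_population  # 1 in 60,000,000

print(prof_share == astro_share)           # True: the shares match
print(round(p_professor / p_astronaut))    # 1000: the chances don't
```

Same 4 percent on one axis, a factor of a thousand on the other: the two ratios have different units, so equality of one says nothing about the other.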
This kind of thing isn’t rocket science. My 11-year-old son has done enough statistics in school to understand this problem! It’s simple: you need to compare like to like. If you can’t understand that – if you can’t understand your statistics well enough to know their units – you should probably avoid making public statements about statistics. Otherwise, you’ll wind up doing something stupid, and make yourself look like an idiot.
(In the interests of disclosure: an earlier version of this post used the comparison of apples to watermelons. But given the racial issues discussed in the post, that had unfortunate unintended connotations. When someone pointed that out to me, I changed it. To anyone who was offended: I am sorry. I did not intend to say anything associated with the racist slurs; I simply never thought of it. I should have, and I shouldn’t have needed someone to point it out to me. I’ll try to be more careful in the future.)
At the end of the last post, I started giving a sketch of what arities look like. Now we’re going to dive in, and take a look at how to determine the arity of an expression. It’s a fairly simple system of rules.
Before diving in, I want to stress the most important thing about how these rules work: they are totally syntactic and static. This is something that confused me the first time I tried to read about expression theory. When you see an expression, you think about how it’s evaluated. But expression theory is a purely syntactic theory: it’s about analyzing expressions as syntactic entities. There are, deliberately, no evaluation rules. It’s about understanding what the notations mean, and how to determine when two expressions are equivalent.
If, under the rules of Martin-Löf’s expression theory, two expressions are equivalent, then under any valid set of evaluation rules, the two expressions will evaluate to the same value. But expression equivalence is stronger: expressions are equivalent only if you can prove their equivalence from their syntax.
That clarified, let’s start by looking at the rules of arity in expressions.
Let’s try working through an example: $\lambda x. (x + 2) \times (x + 2)$.
If you’re familiar with type inference in the simply typed lambda calculus, this should look pretty familiar; the only real difference is that the only thing that arity really tracks is applicability and parameter counting.
Just from this much, we can see how this prevents problems. If you try to compute the arity of $(3).1$ (that is, the selection of the first element from 3), you find that you can’t: there is no arity rule that would allow you to do that. The selection rule only works on a product-arity, and 3 has arity 0.
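The full rules aren't reproduced here, but a minimal sketch of an arity checker – with hypothetical Python classes for arity 0, product arities, and function arities, not Martin-Löf's exact formulation – shows how the bad selection gets rejected purely by looking at arities, without ever evaluating anything:

```python
# A toy, illustrative arity checker - an assumption-laden sketch,
# not the formal rules from the post.
from dataclasses import dataclass

class Arity:
    pass

@dataclass
class Single(Arity):
    """Arity 0: a saturated expression like the constant 3."""

@dataclass
class Product(Arity):
    """A combined (tuple) arity; selection applies only to these."""
    parts: tuple

@dataclass
class Fun(Arity):
    """The arity of an abstraction: argument arity -> result arity."""
    arg: Arity
    result: Arity

def select(arity: Arity, i: int) -> Arity:
    """Arity of the i-th selection from an expression of the given arity."""
    if not isinstance(arity, Product):
        raise TypeError("selection only applies to a product-arity")
    return arity.parts[i]

def apply(fun_arity: Arity, arg_arity: Arity) -> Arity:
    """Arity of applying a function-arity expression to an argument."""
    if not isinstance(fun_arity, Fun) or fun_arity.arg != arg_arity:
        raise TypeError("application needs a matching function-arity")
    return fun_arity.result

# The constant 3 has arity 0, so selecting its first element is rejected:
try:
    select(Single(), 0)
except TypeError as e:
    print(e)  # selection only applies to a product-arity
```

The point of the sketch is that the check is static: it looks only at the shapes of the arities, never at values.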
The other reason we wanted arity was to allow us to compare expressions. Intuitively, it should be obvious that the expression $\lambda x. (x + 2) \times (x + 2)$ and the expression $\lambda y. (y + 2) \times (y + 2)$ are in some sense equal. But we need some way of being able to actually precisely define that equality.
The kind of equality that we’re trying to get at here is called definitional equality. We’re not trying to define an equality that says two expressions evaluate to equal values – that would be easy. Instead, we’re trying to get at something more subtle: we want to capture the idea that the expressions are different ways of writing the same thing.
We need arity for this, for a simple reason. Let’s go back to that first example expression: $\lambda x. (x + 2) \times (x + 2)$. Is that equivalent to $\lambda y. (y + 2) \times (y + 2)$? Or to $\lambda x. x^2 + 4x + 4$? If we apply them to the value 3, and then evaluate them using standard arithmetic, then all three expressions evaluate to 25. So are they all equivalent? We want to be able to say that the first two are equivalent expressions, but the last one isn’t. And we’d really like to be able to say that structurally – that is, instead of saying something evaluation-based like “forall values of x, eval(f(x)) == eval(g(x)), therefore f == g”, we want to be able to say that they’re equivalent because they have the same structure.
Using arity, we can work out a structural definition of equivalence for expressions.
In everything below, we’ll write $e : \alpha$ to mean that $e$ has arity $\alpha$, and $e \equiv e' : \alpha$ to mean that $e$ and $e'$ are equivalent expressions of arity $\alpha$. We’ll define equivalence in a classic inductive form, by structure:
This is the arity version of the eta-rule from lambda calculus: if $x$ is a variable of arity $\alpha$, and $e$ is an expression of function arity in which $x$ does not occur free, then $\lambda x.\, e(x) \equiv e$. This is a fancy version of an identity rule: abstraction and application cancel.
Jumping back to our example: is $\lambda x. (x + 2) \times (x + 2)$ equivalent to $\lambda y. (y + 2) \times (y + 2)$? If we convert them both into their canonical AST forms, then yes. They’re identical except for one thing: the variable name in their abstraction. By abstraction rule 2, then, they’re equivalent.
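As an illustrative sketch (my own toy construction, not the post's formal rules), that "identical except for the bound variable's name" check can be implemented directly on a tiny lambda AST, by walking both trees in lockstep and mapping each bound variable to the depth at which it was bound:

```python
# A toy structural (alpha-) equivalence check on a minimal lambda AST.
from dataclasses import dataclass

@dataclass
class Var:
    name: str

@dataclass
class Lam:
    var: str
    body: object

@dataclass
class App:
    fun: object
    arg: object

def alpha_eq(a, b, env_a=None, env_b=None, depth=0):
    """Structural equivalence up to renaming of bound variables.

    env_a/env_b map each bound variable name to the depth where it was
    bound; two variables match if they resolve to the same binder (or
    are the same free name)."""
    env_a = env_a or {}
    env_b = env_b or {}
    if isinstance(a, Var) and isinstance(b, Var):
        return env_a.get(a.name, a.name) == env_b.get(b.name, b.name)
    if isinstance(a, Lam) and isinstance(b, Lam):
        return alpha_eq(a.body, b.body,
                        {**env_a, a.var: depth}, {**env_b, b.var: depth},
                        depth + 1)
    if isinstance(a, App) and isinstance(b, App):
        return (alpha_eq(a.fun, b.fun, env_a, env_b, depth)
                and alpha_eq(a.arg, b.arg, env_a, env_b, depth))
    return False

print(alpha_eq(Lam("x", Var("x")), Lam("y", Var("y"))))  # True
print(alpha_eq(Lam("x", Var("x")), Lam("y", Var("z"))))  # False
```

Note that this is exactly a syntactic check, in the spirit of the discussion above: it never evaluates anything, it only compares the shapes of the two trees.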