# Sloppy Dualism Denies Free Will?

When I was an undergrad in college, I was a philosophy minor. I spent countless hours debating ideas about things like free will. My final paper was a 60-page rebuttal to what I thought was a sloppy argument against free will. It's been more years since I wrote that than I care to admit, and I still keep seeing the same kind of sloppy arguments: arguments that are ultimately circular, because they hide their conclusion in their premises.

There’s an argument against free will that I find pretty compelling. I don’t agree with it, but I do think that it’s a solid argument:

Everything in our experience of the universe ultimately comes down to physics. Every phenomenon that we can observe is, ultimately, the result of particles interacting according to basic physical laws. Thermodynamics is the ultimate, fundamental ruler of the universe: everything that we observe is a result of a thermodynamic process. There are no exceptions to that.

Our brain is just another physical device. It's another complex system made of an astonishing number of tiny particles, interacting in amazingly complicated ways. But ultimately, it's particles interacting the way that particles interact. Our behavior is an emergent phenomenon, but ultimately, we don't have any ability to make choices, because there's no mechanism that allows us free choice. Our choices are determined by the physical interactions, and our consciousness of the results is just a side-effect of that.

If you want to argue that free will doesn’t exist, that argument is rock solid.

But for some reason, people constantly come up with other arguments – in fact, much weaker arguments that come from what I call sloppy dualism. Dualism is the philosophical position that says that a conscious being has two different parts: a physical part, and a non-physical part. In classical terms, you’ve got a body which is physical, and a mind/soul which is non-physical.

In this kind of argument, you rely on an implicit assumption of dualism: you assert that whatever physical process we can observe isn't really you, and therefore that by observing any physical process of decision-making, you can conclude that you didn't really make the decision.

For example…

> And indeed, this is starting to happen. As the early results of scientific brain experiments are showing, our minds appear to be making decisions before we're actually aware of them — and at times by a significant degree. It's a disturbing observation that has led some neuroscientists to conclude that we're less in control of our choices than we think — at least as far as some basic movements and tasks are concerned.

This is something that I've seen a lot of lately: when you do things like functional MRI studies, you can find that our brains settle on a decision before we consciously become aware of making the choice.

Why do I call it sloppy dualism? Because it’s based on the idea that somehow the piece of our brain that makes the decision is different from the part of our brain that is our consciousness.

If our brain is our mind, then everything that's going on in our brain is part of our mind. Taking a piece of our brain and saying "Whoops, that piece of your brain isn't you, so when it made the decision, it wasn't really you deciding" only makes sense if you've already assumed that your mind is something separate from your brain.

By starting with the assumption that the physical process of decision-making we can observe is something different from your conscious choice of the decision, this kind of argument is building the conclusion into the premises.

If you don’t start with the assumption of sloppy dualism, then this whole argument says nothing. If we don’t separate our brain from our mind, then this whole experiment says nothing about the question of free will. It says a lot of very interesting things about how our brain works: it shows that there are multiple levels to our minds, and that we can observe those different levels in how our brains function. That’s a fascinating thing to know! But does it say anything about whether we can really make choices? No.

# Depression and Geeks

Since this weekend, when the news of Aaron Swartz's suicide broke, there's been a lot of discussion of the government's ridiculous pursuit of him, and of the fact that he suffered from depression. I can't contribute anything new about his prosecution. It was despicable, ridiculous, and sadly, all too typical of how our government works.

But on the topic of depression, I want to chime in. A good friend of mine wrote a post on his own blog about depression in the tech/geek community, which I feel like I have to respond to.

Benjy, who wrote the post, is a great guy who I have a lot of respect for. I don’t intend this to be an attack on him. But I’ve seen a lot of similar comments, and I think that they’re built on a very serious mistake.

Benjy argues that the mathematical/scientific/logical mindset of a geek (my word, not his) makes us more prone to depression:

> Someone whose toolkit for dealing with the world consists of logic and reason, ideals and abstractions, may have particularly weak defenses against this trickster disease.
>
> You realize that it's lying to you, that there are treatments, that things aren't objectively as bad as they feel. But you know, on some level deeper than logic, that there is no point, no hope and no future. And to encounter, maybe for the first time, the hard limits of rationality, to realize that there's a part of your mind that can override the logical world view that is the core of your identity, may leave you feeling particularly helpless and hopeless.
>
> You can't rationalize depression away, a fact that people who've never suffered from it find hard to comprehend. But if someone you care about is struggling with it, and it's likely that someone is, you can help them find a new way to access their mind.
>
> Tell them that you care about them and appreciate them and are glad to have them in your life. Show them that you enjoy being around them and that you love them. And above all, spend time with them. Give them glimpses of an alternate future, one in which they are secure, happy and loved, tear away the lies that depression needs in order to survive, and in that sunlight it will wither.

Most of what Benjy wrote, I agree with completely. The problem that I have with it is that I think that parts of it are built on the assumption that our conscious reasoning is part of the cause of depression. If geeks are more prone to suffering from depression because of the way that our minds work, that means that the way that we make decisions and interpret the world is part of why we suffer from this disease. The implication that too many people will draw from that is that we just need to decide to make different decisions, and the disease will go away. But it won't – because depression isn't a choice.

The thing that you always need to remember about depression – and which Benjy mentions – is that depression is not something which you can reason with. Depression isn’t a feeling. It’s not a way of thinking, or a way of viewing the world. It’s not something that you can choose not to suffer from. It’s a part of how your brain works.

The thing that anyone who suffers from depression needs to know is that it’s a disease, and that it’s treatable. It doesn’t matter if your friends are nice to you. It doesn’t matter if you know that they love you. That kind of thinking – that kind of reasoning about depression – is part of the fundamental trap of depression.

Depression is a disease of the brain, and it affects your mind – it affects your self in a terrible way. No amount of support from your friends and family, no amount of positive reinforcement can change that. Believing that emotional support can help a depressed person is part of the problem, because it’s tied to the all-too-common stigma of mental illness: that you’re only suffering because you’re too weak or too helpless to get over it.

You don’t just get over a mental illness like depression, any more than you get over diabetes. As a friend or loved one of a person with diabetes, being kind, showing your love for them doesn’t help unless you get them to get treatment.

I’m speakaing from experience. I’ve been there. I spent years being miserable. It nearly wrecked my marriage. My wife was as supportive and loving as anyone could dream of. But I couldn’t see it. I couldn’t see anything.

The experience of depression is different for different people. But for me, it was like the world had gone flat. I wasn't sad – I was just dead inside. Nothing could have any impact on me. It's a hard thing to explain, but looking back, it's like the world had gone two-dimensional and black-and-white. Eventually, I was reading something in some magazine about depression, and it talked about that flat feeling, and I realized that maybe, maybe that was what was wrong with me.

When I started taking antidepressants, it was almost frightening, because it changed the world so much. Antidepressants didn't make me happy. In fact, for a while, they made me very sad, because I was realizing how badly I'd been treating my wife and daughter. But they made me feel things again. A few weeks after I started taking them, I realized that I was noticing colors. I hadn't done that for years. It wasn't that I couldn't see colors when I was depressed, but they didn't mean anything.

Antidepressants aren’t a panacaea. They don’t work for everyone. But there are treatments that can help. The way to defeat depression is to do something that changes the way the brain is functioning. For some people, the exercise of therapy can do that. For others, it’s medication. For still others, exercise. The key is to get to someone who understands the disease, and who can help you find what will work for your brain.

My point here is that when we’re talking about depression, we need to realize that most of the time, no one is at fault. People don’t suffer from depression because they did something wrong, or because they’re weak, or because they’re flawed. People don’t suffer from depression because their friends and family are inadequate. Depression is a disease – a treatable, chronic disease. It needs to be recognized, and it needs to be treated.

In my case, my depression wasn't caused by my wife and daughter. It wasn't their fault, and it wasn't my fault. No amount of support, love, and appreciation could have helped, because the nature of my depression meant that I couldn't see those things. The only thing that anyone could have done for me would have been to recognize that I was suffering from depression, and to push me to get treatment sooner.

If someone you know is suffering from depression, then they need help. But the help they need isn’t any amount of love or appreciation. It isn’t instilling any kind of hope, because depression kills hope in your brain. The thing that you can do to help is to help them get the treatment that they need.

# The Investors vs. the Tabby

There’s an amusing article making its rounds of the internet today, about the successful investment strategy of a cat named Orlando..

A group of people at the Observer put together a fun experiment.
They asked three groups to pretend that they had 5000 pounds, and asked each of them to invest it, however they wanted, in stocks listed on the FTSE. They could only change their investments at the end of a calendar quarter. At the end of the year, they compared the result of the three groups.

Who were the three groups?

1. The first was a group of professional investors – people who are, at least in theory, experts at analyzing the stock market and using that analysis to make profitable investments.
2. The second was a classroom of students, who are bright, but who have no experience at investment.
3. The third was an orange tabby cat named Orlando. Orlando chose stocks by throwing his toy mouse at a target board randomly marked with investment choices.

As you can probably guess by the fact that we’re talking about this, Orlando the tabby won, by a very respectable margin. (Let’s be honest: if the professional investors came in first, and the students came in second, no one would care.) At the end of the year, the students had lost 160 pounds on their investments. The professional investors ended with a profit of 176 pounds. And the cat ended with a profit of 542 pounds – more than triple the profit of the professionals.

Most people, when they saw this, had an immediate reaction: "See, those investors are a bunch of idiots. They don't know anything! They were beaten by a cat!"

And on one level, they're absolutely right. Investors and bankers like to present themselves as the best of the best. They deserve their multi-million dollar earnings, because, so they tell us, they're more intelligent, more hard-working, more insightful than the people who earn less. And yet, despite their self-alleged brilliance, professional investors can't beat a cat throwing a toy mouse!

It gets worse, because this isn’t a one-time phenomenon: there’ve been similar experiments that selected stocks by throwing darts at a news-sheet, or by rolling dice, or by picking slips of paper from a hat. Many times, when people have done these kinds of experiments, the experts don’t win. There’s a strong implication that “expert investors” are not actually experts.

Does that really hold up? Partly yes, partly no. But mostly no.

Before getting to that, there's one thing in the article that bugged the heck out of me: the author went out of their way to defend the humans, presenting their performance as if positive outcomes were due to human intelligence, and negative ones were due to bad luck. In fact, I think that in this experiment, it was all luck.

For example, the author discusses how the professionals were making more money than the cat up until the last quarter of the year, and presents that as human intelligence out-performing the random cat. But there's no reason to believe that. There's no evidence that there's anything qualitatively different about the last quarter that made it less predictable than the first three.

The headmaster at the students' school actually said, "The mistakes we made earlier in the year were based on selecting companies in risky areas. But while our final position was disappointing, we are happy with our progress in terms of the ground we gained at the end and how our stock-picking skills have improved." Again, there's absolutely no reason to believe that the students' stock-picking skills miraculously improved in the final quarter; it's much more likely that they just got lucky.

The real question that underlies this is: is the performance of individual stocks in a stock market actually predictable, or is it dominantly random? Most of the evidence that I've seen suggests a combination: on a short timescale, it's predominantly random, but on longer timescales it becomes much more predictable.

But people absolutely do not want to believe that. We humans are natural pattern-seekers. It doesn't matter whether we're talking about financial markets, pixel-patterns in a bitmap, or answers on a multiple choice test: our brains look for patterns. If you randomly generate data, and you look at it long enough, with enough possible strategies, you'll find a pattern that fits. But it's an imposed pattern, and it has no predictive value. It's like the images of Jesus on toast: we see patterns in noise. So people see patterns in the market, and they want to believe that it's predictable.

Second, people want to take responsibility for good outcomes, and excuse bad ones. If you make a million dollars betting on a horse, you're going to want to say that it was your superior judgement of the horses that led to your victory. When an investor makes a million dollars on a stock, of course he wants to say that he made that money because he made a smart choice, not because he made a lucky choice. But when that same investor loses a million dollars, he doesn't want to say that he lost a million dollars because he's stupid; he wants to say that he lost money because of bad luck, because of random factors beyond his control that he couldn't predict.

The professional investors were doing well during part of the year: therefore, during that part of the year, they claim that their good performance was because they did a good job judging which stocks to buy. But when they lost money during the last quarter? Bad luck. But overall, their knowledge and skills paid off! What evidence do we have to support that? Nothing: but we want to assert that we have control, that experts understand what’s going on, and are able to make intelligent predictions.

The students' performance was lousy, and if they had invested real money, they would have lost a tidy chunk of it. But their teacher believes that their performance in the last quarter wasn't luck – it was that their skills had improved. Nonsense! They were lucky.

On the general question: Are “experts” useless for managing investments?

It’s hard to say for sure. In general, experts do perform better than random, but not by a huge margin, certainly not by as much as they’d like us to believe. The Wall Street Journal used to do an experiment where they compared dartboard stock selection against human experts, and against passive investment in the Dow Jones Index stocks over a one-year period. The pros won 60% of the time. That’s better than chance: the experts knowledge/skills were clearly benefiting them. But: blindly throwing darts at a wall could beat experts 2 out of 5 times!

When you actually do the math and look at the data, it appears that human judgement does have value. Taken over time, human experts do outperform random choices, by a small but significant margin.

What’s most interesting is a time-window phenomenon. In most studies, the human performance relative to random choice is directly related to the amount of time that the investment strategy is followed: the longer the timeframe, the better the humans perform. In daily investments, like day-trading, most people don’t do any better than random. The performance of day-traders is pretty much in-line with what you’d expect from probability from random choice. Monthly, it’s still mostly a wash. But if you look at yearly performance, you start to see a significant difference: humans do typically outperform random choice by a small but definitely margin. If you look at longer time-frames, like 5 or ten years, then you start to see really sizeable differences. The data makes it look like daily fluctuations of the market are chaotic and unpredictable, but that there are long-term trends that we can identify and exploit.

# Weekend Recipe: Flank Steak with Mushroom Polenta

I just finished eating a great new dinner, and I’m going to share the recipe with you.

Neither my wife nor I ever particularly liked polenta. But recently, we've had it in a couple of outstanding Italian restaurants, and realized that polenta could be wonderful. Two things made the difference. First, coarse-ground polenta: if you use fine-ground cornmeal, the polenta comes out very smooth and creamy. A lot of people like it that way; I don't. Second, keeping it soft: because of the starch, polenta can become very gluey. It needs to be cooked with enough liquid and enough fat to keep it light.

So after discovering that we liked it, I went out and bought some good stone-ground coarse polenta to experiment with. I knew from the places where I'd liked the polenta that it goes really well with strong-flavored meats. So I decided to make a flank steak. Since I absolutely adore mushrooms with steak, I wanted to find a way to get mushroom flavor into the polenta, so I went with a nice duxelles.

The result was absolutely phenomenal: one of the best meals I’ve made in the last several months.

Ingredients:

• 2 lbs flank steak
• For the marinade:
  • 2 cloves minced garlic
  • 1/2 teaspoon dijon mustard
  • 2 teaspoons tomato paste
  • 1/2 cup red wine
  • 1 tablespoon red wine vinegar
• For the duxelles:
  • 1 pound mushrooms, minced
  • 2 olive oil
  • 2 shallots, minced
  • 1/2 teaspoon dried thyme
  • Salt and pepper
  • 1/2 cup red wine
• For the polenta:
  • 1 1/2 cups coarse polenta
  • 5 1/2 cups chicken stock
  • 1 1/2 teaspoons salt
  • 4 tablespoons butter
  • 1/4 cup parmesan cheese
• For the sauce:
  • Drippings from the steak
  • 3 tablespoons butter
  • 1 minced shallot
  • 1/2 cup port wine
  • 1/2 cup chicken stock

Instructions

1. Marinate the steak. Mix together all of the marinade ingredients, and coat the steak with the marinade. Let it sit for a couple of hours.
2. Make the duxelles for the polenta. Put a pan on high heat, and heat the olive oil. When it's hot, add the shallots and the mushrooms. Sprinkle with salt and pepper. After the mushrooms start to shed some of their liquid, add the thyme. Keep stirring. If the pan starts to get dry, add some of the red wine. Keep cooking, stirring all the time, until you run out of wine. By that time, the mushrooms should have lost a lot of their volume, and turned a deep caramel brown. Remove the pan from the heat, and set it aside.
3. Start the polenta. Bring 4 1/2 cups of the chicken stock to a boil. Stir in the polenta and the salt. Reduce the heat to medium, and stir until it starts to thicken. Add in the duxelles, and reduce the heat a bit more, to medium-low. Now the polenta just sits and cooks. You want it to go for about 45 minutes at a minimum. But as long as you keep it moist, polenta just keeps getting better as it cooks, so don't worry about it. Add some stock whenever it gets too dry, and stir it every few minutes.
4. Preheat your oven to 350°F.
5. Heat a cast iron pan on high heat. When it’s good and hot, sear the steak, about 3 minutes on each side. Then transfer it to a baking sheet, and put it in the oven for 10 minutes. At the end of the ten minutes, remove it, and transfer it to a cutting board, to rest for about ten minutes.
6. Heat 1 tablespoon of the butter in a saucepan. Add in the shallots, and cook until they turn translucent. Add in whatever drippings are left on the baking sheet, and the port wine, and reduce nearly all of the liquid away. Then add the chicken stock. When it boils, add in salt to taste, and then remove from the heat. Add in the remaining butter and stir until it melts.
7. While the steak is resting, add the butter and cheese to the polenta, and stir it in.
8. Slice the steak against the grain.
9. On each plate, put a nice mound of polenta, and a helping of the steak. Then drizzle the sauce over the steak, and a little bit of extra virgin olive oil over the polenta.
10. Eat!

# Define Distance Differently: the P-adic norm

As usual, sorry for the delay. Most of the time when I write this blog, I’m writing about stuff that I know about. I’m learning about the p-adic numbers as I write these posts, so it’s a lot more work for me. But I think many of you will be happy to hear about the other reason for the delay: I’ve been working on a book based on some of my favorite posts from the history of this blog. It’s nearly ready! I’ll have more news about getting pre-release versions of it later this week!

But now, back to p-adic numbers!

In the last post, we looked a bit at the definition of the p-adic numbers, and how p-adic arithmetic works. But what makes the p-adic numbers really interesting and valuable is their metric.

Metrics are one of those ideas that are simultaneously simple and astonishingly complicated. The basic concept of a metric is straightforward: I've got two numbers, and I want to know how far apart they are. But it becomes complicated because it turns out that there are many different ways of defining metrics. We tend to think of metrics in terms of euclidean geometry and distance – the words that I used to describe metrics, "how far apart," come from our geometric intuition. In math, though, you can't ever just rely on intuition: you need to be able to define things precisely. And precisely defining a metric is difficult. It's also fascinating: you can create the real numbers from the integers and rationals by defining a metric, and the metric will reveal the gaps between the rationals. Completing the metric – filling in those gaps – gives you the real numbers. Or, in fact, if you fill them in differently, the p-adic numbers.

To define just what a metric is, we need to start with fields and norms. A field is an abstract algebraic structure which describes the behavior of numbers. It’s an abstract way of talking about the basic structure of numbers with addition and multiplication operations. I’ve talked about fields before, most recently when I was debunking the crackpottery of E. E. Escultura here.

A norm is a generalization of the concept of absolute value. If you've got a field $F$, then a norm on $F$ is a function $|\cdot|$ from values in $F$ to non-negative real numbers, satisfying three properties:

1. $|x| = 0$ if and only if $x = 0$.
2. $|xy| = |x| |y|$
3. $|x + y| \le |x| + |y|$

A norm on $F$ can be used to define a distance metric $d(x, y)$ between $x$ and $y$ in $F$ as $| x - y|$.

For example, the absolute value is clearly a norm over the real numbers, and it defines the euclidean distance between them.

So where do the gaps come from?

You can define a sequence $a$ of values in $F$ as $a = \{ a_i \}$ for an infinite set of indices $i$. There's a special kind of sequence called a Cauchy sequence, which is a sequence where $\lim_{i,j \rightarrow \infty} |a_i - a_j| = 0$.

You can show that any Cauchy sequence of rational numbers converges to a real number. But it's pretty easy to show that many (in fact, most!) Cauchy sequences of rational numbers do not converge to rational numbers. There's something in between the rational numbers which a Cauchy sequence of rationals can converge to, but it's not a rational number. When we talk about the gaps in the rational numbers, that's what we mean. (Yes, I'm hand-waving a bit, but getting into the details would be a distraction, and this is the basic idea!)
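Here's a concrete sketch of such a gap (the choice of example is mine): Newton's iteration for the square root of 2, carried out in exact rational arithmetic. Every term is rational, and successive terms crowd arbitrarily close together, so the sequence is Cauchy; but the thing it converges to, $\sqrt{2}$, isn't rational, so within the rationals the sequence converges to a hole.

```python
from fractions import Fraction

# Newton's iteration for sqrt(2), in exact rational arithmetic.
# Each term is a rational number, and the squares approach 2,
# but no rational number squares to exactly 2.
x = Fraction(1)
for _ in range(5):
    x = (x + 2 / x) / 2
    print(x, "  x^2 - 2 =", float(x * x - 2))
```

After just five steps the square differs from 2 by an astronomically small amount, yet `x * x == 2` is never true: the limit lives in one of the gaps.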

When you’re playing with number fields, the fundamental choice that you get is just how to fill in those gaps. If you fill them in using a metric based on a Euclidean norm, you get the real numbers. What makes the p-adic numbers is just a different norm, which defines a different metric.

The idea of the p-adic metric is that there’s another way of describing the distance between numbers. We’re used to thinking about distance measured like a ruler on a numberline, which is what gives us the reals. For the p-adics, we’re going to define distance in a different way, based on the structure of numbers. The way that the p-adic metric works is based on how a number is built relative to the prime-number base.

We define the p-adic metric in terms of the p-adic norm exactly the way that we defined Euclidean distance in terms of the absolute value norm. For the p-adic numbers, we start off with a norm on the integers, and then generalize it. In the p-adic integers, the norm of a number is based on the largest power of the base that's a factor of that number: for an integer $x$, if $p^n$ is the largest power of $p$ that's a factor of $x$, then the p-adic norm of $x$ (written $|x|_p$) is $p^{-n}$. So the more times you multiply a number by the p-adic base, the smaller the p-adic norm of that number is.

The way we apply that to the rationals is to extend the definition of p-factoring: if $p$ is our p-adic base, then we can define the p-adic norm of a rational number as:

• $|0|_p = 0$
• For other rational numbers $x$: $|x|_p = p^{-\text{ord}_p(x)}$, where:
  • If $x$ is a natural number, then $\text{ord}_p(x)$ is the exponent of the largest power of $p$ that divides $x$.
  • If $x$ is a rational number $a/b$, then $\text{ord}_p(a/b) = \text{ord}_p(a) - \text{ord}_p(b)$.

Another way of saying that is based on a property of rational numbers and primes. For any prime number $p$, you can take any rational number $x$, and represent it as a p-based ratio $p^n\frac{a}{b}$, where neither $a$ nor $b$ is divisible by $p$. That representation is unique – there's only one possible set of values for $a$, $b$, and $n$ where that's true. In that case, the p-adic norm of $x$ is $|x|_p = p^{-n}$.
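Those definitions translate directly into code. Here's a small sketch (the function names are mine, chosen just for illustration), using exact fractions so that norms like $1/4$ come out exactly:

```python
from fractions import Fraction

def ord_p(n, p):
    # Exponent of the largest power of p dividing the nonzero integer n.
    k = 0
    while n % p == 0:
        n //= p
        k += 1
    return k

def padic_norm(x, p):
    # |x|_p for a rational x, via ord_p(a/b) = ord_p(a) - ord_p(b).
    x = Fraction(x)
    if x == 0:
        return Fraction(0)
    n = ord_p(x.numerator, p) - ord_p(x.denominator, p)
    return Fraction(1, p**n) if n >= 0 else Fraction(p ** (-n))

print(padic_norm(12, 2))              # 12 = 2^2 * 3, so |12|_2 = 1/4
print(padic_norm(Fraction(3, 8), 2))  # ord_2(3/8) = 0 - 3, so |3/8|_2 = 8
```

Note the inversion: multiplying by more factors of $p$ makes the norm smaller, while dividing by $p$ makes it larger.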

Ok, that’s a nice definition, but what on earth does it mean?

Two p-adic numbers $x$ and $y$ are close together if $x - y$ is divisible by a large power of $p$.

In effect, this is the exact opposite of what we're used to. For real numbers written out in decimal form as a series of digits, the usual metric says that the more digits two numbers have in common, moving from left to right, the closer together they are. So 9999 and 9998 are closer than 9999 and 9988.

But with p-adic numbers, it doesn't work that way. Two p-adic numbers are closer together the longer the run of digits they share at the right-hand end – that is, reading right to left, the longer their common prefix. The distance ends up looking very strange. Reading the numerals as base-7 digit strings, in the 7-adics the distance between 56666 and 66666 is smaller than the distance between 66665 and 66666!
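We can check that example directly. I'm reading those numerals as base-7 digit strings (an assumption about the example: as decimal integers, their differences wouldn't involve powers of 7 at all):

```python
def ord_p(n, p):
    # Exponent of the largest power of p dividing the nonzero integer n.
    k = 0
    while n % p == 0:
        n //= p
        k += 1
    return k

def padic_distance(a, b, p):
    # d(a, b) = |a - b|_p: small when a - b is divisible by a high power of p.
    return 0 if a == b else p ** -ord_p(a - b, p)

a, b, c = int("56666", 7), int("66666", 7), int("66665", 7)
print(padic_distance(a, b, 7))  # they differ by 10000 base 7, i.e. 7^4
print(padic_distance(b, c, 7))  # they differ by exactly 1
```

The first distance is $7^{-4}$ and the second is $1$: differing only in the leftmost digit leaves the numbers 7-adically close, while differing in the rightmost digit pushes them far apart.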

As strange as it looks, it does make a peculiar kind of sense. The p-adic distance is measuring a valuable and meaningful kind of distance between numbers: their distance in terms of their relationship to the base prime number $p$. That leads to a lot of interesting stuff, much of which is, to be honest, well beyond my comprehension! For example, Wiles's proof of Fermat's Last Theorem uses properties of the p-adic metric!

Without getting into anything as hairy as FLT, there are still ways of seeing why the p-adic metric is valuable. Next post, we’ll look at something called Hensel’s lemma, which both shows how something like Newton’s method for root-finding works in the p-adic numbers, and also shows some nice algebraic properties of root-finding that aren’t nearly as clear for the real numbers.