The Investors vs. the Tabby

There’s an amusing article making the rounds of the internet today, about the successful investment strategy of a cat named Orlando.

A group of people at the Observer put together a fun experiment.
They asked three groups to pretend that they had 5000 pounds, and asked each of them to invest it, however they wanted, in stocks listed on the FTSE. They could only change their investments at the end of a calendar quarter. At the end of the year, they compared the results of the three groups.

Who were the three groups?

  1. The first was a group of professional investors – people who are, at least in theory, experts at analyzing the stock market and using that analysis to make profitable investments.
  2. The second was a classroom of students, who are bright, but who have no experience with investing.
  3. The third was an orange tabby cat named Orlando. Orlando chose stocks by throwing his toy mouse at a target board randomly marked with investment choices.

As you can probably guess by the fact that we’re talking about this, Orlando the tabby won, by a very respectable margin. (Let’s be honest: if the professional investors came in first, and the students came in second, no one would care.) At the end of the year, the students had lost 160 pounds on their investments. The professional investors ended with a profit of 176 pounds. And the cat ended with a profit of 542 pounds – more than triple the profit of the professionals.

Most people, when they saw this, had an immediate reaction: “See, those investors are a bunch of idiots. They don’t know anything! They were beaten by a cat!”
And on one level, they’re absolutely right. Investors and bankers like to present themselves as the best of the best. They deserve their multi-million dollar earnings because, so they tell us, they’re smarter, harder-working, and more insightful than the people who earn less. And yet, despite their self-proclaimed brilliance, professional investors can’t beat a cat throwing a toy mouse!

It gets worse, because this isn’t a one-time phenomenon: there have been similar experiments that selected stocks by throwing darts at a newspaper’s stock pages, by rolling dice, or by picking slips of paper from a hat. In many of these experiments, the experts don’t win. There’s a strong implication that “expert investors” are not actually experts.

Does that really hold up? Partly yes, partly no. But mostly no.

Before getting to that, there’s one thing in the article that bugged the heck out of me: the author went out of their way to defend the humans, presenting their performance as if positive outcomes were due to human intelligence, and negative ones were due to bad luck. In fact, I think that in this experiment, it was all luck.

For example, the article describes how the professionals were making more money than the cat up until the last quarter of the year, and presents that as human intelligence outperforming the cat’s random picks. But there’s no reason to believe that. There’s no evidence that there’s anything qualitatively different about the last quarter that made it less predictable than the first three.

The headmaster at the students’ school actually said, “The mistakes we made earlier in the year were based on selecting companies in risky areas. But while our final position was disappointing, we are happy with our progress in terms of the ground we gained at the end and how our stock-picking skills have improved.” Again, there’s absolutely no reason to believe that the students’ stock-picking skills miraculously improved in the final quarter; it’s much more likely that they just got lucky.

The real question that underlies this is: is the performance of individual stocks in a stock market actually predictable, or is it dominantly random? Most of the evidence that I’ve seen suggests that it’s a combination: on short timescales, it’s predominantly random, but on longer timescales it becomes much more predictable.
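
One way to make that concrete: in a toy model where each day’s return is a tiny positive drift buried in much larger noise, the drift accumulates linearly with the horizon while the noise only grows with its square root, so the predictable part eventually dominates. Here’s a minimal sketch; the drift and volatility are assumed, illustrative numbers, not estimates from any real market:

```python
import random
from math import sqrt

# Toy model: each day's return is a small drift MU buried in much larger
# noise SIGMA. Over n days the drift contributes n * MU, while the noise
# only grows like SIGMA * sqrt(n), so longer horizons favor the signal.
MU, SIGMA, TRIALS = 0.0003, 0.01, 100_000

for days in (1, 21, 252, 1260):  # a day, a month, a year, five years
    positive = 0
    for _ in range(TRIALS):
        # A sum of n independent normals is itself normal: draw it directly.
        total = random.gauss(days * MU, SIGMA * sqrt(days))
        if total > 0:
            positive += 1
    print(f"{days:>5} trading days: {100 * positive / TRIALS:.1f}% of runs gain")
```

With these made-up numbers, a single day is essentially a coin flip, but a five-year run comes out ahead something like 85% of the time.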

But people absolutely do not want to believe that. We humans are natural pattern-seekers. It doesn’t matter whether we’re talking about financial markets, pixel patterns in a bitmap, or answers on a multiple-choice test: our brains look for patterns. If you take randomly generated data and look at it long enough, with enough possible strategies, you’ll find a pattern that fits. But it’s an imposed pattern, and it has no predictive value. It’s like the images of Jesus on toast: we see patterns in noise. So people see patterns in the market, and they want to believe that it’s predictable.
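
It’s easy to demonstrate this on data that is random by construction: search a pile of coin flips for a profitable trading rule, and the “best” rule will look impressive on the data it was mined from and do nothing on fresh flips. A minimal sketch (all of the names and parameters here are arbitrary choices for illustration):

```python
import random

random.seed(1)

# Pure noise: 500 days of coin-flip "returns", split into the data we
# search over and fresh data we've never looked at.
returns = [random.choice((-1, 1)) for _ in range(500)]
train, test = returns[:250], returns[250:]

def score(pattern, data):
    """Total return from replaying a fixed in/out pattern over the data."""
    positions = (pattern[i % len(pattern)] for i in range(len(data)))
    return sum(pos * r for pos, r in zip(positions, data))

# Try 1,000 arbitrary 10-day in/out patterns and keep the best one in-sample.
candidates = [[random.choice((0, 1)) for _ in range(10)] for _ in range(1000)]
best = max(candidates, key=lambda p: score(p, train))

print("best pattern, on the data it was fit to:", score(best, train))
print("same pattern, on fresh noise:           ", score(best, test))
```

The first score looks like skill; the second shows that it was an imposed pattern all along.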

Second, people want to take responsibility for good outcomes, and excuse bad ones. If you make a million dollars betting on a horse, you’re going to want to say that it was your superior judgement of the horses that led to your victory. When an investor makes a million dollars on a stock, of course he wants to say that he made that money because he made a smart choice, not because he made a lucky one. But when that same investor loses a million dollars, he doesn’t want to say that he lost it because he’s stupid; he wants to say that he lost money because of bad luck: random factors beyond his control that he couldn’t predict.

The professional investors were doing well during part of the year; therefore, for that part of the year, they claim that their good performance came from good judgement about which stocks to buy. But when they lost money during the last quarter? Bad luck. But overall, their knowledge and skills paid off! What evidence do we have to support that? None. But we want to assert that we have control, that experts understand what’s going on and are able to make intelligent predictions.

The students’ performance was lousy, and if they had invested real money, they would have lost a tidy chunk of it. But their teacher believes that their performance in the last quarter wasn’t luck – it was that their skills had improved. Nonsense! They were lucky.

On the general question: Are “experts” useless for managing investments?

It’s hard to say for sure. In general, experts do perform better than random, but not by a huge margin, and certainly not by as much as they’d like us to believe. The Wall Street Journal used to run an experiment that compared dartboard stock selection against human experts, and against passive investment in the Dow Jones Index stocks, over a one-year period. The pros won 60% of the time. That’s better than chance: the experts’ knowledge and skills were clearly benefiting them. But blindly throwing darts at a wall could still beat the experts two times out of five!
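
Whether 60% reflects a real edge or just luck depends on how many contests were run, which is easy to check with a binomial tail computation. A minimal sketch; the contest count here is an assumption for illustration, since I don’t have the real number in front of me:

```python
from math import comb

# If expert picks were really no better than coin flips, how surprising
# is a 60% win rate? That depends entirely on the number of contests;
# n = 100 is an assumed count, purely for illustration.
n, wins = 100, 60
p_value = sum(comb(n, k) for k in range(wins, n + 1)) / 2**n
print(f"P(at least {wins} wins in {n} fair contests) = {p_value:.3f}")
```

With those assumptions, the answer comes out around 0.03: too unlikely to be pure chance, but hardly the mark of overwhelming expertise.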

When you actually do the math and look at the data, it appears that human judgement does have value. Taken over time, human experts do outperform random choices, by a small but significant margin.

What’s most interesting is a time-window phenomenon. In most studies, human performance relative to random choice is directly related to the length of time that the investment strategy is followed: the longer the timeframe, the better the humans perform. In daily investments, like day-trading, most people don’t do any better than random. The performance of day-traders is pretty much in line with what you’d expect from random choice. Monthly, it’s still mostly a wash. But if you look at yearly performance, you start to see a significant difference: humans typically outperform random choice by a small but definite margin. And if you look at longer timeframes, like five or ten years, you start to see really sizeable differences. The data makes it look like the daily fluctuations of the market are chaotic and unpredictable, but that there are long-term trends that we can identify and exploit.
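
The same kind of toy model from earlier shows why a tiny real edge would produce exactly this pattern: imperceptible day to day, a modest advantage over a year (in the neighborhood of the Journal’s 60%), and a large one over a decade. A minimal sketch, again with assumed, illustrative numbers:

```python
import random
from math import sqrt

# Head-to-head: an "expert" picker with a tiny daily edge versus a purely
# random picker, over longer and longer holding periods.
EDGE, SIGMA, TRIALS = 0.0002, 0.01, 100_000

for days, label in ((1, "daily"), (21, "monthly"), (252, "yearly"), (2520, "ten years")):
    wins = 0
    for _ in range(TRIALS):
        # Draw each cumulative return directly as a normal with the
        # appropriate horizon-scaled mean and standard deviation.
        expert = random.gauss(days * EDGE, SIGMA * sqrt(days))
        monkey = random.gauss(0.0, SIGMA * sqrt(days))
        if expert > monkey:
            wins += 1
    print(f"{label:>9}: expert beats random picks {100 * wins / TRIALS:.1f}% of the time")
```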

5 thoughts on “The Investors vs. the Tabby”

  1. Harald K

    There’s a thing here that I rarely see accounted for in these discussions. To make money on the stock market, it’s not enough to know which stocks are valuable. Those are already priced highly. Fundamentally, what you want to know is which stocks are overpriced, and which are underpriced.

    The stock market is mostly played by people who examine firms and try very hard to find stocks that seem mispriced. As a result, most stocks aren’t very mispriced (at least not according to publicly available information), because a lot of people have already examined them. Thus, a single cat playing the stock market will probably hit stocks that aren’t too underpriced or overpriced.

    On the other hand, if the stock market were played entirely by cats, with one thinking human in the bunch (student or graduated money-mover), then I feel confident the human would win big. The cat is surfing on the rationality of humans in the other example. Certainly I agree professional investors are overvalued, but it probably wouldn’t go better for society if we all invested randomly (or evenly).

  2. Michael Chermside

    It’s worth noting that there is a selection bias which is impossible to eliminate. For all we know, there were three other similar experiments done by other researchers in which the experts won and the cat lost, but those results were uninteresting so they were never published.

    But moving on, I found something you said very interesting:

    > The real question that underlies this is: is the performance of individual stocks in a stock market actually predictable, or is it dominantly random? Most of the evidence that I’ve seen suggests that it’s a combination: on short timescales, it’s predominantly random, but on longer timescales it becomes much more predictable.

    That “rings true” to me — it sounds plausible, and it would explain the observation that human judgement seems to do better over longer time scales and about the same as random selection over short time scales. But I’m wondering just what it means.

    In other words, is there a way to make the phrases “dominantly random” and “predictable” mathematically precise? For instance, perhaps model stock prices as if they had a “true value” component which adjusts slowly over time (perhaps using a moving average with a wide averaging window?) combined with a “random” component (the rest of the variation). Would we then say that this model was working well if we could find nice statistical properties for the “random” component? (Properties like having a normal distribution, or some sort of fractal-shaped variation in jump sizes with a consistent fractal dimension, or… well, I’m not sure what.)

    Does anyone else have a better definition of what we might mean by the intuitive terms “dominantly random” and “predictable”?
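
    A minimal sketch of that decomposition, on synthetic data (everything here, from the noise model to the window width, is an assumption): treat a wide trailing average as the “true value” and examine what’s left over.

    ```python
    import random
    import statistics

    random.seed(0)

    # Synthetic prices: a slow upward drift plus much larger daily noise.
    prices, p = [], 100.0
    for _ in range(1000):
        p += 0.02 + random.gauss(0.0, 1.0)
        prices.append(p)

    WINDOW = 60  # assumed width of the "true value" moving average

    trend = [statistics.fmean(prices[i - WINDOW:i]) for i in range(WINDOW, len(prices))]
    residual = [price - t for price, t in zip(prices[WINDOW:], trend)]

    print("residual mean: ", statistics.fmean(residual))
    print("residual stdev:", statistics.stdev(residual))
    # "Dominantly random" might then mean that the residual carries most of
    # the short-term variance and passes whatever tests we impose on it
    # (normality, absence of autocorrelation, ...).
    ```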

  3. Greg B

    Can you cite some of these studies allegedly showing human choice superiority (aside from the Wall Street Journal gimmick)? Interestingly (again in the Journal), we find that large pools of investor funds are leaving managed mutual funds for index funds like Vanguard’s, which by definition are the result of “random” picks taken to the extreme.

  4. MCA

    IMHO, the problem is quite simply that the cat *ever* wins. If the cat/dartboard come within 3 orders of magnitude of a “skilled” human, the system is clearly so random, so noisy, and so chaotic that every argument about “wisdom of markets” and “experienced investors” and suchlike simply becomes laughable.

    Based on these sorts of studies, it seems I would actually be better off taking my entire retirement savings to Vegas and betting on blackjack – while it’s got a random element, there’s at least enough order that a human does better than a cat.

    That, IMHO, is the real punchline – the stock market is somewhere between roulette and blackjack, with far less skill than poker. And at least poker is fun.

  5. John Haugeland

    It seems much more likely to me that for every 100 times this experiment is run, 97 times it does what you would superficially expect, and nobody publishes.

    Good math, bad math, huh?

    Then why are you interpreting the results of this study without a null hypothesis? There’s no record of how often this doesn’t work.

    This should be relatively easy to check: make a simple stock leaderboard and let people log in with virtual money, then compare their performance to that of rand().

    You just didn’t do it, because surely a single test over a statistically inadequate sample, run once, with no indication of negative run rate, is normative and interesting and useful.
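
    A minimal sketch of that rand() baseline (the portfolio-return model here is an assumed toy, not market data): simulate a pile of random portfolios, then ask where a given player’s annual return lands in that null distribution.

    ```python
    import bisect
    import random

    random.seed(0)

    def random_portfolio_return(n_stocks=5, n_quarters=4):
        """Annual return of an equal-weight portfolio of random picks, with
        each stock's quarterly return drawn from an assumed noise model."""
        return sum(
            sum(random.gauss(0.0, 0.08) for _ in range(n_stocks)) / n_stocks
            for _ in range(n_quarters)
        )

    # The null distribution: what returns do purely random picks produce?
    null = sorted(random_portfolio_return() for _ in range(100_000))

    def beats_fraction(x):
        """Fraction of random portfolios that a return of x outperforms."""
        return bisect.bisect_left(null, x) / len(null)

    # E.g., Orlando's year: 542 / 5000 = 10.8%.
    print(f"a 10.8% year beats {beats_fraction(0.108):.0%} of random portfolios")
    ```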
