Paul Meehl is a name that doesn't crop up much in betting circles. He was a psychologist who would be loved by those among us who prefer to rely on the day-in, day-out consistency of our computers to sort out our bets rather than on the subjective opinions of ourselves or others. Daniel Kahneman touches upon his work in the excellent book Thinking, Fast and Slow, a book that never mentions horse racing but should be on your to-read list. Meehl is considered the father of research into whether algorithms (sets of rules) can outperform humans in various decision-making processes, ranging from predicting the future value of wine to the future grade scores of college admissions. This ongoing area of research shows that algorithms, computer-based or otherwise, outperform human decisions around 60% of the time, with the remainder roughly a tie. Given that the algorithmic approach is quicker and cheaper, even the ties amount to a win.
How does this relate to horse betting decisions? I have always thought that a skilled punter will outperform an algorithm in terms of ROI% over a small set of bets. In other words, when I used to read that Alan Potts would produce on average a 30% ROI over around 300 bets per year, my experience suggested it would be difficult to get a computer to beat that. Where the computer comes into its own is when we increase the bet volume. It is often said that ROI% is for the ego but actual profit is what really counts, and I am sure everyone would agree that 5% on 3,000 bets is preferable to 30% on 300 bets. It is in this situation that Meehl's ideas really get into gear. Getting a human to produce 3,000 bets per year subjectively (a modest number for computer-based operators) and hit 5% would be far more difficult and a huge drain on time and mental stability, and that is before you have added in the time and effort of executing the bets.
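At level stakes the comparison is simple arithmetic: profit equals number of bets times stake times ROI. A quick sketch, assuming 1 unit staked per bet (the figures are the ones from the paragraph above):

```python
# Level-stakes profit: number of bets * stake per bet * ROI.
def profit(num_bets, stake, roi):
    """Total profit from level-stakes betting at a given ROI."""
    return num_bets * stake * roi

expert = profit(300, 1, 0.30)   # 30% ROI on 300 bets  -> 90 units
algo = profit(3000, 1, 0.05)    # 5% ROI on 3,000 bets -> 150 units
```

The low-ROI, high-volume approach banks two-thirds more, which is the whole point of letting the machine do the grind.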
What else adds to the inferiority of experts over algorithms? Meehl highlighted the fact that experts like to try to think outside the box. They overcomplicate things in order to justify their position as experts; you certainly would not get a job presenting on Channel 4 by making selections based on the same four criteria race after race after race. The other problem is that humans are notorious for coming to different conclusions when faced with the same criteria twice. We are hopeless at making the same decision regardless of whether it is a bright sunny morning, whether we slept well last night, or whether we have just dropped 40 points over the last two weeks. Yes, even the first two can have an impact. One of my favourite lines from Kahneman's book, and one which conjures up images of just about any pundit you care to name in the media, is

“The idea that the future is unpredictable is undermined every day by the ease with which the past is explained”

Robyn Dawes takes this subject further by looking at how even algorithms can be made more efficient. In the same way that we like to argue that human judgement adds an element of much-needed sophisticated, flexible thinking to good decision making, Dawes argues that we tend to make the same mistake with regard to algorithm design. A common method for creating algorithms from data is to calculate a weight to assign to each element. For example, if two of the elements were how good the jockey is and how good the trainer is, then statistical analysis using regression would come up with numbers to multiply the jockey and trainer values by. This sounds very plausible; after all, some factors need to be taken more seriously than others. Dawes showed, however, that simply using the same weight for all factors often produces superior final performance.
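A minimal sketch of Dawes' unit-weight idea: standardise each factor so they are on the same scale, then simply add them with equal weight instead of fitting regression coefficients. The factors and figures here are illustrative, not a recommended model:

```python
# Dawes' unit-weight model: z-score each factor, then sum with equal weight.
from statistics import mean, pstdev

def standardise(values):
    """Rescale a list of values to mean 0, standard deviation 1."""
    m, s = mean(values), pstdev(values)
    return [(v - m) / s for v in values]

jockey_sr = [18, 12, 25, 9]     # jockey strike rates (%) for four runners
trainer_sr = [15, 20, 10, 14]   # trainer strike rates (%) for the same runners

# Equal weight for every factor: just add the standardised scores.
scores = [j + t for j, t in zip(standardise(jockey_sr), standardise(trainer_sr))]
best = scores.index(max(scores))  # index of the highest-rated runner
```

Here the second runner comes out on top: a merely decent jockey figure is outweighed by a strong trainer figure, with no regression fitting anywhere in sight.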
How can you apply this to racing? Kahneman has a do-it-yourself section that echoes many good books on race betting. Select a set of criteria; around six is considered a good number. Taking the previously mentioned items, we could make how good the jockey is one of them, and you could even make it objective by using his or her strike rate as a simple means of evaluation. Now assign a number between 1 and 5 for each of the criteria, a top jockey getting 5 and a poor one getting 1. Summing these numbers is then a very simple way of coming to a conclusion about the runner's merits. One very important point which I cannot emphasise enough: if you can come up with six items that the market consistently undervalues, then the strength of this approach will be greatly magnified. As one very good statistician and race bettor I know once said, it is not so much the method of analysis chosen but the quality of your data.
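The do-it-yourself checklist above can be sketched in a few lines. The six criteria named here are placeholders for whatever you believe the market undervalues, and the scores are illustrative:

```python
# Kahneman-style DIY checklist: six criteria, each scored 1-5, summed
# with equal weight. The criteria below are examples, not a recommended set.
CRITERIA = ["jockey", "trainer", "recent_form", "course_form",
            "going_suitability", "class"]

def rate_runner(scores):
    """scores: dict mapping each criterion to an integer score from 1 to 5."""
    assert set(scores) == set(CRITERIA), "score every criterion exactly once"
    assert all(1 <= s <= 5 for s in scores.values()), "scores must be 1-5"
    return sum(scores.values())

runner = {"jockey": 5, "trainer": 4, "recent_form": 3,
          "course_form": 2, "going_suitability": 4, "class": 3}
total = rate_runner(runner)   # 21 out of a possible 30
```

The same six questions, answered the same way, every race: exactly the consistency Meehl found that human judges cannot sustain.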
Algorithms allow us to free up time for the important things in life outside of betting, and if you are still unconvinced, here is a very simple algorithm devised by statisticians for testing the likely longevity of a relationship:

Frequency of lovemaking minus frequency of quarrels
You don't want to be below zero!

Let me know in the comments below whether you have a systematic/algorithmic approach to betting or a more flexible interpretive approach.