How to make a good prediction

This is some general advice on how to make a good prediction.

1. Have an intelligent conversation with your gut instinct! 


Gut instincts are incredibly valuable when making a prediction, and the best predictors often rely heavily on them. But remember that your gut can be flawed. Your instinct is exactly that, an instinct, so any cognitive or emotional biases you have could impede your predictive success.

The trick is not to rely 100% on what your gut instincts tell you but to always question them: subject them to critical appraisal and think about any biases that might be affecting your objectivity.

It's useful to be aware of some of the most common cognitive biases: thinking shortcuts which can corrupt our mental calculations.

2. Separate what you want to happen from the process of predicting what will happen 


Of all these biases the most significant one to be aware of is confirmation bias: selectively using evidence to support your existing point of view.

Remember that your emotions tend to make you believe much more in evidence that supports your point of view than in evidence that contradicts it. As a result, the more emotionally involved you are in the outcome, the less reliable your predictions are likely to be.

To illustrate how significant this issue is, here are the results of a survey conducted in January 2016 where a group of US panellists were asked to predict who would win the US election. You can see that Democrats were pretty certain a Democrat would win and Republicans pretty certain a Republican would win.



When making predictions, try to really stand back from what you feel. But also remember you can overcompensate for this and sometimes be overly negative about an outcome.

3. Unpack the problem


Start by having a good think about the problem you are trying to solve. Making a prediction is essentially a problem-solving exercise. What do you need to know to solve the problem? Where are you going to find that out? What issues do you need to consider? What could affect things? Build up a map in your mind.

4. Gather as much evidence as you can from as diverse a range of sources as you can


For complex geopolitical predictions, assembling as much evidence as you can is vital. Look out for conflicting points of view: understanding differences of opinion will help you pin down the level of certainty on any one topic.

5. A sage piece of advice: “You don’t need to look into a crystal ball to see the future, just read the history books” 


We often overthink how unique a situation is and forget to look at evidence from the past that could help us understand the issues. Always think about whether there have been any similar situations in the past, or from around the world, that could help guide you.

6. Break out the problem into sub-predictions


Often with a complex prediction you have to break it down into lots of sub-factors that are easier to make predictions about.

7. Look at it from more than one angle – inside out, outside in, from a helicopter viewpoint and in meta dimensions:


The more ways you can look at the problem the better. We have a tendency to start looking at problems from the inside out. With elections, for example, we would consider reactions to the leadership and specific policies; an outside-in approach would be to consider things like the historical voting habits of a community and factors like the overall state of the economy. A good predictor will evaluate and weigh things up from as many points of view as they can.


8. Use a “Monte Carlo” style approach to aggregate the answers from the different points of view.


This sounds a bit complicated but it's not really. It's just the name given to the process of adding up the answers from different outcome scenarios. All you do is think about the topic from each specific point of view and predict the outcome based solely on that factor. For example, if the economy is doing well this tends to favour the incumbent political party, so from that way of looking at things the incumbent party would win. You then look at the next point of view and the next, and add up how many point towards one choice or the other. If from 10 points of view choice A comes up 6 times and choice B 4 times, this suggests there is a 60% chance of A being the outcome.
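To make this concrete, here is a minimal sketch of the tallying process in Python; the viewpoints and the outcomes they favour are invented purely for illustration:

```python
from collections import Counter

# Each point of view and the outcome it favours, judged one factor at a time.
viewpoint_calls = {
    "state of the economy": "A",
    "historical voting habits": "A",
    "reaction to the leadership": "B",
    "reaction to specific policies": "A",
    "press coverage": "B",
}

tally = Counter(viewpoint_calls.values())
total = len(viewpoint_calls)
for outcome, count in sorted(tally.items()):
    print(f"Choice {outcome}: {count}/{total} = {count / total:.0%}")
# 3 of the 5 viewpoints favour A, so this crude aggregation suggests
# roughly a 60% chance of outcome A.
```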

9. Enter the mind's eye of the people with the opposing point of view and try to dissuade yourself!


For example, if you are left leaning you should read right leaning publications and sources of information to get a sense of the message and feelings from the other side of the fence, and vice versa.

When making any forecast, it is very important once you have made up your mind not to stop thinking about it, but from that point on to focus on trying to dissuade yourself. This will help you test how robust your prediction is. If you can easily dissuade yourself, or find your opinions being shaken, it's important to take notice of these feelings.

10. Be prepared to change your mind over and over again


Dogmatism is the enemy of good prediction. Never set your prediction in stone; always be prepared to adjust your level of confidence in what the likely outcome will be, and be ready to completely change your mind.

You need to be ready to jump ship at any time, and be constantly alert to overconfidence. A good predictor, I am afraid to say, lives in a somewhat paranoid mindset.

11. Think through all the what-if scenarios


What if something like an earthquake happens in the next 3 weeks? Unlikely, but a good predictor will go through and consider all the things that might happen that could influence the outcome of a prediction, and take them into consideration.

12. Understand market behaviour


Markets tend to over-react to news emotionally. As a result you often see massive spikes in stock market prices when company results are announced or economic figures are released, and then within a few days prices settle back to where they were before.

It's natural to overestimate the impact of any one piece of news, so be considered when processing news information. My advice would be to sleep on it and think about how you will react to it the next day.

13. Understand certainty predictions


If you are asked to predict how certain you are that something will happen, it helps to understand the basic odds of things happening. For example, if there are 2 people in a race and you are 50% certain one candidate will win, you are saying you don't know who will win. If there are 3 candidates in a race and you are 50% certain one particular candidate will win, you are saying that candidate is as likely to win as the other two combined – a much stronger claim, even though you still don't know whether they will win overall.
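As a worked illustration of this point, here is a tiny, hypothetical helper that compares a stated certainty with the baseline chance of a blind guess:

```python
def advantage_over_chance(certainty: float, n_candidates: int) -> float:
    """How a stated certainty compares with picking a candidate at random."""
    baseline = 1 / n_candidates  # chance level for a blind guess
    return certainty / baseline

print(advantage_over_chance(0.50, 2))  # 1.0 -> no better than a coin toss
print(advantage_over_chance(0.50, 3))  # 1.5 -> 1.5x better than chance
```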

Having a good understanding of your certainty level can help ensure you make more considered predictions and also identify where there are holes in your knowledge.

14. Read Superforecasting by Philip Tetlock & Dan Gardner and The Signal & the Noise by Nate Silver:


Much of this advice is taken from these two books, and if you don't read both of them you might find it tough to be a good forecaster.



Putting these ideas into practice...


An example:  working out the result of the UK EU referendum

Imagine you were trying to predict which way the UK will vote in the upcoming EU referendum. Below are some examples of the range of things you could think about and research to help you make a better prediction.

Unpacking the problem:  
1. What are the polls saying?
2. We cannot entirely rely on what the polls are saying, can we? Why not?
3. Online polls and telephone polls are telling different stories; what does that mean?
4. A lot of people are undecided; which way are they likely to vote?
5. Who are the people who cannot be reached by polls and what are they thinking?
6. Who will be most motivated to vote?
7. What impact could news events have as they unfold in the next few months?
8. What will be the impact of debate and discussions?
9. What impact will the personalities involved have on the debate?
10. What impact will press media and social media conversations have on the debate?

Gathering evidence:
1. What are all the polls saying right now? How much do they vary?
2. Why did the polls get it wrong last time? What can we learn from this?
3. How have the polls moved so far?
4. What evidence do we have about who the hard to poll groups are and what they are thinking?
5. What evidence do we have about voting intention amongst the undecided?
6. What was the latent sentiment of the UK public before the referendum process started?
7. What historical data is available from other referendums or independence votes?

Looking at it from different angles:
1. In other votes for independence how did public sentiment evolve in the run up to these votes?
2. Do British people feel they will be economically better off inside or outside the EU?
3. How important are immigration issues in the minds of voters compared to economic issues, and which issue will have the final say?
4. Who do British voters trust more: David Cameron or Boris Johnson?
5. What other big issues are driving people’s feelings on which way to vote?
6. With 3 major newspaper groups lobbying for an exit, how much influence do newspapers have?
7. How passionately do the different sides of the debate feel about the issue?
8. Will the levels of passion change as we get close to the vote?
9. How will the levels of passion change if the polls are leaning heavily one way or the other?
10. Will loss aversion have an impact on voters' decisions?

Scenario planning:
1. What would happen to voter sentiment if a terrorist attack occurred in the UK before the vote?
2. What would happen if there was a collapse in the value of the pound before the vote?

After you have read and thought about this, perhaps the most important thing to do is think about what has not been considered in these lists that might have an impact.


What should we be measuring in brand tracking studies?

…In a nutshell, what brands do you buy and why?


Byron Sharp et al have fairly convincingly proved that the key health metric of a brand is its total universe of users.

The awareness of the brand, the loyalty of the users of the brand and how much they like the brand are all rather academic constructs as all these measures highly correlate with each other and ultimately with the brand’s universe of users. All can be modeled using a Dirichlet distribution model.

The proportion of people who are aware of a brand can be modelled from the proportion that are spontaneously aware of it. With X total users there will be Y loyalists and Z people who love and recommend the brand. If users drop, then liking, awareness and loyalty levels will all drop in parallel. If you ask about liking of a brand, you will find we all like the brands we use at pretty similar levels.

To illustrate the point, here is an example of data taken from a quite typical brand tracking study, where the statistical correlation between brands purchased in the last 12 months and all the other core metrics measured in the study has been calculated. The correlation for nearly every metric is above 0.85, and for some metrics in the high 0.9s.
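If you want to replicate this kind of check on your own tracker data, a minimal sketch follows; the per-brand figures below are invented, not the study's actual numbers:

```python
import numpy as np

# One value per brand, e.g. % of respondents giving that answer.
usage     = np.array([42, 35, 28, 19, 12, 8])   # bought in last 12 months
awareness = np.array([80, 71, 64, 50, 38, 30])  # spontaneous awareness
liking    = np.array([45, 38, 30, 22, 14, 9])   # like the brand

for name, metric in [("awareness", awareness), ("liking", liking)]:
    r = np.corrcoef(usage, metric)[0, 1]
    print(f"usage vs {name}: r = {r:.2f}")
```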


So you could argue that the only brand equity question really worth asking in a brand tracking survey is: “Which of these brands do you use?”



Understanding why people buy brands


What is worth asking in a brand tracking survey is why people choose brands. This is something that will potentially vary by brand and category and cannot be modelled using a Dirichlet distribution model. This is where the drivers of market movements can be divined.

To measure this it is vitally important that you don’t prompt the respondents for their answers. If you do, these questions will themselves become a proxy measure for brand usage as well.

If you present respondents with a list of brands and an attribute, as so many brand trackers tend to do, it is much easier for them to associate the attribute with the brands they know best than with the brands they don't, so the brands that get selected the most for each attribute will simply be those with the most users. To illustrate this point, below is the correlation between purchased in the last 12 months and brand attribution for each brand, from the same brand tracking survey as above; you can see again that it correlates highly.



Below is another example that really helps visualize this issue. It shows the prompted brand attributes of different telecom services, which correlate on average at c0.85 with brand usage data.

 

The second reason for not using a prompted brand attribute list is that these lists rarely adequately cover the diversity of reasons why people actually choose individual brands in individual categories. They are all too often simply generic lists of factors that have little or no relevance to the category. The above example really speaks for itself. Ask yourself: do you really make a decision on your choice of telecom provider because it “expresses your personality”? This factor is tenuous at best.


Below are the results of an experiment where we asked people their reasons for purchasing shampoo. We compared the prompted attribute responses to an open-ended question, to which over 60 distinct factors were given. We discovered that fewer than half the reasons for purchase were covered by the closed questions.

The third reason for not using a prompted list is that it does not tell you anything about the relative strength of different factors. If I ask you whether price is important when making a purchase, most people will say yes, but that does not tell you anything about how significant price actually is.

To mine reasons for purchase effectively, the exercise needs to be done in a way that does not anchor responses and allows respondents the freedom to express their actual reasons, rather than putting thoughts into their heads. We believe the best way to do this is by asking the question in a more spontaneous, open-ended format linked to the product usage question. Simply asking why they chose the brand produces a much more varied list of responses by brand.




Fairly basic text analytics techniques can deliver quant-level metrics like the example below, demonstrating the influence and scale of each factor, similar to a prompted process. The key difference is that you get far greater differentiation of opinions by brand.
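As a flavour of the kind of basic text analytics meant here, this sketch counts keyword-based factor mentions in open-ended responses; the responses, factors and keyword patterns are all invented for illustration:

```python
import re
from collections import Counter

responses = [
    "it smells lovely and leaves my hair shiny",
    "cheapest one on the shelf",
    "gentle on my scalp, no chemicals",
    "great price and it smells nice",
]

# Map each factor to a crude keyword pattern.
factors = {
    "price": r"\b(price|cheap|cheapest|value)\b",
    "fragrance": r"\b(smell|smells|fragrance|scent)\b",
    "gentleness": r"\b(gentle|mild|no chemicals)\b",
}

counts = Counter()
for text in responses:
    for factor, pattern in factors.items():
        if re.search(pattern, text.lower()):
            counts[factor] += 1

for factor, n in counts.most_common():
    print(f"{factor}: mentioned in {n}/{len(responses)} responses")
```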

See the example below, where the same brand metrics were measured using a closed versus an open-ended approach. You start to see the personality of the brands emerging far more strongly with the open-ended question.



Using closed questions, all four of these brands are seen as delivering pretty similar levels of cleaning power, gentleness, product strength and feeling. But when you examine the open-ended data you see each brand's unique personality emerging: the perceived cleaning power of Ariel, for example, or the gentleness of Fairy.


By conducting a detailed analysis of reasons for purchase we were able to show far more than a simple identification and quantification of the primary drivers of purchase. We were able to compare how significant each driver was for each brand. For example below, how often price is mentioned in association with these different shampoo brands varies enormously.


This comparative reason-for-purchase feedback can also be used to understand the niche issues and micro market movements that would never normally emerge from a brand tracking survey.

Take the example below of a question we have added to a shampoo tracker for two years running, where we asked people to think about the reasons why they chose and switched shampoos. In Year 1 we observed 0 mentions of the term parabens. A year later there were 5 mentions: a tiny number admittedly, but when we examined Google search data it was clear that the use of parabens in some shampoos was an emerging and growing issue which was potentially important for marketeers to be aware of.


From this same tracker we were able to pick up movements on several micro factors of potential interest: UV sun protection (three mentions in Year 1 up to eight in Year 2), caffeine (eight mentions up to 12), ‘2 in 1’ (down from 14 to five), general mentions of “chemicals” (up from 15 to 30). None of these issues would be picked up in a traditional tracker but all are potentially useful insights.

In conclusion, with this approach you not only get a handle on the headline issue but also some of the emerging stories underneath.

What can researchers learn from film script writers?

If you study the art of film making, you will learn that a good film script is based around one great question that grabs your attention from the off; the story naturally emerges from this and slowly reveals the answer. The question drives the whole story.

Here are some examples:
  • What if every day was the same? GROUNDHOG DAY 
  • What if a nun was made to be a nanny? THE SOUND OF MUSIC
  • What if a really smart innocent person went to prison? SHAWSHANK REDEMPTION
  • What if dreams & reality were inter-changeable? MATRIX
  • What if there's more to life than being ridiculously good looking? ZOOLANDER
All the books also emphasise how important good narrative structure is to making a great film, i.e. a film that people want to watch and concentrate on from start to finish. Films construct heroes through which the story is told, and these stories need to adhere to a strict story structure. There are about 7 of these basic story structures, established from a time well before the dawn of film making; in fact the basic structure of storytelling has hardly changed for thousands of years.



Building a survey around a great question


I believe a good survey can be built around one great question in the same way, and that the key to designing a great survey is then adhering to a strict narrative structure: you place the respondent in the role of the hero, and the questions in the survey help the respondent slowly reveal the answers to this central question by telling their own story.

Here are a couple of examples of a simple question that you could build a survey around:

“What is the secret of a really good shampoo?”

From the off, you immediately know the purpose of the survey, and you can imagine taking participants on a journey through a series of questions that mine their viewpoint on this topic. You can tie all the questions into it, e.g. first of all we would like to establish which brands you have experience of using; what do you think about these different brands; which are the best in your mind and why; if you were going to sum up what you are looking for in a perfect shampoo, what features would it have... etc.

“In a life of hair washing what have you learnt?”

Again this question has an in-built story structure. You might ask people from the outset to think about all the different types of shampoo they have experience of using, what they thought of them, and which brands they have built some affinity with. You could then get them to think about their experiences of good and bad hair days as a result of using certain types of shampoo, etc.

It’s interesting how once a good central question is established, the rest of the questions you ask can flow out of this easily and fluidly. Have a go next time you are planning a survey and you will see!

Narrative structure: the key to good surveys


Essentially what you are doing is building a story, and as in film making it's important this story adheres to a strict structure. What kills so many survey experiences, I believe, is being asked a whole load of questions in no particular order. In the same way that you might walk out of a film made up of a series of unrelated visuals and dialogue, if people don't understand where the survey is going they get frustrated and are more likely to drop out or not pay attention to the questions. A survey is a journey and the respondents need to know where they are going; otherwise they could be like kids in the back of the car asking “are we nearly there yet” every 5 minutes.

Ready-made narratives


Just as storytellers have mapped out the 7 common plot structures, if you are struggling to think up your own survey narrative there are a number of ready-made, well-established ones you can beg, borrow or steal. Here are a couple of examples.

The trial narrative


The “trial” narrative is one we have used repeatedly and very successfully. Putting a product or service on trial has an in-built narrative structure. First you have the case for the prosecution: what is wrong with the product or service, what are you frustrated by? Then you have the case for the defence: what has the product or service done well? Then you have the jury process, where respondents evaluate the pros and cons of the product's strengths and weaknesses, and finally a verdict, where respondents are asked to give their final rating. We have used and adapted this idea in a number of ways, across a range of different consumer surveys.



The build a new future narrative


This is another one we have used in different guises. You start out by asking “What are the strengths and weaknesses of the current product/service/situation?”, i.e. what is your life like right now. You then ask “What do you want in a perfect world?”, and then explore living in the real world with practical constraints, getting them to explore trade-off solutions. Once drawn into this process you can then challenge participants to design their own version of these products or services, with or without any practical constraints, and ask them to cross-evaluate each other's ideas.


The journey narrative


People can very easily grasp the idea of a journey, so you can use it to help guide people through a whole range of survey processes. You basically tell people at the beginning of the survey where they are starting from and where they will end up.



Making the respondent the hero of their own story


This can be a tremendously powerful conceptual construct to draw more thoughtful feedback out of respondents. They need to feel that what they are doing is important, has meaning, and that you care about what they have to say. What you have to do is let them tell their story, and give them room to do this by asking questions that place them in control. Rather like in a film, you let them enter an imaginary world where you set them challenges to overcome, like the example below.

Want to learn more?


If you are interested in learning more about how to apply narrative structure to your surveys, I recommend you start by reading a book that was originally recommended to me by a good friend of mine who is a lecturer in film editing. Discussing some of these ideas with him, he told me I must read “The Writer's Journey” by Christopher Vogler, one of Hollywood's most successful script editors. It's a book that explains in detail the strict narrative structure of films, and much of the thinking in it can be directly applied to survey copywriting. I thoroughly recommend you read this book yourself as the start of a journey to improve your own surveys.


2014 market research book list

Coming to the end of the year, I thought I would share a list of the best books I have read in 2014 that I think other market researchers might like to read. Not all of these are new books by any means, so forgive me if you have already read half of them.

This Will Make You Smarter



This book is a compendium of scientific and philosophical ideas in one- or two-page essays on quite a remarkable cross-section of topics. There are some really exciting thoughts packed into this book that I think market researchers could make good use of. I think reading it really did make me a little smarter!






Expert Political Judgment: How Good Is It? How Can We Know?


Philip E. Tetlock

Philip Tetlock's thinking has had more influence on how I think about conducting market research this year than any other single person's. I was introduced to this book by Walker Smith from the Futures Company, and I would recommend that anyone who has an interest in the science of prediction read it. Learn that political experts are not quite as good as chimps tossing coins at predicting things!




The Signal and the Noise: The Art and Science of Prediction


I realise this book is a few years old now, and I wish I had read it sooner. There are so many really important ideas stuffed into this book that market researchers can use in their everyday research. It's both inspiring and useful.






Strategy: A History


This small thumbnail belies a bloody thick book, which I have to admit I have not read every page of. It looks at strategy from all sorts of angles, from war through to politics, and summarizes the thinking of every major strategist in history, including the likes of Sun Tzu, Napoleon and Machiavelli. There is loads of great thinking for market researchers to digest, and probably even more valuable insights for anyone running a business. It contains a detailed look at game theory and the trials and issues of trying to apply strategy in real life. There is some sage advice in this book.



Decoded: The Science Behind Why We Buy



This book really helps explain the basics of shopping decision making and is a compendium of behavioral economic theory, an important topic for nearly all market researchers to understand. I really like the way it uses visual examples to explain some of the theory, making it an effortless read. This book should be on every market researcher's shelf.





100 Things Every Designer Needs to Know about People


This book should really be titled “100 things market researchers designing surveys and presentations should know about people”! Everyone involved in either of these tasks should be encouraged to read it. Loads and loads of really clear, sensible advice.








The Curve: Turning Followers into Superfans


I read this after reading a very enthusiastic LinkedIn review by Ray Poynter – thank you! It persuaded me to buy it. There are some nice radical ideas in here about how to market things by giving things away, while at the other end of the scale offering premium high-price solutions for those willing to pay for them.

The Numbers Game: Why Everything You Know About Football is Wrong

Chris Anderson and David Sally

I rather immersed myself in sports stats books this year. In the way that data is transforming sporting strategy, there are lessons to be learnt by the whole of the market research industry. As an English person with a love of football, I feel a bounden duty to promote The Numbers Game, which looks at how statistical data has changed how the game is played. I loved this book, and I am afraid I bored senseless everyone I knew with any interest in football by quoting insights from it. I also read Moneyball this year, the classic opus on how a proper understanding of stats transformed the fortunes of a major league baseball team; it is a great story and a lovely read.


Who Owns the Future?


Jaron Lanier

This book has an important message about the impact of the digital economy on our future. I cite from the book directly as it explains it best: "In the past, a revolution in production, such as the industrial revolution, generally increased the wealth and freedom of people. The digital revolution we are living through is different. Instead of leaving a greater number of us in excellent financial health, the effect of digital..." Worth a read!





The Golden Rules of Acting 

Andy Nyman

This is a lovely little book you can read in one short sitting. Why, though, do I recommend market researchers read it? Well, not because it teaches you anything about acting; it is more about life and humanity, dealing with failure and the right approach to challenges. There is not much difference in my mind between going for an audition and doing a pitch presentation. I took some heart from reading this book.






Want to see some other book recommendations?  Try this site:

http://inspirationalshit.com/booklist#


Your 2015 recommendations?


I'd love to hear your recommendations for books I might read in 2015 – tweet me @jonpuleston.


The science of prediction



This blog post is a short introduction to the science of prediction, a topic I have been totally immersed in over the last few months and recently presented on at the 2014 ESOMAR Congress with Hubertus Hofkirchner. I thought I would share some of what I have learnt.


The accuracy of any prediction is based roughly around this formula...

Prediction accuracy = Quality of information x Effort put into making the prediction x (1 - Difficulty of accurately aggregating all the dependent variables) x Objectivity with which you can do this x Randomness of the event

P = QxEx(1-D)xOxR

Here is the thinking behind this:
  • If you have none of the right information, your prediction will be unreliable
  • If you don't put any effort into processing the information, your prediction may well be unreliable
  • The more complex a task it is to weigh up and analyse the information needed to make a prediction, the less likely it is that the prediction will be correct
  • Unless you stand back from the prediction and look at things objectively, your prediction could be subject to biases which lead you to making an inaccurate prediction
  • Ultimately, prediction accuracy is capped by the randomness of the event. For example, predicting the outcome of tossing a coin 1 time v 10,000 times has completely different levels of prediction reliability.
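Expressed as a toy calculation in Python (scoring each factor from 0 to 1 is my assumption, not part of the original formula, and the scores below are invented):

```python
# A toy rendering of P = Q x E x (1-D) x O x R, with every factor scored 0-1.
def prediction_accuracy(quality, effort, difficulty, objectivity, randomness_cap):
    """Crude multiplicative model: any single weak factor drags accuracy down."""
    return quality * effort * (1 - difficulty) * objectivity * randomness_cap

# A well-informed, diligent, fairly objective forecaster on a moderately
# complex question whose outcome is 90% determined by non-random factors:
print(prediction_accuracy(0.8, 0.9, 0.3, 0.7, 0.9))  # ~0.32
```

Because the factors multiply together, no amount of effort can rescue a prediction built on poor information or poor objectivity.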

Realize that prediction accuracy is not directly linked to sample size


You might note, as a market researcher, that this formula is not directly dependent on sample size: one person with access to the right information, who is prepared to put in enough effort, has the skills needed to process this data and is able to remain completely objective, can make as good a prediction as a global network of market research companies interviewing millions of people on the same subject! I cite as an example Nate Silver's achievement of single-handedly predicting all 50 US state results in the 2012 election.

Now obviously we are not all as smart as Nate Silver; we don't have access to as much information, few of us would be prepared to put in the same amount of effort, and many of us may not be able to process this information as objectively.

So it does help to have more than one person involved, to ensure that the errors caused by one person's lack of information or another's lack of effort or objectivity can be accounted for.

So how many people do you need to make a prediction?


Now this is a good question; the answer, obviously, is that it depends.

It firstly depends on how much expertise the people making the prediction have on the subject individually and how much effort they are prepared to make. If they all know their stuff, or are prepared to do some research and put some thought into it, then you need far fewer people than you might think.

16 seems to be about the ideal size of an active, intelligent prediction group

In 2007, Jed Christiansen of the University of Buckingham took a look. He used future events with very little general coverage and impact – rowing competitions – and asked participants to predict the winners. A daunting task, as there are no clever pundits airing their opinions in the press, as there are in soccer. However, Christiansen recruited his participant pool from the teams and their (smallish) fan base through a rowing community website; in other words, he found experts. He found that the magic number was as little as 16. Markets with 16 traders or more were well calibrated; below that number, prices could not be driven far enough.

The Iowa Electronic Markets, probably the most famous of prediction systems, which has successfully been used to predict over 600 elections, has, I understand, involved an average of fewer than 20 traders per prediction.

Taking account of ignorance


However, for every completely ignorant person you add into the mix, who effectively makes a random prediction, you will start to corrupt the prediction. And in many situations there is such a scarcity of experts that, to separate ignorant from expert predictions, you often need to interview a lot more than 16 people.

Take, for example, trying to predict tomorrow's weather. Imagine that 10% of the people you ask have seen the weather forecast and know it will not rain – these could be described as the experts – and the rest simply guess, 50% guessing it will rain and 50% not. It is easy to see that if by chance more than 10% of the random sample predict it will rain, which is entirely possible, the group prediction will be wrong. Run the maths, and for 95% certainty you will need a margin of error of less than 10%, which means you will have to ask 86 people.

It gets even harder if the experts themselves are somewhat split in their opinions. Say, for example, you were trying to predict who will win a tennis match, and 10% of the sample you ask are keen tennis fans (experts) who predict 2:1 that player A will win, while the rest randomly guess 50% player A, 50% player B. Because of the division among the experts, you now need a margin of error of less than 7% to be 95% confident, which means you will need to interview around 200 people.
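Here is a minimal sketch of the margin-of-error arithmetic behind figures like these, assuming the standard normal approximation for a proportion (it gives numbers of the same order as those quoted above):

```python
import math

def required_sample(margin: float, z: float = 1.96, p: float = 0.5) -> int:
    """Sample size needed for a given margin of error at ~95% confidence."""
    return math.ceil((z / margin) ** 2 * p * (1 - p))

print(required_sample(0.10))  # 97 people for a 10-point margin
print(required_sample(0.07))  # 196 people for a 7-point margin (around 200)
```

The tighter the margin the experts' edge demands, the more people you have to interview.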

Taking account of cognitive bias


It gets even harder still if you start to take into account the cognitive biases of the random sample. For example, just by asking whether you think it will rain tomorrow, more people will randomly say yes than no because of latent acquiescence bias. We have tested this out in experiments: if you ask people to predict how many wine drinkers prefer red wine, the prediction will be 54%; if you ask people to predict how many wine drinkers prefer white wine, the number selecting red wine drops to 46%. So it's easy to see how cognitive biases like this make predicting things difficult.

In the weather example above, this effect would instantly cancel out the opinions of the experts, and no matter how many people you interviewed you would never get an accurate weather forecast from the crowd unless you accounted for this bias.

This is just one of a number of biases that impact the accuracy of our predictions, one of the worst being our emotions.

Asking a Manchester United fan to predict the result of their team's match is nigh on useless, as it is almost impossible for them to envisage losing a match, due to their emotional attachment to the team.

This makes political predictions particularly difficult.

Prediction biases can be introduced simply as a result of how you ask the question


Imagine I were doing some research asking people to predict how often a tossed coin comes up heads, and I asked the question "If I toss this coin, predict if it will be heads or tails". For the reasons explained above, on average around 68% of people will say heads. The question has been asked in a stupid way, so it delivers back a wildly inaccurate aggregated prediction. If you change the question to "If a coin were tossed 10,000 times, predict how often it would be heads", you probably need no more than a sample of 1 to get an accurate prediction. Now this might sound obvious, but this issue sits at the root of many inaccurate predictions in market research.

Imagine you asked 15 people to predict the "% chance" of it raining tomorrow, and 5 of them happen to have seen the forecast and know there is a 20% chance of rain, while the rest randomly guess numbers between 0% and 100%. If their average random guess is 50%, this will push the average prediction up to 40% rain. If there is the same mix of informed and uninformed predictors in your sample, it does not matter how many more people you interview: the average prediction will never improve and will always be out by 20 points.
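A quick simulation makes the point that adding more guessers never fixes this; the one-third informed mix mirrors the example above, and everything else is illustrative:

```python
import random

random.seed(1)
# A third of the sample saw the forecast (20% chance of rain); the rest
# guess uniformly between 0% and 100%, averaging 50%.
for n in (15, 150, 15_000):
    informed = n // 3
    guesses = [20.0] * informed + [random.uniform(0, 100) for _ in range(n - informed)]
    print(f"n = {n:>6}: average prediction = {sum(guesses) / n:.0f}% (truth: 20%)")
# The average prediction hovers around 40% no matter how large the sample gets.
```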

This runs very much counter to how we tend to think about things in market research, where it's nearly all about gathering large, robust samples. In the world of prediction, it's all about isolating experts and making calibrations to take account of biases.

The stupid, second-hand ways we often ask questions can exacerbate this.

"Do you like this ad" for example is not the same question as whether you think its going to be a successful ad. The question is a Chinese whisper away from what you want to know.

A successful ad is not an ad I like; it's an ad that lots of people will like. Change the question and motivate the participants to really think, and we have found that the sample needed to make a perfect prediction about the success of an ad drops from around 150 to as low as 40.

Picking the right aggregation process


The basics

Imagine you were asking people to predict the trading price of a product and the sample of predictions from participants looked like this.

$1, $1.2,  $0.9, $0.89, $1.1,  $0.99, $0.01, $1.13,  $0.7,  $10,000  

Your mean = $1,000... Whoops, that joker putting in $10k really messed up our prediction.

For this reason you cannot use mean averages. For basic prediction aggregation we recommend using the median. The median of these predictions is $1, which looks a lot more sensible.

An alternative might be to simply discard the "outliers" and use all the data that looks sensible. In this example it's the $0.01 and the $10,000 that look out of sync with the rest; removing these, the mean = $0.99, which seems a bit more precise.
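The same example reproduced with Python's statistics module; the trimming thresholds are an assumption chosen simply to drop the two obvious outliers:

```python
from statistics import mean, median

predictions = [1, 1.2, 0.9, 0.89, 1.1, 0.99, 0.01, 1.13, 0.7, 10_000]

print(f"mean:   ${mean(predictions):,.2f}")   # ~$1,000 - wrecked by the joker
print(f"median: ${median(predictions):.2f}")  # ~$1 - robust to the outlier

# Alternative: drop the obvious outliers, then take the mean of the rest.
trimmed = [p for p in predictions if 0.1 < p < 100]
print(f"trimmed mean: ${mean(trimmed):.2f}")  # ~$0.99
```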

Weighting individual predictions


The importance of measuring prediction confidence

In the world of prediction it's all about working out how to differentiate the good and bad predictors, and one of the simplest techniques is simply to ask people how confident they are in their prediction.

For example, if I had watched the weather forecast I would be a lot more confident in predicting tomorrow's weather than if I had not. So it would be sensible, when asking people to predict tomorrow's weather, to ask them if they had seen the weather forecast and how confident they are. From this information you could easily isolate the "signal" from the "noise".

The trick with all prediction protocols is to find a way of isolating the people who are better informed than others and better at objectively analyzing that information, but in most cases it's not as easy as asking if they have seen the weather forecast.

For more complex predictions, like the result of a sports match, the relationship between prediction confidence and prediction accuracy is not linear, but confidence weighting can certainly help if carefully calibrated. How you go about this is a topic for another blog post.
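As a flavour of what this can look like, here is a naive linear confidence weighting (my assumption for illustration, not the calibrated approach alluded to above; the forecasts are invented):

```python
# Each forecaster gives a probability of rain plus a 0-1 confidence score.
forecasts = [
    (0.20, 0.9),  # saw the weather forecast: 20% chance of rain, very confident
    (0.70, 0.2),  # pure guess, low confidence
    (0.40, 0.3),  # half-remembered forecast
]

weighted = sum(p * c for p, c in forecasts) / sum(c for _, c in forecasts)
print(f"Confidence-weighted chance of rain: {weighted:.0%}")  # ~31%
# The confident, informed forecast dominates; a simple average would say 43%.
```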

In the meantime, if you are interested in finding out more about prediction science, read our recently published ESOMAR paper titled Predicting the future.





How to make the perfect guess in a pub quiz



Having spent the last few months researching and studying the science of prediction, and being quite fond of pub quizzes, here is my guide to making the perfect guess in a pub quiz using some of what we have learnt.


Step 1: Ideation


Ask people to think of the first answer that comes into their heads.

If they think of an answer they should not shout it out, as this could corrupt the purity of the other participants' thinking. They should put up their hand to indicate they have thought of an answer, and write it down. They should also write down how confident they are on a scale of 1 to 3. Each player can think of more than one answer, but they must score their confidence in each one.

Confidence range:
1 = a hunch
2 = quite confident
3 = certain

Answer time:
Under 5 seconds = certain
5+ seconds = assign certainty based on personal confidence measure...

Step 2: Initial idea evaluation


Once everyone has given up, share the answers from the team along with the levels of confidence.

Rules for deciding if the answer is correct:
  • If more than one person has come up with the same answer in under 5 seconds, then it's almost certain that this answer is correct.
  • If anyone is certain about their answer, there is a high chance it is correct.
  • If more than one person has come up with the same answer and the combined confidence score is higher than 3, there is quite a high chance that answer is correct, and we suggest you opt for it.
If there is a conflict, or no answer scoring more than 2 points, go to step 3....

If nobody has come up with an answer the team is satisfied with, go to step 4....
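For the methodically minded, the step 2 rules above can be expressed as code; the answers and confidence scores below are invented for illustration:

```python
from collections import defaultdict

def evaluate(answers):
    """answers: list of (text, confidence 1-3, answered_in_under_5s) tuples."""
    by_text = defaultdict(list)
    for text, confidence, fast in answers:
        by_text[text].append((confidence, fast))

    # Rule 1: the same answer from 2+ people in under 5 seconds -> near certain.
    for text, votes in by_text.items():
        if len(votes) > 1 and all(fast for _, fast in votes):
            return text
    # Rule 2: anyone certain (confidence 3) -> high chance it is correct.
    for text, votes in by_text.items():
        if any(conf == 3 for conf, _ in votes):
            return text
    # Rule 3: a duplicated answer with combined confidence above 3.
    for text, votes in by_text.items():
        if len(votes) > 1 and sum(conf for conf, _ in votes) > 3:
            return text
    return None  # no consensus: move on to step 3 (market trading)

answers = [("Gdansk", 2, False), ("Gdansk", 2, False), ("Krakow", 1, False)]
print(evaluate(answers))  # Gdansk, via rule 3 (combined confidence = 4)
```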

Step 3: Answer market trading


Each person must rate each answer by buying or selling shares in each answer choice with some "virtual money". They can buy or sell up to 2 shares in each answer.

Tip: if a person has 2 ideas that are both "hunches", research has shown the first idea is around 30% more likely to be correct. Take this into consideration when making your buy/sell decisions.

e.g. if I think an answer is definitely correct I buy 2 shares; if I think it's correct but I am unsure I buy 1 share; if I think it's definitely not correct I sell 2 shares; if I am feeling a little uncomfortable that it is wrong I sell 1 share. Everyone has to commit to buy or sell – nobody is allowed to sit on the fence.

Add up the total money traded in each idea and choose the winner.

If you want to be super nerdy about how you do this, don't simply add up the amount bet. Answers should be weighted somewhat, as there is not a linear relationship between betting confidence and prediction accuracy. Having studied data from a large number of predictions, we have found that the prediction accuracy of someone who claims to be very confident is not twice as good as someone who has a hunch; it's only about 20% better (see chart below). And people having a hunch are only 10% better than people making a total guess. Interestingly, there is little difference between someone who has a hunch and someone who says they are fairly sure.


Furthermore, when you compare people betting against things with people betting for them, the predictive accuracy of the amount bet varies in an odd way. We found smaller negative bets to be slightly more predictive than large negative bets. Strong positive bets, on the other hand, were more predictive than small positive bets, but those who bet more than 2 were actually slightly less predictive than those who bet 2. Hence our 2-point betting scale.


A more accurate betting aggregation process should score the amount bet like this:

-2 =  -20% 
-1 =  -20%
+1 = +10% 
+2 = +20% 
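
Expressed as a short sketch, with invented bets:

```python
# Map each -2..+2 share bet to its calibrated weight rather than face value.
WEIGHTS = {-2: -0.20, -1: -0.20, 1: 0.10, 2: 0.20}

def trading_score(bets):
    """Aggregate a list of -2..+2 share bets using the calibrated weights."""
    return sum(WEIGHTS[b] for b in bets)

bets_by_answer = {"answer A": [2, 1, -1], "answer B": [1, -2]}
for answer, bets in bets_by_answer.items():
    print(f"{answer}: {trading_score(bets):+.2f}")
# answer A: +0.10 and answer B: -0.10, so answer A wins.
```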

If, on either of these aggregation processes, no idea has a positive trading value, go to step 4....

Step 4: Idea stimulation


If you are not satisfied with any answer, then all the team members should voice any "clues" they may be thinking of, e.g. "I think his name begins with B" or "I think it's something to do with football". Your thoughts could help another person think up the answer.

The scientific term for this is "dialectical bootstrapping", which basically means the sharing and discussion of ideas, and it has been shown to improve crowd wisdom generation processes. Find out more in Herzog and Hertwig (2009).

The more small clues you share, the greater the chance of one of them triggering a thought in a team member. Note these can also be negative clues, e.g. "it's definitely not..."

If this process stimulates any ideas then go back to step 3 to evaluate them...

Step 5:  Picking the best of a bad bunch of guesses



If you are left with more than one answer that nobody is particularly satisfied with, then pick the first answer the first person thought of. This one has the highest chance of being correct. It won't necessarily be right, but it will have a slightly higher chance.

Advanced techniques:


Performance weighting your teams predictions

If you keep track of each individual's answer trading record over several quizzes (i.e. if they bought 2 shares in an answer that eventually proved to be correct, their personal balance would be +2), you can then start to weight your team's market predictions. You can do this by giving each person in the team a different total pot of money to bet, based on their past record of correctly predicting the right answer and how much money they would have won.

Note it would take several weeks, studying at least 100 predictions, to get a good idea of the prediction ability of each player, so it would be a mistake to calibrate this after only one or two quizzes – luck has a far more important role to play than skill in the short term.

You might also want to assess how individuals' confidence levels change when they have drunk 1, 2 or 3 units of alcohol, and start removing budget (or indeed giving extra budget!) as the night progresses!

Encouraging the team to think like foxes not hedgehogs

What buggers up the predictions of many pub quiz teams is the bullish viewpoint of one or two individuals. Having strong opinions, I am afraid, does not generally correlate very well with actually being good at making predictions. If you want to read up on the evidence for this, I recommend you order this book – all will be explained.



The team should foster an atmosphere where it's OK to change your mind, where it's not a battle between right and wrong, and where nobody is scared of failure.

Avoiding decision making biases

If the question is multiple choice, make sure that your answer is not biased by order effects or anchoring in the way the question is asked. For example, in yes/no questions more people pick yes than no for irrational reasons. When presented with multiple-choice options, slightly more people pick the first choice, again for irrational reasons. By being aware of this you can make sure your decisions are being made objectively.

Important Note/disclaimer:

This advice is a fantasy methodology for making a perfect prediction; I don't advocate using it in a real pub quiz. Firstly, for practical reasons: at the speed most pub quizzes progress, you probably would not have time to implement this approach. Secondly, it may not be in the spirit of a fair pub quiz to use this technique in real life – it might be considered cheating!


