2014 market research book list

Coming to the end of the year, I thought I would share a list of the best books I have read in 2014 that I think other market researchers might like to read. Not all of these are new books by any means, so forgive me if you have already read half of them.

This Will Make You Smarter



This book is a compendium of scientific and philosophical ideas, in one or two page essays on quite a remarkable cross section of topics. There are some really exciting thoughts packed into this book that I think market researchers could make good use of. I think reading it really did make me a little smarter!

Expert Political Judgment: How Good Is It? How Can We Know?


Philip E. Tetlock

Philip Tetlock's thinking has had more influence on how I think about conducting market research than anyone else's this year. I was introduced to this book by Walker Smith from the Futures Company, and I would recommend that anyone who has an interest in the science of prediction read it. Learn that political experts are not quite as good at predicting things as chimps tossing coins!

The Signal and the Noise: The Art and Science of Prediction


I realise this book is a few years old now, and I wish I had read it sooner. There are so many really important ideas stuffed into this book that market researchers can use in their everyday research. It's both inspiring and useful.

Strategy: A History


This small thumbnail belies a bloody thick book, which I have to admit I have not read every page of. It looks at strategy from all sorts of angles, from war through to politics, and summarizes the thinking of every major strategist in history, including the likes of Sun Tzu, Napoleon and Machiavelli. There is loads of great thinking for market researchers to digest, and probably even more valuable insights for anyone running a business. It contains a detailed look at game theory and the trials and issues of trying to apply strategy in real life. There is some sage advice in this book.

Decoded: The Science Behind Why We Buy



This book really helps explain the basics of shopping decision making and is a compendium of behavioral economics theory, an important topic for nearly all market researchers to understand. I really like the way it uses visual examples to explain some of the theory, making it an effortless read. This book should be on every market researcher's shelf.

100 Things Every Designer Needs to Know about People


This book should really be titled "100 things market researchers designing surveys and presentations should know about people"! ...And everyone involved in either of these tasks should be encouraged to read it. Loads and loads of really clear, sensible advice.


The Curve: Turning Followers into Superfans


I read this after reading a very enthusiastic LinkedIn review by Ray Poynter, thank you! It persuaded me to buy it. There are some nice radical ideas in here about how to market things by giving things away while, at the other end of the scale, offering premium high-price solutions for those willing to pay for them.

The Numbers Game: Why Everything You Know About Football is Wrong

Chris Anderson and David Sally

I rather immersed myself in sports stats books this year. The way that data is transforming sporting strategy holds lessons for the whole of the market research industry. As an English person with a love of football, I feel a bounden duty to promote The Numbers Game, which looks at how statistical data has changed how the game is played. I loved this book, and I am afraid I bored senseless everyone I knew who had any interest in football by quoting insights from it. I also read Moneyball this year, the classic opus on how a proper understanding of stats transformed the fortunes of a major league baseball team; it is a great story and a lovely read.


Who Owns the Future?


Jaron Lanier

This book has an important message about the impact of the digital economy on our future. I'll cite from the book directly, as it best explains it: "In the past, a revolution in production, such as the industrial revolution, generally increased the wealth and freedom of people. The digital revolution we are living through is different. Instead of leaving a greater number of us in excellent financial health, the effect of digital..." Worth a read!

The Golden Rules of Acting

Andy Nyman

This is a lovely little book that you can read in one short sitting. Why, though, do I recommend market researchers read it? Well, not because it teaches you anything about acting; more because it teaches you about life and humanity, dealing with failure, and the right approach to challenges. There is not much difference in my mind between going for an audition and doing a pitch presentation. I took some heart from reading this book.

Want to see some other book recommendations?  Try this site:

http://inspirationalshit.com/booklist#


Your 2015 recommendations?


I'd love to hear your recommendations for books I might read in 2015: tweet me @jonpuleston


The science of prediction



This blog post is a short introduction to the science of prediction, a topic that I have been totally immersed in over the last few months and recently presented on at the 2014 ESOMAR Congress with Hubertus Hofkirchner. I thought I would share some of what I have learnt.


The accuracy of any prediction is based roughly on this formula...

Prediction accuracy = Quality of information × Effort put into making the prediction × (1 − Difficulty of accurately aggregating all the dependent variables) × Level of objectivity with which you can do this × Pure randomness of the event

P = Q × E × (1 − D) × O × R

Here is the thinking behind this (a toy calculation follows the list):
  • If you have none of the right information, your prediction will be unreliable
  • If you don't put any effort into processing the information, your prediction may be unreliable
  • The more complex the task of weighing up and analysing the information needed to make a prediction, the less likely the prediction will be correct
  • Unless you stand back from the prediction and look at things objectively, your prediction could be subject to biases which lead you to make an inaccurate prediction
  • Ultimately prediction accuracy is capped by the randomness of the event. For example, predicting the outcome of tossing a coin 1 time v 10,000 times involves completely different levels of prediction reliability.
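Since every factor here is a judgment call, the formula is best treated as a thinking tool rather than a calculation, but here it is as a toy Python sketch. All the factor values are invented for illustration, and I am reading R as the share of the outcome that is not pure noise (per the last bullet), so the output is a relative score, not a true probability.

```python
# Toy illustration of P = Q x E x (1 - D) x O x R. Factor values are
# invented; R is read as the non-random share of the outcome, so a
# highly random event (low R) caps the achievable accuracy.

def prediction_accuracy(quality, effort, difficulty, objectivity, randomness):
    """All factors scored 0-1; higher is better, except difficulty."""
    return quality * effort * (1 - difficulty) * objectivity * randomness

# A well-informed, diligent, objective predictor, fairly predictable event:
print(prediction_accuracy(0.9, 0.8, 0.2, 0.9, 0.9))  # ~0.47

# The same predictor facing an almost purely random event:
print(prediction_accuracy(0.9, 0.8, 0.2, 0.9, 0.1))  # ~0.05
```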

Realize that prediction accuracy is not directly linked to sample size


You might note as a market researcher that this formula does not directly depend on sample size, i.e. one person with access to the right information, who is prepared to put in enough effort, has the skills needed to process this data and is able to remain completely objective, can make as good a prediction as a global network of market research companies interviewing millions of people on the same subject! I cite as an example Nate Silver's achievement of single-handedly predicting all 50 US state results in the 2012 election.

Now obviously we are not all as smart as Nate Silver, we don't have access to as much information, few of us would be prepared to put in the same amount of effort, and many of us may not be able to process this information as objectively.

So it does help to have more than one person involved, to ensure that the errors caused by one person's lack of information or another's lack of effort or objectivity can be accounted for.

So how many people do you need to make a prediction?


Now this is a good question, and the answer obviously is that it depends.

It firstly depends on how much expertise the people making a prediction have on the subject individually and how much effort they are prepared to make. If they all know their stuff or are prepared to do some research and put some thought into it, then you need a lot less than you might think.

16 seems to be about the ideal size for an active, intelligent prediction group

In 2007, Jed Christiansen of the University of Buckingham took a look. He used a future event with very little general coverage and impact, rowing competitions, and asked participants to predict the winners. A daunting task, as there are no clever pundits airing their opinions in the press, as in soccer. However, Christiansen recruited his participant pool from the teams and their (smallish) fan base through a rowing community website; in other words, he found experts. He found that the magic number was as little as 16: markets with 16 traders or more were well calibrated; below that number, prices could not be driven far enough.

The Iowa Electronic Markets, probably the most famous prediction system out there, which has successfully been used to predict over 600 elections, has, I understand, involved an average of fewer than 20 traders per prediction.

Taking account of ignorance


However, for every completely ignorant person you add into the mix, who effectively makes a random prediction, you will instantly start to corrupt the prediction. And in many situations there is such a scarcity of experts that, to isolate ignorant from expert predictions, you often need to interview a lot more people than 16.

Take for example trying to predict tomorrow's weather. Imagine that 10% of the people you ask have seen the weather forecast and know it will not rain (these could be described as the experts) and the rest simply guess, 50% guessing it will rain and 50% not. It is easy to see that if by chance more than 10% of the random sample predict it will rain, which is entirely possible, the group prediction will be wrong. Run the maths: for 95% certainty you will need a margin of error of less than 10% to be confident, which means you will have to ask 86 people.

It gets even harder if the experts themselves are somewhat split in their opinions. Say you were trying to predict who will win a tennis match and 10% of the sample you ask are keen tennis fans (the experts) who predict 2:1 that player A will win, while the rest randomly guess, 50% player A and 50% player B. Because of the division among the experts you now need a margin of error of less than 7% to be 95% confident, which means you will need to interview around 200 people.
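For the curious, the margin-of-error arithmetic behind these sample sizes can be sketched in a few lines of Python, assuming the standard worst-case formula for a proportion (p = 0.5). With z = 1.96 it lands in the same ballpark as the figures quoted above:

```python
# Required sample size for a given margin of error on a proportion,
# using the worst-case formula n = (z * 0.5 / MOE)^2.
import math

def required_sample(moe, z=1.96):  # z = 1.96 for 95% confidence
    return math.ceil((z * 0.5 / moe) ** 2)

print(required_sample(0.10))  # 97  -> weather example (MOE under 10%)
print(required_sample(0.07))  # 196 -> tennis example (MOE under 7%)
```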

Taking account of cognitive bias


It gets even harder if you start to take into account the cognitive biases of the random sample. For example, just by asking whether you think it will rain tomorrow, more people will randomly say yes than no because of latent acquiescence bias. We have tested this out in experiments: if you ask people to predict how many wine drinkers prefer red wine the prediction will be 54%; if you ask people to predict how many wine drinkers prefer white wine, the number of people who select red wine drops to 46%. So it is easy to see how cognitive biases like this make predicting things difficult.

In the weather example above, this effect would instantly cancel out the opinions of the experts, and no matter how many people you interviewed you would never be able to get an accurate weather forecast prediction from the crowd unless you accounted for this bias.

This is just one of a number of biases that impact the accuracy of our predictions, one of the worst being our emotions.

Asking a Manchester United fan to predict the result of their team's match is nigh on useless, as it is almost impossible for them to envisage losing a match, due to their emotional attachment to the team.

This makes political predictions particularly difficult.

Prediction biases can be introduced simply as a result of how you ask the question


Imagine I were doing some research to get people to predict how often a tossed coin comes up heads, and I asked the question "If I toss this coin, predict if it will be heads or tails". For the reasons explained above, on average around 68% of people will say heads. The question has been asked in a stupid way, so it delivers back a wildly inaccurate aggregated prediction. If you change the question to "If a coin were tossed 10,000 times, predict how often it would be heads" you probably need no more than a sample of 1 to get an accurate prediction. Now this might sound obvious, but this issue sits at the root of many inaccurate predictions in market research.

Imagine you asked 15 people to predict the "% chance" of it raining tomorrow, and 5 of them happen to have seen the forecast and know there is a 20% chance of rain, while the rest randomly guess numbers between 0% and 100%. If their average random guess is 50%, this will push the average prediction up to 40% rain. If the same mix of informed and uninformed predictors persists in your sample, it does not matter how many more people you interview: the average prediction will never improve and will always be out by 20%.
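A quick simulation makes the point; this is a minimal sketch with the informed share and the true chance of rain taken from the example above. The blended average hovers around 40% however large the sample gets:

```python
# Simulating the weather prediction example: a third of respondents have
# seen the forecast (20% chance of rain); the rest guess uniformly at
# random. More sample never pulls the average back towards 20%.
import random

def average_prediction(n, informed_share=1/3, true_chance=20):
    preds = [true_chance if random.random() < informed_share
             else random.uniform(0, 100) for _ in range(n)]
    return sum(preds) / len(preds)

for n in (15, 1_000, 100_000):
    print(n, round(average_prediction(n), 1))  # ~40 at every sample size
```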

This runs very much counter to how we tend to think about things in market research, where it's nearly all about gathering large robust samples. In the world of prediction, it's all about isolating experts and making calibrations to take account of biases.

The stupid, often second-hand ways we ask questions can exacerbate this.

"Do you like this ad" for example is not the same question as whether you think its going to be a successful ad. The question is a Chinese whisper away from what you want to know.

A successful ad is not an ad I like; it's an ad that lots of people will like. Change the question and motivate the participants to really think, and we have found that the sample needed to make a perfect prediction about the success of an ad drops from around 150 to as low as 40.

Picking the right aggregation process


The basics

Imagine you were asking people to predict the trading price of a product and the sample of predictions from participants looks like this.

$1, $1.2, $0.9, $0.89, $1.1, $0.99, $0.01, $1.13, $0.7, $10,000

Your mean = $1,000 ..... whoops, that joker putting in $10k really messed up our prediction.

For this reason you cannot use the mean. For basic prediction aggregation we recommend using the median. The median of these predictions = $1, which looks a lot more sensible.

An alternative is to simply discard the "outliers" and use all the data that looks sensible. In this example it's the $0.01 and the $10,000 that look out of sync with the rest; removing these, the mean of the remainder = $0.99, which seems a bit more precise.
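Worked through in code, with the values from the example above:

```python
# Mean v median v outlier-trimmed mean for the price predictions above.
from statistics import mean, median

predictions = [1, 1.2, 0.9, 0.89, 1.1, 0.99, 0.01, 1.13, 0.7, 10_000]

print(mean(predictions))    # ~1000.79 -- wrecked by the $10k joker
print(median(predictions))  # 0.995    -- roughly $1, far more sensible

# Discarding the two obvious outliers before averaging:
trimmed = [p for p in predictions if 0.1 < p < 100]
print(mean(trimmed))        # ~0.99
```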

Weighting individual predictions


The importance of measuring prediction confidence

In the world of prediction it's all about working out how to differentiate the good and bad predictors, and one of the simplest techniques for doing this is simply to ask people how confident they are in their prediction.

For example, if I had watched the weather forecast I would be a lot more confident in predicting tomorrow's weather than if I had not. So it would be sensible, when asking people to predict tomorrow's weather, to ask them if they had seen the weather forecast and how confident they were. From this information you could easily isolate the "signal" from the "noise".

The trick with all prediction protocols is to find a way of isolating the people who are better informed than others and better at objectively analyzing that information, but in most cases it's not as easy as asking if they have seen the weather forecast.

For more complex predictions, like predicting the result of a sports match, the relationship between prediction confidence and prediction accuracy is not linear, but confidence weighting can certainly help; it just needs to be carefully calibrated. How you go about this is a topic for another blog post.

In the meantime, if you are interested in finding out more about prediction science, read our recently published ESOMAR paper titled Predicting the Future.





How to make the perfect guess in a pub quiz



Having spent the last few months researching and studying the science of prediction, and also being quite fond of pub quizzes, here is my guide to how to make a perfect guess in a pub quiz, using some of what we have learnt.


Step 1: Ideation


Ask people to think of the first answer that comes into their heads.

If they think of an answer they should not shout it out, as this could corrupt the purity of other participants' thinking. They should put up their hand to indicate they have thought of an answer and write it down. They should also write down how confident they are on a scale of 1 to 3. Each player can think of more than one answer, but they must score their confidence in each one.

Confidence range:
1 = a hunch
2 =  quite confident
3 = certain

Answer time:
Under 5 seconds = certain
5+ seconds = assign certainty based on personal confidence measure...

Step 2: Initial idea evaluation


After the point at which everyone gives up, you share the answers from across the team and the levels of confidence.

Rules for deciding if the answer is correct:
  • If more than one person has come up with the same answer in under 5 seconds, it's almost certain that this answer is correct.
  • If anyone is certain about their answer, there is a high chance this answer is correct.
  • If more than one person comes up with the same answer and the combined confidence score is higher than 3, there is quite a high chance that answer is correct, and I suggest you opt for it.
If there is a conflict, or no answer scoring more than 2 points, then go to step 3....

If nobody has come up with an answer the team is satisfied with go to step 4....

Step 3: Answer market trading


Each person must rate each answer by buying or selling shares in each answer choice with some "virtual money". They can buy or sell up to 2 shares in each answer.

Tip: if a person has 2 ideas that are both "hunches", research has shown the first idea is around 30% more likely to be correct. Take this into consideration when making your buy/sell decisions.

e.g. if I think an answer is definitely correct I buy 2 shares. If I think it's correct but I am unsure I buy 1 share. If I think it's definitely not correct I sell 2 shares. If I am feeling a little uncomfortable that it is wrong I sell 1 share. Everyone has to commit to buy or sell; nobody is allowed to sit on the fence.

Add up the total money traded in each idea and choose the winner.

If you want to be super nerdy about how you do this, don't simply add up the amount bet. Answers should be weighted somewhat, as there is not a linear relationship between betting confidence and prediction accuracy. Having studied the data from a large number of predictions, we have found that the prediction accuracy of someone who claims to be very confident is not twice as good as someone who has a hunch; it's only about 20% better (see chart below). And people having a hunch are only 10% better than people making a total guess. Interestingly, there is little difference between someone who has a hunch and someone who says they are fairly sure.


Furthermore, when you look at people betting against things compared to betting for things, the prediction accuracy of the amount bet varies in an odd way. Smaller negative bets, we found, are slightly more predictive than large negative bets. Strong positive bets, on the other hand, were more predictive than small positive bets, but those who bet more than 2 were actually slightly less predictive than those who bet 2. Hence our 2-point betting scale.


A more accurate betting aggregation process should score the amount bet like this:

-2 =  -20% 
-1 =  -20%
+1 = +10% 
+2 = +20% 

If after either of these aggregation processes no idea has a positive trading value, then go to step 4....
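Here is a minimal Python sketch of both aggregation options, using the weights from the table above (the candidate answers are invented examples):

```python
# Two ways to aggregate the bets: a raw sum, and the calibrated version
# using the weights above (note -1 and -2 carry the same weight there).

WEIGHTS = {-2: -0.20, -1: -0.20, +1: +0.10, +2: +0.20}

def score(bets, calibrated=True):
    """bets: one integer in {-2, -1, +1, +2} per team member (no fence-sitting)."""
    return sum(WEIGHTS[b] for b in bets) if calibrated else sum(bets)

# Example: two candidate answers, three team members betting on each.
answers = {"Agatha Christie": [+2, +1, -1], "Arthur Conan Doyle": [+1, -1, -2]}
best = max(answers, key=lambda a: score(answers[a]))
print({a: round(score(b), 2) for a, b in answers.items()}, "->", best)
# {'Agatha Christie': 0.1, 'Arthur Conan Doyle': -0.3} -> Agatha Christie
```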

Step 4: Idea stimulation


If you are not satisfied with any answer, then all the team members should voice any "clues" they may be thinking of, e.g. "I think his name begins with B" or "I think it's something to do with football". Your thoughts could help another person think up the answer.

The scientific term for this is "dialectical bootstrapping", which basically means the sharing and discussion of ideas, and it has been shown to improve crowd wisdom generation processes. Find out more in Herzog and Hertwig (2009).

The more small clues you share, the greater the chance of one of them triggering a thought in a team member. Note these can also be negative clues, e.g. "it's definitely not..."

If this process stimulates any ideas then go back to step 3 to evaluate them...

Step 5:  Picking the best of a bad bunch of guesses



If you are left with more than one answer that nobody is particularly satisfied with, then pick the first answer the first person thought of. This one has the highest chance of being correct. It won't necessarily be right, but it will have a slightly higher chance.

Advanced techniques:


Performance weighting your teams predictions

If you keep track of each individual's answer trading record over the course of several quizzes (i.e. if they bought 2 shares in an answer that eventually proved correct, their personal balance would be +2), you can then start to weight your team's market predictions. You can do this by giving each person in the team a different total pot of money to bet, based on their past record in correctly predicting the right answer, i.e. how much money they would have won.
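As a sketch of how that bookkeeping might work (names, balances and the pot size are all invented), assuming budgets are handed out in simple proportion to each player's past trading balance:

```python
# Allocate next quiz's betting budgets in proportion to past performance.

balances = {"Ann": 12, "Bob": 4, "Cat": 8}  # net shares won over past quizzes

def budgets(balances, total_pot=30):
    total = sum(balances.values())
    return {player: round(total_pot * b / total) for player, b in balances.items()}

print(budgets(balances))  # {'Ann': 15, 'Bob': 5, 'Cat': 10}
```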

Note it would take several weeks, studying at least 100 predictions, to get a good idea of the prediction ability of each player, so it would be a mistake to calibrate this after only one or two quizzes: luck plays a far more important role than skill in the short term.

You might also want to assess how individuals' confidence levels change when they have drunk 1, 2 or 3 units of alcohol, and start removing budget (or indeed giving extra budget!) as the night progresses!

Encouraging the team to think like foxes not hedgehogs

What buggers up the predictions of many pub quiz teams is the bullish viewpoint of one or two individuals. Having strong opinions about things, I am afraid, does not generally correlate very well with actually being good at making predictions. If you want to read up on the evidence for this, I recommend you order Philip Tetlock's Expert Political Judgment (reviewed above); all will be explained.



The team should foster an atmosphere where it's OK to change your mind, where it's not a battle between right and wrong, and where nobody is scared of failure.

Avoiding decision making biases

If the question is multiple choice, make sure that your answer is not biased by order effects or anchoring in the way the question is asked. For example, with yes/no questions more people pick yes than no for irrational reasons. When presented with multiple choice options, slightly more people pick the first choice for irrational reasons. By being aware of this you can make sure your decisions are being made objectively.

Important Note/disclaimer:

This advice is a fantasy methodology for making a perfect prediction. I don't advocate using it in a real pub quiz. Firstly, for practical reasons: at the speed most pub quizzes progress, you probably would not have time to implement this approach. Secondly, it may not be in the spirit of a fair pub quiz to use this technique in real life; it might be considered cheating!



Tips on writing a good conference presentation

Are you fretting about putting together a presentation for an upcoming event? Here are some tips based on the experience of sitting through, delivering and designing a few. They are geared towards market research, but I suppose the thinking applies to any presentation.

Design
  1. Images really, really do help make a good presentation - but read Presentation Zen to understand how to use them effectively
  2. Aim to present one thought per slide (you can break this rule if the slide is exceptionally well designed)
  3. Avoid bullet points at all possible costs - this is as much a philosophy as anything ("set your Bayesian priors to zero"). OK, you might be persuaded to let one or two creep in, but don't start with a presentation that is 100% bullet points!
  4. Is your main message tweetable? Think about what people viewing your presentation will be doing - some may be tweeting the content, so help them by turning your headlines into tweetable messages
  5. Avoid video - don't fall into the trap of thinking adding a video will make your presentation more "dynamic"; it usually dehumanizes your presentation.
  6. Don't fret too much about look and feel - yes, do your best, but there are sure to be better designed presentations at the event you attend. The story and how you deliver the content are far more important - some of the best, most inspiring presentations I have ever listened to have looked awful from a design point of view (um... would it be undiplomatic to name BrainJuicer presentations as an example?) - focus on the story and you will be fine!
Structure

Now this might sound like pompous advice, but to write a good persuasive presentation I really do suggest you first read up on the basic tenets of Greek rhetoric, in particular Aristotle's ideas of Ethos, Logos and Pathos, the 3 ways to persuade.

You must start your presentation by establishing Ethos, which is about building a bond of trust with your audience, then use Logos, which is about making logical arguments, and then end with Pathos, which is all about drawing out the emotions of the audience.

Often I see Pathos being used wrongly at the start of a presentation, e.g. kicking off with some sort of cocky joke or dramatic video, from which point everything else seems flat. Drama and emotion must be saved until you have won the trust of your audience and won the argument; then you use them to drive home your message.

Ethos is really about establishing some humanity and connection, on some level, with the people you are talking to, and I have written about it in a separate blog post.

The Logos part of the process is the most skilled, and it is about identifying the key problems the audience might have with an issue and then outlining your solutions. Never present a problem to an audience if you are not going to follow with a solution.

Tips for writing the story

  1. Your presentation must tell a simple story that you can recount in a basic elevator pitch
  2. To devise your story, really roughly sketch it out first, either in your mind or on a piece of paper
  3. Try writing your presentation as a story in Excel - you will be amazed how effective this is at allowing you to coalesce your thoughts into a simple story - one line of the Excel sheet is one slide of your presentation. Once you have got the basics down, you can really easily hack it around.
  4. Go on a walk and tell the story to yourself, or tell it to yourself as you are going to sleep or driving to work, and see if it flows cleanly
  5. If you get stuck telling the story to yourself you have what is known as a story knot - step back from it and try to tell it in a different way
  6. Test market the story by trying to summarise what you are going to talk about to a colleague

Content market researchers should avoid

  1. Don't pad your presentation with background stats - I don't need to know all the details of the sampling techniques; we are all grown-up market researchers, that's a given, so jump to the headlines.
  2. Avoid the generic advice trap: "we need to be faster, more insightful and cheaper!"
  3. Check if your content fails the mobile-phone-growth-statistics bleeding obvious test: yes, we know more people now use mobile phones than own a toothbrush! Spouting any statistic we all could have a good stab at guessing, or that many in the audience have heard before, is wasting delegates' time.
  4. If you are going to play to the crowd by highlighting one of the many shortcomings of our industry's working practices, don't you dare propose a proprietary solution that you and only you can use. The audience will want to hit you!
  5. Don't mention a problem without having a bloody good solution to unveil that we can all grasp hold of.
  6. Don't tell us you have a solution and then not show us the details or an example
  7. If you are delivering a pure sales message about your great new piece of technology - come clean about it up front
  8. I don't need to be told how great Daniel Kahneman's work is anymore, or to have System 1 and System 2 explained to me.
  9. Along the same lines, avoid the cliche buzzwords of the moment - in the noughties it was "web 2.0" that drove me mad; this decade's most important word to avoid using in any form of presentation is "big data", and I will let you make your mind up about all the rest
  10. "This is the year of the xxxx!!!" Mmmm... this is probably not the year of anything (and certainly not the year of the mobile!). Avoid such proclamations unless you are the chairman of the conference, when that task becomes obligatory.
  11. 100% recycling someone else's ideas already recounted in a New York Times best-selling business publication is cheating!

Technique
  1. Try and make me laugh at least once (or at the very least smile)
  2. Use simple examples: "tell us your theory and then show us a real example" could be seen as the essence of the structure of a good market research presentation
  3. Be prepared to go off screen - never underestimate the value of a good prop! Most of the show-stealing presentations I have seen have used something other than just slides to get their message across.
  4. Admit your shortcomings and failures in a sandwich between your successes. You could call this an integrity sandwich - we are much more open to hearing about and believing in your achievements if you own up to your failures too.
  5. Make it interactive - a quiz embedded into your presentation is the simplest way to do this, but it can be tedious being challenged to guess an answer if there is no reward for doing so. Come armed with prizes if you are going to do this!
  6. If you are going to get people to do things, make sure it's inspiring. It's embarrassing for everyone to stand up, so if you are going to ask your audience to do this it had better be fun or genuinely interesting
  7. Dress rehearse your presentation in the office first, to your staff - they will benefit too by knowing what you are going to talk about
  8. OK, I suppose you had better get your presentation spell-checked too! As a class one dyslexic, spelling is a challenge for me, and often the first feedback I get when I give a presentation is a polite aside about the spelling. To avoid this type of humiliation I do recommend getting your presentation spell-checked by a third party.*

*Admittedly this is not advice I always take myself, to the fury of my marketing department and sales team, as I would describe myself as a bit of a militant dyslexic and feel I have the human right to make spelling mistakes in my own content sometimes.

One market researcher's viewpoint on the World Cup



So England got dumped out in the first round of the World Cup, and everyone in our country feels disappointed, an emotion we are quite used to feeling. So begins a round of postmortems that we all probably secretly enjoy as much as the competition itself, working out who to blame for the team's failure. In past World Cups this has been quite easy: for example, David Beckham kicking a player and getting sent off, having a turnip head as a manager, or a lack of goal-line technology. But this year we are all fairly universally perplexed. I have read a lot of overfit analysis, none of which is particularly convincing because, well, in the scheme of things we all thought we played quite well; we had a sparky young team. It seems like we were just a bit unlucky this time round.

The role of randomness

It's quite hard to accept the role that randomness plays in the outcome of World Cup matches. Every nation, when they get knocked out or fail even to qualify, probably believes their team was "unlucky" and that their team is better than it actually is. So what is the relative importance of luck v skill when it comes to winning the World Cup?

Unlike the Premiership, where there are 38 games over which the performance of the teams is largely correlated with the quality of the squads (take a read of The Numbers Game* by Chris Anderson and David Sally), the performance of a World Cup squad cannot be calculated from the aggregated skill value of the squad; there is a lot more randomness involved. Imagine if the Premiership only lasted 3 games: in two of the last four seasons, the team that won the Premiership might have been relegated.

*a must read if you are a market researcher and like football!

There is another factor too: in the Premiership the best players get sucked up into the best teams, hence the much higher win ratios between the top and bottom performing sides compared to the World Cup, where the best players are distributed more randomly, in proportion to the size of each footballing nation. This in turn makes the outcome of international matches even more random.

Who influences the outcome of a match?

If you look at who has goal-scoring influence across a team you will notice that the negative effects of causing goals are pretty well distributed across a team, but the positive effects of scoring goals are a lot more clustered amongst certain individuals. See the chart below showing statistics from an imaginary team based on typical performance data taken from the Premier League.

The potential performance of a World Cup team must be measured not by the overall skill value of the team but by the value of a smaller network of attacking players who can make the most game-changing contributions. In the case of players like Lionel Messi, a single player can carry the whole goal-scoring burden of a squad. It only takes one or two randomly allocated star players in a World Cup team to elevate its chances (think of Pele or Maradona).

The performance of the defence is more a case of luck. You might have one or two unreliable defenders who you might not want in your Premier League squad, because you know that over the course of a season they may cost you a match or two; but at the individual game level, and a World Cup is based on the outcome of three or four key individual games, the chances are a poor defender might well ride their luck. The other two important factors defenders have to contend with are the extra stress and the lack of team playing experience of a World Cup team compared to a Premiership squad. Without doubt stress plays a big part: players are really hyped up, and there is probably an order of magnitude increase in tension, which is the root cause of many errors in World Cup matches. If you look at the defensive mistakes that cost us goals in recent World Cups, some of the biggest were made by effectively our most reliable players: John Terry, Steven Gerrard and Phil Neville. There is also a lack of formation practice to contend with, which is particularly critical for the defence. How many hours of playing together does it take for a defence to gel? Most World Cup squads have days rather than months to prepare.

A team like England might well have a higher aggregated skill average than other teams, but this does not translate into the reliable performance ratios you see in the Premiership. This is because over half the value is based on defensive skill, which can be completely undermined by bad luck, and we don't have a cluster of super-skilled players to lift the team out of bad-luck matches by scoring more goals than we let in.

The influence of the Ref

To win World Cup matches you are much more reliant on the manager's structural approach, the contributions from clusters of individuals who might form good attacking combinations, and one other person – the REF! Or rather, the ref in conjunction with the crowd and the linesmen.

If you analyse a typical game you will find that the number of major goal-scoring decisions in the hands of the referee and linesmen is actually enormous compared to any individual player. It's difficult to put a figure on it, but let's say on average there are about 6 decisions by the referee that could have affected a goal one way or another*; the relative influence they have on a match is instantly obvious.

*That is a wisdom-of-the-crowd estimate: asking a collection of football fans how many goal-affecting decisions are made in a match by the referee and linesmen, six was the median estimate.


Now, nine times out of ten these decisions balance themselves out, but refs are only human, so it's no wonder there is such a big home team advantage – with 50,000 fans screaming penalty, it must be extremely difficult for refs not to be influenced by the crowd. In fact you can almost put a figure on the influence of the crowd by comparing home and away goal-scoring averages: the home side gains an average 0.35 of a goal per game net advantage if you examine Premiership games, which can only really be down to the net contribution of the crowd/ref decision effects.

It's no wonder, as a result, that there is such a disproportionate home nation advantage. Effectively every home nation team is starting with a 0.35 goal lead, and this advantage, aggregated up over the course of a tournament, means that nearly 30% of all World Cups have been won by the home nation. That is 10 times higher than chance.

Am I likely to ever see England win another world cup in my lifetime?

This is probably a question most England fans ask themselves. What does it take to win a World Cup – how good do you have to be to override luck? We have taken a look at this and run some calculations.

The chart below takes a little explaining, but it maps a team's skill level v the number of times it is likely to win a World Cup over the course of an average football supporter's lifetime of 72 years = 18 World Cups. If there are 32 teams in a World Cup and you are an average team whose side qualifies for every World Cup finals, the chances are you will win 1.1 World Cups over your lifetime. If you are England and only qualify roughly 80% of the time, that drops to 0.96. If your team is twice as good as average, you are likely to win roughly 2 World Cups, and if 4 times better, 6 World Cups.


England have won one World Cup, Germany three and Brazil five, so does that mean we are an average team and Germany are three times better than us and Brazil four times better?

Well, essentially yes. If you look at the average game-win ratios of all the teams that have played most regularly in World Cups v the number of World Cups they have won, it's pretty closely correlated, at 0.91. Germany has a three times higher win ratio than us and Brazil four times higher.


Now I appreciate there is some self-selection involved here – this chart should really be based on first round matches only for a totally fair comparison, but we don't have that data. I think it's reasonable to say, though, that England has not really been done out of its fair share of World Cups. I think we have won as many as our team's aggregated performance deserves. You might argue that some teams have been luckier than most: Italy certainly, and others unlucky; Mexico should have won it twice by now based on their aggregated performance.

Roughly speaking that means there is only about a 40% chance I will witness England lift a World Cup in my remaining lifetime, and almost certainly I will have to endure another series of victories for Germany and Brazil. Oh well, better come to terms with it; but I live in hope.

But let's fantasise for a minute: how many World Cups could we have won?

Imagine we lived in an infinite number of universes where, for the last 70 years, we had been playing an infinite series of World Cups with a team of the same skill level.


Well, on average, 32% of the time we would not have won a single World Cup by now, in 21% of cases we might have picked up two, and in 7% of cases the same number as Germany. There is one universe in 5,000 where Germany would not have won a single World Cup and England would have won 4! Anyone fancy moving there?
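Here is a minimal Monte Carlo sketch of that thought experiment. It assumes 18 independent tournaments per supporter lifetime and a win chance of about 6% per tournament (the 1.1-wins-per-lifetime figure from the chart above); those assumptions reproduce the quoted percentages to within a point or so:

```python
# Parallel-universe World Cups: 18 tournaments, ~6% win chance in each.
import random
from collections import Counter

TOURNAMENTS, P_WIN, UNIVERSES = 18, 1.1 / 18, 100_000

tally = Counter(
    sum(random.random() < P_WIN for _ in range(TOURNAMENTS))
    for _ in range(UNIVERSES)
)

for wins in range(5):
    print(wins, f"{tally[wins] / UNIVERSES:.1%}")
# ~32% no wins, ~38% one, ~21% two, ~7% three, ~2% four
```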




Baby steps into the wearable era of research: ESOMAR DD 2014 Roundup


Compared to other global research events the ESOMAR Digital Dimensions conference is by no means the biggest, and it faces competition, without doubt, from more 'ideas'-driven events, but nevertheless it is by far and away still my favourite market research event on the global calendar. Now, I would have to say that because I was chairman this year, but I do feel that despite all the competition it has reliably proved to be one of the most fruitful sources of new thinking and new trends for the market research industry. I consistently learn so much more at this event compared to the others I attend, and this year it was particularly fruitful.

I think part of its success is down to the consistently high standards ESOMAR sets for paper submission; only 1 in 5 papers gets selected, and it demands much more robust thinking from its participants. What you get as a result is a really thoughtful mixture of new ideas, philosophy and argued-out science.

This year saw one of the strongest collections of papers ever assembled, so much so that the selection committee asked to extend the prizes beyond 1st place. There were 6 major themes that emerged and 1 paper that I think could go on to have a major impact well beyond the boundaries of market research, and I returned home with 23 new buzzwords and phrases to add to my growing collection (see other post).

The big themes

1. The Physiological data age: At this conference we witnessed some of the baby steps being taken into the world of wearable technology, and a proclamation by Gawain Morrison from SENSUM, who were one of the stars of the event, that we are about to enter the physiological data age. They showed us a galvanic skin response recording of a 7 hour train journey, which revealed that the highest stress point on the journey was caused not by any delays or anxiety to reach the station but by the on-board internet service going down! Ipsos are one of many MR companies starting to experiment with Google Glass, and showed us how they were using it to conduct ethnographic research amongst new parents for Kimberly-Clark. We saw some wonderful footage of a father interacting with his newborn child in such a natural and intimate way; it does not take much of a leap of the imagination to realise wearable technology is going to be a big topic at future MR events.

2. The Big Privacy issues looming over these new techniques: The rise of wearable devices raises a whole range of new issues surrounding data privacy that were widely discussed at this conference. Alex Johnson highlighted the central emerging dilemma in his award-winning work Exploring the Practical Use of Wearable Video Devices, which won best paper: when doing wearable research it is almost impossible to avoid gathering accidental data from people and companies who have not given their consent to take part in the research. It's critical for the research industry to take stock of this.

3. Developing the new skills needed to process massive quantities of data: The second big focus of this conference, which Alex Johnson's paper also highlighted, was the enormity of the data evaluation tasks researchers face in the future, for example processing hundreds of hours of video and metadata generated from wearable devices. Image processing software is a long way from being able to efficiently process high volumes of content right now. He had some good ideas for processing this type of data: he proposed a whole new methodological approach that centres on building taxonomies and shortcuts for what a computer should look for, and a more iterative analytical approach. In one of the most impressive papers at the conference, TNS & Absolutdata provided an analytical guide to how they deconstructed 20 million hours of mobile phone data to build a detailed story about our mobile phone usage that could be utilised as a media planning platform for the phone – the research battleground of the future is surely going to be fought over who has the best data processing skills.

4. De-siloed research techniques: I wish I could think of a better simple phrase to describe this idea, as it was probably the strongest message coming out of the ESOMAR DD conference: the emergence of a next generation of more de-siloed research methodologies that combine a much richer range of less conventional techniques and a more intelligent use of research participants. Hall & Partners described a new multi-channel research approach involving a more longitudinal relationship with a carefully selected small sample of participants, where across 4 stages of activity they engaged them in a mix of mobile diary, forum discussion and conventional online research, challenging them not just to answer questions but to help solve real marketing problems. Millward Brown described a collaboration with Facebook where they mixed qual, mobile intercept research and task-based exercises to understand more about how mobiles are used as part of the shopping experience. Mesh Planning described how they integrated live research data with fluid data analysis to help a media agency dynamically adjust its advertising activity. Ipsos showed us some amazing work for Kimberly-Clark that spanned the use of Facebook to do preliminary qual, social media analysis, traditional home-based ethnography, and the new technique of glassnography. What all these research companies demonstrated was that, decoupled from the constraints of convention, given a good open brief from a client and access not just to the research data the research company can generate but also to the data the client holds themselves, some research companies are doing some amazing things!

5. Mining more insights from open-ended feedback: Text analytics in its infancy focused on basic understanding of sentiment, but 3 great papers at the event showed how much more sophisticated we are becoming at deciphering open-ended feedback. Examining search queries seems to be a big underutilised area for market researchers right now, and KOS Research and Clustaar elegantly outlined how you could gather a really deep understanding of people's buying motivations by statistically analysing the search queries around a topic. Annie Pettit from Peanut Labs, looking at the same issue from the other end of the telescope, showed how suggestions for improving brands and new product development opportunities could be extracted from social media chatter by careful deconstruction of the language used to express these ideas. And Alex Wheatley, in my team at GMI, who I am proud to say won a silver prize for his paper, highlighted just how powerful open-ended feedback from traditional market research surveys can be when subjected to quant-scale statistical analysis, rivalling and often surpassing the quality of feedback from banks of closed questions.

6. Better understanding the role of mobile phones & tablets in our lives: We learnt a whole lot more about the role of mobile phones and tablets in our lives at this conference, some of it quite scary. We had expansive looks at this topic from Google, Yahoo and Facebook. AOL provided some useful Shapley value analysis to highlight the value of different devices for different tasks and activities; it demonstrated how the tablet is emerging as such an important "evening device", its role in the kitchen and bedroom, and how the combination of these devices opens up our access to brands. We learnt how significant the smartphone is when we go retail shopping, for a combination of social and investigative research reasons. We learnt about the emergence of the "Google shop assistant": many people prefer to use Google in shops to answer their shopping queries rather than ask the shop assistants. And we learnt how we use the phone to seek shopping advice from our friends, and how many of us post our trophy purchases on social media.

The impact of technology on our memory

The paper that had the single biggest impact at the conference was some research by Nick Drew from Yahoo! and Olga Churkina from Fresh Intelligence Research showing how our use of smartphones is really impacting our short-term memory – we are subcontracting so many reminder tasks to the technology we carry around with us that we are not using our memory as actively. This was demonstrated by a range of simple short-term memory tests correlated with mobile phone usage, which found the heavier smartphone users performing less well. The smartphone is becoming part of our brain! This obviously has much bigger implications outside the world of market research, so I am sure we are going to hear a lot more about this topic in the future.

A scary thought, which made the great end session by Alex Drozdovsky from BBDO about going on a digital detox all the more salient. I am going to be taking one soon!

23 Buzzwords coined and used at the 2014 ESOMAR Digital Dimensions Conference


This year's Digital Dimensions conference produced a particularly good harvest of buzzwords, some of which you may have heard before, but some, I can guarantee, are 100% new!

The conference buzzword award has to go to John Humphrey of Kimberly-Clark & Joost Poolman Simons of Ipsos, who in one presentation delivered more new additions to the market research vocabulary pool than I think I have ever heard in one hit.

1. Glassnography: The new term for ethnography using Google Glass. Coined in a presentation by John Humphrey of Kimberly-Clark & Joost Poolman Simons of Ipsos

2. Privacy:  The word probably mentioned more often at this conference than any other (source: various)

3. Fanqual: Doing qualitative research amongst your Facebook fans by posting ideas and getting their feedback. Coined in a presentation by John Humphrey of Kimberly-Clark & Joost Poolman Simons of Ipsos

4. Spammyness index = how impersonal x how intentionally manipulative a piece of marketing communication is. Coined by Jacob White of VisualDNA

5. Cupcake research: A derogatory term for the type of research that looks great but is too sweet to eat and has little or no healthy substance. Coined in a presentation by John Humphrey of Kimberly-Clark & Joost Poolman Simons of Ipsos

6. Phylogenetics: A new very nerdy means of analysing social networks devised and discussed in a paper by OMD (don’t ask me to explain it!)

7. Amazonification: The observation that some people are using Amazon as the first port of call for researching and purchasing so many different things. (Note eBay is used in a similar way by another sub-strand of the population, but "ebayification" doesn't sound quite right!) Coined in a presentation by John Humphrey of Kimberly-Clark & Joost Poolman Simons of Ipsos

8. Data exhaust: The data pollution that pours out of systems that we cannot effectively use (source: various)

9. Shapley value: The game theory technique devised by Lloyd Shapley, who won a Nobel Prize for it, is emerging as a great way of segmenting activity usage in market research, as ably demonstrated by the team from AOL

10. The Big 5: The five key personality traits for understanding human behaviour: openness, conscientiousness, extraversion, agreeableness and neuroticism. These are becoming the cornerstone of many standard research measurement techniques, so they get a collective noun, the Big 5. Coined by Jacob White of VisualDNA

11. The physiological data era: Wearable technology is going to be the dominant new source of market research data over the next decade, and a prediction that physiological data is going to become one of the key metrics of market research. Coined by Gawain Morrison from SENSUM

12. Why analytics: Analysing big data to gain deeper insights into customers' wants and needs (source: various)

13. The army of influencers: An observation of what it feels like to be an expectant mum for the first time, faced with the bewildering array of advice you are offered in the digital age. Coined in a presentation by John Humphrey of Kimberly-Clark & Joost Poolman Simons of Ipsos

14. Corporate purpose: In the social media age, companies are realising the collective power of consumers – businesses that simply pursue the need to make money and don't address the actual needs of their consumers in their decisions can easily come a cropper. To address this there is a growing trend amongst businesses to try to define their "corporate purpose". Coined by Jacob White of VisualDNA

15. Share of experience: Share of voice is a fairly meaningless idea in a world where we are bombarded by messages in multiple dimensions. We should focus on share of experience, which is more about measuring what is cutting through. Coined by Chris Wallbridge of Mesh Planning

16. Renegade professionals: "The future is in the hands of the renegade professionals." So much innovation is happening outside conventional businesses, and it's the outsiders, the renegades, that are inventing the future – very much on their terms!

17. Microboredom: Those moments waiting in the queue when we noodle on our mobile phones. Mentioned by Alex Drozdovsky of BBDO Worldwide

18. Technotots & digikids: Terms to describe the proliferation and impact of technology on young children. Mentioned by Alex Drozdovsky of BBDO Worldwide

19. The Google shop assistant: The phenomenon that many people out shopping would prefer to look up their query on their smartphone than ask the shop assistant. Google is becoming the shop assistant!

20. Selection over sample: New research techniques are going to be more reliant on having the right type of active participants than on larger volumes of balanced sample. Coined by Grant Bird of Hall & Partners

21. The zero moment of memory: We are using our smartphones more and more to replace our own memory. Coined by Nick Drew from Yahoo! and Olga Churkina from Fresh Intelligence Research

22. Digital detox: The idea of decoupling ourselves from our digital devices to allow us to unwind. Coined by Alex Drozdovsky of BBDO Worldwide

23. Turning off is the new emotion: Another idea coined by Alex Drozdovsky of BBDO Worldwide; we are being so bombarded by data and information in the new digital age that turning off is the new emotion

A Network of Brains


I would like to put forward a new viewpoint on research participants.

I would like you to move away from thinking of participants as binary bits of data that you patch together to build a picture of a market and its behavior, and to start thinking of them as a network of brains that you can use to help solve marketing problems.


Bonsai Survey Design


The blog post below is the first in a series I am writing to accompany a paper I am about to publish on Bonsai Survey Design, which will be presented at the upcoming ESOMAR Asia Pacific event in May 2014.

There is a pressing need in our industry to shorten surveys to meet the needs of the mobile-enabled research consumer. More and more people want to complete surveys on their mobile phones, yet the average online market research survey we currently produce is too long and unwieldy to complete on a mobile device. So surveys have to change to accommodate this: they have to become shorter, more engaging and better designed, and this paper is a guide to navigating your way through the process.

We are also going to produce a small "Bonsai" book to accompany this paper, which I am excited about.

We recently analysed over 30,000 surveys that GMI respondents have been asked to complete over the last 18 months around the world, and the average length of survey, and I stress the word average, is 20 minutes. The attention span of someone using a mobile phone is around 5 minutes for any one activity. So there is a big gap to close.

I estimate that less than 1 in 5 surveys right now is adequately optimized for mobile phone completion. So my big focus over the last few months has been to explore practical ways in which surveys can be shortened and to establish a framework that our clients can use to address this issue.

I hope in these future blog posts to highlight some of the most effective techniques, which you can share with your own clients, for making surveys shorter and more effective.


How many people need to answer that question?




When we set up a survey we tend to think about how many people should answer the survey as a whole. I would challenge you to think a bit beyond this, and think about how many people should answer each question in your survey, as there are potentially significant savings in survey length to be achieved by optimizing the sample for each question.

A typical survey might be sent out to around 400 respondents. In many respects this is a bit of an arbitrary figure; the number of people who need to answer each question in any one survey to get a statistically significant answer might range from anywhere between 5 and several thousand.

The number of people who need to answer each question is based on the level of variance in the answers respondents give. Say I am testing 2 ads to find out which is better, and the first 5 people I interview all prefer ad A over ad B: there is a 95% certainty that ad A is going to be preferred by everyone (the chance of 5 out of 5 agreeing if preferences were really 50:50 is (1/2)^5, about 3%), so job done. If on the other hand I ask respondents to evaluate a brand on 15 different brand metrics using a 5-point Likert rating scale and the scores range from 3.0 to 3.5 (which is not an uncommon level of differentiation), then to pull apart the differences between these 15 metrics completely you would need to interview around 5,000 people.

In an average survey there is a range of questions between these two extremes, so it makes sense to stop thinking about your sample requirements for the survey as a whole and start thinking about your sample requirements at a question level.

Now the problem is that it is difficult to know in advance exactly how much sample you will need for each question, because working it out accurately requires some data.

The solution is a more iterative survey design approach, where you don't set your sample quotas until you have sampled enough people to estimate the sample size requirements. This can easily be done by sending your survey out in 2 batches instead of one go. Send the first batch to (I would normally recommend) 100 respondents, then pause the survey; this will give you enough data to roughly assess the sample requirements for each question, and you can then set quotas on each question for the second batch of sample.

Now there are obviously a few things you need to consider, for example how you are going to subdivide the data. If you want to analyse sub-demographic trends in the data for any one question, e.g. compare men v women or look at the age split for each of these groups, you will need a minimum sample in each group, so you may need to double or even quadruple your basic sample requirements for some questions to account for this.

When you do this across a survey you get a chart like this example below:


In this example you can clearly see that there are some questions that require a lot more sample than others.

If you were, say, interviewing 400 respondents in total, then for some of these questions you will already have enough data from the first batch of responses, and some of the others need only be answered by 1 in n of the respondents. What this means is that if you randomize at a respondent level who answers each question, the survey overall gets shorter for each respondent.

So how do you actually work out sample sizes for a question?  

There is a relatively basic formula that you can use to calculate the minimum sample size for a question:

Minimum sample size = [(Standard Deviation × Z) / Acceptable Error]²

Z is the factor that determines the level of statistical confidence you wish to apply. For 90% confidence I would recommend Z = 1.64, and for 95%, Z = 1.96.

You can see from this formula that it's all related to the standard deviation and the level of variance in the answers, which is how you set the acceptable error. (In the brand example I quoted above, if the overall variance in answers is 0.5 and there are 15 metrics to differentiate, the "acceptable error" would be around 0.03 (0.5/15).)
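As a minimal sketch, the formula translates directly into Python. Note that for the brand example it returns a single-metric minimum of around 1,100 respondents; fully separating all 15 metrics from each other, as quoted earlier, demands considerably more:

```python
# Minimum sample size for one question: n = ((SD * z) / acceptable error)^2
import math

def min_sample_size(std_dev, acceptable_error, z=1.96):  # z = 1.64 for 90%
    return math.ceil((std_dev * z / acceptable_error) ** 2)

# Brand-metric example: SD ~0.5 on a 5-point scale, acceptable error ~0.03:
print(min_sample_size(0.5, 0.03))  # 1068
```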
