20 new buzz words for 2013

I have had an amazing opportunity to attend market research conferences around the world over the last year, and so have been exposed to all sorts of fantastic ideas, innovations and some brilliant thinking.  Stuck on a plane for 5 hours last week with not much to do, I thought I would try and condense the best of the best of this thinking and use it to try and identify some research trends of the future.  Here is what I have come up with: 20 new buzz words for 2013...

Qualitic analysis: You heard it here first. Qualitic analysis is where you use qualitative research methods to help analyse and process large volumes of open-ended feedback.  Text analytics software is powerful but largely stupid, so letting humans train these systems through qualitative analysis is the way forward.  With the merging of social media and traditional research, I think this sort of hybrid approach is going to have a big future.

Lifestyle mapping: with geographical tracking now readily available via mobile phones, I believe we are going to see a whole lot more information about people's lifestyles and activities layered on top of maps. And when we start wearing glasses which act as mini computer screens (https://plus.google.com/+projectglass/posts), which I believe is pretty inevitable, all this information is gonna get a whole lot more valuable.

Social influence tracking: Mark Earls has been evangelizing how many of our decisions are influenced by the crowd, but who is taking account of this in their everyday brand tracking research? I think tracking of social influence is going to become an important benchmark measure for certain categories of products in the future.

Tribal research: Linked to this is looking at people as tribes, and the people who buy certain types of brands as tribes of consumers. There is some very interesting pioneering work being conducted by the University of Bath School of Management that is worth checking out. This book is worth a read on this topic.

Bonsai research: there is a global trend toward working out how to make shorter, more efficient surveys, and with mobile phones set to become a primary channel through which research is conducted, the pressure to do this will only increase. Bonsai research is the art of designing small but perfectly formed surveys.

Habit specialist: understanding how we get into habits, how we break them and how we set up new ones is a really valuable piece of knowledge for market researchers trying to understand how to influence behaviour. Everyone should read this book, I can see it being a hot topic in the next year: The Power of Habit.

Research data banks: We are generating shed loads of personal data that is clearly potentially very valuable. Right now Google, Facebook, Twitter etc think it's theirs; it's not, it's ours!  I envisage a consumer revolution over the ownership of all this information in the future.  The furore caused by Instagram trying to claim ownership of the pictures we upload is a case in point illustrating the power of mass consumer revolt, causing them to instantly backtrack on their plans.   Someone someday is going to create a bank for it all and turn it into a personal asset that we and only we control and have the marketing rights to. You can have my purchase history if you like, I am selling it to you for £10!

Implicit research:  The multi award winning work by Cog Research, showing how the speed of association between words and brands can help us understand the underlying personality of brands, is I think going to turn this into a must-have technique for research tracking studies in the future.

Things research: all sorts of things now have technology embedded in them, from cars to fridges to billboards. I see new research companies emerging that specialise in turning these into data gathering research tools.

Now research: OK, we currently have some pretty quick turnaround research products out there that can give you responses in hours. The demand for this is growing and could fuel new breeds of research companies that offer instant, real time research.

Social research networks: I see the potential for a next generation group of micro social pollsters and aggregators who keep abreast of what their friends think and report back.  A step up from the MROC.

Smart intercept research: as the quality of research that can be conducted on tablets improves (watch this space next year), I believe we are going to find these anchored all over the place to gather consumer opinion related to the experience people are having, be it in the queue at the bank, in the changing rooms of shops, handed around on trains and planes, in hotel receptions, at the exit of every McDonald's. The future will be made up of an increasing amount of intercept research, with a menu of research studies that people have the option to do based upon their experiences.

Segmenting by decision making processes: Watch this presentation delivered at the recent New MR festival by Elina Halonen, I think she is onto something! Understanding how we "like to think" and how it affects our decisions is the hot new area of behavioral economics for next year.

Organic respondent generated research: Instead of writing a survey, you simply pose a question and let respondents work together to answer it.

Consumer media: newspapers and radio already heavily rely on consumers to shape content. The next step is handing it all over to them completely: a newspaper entirely written and produced by the crowd, a radio station whose entire content is crowd driven.  Researchers could have a critical role in helping to facilitate this.

Consumer products: likewise we have seen the emergence of co-created products in recent years, but again the role of the consumer has in the main been that of a bit part, the last decisions being left in the hands of the marketeer. I see consumers taking over and completely running things in the future. Why not have a consumer-run bank where they vote on the fees it charges and the profit it makes, co-create their own advertising and promote the bank themselves? Instead of passive shareholders we have active ones whose contribution is rewarded by partial/micro ownership. Why not set up 2 banks like this at the same time and let them compete with each other?  I am not sure if this is research or marketing, but surely it's an opportunity.

Prediction research: has not the US election proved to us the power of predictive markets once and for all? Why are we not all doing more of this?  I think we all will be in the future.

Social brains: looking at the collective thinking of social networks and treating them like a giant brain. Now I am not talking about large scale semantic analysis, more mass ethnography: mapping the patterns of thinking expressed in social media.

Survey games: Obviously this is a specific area of interest for me, surveys that cross the divide between entertainment and market research and that can sit and be positively received within the social media space - watch this space.

(OK, this is only 17, as someone has pointed out! - any suggestions to fill the last 3 spaces?)

The 4 killer stats from the ESOMAR 3D conference

I was only able to attend one day of this conference, which for me is without doubt the most useful research conference of the year, so I am sorry, I am only able to give you half the story. But here is what I brought back with me: 4 interesting stats, 3 new buzzwords and 1 stray fact about weather forecasting.

350 out of 36,000: This is how many useful comments Porsche managed to pick out from analysing 36,000 social media comments about their cars. So the cost benefit analysis of this runs a bit short, and this was probably the headline news for me from the ESOMAR 3D conference: no existing piece of text analytics technology seems to be capable of intelligently processing this feedback. Every single one of these comments had to be read and coded manually. I was shocked. I thought we were swimming in text analytics technology, but apparently most of the existing tools fall short of the real needs of market researchers right now (I spot one big fat opportunity!).

240 hours: This was the amount of time spent, again conducting manual free text analysis, by IPSOS OTX to process data from 1,000 Facebook users for one project (and from this they felt they had really only scratched the surface). As Michael Rodenburgh from IPSOS OTX put it, "holy crap, they know everything about us".  There are, he estimated, 50 million pieces of data associated with these 1,000 users that it is possible to access, if the end user gives you a one click permission in a survey. He outlined the nightmare it was to deal with the data that is generated from Facebook: just to decipher it is a task in itself, and none of the existing data analytics tools we have right now, like SPSS, are capable of even reading it. There were lots of excellent insights in this presentation, which I think deservedly won best paper.

0.18: This is the correlation between aided awareness of a brand and purchase activity measured in some research conducted by Jannie Hofmyer and Alice Louw from TNS, i.e. there is effectively none. So the question is, why do we bother asking this question in a survey? Far better just to ask top of mind brand awareness - this apparently correlates at a much more respectable 0.56. We are stuffing our surveys full of questions like these that don't correlate with any measurable behaviour.   This was the key message from a very insightful presentation. They were able to demonstrate this by comparing survey responses to real shopping activity by the same individuals. We are also not taking enough care to ask a tailor made set of questions of each respondent, one that gleans the most relevant information from each of them. A buyer and a non buyer of a product in effect need to do 2 completely different surveys. Jannie senses that the long, dull online surveys we create now are akin to fax machines and will be obsolete in a few years' time. Micro surveys are the future, especially when you think about the transition to mobile research. So we need to get the scalpel out now and start working out how to optimise every question for every respondent.

50%: The average variation between the claimed online readership of various Dutch newspapers as published by their industry JIC and the readership levels measured from behavioural measurement using PC and mobile activity tracking as conducted by Peit Hein van Dam from Wakoopa. There was such a big difference that he went to great lengths to try and clean and weight the behavioural measurement to account for the demographic skew of his panel, but found this did not bring the data any closer to the industry data but in fact further away. Having worked in media research for several years I am well aware of the politics of industry readership measurement processes, so I am not surprised how "out" this data was, and I know which set of figures I would use. He pointed out that cookie based tracking techniques in particular are really falling short of delivering any kind of sensible media measurement of web traffic. He cited the "unique visitors" statistics published for one Dutch newspaper website and pointed out that it was larger than the entire population of the Netherlands.

Note: Forgive me if I got any of these figures wrong - many of them were mentioned in passing and so I did not write all of them down at the time - so I am open to any corrections and clarifications if I have made some mistakes.

3 New buzzwords

Smart Ads: the next generation of online advertising, with literally thousands of variant components that are adapted to the individual end user.

Biotic Design: A technique pioneered by Yahoo that uses computer modelling to predict the stand out and noticeability of content on a web page. It is used to test out advertising and page design, and we were shown how close to real eye tracking results this method could be. We were not told the magic behind the black box technique, but it looked good to me!

Tweetvertising: Using tweets to promote things (sister of textvertising)

One stray fact about weather forecasting

Predicting the weather: We were told by one of the presenters that although we have super computers and all the advances delivered by the sophisticated algorithms of the Monte Carlo method, if you want to predict what the weather is going to be like tomorrow the most statistically reliable method is still to look at what the weather is like today, compare it to how it was yesterday and then draw a straight line extrapolation! I also heard that 10 human beings asked to guess what the weather will be like, operating as a wisdom of the crowds team, could consistently outperform a super computer's weather prediction when programmed with the 8 previous days of weather activity. Both of these "facts" may well be popular urban myths, so I do apologise if I might be passing on tittle tattle, but do feel free to socially extend them out to everyone you know to ensure they become properly enshrined in our collective consciousness as facts!

Big data and the home chemistry set


Are we all Dodos?   I heard a couple of people tell us at the ESOMAR 3D conference that we are perilously close to extinction, that we market researchers are dodos. In fact this has been a bit of a common theme at many of the conferences I have attended in the last few years: a prediction of the terminal decline of research as we know it. The message is that our industry is gonna be hit by a bus, with the growth of social media and the big boys like Google, Facebook and IBM muscling in on our space. We are also, in many parts of the world, facing tough economic times and tightening budgets.

Yet despite all this, it appeared that this was the best attended 3D conference ever, and it's not just this isolated conference either. I have been going to research conferences all around the world over the last year and they all seem to be seeing growing numbers of attendees. All I can sense from these conferences, and particularly at this event, is an industry brimming with confidence and ideas.

So are we all putting on a brave face? Are we naively sleep walking into the future?   I don't think so...


The macro-economics of market research

Over the last decade we have seen a near exponential growth in the data being generated by the world's population. It is literally pouring out of the internet and our mobile phones.   We also have an ever increasing range of innovative ways to measure and analyse things, ranging from geo-location tracking right through to sensors attached to our heads.  We are able to measure almost everything we do. So who is going to do it?

We learnt at this conference how hopeless computers are at actually thinking, and how they lack the ability to really intelligently analyse data to the quality and standard needed to glean real market research insights. In every presentation we saw the critical contribution that human beings had.  It leads me to believe that with all this expansion of the data pool and the means of measuring things, more people are going to be needed to make sense of it all. And who is best at doing this?  Ultimately I believe it is market researchers.  Media companies and consultancy firms might think they are in with a shout, but I genuinely don't believe they have the core competences needed. Market research is all about working out how to measure things, gathering and analysing data, and using this to deliver insights. That is what we specialise in, that is what we are expert at.   No other industry is better placed to capitalise on all this free flowing data than ours.

The big thing I think we need to focus on is developing new tools to process big data. Right now it looks like we have oil tankers full of information waiting on our doorstep, and the research industry is currently attempting to use tools that look like they are from a home chemistry set to try and process it into fuel.


We need to develop more tools to refine all this information and learn the skills to do it.  That is our challenge: grasping the technology that is out there and adapting it for our needs.

But I have every confidence we can. I believe we are the best industry out there at cross communicating ideas, and with that comes innovation. There are lots of people who bemoan the lack of innovation in our industry, and again I see quite the opposite. I see an industry racing to innovate.  Step back and look at how things have developed in the last couple of years alone.  Look at what we can do already in the field of mobile research, and how many research companies have so quickly moved into offering social media research solutions. Look at some of the fantastic tools that have emerged, like facial emotion measurement and neuro/biometric monitoring; look at what we are learning and embracing from the fields of behavioural economics; look at crowd sourcing and the success of MROC communities and how we have developed technology to serve these communities. Look at some of the new text analytics tools that are emerging.

I think the market research industry is more than capable of adapting to these needs and I feel it has a big future.



The Future of Market Research

What do you get if you Google the future of market research? Well not a link to this blog post, as Dan Kvistbo @kvistbo noticed.   I am glad someone actually checked.

This post is part of an experiment to see how a single post gets tracked on google search and how easy it would be to find if you searched for it.

I will actually be writing an article about the future of market research shortly, as part of a conference being organised by http://www.warc.com/ about the future of market research.

The future of market research. the future of market research? The future of market research?




ESOMAR congress: the buzz


This is a summary of some of the buzz I picked up at the ESOMAR congress.

There were 3 dominant phrases I heard over and over again at this year's ESOMAR congress: big data, social and story telling...

Big data: I estimate that nearly 50% of the presentations I sat through mentioned the term big data in one context or another, taking over from "mobile research", which has held the number one slot of market research buzz words for the last 2 years. Despite this, we did not exactly see many presentations demonstrating the execution of big data; mostly its use came in the form of a warning to the industry that big data is about to engulf us all, change all the rules of engagement and encourage new competitors into our market research space.

Social: One of the most prominent nouns used by market researchers at the ESOMAR congress, the word social seems to have become detached from the word media and has taken on a life of its own. It has now been attached to the words research and survey, so we heard mention of a social survey - one that uses the language of the consumer.

Story telling: We were told over and over again that insights are not enough; as an industry we have to become better story tellers. We were also challenged to ask the right questions. We were told that agency planners are better story tellers and management consultants ask better questions, and if we could do both of these things better we could "wop both their arses".

Behavioral economics: the star of the show

Behavioral economics was undoubtedly the star of the show, though. Papers exploiting the idea picked up the best paper award for Tom Ewing @BrainJuicer, best case study for Florian Bauer and, for my vote, best presentation for Kevin Kary @Affinova.

All 3 demonstrated the impact of thinking about the behavioral psychology of answering questions in surveys, how rational or irrational it can be, and how, if you account for this, you start to see a completely different picture in the data you are gathering.

Tom Ewing, in his most eloquent style, showed how turning off people's rational decision making processes allows you to measure the impact of the more emotional ones. Florian Bauer's ground breaking pricing research demonstrated that unless you take into account the behavioral psychology of pricing when conducting price research, you will underestimate how much you can push up prices - and I am slightly concerned that if the whole marketing industry cottons on to this it could trigger global hyperinflation. Affinova identified a big hole in existing concept development work: when we evaluate choices we forget to ask whether or not we would purchase any of the products at all. By plugging this gap through a change in the way they asked the question, they were able to far more accurately predict the success of new concepts.

Other observations...

Constant connectivity/Welcome to the new normal: There were many observations made at the congress about the changing relationship between brands and consumers. With an abundance of easily accessible data and consumers taking over the brand message through social feedback mechanisms, we are moving from a push relationship with consumers, where we spend millions feeding them information through advertising and branding, to a pull relationship where consumers go out and get it on demand. This requires totally different thinking about how to position brands.

Customer-centricity: On the back of this, there was a lot of talk about placing the customer at the heart of decision making, and every day of the conference we commonly heard phrases like customer centricity, customer facing, empathetic relationships with customers, and how we are engaging with customers. Clearly the market research industry has identified that the customer is now king - not to say that it has not always been, but now it is nigh on a dictatorship!

Iteration/beta test norm: Consumers now expect products to evolve, and expect this to happen rapidly. There was a lot of talk about this idea and how it is changing how we think about developing and researching products; in a sense these two things are becoming merged. Consumers buying, experiencing and reporting back their opinions on products are now part of the product development cycle. The mantra seems to be: get your product out there and see if it flies, if not iterate.

Google: The no.1 brand on everyone’s lips this year was Google, perhaps because of their entry into the market research sector with a research offering, but also because they epitomise the big data players moving into the little data market place.

The rise and fall of Prezi:  Over the last year we have seen the rapid rise and rapid fall of the use of Prezi.  It dominated the last few conferences I went to, but got only a couple of uses at Congress. Perhaps we all got fed up of feeling sea sick?

A few new buzzwords:

This conference was a bit light on new buzzwords, but here are a few I picked up on:

Flawsome: this was the best one, mentioned by Wendy Clark of Coca-Cola; flawsome means awesome with flaws. The idea is that great should not get in the way of good: consumers are getting used to beta testing products, so you should not let perfection hold you back; in fact a slightly flawed, quirky concept can give a brand more humanity.

Innernet: kids spending more time inside consuming the internet

Outernet: how the internet is now being used outside as part of our everyday lives 

Super abundancy: the prevalence of data and easy access to information in this digital world

Now: News is old hat! It frames the idea of information in the past tense. We don't want news any more, we want to know what is happening NOW!!! The time delay between events happening and us as consumers finding out about them has gone down to zero. With Twitter and live streaming, news is dead, long live NOW.

Invent Forward: A "reinvention" of the word reinvent

Phrases I heard used to describe our function as market researchers:

Insight intrapreneurs: mentioned 3 times

Agents of change: being the agent of change was a common call to action

Business story tellers: we don't deal in insights any more, we have to tell stories

Data synthesizers: the future of market research in the world of big data

A clutch of new Buzzwords


Here are some new buzz words and interesting phrases that I have collected recently that I think market researchers might be interested in.

Intrapreneurs: entrepreneurs who, instead of setting up and running their own business, work within larger businesses or organisations and drive entrepreneurial activity within them. Source: Maryan Broadbent, David Smith & Adam Riley ESOMAR Asia 2012

Linguistic anthropology: Social media data mining is leading to a new breed of research focusing on understanding the detailed use of language and the processes of human communication, variation in language across time and space, the social uses of language, and the relationship between language and culture.

SoLoMo: social-local-mobile. A word made for market researchers' lips that combines the 3 hot topics of social, local geographical targeting and mobile. Source: http://mashable.com/2012/01/12/solomo-hyperlocal-search/

Micro-multinationals: A new breed of entrepreneurs creating “micro-multinationals”, organizations that are global from day one. Source: Amit Gupta & Terry Sweeney ESOMAR Asia 2012

Social looping: Connecting and taking control of your disparate set of social network connections and connection channels. Source: marketing age http://sedatedworld.com/?p=947

Personal branding: The idea that people now are thinking about themselves as brands. Source: various  (Elina Halonen rightly pointed out that this is not exactly a new buzzword, but all I would say is that I have heard it being used quite a lot at the moment!)

Crowdfunding: The new trend for social crowd backed business ventures  e.g.  http://www.crowdfunder.co.uk/,  http://www.crowdcube.com/

Global villager: The globe has been connected into a village by digital technology - an idea originally presented by Marshall McLuhan, popularized in his book The Gutenberg Galaxy: The Making of Typographic Man (1962), but really only realised since the advent of the web. So if you are hooking up on Twitter with people on another continent, you are one of the global villagers. Source: Maryan Broadbent, David Smith & Adam Riley at the ESOMAR Asia 2012

Research Improv: Using some of the theatrical techniques of improvisation in focus groups or workshops to develop and explore ideas. Source: Lee Ryan: http://appliedimprov.ning.com/profile/LeeRyan

Kinesthetic research: Kinesthetic learning is a style of teaching where pupils carry out a physical activity, rather than listening to a lecture or watching a demonstration. Kinesthetic research is where we conduct research through a physical or immersive activity, and it is tipped to be a growing area of research innovation.   Research improv is a branch of kinesthetic research; clients' participation in co-creation exercises with end users is another example, and so too, I suspect, is the next example, Socialized Research, which I spotted as a topic at the forthcoming ESOMAR congress.

Socialized Research:  This is the title of what looks like it might be a hot ticket presentation at this year's ESOMAR congress by OTX Ipsos Open Thinking Exchange: "a brave new world of immersion, augmented reality, geo-location, co-creation…" - the addition of a little "social" into everything we do, so that consumers are engaged in ways that capitalize on and mimic their expectations given the realities of today's new world. Welcome to the new normal. Are you ready?

Decision making science: We started with psychology, which branched off into social psychology, then behavioural science, which got refined into behavioural economics; now we have a new one, decision making science. A nice all-explaining concept. Source: http://www.research-live.com/features/measuring-emotion/applying-the-science-of-decision-making-to-marketing/4007689.article

Creative leaders: people in organisations who act as grit to drive innovation. Source: Maryan Broadbent, David Smith & Adam Riley at the ESOMAR Asia Conference

Social graph: the global mapping of everybody and how they're related.  Source: Brad Fitzpatrick http://bradfitz.com/social-graph-problem/

Being the wide angle lens: The person in an organisation who offers a more panoramic viewpoint on a business. Source: Maryan Broadbent, David Smith & Adam Riley at the ESOMAR Asia Conference

Chief customer: Person or persons who represent the embodiment of the customer in a business.  Source: Maryan Broadbent, David Smith & Adam Riley ESOMAR Asia 2012

Freemium: This is a business model by which a product or service is provided free of charge, but a premium is charged for advanced features. Source: This term has been around long enough to grab itself a Wikipedia entry  http://en.wikipedia.org/wiki/Freemium

Showroom retailing & Monitor shopping:  A shift in retail spaces like electronics and book shops toward becoming showrooms, where people look at products and then order them online.  Monitor shopping is the process of going shopping online.   Source: various

Sharkonomics: Taking a shark-like approach to battling your competitors, i.e. sneaking up behind them and taking a great big chunk out of their market share through some clever strategic move.  This seems to be the way that some of the big boys in the mobile and internet businesses are operating now, e.g. Microsoft launching a premium tablet. Source: title of a book by Stefan Engeseth

Finally, a few Twitter-specific buzzwords:
Trashtag: A hashtag that someone tries to establish for purely self-centred and/or commercial reasons, rather than to create a strand of content that might actually be useful or interesting to someone else.
Twitchunt: torrents of me-too sentiment on Twitter gathering mass and momentum very quickly.
Obsoltweet: a tweet that has missed the boat.
Source: http://www.abccopywriting.com/blog/2012/05/10/12-new-twitter-buzzwords/


How to calculate the length of a survey


As an industry we tend to use survey length as the cornerstone of how we price surveys, but often the estimated lengths and real lengths of surveys can turn out to be wildly different, leading, as I have experienced, to potential conflict.

The reason is that we have not established in the research industry a common and reliable way of estimating the length of a survey.  The most common method in circulation is to assume we answer surveys at 2.5 questions per minute, but this technique is fatally flawed, because questions themselves can vary wildly in length: e.g. a survey of 10 grid questions each with say 50 options may take 50 times longer to answer than a survey of 10 simple yes/no questions.

So I have been on a bit of a quest to work out some slightly more accurate ways of doing this. As a result of some recent work we have been doing to examine in detail how long respondents take to answer surveys, I have come up with 3 alternative methods that I would like to put forward to more accurately calculate the length of a survey.

I hope they may be of use to some of you.

Method 1: Survey length = (W/5 + Q*5 + (D-Q)*2 + T*15)/60


This is the most accurate way of doing it (though I recognise it takes quite a bit of work). This formula will give you the length of an English language survey in minutes.

W = word count: Do a word count of the total length of the questionnaire (questions, instructions and options). An easy way to do this is to cut and paste the survey into Word (don't forget to remove any coding instructions first) and it will tell you the word count. Respondents in western markets read English at an average rate of 5 words per second.

Q = Number of questions: Count how many questions the average respondent has to answer. Allow 4 seconds of general thinking time per question and 1 second of navigation time* (assuming 1 question per page).
*this may vary depending on the survey platform; if it takes longer than 1 second to load each page, adjust accordingly

D = Total number of decisions respondents have to make: Count in total how many decisions the average respondent makes using the guide below, and allow 2 seconds per decision.

Single choice question = 1 decision
Multi-choice question = 0.5 of a decision per option
Grids = 1 decision per row

T = Open-ended text questions: Count how many open-ended text feedback questions a respondent has to answer and allow 15 seconds per question. (Note this may vary quite dramatically based on the content of the question, but on average people dedicate 15 seconds to answering an open-ended question.)
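To save working Method 1 out by hand, it can be sketched as a small function. This is a minimal Python sketch of the formula above; the decision-counting helper follows the single-choice / multi-choice / grid guide, and all the function and parameter names are my own:

```python
def count_decisions(single_choice=0, multi_choice_options=0, grid_rows=0):
    """D: 1 decision per single-choice question, 0.5 per multi-choice
    option, 1 per grid row (per the guide above)."""
    return single_choice + 0.5 * multi_choice_options + grid_rows

def survey_length_method1(W, Q, D, T):
    """Method 1: (W/5 + Q*5 + (D-Q)*2 + T*15)/60, in minutes.
    W = word count, Q = questions, D = decisions, T = open-ended questions."""
    return (W / 5 + Q * 5 + (D - Q) * 2 + T * 15) / 60

# e.g. a 1,500-word survey with 20 questions, 60 decisions and
# 2 open ends: (300 + 100 + 80 + 30)/60 = 8.5 minutes
print(survey_length_method1(1500, 20, 60, 2))  # 8.5
```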

Method 2: Survey length = (W/5 + R*1.8)/60

If you want a slightly simpler approach, use this formula; it is not quite so reliable but will get you close...

W= word count
R = total number of row options: Note this is just rows, not columns, on a grid. This can be done quite easily by cutting and pasting your survey into Excel, marking up all the rows in a side column, and then sorting.
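Method 2 reduces to a one-liner. Again a Python sketch of the formula above, with names of my own choosing; the 1.8 seconds per row option comes straight from the formula:

```python
def survey_length_method2(W, R):
    """Method 2: (W/5 + R*1.8)/60, in minutes.
    W = word count, R = total number of row options."""
    return (W / 5 + R * 1.8) / 60

# e.g. 1,500 words and 100 row options: (300 + 180)/60 = 8 minutes
print(survey_length_method2(1500, 100))  # 8.0
```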


Method 3: W/150

If you don't have enough time to add up all the questions and row options, this is another quick and dirty method (though I would not vouch for it being much more accurate than the 2.5 questions per minute approach).

This will give you a rough estimate of the length of a survey in minutes.  It is nowhere near as accurate as the 2 more detailed methods above, but it will be somewhere in the correct ballpark. Be careful, though, if you spot you are dealing with a particularly verbose questionnaire.

A wisdom-of-the-crowd approach I would recommend is to use both the 2.5 questions per minute and W/150 methods and compare the differences: if they produce similar figures, go with that; if they generate big differences, it might be worth adopting method 1 to do it properly.
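That cross-check is easy to automate. A Python sketch follows; note the 25% agreement threshold is my own illustrative choice, not something from the post:

```python
def survey_length_method3(W):
    """Method 3: W/150 -- rough survey length in minutes from word count."""
    return W / 150

def cross_check(W, Q, tolerance=0.25):
    """Compare the 2.5-questions-per-minute rule against W/150.
    If the two estimates disagree by more than `tolerance` (as a
    fraction of the larger one), fall back to Method 1 and do it properly."""
    q_based = Q / 2.5
    word_based = survey_length_method3(W)
    agree = abs(q_based - word_based) <= tolerance * max(q_based, word_based)
    return q_based, word_based, agree

# 1,500 words, 20 questions: 8.0 vs 10.0 minutes -- within 25%, so they agree
print(cross_check(1500, 20))  # (8.0, 10.0, True)
```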


Where all these formulas will fall over


1. If respondents don't all see all the questions: Skip logic can mean not everyone sees every question in a survey, which makes it hard to work out the average number of questions respondents will have to answer, and you need to know this to accurately work out the average completion time. Most errors in estimating survey length centre around this issue. There is often no easy way of doing this other than working it out manually with a spreadsheet.

2. Not properly taking into account question loops: This is another issue that leads to people miscalculating the length of a survey. If, for example, there is a loop of questions that you ask for a set of brands, people often forget to include the extra time it takes to answer these questions and only count one loop.

3. If you are working out the length of a survey not conducted in English, or where English is not the primary language (India, for example): you will need to weight for longer reading, comprehension, consideration and survey loading times in different countries. Below is a rough weighting guide if you are working from a translated version of an English survey (sorry that I don't have time-weighting data from every country):

Language/market     Length weighting
Japanese/Korean     0.95
Netherlands         1.00
Germany             1.05
French              1.06
Spain               1.09
Scandinavia         1.10
Italy               1.11
Chinese             1.13
India               1.34
Eastern Europe      1.35
Russia              1.37
Latin America       1.43

4. If there are a lot of images in the survey: you will need to allow for extra loading time. Allow between 2 and 10 seconds per MB.

5. If you are including a lot of non-standard question formats in the survey: e.g. sorting and drop-down style questions take longer to answer.

6. If you are boring people to death with a long, highly repetitive survey! Respondents will start speeding up when they get bored, so average decision-making time can drop.
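For surveys in other markets, the rough weighting guide above can be applied directly to an English-language estimate. A Python sketch, where the dictionary simply transcribes the table and the example length is arbitrary:

```python
# Rough length weightings vs an English-language survey, per the guide above
LANGUAGE_WEIGHTS = {
    "Japanese/Korean": 0.95, "Netherlands": 1.00, "Germany": 1.05,
    "French": 1.06, "Spain": 1.09, "Scandinavia": 1.10, "Italy": 1.11,
    "Chinese": 1.13, "India": 1.34, "Eastern Europe": 1.35,
    "Russia": 1.37, "Latin America": 1.43,
}

def weighted_length(english_minutes, market):
    """Scale an English-survey length estimate for a translated version."""
    return english_minutes * LANGUAGE_WEIGHTS[market]

# An 8.5-minute English survey translated into Russian: roughly 11.6 minutes
print(weighted_length(8.5, "Russia"))
```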

Do you have any thoughts?


Now, I would love to hear from anyone who has thoughts on this or who has come up with what they think is a more effective means of doing this. My ultimate aim is to find an agreed method for the whole industry to adopt as a more effective trading currency when pricing surveys.




The Ginny Valentine Badge of Courage Awards

This week I had the honour of attending the Ginny Valentine Badge of Courage awards, organised by Fiona Blades and John Griffiths on behalf of the Research Liberation Front http://researchliberationfront.com/ginny-valentine.html, and I wanted to pay tribute to the organisers and to all the winners. What a wonderful, moving event this was, and long may it live in the future.

Very rarely in a job like market research can I say that I have been emotionally moved by something, or really felt such a strong sense of pride, as I did at this award ceremony, which celebrated some of the less prominent heroes of market research.  It is an award ceremony set up specifically to celebrate those people in market research who had the courage to stick their necks out and do something different, to go against the flow, and who battled on through adversity to reach their goals.

The highlights for me were:

Simon Lidington, who nominated his own daughter Rosie for the efforts she put into establishing the Big Sofa research company. For 4 years she took a sofa around the shopping centres of the country to conduct face-to-face interviews with the public before pulling in their first major pieces of business.  http://www.thebigsofa.com/about_businessbenefits.html

Betty Adamou's nomination from Ray Poynter, for having the courage to put her money where her mouth was and set up her own research gaming company. http://www.researchthroughgaming.com/

And Alison White, who was actually brave enough to nominate herself. She explained the battle she had to set up her own field research company: her first attempt was stolen from her, and at the second her offices were burnt down twice.  http://facefactsresearch.com/

But the most astonishing tale of all was told by Finn Raben, who on behalf of ESOMAR nominated an Afghan company, ORCA, which had 2 researchers shot dead while collecting data. That puts all our own battles into perspective.

There were no black ties, 3-course dinners or celebrity presenters at this event; instead, home-made sandwiches and a pay bar, and it was all the better for it. Ginny Valentine's son gave a wonderful eulogy to his mother, and it was lovely to see him handing out the awards to the winners at the end.


200 question surveys?!!!!

In the presentation I gave yesterday at the MRS conference I mentioned that we had been working on some experimental survey games where we had managed to get people to voluntarily complete a 200 frame survey.

Now firstly, this has been quoted as a 200 "question" survey, which I am afraid is a bit of an exaggeration: a lot of the frames were feedback pages, not questions, and there were about 120 questions in total.  I apologise that I did not make this clear during my presentation.

I am NOT, may I repeat, NOT espousing or suggesting that anyone does a 200 question survey or indeed a 120 question survey for that matter!!!!!!!

I am slightly concerned about the mixed message this is conveying  and so I thought I should clarify things a little with this blog post about the details of this experiment.

The survey in question was designed purely as an experiment to see how many questions respondents were prepared to answer when, instead of doing a survey, they were playing a game and getting feedback that was of some use and interest to them. This was not a traditional survey but a specially designed shopping game that stepped away from the thinking constraints of a typical survey and focused purely on the game and feedback mechanic.

The respondents had to work their way through a series of "levels" where they were asked to do things like guess the most and least expensive products, the most and least popular products, and the prices of products, and try to predict what different celebrities and types of people would buy. See below screen grabs of what the survey looked like.


They would get points for getting things right, and at the end of each level they would find out how well they did; we also revealed what this told us about the type of shopper they were.  So, for example, respondents found out how price conscious they were compared to other people, and whether they were a social shopper who buys popular products or an individualist who buys less popular products.  Each level was voluntary: respondents were asked if they wanted to proceed to the next level. There were 6 levels in total, with 15-20 challenges in each, and we found 94% voluntarily completed all 6 levels, spending over 20 minutes on average completing the survey.  It had an enjoyment score of 9.0 out of 10, the highest audience evaluation score we have ever achieved for a survey.

We often say that surveys should not be longer than 20 minutes. That is because most if not all surveys are not entertaining enough to persuade us to want to do them for any longer. Most surveys fail to cross the entertainment divide.  20 minutes is in effect a tolerance limit for expecting anyone to do anything they find boring.  But if you start to look at a survey through the lens of a piece of entertainment or a game, then yes, it does open up possibilities for surveys that are genuinely entertaining to be longer than 20 minutes.  After all, we will happily watch a film for a couple of hours, read a book all day on holiday, or play Angry Birds in any spare waking moment we get with our mobile phones.   But the aim of this research was not to pave the way for, or encourage, the industry to start churning out ever longer dull surveys!


7 factors, No sorry, 78 factors influencing the authenticity of responses


Note: Having written my post outlining the 7 factors influencing the honesty of responses, Edward Appleton rather politely pointed out that I may well have missed one or two issues, directing me to this great list of cognitive biases on Wikipedia: http://en.wikipedia.org/wiki/List_of_cognitive_biases

Enjoy reading through them. I think Confirmation, Congruence, Hindsight and Hyperbolic discounting biases are ones I need to watch out for.  I wonder if I should offer a prize to anyone who can come up with any more.

  • Ambiguity effect – the tendency to avoid options for which missing information makes the probability seem "unknown."[6]
  • Anchoring – the tendency to rely too heavily, or "anchor," on a past reference or on one trait or piece of information when making decisions (also called "insufficient adjustment").
  • Attentional Bias – the tendency of emotionally dominant stimuli in one's environment to preferentially draw and hold attention and to neglect relevant data when making judgments of a correlation or association.
  • Availability heuristic – estimating what is more likely by what is more available in memory, which is biased toward vivid, unusual, or emotionally charged examples.
  • Availability cascade – a self-reinforcing process in which a collective belief gains more and more plausibility through its increasing repetition in public discourse (or "repeat something long enough and it will become true").
  • Backfire effect – when people react to disconfirming evidence by strengthening their beliefs[7]
  • Bandwagon effect – the tendency to do (or believe) things because many other people do (or believe) the same. Related to groupthink and herd behavior.
  • Base rate neglect or Base rate fallacy – the tendency to base judgments on specifics, ignoring general statistical information.[8]
  • Belief bias – an effect where someone's evaluation of the logical strength of an argument is biased by the believability of the conclusion.[9]
  • Bias blind spot – the tendency to see oneself as less biased than other people, or to be able to identify more cognitive biases in others than in oneself.[10]
  • Choice-supportive bias – the tendency to remember one's choices as better than they actually were.[11]
  • Clustering illusion – the tendency to under-expect runs, streaks or clusters in small samples of random data
  • Confirmation bias – the tendency to search for or interpret information in a way that confirms one's preconceptions.[12]
  • Congruence bias – the tendency to test hypotheses exclusively through direct testing, in contrast to tests of possible alternative hypotheses.
  • Conjunction fallacy – the tendency to assume that specific conditions are more probable than general ones.[13]
  • Conservatism or Regressive Bias – tendency to underestimate high values and high likelihoods/probabilities/frequencies and overestimate low ones. Based on the observed evidence, estimates are not extreme enough[14][15][5]
  • Contrast effect – the enhancement or diminishing of a weight or other measurement when compared with a recently observed contrasting object.[16]
  • Denomination effect – the tendency to spend more money when it is denominated in small amounts (e.g. coins) rather than large amounts (e.g. bills).[17]
  • Distinction bias – the tendency to view two options as more dissimilar when evaluating them simultaneously than when evaluating them separately.[18]
  • Empathy gap – the tendency to underestimate the influence or strength of feelings, in either oneself or others.
  • Endowment effect – the fact that people often demand much more to give up an object than they would be willing to pay to acquire it.[19]
  • Essentialism - categorizing people and things according to their essential nature, in spite of variations.[20]
  • Exaggerated expectation – based on the estimates, real-world evidence turns out to be less extreme than our expectations (conditionally inverse of the conservatism bias).[21][5]
  • Experimenter's or Expectation bias – the tendency for experimenters to believe, certify, and publish data that agree with their expectations for the outcome of an experiment, and to disbelieve, discard, or downgrade the corresponding weightings for data that appear to conflict with those expectations.[22]
  • Focusing effect – the tendency to place too much importance on one aspect of an event; causes error in accurately predicting the utility of a future outcome.[23]
  • Forward Bias – the tendency to create models based on past data which are validated only against that past data.[citation needed]
  • Framing effect – drawing different conclusions from the same information, depending on how that information is presented.
  • Frequency illusion – the illusion in which a word, a name or other thing that has recently come to one's attention suddenly appears "everywhere" with improbable frequency (see also recency illusion). Sometimes called "The Baader-Meinhof phenomenon".
  • Gambler's fallacy – the tendency to think that future probabilities are altered by past events, when in reality they are unchanged. Results from an erroneous conceptualization of the Law of large numbers. For example, "I've flipped heads with this coin five times consecutively, so the chance of tails coming out on the sixth flip is much greater than heads."
  • Hard-easy effect – Based on a specific level of task difficulty, the confidence in judgments is too conservative and not extreme enough[24][25][26][5]
  • Hindsight bias – sometimes called the "I-knew-it-all-along" effect, the tendency to see past events as being predictable[27] at the time those events happened.(sometimes phrased as "Hindsight is 20/20")
  • Hostile media effect – the tendency to see a media report as being biased due to one's own strong partisan views.
  • Hyperbolic discounting – the tendency for people to have a stronger preference for more immediate payoffs relative to later payoffs, where the tendency increases the closer to the present both payoffs are.[28]
  • Illusion of control – the tendency to overestimate one's degree of influence over other external events.[29]
  • Illusion of validity - when consistent but predictively weak data leads to confident predictions
  • Illusory correlation – inaccurately perceiving a relationship between two unrelated events.[30][31]
  • Impact bias – the tendency to overestimate the length or the intensity of the impact of future feeling states.[32]
  • Information bias – the tendency to seek information even when it cannot affect action.[33]
  • Insensitivity to sample size - the tendency to under-expect variation in small samples
  • Irrational escalation – the phenomenon where people justify increased investment in a decision, based on the cumulative prior investment, despite new evidence suggesting that the decision was probably wrong.
  • Just-world hypothesis – the tendency for people to want to believe that the world is fundamentally just, causing them to rationalize an otherwise inexplicable injustice as deserved by the victim(s).
  • Knowledge bias – the tendency of people to choose the option they know best rather than the best option.[citation needed]
  • Loss aversion – "the disutility of giving up an object is greater than the utility associated with acquiring it".[34] (see also Sunk cost effects and endowment effect).
  • Mere exposure effect – the tendency to express undue liking for things merely because of familiarity with them.[35]
  • Money illusion – the tendency to concentrate on the nominal (face value) of money rather than its value in terms of purchasing power.[36]
  • Moral credential effect – the tendency of a track record of non-prejudice to increase subsequent prejudice.
  • Negativity bias – the tendency to pay more attention and give more weight to negative than positive experiences or other kinds of information.
  • Neglect of probability – the tendency to completely disregard probability when making a decision under uncertainty.[37]
  • Normalcy bias – the refusal to plan for, or react to, a disaster which has never happened before.
  • Observer-expectancy effect – when a researcher expects a given result and therefore unconsciously manipulates an experiment or misinterprets data in order to find it (see also subject-expectancy effect).
  • Omission bias – the tendency to judge harmful actions as worse, or less moral, than equally harmful omissions (inactions).[38]
  • Optimism bias – the tendency to be over-optimistic, overestimating favorable and pleasing outcomes (see also wishful thinking, valence effect, positive outcome bias).[39][40]
  • Ostrich effect – ignoring an obvious (negative) situation.
  • Outcome bias – the tendency to judge a decision by its eventual outcome instead of based on the quality of the decision at the time it was made.
  • Overconfidence effect – excessive confidence in one's own answers to questions. For example, for certain types of questions, answers that people rate as "99% certain" turn out to be wrong 40% of the time.[41][42][43][5]
  • Pareidolia – a vague and random stimulus (often an image or sound) is perceived as significant, e.g., seeing images of animals or faces in clouds, the man in the moon, and hearing hidden messages on records played in reverse.
  • Pessimism bias – the tendency for some people, especially those suffering from depression, to overestimate the likelihood of negative things happening to them.
  • Planning fallacy – the tendency to underestimate task-completion times.[32]
  • Post-purchase rationalization – the tendency to persuade oneself through rational argument that a purchase was a good value.
  • Pro-innovation bias – the tendency to reflect a personal bias towards an invention/innovation, while often failing to identify limitations and weaknesses or address the possibility of failure.
  • Pseudocertainty effect – the tendency to make risk-averse choices if the expected outcome is positive, but make risk-seeking choices to avoid negative outcomes.[44]
  • Reactance – the urge to do the opposite of what someone wants you to do out of a need to resist a perceived attempt to constrain your freedom of choice.
  • Recency bias – a cognitive bias that results from disproportionate salience of recent stimuli or observations – the tendency to weigh recent events more than earlier events (see also peak-end rule).
  • Recency illusion – the illusion that a phenomenon, typically a word or language usage, that one has just begun to notice is a recent innovation (see also frequency illusion).
  • Regressive Bayesian likelihood – estimates of conditional probabilities are conservative and not extreme enough[45][46][5]
  • Restraint bias – the tendency to overestimate one's ability to show restraint in the face of temptation.
  • Selective perception – the tendency for expectations to affect perception.
  • Semmelweis reflex – the tendency to reject new evidence that contradicts a paradigm.[47]
  • Social comparison bias – the tendency, when making hiring decisions, to favour potential candidates who don't compete with one's own particular strengths.[48]
  • Status quo bias – the tendency to like things to stay relatively the same (see also loss aversion, endowment effect, and system justification).[49][50]
  • Stereotyping – expecting a member of a group to have certain characteristics without having actual information about that individual.
  • Subadditivity effect – the tendency to estimate that the likelihood of an event is less than the sum of its (more than two) mutually exclusive components.[51]
  • Subjective validation – perception that something is true if a subject's belief demands it to be true. Also assigns perceived connections between coincidences.
  • Unit bias – the tendency to want to finish a given unit of a task or an item. Strong effects on the consumption of food in particular.[52]
  • Well travelled road effect – underestimation of the duration taken to traverse oft-traveled routes and over-estimate the duration taken to traverse less familiar routes.
  • Zero-risk bias – preference for reducing a small risk to zero over a greater reduction in a larger risk.

