A Year of awards that probably won't come round very often



I am very pleased to have won our third award of the year: the MRS Award for Innovation in Research Methodology, won jointly with Deborah Sleep at Engage Research.

You wait half your life for an award and then three come along all at once. I accept that, in pure statistical terms, this may well be the very last award I win for the rest of my life, so for the moment forgive me while I bask in some level of glory!

Hopefully it won't last too long: my girlfriend thinks three awards in one year is bordering on greedy and has already decided I need to be brought down a peg or two, starting with the washing up and sorting out the leak in the hallway.




Editorial: Pop Science Books



Reg Baker, in his inimitable style, posted a wonderfully forthright blog piece (click here to read) bemoaning the industry's taste in books. He pointed out that the great majority of books nominated as having the most transformative impact on market research were popular science titles, and highlighted how unscientific some of these books can be, championing potentially bogus theories that have never been subjected to proper scientific scrutiny.

Now I can totally understand his emotional reaction. I have exactly the same response when I go into health food shops, see all the homoeopathic remedies on the shelf, and am horrified that they can get away with healing claims that have never been subjected to rigorous medical trials.

But I have a few points on this that I thought I would share:

1. Books are not apples: one or two bad ones do not rot the whole crop, and most are not in the class of homoeopathic-style remedies. Whilst there are some very bad pop science books, there are also some brilliant ones that, as Tom Ewing has pointed out, eloquently digest and explain very complex subject matter in a clear and understandable way. To draw overriding conclusions about pop science books from an analysis of part of the sample is itself bad science.


2. We can learn things from all types of books: I think reading pop science books is as important as reading more serious business books, as indeed is being open to reading anything from any source. To be closed to one genre of books is like gathering research from a biased sample. I am a believer in the wisdom of the crowd, and any business book, bogus or not, that reaches the New York Times bestseller list I would want to read almost on principle, to find out what thoughts and ideas people find so interesting. That's not to say you cannot read them with a critical mind. What's more, just because something is packaged as a light read does not make it a bad thing. It's like a classical musician dismissing all pop music as rubbish and refusing to listen to it: they would be missing some great tunes and conceptual musical ideas.

3. MR is a branch of pop science: We in market research are all actually in the business of conducting "popular science". Market research is in theory a branch of science, but we very rarely adhere to the rigorous standards that scientists must follow when conducting proper scientific research. Who, for example, has ever conducted a double-blind trial as part of a market research experiment? Next to nobody. Who puts all the raw data from their experiments up for peer review? Next to nobody. Have I ever read an MR paper that would stand up to a rigorous scientific review? Well yes, but not the vast majority (and certainly not any of the ones I have written, I am afraid!).

4. This type of debate is important: So it is easy to take a pop at what Reg has written, but I would seriously encourage him to keep writing these types of posts. Without Reg Baker the MR industry would be woefully short of serious internal critics, and we need them. Despite the impression that we are all firing pot shots at each other in the MR industry, very few people actually have the courage to stand up and hold something to account, which is essentially what Reg is doing here.

5. Science and commerce play by totally different rules: we are operating in a commercial environment where the market, not science, decides whether something is right or wrong, true or false. We are all out peddling snake oil on one level or another, and the ones that can do it most successfully are hailed as kings. Be it a company putting water in a bottle and selling it as a remedy, or Apple packaging up a few computer chips in nice wrapping and selling them for a lot more than their inherent value, all commerce is a form of "deception of the truth": selling something for more than its inherent value or actual worth. As market researchers we are often called upon to work out which truths and falsehoods we can get away with, which deceptions will fly. So for our industry to be respected we have to reinforce the types of standards that Reg Baker, with every fiber of his being, is trying to espouse.

What Reg Baker is simply asking is for us to be more scientific in our approach, which I applaud.
What is the next big thing?


I asked the panel of judges of my Research Transformation Awards, made up of industry thought leaders and innovators from across the global market research industry, to predict what they thought would transform market research in the future. There were some very interesting and intelligent thoughts, so I thought I would share their predictions with you:
  • Consumers waking up to the value of their own data.
  • The economy ... the last recession seems to have been one of the drivers of the current wave of innovation in research (as well as in marcomms); I suspect that a new economic crisis so soon after the last one will lead to a lot of client insecurity, and I expect less rather than more interest in experimentation.
  • Leveraging social media profiles and the open social graph to understand respondents as well as online advertisers understand site visitors.
  • Big data: the convergence of multiple data streams from diverse sources (research, CRM, social media, etc.), analyzed using Bayesian or other techniques to predict behavior.
  • I strongly believe in the potential of ethnographic research - offline or online. Using mobile technology is only one of a number of avenues worth pursuing. Unfortunately, only mobile ethnography featured among the innovations discussed in the survey.
  • Behavioural Economics and what Daniel Kahneman refers to as 'System 1 thinking' are going to be central to the next generation of research approaches. They fundamentally shift our understanding of behaviour and how we make decisions, and challenge research orthodoxies. They provide a framework for approaches that better understand and predict behaviour, and provide a higher purpose for research games. Games are going to be big when people have worked out what to do with them. For me, creating research games that can replicate moods, mindsets and hot-states will ensure that we get closer to how people really think, feel, behave and decide in context.
  • It's the mindset.....The mindset change that "Research" is everywhere. Everyone does it and therefore it is not the bastion of 'traditional researchers'. Once that is clear, there will be a lot more openness, innovation and Change. The next thing to watch out for is the Blinkers-off mode from new-gen researchers - straddling most of the techniques we discussed in the survey!
  • Although it's been mentioned throughout the survey, research communities and co-creation are indicative of the hyper-connected world we live in. By increasing direct engagement between companies, researchers and consumers, we empower the consumer, providing them with a tangible connection to the businesses that look to serve them. Not only does this provide exceptional results when applied correctly, it also has the power to break down barriers between researchers and respondents, and in a self-regulating industry where trust is paramount this can only be a good thing. In the future the direct connections between researchers, respondents and businesses will only continue to grow.
  • Mobile Social Media Research
  • I think the future of MR is a transformation of client side research departments into facilitators of 24/7 interaction between companies and their audiences. Technology, especially social media technology, will have a key part to play in this.
  • Big Data is the biggest opportunity and threat to the MR industry in the next few years.
  • The day that brands prototype in real/near real-time. Performance->Feedback->Revision->Performance->Feedback->Revision->...
  • 1. The realization and acknowledgement that the random probability sample is dead (note: not impossible to achieve, but unfeasible in terms of both timing and cost). 2. Mobile. 3. Pay for performance.
  • Community panels
  • Digital marketing research will become a subset of marketing. It is the only way the sums add up. The old marketing versus MR split is dead in the digital world.
  • I've been reading a lot about Identity Economics, how our identities and the norms we expect to see within these identities shape consumption patterns.  Brand communication can influence our identities and therefore our consumption.  I'd like to explore how we can adapt some of the research techniques used in sociological identity studies within communications research.
  • Client pricing revolution!  (Some) clients, especially those with MROCs, have already figured out that the historical service-model can, and should be disrupted.  Market Research companies must deliver consulting value, and be prepared to separate their operational costs (eg Sample, Programming/Hosting) from their consulting fees.
  • The merging and mixing of methodologies
  • Global migration to mobile devices as the way people organize their communications, their activities, and (as mobile payments systems evolve) the way they organize their financial lives suggests that mobile devices will become the primary channel for many forms of MR in the future. I am eager to see our industry deal with this evolution above and beyond the work done to date, where mobile devices are used primarily as "data collection" devices. I am eager to see our best MR minds use mobile devices the way real people use them -- to share not just information but also experiences and emotions.
  • Payment by results.
  • I think that, over time, CMOs will start to realize that their organizations should talk less and listen more.  They'll decide (finally) to increase the proportion of their marketing budgets devoted to research, and that will change their growth trajectories.
  • Predictive techniques that can shorten the length of surveys
  • Facebook!
  • I think the type of interactive things you can do with mobile phones and tablets are set to transform market research, e.g. sitting watching TV and giving live feedback, or voting on your feet about the products you like as you go around a supermarket.
  • I think we are at the infancy of text analytics; people are going to get smarter and smarter at this.

The Winners of the Research Transformation Awards


These are the results of the Research Transformation Awards, which I was asked to organise for a special session at the NewMR Festival. These awards were initiated to celebrate the things that have had (or are having!) the most transformative impact on market research. They have been judged by an esteemed panel of 30 leading research innovators and thought leaders from across the industry. The full list of judges is published below, and I would like to thank them all for their time and active contributions to these awards.

This is the judging survey, and if you would like to CAST YOUR OWN VOTES on these awards please do; I hope to publish the results of this open vote in the future, as it may be interesting to compare the open vote with that of the judging panel:
http://qsurvey.gmisurveys.com/dc/index.html?p=jwoYlg

The big ideas that are transforming how we think about market research
The first award is for the big ideas that are transforming how we think about market research. Over the past few years a number of major ideas and themes have emerged that have shaped the way we think.

1. Listening rather than asking
The emergence of social media has opened up a whole new viewpoint on how to conduct research, moving away from asking questions towards listening to what people are saying. It has spawned a whole new industry monitoring, measuring and analysing what we say when we are not being asked questions by market researchers. Listening is becoming more important than asking.
2. The hidden decision making process of our brain 
We are slowly learning more and more about how our brains work, and we are finding out that the way we make decisions is a lot more complicated than we think, with a lot of it happening outside of our consciousness. This thought is really transforming how many market researchers think about conducting research: no longer can we rely on simply asking questions, we have to look further.
3. Information can be beautiful 
We in market research can be labelled by the outside world as boring numbers people, but the arrival of infographics and new storytelling techniques has started to make our industry a whole lot sexier!

This is the full list of nominations.... 

Wisdom of the crowds: Discovering that groups can make intelligent decisions
Information can be beautiful: The idea that data can be sexy, and there are a lot more creative ways of presenting data than a bar chart
The hidden decision making process of our brain: Learning that our conscious mind is not always in control of our decisions
Herd thinking: Understanding the power of social influence
Co-creation: We can work together to create things
Gamification: Using gaming techniques to get respondents to think more effectively
Listening rather than asking: The idea of gaining insight from the mass of communication happening on the internet, using observation as a means of conducting research
Big data: The emerging opportunity to consolidate information from all sorts of sources to undertake super-analysis
Thin slicing: The idea that you can generate a lot more insight than you think from very small data sources
The communitisation of market research: We are all sharing a lot more ideas and information through things like Twitter, LinkedIn and blogs, and this is "super-scaling" our industry


General research techniques & methodologies that are transforming how we conduct market research
The second award is for the specific techniques and methodologies that have had a transformative impact on market research. I asked the judges not to pick out the ones that had had the most impact, but the methodologies and techniques that resonated most with them personally.

1. Research communities
Citation: MROC communities have been the real success story for market research in recent years, spawning a raft of successful new businesses and re-engineering how many businesses interact with their customers
2. Co-creation
Citation: This technique has married marketers and consumers together to revolutionise product development
3. Gamification
Citation: An idea that has set the industry alight with the realms of possibilities it offers to not just gather data but to communicate with consumers and clients alike.

The full list of nominations....
Text analytics
Infographics
Mobile phone research
Co-creation
Conjoint analysis
Gamification
Prediction markets trading
Real time research
Virtual shopping
Research communities
Social media monitoring
Behavioral economics
Neuro-research
Online qualitative research
Implicit association research
Eye tracking
Online dial testing
Location based research
Mobile ethnography
Conquest sofa technique
Facial emotion recognition
Observational research
  
Books that have had a transformative impact on Market research
These are the books that our judging panels felt have had the most transformative impact on how we think about market research

1. Information is Beautiful
Citation: David McCandless’s book is a must-have addition to any market researcher’s coffee table! The book that made data sexy

2. The Wisdom of Crowds
Citation: James Surowiecki’s book is one of the most cited books in market research, a wonderful validation of what we all hoped to believe: that collectively we are smarter than we think.

3. How we decide
Citation: Jonah Lehrer’s book delves into the hidden decision making processes of the brain, a must-read for would-be behavioural scientists

This is the shortlist of books, compiled largely from a LinkedIn discussion I initiated earlier this year.

Information is Beautiful, David McCandless
Reality is Broken, Jane McGonigal
Herd, Mark Earls
How We Decide, Jonah Lehrer
Where Good Ideas Come From, Steven Johnson
The Wisdom of Crowds, James Surowiecki
Influence: The Psychology of Persuasion, Robert Cialdini
The Art & Science of Interpreting Market Research Evidence, Smith and Fletcher
Being Wrong: Adventures in the Margin of Error, Kathryn Schulz
Consumer.ology, Philip Graves
The Black Swan: The Impact of the Highly Improbable, Nassim Nicholas Taleb
Resonate, Nancy Duarte

Communication
Innovation in any field is strongly linked to the quality of cross communication and the ways of communicating ideas about market research have proliferated in recent years with the advent of the internet. Here is a list of nominations of research organisations, publications and new communication channels that are transforming how we think about market research.


Research organisations
Winner:  ESOMAR
Citation: A truly global market research organisation which unites the world of market research

The nominees:
ESOMAR
WARC
The Market Research Society
The ARF
Research & Results
CASRO
QRCA
GOR
MRA 
AMA


Publications
Winner:  Research Magazine
Citation: It has its finger on the pulse of market research! Covering both the stories and the people behind market research, with a successful blend of online and offline communication.

The nominees:
International Journal of Market Research
Quirk's
Admap
Research Magazine


New media communication channels
Winner:  Twitter
Citation: Emerging as the No.1 platform for the transfer of news and ideas across the Market Research community
Runner up: The blogging community
Citation: Market research bloggers are really driving forward the market research agenda.

The nominees:

Twitter
The general blogging communities
Greenbook Blog
LinkedIn
NextGenMR
Facebook
Festival of New MR!

Finally here are some more specialist categories that I have grouped together

Data analysis
Winner: Big Data


The nominees:
Big data
Conjoint analysis
Text analytics
Semantic analysis
Discourse analysis
Respondent self analysis

Data presentation
The winner: story telling

The nominees:
Wordle
Dashboarding
Story telling
Infographics

Mobile phone based research
The winner: Location based research

The nominees:
Location based research
Mobile ethnography
Me research
Interactive experiences

Organizational
Winner: The younger generation of MR firms 
Citation: The last decade has seen the rapid growth of a number of new MR firms which have really transformed and shaken up the market research industry

The nominees:
Off-shoring
Existing leading MR firms
The younger generation of MR firms
New entrants from outside traditional MR
Client side market research teams
The panel providers

Qual Techniques
Winner: Hybrid quali-quant techniques
Citation: The lines between qual and quant are rapidly blurring and techniques that blend the two are very much seen as the future.

  The nominees:
Online focus groups
Hybrid quali-quant techniques
Webcam interviews
Online offline techniques
Focus group game play techniques
NLP inspired techniques

Software
The Winner: Confirmit
Citation: With its flexibility and depth of features it has become the most widely adopted platform for designing online surveys in the MR industry

The nominees:
Confirmit
SPSS
Survey Monkey
Snap
Survey Gizmo
Sawtooth
Etabs
Market Tools
Qstreaming

Technology enabled methodologies
The Winner: Facial emotion recognition
Citation: Anyone who has seen this in action cannot fail to be impressed with its potential as a market research tool

The nominees:
Virtual shopping
Implicit association research
Eye tracking
Online dial testing
Facial emotion recognition
Click testing/heat mapping

Ways of gathering data
The winner: Social media sourcing
Citation: Who would bet against social media becoming a primary channel for accessing audiences for market research?
Runner up: Intelligent Sample merging
Citation: Mixed sample sources are going to be ever more important in the future, and technology that can intelligently merge samples will have a critical role to play

 
The nominees:  
Research communities
Social media monitoring
Instant polling
Traditional panels
Social media sourcing
River sampling
Micro sampling
Observational techniques
Intelligent sample merging

The panel of judges:

Leonard Murphy: Greenbook blog
Sue York: NewMR
Alex Johnson: Kantar Operations
Jo Rigby: Omnicom Media Group
Sven Arn: H,T,P Concepts
Reg Baker: Market Strategies
Pravin Shekar: Krea
Bernie Malinoff: Element 54
Deborah Sleep: Engage Research
Peter Mouncey: IJMR
Edward Kasabov: Bath University
Brian Tarran: Research Magazine
Jeffrey Henning: Affinnova
Mark Uttley: ex Sony Music
Sean Copeland: Environics Research
Wim van Slooten: MOA Netherlands
Mike Cooke: GfK
Orlando Wood: Brainjuicer Labs
Tiama Hanson-Drury: GMI
Martin Oxley: BuzzBack
Sabine Stork: ThinkTank
Surinder Siama: ResearchTalk.co.uk
Tom De Ruyck: InSites Consulting
Betsy Leichliter: Leichliter Associates
Kathryn Korostoff: Research Rockstar
Mitch Eggers: GMI 
Roxana Strohmenger: Forrester
Mario Menti: twitterfeed.com
Dan Kvistbo:  Norstat
Ole Andresen : Confirmit
Adam Warner: RW Connect
Diane Hessan: Communispace

30+ New Buzz words and concepts emerging from ESOMAR 3D conference


Want to know what Silent Dog Analysis is? Have you heard about Facebook's new cluster influence model? Did you know that news is becoming a social currency? Have you done any crowd interpretation recently? Are you up to date with your digital etiquette? Here are some of the new buzz words and concepts I picked up at the ESOMAR 3D conference that you might want to drop into your next market research conversation. It was a veritable Exoflood of ideas!

1. Exoflood: An exabyte of data is 1 billion gigabytes, i.e. a lot! Exoflood is a new buzz word, used by Philip Sheldrake and coined by a west coast think tank, to describe the feeling that we are all being drowned in big data.

2. Stream banks: Philip Sheldrake, talking about banks that store all the personal data you constantly generate from your life online, which can potentially be spent by “smart decision makers”.

3. Smart decision makers: This is one of several new trends identified by Dominic Harrison from the Future Foundation: an emerging willingness to let smart processes make decisions that we find a bore to make ourselves, e.g. tracking electricity prices and automatically switching to the guaranteed lowest-cost supplier.

4. Surrendering choice: With this comes the idea that we are ever more happy to surrender choice to smart decision makers. If someone knows a lot about computers, let them tell me which one to buy so I don’t have to bother making that choice myself. Source: Dominic Harrison from the Future Foundation

5. Smart boredom: The idea that we are finding more and more smart things to do with our downtime (primarily noodling around on our mobiles!). Source: Dominic Harrison from the Future Foundation

6. Purposeful leisure: A similar concept: our leisure downtime has become more purposeful with the advent of the internet – we don’t sit around and consume TV, we interact more. Source: Dominic Harrison from the Future Foundation

7. The quantified self: We are amassing a huge amount of information about ourselves; self-measurement is a new trend being facilitated by the growing number of apps that enable it. Source: Dominic Harrison from the Future Foundation

8. Connectionomic: the science of how we communicate and connect with people in the digital space. Idea coined by Added Value & Yahoo exploring how, in particular, women use the web. Here is a link that explains more

9. Digital etiquette: studying the evolving rules of engaging with each other on the internet; an etiquette is emerging. An observation by TNS & Yahoo exploring how women use the web.

10. Social + content: One of the emerging pieces of digital etiquette: the idea that to engage with other people on the web you have to provide content, not just spout on about what you are doing. E.g. I am not interested in reading a post telling me you are eating breakfast, but if you have just discovered a great new brand of breakfast cereal while having your breakfast, that’s worth hearing about. A trend spotted by TNS & Yahoo

11. Eve-olution: the study of the changing behavior of women. Nice new word invented by TNS & Yahoo 

12. The context gap: Peter Harrison from Brainjuicer coined this phrase to highlight a major problem with many pieces of research: asking people to think about things out of context. Asking about food when I am not hungry will not deliver the same results as when I am; the purchasing decisions I make when I am in a hurry are not the same as when I have time on my hands; and how I feel about things when I am angry is different from when I am calm.

13. Use games to recreate experiences: an idea championed by Peter Harrison/Brainjuicer that you can use games to recreate certain experiences and thus help bridge the “context gap” when conducting market research. E.g. you can force people to make decisions under time constraints, and you can conjure up moods and feelings through game play (not necessarily hunger though!)

14. Think of the research reasons, not just the engagement reasons, when getting creative with online research: More creative and gamified question techniques have a wide variety of roles to play in market research, not just 'engagement'. Think about the research need for using a specific technique and understand the role each can have before jumping in. A message from Bernie Malinoff

15. Gamification v normification: The biggest enemy gamification has is the fortress of norm data. Researchers face a choice between improving the data and maintaining norms – so which one is more important to you? A point raised by Bernie Malinoff.

16. Facial expression analysis: OK, not specifically new, but we learnt from Alistair Gordon from Gordon & McCallum how powerful and flexible this technique can be. All it requires now is a recorded webcam stream. I think we will hear a lot about this technique in the coming years, hence its inclusion as a buzz concept.

17. Stimulation junkies: Generation Y are, as we all know, bombarded with stimuli from the web, which has turned them into “stimulation junkies”. A term talked about by Tom De Ruyck & Elias Veris from InSites Consulting.

18. Marketing research to consumers: The idea that in the future we will market participation in a research study the same way we market other consumer experiences, and that we need to position it as a cool thing to do. An idea from Tom De Ruyck & Elias Veris from InSites Consulting, who went on to show us exactly how cool market research can be if you pitch it as a fun, more game-like experience.

19. Crowd interpretation: Using respondents to help interpret and analyze research data. Again perhaps not a new concept, but this idea has been brought to life through new gaming techniques pioneered by InSites Consulting, including “insight battles”.

20. Insight battles: letting researchers and consumers battle each other to discover insights using gaming techniques. A great idea from Tom De Ruyck & Elias Veris from InSites Consulting that was demonstrated live at the ESO3D event.

21. Need & solution information disjoint theory: There are two types of feedback you can get from social listening: an understanding of consumers’ needs, and solutions proffered by consumers. Brands like to hear the needs but don’t want to listen to the solutions. This is the observation from a review of a decade’s worth of netnography research conducted by HYVE and the Karlsruhe Institute of Technology in Germany.

22. The NIH syndrome: “Not Invented Here”. This partly explains the above: there is a latent instinct amongst all of us to want to be part of the solution-generating process; if we are not, it undermines our role, so we are reluctant to take up solutions that have been foisted upon us.

23. If you can measure it, it probably is not important: insight mining via social networks is about understanding what the minority of us are doing, as opposed to conventional market research, which tends to focus on discovering what the majority of us are doing. An idea proposed by Ray Poynter.

24. Silent dog analysis: looking for the words not used in discourse analysis, not just the ones that are used. A new phrase/idea coined by Ray Poynter, inspired by Sherlock Holmes noticing the dog that did not bark in the short story Silver Blaze – an idea which I think has legs!
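To make the idea concrete, here is a minimal sketch (my own hypothetical data and function names, not taken from Ray Poynter's work) of what silent dog analysis might look like in code: compare the vocabulary of the discussion you are analysing against a baseline, and surface the common words that conspicuously never appear.

```python
from collections import Counter

def silent_dogs(focus_docs, baseline_docs, top_n=50):
    """Return words common in the baseline discussions but absent from
    the focus discussions -- the 'dogs that did not bark'."""
    tokenize = lambda docs: [w.lower() for d in docs for w in d.split()]
    baseline = Counter(tokenize(baseline_docs))
    focus = set(tokenize(focus_docs))
    return [w for w, _ in baseline.most_common(top_n) if w not in focus]

# Hypothetical example: reviews of one product vs. the category norm
category = ["great battery life and price", "battery lasts ages", "price is fair"]
product = ["lovely screen", "great price", "nice design"]
print(silent_dogs(product, category))  # 'battery' is the silent dog here
```

A real implementation would obviously need proper tokenisation, stop-word handling and a statistical test for "conspicuous" absence, but the core contrast is just this.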

25. The cluster influence model: Malcolm Gladwell’s idea of a group of super-powerful influencers existing on the web is challenged by new Facebook analysis of influence on their network, demonstrating that influence is made up of connected clusters: you cannot single out specific super-influencers, just well-connected members of these clusters who are not especially more influential than anyone else.

26. Measuring the viral reach of social chatter: Facebook believe we need to move on from simply measuring how many people have mentioned your brand on Facebook; this is the wrong metric. What matters more is how many people read those mentions, a figure that is potentially 10 times higher.

27. News is becoming a social currency: The value of news is based not upon who creates it but who distributes it. Highlighted by research carried out by CNN & Innerscope Research.

28. Social sharing emotionally tags information: Any information that is shared is more impactful and rests more strongly in our memories, because the information is emotionally tagged with the person who distributed it, according to research from CNN & Innerscope Research.

29. Friendships = influence: we all influence each other via our friendship connections on the web.

30. We are social media researchers, not social media data miners!: A clearer distinction needs to be made between the process of mining and gathering social media data and the process of interpreting and analyzing it as market researchers. A strong message from Annie Pettit – the two get confused.

31. Social media research needs to establish some quality standards: As researchers we need to apply the same rigorous standards to the analysis of this data as we do in the realms of traditional market research. Right now this is not always happening, according to Annie Pettit, who eloquently demonstrated the point by showing how easily the results of social media analysis can be manipulated using unregulated techniques.

32. Survey chunking: Breaking surveys into micro parts, asking those questions independently, and then stitching the answers back together. An idea with huge potential for conducting research on Facebook, where people will willingly answer a micro poll but not complete a whole survey.
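As a rough illustration (a hypothetical sketch of my own, not a description of any specific polling tool), chunking a survey is little more than splitting the question list into micro-polls and routing each respondent to one of them; the stitching then happens across respondents rather than within one interview.

```python
def chunk_survey(questions, chunk_size):
    """Split a long survey into micro-polls of at most chunk_size questions."""
    return [questions[i:i + chunk_size]
            for i in range(0, len(questions), chunk_size)]

def assign_chunk(respondent_id, chunks):
    """Deterministically route each respondent to one micro-poll, so the
    answers can later be stitched back together across respondents."""
    return chunks[respondent_id % len(chunks)]

# Hypothetical 7-question survey broken into 3-question micro-polls
survey = ["q1", "q2", "q3", "q4", "q5", "q6", "q7"]
chunks = chunk_survey(survey, 3)
print(chunks)                   # [['q1','q2','q3'], ['q4','q5','q6'], ['q7']]
print(assign_chunk(4, chunks))  # respondent 4 -> chunk 4 % 3 = 1 -> ['q4','q5','q6']
```

The hard part in practice is of course the analysis: because no single respondent answers every question, cross-question analysis has to work at the sample level rather than the individual level.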

33. The MROC engagement/fun model: Want to maximize active participation in your community? Get directly involved with them and make it as fun a process as possible, a message strongly espoused by both InSites Consulting and Isabella Kee Wong from Philips Design.

34. MROC research has the potential to move on from being seen as research to active participatory lifestyle experiences: Just how this can be done can be found out by reading Jon Rodriguez & Isabella Kee Wong from Philips Design’s paper from this conference - a superb, probably the best example I have ever seen of how to set up and run a dynamic market research community research project.

35. Social media historical benchmarking: The great thing about social media research is you can look backwards as well as track what people are doing now, and thus it is a form of research with inbuilt historical benchmarking opportunities. This is an extremely powerful idea that was explained with tremendous gravitas by Japanese researchers looking at the social impact of the earthquake & tsunami and how it had changed opinions in Japanese society. They did this by looking at the language used in blog posts published before and after the event, and found they were able to look back 2 or 3 years in history and track year-on-year changes.

36. Personas: An alternative way of thinking about traditional segmentation groups. Instead of segments, think about "Personas", a technique for personalizing a segment to bring it to life, invented by Jon Rodriguez & Isabella Kee Wong from Philips Design.

P.S.

Sorry, have I missed any or not got some of the interpretation quite right? Do let me know - happy to update and amend!

ESOMAR Congress Award for Best Methodology Paper


I am very pleased to have won the ESOMAR best methodology paper award at this year's annual congress, along with Deborah Sleep from Engage Research, for our game experiments exploring how to put the fun into surveys and the impact it can have.
A thank you to the company that pioneered gamification

I would like to pay tribute to the 3 people who acted as an important inspiration for this research. They ran a company I worked for earlier in my career called Buspak - you will never have heard of them, an Australian-based company that has long ceased to exist - but they succeeded in gamifying their whole business and really understood the power of fun and how it could be used to engage people.

Now when I say gamified I am not exaggerating… This was a company that once hired a tank for a day to drive into central London to the steps of Saatchi & Saatchi on Charlotte Street to deliver an “explosive deal”; that would insist the entire sales team get dressed up in fancy dress costumes to go on sales calls (see pic below); that would get actors to go into advertising agencies to deliver speeches; that once took a whole fairground around the major cities of Australia and invited the entire advertising industry to a day at their fair. They published a comic that they sent round to companies in place of a brochure. We would casually fax over a deal on a Friday afternoon to all of our key clients, and the prize for anyone who read the small print and faxed it back was a round-the-world air ticket. We once sent every one of our clients a brick, and I can’t even remember why now.


They sold advertising on buses, which frankly was not on the radar of most advertising agencies, so their strategy was to go out and make as much noise about themselves as they could and engage with their audience. They also had a very serious side, investing huge amounts of money in research to demonstrate the effectiveness of their medium, which was my earliest introduction to market research. The simple test and control techniques they used to demonstrate its effectiveness are the same techniques used for all the research conducted in this award-winning paper.

And it worked: they took over the Australian bus advertising business and I think quadrupled the turnover in 5 years, and the company's founder was named Australian businessman of the year. They then came over to the UK, bought up as much of the UK bus advertising business as they could, and within 2 years had more than doubled the turnover. They eventually sold up to a US company and retired very rich. If you dig around I think there is a Harvard Business Review case study on what they achieved with this business. A truly amazing company.

They taught me to place no limit to the imagination on implementing ideas in the business environment and what a powerful weapon fun was.

Their names were Colin Hindmarsh, Peter Cosgrove and John Williams. These names are folklore to many people in the media industry in Australia.

Thanks!

Failed experiments



Everyone likes to talk about the successes of their research techniques and methodologies, but we often tend to brush the failures under the carpet and don't like to talk about them. This is understandable - nobody likes to be associated with failure. But this silence can result in other people wasting a lot of time and effort going down the same road and making the same mistakes. I am afraid I witnessed a case of this at a conference I recently attended, where someone was talking about conducting video-based online interviews - something we experimented with a few years ago without much success - and I could see they were heading in the same direction.

I feel a bit guilty now that we did not publish the findings from this failed experimentation at the time, so I thought I’d use the opportunity of having this blog to make up for it a bit by laying a few of our failed experiments on the table, as a gesture towards encouraging more open collaboration on failures as well as successes in the market research industry...


The failed video experiments 

3 years ago we invested a lot of thought and effort experimenting with the idea of video surveys, where we got actors to deliver the questions as we thought it might make the surveys more engaging. It was a failure on several fronts. Firstly, the research company we worked with kept changing the questions - not an uncommon issue, in fact it’s the norm - so we had to re-record the actors speaking the script 3 times, at tremendous cost. Then, integrating these videos into the survey made it really slow and cumbersome to load, restricting it to people with decent broadband connections, perhaps less of a problem now in many markets. The third factor, which we did not even think about when starting out, was the realisation that up to a third of people were doing our surveys in public situations like offices, where it would be annoying for other people to hear the survey questions spoken aloud.

As a result, all things combined, we experienced more than a 30% drop-out rate from these surveys, which rather undermined the whole point of the exercise: improving the volume of feedback.


Now that’s not to say the video did not work as an engagement technique - it did, for those who were prepared to watch it - but we found that by using a silent animation technique instead we could stimulate a similar quality of response, and this approach was far more cost-effective.

Difficulties of implementing virtual shopping

Another area of online research we have had real problems with is virtual shopping. Virtual shopping is a very popular idea, and a virtual shopping module looks great in anyone’s portfolio of technical online survey solutions - you always see them being shown off and demonstrated at conference events - but to be candid we have found it almost impossible to properly emulate a shopping experience online.

Here are the problems. Firstly, looking at a group of products on a web page is nothing like the experience of being in a shop looking at products on the shelves. In a shop the shelf at eye level gets the most attention and products on the top and bottom shelves are looked at less often; on a web page our eyes naturally scan from top left to bottom right (in Western markets), so there is no way to effectively model the same experience.

The second issue is one of pure statistics. Most real shopping experiences offer around 20 or so competing products to choose between, but do the maths: with 20 products there is only a 1-in-20 chance a test product will be selected, and to get a statistically significant measure of whether one product will sell more than another you need at least 50 selections, ideally 100 - which means sample cells of 1,000 to 2,000 per design variant. So if you were, say, testing 5 designs you might need to interview 10,000 people, which is simply not economic.
Naive to this fact when we first started creating virtual shops, we sometimes designed them with 50 to 100 products and samples of a few hundred, resulting in only 2 or 3 purchase instances of any one product - which made it almost impossible to make any real sense of the data.
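The sample-size arithmetic above can be sketched in a few lines. This is just a back-of-envelope calculation, and it assumes (as the reasoning above does) that each product on the shelf is equally likely to be picked:

```python
# Back-of-envelope sample sizing for a virtual shelf test.
# Assumption from the text: every one of the n_products on the shelf
# is equally likely to be selected by a given respondent.

def respondents_needed(n_products, selections_needed):
    """Respondents per cell so the test product is picked often enough."""
    pick_probability = 1 / n_products          # e.g. 1-in-20 for a 20-item shelf
    return int(selections_needed / pick_probability)

per_cell = respondents_needed(n_products=20, selections_needed=100)
n_designs = 5
print(per_cell)              # 2000 respondents per design variant
print(per_cell * n_designs)  # 10000 respondents to test 5 designs
```

Which is exactly why a 100-product shelf with a sample of a few hundred yields only a handful of purchase instances per product.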


The third factor is cost. When we started out we would spend days designing wonderfully accurate renditions of a supermarket shelf, with 3D depth effects and shadowing, and we even went to the length of experimenting with 3D environments that took 2 or 3 weeks to create, with a price tag of £10k to £20k per project just to create the shelves. The simple fact, though, is that the average budget to design-test a product is less than £10k including sample, and there are often significant time constraints, so these more elaborate techniques were simply not economic.

The 4th factor is screen resolution. If you cram 20 or so items onto a page it becomes almost impossible to get a real sense of what a product looks like or to read the labels and details, and this factor alone is enough to make many comparisons almost meaningless. When you are in a shop looking at products it’s amazing how much detail you can see and pick up on without even picking up the items; that is simply missing when you are looking at fuzzy pixels on a screen.

The solution we have reverted to for design testing is a really quite simple shelf with between 3 and 8 competing products on display, and we have developed a virtual shopping module that enables us to create these dynamically without having to do any elaborate design work, cutting down dramatically on cost and creation time.


Yet still to this day we have clients come to us, unaware of these issues, asking to run a virtual shopping project testing 10 designs on a shelf of 100 products, and we have to explain the issues to them.

The failed ad evaluation time experiments

We have done a lot of work looking at consideration time when it comes to making online survey decisions, and it is clear that there is an underlying relationship between consideration time and uncertainty, though it is complicated by a wide number of factors. One of the other things we were aware of from reading psychology books was that people spend longer looking at more appealing images. So we thought: what about using this as a test of the effectiveness of advertising? Surely if respondents spend longer looking at one ad than another, it is likely to be more effective?


Well, we conducted a whole series of experiments to see if we could measure this, with abject failure. It’s not that the basic premise is untrue; the problem was that so many other more significant factors were at play that, in research terms, viewing time seemed to be almost a random number. Confusion was an issue: the amount of visual clutter, the clarity of the message, layout factors, colours, the style of the visual content and so on all had measurable effects. Often respondents merely glanced at some of the most effective ads, while spending ages looking at some of the worst - perhaps in the same way you stare at a train wreck - so we gave up on these experiments with our tails between our legs.

Failed question formats

Now I am rather sensitive about this and somewhat defensive, as we are in the game of developing more creative questioning techniques and have developed quite a number over the last few years, but it has to be said that some of them have crashed and burned along the way.

Multiple drag and drop questions

Dragging and dropping is a particularly problematic format, especially if you want people to drag multiple items: we have found that people get bored of doing it very rapidly, which restricts some of the most creative applications of the technique. Dragging and dropping is a brilliant solution for single-choice selection processes because it can produce measurable reductions in straightlining and improved data granularity. But if, say, you had 3 brands and a range of attributes that you asked respondents to drag and drop onto the brands, you would be much better off doing this with conventional click selections. As we have found to our cost, there is simply a limit to how many things people can be bothered to drag and drop - 2 or 3 at most before they think they have done their job.

The flying words question


We had this idea that we could make tick selection more fun if we made the options fly across the screen and asked people to click on them as they went past. It took one experiment to realise it was not going to be a very usable question format. A combination of bemusement among respondents seeing the options fly past, and the realisation that not everyone is 100% focused 100% of the time, resulted in about half the number of clicks being registered compared to a traditional question.

Opinion Snowboarding 


This is a question format where respondents snowboard down a hill and pass through gates along the way to indicate their choices. It looked fantastic, and I was very excited when we first developed it. Most respondents thought it was a fun way of answering the questions. The problem was that in the world of survey design, "most" is not enough: around 15% of people found it totally annoying, and what we got out of the back of it was really quite chaotic data from people who could not make up their minds in time. We tried slowing it down to give people more time to think, but that just started to annoy another group of people who became frustrated with the time it was taking.

We have not given up on this question format yet, though. We have been working on a simplified version with a single pole that respondents ski to the left or right of, which seems to perform much better, and we feel it may have an interesting use for implicit-association-style research where you force people to make quick decisions. But as a swap-out variant for a conventional grid question - forget it!





Drag & Drop questions: a user guide


Drag & Drop style question formats open up a raft of more creative questioning techniques for researchers, but they can be misused, and many researchers are unsure of the creative possibilities of this style of question.

This guide covers the factors to consider when deciding when and how to use Drag & Drop questions in online surveys, and outlines the range of Drag & Drop question formats available.

Understanding the value of dragging!

The principal value of Drag & Drop in surveys is that it allows respondents to sort and group options, rather than simply pick them.

Ranking

Drag & Drop is most commonly used to allow respondents to rank choices, such as to pick their first, second and third choice from a list. To achieve this in a conventional format would require presenting effectively the same question three times – asking the respondent to select their first choice, then second, then third. This would be a repetitive task for respondents, and also fiddly to program, as the previously selected options would have to be filtered out of the list each time. The Drag & Drop format allows these three questions to be combined, with ranking and selecting being part of same process. From a respondent’s point of view, it is more intuitive than either repeated questions or a grid.

Sorting

Another useful role for Drag & Drop is to allow respondents to apply a large set of options or attributes to two or more ‘targets’. For example, a respondent might be asked to pick words from a list at the top of the screen, and match them against two brands at the bottom of the screen. By making and reviewing their selections for both brands at the same time, it allows them to compare and refine their choices, bringing out the truly distinct characteristics for each brand. By presenting all the choices they have to make on one page, rather than showing the same list over and over, it helps to ‘concertina’ the thinking process.

Reducing straightlining effects

In experiments we have conducted at GMI, we have observed up to 80% less measurable straightlining using Drag & Drop questions, compared with conventional grid questions using standard button selection.*

One factor that might explain this is the boredom respondents experience when asked a large number of grid-style questions in a survey. The switch from button-pressing to Drag & Drop makes the question more interesting to answer, which research has shown improves respondent focus and thus reduces straightlining. But that is not the whole story.

In addition, the extra concentration required to drag a selected option from one part of the screen to another makes it far more likely for the respondent to think about the choice, since it takes the same effort to answer the question properly as not. This is distinct from simple button-pushing, when banks of repetitive questions can be sped through with the brain disengaged and the eyes almost closed.

But while the possibility of an 80% reduction in straightlining might tempt some to replace every grid question in a survey with Drag & Drop, it should be kept in mind that the extra time required to complete such questions can become frustrating to respondents if they occur too frequently, and this can trigger dropout and speeding. They are best used sparingly in a mix of creative question formats, and reserved for when their benefits are most needed.

*source: ESOMAR GMI Engage 2008 Panel conference paper “Measuring the value of respondent engagement”
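To make the straightlining metric above concrete, here is a minimal sketch of how one might flag straightliners in grid data after fieldwork. The data shape and the "identical answer on every row" definition are illustrative assumptions, not the measure used in the GMI study:

```python
# Illustrative straightlining check: flag respondents who gave the same
# answer to every item in a grid question. Assumption: each respondent's
# grid answers are a simple list of scale points.

def is_straightliner(grid_answers):
    """True if one identical answer was given across the whole grid."""
    return len(set(grid_answers)) == 1

responses = {
    "r1": [3, 3, 3, 3, 3],   # same point on every row -> flagged
    "r2": [1, 4, 2, 5, 3],   # varied answers -> not flagged
}
flagged = [rid for rid, answers in responses.items()
           if is_straightliner(answers)]
print(flagged)  # ['r1']
```

Comparing the flagged rate between a Drag & Drop cell and a button-grid cell is one simple way to quantify the kind of reduction described above.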

What types of Drag & Drop question are there? 

There follows an outline of the various options for using Drag & Drop questions in a survey.

Terminology of different Drag & Drop questions

We define 4 different Drag & Drop processes, and it is helpful to use this terminology when defining a Drag & Drop question requirement in a draft questionnaire.

1. Drag disappear: dropped options disappear
2. Drag and stack: dropped options become a stack
3. Drag and restrict: only one option may be dropped onto a target
4. Drag and list: dropped options become a list

In addition, targets may be presented all on one page, or one at a time, referred to as “sequential Drag & Drop format”.

Options may also be set as single- or multi-choice.

They can be dropped onto restricted positions on the screen, or onto one- or two- dimensional zones.
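The terminology above can double as a specification vocabulary. As a purely hypothetical illustration (not any real survey platform's API), the four behaviours and the presentation options might be captured in a questionnaire spec like this:

```python
# Hypothetical spec objects mirroring the Drag & Drop terminology above.
# Names are illustrative only - not a real survey tool's API.

from dataclasses import dataclass
from enum import Enum

class DropBehaviour(Enum):
    DISAPPEAR = "drag disappear"    # dropped options disappear
    STACK = "drag and stack"        # dropped options become a stack
    RESTRICT = "drag and restrict"  # only one option per target
    LIST = "drag and list"          # dropped options become a list

@dataclass
class DragDropQuestion:
    behaviour: DropBehaviour
    sequential_targets: bool = False  # show targets one at a time?
    multi_choice: bool = True         # options droppable on several targets?

q = DragDropQuestion(DropBehaviour.LIST, sequential_targets=False)
print(q.behaviour.value)  # drag and list
```

Writing the requirement this explicitly in a draft questionnaire avoids the scripter having to guess which of the four processes is intended.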

Basic option ranking


This question format can be arranged vertically or horizontally, with the options organised as blocks or straight lists.

(Methodological note: the temptation here is to allow respondents to rank all the options, but this could result in several-hundred variable combinations to process. It is best to ask respondents to rank only the top 2 or 3, and possibly the worst.)

This question format is also an ideal alternative to MaxDiff-style conjoint question approaches.

Option sorters


In this format, the respondent drags the options onto a target choice. These can stack up in a pile (sometimes called card sorting) or disappear as they are dropped onto the selection zone.

The main decision about option sorting is whether to have all the options on display or show them one by one (sequentially), and this is largely dictated by the number of options respondents are asked to sort.

The visual format of this question can be adapted in many different ways, depending on the task. Icons can be added to help emotionalise the choice. The question format can be set up with text alone, or, just as easily, with images.

The layout can be organised to enable respondents to drag options to targets arranged horizontally, vertically, or onto a grid.

List builders


In this format, the options are dropped onto each target to form a list. This is useful when, for example, asking respondents to select a set of features for a product, or to encourage respondents to pick a minimum number of choices for each target – e.g. words associated with different brands.

The same range of custom features are available as in the option sorting format: respondents can drag either words or images, which can be shaped and sized according to specific requirements.

Flag Drag & Drop onto line


This question format is often used as an alternative to sliders, as in effect it produces the same data. The respondent places each option onto a bar, marked with a range from 1-100. The benefit of this format over sliders is that respondents can make more micro-comparisons, and there is an element of ranking involved which can help pull out more subtle differences. Jeffrey Henning wrote a good post about when to use ranking v rating questions: http://blog.vovici.com/blog/bid/18228/Ranking-Questions-vs-Rating-Questions. The popularity of this question format is probably due to the fact that it combines these two techniques.

Target Drag & Drop


This is equivalent to a one-dimensional ‘flag Drag & Drop’, but respondents place their choices onto a two-dimensional target. Respondents can find this a more intuitive process, for example when asked how much they like something. Another advantage is that more options can fit onto a two-dimensional target range without it getting overcrowded.

The target imagery can be customised, as can the colours of the target zones.

Graph Drag & Drop


This records answers on a two-dimensional scale, making respondents perform two tasks at once - say, how much they enjoy watching sport on TV versus how often they participate.

This format needs to be used with some care, as it can be a little confusing for respondents to answer. See note below about instructions...

The importance of instructions when using drag and drop questions

Respondents are so used to pressing buttons in surveys that the sudden appearance of a Drag & Drop question can cause confusion. If such a question is badly identified, respondents can end up trying to press all the options as though they were buttons, and then quit the survey in the belief that they are stuck. Experiments have shown that up to 10% of respondents can drop out of a survey because of a badly identified Drag & Drop question.

For this reason, it is essential that instructions on Drag & Drop questions are clear. Ideally, they should be illustrated, such as with animated arrows.

How does data differ from Drag & Drop questions to conventional button-selection techniques?

Experiments to compare Drag & Drop questions to conventional grid alternatives show no major differences in the overall balance of data, other than a slightly lower level of neutral/don’t know selections accounted for by the improved level of engagement. The distribution between the top and middle boxes appears to be equivalent.

Potentially better quality data

As previously explained, there is less straightlining associated with Drag & Drop question formats, so you potentially get richer data. But there can be fewer answers in certain circumstances!

A particular problem found with list-building Drag & Drop question formats is that respondents have a habit of dragging only one or two options onto each target, particularly if there are a lot of targets, and are less likely to match the same option to more than one target compared to a situation where they are presented with each target one at a time.

This can result in a reduced volume of data compared to sequential tick-selection approaches.

One solution to this is to define a minimum number of options respondents must drag into each zone, but this can be frustrating for them if they cannot think of enough associations. So we would also recommend stating in the question the recommended number of attributes to select, without making it conditional, e.g. "please select 5 features that you think represent each brand".
