ARF Great Minds Quality in Research Award 2011

I am pleased to announce that my team at GMI Interactive has won the inaugural 2011 ARF Great Minds Quality in Research Award for the work we have been doing with Mintel to re-engineer their online consumer brand tracking surveys.


These surveys were re-designed using QStudio, our in-house Flash survey design platform, as a replacement for the existing HTML surveys. We managed to dramatically improve both the consumer experience, reducing drop-out by 50%, and the quality of the data, increasing the average click count by 20%.

We will soon be publishing a full case study explaining the journey we went through to re-engineer these surveys.

In the meantime, my team is basking in the glory of winning this award!

 

Game Theory – turning surveys into games

Gamification is clearly a bit of a buzzword in the marketing industry right now, and a subject of growing interest amongst market researchers, as witnessed by the enthusiasm this topic generated at the recent NewMR Festival web conference, organised by Ray Poynter, that I was part of.

This is an article I wrote for MrWeb following on from that conference, reproduced here, which looks at the work we have been doing to explore the role of game play in online surveys.



Background

For the last 3 years we have been on a quest to find ways to improve online surveys by making them more engaging for respondents, conducting over 100 experiments looking at different techniques. Over the course of this work, we observed how effective introducing any level of playfulness or game-style activity into an online survey could be at stimulating extra feedback.

I would go as far as saying that game play is the single most effective means of engaging with respondents we have discovered.

As a result, last year we began to take a serious look at how we could introduce more game-like activities into surveys, and to research the impact this had.

For those who are not already tuned in to survey gamification techniques, here is a short summary of what we have learnt so far.

  1. Reframing questions to be more game-like:
A game is really anything we do that involves thinking and is fun. What differentiates a game from a survey question really boils down to how we ask it. Take this example:

Question:  “What is your favourite meal?”
Game:   “Imagine you are on death row: what would you order for your last meal?”

By asking the question within a more imaginative framework, it becomes more fun for respondents to answer, and in turn we have found that respondents deliver far richer responses (see examples below).




I was recently interviewed by Surinder Siama from Researchtalk, and we talked about this technique. He challenged me to put my money where my mouth was and asked me to run a live experiment with a couple of randomly selected people in the office. He recorded it, and here are the results:



http://www.youtube.com/user/researchtalk#p/u/0/MybClB6cAQ4

  2. Adding a competitive element

Most games have a competitive element and it is amazing the impact that adding any form of competitive activity to a survey can have.

The simple phrase “We challenge you...” added to a question asking people to recall something can easily double the volume of responses.  Other phrases like “can you guess...” or “you have 1 minute...”  seem to be equally powerful.

Question:  “What brands do you recall?”
Game:   “Can you guess the top 5 brands? You have 2 minutes...”

This shift in emphasis not only improves response (in the above case delivering 4 times as many brand names); it also improves enjoyment levels, highlighting the direct relationship between fun and effort.

  3. Reward and feedback mechanisms
A typical survey contains no form of feedback or reward for successful performance, except a somewhat distant financial incentive. Yet you only have to look at the success of the psychometric-test-style games flying around Facebook to understand the power of a reward/feedback mechanic.

Question: “What is your favourite colour?”
Game: “Find out what your favourite colour says about you!”

Reward and feedback mechanisms are conceptually very easy to integrate into surveys, and we are working with several clients to develop these right now. The real issue is adapting your survey technology to handle scoring and feedback mechanics effectively. This is something we have been focusing on from a technical point of view, and we now offer a scoring component that can be integrated into any survey.



  4. Understanding that we enjoy thinking
Many of the games we most enjoy often involve quite complex thinking; take chess, Scrabble or most computer games as examples. Yet many surveys simply have no expectation that people think at all. Conjoint research is a perfect example of this: a mind-numbingly dull task for respondents. We have played around with transposing traditional conjoint research into more complex product-building games, and the impact is dramatic, delivering far more creative solutions, to the extent that we are now building a full product offering around this particular game-play technique, called “Evolution: survival of the fittest product”.

Question:  “Would you prefer a 10-inch thick-base pizza, or a 12-inch thin base?”
Game: “Who can make the most popular pizza from these ingredients?”



  5. We play games with other people
Playing within a group, or as part of a team, brings the potential for collective spirit, which can engage greater effort and increase desire to complete a task effectively.

Clearly, this is much harder to realise in an online environment, as we complete surveys as individuals.

But it is not impossible. The idea we are exploring right now is to create a series of surveys, allocate respondents into teams, and have the teams compete against each other to complete a series of tasks. We are running experiments to see how this works, but I already know that the technique of team-based game play is being employed extremely successfully in focus groups by Arthur Fletcher & Blauw Research.

Question: “What is your solution?”
Game:  “Can your team come up with the best solution?”

  6. “Gamifying” questions
Working from these initial ideas, we have explored how to redesign conventional question formats, such as text input, word selectors and grid questions, to make them more playful and game-like in character, and we are building a suite of game-style question formats into our survey technology.

Question: [image of a conventional question format]

Game: [image of its gamified redesign]
Some of these new question techniques are still under wraps while we test them out, but we hope soon to understand the impact they can have on how respondents answer.

  7. Turning whole surveys into games
The final thought is on more holistic approaches to game play. I think the field is wide open here to a whole range of ideas. Games can take you into all sorts of mental spaces and do a lot more than just engage.

The most successful approach we have found of doing this so far has been to employ projection techniques, such as asking respondents to imagine they are the boss of a company.

Question:  “What do you think of this new product idea?”
Game:  “Imagine you are the judge on a new TV game show called New Product Factors”


We have found that this technique can result in 3 times as many thoughts and ideas from respondents, and I would add, a lot more freeform, candid and creative feedback.

In summary

The basic idea behind game theory as applied to market research is that respondents who perceive a survey as an enjoyable, game-like activity are much more likely to devote effort and thought to its completion, and thus give more valuable answers. In game play, more respondents are prepared to do adventurous things and think more creatively, and you have the opportunity to tease out feedback from people that you could not get in conventional surveying.

Game thinking is really as much a mindset about engaging respondents and challenging them to be more adventurous as it is about the specifics of the methodology or designing wacky questions.

I believe that nearly any question in a survey could be made more fun and game-like to answer with enough thought and imagination, and I predict we will see whole new game-style survey mechanics emerging in the future.


I appreciate that, as a survey design company, we focus on the mechanics of engagement, but I think game play has a lot more to offer than just this. Games have the ability to take people into different mental spaces and mindsets, and who knows how this could be exploited.
 
We are currently working with Engage Research and three end-user clients to explore ways of making whole surveys more game-like, and we are also looking at how these gaming techniques affect the character of answers, an important question for many people. We hope to reveal the results of all this research in a paper we intend to publish later this year.

Updates:

Here is a feature article on survey gamification from Research Magazine I contributed to:
http://www.research-live.com/features/an-extra-life-for-online-surveys/4005075.article

This is a link to a recording of a presentation on gamifying surveys I gave at the NewMR Festival event in January 2011, which has just been made publicly available:

Upcoming presentation on Gamification of surveys:

I will be presenting a full paper on this topic with Deborah Sleep from Engage Research at the ESOMAR Congress on 21st September:
http://www.esomar.org/index.php/events-congress-2011-presentation-abstracts.html#the-game-experiments

And a more technical paper on survey gaming techniques at the ASC Conference on 22nd September:
http://www.asc.org.uk/events/september-2011

I will also be presenting on survey gaming at the BAQMAR annual conference in December:
http://www.baqmar.be/?page_id=343

The Eureka Experiments


We recently conducted a series of experiments looking at how best to stimulate creative thinking within an online survey environment. Co-creation is a hot topic, and moderated think-tanking is a well-established art, but within an online survey environment it is very difficult to get people to think and act creatively; most approaches rely on co-opting large numbers of respondents, or profiling groups of creative people, to come up with ideas. We wanted to look at how you could stimulate creativity amongst small groups of regular-joe respondents, and try to develop some practical techniques that our clients could use as part of their everyday research.

This is a short report on these research experiments.

Exploring idea generation theory

The starting point for this research was to explore all the idea generation techniques out there and to see how they could be adapted to work in an unmoderated survey environment.

Now, a lot of techniques have been published, not least by Edward de Bono, who has pioneered much of the thinking on this topic. There are some truly wonderful and inspiring books and papers on lateral thinking and idea generation.

I cannot claim to have read it all; it is a huge subject, and I must admit to feeling a bit like a man in a rowing boat responding to an SOS call, only to find on approaching it that it is an oil tanker!

Idea generation enzymes

To put all this thinking in a nutshell, most idea generation techniques seem to rely on idea-generating enzymes that trigger creative thought.


One way to think about this: if you ask someone in a survey to list all the things they can think of that they might find in a fridge, people get stuck after about 8 items. But if you ask them to think of items in a fridge that have a sell-by date and go off quickly, or white items, or small items, each of these prompts in turn can deliver more responses, and by rolling respondents through this prompting process you can easily get them to build up lists of 20 or 30 items. This refinement of the thinking to focus on a specific viewpoint switches off the frontal cortex, which can over-dominate decision making, and allows access to more freeform parts of the brain (apparently!).

What items do you find in a fridge?

The same trick applies to coming up with ideas. If you ask people straight out to come up with an idea, most draw a blank; only about 2% of people can do this spontaneously without much effort. The rest of us need something to help trigger ideas, and there are hundreds of ways to do this. Edward de Bono espouses using completely random stimulus: say you had to think of a name for a new brand of yoghurt, you throw in the phrase "super hero" and see what ideas it triggers, so people might come up with "Captain Yum", a youth yoghurt brand. There are whole books written on different seeding processes.

The difficulties of applying idea seeding techniques to an online survey

Using these techniques within a moderated research environment is quite easy: the moderator effectively acts as the enzyme for ideas and can toss out different seed thoughts to stimulate people's imagination. Online research, however, is a more linear process: you have to work out ways of seeding people effectively without a moderator. A moderator can play around with different techniques until he hits upon something that works; with online research you have to plan what you are going to do in advance, which requires a lot more up-front creative thinking. There are technical hurdles too, for example working out how to deliver seed ideas randomly and effectively. There is also an important emotional engagement issue. In a focus group you can warm people up before you start the task; if you throw people in cold to a creative idea generation process, you are likely to get little out of them. You need to work on their mindset first. How do you do that in an online survey?

The warm up process

The starting point for these experiments was to think up some warm-up techniques to get online respondents in the mood to give you ideas, and this basically involved playing some games. I spoke to a friend of mine who runs the Islington Youth Theatre and specialises in improvisation, and he helped us come up with some ideas.

The trick, he said, is to try to get people to do something stupid! Once you have encouraged that, they have a more open mind and are blooded into the process.

Well, we thought, to get them to be stupid we would have to do something stupid ourselves!

Stupid uses game

We devised a game, straight out of the lateral thinking school, called "Stupid uses", in which we asked respondents to try to think up some stupid uses for different products. To seed this process, we spent an afternoon in the office playing the game ourselves, photographed the experience, and added the photos to the survey to lead the way... (I can honestly say I have not laughed so much in years as I did doing this.)



The impact of this technique was fantastic. We ran a test-and-control experiment: half the respondents were asked to come up with ideas without this prompting and half with it, and we did a simple audit of the quality of the ideas. As the results below show, a measurable improvement in creativity was achieved.


We have experimented with some other warm-up processes that have also been effective, notably a technique you could use to encourage people to write straplines for products: first you ask people to summarise films in as few words as possible. We ran this as a competition at the start of one survey and got respondents to pick out the summaries they liked best, and it really did work a treat at getting them engaged.

Exploring specific idea stimulation techniques...
Next we moved our thinking on to specific idea generation techniques we could employ in surveys. We have played around with about 10 different ideas, but these are the 3 methods that proved most successful..

Idea for an idea

The idea for this technique came from a game I play with my kids when travelling in the car, where you ask them to think up uses for things competitively. When I once asked my son, as a test, to list as many uses as he could for a brick, he got bored and gave up after about 10 ideas. But play it as a game on a long car journey, trying to think up alternative uses for, say, a box of matches, with him coming up with an idea, then me, the loser being the one who gives up first, and we can play for well over an hour and come up with 100+ ideas.

So we tried to reproduce this process in an online survey. We ask people to think up ideas for something, and when they run out of ideas we show them an idea someone else has had and see if it prompts them to have another. We do this over and over again until the respondent gets bored. We also get them to vote on other people's ideas as they go along, and to say if it is an idea they have already had, which gives us a way of quantifying the quality of the ideas.

We built a special question widget to do this, which recycles ideas from other people back into the survey.
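The recycling mechanic can be sketched in a few lines. This is a hypothetical illustration of the logic, not the widget's actual code; the class and method names are my own. Ideas submitted by respondents are pooled, a stuck respondent is seeded with a random idea from someone else that they have not yet seen, and votes give a rough quality tally per idea.

```python
import random

class IdeaPool:
    """Hypothetical pool behind an idea-recycling survey question."""

    def __init__(self):
        self.ideas = []   # list of (author, idea text) pairs
        self.votes = {}   # idea text -> vote tally

    def submit(self, author, text):
        """Record a new idea from a respondent."""
        self.ideas.append((author, text))
        self.votes.setdefault(text, 0)

    def seed_for(self, respondent, seen):
        """When a respondent runs dry, return a random idea from
        someone else that they have not been shown yet."""
        candidates = [text for author, text in self.ideas
                      if author != respondent and text not in seen]
        return random.choice(candidates) if candidates else None

    def vote(self, text):
        """Respondents rate recycled ideas as they go, giving a
        simple way to quantify idea quality."""
        self.votes[text] += 1

pool = IdeaPool()
pool.submit("r1", "use a brick as a doorstop")
pool.submit("r2", "use a brick as a bookend")
seed = pool.seed_for("r1", seen=set())   # only r2's idea qualifies
```

The loop simply repeats `seed_for` and `submit` until the respondent gives up, with the `seen` set preventing the same prompt appearing twice.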



What you end up with is an idea-generating engine that, in test experiments, increased both the volume and quality of ideas, from around 7 ideas per respondent to 18.

Random rooms

This next technique requires more up-front creative work, but it is effective. You allow people to visit a series of different rooms, giving them total freedom to explore whichever rooms they like. In each room there is a different piece of random stimulation, or a different perspective on the problem or issue, designed to trigger some creative thought. What you have to do is come up with as many seed ideas as you can to put in the rooms (an exercise in itself, in many respects). The more random and creative you can be in how you design this, the better. We do have some tricks up our sleeve for doing this that I would describe as proprietary, so I cannot reveal them here, but the idea is very simple and very effective.



We ran an experiment using this technique, getting people to think up ideas for new Ben & Jerry's ice cream flavours. Asked cold, people came up with an average of 0.7 ideas each, but through this process each person came up with nearly 7 ideas.

Through the minds of

The third and last technique I would recommend is called "through the minds of". Again, it is very simple: you ask people to think up product development ideas through the minds of different things.



In the experiments we have conducted, this can trigger an order-of-magnitude improvement in the number of thoughts and ideas people come up with.

Summary of what we learnt from these experiments

Good ideas in, good ideas out... We did some follow-up research to measure the quality of the stimulus ideas and evaluate the number of ideas each one generated. You can see here that the best stimulus ideas generated far more ideas themselves...





Sliders: a user guide

The slider question format is a fantastic tool for comparative decision-making, and respondents enjoy using sliders in surveys, provided you use them in the right way and don't overload the survey with them. The question commonly arises, however, as to how the data from slider questions compares to that from standard range questions. In fact, sliders have been singled out as one of those sexy question formats that can deliver dangerous answers.



A great deal of detailed research has been undertaken into this topic, so here is a summary of the main issues and a guide to using sliders effectively in your online surveys.



Sliders: should they be classified as one of those sexy question formats that give dangerous answers?

The short answer is yes: there can be significant differences between the answers given to slider questions and those given to their standard range question counterparts. But if you know the primary reasons behind this and understand how to use them effectively, sliders can be an incredibly powerful and engaging question format.

These are the main issues you need to be aware of when using them...

1. The slider position can greatly influence the slider score

The main difference between a slider and a comparable single-choice range question is that the slider normally has a prompted starting point, which influences how people answer. Without a prompted starting point (as with standard button selection), respondents have a tendency to click in the middle of the range. If the slider's starting anchor point is initially placed at the centre of the slider range, it is like asking a 5-point range question with option 3 already selected. Our instinct is to move it up or down, and as a result, when you examine the distribution of answers, you see what could be described as a double camel hump.




If the slider's starting anchor point is positioned at the zero of the scale, respondents move their answers up the scale, but the answers are more weighted towards the bottom, and in fact across the whole range you tend to see flatter, more evenly distributed data.

The effect can be significant. See the chart below for a real example of 3 groups of people asked the same set of questions: one group with a 9-point unprompted standard button range, one with a centre-anchored slider, and one with a zero-anchored slider.



So which is best?

I would argue that the aim of any range question is to encourage a good, even spread of answers, and on that basis a zero-anchored slider delivers the most even distribution: button-selection answers are too centre-weighted, and centre-point sliders underscore the middle point.

This is fine for what are technically described as "unipolar" scales, which naturally start at zero, e.g. a 9-point rating scale. However, there is an exception where you might want to consider centre-point positioning: if, for example, you are trying to understand whether people agree or disagree with a topic, you might actually want to discourage centre-point scoring, as this is in effect opting out of giving an opinion. In that case, centrally anchored sliders might be preferred.



2. Range labelling protocols

On a more subtle level, the labelling, and even the tagging of the point scale on the slider, can influence the response. If there are many labels, respondents tend to treat the slider like a single-choice point range, while fewer labels encourage respondents to exercise greater freedom. Numbers are a less prompting way of allowing respondents to make more refined choices.

The chart below is a good example of the anchoring impact of range points. This was a zero-anchored slider with a 100-point scale range, but the strongly delineated anchor points encouraged massive peaks in answers at each anchor point on the scale.


We recommend an understated 5- or 9-point numbered range, with the extremes labelled at each end, to encourage the smoothest and most even distribution of answers.

3. Slider sizing

The width of a slider can also influence response. With micro sliders, respondents are more likely to move right to the end. If the slider range takes up the whole width of the page, respondents are less likely to select the extreme values than with a narrower visible range.





As a result, try to stick to a standard width when using sliders.

4. Use of Iconography to emphasise choices

There are many ways of decorating a slider response range to make answering the question more fun, which in turn encourages respondents to put a bit more effort into answering. This can be valuable in increasing the quality of response, but such iconography can have an overt influence on the results, so care must be exercised in its use.



5. Warning: you may see higher levels of neutral scoring than you want with 5-point slicing of the data

If you are trying to differentiate opinion using centrally anchored sliders, you would be advised not to slice up the data using a 5-point range, as this will disguise a certain amount of positive or negative movement.




This is because respondents may move the slider only a small amount up or down to register a positive or negative opinion. In the example above, all three of these responses, when consolidated into a 5-point range, would score ‘3 = no opinion’, when it is clear that they have all expressed an opinion.

This anomaly can be solved by using more precise ranges when slicing the data (which should normally be stored on a 100-point scale before being partitioned). We recommend a 9-point slice; then, if desired, the resulting slices can be consolidated into a 5-point range thus: 1&2, 3&4, 5, 6&7, 8&9.
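As a rough sketch of that recommendation (the function names here are illustrative only, not from any particular analysis tool): raw scores kept on the 0-100 scale are cut into 9 equal slices, which can then be collapsed into the 5-point range exactly as described above.

```python
def nine_point_slice(score):
    """Cut a raw 0-100 slider score into one of 9 equal slices."""
    if not 0 <= score <= 100:
        raise ValueError("slider scores should be stored on a 0-100 scale")
    return min(9, int(score // (100 / 9)) + 1)

def consolidate_to_five(slice9):
    """Collapse a 9-point slice into a 5-point range: 1&2, 3&4, 5, 6&7, 8&9."""
    return {1: 1, 2: 1, 3: 2, 4: 2, 5: 3, 6: 4, 7: 4, 8: 5, 9: 5}[slice9]

raw = [3, 42, 50, 58, 97]
five = [consolidate_to_five(nine_point_slice(s)) for s in raw]  # [1, 2, 3, 4, 5]
```

The point of slicing at 9 first is that the neutral band (slice 5) is narrower than a direct 5-point cut, so small movements off centre are less likely to be swallowed into 'no opinion'.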

6. Strength of opinion: the extra variable you get with sliders

With any standard range question, the levels of agreement are fixed, with usually very blunt choices such as ‘agree’ or ‘strongly agree’. With a slider, respondents are able to define their own levels of agreement, leading to more precise differentiation and better understanding of the strength of opinion.

7. Right-leaning movement

There is a slight natural tendency for respondents using centre-anchored sliders to move the slider to the right, if they have to move it at all.




This small factor is only really a problem when trying to match historical benchmark data from traditional range questions. Overall, a shift of approximately 5% towards the right can be observed.


8. Up and to the right is positive!

Respondents tend naturally to assume that moving a slider up or to the right represents the highest score and the most positive agreement. Care should be exercised when breaking this convention and marking the scale the opposite way, with the negative choices at the top or to the right.



Interpreting slider data

For many of the reasons explained above, slider data must be interpreted differently from range question data. In particular, researchers should establish protocols for dealing with respondents who agree or disagree with a statement only very slightly.

Slider data is often more refined than data from conventional range questions, and this is where its real value lies. For example, scores of 57% versus 59% in a range question might be indistinguishable, but the same difference between the slider ratings for, say, 2 ads could be statistically significant; on a £1m poster campaign spend, it could represent £20k more value.

The best way to compare is to use the absolute % range data. Unfortunately, most research data-handling tools struggle to deal with this, so some smarter research companies ask us to process the data in 2 ways: a conventional 5-point split, and a variant where only the 50% point is recorded as ‘don’t know’, with 51% and above, and 49% and below, included in the slightly-agree and slightly-disagree ranges.
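A hedged sketch of those two splits side by side. The strict-50 rule is as described above; the other band boundaries here are my own illustrative assumptions, not a documented specification.

```python
def conventional_five_point(score):
    """Conventional 5-point split: 0-100 cut into 5 equal bands."""
    return min(5, int(score // 20) + 1)

def strict_neutral_five_point(score):
    """Variant: only an exact 50 counts as '3 = don't know';
    49 and below, and 51 and above, carry an opinion."""
    if score == 50:
        return 3
    if score < 50:
        return 1 if score < 25 else 2   # disagree / slightly disagree
    return 5 if score > 75 else 4       # slightly agree / agree

# 49 and 51 both land in band 3 under the conventional split, so the
# slight opinions they express are lost; the variant keeps them apart.
```

Run against scores of 49 and 51, the conventional split returns 3 for both, while the variant returns 2 and 4 respectively, which is exactly the distinction the second processing pass is meant to preserve.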


When to use sliders and when not to?


1. When asking respondents to make direct comparisons

Sliders are most useful when asking respondents to make direct comparisons between option choices, for example rating the appeal of different designs of a product, common attributes of a brand, or a choice of colours: circumstances where the subtle differences between ratings are important.

A good example of sliders being used effectively in a survey comes from Sony Music UK, who use a set of vertical sliders to ask respondents to calibrate how they feel about different aspects of a music artist's character. In conventional surveys this would be a bank of 10-point grid options. We discovered that switching to sliders resulted in respondents spending more time and applying more thought to the process. It encouraged respondents to review and tweak their previously set range choices as they went along, something you rarely, if ever, see with grid answering.



Sliders are of less benefit when the interrelationship between question options is less important: for example, when asking a set of unrelated attitude statements such as:

How much do you agree or disagree with these statements:
I watch a lot of TV
I believe in capital punishment
I like Pepsi

In this case there is no relationship between these 3 statements, and by grouping them together in a slider set you will end up with relative prompting effects. For example, you may well like Pepsi, but you may also have very strong views on capital punishment, which would prevent you from moving the Pepsi slider as high as you would if, say, the comparison were between liking Pepsi, vinegar and salt water.

In other words, when asking respondents to move sliders that are grouped together, you are asking them to make relative comparisons, and the options being compared have a lot to say about the answers you get.


2. When you are interested in the spread of the strength of opinion.

Sometimes people simply do not have strong opinions about things, or find it difficult to distinguish between choices. Sliders can help you measure the strength of opinion and judge how important a choice is to respondents. By analogy with mathematical differentiation, sliders give you a clearer way to identify the rate of acceleration, as opposed to just the speed.

3. Because it is more fun!


This might be regarded as a facetious comment, but it is important to understand how significant a factor respondent engagement is in the quality of data you get back from online surveys. When used in the right way, sliders can make answering questions more fun, and there are lots of creative ways to enhance this further.



Research shows that this can lead to respondents paying more attention to the answers they give.


4. The under-12 rule

The novelty value of sliders can quickly wear off when respondents are presented with too many banks of them. Because it takes a little more attention to move a slider than to press a button, we recommend switching to a different question format if more than 12 options are being asked about (roughly enough to fill one page without scrolling).

5. Price setting

Sliders are very useful for price setting. Within GMI Interactive's system, you can link sliders up to a set of data point ranges with a lower and upper limit, so they can be used to allow respondents to define, for example, a price or a date range.
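The underlying mapping is simple enough to sketch. This is a generic illustration of linking a slider position to a bounded value range, not GMI Interactive's actual implementation: the 0-100 position is interpolated linearly between the configured lower and upper limits.

```python
def slider_to_value(position, lower, upper):
    """Map a 0-100 slider position linearly onto [lower, upper],
    e.g. a price between 5.00 and 25.00."""
    if not 0 <= position <= 100:
        raise ValueError("position must be between 0 and 100")
    return lower + (upper - lower) * position / 100

price = slider_to_value(75, 5.00, 25.00)   # 20.0
```

The same mapping works for date ranges by interpolating between two timestamps instead of two prices.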



6. Linked sliders

Sliders can also be linked together to allow respondents to allocate a budget, or time spent, across different options, as in the example below. This is a much more effective way of asking this type of question than the traditional approach of asking respondents to type figures into boxes.
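One way such linking could behave (this rebalancing rule is an assumption for illustration, not a description of any specific product): when the respondent moves one slider, the remaining sliders are rescaled proportionally so the allocations always sum to the budget.

```python
def rebalance(allocations, moved, new_value, budget=100):
    """Set allocations[moved] to new_value, then rescale the other
    sliders proportionally so everything still sums to the budget."""
    others = [k for k in allocations if k != moved]
    remaining = budget - new_value
    old_total = sum(allocations[k] for k in others)
    result = dict(allocations)
    result[moved] = new_value
    for k in others:
        # keep each remaining option's share of the leftover budget
        share = allocations[k] / old_total if old_total else 1 / len(others)
        result[k] = remaining * share
    return result

split = rebalance({"TV": 50, "Print": 30, "Online": 20}, "TV", 40)
# split == {"TV": 40, "Print": 36.0, "Online": 24.0}
```

Proportional rescaling preserves the relative priorities the respondent has already expressed, which is arguably friendlier than simply capping the total.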


7. Double point sliders

For more specialist tasks, you can also use 2 sliders on one range to allow respondents to set lower and upper limits for things; this is particularly useful for price-testing products, as in the example below.





8. The Flag drop alternative

As a complete alternative to sliders, GMI Interactive has developed a flag drag-and-drop question format, which can be used for the same types of task as a traditional slider. The benefit of this format is that it really helps to reinforce the relative position of choices, and in effect it also serves as a ranking-style question.



Sexy Questions, Dangerous Results?

At the recent NewMR Festival, Bernie Malinoff delivered a presentation highlighting some of the issues surrounding the use of more "sexed-up" Flash question formats in surveys, and I have noted that it has stirred up quite a bit of follow-up commentary and debate in the last few weeks.

As part of his experiments, he showed an example of the negative impact that an over-elaborate, confusing Flash-format question can have on data quality and on respondents' willingness to take part in future surveys.


Well I wanted to contribute to this debate and so here are my thoughts...


I think this highlights an issue about the quality and standard of survey design, not just about using Flash per se...

We have done about 3 years of work looking at how to design and deliver Flash questions in surveys effectively, and have published a couple of papers on the topic. Along the way, we have experienced very similar issues to Bernie's, where a more creative question approach can have a negative effect if it is poorly designed or confusing for respondents to use.

Yes, poorly designed questions can damage data

When designing more creative question formats, if you do not think about their ergonomic design, or you try to over-elaborate them or expect them to do too much, things can go badly wrong, triggering drop-out and respondent dissatisfaction. We have found, for example, that some drag-and-drop questions can be very confusing for respondents if you make them too complicated. And with slider-format questions, where the slider itself can prompt the answer, the data that comes back can be quite different from what you might expect when asking the same question in a more traditional way (see my other blog post, How to use sliders, to find out more on this).

But get it right and more creative question approaches can be much more effective...

What I want to highlight is that if you get it right, a well-designed and thought-through Flash question can really work: it delivers exactly the same balance of data as a traditional question (though with less neutral scoring, which is an indirect measure of respondent boredom), radically improves respondent enjoyment levels, and reduces drop-out and straight-lining effects. In experiments we have been able to reduce drop-out from surveys by up to 75% and increase data click counts by as much as 50%, so when you get it right the benefits really are quite significant.

It's all about simplicity, visual clarity, aesthetic appeal, and working on the flow (e.g. using techniques like animated auto-nexting of question options so you don't flood the screen with options)...



Appreciating the skill and experience needed to design surveys

Just as with TV adverts, there are good ones that sell products and bad ones that don't. You could liken the use of Flash versus standard radio-button questions to using TV versus print advertising. Sure, you can say that TV advertising is a lot more effective than print, but you can easily make a dreadful TV ad that performs worse than a print ad. It is a creative process, and in the case of advertising it depends heavily on the quality of the agency you employ to do the task.

There is a thought out there in the industry that just by dumping a couple of off-the-shelf wacky Flash questions into your survey you will make it a whole lot more engaging. As Bernie has shown, you are just as likely to confuse people by chopping between formats and to highlight how boring the rest of the survey is, and the net value will be negative.

When I discussed this with Bernie he also highlighted a very salient point: survey programmers are usually tasked with speed, not 'thinking'. Survey production is often deemed something the IT department does under the researcher's instructions, and is not really seen as anything other than a production-line process. So while a good survey programming team is probably well aware of the main issues that affect data quality in survey design, they often take their cues from researchers who may have little or no knowledge of these issues through lack of experience.

Next generation survey design companies

I believe making surveys is a holistic creative process that requires looking not just at the design, but at the wording and structure too. It should be treated with the same respect as the design of, say, an advert, and with the understanding that it is not something anyone can be universally successful at implementing, or that everyone can do for themselves. It requires a lot of skill and experience, in the same way that creating advertising does.

I see a future where, as more and more creative techniques become available, a new breed of specialist survey design company will emerge in the market research industry: firms that work alongside MR companies, are treated with the same respect as design companies or advertising agencies, and are total experts in the creative execution of surveys.

P.S. 

I did email Bernie to say I would post this blog, and I would like to thank him for his sincere openness in encouraging me to add my views to this debate. He made it clear that his presentation was put out there in the first place with the goal of fostering dialogue on this topic, which he has clearly managed to achieve. I totally agree with him that more industry standards need to be established as new question formats enter the mainstream.



If anyone would like some further reading, I would be happy to distribute the papers we have published on how to engage respondents in online research, and I would recommend getting hold of the brilliant book "The Art of Asking Questions" by Stanley L. Payne, published in 1951, for the definitive words written on this topic half a century ago.











MR Personality test

Do you want to find out what type of market researcher you are? Well, I am developing a market researcher personality test to explore the character traits of different types of market researchers.

This is a link to a prototype version of the survey, and I would love as many people as possible to take it, both to set some benchmark norms and to give feedback on what other questions I could ask.



Once I have enough benchmark data I will build the norms into a feedback mechanic so you can see how you score versus the rest of the industry...


The 7 power phrases to boost question responses

1. "We challenge you..": This phrase seems to trigger people's competitive spirit, and when used in the right way you can often farm two or three times as many answers out of people.

2. "You have 1 minute..": It causes a bit of a panic reaction among respondents, but it is a great way of encouraging spontaneous thought. And whilst one minute does not seem like much time, it is in fact 30 seconds longer than the average time a person naturally spends answering an open-ended question in most surveys.

3. "In no more than 5 words...": I don't know why, but people like describing things in a few words like this; it's quite fun. Psychologically, five sounds like you are asking them to limit themselves, but in reality you are extending them: if you simply ask people to list words they would use to describe something, you often get only two or three, so five is good!

4. "Every little detail helps..": This phrase can increase the volume of feedback by 10-20% depending on how you use it. OK, a relatively small amount, but hey, every little helps!

5. "Because…": Why do you want to know this? Because... You may not think it is important to explain what you are doing or why, but respondents are often interested, and if you let them into your motives you can encourage more active participation.

6. "Imagine…": This word can take you all over the place and can unlock whole new mindsets among respondents. The more imaginatively you use this word, the more imaginative the answers you get. "Imagine you had £1,000 to spend online buying anything you liked in the next hour. What would you do..."

7. "This part is voluntary…": This is one of the most powerful phrases you can use in an online survey. It is counter-intuitive to make questions voluntary, but it is an extremely powerful piece of behavioural psychology that can actually encourage greater levels of active participation. We experimented with this at the end of a quite boring 20-minute survey: for the last section we said "this last part is voluntary...". Over 90% agreed to do it, mostly I think out of curiosity, and what we found was that they spent 60% more time on the task than a control cell who were simply asked to do it anyway. The psychology is that once you have said yes, you feel you have made an emotional commitment, and we are all socially conditioned to do what we say we will do.


Using Greek rhetoric to improve the level of feedback from your surveys


Building "Ethos", one of the three tenets of Greek rhetoric, is the idea of establishing a bond of trust between you and your audience. Aristotle showed how important this is when you are trying to get people to believe what you say in a court of law, or to get someone to do something for you, and I want to show how you can use aspects of this thinking to help increase the feedback and responsiveness of your market research.


We are so often asking people to do research for very little incentive, and about very low-interest topics. With online research in particular, this can lead to a very casual answering approach where people don't really care what they say. People who do not buy into the reasons why you are doing the research do not care about the answers they give.

In the research we have conducted, we have found that if you can work out how to build a bond with respondents at the outset of a survey, you can significantly increase participation levels and the quality of feedback, and the improvements can be quite dramatic. In experiments we have witnessed two-fold, three-fold, even six-fold improvements in responses simply by priming people in the right way when they start a survey, and using the principles behind Greek rhetoric to establish Ethos with respondents is one of the most effective ways of doing this.

So how did Aristotle recommend you go about establishing Ethos?

Well, it is all about establishing a bond with the listener, and he explained there are lots of ways this can be done.

The key thing he espouses is openness: a "readiness to reveal things that one might be expected to conceal or a promise to tell the whole story from the beginning". Thought of from a survey point of view, we often hide the motives for doing a piece of research from respondents. Yet in so many cases the reason you are doing the research in the first place is to try to make better products or improve services, which, if explained well, can be used to motivate respondents to do a survey.

Below is an example of this technique in action. We ran an experiment asking people to tell us what they think of baby wipes, and for a test cell we added the message below. This had a quite measurable impact: respondents who saw this image spent nearly 50% more time answering the question.



And laying claim to shared values: "by laying claim to certain beliefs which agree with accepted social values a speaker can with contrived inadvertence reveal something of his character". An example of this can be seen in the fantastic market research episode of Mad Men in season 4, when the character Faye Miller went to the length of changing her entire outfit, hairstyle and make-up, and removing her wedding ring, to interview a group of secretaries so she came across as one of them. You can transpose this thinking to the very design of a survey to appeal to a particular target audience: if you want women to talk candidly about a topic, it may be an idea to dress the survey up as if it were a woman's magazine, for example.











