The power of imagery in surveys


Over the last few years we have conducted a number of experiments exploring the role and use of imagery in online surveys, mostly as part of wider research looking at how to engage respondents effectively in research. A lot of this learning is hidden away across several papers and reports, so I thought I would try to consolidate it into one blog post. This is my guide to the use of imagery in online surveys.


Now, imagery can have a very powerful role to play in online research, in the same way as it is used in just about any other form of creative communication: as a means to communicate ideas more effectively, to entertain, to engage and to trigger thinking patterns. But images can also be quite disruptive when used in the wrong way. There are some definite do's and don'ts to think about when using imagery.

I have heard a lot of people say they are concerned about using images in surveys because they can bias the answers people give.  In some quarters there are even people actively campaigning against the use of images in surveys.  My viewpoint is this... YES, it is true that images can steer answers, in exactly the same way that the words you choose can steer answers, and for that reason you have to use them very wisely. But don't use this as an excuse not to use them at all!  It is important that you understand how they can influence answers and the reasons why. Armed with this information you can then set out to use imagery in surveys with more confidence, and I hope this blog post will help.

I will start with the do's...

1. Using imagery to trigger the memory

Some of the early experiments we conducted investigated the role imagery could play in stimulating memory.  We ran a series of experiments in which we asked people to recall things in a survey, principally brands and advertising, and for some test groups we prompted them with various visuals and measured the impact.  The visuals were not there to prompt them directly, but to get them thinking.

The example below is from one of the experiments we published in an ARF paper.  We found that visually stimulating respondents with images of different advertising media, prior to asking them to recall some recent advertising, encouraged them to list 60% more ads.


This was not an atypical figure: when used creatively, we found images could sometimes double the number of items respondents recalled. The more startling and show-stopping the images you use, the better.

If you compare average brand awareness when you prompt with a picture of the brand rather than just the name of the brand, there are often enormous differences.

Take the example below, from an experiment we conducted to explore this issue, where we measured awareness and purchase consideration of a range of yogurt brands: half the sample were shown just the brand name, the other half an image of the product.  Average brand awareness increased by 20% and purchase consideration by 40%.  This is perhaps not surprising if you think about how we process shopping brands almost entirely visually; we rarely hear a brand name used in isolation unless the brand advertises on TV or we actually purchase it and talk about it.  So for a brand like "Frijj", which 70% of the population are aware of when you show the packaging, only 40% can recall it by name alone.  You could argue that without imagery, measuring brand awareness of supermarket brands like this is almost meaningless.




2. Using imagery to stimulate the imagination and trigger emotional responses

When we asked a group of mums to talk about their experience of using baby wipes and showed them the picture of a baby below, along with a bit of emotional rhetoric, the average response went from 18 words per respondent to over 35.



Likewise, imagery can really help stimulate our imagination, and again the more resonant the imagery used, the more powerful the impact it can have.

Here is another classic experiment we did, where we asked people to recall foods they dislike eating and showed them this image of a woman expressing disgust.  This prompted respondents to list 90% more food items than the simple text-only variant of the question did.


3. Using imagery to help communicate ideas more effectively...


Have you ever had to start a survey with a big block of text explaining what respondents have to do, like the example below?


The problem is that very few people actually read it properly.  Breaking this text up with imagery to help communicate each point can have a major impact on helping respondents read and absorb the message, like the example below (this is taken from the work we did with John Pawle at QRi (then QIQ) to develop their pioneering Lovemark surveys).

 
 
When we examined the time invested in answering a controlled set of follow-on questions, including some open-ended questions, we found that respondents who were given the visualised introduction to the survey invested 40% more time and gave us 20% more feedback.

We explored this type of approach to using imagery at the start of a survey in quite some detail in research for an ESOMAR paper I published with Deborah Sleep at Engage Research called "Measuring the value of respondent engagement", and found introductions like these also had a major influence on improving overall survey completion rates.  The majority of dropout from surveys occurs in the first five frames, and these visual intros proved to reduce dropout, on occasion, by more than half.

4. Using imagery and iconography to help interpret and digest option choices more effectively


Strings of words are very difficult to digest at a glance, and when you ask people to evaluate a series of them, as we do with grid questions, it is often difficult to differentiate between the individual options.  That means respondents have to hold in their working memory what each option choice means, which is hard work.



Take the example above. At a glance, in the text version all the options fade into one; you have to read each one carefully to understand it clearly, and if you are asked this question repetitively, about 10 or 20 books, it is difficult having to read and clarify the options each time. With icons, once you have read the first couple, they are almost instantly decodable, making it much simpler to answer this type of question repetitively; you can view the icons in your peripheral vision, meaning you don't need to hold anything in your working memory.  As a result, questions using visuals like this are easier to answer and deliver back better data.

5. Using imagery to generate more consistent data across countries

We have also discovered that using imagery and iconography in rating scales can deliver more consistent answers across countries, particularly when conducting international research, where words can be interpreted differently in different languages and have different meanings in different cultures.  With some exceptions, images tend to be more consistently interpreted than words.

Take the example below, where we compared across several markets the balance of answers to a question using a standard Likert-style scale versus a facial scale.  Whilst the overall pattern of answers was identical, there was a 25% reduction in intra-country data variance. If you are interested in reading more about this, I would point you towards the ESOMAR Asia 2012 paper that I wrote with Duncan Rintoul from the University of Wollongong, "Can gaming technique cross cultures", which discusses this in more detail.


6. Using imagery to make questions more fun and engaging to answer

In this same paper we explored the value of making selection processes more fun and engaging, and we found consistently, in every market we tested, that more fun meant better-engaged respondents who focused more on their answers. Anyone who has seen me present will recognise the example below, which I cite in nearly every presentation I do on this topic. We added, to the end of a standard set of sliders, a character who would move from being asleep to standing up and applauding, as a visual proxy for how much people enjoyed watching. This improved enjoyment scores and consideration time, increased the standard deviation of the answers by 20% (an indication of data quality) and reduced intra-country variance by 23%.



7. Using imagery to help maintain people's attention in a survey

One of the challenges we face in survey design is maintaining people's attention levels as they proceed through a survey. In experiments where we have placed the same multi-choice question at different points in the survey, we have found the number of choices on a list respondents can be bothered to tick can fall off by, in some cases, up to 50%. We have found that images can play an important role in maintaining respondents' attention levels as they start to get bored answering questions.

The example below illustrates the impact images can have on maintaining respondent attention.  In this specific experiment we asked 600 respondents, on a multi-choice tick list, to mark the times they recalled drinking water on the previous day, and using split cells of respondents we rotated the position of the question so a third answered it at the start, a third in the middle and a third at the end. Half the sample were asked the question using text only, the other half using an image.




For the text-only group, an average of 3.1 water-drinking instances were recorded at the start of the survey; by the middle it fell to 2.4, and at the end it had dropped to 2.1. When we added the simple image of a water tap to the question, we found significantly lower levels of answer decay, moving from 3.2 to 3.0 to 2.6. The imagery did something to help motivate respondents who were flagging to put a bit more effort into answering the question later in the survey.


8. Using images to improve the all-round appeal of the survey-taking experience

Well-designed, image-based surveys are perceived to be shorter, more enjoyable and less repetitive, and have higher completion rates; time and time again in experiments we have been able to demonstrate this.

The decorative power of imagery can have an all-round effect of making people feel more comfortable and interested in the survey experience.  I can show examples of surveys with 50% improvements in completion rates simply as a result of their visual appeal.

Now the Don'ts...

These are some important caveats to the use of imagery in surveys, because it's not all plain sailing!
Here are the key issues:


1. Be wary of the literal interpretation of images

Warning: respondents tend to be fiercely literal in the way they interpret images.

Earlier in this post I illustrated the differences in brand awareness when showing pictures compared to brand names. In most cases brand awareness increased, with the exception of the two most familiar brands, whose awareness scores fell back slightly.  In both cases this was because of the choice of imagery, which gets interpreted more literally.




We conducted another experiment to explore the impact of the literal interpretation of imagery, where we asked respondents to evaluate a range of musical artists and varied the picture we showed to see how it affected each artist's rating.  We did some preliminary research in which we asked people to rate a range of different pictures of each artist, from which we picked the most popular and least popular image of each one.  We then asked a separate group of respondents to say how much they liked each artist, splitting the sample and showing half the highest-rated picture and half the lowest-rated picture. The choice of imagery had a dominating impact on respondents' ratings, with an average 20% variance in scores.




Our solution in this case was to work with a basket of imagery (see the example below).


The literal interpretation of imagery becomes even more problematic when you are asking people to make emotional associations.  We have found that the choice of imagery can have such an all-dominating impact that this is the one area where we feel using imagery should be avoided, unless the pictures you are using have been very carefully selected and calibrated.   In the experiment below we looked at the variation in the emotions people associated with doing different things when using imagery of men versus women expressing each emotion.  We witnessed upwards of 100% swings in selection rate as we swapped out imagery.


2. Anchoring effects of visuals in rating scales

Visuals can also have a very strong anchoring effect when used in rating scales, and here is an example...

Below are the results from an experiment we conducted to look at the differences in answers between facial and numeric rating scales.  In this experiment we asked people to say how much they enjoyed doing a range of domestic activities, from watching TV and cooking through to doing the washing up and cleaning.  When we used a numeric rating scale the answers were nicely, evenly distributed, but in the responses to the facial scale we found a disproportionate number of respondents choosing face number 5.    The reason was that the emotional range used here was too strong to measure domestic activities: if you look at these faces, nobody is that ecstatically happy about watching TV or that angry about having to do the washing up, so in effect it narrowed down the choice range. This is because respondents again interpreted the images literally.




We then conducted a follow-up experiment where we used this woman's face instead, which expressed a more neutral range of emotions, and found it broadened out the distribution of answers.






There is a lesson here, though, when considering using any form of iconography, be it smiley faces, thumbs up and thumbs down or whatever: you must be aware that it can narrow down the distribution of answers.   The benefit of using simple number ranges is that we can assign our own meaning to what the numbers mean, e.g. on this scale 1 = washing up, 7 = watching TV.

Now, I want to underline that it might be easy to use this as an excuse to avoid using more emotive scales in surveys. What you have to remember, though, is that numeric rating processes, because they are less emotional, are more boring to answer, and as a result encourage far higher levels of straightlining and ill-thought-out answers when overused in a survey, which is just as much of a problem.  So it is always a compromise, and my advice here is not do or don't but maybe!  Choose them carefully and make sure they adequately reflect the range of choices you are trying to measure.

So, for example, if you are asking people to rate how much they like fruit - well, we all basically like most types of fruit, it's just a question of how much, so using a thumbs up/thumbs down rating you will lose all the nuance, as respondents will only use the top half of the scale.  If, however, you are asking people what they think of a range of political opinions, then thumbs up and thumbs down may be quite a sensible range choice to use.

3. Poor quality logos and even logo size differences can alter perception scores

We have also investigated perceptions of brands based upon the style and quality of the logo used, and found small but nevertheless significant differences in rating scores across a variety of dimensions based upon the choice of logo.

In one such experiment, testing 20 brands, we saw shifts averaging 10% in brand “liking” scores and 7% in “trust” scores based on the style of logo used (and individual cases recorded shifts of up to 20%). Poorly rendered logos saw an average 18% reduction in liking perception and 13% in trust. The size of the logo presented seems to have a small but significant influence too: presenting logos larger reduced liking and trust scores by an average of 4%.

4. Choosing the right images

The selection of imagery for a survey IS A CREATIVE SKILL: raiding the clip art library of PowerPoint or spending five minutes on Google Images is not good enough.  Think about how images are used in advertising, and that is the standard you need to aim for.

The whole point of an image is to help respondents emotionally engage with a task and simply and effectively communicate a message.

So this type of approach is not good enough...




Summary of advice


The selection and use of imagery in surveys is both a science and an art form.  Images can play a critical role in engaging respondents and improving the quality of feedback, but used thoughtlessly they can be highly disruptive.

Some researchers' reaction to this is simply to avoid using them in surveys, which might be a reasonable enough response if you don't have confidence in how you are going to use them.  But I personally think this is chickening out!

Think of imagery as a weapon you can use to bring survey questions to life....










Assessing the nutritional value of your survey?




Is the feedback from your survey the equivalent of a double quarter pounder burger with extra cheese and a side of fries?



Yep, you get the quick hit you want from it: you are filled up with facts - people strongly agreeing that your brand is great, and a whole load of nice descriptive words that people associate with your product that you can talk about at your next marketing meeting.

My question is... have you assessed the real nutritional value of your survey?  Are you actually getting any actionable information that you can use to improve your product or services, or are you feeding your organisation a whole load of pretty useless information that is clogging up the communication arteries?

If you interview 1,000 people and 50% of them say they are aware of your brand and they rate it highly, this might make everyone in the marketing department happy, but apart from slapping yourself on the back, what are you actually able to do with this information? To know that your brand is seen as modern, technologically advanced and innovative - is that just nice to know, or is it going to help you move your brand forward?

I see so many surveys that are just plain too fat!

My primary target for this type of criticism would be customer feedback surveys, which are often stuffed full of benchmark questions that tell you how much people rated various aspects of a company's service, rather than focusing on having a conversation with customers to find out how they feel and what the company could do to improve.

I just came back from a trip with an airline - I think it is probably diplomatic for me not to mention their name - and completed their customer satisfaction survey. It included over 100 questions covering in fastidious detail every aspect of my journey with them. All hail the company's attempt to be thorough and its concern about every detail of its service, but the survey was just overweight.

I was asked six specific questions about the staff customer service: were the staff friendly; were they open and up front; did they make things easy for me; did they care if I had a good experience; did they treat me as an individual, etc. All of these were asked on an x-point scale, and I clicked the same option for all six questions because their service was, well, fine; they were nice; I had no opinion about it! In fact I didn't really notice them, I am afraid. So I didn't need to answer six questions.

I wonder how many people have done this airline survey - I guess well over 50,000?  I wonder how much, over the course of a year, the rating of their staff customer service actually changes?  I bet the first 500 respondents on average deliver pretty much the same answer as the last 500.  So why ask all 50,000 people all six of these questions? Why not just ask one in ten people one of these questions and aggregate it out?  I bet that would be statistically good enough to get a good steer on their customer service.
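To put a rough number on that hunch, here is a minimal sketch in Python of the standard margin-of-error calculation for a proportion (the respondent counts are purely illustrative): it suggests that a few hundred answers per question already pin a percentage down to within a few points, so sub-sampling the questions would cost very little precision.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion measured
    on a simple random sample of n respondents (worst case p=0.5)."""
    return z * math.sqrt(p * (1 - p) / n)

# Compare a modest subsample with the full 50,000 completes
for n in (500, 5_000, 50_000):
    print(f"n = {n:>6}: +/- {margin_of_error(n):.1%}")
# n =    500: +/- 4.4%
# n =   5000: +/- 1.4%
# n =  50000: +/- 0.4%
```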

What is more, this extremely long survey was also totally self-obsessed.   I was not asked, for example, to rate how good their staff customer service was compared to other companies, and whether I thought it needed to be improved or was good enough. That really was the only question I needed to be asked in this case.

What was ironic was that, in amongst this whole survey, it did not ask me about the one thing that had actually bothered me about my flight...

Why could they not have just asked me one question - how was the flight?

5 tips for tackling survey obesity

1. If it's a tracker or customer feedback survey where you have some existing data, I suggest the first thing you do is get someone to go through the answers to that survey and look at which questions predict the answers to other questions, and which questions are actually delivering unique insight.

2. Work out statistically how many people need to answer each question to get a statistically accurate indicator.  For some questions it might be closer to 50 people than 500 people.

3. Next question: has anyone sat down to find out which questions are actually being used by anyone in the organisation?  Ask the marketing department what they would pay for the answer to each question if they had to buy the data back.  Ask them what decisions will be made as a result of each question.  If they don't have an answer to this, then challenge them on why they want to know.

4. Has anyone thought about exactly which questions would be useful to ask each customer, and about customising which questions were asked based upon the attitudes of each customer?

5. Randomise:  I think the problem we face, particularly when we are doing exploratory research, is that we have slipped into the habit of using a scattergun approach to asking questions in the hope that some will deliver back some insight.  There is nothing wrong with doing this at, perhaps, the pilot phase of a survey.  There is nothing wrong with asking lots of questions so long as I don't have to answer all of them - where there is doubt I would advocate randomised splitting of samples, so you ask more questions but each to fewer people!  (See the last post on the Monte Carlo method, and the rough sketch below.)
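To make tip 5 concrete, below is a minimal sketch in Python of what I mean by randomised splitting. The question bank and the numbers are hypothetical; the point is simply that each respondent sees only a couple of questions from the bank, yet every question still collects a healthy sample.

```python
import random

# Hypothetical bank of optional questions (illustrative wording only)
QUESTION_BANK = [
    "Were the staff friendly?",
    "Did the staff make things easy for you?",
    "Did the staff treat you as an individual?",
    "What one thing would most improve our service?",
    "How do we compare with other companies you use?",
]

QUESTIONS_PER_RESPONDENT = 2  # rather than asking everyone everything

def assign_questions(respondent_id):
    """Pick a small, reproducible random subset of the bank for one respondent."""
    rng = random.Random(respondent_id)  # seed on the id so reloads don't reshuffle
    return rng.sample(QUESTION_BANK, QUESTIONS_PER_RESPONDENT)

# Rough check of how many answers each question collects from 1,000 respondents
counts = {question: 0 for question in QUESTION_BANK}
for respondent_id in range(1_000):
    for question in assign_questions(respondent_id):
        counts[question] += 1

for question, n in counts.items():
    print(f"{n:>4}  {question}")
# Each question ends up with roughly 400 answers (1,000 x 2 / 5) -
# usually plenty for a stable read, as the margin-of-error sketch above suggests.
```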


A Monte Carlo approach to asking questions


In the early days of the internet, when designing websites, you would often have a discussion with clients about routing to different pages, setting out which link should take you to which page, and after that to which page. Navigating a website in the early days was like going through narrow tunnels, and then you had to back out of them to get anywhere else. Then some bright spark realised you could have more than one link to each page, on more than one page, so you could navigate from one part to another more easily.

I make this point because I think we have a similar degree of tunnel thinking when we write surveys, in that we only ever think of asking a question in one way. What I would encourage you to think about is the opportunity of asking questions in more than one way.

How often do you struggle to pin down the exact wording of a question in a survey and find yourself in two minds about how to word it? Rating something is a classic quandary. Do you ask them how much they like it; how appealing it is; how keen they are to buy it; how much better or worse it is than other things, etc.? Asking people to give open-ended feedback is another area where a possibly infinite number of ways to word a question exists, and I have had a career-long obsession with the best way to word this type of question. For instance, if you want to ask for feedback about a product you might word it "please tell us what you like or dislike about this product", or "what do you think about this product? what do you like or dislike about it?", or "if you were criticising this product, what would you have to say?", or "what is the best thing about this product, and the worst thing?". Everyone answering these questions will respond in a slightly different way. Some will deliver better answers than others; some will work more effectively with some groups of people than others. Some may not deliver the same volume of feedback but more thoughtful responses. Some may trigger more thought than others.

OK, so the survey has to go live today and you don't have time to test and you are not sure which wording will generate the most feedback; what do you do?

The approach most people take is to pick the one wording you think is best, or the one a small committee of you thinks is best. But have you ever thought about just randomly asking this question in every conceivable way across respondents and then mashing up all the answers?

Now, I have been playing around with doing this of late. It's not difficult to do from a technical point of view and I am really loving the data I get back (sorry not sure if you are supposed to love data or if that phrase is appropriate).

What I am finding is that in closed rating questions, asking a question in a random basket of ways appears to deliver* more stable answers that iron out the differences caused by question interpretation effects, and for open ended questions it appears to deliver* a greater range of more nuanced feedback than asking a question one way.

I would describe this as a Monte Carlo approach, because that is essentially what this is: what I am doing is netting out a mass of random predictions of the best way to ask each question. I have no way of knowing which is the most accurate, but netting out their predictions is more reliable than relying on any one single wording.
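For what it is worth, here is a minimal sketch in Python of the mechanics I am describing; the alternative wordings are made up, but the idea is that each respondent is served one randomly chosen variant and the answers are then pooled, so the mashed-up result averages over the interpretation effects of any single wording.

```python
import random
from collections import defaultdict
from statistics import mean

# Hypothetical alternative wordings of the same rating question (1-10 scale)
WORDINGS = [
    "How much do you like this product?",
    "How appealing is this product to you?",
    "How keen would you be to buy this product?",
    "How much better or worse is this product than the ones you use now?",
]

def serve_wording(respondent_id):
    """Randomly assign one wording per respondent, reproducibly per id."""
    return random.Random(respondent_id).choice(WORDINGS)

def pool_answers(responses):
    """Mash up answers collected under all wordings into one overall estimate,
    keeping the per-wording means for diagnostics."""
    by_wording = defaultdict(list)
    for wording, score in responses:
        by_wording[wording].append(score)
    overall = mean(score for _, score in responses)
    return overall, {w: mean(scores) for w, scores in by_wording.items()}

# Usage: in a live survey `responses` would be the (wording_shown, score)
# pairs collected from respondents; here a few are faked for illustration.
responses = [(serve_wording(rid), 5 + rid % 4) for rid in range(8)]
print(pool_answers(responses))
```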

What do you think? I appreciate I probably need to back this up with some solid research evidence as there are lots of issues here and so I am planning to conduct some larger scale experiments to test this theory more thoroughly. But before I dive in, I am open to some critical feedback.

A viewpoint on designing Customer feedback surveys


Because I travel a lot, stay in lots of hotels and fly with lots of different airlines, I get asked to complete a lot of customer satisfaction surveys.  I also, from time to time, get asked to design customer feedback studies myself, and so out of interest in the topic I have started to collect them.

They generally fall into two camps: the small piece of cardboard with a handful of closed questions and a small box to add any comments, favoured by many hotel chains; and the online survey variant favoured by airlines, where I am bombarded with often a hundred or so obsessively detailed closed questions, with normally a single open-ended comment box at the end asking my opinion.

In both cases I think there is generally some massive room for improvement.

The hotel customer feedback survey...

Take the situation of a hotel owner designing a customer satisfaction survey for the people who stay in their hotel.

The standard approach to doing this is to work out the key questions to ask, e.g. how would you rate your stay, how would you rate the cleanliness of the hotel room, how would you rate the quality of service, etc.

The problem you often have is trying to work out how to condense all the questions you have into one survey, so you might whittle it down to, say, the 10 questions that you think are most important.

Then from that point onwards you ask everyone staying at the hotel just those 10 questions. Now tell me: after, say, 20 people have stayed in your hotel and on average they give it a rating of 4 out of 5 stars, is that rating going to change much? No, not really - certainly not very much over a two or three month period, unless you are making some radical changes to the hotel's structure or services. So why bother carrying on asking these same questions of every person who stays at your hotel?

The questions asked are often what I would describe as "slap on the back" questions.  They give you good news about how much your customers like your hotel, and they can rapidly turn into sacred cows that nobody can remove from the survey, as we get addicted to hearing that people like staying in the hotel. It massages our ego, and so the questions remain, but they are often of little strategic value, in that they do little to advise on how to improve and develop your hotel.

I would challenge you to think about a different approach: instead of having one questionnaire that several thousand people complete, how about having 52 different questionnaires that you change every week to focus on different specific issues, so each week you gather fresh insights?  You can repeat some of the questions across some of these questionnaires, depending on the frequency with which you need to be updated.

Now think about it a different way: when someone answers your hotel survey, you are potentially talking to people who have stayed in different hotels around the world. Think of them as consultants from whom you could borrow two or three minutes of time. Surely what you really want to know is how to make your hotel better, more efficient, more popular, more profitable. So why not construct the survey to mine your customers for ideas and information that will help you improve your hotel?

This means a total rethink about how you construct the questionnaire. The focus should be on working out how to motivate your guests to proffer their real opinions and feed you with thoughts from their experiences of staying in different hotels that they think you could adopt - getting them to think analytically about your hotel in the context of others, encouraging them to observe. Achieving this is where the real skills of survey copywriting come into play.

The first question in a survey is like the opening lines of a novel: it either grabs you and makes you want to read on, or it doesn't. The question you ask needs to capture the imagination. That means, for a start, avoiding the clichés of language; it also means test-and-control experiments to work out which question people would most want to answer.

I would also encourage a rethink about how these surveys are delivered: at the moment they often arrive as we are rushing to leave a hotel, or are discreetly placed on the bed in your hotel room.

You have got to think about a good time to reach people, when they have a few spare moments to think.    I don't completely have the answer to this, as it would take some research, but I would hazard a guess that giving people the feedback forms in the restaurant just after taking their food order, when they naturally have to wait about 15 minutes for their food to arrive, might be a good time.

I also think the staff in the hotel or airline are probably your best feedback mechanism.  Setting up staff-wide polling or prediction market protocols to anticipate customer satisfaction issues is, I think, a really good way of involving everyone in your business in the process of improving customer satisfaction.

For example, you could run a prediction market where the staff have to predict the main customer feedback issues. Once up and running, this becomes a very useful management tool for assessing, ahead of the curve, the changes and improvements needed.

The airline customer feedback survey...

My biggest bugbear about these types of surveys is that I feel I am bombarded with loads of questions but never the ONE question I want to be asked!


When I have been on a plane journey and there has been an issue, frankly the first thing I want to do when I open the customer feedback survey is get that issue off my chest.  There is nothing worse than answering a whole survey with loads of questions and not being asked about the one thing that bothered me about my journey.

The other thing that frustrates me is being asked the same question over and over in slightly different ways.  For example, there was one airline survey I completed where there were six questions about the service I received from the cabin crew: were they courteous, did they anticipate my needs, were they friendly, etc.  On that particular journey I had no interaction with them whatsoever, so I had no opinion.  On another journey I may well have had some minor issue or other for which I might have wanted to give feedback.  The way you tackle this is through more thoughtful branching.   Asking me one simple question - was I happy with the cabin crew, or was there something that could be improved - allows me to skip past this section if I was happy, but stop and give feedback if I was not.

I ask you to recognise that what I want to do and say in a customer feedback survey will vary depending on my experience, and so the survey needs to assess my mood and adjust itself accordingly right from the very start.

The way I think you achieve this is by offering people an emotional choice, e.g.

How was your flight?
Perfect
Good
Satisfactory
There were some issues I would like to give some feedback about

And from that point you adjust the survey accordingly.  The people who say it was perfect are emotionally on board, so you might challenge them to think of ways to make it even better.  Those who said they had issues you allow to explain them up front.
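Here is a minimal sketch in Python of that kind of routing, with hypothetical follow-up questions, just to illustrate the principle: the answer to the opening emotional question decides which path of the survey the respondent then sees.

```python
# Hypothetical follow-up paths, keyed on the answer to "How was your flight?"
FOLLOW_UPS = {
    "Perfect": [
        "Great! What one thing could we do to make your next flight even better?",
    ],
    "Good": [
        "Thanks! What stopped the flight from being perfect?",
    ],
    "Satisfactory": [
        "What let the experience down?",
        "What should we fix first?",
    ],
    "There were some issues I would like to give some feedback about": [
        "We're sorry to hear that. Please tell us what went wrong, in your own words.",
        "What could we do to put it right?",
    ],
}

def route_survey(first_answer):
    """Return the follow-up question path for a respondent's opening answer."""
    return FOLLOW_UPS.get(first_answer, FOLLOW_UPS["Good"])

# Usage: someone who was happy gets challenged to suggest improvements,
# someone with issues gets to vent up front.
for question in route_survey("Perfect"):
    print(question)
```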

But the main thing that astonishes me about airline customer satisfaction surveys is how poorly designed many of them are - akin to filling out an insurance form, no pleasure whatsoever, nay an arduous experience to complete.

Considering the amount of effort these companies put into marketing themselves, I think they need to recognise that the customer feedback survey is as important a piece of marketing communication as any of their multi-million pound advertising campaigns, and they should invest more in its design and copywriting.

A customer feedback survey is an opportunity for building a longer term relationship

The one other thought I have is that a customer satisfaction survey could be an opportunity to build a longer-term feedback relationship with the people who make the effort to complete it.

If, for example, they are regular customers, why not turn them into a task force of mystery shoppers? Or, if they travel on a lot of other airlines and visit a lot of other hotels, you could incentivise them to give you feedback and ideas from their experiences travelling with those different airlines and staying in those different hotels.

e.g. we will give you a discount on your future hotel stay if you give us feedback every six months on other hotels you have stayed in, to help us gain ideas to improve our hotel.

Generally speaking I think there is an opportunity to be a lot more ambitious in the way customer feedback is gathered.





5 nice questions to ask about your own survey


 1. Would you do it yourself?

This has to be the key question you should ask yourself.  If you were sent your survey by someone else, would you complete it? Would you give your full attention to every question?    If the answer is no, then don't expect the average respondent to answer your survey properly either.

2. Does your survey pass the presentation test?

This is a good way of looking at things.  If your survey was a presentation that you were delivering to a room full of 50 people, how much more effort would you put into the design of it?  I bet you would probably want to add a few more visuals for a start, to liven it up. Where would you add these images?  Would you change the flow of it to ensure it made sense? Would you trim back the text?  Would your presentation be crammed with pages of dense bullet points?  Now imagine you were presenting to, say, 500 people, or even 1,000 - presumably you would put even more effort into the design of the presentation?  Well, these are the sorts of numbers of people who might well be consuming your survey, so why not put the same effort into its design as you would a PowerPoint presentation?

3. Have you written the press release yet?

One of the best ways of understanding what you really want to get out of the data generated from your survey is to write the press release summarising its fantasy findings after you have drafted the survey. It's amazing, when you start doing this, what you focus on and what you leave out.  All of a sudden half the questions in your survey might start to seem irrelevant. It's a brilliant way of refining and editing back your survey.  This tip was given to me by one of my old bosses, Ivor Blight, whilst I was working at Mirror Group Newspapers, and it has been one of the most valuable pieces of survey design advice I have ever received.

4. Why are you asking that question? 

Is it because it will produce a nice looking answer or because it is actually generating useful actionable feedback?

Take a customer feedback study where you ask your customers to rate your product or service.  You find out, after polling 500 people, that they score it 4 out of 5.  Now tell me: apart from feeling pleased, what are you going to do with this information to improve your product or service?   What if instead you asked those 500 people to name ONE thing that might make your product or service better - how much more useful would that information be?

We also have a habit of making huge assumptions about what a question will actually measure.  A classic example would be the purchase intent question, "would you buy this new product?"  This, as I hope most of you reading this will be aware, is proven to have little or no value as a predictor of sales.  A far more predictive question would be to ask whether they think they would buy the product instead of the main brand they currently buy.

I would challenge you in particular to consider the value of those banks of questions that so often get asked in surveys to measure brand characteristics, like: how much do you agree or disagree with these statements about this product... "it's a modern brand", "it's a trustworthy brand", etc. What are these types of questions actually telling you? Are you trying to find out the driving reasons why people buy a particular brand? Well, why in that case don't you simply ask people that question: "why do you buy this brand?" We did exactly this recently in an experiment to find out the driving factors behind why people purchase different brands of shampoo. There were over 50 clear reasons cited for choosing a particular brand, ranging from the smell through to the size of the bottle, the impact of advertising, the type of ingredients, the appeal of the packaging and how well each shampoo cleaned different types of hair. Some people don't believe there is any difference between one shampoo and another, so buy the cheapest; some people buy a shampoo because it was recommended by their hairdresser; others chose a brand because they liked the creamy feel of the shampoo, or the way it lathered up, or they thought it was more ethical. But only 3 people out of a sample of 500 said they chose their brand because they felt it was modern - that is 0.6%.

5. Have you tested the survey?

And I don't mean for routing errors and spelling mistakes. I mean: have 30 people done your survey, and have you had a really good look at the data to see what it is delivering, how useful it is, what it's missing and what could be improved?  In my experience, very few people properly pilot their research studies or use piloting as a means to develop and improve their survey, and yet this, in my opinion, is the single most effective way of improving your market research.

Where can we inject more creativity into survey design



Here are my thoughts on some examples of the areas where I feel we need to inject a bit more creativity into the design of surveys. This content is taken from one of my presentations on the topic.



1. Think about how we motivate respondents to answer questions and take part in surveys

We so often jump straight into surveys with questions that look like this...




In other words, we jump straight into the nitty gritty of things we want to know.

I ask, has anyone in market research ever heard of foreplay?

If you want to encourage people to think, which is essentially the task we are asking respondents to do, it really does help if you warm them up and provoke their curiosity and interest in a topic first, before wading in with the boring questions.

There are lots of ways this can be done, but one of the simplest techniques is to try to think of a question respondents might actually want to answer and use it as your opening gambit.

Say for example you were doing a survey about toothpaste. You could jump straight in with questions about what brands of toothpaste they are aware of, or you could ask questions like which of these celebrities has the nicest teeth? 



i.e. You get them thinking about the topic in a more rounded way. We have found that this can stimulate them to take a lot more interest in answering the subsequent questions.

This approach to considering respondents’ motivation is at the heart of effective survey design. You should be thinking about this with every question you write in a survey.

2. Improve skills at copywriting

I think we need to make a fresh start in the way we think about writing questions in online surveys. We are locked into dry research speak that doesn't connect with respondents. We are obsessed with the platitudes of 5-point, 7-point and 9-point scales and concepts like 'strongly agree' or 'disagree', which are often meaningless and ineffective in emotionally engaging respondents. Go out and listen to consumers describing brands, and I don't think you will ever hear the words 'very appealing' or 'I strongly like this brand'.

If you want to connect with respondents, it helps if you use their language. Tell me, what does ‘strongly agree’ actually mean to most people? You either agree or disagree with something.

We have a general over reliance on range scales, which to me is just a lazy approach to research. Respondents in turn are extremely lazy in the way they answer them. If you roll up the answers to a cross section of say 10 point range scale questions, you will find that nearly 80% of answers are clustered between 5 and 8. Respondents are not being encouraged to think. Scales mean nothing to most people, or rather, we have learnt to use them in a certain way.

Sweep away verbosity


We need to clear away the verbosity of instructions, which may be essential in a face-to-face interview but are not needed in questions that have visual cues. I would recommend applying a Twitter rule of no more than 140 characters for a question. Believe me, no matter how important you think the instructions are, a huge proportion of respondents do not read past the first sentence.*

* roughly speaking 50%!

Note: if you do have a more complex idea to communicate in a survey, then this needs to be broken down into clear parts and presented presentation style, one thought at a time. Injecting imagery will be a powerful tool to help communicate the message.

Write questions with a sense of humanity



We are habitually, obsessively precise in our phrasing, which can be completely alienating to respondents. I would recommend writing from the point of view of an advertising agency trying to sell the question to the respondent, or a TV interviewer questioning a famous celebrity. That's not to recommend adopting the vernacular or colloquial, but question wording needs to be clear. Try to use what might be described as 'natural language'.

Think more conceptually about how to ask questions


And realise it's not just about the wording, it's about the underlying concept. For example, you could ask people to rank their first, second and third choices of toothpaste, or you could say to respondents: imagine you are a judge in an award ceremony and you have to give out 1st, 2nd and 3rd prizes in the Toothpaste of the Year competition. A conceptual shift of emphasis like this can make surveys far more interesting for respondents to answer. We too often rely on a clichéd armoury of standard questions and approaches to problems, and these are habits we need to break!

3. Storyboard the flow of surveys

My next point is the lack of storyboarding in surveys. We throw respondents a jumbled mess of questions, which for the respondent has no sense of flow or structure. We ask them to think of one topic, then another, then the first again. We deliver these horrible never-ending loops of questions that make the respondent give up all hope that they will get to the end of a survey. I feel that much more thought is needed in the way we organise and deliver questions and signpost their order so respondents can grasp where they are and where they are going. It may not be important to you; it is to respondents.

4. Ask fewer dumb questions

All the research we have conducted over the last few years into what makes surveys interesting to respondents leads to one clear observation: respondents like nothing more than to actually think! Yet so often we bombard them with dumb questions that don't require more than momentary thought: how much do you agree with this, do you like this, or pick some words. Very rarely do we actually ask them to think. I encourage you to think of respondents as consultants; try to bring them into the problems you are trying to solve. Ask them more intelligent questions and I think you will be surprised at the quality of feedback you will receive. 



5. Up our game on the visual side

I don't think many people have grasped the impact that quality imagery can have at improving the experience of taking a survey, how well-chosen images can be used to communicate ideas more efficiently or the impact that an image can have on stimulating the imagination. I very rarely see imagery properly used in surveys.

More often what I see is badly chosen clip art slapped to the top of a range question, and imagery that looks like something the intern has been tasked with finding.

6. Rethink the use of Likert scale grid questions

I think the industry needs to have a total rethink on the use of Likert scale grid questions in surveys. They are simply not working. Respondents hate them and in turn deliver, at best, watered-down data and, at worst, heavily corrupted data.  They are officially my number one bugbear with surveys.



If you take reading times out of the equation, the average respondent spends 4.3 seconds thinking about the answer to an average question, but as little as 1.7 seconds considering the answer to a Likert scale question in a bank of grids. If you look in detail at the data coming back from a grid question, upwards of 80% of respondents show some signs of speeding after the 20th repetition, and I see surveys with repetitions like this in their hundreds.

Part of the problem is our obsession with agreement, liking and 5-point scoring scales. Respondents are so tired of these that they have an almost Pavlovian response to answering them - they mostly say that they slightly agree with, and slightly like, everything!

What we see when you offer up more creative scales can be radical improvements in attention. To my mind an agree scale is just a really lazy way of asking a question.

How much do you like watching these sports on TV on a scale of 1 to 5? Every sport will score between 2.5 and 3.5! Surely the point of most questions like this is to draw out comparisons. Just look at the behavioural reality of our attitudes towards watching different sports on TV: football would score 100 and most other sports under 10.

The solution is to think about how you ask the question more creatively, understanding the real objective of the question, using more relevant anchor points for these types of judgements and using more bespoke ranges for each measure.


