In the early days of the internet, when designing websites, you would often have a discussion with clients about routing between pages: setting out which link should take you to which page, and from there to which page next. Navigating a website back then was like going down narrow tunnels and having to back out of them to get anywhere else. Then some bright spark realised you could link to each page from more than one place, so you could move from one part of a site to another far more easily.
I make this point because I think we suffer from a similar kind of tunnel thinking when we write surveys: we only ever think of asking a question in one way. What I would encourage you to think about is the opportunity of asking questions in more than one way.
How often do you struggle to pin down the exact wording of a question in a survey, finding yourself in two minds about how to phrase it? Rating something is a classic quandary. Do you ask people how much they like it, how appealing it is, how keen they are to buy it, or how much better or worse it is than other things? Asking people for open-ended feedback is another area where there is an almost infinite number of ways to word a question, and I have had a career-long obsession with the best way to word this type of question. For instance, if you want to ask for feedback about a product you might word it "please tell us what you like or dislike about this product", or "what do you think about this product? what do you like or dislike about it?", or "if you were criticising this product, what would you have to say?", or "what is the best thing about this product, and the worst thing?". Everyone answering these questions will respond in a slightly different way. Some wordings will deliver better answers than others; some will work more effectively with some groups of people than other groups. Some may not deliver the same volume of feedback but will prompt more thoughtful responses. Some may trigger more thought than others.
OK, so the survey has to go live today, you don't have time to test, and you are not sure which wording will generate the most feedback; what do you do?
The approach most people take is to pick the one wording they think is best, or the one a small committee of you thinks is best. But have you ever thought about randomly asking the question in every conceivable different way across respondents and then mashing up all the answers?
Now, I have been playing around with doing this of late. It's not difficult to do from a technical point of view, and I am really loving the data I get back (sorry, not sure if you are supposed to love data, or if that phrase is appropriate).
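To make the mechanics concrete, here is a minimal sketch of the kind of random assignment I mean, assuming a hypothetical setup where you control the question text served to each respondent; the wordings, respondent IDs and function name are mine, purely for illustration, not a real survey platform API.

```python
import random

# Hypothetical wording variants for one open-ended feedback question.
FEEDBACK_WORDINGS = [
    "Please tell us what you like or dislike about this product.",
    "What do you think about this product? What do you like or dislike about it?",
    "If you were criticising this product, what would you have to say?",
    "What is the best thing about this product, and the worst thing?",
]

def assign_wording(respondent_id: str) -> dict:
    """Randomly pick one wording for a respondent, recording which one was served."""
    wording = random.choice(FEEDBACK_WORDINGS)
    return {"respondent": respondent_id, "wording": wording}

# Each respondent sees one randomly chosen wording of the same question;
# keeping the assignment lets you compare answers by wording afterwards.
for rid in ["r001", "r002", "r003"]:
    print(assign_wording(rid))
```

The only real requirement is that you record which wording each respondent saw, so you can later look at the answers both variant by variant and all mashed up together.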
What I am finding is that for closed rating questions, asking the question in a random basket of ways appears to deliver* more stable answers, ironing out the differences caused by question interpretation effects, and for open-ended questions it appears to deliver* a greater range of more nuanced feedback than asking the question one way.
I would describe this as a Monte Carlo approach, because that is essentially what it is: what I am doing is netting out a mass of random guesses at the best way to ask each question. I have no way of knowing which single wording is the most accurate, but netting out the answers across all of them is more reliable than measuring the viewpoint along one single dimension.
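As a toy illustration of what I mean by netting out, here is a sketch with made-up rating data (the numbers are invented for the example, not results from my experiments): each wording carries its own interpretation bias, and pooling answers collected under randomly assigned wordings averages those biases into one combined estimate.

```python
from statistics import mean

# Hypothetical 1-10 ratings collected under three different wordings
# of the same closed rating question (invented numbers).
ratings_by_wording = {
    "how much do you like it?": [7, 8, 6, 7],
    "how appealing is it?": [6, 6, 7, 5],
    "how keen are you to buy it?": [5, 6, 5, 6],
}

# The mean for each wording reflects that wording's interpretation effect.
per_wording_means = {w: mean(r) for w, r in ratings_by_wording.items()}

# Pooling all answers, as if wordings had been assigned at random,
# nets those effects out into a single combined estimate.
pooled_mean = mean(r for ratings in ratings_by_wording.values() for r in ratings)

print(per_wording_means)
print(round(pooled_mean, 2))
```

The pooled figure is no more "true" than any single wording's figure, but it is less at the mercy of whichever wording you happened to pick.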
What do you think? I appreciate I probably need to back this up with some solid research evidence, as there are lots of issues here, so I am planning to conduct some larger-scale experiments to test this theory more thoroughly. But before I dive in, I am open to some critical feedback.