Drag & Drop style question formats open up a raft of more creative questioning techniques for researchers, but they are open to misuse, and many researchers are unsure of the creative possibilities this style of question offers.
This guide covers the factors to consider when deciding when and how to use Drag & Drop questions in online surveys, and outlines the range of Drag & Drop question formats that are available.
The principal value of Drag & Drop in surveys is that it allows respondents to sort and group options, rather than simply pick them.
Drag & Drop is most commonly used to allow respondents to rank choices, such as to pick their first, second and third choice from a list. To achieve this in a conventional format would require presenting effectively the same question three times – asking the respondent to select their first choice, then second, then third. This would be a repetitive task for respondents, and also fiddly to program, as the previously selected options would have to be filtered out of the list each time. The Drag & Drop format allows these three questions to be combined, with ranking and selecting being part of the same process. From a respondent’s point of view, it is more intuitive than either repeated questions or a grid.
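To make the difference concrete, here is a minimal Python sketch (the option list and the `ask` callback are hypothetical, not any survey platform’s API) contrasting the conventional three-question approach, with its filtering of previously selected options, with the single ordered list a Drag & Drop ranking question can return:

```python
# Hypothetical illustration only -- not a real survey-scripting API.

options = ["Brand A", "Brand B", "Brand C", "Brand D", "Brand E"]

def conventional_ranking(ask):
    """Ask three separate single-choice questions, removing each previous pick."""
    remaining = list(options)
    ranking = []
    for place in ("first", "second", "third"):
        choice = ask(f"Which is your {place} choice?", remaining)
        ranking.append(choice)
        remaining.remove(choice)  # the fiddly filtering step noted above
    return ranking

def drag_and_drop_ranking(dropped_in_order):
    """One Drag & Drop question: the order the options are dropped is the ranking."""
    return dropped_in_order[:3]
```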
Sorting
Another useful role for Drag & Drop is to allow respondents to apply a large set of options or attributes to two or more ‘targets’. For example, a respondent might be asked to pick words from a list at the top of the screen and match them against two brands at the bottom of the screen. Making and reviewing their selections for both brands at the same time allows them to compare and refine their choices, bringing out the truly distinct characteristics of each brand. Presenting all the choices they have to make on one page, rather than showing the same list over and over, helps to ‘concertina’ the thinking process.
Reducing straightlining effects
In experiments we have conducted at GMI, we have observed up to 80% less measurable straightlining on Drag & Drop questions, compared with conventional grid questions using standard button selection.*
One factor that might explain this is the boredom respondents experience when asked a large number of grid-style questions in a survey. The switch from button-pressing to Drag & Drop makes the question more interesting to answer, which research has shown improves respondent focus and thus reduces straightlining. But that is not the whole story.
In addition, the extra concentration required to drag a selected option from one part of the screen to another makes it far more likely that the respondent will think about the choice, since it takes the same effort to answer the question properly as to answer it carelessly. This is distinct from simple button-pushing, where banks of repetitive questions can be sped through with the brain disengaged and the eyes almost closed.
But while the possibility of an 80% reduction in straightlining might tempt some to replace every grid question in a survey with Drag & Drop, it should be kept in mind that the extra time required to complete such questions can become frustrating to respondents if they occur too frequently, and this can trigger dropout and speeding. They are best used sparingly in a mix of creative question formats, and reserved for when their benefits are most needed.
*source: ESOMAR GMI Engage 2008 Panel conference paper “Measuring the value of respondent engagement”
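For readers who want to check straightlining in their own data, one simple measure (not necessarily the metric used in the paper above) is the share of respondents who give an identical answer to every row of a grid. A minimal Python sketch:

```python
# Illustrative only: flag respondents who give the same answer to every grid row.

def straightlining_rate(responses):
    """responses: one list of answer codes per respondent, one code per grid row."""
    if not responses:
        return 0.0
    flagged = [r for r in responses if len(r) > 1 and len(set(r)) == 1]
    return len(flagged) / len(responses)

# Example: the first and third respondents straightline, so the rate is 2/3.
print(straightlining_rate([[4, 4, 4, 4, 4], [2, 5, 3, 4, 1], [3, 3, 3, 3, 3]]))
```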
What types of Drag & Drop question are there?
There follows an outline of the various options for using Drag & Drop questions in a survey.
Terminology of different Drag & Drop questions
We define four different Drag & Drop processes, and it is helpful to use this terminology when specifying a Drag & Drop question requirement in a draft questionnaire.
1. Drag disappear: dropped options disappear
2. Drag and stack: dropped options become a stack
3. Drag and restrict: only one option may be dropped onto a target
4. Drag and list: dropped options become a list
In addition, targets may be presented all on one page, or one at a time, referred to as “sequential Drag & Drop format”.
Options may also be set as single- or multi-choice.
They can be dropped onto restricted positions on the screen, or onto one- or two-dimensional zones.
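As an illustration of how this terminology pins a requirement down, a draft questionnaire might capture the choices above in a short spec like the following (a hypothetical sketch; the field names are invented, not taken from any survey-scripting language):

```python
# Hypothetical requirement spec for a Drag & Drop question, using the terms above.
question_spec = {
    "process": "drag and stack",               # or: drag disappear / drag and restrict / drag and list
    "targets": ["Brand X", "Brand Y"],
    "target_presentation": "all on one page",  # or "sequential"
    "option_selection": "multi-choice",        # or "single-choice"
    "drop_zone": "one-dimensional",            # restricted positions, or 1-D / 2-D zones
}
```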
Basic option ranking
This question format can be arranged vertically or horizontally, with the options organised as blocks or straight lists.
(Methodological note: the temptation here is to allow respondents to rank all the options, but this could result in several hundred possible rank combinations to process. It is best to ask respondents to rank only the top 2 or 3, and possibly the worst.)
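The scale of that problem is easy to verify: a full ranking of n options has n! possible orderings, whereas ranking only the top three keeps the count manageable. A quick Python check (Python 3.8+ for math.perm):

```python
from math import perm

for n in (6, 8, 10):
    print(f"{n} options: full ranking = {perm(n, n)} orderings, top 3 only = {perm(n, 3)}")
# 6 options: full ranking = 720 orderings, top 3 only = 120
# 10 options: full ranking = 3628800 orderings, top 3 only = 720
```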
This question format is also an ideal alternative to MaxDiff-style conjoint question approaches.
Option sorters
In this format, the respondent drags the options onto a target choice. These can stack up in a pile (sometimes called card sorting) or disappear as they are dropped onto the selection zone.
The main decision with option sorting is whether to have all the options on display or to show them one by one (sequentially), and this is largely dictated by the number of options respondents are asked to sort.
The visual format of this question can be adapted in many different ways, depending on the task. Icons can be added to help emotionalise the choice. The question format can be set up with text alone, or, just as easily, with images.
The layout can be organised to enable respondents to drag options to targets arranged horizontally, vertically, or onto a grid.
List builders
In this format, the options are dropped onto each target to form a list. This is useful when, for example, asking respondents to select a set of features for a product, or when encouraging respondents to pick a minimum number of choices for each target – e.g. words associated with different brands.
The same range of custom features is available as in the option sorting format: respondents can drag either words or images, which can be shaped and sized according to specific requirements.
Flag Drag & Drop onto line
This question format is often used as an alternative to sliders, as in effect it produces the same data. The respondent places each option onto a bar marked with a range from 1 to 100. The benefit of this format over sliders is that respondents can make more micro-comparisons, and there is an element of ranking involved which can help pull out more subtle differences. Jeffrey Henning wrote a good post about when to use ranking vs rating questions (http://blog.vovici.com/blog/bid/18228/Ranking-Questions-vs-Rating-Questions); the popularity of this question format is probably due to the fact that it combines these two techniques.
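To see why it effectively combines rating and ranking, consider how the recorded bar positions could be read both ways (a minimal sketch with hypothetical example scores, not real survey data):

```python
# Hypothetical scores: each value is where the respondent dropped the flag on the 1-100 bar.
scores = {"Brand A": 82, "Brand B": 79, "Brand C": 41}

ratings = scores                                        # read the positions as ratings
ranking = sorted(scores, key=scores.get, reverse=True)  # derive a rank order from them
print(ranking)                                # ['Brand A', 'Brand B', 'Brand C']
print(scores["Brand A"] - scores["Brand B"])  # 3 -- a micro-comparison a plain rank order would hide
```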
Target Drag & Drop
This is the two-dimensional counterpart of the one-dimensional ‘flag Drag & Drop’: respondents place their choices onto a two-dimensional target. Respondents can find this a more intuitive process, such as when asked how much they like something. Another advantage is that more options can fit onto a two-dimensional target range without it getting overcrowded.
The target imagery can be customised, as can the colours of the target zones.
Graph Drag & Drop
This records answers on a two-dimensional scale, thus making respondents perform two tasks at once – for example, rating how much they watch sport on TV versus how often they participate.
This format needs to be used with some care, as it can be a little confusing for respondents to answer. See the note below about instructions.
The importance of instructions when using Drag & Drop questions
Respondents are so used to pressing buttons in surveys that the sudden appearance of a Drag & Drop question can cause confusion. If such a question is badly identified, respondents can end up trying to press all the options as though they were buttons, and then quit the survey in the belief that they are stuck. Experiments have shown that up to 10% of respondents can drop out of a survey because of a badly identified Drag & Drop question.
For this reason, it is essential that instructions on Drag & Drop questions are clear. Ideally, they should be illustrated, such as with animated arrows.
How does data from Drag & Drop questions differ from conventional button-selection techniques?
Experiments comparing Drag & Drop questions with conventional grid alternatives show no major differences in the overall balance of data, other than a slightly lower level of neutral/don’t know selections, accounted for by the improved level of engagement. The distribution between the top and middle boxes appears to be equivalent.
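For anyone wanting to run the same comparison on their own data, here is a minimal sketch of the tally involved (the function and answer codes are hypothetical, not GMI’s analysis scripts):

```python
from collections import Counter

def answer_distribution(responses, top_codes, neutral_codes):
    """responses: raw answer codes from one question version (grid or Drag & Drop)."""
    counts = Counter(responses)
    total = sum(counts.values())
    top = sum(counts[c] for c in top_codes) / total
    neutral = sum(counts[c] for c in neutral_codes) / total
    return {"top box": top, "neutral/DK": neutral, "middle/other": 1 - top - neutral}

# Compare the two question versions side by side, e.g.:
# answer_distribution(grid_answers, top_codes={5}, neutral_codes={3, "DK"})
# answer_distribution(dragdrop_answers, top_codes={5}, neutral_codes={3, "DK"})
```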
Potentially better quality data
As previously explained, there is less straightlining associated with Drag & Drop question formats, so you potentially get richer data.
But there can be fewer answers in certain circumstances!
A particular problem found with list-building Drag & Drop question formats is that respondents have a habit of dragging only one or two options onto each target, particularly if there are a lot of targets, and they are less likely to match the same option to more than one target than when they are presented with each target one at a time. This can result in a reduced volume of data compared to sequential tick-selection approaches.
One solution is to define a minimum number of options that must be dragged into each zone, but this can be frustrating for respondents if they cannot think of enough associations. A better approach is to state a recommended number of attributes to select in the question text, without making it conditional – e.g. “please select 5 features that you think represent each brand”.
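A minimal sketch of the difference between the two approaches (the function names are illustrative, not a feature of any particular platform):

```python
RECOMMENDED = 5  # the number suggested in the question text

def hard_minimum(selected):
    """Blocks the answer until enough options are dragged -- risks frustrating respondents."""
    return len(selected) >= RECOMMENDED

def soft_prompt(selected):
    """Guides without blocking: show a gentle reminder but always accept the answer."""
    if len(selected) < RECOMMENDED:
        return f"If you can, please try to select {RECOMMENDED} features for each brand."
    return None
```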