Evaluating data quality is a key concern for researchers who want to be confident in their results. This seems even more crucial for web surveys, since researchers have less control over the data collection process. However, web surveys also allow researchers to collect paradata that may help them evaluate quality. Using such paradata, it has been noticed that some respondents in web panels spend much less time than expected completing surveys, which raises concerns about the quality of the data obtained. Nevertheless, little is known about the link between response times (RT) and quality. Therefore, the goal of this study is to examine the link between respondents' RT in an online survey and other, more usual quality indicators often used in the literature: absence of straight-lining, properly following an Instructional Manipulation Check (IMC), coherence and precision of answers, etc. In addition, we are interested in the link of both RT and the "usual" quality indicators with respondents' self-evaluation of the effort they put into answering the survey. Using an SEM approach, which allows separating the structural and measurement models and controlling for potential spurious effects, we find a significant relationship between RT and quality in the three countries studied. We also find a significant, but weaker, relationship between RT and self-evaluation. However, we do not find a significant link between self-evaluation and quality.