TY - JOUR
T1 - A hierarchical clustering approach to identify repeated enrollments in web survey data
AU - Handorf, Elizabeth A.
AU - Heckman, Carolyn J.
AU - Darlow, Susan
AU - Slifker, Michael
AU - Ritterband, Lee
N1 - Publisher Copyright:
© 2018 Handorf et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
PY - 2018/9
Y1 - 2018/9
AB - Introduction: Online surveys are a valuable tool for social science research, but the perceived anonymity of online administration may lead to problematic behavior from study participants. In particular, if a study offers incentives, some participants may attempt to enroll multiple times. We propose a method to identify clusters of non-independent enrollments in a web-based study, motivated by an analysis of survey data from a study testing the effectiveness of an online skin-cancer risk reduction program. Methods: To identify groups of enrollments, we used a hierarchical clustering algorithm based on the Euclidean distance matrix formed by participant responses to a series of Likert-type eligibility questions. We then systematically identified clusters that were unusual in terms of both size and similarity by repeatedly simulating datasets from the empirical distribution of responses under the assumption of independent enrollments. By applying the clustering algorithm to the simulated datasets, we determined the distribution of cluster size and similarity under independence, which we then used to identify groups of outliers in the observed data. Next, we assessed 12 other quality indicators, including previously proposed and study-specific measures. We summarized the quality measures by cluster membership and compared the cluster groupings to those found when using the quality indicators with latent class modeling. Results and conclusions: When we excluded the clustered enrollments and/or lower-quality latent classes from the analysis of study outcomes, the estimates of the intervention effect were larger. This demonstrates how including repeat or low-quality participants can introduce bias into a web-based study. As much as possible, web-based surveys should be designed to verify participant quality. Our method can be used to verify survey quality and identify problematic groups of enrollments when necessary.
UR - http://www.scopus.com/inward/record.url?scp=85054009035&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85054009035&partnerID=8YFLogxK
U2 - 10.1371/journal.pone.0204394
DO - 10.1371/journal.pone.0204394
M3 - Article
C2 - 30252908
AN - SCOPUS:85054009035
SN - 1932-6203
VL - 13
JO - PLOS ONE
JF - PLOS ONE
IS - 9
M1 - e0204394
ER -