Comparing Continuous and Dichotomous Scoring of Social Desirability Scales: Effects of Different Scoring Methods on the Reliability and Validity of the Winkler-Kroh-Spiess BIDR Short Scale

Patrick Schnapp, Center for Quality in Care, Berlin, Germany
Simon Eggert, Center for Quality in Care, Berlin, Germany
Ralf Suhr, Center for Quality in Care, Berlin, Germany


Survey researchers often include measures of social desirability in questionnaires. The Balanced Inventory of Desirable Responding (BIDR; Paulhus, 1991) is a widely used instrument that measures two components of socially desirable responding: self-deceptive enhancement (SDE) and impression management (IM). An open question is whether these scales should be scored dichotomously (counting only extreme values) or continuously (taking the mean of the answers). This paper compares the two methods with respect to test-retest reliability (stability) and internal consistency using a short German version of the BIDR (Winkler, Kroh, & Spiess, 2006). Tests of criterion validity are also presented. Data are taken …
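To make the distinction concrete, here is a minimal sketch of the two scoring rules described in the abstract, assuming a 7-point response scale on which negatively keyed items have already been reverse-coded and extreme answers are defined as 6 or 7; the function names and example answers are illustrative and not taken from the paper.

```python
# Two ways of scoring a social desirability subscale such as SDE or IM.
# Assumptions (not from the paper): answers range from 1 to 7, negatively
# keyed items are already reverse-coded, and "extreme" means 6 or 7.

def score_continuous(item_responses):
    """Continuous scoring: the mean of the item answers."""
    return sum(item_responses) / len(item_responses)

def score_dichotomous(item_responses, cutoff=6):
    """Dichotomous scoring: count only extreme answers (>= cutoff)."""
    return sum(1 for r in item_responses if r >= cutoff)

# One respondent's answers to a hypothetical six-item subscale.
responses = [7, 5, 6, 2, 7, 4]
print(score_continuous(responses))   # 5.1666... (scale mean)
print(score_dichotomous(responses))  # 3 (number of extreme answers)
```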



Testing the Validity of the Crosswise Model: A Study on Attitudes Towards Muslims

David Johann, German Centre for Higher Education Research and Science Studies, Berlin
Kathrin Thomas, City, University of London


This paper investigates the concurrent validity of the Crosswise Model when “high incidence behaviour” is concerned, looking at respondents’ self-reported attitudes towards Muslims. We assess concurrent validity by comparing the performance of the Crosswise Model with that of a direct question format. The Crosswise Model was designed to ensure anonymity and confidentiality in order to reduce social desirability bias induced by the tendency of survey respondents to present themselves in a favourable light. The article suggests that the measures obtained using either question format are fairly similar. However, when estimating models and comparing the impact of common predictors of negative attitudes …
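As a point of reference, the prevalence estimator of the standard Crosswise Model (Yu, Tian, & Tang, 2008) can be written in a few lines; the sketch below uses invented numbers and is not drawn from the study’s data. Respondents report only whether a sensitive statement and a non-sensitive statement with known prevalence p are both true/both false or exactly one is true.

```python
# Minimal sketch of the standard Crosswise Model estimator.
# lam_hat: observed share of "both true or both false" answers.
# p: known prevalence of the non-sensitive statement (e.g., birthday
#    in the first quarter of the year, roughly 0.25).

def crosswise_estimate(lam_hat, p):
    """Estimate the sensitive-trait prevalence pi.

    Model: lam = pi * p + (1 - pi) * (1 - p)
       =>  pi  = (lam + p - 1) / (2 * p - 1), which requires p != 0.5.
    """
    if p == 0.5:
        raise ValueError("p must differ from 0.5 for the model to be identified")
    return (lam_hat + p - 1) / (2 * p - 1)

# Illustrative numbers only: 65% "same" answers with p = 0.25.
print(round(crosswise_estimate(0.65, 0.25), 3))  # 0.2
```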



Nonsampling errors and their implication for estimates of current cancer treatment using the Medical Expenditure Panel Survey

Jeffrey M. Gonzalez, PhD, Office of Survey Methods Research, U.S. Bureau of Labor Statistics
Lisa B. Mirel, MS, Office of Analysis and Epidemiology, National Center for Health Statistics, Centers for Disease Control and Prevention
Nina Verevkina, PhD, Department of Health Policy & Administration, The Pennsylvania State University


Survey nonsampling errors refer to the components of total survey error (TSE) that result from failures in data collection and processing procedures. Evaluating nonsampling errors can lead to a better understanding of their sources, which, in turn, can inform survey inference and assist in the design of future surveys. Data collected via supplemental questionnaires offer a means of evaluating nonsampling errors because they may provide additional information on survey nonrespondents and/or measurements of the same concept over repeated trials on the same sampling unit. We used a supplemental questionnaire administered to cancer survivors to explore potential nonsampling errors, focusing …



Question Order Experiments in the German-European Context

Henning Silber, GESIS – Leibniz Institute for the Social Sciences, Germany
Jan Karem Höhne, University of Göttingen, Germany
Stephan Schlosser, University of Göttingen, Germany


In this paper, we investigate the context stability of questions on political issues in cross-national surveys. For this purpose, we conducted three replication studies (N1 = 213; N2 = 677; N3 = 1,489) based on eight split-ballot experiments with undergraduate and graduate students to test for question order effects. The questions, taken from the Eurobarometer (2013), covered perceived performance and identification. Respondents were randomly assigned to one of two experimental groups, which received the questions either in the original or in the reversed order. In all three studies, respondents answered the questions about Germany and the …
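For readers unfamiliar with how such split-ballot comparisons are typically evaluated, the sketch below shows one common approach: comparing the answer distribution of a target question between the two order conditions with a chi-squared test. The counts are invented for illustration and are not data from the studies.

```python
# Hypothetical split-ballot comparison: does the answer distribution of a
# question differ between the original and the reversed question order?
from scipy.stats import chi2_contingency

# Rows: order condition; columns: answer categories (counts are invented).
observed = [
    [120, 180, 45],  # original order
    [95, 200, 50],   # reversed order
]

chi2, p_value, dof, _ = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p_value:.3f}")
```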



The effect of interviewers’ motivation and attitudes on respondents’ consent to contact secondary respondents in a multi-actor design

Jette Schröder, GESIS – Leibniz Institute for the Social Sciences, Germany
Claudia Schmiedeberg, University of Munich (LMU), Germany
Laura Castiglioni, University of Munich (LMU), Germany


In surveys using a multi-actor design, data are collected not only from sampled ‘primary’ respondents, but also from related persons such as partners, colleagues, or friends. For this purpose, primary respondents are asked for their consent to survey such ‘secondary’ respondents. The existence of interviewer effects on unit nonresponse of sampled respondents in surveys is well documented, and research increasingly focuses on interviewer attributes in the nonresponse process. However, research on interviewer effects on the unit nonresponse of secondary respondents, that is, on primary respondents’ consent to include secondary respondents in the survey, is sparse. We use the German Family Panel (pairfam) …


