

Article by Straub, Detmar; Boudreau, Marie-Claude; and Gefen, David (2004), "Validation Guidelines for IS Positivist Research," in Communications of the Association for Information Systems, 13.

 

Prior to reading this article, I had read two papers by the same authors, i.e. Straub (1989)[i] and Boudreau, Gefen & Straub (2001)[ii]. I think this paper is the culmination of those two earlier papers. Its main contribution is a guideline on which aspects of validation should be included in IS positivist research. The authors rate the requirement of performing each validation procedure at three different levels of importance, i.e. mandatory, highly recommended, and optional.

 

Mandatory

All IS positivist studies are required (it is compulsory) to provide evidence of the following aspects of validity:

  1. Construct Validity – whether the measures chosen by the researcher “fit” together in such a way as to capture the essence of the construct. [Note: you may refer to my previous entry related to construct validity here, or refer to wikipedia (here) for the complete definition]. Note that construct validity consists of five different but inter-related elements, i.e. (i) Discriminant Validity; (ii) Convergent Validity; (iii) Nomological Validity; (iv) Factorial Validity[iii] and (v) Testing of Common Method Bias[iv]. What are mandatory, according to the authors, are Discriminant Validity and Convergent Validity; since Factorial Validity assesses both, demonstrating it is sufficient in this context (see the sketch after this list).
  2. Reliability – to prove that the measures for one construct are, indeed, related to each other. It is worth noting that reliability applies only to reflective constructs (never assess reliability for a formative construct, as its measures are not expected to correlate with each other). [Note: Please refer to my earlier entry on the article by Petter, Straub and Rai (2007) for further details].
  3. Manipulation Validity – mandatory for certain types of (lab) experimental study only. Experiments in which participants are treated with a physical substance (such as a drug) are not required to demonstrate manipulation validity.
  4. Statistical Conclusion Validity – researchers need to provide sound arguments on the quality of the statistical evidence of covariation, addressing, for instance, sources of error, the use of appropriate statistical tools, and bias.
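
To make the first two items concrete, below is a small Python sketch of my own (it is not from the paper) using simulated data for two hypothetical constructs, each measured by three reflective items: Cronbach's alpha for reliability, plus within- versus cross-construct item correlations as a rough convergent/discriminant check.

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 200

# Simulated responses: two hypothetical constructs ("ease", "use"),
# three reflective items each. Real data would come from the instrument.
ease, use = rng.normal(size=n), rng.normal(size=n)
data = pd.DataFrame(
    {f"ease_{i}": ease + rng.normal(scale=0.5, size=n) for i in (1, 2, 3)}
    | {f"use_{i}": use + rng.normal(scale=0.5, size=n) for i in (1, 2, 3)}
)

def cronbach_alpha(items: pd.DataFrame) -> float:
    # alpha = k/(k-1) * (1 - sum of item variances / variance of the sum)
    k = items.shape[1]
    return (k / (k - 1)) * (1 - items.var(ddof=1).sum()
                            / items.sum(axis=1).var(ddof=1))

ease_items = data[["ease_1", "ease_2", "ease_3"]]
use_items = data[["use_1", "use_2", "use_3"]]
print("alpha(ease):", round(cronbach_alpha(ease_items), 2))  # high => reliable

# Convergent vs. discriminant validity (rough check): items should
# correlate more strongly within their construct than across constructs.
corr = data.corr()
within = corr.loc[ease_items.columns, ease_items.columns].to_numpy()
across = corr.loc[ease_items.columns, use_items.columns].to_numpy()
print("mean within-construct r:", round(within[np.triu_indices(3, k=1)].mean(), 2))
print("mean cross-construct  r:", round(across.mean(), 2))

A stricter treatment would establish factorial validity through (confirmatory) factor analysis, as footnote [iii] below describes, but the correlation pattern already conveys the idea.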

 

Highly Recommended

It is highly recommended that positivist researchers perform the following aspects of validation:

  1. Testing for Common Method Bias – common method bias can be avoided by gathering data for the independent and dependent variables via different methods or, if a single method is used, by testing for it through SEM (a rough sketch of one common test follows this list).
  2. Nomological Validity – evidence that the structural relationships among variables/constructs are consistent with other studies that measured them with validated instruments and tested them against a variety of persons, settings, times, and methods.
  3. Manipulation Validity – for quasi-experimental or non-experimental studies in social (and design) science, which characterize a great deal of management research, researchers should demonstrate that participants truly received the treatment.
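
The authors point to SEM for testing common method bias; as a simpler and widely used proxy (not the authors' own procedure), here is a sketch of Harman's single-factor test in Python, using principal components as the unrotated factor extraction: if one factor explains most of the variance across all items, method bias is a likely concern.

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
# Illustrative matrix: 200 respondents x 6 items, all collected with one
# method; the shared per-respondent term simulates method variance.
items = rng.normal(size=(200, 6)) + rng.normal(size=(200, 1))

z = (items - items.mean(axis=0)) / items.std(axis=0, ddof=1)  # standardize
first = PCA().fit(z).explained_variance_ratio_[0]
print(f"variance explained by the first unrotated factor: {first:.0%}")
# Rough convention: one factor above ~50% flags possible common method bias.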

 

Optional

It is optional for positivist researchers to perform the following aspects of validation:

  1. Predictive Validity – “Also known as “practical,” “criterion-related,” “postdiction,” or “concurrent validity,” predictive validity establishes the relationship between measures and constructs by demonstrating that a given set of measures posited for a particular construct correlate with or predict a given outcome variable.” (A toy illustration follows this list.)
  2. Unidimensional Validity – evidence that each measurement item reflects one and only one latent variable (construct). The terms frequently used in discussing this validity are “first order factors,” “second order factors,” etc. According to the authors, this type of validity is relatively new, and the understanding of its capabilities is still very limited.
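
And a toy illustration of predictive validity, again a sketch of my own with simulated numbers: correlate a construct score with a separately measured criterion variable.

import numpy as np

rng = np.random.default_rng(2)
n = 200
# Hypothetical construct score (e.g. the mean of its validated items)
# and a separately measured outcome (the criterion).
construct = rng.normal(size=n)
outcome = 0.6 * construct + rng.normal(scale=0.8, size=n)

r = np.corrcoef(construct, outcome)[0, 1]
print(f"correlation with the criterion: r = {r:.2f}")
# A substantial, significant r is evidence of predictive validity.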

 

The authors also made the following recommendations pertaining to the development of research instruments:

  1. Researchers are highly recommended to use previously validated instruments wherever possible. If researchers make significant alterations to a validated instrument, they are required to revalidate its content, constructs, and reliability.
  2. Researchers who are able to create their own instruments are highly recommended to do so, provided that they validate them thoroughly.


 

[i] Straub, D. W. (1989) “Validating Instruments in MIS Research,” MIS Quarterly, 13:2, pp. 147–169.

[ii] Boudreau, M., D. Gefen, and D. Straub (2001) “Validation in IS Research: A State-of-the-Art Assessment,” MIS Quarterly, 25:1, pp. 1–23.

[iii] Factorial validity can be assessed using factor analytic techniques such as common factor analysis, PCA, as well as confirmatory factor analysis in SEM. It can assess both convergent and discriminant validity, but does not provide evidence to rule out common methods bias when the researcher uses only one method in collecting the data.

[iv] Common Method Bias is also known as “method halo” or “methods effects”. It may occur when data are collected via only one method or via the same method but only at one point in time. Data collected in these ways likely share part of the variance that the items have in common with each other due to the data collection method rather than to: (i) the hypothesized relationships between the measurement items and their respective latent variables, or; (ii) the hypothesized relationships among the latent variables.

 

 


Article by Díaz de Rada, Vidal (2005) in the International Journal of Social Research Methodology (IJSRM), 8:1.

 

Interesting facts pertaining to the findings of this study and recommendation by the author:

  • The colour of the questionnaire significantly influences the response rate of a mail survey
  • A questionnaire with a cover page gets a (significantly) higher response rate than one without a cover page
  • The author recommends the use of a questionnaire of 14.85 cm × 21 cm size, with white paper and a coloured cover page
  • The author recalled what Dillman (1991) had suggested – ‘the smaller-sized questionnaires are responded to better by young people, while older people tend to respond better to large-sized ones’. This study did not confirm it.

 

Looking at the reference list of this paper, there are a few more studies related to questionnaire design. I’ll try to find them all and summarize their findings in this weblog.

  • Crittenden, W., Crittenden, V., & Hawes, J. (1985). “Examining the Effects of Questionnaire Colour and Print Font on Mail Survey Response Rates”. Akron Business and Economic Review, 16, 51–56.
  • Deleeuw, E. D., & Hox, J. J. (1988). “The Effects of Response-stimulating Factors on Response Rates and Data Quality in Mail Surveys”. Journal of Official Statistics, 4, 241–249.
  • Dillman, J. J., & Dillman, D.A. (1995). “The Influence of Questionnaire Cover Design on Response to Mail Surveys”. Proceedings of the International Conference on Survey Management and Process Quality, Bristol, pp. 109–114.
  • Dillman, D. A., & Frey, J. H. (1974). “Contribution of Personalization to Mail Questionnaire Response as an Element of a Previously Tested Method”. Journal of Applied Psychology, 59, 297–301.
  • Grembowski, G. (1985). “Survey Questionnaire Salience”. American Journal of Public Health, 75, 1350–1360.
  • Gullahorn, J., & Gullahorn, J. (1963). “An Investigation of the Effects of Three Factors on Response to Mail Questionnaires”. Public Opinion Quarterly, 27, 276–281.
  • Jansen, J. H. (1985). “Effect of Questionnaire Layout and Size and Issue-involvement on Response Rates in Mail Surveys”. Perceptual and Motor Skills, 61, 139–142.
  • Jobber, D., & Sanderson, S. (1983). “The Effects of a Prior Letter and Coloured Questionnaires Paper on Mail Survey Response Rates”. Journal of the Market Research Society, 25, 339–349.
  • Johnson, T. P., Parsons, J. A., Warnecke, R. B., & Kaluzny, A.D. (1993). “Dimensions of Mail Questionnaires and Response Quality”. Sociological Focus, 26, 271–274.
  • Matteson, M. (1974). “Type of Transmittal Letter and Questionnaire Colour as Two Variables Influencing Response Rates in a Mail Survey”. Journal of Applied Psychology, 59, 535–536.
  • Nederhof, A. J. (1988). “Effects of a Final Telephone Reminder and Questionnaire Cover Design in Mail Surveys”. Social Science Research, 17, 353–361.
  • Phipps, P.A., Robertson, K.W., & Keel, K.G. (1991). “Does Questionnaire Colour Affect Survey Response Rates?” Proceedings of the Section on Survey Research Methods, American Statistical Association, pp. 484–490.
  • Poe, G.L., Seeman, I., McLaughlin, J., Mehl, E., & Dietz, M. (1988). “‘Don’t know’ Boxes in Factual Questions in a Mail Questionnaire”. Public Opinion Quarterly, 52, 212–222.
  • Sánchez, M. E. (1992). “Effects of Questionnaire Design on the Quality of Survey Data”. Public Opinion Quarterly, 56, 206–217.