Category Archives: Validity

Article by Stone-Romero, Eugene F., and Rosopa, Patrick J. (2008) in Organizational Research Methods, 11:2.

 

Let's say we want to conduct a study with the simplest mediation model, X → M → Y. To gain the maximum internal validity for our inferences, we have to carry out two randomized experiments: one to establish the causal link X → M and the other the link M → Y. If we conduct quasi-experiments instead of randomized experiments, the internal validity inferences are weakened; and if we adopt a non-experimental approach, the causal inferences may not be valid at all (even if we analyze the data with ‘causal modeling’ techniques such as Hierarchical Multiple Regression, Path Analysis, and Structural Equation Modeling). This paper outlines the effects of different research designs on internal validity inferences:

 

Research Design          | Control of Confounding Variables | Internal Validity Inferences
------------------------ | -------------------------------- | ----------------------------
2 randomized experiments | Design                           | Very Strong
2 quasi-experiments      | Statistical                      | Moderately Strong
1 randomized experiment  | Design                           | Weak
1 quasi-experiment       | Statistical                      | Weak
2 non-experiments        | Statistical                      | Very Weak
1 non-experiment         | Statistical                      | Very Weak

 

For more complex mediation models, we need to conduct multiple randomized experiments accordingly, one for each causal link in the model.
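
To make the two-link logic concrete, here is a minimal sketch (my own illustration, not from the paper) that estimates each link of X → M → Y with a separate regression on simulated data. As the paper stresses, with non-experimental data such estimates only show consistency with the assumed model, not causality:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Simulate data that are consistent with the assumed model X -> M -> Y.
X = rng.normal(size=n)
M = 0.6 * X + rng.normal(size=n)   # the X -> M link
Y = 0.5 * M + rng.normal(size=n)   # the M -> Y link

def slope(x, y):
    """OLS slope of y regressed on x (with intercept)."""
    x_c, y_c = x - x.mean(), y - y.mean()
    return (x_c @ y_c) / (x_c @ x_c)

# Estimating each link separately mirrors the two-study logic, but with
# non-experimental data the estimates are covariance-level evidence only.
print(f"estimated X -> M coefficient: {slope(X, M):.2f}")   # ~0.6
print(f"estimated M -> Y coefficient: {slope(M, Y):.2f}")   # ~0.5
```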

 

The authors argued that a non-experimental study is only appropriate for proving ‘the consistency between an assumed causal model and the results of the study’, but NOT ‘the consistency between reality and the results of the study’. That is basically why, if a non-experimental design is selected, we are not legitimately allowed to make any causal inferences. Instead, what we can legitimately claim are inferences at the covariance (e.g. correlation) level only; for example, we can say “…this study has recognized patterns of covariances among measured variables that are consistent with the assumed causal model (specified in the research model)”.

 

In addition to that (inferences at the covariance level), the authors argued that we also need to acknowledge that: (a) the same pattern of covariances may also be consistent with other causal models, and (b) our findings do not provide a valid basis for making the causal inferences specified in the assumed causal model. For example, we can say (this example is taken directly from the paper): “Hypothesis 1 argued that there would be a positive correlation between X and Y. The test of this hypothesis showed that there was. This finding is consistent with the assumed causal model shown in Figure 1. However, it may also be consistent with a number of other causal models.”
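
Point (a) is easy to demonstrate by simulation. The sketch below (my own hypothetical illustration, not taken from the paper) generates data under two different causal models, a direct-cause model X → Y and a common-cause model Z → X, Z → Y, that yield essentially the same observed correlation:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Model A: X directly causes Y (X -> Y).
X_a = rng.normal(size=n)
Y_a = 0.5 * X_a + np.sqrt(1 - 0.5**2) * rng.normal(size=n)

# Model B: X and Y share a confounder Z (Z -> X, Z -> Y) and there is
# no causal link between X and Y at all.
Z = rng.normal(size=n)
c = np.sqrt(0.5)   # loading chosen so that corr(X, Y) is again ~0.5
X_b = c * Z + np.sqrt(1 - c**2) * rng.normal(size=n)
Y_b = c * Z + np.sqrt(1 - c**2) * rng.normal(size=n)

# Both models produce the same observed covariance pattern, so the
# correlation alone cannot tell us which causal model is the true one.
print(f"Model A corr(X, Y): {np.corrcoef(X_a, Y_a)[0, 1]:.2f}")   # ~0.5
print(f"Model B corr(X, Y): {np.corrcoef(X_b, Y_b)[0, 1]:.2f}")   # ~0.5
```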

 

One of the recommendations given by the authors made me smile; I nodded my head a few times and said “yeah, you’re right… indeed you’re right!!” Here is the recommendation:

“…we recommend that individuals who teach undergraduate and/or graduate courses in such areas as statistics, research design, research methods, and causal modeling, instruct their students on the inferences that can and can not be made on the basis of data from nonexperimental research. Moreover, they need to disabuse students of the baseless arguments that appear in various publications about the inferences that are appropriate on the basis of ‘causal modeling’ procedures…”

Article by Straub, Detmar; Boudreau, Marie-Claude; Gefen, David (2004) in Communications of the Association for Information Systems, 13.

 

Prior to reading this article, I had read two papers written by the same authors, i.e. Straub (1989)[i] and Boudreau, Gefen & Straub (2001)[ii]. I think this paper is the culmination of those two earlier papers. Its main contribution is a guideline on which aspects of validation should be included in IS positivist research. The authors rate the requirement to perform each validation procedure at three levels of importance: mandatory, highly recommended, and optional.

 

Mandatory

All IS positivist research is required (it is compulsory) to provide evidence for the following aspects of validity:

  1. Construct Validity – whether the measures chosen by the researcher “fit” together in such a way as to capture the essence of the construct. [Note: you may refer to my previous entry related to construct validity here, or refer to Wikipedia (here) for the complete definition]. Note that construct validity consists of five different but inter-related elements, i.e. (i) Discriminant Validity; (ii) Convergent Validity; (iii) Nomological Validity; (iv) Factorial Validity[iii], and (v) Testing of Common Method Bias[iv]. What are mandatory, according to the authors, are Discriminant Validity and Convergent Validity; establishing Factorial Validity is therefore sufficient in this context.
  2. Reliability – to prove that the measures for one construct are, indeed, related to each other. It is worth noting that reliability applies only to reflective constructs (never assess the reliability of a formative construct, as its measures are not expected to correlate with each other). [Note: please refer to my earlier entry on the article by Petter, Straub and Rai (2007) for further details.] A small numerical sketch of reliability and of the convergent/discriminant checks above appears after this list.
  3. Manipulation Validity – mandatory for certain types of (lab) experimental studies only. Experiments in which participants are treated with a physical substance (such as a drug) are not required to prove manipulation validity.
  4. Statistical Conclusion Validity – researchers need to provide sound arguments on the quality of the statistical evidence of covariation, addressing, for example, sources of error, the use of appropriate statistical tools, and bias.
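
As promised above, here is a minimal sketch of convergent validity, discriminant validity, and reliability (Cronbach's alpha) on simulated data. This is my own illustration; the constructs, loadings, and item counts are hypothetical, and the paper itself does not prescribe this exact procedure:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500

# Simulate two reflective constructs, A and B, each measured by three items.
A = rng.normal(size=n)
B = rng.normal(size=n)
items_A = np.column_stack([0.8 * A + 0.6 * rng.normal(size=n) for _ in range(3)])
items_B = np.column_stack([0.8 * B + 0.6 * rng.normal(size=n) for _ in range(3)])

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_observations, k_items) score matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

corr = np.corrcoef(np.hstack([items_A, items_B]), rowvar=False)
within = corr[:3, :3][np.triu_indices(3, k=1)].mean()   # convergent: should be high
between = corr[:3, 3:].mean()                           # discriminant: should be low

print(f"mean within-construct item correlation:  {within:.2f}")
print(f"mean between-construct item correlation: {between:.2f}")
print(f"Cronbach's alpha for construct A: {cronbach_alpha(items_A):.2f}")
```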

 

Highly Recommended

It is highly recommended that positivist researchers perform the following validation procedures:

  1. Testing for Common Method Bias – common method bias can be avoided by gathering the data for the independent and dependent variables via different methods; if a single method is used, the bias can be tested for through SEM (see the sketch after this list).
  2. Nomological Validity – evidence that the structural relationships among the variables/constructs are consistent with those of other studies that measured them with validated instruments and tested them against a variety of persons, settings, times, and methods.
  3. Manipulation Validity – for quasi-experimental or non-experimental studies in the social (and design) sciences, which characterize a great deal of management research, researchers have to prove that participants truly received the treatment.
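
The authors point to SEM-based tests for common method bias. As a lighter-weight illustration of the same concern (my own sketch, not the authors' procedure), a Harman-style single-factor check asks how much of the item variance a single common factor would absorb, here approximated with PCA on simulated survey data:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500

# Hypothetical survey: six items from two constructs (A and B), all
# contaminated by a shared "method" component standing in for common
# method variance (e.g. a single self-report questionnaire).
method = rng.normal(size=n)
A, B = rng.normal(size=n), rng.normal(size=n)
items = np.column_stack(
    [0.7 * A + 0.4 * method + 0.5 * rng.normal(size=n) for _ in range(3)]
    + [0.7 * B + 0.4 * method + 0.5 * rng.normal(size=n) for _ in range(3)]
)

# Harman-style check: if one factor accounts for the majority of the
# variance in the items, common method bias is a plausible concern.
z = (items - items.mean(axis=0)) / items.std(axis=0, ddof=1)
eigenvalues = np.linalg.eigvalsh(np.cov(z, rowvar=False))[::-1]
first_factor_share = eigenvalues[0] / eigenvalues.sum()
print(f"variance explained by the first factor: {first_factor_share:.1%}")
```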

 

Optional

It is optional for positivist researchers to perform the following validation procedures:

  1. Predictive Validity – “Also known as ‘practical,’ ‘criterion-related,’ ‘postdiction,’ or ‘concurrent validity,’ predictive validity establishes the relationship between measures and constructs by demonstrating that a given set of measures posited for a particular construct correlate with or predict a given outcome variable.” (A minimal sketch appears after this list.)
  2. Unidimensional Validity – evidence showing that each measurement item reflects one and only one latent variable (construct). The terms frequently used in discussing this validity are “first-order factors,” “second-order factors,” etc. According to the authors, this type of validity is relatively new, and the understanding of its capabilities is still very much limited.
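
As a minimal illustration of predictive validity (my own hypothetical example; the scale, loadings, and outcome are invented), a construct score such as an averaged multi-item scale can be correlated with the criterion it is posited to predict:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 300

# Hypothetical 3-item scale measuring a latent construct, plus an
# outcome (criterion) that the construct is posited to predict.
latent = rng.normal(size=n)
scale_items = np.column_stack(
    [0.8 * latent + 0.6 * rng.normal(size=n) for _ in range(3)]
)
construct_score = scale_items.mean(axis=1)   # simple averaged scale score
outcome = 0.5 * latent + rng.normal(size=n)

# Predictive (criterion-related) validity: the scale score should
# correlate substantially with the criterion variable.
r = np.corrcoef(construct_score, outcome)[0, 1]
print(f"correlation of scale score with the criterion: {r:.2f}")
```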

 

The authors also made the following recommendations pertaining to the creation and reuse of research instruments:

  1. Researchers are highly recommended to use previously validated instruments wherever possible. If they make significant alterations to a validated instrument, they are required to revalidate its content, constructs, and reliability.
  2. Researchers who are able to create their own instruments are highly encouraged to do so, provided that they validate them thoroughly.


 

[i] Straub, D. W. (1989) “Validating Instruments in MIS Research,” MIS Quarterly, 13:2, pp. 147- 169.

[ii] Boudreau, M., D. Gefen, and D. Straub (2001) “Validation in IS Research: A State-of-the-Art Assessment,” MIS Quarterly, 25:1, pp. 1-23.

[iii] Factorial validity can be assessed using factor analytic techniques such as common factor analysis, PCA, and confirmatory factor analysis in SEM. It can assess both convergent and discriminant validity, but it does not provide evidence to rule out common method bias when the researcher uses only one method to collect the data.

[iv] Common Method Bias is also known as “method halo” or “methods effects”. It may occur when data are collected via only one method or via the same method but only at one point in time. Data collected in these ways likely share part of the variance that the items have in common with each other due to the data collection method rather than to: (i) the hypothesized relationships between the measurement items and their respective latent variables, or; (ii) the hypothesized relationships among the latent variables.