
Article by Henson, Robin K. (2001) in Measurement and Evaluation in Counseling and Development, 34.


 [Note: This entry has been amended on September 8, 2008 – 12:25am.]


Last week I presented a paper at the International Accounting and Business Conference (IABC 2008), held on 18–19 August at the Puteri Pacific Johor Bahru, Johor, Malaysia. During the Q&A session, one member of the audience asked my opinion on the issue of validity. He asked whether we should estimate the validity of an instrument (he meant a questionnaire) if we simply take it from previous studies in which it has been validated many times. He also asked whether Cronbach's alpha (what he actually meant was the internal consistency reliability test) is sufficient to support the validity of such an instrument. A summary of my answers is below:

  1. When we adopt a questionnaire from other studies and assume that its validity has been proven, what we are actually assuming is CONTENT VALIDITY. It means that the items in the instrument are well supported by the relevant theory.
  2. Besides content validity, there are a few other aspects of validity that we have to establish. For example, we have to show that the respondents perceive the questions (in the questionnaire) in the way we intend; that is, when we ask them about ‘A’, we have to ensure that the respondents understand that the question is exactly about ‘A’ and not about anything else. What we are trying to establish here is called ‘CONSTRUCT VALIDITY’ or ‘MEASUREMENT VALIDITY’.
  3. CONSTRUCT validity consists of two components: CONVERGENT validity and DISCRIMINANT validity. Convergent validity shows that all the items (a.k.a. measurements or measured variables) correctly measure the intended construct (a.k.a. latent variable or unobserved variable), while discriminant validity shows that none of the items measures another construct. (I have prepared a diagram distinguishing the two components of construct validity in Figure A at the end of this entry.) One way to estimate construct validity is through so-called factorial validity, such as Confirmatory Factor Analysis (CFA). By conducting CFA, we obtain a structure of constructs and their measures (items) that fulfills the requirements of discriminant and convergent validity.
  4. Cronbach's alpha is an internal consistency test which measures the degree to which the items (measurements) consistently measure the underlying latent construct. It is an indicator of RELIABILITY. The difference between reliability and convergent validity is that reliability looks at one individual construct at a time, while convergent validity looks at an individual construct in comparison with the other constructs in the proposed nomological network. In that sense, Cronbach's alpha speaks to convergent validity. Having Cronbach's alpha in hand, we have demonstrated reliability but not construct validity: we still have to demonstrate the other component, discriminant validity. So Cronbach's alpha alone is not sufficient!
  5. I remind myself that what we should estimate here[i] is actually the validity of the ‘scores’ or ‘measurements’, and NOT the ‘questionnaire’ or the ‘instrument’[ii]. That explains why we should examine validity even though the instrument has been validated many times before.
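To make the reliability side of point 4 concrete, Cronbach's alpha can be computed directly from an item-score matrix using the standard formula α = k/(k−1) · (1 − Σ item variances / variance of total scores). The sketch below is my own minimal illustration (the function name and the Likert data are hypothetical, not from the paper):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses: 4 respondents x 3 items.
scores = [[4, 5, 4],
          [2, 2, 3],
          [5, 5, 5],
          [3, 3, 3]]
print(round(cronbach_alpha(scores), 3))  # -> 0.956
```

Note that alpha is a property of these particular scores: a different sample answering the same questionnaire would generally yield a different alpha, which is exactly the point in item 5 above.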


Another member of the audience asked me about the stage at which the internal consistency test should be done. She is currently at the data analysis stage of her PhD work. She asked my opinion on whether she should perform Cronbach's alpha twice, once before the factor analysis and once after it. A summary of my answers is below:

  1. We perform the (first) internal consistency test (Cronbach's alpha) to detect whether or not all the items point in a single conceptual direction. If the Cronbach's alpha result indicates that they do not, then recoding has to be done accordingly.
  2. After we are done with the (first) Cronbach's alpha, we perform factor analysis to estimate the construct validity. At the end of this step, we will have the structure of constructs in our study (the structure depicts which items indicate which construct).
  3. We need to perform Cronbach's alpha again to support the reliability of the new structure obtained from the factor analysis. If the structure obtained is identical to the one prior to factor analysis, then the (second) Cronbach's alpha is optional.
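The recoding step in point 1 can be sketched in code. This is a minimal illustration only (the function name and the item-total-correlation rule are my own simplification, not a prescription from the paper): any item that correlates negatively with the sum of the remaining items is reverse-scored so that all items point in one direction before alpha is re-estimated.

```python
import numpy as np

def align_items(items):
    """Reverse-score (recode) any item whose corrected item-total
    correlation is negative, so all items point in one direction."""
    items = np.asarray(items, dtype=float)
    aligned = items.copy()
    for j in range(items.shape[1]):
        rest = np.delete(items, j, axis=1).sum(axis=1)  # total of the other items
        if np.corrcoef(items[:, j], rest)[0, 1] < 0:
            lo, hi = items[:, j].min(), items[:, j].max()
            aligned[:, j] = lo + hi - items[:, j]       # flip within the item's range
    return aligned

# Hypothetical data: the fourth item is worded in the opposite direction.
data = [[1, 1, 2, 5],
        [2, 3, 2, 4],
        [3, 3, 3, 3],
        [4, 4, 4, 2],
        [5, 5, 4, 1]]
print(align_items(data)[:, 3])  # fourth item recoded -> [1. 2. 3. 4. 5.]
```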


[Note: What I have written above is not from the paper; it is all my own.]


The important points that I got from this paper are listed below (Note: there are many other points discussed in the paper, but I do not include them here simply because I did not find them as interesting):

  1. Many researchers, according to the author, have a misconception about reliability testing: they perceive that reliability applies to tests rather than to scores (data). It is indeed a wrong concept! The author wrote “…it is more appropriate to speak of the reliability of ‘test scores’ or the ‘measurement’ than of the ‘test’ or the ‘instrument’…”.
  2. As the author wrote, “Different samples, testing conditions, and any other factor that may affect observed scores can in turn affect reliability estimates…”; therefore, the reliability test should be done even though the instrument we used has already been validated many times before.
  3. Three sources of measurement error (within the classical framework) are: (i) content sampling of items (“…the theoretical idea that the test is made up of a random sampling of all possible items that could be on the test…”); (ii) stability across time; and (iii) inter-rater error.
  4. Poor score reliability reduces the power of statistical significance tests, making genuine effects harder to detect.
  5. The author suggested so-called “reliability generalization (RG) studies”. The author described RG in terms of “…the cumulative information they may yield in describing study characteristics that affect reliability estimates for scores from a given test and perhaps, study characteristics that consistently affect score reliability across different tests…”
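Point 4 can be made concrete with the classical attenuation formula: the correlation observable between two measured variables is the true correlation between the constructs, scaled down by the square root of the product of the score reliabilities. The numbers below are hypothetical, chosen purely for illustration:

```python
# Classical attenuation: r_observed = r_true * sqrt(rel_x * rel_y).
# All numbers here are hypothetical, for illustration only.
r_true = 0.50                # true correlation between the two constructs
rel_x, rel_y = 0.70, 0.60    # reliabilities of the two sets of scores
r_observed = r_true * (rel_x * rel_y) ** 0.5
print(round(r_observed, 3))  # -> 0.324
```

A true correlation of 0.50 shrinks to about 0.32 in the observed scores, so a significance test on the observed correlation needs a larger sample to reach the same power.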

[i] By ‘here’ I mean ‘construct validity’.

[ii] This is one of the core arguments of this paper.



