
Article by Henson, Robin K. (2001) in Measurement and Evaluation in Counseling and Development, 34.


 [Note: This entry was amended on September 8, 2008 – 12:25am.]


Last week I presented a paper at the International Accounting and Business Conference (IABC 2008), held on 18–19 August at the Puteri Pacific Johor Bahru, Johor, Malaysia. During the Q&A session, one member of the audience asked my opinion on the issue of validity. He asked whether we should still estimate the validity of an instrument (he meant a questionnaire) if we simply take it from previous studies in which it has already been validated many times. He also asked whether ‘Cronbach alpha’ (what he actually meant was the internal consistency reliability test) is sufficient to support the validity of such an instrument. A summary of my answers follows:

  1. When we adopt a questionnaire from other studies and assume that its validity has been proven, what we are actually assuming is CONTENT VALIDITY: the items in the instrument are well supported by the relevant theory.
  2. Besides content validity, there are a few other aspects of validity that we have to establish. For example, we have to show that the respondents perceive the questions (in the questionnaire) in the way we intend: when we ask them about ‘A’, we have to ensure that they understand the question to be exactly about ‘A’ and nothing else. What we try to establish here is called CONSTRUCT VALIDITY or MEASUREMENT VALIDITY.
  3. CONSTRUCT validity consists of two components: CONVERGENT validity and DISCRIMINANT validity. Convergent validity shows that all the items (a.k.a. measurements or measured variables) correctly measure the intended construct (a.k.a. latent or unobserved variable), while discriminant validity shows that none of the items measures any other construct. (I have prepared a diagram distinguishing the two components of construct validity in Figure A at the end of this entry.) One way to estimate construct validity is through so-called factorial validity, such as Confirmatory Factor Analysis (CFA). By conducting CFA, we obtain a structure of constructs with their measures (items) that fulfills the requirements of discriminant and convergent validity.
  4. Cronbach’s alpha is an internal consistency test that measures the degree to which the items (measurements) consistently measure the underlying latent construct. It is an indicator of RELIABILITY. The difference between reliability and convergent validity is that reliability looks at one individual construct at a time, while convergent validity looks at an individual construct in comparison with the other constructs in the proposed nomological network. In this sense, Cronbach’s alpha relates to convergent validity. Having it in hand, we have demonstrated reliability but not full construct validity: we still have to establish the other component, discriminant validity. So Cronbach’s alpha alone is not sufficient!
  5. I remind myself that what we should estimate here[i] is actually the validity of the ‘scores’ or ‘measurements’, NOT of the ‘questionnaire’ or the ‘instrument’[ii]. That explains why we should examine validity even though the instrument has been validated many times before.
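
To make point 4 above concrete, here is a minimal sketch of how Cronbach’s alpha is computed from a respondents-by-items score matrix. The function name and the sample data are my own illustration (not from the paper or the conference talk); it uses the standard formula alpha = k/(k−1) · (1 − Σ item variances / variance of the sum score):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a (respondents x items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of items
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the sum score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses: 6 respondents x 4 items.
scores = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 5, 5, 4],
    [3, 3, 2, 3],
    [4, 4, 4, 5],
    [1, 2, 1, 2],
])
print(f"alpha = {cronbach_alpha(scores):.3f}")
```

When the items covary strongly (all tapping the same construct), alpha approaches 1; note that the estimate is a property of these particular scores, not of the questionnaire itself, which is exactly the point in item 5.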


Another member of the audience asked me about the stage at which the internal consistency test should be done. She is currently at the data analysis stage of her PhD work. She asked my opinion on whether she should perform Cronbach’s alpha twice, once before the factor analysis and once after it. A summary of my answers follows:

  1. We perform the (first) internal consistency test (Cronbach’s alpha) to detect whether all the items point in a single conceptual direction. If the result indicates that they do not, then recoding has to be done accordingly.
  2. After we are done with the (first) Cronbach’s alpha, we perform factor analysis to estimate construct validity. At the end of this step, we have the structure of the constructs in our study (the structure depicts which items indicate which construct).
  3. We need to perform Cronbach’s alpha again to support the reliability of the new structure obtained from the factor analysis. If that structure is identical to the one prior to factor analysis, then the (second) Cronbach’s alpha is optional.
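
The three steps above can be sketched end-to-end in Python. This is purely my own illustration on simulated data (not from the talk): I use scikit-learn’s exploratory `FactorAnalysis` as a stand-in for the factor analysis step, and recompute alpha per recovered factor:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

def cronbach_alpha(items):
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                          / items.sum(axis=1).var(ddof=1))

# Simulated questionnaire: items 0-2 tap one construct, items 3-5 another.
rng = np.random.default_rng(0)
f1, f2 = rng.normal(size=(2, 500))
X = np.column_stack([f1, f1, f1, f2, f2, f2]) \
    + rng.normal(scale=0.5, size=(500, 6))

# Step 1: alpha over all items, to check the overall conceptual direction.
print("alpha, all items:", round(cronbach_alpha(X), 3))

# Step 2: factor analysis to recover which items indicate which construct.
fa = FactorAnalysis(n_components=2, random_state=0).fit(X)
assignment = np.abs(fa.components_).argmax(axis=0)   # item -> factor

# Step 3: alpha again, per factor, on the structure the analysis produced.
for factor in range(2):
    idx = np.where(assignment == factor)[0]
    print(f"factor {factor}: items {idx}, "
          f"alpha = {cronbach_alpha(X[:, idx]):.3f}")
```

Because the two simulated constructs are uncorrelated, the per-factor alphas come out high while the all-items alpha is lower, mirroring why the second alpha should follow the structure the factor analysis delivers.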


[Note: The answers above are not taken from the paper; they are entirely my own.]


The important points that I took from this paper are listed below. (Note: many other points are discussed in the paper, but I do not include them here simply because I did not find them as interesting.)

  1. According to the author, many researchers hold a misconception about reliability testing: they perceive reliability as a property of tests rather than of scores (data). This is indeed a wrong concept! The author wrote, “…it is more appropriate to speak of the reliability of ‘test scores’ or the ‘measurement’ than of the ‘test’ or the ‘instrument’…”.
  2. As the author wrote, “Different samples, testing conditions, and any other factor that may affect observed scores can in turn affect reliability estimates…”, so a reliability test should be done even though the instrument we use has already been validated many times before.
  3. Three sources of measurement error (within the classical framework) are: (i) content sampling of items (“..the theoretical idea that the test is made up of a random sampling of all possible items that could be on the test….”); (ii) stability across time; (iii) inter-rater error.
  4. Poor score reliability reduces the power of statistical significance tests, making statistical significance harder to attain.
  5. The author suggested so-called “reliability generalization” (RG) studies, described as “…the cumulative information they may yield in describing study characteristics that affect reliability estimates for scores from a given test and perhaps, study characteristics that consistently affect score reliability across different tests..”
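
A quick way to see point 4 is Spearman’s classical attenuation formula, r_observed = r_true · √(rel_x · rel_y): the lower the score reliabilities, the smaller the correlation we actually observe, and hence the harder it is to reach significance at a given sample size. A short Python sketch (my own illustration, not taken from the paper):

```python
# Spearman's attenuation formula: the observed correlation shrinks
# as the reliabilities of the two sets of scores drop.
def attenuated_r(true_r, rel_x, rel_y):
    return true_r * (rel_x * rel_y) ** 0.5

true_r = 0.40  # hypothetical true correlation between two constructs
for rel in (0.9, 0.7, 0.5):
    print(f"reliability {rel}: observed r = {attenuated_r(true_r, rel, rel):.2f}")
```

With reliabilities of 0.5 for both scores, a true correlation of 0.40 is observed as only 0.20, which is exactly why poor score reliability erodes statistical power.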

[i] By ‘here’ I mean ‘construct validity’.

[ii] This is one of the core arguments of this paper.


Article by Jasperson, Jon (Sean), et al. (2002) in MIS Quarterly, 26:4.


Content-wise, this is basically not a research methodology paper, but I want to include it here because it taught me a great method for literature research: the so-called “meta-triangulation” method. The authors explain all the steps of the method in detail, and reading the entire paper gave me a complete example of how to conduct a meta-triangulation study. I personally rate this paper as ‘extremely heavy’, and reading the entire 52 pages was really tiring, but it gave me highly valuable knowledge. If one day I become a professor (my very ambitious dream, still far out of reach), I will make it compulsory reading for each of my students.


The steps of the meta-triangulation method (taken directly from the paper) are as follows:

  1. Phase I. Groundwork: In the groundwork phase, researchers identify the phenomenon of interest, choose the paradigmatic lenses, and collect a meta-theoretical sample. By defining the phenomenon of interest and choosing the paradigm lenses, researchers establish the boundaries of what will be considered in their multi-paradigm investigation.
  2. Phase II. Data Analysis: Data analysis consists of planning the paradigm itinerary, conducting multi-paradigm coding, and constructing paradigm accounts. Applying a systematic series of analyses allows researchers to overcome information processing limitations that emerge when trying to understand information intensive data.
  3. Phase III. Theory Building: The final phase of meta-triangulation attempts to build theory by exploring meta-conjectures, attaining a meta-paradigm perspective that can accommodate representations from multiple paradigms, and articulating/critiquing the resulting theory and theory building process.

I must note here that the meta-triangulation study reported in this paper was conducted by six prominent, renowned researchers in the field. The meta-theoretical sample they studied consists of 82 articles from six highly ranked MIS journals. This gives me the insight that only researchers with extensive research experience are able to carry out such a study. I hope that one day I will be one of them!