Category Archives: Sample Size

Article by Sivo, Stephen A.; Saunders, Carol; Chang, Qing; and Jiang, James J. (2006) in Journal of the Association for Information Systems, 7(6).



ERRORs that frequently occur in questionnaire-based research:

  1. Sampling error – inadequate sample size/nonrandom samples;
  2. Measurement error – imperfect questionnaires;
  3. Coverage error – inability to contact some people in the population; and
  4. Nonresponse error – the condition wherein people of a particular kind are systematically absent from the sample because they share a tendency not to respond.


Types of VALIDITY that researchers need to take seriously (from the taxonomy of validity by Shadish et al., 2002):

  1. External validity – examines whether or not an observed causal relationship should be generalized to and across different measures, persons, settings, and times. It refers to either (1) generalizing to a well-specified population, or (2) generalizing across subpopulations. Generalizing to a well-specified population involves generalizing research findings to the larger population of interest. Generalizing across subpopulations refers to conceptual replicability (or robustness) to the extent that a cause-effect relationship found in a study that used particular subjects and settings would be replicated if different subjects, settings, and time intervals were used.
  2. Statistical conclusion validity – concerns the power to detect relationships that exist and determine with precision the magnitude of these relationships. A chief cause of insufficient power in practice involves having an inadequate sample size. In such cases, sampling error tends to be very high, and so the statistical conclusion validity of a study’s inferences is weakened.
  3. Internal validity – not explicitly defined in this paper.
  4. Construct validity – not explicitly defined in this paper.


Compliance Principles for designing a survey study (taken from Cialdini, 1988):

  1. Reciprocation – people are more willing to comply with a request to the extent that it constitutes the repayment of a perceived gift, favor, or concession;
  2. Consistency – after committing oneself to a position, one is more willing to comply with requests for behaviors that are consistent with that position (e.g., a respondent has verbalized those commitments before the request for participation);
  3. Social Validation – people frequently use the beliefs, attitudes, and actions of similar others as standards of comparison for their own beliefs, attitudes, and actions; that is, individuals are more willing to comply with a survey request to the degree that they believe that similar others would comply with it;
  4. Authority – people are more likely to comply with a request if it comes from a properly constituted authority;
  5. Scarcity – people are more willing to comply with requests to secure opportunities that are scarce; and
  6. Liking – people are favorably inclined toward those individuals that they like — they are more willing to comply with the requests of liked others, such as sponsoring organizations.



This paper argued that nonresponse error is analogous to selection bias in experimental research, in that it causes: (i) sample bias (respondents differ systematically from non-respondents on one or more known or unknown characteristics); and (ii) loss of power to detect effects, due to the resulting inadequate sample size and inaccurate effect-size estimation.


This paper reported that some standards for survey research accept a nonresponse rate of between 20% and 30%. The authors also quoted Babbie, who suggested that for e-surveys a 60% response rate is good and 70% is very good. The average response rates by field (as reported in the respective high-ranking journals) cited in this paper are: Management (55.6%); Small Business Respondents (30%); Information Systems (40%). The paper argued that these standards are merely ‘rules of thumb’ that ignore the compounding effects of sampling, measurement, and coverage errors. To overcome this problem, the authors proposed a formula to calculate the effect of response rate, sample size, population size, and sample proportion on the confidence interval (CI).
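The paper’s exact formula isn’t reproduced here, but the general idea can be sketched with the standard 95% confidence interval for a sample proportion, including a finite population correction; all numbers below are hypothetical:

```python
import math

def proportion_ci_halfwidth(p, n, N, z=1.96):
    """Half-width of a 95% CI for a sample proportion p, given
    n respondents drawn from a population of size N."""
    fpc = (N - n) / (N - 1)  # finite population correction
    return z * math.sqrt(p * (1 - p) / n * fpc)

# Hypothetical example: 1,000 questionnaires sent out of a
# population of 10,000, with a 30% response rate -> n = 300
N, sent, rate, p = 10_000, 1_000, 0.30, 0.5
n = int(sent * rate)
print(round(proportion_ci_halfwidth(p, n, N), 3))  # 0.056, i.e. about ±5.6 points
```

The lower the response rate, the smaller n becomes and the wider the interval, which is what makes the fixed-percentage rules of thumb misleading.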


Using that formula, after a series of simulations, the authors concluded that: (i) a small sample size and a low response rate are each problematic on their own; and (ii) the problem is compounded when both occur simultaneously.
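As an illustration (not the authors’ actual simulation), sweeping a hypothetical number of distributed questionnaires and a hypothetical response rate through the standard finite-population confidence interval for a proportion shows the compounding effect, with the interval widest when a small mailing and a low response rate coincide:

```python
import math

def ci_halfwidth(n, N, p=0.5, z=1.96):
    """95% CI half-width for a proportion, with finite population correction."""
    return z * math.sqrt(p * (1 - p) / n * (N - n) / (N - 1))

N = 5_000                                # hypothetical population size
for sent in (1_000, 200):                # questionnaires distributed
    for rate in (0.8, 0.3):              # response rate
        n = int(sent * rate)
        print(f"sent={sent:5d} rate={rate:.0%} n={n:4d} "
              f"CI=±{ci_halfwidth(n, N):.3f}")
```

The last row (small mailing, low response) yields an interval several times wider than the first (large mailing, high response), mirroring conclusion (ii).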


The most valuable contribution of this paper is, I believe, its recommendations on how to deal with the issue of nonresponse. The authors present the suggestions in a table, but I have redrawn them as a value-chain-model-like diagram. For people from systems science like me, I believe the diagram form is much easier to understand.

Response Rate Strategy


Article by Krejcie, R. V., & Morgan, D. W. (1970) in Educational and Psychological Measurement, 30(3).


The table of sample size determination developed by Krejcie & Morgan can be retrieved here.


I couldn’t get the original copy of the paper. Does anybody out there have it and would be willing to share it with me?