Purposeful sampling in qualitative research
I regularly direct people to sociologist Michael Patton’s Qualitative Research & Evaluation Methods, especially the chapter on Purposeful Sampling. In Patton’s words: “perhaps nothing better captures the difference between quantitative and qualitative methods than the different logics that undergird sampling approaches”.
What is purposeful sampling?
In brief:
Purposeful sampling focuses on selecting information-rich cases whose study will illuminate the questions under study.
Why?
The logic and power of purposeful sampling lie in selecting information-rich cases for study in depth. Information-rich cases are those from which one can learn a great deal about issues of central importance to the purpose of the inquiry, thus the term purposeful sampling.
Studying information-rich cases yields insights and in-depth understanding rather than empirical generalizations.
Patton lists 15 strategies for purposeful sampling:
- Extreme or deviant case sampling. Self-explanatory. A focus on “outliers” according to the question of interest.
- Intensity sampling. “An intensity sample consists of information-rich cases that manifest the phenomenon of interest intensely (but not extremely). … [Intensity sampling] seeks excellent or rich examples of the phenomenon of interest, but not highly unusual cases.”
- Maximum variation (heterogeneity) sampling. “This strategy for purposeful sampling aims at capturing and describing the central themes that cut across a great deal of variation.”
- Homogeneous samples. “The strategy of picking a small, homogeneous sample, the purpose of which is to describe some particular subgroup in depth.”
- Typical case sampling. For characterizing the normal case; often selected based on quantitative data or a “key informant” (e.g. a teacher, a program administrator, etc.).
- Critical case sampling. The person, group, or research site that will “yield the most information and have the greatest impact on the development of knowledge”. It might require creativity or deep knowledge of existing dynamics to identify critical cases. “Identification of critical cases depends on recognition of the key dimensions that make for a critical case.”
- Snowball or chain sampling. Repeatedly ask the question “Who should I talk to about X?” and you’ll soon find yourself talking to people who know an awful lot about X!
- Criterion sampling. Sample all people who exceed some threshold or meet some specific criteria. Common in quality assurance contexts.
- Theory-based sampling, operational construct sampling, and theoretical sampling. “The researcher samples incidents, slices of life, time periods, or people on the basis of their potential manifestation or representation of important theoretical constructs.”
- Confirming and disconfirming cases. Used after the importance of a pattern or the stability of a theory has been established to check and adjust emerging theory. Disconfirming cases are particularly important because they can demarcate the “boundary” of a theory i.e. “the exception that proves the rule”.
- Stratified purposeful sampling. “The purpose of a stratified purposeful sample is to capture major variations rather than to identify a common core, although the latter may also emerge in the analysis.” My opinion: if you find yourself wanting to do this, you should probably start considering mixed (quantitative) methods rather than purely qualitative methods.
- Opportunistic or emergent sampling. Appropriate during fieldwork; taking unexpected opportunities to do impromptu interviews.
- Purposeful random sampling. “A small, purposeful random sample aims to reduce suspicion about why certain cases were selected for study, but such a sample still does not permit statistical generalizations.”
- Sampling politically important cases. “Trying to increase the usefulness and relevance of information where resources permit the study of only a limited number of cases.”
- Convenience sampling. “Doing what’s fast and convenient. This is probably the most common sampling strategy—and the least desirable. …. Convenience sampling is neither purposeful nor strategic.”
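Some of these strategies reduce to simple selection rules over a set of cases. As a minimal sketch, here are criterion sampling and extreme-case sampling applied to a toy table; the field names (`site`, `score`) and the threshold are invented for illustration, not from Patton:

```python
# Hypothetical cases: each has an identifier and a quantitative score
# (e.g. a program outcome measure used to guide case selection).
cases = [
    {"site": "A", "score": 12},
    {"site": "B", "score": 55},
    {"site": "C", "score": 91},
    {"site": "D", "score": 48},
    {"site": "E", "score": 97},
]

# Criterion sampling: keep every case that meets an explicit threshold.
criterion_sample = [c for c in cases if c["score"] >= 50]

# Extreme or deviant case sampling: take the outliers at both ends.
by_score = sorted(cases, key=lambda c: c["score"])
extreme_sample = [by_score[0], by_score[-1]]

print([c["site"] for c in criterion_sample])  # ['B', 'C', 'E']
print([c["site"] for c in extreme_sample])    # ['A', 'E']
```

The point is only that the *logic* of selection is explicit and defensible; the depth of study of each selected case is where the qualitative work happens.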
The other chapter of note I’d recommend from Patton’s textbook is “From Generalizations to Extrapolations and Transferability”, which summarizes a variety of ways that people have thought about the utility of qualitative findings.
Patton quotes Guba and Lincoln’s 1981 writings on generalization, suggesting to use the concept of “transferability” instead:
The degree of transferability is a direct function of the similarity between the two contexts, what we shall call “fittingness.” Fittingness is defined as degree of congruence between sending and receiving contexts. If context A and context B are “sufficiently” congruent, then working hypotheses from the sending originating context may be applicable in the receiving context.
Patton recommends extrapolation, a term he likes because it “clearly connotes that one has gone beyond the narrow confines of the data to think about other applications of the findings”.
Extrapolations are modest speculations on the likely applicability of findings to other situations under similar, but not identical, conditions. Extrapolations are logical, thoughtful, case derived, and problem oriented rather than statistical and probabilistic.
I’ve heard the term “theoretical generalization” used to refer to some mix of transferability and extrapolation, and in general that’s what I expect from qualitative work. Based on the particulars of the data, I look for implications in a broader context of interest.
These notes are from the third edition of Qualitative Research & Evaluation Methods, published in 2002.