D-2: Distinguish between internal and external validity
Target Terms: Internal Validity, External Validity
Definition (Internal Validity): The degree to which an experiment convincingly demonstrates that changes in a behavior are a function of the intervention/treatment and NOT the result of uncontrolled or unknown (confounding) variables.
Example in clinical context: A behavior analyst implements a DRA procedure to support a client who engages in skin picking. The skin picking responds favorably to the intervention. When the DRA is removed, the target behavior returns; when the intervention is reintroduced, the behavior again drops to low rates. This reversal pattern strongly suggests that the DRA intervention, and not some uncontrolled variable, was responsible for the reduction in skin picking.
Example in supervision/consultation context: A behavior analyst is consulting in a classroom and implements a direct instruction methodology during literacy time. No other classroom or homework practices are altered. After several weeks of receiving this new instruction, the students show significant improvement in their reading performance. The behavior analyst concludes that the instructional method, and not uncontrolled variables, is the reason the students' academic performance is improving.
Why it matters: Without high internal validity, cause-and-effect (functional) relations cannot be convincingly demonstrated.
Behavior analytic literature places an emphasis on within-subjects designs, wherein research participants serve as their own controls. This is a fantastic way of answering clinical questions about individuals. When applying behavior analytic research to our own work, we carefully select findings that have direct bearing on our own clinical problems (for example, by matching the function of problem behavior).
However, because behavior analytic studies usually have small numbers of participants, and because we tend not to use between-subjects (group) designs, it can take longer to build external validity, and our work can be confusing to other fields that rely on large numbers of participants to "even out" individual variables.
Definition (External Validity): The degree to which a study's results are generalizable to other subjects, settings, and/or behaviors not included in the original study.
Example in clinical context: A behavior analyst implements a new intervention from a study that they read in a peer-reviewed journal. The individual participant variables (for example, developmental level and the topography and function of behavior) are a good match with the behavior analyst's current client. The analyst replicates the intervention steps with their client and achieves similar favorable results. This supports the study's external validity, since the study's results have been replicated with a different subject.
Example in supervision/consultation context: A study is conducted on a systematic way to teach consultees how to conduct functional assessments independently. All subjects in the study learned to complete functional assessments using the methods described in the study. Subsequently, numerous other studies replicated the methods and found similar results across participants and settings. This is strong evidence in support of the original study’s external validity.
Why it matters: Research findings are clinically useless unless they can convincingly demonstrate (1) that the methods were responsible for the observed changes, and (2) that the methods can work across participants and contexts not included in the original study. Science is always building and correcting itself, and replication is a vital – if unglamorous – part of the scientific process!