D. Experimental Design
D-1: Distinguish between dependent and independent variables.
Target Terms: Dependent Variable, Independent Variable
Dependent Variable
Definition: The target behavior that the intervention is designed to change.
Example in everyday context: Making a pot of coffee.
Example in clinical context: A client’s eloping behavior that is targeted for intervention.
Example in supervision/consultation context: Employee weekly productivity reports.
Why it matters: The dependent variable must be identified if the goal is to produce change in behavior.
Independent Variable
Definition: The intervention designed to have an effect on the dependent variable.
Example in everyday context: Pushing the “brew” button on your coffee machine to make coffee.
Example in clinical context: Response blocking as a means to prevent elopement.
Example in supervision/consultation context: Positive reinforcement earned when employees meet their weekly productivity goal.
Why it matters: To accurately understand behavior change, all treatment conditions must be identified and considered to determine which one has the greatest impact on the dependent variable(s).
D-2: Distinguish between internal and external validity.
Target Terms: Internal Validity, External Validity
Internal Validity
Definition: The degree to which an experiment convincingly demonstrates that changes in behavior are a function of the intervention/treatment and NOT the result of uncontrolled or unknown factors.
Example in everyday context: You can determine that your coffee brews every morning when someone presses “brew” on the coffee machine and coffee does not brew under any other circumstances.
Example in clinical context: A behavior analyst has been conducting a formal mand program with a client. Over time, the client masters their mand targets during instruction. The behavior analyst concludes that the mand program and instructional techniques, and not any other variables, are the reason the client is mastering their mand targets.
Example in supervision/consultation context: A behavior analyst is consulting in a classroom and implements a new instructional technique to improve skill acquisition. After several weeks of receiving this new instruction, the students show significant improvement in their academic performance. The behavior analyst concludes that the instructional method, and not uncontrolled variables, is the reason the students’ academic performance is improving.
Why it matters: High internal validity in an experiment demonstrates experimental control.
External Validity
Definition: The degree to which a study’s results are generalizable to other subjects, settings, and/or behaviors.
Example in everyday context: When using other coffee machines, you know that if you press “brew” coffee will be produced.
Example in clinical context: A behavior analyst is implementing a new intervention from a study they read in a peer-reviewed journal. They replicate the intervention steps with their client and achieve the same results. This demonstrates external validity, since the results have been replicated with a different subject.
Example in supervision/consultation context: A behavior analyst is hired by a company to increase workers’ knowledge, awareness, and proper practice of safety techniques and guidelines in the workplace. The behavior analyst has conducted several treatment sessions in this specific area with many companies. Their approach demonstrates external validity because the results generalize across several different companies.
Why it matters: For interventions to make a quality and lasting impact, they must generalize to other environments and behaviors. This allows the learner to contact reinforcement in various settings and improve their engagement in activities of daily living.
D-3: Identify defining features of single-subject experimental designs (e.g., individuals serve as their own controls, repeated measures, prediction, verification, replication).
D-4: Describe the advantages of single-subject experimental designs compared to group designs.
Target Terms: Single-Subject Design, Prediction, Verification, Replication
Single-Subject Design
Definition: A type of experimental design in which the subjects serve as their own controls.
Example in clinical context: A researcher is evaluating the effects of two schedules of reinforcement on the reduction of aggressive behavior in two subjects. The subjects serve as their own controls across the test conditions.
Example in supervision/consultation context: A group of teachers at a school serve as their own controls in an experiment comparing instructional methods.
Why it matters: Single-subject designs are the hallmark of experimental design used in applied behavior analysis.
Prediction
Definition: The anticipated outcome of a data path after collecting several stable data points.
Example in a clinical context: A behavior analyst collects baseline data on self-injury with a new client. After several data points, the behavior analyst observes that the rate of self-injury has remained between 55% and 61% of daily half-hour intervals. The behavior analyst predicts that rates of self-injury will remain stable over time without intervention.
Example in supervision/consultation context: A supervisor is collecting baseline data on their employees’ safety awareness and safe behaviors at the workplace before implementing an intervention targeted at increasing safety awareness. The supervisor observes that the employees engage in unsafe behaviors during 30% to 39% of the workday. The supervisor concludes that, without intervention, the employees will continue to engage in similar rates of unsafe work behavior.
Why it matters: Prediction allows the investigator to formulate a hypothesis about the outcome of a behavior change before or during an intervention.
Verification
Definition: A demonstration that, after termination or withdrawal of the treatment variable, the data path returns to baseline levels of responding.
Example in clinical context: A behavior analyst is implementing an intervention that targets a client’s self-injurious behavior. The data show that during the intervention phase, rates of self-injurious behavior decrease to 7% to 10% of daily half-hour intervals. The behavior analyst removes the intervention and observes that the client’s self-injurious behavior begins to increase. The withdrawal of the intervention and the return of the behavior to rates similar to baseline provide verification of the intervention’s effect.
Example in supervision/consultation context: A supervisor implements an intervention designed to increase employees’ understanding of safety in the workplace, which includes posting “warning” and “safety hazard” signs in designated areas, and finds that employees are 100% compliant with all safety protocols. The supervisor then withdraws the intervention and finds that employees begin to fall back into “bad habits” of acting unsafely around hazard areas. The withdrawal of the intervention and the return of unsafe workplace behavior to levels similar to baseline measures provide verification of the intervention.
Why it matters: Verification allows the investigator to confirm the effects that an intervention had on a behavior of interest.
Replication
Definition: After the withdrawal of an intervention, the intervention is reintroduced to determine whether its effects will be similar to those of the first intervention condition.
Example in clinical context: A behavior analyst reinstates an intervention targeted at reducing self-injurious behavior and finds that the client engages in significantly lower rates of self-injurious behavior, similar to the rates in the first intervention condition. This demonstrates replication of the results produced by the intervention.
Example in supervision/consultation context: A supervisor reinstates the workplace safety intervention and finds that the employees return to 100% compliance with all safety protocols, replicating the results of the first intervention condition.
Why it matters: Replication strengthens the ability to demonstrate experimental control with an intervention’s effectiveness in behavior change.
D-5: Use single-subject experimental designs (e.g., Reversal, Multiple Baseline, Multielement, Changing Criterion).
Target Terms: Reversal (A-B-A-B) Design, Multiple Baseline Design, Multielement/Alternating Treatments Design, Changing Criterion Design
Reversal (A-B-A-B) Design
Definition: An experimental design where baseline conditions (A) and the same intervention conditions (B) are reversed with the goal of strengthening experimental control.
Example in clinical context: A behavior analyst collects baseline data (A) on a student’s tantrum behavior. They begin to implement an intervention (B) and collect data on the student’s tantrum behavior. After several trials of the intervention (B), the behavior analyst withdraws the intervention (A). To demonstrate strong experimental control, the behavior analyst reinstates the intervention (B).
Example in supervision/consultation context: A behavior analyst is consulting in a classroom where they are providing instructional support to the paraprofessionals in the room. The behavior analyst collects baseline data (A) on the paraprofessionals’ ability to implement instructional techniques and begins to implement an intervention (B) that targets their ability to collect instructional data and follow instructional procedures. The behavior analyst withdraws the intervention (A), and rates of the target behavior return to baseline levels. The behavior analyst reinstates the intervention (B) and finds that the intervention has an effect on the paraprofessionals’ instructional techniques.
Why it matters: Reversal designs are the most powerful single-subject design for demonstrating a functional relation between an independent and a dependent variable. Reversal designs involve prediction, verification, and replication. There are several variations of reversal designs, depending on the severity of the target behavior or the type of reinforcement schedule the implementer sees fit to use.
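The A-B-A-B phase structure described above can be sketched as a simple summary of phase means. All rates below are invented for illustration, not clinical data; the pattern shows baseline, a treatment effect, verification on withdrawal, and replication on reinstatement.

```python
# A-B-A-B reversal design sketch with hypothetical rates (percent of
# daily half-hour intervals containing the target behavior).
phases = [
    ("A (baseline)",      [58, 60, 57, 61]),  # high, stable responding
    ("B (intervention)",  [22, 15, 10, 8]),   # responding drops under treatment
    ("A (withdrawal)",    [50, 55, 59, 60]),  # return toward baseline: verification
    ("B (reinstatement)", [20, 12, 9, 7]),    # effect recurs: replication
]

def mean(values):
    return sum(values) / len(values)

for label, data in phases:
    print(f"{label}: mean rate = {mean(data):.1f}")
```

The second A phase recovering baseline levels, followed by the second B phase recovering treatment levels, is what licenses the claim of a functional relation.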
Multiple Baseline Design
Definition: An experimental design in which implementation of the intervention is staggered in a stepwise fashion across behaviors, settings, or subjects.
Example in clinical context: A behavior analyst wants to target a student’s dropping behavior in two different settings: the classroom and in the hallway. The behavior analyst begins to collect baseline data on the dropping behavior in both settings. After a steady state of responding is demonstrated, the behavior analyst implements the intervention to the first setting, the classroom, while holding the hallway in baseline. After steady responding is achieved in the first implementation setting, the intervention is applied to the second setting which is the hallway.
Example in supervision/consultation context: A behavior analyst is consulting for a small company that has a uniform set of goals for employees to achieve. They conduct a multiple baseline design on one of these goals across five employees. The behavior analyst begins by collecting baseline data for all five employees. After a steady state of responding is achieved for all five, the behavior analyst implements the intervention with the first employee while holding the other four in baseline. After a steady state of responding is achieved with the first employee, the behavior analyst implements the intervention with the second employee, and continues in this stepwise fashion with all employees.
Why it matters: Multiple baseline designs are the most widely used design due to their flexibility. They do not require the withdrawal of a treatment variable. Multiple baseline designs involve prediction, verification and replication. There are variations of the multiple baseline design.
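The staggered structure of the classroom/hallway example can be sketched as a phase schedule. The session numbers below are hypothetical; the point is that the second setting stays in baseline while the first receives the intervention.

```python
# Multiple baseline across settings, sketched with invented start sessions.
intervention_start = {"classroom": 5, "hallway": 9}  # hypothetical session numbers

def phase(setting, session):
    """Return which phase a setting is in during a given session."""
    return "intervention" if session >= intervention_start[setting] else "baseline"

for session in range(1, 13):
    labels = ", ".join(f"{s}: {phase(s, session)}" for s in intervention_start)
    print(f"session {session:>2} -> {labels}")
```

Sessions 5 through 8 are the critical window: the classroom is in intervention while the hallway remains in baseline, which is what lets the unchanged baseline verify the prediction for the treated setting.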
Multielement/Alternating Treatments Design
Definition: An experimental design in which two or more conditions are presented in rapidly alternating succession, independent of the level of responding, to compare their effects on the target behavior.
Example in clinical context: A behavior analyst is comparing the effects of two treatments on the rate of a client’s aggressive behavior. The behavior analyst conducts a multielement/alternating treatments design with Treatment A and Treatment B. Treatment A did not appear to have an effect on the aggressive behavior, but Treatment B produced a definite decrease in aggressive behavior.
Example in supervision/consultation context: A supervisor is comparing two types of supervision modalities to determine which one is more effective in teaching supervisees BACB task list concepts. The supervisor conducts a multielement/alternating treatments design with their supervisee on supervision types 1 and 2. Type 1 demonstrated a significant amount of change in the supervisee’s knowledge, whereas Type 2 did not demonstrate any change. The supervisor concludes Type 1 is likely to be a more effective means of teaching novel concepts.
Why it matters: Multielement/alternating treatments designs are used to evaluate which independent variable would be best to use with a client. They do not require withdrawal of the intervention and can be used to quickly make comparisons between treatment conditions. Multielement/alternating treatments designs involve prediction, verification, and replication. There are several variations of the multielement/alternating treatments design, including versions with or without a baseline phase.
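The rapid alternation in the Treatment A vs. Treatment B example can be sketched by tagging each session with its condition and summarizing each condition separately. All aggression counts are invented for illustration.

```python
# Alternating treatments sketch: conditions alternate every session,
# regardless of the level of responding. Counts are hypothetical.
import itertools

aggression_per_session = [12, 11, 13, 6, 12, 4, 11, 3]  # invented counts
conditions = list(itertools.islice(
    itertools.cycle(["A", "B"]), len(aggression_per_session)))

by_condition = {"A": [], "B": []}
for condition, count in zip(conditions, aggression_per_session):
    by_condition[condition].append(count)

for condition, counts in by_condition.items():
    print(f"Treatment {condition}: mean aggression = {sum(counts) / len(counts):.1f}")
```

Because the two data paths are built under rapidly alternating conditions in the same sessions window, a visible separation between them (here, a lower mean under B) supports choosing B without ever withdrawing treatment.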
Changing Criterion Design
Definition: An experimental design in which an initial baseline phase is followed by a series of treatment phases consisting of successively and gradually changing criteria for reinforcement or punishment.
Example in clinical context: A behavior analyst wants to assess how a client’s behavior changes when they provide reinforcement for every five responses per minute, then ten responses per minute and so on. The criterion increases as the client demonstrates stable states of responding.
Example in supervision/consultation context: A behavior analyst is consulting with a client who wants to decrease the number of cigarettes they smoke per day, with the goal of quitting. The client currently smokes 16 cigarettes per day. The behavior analyst sets the first criterion for earning reinforcement at 13 cigarettes per day, then lowers it to 10, seven, five, and finally one. The criteria decrease as the client demonstrates stable states of responding.
Why it matters: Changing criterion designs can only be used when the behavior is already in the learner’s repertoire. They do not require reversal or withdrawal of treatment. Changing criterion designs do not allow for comparison between treatments. They also involve prediction, verification, and replication. Experimental control is demonstrated by the extent to which the level of responding changes to conform to each new criterion.
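The cigarette-reduction example can be sketched as a criterion schedule that steps down only after responding stabilizes at or below the current criterion. The daily counts and the "two consecutive successful days" advancement rule are invented for illustration.

```python
# Changing criterion sketch with hypothetical daily cigarette counts.
criteria = [13, 10, 7, 5, 1]  # successive criteria (cigarettes per day)
daily_counts = [13, 12, 13, 10, 9, 10, 7, 6, 5, 5, 4, 1, 1]  # invented data

met = []  # (criterion in effect, count, reinforcement earned?)
i = 0
for count in daily_counts:
    criterion = criteria[i]
    earned = count <= criterion  # reinforcement only at/below the criterion
    met.append((criterion, count, earned))
    # Hypothetical rule: advance after two consecutive successful days
    # under the same criterion.
    if earned and len(met) >= 2 and met[-2][2] and met[-2][0] == criterion:
        i = min(i + 1, len(criteria) - 1)

for criterion, count, earned in met:
    status = "earned" if earned else "withheld"
    print(f"criterion {criterion:>2}: smoked {count:>2} -> reinforcement {status}")
```

Responding tracking each stepped-down criterion, rather than drifting independently of it, is the pattern that demonstrates experimental control in this design.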
D-6: Describe rationales for conducting comparative, component and parametric analyses.
Target Terms: Comparative Analysis, Component Analysis, Parametric Analysis
Comparative Analysis
Definition: A type of analysis used to compare two different types of treatments, such as in a multielement/alternating treatments design.
Component Analysis
Definition: An experiment designed to identify which part of the treatment package is responsible for behavior change.
Drop-Out Component Analysis: An experiment where the investigator presents the treatment package and then systematically removes components.
Example in clinical context: A behavior analyst is using an intensive feeding intervention with a client. They present all the components of the intensive feeding intervention and begin to eliminate components to determine whether there is any effect on the client’s behavior (e.g., removing response cost for packing or expulsion of food while keeping the forms of positive reinforcement).
Add-In Component Analysis: An experiment where the investigator assesses components of a treatment package individually or in combination before the treatment package is delivered.
Example in supervision/consultation context: A behavior analyst presents a treatment package to a supervisor that is used to target employee productivity. The behavior analyst and supervisor review each component and choose to add in additional components based on the needs of the organization.
Why it matters: Component analyses may be used in an effort to change behavior when numerous interventions have failed.
Parametric Analysis
Definition: An experiment designed to compare the effects of a range of values of the independent variable.
Example in clinical context: A psychiatrist and a behavior analyst are examining the effects of clonidine on a patient’s attentiveness and hyperactivity. They evaluate the effects of a 0.10 mg dosage and then a 0.05 mg dosage, and find that improvements in the patient’s attentiveness and hyperactivity are sustained at 0.05 mg.
Example in supervision/consultation context: A supervisor consults with a behavior analyst because they want to evaluate how long a lunch break they should provide to their employees to increase afternoon work production. The behavior analyst provides employees with a 90-minute break and then a 60-minute break, and finds that afternoon production increased when the employees had the 90-minute lunch break rather than the 60-minute lunch break.
Why it matters: Parametric analyses examine a range of values of an independent variable to determine which of those values is most effective for the intervention.
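The lunch-break example can be sketched as a comparison of outcomes across values of the independent variable. The production scores below are invented for illustration; only the break lengths come from the example.

```python
# Parametric analysis sketch: the same intervention (a lunch break) is
# tested at different values, and the most effective value is identified.
afternoon_output = {   # break length (minutes) -> hypothetical production scores
    60: [41, 44, 43],
    90: [55, 57, 54],
}

def mean(values):
    return sum(values) / len(values)

best = max(afternoon_output, key=lambda minutes: mean(afternoon_output[minutes]))
print(f"Most effective break length: {best} minutes")
```

Note that the analysis varies only the value of a single independent variable; comparing two different interventions would instead be a comparative analysis.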