To what extent are schedules of reinforcement more than just rules governing which responses will be reinforced? Illustrate your answer with basic and applied research examples.
In this essay I illustrate the role of schedules of reinforcement; basic and applied research examples provide evidence that schedules of reinforcement are more than just rules governing which responses will be reinforced.
A schedule of reinforcement is defined as a rule that describes a contingency of reinforcement, that is, the environmental arrangements that determine the conditions by which behaviors will produce reinforcement (Cooper, Heron, & Heward, 2007). There are two basic types of reinforcement schedule: a continuous reinforcement schedule (CRF schedule), in which each occurrence of a response is reinforced, and an intermittent reinforcement schedule, in which not every occurrence of the response is reinforced; rather, responses are reinforced only occasionally or intermittently (Miltenberger, 2008).
Ferster and Skinner (1957) studied various types of intermittent reinforcement and described four basic schedules: fixed ratio, variable ratio, fixed interval, and variable interval. In a fixed ratio (FR) schedule, a specific or fixed number of responses must occur before the reinforcer is delivered. In a variable ratio (VR) schedule, delivery of a reinforcer is likewise based on the number of responses that occur, but the number of responses needed for reinforcement varies each time around an average number. In a fixed interval (FI) schedule, the reinforcer is delivered for the first response that occurs after a fixed interval of time has elapsed, and the interval stays the same each time. In a variable interval (VI) schedule, the reinforcer is again delivered for the first response that occurs after an interval of time has elapsed, but the length of that interval varies around an average value (Miltenberger, 2008).
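Because each of these schedules is simply a rule for when a response produces reinforcement, their contingencies can be sketched as short decision rules. The Python fragment below is only an illustration; the function names and the example values (a ratio of 10 responses, an interval of 30 seconds) are arbitrary assumptions rather than values taken from any cited study.

import random

# Fixed ratio (FR 10): reinforce every 10th response.
def fr_reinforce(response_count, ratio=10):
    return response_count > 0 and response_count % ratio == 0

# Variable ratio (VR 10): the required number of responses varies around an average of 10.
def next_vr_requirement(mean_ratio=10):
    return random.randint(1, 2 * mean_ratio - 1)  # averages to roughly mean_ratio

# Fixed interval (FI 30 s): reinforce the first response made after 30 s have elapsed.
def fi_reinforce(seconds_since_last_reinforcer, responded, interval=30):
    return responded and seconds_since_last_reinforcer >= interval

# Variable interval (VI 30 s): as FI, but the interval varies around an average of 30 s.
def next_vi_interval(mean_interval=30):
    return random.uniform(1, 2 * mean_interval - 1)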
There are also some variations on the basic intermittent schedules of reinforcement: a) schedules of differential reinforcement of rates of responding and b) progressive schedules of reinforcement. Differential reinforcement of rates of responding is a variation of ratio schedules that provides an intervention for behavior problems associated with the rate of response; delivery of the reinforcer is contingent on responses occurring at a rate either higher or lower than some predetermined criterion (Cooper et al., 2007). Reinforcement of responding above a predetermined criterion is called differential reinforcement of high rates (DRH); when responses are reinforced only if they fall below the criterion, the schedule provides differential reinforcement of low rates (DRL). There is also the differential reinforcement of diminishing rates (DRD) schedule, which provides reinforcement at the end of a predetermined time interval when the number of responses is less than a criterion that is gradually decreased across time intervals based on the individual's performance (Cooper et al., 2007).
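As a rough sketch of how these rate-based variations differ (the interval length and criterion values below are invented purely for illustration), each one reduces to comparing an observed response count against a criterion for the interval:

# Hypothetical criteria for a 5-minute observation interval; illustrative values only.
def drh_met(responses_in_interval, criterion=20):
    # DRH: reinforce only when responding meets or exceeds the high-rate criterion.
    return responses_in_interval >= criterion

def drl_met(responses_in_interval, criterion=3):
    # DRL: reinforce only when responding stays at or below the low-rate criterion.
    return responses_in_interval <= criterion

def next_drd_criterion(current_criterion, step=1):
    # DRD: the criterion itself is gradually lowered across successive intervals
    # as the individual's performance improves.
    return max(0, current_criterion - step)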
Progressive schedules of reinforcement, by contrast, systematically thin each successive reinforcement opportunity independent of the participant's behavior (Cooper et al., 2007). Progressive ratio (PR) and progressive interval (PI) schedules of reinforcement change schedule requirements using a) arithmetic progressions, which add a constant amount to each successive ratio or interval, or b) geometric progressions, which add a constant proportion of the preceding ratio or interval (Lattal & Neef, 1996).
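A short worked example makes the two progressions concrete; the starting ratio of 5, step of 5, and proportion of 20% below are arbitrary choices for the illustration, not values from the literature cited.

def arithmetic_pr_requirements(start=5, step=5, n=6):
    # Arithmetic progression: each successive ratio adds a constant amount,
    # e.g., 5, 10, 15, 20, 25, 30.
    values, value = [], start
    for _ in range(n):
        values.append(value)
        value += step
    return values

def geometric_pr_requirements(start=5, proportion=0.2, n=6):
    # Geometric progression: each successive ratio adds a constant proportion
    # of the preceding one, e.g., 5, 6, 7.2, 8.64, ...
    values, value = [], start
    for _ in range(n):
        values.append(round(value, 2))
        value += value * proportion
    return values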
Additionally, applied behavior analysts combine the elements of continuous reinforcement, the four basic intermittent schedules, differential reinforcement of various rates of responding, and extinction to form compound schedules of reinforcement. Concurrent schedules of reinforcement occur when a) two or more contingencies of reinforcement b) operate independently and simultaneously c) for two or more behaviors (Cooper et al., 2007). Discriminative schedules of reinforcement consist of a) multiple schedules, which present two or more basic schedules of reinforcement in an alternating, usually random, sequence; the basic schedules occur successively and independently, and a discriminative stimulus is correlated with each basic schedule and is present for as long as that schedule is in effect; and b) chained schedules, which, like multiple schedules, have two or more basic schedule requirements that occur successively and have a discriminative stimulus correlated with each independent schedule, but in which the component schedules always occur in a specified order (Cooper et al., 2007).
Nondiscriminative schedules consist of a) mixed schedules, which use a procedure identical to multiple schedules but without the discriminative stimuli, and b) tandem schedules, which are identical to chained schedules but likewise lack the discriminative stimuli (Cooper et al., 2007).
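The difference between the discriminative and nondiscriminative versions is essentially whether a signal accompanies the component schedule in effect. A minimal sketch, with invented component schedules and stimuli, might look like this:

import random

# Two hypothetical components, each paired with a discriminative stimulus.
components = [("FR 5", "red light"), ("VI 30 s", "green light")]

def present_component(discriminated=True):
    # Multiple schedule: components alternate (here, randomly) and the paired
    # stimulus is presented while that component is in effect.
    # Mixed schedule: the same alternation, but no stimulus signals the component.
    schedule, stimulus = random.choice(components)
    signal = stimulus if discriminated else None
    return schedule, signal

print(present_component(discriminated=True))   # e.g., ('FR 5', 'red light')
print(present_component(discriminated=False))  # e.g., ('VI 30 s', None)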
Basic and applied research examples drawn from all types of reinforcement schedules illustrate this role: schedules of reinforcement play a major part in a behavior change program, as well as in the acquisition and maintenance of behavior. In the study of Kirby and Shields (1972), changes in academic response rate and accuracy were measured systematically through a direct approach to academic performance. The study was designed to measure the combined effects of an adjusting fixed-ratio schedule of immediate praise and immediate correctness feedback on the arithmetic response rate of a seventh-grade student, and to measure possible collateral changes in study behavior.
The study was divided into four phases: baseline, treatment 1, reversal, and treatment 2. Using an adjusting fixed-ratio schedule, reinforcement was initially delivered for every two problems completed; the experimenter then gradually increased the unit of work, that is, the number of problems that had to be completed before reinforcement was delivered. The results demonstrated the effectiveness of the fixed-ratio schedule of praise and immediate correctness feedback in increasing the subject's arithmetic response rate and associated attending behavior. When the student's rate of correct problem solving was increased through systematic reinforcement, incompatible non-attending behaviors decreased. It was also noted that during reversal, when all praise and immediate correctness feedback was withheld, the subject maintained a much higher level of arithmetic achievement and attending behavior than before treatment 1. The adjusting ratio schedule allows frequent contact with the student during the early phases, when small units of work are required, yet demands no extra effort during later phases when large units of work are assigned.
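The adjusting feature of such a schedule can be summarized in a couple of lines; the increment of two problems per step below is an assumption made only to illustrate the gradual increase in the work requirement.

def adjusting_fr_requirements(start=2, increment=2, steps=5):
    # Reinforcement is first given for every 2 completed problems; the required
    # number of problems then grows step by step, e.g., [2, 4, 6, 8, 10].
    return [start + increment * i for i in range(steps)]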
In the study of De Luca and Holborn (1992), the effects of a variable-ratio schedule of reinforcement on pedaling a stationary exercise bicycle were examined. A changing-criterion design was used in which each successive criterion was increased by approximately 15% over the mean performance rate in the previous phase. The participants were three obese and three nonobese boys. The experimental phases were: baseline; a first VR subphase (the VR schedule of reinforcement was introduced after a stable baseline had been achieved); a second VR subphase (introduced once stability had been achieved in the first subphase); a third VR subphase (introduced once stability had been achieved in the second subphase); a return to baseline; and a return to the third VR subphase.
All participants showed systematic increases in their rate of pedaling with each VR value; that is, the larger the variable ratio, the higher the rate of response. The results indicated that the rate of exercise can be increased using a VR schedule of reinforcement, and the introduction of the initial VR subphase of the changing-criterion design produced marked increases in the rate of exercise for all subjects.
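The changing-criterion calculation itself is simple arithmetic; in the sketch below, the mean rate of 60 pedal revolutions per minute is a hypothetical figure used only to show how a roughly 15% increase would set the next criterion.

def next_criterion(mean_rate_previous_phase, increase=0.15):
    # Each new criterion is set about 15% above the mean rate observed
    # in the previous phase.
    return round(mean_rate_previous_phase * (1 + increase), 1)

print(next_criterion(60))  # 69.0, i.e., the next phase would require about 69 per minute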
Rasmussen and O'Neill (2006) examined the effects of fixed-time (FT) reinforcement schedules on the problem behavior of students with emotional-behavioral disorders in a clinical day-treatment classroom setting. The participants were three elementary-aged students, and the dependent variable for all three was the frequency of verbal disruptions. The study employed an ABAB withdrawal design, alternating between baseline and FT conditions (in which verbal praise and pats on the arm were provided), with a final brief schedule-thinning phase for each participant.
All participants exhibited variable but relatively high rates of verbal disruption during baseline. Implementation of the FT schedules resulted in immediate, substantial, and stable decreases for all participants. The results demonstrate the use of FT schedules and their implementation in a day-treatment classroom setting with children with clinically diagnosed emotional or behavioral disorders. The procedures were effective in reducing disruptive verbal behavior, and these reductions were maintained while the FT schedules underwent initial thinning.
The effectiveness of fixed-time schedules has also been evaluated through data on both appropriate and inappropriate responses. In the study of Roane, Fisher, and Sgro (2001), fixed-time schedules were used not only to reduce destructive behavior but also to increase adaptive behavior. The participant was a 12-year-old girl who had been diagnosed with pervasive developmental disorder and traumatic brain injury. There were two conditions, a control condition and an FT condition; with the exception of the FT schedule of reinforcement, the FT condition was identical to the control condition. During the FT condition, increases in two adaptive responses were observed, even though neither response was reinforced through direct contingencies. Similarly, decreases in destructive behavior were obtained under the FT schedule. The results suggest that, in addition to suppressing inappropriate behavior, FT schedules may also increase and stabilize adaptive behavior.
Austin and Soeda (2008) validated the use of fixed-time reinforcer delivery with a typically developing population. Fixed-time teacher attention was used to decrease off-task behavior in two third-grade boys. An ABAB design was used with two alternating phases: baseline (the teacher interacted with the boys in her usual manner) and noncontingent reinforcement (NCR; the teacher provided attention on an FT schedule). The findings indicated that NCR was an effective strategy for reducing the off-task behaviors of both boys, as immediate and sustained reductions in the percentage of intervals with off-task behavior were observed.
Van Camp, Lerman, Kelley, Contrucci, and Vorndran (2000) evaluated the efficacy of noncontingent reinforcement delivered on variable-time (VT) schedules in reducing problem behavior maintained by social consequences, comparing the effects of VT and FT reinforcement schedules with two individuals who had been diagnosed with moderate to severe mental retardation. Baseline and treatment conditions, with FT and VT sessions, were conducted with both participants. Although previous studies on the use of NCR as treatment for problem behavior had primarily examined FT schedules, the results of this study indicated that VT schedules were as effective as FT schedules in reducing problem behavior.
Carr, Kellum, and Chong (2001) examined the effects of fixed-time and variable-time schedules on the responding of two adults with mental retardation. Multielement and reversal designs were used to compare the effects of FT and VT schedules on responses previously maintained on variable-ratio reinforcement schedules. The target behavior for the first participant was defined as making a pencil mark on his name and placing the paper into a receptacle; the target behavior for the second participant was defined as picking up a paper clip and dropping it into a receptacle. The experimental phases were: baseline, FR 1 reinforcement, VR 3 reinforcement, FT, and VT. The results showed that FT and VT schedules were equally effective in reducing the target behaviors.
Wright and Vollmer (2002) used a treatment package that involved an adjusting differential-reinforcement-of-low-rates (DRL) schedule, response blocking, and prompts in order to reduce rapid eating. The participant was a 17-year-old girl who had been diagnosed with profound mental retardation. The experimental phases consisted of a baseline and a treatment condition, in which the adjusting DRL procedure was introduced along with blocking and prompts. The DRL intervals were determined by calculating the average interresponse time (IRT) from previous sessions. The results showed that the treatment package was effective in increasing the IRTs between each attempted bite of food. The treatment also initially produced negative side effects (increases in the levels of self-injurious behavior and tantrums); however, treatment continued despite these side effects, which eventually decreased.
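The adjusting DRL interval in this kind of package is simply the mean of previously observed interresponse times; the IRT values in the sketch below are hypothetical and serve only to show the calculation.

def adjusting_drl_interval(interresponse_times):
    # Set the DRL interval to the average interresponse time (IRT)
    # recorded in previous sessions.
    return sum(interresponse_times) / len(interresponse_times)

# Example: IRTs of 4, 5, and 6 seconds between bites yield a 5-second DRL interval,
# so in this sketch a bite attempted before 5 s have elapsed would be blocked.
print(adjusting_drl_interval([4, 5, 6]))  # 5.0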
In the study of Deitz and Repp (1973), a differential reinforcement of diminishing rates (DRD) schedule was used in order to decrease classroom misbehavior. In the procedure that was followed, reinforcement was delivered when responding was below a set limit for a period of time, rather than when a response followed a specified period of no responding. Three experiments were conducted. In the first experiment, a DRD schedule was implemented to reduce the talking-out behavior of an 11-year-old boy, classified as trainable mentally retarded (TMR), in a special classroom. The second experiment involved the reduction of talk-outs in a group of ten TMR students, also in a special classroom, and the third experiment involved the use of a DRD schedule to reduce the verbal behavior of a group of 15 high school students in a regular class.
The results demonstrated the effectiveness of DRD schedules in reducing classroom disruption, both for individual and for group behavior. In addition, the success with both TMR students and high school students suggests the efficacy of DRD schedules across widely divergent groups. The use of positive reinforcement in this study also points to a nonpunitive method of classroom control.
Roane, Lerman, and Vorndran (2001) examined whether reinforcing stimuli can be differentially effective as response requirements increase, by evaluating responding under increasing schedule requirements via progressive-ratio schedules and behavioral economic analyses. In Experiment 1 (a reinforcer assessment), the participants were four individuals with developmental disabilities who had been referred for the assessment and treatment of severe behavior problems. The findings showed that, for each participant, one stimulus was associated with greater response persistence under increasing schedule requirements. The results also suggested that progressive schedules allow a relatively expeditious examination of shifts in reinforcer preference or value under increasing schedule requirements.
In Experiment 2, the correspondence between responding under progressive schedules and levels of destructive behavior under various reinforcement-based treatments was examined in order to evaluate the utility of the reinforcer assessment. Three interventions were selected: noncontingent reinforcement, differential reinforcement of alternative behavior (DRA), and differential reinforcement of other behavior (DRO). The results indicated that the high-preference stimuli identified via this assessment were more likely to reduce problem behavior or increase adaptive behavior than stimuli identified as less preferred. In summary, the results of this study suggest that stimuli identified as similarly preferred via a commonly used preference assessment were differentially effective under increasing schedule requirements, and that stimuli that were more effective under progressive schedules were more likely to produce decreases in problem behavior maintained by automatic reinforcement.
The influence of concurrent reinforcement schedules on behavior change without the use of extinction was examined by Hoch, McComas, Thomson, and Paone (2002). Two responses were measured in three children with autism: problem behavior maintained by negative reinforcement, and task completion. Moreover, the maintenance of behavior change was evaluated under conditions of increased response requirements and leaner schedules of reinforcement. The results showed immediate and sustained decreases in problem behavior and increases in task completion when task completion produced both negative reinforcement and access to preferred activities, even though problem behavior continued to result in negative reinforcement. The findings demonstrated that concurrent schedules of reinforcement can be arranged to decrease negatively reinforced problem behavior and increase an adaptive alternative response without the use of escape extinction.
Tiger and Hanley (2004) described a multiple-schedule procedure to reduce ill-timed requests, which involved providing children with two distinct continuous signals that were correlated with periods in which teacher attention was either available or unavailable. Cammilleri, Tiger, and Hanley (2008) then assessed the efficacy of a classwide application of this multiple-schedule procedure when implemented by teachers during instructional periods in three elementary classrooms. The results demonstrated the effectiveness of a classwide multiple-schedule procedure when implemented by teachers in a private elementary school classroom.
In conclusion, schedules of reinforcement are not only rules that govern which responses will be reinforced; they are substantial components of a behavior change program. CRF schedules are used during the acquisition of a behavior, when a person is learning the behavior or engaging in it for the first time. Once the person has acquired or learned the behavior, an intermittent reinforcement schedule is used so that the person continues to engage in the behavior, that is, for the maintenance of behavior (Miltenberger, 2008). In this way, schedules of reinforcement help in the progression to naturally occurring reinforcement, which is a major goal for most behavior change programs.
It was shown that schedules of reinforcement can be applied effectively across different settings, behaviors, and populations. They have been used to decrease inappropriate behaviors such as rapid eating (Wright & Vollmer, 2002) and classroom misbehavior (Deitz & Repp, 1973), and to increase appropriate behaviors such as arithmetic response rate and attending behavior (Kirby & Shields, 1972). They have also been applied both with typically developing children (e.g., Austin & Soeda, 2008) and with children with behavior problems (e.g., Rasmussen & O'Neill, 2006). Schedules of reinforcement can have great effects in a behavior change program, but it is also very important to know how and when to apply the most appropriate schedule, or a combination of schedules, to a specific behavior.
References
Austin, J. L., & Soeda, J. M. (2008). Fixed-time teacher attention to decrease off-task behaviors of typically developing third graders. Journal of Applied Behavior Analysis, 41, 279-283.
Cammilleri, A. P., Tiger, J. H., & Hanley, G. P. (2008). Developing stimulus control of young children’s requests to teachers: Classwide applications of multiple schedules. Journal of Applied Behavior Analysis, 41, 299-303.
Carr, J. E., Kellum, K. K., & Chong, I. M. (2001). The reductive effects of noncontingent reinforcement: Fixed-time versus variable-time schedules. Journal of Applied Behavior Analysis, 34, 505-509.
Cooper, J. O., Heron, T. E., & Heward, W. L. (2007). Schedules of reinforcement. In Applied behavior analysis (2nd ed., pp. 304-323). Upper Saddle River, NJ: Pearson.
De Luca, R. V., & Holborn, S. W. (1992). Effects of a variable-ratio reinforcement schedule with changing criteria on exercise in obese and nonobese boys. Journal of Applied Behavior Analysis, 25, 671-679.
Deitz, S. M., & Repp, A. C. (1973). Decreasing classroom misbehavior through the use of DRL schedules of reinforcement. Journal of Applied Behavior Analysis, 6, 457-463.
Hoch, H., McComas, J. J., Thomson, A. L., & Paone, D. (2002). Concurrent reinforcement schedules: Behavior change and maintenance without extinction. Journal of Applied Behavior Analysis, 35, 155-169.
Kirby, F. D., & Shields, F. (1972). Modification of arithmetic response rate and attending behavior in a seventh-grade student. Journal of Applied Behavior Analysis, 5, 79-84.
Lattal, K. A., & Neef, N. A. (1996). Recent reinforcement-schedule research and applied behavior analysis. Journal of Applied Behavior Analysis, 29, 213-220. Cited in Cooper, J. O., Heron, T. E., & Heward, W. L. (2007). Applied behavior analysis (2nd ed.), Schedules of reinforcement (pp. 304-323). Upper Saddle River, NJ: Pearson.
Rasmussen, K., & O’Neill, R. E. (2006). The effects of fixed-time reinforcement schedules on problem behavior of children with emotional and behavioral disorders in a day-treatment classroom setting. Journal of Applied Behavior Analysis, 39, 453-457.
Roane, H. S., Fisher, W. W., & Sgro, G. M. (2001). Effects of a fixed-time schedule on aberrant and adaptive behavior. Journal of Applied Behavior Analysis, 34, 333-336.
Roane, H. S., Lerman, D. C., & Vorndran, C. M. (2001). Assessing reinforcers under progressive schedule requirements. Journal of Applied Behavior Analysis, 34, 145-167.
Tiger, J. H., & Hanley, G. P. (2004). Developing stimulus control of preschooler mands: An analysis of schedule-correlated and contingency-specifying stimuli. Journal of Applied Behavior Analysis, 37, 517-521. Cited in Cammilleri, A. P., Tiger, J. H., & Hanley, G. P. (2008). Developing stimulus control of young children’s requests to teachers: Classwide applications of multiple schedules. Journal of Applied Behavior Analysis, 41, 299-303.
Van Camp, C. M., Lerman, D. C., Kelley, M. E., Contrucci, S. A., & Vorndran, C. M. (2000). Variable-time reinforcement schedules in the treatment of socially maintained problem behavior. Journal of Applied Behavior Analysis, 33, 545-557.
Wright, C. S., & Vollmer, T. R. (2002). Assessment of a treatment package to reduce rapid eating. Journal of Applied Behavior Analysis, 35, 89-93.