WTF are Schedules of Reinforcement

Trying to grasp which schedule is which, when to use one versus the other, or simply trying to classify which one is at play in an example? Let the bitches explain schedules of reinforcement, SNABA style!

Reinforcement: when a stimulus is added (#positive) or removed (#negative) contingent on a behavior, and that change increases the future probability of that response in similar situations.

Fixed Interval: this schedule requires that a fixed amount of time passes, and the first completed response after that time has passed produces a reinforcer. A post-reinforcement pause is often noted when looking at the consistency of responding in this schedule.

Real World Example
Do you guys still play Candy Crush? Well I do… and at some point we all have. Remember when you ran out of lives and then you had to wait 15 entire minutes before you got another one!? Like, who even set that duration? Anyway, your first response of opening the app after that duration had passed produced reinforcement. If you opened it any sooner, you still couldn’t play.

Clinical Example
A classroom teacher who also has training in ABA is growing tired of giving vocal reminders to a specific student that they can't always be called on, even if they know the correct answer. The teacher decides to implement a fixed interval schedule of 5 minutes. She explains to the student that after the 5-minute interval has passed, she will call on her the first time she raises her hand, and then the interval will start over. The intervention was so successful that the teacher gradually increased the interval to continue giving other children opportunities to participate.
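
If it helps to see the "first response after the interval" rule written out, here's a minimal sketch of the contingency the teacher is following. The function name and the 5-minute value are just illustrative, not from any real curriculum software; the point is that time passing alone earns nothing until a response actually occurs.

```python
def fi_reinforce(seconds_since_last_reinforcer, interval_seconds=300):
    """Fixed interval rule: the first response AFTER the interval has
    elapsed produces the reinforcer; earlier responses produce nothing."""
    return seconds_since_last_reinforcer >= interval_seconds

# Hand raised 2 minutes into the 5-minute interval: not called on yet.
print(fi_reinforce(120))  # False
# Hand raised 6 minutes in: called on, and the interval starts over.
print(fi_reinforce(360))  # True
```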

Fixed Ratio: this schedule requires a fixed number of responses to occur for a reinforcer. A post-reinforcement pause is often noted when looking at the consistency of responding in this schedule.

Real World Example
Nicole owns her own pottery business. She only makes what her clients order and gets paid by the piece.

Clinical Example
Marcy, a BCBA, introduces a token system to a new learner. She explains to the RBT she works with that initially they will deliver a token for each individual response. They start with a single token on the board, and once it is earned the learner gets a reinforcer break. Over time, Marcy increases the number of tokens earned during each work period, eventually going up to 6 tokens. At 6 tokens this is still a fixed schedule; only the requirement has changed, from 1 response to 6 responses before a reinforcer break is delivered.

Variable Interval: this schedule requires a variable (on average) amount of time to pass, and the first completed response after that time has passed produces a reinforcer. This schedule typically produces slow, steady, stable rates of responding.

Real World Example
Have you ever told a joke at the wrong time? Well, joke telling is a prime example of a variable interval schedule at work in social skills. In some situations you can tell a joke immediately and it will contact reinforcement (people laugh lol), but at other times you have to read the room and judge how much time should pass before you tell it. What makes it hard is that the amount of time that passes is never the same! In a more clinical setting you would use some sort of generator to ensure that the average interval is maintained, but this is the real world we're talking about. This isn't a perfect example, but you get the gist.

Clinical Example
Stacey wants to work on a client's on-task behavior. She wants to do it right though, and not just take data on staring at a piece of paper. So she creates a beautiful operational definition of on-task behavior and uses a computer system to generate a variable interval schedule of 5 minutes. She reinforces the first on-task response after each interval has passed by giving him a bit of attention and social praise. By the end of the month, her student's on-task behavior had increased by 40%.
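
If you're curious what a "computer system" like Stacey's might be doing under the hood, here's a minimal sketch, with purely illustrative names and numbers, of one way to generate intervals that vary but average out to 5 minutes for a VI 5 schedule.

```python
import random

def generate_vi_intervals(mean_minutes=5, n_intervals=10, seed=None):
    """Generate interval lengths (in minutes) whose long-run average
    matches the target mean, for a variable interval (VI) schedule."""
    rng = random.Random(seed)
    # Each wait is drawn from an exponential distribution with the target
    # mean, so across many intervals the average works out to ~5 minutes.
    return [rng.expovariate(1 / mean_minutes) for _ in range(n_intervals)]

intervals = generate_vi_intervals(mean_minutes=5, n_intervals=10, seed=42)
print([round(i, 1) for i in intervals])
print("average:", round(sum(intervals) / len(intervals), 1), "minutes")
```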

Variable Ratio: this schedule requires a variable (on average) number of responses to occur for a reinforcer. These schedules often produce consistent, steady rates of responding because learners do not know which response will contact reinforcement.

Real World Example
Your grandma sends you to the gas station to purchase a daily lottery ticket even though she knows the chances of winning are slim. There was that one time she won $50 that she still recalls, and a handful of occasions where she won $20 or less. Still, she doesn't know which day will be the day she hits the MegaMillions.

Clinical Example
Margaret, a clinical BCBA, has just introduced a new program to one of her clients. He is 4.5 years old and has recently mastered the simple game Memory independently. Margaret's new goal is for the client to play simple games such as Candy Land, Uno, and Don't Break the Ice. At first, when a game is introduced, the therapists take it easy on the client and contrive the game so that he wins every time. Margaret gives the therapists feedback that this isn't real life and that they need to actually try to win. The client sometimes contacts reinforcement through praise and high-fives when he wins a game, and he learns a functional response for losing: "Can we play again?"
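
The same "on average" idea from the VI sketch works here too, except the thing that varies is the number of responses rather than the amount of time. A quick, purely illustrative sketch of how a VR 4 requirement could be programmed:

```python
import random

def generate_vr_requirements(average=4, n=8, seed=1):
    """Variable ratio rule: the number of responses required before the
    next reinforcer varies, but its long-run average hits the target."""
    rng = random.Random(seed)
    # Draw each requirement uniformly between 1 and (2 * average - 1),
    # so the expected requirement equals the target average.
    return [rng.randint(1, 2 * average - 1) for _ in range(n)]

reqs = generate_vr_requirements()
print(reqs, "-> sample mean:", sum(reqs) / len(reqs))
```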

Schedules of reinforcement are covered in Task List 5: B-5, G-1, G-14, G-22.

Take our Mini Mock Section G to test your knowledge!

Mini Mock Section G: 5th Edition
