Operant Conditioning: How Behavior Is Shaped by Its Own Consequences


Operant behavior is characterized by actions that have consequences. Flick a light switch and the consequence is illumination. Saw on a piece of wood and the consequence is two shorter pieces of wood. Tell a joke and the consequence is (sometimes) the laughter of others. Work hard at a job all week and the consequence is a paycheck. In each of these cases the specified action "operates" on the environment, changing it in some way.

It was B. F. Skinner (1904-1990) who applied the term operant to the kind of behaviors described above. He saw that operant behavior is both acquired and shaped by experience. Consequently, he identified it as a kind of learning. He also categorized it as a form of conditioning because he believed that such concepts as consciousness and thinking are not necessary to explain much (perhaps most) operant behavior.

Skinner, long associated with Harvard, invented a device called the operant conditioning apparatus; its informal name is the Skinner box. Think of the apparatus as something like a candy machine for animals such as rats and pigeons. A rat, for example, learns that it can obtain a pellet of food when it presses a lever. If the pellet appears each time the lever is pressed, the rate of lever pressing will increase. Lever pressing is operant behavior (or simply an operant). The pellet is a reinforcer. A reinforcer is a stimulus that has the effect of increasing the frequency of a given category of behavior (in this case, lever pressing).
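The pellet-for-press contingency can be sketched as a toy simulation. This is purely illustrative: the function names, the starting press probability, and the update rule are assumptions made for the sketch, not anything Skinner used. The point it shows is the definition above: a reinforcer is whatever increases the frequency of the operant.

```python
import random

random.seed(42)

def run_trials(n_trials, reinforce, p_press=0.1, lr=0.05):
    """Toy operant model: each reinforced lever press nudges the
    probability of pressing upward; unreinforced presses leave it alone."""
    for _ in range(n_trials):
        pressed = random.random() < p_press
        if pressed and reinforce:
            p_press = min(1.0, p_press + lr * (1.0 - p_press))
    return p_press

with_pellets = run_trials(500, reinforce=True)      # pellet follows every press
without_pellets = run_trials(500, reinforce=False)  # the lever is dead
print(with_pellets > without_pellets)  # the reinforced operant grows more frequent
```

With reinforcement, pressing becomes more and more probable; without it, the press rate never moves off its starting value.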

The concept of reinforcement plays a big part in Skinner's way of looking at behavior. Consequently, it is important to expand on the concept. Note in the above definition that a reinforcer is understood in terms of its actual effects. It is to be distinguished from a reward. A reward is perceived as valuable by the individual giving the reward, but it may not be valued by the receiving organism. A stimulus is a reinforcer only if it has some sort of payoff value to the receiving organism. By definition, a reinforcer has an impact on operant behavior.

Its function is always to increase the frequency of a class of operant behaviors.

One important way to categorize reinforcers is to refer to them as positive and negative. A positive reinforcer has value for the organism. Food when you are hungry, water when you are thirsty, and money when you're strapped for cash all provide examples of positive reinforcers.

A negative reinforcer has no value for the organism. It does injury or is noxious in some way. A hot room, an offensive person, and a dangerous situation all provide examples of negative reinforcers. The organism tends to either escape from or avoid such reinforcers. The operant behavior takes the subject away from the reinforcer. Turning on the air conditioner when a room is hot provides an example of operant behavior designed to escape from a negative reinforcer. Note that the effect of the negative reinforcer on behavior is still to increase the frequency of a class of operants. You are more likely to turn on an air conditioner tomorrow if you have obtained relief by doing so today.

It is also important to note that a negative reinforcer is not punishment. In the case of punishment, an operant is followed by an adverse stimulus. For example, a child sasses a parent and then gets slapped. Getting slapped comes after the child's behavior. In the case of a negative reinforcer, the adverse stimulus is first in time.

Then the operant behavior of escape or avoidance follows.

Another important way to classify reinforcers is to designate them as having either a primary or a secondary quality. A primary reinforcer has intrinsic value for the organism. No learning is required for the worth of the reinforcer to exist.

Food when you are hungry and water when you are thirsty are not only positive reinforcers, as indicated above, they are also primary reinforcers.

A secondary reinforcer has acquired value for the organism. Learning is required. Money when you’re strapped for cash is a positive reinforcer, as indicated above, but it is a secondary one. You have to learn that cash has value. An infant does not value cash, but does value milk. A medal, a diploma, and a trophy all provide examples of secondary reinforcers.

One of the important phenomena associated with operant conditioning is extinction. Earlier, we discussed how extinction takes place when the conditioned stimulus is presented a number of times without the unconditioned stimulus. Extinction also takes place when the frequency of a category of operant responses declines. If, using the operant conditioning apparatus, reinforcement is withheld from a rat, then lever pressing for food will decline and eventually diminish to nearly zero. The organism has learned to give up a given operant because it no longer brings the reinforcer.
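The extinction process described above can be added to the same kind of toy sketch. Again, the decay rule and parameters are illustrative assumptions, not an empirical model: once the pellet is withheld, each unreinforced press weakens the habit a little, and responding eventually falls toward zero.

```python
import random

random.seed(1)

def extinguish(p_press, n_trials, decay=0.05):
    """Toy extinction model: reinforcement is withheld, so every
    unreinforced press weakens the press habit slightly."""
    for _ in range(n_trials):
        if random.random() < p_press:   # the animal presses the lever
            p_press *= (1.0 - decay)    # no pellet follows; the operant weakens
    return p_press

print(extinguish(0.9, 2000) < 0.05)  # responding has declined to nearly zero
```

Note that the habit decays only on trials where the animal actually presses and goes unrewarded, which matches the text: the organism gives up the operant because it no longer brings the reinforcer.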

Both animal and human research on extinction suggests that it is a better way to "break" bad habits than punishment. If a way can be found to eliminate the reinforcer (or reinforcers) linked to a behavior pattern, the behavior is likely to be given up. Punishment tends to temporarily suppress the appearance of an operant, but extinction has not necessarily taken place. Consequently, the unwanted operant has "gone underground" and may in time surface as an unpleasant surprise. Also, punishment is frustrating to organisms and tends to make them more aggressive.

Another important phenomenon associated with operant conditioning is the partial reinforcement effect, the tendency of operant behavior acquired under conditions of partial reinforcement to possess greater resistance to extinction than behavior acquired under conditions of continuous reinforcement. Let's say that rat 1 is reinforced every time it presses a lever; this rat is receiving continuous reinforcement. Rat 2 is reinforced every other time it presses a lever; this rat is receiving partial reinforcement. Both rats will eventually acquire the lever-pressing response. Now assume that reinforcement is withheld for both rats. The rat that will, in most cases, display greater resistance to extinction is rat 2. Skinner was surprised by this result. If reinforcement is a kind of strengthening of a habit, then rat 1, receiving more reinforcement, should have the more well-established habit.

And it should demonstrate greater resistance to extinction than rat 2.

Nonetheless, the partial reinforcement effect is a reality, and Skinner became interested in it. He and his coworkers used many schedules of reinforcement to study it, and in general the effect holds for both animals and human beings. Random reinforcement is determined by chance and is, consequently, unpredictable. If behavior is acquired with random reinforcement, the partial reinforcement effect is exaggerated. Skinner was fond of pointing out that random payoffs are associated with gambling. This explains to some extent why a well-established gambling habit is hard to break.
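One intuition for the two-rat comparison above can be made concrete with a deterministic toy model of my own devising (not Skinner's account): suppose the habit weakens in proportion to the surprise of extinction, that is, the gap between the payoff rate the rat came to expect and the zero payoff it now receives. The continuously reinforced rat expects a pellet on every press, so extinction is maximally surprising and the habit decays fast; the partially reinforced rat expects only occasional pellets, so the change is less noticeable.

```python
def trials_to_extinction(expected_rate, strength=1.0, step=0.02, threshold=0.1):
    """Toy model: during extinction the habit weakens in proportion to the
    surprise, i.e. the gap between the expected payoff rate and the zero
    reinforcement now received. Returns presses before responding stops."""
    presses = 0
    while strength > threshold:
        strength -= step * expected_rate  # bigger surprise, faster decay
        presses += 1
    return presses

rat_1 = trials_to_extinction(1.0)  # trained on continuous reinforcement
rat_2 = trials_to_extinction(0.5)  # trained on partial reinforcement
print(rat_2 > rat_1)  # True: the partial-schedule rat keeps pressing longer
```

Under these assumed numbers rat 2 makes twice as many unreinforced presses before quitting, which is the qualitative shape of the partial reinforcement effect.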

Assume that an instrumental conditioning apparatus contains a light bulb. When the light is on, pressing the lever pays off. When the light is off, pressing the lever fails to bring forth a reinforcer. Under these conditions, a trained experimental animal will tend to display a high rate of lever pressing when the light is on and ignore the lever when the light is off. The light is called a discriminative stimulus, meaning a stimulus that allows the organism to tell the difference between a situation that is potentially reinforcing and one that is not. Cues used to train animals, such as whistles and hand signals, are discriminative stimuli.
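The light-on/light-off arrangement combines the earlier reinforcement and extinction sketches into one toy discrimination model (again, the probabilities and update rules are assumptions for illustration only): presses pay off only when the light is on, so the press tendency grows under the light and extinguishes in the dark.

```python
import random

random.seed(7)

def train_discrimination(n_trials=4000, lr=0.05):
    """Toy model: the animal keeps separate press tendencies for
    light-on and light-off. Presses pay off only when the light is on,
    so that tendency grows while pressing in the dark extinguishes."""
    p = {"on": 0.1, "off": 0.1}
    for _ in range(n_trials):
        state = random.choice(["on", "off"])
        if random.random() < p[state]:              # the animal presses
            if state == "on":                       # reinforcer delivered
                p[state] = min(1.0, p[state] + lr * (1.0 - p[state]))
            else:                                   # reinforcer withheld
                p[state] *= (1.0 - lr)
    return p

p = train_discrimination()
print(p["on"] > p["off"])  # pressing is frequent under the light, rare without it
```

The same contingencies that build the habit in one situation extinguish it in the other, which is exactly what lets the light function as a discriminative stimulus.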

Skinner notes that discriminative stimuli control human behavior, too. A factory whistle communicating to workers that it’s time for lunch, a bell’s ring for a prizefighter, a school bell’s ring for a child, and a traffic light for a driver are all discriminative stimuli. Stimuli can be more subtle than these examples. A lover’s facial expression or tone of voice may communicate a readiness or lack of readiness to respond to amorous advances.

Skinner asserts that in real life both discriminative stimuli and reinforcers automatically control much of our behavior.
