Classical conditioning (also Pavlovian conditioning, respondent conditioning, or alpha-conditioning) is a type of associative learning. Ivan Pavlov described conditioned behavior as learned through the pairing of stimuli, which conditions an animal to give a certain response. The simplest form of classical conditioning is reminiscent of what Aristotle called the law of contiguity: "When two things commonly occur together, the appearance of one will bring the other to mind." Classical conditioning originally focused on reflexive or involuntary behavior: any reflex can be conditioned to respond to a formerly neutral stimulus. This view of classical conditioning as restricted to reflexes has been abandoned in recent years, however, and studies of voluntary responses to conditioned stimuli have made important contributions to the field.[1]



The typical paradigm for classical conditioning involves repeatedly pairing a neutral stimulus with an unconditioned stimulus.

An unconditioned response is an automatic response brought forth by an unconditioned stimulus. Such responses require no learning and are usually apparent in all members of a species. The relationship between the unconditioned stimulus and the unconditioned response is known as the unconditioned reflex. The conditioned stimulus begins as a neutral stimulus that elicits no response. However, when the neutral stimulus is repeatedly paired with an unconditioned stimulus, learning occurs: the neutral stimulus, now known as the conditioned stimulus, comes to bring forth the same response, now known as the conditioned response, even without the unconditioned stimulus. Conditioned stimuli are associated psychologically with conditions such as anticipation, satisfaction (both immediate and prolonged), and fear. The relationship between the conditioned stimulus and the conditioned response is known as the conditioned (or conditional) reflex.

In classical conditioning, when the unconditioned stimulus is repeatedly or strongly paired with a neutral stimulus, the neutral stimulus becomes a conditioned stimulus and elicits a conditioned response.

Unconditioned stimulus (U.C.S.)

Unconditioned response (U.C.R.)

Neutral stimulus (N.S.)

Conditioned stimulus (C.S.)

Conditioned response (C.R.)

Food (U.C.S.) => Salivation (U.C.R.) Natural response.

Bell (N.S.) + Food (U.C.S.) => Salivation (U.C.R.) After repeating the pairing a few times.

Bell (C.S.) => Salivation (C.R.) Learning occurs. Notice how the response never changes.

There are two competing theories of how classical conditioning works. The first, stimulus-response (S-R) theory, suggests that an association between the U.C.S. and the C.S. is formed within the brain without involving conscious thought. The second, stimulus-stimulus (S-S) theory, involves a cognitive component, in which the C.S. is associated with the concept of the U.C.S., a subtle but important distinction.

Note too that the timing of the C.S. is critical: conditioning works best when the C.S. immediately precedes the U.C.S. and reliably predicts it. When the order is reversed, with the C.S. following the U.C.S., the procedure is called backward conditioning and is usually ineffective.
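The acquisition and extinction dynamics implied by this pairing procedure are commonly formalized with the Rescorla-Wagner model (listed under "See also" below). The sketch below is illustrative only: the function name and parameter values are assumptions for the example, not taken from the text.

```python
# Rescorla-Wagner update: on each trial, the associative strength V of the
# CS changes in proportion to the prediction error (lambda - V), where
# lambda is the maximum strength the US supports (0 when the US is absent).
# alpha_beta (a combined salience/learning-rate term) is an arbitrary value.

def rescorla_wagner(trials, alpha_beta=0.3, lambda_us=1.0):
    """trials: sequence of booleans, True = CS paired with US, False = CS alone.
    Returns the associative strength of the CS after each trial."""
    v = 0.0
    history = []
    for us_present in trials:
        lam = lambda_us if us_present else 0.0
        v += alpha_beta * (lam - v)  # learning driven by prediction error
        history.append(v)
    return history

# Acquisition (bell + food, 15 trials) followed by extinction (bell alone).
curve = rescorla_wagner([True] * 15 + [False] * 15)
```

The same prediction-error rule captures both phases: strength climbs toward the asymptote while the bell predicts food, then decays back toward zero once the food is withheld.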

Pavlov's experiment

One of Pavlov’s dogs, Pavlov Museum, 2005

The most famous example of classical conditioning involved the salivary conditioning of Pavlov's dogs. Pavlov wanted to find out how conditioned reflexes were acquired. Dogs naturally salivate to food, so Pavlov called the relationship between the unconditioned stimulus (food) and the unconditioned response (salivation) an unconditioned reflex. He predicted that if a particular stimulus in the dog's surroundings was present whenever the dog was given food, that stimulus would become associated with food and come to cause salivation on its own. For example, if footsteps were frequently heard a few seconds before food was given to the dogs, eventually the footsteps alone would elicit salivation. Before being paired with food, the footsteps were a neutral stimulus, since they produced no response in the dogs beyond curiosity; with repetition, the neutral stimulus became a conditioned stimulus. In his initial experiment, Pavlov used bells to call the dogs to their food; after a few repetitions, the bell alone was enough to make the dogs salivate. The ringing of the bell had become associated with the food, so the neutral stimulus had become a conditioned stimulus (the bell) eliciting a conditioned response (salivation). Pavlov referred to this learned relationship, in which food-related behavior is elicited by a stimulus that has been reliably paired with food, as a conditioned (or "conditional") reflex. Pavlov also repeated this experiment with other stimuli, such as a metronome and vanilla, and achieved the same results. It is important to note that when Pavlov presented the neutral stimulus after the unconditioned stimulus, no conditioning took place.

The origins of the two reflexes are different. The food (unconditioned stimulus) [US] causing salivation (unconditioned response) [UR] reflex has its origins in the evolution of the species. The tone (conditioned stimulus) [CS] causing salivation (conditioned response) [CR] reflex has its origins in the experience of the individual animal.

John B. Watson's Little Albert

John B. Watson proposed that emotions (such as fear) can be conditioned in a human being. He believed this could be accomplished by presenting a stimulus that naturally causes a response (an unconditioned stimulus) at the same time as another stimulus that evokes no response at all (a neutral stimulus).

In his experiment, Watson created a fear response in an eleven-month-old child, Albert, whose mother was a wet nurse at the Harriet Lane Home for Invalid Children. Albert's life had been normal: he was a very healthy child, one of the best-developed infants brought to the hospital, and an easy child who rarely cried.

Before starting the experiment, Watson had to establish that the child was not already afraid of the test objects. Watson showed the boy several objects (a rat, a rabbit, a monkey, a dog, cotton wool, and masks with and without hair) and verified that Albert showed no fear toward any of them; these objects were the neutral stimuli of the experiment. After establishing the neutral stimuli, Watson chose an unconditioned stimulus: a loud noise made by banging a hammer on a steel bar. When the loud noise was made, Albert cried and was frightened.

Because there was hesitation about the ethics of continuing with such an experiment, the actual conditioning did not begin until Albert was 11 months old. To condition fear in Albert, Watson and his colleagues presented the rat and the noise at the same time: Albert would reach for the rat, and at that moment the noise would sound. This procedure was performed a total of seven times over the course of one week. After these seven rat-and-noise pairings, the rat was presented to Albert alone. At this point, Albert was stricken with fear and tried to get as far away from the rat as possible.

Continuing the experiment, the researchers wanted to determine whether Albert's fear would transfer to similar objects (a process called generalization). They showed Albert a rabbit, a fur coat, a dog, and Watson's gray hair, and all of these items produced fear in little Albert even though he had not been conditioned to fear them. Five days later, Albert's fear reaction was tested again, and all the items still evoked fear in the infant. Watson then moved Albert to a different room to find out whether the fear would persist in a different setting; if the fear existed only in the experimenting room, the results of the study would be of little use. The fear did carry over into the other room, though with less intensity.

The testing of Albert's fear responses was temporarily halted for thirty-one days because Albert was being adopted; Watson also wanted to see whether Albert's fear would persist over time. After the thirty-one days, Albert was tested once again, and the researchers found that he still feared the objects from the beginning of the experiment.

At the end of the experiment, Watson wanted to recondition Albert to not fear these objects but did not have the opportunity because Albert was adopted and removed from the hospital.

The goal of Watson's experiments was to show that behavior is learned and that Freudian thinking, which attributed behavior to the unconscious, was wrong. Watson's experiment with little Albert explained behavior in far simpler terms.

Watson's study would violate today's standards of ethical conduct. Moreover, Albert was allowed to leave the experiment without being reconditioned, even though Watson stated in his report of the study that such conditioned emotions can last over the life of the individual. More recent research has found that such conditioned responses do not necessarily last a lifetime: conditioned emotions can be shaped and changed by later experience. This disappearance of the conditioned response is called extinction.

On another note, Watson's study has informed subsequent studies and treatments of phobias, which are extreme forms of fear that cause problems in everyday functioning.

Behavioral therapies based on classical conditioning

In human psychology, therapies and treatments based on classical conditioning differ from those based on operant conditioning. Therapies associated with classical conditioning include aversion therapy, flooding, systematic desensitization, and implosion therapy. Implosion therapy and flooding both involve forcing the individual to face an object or situation that gives rise to anxiety; both techniques have been criticized as unethical because they have the potential to cause trauma.

Therapies based on classical conditioning are short-term, usually requiring less time with therapists and less effort from patients than humanistic therapies. The therapies mentioned above are intended either to create an aversion to something or to reduce an existing aversion; all rely on repeated pairings.

When a behavior that has been strongly reinforced in the past no longer produces reinforcement, an extinction burst may occur: the animal repeats the behavior over and over in a burst of activity before the behavior finally dies away.

Aversion therapy

Aversion therapy is a form of psychological therapy designed to eliminate an unwanted behavior, such as a problematic sexual behavior, by associating it with an aversive stimulus such as nausea. Because the aversive stimulus acts as a U.S. and produces a U.R., the pairing of stimulus and behavior leads to the same unpleasant consequence each time the behavior occurs. If the treatment has worked, the patient no longer feels a compulsion to engage in the behavior. This sort of treatment has been used to treat alcoholism as well as drug addiction.

Systematic desensitization

Patients might learn that the objects of their phobias are not so fearful if they can safely re-experience the feared stimulus; however, anxiety often obstructs such recovery. This obstruction is overcome by reintroducing the fear-producing object gradually, through a process known as reciprocal inhibition. The person constructs a hierarchy of events leading up to the feared situation; this hierarchy is then approached step by step, with anxiety relieved at each level. If the therapy is performed correctly, the fear is eventually removed.

Neurological research

Much research on the neurological basis of elementary learning has been conducted on the marine snail Aplysia californica, the California sea hare. Despite having a rather small nervous system of approximately 20,000 neurons, this animal is capable of classical conditioning, habituation, and sensitization, which makes it well suited to experimental research on learning.[2]

Further reading

  • Pavlov, I. P. (1927). Conditioned Reflexes: An Investigation of the Physiological Activity of the Cerebral Cortex (translated by G. V. Anrep). London: Oxford University Press.

See also

  • Backward conditioning
  • Behaviorism
  • Eyeblink conditioning
  • Learned helplessness
  • Nocebo
  • Operant conditioning
  • Placebo (origins of technical term)
  • Rescorla-Wagner model of conditioning
  • S-R theory
  • S-S theory
  • Second-order conditioning
  • Taste aversion
  • Edwin B. Twitmyer

Operant conditioning

Operant conditioning is the modification of behavior by making the presence or absence of rewards or punishments contingent on what is done. Operant conditioning is distinguished from Pavlovian conditioning in that operant conditioning deals with the reinforcement of behavior that has already been voluntarily undertaken, while Pavlovian conditioning is about presenting stimuli together so that a neutral stimulus comes to take on, in the animal's mind, the meaning normally associated with the other stimulus.[3]

Operant conditioning, sometimes called instrumental conditioning or instrumental learning, was first extensively studied by Edward L. Thorndike (1874-1949), who observed the behavior of cats trying to escape from home-made puzzle boxes.[4] When first constrained in the boxes, the cats took a long time to escape. With experience, ineffective responses occurred less frequently and successful responses occurred more frequently, enabling the cats to escape in less time over successive trials. In his Law of Effect, Thorndike theorized that successful responses, those producing satisfying consequences, were "stamped in" by the experience and thus occurred more frequently. Unsuccessful responses, those producing annoying consequences, were stamped out and subsequently occurred less frequently. In short, some consequences strengthened behavior and some consequences weakened behavior. B.F. Skinner (1904-1990) built upon Thorndike's ideas to construct a more detailed theory of operant conditioning based on reinforcement, punishment, and extinction.

Reinforcement, punishment, and extinction

Reinforcement and punishment, the core ideas of operant conditioning, are either positive (a stimulus is introduced into an organism's environment following a response) or negative (a stimulus is removed from an organism's environment following a response). This creates a total of four basic consequences, to which is added a fifth procedure known as extinction (nothing happens following a response).

It is important to note that organisms are not spoken of as being reinforced, punished, or extinguished; it is the response that is reinforced, punished, or extinguished. Nor is the use of these terms restricted to the laboratory: naturally occurring consequences, which are not always delivered by people, can also be said to reinforce, punish, or extinguish behavior.

  • Reinforcement is a consequence that causes a behavior to occur with greater frequency.
  • Punishment is a consequence that causes a behavior to occur with less frequency.
  • Extinction is the lack of any consequence following a response. When a response is inconsequential, producing neither favorable nor unfavorable consequences, it will occur with less frequency.

Four contexts of operant conditioning: Here the terms "positive" and "negative" are not used in their popular sense, but rather: "positive" refers to addition, and "negative" refers to subtraction. What is added or subtracted may be either reinforcement or punishment. Hence positive punishment is sometimes a confusing term, as it denotes the addition of punishment (such as spanking or an electric shock), a context that may seem very negative in the lay sense. The four procedures are:

  1. Positive reinforcement occurs when a behavior (response) is followed by a favorable stimulus (commonly seen as pleasant) that increases the frequency of that behavior. In the Skinner box experiment, a stimulus such as food or sugar solution can be delivered when the rat engages in a target behavior, such as pressing a lever.
  2. Negative reinforcement occurs when a behavior (response) is followed by the removal of an aversive stimulus (commonly seen as unpleasant) thereby increasing that behavior's frequency. In the Skinner box experiment, negative reinforcement can be a loud noise continuously sounding inside the rat's cage until it engages in the target behavior, such as pressing a lever, upon which the loud noise is removed.
  3. Positive punishment (also called "Punishment by contingent stimulation") occurs when a behavior (response) is followed by an aversive stimulus, such as introducing a shock or loud noise, resulting in a decrease in that behavior.
  4. Negative punishment (also called "Punishment by contingent withdrawal") occurs when a behavior (response) is followed by the removal of a favorable stimulus, such as taking away a child's toy following an undesired behavior, resulting in a decrease in that behavior.
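The two-by-two layout of these four procedures can be made explicit in a few lines of code. The classifier below is purely illustrative; the function name and the example comments are inventions for this sketch, not from the source.

```python
# Classify an operant procedure along the two axes described above:
# whether a stimulus is added ("positive") or removed ("negative"),
# and whether the response becomes more frequent (reinforcement)
# or less frequent (punishment).

def classify_procedure(stimulus_added: bool, frequency_increases: bool) -> str:
    sign = "positive" if stimulus_added else "negative"
    kind = "reinforcement" if frequency_increases else "punishment"
    return f"{sign} {kind}"

print(classify_procedure(True, True))    # food delivered after a lever press
print(classify_procedure(False, True))   # loud noise removed after a press
print(classify_procedure(True, False))   # shock delivered after a response
print(classify_procedure(False, False))  # toy taken away after misbehavior
```

Reading the table this way makes clear why "positive punishment" is not a contradiction: "positive" refers only to the addition of a stimulus, never to its pleasantness.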


  • Avoidance learning is a type of learning in which a certain behavior results in the cessation of an aversive stimulus. For example, shielding one's eyes when in the sunlight (or going indoors) avoids the aversive stimulus of bright light in one's eyes.
  • Extinction occurs when a behavior (response) that had previously been reinforced is no longer effective. In the Skinner box experiment, this is the rat pushing the lever and being rewarded with a food pellet several times, and then pushing the lever again and never receiving a food pellet again. Eventually the rat would cease pushing the lever.
  • Non-contingent reinforcement is a procedure that decreases the frequency of a behavior by delivering the reinforcer on a time schedule, independent of the organism's responding. Because the reinforcer is no longer contingent on the undesired behavior, the response-reinforcer relation is weakened and the behavior decreases in frequency.

Operant Conditioning vs Fixed Action Patterns

Skinner's construct of instrumental learning is contrasted with what Nobel Prize-winning biologist Konrad Lorenz termed "fixed action patterns": reflexive, impulsive, or instinctive behaviors. These behaviors were said by Skinner and others to exist outside the parameters of operant conditioning but were considered essential to a comprehensive analysis of behavior.

In dog training, particularly the training of working dogs and detection dogs, the stimulation of these fixed action patterns, which draw on the dog's predatory instincts (the prey drive), is the key to producing very difficult yet consistent behaviors; in most cases this does not involve operant, classical, or any other kind of conditioning[citation needed]. Although evolutionary processes shaped these fixed action patterns over the long time spans evolution requires, the patterns themselves persist because of their survival function.

According to the laws of operant conditioning, a behavior that is rewarded every single time will extinguish faster, while intermittent reinforcement produces more stable rates of behavior that are relatively resistant to extinction. Thus, in detection dogs, every correct indication of a "find" must at first be rewarded with a tug toy or a ball throw for initial acquisition of the behavior. Thereafter, fading procedures, in which the rate of reinforcement is "thinned" (not every response is reinforced), are introduced, switching the dog to an intermittent schedule of reinforcement, which is more resistant to instances of non-reinforcement.

Nevertheless, some trainers are now using the prey drive to train pet dogs and find that they get far better results in the dogs' responses to training than when they only use the principles of operant conditioning[citation needed], which according to Skinner, and his disciple Keller Breland (who invented clicker training), break down when strong instincts are at play.[5]

Biological correlates of operant conditioning

The first scientific studies identifying neurons that responded in ways suggesting they encode conditioned stimuli came from work by Rusty Richardson and Mahlon DeLong.[6][7] They showed that nucleus basalis neurons, which release acetylcholine broadly throughout the cerebral cortex, are activated shortly after a conditioned stimulus, or after a primary reward if no conditioned stimulus exists. These neurons are equally active for positive and negative reinforcers, and have been demonstrated to cause plasticity in many cortical regions.[8]

Evidence also exists that dopamine is activated at similar times. The dopamine pathways encode positive reward only, not aversive reinforcement, and they project much more densely onto frontal cortex regions. Cholinergic projections, in contrast, are dense even in the posterior cortical regions like the primary visual cortex. A study of patients with Parkinson's disease, a condition attributed to the insufficient action of dopamine, further illustrates the role of dopamine in positive reinforcement.[9] It showed that while off their medication, patients learned more readily with aversive consequences than with positive reinforcement. Patients who were on their medication showed the opposite to be the case, positive reinforcement proving to be the more effective form of learning when the action of dopamine is high.

Factors that alter the effectiveness of consequences

How effective a consequence can be at modifying a response will tend to increase or decrease according to various factors. These factors can apply to both reinforcing and punishing consequences.

  1. Satiation: The effectiveness of a consequence will be reduced if the individual's "appetite" for that source of stimulation has been satisfied. Inversely, the effectiveness of a consequence will increase as the individual becomes deprived of that stimulus. If someone is not hungry, food will not be an effective reinforcer for behavior.
  2. Immediacy: After a response, how immediately a consequence is then felt determines the effectiveness of the consequence. More immediate feedback will be more effective than less immediate feedback. If someone's license plate is caught by a traffic camera for speeding and they receive a speeding ticket in the mail a week later, this consequence will not be very effective against speeding. But if someone is speeding and is caught in the act by an officer who pulls them over, then their speeding behavior is more likely to be affected.
  3. Contingency: If a consequence does not contingently (reliably, or consistently) follow the target response, its effectiveness upon the response is reduced. But if a consequence follows the response reliably after successive instances, its ability to modify the response is increased. If someone has a habit of getting to work late, but is only occasionally reprimanded for their lateness, the reprimand will not be a very effective punishment.
  4. Size: This is a "cost-benefit" determinant of whether a consequence will be effective. If the size, or amount, of the consequence is large enough to be worth the effort, the consequence will be more effective upon the behavior. An unusually large lottery jackpot, for example, might be enough to get someone to buy a one-dollar lottery ticket (or even to buy multiple tickets). But if a lottery jackpot is small, the same person might not feel it to be worth the effort of driving out and finding a place to buy a ticket. In this example, it is also useful to note that "effort" is a punishing consequence. How these opposing expected consequences (reinforcing and punishing) balance out will determine whether the behavior is performed or not.

Most of these factors exist for biological reasons. The biological purpose of the principle of satiation is to maintain the organism's homeostasis. When an organism has been deprived of sugar, for example, the effectiveness of the taste of sugar as a reinforcer is high. However, as the organism reaches or exceeds its optimum blood-sugar level, the taste of sugar becomes less effective, perhaps even aversive.

The principles of Immediacy and Contingency exist for neurochemical reasons. When an organism experiences a reinforcing stimulus, dopamine pathways in the brain are activated. This network of pathways "releases a short pulse of dopamine onto many dendrites, thus broadcasting a rather global reinforcement signal to postsynaptic neurons."[10] This makes recently activated synapses able to increase their sensitivity to efferent signals, hence increasing the probability of occurrence for the recent responses preceding the reinforcement. These responses are, statistically, the most likely to have been the behavior responsible for successfully achieving reinforcement. But when the application of reinforcement is either less immediate or less contingent (less consistent), the ability of dopamine to act upon the appropriate synapses is reduced.

Extinction-induced variability

While extinction, when implemented consistently over time, results in the eventual decrease of the undesired behavior, in the near term the subject might exhibit what is called an extinction burst. An extinction burst will often occur when the extinction procedure has just begun. It consists of a sudden and temporary increase in the response's frequency, followed by the eventual decline and extinction of the behavior targeted for elimination.

Take, as an example, a pigeon that has been reinforced to peck an electronic button. During its training history, every time the pigeon pecked the button it received a small amount of bird seed as a reinforcer. So, whenever the bird is hungry, it pecks the button to receive food. If the button is then turned off, the hungry pigeon will first try pecking the button just as it has in the past. When no food is forthcoming, the bird will likely try again, and again, and again. After a period of frantic activity in which its pecking yields no result, the pigeon's pecking will decrease in frequency.

The evolutionary advantage of this extinction burst is clear. In a natural environment, an animal that persists in a learned behavior even when it no longer brings immediate reinforcement might still succeed in producing reinforcing consequences on a later attempt. Such an animal would be at an advantage over one that gives up too easily.

Extinction-induced variability serves a similar adaptive role. When extinction begins, an initial increase in the response rate is not the only thing that can happen. Operant behavior is different from reflexes in that its response topography (the form of the response) is subject to slight variations from one performance to another. These slight variations can include small differences in the specific motions involved, differences in the amount of force applied, and small changes in the timing of the response. The subject's history of reinforcement is what keeps those slight variations stable by maintaining successful variations instead of less successful variations.

Imagine a bell curve. The horizontal axis would represent the different variations possible for a given behavior. The vertical axis would represent the response's probability in a given situation. Response variants in the middle of the bell curve, at its highest point, are the most likely because those responses, according to the organism's experience, have been the most effective at producing reinforcement. The more extreme forms of the behavior would lie at the lower ends of the curve, to the left and to the right of the peak, where their probability for expression is low.

A simple example is a person inside a room opening a door to exit: the response is the opening of the door, and the reinforcer is the freedom to exit. The person does not open the door in exactly the same way every time; rather, each opening differs slightly, sometimes with less force, sometimes with more, sometimes with one hand, sometimes with the other, sometimes more quickly, sometimes more slowly. Because of the physical properties of the door and its handle, there is a certain range of successful responses that are reinforced.

Now imagine in our example that the subject tries to open the door and it won't budge. This is when extinction-induced variability occurs. The bell curve of probable responses will begin to broaden, with more extreme forms of behavior becoming more likely. The person might now try opening the door with extra force, repeatedly twist the knob, try to hit the door with their shoulder, maybe even call for help or climb out a window. This is how extinction causes variability in behavior, in the hope that these new variations might be successful. For this reason, extinction-induced variability is an important part of the operant procedure of Shaping.
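The widening of that bell curve can be sketched numerically. The mean and spread values below are arbitrary illustrations (imagine force applied to the door handle), not measurements from any study.

```python
import random

# Response variants are drawn from a bell curve centered on the historically
# reinforced form. Under extinction, the center stays put but the spread
# grows, making extreme variants (more force, other hand, etc.) more likely.

def sample_variants(n, under_extinction=False, seed=42):
    rng = random.Random(seed)
    mean_force = 10.0                           # the usually successful response
    spread = 4.0 if under_extinction else 1.0   # extinction broadens the curve
    return [rng.gauss(mean_force, spread) for _ in range(n)]

def std(xs):
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

baseline = std(sample_variants(2000))
burst = std(sample_variants(2000, under_extinction=True))
# With the same seed, the extinction sample is roughly four times as
# spread out as the baseline sample, while both remain centered near 10.
```

The point of the sketch is that extinction does not shift the typical response; it fattens the tails, which is exactly where the novel variants exploited by shaping come from.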

Avoidance learning

Avoidance training belongs to the negative reinforcement schedules: the subject learns that a certain response will result in the termination or prevention of an aversive stimulus. There are two kinds of commonly used experimental settings: discriminated and free-operant avoidance learning.

Discriminated avoidance learning

In discriminated avoidance learning, a novel stimulus such as a light or a tone is followed by an aversive stimulus such as a shock (a CS-US pairing, as in classical conditioning). Whenever the animal performs the operant response, the CS (conditioned stimulus) or the US (unconditioned stimulus), whichever is present, is terminated. During the first trials (called escape trials), the animal usually experiences both the CS and the US, performing the operant response to terminate the aversive US. Over time, the animal learns to perform the response during the presentation of the CS itself, thus preventing the aversive US from occurring; such trials are called avoidance trials.

Free-operant avoidance learning

In this experimental setting, no discrete stimulus signals the occurrence of the aversive stimulus. Rather, the aversive stimulus (usually a shock) is presented without an explicit warning stimulus.
Two crucial time intervals determine the rate of avoidance learning. The first is the S-S interval (shock-shock interval): the amount of time that passes between successive presentations of the shock when no operant response is performed. The second is the R-S interval (response-shock interval), which specifies the length of time following an operant response during which no shocks will be delivered. Each time the organism performs the operant response, the R-S interval starts anew.
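The interaction of the two intervals can be sketched as a discrete-time simulation. The interval lengths and response times below are invented for illustration; only the S-S/R-S logic itself comes from the description above.

```python
# Free-operant (Sidman) avoidance: a shock is delivered every ss_interval
# seconds unless the organism responds; each response postpones the next
# shock until rs_interval seconds after that response.

def count_shocks(response_times, session_length, ss_interval, rs_interval):
    responses = sorted(response_times)
    next_shock = ss_interval
    shocks = 0
    i = 0
    for t in range(1, session_length + 1):
        # a response restarts the R-S interval
        while i < len(responses) and responses[i] <= t:
            next_shock = responses[i] + rs_interval
            i += 1
        if t >= next_shock:
            shocks += 1
            next_shock = t + ss_interval  # S-S interval until the next shock
    return shocks

# No responding: with a 5 s S-S interval, a 30 s session delivers 6 shocks.
print(count_shocks([], 30, ss_interval=5, rs_interval=20))       # 6
# Responding at t=4 and t=24 with a 20 s R-S interval postpones every shock.
print(count_shocks([4, 24], 30, ss_interval=5, rs_interval=20))  # 0
```

The sketch makes the learning problem concrete: as long as the organism responds at least once per R-S interval, no shock is ever delivered, even though no warning stimulus exists.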

Two-process theory of avoidance

This theory was originally developed to explain learning in discriminated avoidance learning. It assumes that two processes take place.

(a) Classical conditioning of fear. During the first trials of training (the escape trials), the organism experiences both the CS and the aversive US. The theory assumes that on these trials classical conditioning takes place through the pairing of the CS with the US. Because of the aversive nature of the US, the CS comes to elicit a conditioned emotional reaction (CER): fear. In classical conditioning, presenting a CS that has been conditioned with an aversive US disrupts the organism's ongoing behavior.

(b) Reinforcement of the operant response by fear reduction. Because the CS signaling the aversive US has itself become aversive by eliciting fear, reducing this unpleasant emotional reaction serves to reinforce the operant response: the organism learns to make the response during the CS, thereby terminating the aversive internal reaction the CS elicits. An important aspect of this theory is that the term "avoidance" does not really describe what the organism is doing: it does not "avoid" the aversive US in the sense of anticipating it, but rather escapes an aversive internal state caused by the CS.

  • One of the practical applications of operant conditioning to animal training is the use of shaping (reinforcing successive approximations to a target behavior while withholding reinforcement from earlier approximations), as well as chaining.

See also

  • Animal training: a task that typically (though not always) requires operant conditioning.
  • Applied Behavior Analysis: the use of operant procedures in applied settings.
  • Behavior modification
  • Behaviorism: a family of philosophies stating that behavior is explained by external events; the background against which operant conditioning procedures were developed.
  • Classical conditioning
  • Cognitivism: a theory that behavior may be explained by invoking internal mental representations and operations, in direct contrast to behaviorism.
  • Educational psychology: an academic domain that draws on operant conditioning for classroom management purposes.
  • Educational technology
  • Experimental analysis of behavior: the field of experimental research responsible for developing operant conditioning procedures.
  • Premack principle: the theory that a more desirable action can be used effectively as a reinforcer for a less desirable one.
  • Reinforcement
  • Social conditioning
  • Operant hoarding
  • Resurgence


  1. Principles of Learning and Behavior, Domjan, Fifth Edition, page 70
  2. Kolb, B. & Whishaw, I. (2001). An Introduction to Brain and Behavior. New York: Worth Publishers. ISBN 0-7167-5169-0
  3. The Principles of Learning and Behavior, Fifth Edition, Ed. Michael Domjan
  4. Thorndike, E. L. (1901). Animal intelligence: An experimental study of the associative processes in animals. Psychological Review Monograph Supplement, 2, 1-109.
  5. Breland, Keller & Breland, Marian (1961), The Misbehavior of Organisms, American Psychologist.
  6. J. Neurophysiol. 34:414-27, 1971.
  7. Advances Exp. Medicine Biol. 295:233-53, 1991.
  8. PNAS 93:11219-24, 1996; Science 279:1714-8, 1998.
  9. Michael J. Frank, Lauren C. Seeberger, and Randall C. O'Reilly (2004) "By Carrot or by Stick: Cognitive Reinforcement Learning in Parkinsonism," Science 4, November 2004
  10. Schultz, Wolfram (1998). Predictive Reward Signal of Dopamine Neurons. The Journal of Neurophysiology, 80(1), 1-27.

New World Encyclopedia writers and editors rewrote and completed the Wikipedia article in accordance with New World Encyclopedia standards. This article abides by the terms of the Creative Commons CC-by-sa 3.0 License (CC-by-sa), which may be used and disseminated with proper attribution; credit is due under the terms of this license to both the New World Encyclopedia contributors and the selfless volunteer contributors of the Wikimedia Foundation.

Note: Some restrictions may apply to use of individual images which are separately licensed.