Hawthorne effect

From New World Encyclopedia
Revision as of 19:37, 1 February 2007 by Marissa Kaufmann (talk | contribs) (copied from wikipedia)



The Hawthorne effect refers to the phenomenon that when people are observed in a study, their behavior or performance temporarily changes. Others have broadened the definition to mean that people's behavior and performance change following any new or increased attention. The term takes its name from a factory called the Hawthorne Works[1], where a series of experiments on factory workers was carried out between 1924 and 1932.

There were many types of experiments conducted on the employees, but the purpose of the original ones was to study the effect of lighting on workers’ productivity. When researchers found that productivity almost always increased after a change in illumination, no matter what the level of illumination was, a second set of experiments began, supervised by Harvard University professors Elton Mayo, Fritz Roethlisberger and William J. Dickson.

They experimented with other types of changes in the working environment, using a study group of six young women. Again, no matter the change in conditions, the women nearly always produced more. The researchers reported that they had accidentally found a way to increase productivity. The effect was an important milestone in industrial and organizational psychology and in organizational behavior. However, some researchers have questioned the validity of the effect because of the experiments' design and faulty interpretations. (See Interpretation, criticism, and conclusions below.)

The Hawthorne Experiments

Like the Hawthorne effect, the definition of the Hawthorne experiments also varies. Most industrial/occupational psychology and organizational behavior textbooks refer to the illumination studies, and usually to the relay assembly test room experiments and the bank wiring room experiments. Only occasionally are the rest of the studies mentioned.[2]

Illumination studies

The Hawthorne Works, located in Cicero, Illinois and just outside of Chicago, belonged to the Western Electric Company, and the studies were funded by the National Research Council of the National Academy of Sciences at the behest of General Electric, the largest manufacturer of light bulbs in the United States [3]. The purpose was to find the optimum level of lighting for productivity.

During two and a half years from 1924 to 1927, a series of illumination level studies was conducted [4]:

  • Study 1a: In the first experiment, there was no control group. The researchers experimented on three different departments; all showed an increase of productivity, whether illumination increased or decreased.
  • Study 1b: A control group had no change in lighting, while the experimental group got a sequence of increasing light levels. Both groups substantially increased production, and there was no difference between the groups. This naturally piqued the researchers' curiosity.
  • Study 1c: The researchers decided to see what would happen if they decreased lighting. The control group got stable illumination; the other got a sequence of decreasing levels. Surprisingly, both groups steadily increased production until finally the light in the experimental group got so low that the workers protested and production fell off.
  • Study 1d: This study was conducted on only two workers. Their production stayed constant under widely varying light levels. If the experimenter said brighter was better, they said they preferred the light, and the brighter they believed it to be, the more they liked it; the same was true when he said dimmer was better, and even when they were deceived about a change, they said they preferred it. Researchers concluded that the workers' preferences on lighting level were completely subjective: whatever level they were told was good, they believed was good and preferred.

At this point, researchers realized that something else besides lighting was affecting productivity. They suspected that the supervision of the researchers had some effect, so they ended the illumination experiments in 1927.

Relay assembly experiments

The researchers wanted to identify how other variables could affect productivity. They chose two women as test subjects and asked them to choose four other workers to join the test group. Together the women worked in a separate room over the course of five years (1927-1932) assembling telephone relays.

Output was measured mechanically by counting how many finished relays each dropped down a chute. This measuring began in secret two weeks before moving the women to an experiment room and continued throughout the study. In the experiment room, they had a supervisor who discussed changes with them and at times used their suggestions. Then the researchers spent five years measuring how different variables impacted the group's and individuals' productivity. Some of the variables were:

  • changing the pay rules so that the group was paid for overall group production, not individual production
  • giving two 5-minute breaks (after a discussion with them on the best length of time), and then changing to two 10-minute breaks (not their preference). Productivity increased, but when they received six 5-minute rests, they disliked it and reduced output.
  • providing food during the breaks
  • shortening the day by 30 minutes (output went up); shortening it more (output per hour went up, but overall output decreased); returning to the earlier condition (where output peaked).

Changing a variable usually increased productivity, even if the variable was simply a change back to the original condition. One suggested explanation is that the workers naturally adapted to the changing environment without knowing the objective of the experiment, and worked harder because they believed they were being assessed individually on the effort they put into their work.

Researchers hypothesized that choosing one's own coworkers, working as a group, being treated as special (as evidenced by working in a separate room), and having a sympathetic supervisor were the real reasons for the productivity increase. One interpretation, mainly due to Mayo, was that "the six individuals became a team and the team gave itself wholeheartedly and spontaneously to cooperation in the experiment." (There was a second relay assembly test room study whose results were not as significant as the first experiment.)

Bank wiring room experiments

The purpose of the next study was to find out how payment incentives would affect group productivity. The surprising result was that they had no effect. Ironically, this contradicted the Hawthorne effect: although the workers were receiving special attention, it didn’t affect their behavior or productivity! However, the informal group dynamics studied were a new milestone in organizational behavior.

The study was conducted by Mayo and W. Lloyd Warner between 1931 and 1932 on a group of 14 men who put together telephone switching equipment. The researchers found that although the workers were paid according to individual productivity, productivity did not go up because the men were afraid that the company would lower the base rate. The men also formed cliques, ostracized coworkers, and created a social hierarchy that was only partly related to the difference in their jobs. The cliques served to control group members and to manage bosses; when bosses asked questions, clique members gave the same responses, even if they were untrue.

Mica splitting test room

In this study, conducted from 1928 to 1930, workers in the mica splitting room were paid by individual piece rate rather than by group incentives, while work environment conditions were changed to see how they affected productivity. Over the fourteen months of observation, productivity increased by fifteen percent.

Definitions

Here are some sample definitions of the Hawthorne effect, showing how differently it can be defined:

  • An experimental effect in the direction expected but not for the reason expected; i.e., a significant positive effect that turns out to have no causal basis in the theoretical motivation for the intervention, but is apparently due to the effect on the participants of knowing themselves to be studied in connection with the outcomes measured [5].
  • The Hawthorne Effect [is] the confounding that occurs if experimenters fail to realize how the consequences of subjects' performance affect what subjects do[6].
  • People singled out for a study of any kind may improve their performance or behavior, not because of any specific condition being tested, but simply because of all the attention they receive[7].
  • People will respond positively to any novel change in work environment.[8]

Interpretation, criticism, and conclusions

H. McIlvaine Parsons (1974) argues that in studies 2a (the first case) and 2d (the fourth case) the workers had feedback on their work rates, but in 2b they did not. He argues that across studies 2a-d there is at least some evidence that the following factors were potent:

  1. Rest periods
  2. Learning given feedback, i.e., skill acquisition
  3. Piecework pay where an individual does get more pay for more work, without counter-pressures (e.g. believing that management will just lower pay rates).

Clearly the variables the experimenters manipulated were neither the only nor the dominant causes of productivity changes. In 1958 Landsberger reinterpreted the experimental outcomes as the more general result of being observed and labeled this result the "Hawthorne effect."

Parsons redefines "the Hawthorne effect as the confounding that occurs if experimenters fail to realize how the consequences of subjects' performance affect what subjects do" [i.e. learning effects, both permanent skill improvement and feedback-enabled adjustments to suit current goals]. So he is saying it is not attention or warm regard from experimenters, but either (a) an actual change in rewards or (b) a change in the provision of feedback on performance. His key argument is that in 2a the "girls" had access to the counters of their work rate, which previously they had not known at all well.

It is notable however that he refuses to analyze the illumination experiments, which don't fit his analysis, on the grounds that they haven't been properly published and so he can't get at details, whereas he had extensive personal communication with Roethlisberger and Dickson.

It is possible that the illumination experiments were explained by a longitudinal learning effect. Mayo, however, says the results are to do with the fact that the workers felt better in the situation, because of the sympathy and interest of the observers. He does say that this experiment is about testing overall effect, not testing factors separately. He also discusses it not really as an experimenter effect but as a management effect: how management can make workers perform differently because they feel differently, much of it to do with feeling free, not feeling supervised, and being more in control as a group. The experimental manipulations were important in convincing the workers that conditions were really different. The experiment was repeated with similar effects on mica splitting workers.

When we refer to "the Hawthorne effect" we are largely referring to Mayo's interpretation in terms of workers' perceptions, but the data show strikingly continuous improvement. A quite different interpretation might therefore be possible: learning, expertise, and reflection, all processes independent of the experimental intervention. However the usual Mayo interpretation is certainly a real possible issue in designing studies in education and other areas, regardless of the truth of the original Hawthorne study.

Recently the issue of "implicit social cognition," i.e. how much weight we actually give to what is implied by others' behavior towards us (as opposed to what they say, e.g. flattery), has been discussed: this must be an element here too.

Richard E. Clark and Timothy F. Sugrue (1991, p. 333), in a review of educational research, say that uncontrolled novelty effects (i.e. halo effects) cause on average a rise of 30% of a standard deviation (SD) (i.e. a move from the 50th to roughly the 62nd percentile), which decays to a small level after 8 weeks. In more detail: 50% of an SD for up to 4 weeks; 30% of an SD for 5-8 weeks; and 20% of an SD beyond 8 weeks (which is less than 1% of the variance).
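Under a normality assumption, SD fractions like these can be converted into percentile equivalents; the following sketch is purely illustrative (the function and the week bands are not from Clark and Sugrue themselves):

```python
from math import erf, sqrt

def percentile(d):
    """Percentile rank, relative to the control distribution, of a score
    d standard deviations above the control mean (normal CDF)."""
    return 100 * 0.5 * (1 + erf(d / sqrt(2)))

# The decaying novelty effect described above, as percentile moves:
for weeks, d in (("up to 4 weeks", 0.50), ("5-8 weeks", 0.30), ("beyond 8 weeks", 0.20)):
    print(f"{weeks}: {d:.2f} SD is roughly the {percentile(d):.0f}th percentile")
```

So a 0.30 SD rise moves the average participant from the 50th to about the 62nd percentile of the untreated distribution, which is the sense in which the figures above translate into score gains.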

Can we trust the research?

Candice Gleim says:

Broad experimental effects and their classifications can be found in Campbell, D. T. & Stanley, J. C. (1966) Experimental and Quasi-Experimental Designs for Research (Chicago: Rand McNally), and in Cook, T. D. & Campbell, D. T. (1979) Quasi-Experimentation: Design and Analysis Issues (Houghton Mifflin Co.).

Michael L. Kamil says: You might want to be a bit careful about the scientific basis for the Hawthorne effect. Lee Ross has brought the concept into some question, as reported in a popular news story in The New York Times a couple of years ago.

David Carter-Tod says: A psychology professor at the University of Michigan, Dr. Richard Nisbett, calls the Hawthorne effect "a glorified anecdote": "Once you've got the anecdote," he said, "you can throw away the data." It is a dismissive comment which back-handedly tells you something about the power of anecdote and narrative. There is, however, no doubt that there is a Hawthorne effect in education.

Harry Braverman says in Labor and Monopoly Capital: The Hawthorne tests were based on behaviorist psychology and were supposed to confirm that workers' performance could be predicted by pre-hire testing. However, the Hawthorne study showed "that the performance of workers had little relation to ability and in fact often bore a reverse relation to test scores...". What the studies really showed was that the workplace was not "a system of bureaucratic formal organization on the Weberian model, nor a system of informal group relations, as in the interpretation of Mayo and his followers, but rather a system of power, of class antagonisms." This discovery was a blow to those hoping to apply the behavioral sciences to manipulate workers in the interest of management.

What may be wrong about the quoted dismissiveness is that there was not one study, but three illumination experiments and four other experiments; only one of these seven is alluded to. What is right is that (a) there certainly are significant criticisms of the method that can be made, and (b) most subsequent writing shows a predisposition to believe in the Hawthorne effect, and a failure to read the actual original studies.

Can we trust the literature?

The experiments were done well enough to establish that there were large effects due to causal factors other than the simple physical ones the experiments had originally been designed to study. The output ("dependent") variables were human work, and we can expect educational effects to be similar (though it is not so obvious that medical effects would be). The experiments stand as a warning against treating experiments on human participants as if the participants were only material systems. There is less certainty about the nature of the surprise factor, other than that it certainly depended on the mental states of the participants: their knowledge, beliefs, and so on.

Candidate causes are:

  1. Material factors, as originally studied e.g. illumination, ...
  2. Motivation or goals e.g. piecework, ...
  3. Feedback: one cannot learn a skill without good feedback. Simply providing proper feedback can be a big factor; it can often be a side effect of an experiment, and good ethical practice promotes this further. Indeed, providing feedback and nothing else may by itself be a powerful factor.
  4. The attention of experimenters.

Parsons implies that (4) might be a "factor" as a major heading in our thinking, but as a cause it can be reduced to a mixture of (2) and (3). That is, people might take on pleasing the experimenter as a goal, at least if it doesn't conflict with any other motive; but also, improving their performance by improving their skill depends on getting feedback on their performance, and an experiment may give them this for the first time. So you often won't see any Hawthorne effect: it appears only when it turns out that with the attention came either usable feedback or a change in motivation.

Adair (1984) warns of gross factual inaccuracy in most secondary publications on the Hawthorne effect, and notes that many studies failed to find it, though some did. He argues that we should look at it as a variant of Orne's (1973) experimental demand characteristics: an experimental effect depends on the participants' interpretation of the situation, which may not be at all like the experimenter's interpretation, and the right method is to conduct post-experimental interviews in depth and with care to discover participants' interpretations. So for Adair it is not awareness per se, nor special attention per se; one has to investigate participants' interpretations in order to discover if and how the experimental conditions interact with the participants' goals (in the participants' view). This can affect whether participants believe something, whether they act on it, whether they see it as in their interest, and so on.

Rosenthal and Jacobson (1992), ch. 11, also review and discuss the Hawthorne effect.

Its interpretation in management research

The research was and is relevant firstly to the 'Human Resources Management' movement. The discovery of the effect was most immediately a blow to those hoping to apply the behavioral sciences to manipulate workers in the interest of management.

Other interpretations it has been linked to are: Durkheim's 'anomie' concept; the Weberian model of a system of bureaucratic formal organization; a system of informal group relations, as in the interpretation of Mayo and his followers; a system of power, of class antagonisms.

Summary view of Hawthorne

In the light of the various critiques, we can see the Hawthorne effect at several levels.

At the top level, it seems clear that in some cases there is a large effect that experimenters did not anticipate, that is due to participants' reactions to the experiment itself. It only happens sometimes. So as a methodological heuristic (that you should always think about this issue) it is useful, but as an exact predictor of effects, it is not: often there is no Hawthorne effect of any kind. To understand when and why we will see a Hawthorne or experimenter effect, we need more detailed considerations.

At a middle level Adair (1984) says that the most important (though not the only) aspect of this is how the participants interpret the situation. Interviewing them (after the "experiment" part) would be the way to investigate this.

This is important because factory workers, students, and most experimental participants are doing things at the request of the experimenter. What they do depends on their personal goals, on how they understand the task requested, on whether they want to please the experimenter, on whether they see this task as impinging on other interests and goals they hold, and on what they think the experimenter really wants. Besides all those issues that determine their goals and intentions in the experiment, further aspects of how they understand the situation can be important by affecting what they believe about the effects of their actions. Thus the experimenter effect is really not one of interference, but of a possible difference in the meaning of the situation for participants and experimenter. Since all voluntary action (i.e. action in most experiments) depends both on the actor's goals and on their beliefs about the effects of their actions, differences in understanding of the situation can have big effects.

At the lowest level is the question of what the direct causal factors might be. These could include:

  • Material ones that are intended by the experimenter
  • Feedback that an experiment might make available to the participants
  • Changes to goals, motivation, and beliefs about action effects induced by the experimental situation.

Science studies

If you want just to find causes and laws, not to achieve any useful practical effect, then the focus is on isolating causes by controlling experiments and avoiding things such as the Hawthorne effect. Hence, in medical research, double blind trials etc.

Note that double blind trials (where neither experimenter nor patient know which intervention/treatment they are getting during the trial) are quite practicable for testing pills (where a dummy sugar pill can easily be made that the patient cannot tell apart from other pills); but not for major surgery, nor usually for educational interventions that require actions by the learner: in these cases participants necessarily know which treatment they have been given.

Double (or triple) blind trials "control for" all the above effects in the sense of making them equal for all groups, by removing the ability of both experimenter and participants even to know which treatment they are getting, much less to believe they know which is more effective.

They may tend to reduce the placebo effect since the patient knows they have only a 50% chance that they are getting the active treatment. However they do NOT remove the Hawthorne effect (only make it equal for all groups in the trial), since on the contrary the experiment almost certainly makes participants very aware of receiving special attention. This could mean that the effect sizes measured in some groups are misleading, and would not be seen later in normal practice. The trial would be a fair comparison between groups, but the (size of) effect measured would not be predictive of the effect seen in non-experimental conditions, due to a similar "error" (i.e. effect due to the Hawthorne effect) applying to both groups.

This could, at least in theory, matter. A case in point could be comparing homeopathic and conventional medicine: generally a patient will get about 50 minutes of the practitioner's attention in the former case, and 5 minutes in the latter. It is not hard to imagine that this could have a significant effect on patient recovery. A standard double blind experiment would be most seriously misleading in a case where both a drug and the Hawthorne effect of attention were of similar size but not additive (i.e. either one was effective, but getting both gave no extra benefit): a conventional trial would see similar and useful effect sizes in all groups, but would not be able to tell that in fact either giving the drug or giving an hour's attention to the patient was an effective therapy on its own.
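A toy model of this scenario shows how a blind trial can miss a non-additive drug effect. All numbers and names below are hypothetical, chosen only to make the logic concrete:

```python
def recovery(drug, attention, base=0.25, gain=0.5):
    """Hypothetical recovery probability: either the drug or an hour of
    the practitioner's attention raises it by `gain`, but the two
    benefits do not add (the non-additive case discussed above)."""
    return base + (gain if (drug or attention) else 0.0)

# In a double-blind trial, both arms receive the trial's attention:
trial_difference = recovery(True, True) - recovery(False, True)

# In ordinary brief practice, the drug alone would still work:
solo_difference = recovery(True, False) - recovery(False, False)

print(trial_difference)  # 0.0 - the trial concludes the drug does nothing
print(solo_difference)   # 0.5 - yet the drug alone is an effective therapy
```

Both arms improve substantially over an untreated baseline, so the trial sees "useful effect sizes" everywhere while the between-arm comparison vanishes, exactly the ambiguity described in the text.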

Finally, neither medicine nor education habitually employs counter-balanced experimental designs, where all participants get both treatments: one group gets A then B, and the other gets B then A. This is because of the possibility of asymmetric transfer effects, i.e. the effect of B (say) is different depending on whether or not the participant had A first. For instance, learning French vocabulary first and then reading French literature is not likely to have the same effect as receiving them the other way round.

Applied or engineering studies (Shayer)

Shayer thinks there are distinct questions and stages to address in applied as opposed to "scientific" research, i.e. in research on being able to generalize the creation of a desired effect:

  1. Study primary effect: Is there an effect (whatever the cause), what effect, what size of effect?
  2. Replication: can it be done by other enthusiasts (not only by the original researcher)?
  3. Generalizability: can it be done by non-enthusiasts? i.e. can it be transferred via training to the general population of teachers? i.e. without special enthusiasm or skills. This is actually a test of the training procedure, not of the effect — but that is a vital part of whether the effect can be of practical use.

One danger is the Hawthorne effect: you get an effect, but not due to the theory. The opposite is to get a null effect even though the theory is correct because transfer/training didn't work. So you need to do projects in several stages, showing effects at each.

In stage (1) you do an experiment and show there really is an effect, defensible against all worries. But you still haven't shown what caused it: the factors described in your theory, or the experimenter, i.e. there is no defense against Hawthorne. Use one or two teachers, and control rigorously. In stage (2) you show it can be done by others, so it is at least not just a Papert charisma effect, though it still might be a learner enthusiasm (halo) effect. Use, say, twelve teachers. In stage (3) you are testing whether training can be done.

Note that if what you care about is improving learning and the learners' experience, then you may want to maximize, not avoid, halo and Hawthorne effects. If you can improve learning by changing things every year and telling students that this is the latest thing, then that is the ethical, practical, and effective thing to do.

Rosenthal's suggestions on method

Rosenthal and Jacobson (1992) have a brief chapter proposing methods to address these effects, at least for "science" studies of primary effects.

They say firstly we should have Hawthorne controls i.e. 3 groups: control (no treatment); experimental (the one we are interested in); a Hawthorne control, which has a change or treatment manifest to participants but not one that could be effective in the same way as the experimental intervention. [This is the reply to wanting to do triple blind trials, but not being able to avoid participants knowing something is being done; AND is a response to measuring the size of the placebo effect as well as of the experimental effect.]

Secondly, have "expectancy control designs": a 2 × 2 of {control / experimental} × {with / without secondary participants expecting a result}. [The Hawthorne effect and control groups are about subject expectancies; expectancy controls are about the Pygmalion effect, i.e. teachers' expectancies.]

So, combining these, they then suggest a 2 × 3 design of {teacher expects effect or not} × {control, experimental, Hawthorne control i.e. placebo treatment}. The point of these is not merely to avoid confounding factors but to measure their existence and size in the case being studied.

N.B. A medical trial with drug and placebo groups is most like having experimental and Hawthorne-control groups but no pure control group. Adding the latter would additionally require a matched group that was monitored but given no treatment. However participants are normally told it is a blind trial, rather than fully expecting both treatment and placebo to be effective, so this is not an exact parallel.

Adair (1984) suggests that the important (though not the only) aspect of these effects is how the participants interpret the situation. Interviewing them (after the "experiment" part) would be the way to investigate this. This is also essential in "blind" trials to check whether the blinding is in fact effective. Some trials which are conducted and probably published as blind are in fact not. If the active treatment has a readily perceptible side effect on most patients (e.g. hair falls out, urine changes color, pronounced dry mouth) both doctors and patients will quickly know who does and does not have the active drug. Blinding depends on human perception, and so these perceptions should be measured.

Summary recommended method

First party (cf. "single blind"): the pupil or patient. Second party (cf. "double blind"): the teacher, doctor, or researcher. The design crosses second-party expectancy with treatment:

  • Teacher (mis)led to expect a positive result: experimental group; control group (no treatment); Hawthorne control (irrelevant treatment / placebo)
  • Teacher (mis)led to expect no effect: experimental group; control group (no treatment); Hawthorne control (irrelevant treatment / placebo)

Plus interview both first and second parties on how they see (interpret) the situation.

We know that all the above factors can have important and unexpected effects. So we cannot trust results that do not at least try to control for them. A double or triple blind procedure allows a 2-group experiment to control for them. Rosenthal's recommended 6-group approach is three times more costly; however, it does not merely control for but measures the size of all three effects (placebo, Hawthorne, and the material effect) separately, and their interactions. If the effects aren't there, that might be grounds for doing it more simply and cheaply in future. But if they are, then without the larger design we cannot know what size of effect to expect in real life, only that there is an effect that is independent of expectations. Thus we could see a blind trial as somewhat like Shayer's stage 1 (establishing the existence of an effect), while the larger designs also address aspects of later practical stages.
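With the six-group design, the three effects can be read directly off the cell means. The sketch below uses invented group scores purely for illustration; the cell labels and arithmetic mirror the 2 × 3 design described above, not any published data:

```python
# Mean outcome per group (invented scores). Rows: second-party expectancy;
# columns: treatment condition in Rosenthal's 2 x 3 design.
means = {
    ("expects effect",    "experimental"): 75,
    ("expects effect",    "hawthorne"):    68,  # placebo / irrelevant treatment
    ("expects effect",    "control"):      60,  # no treatment
    ("expects no effect", "experimental"): 70,
    ("expects no effect", "hawthorne"):    63,
    ("expects no effect", "control"):      60,
}

def material_effect(row):
    """Gain from the intervention itself, over and above mere attention."""
    return means[(row, "experimental")] - means[(row, "hawthorne")]

def hawthorne_effect(row):
    """Gain from receiving a manifest (but inert) treatment at all."""
    return means[(row, "hawthorne")] - means[(row, "control")]

def expectancy_effect(col):
    """Gain from the teacher expecting a positive result (Pygmalion)."""
    return means[("expects effect", col)] - means[("expects no effect", col)]

print(material_effect("expects effect"))    # 7
print(hawthorne_effect("expects effect"))   # 8
print(expectancy_effect("experimental"))    # 5
```

A 2-group blind trial would report only one of these differences; the larger design prices each effect separately, which is exactly the extra information the extra groups buy.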

Because placebo effects are so large and so prevalent in medicine, blind trials have become the standard there. Nevertheless they do not give information about the size of benefit to be expected in real-life use. In fact the benefit may initially be greater than in the trials, because the placebo effect will be unfettered (everyone will expect the treatment to work after the trials), though it may decline to lower levels later. Another way of looking at it is that blind trials test the effect of, say, the drug, but resolutely refuse to investigate the placebo and Hawthorne benefits, even though these may be of similar size and of real benefit to the patient. Drug companies may reasonably stick to research that informs only their own concerns, but researchers who claim to investigate all causes, or all causes that benefit patients or pupils, have much less excuse.

Currently we do not understand how any of these effects work. This could probably be discovered, but it would require some concentrated research, e.g. on uncovering how expectancies are communicated (cf. "Clever Hans"), unconsciously or at any rate implicitly, and on what expectancies are in fact generated.

See also

  • Experimenter effect
  • Reflexivity (social theory)
  • Pygmalion effect
  • Placebo effect
  • John Henry Effect

References

  1. "The Hawthorne Works" from Assembly Magazine
  2. What We Teach Students About the Hawthorne Studies: A Review of Content Within a Sample of Introductory I-O and OB Textbooks
  3. "The Hawthorne Experiments: Management Takes A New Direction" http://www.mgmtguru.com/mgt301/301_Lecture1Page10.htm
  4. Roethlisberger, F.J. and Dickson, W.J. (1939). Management and the Worker. Cambridge, MA: Harvard University Press, 14-18. 
  5. The Hawthorne, Pygmalion, placebo and other effects of expectation: some notes
  6. Parsons, H. M. (1974). What happened at Hawthorne?. Science 183: 93.
  7. The Hawthorne defect: Persistence of a flawed theory
  8. Jex, S. M. (2002). Organizational psychology: A scientist–practitioner approach. New York: John Wiley & Sons. 
  • Mayo, E. (1933) The human problems of an industrial civilization (New York: MacMillan) ch. 3.
  • Roethlisberger, F. J. & Dickson, W. J. (1939) Management and the Worker (Cambridge, Mass.: Harvard University Press).
  • Landsberger, Henry A. (1958) Hawthorne Revisited, (Ithaca, NY: Cornell University)
  • Gillespie, Richard (1991) Manufacturing knowledge : a history of the Hawthorne experiments (Cambridge : Cambridge University Press)
  • Jones, Stephen R. G. (1992) "Was There a Hawthorne Effect?" The American Journal of Sociology 98(3), pp. 451-468. From the abstract: "the main conclusion is that these data show slender to no evidence of the Hawthorne Effect."
  • Franke, R.H. & Kaul, J.D. "The Hawthorne experiments: First statistical interpretation." American Sociological Review, 1978, 43, 623-643.
  • Steve Draper, university professor of the UK.

Further reading

  • G. Adair (1984) "The Hawthorne effect: A reconsideration of the methodological artifact" Journal of Appl. Psychology 69 (2), 334-345 [Reviews references to Hawthorne in the psychology methodology literature.]
  • Clark, R. E. & Sugrue, B. M. (1991) "Research on instructional media, 1978-1988" in G. J. Anglin (ed.) Instructional technology: past, present, and future, ch.30, pp.327-343. Libraries unlimited: Englewood, Colorado.
  • Gillespie, Richard, (1991) Manufacturing knowledge : a history of the Hawthorne experiments. Cambridge : Cambridge University Press.
  • Jastrow (1900) Fact and fable in psychology. Boston: Houghton Mifflin.
  • Lovett, R. "Running on empty" New Scientist 20 March 2004 181 no.2439 pp.42-45
  • Leonard, K.L. and Masatu, M.C. "Outpatient process quality evaluation and the Hawthorne effect" Social Science and Medicine 69 no.9 pp.2330-2340
  • Marsh, H.W. (1987) "Student's evaluations of university teaching: research findings, methodological issues, and directions for future research" Int. Journal of Educational Research 11 (3) pp.253-388.
  • Elton Mayo (1933) The human problems of an industrial civilization (New York: MacMillan)
  • Orne, M. T. (1973) "Communication by the total experimental situation: Why is it important, how it is evaluated, and its significance for the ecological validity of findings" in P. Pliner, L. Krames & T. Alloway (Eds.) Communication and affect pp.157-191. New York: Academic Press.
  • H. M. Parsons (1974) "What happened at Hawthorne?" Science 183, 922-932 [A very detailed description, in a more accessible source, of some of the experiments; used to argue that the effect was due to feedback-promoted learning.]
  • Fritz J. Roethlisberger & Dickson, W. J. (1939) Management and the Worker. Cambridge, Mass.: Harvard University Press.
  • Rosenthal, R. (1966) Experimenter effects in behavioral research (New York: Appleton).
  • Rosenthal, R. & Jacobson, L. (1968, 1992) Pygmalion in the classroom: Teacher expectation and pupils' intellectual development. Irvington publishers: New York.
  • Rhem, J. (1999) "Pygmalion in the classroom" in The national teaching and learning forum 8 (2) pp. 1-4.
  • Schön, D. A. (1983) The reflective practitioner: How professionals think in action (Temple Smith: London) (Basic books?)
  • Shayer, M. (1992) "Problems and issues in intervention studies" in Demetriou, A., Shayer, M. & Efklides, A. (eds.) Neo-Piagetian theories of cognitive development: implications and applications for education ch. 6, pp.107-121. London: Routledge.
  • Wall, P. D. (1999) Pain: the science of suffering. Weidenfeld & Nicolson.
  • Zdep, S. M. & Irvine, S. H. (1970) "A reverse Hawthorne effect in educational evaluation." Journal of School Psychology 8, pp.89-95.


Credits

New World Encyclopedia writers and editors rewrote and completed the Wikipedia article in accordance with New World Encyclopedia standards. This article abides by terms of the Creative Commons CC-by-sa 3.0 License (CC-by-sa), which may be used and disseminated with proper attribution. Credit is due under the terms of this license, which can reference both the New World Encyclopedia contributors and the selfless volunteer contributors of the Wikimedia Foundation. To cite this article click here for a list of acceptable citing formats. The history of earlier contributions by wikipedians is accessible to researchers here:

The history of this article since it was imported to New World Encyclopedia:

Note: Some restrictions may apply to use of individual images which are separately licensed.