Can Deception Be Justified In Management
And Organisational Research?

Peter Engholm

Monash University, May 2001

ABSTRACT:

Deception in organisational and management research raises a wide array of ethical and moral questions. Opponents of deceptive techniques argue that the principles of individual autonomy and respect are violated when participants are not fully informed about a study, and that deception may also cause them psychological harm. Deception can likewise have negative consequences for researchers and for society in general. Nevertheless, this paper takes a ‘cost-benefit’ approach to deceptive practices and argues that deception can be morally and ethically justified as long as the study advances scientific knowledge, has no viable alternatives, and is designed to minimise potential harm. Additionally, it is imperative that ethics committees approve deceptive studies before they are conducted.
KEYWORDS:

Deception, management research, organisations, ethics
FULL TEXT:

The use of deception in management and organisational research raises a wide array of ethical and moral questions, and although some ethics committees and associations (such as the American Psychological Association) provide guidelines for when deceptive methods might be justifiable, researchers are often left to their own judgment of which methods are ethically and morally acceptable. The first part of this paper discusses various objections to the use of deceptive methods and why some suggest that deception is not justifiable in management and organisational research. The second part considers whether some deception may be justified despite the problems it seems to create. It shall be argued that alternative methods should be deployed wherever deception is not imperative for a research project to proceed and succeed. In some cases, however, deception is necessary for the advancement of scientific knowledge and can be morally and ethically justified as long as the research minimises potential harm to participants and is approved by an ethics committee or conforms to association guidelines.

In deception research, the researcher studies subjects who are purposely led to hold false beliefs or assumptions about the study (Sieber, 1992; Singleton and Straits, 1998). In other words, participants in deception research do not fully understand the true nature of the research and therefore cannot be said to be properly informed about it. This violates one of the most fundamental codes of scientific research practice, namely that of informed consent. This code demands that researchers obtain the consent of human participants before research involving them can proceed, and further demands that this consent rest on the participants’ having access to relevant information about the research and properly understanding its purpose. It is an expression of the sentiments of Western societies, in which respect for the value of autonomy is paramount (Clarke, 1999). Everyone should have the right to decide freely and unconditionally whether to participate in a research project, but it is imperative that this decision be based on the truth about the project – not lies. It is also widely argued that a discipline built upon the value of truth is incompatible with deceptive behaviour (Diener and Crandall, 1978), and that lies have no place in a modern society founded on respect for the individual – therefore deception is not justifiable.

Another ethical issue concerns the potential psychological harm experienced by participants as a result of deceptive methods. This argument follows from the previous issue of autonomy and runs as follows: if participants do not know all the details of the research, how can they make a rational, autonomous choice not to participate when they believe they might suffer strong negative psychological effects from it? Deceiving subjects about the purpose of a study raises the possibility that they will contribute to the achievement of a scientific goal that conflicts with their own beliefs, for example, deeply held moral and religious beliefs (Wendler, 1996).

Participants may also feel upset simply because they have been deceived. Wendler (1996) refers to a study by Fleming that examined participants’ acceptance of deceptive methods. The study revealed that one-third of the participants were upset by the deception, although two-thirds supported the study and said they would be willing to participate again. Withholding information or lying about the purpose of a study therefore potentially harms not only people with moral, ethical or religious concerns, but also those who believe that deception is unacceptable per se.

Defenders of deception argue that there are ways to deal with these problems. One suggestion involves letting participants know that some parts of the study must be kept secret but will be explained to them afterwards (in the form of a debriefing). Participants are told that deception is necessary for the advancement of scientific knowledge and are given the opportunity to have these issues clarified after the study. The expected outcome is that participants should not leave the study with less self-esteem or more anxiety than they entered with (Diener and Crandall, 1978). Research on the effects of deception and debriefing indicates that, in general, carefully administered debriefing is effective (Singleton and Straits, 1998). At the same time, however, there is a lack of evidence that this method fully alleviates all the previously mentioned harms associated with the research (Clarke, 1999). A study by Reeves et al. (1996) also revealed that it is difficult to generalise the effects of deception across studies, or even across the sexes of participants. In other words, the problem of autonomy and the unconditional choice of participation is not solved simply by explaining the rationale for deceiving participants. Some people would still choose not to participate had all information been given to them before the commencement of the study. Even if the researcher carefully determines all the potential risks or harms that participants could experience and informs them of these, it would be impossible to cover every aspect of the study that might produce negative experiences. Doing so would probably also serve to confuse the participants more than inform them, as pointed out by Wendler (1996).

Another obvious critique of pre-risk assessment and explanation is that it may defeat the whole purpose of conducting a deceptive study. There will always be matters that participants must not know beforehand, and these would inevitably harm some of them. Consent and deception are two opposing demands that cannot both be satisfactorily met in a deceptive study. Consider Evans’s (2000) comment on this problem:

One can hardly consent to be deceived, at least prospectively, without falling into paradox: if one knowingly consents, the deception is to that extent dissolved, whereas if the deception evades the consent then the consent was not truly given. (Evans, 2000: 189)

The risk of harming participants and exposing them to situations without their consent is not the only reason for the argument that deception is unjustifiable in research. Another criticism concerns its negative effects on society in general and on scientists in particular. Opponents of deceptive research argue that the more prevalent the practice of deception in social psychology, the more the science comes to be associated with that practice, leading to an erosion of public trust in scientists and their purposes (Schrag, 2001). People may also incorrectly interpret real-life situations as scientific experiments. One event commonly referred to in the deception literature is the shooting incident at the Seattle campus of the University of Washington in 1973. Students on their way to class witnessed a shooting and neither stopped to help the victim nor pursued the assailant. When questioned later, they said they had thought it was all a psychology experiment (Diener and Crandall, 1978; Greenberg and Folger, 1988; Schrag, 2001)!

This illustrates how deception can have ramifications for both science and society. If people become accustomed to experiments in which participants are given no opportunity to consent, and suspect they might occasionally and unknowingly be involved in field experiments that resemble real-life situations, they may act on this belief even when no study is actually taking place. Additionally, knowledge of deception can change the behaviour of participants, whether or not deception is involved in the particular study (Diener and Crandall, 1978). This means that findings from management and organisational research become distorted and less valuable, as the probability increases that people change their behaviour because they believe they might be deceived.

Deception may also have negative effects on researchers themselves, creating feelings of guilt from lying or from treating people merely as subjects rather than as individuals (Diener and Crandall, 1978). By justifying deception in research, people may also come to justify deception in other areas of their lives. An interesting piece of research would evaluate the link between the acceptance of deception for finding the truth and the acceptance of deception for hiding the truth – for example, in marriages and in the political sphere.

The discussion so far does not paint a positive image of deception in management and organisational research. It has been shown that deception may harm participants and violate their basic rights to autonomy and self-determination, and that it may also cause problems for researchers themselves as well as for society. Still, many researchers use deceptive methods, and such methods are sanctioned by various ethics bodies and researcher associations. Why?

It is widely acknowledged that some truths may not be studied if participants are fully aware of the purpose of the study (Babbie, 2001; Clarke, 1999; Goode, 1996; Wendler, 1996). This is especially the case in studies on human behaviour. In studies on helping behaviour, for example, if participants were aware that they had not been fully informed, they would quite likely recognise that the research setting was contrived (Schrag, 2001).

It shall be argued here that the use of deceptive methods can be morally and ethically justified as long as the study advances scientific knowledge, has no alternatives, and is constructed to minimise potential harm to everyone involved.

The first point concerns the value of the project. As with all organisational and management research, the underlying assumption is that the findings will contribute to scientific knowledge and to society at large. If a deception study has little or no scientific value and would knowingly expose participants to harm, it should not be done. Alternatively, other methods could be used, such as role-plays or simulation games in which participants act as if the situation were real. However, as Clarke (1999) argues, if the research has considerable scientific value, some degree of potential harm may be justified. This argument is often referred to as the ‘cost-benefit’ approach, which states that the researcher must weigh the costs (potential harm to participants) against the potential benefits (scientific value) when assessing the ethical and moral dimensions of the research. As pointed out earlier, deception always involves some risk of causing harm, since participants who are not fully informed about the project never have the opportunity to make an unconditional and autonomous choice not to participate.

Considering what scientific research has contributed and will contribute to our society, it is difficult to defend the argument that deception cannot be deployed if any harm is involved or any autonomy violated. Allowing deception for the purpose of achieving a greater good is not uncommon in our society and should not automatically be considered unethical or immoral. Moreover, respect for autonomy is not necessarily violated just because a person has not been fully informed. As Klockars and O’Connor (1979) note, we may deceive children, and this is considered justifiable when it is necessary to help them become ‘better’ adults. Similarly, we do not tell the truth about a surprise birthday party to the person celebrating, but deceive him or her for a greater, justifiable cause. Telling the truth is not always the better course, contrary to what some seem to argue. The claim that our society never justifies deception simply does not reflect reality, and a society entirely without deception might not even be one we want. Lies and misleading information occasionally have the potential to lead to a greater good; therefore, the issue is not so much whether deception is justifiable, but rather to what extent it can be applied and how researchers can minimise the potential harm to participants.

Thus, there is a need for balance, and this is where external ethics committees and review bodies play an important part. The codes of ethics of both the American Psychological Association and the American Sociological Association allow for deception (Singleton and Straits, 1998) and guide researchers in creating methodologies that minimise its negative effects. Because participants in a deceptive study cannot give their full informed consent, an ethics committee can serve as an intermediary and provide the ‘missing’ consent to the study if the negative effects of deception are considered minimal. Most of these bodies agree that deception is justified as long as:

1. The study contributes to the advancement of scientific knowledge.

2. There are no alternatives to conducting the study that do not involve deceptive methods. That is, deception is necessary for the study to accomplish its scientific goals.

3. The study does not deceive participants about significant aspects that would create unpleasant personal experiences. To minimise the risk of potential harm to participants, debriefing should always be used as long as it does not compromise the necessary deceptive elements of the study.

To sum up, this paper argues for a case-by-case, cost-benefit (teleological) ethical approach to deception in organisational and management research. This means that for each research study, the morality of acts should be judged in relation to the ends they serve. The researcher would make this judgment initially, but the ultimate ‘green light’ must be given by a committee responsible for reviewing research proposals involving human subjects. An absolute stance against deception is hard to defend despite the various objections raised in this paper – the benefits gained from deceptive research methods simply seem too numerous. This does not mean that the costs should be neglected, but rather that they should be carefully considered and balanced against the perceived benefits.

Any reproduction or distribution of this text is prohibited without express permission by the author. Please contact peter@engholm.nu for permission or further information.

REFERENCES:

Babbie, E. (2001). The Practice of Social Research, Belmont: Wadsworth.

Clarke, S. (1999). “Justifying Deception in Social Science Research”, Journal of Applied Philosophy, Vol 16, No 2, 151-166.

Diener, E. and Crandall, R. (1978). Ethics in Social and Behavioural Research, Chicago & London: The University of Chicago Press.

Evans, M. (2000). “Justified Deception? The Single Blind Placebo in Drug Research”, Journal of Medical Ethics, Vol 26, No 3, 188-193.

Goode, E. (1996). “The Ethics of Deception in Social Research: A Case Study”, Qualitative Sociology, Vol 19, No 1, 11-33.

Greenberg, J. and Folger, R. (1988). Controversial Issues in Social Research Methods, New York: Springer-Verlag.

Klockars, C.B. and O’Connor, F.W. (1979). Deviance and Decency, Beverly Hills & London: Sage Publications.

Reeves, R.A., Baker, G. and Goldberg, S.J. (1996). “The Effects of Researcher Precautions on Perceptions of the Ethicality of Unobtrusive Field Experiments”, The Journal of Psychology, January.

Schrag, B. (2001). “Commentary on ‘Do the Ends Justify the Means? The Ethics of Deception in Social Science Research’”, http://www.onlineethics.org (Accessed 12 May 2001).

Sieber, J.E. (1992). Planning Ethically Responsible Research, Newbury Park: Sage Publications.

Singleton Jr, R.A. and Straits, B.C. (1998). Approaches to Social Research, 3rd Ed., New York: Oxford University Press.

Taylor, K.M. and Shepperd, J.A. (1996). “Probing Suspicion Among Participants in Deception Research”, American Psychologist, Vol 51, No 8, 886.

Wendler, D. (1996). “Deception in Medical and Behavioural Research”, The Milbank Quarterly, Vol 74, No 1, 87-114.