How Scientific is Parapsychology?

Abstract: In the pursuit of truth, parapsychologists aim to investigate and understand the real-life anomalous experiences of people within a controlled testing environment. Scepticism and criticisms of parapsychology are discussed in relation to recent research, and it is argued that parapsychological research is strongly scientific in character.

What is Parapsychology?
Parapsychology is a branch of experimental psychology that developed out of the earlier work of psychical researchers, who attempted to use scientific techniques to test spiritual claims about life after death and human consciousness. Societies such as the SPR (Society for Psychical Research) were originally founded by Victorian scientists to investigate evidence for the human soul; after a century of debate, modern parapsychology exists to examine such claims and experiences empirically, within strict scientific settings such as laboratories, rather than through field research and spontaneous cases.

Apparent paranormal phenomena, or “anomalous experiences”, persist across cultures and have done so throughout history; such experiences have taken a plethora of different forms and interpretations. Are these experiences all illusion or fantasy, or is there something more to them? Parapsychology sets out to establish whether there is something in such experience beyond fantasy or illusion, and furthermore attempts to measure the effects and perhaps eventually construct theories upon such evidence. However, just how valid these measurements are is a major question in the modern scientific literature, and various explanations involving experimenter error, poor protocol or analysis, and even fraud have been proposed to account for the apparently confirmatory results obtained by parapsychologists for the existence of that “something else”.

Thouless and Wiesner (1948) coined the term “psi” as a neutral expression for the variety of anomalous experience, though it has come to cause theoretical confusion in itself as to its actual meaning. Psi is generally taken to represent the unknown factor arising in the empirical data from the event that takes place and from the participant's own experience, both of which are monitored by the experimenters. As Dean Radin points out, science has two main aspects: the empirical and the theoretical. Empirical science consists of gathering data; the theoretical aspect comes into play when constructing models and conceptual mechanisms to explain those data. While it is true that parapsychologists have very little theory that is highly plausible and parsimonious, they do, however, have much empirical data suggesting that there is indeed an unknown factor at play. This unknown factor is referred to as “psi” rather than as ESP (extra-sensory perception) or PK (psychokinesis), terms which impose a theoretical interpretation on the data.

Why all the controversy? Parapsychologists have presented statistical anomalies that challenge the current scientific paradigm without a ready-made and acceptable explanation as to why. This has resulted in much scrutiny of procedures and methodology from the larger scientific community, and consequently parapsychologists aim to be exceptionally stringent scientists.

What is a sceptic? Why be sceptical?
Being a sceptic in any field amounts to a personal demand for reasonable evidence and rationale behind a given theory. In parapsychology, the issue of scepticism is a contentious one, as the term itself is often misused and sceptical scientists misrepresented. Many sceptics of parapsychology seem to use the term to denote disbelief in the implications of proposed empirical claims. However, any good scientist should be a sceptic in every field (especially their own) in order to seek out reasonable evidence and rationale, and should not drive their data sets by personal conviction. Provided that parapsychology uses the same stringent mathematical tools, experimental controls and procedures as other sciences, there is no more reason to be sceptical of parapsychology's scientific validity than of any other science's. However, many scientists appear to base their sceptical stance on their own convictions about the impossibility of ESP and the like, and this prompts the question: how much evidence is needed? Are we to set the amount required for belief at an individual level, differing for each person who examines the evidence? Would that not make everyone a sceptic? Few people believe that their views are not based on reason and evidence. One could argue that such “sceptics” are making a religion out of science and claiming rational supremacy.

When discussing parapsychology and bias, one must be careful when speaking of “sceptics”: do we mean those in opposition to parapsychology, or those interested in parapsychology who disbelieve in the so-called “paranormal”? Opposing the study of anomalous experience on the belief that there is nothing to study is in itself a biased and unscientific viewpoint; a true sceptic would not base their scientific rationale on mere personal conjecture and selective argument. Surely parapsychologists vary in their degrees of personal belief or disbelief in the paranormal and extended consciousness, much as astronomers vary in their belief in black holes? Searching and testing for the elusive does not necessarily imply a belief system committed to either the existence or non-existence of some phenomenon. It could be argued that people enter parapsychological research because of personal experiences that have led them to seek the “truth”; however, this is no more spurious an argument than one about the personal beliefs of psychologists who study schizophrenia or alcoholism. It seems naïve to assume that all such psychologists work in the mental health profession because they are mentally ill or have had some personal experience of these conditions. Perhaps this is true for some, but it is unlikely to explain the existence of a whole field of researchers in any scientific genre. Dean Radin even suggests that parapsychologists are as sceptical as the majority of associated scientists and colleagues, since this is the attitude necessary to conduct good scientific research.

What makes parapsychology scientific?
Key words: objectivity and controls; reactions to constructive criticism; methodology, meta-analysis and statistics; artefact control (subject/experimenter effects and blind trials)

In science, it is generally thought that in order to investigate the nature of something, we must first establish that there is something to investigate, and then proceed to relate current knowledge and theories to it. Some sceptical critics claim that parapsychology fails at this fundamental point: that there are no established effects to investigate. This is a self-defeating argument, for in order to establish whether something exists we must look at the raw data, not at proposed theories of what exists and whether they fit the data. The raw data suggest the existence of some effect, and so we must investigate this unknown, or “psi”, anomaly and base our conclusions on the outcomes, not on presumptions.
The behavioural sciences have only relatively recently come to accept the problem of subject/experimenter artefacts in testing: the uncontrolled factors which jeopardise the validity and reliability of experimentation. The reluctance to change was largely due to the behaviourist paradigm within which the behavioural sciences were growing, based on the ideas of Skinner and of Watson (1913), in which cognition and individual differences were thought largely irrelevant. Early studies of the Hawthorne effect, and of obedience to authority such as the famous Milgram (1963) experiment, all helped to accelerate the growth of artefact control and the utility of psychology in scientific research. Again, parapsychology is no different, and great care is taken to ensure that experimenter and participant artefacts are eliminated where possible.

Various critics, such as Wiseman, Smith and Kornbrot (1996), have brought important methodological issues to the attention of both parapsychologists and experimental psychologists in general. The exposure of design flaws and procedural safeguards is essential for the progression of any area of enquiry; productive and responsible research is the priority. As a scientific community we must encourage fair and constructive criticism that can be heeded in later experimental refinement and replication. A good experiment should have clear aims of scientific enquiry, and its findings must be made accessible to others in both the scientific world and the general public, for whom we are investigating.

Control trials for comparisons
Control trials are essential in any experiment designed to test a procedure or effect, and parapsychological experiments are no exception. A typical example is a DMILS (direct mental interaction between living systems) experiment on the feeling of being watched by an unseen observer, i.e. remote staring detection: in half of the trials a starer actively stares at the participant, and in the other half there is no starer. The order of these trials is randomised, and precautions are always taken to ensure that the participant has no knowledge of which is which. An often-cited problem with such experiments is information leakage, usually presumed to arise from subtle cues from the starer, or from interaction between experimenter and participant.
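As a concrete illustration, the randomised, balanced ordering of stare/no-stare trials described above can be sketched in a few lines of Python. This is a hypothetical sketch, not code from any actual DMILS study; the function name and condition labels are invented.

```python
import random

def make_schedule(n_trials, seed=None):
    """Build a balanced, randomly ordered schedule of 'stare' and
    'no_stare' trials. Only the starer (in a separate room) ever
    sees this list; the participant stays blind to it."""
    if n_trials % 2:
        raise ValueError("n_trials must be even to balance the conditions")
    rng = random.Random(seed)  # seeding makes the schedule reproducible
    schedule = ["stare"] * (n_trials // 2) + ["no_stare"] * (n_trials // 2)
    rng.shuffle(schedule)
    return schedule

schedule = make_schedule(20, seed=42)
# the two conditions always occur equally often, in an unpredictable order
```

Balancing the conditions before shuffling guarantees equal numbers of stare and no-stare trials, so any difference in the participant's responses cannot be attributed to unequal trial counts.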

Experimenter effects – ESP?
It has been suggested that the starer might give away very subtle, unintentional cues through changes in breathing rhythm or body language; if the starer became excited in some fashion, perhaps by the staree scoring a potential hit, or even a dead miss, then the starer might show a reaction of awe, frustration or disappointment, quite unconsciously. Such expressions may travel as sound and be processed unconsciously by the staree. As a result of such criticism, protocols have been tightened again and again: for example, DMILS experiments now place participants in separate sound-proofed rooms on different floors (Dalton, 1997). Artefacts due to participant/experimenter cueing are a real issue, but one that can be relieved by taking the above precautions and by using blind measures.

The regulation and control of participant selection helps to eradicate deliberate deception by participants. If the experimenter is also kept blind to the order of conditions, then they cannot influence the participant's outcome. On the other hand, if we are to assume that ESP is possible, we cannot reasonably rule out that the experimenter may become aware of the conditions via ESP themselves and somehow act as a conduit to the participant. Since we cannot feasibly test the “psi facilitation” of third parties without control tests guaranteeing the absence of an experimenter psi effect, I suggest that experimenter psi effects ought to be examined via meta-analyses of research involving differing designs of experimenter involvement and protocol. The participants used in parapsychology experiments vary as much as in any other behavioural science. Generally, a random sample of the population will include both believers and non-believers (“sheep” and “goats” respectively, in parapsychological terms), but various studies use a more specific population, such as creative individuals (Morris, Dalton, Delanoy and Watt, 1995), to test a specific hypothesis about such people. In either case it is important to guard against fraudulent participation as much as possible; some people would benefit, financially or in status, from faking a test in order to appear psychic, so care must be taken in both participant selection and the methodology itself. The use of standardised questionnaires in which “faking good” and “faking bad” are accounted for is highly desirable.
Marilyn Schlitz has conducted experiments with Richard Wiseman in which both carried out the same ESP experiment (same protocols, equipment, randomisation procedures and subject population) in his laboratory, each testing half of the participant sample. There was a clear difference in outcome: Schlitz obtained a positive psi effect and Wiseman did not. The experiment was repeated under identical conditions in Schlitz's laboratory, and the same contrast in results was found again. Schlitz speculates that the experimenter's expectations may play a role (Wiseman and Schlitz, 1997).

Furthermore, by utilising meta-analytic techniques, experimenter effects (both interactional and non-interactional) can be monitored, and attempts made to control the number of experimenters and to limit and standardise their interactions with subjects in future research. Subject effects such as the “good-subject effect” (Orne, 1962) can also lead to artefacts which are misleading and difficult to separate from a positive psi result. A well-known method for controlling such artefactual influences and nuisance factors as experimenter/subject expectancy effects (Rosenthal, 1976) and information leakage is, of course, to conduct blind trials. Most areas of scientific research are familiar with double-blind trials, in which neither participant nor experimenter is aware of the trial condition. The order of conditions and trials is randomised and set by a third party who has no influence over the experimental condition; computers are now frequently used to produce the order of operation. What is almost unheard of outside parapsychology is the triple-blind trial, in which not only the participant and experimenter are unaware of the order of conditions, but so too is the person performing the analysis, who works from pre-determined coding systems. Such strict protocols leave very little room for scrutiny. Sheldrake (1998) conducted a literature review of blind-methodology use across the sciences, finding blind methods in 0.8% of papers from the biological sciences, 5.9% in the medical sciences, 4.9% in animal behaviour and psychology, and an impressive 85.2% of papers in parapsychology. However, although Sheldrake highlights the predominance of blind testing in parapsychology, care must be taken in interpreting these results, which also illustrate how easy it is to interpret statistical findings subjectively: different numbers of papers were used to derive the percentages, e.g. 237 papers from the physical sciences were examined but only 143 from animal behaviour and psychology. Furthermore, we must consider the relevance of blind trials in each area of enquiry: the physical sciences deal with sub-atomic particles and the like, whereas parapsychologists deal with human subjects. Do physicists need blind methods? Unless we are considering quantum accounts such as Evan Harris Walker's parapsychological claims based on wave-function theory, physics and its experimental protocols are probably best left to physicists.
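The triple-blind arrangement can be made concrete with a small sketch, assuming a two-condition design; the function name and records are hypothetical. A third party replaces the condition labels with opaque codes, the analyst receives only the coded records, and the key stays sealed until the analysis is locked in.

```python
import random

def blind_trials(trials, seed=None):
    """Relabel each trial's condition with an opaque code ('A'/'B').
    Returns the coded records plus the key; a third party keeps the
    key sealed until the analyst's results are finalised."""
    rng = random.Random(seed)
    labels = ["A", "B"]
    rng.shuffle(labels)  # which code means which condition is itself random
    key = {"stare": labels[0], "no_stare": labels[1]}
    coded = [{"condition": key[t["condition"]], "score": t["score"]}
             for t in trials]
    return coded, key

trials = [{"condition": "stare", "score": 0.61},
          {"condition": "no_stare", "score": 0.47}]
coded, key = blind_trials(trials, seed=3)
# the analyst sees only 'A' and 'B'; the key is opened after analysis
```

Because the analyst works from a pre-registered plan on anonymised labels, their expectations cannot tilt the analysis toward either condition.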

Are parapsychologists begging the question?
Studying the elusive unknown factor that appears to exist in experimental data, what parapsychologists generically refer to as “psi”, raises many serious questions about bias, in particular experimenter bias. Many sceptics (such as the self-proclaimed sceptics Dr. Barry Beyerstein, Prof. Richard Wiseman and Dr. Caroline Watt) have suggested that some parapsychologists are merely searching for evidence, any evidence, to back up their own beliefs; they may well be correct, but bias is a problem across the whole of scientific investigation, and it is unfair to suggest that it is unique or intrinsic to parapsychology.

“I wish there were a genuinely skeptical community. I'm afraid that just about every skeptic I've ever met is what I call a pseudoskeptic. A real skeptic says, "I don't know about parapsychology and psi, and the explanations we have so far don't satisfy me. I want to look at the data." But the skeptics I've encountered claim to know already that there's nothing to it, and then they break all sorts of rules of scientific procedure to go about their debunking. Skepticism, as it is generally practiced, is neither legitimate science nor legitimate criticism.”
Charles Tart

It seems that the self-proclaimed sceptics who make a point of criticising parapsychologists suggest that the support such a “pseudoscience” receives arises because the public are gullible and eager to believe in extra-sensory perception (ESP) and the like, and because the media which support it share much the same opinions as the public and are in no position to evaluate researchers' claims critically. While it is true that the majority of the public believe in some aspect of “the paranormal”, and that media journalists are unlikely to have a strong scientific background, it seems unfair to assume that there are no sceptics and good scientists among that public too. Furthermore, these sceptics often rely upon media attention for the funding and advertising of their own publications. There is an irony in devoting research time to proving the non-existence of something generally thought to be elusive and practically unidentifiable other than through apparent statistical anomalies. It is especially confusing when certain individuals are hailed as leading sceptics; perhaps the word sceptic is used erroneously here, and “critic” or “debunker” of methodological validity would be more appropriate. Belief simply should not come into the debate.

Parapsychology uses statistical tools to differentiate between results reflecting random variation and genuinely significant effects, and the statistical techniques used in parapsychology are the same standardised tools used generically across the psychological and scientific spectrum. However, as statistics is itself a progressing science with its own theory and evidence, we cannot be completely confident that every technique used is flawless; but this is not peculiar to parapsychology, and using the apparently valid and most convincing rules and tools available is part of science. If parapsychologists are scrutinised over a particular formula, then the chances are that this formula is scrutinised everywhere. Of course, care must be taken in every field to use appropriate methods and to use them efficiently. Some of the criticisms that come from the natural or traditional sciences about the use of statistics in parapsychology are somewhat unfair, as the behavioural sciences use different techniques from physics and chemistry because of the complexity of factors involved in human subjects, including personality and environmental effects.
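To make the statistical point concrete, here is a minimal sketch of the kind of standard test involved: an exact one-tailed binomial test of whether a guessing score exceeds chance. The numbers are invented for illustration and are not taken from any actual study.

```python
from math import comb

def binom_p_value(hits, trials, p_chance):
    """Exact one-tailed probability of scoring `hits` or more by
    chance alone, summing the upper tail of the binomial distribution."""
    return sum(comb(trials, k) * p_chance**k * (1 - p_chance)**(trials - k)
               for k in range(hits, trials + 1))

# e.g. a forced-choice guessing task with 5 possible targets (chance = 1/5):
# 60 hits in 200 guesses, against the 40 expected by chance
p = binom_p_value(60, 200, 0.2)  # comes out far below the 0.05 threshold
```

Exactly the same calculation is used for any forced-choice task in experimental psychology; nothing about it is peculiar to parapsychology.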

Meta-analysis pools together a number of different studies with similar research hypotheses and methodologies; it is an analysis of analyses. It has rightly been pointed out that the studies included in a meta-analysis could be selectively picked because of their results, i.e. only studies yielding a positive psi effect. Furthermore, it is often claimed that in parapsychology only results above chance are actually published, and that the inclusion of every psi study ever conducted would reduce the apparent psi effect to chance levels. Charles Tart makes an interesting counter-claim: the number of unsuccessful experiments that would need to be included in a meta-analysis to nullify the apparent psi effect would require every single person on Earth to perform ten unsuccessful experiments a day for 5,000 years. Such claims of File Drawer Syndrome (in which data that do not support the hypothesis are conveniently filed away from the eyes of others and ignored in meta-analyses) apply to parapsychological research and, of course, to all other psychological domains; but as a result of the harsh criticism and peer scrutiny that parapsychologists endure, great care is taken to discourage this. Indeed, the prevalence of meta-analytic procedures in psychological research is largely due to their success in parapsychology.
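Tart's style of argument can be checked with Rosenthal's classic “fail-safe N” formula, which estimates how many unpublished null studies would be needed to drag a combined result down to bare significance. A minimal sketch follows; the z-scores are invented purely for illustration.

```python
def fail_safe_n(z_scores):
    """Rosenthal's fail-safe N: the number of unretrieved null studies
    (mean z = 0) needed to reduce the combined result to p = .05
    one-tailed (critical z = 1.645)."""
    k = len(z_scores)
    total = sum(z_scores)
    return (total * total) / (1.645 ** 2) - k

# thirty hypothetical studies, each with z = 2.0:
n = fail_safe_n([2.0] * 30)  # roughly 1300 hidden null studies needed
```

When the fail-safe N dwarfs the number of researchers active in a field, the file-drawer explanation becomes hard to sustain, which is precisely the shape of Tart's argument.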

Meta-analysis has been promoted throughout the experimental research world largely by parapsychologists themselves, and it aims to address the issues of subjectivity and bias in research as far as possible. However, as a meta-analysis pools the results of many studies by different researchers with different personal agendas and theoretical biases, it is difficult to be certain that these biases are accounted for. Although it seems likely that a large enough sample of studies will represent the wide variety of research methodologies and human biases, we cannot make such an assumption, since many areas of research are heavily paradigm-oriented. Ideally, completed studies would be entered into various meta-analyses in order to identify problematic trends which may have remained unaddressed in methodology or analysis. Various hypotheses about psi effects could also be further tested and pooled, allowing other scientists a more general and informed approach to evaluating and criticising parapsychology.

When extracting the relevant information to code from each study included in a meta-analysis, we must acquire not only the significance level and effect size but also information relevant to our specific research goals, which will of course vary from researcher to researcher. For example, a meta-analysis of DMILS (direct mental interaction between living systems) studies might extract information about the specific setting and circumstances of each study: methodological features such as control groups; participant criteria (how participants were recruited, their sex, occupation and relation to the experimenter, and how they were assigned to groups); moderator variables; and experimenter effects. Since DMILS studies vary in design, with two-participant trials (i.e. sender/receiver) or single-participant trials in which the experimenter acts as a participant, this is likely to be of interest in a meta-analysis where the “sender”, or “starer” in remote staring detection trials, may itself be considered a moderator variable. Using specifically extracted formats or indices of data may result in very different meta-analytic procedures being conducted on the same resources by different people, and it is almost impossible to test and estimate the extent of all the variations in people, testing procedures and population differences.
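One common way of pooling significance levels across studies, often used alongside effect-size coding in meta-analyses, is Stouffer's method: each study's result is expressed as a z-score and the z-scores are combined. A minimal unweighted sketch with invented numbers:

```python
from math import erf, sqrt

def stouffer_z(z_scores):
    """Combine per-study z-scores into one overall z (unweighted
    Stouffer method); under the null the combined z remains N(0, 1)."""
    return sum(z_scores) / sqrt(len(z_scores))

def one_tailed_p(z):
    """Upper-tail p-value from the standard normal CDF."""
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))

# five modest hypothetical studies, none individually significant
z = stouffer_z([1.1, 0.8, 1.4, 0.9, 1.2])
p = one_tailed_p(z)  # the pooled result is stronger than any single study
```

This illustrates why meta-analysis is attractive in a field of small effects: several individually unconvincing studies can jointly carry real evidential weight, which is also why the coding and inclusion decisions discussed above matter so much.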

This is where subjectivity becomes an issue, much as it does in a literature review. Subjectivity in research can be exposed via meta-analysis: where two or more meta-analyses have attempted to use the same coding, or have used the same extracts of data under different names, the correlations of the items featured across the meta-analyses could reveal a great deal about how certain factors are interpreted and how they are used in relation to research questions (Wakefield, 1980).

The usefulness of meta-analysis was indeed harnessed, and the technique refined, in parapsychology to help investigate the file drawer and biasing problems discussed above, as well as to spot deficiencies in methodologies and scientific constructs. The extent to which such analyses can be truly impartial remains to be seen over the coming years, but the accessibility of the same information to different researchers will undoubtedly result in more debate over interpretations and more agreement over data.

It seems that meta-analysis is an idea at the epistemological heart of science, where the aim is the refinement of knowledge and the direction of scientific enquiry; but it does itself still require a little refining.

Fraud is a sad fact of every field of science, and an issue that can never entirely be ruled out, especially when participants unknown to the experimenters take part. Certain pairs of participants could, for example, work out a strategy for signalling one another in advance of the experiment, or rote-learn ideal answers to questionnaires. Morris (1987) and Wiseman and Morris (1995) point out that individuals determined to undermine an experiment can do so within even the most well-controlled studies if they have thorough knowledge of the methodology and prior research. However, such fraud would need to occur on many trials in order to influence the average.
Experimenter fraud is again a damning yet realistic possibility: some scientists have been known to fake results in pursuit of recognition or funding, or through self-deception. Every scientist must be on the lookout for such misdemeanours and help to prevent fraud.


So what are parapsychologists doing? What are the data anomalies and what do they represent? Are they artefacts created by sensory or information leakage? Interactional effects? Psi? Or mathematics we do not yet understand? Whatever our model or scientific belief system, these studies and parapsychological findings represent something that undeniably must be investigated for the advancement of science and the understanding of human experience.
Carefully controlled experiments, as discussed, with adequate statistical tools and understanding, are all any scientist can hope for in their research. As long as the scientists in charge of such endeavours maintain an open, sceptical mind, then the pursuit of truth (if there is truth) is not lost.

References

Dalton, K.S. (1997). Unpublished doctoral thesis. University of Edinburgh.
Milgram, S. (1963). Behavioural study of obedience. Journal of Abnormal and Social Psychology, 67, 371-378.
Morris, R.L. (1987). Minimising subject fraud in parapsychology laboratories. European Journal of Parapsychology, 6, 137-149.
Morris, R.L., Dalton, K.S., Delanoy, D.L., & Watt, C. (1995). Comparison of the sender/no sender condition in the ganzfeld. Proceedings of Presented Papers of the Parapsychological Association 38th Annual Convention, 244-259.
Orne, M.T. (1962). On the social psychology of the psychological experiment: With particular reference to demand characteristics and their implications. American Psychologist, 17, 776-783.
Radin, D. (1997). The Conscious Universe. HarperCollins, U.S.A.
Rosenthal, R. (1976). Experimenter Effects in Behavioural Research. New York: John Wiley.
Sheldrake, R. (1998). Could experimenter effects occur in the physical and biological sciences? Skeptical Inquirer, 22(3), 57-58.
Thouless, R. & Wiesner, B. (1948). The psi process in normal and "paranormal" psychology. Journal of Parapsychology, 12, 192-212.
Watson, J.B. (1913). Psychology as the behaviorist views it. Psychological Review, 20, 158-177.
Wiseman, R. & Morris, R.L. (1995). Guidelines for Testing Psychic Claimants. Hatfield: University of Hertfordshire Press.
Wiseman, R. & Schlitz, M. (1997). Experimenter effects and the remote detection of staring. Journal of Parapsychology, 61(3), 197.
Wiseman, R. & Smith, M.D. (1994). A further look at the detection of unseen gaze. Proceedings of the Parapsychological Association 37th Annual Convention, 465-478.
Wiseman, R., Smith, M.D., & Kornbrot, D. (1996). Assessing possible sender-to-experimenter acoustic leakage in the PRL autoganzfeld. Journal of Parapsychology, 60, 97-128.

© 2004

Copyright © 2005 Sinergy Group Media