Note: This is a piece I wrote for my Psychology degree earlier this year. While it’s not the most fun of reads, it illustrates a big debate in Psychology, and one that any academic should know about extensively before deciding to carry out a piece of research.
Psychological research is a relatively new scientific discipline: the first research laboratory was established just over one hundred years ago (Kim, 2008), and debate is still rife as to where psychology belongs on the scientific spectrum. Positivism, or natural science, takes the epistemological view that authentic knowledge can only be gained through the analysis of quantitative data obtained by direct experience or observation (Skinner, 1953). Under the positivist view, science is theoretical and universal, and the laws created through rigorous hypothesis testing are the only explanations needed to account for the world around us.
In the domain of psychology, this implies that researchers should not use self-report methods, or, at most, should be highly sceptical of the accounts provided by participants. Findings must be replicable to be considered a law, and experimentation is the only method controlled enough to merit use (Smith, 2008).
This approach is tied to realism, a specific way of understanding the relationship between objects and our views of them. For example, a computer exists as an object, and our representation of it is of something used to complete various processes. Realism holds that objects and our views of them exist separately, but are similar enough to be seen as reflections of each other.
Unfortunately, psychology is not an area that produces similar results on a regular basis: the Stanford Prison Experiment (Zimbardo, 1971) is renowned for showing the consequences of slipping mindlessly into a prisoner or guard role, yet its findings could not be reproduced in Reicher and Haslam’s (2005) closely matched study. Perhaps even more famous is how Skinner’s (1957) book, Verbal Behaviour, was refuted by Chomsky (1959), which started a whole new movement in psychology.
Lippmann (1922) heavily condemned the realist view, most famously in his quote regarding intelligence testing:
If… the impression takes root that these tests really measure intelligence… that they reveal ‘scientifically’ [a child’s] predestined ability, then it would be a thousand times better if all the intelligence testers and all their questionnaires were sunk without warning in the Sargasso Sea. (p. 1)
Psychologists on the other side of the debate take a social constructivist view of the scientific world, holding that everybody has a slightly different perception of objects (Mehan, 1981). Take the example of a balloon: to a child it is a plaything, a toy, while to an adult it could be considered a decorative object. This difference in perception leads to the logical conclusion that all knowledge is relative and unique to the individual.
This view therefore concerns itself with language. Because there is no one objective truth, the only means by which we can understand another person’s knowledge is through their own recollections: through speech. Any attempt to objectively evaluate life, the universe and everything is thus impossible; instead, one must rely upon exploring participants’ personal, intrinsic meanings and ideas. We are not carbon copies of one another, and so looking upon each other as mirror images does not yield correct results; “we have made ourselves what we are” (Sartre, 2007, p. 49).
Grounded theory is one method that supports this approach. Originally created in the field of sociology by Glaser and Strauss (1967), it rests on the argument that theories should emerge from the qualitative data itself rather than being imposed upon it. Grounded theory requires the gathering of qualitative data (be it from unstructured interviews, qualitative observation or individual case studies) and codifies it by assigning open-ended codes that are refined later in the process. Codes arise from the data; they are created not from pre-existing ideas but from what the participant thinks. Codes eventually give way to categories, a step beyond simple labelling that produces overall interpretations of the data.
Under the full version of grounded theory, the researchers then return to the data collection phase and, through two techniques known as purposive and theoretical sampling, seek out certain kinds of data from certain people and check that the categories and codes created really do fit reality. Negative case analysis is also applied: cases that fit the categories poorly are actively sought so that the categories can be refined and amended.
Categories and codes undergo constant comparative analysis throughout: other researchers observe the coding and encourage the individual doing it to re-evaluate their categories. This is done in order to find new links and to reassure the project that it is not focusing too heavily upon one area.
This is all completed to the point of saturation: the ideal point at which no new categories emerge and any further modification of the data is negligible. Even once this point has been reached, however, the research question is not set in stone; if new directions are identified, or if the original idea is noted to be too broad or too limiting, grounded theory allows for the reformulation of the question as long as the emergent theory fits the new idea.
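The grounded-theory cycle described above (open codes, constant comparison, categories, saturation) can be loosely sketched in code. This is purely a toy illustration: real grounded theory is an interpretive human process, not an algorithm, and the excerpts, keyword rules, codes and categories below are all invented for the example.

```python
# Toy sketch of grounded-theory-style coding. In practice a researcher,
# not a keyword rule, decides which codes an excerpt earns; this only
# shows the shape of the process: codes accumulate until new data stops
# producing new ones (saturation).

excerpts = [
    "I felt nobody listened to me at work",
    "My manager never asks how I am",
    "Talking to my sister always helps",
    "Friends who listen make it bearable",
    "My sister is the only one who listens",
]

# Stand-in for open coding: invented keyword -> code rules.
keyword_codes = {
    "listen": "feeling unheard",
    "manager": "workplace neglect",
    "sister": "family support",
    "friends": "peer support",
}

def open_code(text):
    """Return the set of (invented) codes triggered by an excerpt."""
    return {code for kw, code in keyword_codes.items() if kw in text.lower()}

# Constant comparison: track whether each new excerpt yields new codes.
seen_codes = set()
for excerpt in excerpts:
    new = open_code(excerpt) - seen_codes
    print(f"{excerpt!r} -> new codes: {sorted(new) or 'none (saturating)'}")
    seen_codes |= new

# Codes give way to broader categories (again, invented here).
categories = {
    "isolation": {"feeling unheard", "workplace neglect"},
    "support networks": {"family support", "peer support"},
}
```

By the final excerpt no new codes appear, which is the (much simplified) analogue of saturation; in the full method one would now sample theoretically for negative cases rather than simply stop.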
Phenomenological Analysis is another research method grounded in qualitative perceptions. Concerned with how the individual perceives the world, phenomenology is interested in mental processes and tries to discern what is real in the minds of the participants, disregarding what we already know about the phenomenon (Smith, 2003). Unsurprisingly, then, this approach mainly concerns itself with the use of semi-structured interviews (Smith, 1995), though diaries are sometimes used (Smith & Osborn, 2003).
The Interpretative Phenomenological Analysis (IPA) school of thought (Smith, 1995) involves the combination of the researcher’s and the participant’s interpretations of the process. Analysis of the data gathered involves reading the transcriptions and producing unfocused notes that arise from the researcher’s ideas about the data. These are then channelled into conceptual themes, and key themes are identified that form clusters. The clusters created, though, must highlight links found in the data. Clusters then, finally, form a summary narrative which is used to produce the report. Unlike grounded theory, phenomenological analysis allows themes that appear weak to be set aside.
One method of qualitative study that focuses more on patterns of pronunciation, word choice and sentence structure is Discourse Analysis. Because of this, it works best on naturally occurring speech gathered without the knowledge of the participant. It consists of two main strands: Foucauldian Discourse Analysis and Discursive Psychology (Willig, 2001). The Foucauldian strand is interested in labelling pre-existing social understandings and exploring social norms, which it believes act as building blocks for our understanding of the world. Discursive Psychology, on the other hand, is interested more in interpersonal communication: the way in which language is used to describe psychological actions and how expressions are constructed within discourse. Both strands are concerned with the consequences of word choice, and are equally interested in what is not said and how that could have affected the outcome of a conversation. Discourse Analysis assumes that life is made up of interactions and that talking is an action that changes those interactions: it is a means by which we can understand the world we live in and broaden our understanding.
In Discourse Analysis, the transcript of the interaction under study undergoes a process of interrogation: the emphasis is not on finding what the script is saying, but on what the participants are doing. Researchers using this method are less interested in the experience itself than in the way the experience is constructed in conversation. An important criterion for performing Discourse Analysis, then, is being able to fully understand the philosophy put forward by this technique.
The final technique mentioned in this paper, and one also used in the qualitative research world, is known as Theory-Led, or Thematic, Analysis. This approach differs somewhat from the aforementioned ones in that it uses qualitative data to test a hypothesis. Unlike the others, which create and build up a theory as they work through a transcript, it checks whether or not the data gathered is consistent with a hypothesis created from a consideration of, hopefully, a wealth of previous research. This idea is almost akin to the quantitative approach. Thematic Analysis is also generally less prescriptive than the previously mentioned methodologies.
As described by Braun and Clarke (2006), Thematic Analysis, much like grounded theory, involves the systematic generation of codes across the data before reworking the codes into themes. The themes are then reviewed to make sure they work in relation to the data set before being made into a ‘thematic map’. The map entails linking the themes together and refining them to generate clearer definitions before a report can be produced.
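Braun and Clarke’s phases can be given a similarly rough sketch. Again, nothing here is a real analysis: the extracts, codes, candidate themes and the ‘two supporting extracts’ threshold are all invented for illustration, and in genuine Thematic Analysis reviewing themes is a matter of researcher judgement, not counting.

```python
# Toy sketch of the thematic-analysis phases: codes across the data set,
# codes reworked into candidate themes, themes reviewed against the data,
# then a crude 'thematic map' linking themes that co-occur.

# Phase 2 stand-in: extract IDs tagged with (invented) codes.
coded_extracts = {
    "E1": ["loss of routine", "anxiety"],
    "E2": ["anxiety", "sleep problems"],
    "E3": ["loss of routine", "new hobbies"],
    "E4": ["anxiety"],
}

# Phase 3: rework codes into candidate themes (invented groupings).
candidate_themes = {
    "disrupted daily life": {"loss of routine", "sleep problems"},
    "emotional strain": {"anxiety"},
    "adaptation": {"new hobbies"},
}

def supporting_extracts(theme_codes):
    """Extracts containing at least one of the theme's codes."""
    return [eid for eid, codes in coded_extracts.items()
            if theme_codes & set(codes)]

# Phase 4: review against the data set -- here, crudely, a theme survives
# only if at least two extracts support it.
reviewed = {name: codes for name, codes in candidate_themes.items()
            if len(supporting_extracts(codes)) >= 2}

# Phase 5 sketch: link themes that share supporting extracts.
themes = list(reviewed)
thematic_map = []
for i, a in enumerate(themes):
    for b in themes[i + 1:]:
        shared = (set(supporting_extracts(reviewed[a]))
                  & set(supporting_extracts(reviewed[b])))
        if shared:
            thematic_map.append((a, b, sorted(shared)))
```

In this invented data the weakly supported ‘adaptation’ theme is dropped at review, and the map records that the two surviving themes co-occur, which is the kind of link the written-up thematic map would refine and define.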
The qualitative paradigm, all in all, rejects the ideas of the quantitative approach for several reasons. Firstly, the objectivity advocated by quantitative methodologies is seen by constructivists as mythical. The very attempt to stay distant from the experiment binds the researcher into a certain role within the research process, creating a social relationship that the participant observes and reacts to, much like that between a boss and an employee: because the participant perceives this relationship, any resulting behaviour can be claimed to be down to it. To overcome this, qualitative research takes researcher interaction into account and, in the overall product, factors it into the issue being studied (Flick, 1988).
Furthering this critique, the controls so strictly imposed on each quantitative experiment only permit the analysis of one specific component of a person, which is superficial at best: these experimental procedures restrict the participant’s normal behaviour to a certain set of characteristics that apply only to the experiment at hand, while the researcher is afterwards free to make sweeping statements about human nature in the surrounding area.
Finally, one must consider the very nature of reliability and validity: how can one define exactly what is reliable? There is no single objective way of stating how reliable an experiment is or is not, and when comparing research of qualitative and quantitative origins, should they really be judged in the same manner?
What may be important here is to separate qualitative research from quantitative research: just as they differ in techniques, their definitions of reliability and validity differ, too. In Guba and Lincoln’s (1994) chapter in the Handbook of Qualitative Research, a clear distinction is drawn between the two schools’ views on the matter: reliability is put forward as the essence of how dependable a given piece of research is within quantitative work, and as more of a trail that leads to a conclusion in qualitative work. Validity refers to the accuracy of measurement within the quantitative school, but is seen as a way of judging the integrity of a qualitative study.
In conclusion, while the criticism that quantitative research is obsessed with replicability holds a semblance of truth, one cannot help but wonder why the two schools are being judged in the same light. Quantitative and qualitative research ask fundamentally different questions, use fundamentally different methods and rest on fundamentally different philosophies: one cannot imagine grounded-theory neuropsychology, just as one cannot imagine quantitative techniques properly asking individuals what impact a certain event has had on them and what things truly mean to them. Lumping the two approaches into the same category is misguided: it would be much better to allow the individual paradigms to develop, with researchers focusing on and publishing their own thoroughly thought-out data, than to focus overtly on criticising one another’s epistemological beliefs. The two approaches differ far too radically for the latter to even be considered appropriate. Let them exist: laissez-faire.