The Brain As A Computer: Modularity Theories Of Cognition

Written by Ash

Behaviourism, the dominant school of psychology before the 1950s, held that only measurable data should be studied. This reduces the human mind to little more than a black box: an item that receives information from the senses and produces output in the form of bodily actions and behaviour. With the advent and rise of the computer, however, this image began to change. ENIAC, the first electronic computer, produced answers to computations even though the process of finding those solutions was not visible, and it served as the first metaphor for a cognitive scientist's view of the brain.
“I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.” – Alan Turing, 1950
No longer, then, was the brain a sealed box that could not be observed, but an object worth studying and worth understanding. Perhaps the most notable pioneer of this field was Noam Chomsky.

Chomsky held that the behaviourist approach could not explain facets of human behaviour such as language, because language is unique: if all animals rely on stimulus-response mechanisms for learning, and behaviour alone explains actions, then why does no other animal have language? Surely, on the behaviourist account, they should. Human speech is also qualitatively different from other forms of communication, and nowhere is this more notable than in grammar.
The magnum opus of Chomsky's work is the theory of Universal Grammar (UG). Language structure can be divided into two categories: deep structure and surface structure. The first is illustrated by how nouns and verbs typically sit close together across languages to form noun phrases, while the second concerns how individual languages differ. UG emerges from the deep structure of language: the foreknowledge about grammar with which all humans are innately born.
There is support for this idea in many forms, notably the poverty of the stimulus. Plato's Problem, as Chomsky terms it, refers to the Socratic dialogue Meno, in which Socrates questions a servant and shows that the servant grasps the principles of Pythagoras' Theorem despite being uneducated and never formally taught. Plato suggests that this is because people have innate knowledge. In cognitive psychology, then, the problem becomes discerning how a child can acquire language without being formally and explicitly taught. Plato's Problem is thus a mismatch between the poverty of the input and the grammar that appears in the output. Much as Meno concludes, the resolution is that people have innate knowledge: Universal Grammar.
At roughly the same time as this debate occurred, evolutionary anthropologists were attempting to teach grammar to other animals. In one specific case, researchers taught sign language to a chimpanzee, punningly named Nim Chimpsky, and noted some successes. However, there was no consistent use of grammar. This suggests that while animals have the capacity to learn to produce language-like signs, there is no capacity, or at least no evidence of a capacity, to produce grammar. This supports the idea that behavioural learning alone is not enough to create language, and so supports UG.
[Figure: phrases learned by Nim Chimpsky.]
Another supporting point comes from critical periods in language acquisition. After adolescence, language learning slows down significantly; people who reach adolescence without having learned language are rare, but all cases show the same outcome: language is never learned to the same degree as by those who acquired it as children. One such case involved a child known as Genie, left in confinement and strapped into a chair until thirteen years of age. She had very, very little exposure to language and never managed to develop the capacity to use grammar; the extent of her learning was phrases such as 'applesauce buy store'. Isabelle, on the other hand, escaped from a similar situation at the age of six, and by the age of eight she could speak fluently and, arguably, better than some other children of her age; 'do you go to Miss Mason's school at the university' and 'why does no paste come out if one upsets the jar' are two examples of phrases she produced. UG holds that language acquisition is all but guaranteed for children until the age of six, reduced until puberty, and extremely unlikely thereafter.
Language itself, though, comes in two forms: pidgins and creoles. The first is more a strategy than a language, arising when speakers of different languages encounter one another and adapt each tongue into a simplified structure and vocabulary. Pidgins form when there is a need to fulfil a task, such as trading or bartering. A pidgin is initially very restricted, has no native speakers, and is nobody's first language. If children learn and adapt a pidgin as a first language, it becomes creolised.
A creole, then, is a pidgin that has developed into a more complex form, gaining structure and vocabulary and becoming the native tongue of a group of people. Creoles are fully developed languages, in contrast to the haphazard words and gestures of a pidgin. Creoles are made possible through Chomsky's concept of a Language Acquisition Device (LAD), later popularised by Pinker. Simply put, this is the notion that children 'grow' their own form of grammar as a derivative of UG.
Developing the idea of creoles and pidgins further, we stumble across the notion of linguistic universals. These are similarities shared by all languages: design features that are absolute preconditions of something being a language. Some of these proposed universals are: major lexical categories (noun phrases and verb phrases), phrase structure rules, verb affixes that signal aspect and tense (including pluperfects), verb auxiliaries, wh-movement and numerals. If every language on earth shares these universals, that is a fairly good indicator that UG exists.
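Phrase structure rules, one of the universals above, can be made concrete with a toy grammar. The sketch below is an invented illustration, not any linguist's actual rule set: each rule rewrites a symbol (S, NP, VP) into smaller parts until only words remain.

```python
# A toy sketch of phrase structure rules: a sentence (S) decomposes into
# a noun phrase (NP) and a verb phrase (VP). The grammar and lexicon
# below are invented for illustration only.
import random

rules = {
    "S":   [["NP", "VP"]],
    "NP":  [["Det", "N"]],
    "VP":  [["V", "NP"], ["V"]],
    "Det": [["the"], ["a"]],
    "N":   [["child"], ["language"]],
    "V":   [["learns"], ["speaks"]],
}

def generate(symbol="S"):
    """Expand a symbol by recursively applying a randomly chosen rule."""
    if symbol not in rules:  # a terminal word, not a grammatical category
        return [symbol]
    expansion = random.choice(rules[symbol])
    return [word for part in expansion for word in generate(part)]

print(" ".join(generate()))  # e.g. 'the child learns a language'
```

Every sentence this grammar produces is grammatical by construction, which is the point of the universal: the rules, not memorised sentence lists, carry the structure.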
UG itself, though, is just one example of a module: a dedicated system of processes for a specific, innately given task. These constructs are formally named in Fodor's modularity thesis, which maintains that the mind is a clearly divisible set of processes covering perceptual systems (such as vision and audition) and cognitive systems. Every module has four properties: it is domain-specific, innate, fast and automatic, and informationally encapsulated. There are many different modules, the most controversial and discussion-provoking being those for language acquisition and processing.
Controversial or not, modules offer a good approach to language. We rapidly and automatically parse grammar, rapidly learn words (between 40,000 and 100,000 by the age of 20), and we even learn words better when we are not trying to learn them. Two proposed modules for processing language are propositional syntax (developed by Kintsch), in which brains parse grammar like a computer, and latent semantic analysis (proposed by Landauer & Dumais), in which brains catalogue words and texts and extrapolate information from them like a computer.
Kintsch's model argues that people represent language as propositions. The phrase 'John is tall' is represented in our minds as 'tall (John)'. 'I didn't do it' is 'not (do (me, it))'. Propositional thought explains rapid grammatical language processing in a way that is comparable to computer programming: it is algorithmic and automatic, and it ignores semantic content because of its domain specificity.
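The propositional notation above translates almost directly into code. This is a minimal sketch of the idea, not Kintsch's actual formalism: each proposition is a predicate with arguments, and propositions can nest, so negation wraps a whole proposition.

```python
# A minimal sketch of propositional representation in the style of
# 'tall (John)' and 'not (do (me, it))'. The helper names are my own,
# not Kintsch's notation.

def prop(predicate, *args):
    """Build a proposition as a (predicate, arguments...) tuple."""
    return (predicate, *args)

# 'John is tall'   ->  tall(John)
john_is_tall = prop("tall", "John")

# 'I didn't do it' ->  not(do(me, it)): negation nests a proposition
i_didnt_do_it = prop("not", prop("do", "me", "it"))

def render(p):
    """Pretty-print a (possibly nested) proposition."""
    if isinstance(p, tuple):
        head, *args = p
        return f"{head}({', '.join(render(a) for a in args)})"
    return str(p)

print(render(john_is_tall))   # tall(John)
print(render(i_didnt_do_it))  # not(do(me, it))
```

The algorithmic flavour is visible here: building and unpacking these structures needs no knowledge of what 'tall' or 'John' mean, mirroring the claim that the module ignores semantic content.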
Latent semantic analysis, on the other hand, was developed through programming. Large amounts of text are entered into a table: each column lists a document (or paragraph) and each row a word. Given N documents and a vocabulary size of M, a word-document co-occurrence matrix can be generated. Each word's position in semantic space is then extrapolated in order to estimate semantic similarity. Computers using latent semantic analysis learn vocabulary at a rate similar to schoolchildren, and so the model provides a rather elegant account of how words are learned. A very interesting point to note is that when marking papers, latent semantic analysis is about as accurate as human exam markers.
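The pipeline can be sketched end to end in a few lines. This is a toy illustration with an invented four-document corpus, not the corpora Landauer & Dumais used: count words per document, reduce the matrix with truncated SVD, and compare words by cosine similarity in the reduced space.

```python
# A toy sketch of latent semantic analysis: build an M x N word-document
# co-occurrence matrix, reduce it with truncated SVD, and compare words
# by cosine similarity in the latent semantic space.
import numpy as np

docs = [
    "the cat sat on the mat",
    "the dog sat on the log",
    "cats and dogs are pets",
    "stocks and bonds are assets",
]
vocab = sorted({w for d in docs for w in d.split()})
word_index = {w: i for i, w in enumerate(vocab)}

# Rows = words (M), columns = documents (N); cells count occurrences.
counts = np.zeros((len(vocab), len(docs)))
for j, doc in enumerate(docs):
    for w in doc.split():
        counts[word_index[w], j] += 1

# Truncated SVD keeps only the k strongest latent dimensions.
k = 2
U, S, Vt = np.linalg.svd(counts, full_matrices=False)
word_vectors = U[:, :k] * S[:k]  # each row: a word's place in semantic space

def similarity(w1, w2):
    """Cosine similarity between two words in the latent space."""
    a, b = word_vectors[word_index[w1]], word_vectors[word_index[w2]]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# 'cat' and 'dog' never co-occur in a document, yet the reduced space
# places them close together because their documents are similar.
print(similarity("cat", "dog"), similarity("cat", "stocks"))
```

The key trick is the dimensionality reduction: in the raw count matrix 'cat' and 'dog' are orthogonal, and only the latent dimensions recover the similarity a human reader would see.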
And that's pretty much modularity theories in a nutshell. Though the computer metaphor has been somewhat dropped from modern cognitive thought, these theories are still worthy of note simply because they are good at explaining behaviour. I'll be writing a follow-up piece on the refutations of modularity, social cognition, shortly, but I've got a few other areas to cover first. I hope you're all well.

About the author


Ash is a PhD student in psychology at Northumbria University whose research focuses on the general cognitive mechanisms of memory and attention. Most of the time he can be found writing about rubbish, or being rubbish at writing. Personal interests include philosophy, statistics and better understanding how we can convey our knowledge of science to others.
