What is the massive modularity hypothesis?

Theoretical schema of massive modularity (Lundie, 2019).

If you have ever read my previous posts on cognitive science, then you are probably acquainted with Jerry Alan Fodor and his contributions to the rise of the field after radical behaviorism declined in popularity, due to Chomsky’s strawman and to some arguments by Lashley that have nothing to do with radical behaviorism at all.

Today we will talk a bit about philosopher of cognitive science Jerry Fodor and his notion of modularity, as well as the way this theory of modularity was taken up by John Tooby and Leda Cosmides, founders of the evolutionary psychology tradition in the 90s.

Fodor and the modularity of mind

A module, according to Bermúdez (2014), is a lower-level cognitive process that works quickly to provide rapid solutions to highly specific problems. Modular systems are generally held to have most, if not all, of the following characteristics.

  • Domain-specificity. They are highly specialized mechanisms that carry out a very specific job with a fixed field of application.
  • Information encapsulation. In performing this job modular systems are unaffected by what is going on elsewhere in the mind.
  • Mandatory application. They respond automatically to stimuli of the appropriate kind. They are not under any executive control.
  • Fast. They transform input into output quickly enough to be used in the on-line control of action.
  • Fixed neural architecture. It is often possible to identify determinate regions of the brain associated with particular types of modular processing.
  • Specific breakdown patterns. Modular processing can fail in highly determinate ways.

Again, none of these features are necessary; modularity ought to admit of degrees. These characteristics are not defining (Frankenhuis & Ploeger, 2007). Also, in his critique of Tooby and Cosmides (we will talk about them below), Fodor usually focused on two features, domain-specificity and information encapsulation, which seem to be the most important for him.

Although Fodor spoke of a modular mind, his picture of the mind was one of general-purpose programs, called central processes, with modular systems only in the periphery. The purpose of these peripheral modules is to process sensory input, while higher-order cognition, such as reasoning, is executed by central processes.

Computationalism

According to the computational theory of mind, the mind is a computer of sorts (Samuels, 1998). Samuels explains that “the brain takes sensorily derived information from the environment as input, performs complex transformations on that information, and produces either data structures (representations) or behavior as output.”

Note (skip if you want): If you have seen my previous posts on radical behaviorism, you will know that behaviorists think of both data structures and behavioral outputs as processes and, in general, as behaviors. There are no mental states, only mental behaviors. This is sort of like a process philosophy, while in computationalism, and in cognitive science in general, we see the mental as states with given properties that can change from time to time. This reminds me of Northoff’s (2018) distinction between process philosophy and property philosophy of mind.

Finally, according to computationalism, the brain is made of lots of distinct computational devices or modules. This differs from early views about the mind as a general-purpose computer.
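To make this concrete, here is a toy sketch of my own of the computationalist picture: distinct input-output devices rather than one general-purpose program. The module names and thresholds are invented for illustration and do not come from any of the cited authors.

```python
# A toy sketch of the computationalist picture: distinct input-output devices
# rather than one general-purpose program. Module names and thresholds are
# invented for illustration.

def edge_detector(retinal_input):
    """A hypothetical peripheral module: maps raw input to a representation."""
    return {"edges": [x for x in retinal_input if x > 0.5]}

def face_recognizer(representation):
    """Another hypothetical module, consuming the first module's output."""
    return len(representation["edges"]) >= 3  # crude stand-in for "face present"

# Information flows from sensory input through specialized transformations:
signal = [0.1, 0.9, 0.7, 0.3, 0.8]
print(face_recognizer(edge_detector(signal)))  # prints True
```

Samuels’ description maps directly onto this shape: sensorily derived information goes in, complex transformations are performed on it, and a data structure (here, the dictionary) or a behavior-guiding output comes out.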

The Santa Barbara School of evolutionary psychology

Bolhuis et al. (2011) speak of Tooby and Cosmides as the pioneers of the Santa Barbara School of evolutionary psychology. The main tenet of the Santa Barbara School is that the mind is massively modular. The mind is understood as an information-processing device that can be described in computational terms (computationalism), consisting of a large number of special-purpose systems (modules). These have been shaped by natural selection to perform specific functions or to solve information-processing problems that were important in the environment in which our hominid ancestors lived: the environment of evolutionary adaptedness (EEA).

The massive modularity hypothesis (MMH) states, basically, that the mind consists of a large collection of functionally specialized mechanisms, or evolved modules, which are neurocognitive mechanisms specialized for solving particular adaptive problems that recurrently faced our hominid ancestors over evolutionary time.

Modularity. Fodor vs. Tooby and Cosmides

Although Tooby and Cosmides speak of modules in their MMH, their notion of a module differs from Fodor’s. Let’s look at the differences:

  • Fodor: They are information-processing mechanisms. These mechanisms lie in the periphery of the mind and only process environmental stimuli. They are domain-specific and their information is always encapsulated. One module does not interfere with another. Their main features are information encapsulation and domain-specificity.
  • Tooby and Cosmides: They are information-processing mechanisms that have been shaped by natural selection over time. They are, then, biological adaptations. These mechanisms are richly structured and functionally organized, and natural selection is the only known evolutionary process capable of generating complex, functional designs in organisms. In this case, modules can be highly interconnected and distributed across the brain. Their main feature is functional specialization, not isolation.

Here we see that there is a great difference between Fodor’s idea of modules and Tooby and Cosmides’. For Fodor, there is only a small set of modules in the periphery of the mind, while for the Santa Barbara School of evolutionary psychology the mind is a set of hundreds, if not thousands, of modules (Evans & Zarate, 2012).
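The contrast can be caricatured in code. This is only a sketch of mine, with hypothetical module names, cues, and thresholds: a Fodorian module sees nothing but its own input, while a Tooby-and-Cosmides-style module is functionally specialized yet combines cues delivered by other systems.

```python
# A caricature of the two notions of "module" (hypothetical names and cues).

# A Fodorian module is informationally encapsulated: it consults nothing
# beyond its own proprietary input.
def fodorian_color_module(retinal_signal):
    return "red" if retinal_signal > 0.6 else "not red"

# A Tooby-and-Cosmides-style module is functionally specialized but may draw
# on cues computed by other specialized systems (no strict encapsulation).
def kin_detection_module(face_similarity, years_of_coresidence):
    return face_similarity > 0.8 or years_of_coresidence > 10

print(fodorian_color_module(0.7))      # prints red
print(kin_detection_module(0.5, 12))   # prints True
```

The second function is still specialized for one adaptive problem (kin detection), but its inputs arrive from elsewhere in the system, which is exactly the interconnection the Santa Barbara School allows.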

The main arguments for the MMH

Carruthers’ biological argument

According to Carruthers (2006), complex functional systems are built up out of assemblies of sub-components. He states that:

“Each of these components is constructed out of further sub-components and has a distinctive role to play in the functioning of the whole, and many of them can be damaged or lost while leaving the functionality of the remainder at least partially intact”

Carruthers (2006).

From this idea about biological systems (such as genes, cells, cellular assemblies, organs, organisms, and multi-organism units), he argues that, by extension, we should expect it to be true of cognition. Later in his book, he argues that:

  1. Biological systems are designed systems, constructed incrementally.
  2. Such systems, when complex, need to have massively modular organization.
  3. The human mind is a biological system and is complex.
  4. So, the human mind will be massively modular in its organization.

The engineering argument

Frankenhuis & Ploeger (2007) group the main arguments that were used by evolutionary psychologists to develop the massive modularity hypothesis, which lies at the foundation of evolutionary psychology.

The first one is the engineering argument, according to which engineering considerations provide reasonable grounds for expecting domain-specific specialization in the human mind because functionally specialized mechanisms can be fine-tuned for fast and effective processing, while domain-general mechanisms cannot.

“If there is an adaptive problem that can be solved either by a domain general or a domain specific mechanism, which design is the better engineering solution and, therefore, the design more likely to have been naturally selected for?”

Cosmides & Tooby (1994, p. 89), quoted in Frankenhuis & Ploeger (2007).

Basically, selection pressures can be expected to produce specialization in cognition.

The error argument

According to this argument, learning processes require some element in the cognitive machinery that tells us whether our actions are a success or failure (Frankenhuis & Ploeger, 2007).

Tooby and Cosmides state that there is no domain-independent criterion of success or failure that is correlated with fitness [in general]. The reason for this is that “what counts as fit behavior differs markedly from domain to domain.” Hence, we should expect different domains with different criteria of success and failure to guide the organism’s behavioral outputs and production of data structures.
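A minimal sketch of the point, with invented domains and numbers (my own illustration, not from Tooby and Cosmides): the very same outcome counts as a failure under one domain’s criterion of success and as a success under another’s, so no single domain-general feedback signal could guide learning in both.

```python
# Toy illustration: success criteria differ by domain, so one domain-general
# error signal cannot serve them all. Domains and numbers are made up.

def foraging_criterion(outcome):
    # In the foraging domain, success = net caloric gain.
    return outcome["calories_gained"] > outcome["calories_spent"]

def social_exchange_criterion(outcome):
    # In the social-exchange domain, success = the partner reciprocated.
    return outcome["partner_reciprocated"]

# The same episode is a failure by one criterion and a success by the other:
outcome = {"calories_gained": 50, "calories_spent": 200,
           "partner_reciprocated": True}
print(foraging_criterion(outcome), social_exchange_criterion(outcome))
# prints False True
```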

The poverty of stimulus argument (WE ARE NOT TALKING ABOUT CHOMSKY HERE)

According to evolutionary psychology’s version of the PoS argument (first developed by N. Chomsky), it is impossible in principle to learn some abilities or knowledge during a single lifetime (such as that incest avoidance is adaptive), independently of environment, because to achieve this would require observing relationships that emerge only over generations. Hence, the mind could not function as adaptively as it does without being richly and intricately structured (Frankenhuis & Ploeger, 2007).

The combinatorial explosion argument

I think this is, alongside Carruthers’ biological argument, my favorite one.

With each degree of freedom added to a system, or with each new dimension of potential variation added, or with each new successive choice in a chain of decisions, the total number of alternative possibilities faced by a computational system grows with devastating rapidity.

Frankenhuis & Ploeger argue that domain-specific architectures have the capacity to deal with explosions of possibilities for behavioral outputs and the production of data structures. This is because they contain cognitive structures that can organize information, such as:

  • Domain-specific databases
  • Domain-specific decision rules, and
  • Rules that constrain the inputs into a system.

In contrast, a domain-general system lacks such structure. Hence, it has to assess all possible alternatives it can define. These alternatives increase exponentially as the problem complexity increases and so we have the frame problem, or combinatorial explosion problem.
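The arithmetic behind the explosion is easy to show. In this toy sketch of mine (the numbers are arbitrary), a system that weighs every option at every decision faces options**depth candidate sequences, while input-constraining rules that admit only a few relevant options per step shrink the search space dramatically.

```python
from itertools import product

# A domain-general system that considers every option at every step faces
# options**depth candidate action sequences:
def unconstrained_count(options, depth):
    return sum(1 for _ in product(range(options), repeat=depth))

# A domain-specific system whose input rules admit only a few relevant
# options per step searches a far smaller space:
def constrained_count(relevant_options, depth):
    return relevant_options ** depth

print(unconstrained_count(10, 5))  # prints 100000
print(constrained_count(2, 5))     # prints 32
```

Both counts are exponential in depth, which is the point: the only way to keep the exponent’s base small is to have structure that rules most alternatives out before they are ever considered.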

Conclusions

Since biological systems, even the smallest ones such as prokaryotic cells, are built up out of assemblies of sub-components, we should expect, by extension, the mind, which is a biological system, to be made of assemblies of sub-components as well. This leads us to the idea of massive modularity, which is also supported by the engineering argument, the error argument, the poverty of stimulus argument, and the combinatorial explosion argument.

A massively modular mind consists of a large collection of functionally specialized mechanisms, or evolved modules. These are neurocognitive mechanisms specialized for solving definite adaptive problems that recurrently faced our hominid ancestors over evolutionary time.

Some of these modules are the social exchange module, the kin detection module, the face recognition module, the mating strategies module, the emotion detection module, the spatial abilities module, the language module, and the number module.

This hypothesis is the foundation of the Santa Barbara School of evolutionary psychology, pioneered by Tooby and Cosmides.

So far, I do not know if I agree with the Santa Barbara School as a whole because I don’t know enough about it yet. Nonetheless, from a property philosophy perspective, I do believe that the mind is massively modular. Carruthers’ biological argument and the combinatorial explosion argument are the most compelling for me.

What do you think about this? Let me know in the comments section. And, once again, thanks for reading me!

References

Bermúdez, J. L. (2014). Cognitive science. An introduction to the science of the mind. 2nd edition

Bolhuis, J. J., Brown, G. R., Richardson, R. C., & Laland, K. N. (2011). Darwin in mind: New opportunities for evolutionary psychology

Carruthers, P. (2006). The architecture of the mind

Evans, D., & Zarate, O. (2012). Evolutionary psychology. A graphic guide

Frankenhuis, W. E., & Ploeger, A. (2007). Evolutionary psychology versus Fodor. Arguments for and against the massive modularity hypothesis

Lundie, M. (2019). Systemic functional adaptedness and domain-general cognition: broadening the scope of evolutionary psychology

Northoff, G. (2018). The spontaneous brain. From the mind-body to the world-brain problem

Samuels, R. (1998). Evolutionary psychology and the massive modularity hypothesis

5 responses to “What is the massive modularity hypothesis?”

  1. Interesting survey of the literature! I usually come at these questions from a neuroscience perspective.

    I think the mind is modular, but a lot of caution seems warranted that we understand what the modules do. Often what we can say about a particular brain region is that it’s critical for some function, a conclusion we reach because that function is disrupted when the region is injured. But it often doesn’t mean that the region, in and of itself, is sufficient for that function.

    The other thing is that cortical regions are all heavily interconnected. For instance, we know the posterior regions of the brain are crucial for vision, so these areas are referred to as the visual cortex. But these regions receive feedback from other regions of the brain that subtly alters their processing. For example, feedback from motor control regions allows sensory regions to discount the effects of motor action. And different affective states of the brain literally result in different sensory perceptions.

    So, there are modules, roughly speaking, but their function is far from clean and often heavily dependent on other processing from other modules. Put another way, thinking of modules is useful, but holding too tightly to it could lead down problematic lines of reasoning. Nothing in biology is ever simple. 🙂


    1. Yeah. In this case you are right to point out that modules are interconnected, and there is nothing wrong with that. In fact, to say that there is a complete encapsulation of information in modules is to agree with Fodor’s version of modularity, which is rejected to a certain extent by evolutionary psychologists, as stated above.
      For the massive modularity hypothesis, there is no problem with one module being interconnected with others. They are functionally specialized, but they are not isolated so they can receive help from other modules at least in principle. From this we see that there is nothing wrong with the idea of certain modules being specialized to develop certain functions but in an interdependent way.
      This is sort of like the structure of a company in which a department cannot fulfill all its duties without the aid and the stability of the rest of the company’s departments. Although the department of human resources is functionally specialized, it is not isolated from the rest of the company’s departments.
      By the way, thanks for sharing your thoughts!


  2. In short I’m with Fodor here and against Tooby and Cosmides. I consider the brain as a massively modular computer, with mind modules at its periphery, which is to say contracted from a singular mind to the brain which creates it. But when theorists get things “wrong”, it’s generally difficult for me to explain why from their point of view itself. I guess it’s that they’re thinking of mind as a standard off the shelf computer which would thus at least need to be modular to do what it does (as they explain with “biology”, “engineering”, “error”, “PoS”, and “explosion”). Conversely I consider the mind to be quite different from anything like a standard computer. I’ll briefly provide my own general model of how things work here, and thus my conception of how the Santa Barbara school gets this one wrong.

    Just as our robots are instructed by means of computational devices, I believe that brains instruct the organisms which have them. Though these biological computers needed to be massively modular, this should have only helped them with “set piece function”. By this I mean something like the game of chess (which even our idiot computers can play quite well). It’s function in the form of “If [some input]… then [some output]”. My point is that regardless of how many modules brain based computers had, in this manner they shouldn’t have been able to effectively deal with circumstances which were more “open”.

    At some point modules for “qualia” must have evolved however, and thus a functionless subjective dynamic. Then through the serendipity of evolution this experiencer should have been given a slight bit of control over certain decisions, and did well enough to eventually evolve into the full subjective experience that we know of existence. Theoretically the brain uses this subjective dynamic to deal with function that it can’t effectively program for, like how to respond to this post intelligently.

    Note that I consider subjectivity to function as a second form of computer motivated by means of a valence input (like pain), to be informed by means of sense information (like vision), and also memory. Then “thought” is the interpretation of these three forms of input and construction of scenarios about how to promote valence based interests. Conscious muscle function after a decision however should depend upon cooperation from appropriate brain modules.

    I can go into any degree of psychological explanation for this model, though the short answer for this post itself is that the brain creates and services a singular consciousness dynamic by means of countless non-conscious modules associated with brain function. This model is quite consistent with Skinner’s ideas, though might best be placed under the title of “psychological egoism”. Modern theorists who support this position (and there aren’t many even given the prominence of Skinner) should tend to be punished by means of standard moral notions.


    1. Hello, Eric! Thanks for sharing your thoughts with me. Here are a few things about both the SBS of evolutionary psychology and Fodor I would like to highlight:

      If you consider the brain as a massively modular computer, I do not see why you are with Fodor. Fodor believed that, at best, there are only a few modules in the periphery. Tooby and Cosmides argue that the mind is massively modular, that there are plenty of them. To say that the mind is massively modular, but that there is a central processing unit, I guess would be a way to synthesize them, actually.
      If you disagree with the idea of mind as a computer then, in this case, you are against both since the Santa Barbara School of evolutionary psychology and J. A. Fodor share this view of mind as a computer.

      I guess that your point about the robots is tied to early attacks on the Turing test by John Searle. Yet, here you are attacking the notion of an AI being possibly natural because it only works by using ‘if, then’ clauses, and then you extend it by saying that to believe that the mind is a computer is to believe that it works by ‘if, then’ clauses.

      Yet, I think your ideas miss the point, because the Santa Barbara School does not believe that modules are completely fixed. As some of them have written:
      “Every kind of behaviour results from the way in which our minds interact with our environment, and the mind results from the interaction of the environment with our genes. Different environments will lead the mind to develop differently and change the way in which the mind causes behaviour. […] Natural selection has programmed human development to be contingent on various environmental triggers.”
      “There does not exist a single optimal cognitive system which is insensitive to the temporally and spatially varying details of local ecologies. Our neurocognitive architecture is endowed with plastic systems specifically to deal with such details.”
      In which cognitive system refers to a module.
      In short, as Frankenhuis and Ploeger assert, variation in local environments does not oppose the evolution of adaptive specialization. Modules are not simple ‘if, then’ algorithms for very fixed problems, unlike Fodor’s notion of a module.

      On the other hand, I do not see how there can be a module for something as functionless as qualia. In fact, so far I still dislike discourse about qualia because I believe it to be too intricate and, long story short, we do not require qualia to do everything we do. Our biological systems could be as functionally amazed by beautiful landscapes as we are even if they lacked qualia, and natural selection would not have developed ‘qualia’ just for the sake of some subjective dynamic or whatever. Qualia, at best, are just the side effect of some other adaptations.

      Anyway, here I am not trying to dismantle your computational model or anything, just pointing out a few things about the Santa Barbara School and how they are actually compatible with some of the things you stated. Further debate would require going deeper into theoretical debates about computationalism and qualia, and I am no expert in those areas.
      But, in general, I see that your model consists of an open, conscious, and central module that is supported by a set of closed, preconscious (or perhaps unconscious), and peripheral modules.

      And, one last question, how is your model consistent with Skinner’s ideas?


      1. Fred,
        Apparently I didn’t get a good enough conception of the SBS versus Fodor from your post, and so guessed wrong on some things about them. All I truly grasp is my own model. Let me now try to help you grasp it a bit better and we’ll see if it sticks. I consider my model extremely simple, though that’s surely only because I’ve lived with it for decades.

        Perhaps the key idea to grasp about this “dual computers” perspective is that the brain exists essentially as a conventional “if…then…” computer where nothing is experienced, though the subjective dynamic by which you and I experience existence, works as a fundamentally different kind of computer that’s created by the brain. (The brain may be considered as something which is classically conditioned, whereas the mind may be considered as something which is operantly conditioned, or my tie in with Skinner. It’s good that both the SBS and Fodor also consider the mind as a computer, though regarding the details of its function, I believe my model to be original.)

        So back to evolution, surely at first there was only a non-conscious computer, or brain. I’m saying that regardless of how many modules it developed, it still couldn’t function well enough under more “open” circumstances, which is to say in places where it couldn’t effectively use “set piece” heuristics. Thus the brain evolved a subjective dynamic, which is to say a medium through which existence is experienced, commonly known as “consciousness”. (Never mind what medium facilitates this dynamic, though if you get the basic model down reasonably well then that would be another topic to broach at some point.) From the description of “modular” that you provided I don’t consider the subjective dynamic to be modular at all, but rather to be extremely supported by means of a massively modular brain. When I subjectively decide to press a key for example, I don’t consider there to be a subjective module that gets this done, but rather a brain module, or the first computer rather than the second. Of course the SBS may define “modular” differently such that it could be subjective.

        Earlier I mentioned three forms of input to the subjective dynamic (valence, senses, and memory), one form of processor for it (thought), and one form of output for it (decisions). That might be a bit advanced for now however. Anyway this is a psychological egoist model in the sense that we’re all presented as self interested products of our circumstances (though there’s a good deal to explain about that). Furthermore we’re classically conditioned given standard “if…then…” function of the brain computer, and we’re operantly conditioned by means of a strange purpose based computer that’s motivated to feel as good as it can each moment. This dynamic is created by means of the brain, but isn’t modular as far as I can tell (though I suppose we do tend to take subjective credit for things that the brain does).

        I had a moment and so got back to you quick for this one, though don’t worry about speed with me. I’m always here to discuss this stuff.

