It is common practice in complex systems circles to use the term ‘reductionist’ and its variants as a pejorative for models one doesn’t like. However, the term is misleading because it is not the reductive aspect of those models that is disliked, but rather their lack of an integrative and generative approach that distinguishes one category from the other. Complexity research is every bit as dependent on reduction as other approaches, perhaps more so, and thus using the term ‘reductionist’ as a contrast to complexity research is disingenuous. What these complexity scientists actually mean is that those models are non-generative (deductive) and/or non-integrative (disjoint), and labeling them ‘reductionist’ is bad marketing at best and a sign of deep misunderstanding at worst. Complexity scientists (and enthusiasts) keep using that word, but it doesn’t mean what they think it means.

This point would be merely a matter of semantics if the complexity community clearly understood and stated what it is about those models that is being rejected. They could say, “what WE mean by ‘reductionist’ is …” and then it would just be confusing nomenclature brought about through unfortunate historical accident. Hardly the only case of such a thing. But it usually isn’t explicitly stated what it is about those models that is unsatisfactory, and categorizing the disliked models as ‘reductionist’ implies that it is the reduction that offends. Using the term as a slogan in this way projects the dissatisfaction onto the wrong target.

First of all, there are (at least) two types of reduction at play: mereological reduction and methodological reduction. The first is a relation of wholes to their parts: reducing the body to its organs, those organs to cells, societies to people and artifacts, towels to their threads, liquids to their molecules, etc. Partly this is constitution (what parts make up the whole) and partly it is behavior (how behaviors of the parts translate into phenomena of the whole). In contrast, methodological reduction refers to the abstraction of an object or phenomenon into a simpler, more manageable formal model of it. A model should be as simple as possible, but no simpler. So identifying what methodology to use and what to include in the formal model is another kind of reduction of the original system. These two are often taken together, but not necessarily.

A simple example of pure methodological reduction is the formulation of Kepler’s Laws of planetary motion. The laws fairly accurately give the speed of a planet along different segments of its orbit as well as the relative orbital periods of planets at different distances. This can be done with geometry alone, omitting any and all details about the sizes and masses of the planets and the strength of gravity. Clearly gravity and the masses determine what those orbits will be, and Newtonian mechanics can be used to demonstrate why Kepler’s laws hold, but Kepler’s model itself excludes those details as extraneous. The motions are reduced (in this sense) to simple geometric relationships…and it works. It’s not explanatory, but explanation isn’t the only thing we use models for.
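To make the geometric character of that reduction concrete, here is a minimal sketch in Python of Kepler’s third law: relative orbital periods fall out of orbital distances alone, with no masses and no gravitational constant appearing anywhere in the “model.” The distances are standard reference values in astronomical units.

```python
# Kepler's third law as pure methodological reduction: T^2 is proportional to
# a^3, so with a measured in astronomical units, T in Earth years is a**1.5.
# Distances below are standard reference values; no masses, no G.
semi_major_axes_au = {"Mercury": 0.387, "Venus": 0.723, "Earth": 1.0, "Mars": 1.524}

for planet, a in semi_major_axes_au.items():
    period_years = a ** 1.5  # geometry and arithmetic only
    print(f"{planet}: a = {a} AU -> T ≈ {period_years:.2f} years")
```

Mercury comes out at roughly 0.24 years (about 88 days) and Mars at roughly 1.88 years, all without any reference to what the planets are made of or what pulls on them.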

All modeling requires methodological reduction, because if you include every detail of the original then you gain nothing from building the model. Leaving out the extraneous details is the point of modeling, and different modeling purposes will prescribe different exclusions. So one may object to a particular choice of exclusions, but no scientist can criticize another merely for employing abstraction in model building. It doesn’t make any sense to be “anti-reductionist” in this sense because it would amount to being anti-science.

Sometimes both types of reduction are in play. Let’s take the example of “reductionist medicine,” which purportedly claims that the best way to analyze the body is to break it down into its organs and analyze each of those components. So a medical researcher examines the heart and learns all about its inputs and outputs, its functions and malfunctions, its structure and physiology. Repeat this step for all the organs. Now that we have a model for each organ in the body, we have a model of the body and know everything there is to know about it. And this is bad because there are important interactions among those components. The inputs and outputs of each organ form feedback loops, so understanding how the body will react to a stimulus requires a holistic approach that models the whole body. The reductionist scientist has failed to model the body by merely reducing it to its individual components, and no matter how well you understand those components you can’t understand the body without the interactions. What’s wrong with this argument? It’s a straw man: there are no medical researchers who think they understand the body just because they understand how each organ works in isolation. They study only one organ because even that is complicated enough to demand a life’s work to master, not because they think they can understand everything relevant to hearts by studying only hearts.

In order to understand the whole body one must combine all those organ models into a meta-model that connects them and combines them with the other tissues and fluids of the body, the environmental context of the body, etc. We can call such a model ‘integrative’ because it integrates all the components into a complex whole. That’s what the complexity scientists are claiming is important: you can’t understand how the organs operate in concert to construct (generate) the body without actually doing the integrative step. Okay, I think everybody agrees with that, even the purported reductionist scientists. But, and here’s the main point, you absolutely cannot build that constructivist model without having first reduced the whole into those interacting parts.
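As a cartoon of what that integrative step looks like, here is a toy sketch in Python. Nothing here is real physiology; every function and number is a made-up placeholder. The point is only that two component models, each perfectly well understood on its own, produce a whole-system behavior only once their inputs and outputs are wired into a feedback loop.

```python
# Two "organ" models, each reduced to a simple rule. Studied in isolation,
# each is fully understood; the whole-body behavior only appears when the
# integrative meta-model couples them into a feedback loop. All names and
# numbers are hypothetical placeholders, not physiology.

def heart_output(blood_pressure):
    # Component model A: output falls as pressure rises (toy rule).
    return max(0.0, 5.0 - 0.04 * blood_pressure)

def vascular_pressure(cardiac_output):
    # Component model B: pressure rises with cardiac output (toy rule).
    return 20.0 * cardiac_output

# The integrative step: iterate the coupled loop until it settles.
pressure = 100.0
for _ in range(50):
    output = heart_output(pressure)       # A responds to the state of B
    pressure = vascular_pressure(output)  # B responds to the state of A

print(f"coupled steady state: output ≈ {output:.2f}, pressure ≈ {pressure:.1f}")
```

Neither component model, examined by itself, tells you where that coupled steady state sits; only the reduction into interacting parts plus the integration does.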

And that’s why the ‘anti-reductionist’ label is wrong for the constructivist approach: generative modeling goes in the opposite direction of mereological reduction, but it is not counter to it. The two are wholly dependent on each other. As a matter of practical fact, the integrative approach has only recently become possible through computers, and in many problem spaces our understanding of the parts is still insufficient to generate a useful holistic model. We understand how power lines and transformers and relays and all the other parts work individually, yet we still have difficulty predicting and controlling large-scale power grid problems…but not for lack of trying. However, this failure to succeed at integration is not what complexity scientists are typically criticizing when they accuse something of being ‘reductionist’. It is rather the failure to try.

An example of this would be a consumer decision model that examines an individual’s choice of which products to buy. Given a person in isolation with certain choices to make, the model may be somewhat accurate in capturing that individual’s choice procedure and outcome. Economists have long relied on such simple models of human behavior to generate predictions and recommendations, but they rarely perform well in practice. The reason is clear: people rarely make choices in isolation. It’s not just that the integrative step hasn’t been taken yet; it’s that the decision model itself lacks the mechanisms of social interaction. There is no way to aggregate those individual choice models such that they could be useful for identifying fad products, because they are essentially non-integrative. These are the kinds of models that get labeled ‘reductionist’ because they have reduced socially embedded choices to individual decisions without acknowledging or including the social mechanisms.
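A toy contrast makes the distinction visible. In the sketch below (Python; every parameter is invented purely for illustration), the first decision rule ignores everyone else, so the aggregate buying fraction just sits near its baseline; the second rule includes a social term, and a fad takes off. Only the second individual-level model can play a role in a generative account of fads.

```python
# Toy contrast between an individual choice model with no social mechanism
# and one whose decision rule feeds on the previous period's adoption.
# All parameters are hypothetical.
import random

random.seed(1)
N, STEPS = 1000, 30
BASE_APPEAL = 0.05     # intrinsic per-period probability of buying
SOCIAL_WEIGHT = 0.9    # how strongly last period's buyers sway this decision

def isolated_choice(_last_fraction):
    # "Reductionist" individual model: ignores everyone else's behavior.
    return random.random() < BASE_APPEAL

def social_choice(last_fraction):
    # Same individual, but the decision rule includes a social mechanism.
    return random.random() < BASE_APPEAL + SOCIAL_WEIGHT * last_fraction

def buying_fraction_over_time(choice_rule):
    fraction, history = 0.0, []
    for _ in range(STEPS):
        fraction = sum(choice_rule(fraction) for _ in range(N)) / N
        history.append(fraction)
    return history

for name, rule in [("isolated", isolated_choice), ("social", social_choice)]:
    history = buying_fraction_over_time(rule)
    print(f"{name}: start ≈ {history[0]:.2f}, end ≈ {history[-1]:.2f}")
```

The isolated rule stays near 5% of the population buying in every period no matter how long you run it; the social rule climbs toward a much higher level because each period’s buyers change the next period’s decisions.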

Clearly such an approach results in a bad general model of decision making, but the problem with it isn’t that socially embedded decision making was reduced to individual-level decisions. The general approach of reducing the whole to its parts is fine, good, and necessary for later understanding how individual choices do aggregate into wide-scale social phenomena. The thing that’s bad, according to the complexity scientist, is the particular choice of individual-level model; specifically, one that fails to capture the interaction effects and is therefore prevented from playing the appropriate role in a constructivist model. But that’s not a feature of reduction; that’s a feature of performing a reduction that is ill-suited to the complexity scientists’ purposes.

Another form of social model is a population model that doesn’t represent people as individuals but rather models the dynamics of the whole society (often broken down into categories, though not necessarily into parts). This is a form of methodological reduction that also prevents an integrative approach. By modeling population-wide choices with a system of equations that captures aggregate behavior, the model may treat the population as a continuous decision space. Again, the problem isn’t that the behavior has been abstracted (aka reduced) to a system of equations; it’s the particular choice of equations that precludes the kinds of investigations complexity scientists are interested in. There are plenty of excellent complexity models that treat people like gas molecules in a dimensionless fluid. The problem isn’t with the approach but with how it was executed in this particular case.
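For illustration, here is a minimal sketch (parameters invented, not drawn from any study) of that equation-based style of reduction: a single logistic adoption equation in which the population is a continuous quantity and no individual decision mechanism exists anywhere to aggregate.

```python
# Aggregate, equation-based reduction: the adopting fraction A obeys
# dA/dt = r * A * (1 - A). The population is a continuous quantity; there
# are no individuals, hence nothing to integrate in the generative sense.
# Parameter values are illustrative only.

GROWTH_RATE = 0.4   # r: per-period growth rate of the adopting fraction
DT = 0.1            # Euler integration step
adopted = 0.01      # initial adopting fraction

for _ in range(1000):
    adopted += GROWTH_RATE * adopted * (1.0 - adopted) * DT

print(f"final adopting fraction ≈ {adopted:.3f}")
```

This can fit aggregate adoption curves perfectly well; what it cannot do, by construction, is tell you how the curve is generated from interacting individual choices.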

So whether it’s part-whole relationships or formal abstraction, reduction is an essential element of all research in complexity. However, if you want a constructivist model then you need both types of reduction done in a particular way. You need to identify the relevant parts of the whole and use a formal model of those parts that includes the capacity for interaction. The two main problems that complexity scientists have with these other models are (1) you solve or deduce the solution rather than generate it, and (2) the macro-phenomena aren’t being explained in terms of the aggregated micro-phenomena. Complexity isn’t anti-reductionist, but it typically is generative and integrative. Being clear and precise about what dissatisfies complexity scientists is important for effecting the proper methodological change, and the problem isn’t reduction.