May 17: Models for Philosophy
You can use an ABM to test hypotheses about how something works, like a thought experiment. But building, running, and analyzing a model is science; in fact, it proceeds much like an empirical science. So the relationship between a model and a philosophical point is similar to the relationship between a conceptual issue and the science of that issue. What role, then, does philosophy play in making and interpreting models? How can the results be philosophically relevant, important, or implicating? Can science ever decide something in philosophy?
The structure of systems at different scales is often referred to as a hierarchy, and traditional thinking treats it as exactly that. Molecules make up organelles, which make up cells, which make up tissues, which make up organs, which make up organ systems, which make up bodies, and beyond. I refer to any scale at which recognizable, coherent patterns of behavior are observable and describable as a level of organization. This is an epistemic notion because it concerns what we can capture in models, not the structure of reality. Hierarchies of scale are a mixed ontological and epistemic notion: the levels are still based on what people can discern, but there is an added ontological assumption that a higher level exhaustively includes the elements of a lower level. Now I will show why levels of organization, not hierarchies, are the domains and ranges of scientifically useful reduction and emergence relationships.
One of the oft-cited features of complex systems is their ability to adapt to environmental changes and shocks. This is often contrasted with engineered systems, which are typically specialized and optimized to work only in the limited conditions for which they were designed. The point being underlined in these discussions is that complex systems are self-organizing (and often self-perpetuating), and thus their behaviors are contingent on inputs in ways that purpose-built systems typically are not. What these discussions often leave out is the crucial fragility that many complex systems exhibit in the face of specific inputs and disruptions.
One of my main research thrusts is developing methodology for complex systems, and specifically defining new measures. New measures are necessary because existing statistical techniques were developed to report aggregate properties and trends and to recreate distributions with simple mathematical models. In complex systems we care about the relations among parts and the causal processes that generate behaviors at multiple levels, rather than smoothed-over aggregate outcomes. Statistical techniques are refined for problems of missing, confounded, or otherwise imperfect data. Agent-based models generate data without any of these flaws, but in such tremendous quantity that new refinements in data mining and pattern detection are required. Furthermore, this new clarity and proliferation of simulation data opens the way for measures that would have been useless on less clean data but are vital for understanding the complex systems we now study. One direction for new measures (and the one I'm currently most focused on) is the development of measures of dynamical properties, but people seem to have a hard time grasping what I mean by dynamical properties. I will attempt to elucidate the idea here.
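A toy illustration of the distinction (the series and the lag-1 autocorrelation measure below are my own hypothetical example, not a measure from any particular project): two data sets can be indistinguishable in their aggregate statistics yet have completely different dynamics, and only a measure that is sensitive to the order of observations can tell them apart.

```python
from statistics import mean, pstdev

def lag1_autocorr(xs):
    """Lag-1 autocorrelation: a simple *dynamical* measure, sensitive to
    the order of observations rather than just the bag of values."""
    m = mean(xs)
    num = sum((a - m) * (b - m) for a, b in zip(xs, xs[1:]))
    den = sum((x - m) ** 2 for x in xs)
    return num / den

ordered = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]    # a steady upward trend
scrambled = [5, 0, 7, 2, 9, 4, 1, 8, 3, 6]  # the same values, reordered

# Aggregate statistics cannot tell the two series apart...
assert mean(ordered) == mean(scrambled)
assert pstdev(ordered) == pstdev(scrambled)

# ...but the dynamical measure can.
print(lag1_autocorr(ordered))    # 0.7 (strongly persistent dynamics)
print(lag1_autocorr(scrambled))  # negative (the trend is destroyed)
```

The point generalizes: an aggregate statistic summarizes a distribution of values, while a dynamical property lives in the transitions between them.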
May 18: Evolution and the Is/Ought Gap
There are reasons to doubt that the so-called naturalistic fallacy is really a fallacy, but at this point I want to accept the major points of Hume's version of the open question argument and see how that actually helps us compare metaethical theories. The idea is that even if everybody agrees on the facts regarding our evolutionary history, we might still disagree about which features of that story convey or produce moral value. We can use this difference to distinguish moral theories that otherwise tell the same story. It also lays bare those aspects of an evolutionary history that might do the metaethical job, and thus reveals whether any such story can do the job.
A famous story in decision theory is Buridan's ass, which, standing between two identical bales of hay with no reason to choose one over the other, dies of starvation. Other stories such as Death in Damascus and the Newcomb Problem are more complicated (see below), but they also result in a deliberative state in which no unique outcome is deemed appropriate. And then there are all the mixed strategy equilibria in game theory: arrangements in which the other player has randomized over her strategies in such a way as to make you indifferent among all of your own. In these cases, and in others lacking a unique decision recommendation, there isn't a uniquely rational thing to do; you can simply pick one, or select randomly, or whatever...no outcome is better than the others. Here I want to briefly consider how various indifference cases split when we consider the possibility that one could create clones and pursue multiple actions simultaneously. Depending on whether such clones are indifferent to changing places with other clones (and other such considerations) we can categorize the states as different kinds of indifference.
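The mixed-equilibrium indifference point can be made concrete with matching pennies (a standard textbook game; the code is my own illustrative sketch). If the opponent shows heads with probability p, your expected payoff for heads is 2p - 1 and for tails is 1 - 2p, so only at p = 1/2 does every strategy of yours, pure or mixed, earn exactly the same.

```python
def expected_payoffs(p):
    """Matching pennies from the matcher's point of view: you win (+1)
    if the coins match, lose (-1) otherwise. The opponent shows heads
    with probability p."""
    heads = p * 1 + (1 - p) * (-1)   # you play heads: win when she does
    tails = p * (-1) + (1 - p) * 1   # you play tails: win when she doesn't
    return heads, tails

# Away from p = 1/2 there is a unique best reply...
h, t = expected_payoffs(0.8)
assert h > t          # the opponent leans heads, so match with heads

# ...but at p = 1/2 you are indifferent: every option earns 0 in
# expectation, and decision theory issues no unique recommendation.
h, t = expected_payoffs(0.5)
assert h == t == 0.0
```

This is exactly the deliberative dead zone in the passage above: at the equilibrium mixture there is nothing to choose between, which is what makes the cloning question interesting.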
This post represents a small philosophical excursion from the measures of robustness research program I am working on. That research program constructs Markov models of systems from data and analyzes them so find the system's tipping points and further uses those points (and other features) to measure robustness-related properties (including sustainability, resistance, recoverability, stability, and being static; as well as their counterparts: susceptibility, vulnerability, fragility, and collapsibility. While working on the formal definitions of those concepts I realized that these are all dispositional properties and dispositional properties are philosophically interesting and troublesome. That connection immediately made me wonder if my mathematical formalism might shed some new light on how to differential dispositional properties form categorical ones; some first thoughts on that are below.
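A minimal sketch of the first step of that program, with hypothetical state names and a deliberately crude tipping criterion (the real measures are more involved): estimate a transition matrix from an observed state sequence, then flag intermediate states from which the system is more likely to slide further toward collapse than to recover.

```python
from collections import Counter

def transition_matrix(seq, n_states):
    """Estimate Markov transition probabilities from a state sequence."""
    counts = Counter(zip(seq, seq[1:]))   # observed (from, to) pairs
    totals = Counter(seq[:-1])            # visits to each 'from' state
    return [[counts[(i, j)] / totals[i] if totals[i] else 0.0
             for j in range(n_states)] for i in range(n_states)]

# Toy system with made-up states: 0 = healthy, 1 = stressed, 2 = degraded.
seq = [0, 0, 1, 0, 1, 2, 2, 2, 1, 2]
P = transition_matrix(seq, 3)

def is_tipping_point(P, state, collapse=2, recover=0):
    """Crude criterion: from this state, moving toward collapse is more
    probable than moving back toward recovery."""
    return P[state][collapse] > P[state][recover]

print(is_tipping_point(P, 1))   # True: from 'stressed', 2/3 of observed
                                # transitions went to 'degraded'
```

Once the matrix is in hand, the dispositional vocabulary (fragile, recoverable, collapsible) can be read off as facts about which regions of the chain the system tends to flow into.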
Oct 02: As True as True Can Be
Statements of fact in everyday life and in science almost certainly fall into one of two categories: false or vague (lacking truth value). If a statement is supposed to represent a state of the world, and is true if and only if that state actually obtains in the actual existing world, then of course everything is going to be false or true only by coincidence, since we don't have access to the actual existing world. Such a requirement for truth, however, is completely useless. An alternative is that statements purport to describe models we have of the world. Models have an ontology: the things that exist in that model. Models have other features that tie those elements together, such as forces, laws, rules, glue, and imaginings (depending on the model). Sophisticated models, like Newtonian physics, evolutionary biology, and our implicitly held folk models of social and physical behavior, create a vast interconnected web of relations and dependencies: a well-formulated fictional world. The most we can ever expect 'true' to mean is true in a fictional world.
Based on previous work proving that there is no such thing as causation and, in fact, that nothing in any scientific model corresponds or refers to anything in the "real world," we are left to consider scientific models as fictions: largely coherent and consistent collections of purported entities and relationships. Theorems of a scientific model are then true in that fictional world only, and are frequently incommensurable with theorems of other theories of the same domain. Kendall Walton's idea of props as tools to focus collective imaginative activities applies quite accurately to equations, graphs, diagrams, demonstrations, and various other representations of parts or implications of the theory. The theory as a whole cannot all be imagined occurrently (kept in RAM, in the computer analogy of the mind), but exposure to parts "sets the stage" for the consideration of further parts, with the underspecified portions likely filled in with components from our folk models or other nearby scientific models.
Aug 31: Heuristicism: Actions from Rules
Behaviorism is dead in decision theory; the whole framework makes conceptual sense only if it is described directly in terms of beliefs and desires (as convincingly argued by James Joyce). However, decision theory itself is dead as an appropriate model for all but a few applications. Attempts to get such a theory to predict human behavior have largely failed, despite the best efforts of researchers in multiple disciplines to add ad hoc constraints and behavior-fitting mechanisms to the theory's formulation. Instead of attempting to accommodate the various forces on human action as levers in the mental decision apparatus, we are better off modeling behavior as resulting from the interaction of changing sets of adaptive rules with an uncertain and dynamic environment. The idea is simply to take the benefits that more general agent-based modeling has over traditional game theory and extend them analogously to traditional decision theory. The traditional version of decision theory may excel as a normative theory of decision making under known risk in stylized situations, but for more interesting and realistic problems we need a heuristic approach.
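As a toy contrast (a made-up example of mine, not a model from the literature): in an environment that shifts midway, a simple adaptive rule like win-stay/lose-shift recovers almost everything, while an agent that committed once and for all to the initially optimal act loses the whole second half.

```python
def payoff(arm, t, switch=50):
    """Two-option environment: option 0 pays until the switch, then option 1."""
    good = 0 if t < switch else 1
    return 1 if arm == good else 0

def fixed_agent(horizon=100):
    """Commits forever to the act that was optimal at the start."""
    return sum(payoff(0, t) for t in range(horizon))

def wsls_agent(horizon=100):
    """Win-stay/lose-shift: a minimal adaptive heuristic rule."""
    arm, total = 0, 0
    for t in range(horizon):
        r = payoff(arm, t)
        total += r
        if r == 0:            # lose -> shift to the other option
            arm = 1 - arm
    return total

print(fixed_agent(), wsls_agent())   # 50 99: one bad step, then readapted
```

The heuristic agent carries no probabilities and maximizes nothing; it simply responds to feedback, which is exactly why it survives the change the optimizer was never told about.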