A famous story in decision theory is that of Buridan's ass, which stands between two identical bales of hay and, because it has no reason to choose one over the other, starves to death. Other stories, such as Death in Damascus and the Newcomb Problem, are more complicated (see below), but they also end in a deliberative state in which no unique outcome is deemed appropriate. And then there are all the mixed strategy equilibria in game theory: arrangements in which the other player has randomized over her strategies in such a way as to make you indifferent among all of your own. In all these cases, and others lacking a unique decision recommendation, there isn't a single thing to do; you can simply pick one, or select randomly, or whatever...no outcome is better than the others. Here I want to briefly consider how various indifference cases split once we consider the possibility that one could create clones and pursue multiple actions simultaneously. Depending on whether such clones would be indifferent to changing places with other clones (and other such considerations), we can categorize the states as different kinds of indifference.
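
To make the mixed-strategy sort of indifference concrete, here's a minimal sketch in Python (matching pennies is my example, not one drawn from the stories above): when the other player mixes 50/50 over her strategies, your expected utility is the same whichever pure strategy you pick.

```python
# Matching pennies: you win 1 if the coins match, lose 1 if they don't.
# If the opponent plays Heads with probability 1/2, your two pure
# strategies have identical expected utility, so you are indifferent.

p_heads = 0.5  # opponent's mixing probability

eu_heads = p_heads * 1 + (1 - p_heads) * (-1)   # you play Heads
eu_tails = p_heads * (-1) + (1 - p_heads) * 1   # you play Tails

print(eu_heads, eu_tails)  # 0.0 0.0 -- indifferent between the two
```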

There are lots of questions one can start asking once the clone (or multiple-self) possibility is introduced. One of them is whether the decision situation encourages multiple choices, even in non-indifference cases. So say one has a choice between A and B and that U(A) > U(B). Normally that would be enough. But there are some problems where (case 1) no matter how many clones there were, they would all pick A, and others (case 2) where B becomes preferred once somebody is already doing A. This difference tracks the difference between public goods (case 1) and excludable goods (case 2), but I'm thinking more in terms of complementary outcomes, where the value of going for B (like bringing hotdog buns to the picnic) is enhanced if somebody else is doing A (like bringing the hotdogs). So there are already splits between cases where the choices are mutually exclusive (have the cake vs. eat the cake) and there is no room for clones; cases where each choice can only be taken by one individual, so a clone would be forced into a less-preferred competing option; and cases where the options are non-competing, which splits into the two cases above. Assuredly there are more splits to be found, but that's a start and it gets us to interesting options. Under different splits one can ask if one of the clones would wish to trade places with another clone, and this can change as the clones move through the decision tree and gain more information. One can also ask about regret, how decisions converge through different paths, and several other things.
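
To keep these splits straight, here's a toy sketch with made-up utility numbers for the picnic example; nothing here is a canonical model, just an illustration of how complementarity makes a second clone prefer B once A is already covered.

```python
# Hypothetical utilities for a two-clone picnic problem.  A = bring hotdogs,
# B = bring buns.  The numbers are illustrative only.

def value(actions):
    """Total value of the set of actions taken by the clones."""
    hotdogs = "A" in actions
    buns = "B" in actions
    if hotdogs and buns:
        return 10   # complements: both together beat either alone
    if hotdogs:
        return 6
    if buns:
        return 2
    return 0

# Case 1 flavour: a lone chooser prefers A (6 > 2).
# Case 2 flavour: once one clone is doing A, a second clone does better
# by switching to B (10 total) than by duplicating A (still 6).
print(value({"A"}), value({"B"}), value({"A", "B"}))  # 6 2 10
```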

In indifference cases where U(A) = U(B) by stipulation, it is sometimes the case that the options are mutually exclusive, i.e. A → ¬B and B → ¬A by logical or metaphysical necessity. But sometimes not; and when they are not, it might be that (case 1) U(A&B) > U(A) = U(B), (case 2) U(A&B) < U(A) = U(B), or (case 3) U(A&B) = U(A) = U(B). What I mean here is that if you could clone yourself and do both A and B, that might be better, worse, or the same as doing just one or the other. That's four categories of indifference right there. I'm not saying that the problem needs to be changed so that the joint action is actually possible, just that considering what the utilities of these hypothetical states would be to a multiplied self separates problems into useful categories in an intuitive and easy way. The usefulness of the categories will be considered later.
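
Here's a minimal sketch of that classification; the utilities are supplied by hand and the function is my own invention, nothing standard.

```python
def classify_indifference(u_a, u_b, u_ab=None, mutually_exclusive=False):
    """Classify an indifference case (U(A) == U(B)) by the hypothetical
    value of a cloned self doing both A and B."""
    assert u_a == u_b, "only handles genuine indifference cases"
    if mutually_exclusive or u_ab is None:
        return "mutually exclusive: no room for clones"
    if u_ab > u_a:
        return "case 1: doing both is better"
    if u_ab < u_a:
        return "case 2: doing both is worse"
    return "case 3: doing both is a matter of indifference too"

print(classify_indifference(5, 5, u_ab=8))               # case 1
print(classify_indifference(5, 5, u_ab=3))               # case 2
print(classify_indifference(5, 5, u_ab=5))               # case 3
print(classify_indifference(5, 5, mutually_exclusive=True))
```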

Now let's see if the clone approach sheds any additional light on some of the famous decision problems of philosophy: the Newcomb Problem and Death in Damascus. The setup of the Newcomb Problem is like this: a player has a choice between taking box A, box B, or both boxes. A super-predictor has guessed the player's choice in advance; she has put $1,000 in box A for sure, and has put $1,000,000 in box B if she predicts you'll pick only box B and $0 in box B if she predicts you'll pick both boxes. So the problem is what to choose when you're actually making the choice. Picking both boxes is the rational choice in the sense that it's more money, and at the actual moment of choosing there is no causal mechanism by which the predictor can alter the amount in box B. But people who have picked both boxes have historically found box B to be empty, and people who have picked just box B have found $1,000,000 in it. So the problem stands amid much discussion, confusion, and publication.
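
For reference, here is the payoff structure the story implies, written as a small sketch; the dollar figures are the ones above, and the predictor is assumed to have filled the boxes before the choice is made.

```python
# Newcomb payoffs: what the chooser walks away with, as a function of
# the choice actually made and the prediction the boxes were filled under.
BOX_A = 1_000                     # always in box A
BOX_B_IF_ONE_BOXED = 1_000_000    # in box B only if "B only" was predicted

def payoff(choice, prediction):
    box_b = BOX_B_IF_ONE_BOXED if prediction == "B only" else 0
    if choice == "A only":
        return BOX_A
    if choice == "B only":
        return box_b
    if choice == "both":
        return BOX_A + box_b
    raise ValueError(choice)

for prediction in ("B only", "both"):
    for choice in ("A only", "B only", "both"):
        print(f"predicted {prediction}, chose {choice}: "
              f"${payoff(choice, prediction):,}")
```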

Now consider the clone scenario. Imagine that the player can clone himself and each version of himself can pick an option. Since the choice set contains three choices, three clones suffice: clone A chooses just box A, clone B chooses just box B, and clone C chooses both boxes. The problem is set up abstractly enough that we can make sense of each of these scenarios happening in identical worlds, though not simultaneously. Clone A receives his $1,000 with no problem (or interest). But what would clones B and C find in box B? It's not possible in this story that clone B finds $1,000,000 while clone C finds $0, because by stipulation the worlds are causally identical except for the clones' actions. Both clones find the same amount, but what amount would that be? Note that this transforms the problem from a decision problem into a question of how the world could be set up. If the predictor predicts you are a clone B type, then both B and C types find money in box B. If the predictor predicts you are a clone C type, then box B is empty for both clones B and C. But the problem doesn't specify enough to know what the predictor would predict. So my conclusion here (and elsewhere too) is that the Newcomb Problem is underspecified, and hence the decision faced by the chooser is made under uncertainty (and hence there isn't a "thing to do" from a purely decision-theoretic point of view).
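
The underspecification point can be displayed the same way: fix a single prediction (since all the clones inhabit causally identical worlds) and see what each clone finds. The two possible worlds give two different answers, and nothing in the story tells us which one we're in; this little sketch just enumerates them.

```python
# Two causally identical worlds, one per possible prediction.  All three
# clones live in the same world, so box B has the same contents for each.
BOX_A, BIG = 1_000, 1_000_000
clones = {"clone A": {"A"}, "clone B": {"B"}, "clone C": {"A", "B"}}

for prediction in ("one-boxer", "two-boxer"):
    box_b = BIG if prediction == "one-boxer" else 0
    print(f"world where the predictor expected a {prediction}:")
    for name, boxes in clones.items():
        total = (BOX_A if "A" in boxes else 0) + (box_b if "B" in boxes else 0)
        print(f"  {name} finds ${total:,}")
```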

Now what about the Death in Damascus case? The story is that the Grim Reaper (aka Death) is going to meet the player in either Aleppo or Damascus tomorrow, according to what Death's appointment book says. That appointment book was written up weeks in advance on the basis of excellent predictions. The player can only choose one of the two cities, and whichever city he chooses, he has reason to believe that Death is waiting for him there. The best causal decision theory says that, once all the information is taken into account, there is a reflective equilibrium in which the agent ends up not with a single choice but in a state that assigns a probability to each of the cities being the better course of action, a sort of mixed strategy.
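
As a rough illustration of that mixed state, here is a toy version in which I assume (purely for the numbers) that the two cities are symmetric and that Death's book simply mirrors the agent's own mixture; the equilibrium mixture is then the one that makes the agent indifferent between the cities, namely 50/50.

```python
# A toy, symmetric version: utility 1 if Death is not in your city, 0 if he is.
# If Death's book mirrors the agent's own mixture p = P(go to Aleppo), then
# the agent survives Aleppo with probability (1 - p) and survives Damascus
# with probability p.  Indifference requires 1 - p = p, i.e. p = 1/2.
p = 0.5
eu_aleppo = 1 - p    # chance Death is actually in Damascus
eu_damascus = p      # chance Death is actually in Aleppo
print(eu_aleppo, eu_damascus)  # 0.5 0.5 -- the self-ratifying mixture
```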

Send in the clones. In this case clone A goes to Aleppo and clone B goes to Damascus. One of them dies and the other lives. The way this is supposed to help the decision-making is that the chooser should act in whatever way the surviving clone acts. But is there any way to know, from the setup of the scenario, which clone will die and which will live? Only one city or the other is written in the book, and that entry is based on Death's earlier prediction about where you'd be. It seems quite clear that the only way to know where Death will be is to find out which clone dies, which makes it an empirical question. One can't know that before choosing. The chooser again faces a problem of uncertainty, with no grounds to assign a probability to Death's being in one city or the other, or to make a real decision between the two.
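
Put as a sketch, the two possible book entries just determine which clone survives, and nothing in the story tells us which entry was written:

```python
# The appointment book was fixed in advance; the clones merely reveal it.
for book_entry in ("Aleppo", "Damascus"):
    survivor = "clone B (Damascus)" if book_entry == "Aleppo" else "clone A (Aleppo)"
    print(f"if the book says {book_entry}, the survivor is {survivor}")
```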

In conclusion, the clone thought experiment can be used to gain some purchase on certain decision problems. In some standard decision problems, considering clones classifies the problems into types of choice sets. This classification has lots of potential benefits for problem-solving techniques (e.g. automata rules) and for identifying analogous problems in different spaces. Cases of indecision arising from indifference are not a serious problem for decision theory; it's just that something outside decision theory needs to settle which act is performed. Something similar must be said for problems under uncertainty. Given the clone treatment, both of the famous decision problems presented above seem to end up as decisions under uncertainty. Obviously more discussion of other options is appropriate, but that seems to me to be the result. If accepted, my conclusion implies that there shouldn't be a decision-theoretic answer to these famous problems.