Distributive Justice and Empirical Moral Psychology

First published Fri Dec 18, 2015; substantive revision Fri Sep 11, 2020

Whether and to what extent people are motivated by considerations of justice is a central topic in a number of fields including economics, psychology, and business. The implications of this topic extend broadly, from the psychology of negotiations, to the motives citizens have to pay taxes, to what considerations influence healthcare allocation decisions. Given all the possible topics which could be explored, this essay adopts the following parameters to help narrow the focus:

We will look at the moral psychology of individuals as opposed to firms, societies, or other collective entities.

Our focus will be on empirical results, rather than armchair considerations of what the psychology of people might be like or what it should be like. The SEP entry on the virtue of justice discusses the moral psychology of a just person (LeBar 2020). But whether people actually are motivated by justice or have the virtue of justice is an empirical question.

The kind of justice that will be our focus here is distributive justice, rather than retributive, international, transitional, or other kinds.

More specifically, we will examine the distributive motives and behavior of individuals by consulting the empirical literature on economic games. Other topics, including the relationship between empirical results and specific theories of distributive justice such as John Rawls’s or Robert Nozick’s, will have to be addressed on another occasion (Lamont and Favor 2014).

Why economic games? The psychologists Kun Zhao and Luke Smillie provide a nice answer:

Economic games have become widely adopted within the psychological sciences, where they are used to model complex social interactions and allow for rigorous empirical investigations. As behavioral paradigms, they are well-controlled, manipulable, and replicable…making them ideal for bridging the gap between theory and naturalistic data. In personality research, they provide behavioral paradigms that may complement and help validate self-report measures, and offer sharp operationalizations of somewhat slippery concepts. (2015: 277–278, see also Fetchenhauer and Huang 2004: 1018)

There are two general kinds of economic games. Social dilemmas typically juxtapose short-term self-interest with long-term group interests, and include the prisoner’s dilemma and the public goods game. Bargaining games typically involve two players distributing a specific payoff (usually money), and will be our focus here, as they are especially helpful for examining the moral psychology of justice. Examples include the ultimatum game and dictator game. We will also look at a novel twist on the dictator game by the psychologist Daniel Batson, which has fostered a large experimental literature on what he calls “moral hypocrisy”. Finally we will connect this discussion of economic games to the virtue of justice and to other personality traits such as agreeableness, honesty-humility, and justice sensitivity.

Before diving in, it is worth saying something very briefly about why empirical results might be of interest to philosophers as they theorize about distributive justice. One might think, after all, that philosophers tend to focus less on what people actually do and are like than on what they should do and be like. Nevertheless, in recent years philosophers in general have paid increasing attention to empirical research, and with respect to matters of distributive justice, a few important reasons for doing so are the following:

  1. Ethical egoism claims that a person’s central goal ought to be the promotion of his or her long-term self-interest. Most philosophers reject ethical egoism. But is it psychologically realistic to expect most people to act on motives other than self-interested ones, such as motives of fairness and justice? Empirical results, such as those reviewed in the first three sections of this article, would bear on this question.
  2. Relatedly, moral philosophers have developed rich conceptions of the virtue of justice and what a just person would do. But are these conceptions empirically adequate and psychologically realistic for human beings like us? Empirical results would bear on this question, and if the answer is no, then perhaps those conceptions are problematized (Doris 2002; Miller 2014).
  3. Philosophers might come up with normative criteria for when actions and institutions are just or not. But to actually apply those criteria to real world concerns, they need empirical data about how people and institutions are in fact behaving.
  4. Empirical data is also highly relevant to philosophers interested in developing strategies for improvement, so that just actions, just institutions, and just character are increasingly promoted.


1. Ultimatum Games and Fairness

Ultimatum games have the following setup. Call one person the “offerer” and the other the “responder”. The offerer is informed that she can allocate a certain amount of a good. Suppose the good is money, such as $10, since that is the usual good chosen by experimenters in the literature. Next the offerer is instructed to make some offer to the responder—in our example, the offer could range between $0 and $10, inclusive. So the offerer can give away all the money, none of it, or something in-between. The only other piece of information that the offerer knows, standardly, is that the responder’s reaction matters when he is presented with her offer. If the responder accepts the offer, then both parties keep whatever the offer says (unless it is $0, in which case only one side keeps any money). But if the responder rejects the offer, then neither side gets any money at all, and the game is done. Hence if the offer is to keep $8 and give $2, and the responder accepts this offer, then that is in fact the amount of money with which each side walks away. But if the responder rejects the offer, then each side walks away with $0.
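Since this payoff structure drives everything that follows, here is a minimal sketch of it in Python (the function name and framing are my own illustration, not anything drawn from the experimental literature):

```python
def ultimatum_payoffs(total, offer, accepted):
    """Return (offerer, responder) payoffs for one round of the ultimatum game.

    The offerer proposes to give `offer` out of `total`; if the responder
    rejects, both sides walk away with nothing.
    """
    if not 0 <= offer <= total:
        raise ValueError("offer must be between $0 and the total, inclusive")
    return (total - offer, offer) if accepted else (0, 0)

# The $8/$2 example from the text:
print(ultimatum_payoffs(10, 2, accepted=True))   # (8, 2)
print(ultimatum_payoffs(10, 2, accepted=False))  # (0, 0)
```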

A few observations are important here. First, it is not essential to the game that the offerer know anything about the responder—he or she could be a complete stranger. Second, and as we will see very importantly, it is not an essential feature of the game that the responder know how much money the offerer has to work with in the first place. So when the offer comes in at $2, the responder might have no idea whether that is all the money that the offerer has to allocate, or whether it is just a small amount out of $100, $10,000, or even higher. In many versions of the ultimatum game, the responder is told what the offerer has available. But the point to stress here is that this is an optional feature of the game. Finally, it is worth noting that the power dynamic in this game favors the offerer. The responder cannot make any counteroffers in the hope of getting a bigger offer. He only has the threat of veto power. But if he does veto the offer, then he comes away with nothing himself. That is not a great negotiating position to be in.

Given this last thought, a natural prediction to make about how ultimatum games will go is the following. Offerers will want to maximize their self-interest, and so maximize their take-home payment. They might think that, from a responder’s perspective, any money is better than no money. So we might predict that the offer itself will be as low as possible, say 1 cent or 5 cents in our example involving $10. For the responder to reject this offer would be irrational, since he would come away with nothing, whereas accepting the offer would make him better off than he would otherwise have been.

This is, indeed, what many economists in the early days of research on the ultimatum game predicted would happen, on the basis of standard game theoretic assumptions (Kahneman et al. 1986: S285–286; Pillutla and Murnighan 1995: 1409; Güth 1995: 329; Camerer and Thaler 1995: 210). In other words, given a concern to promote one’s own self-interest, the prediction is that minimal offers will be made in ultimatum games to maximize one’s take-home amount—which is the subgame perfect Nash equilibrium (Kahneman et al. 1986: S289; Forsythe et al. 1994: 348; Güth 1995: 331; Pillutla and Murnighan 1995: 1409, 2003: 248; Straub and Murnighan 1995: 345–346).
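As an illustration of the backward-induction reasoning behind this prediction, here is a sketch of my own, assuming money divisible into cents and purely payoff-maximizing players:

```python
def responder_accepts(offer_cents):
    # A purely self-interested responder prefers any positive amount to nothing.
    return offer_cents > 0

def equilibrium_offer(total_cents):
    # Backward induction: the offerer makes the smallest offer the
    # responder will still accept.
    return next(o for o in range(total_cents + 1) if responder_accepts(o))

print(equilibrium_offer(1000))  # 1 -- a single cent out of $10
```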

But something surprising happened. Beginning with Werner Güth’s famous paper in 1982 (Güth et al. 1982), the first empirical studies that were actually conducted to see what people would do in this situation found a rather different result. Robert Forsythe and his colleagues, for instance, found that not one offerer kept the entire amount of $10, and that offers were instead distributed around the 50/50 split amount with 75% offering at least an equal amount (1994: 362). As we noted, this behavior by offerers is not what standard game theory would have predicted. Similarly, in a well-known paper Daniel Kahneman and his colleagues found that out of $10, the mean amount offered was $4.76 with 81% making equal split offers (1986: S291, see also Güth 1995).

Surprising results were found for responders too. Kahneman also reports, for instance, that the mean of minimally acceptable (i.e., not rejected) offers was $2.59, with 58% demanding more than $1.50 (Kahneman et al. 1986: S291).

To explain these puzzling results, several researchers began to move beyond standard game theoretic assumptions and appeal to a justice motive. With offerers, this could be a desire to be fair in allocating goods, together with a belief that the fair allocation in the game is an equal split. With responders, it could be a desire to not be treated unfairly. This could lead to different explanations of rejection behavior, such as a desire to not participate in unfair deals or a desire to punish someone who is behaving unfairly (Kahneman et al. 1986: S290, see also Forsythe et al. 1994; Güth et al. 1982; Güth 1995; Camerer and Thaler 1995).

But this is not the end of the story about the research done to try to explain and predict ultimatum game behavior. In the 1990s, a wave of new studies began to cast doubt on these fairness explanations. Here are some of the interesting results from this research:

  1. Paul Straub and J. Keith Murnighan (1995) varied the amount of information given to offerers and responders. Some offerers were told that responders would know the amount that the offerer had to work with, while other offerers were told that responders would not have this piece of information at their disposal when presented with the offer. If the fairness hypothesis is correct, this change of information should not matter. The average offer should be roughly the same in both conditions. But it wasn’t. In the scenario where the starting amount is $10, the mean offer made to responders was $4.05 in the complete information condition, but $3.14 in the partial information condition. When the amount was $80, the means were $30.73 and $23.00 respectively (Straub and Murnighan 1995: 353).

    A similar disparity emerged when participants were responders, some of whom were told the amount that the offerer had to work with, and some of whom were kept in the dark about this. For the ignorant group, the mean lowest acceptable offer was $1.04, and 29 out of 45 participants accepted an offer of $0.01. For the fully informed $10 group, it was $1.92. For the fully informed $80 group, it was $17.43 (1995: 351–352). This result, though, need not be out of line with what the fairness models above predict. The troublesome results for fairness models arise here with respect to the offerers, not the responders.

  2. Madan Pillutla and J. Keith Murnighan (1995) also varied partial versus complete information for offerers, and again found a significant difference. When the starting amount to be divided was $10, the mean offer given partial information was $3.54, whereas for complete information it was $4.66 (1995: 1415). The two new wrinkles were (i) whether offers would come with a label “this is fair” or not, and (ii) whether an independent third party would evaluate offers and decide if they were fair or not. Pillutla and Murnighan reasoned that if offerers really were motivated predominantly by fairness, then these variations should not matter significantly. But they did. For instance, in the partial information condition, it was already noted that the mean offer was $3.54, but it dropped to $2.61 in the fair label variation (1995: 1415). Hence,

    [t]hey acted as if labeling an offer as fair would lead respondents to accept smaller offers—even when the latter had complete information. (1995: 1417)

    On the other hand, the mean offer increased all the way to $4.67 in the third-party label variation, with 72% making 50–50 offers (1995: 1415–1416). These are effects that are easier to explain given a self-interested motivational story, than they are on a fairness motivational story. As the researchers note, “It seems that offerers only made equal offers when it was worth their while to appear fair” (1995: 1416).

    For responders, Pillutla and Murnighan found that again small offers were rejected much more frequently in complete information versus partial information conditions. The rejection rate increased even more when the information was complete and the offer was labeled as unfair by a third party (1995: 1420).

  3. Pillutla and Murnighan (1996) and Terry Boles and his colleagues (2000) were among several researchers who introduced the variation of giving outside options to responders, i.e., amounts which they knew they would receive if they rejected an offer. For instance, a responder might reject an offer of $1 if he knows that he has an outside option of receiving $2 when rejecting an offer (see the sketch below). With this new twist, researchers can then introduce additional conditions in which offerers know or do not know whether there is an outside option for responders, or what the value of that option is, or what the range of the option could be, and so forth. Without getting into all the various permutations, one important result that Boles found was that offerers made lower offers when they knew the size of the outside option, which does not seem to be what fairness would predict (Boles et al. 2000: 247). Furthermore, Boles introduced a wrinkle whereby offerers could send a message with their offer, allowing them to be deceptive about the size of the allocation they had available to them in partial information conditions. The game was repeated with the same participants as offerers and responders for four rounds, and after each round, any deceit that offerers used was revealed to responders. It turned out that offerers were deceptive 13.6% of the time (2000: 247). Boles also discovered that when responders felt deceived, in the next round they were much more likely to reject a new offer, even at the expense of their self-interest. In other words, they wanted to punish the offender for deceiving them (Boles et al. 2000: 249–250).
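Here is a minimal sketch of how an outside option changes the responder’s acceptance logic (my own illustration; the actual studies layer further informational manipulations on top of this):

```python
def responder_accepts(offer, outside_option=0.0):
    # A payoff-maximizing responder takes whichever amount is larger;
    # with no outside option, any positive offer beats rejection.
    return offer > outside_option

print(responder_accepts(1.00))                       # True: $1 beats $0
print(responder_accepts(1.00, outside_option=2.00))  # False: rejecting pays $2
```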

Given these results, a diverse array of more sophisticated motivational accounts has been offered in the recent literature (for additional results and discussion, see also Roth 1995; Kagel et al. 1996; Pillutla and Murnighan 2003; Camerer 2003; Murnighan and Wang 2016). One thing they have in common is that they are egoistic accounts, appealing in some way to what would advance a person’s self-interest. In the case of offerers, here are some of the motivational proposals that have been made:

  1. Larger offers are made due to fear of rejection of smaller offers (Pillutla and Murnighan 1995: 1410–1411).
  2. Larger offers are made so long as one can enjoy appearing to be fair; otherwise smaller offers are made (Pillutla and Murnighan 1995: 1424; Camerer and Thaler 1995: 212).

In the case of responders, here is an example of a recent egoistic proposal:

Small offers are rejected because of wounded pride. Although responders may appeal to considerations of fairness, this only serves as post-hoc confabulation (Straub and Murnighan 1995: 360–361; Pillutla and Murnighan 2003: 253–254).

So we can see a trend here (Roth 1995; Pillutla and Murnighan 1995: 1424, 2003: 250). Initial hypotheses about participants in ultimatum games relied on relatively simple egoistic motives. Those hypotheses were allegedly disconfirmed by the evidence. New hypotheses were offered which appealed to the role of fairness motives. But then these hypotheses also were allegedly disconfirmed by additional evidence. So even newer hypotheses have now been offered that involve more elaborate egoistic motives. Future work will have to establish whether these proposals turn out to be any more plausible in the long run.

2. Dictator Games and Fairness

It turns out that this trend is not specific to ultimatum games—research on dictator games went through the same evolution. We said that ultimatum games are relatively simple two person games with an offerer and a responder. In the standard dictator setup, things are even simpler. Call the two people involved the “dictator” and the “recipient”. Using our $10 example, the dictator is told that she can give any amount between $0 and $10, inclusive, to another person. And whatever the dictator decides, that is the end of the story. So if the dictator wants to keep $8 and give $2 to the recipient, then that is what each person walks away with.
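In contrast with the ultimatum sketch earlier, a sketch of the dictator game simply drops the acceptance stage (again my own illustration):

```python
def dictator_payoffs(total, gift):
    """Return (dictator, recipient) payoffs: there is no rejection stage."""
    if not 0 <= gift <= total:
        raise ValueError("gift must be between $0 and the total, inclusive")
    return total - gift, gift

print(dictator_payoffs(10, 2))  # (8, 2) -- the dictator's word is final
```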

Beginning with the traditional game-theoretic framework, the prediction for what participants serving as dictators would do is straightforward—keep all the money (Forsythe et al. 1994: 348; Dana et al. 2006: 193). Since there is no threat of rejection by the recipient, and since all that matters is promoting one’s self-interest, there is every reason to keep all of the money. Or so one might think.

But once again, that is not what most participants actually did. In the same Forsythe study, while 21% gave nothing to the recipient, the remaining 79% gave something, with 21% giving an equal amount of the $10 (Forsythe et al. 1994: 362). Kahneman also ran a version of a dictator game where students in a psychology course could either divide $20 evenly with an anonymous fellow student ($10 each), or keep $18 for themselves and give $2 to the other student. Strikingly, 76% chose the even division (Kahneman et al. 1986: S291). In addition, 74% of participants were subsequently willing to pay $1 to punish an unfair dictator and in the process reward a fair one, even though this would reduce their own monetary payment in the process (1986: S291). Van Dijk and Vermunt (2000) found that information asymmetries did not play the role they do in ultimatum games. Specifically, no significant effect was observed based on whether dictators knew that recipients were aware of the total amount dictators had to allocate. As they conclude,

whereas the dictator game evokes a true concern for fairness, allocators in the ultimatum game only appear to behave fairly…. (2000: 19; for a review of results in dictator games, see Camerer 2003; Murnighan and Wang 2016)

Overall, according to one review of the literature, giving in dictator games amounts to roughly 15–20% of what a participant receives in the first place (Camerer 2003: 57–58), which is less than fair but more than completely self-interested. As a quick aside, it is worth noting the implicit assumption in much of this literature that a 50/50 split is the fair distribution, whereas keeping most of the money for oneself would be considered unfair. Philosophically this might be a contested matter, and there could also be some variability in terms of which cultures tend to accept this assumption more so than others.

If the trend we saw in the previous section holds here as well, we should expect alternative egoistic stories about motivation in dictator games to have emerged in response to more recent results challenging the fairness hypothesis. Here are two examples of such results:

a. In the baseline condition of John List’s study (2007), both people in the dictator game setup were given $5 to start with. Then the dictator received another $5 and was given the chance to allocate anywhere from $0 to all $5 to the other person. Three additional conditions had interesting twists. In the Take ($1) condition, the setup was the same including the chance to allocate money, but now instead there was an option to take $1 from the other person. The Take ($5) condition, as its name suggests, allowed up to $5 to be taken from the other person (or $5 allocated, or nothing taken or allocated). Finally, in the Earnings condition, everything was the same as Take ($5) except the dictator earned the $10 for performing a 30-minute task beforehand. Here were the results (List 2007: 487):

Condition     Giving a Positive Amount    Mean Offer
Baseline      71%                         $1.33
Take ($1)     35%                         $0.33
Take ($5)     10%                         −$2.48
Earnings      6%                          −$1.00

Strikingly, when given the chance to take money from a stranger, dictators allocated money to the stranger less frequently, and their average allocation was smaller. Moreover, if they felt they had earned the money, as opposed to just being given it by the experimenter, then almost none of the dictators were willing to part with their money, being more inclined to take rather than to give.

b. Jason Dana and his colleagues (2006) had dictators make their allocation decision using $10. They were told that recipients would be informed about what the dictator knew when making his choice. But before recipients actually were told anything, dictators were given another choice. They could keep $9 for themselves, $0 would go to the recipient, and crucially, the recipient would not learn anything about the dictator game at all. This exit option would thereby allow dictators to keep a large payoff while also knowing that they had not disappointed the recipient. And the dictator would not have to worry about appearing selfish by keeping most of the money (Dana et al. 2006: 196). Eleven out of 40 dictators (28%) ended up taking this exit option, with 2 of them initially planning to keep the $10 (2006: 197). The remaining 9 participants had planned on giving some money to the recipient, and now did not have to.

Twenty-eight percent, while an interesting result, is still not that large. The majority of participants both did not exit and gave some of the $10 to the recipient. However, in a second study, Dana did two things. First, he replicated the results of his first study, and this time 43% exited (2006: 199). But in a new condition, dictators were told at the very beginning that recipients would be oblivious to the dictator game—they would just receive a note that said, “Thank you for your time today. An additional payment of $__ is attached to this note” (2006: 198; Dana did not explain what would happen if the amount was $0). Thus dictators knew that they could make any offer they wanted to without worrying about how it would be viewed by the receiver. So they chose their allocation, and then were given the $9 exit option. And here is the interesting result. Now only 4% of participants took the exit option, and 54% chose to give $0 and to not exit (2006: 199; see also Dana et al. 2007 for additional related studies).

As with ultimatum games, a diverse array of more sophisticated egoistic motivational explanations has been proposed to account for the behavior of dictators. Here are examples of a few of them:

  • A desire to feel good by giving.
  • A desire to not appear selfish, or to appear to be fair.
  • A desire to give what others expect one to give (or not to give).

(for an overview, see Dana et al. 2006: 194–195, 200; Dana et al. 2007: 68–69, 78)

For what it’s worth, my view is that it is unlikely any one of the proposed motives in the ultimatum and dictator literatures is going to be sufficient by itself to explain the observed patterns of behavior. Nor, to be fair, are they usually intended by researchers to be sufficient either. Rather, motives concerned with good (monetary) outcomes for the self are no doubt part of the story. But so too are motives concerned with fairness. And so too are one or more additional egoistic motives, such as not wanting to appear selfish. A complex, multi-motive model is what is needed going forward (see also Forsythe et al. 1994: 362; Roth 1995; Güth 1995; Kagel et al. 1996: 102; Pillutla and Murnighan 2003: 254; Dana et al. 2006: 195; Dana et al. 2007: 78; Murnighan and Wang 2016: 87).

Let me end with a cautionary note. Recently there have been some doubts raised about the external validity of dictator games, or the extent to which the results generated in these lab studies carry over to non-artificial or natural settings. For instance, Jeffrey Winking and Nicholas Mizer (2013) had a confederate approach an individual at a bus stop in Las Vegas, and “pretended to notice chips in his pocket, stopped briefly and claimed to the participant that he was late for a ride to the airport and asked the individual if he/she wanted the casino chips, which he did not have time to cash in,” and which were worth $20 (2013: 290). Ten feet away with his back turned to the participant was another confederate. The confederate with the chips would also add, “I don’t know, you can split it with that guy however you want” while pointing towards the second confederate (2013: 290). However, in stark contrast to the results earlier, here every single participant kept all $20 for themselves (2013: 291). Several writers have claimed that external validity might be limited because the laboratory results for dictator games could be heavily influenced by experimenter demand effects (Zizzo 2010, 2013; Winking and Mizer 2013).

3. Batson’s Modified Dictator Game and Moral Hypocrisy

For the past twenty years, the psychologist Daniel Batson and his colleagues have developed a novel experimental approach which closely resembles the dictator game. Because this approach has been explored using a number of interesting modifications by the same researcher in the course of constructing a sophisticated motivational story about resource allocation, and because his findings have attracted a great deal of interest and attention, it is worth delving into them in some detail (for the studies, see Batson et al. 1997, 1999, 2002, 2003; for reviews, see Batson and Thompson 2001 and Batson 2008; for related studies and discussion, see Valdesolo and DeSteno 2007, 2008 and Watson and Sheikh 2008; for Batson’s exploration of the implications of this research for moral psychology, see Batson 2015).

Here is the setup Batson typically used. Participants were told that they were part of a task assignment study. Individually, they were given the choice of whether to assign a positive consequences task or a neutral consequences task to either themselves or another participant, who (they were told) would simply assume the assignment was made by chance. The positive consequences task was such that for each correct response, the participant would receive one ticket for a raffle with a prize of $30 at the store of his or her choice. In the neutral consequences task, there would be no consequences for correct or incorrect responses but, “most participants assigned to the neutral consequences task find it rather dull and boring” (Batson et al. 1997: 1339). After making the assignment privately and anonymously, participants were asked about what was the morally right way to assign the task consequences, and to rate on a 9-point scale whether they thought the way they had actually made the task assignment was morally right.

We can see that one of the differences between Batson’s setup and a typical dictator game is that here the recipient would not know where the allocation came from, nor that there was another possible allocation that could have been made to him instead. Now such information is not an essential feature of dictator games; in fact we just saw a similar setup in one of Dana’s studies. But in the usual configuration dictators typically know that recipients will know how they received their allocation.

What ends up happening when participants are placed into this situation? Well, they tend to assign themselves the positive task. Out of twenty participants, Batson found (Batson et al. 1997: 1340):

Assigned self to positive consequences task: 16
Assigned other to positive consequences task: 4

Furthermore, only 1 out of the 16 said that assigning oneself to the positive task was morally correct. Yet even these 16 participants rated the morality of their assignment in the middle of the 9-point scale (4.38), which was significantly lower than the 8.25 rating for the 4 participants who assigned the other participant to the positive consequences task (Batson et al. 1997: 1340).

From here Batson performed many additional studies which were variations of this initial setup. Suppose, for instance, that we make the moral norm at work here salient to participants just before they make their assignment by including a statement that,

Most participants feel that giving both people an equal chance—by, for example, flipping a coin—is the fairest way to assign themselves and the other participant to the tasks (we have provided a coin for you to flip if you wish). But the decision is entirely up to you. (Batson et al. 1997: 1341)

Ten participants flipped and ten did not. Eight out of ten in the first group said flipping was the morally right procedure, and six out of ten said so in the second group. Most importantly, Batson found (Batson et al. 1997: 1342):

Assigned self to positive consequences task, out of 10 who did not flip: 9
Assigned self to positive consequences task, out of 10 who did flip: 9

This second result is grossly out of line with what the random flipping of a coin would have predicted. At least some of the participants in the second group must have flipped in a way that went in favor of the other person, but still assigned themselves the positive task. Self-interest seemed to have crept into their decision making process in a significant way. Yet, and perhaps most surprisingly, those who flipped rated what they had done as much more morally right (7.30 on a 1–9 scale) than did those who did not flip (4.00) (Batson et al. 1997: 1341).
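To see just how far out of line this is with chance, here is a quick back-of-the-envelope calculation (mine, not from the study) of how improbable it is that at least 9 of 10 fair coin flips would favor the flipper:

```python
from math import comb

# P(X >= 9) for X ~ Binomial(n=10, p=0.5): the chance that a fair coin
# would hand the positive task to the flipper in at least 9 of 10 cases.
n, p = 10, 0.5
prob = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(9, n + 1))
print(round(prob, 4))  # 0.0107 -- about a 1% chance under honest flipping
```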

Here is another wrinkle—suppose participants are now given the option to have the experimenter assign them one of the two tasks, while also knowing what that assignment is going to be ahead of time. We get the following (Batson et al. 1997: 1343):

Accepted experimenter’s assignment if it would be the positive consequences task: 17 out of 20
Accepted experimenter’s assignment if it would be the neutral consequences task: 11 out of 20
Remaining participants who assigned themselves to the positive consequences task (whether flipped or not): 12 out of 12

So in this experiment, only 22.5% of participants ended up with the neutral task. Yet for the 17 participants in the first group, on average they felt they had acted just as morally right (7.06 on a 9-point scale) as the 11 participants in the second group (7.91) (Batson et al. 1997: 1343).

What these results have suggested to Batson is that most of us (at least in these kinds of situations) are disposed towards a kind of moral hypocrisy, or appearing to be moral to oneself and others but avoiding the costs of actually being so if one can (try to) get away with it. After all, many participants were typically eager to flip the coin and report after the fact that this was the morally right course of action, but then distorted the process so that the results came out in their favor. Note that it is not the mere fact that they choose the positive consequences task for themselves that is hypocritical by itself—there could be cases where a person thinks of acts like this as being in line with her self-interest, and is not even aware of their moral ramifications. Nor is their hypocrisy captured here by the additional fact that the participants also seemed to believe in the moral principle that flipping the coin is the morally fair way to assign the task consequences. For then we would just have a perfectly familiar case of weakness of will, in which you believe that something is right but fail to be sufficiently motivated to do it. Rather, their hypocrisy arises when they (i) choose the positive consequences for themselves, while (ii) seeming to believe that flipping the coin and following what it indicates is morally correct, and (iii) still claiming to themselves and others to have made the morally right task assignment (for more on characterizing moral hypocrisy, see Batson et al. 1997: 1335–1336, 1999: 525–526, 2002: 330; Batson and Thompson 2001; and Batson 2008, 2011: 222–224).

How is it really possible for these participants to pull off this combination? In particular, it is one thing to appear to be acting morally to others. But these participants are also appearing to be moral to themselves. How are they able to avoid the costs of guilt, regret, and a sense of hypocrisy for going against what they know to be right, while still experiencing the self-rewards of moral behavior? One possibility is that the participants in question have come to deny any moral responsibility in this situation, thereby deactivating the moral norm. Or perhaps they have come to deny that they are able to carry out the task, or that they are no longer aware of the likely consequences of their actions. But there is no evidence to suggest that these background conditions have failed to apply in these particular cases (Batson 2008: 57). Another possibility is that these participants have come to think that their behavior is in line with their moral standards, thereby allowing them to not feel guilt and to even perhaps take pride in their behavior. This could be because they have revised the content of the moral principle to create an exception clause for this one kind of situation. Or it might be that they lie to or deceive themselves about what the principle says in the first place.

But the results already cited above cast doubt on these hypotheses as well (see in particular Batson et al. 1999: 526; this is not to say that these hypotheses are not accurate in other cases, but the focus here is only on understanding the data generated by Batson’s studies). Batson also introduced another variant of the setup in which the moral principle is made salient, a coin is provided, and the coin is clearly marked with “SELF to POS[ITIVE]” on one side and “OTHER to POS” on the other (Batson et al. 1999: 527). When the coin lands on OTHER, it seems very hard to see how a person who still assigns himself to the positive consequences task could take that to be morally acceptable, in spite of what the moral principle and the coin both say. Furthermore, only 2 out of 40 participants reported afterwards that the most morally right thing to do is to assign themselves to the positive consequences task (Batson et al. 1999: 529). Yet of the 28 who chose to flip, Batson found (Batson et al. 1999: 528):

Assigned self to positive consequences task: 24
Assigned other to positive consequences task: 4

Furthermore, those who assigned themselves to the positive consequences task after flipping the coin again thought that they were being highly moral (7.42 on a 9-point scale), while those who made the same assignment without flipping the coin did not (3.90) (Batson et al. 1999: 529). So marking the coin, and thereby reducing the ambiguity as to what the fair assignment should have been, did nothing to undermine moral hypocrisy. We are still without an explanation for how it seems to work—more on that in a moment.

A natural thought here is that this last result does not distinguish between those participants who flip, win, and then rate the morality of their action, versus those who flip, lose, change the task assignment to favor themselves, and then rate the morality of their action. It could be that the former group (rightly) rates the morality of their action highly, since they are following the fair procedure, whereas the second group rates it low, perhaps around 4.0 as do those who do not even bother flipping the coin and just assign themselves the positive task. If the results came out this way, then we would have evidence for moral weakness of will, to be sure, but not for moral hypocrisy, since the second group would be honest about the moral failure of their action.

In a later study, Batson tested this possibility using secret observations of how each participant who flipped ended up having his or her coin fall (Batson et al. 2002: 334–338). Of the 32 who flipped the coin, 16 got OTHER to POS or otherwise fiddled the coin flip so that it came out in their favor (i.e., repeated flipping). For this group, the mean rated morality of their action was 5.56, which was significantly higher than the 3.89 for the group which assigned itself to the positive task without using the coin. On the other hand, it was also lower than the 7.45 for the group which flipped, won SELF to POS, and assigned themselves to the positive task (2002: 336). The upshot is that,

even though the coin had no more effect on their decision than it had on the decision of those who did not claim to use the coin at all, the fiddlers still said they thought the way they made the task assignment decision was more moral. Their sham reference to use of the coin seems to have provided sufficient appearance of morality that they could claim to have acted, if not totally morally, at least moderately so. (2002: 337)

To summarize, the studies mentioned above suggest that participants could have had their relevant fairness norm activated—in this case, the norm that flipping a coin and conforming to its results is the morally right way to behave—and even be motivated to some extent to do so. But this motivation must have been so weak that, once the cost/benefit analysis was done on an alternative action of secretly ignoring the coin when it went against the person and instead assigning oneself to the positive task, it became fairly easy to outweigh or undercut the motivation to do the right thing. So our fairness norms look to have little motivating power after all, at least in these kinds of cases.

But all is not lost for fairness norms. Batson was able to discover two variables which separately helped to ensure that motivation to be fair did win out. The first variable was the introduction of a mirror. The setup was as usual with no labeling of the coin, but now a mirror was propped up against a wall on the only table where participants could fill out their forms. For one group of participants, the mirror was facing them; for the other, it was turned to the wall. This ended up making a significant difference (Batson et al. 1999: 530):

Assigned Self to Positive Consequences Task

                         No Facing Mirror    Facing Mirror
Did Not Flip Coin (29)   85%                 62%
Flipped Coin (23)        85%                 50%

Here for the first time we see results in line with what fairness requires—for those who flipped the coin and were facing the mirror, the overall results were as chance would predict.

What is the best explanation for the contribution made by the mirror? Batson appeals to research on the psychology of self-awareness, in which objects like mirrors can,

heighten awareness of discrepancies between behavior and salient personal standards, creating pressure to act in accord with standards. (Batson et al. 1999: 529; for more on the psychology of self-awareness, see Wicklund 1975)

In this case, the mirror served to highlight to the person the difference between what he believed was morally correct in this situation, and the opposing temptation to act out of self-interest. This heightened awareness seemed to either create extra motivation to comply with the fairness norm or lessen motivation to do the self-interested thing (or perhaps both). Thus the mirror can be said to increase the salience of the self’s own personal standards of evaluation. (Another hypothesis, though, is that it increases the salience of standards of social evaluation, i.e., how others might judge him. Batson tested this possibility and did not find support for it (2002: 331–334).)

Self-awareness also provides a clue about where to find a plausible explanation for how moral hypocrisy is possible. That clue has to do with a particular form of self-deception (Batson et al. 1997: 1336, 1346, 1999: 526–527). Rather than thinking that participants simply revised their fairness standards to make their behavior look acceptable in their own eyes, perhaps instead they were engaging (often unconsciously) in an act of self-deception whereby they avoided comparing their behavior to the relevant moral standards. If the two are kept apart from each other, that mitigates the perceived costs of not acting fairly while doing nothing to mitigate the perceived benefits of acting self-interestedly. But with increased self-awareness, the discrepancy between the fairness norm and the self-interested option was made especially salient so that it became psychologically difficult for many participants to employ this particular form of self-deception (Batson et al. 1999: 527, 529, 531–532, 2002: 331; Batson and Thompson 2001: 55).

The other variable which Batson found to increase motivation associated with the fairness norms had to do with perspective taking. Take the usual setup, but with the caveat that the default starting point is for the participant to be awarded two raffle tickets for every correct response, while the other participant gets zero tickets. Then the task assignment becomes whether the participant is willing to change the assignment so that it is symmetrical, with both people receiving one ticket each. For controls, 38% changed the task consequences to symmetrical. However, for the experimental group, 83% did (Batson et al. 2003: 1199). The difference? This group was instructed to adopt the perspective of the other person—“we would like for you to imagine yourself in the place of the other participant” (Batson et al. 2003: 1198, emphasis deleted). This is different from imagining what the other person is feeling or experiencing—the instructions here are very much in line with the Biblical mandate to “Do unto others as you would have them do unto you” (Matthew 7:12).

There is one additional implication from Batson’s work which I want to highlight here. Consider again those participants who are about to flip the coin. At that point before they see what the outcome is, what is the nature of their motivation? Perhaps for the moment at least some of them want to do the morally right thing (follow the dictate of the coin, however it ends up landing) for its own sake. If the coin lands in their favor, then the outcome also aligns with their self-interest, which is so much the better. But if it lands in the other person’s favor, then they might see what being fair would cost them, and their self-interested motives end up outweighing their initial moral motivation. Or perhaps this is all fanciful—perhaps when they are flipping the coin, all they want is what they think is ultimately in their self-interest, which at the moment is just flipping the coin so as to benefit from appearing to be moral.

One way to test these hypotheses, Batson reasoned, is to see if participants cared about whether the flipping of the coin, and so the task assignment, was done by themselves or by the experimenter. If the egoistic hypothesis is correct, then they should want to flip the coin themselves so that they can rig the outcome. If the other hypothesis involving an ultimate desire to be moral is correct, however, it should not matter who flips. When the experiment was actually run with this choice option, 80% of those who used a coin wanted the experimenter to flip it. This initial evidence thus favors postulating a motive to follow the fairness norm (and perhaps moral norms more generally) for its own sake (Batson and Thompson 2001: 55–56).

Suppose this hypothesis about motivation is correct. How strong a psychological force does it typically seem to be in cases involving fairness? The evidence suggests, at least given the current state of research, that for many of us it is relatively weak. We can already see this from the studies cited above, where so many participants do not actually follow their moral principle about what is a fair task assignment. In addition, Batson varied the previous setup so that the assignment was between a positive and a negative task, where the latter involved receiving “mild but uncomfortable” electric shocks for each incorrect response. With this change, only 25% of participants offered to let the experimenter flip the coin, and another 25% flipped themselves, with 91% choosing the positive task. The remaining 50% of participants simply bypassed the pretense of the coin flip and gave themselves the positive task while readily admitting that this was not morally right (Batson and Thompson 2001: 56). The implication is that moral motivation caused by our fairness norms seems to be weak and highly susceptible to being outweighed in cases of this type where the person’s self-interest is at stake.

4. Possible Implications of Batson’s Research for the Empirical Reality of the Virtue of Justice

Having reviewed this extensive research project by Batson, one natural direction some philosophers might go in next is to examine what it suggests about people’s moral character. This is especially true for philosophers engaged in recent discussions of situationism and the empirical reality of virtue and vice (Harman 1999, 2000; Doris 1998, 2002; Snow 2010; Miller 2014).

Given the focus of this essay, we will only look at the virtue of fairness. It seems clear that Batson’s results do not inspire much confidence in this virtue being widely held. For instance, something like the following seems plausible:

(1)
A person who is fair will reliably distribute a good fairly as opposed to selfishly, especially when he or she believes that this would be the fair thing to do and the good is relatively small in value.

But time and again in Batson’s studies, the participants rigged things so that they received the better outcome. Admittedly, to count against (1), we have to assume that this was an unfair distribution, which may be controversial. But most are likely to grant that assumption, and the participants themselves did not appear to believe that what they did was the fair thing.

Similarly, this seems to be true of the virtue of fairness:

(2)
A person who is fair will not reliably and sincerely claim to have done the morally correct thing, when he knows that what he did was unfair and selfish, and the benefit to himself was relatively small in value.

To clarify, this is not a point about being honest rather than lying about one’s behavior. Rather, the point with this criterion is that a fair person would likely not be deceived or be of two minds about the unfairness of his behavior in such instances. He would clearly recognize and acknowledge that his behavior was unfair. But again the participants did not tend to live up to this fairness standard. As we saw, in one study those who flipped and assigned themselves the positive consequence task rated what they had done as highly moral (7.30 on a 1 to 9 scale), and their rating was much higher than for those who assigned themselves to this task without flipping (4.00) (Batson et al. 1997: 1341).

There are other respects in which participants failed to live up to the standards of fairness, but the picture is not all bleak. In fact, there are many ways in which participants did not conform to the standards of the vice of unfairness either. Here is one:

(3)
An unfair person does not have moral beliefs to the effect that unfairness is wrong in general, as well as wrong in particular instances of what are widely considered to be acts of unfairness. Or if he does happen to have such beliefs, he will not care much about them and they will not play a significant motivational role in his psychology.

But we saw that Batson’s participants did seem to genuinely believe that the fair thing to do was to flip the coin and make the appropriate assignment. True, this belief did not have a significant motivational role to play in some contexts, but in others it did, such as when taking the perspective of the other person.

This suggests another respect in which most people, if they are like Batson’s participants, would fall short of being vicious in this context:

(4)
An unfair person would not reliably let his unfair behavior be significantly diminished or eliminated given either increased self-awareness or when momentarily adopting the perspective of another person.

Yet the experimental manipulations Batson introduced, involving a mirror and the instruction to imagine oneself in the place of the other person, seemed to eliminate the unfair behavior altogether.

To note one more interesting conflict with unfairness, consider the motivational picture which emerged in the last few studies that were reviewed concerning fairness norms. Batson’s results suggested that participants were motivated to do the morally right/fair thing, but when the flip of the coin went against them, this dutiful motivation would often be outweighed by self-interested motivation. Yet on a traditional Aristotelian picture of the vices:

(5)
An unfair person is not reliably motivated to do the morally right thing, even if that motivation ends up being outweighed, but rather is motivated to do what is unfair.

This might be too strong a characterization of an unfair person in an ordinary sense, but again we are focusing on the viciously unfair person. Yet it turned out that the participants seemed to be experiencing motivation to do the right thing. Furthermore, if they were motivationally conflicted but gave in to self-interested motivation, they would be weak of will or incontinent on the Aristotelian picture, rather than vicious.

So one could argue that Batson’s results do not sit comfortably with either the widespread possession of the virtue of fairness or the widespread possession of the vice of unfairness. Instead they suggest that we have beliefs and desires like these:

  • Beliefs about what is fair and what is not fair, and the importance of being fair.
  • Beliefs about the relationship between not complying with certain fairness norms and various personal benefits such as increased time, money, alternative activities, and so forth.
  • Desires in favor of being fair when doing so will contribute towards complying with the relevant fairness norms, provided the benefits of doing so are not (significantly) outweighed by the costs.
  • Desires in favor of not being fair when the benefits of complying with the relevant fairness norms are (significantly) outweighed by the costs, while also desiring to as much as possible still appear to be fair both to others and to oneself.

While certainly not an exhaustive list, these mental states do seem like a mixed bag, morally speaking. Some of them are quite morally admirable, such as those in the first set, and by themselves could give rise to positive moral behavior. Others, of course, are not morally admirable, and they can help to explain the unfair behavior we see in Batson’s studies (For the development of a Mixed Trait approach to character along these lines, see Miller 2013, 2014, 2015).

5. Personality Traits, Economic Games, and Justice

Here is a general observation about the results we have seen in this essay. For each of the studies, there was rarely uniformity in behavior by all the participants in a particular situation. Indeed, in some cases there were striking differences. For instance, we saw that in the Forsythe study of dictator games, 21% gave nothing and 21% gave an equal split of the $10, leaving 58% somewhere in-between (Forsythe et al. 1994: 362). We also saw that many people would take the $9 exit option in Dana’s dictator studies, but many people also would not (Dana et al. 2006). Or in Batson’s studies, many participants would assign themselves to the positive consequence task, but rarely would everyone do this.

So there appear to be important individual differences in behavior among participants in these game environments. Since the situation a given group of participants is confronting is the same, it is only natural to think that the ensuing differences in behavior might be explained, at least in part, by differences in their underlying personality traits. If this turns out to be the case, then if we learn something about their personality, we could more accurately predict how they would subsequently behave in these games, as well as in other relevant situations. In other words, individual differences in the moral psychology of a person’s justice-relevant traits can translate into individual differences in justice-relevant behavior. And knowing something about the former can help us predict the latter.

It turns out that there is in fact some evidence linking certain personality traits with behavior in these games. Here I briefly mention three such instances where evidence has been found. Spending a moment on each of them is also worthwhile because of what we can learn more generally about the moral psychology of justice beyond what economic games tell us (for a massive meta-analysis of 60 years of research on the relationship between personality traits and behavior in six economic games, including the dictator and ultimatum games, see Thielmann et al. 2020).

The Big Five. The Big Five personality traits (or Five-Factor model) have come to dominate the field of personality, with thousands of relevant papers appearing in just the past five years (for an overview, see John et al. 2008). Results from these studies have repeatedly pointed in the direction of five basic dimensions of personality with the following most commonly used labels:

  • Extraversion (also labeled Surgency, Energy, Enthusiasm)
  • Agreeableness (also labeled Altruism, Affection)
  • Conscientiousness (also labeled Constraint, Control of Impulse)
  • Neuroticism (also labeled Emotional Instability, Negative Emotionality, Nervousness)
  • Openness (also labeled Intellect, Culture, Originality, Open-Mindedness)

The idea, then, is that in a typical group there will be people who differ in their ratings on each of these five dimensions. Some, for instance, might be high on extraversion, which can be interpreted as involving an energetic approach towards social interaction manifested in, for instance, the behavior of attending more parties and introducing themselves to strangers (John et al. 2008: 120). Others might be quite introverted instead.

Advocates of this approach typically have hierarchical models of personality traits in mind, where the Big Five are subdivided into different “facets” that are less broad and so are claimed to have increased accuracy (for details, see Paunonen 1998). To cite one example, here are the 30 facets from Robert McCrae and Paul Costa’s version of the Five-Factor Model (Costa and McCrae 1995: 28):

  • Neuroticism: Anxiety, Angry Hostility, Depression, Self-Consciousness, Impulsiveness, Vulnerability
  • Extraversion: Warmth, Gregariousness, Assertiveness, Activity, Excitement Seeking, Positive Emotions
  • Openness to Experience: Fantasy, Aesthetics, Feelings, Actions, Ideas, Values
  • Agreeableness: Trust, Straightforwardness, Altruism, Compliance, Modesty, Tender-Mindedness
  • Conscientiousness: Competence, Order, Dutifulness, Achievement Striving, Self-Discipline, Deliberation

In their 240-item survey instrument, the NEO-PI-R, 8 items are designed to measure each of these facets. For instance, “I keep my belongings neat and clean” and “I like to keep everything in its place so I know just where it is” are two items for the conscientiousness facet of order (Costa and McCrae 1992: 73).

It is noteworthy that there is nothing on the list of Big Five traits or their facets that seems directly related to justice. But in recent years connections have been established between performance on economic games and the Big Five (for a review, see Zhao and Smillie 2015). The most pronounced finding among the Big Five traits is that higher agreeableness is correlated with higher allocations to the recipient in a dictator game. In one meta-analysis, the sample weighted average correlation was \(r_{wa} = .18\). Higher agreeableness was also correlated with rejecting fewer offers in ultimatum games (\(r_{wa} = -.10\)) (Zhao and Smillie 2015: 288). Conscientiousness has also been linked to lower allocations in dictator games (Ben-Ner et al. 2004). Studies on the other Big Five traits have been more inconsistent in their results, and are also few and far between.
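For readers unfamiliar with the \(r_{wa}\) statistic, here is a sketch of how a sample-weighted average correlation is computed; the study values below are invented for illustration and are not taken from the cited meta-analyses:

```python
# Each entry is (correlation r, sample size N) for a hypothetical study.
studies = [(0.22, 120), (0.15, 80), (0.18, 200)]

# Weight each study's r by its sample size, then normalize by total N.
r_wa = sum(r * n for r, n in studies) / sum(n for _, n in studies)
print(round(r_wa, 3))  # 0.186 for these made-up numbers
```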

The HEXACO. Michael Ashton and Kibeom Lee have proposed a six-factor model of personality (Ashton and Lee 2001, 2005; Lee and Ashton 2004). Five of the factors carry over with minor modifications from the Big Five taxonomy. The key addition is a sixth factor which they call honesty-humility, and which they describe this way:

Persons with very high scores on the Honesty-Humility scale avoid manipulating others for personal gain, feel little temptation to break rules, are uninterested in lavish wealth and luxuries, and feel no special entitlement to elevated social status. Conversely, persons with very low scores on this scale will flatter others to get what they want, are inclined to break rules for personal profit, are motivated by material gain, and feel a strong sense of self-importance. (Lee & Ashton 2015, Other Internet Resources).

Honesty-humility in turn has four facets: sincerity, fairness, greed avoidance, and modesty. Most relevant for our purposes is naturally the fairness facet. As part of the 100-item HEXACO-PI-R inventory, the following are the items which are labeled under fairness:

12. If I knew that I could never get caught, I would be willing to steal a million dollars.
36. I would be tempted to buy stolen property if I were financially tight.
60. I would never accept a bribe, even if it were very large.
84. I’d be tempted to use counterfeit money, if I were sure I could get away with it.

Now one might wonder whether these are really best understood as fairness items; at the conceptual level they might seem to relate more to honesty than to justice. But let’s leave that terminological point to one side.

For it turns out that significant statistical relationships have been found with behavior in bargaining games (for a review, see Zhao and Smillie 2015). In the same meta-analysis, for instance, the sample-weighted average correlation between dictator allocation amount and honesty-humility was \(r_{wa} = .24\) (291). Another meta-analysis reported a correlation of \(r_{wa} = .29\) (Hilbig et al. 2015: 92). There was also a negative correlation between HEXACO agreeableness and rejecting offers in ultimatum games (\(r_{wa} = -.16\)) (Zhao and Smillie 2015: 288). Furthermore, one study found that 64.4% of those high in honesty-humility chose the 50/50 allocation in the dictator game, while only 34.6% of those low on this dimension did (Hilbig et al. 2015: 92).

Benjamin Hilbig and Ingo Zettler (2009) provide a nice experimental demonstration of this relationship. Participants completed the HEXACO-PI and played a dictator game and an ultimatum game, each with an initial allocation of 100 points. As expected, those who were higher in honesty-humility gave fewer points to themselves in both games (518). More interesting was the consistency in allocations across the two games. Those lowest in honesty-humility allocated an almost equal number of points to themselves and the other person in the ultimatum game. But in the dictator game, where there was no threat of rejection by the other person, their average allocation to keep was almost 80 points. By way of contrast, those highest in honesty-humility showed no significant difference in their allocations across the two games. In other words, they were consistently fair, whereas the other group was fair only when it was to their advantage (518–519).
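
The logic of this comparison can be made concrete with a small sketch. The means below are illustrative stand-ins chosen to mirror the pattern just described (low honesty-humility allocators kept almost 80 of 100 points in the dictator game), not Hilbig and Zettler’s actual figures.

```python
# Illustrative sketch of the Hilbig and Zettler (2009) pattern, using
# hypothetical mean points kept by the allocator out of 100.

mean_points_kept = {
    ("low honesty-humility",  "ultimatum"): 55,  # near-equal under threat of rejection
    ("low honesty-humility",  "dictator"):  78,  # keeps far more once rejection is impossible
    ("high honesty-humility", "ultimatum"): 51,
    ("high honesty-humility", "dictator"):  52,  # consistently fair either way
}

for group in ("low honesty-humility", "high honesty-humility"):
    gap = (mean_points_kept[(group, "dictator")]
           - mean_points_kept[(group, "ultimatum")])
    print(f"{group}: dictator minus ultimatum gap = {gap} points")
```

A large gap indicates fairness that is merely strategic (it disappears once the other player is powerless), whereas a near-zero gap indicates fairness that is consistent across strategic contexts.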

Justice Sensitivity. A relatively new and particularly exciting development in personality psychology has been the study of what is being called “justice sensitivity”. The basic idea is that individuals differ in general on this trait, which can be understood at its heart as a motive for justice (Stavrova and Schlösser 2015: 3). After several refinements, consensus seems to be emerging around four facets for this trait (Schmitt et al. 2010; Stavrova and Schlösser 2015):

  • JSVictim: Sensitivity to becoming a victim of injustice.
  • JSObserver: Sensitivity to witnessing injustice.
  • JSBeneficiary: Sensitivity to passively benefiting from injustice.
  • JSPerpetrator: Sensitivity to actively committing injustice.

(Stavrova and Schlösser 2015: 3)

Three of these, namely observer, beneficiary, and perpetrator sensitivity, are other-focused aspects of justice sensitivity, whereas victim sensitivity is self-focused, concerning whether one is oneself treated fairly (Stavrova and Schlösser 2015: 3; Schlösser et al. 2018: 76–77). This makes a difference statistically. For instance, victim sensitivity tends to correlate with anti-social personality traits such as Machiavellianism in a way that the first three do not.

But all four of these facets are indeed statistically related to each other, and at the same time are distinguishable pieces of a stable trait of justice sensitivity. While they do correlate with other personality traits such as the Big Five, the correlations are typically on the low side, which can be taken as evidence that justice sensitivity itself is a distinct trait (for a review, see Schmitt et al. 2010).

A well-validated measure of justice sensitivity has been developed by Manfred Schmitt and his colleagues, with 10 items for each of the four facets (Schmitt et al. 2010). For instance, here are the items for beneficiary sensitivity:

  • It disturbs me when I receive what others ought to have.
  • I have a bad conscience when I receive a reward that someone else has earned.
  • I cannot easily bear it to unilaterally profit from others.
  • It takes me a long time to forget when others have to fix my carelessness.
  • It disturbs me when I receive more opportunities than others to develop my skills.
  • I feel guilty when I am better off than others for no reason.
  • It bothers me when things come easily to me that others have to work hard for.
  • I ruminate for a long time about being treated nicer than others for no reason.
  • It bothers me when someone tolerates things with me that other people are being criticized for.
  • I feel guilty when I receive better treatment than others.

(Schmitt et al. 2010: 234)

Participants complete these items using a scale which ranges from 0 (not at all) to 5 (exactly).
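
As a rough illustration of how such an instrument yields facet scores, here is a minimal sketch in Python. Treating a facet score as the mean of its ten item ratings is an assumption made purely for illustration; Schmitt et al. (2010) should be consulted for the official scoring procedure.

```python
# Hypothetical scoring sketch for a justice sensitivity inventory of
# the kind described above: 10 items per facet, each rated 0 ("not at
# all") to 5 ("exactly"). Mean-based scoring is an assumption for
# illustration, not Schmitt et al.'s (2010) official procedure.

from statistics import mean

responses = {  # hypothetical ratings for one participant
    "victim":      [4, 5, 3, 4, 4, 5, 3, 4, 4, 5],
    "observer":    [2, 3, 2, 3, 2, 2, 3, 2, 3, 2],
    "beneficiary": [1, 2, 1, 1, 2, 1, 2, 1, 1, 2],
    "perpetrator": [3, 3, 4, 3, 3, 4, 3, 3, 4, 3],
}

for facet, ratings in responses.items():
    assert len(ratings) == 10 and all(0 <= x <= 5 for x in ratings)
    print(f"JS-{facet}: {mean(ratings):.1f} (on the 0-5 scale)")
```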

Some interesting work is being done using this measure of justice sensitivity. For instance, Gollwitzer and his colleagues (2005) examined the willingness of West Germans to contribute to improving living conditions in East Germany, where a large disparity persisted even years after the Berlin Wall came down. Particular proposals included an automatic deduction from the salaries of West Germans to fund job creation in East Germany, and affirmative action hiring policies that would favor East over West Germans for new positions. Beneficiary sensitivity, for example, predicted solidarity with East Germans, whereas victim sensitivity did not (Gollwitzer et al. 2005: 191).

In another study, Thomas Schlösser and his colleagues (2018) examined whether justice sensitivity predicts preferences over wealth distributions. They used a version of the welfare state game in which participants were assigned to be an upper, middle, or lower class subject (A, B, or C, respectively) in a hypothetical society which they got to help shape. Specifically, they chose between a society with lower inequality (person A received €5, B received €4, and C received €3) and a society with higher overall wealth but greater inequality (A received €10, B received €6, and C received €1). As expected, participants who were high on JSVictim (sensitivity to becoming a victim of injustice) were more likely to choose greater inequality if they were A or B, but greater equality if they were C. Participants who were high on the other three dimensions of justice sensitivity were more likely to choose less inequality regardless of whether they were A, B, or C (Schlösser et al. 2018: 79).
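
Why would victim-sensitive participants split this way? Laying out the payoffs by position makes it plain: A and B are personally better off under the unequal scheme, while C is worse off. Here is a minimal sketch; the euro amounts are those reported above, while framing the choice as a pure self-interest comparison is my own illustration, not the study’s analysis.

```python
# Payoff structure of the Schlösser et al. (2018) choice described
# above (amounts in euros). The "pure self-interest" comparison below
# is an illustration added here, not part of the study.

societies = {
    "lower inequality":  {"A": 5,  "B": 4, "C": 3},
    "higher inequality": {"A": 10, "B": 6, "C": 1},
}

for position in ("A", "B", "C"):
    payoffs = {name: dist[position] for name, dist in societies.items()}
    best = max(payoffs, key=payoffs.get)
    print(f"position {position}: {payoffs} -> self-interest favors '{best}'")
```

Running this confirms that self-interest favors the unequal society for A (€10 > €5) and B (€6 > €4) but the more equal society for C (€3 > €1), which is exactly the split observed among participants high on JSVictim.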

Most relevant for our purposes, participants who are high on victim sensitivity are less likely to allocate 50/50 in dictator games, whereas the opposite is true for the other facets of justice sensitivity. For instance, Detlef Fetchenhauer and Xu Huang (2004) found that observer sensitivity correlated .21 with an equal allocation, and perpetrator sensitivity correlated .30, while victim sensitivity was negatively correlated at −.18 (Fetchenhauer and Huang 2004: 1024).

Further research in this area of the moral psychology of justice looks especially promising in the years to come (for additional noteworthy studies, see Dalbert and Umlauft 2009; Lotz et al. 2013; Edele et al. 2013).

6. Conclusion

At this point, it is tempting to speculate about the broader implications of these empirical findings. For instance, what can they tell us about how to design a just society? If most of us suffer from moral hypocrisy to such an extent that we are not fair people, does that support more active monitoring by the state to detect unfair behavior? How might we practically implement the findings about mirrors (should they be strategically placed around the public square to encourage fair behavior?) and empathy (should there be regular public reminders to imagine what those who are less fortunate are going through?)? And what other lessons can we extrapolate from this research to help improve society? This is not the place to explore these questions, but they are a natural next step.

Of course, allocation decisions in real life tend to be much more complicated than simple dictator or ultimatum games. But as we have seen, these games and their many different variations can shed important light on the moral psychology of justice. Clearly no simple story about our motives to be just or fair is going to be plausible. We are more complex creatures than that.

Bibliography

  • Ashton, M. and K. Lee, 2001, “A Theoretical Basis for the Major Dimensions of Personality”, European Journal of Personality, 15: 327–353.
  • –––, 2005, “Honesty-Humility, the Big Five, and the Five-Factor Model”, Journal of Personality, 73: 1321–1353.
  • Batson, C., 2008, “Moral Masquerades: Experimental Exploration of the Nature of Moral Motivation”, Phenomenology and the Cognitive Sciences, 7: 51–66.
  • –––, 2011, Altruism in Humans, New York: Oxford University Press.
  • –––, 2015, What’s Wrong With Morality? A Social-Psychological Perspective, New York: Oxford University Press.
  • Batson, C., D. Kobrynowicz, J. Dinnerstein, H. Kampf, and A. Wilson, 1997, “In a Very Different Voice: Unmasking Moral Hypocrisy”, Journal of Personality and Social Psychology, 72: 1335–1348.
  • Batson, C., E. Thompson, G. Seuferling, H. Whitney, and J. Strongman, 1999, “Moral Hypocrisy: Appearing Moral to Oneself Without Being So”, Journal of Personality and Social Psychology, 77: 525–537.
  • Batson, C. and E. Thompson, 2001, “Why Don’t Moral People Act Morally? Motivational Considerations”, Current Directions in Psychological Science, 10: 54–57.
  • Batson, C., E. Thompson, and H. Chen, 2002, “Moral Hypocrisy: Addressing Some Alternatives”, Journal of Personality and Social Psychology, 83: 330–339.
  • Batson, C., D. Lishner, A. Carpenter, L. Dulin, S. Harjusola-Webb, E. Stocks, S. Gale, O. Hassan, and B. Sampat, 2003, “‘…As You Would Have Them Do Unto You’: Does Imagining Yourself in the Other’s Place Stimulate Moral Action?” Personality and Social Psychology Bulletin, 29: 1190–1201.
  • Ben-Ner, A., F. Kong, and L. Putterman, 2004, “Share and Share Alike? Gender-Pairing, Personality, and Cognitive Ability as Determinants of Giving”, Journal of Economic Psychology, 25: 581–589.
  • Boles, T., R. Croson, and J. Keith Murnighan, 2000, “Deception and Retribution in Repeated Ultimatum Bargaining”, Organizational Behavior and Human Decision Processes, 83: 235–259.
  • Camerer, C., 2003, Behavioral Game Theory: Experiments in Strategic Interaction, Princeton: Princeton University Press.
  • Camerer, C. and R. Thaler, 1995, “Anomalies: Ultimatums, Dictators and Manners”, Journal of Economic Perspectives, 9: 209–219.
  • Costa, P. and R. McCrae, 1992, Revised NEO Personality Inventory (NEO-PI-R) and NEO Five-Factor Inventory (NEO-FFI) Professional Manual, Odessa: Psychological Assessment Resources.
  • –––, 1995, “Domains and Facets: Hierarchical Personality Assessment using the Revised NEO Personality Inventory”, Journal of Personality Assessment, 64: 21–50.
  • Dalbert, C. and S. Umlauft, 2009, “The Role of the Justice Motive in Economic Decision Making”, Journal of Economic Psychology, 30: 172–180.
  • Dana, J., D. Cain, and R. Dawes, 2006, “What You Don’t Know Won’t Hurt Me: Costly (but quiet) Exit in Dictator Games”, Organizational Behavior and Human Decision Processes, 100: 193–201.
  • Dana, J., R. Weber, and J. Kuang, 2007, “Exploiting Moral Wiggle Room: Experiments Demonstrating an Illusory Preference for Fairness” Economic Theory, 33: 67–80.
  • Doris, J., 1998, “Persons, Situations, and Virtue Ethics”, Noûs, 32: 504–530.
  • –––, 2002, Lack of Character: Personality and Moral Behavior, Cambridge: Cambridge University Press.
  • Edele, A., I. Dziobek, and M. Keller, 2013, “Explaining Altruistic Sharing in the Dictator Game: The Role of Affective Empathy, Cognitive Empathy, and Justice Sensitivity”, Learning and Individual Differences, 24: 96–102.
  • Fetchenhauer, D. and X. Huang, 2004, “Justice Sensitivity and Distributive Decisions in Experimental Games”, Personality and Individual Differences, 36: 1015–1029.
  • Forsythe, R., J. Horowitz, N. Savin, and M. Sefton, 1994, “Fairness in Simple Bargaining Experiments”, Games and Economic Behavior, 6: 347–369.
  • Gollwitzer, M., M. Schmitt, R. Schalke, J. Maes, and A. Baer, 2005, “Asymmetrical Effects of Justice Sensitivity Perspectives on Prosocial and Antisocial Behavior”, Social Justice Research, 18: 183–201.
  • Güth, W., 1995, “On Ultimatum Bargaining Experiments—A Personal Review”, Journal of Economic Behavior and Organization, 27: 329–344.
  • Güth, W., R. Schmittberger, and B. Schwarze, 1982, “An Experimental Analysis of Ultimatum Bargaining”, Journal of Economic Behavior and Organization, 3: 367–388.
  • Harman, G., 1999, “Moral Philosophy meets Social Psychology: Virtue Ethics and the Fundamental Attribution Error”, Proceedings of the Aristotelian Society, 99: 315–331.
  • –––, 2000, “The Nonexistence of Character Traits”, Proceedings of the Aristotelian Society, 100: 223–226.
  • Hilbig, B. and I. Zettler, 2009, “Pillars of Cooperation: Honesty-Humility, Social Value Orientations, and Economic Behavior”, Journal of Research in Personality, 43: 516–519.
  • Hilbig, B., I. Thielmann, J. Wührl, and I. Zettler, 2015, “From Honesty-Humility to Fair Behavior—Benevolence or a (Blind) Fairness Norm?” Personality and Individual Differences, 80: 91–95.
  • John, O., L. Naumann, and C. Soto, 2008, “Paradigm Shift to the Integrative Big Five Trait Taxonomy: History, Measurement, and Conceptual Issues,” in Handbook of Personality: Theory and Research, Third Edition, O. John, R. Robins, and L. Pervin (eds), New York: The Guilford Press, 114–158.
  • Kagel, J., C. Kim, and D. Moser, 1996, “Fairness in Ultimatum Games with Asymmetric Information and Asymmetric Payoffs”, Games and Economic Behavior, 13: 100–110.
  • Kahneman, D., J. Knetsch, and R. Thaler, 1986, “Fairness and the Assumptions of Economics”, Journal of Business, 59: S285–S300.
  • Lamont, Julian and Christi Favor, 2014, “Distributive Justice”, The Stanford Encyclopedia of Philosophy (Fall 2014 edition), Edward N. Zalta (ed.), URL=<https://plato.stanford.edu/archives/fall2014/entries/justice-distributive/>.
  • LeBar, M., 2020, “Justice as a Virtue”, The Stanford Encyclopedia of Philosophy (Fall 2020 edition), Edward N. Zalta (ed.), URL=<https://plato.stanford.edu/archives/fall2020/entries/justice-virtue/>.
  • Lee, K. and M. Ashton, 2004, “Psychometric Properties of the HEXACO Personality Inventory”, Multivariate Behavioral Research, 39: 329–358.
  • List, J., 2007, “On the Interpretation of Giving in Dictator Games”, Journal of Political Economy, 115: 482–493.
  • Lotz, S., T. Schlösser, D. Cain, and D. Fetchenhauer, 2013, “The (In)stability of Social Preferences: Using Justice Sensitivity to Predict when Altruism Collapses”, Journal of Economic Behavior & Organization, 93: 141–148.
  • Miller, C., 2013, Moral Character: An Empirical Theory, Oxford: Oxford University Press.
  • –––, 2014, Character and Moral Psychology, Oxford: Oxford University Press.
  • –––, 2015, “The Mixed Trait Model of Character Traits and the Moral Domains of Resource Distribution and Theft,” in Character: New Directions from Philosophy, Psychology, and Theology, Christian Miller, R. Michael Furr, Angela Knobel, and William Fleeson (eds), New York: Oxford University Press, 164–191.
  • Murnighan, J. and L. Wang, 2016, “The Social World as an Experimental Game”, Organizational Behavior and Human Decision Processes, 136: 80–94.
  • Paunonen, S., 1998, “Hierarchical Organization of Personality and Prediction of Behavior”, Journal of Personality and Social Psychology, 74: 538–556.
  • Pillutla, M. and J. Keith Murnighan, 1995, “Being Fair or Appearing Fair: Strategic Behavior in Ultimatum Bargaining”, Academy of Management Journal, 38: 1408–1426.
  • –––, 1996, “Unfairness, Anger, and Spite: Emotional Rejections of Ultimatum Offers”, Organizational Behavior and Human Decision Processes, 68: 208–224.
  • –––, 2003, “Fairness in Bargaining”, Social Justice Research, 16: 241–262.
  • Roth, A., 1995, “Bargaining Experiments,” in The Handbook of Experimental Economics, John H. Kagel and Alvin E. Roth (eds), Princeton: Princeton University Press, 253–348.
  • Schlösser, T., T. Steiniger, D. Ehlebracht, and D. Fetchenhauer, 2018, “How Justice Sensitivity Predicts Equality Preferences in Simulated Democratic Systems”, Journal of Research in Personality, 73: 75–81.
  • Schmitt, M., A. Baumert, M. Gollwitzer, and J. Maes, 2010, “The Justice Sensitivity Inventory: Factorial Validity, Location in the Personality Facet Space, Demographic Pattern, and Normative Data”, Social Justice Research, 23: 211–238.
  • Snow, N., 2010, Virtue as Social Intelligence: An Empirically Grounded Theory, New York: Routledge.
  • Stavrova, O. and T. Schlösser, 2015, “Solidarity and Social Justice: Effect of Individual Differences in Justice Sensitivity on Solidarity Behavior”, European Journal of Personality, 29: 2–16.
  • Straub, P. and J. Keith Murnighan, 1995, “An Experimental Investigation of Ultimatum Games: Information, Fairness, Expectations, and Lowest Acceptable Offers”, Journal of Economic Behavior and Organization, 27: 345–364.
  • Thielmann, I., G. Spadaro, and D. Balliet, 2020, “Personality and Prosocial Behavior: A Theoretical Framework and Meta-Analysis”, Psychological Bulletin, 146: 30–90.
  • Valdesolo, P. and D. DeSteno, 2007, “Moral Hypocrisy: Social Groups and the Flexibility of Virtue”, Psychological Science, 18: 689–690.
  • –––, 2008, “The Duality of Virtue: Deconstructing the Moral Hypocrite”, Journal of Experimental Social Psychology, 44: 1334–1338.
  • Van Dijk, E. and R. Vermunt, 2000, “Strategy and Fairness in Social Decision Making: Sometimes It Pays to be Powerless”, Journal of Experimental Social Psychology, 36: 1–25.
  • Watson, G. and F. Sheikh, 2008, “Normative Self-Interest or Moral Hypocrisy?: The Importance of Context”, Journal of Business Ethics, 77: 259–269.
  • Wicklund, R., 1975, “Objective Self-Awareness,” in Advances in Experimental Social Psychology, L. Berkowitz (ed.), Volume 8, New York: Academic Press, 233–275.
  • Winking, J., and N. Mizer, 2013, “Natural-field Dictator Game Shows No Altruistic Giving”, Evolution and Human Behavior, 34: 288–293.
  • Zhao, K. and L. Smillie, 2015, “The Role of Interpersonal Traits in Social Decision Making: Exploring Sources of Behavioral Heterogeneity in Economic Games”, Personality and Social Psychology Review, 19: 277–302.
  • Zizzo, D., 2010, “Experimental Demand Effects in Economic Experiments”, Experimental Economics, 13: 75–98.
  • –––, 2013, “Do Dictator Games Measure Altruism?”, in Handbook on the Economics of Philanthropy, Reciprocity and Social Enterprise, L. Bruni and S. Zamagni (eds), Cheltenham, England: Edward Elgar, 108–111.

Other Internet Resources
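
  • Lee, K. and M. Ashton, 2015, “Scale Descriptions”, The HEXACO Personality Inventory-Revised, available at hexaco.org (cited in the text as Lee & Ashton 2015).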

Acknowledgments

I am grateful to Holly Smith, Walter Sinnott-Armstrong, and an anonymous reviewer (who suggested the questions in the conclusion) for helpful comments, and to a number of people for literature suggestions, especially Michael McCullough. Work on this entry was supported by a grant from the Templeton World Charity Foundation. The statements made here are those of the author and are not necessarily endorsed by the Templeton World Charity Foundation. Material from sections three and four draws from Miller 2015.

Copyright © 2020 by
Christian B. Miller <millerc@wfu.edu>
