Rationality As Xenophobia And Showing Off

by Robin Hanson, February 2001

"Do I contradict myself? Very well, I contradict myself. I am large, I contain multitudes." (Walt Whitman)

Why should an agent be rational? The usual answers come down to the idea that irrational agents can be "exploited" by outsiders. The same arguments, however, apply to any group of agents that does not appear to be a single rational mind. And at the group level, such exploitation does not seem to be much of a concern. Perhaps we are so much more obsessed with individual rationality because we evolved to take apparent rationality as a signal of mental ability.

Why Be Rational

Rationality is usually described as a set of constraints on the preferences and beliefs of agents, typically including preference transitivity, logical consistency, and probabilistic coherence. The most common arguments for these constraints seem to come down to avoiding "exploitation" by outsiders.

For example, if your preferences are not transitive, an outsider might turn you into a "money pump." If you strongly prefer A to B, B to C, and C to A, then an outsider might offer you A in exchange for B plus $1, then C in exchange for A plus $1, and finally B in exchange for C plus $1. In the end you would have the same B you started with, and have lost $3.
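To make the arithmetic concrete, here is a minimal Python sketch of such a pump; the items, the $1 fee per trade, and the trading rule are invented for the illustration.

    # Cyclic preferences: each pair (x, y) means x is preferred to y.
    prefers = {("A", "B"), ("B", "C"), ("C", "A")}

    def run_money_pump(start_item, offers):
        """Accept every swap up the preference cycle, paying $1 each time."""
        item, lost = start_item, 0
        for offered in offers:
            if (offered, item) in prefers:  # you prefer what is offered
                item, lost = offered, lost + 1
        assert item == start_item  # the cycle returns you to where you began
        return lost

    # Start with B; the outsider offers A, then C, then B again.
    print(run_money_pump("B", ["A", "C", "B"]))  # prints 3 (dollars lost)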

If the statements you believe are not logically consistent, then any statement whatsoever can be derived from your statements via a sequence of standard logical deductions. Thus by being logically inconsistent you open yourself to being persuaded of anything an outsider might want you to believe.
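The derivation needs only two standard steps, sketched in this toy Python fragment; the propositions are hypothetical placeholders.

    def derive_anything(p, q):
        # An inconsistent belief set: you assert both P and not-P.
        believe_p, believe_not_p = True, True

        # Disjunction introduction: from P, infer "P or Q", for any Q.
        p_or_q = believe_p

        # Disjunctive syllogism: "P or Q" plus not-P leaves only Q.
        assert p_or_q and believe_not_p
        return f"From '{p}' and 'not ({p})', conclude: {q}"

    print(derive_anything("taxes are too high", "the moon is made of cheese"))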

Finally, if your degrees of belief do not follow the standard axioms of probability, and if you are willing to bet at least a small amount either way at the odds implied by your degrees of belief, then an outsider can construct a "Dutch book" against you. This is a set of bets such that you are guaranteed to lose money no matter what happens. A Dutch book can also be constructed against you if you change your beliefs according to a rule predictably different from the standard Bayesian conditioning rule.
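For instance (with invented numbers), suppose your degrees of belief in an event and its complement sum to more than one, and you will buy a ticket paying $1 on any claim at a price equal to your degree of belief in it. A short Python sketch shows the guaranteed loss:

    belief_e = 0.6      # your probability for an event E
    belief_not_e = 0.6  # your probability for not-E; incoherent, sums to 1.2

    # The outsider sells you a $1-payoff ticket on each claim at your prices.
    cost = belief_e + belief_not_e  # you pay $1.20 for the two tickets
    payoff = 1.0                    # exactly one ticket pays off

    for outcome in ("E happens", "E does not happen"):
        print(f"{outcome}: net = {payoff - cost:+.2f}")  # -0.20 either way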

All of these arguments for being rational seem to be motivated by something akin to xenophobia, a fear of being exploited by malicious outsiders. If negative outcomes from being irrational only happened on occasion by chance, one might not want to spend much effort trying to avoid them by being more rational. But the more one feels exposed to trade, persuasion, or bets with malicious outsiders, the more frightening irrationality might seem, and the more "hyper" one might become about maintaining one's rationality.

Aggregating Agents

The above arguments for rationality are usually intended to apply to a single person, often but not always extended across time. But these arguments also seem to apply to any group of agents of any sort, extended in any way across space and time.

For example, when agents in a group differ in their preferences, an outsider can profit from these differences by becoming a middle-man, and "exploiting" gains from trade within the group. Of course this is not possible if the group has already fully exploited internal gains from trade, but in this case the standard market analysis says that everyone then has the same marginal rates of substitution across commodities. Thus when the group is not "exploitable" by outsiders, it acts toward outsiders as if it were a single rational agent with a transitive set of preferences. (In technical terms, when the outcome is "Pareto optimal," it maximizes some "social welfare function.")
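A toy Python sketch of this middle-man profit, with invented exchange rates between two hypothetical members who have not yet traded with each other:

    mrs_alice = 2.0   # Alice will pay up to 2 oranges for an apple
    mrs_bob = 0.5     # Bob will part with an apple for 0.5 oranges

    buy_price = 1.0   # middle-man buys an apple from Bob for 1 orange
    sell_price = 1.5  # ...and sells it to Alice for 1.5 oranges

    # Every trade is voluntary, so both members gain; the gap left by
    # their unequal rates of substitution is the middle-man's profit.
    assert mrs_bob < buy_price < sell_price < mrs_alice
    print(f"profit: {sell_price - buy_price} oranges per apple")  # 0.5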

Similarly, if group members can be persuaded by logical deductions from statements made by group authorities, then if such authorities make statements that are inconsistent with one another, it is possible to persuade group members of anything. If, in contrast, the citable group authorities do not disagree with one another, at least on the topics on which they are considered authorities, then the authoritative group beliefs function as if they came from a single self-consistent agent.

Finally, if people in a group differ in their degrees of belief regarding some claim, then a bookmaker can profit by becoming a middle-man in making bets between group members. As with differing preferences, such profits are not possible if group members have already made all the bets they want to among themselves. But in this case the betting market odds describe a consistent set of degrees of belief; the group again acts toward outsiders as if it were a single agent with coherent degrees of belief.
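Again a toy sketch with invented numbers: the bookmaker takes opposite sides of the same bet at different prices, so the ticket payouts cancel and the price spread is kept however the claim turns out.

    p_optimist = 0.8   # one member's probability that the claim is true
    p_pessimist = 0.3  # another member's probability for the same claim

    sell_price = 0.75  # bookmaker sells a $1-payoff ticket to the optimist
    buy_price = 0.35   # ...and buys an identical ticket from the pessimist

    assert p_pessimist < buy_price < sell_price < p_optimist  # both accept

    for claim_true in (True, False):
        payout = 1.0 if claim_true else 0.0
        # The bought and sold tickets cancel; only the spread remains.
        profit = (sell_price - payout) + (payout - buy_price)
        print(f"claim true: {claim_true}, bookmaker profit: {profit:+.2f}")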

Thus the standard arguments for individual rationality, which amount to a fear of exploitation by outsiders, also suggest that a group should try to act like a single rational agent.

Accepting Foreigners

Does this all mean that all possible agents should try hard to act as if they were one huge "borgish" rational agent? Not necessarily. All else equal, one might prefer one's group to appear to outsiders as if it were a single rational agent. But if all else is not equal, the costs might not be worth the benefits.

In fact, economic theory has catalogued many reasons why market trading might reasonably fail to realize all theoretically possible gains from trade. Asymmetric information, inabilities to commit, and other "transaction costs" can prevent the equalization of marginal rates of substitution across all commodities, and the formation of market betting odds on all claims.

Such "market failures" can allow outsiders to discern that a group is composed of agents with conflicting preferences and beliefs. But outsiders can typically only profit from this ability, filling middle-men roles that group members could not, if they in some sense have lower transaction costs as middle-men. And an increase in trade due to the entry of lower-cost outsiders is usually considered by economists to be a good thing. Entry of lower-cost outsider middle-men typically makes group members better off, while making a group act more like a single rational agent.

Many towns, for example, work hard to entice lower-cost businesses to come and "exploit" them. And it is a xenophobic fear of being exploited by outsiders, perhaps more than any other single cause, that has kept poor nations from becoming rich.

Thus the standard arguments for rationality seem self-defeating when applied to groups. "Exploitation" by outsiders helps to produce rationality. So the main reason to be rational cannot be to avoid such exploitation. If being rational is good, it must be for some other reason.

Accepting Irrationality

If groups of people can reasonably tolerate a lot of group irrationality and outsiders who "exploit" it, then apparently individual people should tolerate internal irrationality as well, as similar issues should apply within brains as between them. Traditional discussions of rationality, however, are usually far less tolerant of irrational individuals.

This different treatment might be reasonable if individuals differed from groups in some relevant and important way. And of course individual brains do differ from groups of brains in many striking ways, such as physical continuity. But it is not clear how relevant most such differences are. One relevant parameter is the transaction cost of maintaining rationality internally, relative to using outsiders. But it is not clear that the costs of coordinating the parts of a mind and brain are so small, relative to the costs of coordinating groups of minds.

For example, it might seem that communication between brains is much more costly than within brains. While this is surely true for specialized neuronal pathways, abstract communication within a brain may be little easier than between brains, as both require the creation of broad brain states to produce linguistic expressions. And in large groups people can specialize in particular topics and use topic-specific communication networks. For example, others often seem better than we are at detecting our own inconsistencies.

The shared genetic interests of the different parts of a brain would seem to make it easier for one brain part to believe what another brain part tells it, relative to what another brain tells it. At least this should be true for "cheap talk," where the only costs of saying something come from its influence on listener actions. However, much of the communication within and between people may instead use costly signals. And there does seem to be a great deal of self-deception going on, at least some of which makes sense as a way to persuade others. (The more you respect your own abilities, for example, the more others may respect them.)

Even if there were a lot more secrets (or asymmetric information) between people than within people, however, this can explain much less disagreement between people than is commonly thought. Yes, when people base their beliefs on the information available to them, differing information can induce differing beliefs. But for Bayesians (or even Bayesian wannabes) with common priors, merely becoming mutually aware of one another's beliefs on a claim is enough to induce agreement. One need not explicitly learn each piece of evidence such beliefs are based on.
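A small Python simulation in the style of Geanakoplos and Polemarchakis illustrates this; the states, partitions, and event are invented. Two Bayesians share a uniform prior but observe different partitions of the states, and they repeatedly announce only their posteriors for an event. The announcements alone drive them to agreement, without either agent reporting its underlying evidence.

    from fractions import Fraction

    STATES = range(1, 10)
    EVENT = {3, 4}
    PART = {  # which states each agent can distinguish
        "A": [{1, 2, 3}, {4, 5, 6}, {7, 8, 9}],
        "B": [{1, 2, 3, 4}, {5, 6, 7, 8}, {9}],
    }
    TRUE_STATE = 1

    def posterior(possible):
        # P(EVENT | possible states) under the uniform prior
        return Fraction(len(possible & EVENT), len(possible))

    # info[i][w]: agent i's set of possible states if the true state were w
    info = {i: {w: next(c for c in PART[i] if w in c) for w in STATES}
            for i in PART}

    for step in range(6):
        speaker, listener = ("A", "B") if step % 2 == 0 else ("B", "A")
        q = {w: posterior(info[speaker][w]) for w in STATES}
        print(f"{speaker} announces {q[TRUE_STATE]}")
        # The listener keeps only those states in which the speaker
        # would have announced the number actually heard.
        for w in STATES:
            info[listener][w] = {v for v in info[listener][w]
                                 if q[v] == q[w]}
        a = posterior(info["A"][TRUE_STATE])
        b = posterior(info["B"][TRUE_STATE])
        if a == b:
            print("Agreement on", a)  # A says 1/3, B says 1/2, A says 1/3, agree
            break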

An Aside On Bayesian Priors

The simplest excuse for group-level disagreement is to posit that, in addition to high transaction costs, group members also hold different Bayesian priors. This requires that disagreements are on average anticipated, since by definition one cannot be surprised about someone else's prior. But perhaps this implication is more realistic than it seems. Uncommon priors also imply, however, a discontinuity somewhere in the causal path of agents with priors.

Consider the following space-time diagram. In it, three long and skinny grey regions describe three individual people, who are each limited in spatial extent but live a long time. One person gave birth to the other two. Region A describes a single person at a particular time, while region B describes a group of two people at the same time. Region C describes a particular person extended across a substantial length of time.

[Figure: Agents In Space-Time]

Traditionally, region A but not region B is expected to be rational in the sense of being transitive, consistent, and coherent. Region C is also usually expected to be rational, although there is some tolerance of "time-inconsistent" preferences. In a Bayesian framework this means that although different parts of C may hold different beliefs, they are all expected to be based on the same Bayesian prior. Agents seem to be expected to use the same prior as the "past selves" which caused them to come into existence.

Region D is very similar to region C, in that it also describes a causally-connected sequence of agents across time. The difference is that region D includes an event where one person gives birth to a new person which the region then tracks over time. Presumably much more information is "forgotten" along this causal sequence than in a typical region C, but it is not clear why the amount of forgetting is especially important regarding the expectation that agents across the region should base their beliefs on the same prior.

If regions like C and D should be rational in the sense of beliefs across the region being based on the same prior, however, then it seems that we must also expect such rationality from regions like B as well. After all, the two distinct minds within region B are both connected to region D through a sequence of regions each of which looks like either C or D. Thus unless there is a reason to reject aggregate rationality for regions like D, agents with a common ancestor should have a common Bayesian "ancestral" prior. In this case, agents in regions like B should agree, at least on each claim where they are mutually aware of their opinions.

Does Consistency Signal Quality?

Perhaps the real reason we expect so much more rationality from individuals than from groups is not so much that it costs groups more to be rational, but that it benefits them less.

It seems that much human cognitive and mental activity evolved in large part as a way to signal the quality of one's genes to potential mates. If low-quality minds find consistency harder to maintain, and if it is relatively easy to identify inconsistencies, relative to other mental flaws, then we might expect people to be excessively concerned with appearing consistent, relative to the non-signaling benefits such consistency might bring.

Of course consistency might not signal the desired mental abilities if it were achieved by not having opinions, or by parroting the opinions of others. On the other hand, when opinions become too unusual, listeners might have a harder time evaluating their consistency. And having very different opinions might signal being an outsider unfamiliar with local ways. We might thus expect people to have opinions on many topics and to have a somewhat but not excessively different mixture of opinions from the people around them. This suggests that groups might have too much tolerance for group-level irrationality, relative to the non-signaling benefits of rationality.

Thus signaling might explain an individual-level obsession with rationality, paired with a relative disinterest in group-level rationality.

Conclusion

The main arguments for rationality seem to be based on a fear of exploitation by malicious outsiders. But while these arguments are usually intended for the level of individual persons, they also apply to groups of people. Yet group-level irrationality is not much lamented. Economic analysis suggests that substantial irrationality is inevitable, and that groups typically benefit from "exploitation" by outsiders. This casts doubt on arguments for rationality based on fear of exploitation by outsiders.

All of this suggests we should be much more tolerant of individual-level irrationality than we seem to be. And to the extent we expect individuals to be consistent with their immediate "prior selves" in time, we might expect all agents with a common ancestor to share a common prior. Perhaps we are so much more concerned about individual-level rationality because it is used as a signal of genetic quality.