Last September I ran a project testing the merits of peer-reviewed history articles by selecting three articles at random and analyzing their methodology and its underlying Bayesian logic (because, really, all sound epistemic reasoning is Bayesian: see A Bayesian Analysis of Susannah Rees’s Ishtar-in-the-Manosphere Thesis; A Bayesian Analysis of Kate Loveman’s Pepys Diary Thesis; and A Bayesian Analysis of the Winling-Michney Thesis on Redlining). As another project, I’ve now selected three articles at random from among credible open-access journals in the subject of philosophy, once again in order to analyze their arguments in a way that makes clear their Bayesian structure, and what grasping this about them can tell us about sound philosophical methods.

This time I stuck to journals listed in the Directory of Open Access Journals, under the rubric “Philosophy (General),” in the English language, which do not have “article processing charges” (because I do not believe a journal that charges its authors can be trusted to put quality over profit). These criteria identified 199 journals. I then excluded journals that don’t publish exclusively in English (to ensure all my readers could access their content), leaving 38 journals. I then randomized by journal, then issue, then article, to select three of the most recently published from that set. However, I excluded all articles that consist of “history of philosophy,” as that is a history subject and should not be confused with doing philosophy. This includes articles aiming merely to elucidate what a historical philosopher thought, rather than defend what they thought as correct. And I excluded book reviews. In both cases this is because I only wanted to evaluate work contributing directly to the advancement of philosophy.

After randomizing, and dismissing article results not meeting the above conditions, the selections were the 30th, 17th, and 8th journals listed: the Journal of Social Ontology, published by De Gruyter (a major German academic publisher); the Feminist Philosophy Quarterly, published by the University of Western Ontario (yes, we randomly got one of the only two journals among the 38 that specialize in feminism); and Philosophy and the Mind Sciences, published by the Institute for Philosophy at Ruhr University in Bochum, Germany. From each of these I drew one article.

Today I will discuss the Faria Costa article. In coming weeks I will address the others.

The Faria Costa Thesis

The overall gist of the Faria Costa thesis is that there is an aspect of our epistemology, rooted in the ontology of what it means to be part of a group with shared goals, that on final analysis is actually normative; meaning, there is something we ought to think about it, and not just something we will think about it. Because we not only expect “our” group to share certain attitudes—we need it to. That epistemology can have normative implications is something I’ve found before. In Epistemological End Game I found that the entirety of epistemology rests on a normative proposition about when we ought to believe something (or not) and what principles we ought to pursue for that very reason. In short, if you want to know the truth, then you must follow certain principles; and if you want things to go better for you, then you must know the truth. Faria Costa gets to a similar side-result.

My summary and analysis won’t substitute for reading his whole article. So I recommend you do. The Faria Costa thesis (which admittedly is a bit hyper-specific and obscure) is that our feeling of “agency transformation (from I to we) involves a change in the normative attitude” of someone who feels they are working as a team, because a genuine ontology of “we” as an agent (as opposed to “I,” individual actors merely working in concert) requires “the belief that others will also perform team reasoning” and that this belief rests on reasons that aren’t “otiose” (pointless; unnecessary; superfluous). Faria Costa claims he can use “the theory of affordances, which is the idea that the environment provides ways to interact with it,” to “argue that when a person perceives as a group member” (seeing themselves as “we” rather than “I”), “she associates herself and the other members with the group’s mosaic of affordances,” and it is this, he claims, that “triggers a feeling of joint ownership of the agency.” You can think of “affordances” here as options, things you can do (and can’t do), given a certain environment (physical and social).

As stated, his thesis is really a question in psychology. Ordinarily we would expect a rigorous scientific research protocol to be engaged to establish these claims about how humans think and feel and why. Philosophy can propose hypotheses for later scientific investigation, and even set out the case for which is more likely to bear out, but it should not over-claim. Faria Costa’s argument can be understood in this sense. But one can also frame the same question in terms of ontology: rather than make claims about subjective human psychology (what people “happen” to think or feel, regardless of its correspondence with any truth beyond), instead make claims about the available ontological realities that such thinking could call upon in its defense. In other words, what Faria Costa can be taken to argue is that it would be objectively legitimate for someone to see and understand things the way he describes; that, in other words, joint agency is in that sense objectively real and not just a convenient illusion or delusion. That would squarely place the matter in the domain of philosophy, in particular “analytical metaphysics,” the semantics of what we do with the data established by the sciences to fill out the furniture of reality.

Faria Costa builds on previous studies in psychology (“affordance” theory, “social identity” theory, even studies of how children develop the transition from “I” to “we”) and game theory (“team reasoning” and the like), which is the correct order of argument: philosophy cannot contradict, and must build its premises from, the most solid findings of the sciences (see my discussion of the “ladder of methods” in Sense and Goodness without God II.3, pp. 51-62). And he aims to resolve a problem in the logic of decision theory called “The Hi-Lo Paradox.” The basic problem at hand is that cooperative strategies are intuitively the correct choice, but humans often cannot perceive or justify this fact (for various analytical or epistemic reasons).

For example, in a Hi-Lo game, two people each choose Hi or Lo. If they both choose Hi they win, say, a hundred dollars; if they both choose Lo, just one dollar; and if they each choose differently, they win nothing. Obviously, we should always both choose Hi. But this is true only when the distribution of outcomes is clear. What if we don’t know which option our partner is choosing? For example, in team sports, sometimes one player can’t telepathically know what a teammate is planning so as to “be in the right spot” to carry off a successful play. Or what if they can, but it is unclear to them that the option being chosen is in fact the best? Clearly, then, successful cooperation requires coordination not only of action, but of epistemic understanding.
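To make that structure concrete, here is a minimal sketch in Python (the payoff table uses the illustrative dollar amounts above; the function name is my own), showing why each player’s best choice depends entirely on what the other picks:

```python
# Minimal sketch of the Hi-Lo payoff structure described above.
# Dollar amounts are the illustrative ones used in this post.

PAYOFF = {
    ("Hi", "Hi"): 100,  # both pick Hi: the big win
    ("Lo", "Lo"): 1,    # both pick Lo: the small win
    ("Hi", "Lo"): 0,    # any mismatch: nothing
    ("Lo", "Hi"): 0,
}

def best_reply(other_choice):
    """My best choice, given what the other player picks."""
    return max(("Hi", "Lo"), key=lambda mine: PAYOFF[(mine, other_choice)])

for other in ("Hi", "Lo"):
    print(f"If the other player picks {other}, my best reply is {best_reply(other)}")
# Hi is my best reply only if the other picks Hi, and Lo only if they pick Lo;
# individual best-reply logic alone cannot break that circle.
```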

The problem Faria Costa is articulating (and trying to solve) is how we can establish an ontology or a perception-state whereby our idea of “joint agency” is actually causally important to outcomes, rather than superfluous and discardable. One can imagine concluding that there is no real thing here, that “joint agency” doesn’t exist, and even the concept “does no work”; so we should just drop it, and stick to what actually does the job (of successfully coordinating action). So, “For example,” Faria Costa says, “communication could explain how one team-reasoner believes the other will also perform team-reasoning,” and thus explain how team reasoning could be causally important, but “communication could also enable individual-reasoners to solve the dilemma” on its own. For example, maybe we don’t need to believe in joint agency; we can just always speak in terms of individual self-interest, and thus use communication (“persuasion”) directly to successfully coordinate action to our mutual benefit, bypassing any need to speak or think of “joint agency” (“we” didn’t do it; “I did it, and you helped”). Likewise if team-reasoning arises from the fact that “when we perceive another person, we perceive a rule-follower,” as then coordination can simply result directly from each individual seeing that; they don’t need to believe (and perhaps ought not even believe) that actual joint agency existed (there is again no “we,” just “you” making decisions on your own based on what you know operationally about the world, and “me” simply predicting what you will thus do and making my own decisions based thereon).

Faria Costa illustrates this with the following example (emphasis added):

For example, if we are both in a rowing boat and I perceive the environment as a group member, it means that I will detect what the group can do. I will associate both of us to the group’s mosaic of affordances. This will trigger me to feel joint ownership of the agency. I will feel that dealing with the environment is up to us. That is, it is up to us to row and get out of the lake. I will [thus] feel entitled to demand you to row too or to rebuke you if I discover that you are not rowing, regardless of whether we can talk or not. Therefore, the agency transformation (i.e. from perceiving as an independent individual to perceiving as a group member) will involve a transformation in the normative attitude of the members. This normative layer enables team reasoning to solve the dilemmas without rendering it otiose.

In other words, perceiving yourself as part of a group, and thus seeing the affordances available to the group, is causally relevant to each member of the group taking the correct coordinating action. Joint agency is ontologically and conceptually necessary for the most effective action. Basically, you can’t easily row a boat with just one oar; whereas two people each taking an oar and coordinating their action can accomplish things no individual could. There are things the group can do that individuals cannot. And perceiving this can rationally warrant certain normative attitudes, such as that it is “wrong” for the other party to ignore their oar (they are “failing” the group, as well as themselves). When they agree that even they themselves, as an individual, want to get out of the lake, then they ought to take up the oar on their side and coordinate their rowing (on the objective basis of such normative logic, see The Real Basis of a Moral World). There isn’t any way to get this rational justification without that causal understanding; mere communication can’t do it (without communicating this very fact); perceiving individuals as mere rule-followers can’t do it (without this very rule being implicated); etc.

Faria Costa concedes that “not all cases of joint agency will involve collective reasoning, and not all cases will involve a normative aspect.” For example, people can think of their nation’s army or their favorite sports team in terms of “we” are defeating “our” opponents, even though no coordination is involved (fans usually aren’t really helping the team or the army win); so there may be no normative foundation for such thinking (it becomes harder to explain how one “ought” to agree with you, and thus to rationally justify “punishing” someone who doesn’t cheer that team or that army). Faria Costa is only concerned with the subset of joint agency cases where actual coordination is involved. He sets other cases aside. But in the cases in question, Faria Costa claims to prove that “a team reasoner needs to believe that the others” on their team “are also team reasoners in order to solve the dilemmas” posed by coordinated action; and that “the conditions for a person to frame as a team,” i.e. to see from the “point of view” of the group as a whole and not just the individuals in it, “and to believe that others are also team reasoners” must be something that won’t “render team reasoning” causally unnecessary in the first place.

Faria Costa demonstrates this by using it to solve that problem in Game Theory. When two people are playing the Hi-Lo game but can’t plan or communicate with each other or even observe so as to predict each other’s choices, there isn’t any rational basis for choosing Hi over Lo. Yes, choosing Hi creates the potential for a big payoff, but you get nothing if the other person chooses Lo. And absent any other logic, there is no reason for the other person to choose Hi over Lo. They might choose Lo in the hopes of getting at least a dollar. Now, we intuitively might say, “But, shouldn’t you assume you both will go for the highest payout?” But why should we assume that? What drives this intuition? It is, of course, what Faria Costa proposes: “the player who thinks as a team member will think ‘What should we do?’ instead of ‘What should I do?’.” And thus, “we” should choose Hi. Without this reasoning, it’s really just a coin flip which we should choose. We can only expect the other player to choose Hi if we think of us both as a team working together toward a common end. We need to conceptualize joint agency. And it is this conceptualization that drives us both to choose Hi. That outcome would not be possible any other way. We simply have to think of each other as a coordinating unit. If we don’t, we can’t reach obvious conclusions like this.
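For contrast, here is the same toy model with the team-reasoning question swapped in (again just a sketch, using the same hypothetical payoff table as the earlier snippet): instead of computing a best reply to the other player, each member asks which joint profile is best for the group and then plays their own part of it:

```python
# Sketch of the team-reasoning move: ask "what should we do?" rather than
# "what should I do?" (same hypothetical payoff table as the earlier sketch).

PAYOFF = {("Hi", "Hi"): 100, ("Lo", "Lo"): 1, ("Hi", "Lo"): 0, ("Lo", "Hi"): 0}

def team_profile(payoff):
    """The joint action profile a team reasoner selects: the one best for the group."""
    return max(payoff, key=payoff.get)

print(f"Team reasoning selects {team_profile(PAYOFF)}")  # ('Hi', 'Hi')
# Each member then performs their own part of that profile, so both choose Hi.
```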

Thus, as Faria Costa points out, “An agent, at its most basic level, is an entity that can choose between alternatives and has preferences,” and it happens to be an objective ontological fact that groups have preferences apart from those of their individual members. Two people in a boat might both prefer to get out of the lake they are in; but only the two together as a unit will prefer to coordinate their oars. The individual’s preferences create the problem (getting out); the group’s preferences solve the problem (coordinating the oars). So perceiving the problem from the point of view of the group is essential to perceiving (and thus enacting) the solution. Likewise, while each individual makes separate choices (choosing an oar, moving it as needed), the end result is the group making choices—indeed choices neither individual could make alone (without telekinesis or more arms).

Another example one could adduce is standing watch: no single human can stay reliably alert for 24 hours, so maintaining guard over or against something can only be accomplished by coordinated action (various people taking shifts and ensuring no gaps in coverage result). But to understand this, you have to see the whole scenario from the group’s point of view (the whole overlapping watch schedule); you won’t see this if all you “see” is your own role (diligently maintaining just your own watch). Hence getting angry that someone didn’t do their part can only be rationally justified by framing the situation in a group context: ditching your watch harms the group, not just the individual. And it is this normative judgment (that that is bad) that drives individuals to think of themselves as “we,” a group solving a group problem.
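A toy version of that “group point of view” (the names and shift hours here are my own hypothetical additions): whether the watch has gaps is a fact about the whole roster, not about any one person’s shift, so it can only be checked at the group level.

```python
# Toy illustration with hypothetical names and shift hours: gaps in coverage
# are only visible from the group's point of view (the whole roster).

shifts = [
    ("Alice", 0, 8),    # on watch from hour 0 to hour 8
    ("Bob", 8, 16),
    ("Carol", 16, 24),
]

def coverage_gaps(shifts, day_end=24):
    """Return any uncovered intervals in the day's watch schedule."""
    gaps, covered_until = [], 0
    for _, start, end in sorted(shifts, key=lambda s: s[1]):
        if start > covered_until:
            gaps.append((covered_until, start))
        covered_until = max(covered_until, end)
    if covered_until < day_end:
        gaps.append((covered_until, day_end))
    return gaps

print(coverage_gaps(shifts))  # [] -- no single watcher, seeing only their own shift, could know this
```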

Hence the last component of Faria Costa’s thesis: there needs to be a “change in the reasoning,” which is “that the person will think on a group level,” and thus perceive group interests and the options available to group action that aren’t available to individuals; but there must also be a “change in the normative attitude,” such that “the person feels there is a demand to perform team reasoning,” and thus “hold all group members accountable for the group’s performance.” It is this normative aspect that “provides the necessary stability for team reasoning to work properly.” Everyone must believe this normative framework is rational and just, and believe that the others believe that. The existence of the normative framework thus crucially motivates coordination. Even if the sole “punishment” is guilt (someone feeling disappointed in themselves that they acted irrationally or let their colleagues down), there must be a correct sense that this is appropriate. Hence a total sociopath who never feels anything of the kind might never be motivated to engage in team thinking, and thus will only think of themselves and not the group. The result will be sub-optimal even for the sociopath, but that is why sociopathy is a mental illness: it cripples rational thought.

To illustrate what I mean, consider an attempted alternative hypothesis: group agency is just a scaled-up version of multicellularity. You are just a collection of individual cells, and you think of yourself as a unified individual, your “self,” because of their coordinated actions, and coherent structure. So, can’t team reasoning just be this? We are each just cells in a body, the body being the group, and so “group” agency is just an ontological analog of individual agency. This wouldn’t work for one crucial reason: groups don’t have independent sentient minds, yet their members do. We are not mindless cells obeying a central consciousness. In a sense groups do act like minds (all friendly and hostile AI problems identically manifest in systems of people, e.g. nations, cults, political parties). But those groups aren’t independent intelligences, nor are their members mindless cells. So how do we get people (who are independent intelligences) to coordinate as if they were cells in a body, and do so according to a central intelligence that doesn’t actually exist beyond what can only be imagined by the individuals? Well, I think Faria Costa just answered that question.

Bringing the Thesis Home

“Imagine you see a big rock,” Faria Costa reasons: “It seems heavy and it does not afford you a way to move it.” But “imagine there is someone else near you.” Now “the presence of another person affords a way to move the rock.” And that means “when you perceive that there is another person present, you perceive the presence of another agent, who is also exploring and detecting the environment.” You perceive different “affordances” in the environment in that case: now that there is a possible team, there are things that can be accomplished in that environment that couldn’t before. We also perceive affordances based on social norms, which follow from what position in a social system someone appears to occupy. If “the other person” in the rock scenario is an active shooter, we won’t perceive them as likely to help us but rather to harm us; the affordances are thus different. But if they are a co-worker and we have been hired to clear the field the rock is in, we will perceive them as more likely to be someone who has chosen to follow their social role.

We use normative expectations like this all the time—far more than people realize. For example, in a poker game you have to read your opponent’s mind, by modeling what they are likely to do in the circumstance you perceive them to be in, and testing different hypotheses by the evidence afforded (their tells, their previous patterns of behavior, and the like). But the whole while you are also simply assuming they intend to follow the rules of poker, and not just reach over and fish through the deck for the card they want, right in front of you, and declare a win. Or pull a gun and just rob you. We are continuously assuming the people we encounter, interact with, or otherwise have to predict the behavior of, are following some sort of rules; rules that differ by the role or roles they appear to have selected for themselves, but rules nevertheless. And the rule-set we perceive someone to have taken on is another environmental affordance, creating some allowances and blocking others. This is as objectively real as that rock and its size; we can be wrong about it (deceived as to what rules they intend to follow; analogous to being deceived as to the actual weight of the rock), but there is nevertheless some objectively real fact we are attempting to get at (psychology is reducible to neurology, after all).

Faria Costa also explores social identity theory, which has similar impacts. People make a lot of decisions based on their assumptions regarding what is appropriate to their social identity, sometimes even to their detriment. Think of Trump voters voting against their own interests because of a perceived shared identity, like tanking a DNC proposal that would help them merely because it was a DNC proposal (“we are against whatever they are for”). You can then use this to predict people’s behavior, and thus coordinate your own behavior with theirs as needed, all without ever thinking of each other as a team (like a stock broker cashing in on an options bet based on that voting behavior, which requires no team thinking; indeed, she might be laughing her ass off at the gullible idiots whose stupidity just made her a bundle, while they might never even know how she exploited it). Thus, when we rely only on identifying a person’s normative position in a maze of social roles, or their social identity, or both, to predict their behavior, “team reasoning” (the idea that “we” are doing this; thinking you are part of a group with shared goals) isn’t actually needed to coordinate action. It’s otiose.

None of this is novel. Faria Costa is reiterating well-established principles in psychology, sociology, philosophy, and logic. What Faria Costa wishes to contribute is a solution to a very particular problem that has arisen amidst all of this: how “team reasoning,” and thus group agency (the notion that “we” accomplished something), can remain useful (and indeed objectively real). Because:

The conditions for team reasoning should not enable individual reasoning to solve the dilemmas, otherwise team reasoning would not be necessary. As long as it is possible to solve the dilemmas without using team reasoning, then, using Ockham’s Razor, we are better off by not using team reasoning at all.

In other words, we could just ditch the “we” and simply coordinate individual selfish actions. There really is no “we” in that case. It’s just an illusion or delusion. Worse, it wouldn’t even be useful as an illusion or delusion. If it were useful, then even though false, making people believe it is true would still somehow motivate them to act as a team. But how would even that work? This is the link in the chain of reasoning that Faria Costa is trying to fill. It just so happens that once he has filled it, the missing link ends up being objectively real. The thing he is talking about isn’t a convenient motivating lie. It’s an actual fact.

To get to his solution, Faria Costa digresses into defining what he means by various terms, from “environmental affordances” to “ownership of agency,” as any good analytical philosopher must do. He then uses the concepts he just defined and built to solve the problem he identified. Two people staring at a giant rock in their way can perceive the situation as one in which each of them is “an independent, isolated agent,” albeit aware of each other’s agent-perspective. I can perceive my affordances (my strength in relation to the rock), and your affordances (your strength in relation to the rock), and I could imagine, say, paying you to apply your affordances to stack with mine, to move the rock. But then we aren’t imagining ourselves as a group or a team, but as self-interested individuals coming to an arrangement; and even then, I am paying you. It’s a one-sided arrangement. And it’s still activated by assumed social norms (that we won’t cheat each other in the transaction, for example). That means, “absent social norms or shared values, there is no room for” either of us “to feel entitled to demand anything from the other.” If I have no way to pay you (nor you me), we’re stuck there just staring at a rock in our way.

But (emphasis again added)…

When a person perceives as a group member, she frames her agency as part of the interaction between the group and the environment; the group with its own collective mosaic of affordances. She perceives the possibilities of interaction as the group’s possibilities … [and] this triggers a feeling of joint ownership of the agency. … She will frame the actions of the other person as contributions to the group’s performance [and feel] each is entitled to demand the other to behave accordingly. … This means that the agency transformation, i.e. perceiving myself as part of a group agent, triggers a feeling of normative unity.

The group becomes an entity of its own, capable of things each individual member is not—but only if all (or enough) of the members perceive themselves the same way, as part of a group with its own goals and abilities, as distinct from just the goals and abilities of each individual. If you and I are staring at that rock and we both want it out of our way, and then perceive this as a group goal, something “we” want, and then adopt the corresponding normative stance—you should help me, I should help you—then we will be motivated to just team up and move the rock. It is not enough to perceive that we could do this. We must perceive that we should do this. Then we will do it. Because I expect you to share the same attitude; and you expect the same of me. We then perceive what we are doing as a joint effort, something “we” did, not something just you or I did. The unit we form has its own agency; and we correctly see ourselves as each owning a part of it. We’ve gone from “I have to move this rock” to “we have to move this rock,” and from “I got that rock moved” to “we moved that rock.”

Hence the unique powers of the group only objectively materialize when the members of that group step into that normative perspective. As long as they don’t, the group powers never materialize, and thus aren’t real, just hypothetical. But as soon as we do step into that normative framework, those new affordances become real. We could sometimes effect the same result without that (like, me hiring you to help me), but team reasoning is more versatile (I might have nothing to pay you with). And it is thus normative thinking about group agency that objectively creates group agency. This does not require taking any stance on objective morality, because all it requires is an understanding of imperatives as hypothetical, which everyone agrees objectively exist: it can be objectively true that if we do certain things, then certain other things will result. That’s certainly a fact. So if we want to access the affordances of group agency, then we must adopt certain attitudes. Therefore, conversely, if we adopt the normative stance of team reasoning, then we will have access to the affordances of group agency. The attitude itself creates the material outcome.

That this is also how moral truth arises from the facts of the world is not something Faria Costa argues; though I have. But he doesn’t need to. Even a purely relativist understanding of normative propositions gets his result. Which is why Shelly Kagan was able to secure a rare trouncing of William Lane Craig on the Moral Argument by simply sticking to an articulation of social contract theory. An objective moral stance will arise from a subjective understanding of your options in any social system. Just as here: once you understand that adhering to certain norms, and expecting others to as well, will create physical powers of group agency that otherwise won’t exist (and yet that you have continual need of), you have all the rational justification you need for adhering to those norms and holding others to account for them. I have in the past described morality as a technology of social cooperation. Faria Costa has essentially provided the logical and ontological basis for that notion: morality creates group affordances, and thus is itself a group affordance, something available to members of a society to choose, to accomplish mutual goals. Hence my designation of what I believe to be the correct moral philosophy as Goal Theory.

Bayesian Analysis of the Faria Costa Thesis

Is Dr. Faria Costa right? How can we evaluate his thesis and argument so as to answer that question? His case depends on two essential prongs of analysis: one is strict analysis, the mere logical breakdown of concepts and possibilities; the other is empirical analysis, how as a matter of empirical fact the world actually works, and whether it aligns with what Faria Costa is arguing—or whether there is something he has overlooked or gotten wrong. Bayesian reasoning requires establishing a prior probability (usually what you are proposing needs to be in line with all human background knowledge, e.g. scientific and other prior findings; it needs to be “typical” or “expected” or “in line” with past cases) and a favorable likelihood ratio: which means the evidence you present for your theory, collectively, has to be substantially more probable (more what we expect to find) if your theory is true than if it is false.
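In the odds form of Bayes’ Theorem, those two requirements are simply the two factors on the right-hand side, where h is the hypothesis, b our background knowledge, and e the evidence presented:

$$ \frac{P(h \mid e, b)}{P(\neg h \mid e, b)} \;=\; \underbrace{\frac{P(h \mid b)}{P(\neg h \mid b)}}_{\text{prior odds}} \times \underbrace{\frac{P(e \mid h, b)}{P(e \mid \neg h, b)}}_{\text{likelihood ratio}} $$

A thesis ends up probable only when the product of those two factors favors it: a reasonable prior (fit with background knowledge) multiplied by evidence that is more expected on the thesis than on its negation.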

In respect to strict analysis, I do not see any errors in what Faria Costa has articulated. He hasn’t produced any contradictions or gaps in describing the option-space. There are no pertinent concepts he has overlooked; he was quite thorough. In Bayesian terms, if he had erred in the analytical aspect of his thesis, it is probable that a diligent thinker would spot that, and thus improbable that none will be detected. Not impossible, however; philosophy is much less certain an enterprise than any science. The probability Faria Costa has erred in some way is not vanishingly small, and thus we should keep a serious eye out for (and always ourselves mull over) counterarguments. But provisionally, it looks pretty secure.

In respect to empirical analysis, Faria Costa leans on prior studies and established understanding in scientific and analytical fields, which lends a decent prior probability to his position (nothing he argues contradicts, and all of it has reasonable support from, other sciences and findings), and though the bulk of his case rests on numerous thought experiments (hypothetical scenarios), the sheer number and variety of them, and the fact that they all track real-world situations and not preposterous ones, lends strong credence to his case. He correctly brackets possible exceptions (variants of perceived group agency that don’t involve coordinated action, for example). And it does not appear that he has left out any obvious counter-examples. So in the end it seems improbable that he has incorrectly described human behavior (individually or socially). And since his conclusion follows from that and his conceptual analysis, the evidence he presents is reasonably improbable unless his thesis is correct. Again, future counter-studies remain possible. So we must take this conclusion provisionally. But it’s sound so far.

Conclusion

Faria Costa claims that “the main argument I am making in this paper is that agency transformation (from individual-reasoner to team-reasoner) will also involve a normative transformation,” and he has presented adequate evidence of that, and I can think of no counter-evidence from my own experience or prior research. He also “argued that we can use the notion of affordance to analyze human behavior,” which likewise holds up. His final step, the core of his thesis, “is that this notion of affordance can help us to explain the normative aspect of the agency transformation.” That, too, tracks. He recognizes the problem: “If affordances themselves cannot be normative,” and indeed, by themselves, they aren’t, “and if the normative background,” like social-identity theory or positional rules-following, “is insufficient to explain team reasoning,” then “this means that I have a double challenge” to meet. Namely, “I have to explain how the affordances of an interaction can involve a normative aspect, which supports team reasoning without rendering it otiose,” and do so “without referring to social norms.” So what we want to know is: has he accomplished that final double-step?

Faria Costa argues “that a person feels an association with her performance according to her mosaic of affordances,” which he calls “feeling ownership of the action.” This appears to correctly describe the psychological facts of people engaged in team reasoning; and indeed he cites some prior science and analysis supporting it. Though I would prefer more. A search of “group agency” on Google Scholar returns thousands of results, and though many are on different questions concerning the matter, I do wonder if any will challenge or support Faria Costa’s case that “perceiving as a group member means that you will contrast your agency with the group’s mosaic of affordances” and it is this that “triggers the feeling of joint ownership of the agency,” which involves “the feeling that we are accountable for one another concerning the group’s performance,” which is a straightforward description of what we mean by adopting a normative framework. This appears correct. It is improbable that he could adduce so many (albeit hypothetical) examples illustrating this, which match our own personal experiences with feeling this way and seeing and hearing others do as well, unless his analysis was accurate. But a more thorough review of the science (and a proposal for any further science if current science is lacking) would improve the quality of Faria Costa’s contribution.

That aside, the crucial component of his position is this: “It is possible to achieve this feeling without having prior or future interactions and without the existence of previous agreements or social norms.” In other words, he argues, we can expect any rational agent to be able to perceive the affordances of group agency and understand the need to select them, and thus can justifiably rebuke or think ill of anyone who then doesn’t. They are irrational; they are letting everyone down; they are shooting themselves in the foot; or they lack basic mental competencies. This remains objectively the case even if there is no prior agreement or communicated or negotiated understanding (though it can still arise from both); which is why we can be justified in feeling this way. This seems correct. Empirically, we do do this; people really do think this way. That’s less probable unless he was right. And analytically, it is sound. Once we define “rational person” it does indeed follow that they will have access to the same facts and be able to work out the same conclusions. Most people are not, of course, consistently rational. But most can be led by reasoning to realize what Faria Costa is saying is true: normatively framing group agency creates “affordances” that won’t exist otherwise. Selfish approaches (like one person paying for another’s aid) exist, but they have real limitations that don’t exist for motivated team reasoning, and motivated team reasoning does seem to depend on this normative understanding of being part of the team.

I have often made this point, without understanding all the research and conceptology Faria Costa has valuably collected and summarized, in recounting my experience in boot camp for the United States Coast Guard. Boot camps often involve perplexing cruelties like punishing the entire unit for the mistake of one person. But our Training Instructor always explained the reasoning behind every such supposed injustice: in combat, at sea, on any rescue or law enforcement mission, if one person drops their end, everyone dies (or loses, or suffers, or otherwise gets slammed with a cascade of dangers or labors that otherwise would have been avoided). One of the purposes of boot camp, as our TI continually explained, was to get us to start thinking as a unit rather than as individuals—in other words, to recognize that there are affordances available only to group agency, and that we could only succeed (and stay alive, win, make life easier and safer—and save lives and catch criminals) if we activated those affordances; and we can only do that reliably if we believe we are a team and expect of each other and ourselves what that entails. Without those beliefs, those particular affordances won’t be available.

In other words, access to the extraordinary capabilities of a combat-and-rescue unit requires adopting Faria Costa’s normative stance, our mutual “ownership” of group agency. There is no other way to obtain it. And this appears to be an objective, ontological fact about any sentient thinking machines. We did have to be taught that. But once we understood it, we could use it. We had learned the way to access those affordances. And it was exactly as Faria Costa describes: seeing group agency as distinct from individual; and adopting the normative stance needed to motivate it. We “could” rely instead on just trying to predict individual selfish behavior and coordinating our actions with it (rule-following), but this approach is exhausting, unreliable, and limited. Likewise if we relied instead on social-identity theory (“Do what you are told because that’s who you are now” or “Don’t let us down because you are one of us”), which suffers from many more failure modes.

Indeed, such thinking can result in actually overlooking group-agency affordances, and thus acting in ways ultimately corrosive to the group: like when the military or police “protect their own” from justice, incorrectly perceiving this as good for the group, when in fact, ultimately, it undermines everything that the group even exists for, and produces societal “backlash effects” that also undermine its every goal. Whereas recognizing “justice is our group’s goal,” and then normatively holding each other to that, will access all the related affordances. This is when you get military or police who actively root out bad actors in their ranks because they have violated the very norms necessary for successful achievement of the group’s goals. They perceive them as acting against the group, because the group is then defined by what its actual goals are, and not by the individual’s “social identity” or expected “rule-following.”

This is ironically illustrated by reversing the pronouns of Faria Costa’s formulation in a scene in the television series Firefly. Spoilers. But here goes. Mal, the captain of a spaceship, catches Jayne, his crew’s muscle, selling out fellow crew members for cash, and feels entirely entitled to kill him (by spacing him in orbit). He ultimately decides to give Jayne a second chance. But not before giving him a lesson in Faria Costa’s theory. Mal’s speech through the comlink, staring at Jayne through a window as the airlock door sits open while the ship ascends to orbit (so Jayne’s death is quite viscerally imminent), is simply this: selfishly acting against any of his crew is acting against him. “You did it to me. That’s what you don’t understand,” Mal says. “These people are part of the crew now, so you did this to me.” Mal is speaking as if he were an individual (his pronoun is “me”), but his intent is quite clearly the reverse: he means you did this to us. Mal is one of the “us.” In fact, as captain, he represents the group and its agency.

Getting Jayne to recognize that acting against the group simply is acting against each individual in the group (including someone he might actually fear crossing, like Mal) is precisely the kind of “group agency” thinking that Faria Costa is arguing for, and indeed arguing is objectively justified. It is true that the group has goals that are undermined by selfish action; it is true that those goals can only be achieved if each member sees themself as part of the group, and their actions as the group’s actions, and holds each other to account for that. Mal felt justified in holding Jayne to account for this reason. And we, the audience, concur. Because objectively, measuring the situation according to the actual material goals of the group and what it can and cannot achieve without its members agreeing to certain norms and holding each other to them, Mal is correct. He is rationally justified.

This is fiction, of course. And one can debate whether murder would be the correct choice in that case. The scenario imagined in that episode is bizarre relative to ordinary social situations we are familiar with—these are people in a highly dysfunctional and perilous social system—so it won’t necessarily track the same recourses. But the overall model does track reality: Mal is justified in holding Jayne accountable in some fashion, for the very reason Faria Costa articulates, because he has to in order to access the group’s affordances; and the lesson Mal needs Jayne to learn will unlock material affordances otherwise unavailable to the group (or indeed anyone in the group) if Jayne does indeed learn it. Thus to access the utility of “group agency” simply requires normative reasoning. And I think Faria Costa does a good job of explaining this by exploring and ruling out all alternative causal models of it.

§
