Macrocognition

Full Title: Macrocognition: A Theory of Distributed Minds and Collective Intentionality
Author / Editor: Bryce Huebner
Publisher: Oxford University Press, 2013


Review © Metapsychology Vol. 18, No. 32
Reviewer: Olle Blomberg

We frequently talk and write about organizations and corporations believing, planning, and wanting things. Could they literally be in such mental states? Could a collective be minded? In Macrocognition: A Theory of Distributed Minds and Collective Intentionality, Bryce Huebner argues that they could.

        The book has two parts. In the first, Huebner presents a general theory of “system-level cognition” drawn from computational cognitive science. This provides a theoretical framework applicable to human and non-human animals alike, as well as to groups, from honeybee colonies to collaborative high-energy physics research teams. In the second part of the book, Huebner deploys his theory to defend the claim that various kinds of collective minds are not only possible but already among us. Here, Huebner engages with virtually all of the arguments that have been raised against this claim. While I will highlight what I think are some weak points below, in general, Huebner successfully addresses these arguments. The book provides a sustained defense of the idea that some collectives are genuinely cognitive systems.

        In the first part of the book, Huebner sets out three plausible constraints (or principles) that an account of collective mentality ought to satisfy:

Principle 1: Do not posit collective mentality where collective behaviour results from an organizational structure set up to achieve the goals or realize the intentions of a few powerful and/or intelligent people. (p. 21)

This principle should be understood as ruling out cases where some agents are using an organization as a tool, in such a way that all the intelligence in the organization derives from the deliberation and decision-making of those few agents. In such cases, we gain no explanatory leverage by treating the organization itself as an intentional agent.

Principle 2: Do not posit collective mental states or processes where collective behavior bubbles up from simple rules governing the behavior of individuals; intentional states, decisions, or purposes cannot legitimately be ascribed to such collectivities. (p. 23)

        Many collective effects (the formation of traffic jams, say) can be explained simply by appealing to individuals acting according to simple rules. There is no need to posit anything like a collective cognitive agent. On the other hand, it seems reasonable that we should also avoid positing such an agent when the collective is merely a puppet that inherits all its intentionality from members who intend it to act as a rational agent. This brings us to the third constraint:

Principle 3: Do not posit collective mental states where the capacities of the components belong to the same intentional kind as the capacity that is being ascribed to the collectivity and where the collective computations are no more sophisticated than the computations that are carried out by the individuals who compose the collectivity. (p. 72)

In the context of the philosophy of social ontology and collective intentionality, this constraint is controversial. It excludes, for example, Christian List and Philip Pettit’s (2011) influential account of group agency (itself controversial of course, as any account of group agency will be!). According to List and Pettit, some groups (such as juries, committees and corporations) are rational agents in virtue of some group members intentionally acting together to enact “a single system of belief and desire” (2011, p. 34). List and Pettit show that in order for the group to satisfy standards of rationality regarding a set of interconnected propositions, the group’s judgments or preferences with respect to these cannot simply be determined by what the majority of the group members judges or prefers. Instead, the members must “collectivize reason” and adopt some other decision procedure, such as first using majority voting on some of the propositions (the premises) and then inferring what the group’s stance is with respect to other propositions (the conclusions).

        For example, suppose that three judges must decide whether a defendant has breached a contract. The defendant is liable only if, first, a valid contract was in place and, second, what the defendant did actually constituted a breach of it. Now, suppose that the judges’ attitudes regarding this case are the following:

             Contract?   Breach?   Liable?
Judge 1:     Yes         No        No
Judge 2:     No          Yes       No
Judge 3:     Yes         Yes       Yes
------------------------------------------
Majority:    Yes         Yes       No

        If the group’s stance regarding each of these three propositions is decided by majority voting, then the group will not be acting as a rational agent, even if each member is rational. However, if the judges adopt a premise-based decision procedure, that is, if they derive the conclusion (Liable?) from majority votes on the premises (Contract? and Breach?), then the group can live up to ordinary standards of rationality. Intriguingly, though, the group’s judgment regarding the liability of the defendant will then be discontinuous with what the majority of the group members judges.
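        To see concretely how the two procedures come apart on the table above, here is a minimal sketch in Python; the encoding of the votes and the helper names are my own illustration, not anything drawn from List and Pettit’s text:

```python
# Each judge's votes on the two premises; the conclusion (Liable?)
# follows by conjunction: liable iff valid contract AND breach.
judges = [
    {"contract": True,  "breach": False},  # Judge 1
    {"contract": False, "breach": True},   # Judge 2
    {"contract": True,  "breach": True},   # Judge 3
]

def majority(votes):
    """True iff a strict majority of the votes is True."""
    return sum(votes) > len(votes) / 2

# Conclusion-based procedure: each judge derives her own verdict,
# and the group's verdict is the majority of those verdicts.
individual_verdicts = [j["contract"] and j["breach"] for j in judges]
conclusion_based = majority(individual_verdicts)

# Premise-based procedure: take majority votes on the premises,
# then let the group infer its verdict from its own premise judgments.
premise_based = (majority([j["contract"] for j in judges])
                 and majority([j["breach"] for j in judges]))

print(conclusion_based, premise_based)  # False True: the procedures diverge
```

The divergence of the two outputs is just the table’s bottom row: the group’s premise-based verdict (liable) contradicts the majority of the members’ individual verdicts (not liable).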

        Adopting a liberal functionalist conception of mental states, List and Pettit argue that groups that have collectivized reason in this way ought to be treated as rational agents that exist, in a weak sense, over and above their members. Furthermore, once the group is recognized and treated as an agent by its surrounding community, there will be normative pressure on the group’s members to continue to uphold collective rationality and, as a collective, to take responsibility for the group’s judgments and actions.

        This account of group agency does not satisfy Huebner’s third constraint, Principle 3. Each judge has the capacity to vote on the premises and infer the conclusion, and this capacity is of the same “intentional kind” as the capacity that is ascribed to the group. Like List and Pettit, Huebner argues that only groups whose goal-directed behavior makes sense from the perspective of the intentional stance have collective mentality (that is, only groups whose behavior can be predicted and made sense of when treated as rational). It must be possible to consistently and systematically ascribe beliefs and desires to the group in such a way that the group’s behavior makes sense in light of those beliefs and desires. But according to Huebner, while this is a necessary condition for collective mentality, it is not sufficient: “[W]e should only posit mentality where interfaced networks of computational systems are jointly responsible for the production of system-level behavior.” (p. 13) According to Huebner, since this is the most plausible general story about cognitive architecture around, it provides our best current understanding of what cognition is. If groups are genuinely cognitive systems, then they should arguably be implemented in this kind of architecture. If I understand Huebner correctly, it is these considerations about cognitive architecture, rather than considerations about explanatory superfluity, that motivate Principle 3.

        One example of such a genuinely cognitive system is a navy navigation team studied by the cognitive anthropologist Edwin Hutchins in the 1980s. Huebner argues that this team, together with its various tools, is both interpretable from the intentional stance and made up of an integrated network of specialized computational subsystems. Now, Huebner uses several other case studies to illustrate and defend his view, but I will look at the case of the navigation team in some detail.

        At one point during Hutchins’ fieldwork on a US navy ship, the vessel was near land in restricted waters. In such circumstances, a team of about five people is involved in fixing the position of the ship, which has to be plotted on the chart (map) at intervals of only a few minutes. To fix a ship’s position, two lines of sight from the ship to known visual landmarks have to be drawn on the chart. Simplifying slightly, “the fix cycle” runs as follows: with the help of telescopic sighting devices called alidades, two “bearing takers” each determine the bearing (direction) of one landmark. These bearings are then reported over a telephone circuit to a “bearing timer-recorder”, who jots them down in the bearing log. The “plotter”, standing beside the bearing timer-recorder, then plots the lines of sight on the chart to determine the ship’s position. The ship’s position in the world should then correspond to where the lines intersect on the chart.

        Hutchins glosses the work of the navigation team as a socially distributed computational activity:

The task of the navigation team […] is to propagate information about the directional relationships between the ship and known landmarks across a set of technological systems until it is represented on the chart. Between the situation of the ship in the world and the plotted position on the chart lies a bridge of technological devices. Each device (alidade, phone circuit, bearing log etc.) supports a representational state, and each state is a transformation of the previous one. Each transformation is a trivial task for the person who performs it, but, placed in the proper order, these trivial transformations constitute the computation of the ship’s position. (1990, pp. 206–7)
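        To make this computational gloss concrete, here is a minimal sketch of the last transformation in that bridge: plotting the fix as the intersection of the two lines of position through the sighted landmarks. The coordinate scheme, numbers, and function names are invented for illustration, and real plotting involves corrections (for gyro error, chart projection, and so on) that are ignored here:

```python
import math

def line_of_position(landmark, bearing_deg):
    """The ship lies somewhere on the line through the landmark whose
    direction is the observed bearing (degrees clockwise from north).
    Returns the landmark and a unit direction vector in (east, north)."""
    theta = math.radians(bearing_deg)
    return landmark, (math.sin(theta), math.cos(theta))

def fix_position(landmark_a, bearing_a, landmark_b, bearing_b):
    """Plot the fix: intersect the two lines of position."""
    (ax, ay), (adx, ady) = line_of_position(landmark_a, bearing_a)
    (bx, by), (bdx, bdy) = line_of_position(landmark_b, bearing_b)
    # Solve A + s*dA = B + t*dB for s by Cramer's rule.
    det = adx * (-bdy) + bdx * ady
    if abs(det) < 1e-9:
        raise ValueError("bearings are parallel: no unique fix")
    s = ((bx - ax) * (-bdy) + bdx * (by - ay)) / det
    return (ax + s * adx, ay + s * ady)

# Invented example, in (east, north) nautical miles: one landmark due
# north of the ship (bearing 000), another to the northeast (bearing 045).
print(fix_position((0.0, 5.0), 0.0, (4.0, 4.0), 45.0))  # ~(0.0, 0.0)
```

Each step in Hutchins’ chain (sighting, reporting, logging, plotting) corresponds to one such trivial transformation; only their proper coordination computes the ship’s position.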

        Now, Huebner claims that when we encounter a goal-directed system with this kind of distributed computational architecture, there is “prima facie evidence” that we have encountered “a genuinely cognitive system” (p. 168). (Indeed, in his monograph Cognition in the Wild, Hutchins himself went on to claim that what he had studied was a socially distributed, goal-directed cognitive system.) What remains to be shown, according to Huebner, is that the propagated representations aren’t merely recordings that are interpretable by the human agents in the system, but that they are (also) representations for the system. If this can be shown, then we have reason to treat the distributed computational system as a genuinely cognitive system.

        But isn’t it the plotter rather than the navigation team who represents the ship’s position? In discussing this case, Huebner writes:

No individual crew member represents the ship’s location, but the output on the chart represents the location as a result of the coordinated activity of various distinct subsystems. (pp. 154-155)

        Huebner is aware that this is unlikely to persuade the sceptic. While it is true that the plotter cannot reliably represent the position without the output of the various other subsystems that are “upstream” in the fix cycle, it is arguably still the case that it is the plotter who represents the ship’s location by interpreting the intersecting lines on the chart as indicating the ship’s position in the world. Here, Huebner makes an intriguing Vygotskian move. He argues that it is a mistake to think that public representations (such as the intersecting lines drawn by the plotter) only have content that is derived from the interpretations given to them by natural persons, interpretations that themselves have “intrinsic” or “underived” content. Rather, our interpretations of public representations are themselves derived, in part from subpersonal neural representations, in part from the shared norms and standards that enable us to correctly “produce” and “consume” the public representations (p. 171).

        A strategy Huebner often adopts when responding to arguments against the idea that collectives can be minded is to show that if the argument is sound, then it is equally sound when levelled against the “hypothesis” that individuals are minded. For instance, Huebner responds in this way to arguments to the effect that positing collective mental states is explanatorily superfluous. If one thinks that the need for positing collective mental states is eliminated because the behavior of a collective can always be explained by appeal to the make-up of individuals and their interactions, why isn’t the positing of individual mental states equally superfluous? After all, the conduct of an individual can be explained by appeal to the interaction of various computational subsystems with an external environment. Huebner responds in a similar way to the objection that public representations are representations for the individuals in a group rather than for the group itself: he equates what is going on within the group with what is going on inside the individual. Just as personal-level public representations are meaningful in virtue of shared standards and norms, so subpersonal representations have their content in virtue of some kind of “norms” (semi-demi-norms?) regarding how they should be produced and consumed by subsystems. For example, the activity of neurons in the visual cortex may represent the presence of vertical lines in the visual field in virtue of how the output of those neurons should be “consumed” by downstream subsystems in order for the visual system to effectively guide adaptive behavior.

        I find this quite convincing as an argument for why it is legitimate to take personal-level representations, like the intersecting lines on the plotter’s chart, as playing a kind of dual role. The intersecting lines constitute both a personal-level representation (when interpreted by the plotter) and what we might call a “subsystemic” representation that plays a certain functional role within the organization of the navigation team. (I argued for something similar in Blomberg 2009.) Huebner’s core project is to argue that many collectives exhibit “kinds of collective mentality that can be studied with the tools of cognitive science” (p. 40). As he convincingly argues, we have good reasons to accept that such collectives exist. However, it is important to keep in mind that this is actually quite a modest conclusion.

        Thus far, the conclusion is that the navigation team is a cognitive system in the same sense that a honeybee colony (see sect. 8.2) or a neural subsystem that subserves face recognition is a genuinely cognitive system. The intersecting lines on the chart represent the location of the ship for the system in the same sense that the waggle dances of forager bees represent the quality, distance and direction of various foraging sites within the context of the activity of the colony as a whole. However, Huebner hasn’t established that the lines represent the ship’s location for the system in the sense in which they represent this location to the plotter. For the navigation team to represent the location of the ship in that sense of ‘represents’, the team itself would have to be sensitive to the norms and standards in virtue of which the representation gets its meaning. And for evidence that it exhibits such sensitivity, one should arguably focus on how the navigation team itself interacts with other agents (individuals and other collectives) rather than on the computation and integration of information that takes place inside the navigation team. At the end of the book, Huebner briefly attends to such matters, namely the possibility of collectives being sensitive to norms and responsive to reasons, although he does so using different case studies. In an interesting (if tentative) discussion of collaborative authorship and knowledge production in high-energy physics, he argues that some collaborative research teams may be persons that can take moral and epistemic responsibility for the claims they make (via spokespersons and in scientific papers).

        Macrocognition is clearly a valuable addition to the literature on distributed cognition and the philosophy of cognitive science. However, beyond this, I sometimes lost track of why Huebner thinks that it matters whether or not distributed computational systems are minded. At one point, Huebner writes:

[T]he theory of distributed cognition must provide a strategy for distinguishing between cases where coordinated activity yields genuinely collective representations and cases where coordinated activity arises because individuals exploit shared and public resources as they carry out socially manifested behavior of some sort. (p. 178)

        But it is not obvious why the theory must provide this. Here, I think the book would have benefited from a more explicit discussion of why distinguishing genuinely cognitive systems from other distributed computational systems is important. If the question of whether a system is minded concerns whether it is conscious or whether it is a morally and epistemically responsible person, then it is clear why we should pay attention. When the question is whether it is a cognitive system similar to, say, a honeybee colony, then it is much less clear what is at stake.

        There are good reasons why philosophers and cognitive scientists should take an interest in distributed computational systems. For instance, such an interest has the potential to transform our self-understanding. An overarching point that Hutchins (1995) makes is that classical cognitive science has (or had) mistaken the properties of certain socio-cultural systems for the properties of individual human beings. Hutchins argues that cognitive science needs “cognitive ethnography” in order to get a proper functional specification of human cognition. We need to know what kinds of tasks people face outside the captivity of the laboratory, and which tools they use to get them done. Similarly, by focusing on how we make sense of corporations and hold them accountable in terms of their beliefs, desires and intentions, we may be led to revise our view of the function of such folk-psychological sense-making in general. Rather than merely a tool for predicting the behavior of others, including individual human beings, perhaps its function is partly akin to that of a corporate PR spokesperson: a tool for making oneself and others more predictable (as Austen Clark (1994) argues). Huebner refers to this and much other literature (e.g. work on the socially distributed nature of autobiographical memory by John Sutton and colleagues) from which such lessons concerning ourselves could be drawn, but he doesn’t highlight them. Moreover, none of these wider lessons (that go beyond the philosophy of cognitive science) depend on whether or not the distributed computational systems of which we are parts are themselves minded. Or so it seems to me.

        Perhaps Huebner could say here that in order to get a proper understanding of collective persons and collective responsibility, the best line of inquiry is to start by trying to understand more minimal forms of collective mentality, and then build on that. However, it is not entirely clear what the connections are between questions about collective personhood and responsibility on the one hand and issues concerning what is required for a collective to be a genuinely cognitive system on the other. Couldn’t a corporation be a person that we can sensibly attribute moral responsibility to (and that can itself sensibly take such responsibility) even if it isn’t a genuinely cognitive system? It is not obvious to me that it couldn’t. Why would it matter that the corporation doesn’t implement a certain cognitive architecture if it robustly behaves and responds to other agents in normatively appropriate ways? I’m not convinced that collective persons need to be genuinely cognitive systems in a way that matters to cognitive science. Rather, I’m inclined to think that Huebner’s project and that of e.g. List and Pettit are at least partly orthogonal. At any rate, more work is needed to connect the dots between these different kinds of projects.

        That being said, an interesting feature of the book is precisely that it engages with issues in both the philosophy of cognitive science and in social ontology. Typically, these two streams of literature do not interact much, despite the fact that there are clearly some shared concerns (works by Deborah Tollefsen, Thomas Szanto, Georg Theiner and Rob Rupert are some exceptions). Anyone interested in this intersection should read Huebner’s book.

References

Blomberg, Olle (2009). “Do Socio-Technical Systems Cognise?” Proceedings of the 2nd AISB Symposium on Computing and Philosophy: 3–9.

Clark, Austen (1994). “Beliefs and Desires Incorporated.” Journal of Philosophy 91(8): 404–425.

Hutchins, Edwin (1990). “The Technology of Team Navigation.” In Intellectual Teamwork: Social and Technological Foundations of Cooperative Work, edited by Jolene Galegher, Robert E. Kraut and Carmen Egido, 191–220. Lawrence Erlbaum Associates.

Hutchins, Edwin (1995). Cognition in the Wild. MIT Press.

List, Christian and Philip Pettit (2011). Group Agency: The Possibility, Design, and Status of Corporate Agents. Oxford: Oxford University Press.


© 2014 Olle Blomberg


Olle Blomberg is a Ph.D. student in Philosophy at the University of Edinburgh (UK) and a freelance journalist. He is interested in the philosophy of social and cognitive science and the philosophy of technology, as well as in science and technology journalism. For information about his freelance writing, see http://www.olleblomberg.com/english.html. Information about his Ph.D. research can be found on his University of Edinburgh web page [http://www.philosophy.ed.ac.uk/postgraduate/students/phd/OlleBlomberg.html].