Last month I launched my three-part series on analyzing peer-reviewed philosophy papers with my Bayesian Analysis of Faria Costa’s Theory of Group Agency, where I explain my process and how I selected the articles for review. Second up is “Uncomfortably Close to Human: Robots and the Neocolonial Politics of Care” by Shelley M. Park, in the latest issue of Feminist Philosophy Quarterly. As noted previously, Park is a professor of philosophy and cultural studies at the University of Central Florida with a substantial publication record in the field. Her paper contains numerous assertions; I will focus only on its essential components. As for minor points I don’t address but might take some issue with, I can vouch that, even after adjusting them in light of any critique I might give, they would remain correct. Any fatal criticisms I have are given here.
The Park Thesis
Park analyzes the socio-cultural trope of “caretaking robots” (often captured in popular fiction; she uses the television show Humans as her model), finds that it draws upon colonial ideas of a servant-slave economy, argues that this should (and often does) worry us, and employs this example to argue for the moral utility of what is known as the “uncanny valley” effect. So one thesis she aims for here is the generalization: that the cognitive phenomenon of revulsion or unease at “the uncanny” can be a useful moral evaluator. Her example is “caretaking robots” in fiction, and in one modern fable in particular, the show Humans (though she references others: from films like Her, Blade Runner, and Ex Machina to television series exploring possible futures with social robots, like Black Mirror and Westworld). Almost all of her paper’s content argues only the sample case, however, not the generalization. So I will evaluate the latter, though she herself may not be as confident of it as of her one analyzed example.
Nevertheless, understanding Park’s argument requires some acquaintance with “the uncanny valley effect.” As Park explains, this was first articulated by robotics engineer Masahiro Mori (the more general concept of “the uncanny” was famously explored earlier by Sigmund Freud). There is a good article on it at Wikipedia, discussing ongoing scientific research on and criticism of the concept, much of which Park also takes into account. Wikipedia’s description is apt:
Mori’s original hypothesis states that as the appearance of a robot is made more human, some observers’ emotional response to the robot becomes increasingly positive and empathetic, until it reaches a point beyond which the response quickly becomes strong revulsion. However, as the robot’s appearance continues to become less distinguishable from a human being, the emotional response becomes positive once again and approaches human-to-human empathy level.
The same effect has been seen in human responses to things other than robots. For example, our emotional reaction to cadavers, which fall in appearance between the sleeping and the awake, neither of which evokes the same negative response (usually the reverse). Likewise our shifting emotional responses, depending on where a thing falls on an inverted realism curve, to prosthetics, puppets, dolls, paintings and statues, the deformed or diseased, and certain instances of pareidolia. It might also be found in the exploration of “off” behavior of people, animals, and objects across the entire horror genre, especially in visual media, though novels and other forms of storytelling can evoke it: consider for example Wells’ The War of the Worlds, Lovecraft’s The Colour Out of Space, or King’s Pet Sematary. The science so far appears to be converging on an evolutionary cognitive model whereby the effect is stimulated by the detection of a certain kind of incongruence between appearance and expectation.
Philosophy must not ignore or contradict but build on the findings of the sciences (Sense and Goodness without God, pp. 49-61). So the current scientific understanding of the uncanny valley effect will become a factor in how we evaluate Park’s thesis. But also, to establish her thesis, Park deploys several premises, some where analysis is at issue (what does it mean for something to be a “moral capacity,” what does it mean to say we “should” see or use something a certain way, what does it mean for an emotion to be “useful,” and so on) and some where factual claims will be at issue (like the question of there being any link between colonialism, as a historical cultural reality, and our current uneasiness with echoes of it; and, in turn, with contemporary fictionalization of caretaking robots). One such facet is Park’s reliance on other “postcolonial theorists’ uses of psychoanalytic theory,” which may be the most questionable premise underlying her case (given that Freudian psychology, for all its good ideas, has largely been exposed as pseudoscience; it has been supplanted by far more empirical regimes like CBT).
Understanding Park’s study will also require familiarity with three particular terms of art she is working from: “colonialism,” “neocolonialism,” and “postcolonialism.” Colonialism is a socio-historical phenomenon that built the Western world—whether overtly, as with the once-global British Empire (establishing and controlling “colonies” across the world), or covertly, as with the “United States” being, essentially, a colonialist empire creeping across an entire continent (we just killed all the natives, moved our own people in, and called the resulting colonies “territories” and then “states”). Neocolonialism is a re-entrenchment of the old colonial system under a different machinery of control (e.g. replacing direct control and exploitation of other nations and subordinated peoples with indirect mechanisms of exploitation such as economic, diplomatic, and cultural imperialism). Postcolonialism is the critical study of these phenomena and their impact on current peoples and present states of affairs, especially without taking for granted—but instead questioning—the “unreliable narratives” of what’s happening (and historically has happened) that are typically invented and promoted by colonial powers.
Outline of Park’s Argument
As with my analysis of Faria Costa’s thesis, my summary is no substitute for reading Park’s entire paper. So I recommend you do. It is well worth it. But key steps in her argument are, first, to “examine an alternative hypothesis for the cause of human discomfort with humanoid robots: their lack of appropriate affect,” in other words the particular incongruence between what robots look and act like and what they are actually expected to think and feel; and, second, to establish that companies “convincing consumers that robots have feelings” may be generating another iteration of the “uncanniness” effect: “a psychosocial symptom of an unresolved, traumatic past,” given how we used to similarly rationalize the mistreatment of an exploited caretaker class during the colonialist era (and do still among the upper class in neocolonialism, such as with the use of underpaid non-citizen “immigrants” as caretakers; and among the middle class in its treatment of service industry workers). So Park will “argue that we might best understand human repulsion toward ‘synthetic humans’ as a projection of our discomfort with our own inhumanity” toward those we already exploit (a fact she notes is also often gendered, preserving colonial-era sexism as well). From this she will argue to a normative conclusion: that “consumer desires for care as a marketplace commodity should unnerve us” (emphasis mine).
The way Park frames this is “that our sense of uncanniness when faced with social robots may be a psychic echo of past colonial traumas.” In non-Freudian terms: it may be an artifact of our (often suppressed or evaded) discomfort with our own historical past. And if that’s true, she argues, then this would support our finding possible epistemic value in “our sense of uncanniness” in other areas of life. In other words, this emotional response may be telling us something we need to know and track, as Park argues it is doing in this one case, so we should amplify rather than suppress it—not only here but maybe wherever else we encounter it.
Park does not analyze the epistemic value of emotions generally, but I can bracket that here: whatever else in her case may or may not be sound, it is compatible with my theory of emotions as pre-rational reasoning systems. On that model, it is not the case that emotions are always correct or reliable (hence, for example, “uncanniness,” just like trust, fear, or anger, can be triggered in error), but it’s also not the case that emotions are useless data to be “disregarded” by rational thinkers (Sense and Goodness, pp. 193-207). Emotions are essential sources of decision-making information; they should be treated critically, but that does not mean dismissively (since proper critical thinking aims at correct belief, not universal doubt). We do not have to assume “our sense of uncanniness” is always correct or accurate to nevertheless heed the advice that often it might be, therefore we should pay attention to it.
So the question is…should we?
Analysis of Park’s Argument
Park begins by analyzing how, in practice, real-world social-robot marketing and design has actively avoided uncanny-valley effects, by making sure social robots don’t look too human, or even human at all. She intersperses this analysis with apt observations about design choices; for example, she correctly notes that “the audience” of the movie Her, in which the social AI is voiced by the “sultry” Scarlett Johansson, “would be considerably less likely to accept an operating system as a potential love interest had that system sounded like Edith Bunker or Gilbert Gottfried.” Likewise in Her, one might also note, “the uncanny” was avoided by simply not giving her a particular body at all (and an attempt to rectify that in the film is depicted as inevitably awkward for that reason). IMO, another film illustrating the point is Cherry 2000, where the audience starts out comfortable with the titular robotic sex partner “Cherry” (indeed, without spoilers, she is so convincingly built they won’t yet even know that’s what they are watching)—until her mind begins to show signs of limited consciousness, and her body begins behaving in an inhuman manner, triggering the audience’s “sense of the uncanny.” Then it doesn’t feel right.
Park observes the trend lately has been to build social robots that can “fake” emotions and emotional responses to diminish the feeling of the uncanny, while not risking falling “just short” by trying to make them look too human. Yet this may have created a new problem, as the idea that robots feel things itself starts to become “unnerving.” Park notes that people are in practice more comfortable with robots that have agency (that make decisions and do things) than with robots feeling actual emotions, partly because it is difficult to convince ourselves they really do, so their mimicking emotions begins, again, to look uncanny, thus making us uncomfortable all over again. Quite simply, robots faking emotions is creepy, menacing, cold, or disturbing.
It is here that Park delivers a key step in her argument that I cannot do justice to without simply quoting her (emphasis again mine):
In attributing minds to robots, we do so in selective ways that—for better and often for worse—parallel our habits of mind as humans, including our implicit biases. Feelings of uncanniness may arise when we begin to sense these unconscious biases at work. As the cultural unconscious begins to surface and threatens to erupt into consciousness, we are—and should be—unnerved.
[Because] implicit biases may take many forms but are most typically subconscious associations that cause us to have feelings and attitudes about other people based on characteristics such as race, ethnicity, gender, age, and appearance. It is unnerving to have these revealed as they are often at odds with our explicitly stated beliefs about other groups and individuals and thus also at odds with our sense of self.
On this there is a tremendous amount of scientific confirmation, well beyond even what Park cites (see my discussions of noncognitive racism, for example, in Actually, Fryer Proved Systemic Racism in American Policing; Google Scholar catalogs now literally thousands of studies verifying this for race and gender). So her claim has initial plausibility: the incongruity between our conscious beliefs and unconscious biases and assumptions can be expected to be unnerving whenever exposed to us.
Where Park’s Argument Could Falter
Here Park moves to her case study: the depiction of social robots in the science-fiction series Humans. Her analysis depends very much on the particulars of the milieu constructed in that series, which might not be as portable as Park’s argument requires. In that series, the robots are effectively slaves (indeed a main theme of the series is their rebellion), which is not true of robots whose sentience is not at issue—from the actual social robots marketed today, to, arguably, those marketed in Cherry 2000, or the “earlier models” of the robots in the series Westworld that their creator occasionally blows the dust off of and revisits, to remind himself of the progress he has made. Nor is it true on the other side, of genuinely sentient robots in milieus where they are treated with the same moral regard as ordinary humans.
Park surely would not make her argument about the oldest and most ubiquitous sex robots in human history: vibrators. Nor would it work for any of the articulated sex robots we now have (which are parodied in the film Serenity), which are just as insentient. Conversely, it would not work on the other side of the valley: consensual interactions (business or personal) with sentient androids who effectively share rights, equality, or autonomy similar to humans (as depicted in series ranging from Star Trek: TNG to Almost Human, and even some installments of the Terminator franchise; likewise the film 2010, the sequel to what began as an AI “horror” tale, 2001). In fact, this would hold even for the series Humans that Park herself is using as an example: in that narrative, the robots eventually seek out and pursue consensual business and personal relationships—not only with each other, but with “real” humans as well. So Park’s argument requires something more than the triggers of the uncanny merely being robots. What if what’s uncanny is not that at all—but simply that they are slaves?
This is a central theme of the Blade Runner franchise for example: in that milieu the synthetic people (called “replicants”) do not evoke the uncanny at all (apart from their rare displays of inhuman ability), because they are veritably indistinguishable from humans (this is even articulated as “the problem” with them). What is morally disturbing, rather, is the fact that the “mere trivia” of their being manufactured is used as an excuse to enslave them, so as to treat them as products, exploitable and disposable—much as American slavers employed skin color, fabricated doctrines of racial biology, and myths of “the happy slave” to the same end. This most definitely activates our discomfort when we realize the parallel. But of course in non-racist viewers this discomfort turns to sympathy rather than fear or unease. Once you process the shame you feel for the ancestral and national sins this reminds you of, “the uncanny” may not correctly capture what you feel, either about that history or about the replicants.
So Park may have a problem here. We could have a viable (maybe even simpler and more plausible) theory of our emotional reaction to Humans than the one she is trying to advance here. It exhibits, essentially, the same scheme as Blade Runner; the differences lie in the way the creators use “the uncanny valley” as a tool of art (avoided in Blade Runner), and in the detail with which Humans explores the milieu (such as actually depicting the robots being raped, rather than only casually mentioning it; Pris’s entire history of sex slavery, as also Zhora’s effective past as a child soldier, merits barely one line in Blade Runner, and I’m sure that was a deliberate choice: the authorities’ casual disregard of these horrors is the point of the film); and in the starkness of its depiction: the rape scene Park focuses on in Humans is of the same species but still in significant ways distinct from the more complicated and ambiguous sexual coercion scenario depicted in the original Blade Runner, which notably tracks eerily close to actual ambiguities in many human sexual interactions (particularly in such unenlightened times), and yet hardly anyone noticed this until decades later. What was in the 80s thought a romantic scene is now admitted to be substantially darker; it now makes viewers uncomfortable (though the art is actually better for it).
Society changed. The audience changed. Yet by being so realistic (fully depicting the complex emotions of both parties) the assault in Blade Runner will shock but does not evoke the uncanny. It doesn’t fall into that valley. Whereas the analogous scene in Humans does, and would have even in the 80s, precisely because of the way it plays out: the victim simply lets it happen, emotionally inert. She acts more like the object she is actually being treated as. The starkness of this evokes the uncanny, both in the audience and, as depicted, in the perp. One can say it’s not her “being a robot” that activates that feeling here, any more than the realization that the replicants in Blade Runner are essentially children does (they are mere years old and deliberately kept emotionally stunted; and if you pay attention, you’ll notice the actors consistently perform each replicant’s character as a child). That realization evokes horror, to be sure. But that’s not what we mean by the uncanny. We are not repulsed by the replicants; when we see this, we increase our sympathy for them. They become more human.
Park argues that in Humans, the robot-character “Anita’s unconvincing smiles reveal” the “domestic worker’s alienation from the emotional labor they are expected to perform,” replicating the social reality that, “It is her job to make others happy by appearing happy,” and accordingly, “Her failure to do this convincingly makes those around her uncomfortable.” Park then argues that “it is our own ability to dissemble—to pretend, for example, to care (about strangers, friends, or lovers) when we do not, that should unnerve us.” And so it is in these “unnerving ways” that Humans “reminds us of our own inhumanity,” by which she means humanity’s inhumanity, not necessarily that of individual members of the audience (per her note 15). That is a reality between the privileged and “the help” (from slave days of yore to servant/server days of now), and Humans does capture it in exaggerated relief. That this also connects with Park’s point about our colonialist past is demonstrated in my Bayesian Analysis of Kate Loveman’s Pepys Diary Thesis, where the scenario Humans constructs exactly tracks a demonstrated historical reality. All the artists involved in its production are deliberately evoking these facts; and what they did unnerves precisely because it does that.
On one level all those awkward scenes simply are what they depict: we are being reminded that Anita is struggling to fake her way through her enslavement; and we share the resulting social embarrassment depicted in her owners. But we are not merely feeling awkward because they are. We recognize the sinister element to these scenes: we do not sympathize with her owners’ embarrassment. It shames us, because we recognize the pattern in our knowledge of our past (and for some people, maybe even their present): this reflects an immoral reality of class and slave relations. That we are watching a slave, and her owners awkwardly being made uncomfortable to be reminded of that, is what is unsettling to us.
The show uses evocations of the uncanny to effect this result. Park describes several solid examples. But despite that, we are not repulsed by Anita. We are repulsed by the people using her. The only thing her being a robot changes, from this being a depiction of any servant-master relationship in history, is that she fails robotically (contrast this with the more nuanced and human failures of replicants in Blade Runner, which specifically avoid the uncanny valley). A human slave who failed would be fired, beaten, or killed; or threatened thereby, and quickly get good at faking it. Anita can’t. And this makes her owners uncomfortable. But that is because they are slavers. Certainly the audience, not being slavers, won’t identify with them. Maybe someone of the upper class who relies on underpaid “help,” and floats through life with illusions about how their servants feel, could come to realize, “Oh shit, that’s me.” I wouldn’t count on that happening very often. But the main goal of the art is to remind us all of who we as a people once were and easily could be again (and some maybe even still are), as a moral warning.
Does This Land?
Park does accurately grasp and identify these facts—for example, that despite the ensemble (writers, directors, actors, editors) making sure Anita activates our aesthetic feeling of the uncanny during her rape (as in other scenes of her domestic status), in the end it is Anita’s rapist who comes across to us as creepy, not Anita (and Park goes on to add many other poignant analyses like this). But this means the sense of the uncanny in her case is actually being subverted by its artistic creators, to create an entirely different effect than the one identified by Mori (and studied by scientists). Revulsion at Anita’s treatment is moral revulsion at the way humans throughout history can rationalize their inhumanity (hence the irony in the series title, reminding us to ask who this show is really about). The “uncanny valley” is indeed used as an artistic device in Humans. But the story is subverting the entire notion of it: Anita is not a mindless mechanism; she is not the one making us uneasy. We are being led to look past the uncanny and see her as a person—with sympathy rather than revulsion—in spite of her being uncanny.
Once we understand this, Park’s argument can proceed. She is not arguing that social robots being uncanny is a moral message we need to heed. She is arguing that our feelings of the uncanny can be triggered in ways that imply moral lessons we should pay attention to. As she puts it, “The question is not whether we can design robots to replicate (or serve) humans in less unnerving ways; it is, rather, whether we can transform ourselves (and our desires) in response to being unnerved.” In other words, can we learn something from this? Obviously we can. The team of artists producing the series Humans totally intends this; just as those producing Blade Runner did, only without using the uncanny as a mechanism. So clearly the lesson was conceptualized in the first place, which means it can indeed be learned.
What is the lesson intended in Humans? To Park, “the uncanny arises” in that series “when we sense the parallels between our technological present and our colonial past,” such as in the way we as a people once treated slaves, and sometimes now treat servants (and even members of the service professions generally); likewise anyone marginalized—from imagining all Mexican immigrants are thieves and rapists to thinking any woman who is knock-out drunk, or even just alone with a stranger, “deserves” to be raped, and other things people even today still believe with alarming frequency; and once believed more pervasively, and thus could again someday. Colonialism, as a matrix of narratives and cultural assumptions designed to maintain imperialist power-hierarchies and economies of exploitation, is indeed being evoked here. If it weren’t for our past, we would merely find what is depicted in Humans inexplicably peculiar, an oddity we couldn’t explain. Instead we find it disturbing—it makes us uneasy—precisely because we know that was once us, and could be again.
Park’s recourse to psychoanalytic theory at this point becomes unnecessary to her argument. Excising it leaves the remainder of her argument untouched. So that flaw in her paper can be extracted without harm to the thesis. It remains true that “we are historically entangled in these stories of violence,” and “failing to acknowledge this may—and often does—lead to repetitive injuries in the present,” as we fail to see and thus process and thus learn the lesson. This is true. But Park then concludes that “Encounters with the uncanny may signal that we are engaging in such processes of repetition,” which is the crucial step in her generalized thesis: can she get from the use of the “uncanny valley” in depictions of social robots, to the conclusion that this aids our moral epistemology, helping us perceive moral lessons we need to learn?
The Uncanny Valley as Moral Epistemology?
Generally, I would say, the answer is no. There are too many counter-examples. The main logical flaw in Park’s analysis is confirmation bias: she doesn’t look for counter-examples, much less assess their frequency. First, the use of the “uncanny valley” by Humans is an artistic device that is neither typical (most social robots in art and society don’t track this narrative at all) nor necessary (hence the same moral messaging is accomplished in the Blade Runner franchise without it). Second, the “uncanny valley” is triggered in countless scenarios devoid of any such moral messaging. There is no “moral lesson” in being creeped out by a cadaver or a doll; or indeed even someone deformed, where in fact that feeling is contrary to where one’s moral sentiment should be landing: we should not be creeped out by people with mere deformities, and thus should seek to suppress rather than “enhance” that feeling—exactly contrary to Park’s prediction.
Counter-examples thus destroy Park’s generalized thesis. The “uncanny valley” is simply not inherently useful to moral epistemology; it is just another emotion whose function is distinct and is as fundamentally neutral as any other: it alerts us to certain incongruences between appearance and expectation; what moral salience that has will simply vary. Sometimes it will align with moral principles (as it was designed to do in Humans); sometimes it will grossly misalign with moral principles (as when we allow this to dictate our reaction to circus “freaks”); and more often than not it will have no moral salience whatever (as when we are creeped out by an “almost but not quite real” animatronic robot face in a tech-conference display, closer to the context of Mori’s original concept).
What Park wants to argue is that human discomfort with the moral pitfalls of “having servants” is important, and should not be swept under the rug but heeded: there is a reason this makes us uncomfortable; it should do. She is right about that. She is also right about this deriving from our historical past and cultural present. We wouldn’t feel this way but for the parallels we notice with our historical past. And this could indeed translate someday into our treatment of real caretaking robots (whenever those really exist; so far, all we have are fakes). That is the moral warning of such works as Humans and Blade Runner.
But Park is wrong that “the uncanny valley” connects with this in any general way. The reason people find our actual social robots faking emotions uncanny and thus uncomfortable is not that this reminds us of our classist, exploitive past; we know those robots are insensate, and it is that very fact that makes their faking emotions uncanny. This is not at all what is happening in our reaction to Humans, where the opposite condition is in place: we know Anita is not insensate and thus isn’t faking “having emotions,” she is faking which emotions she shows to her masters (as even Park admits, “viewer unease is prompted here not by the suggestion that a synth has emotions but instead by evoking fears that the caregiving synth may harbor the wrong emotions”), which actually makes Anita akin to human slaves and servants. And that is why that show reminds us of our classist exploitive past.
Park isn’t wrong about that. But that doesn’t sustain her general thesis, that “the uncanny” is generally useful as a tool of moral epistemology. She also hasn’t correctly connected actual social robots (which are mere machines that fake being people) with our colonialist memory. That connection belongs, rather, to our hope that we will one day have the kind of caretaking robots depicted in Humans, which would leave us to consider morally whether they will be real people and thus whether we should treat them as such, or revert to our shameful historical patterns. The moral warnings about that future intended by such works as Humans are valid (what exactly would a moral society with such caretaking robots look like?). But that quandary isn’t what’s causing the uncanny-valley effect with social robots today. Nor does it sustain Park’s epistemic generalization. Everything else she argues is valuable and well-put. Her paper is worth reading and citing, even building on. It only drops this one ball.
To illustrate the distinction I am drawing, between her study’s valuable content and its one unsuccessful proposal, consider an example she develops from the ad campaign for Humans, whereby a hoax led people to think these androids were actually developed and available for sale:
According to Adweek, the “creepy ads” for synthetic humans “freaked out” many Britons who mistook them for real. So what was creepy about the ads? To be sure, Sally’s somewhat vacant stare and expressionless face—she tries to smile but never convincingly—is creepy. To fully understand what is uncanny here, however, one needs to place Sally’s unconvincing smile (like Anita’s unnerving automated laughter) in a historical context.
Park later gets to the fact that there were as many ads for “Charlie” as for Sally, and that Charlie was, notably, Black (Sally, White). But even with Sally, Park’s point remains sound: in attempting to create a deliberately unnerving campaign for the show, the advertisers resorted to old colonial stereotypes about “the help.” Being a servant is women’s work (or even, “what Black people do”). And they evoked moral aspects of that dynamic (like pretending to be happy in one’s role). As Park notes, “Gendered divisions of domestic labor in the Western world dictated that women in white middle-class families in the 1950s should aspire to nothing more (nor less) than becoming happy homemakers.” Humans is mocking this regime by making you feel uncomfortable about it; and it is accomplishing that by using “the uncanny valley” as a clever tool to bring into relief what usually (in reality) would be more skillfully hidden (hence Sally’s robotic smile is unconvincing). That is an artistically clever use of “the uncanny valley,” but it is just that: a clever artistic appropriation of an emotional response that by itself has no such moral function. Park definitely gets the show Humans. She gives a solid analysis. But we can’t get her general thesis out of that.
Bayesian Analysis of the Park Thesis
There are several propositions Park is advancing in her study. And she does not do as good a job as she could of making clear which ones are central and which serve only as premises building toward them. But to evaluate the soundness of her case we need to isolate those major premises and track whether they produce any separate conclusion by valid logic. To that end, I think we can isolate these questions relating to her premises:
1. Do depictions of the uncanny in social-robotics fiction (Humans in particular) evoke moral discomfort in the ways Park outlines?
2. Do those moral discomforts arise from the historical reality of analogous human mistreatment that has been in various ways standardized in colonial and neocolonial societies and lore?
3. Is this what is causing feelings of the uncanny in reaction to actual social robots today?
4. And does this mean feelings of the uncanny are useful as a general signal of moral concerns that warrant emphasizing rather than suppressing those feelings?
I think Park well establishes 1 and 2 (being proved “yes”), but not 3 or 4 (remaining “no”). As explained last time, Bayesian analysis entails that the odds of a theory being true follow from multiplying two factors: the prior odds and the likelihood ratio. The prior odds measure how well the theory tracks human background knowledge and prior similar cases. The likelihood ratio tracks how expected (or unexpected) the evidence is on that theory, relative to alternative explanations of that same evidence—and this means not just the evidence presented, but all the evidence that we have: if important evidence has been left out, it must be put back in before this step is evaluated.
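In symbols, this is just the standard odds form of Bayes’ Theorem (the notation here is mine, not Park’s; h is the hypothesis, e the evidence, and b our background knowledge):

$$ \frac{P(h \mid e.b)}{P(\lnot h \mid e.b)} \;=\; \frac{P(h \mid b)}{P(\lnot h \mid b)} \times \frac{P(e \mid h.b)}{P(e \mid \lnot h.b)} $$

The first ratio on the right is the prior odds; the second is the likelihood ratio; their product is the posterior odds, which convert back to a probability as odds/(1 + odds).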
In cases 1 and 2, Park has ample support in prior probability (1 is the kind of thing we expect in art; and 2 has been amply documented in countless prior studies); and in support she presents ample analogs and examples, while there are no strong counter-analogies or counter-examples to adduce. The probability of the evidence being that way is low—unless her hypotheses are true. A decent to high prior probability multiplied by a decent to high likelihood ratio leaves these conclusions highly probable. It’s not impossible that someone could alter this outcome by presenting sufficient counter-analogies or counter-examples overlooked, but I do not think that’s likely at this point.
However, the reverse is the case for 3 and 4. There, prior probability is against her (emotions rarely have this kind of universal moral salience; and the single most available cause of uncanniness with modern social robots is the opposite of her concern: that we know they don’t actually have emotions, and despite every attempt by marketing executives, we can’t be fooled into forgetting that). And there are far more counter-examples than examples, reversing the likelihood ratio as well. That evidence is highly improbable if hypotheses 3 and 4 are true. They are therefore probably false.
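To illustrate with purely hypothetical numbers (these are mine, for exposition only; neither Park nor I have measured any of this): suppose the prior odds on a thesis like 3 or 4 were merely 1 to 4 against, and the abundance of counter-examples made the evidence just five times likelier on the alternative. Then:

$$ \frac{1}{4} \times \frac{1}{5} \;=\; \frac{1}{20} \quad\Longrightarrow\quad P(h \mid e.b) \;=\; \frac{1}{1+20} \approx 5\% $$

The exact numbers don’t matter; what matters is that both factors point the same way, so the product can only get worse for the thesis as the counter-examples pile up.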
Feelings of the uncanny have no inherent moral salience, just as with almost any other emotion—feelings of trust, joy, hate, love, can track, miss, or even contradict moral realities. Their root cause is disconnected from moral evaluation. Tracking the mere incongruity between appearance and expectation may or may not have any moral relevance, and may or may not be “morally good.” So it is not automatic that we “should” enhance rather than suppress feelings of the uncanny. Sometimes we should. Sometimes we shouldn’t. And most of the time, it really doesn’t matter; no moral facts are implicated either way. But apart from this (and apart from trying to connect her thesis about Humans to actual social robots now; and apart from a very brief flirtation with “psychoanalytic theory” that could have been converted into the terms of modern cognitive science and thus done without), everything Park argues holds up. And what she has detected (even if she has drawn a couple of wrong inferences from it) is that the artistic team behind Humans has deliberately employed feelings of the uncanny to communicate its moral warnings about our suspect colonial and neocolonial attitudes toward “help” and “servants,” and the exploitation of our fellow human beings generally. And those moral warnings definitely should be heeded.
An eye-opening opportunity to actually get embedded in such “movies,” dealing with the obverse (humans as very grateful robots), is to go on a Caribbean cruise.
I did, with my family, as a gift from my mother-in-law. They were each so happy with the experience.
I, however, being garrulous, talked to the maids, stewards, and deck hands. It turns out they are mostly from poor communities in the Balkans, Poland, and Ukraine. They have zero health benefits. Those who become sick or injured are dropped off at a port. No labor laws protect them.
I examined their posture and facial expressions by sitting where I could clearly see their passing faces mirrored in a polished stainless steel refrigerator as they exited the kitchen. They were clearly fatigued and miserable. Remarkably, on entering the breakfast room, their faces beamed joy at seeing us.
They insisted on carrying to our table a single piece of toast on a plate, all with such loving deferent courtesy!
At night I heard pumps get activated and start churning. It was 3:00 am, and in the moonlight I saw orange and purple effluent being pumped out just a mile offshore from an island we were to visit in the morning. I reported it when we landed, as it seems the company would rather spoil the coastal habitat than pay the needy islanders to clear the bilge tanks as a civilized visiting ship would do.
When we finally landed back in the USA, there was an opportunity to give tips to all our “helpers”, their smiles having paid off.
But then I saw a departing elderly tourist struggling with his suitcase as he was descending the very long, steep stairs from the top deck to the shore. The case fell and opened up in the water.
All our so friendly, gracious, bowing helpers ignored his pleas for help as he dangerously leaned over, reaching to grab his floating case drifting in the narrow gap.
After about 20 minutes, a white-uniformed ship’s officer, standing at attention, embarrassed, to bid tourists “farewell and to return,” fetched a long pole with a hook and retrieved the man’s suitcase and what clothes he could reach!
But the robotically gracious and loving helpers made no effort to help, lingering and hugging each other as if the man were a piece of trash!
My family didn’t notice anything wrong during the trip and had a great time, and as for the man with the suitcase, I was “fussing about him as he was no doubt careless, packed too much and anyway the ship helped him in the end!”
So my highly educated and privileged family would, no doubt, not go deeply into Blade Runner beyond, “So that’s how it will be in the future!”
It’s only a fraction of a percent of us who travel the discomfort curve of the “uncanny,” which I think corresponds to one of the proposed primary emotions, likely “disgust.”
That (albeit rather small and declining) industry has always mystified me for those very reasons: how those companies operate and how they treat their personnel is like a fossil of the 19th century. Much at sea is, actually. Our merchant marine (and commercial equivalents) are not much better treated.
Isaac Asimov, establishing the precepts he later codified in his essay “Social Science Fiction,” was on this topic at least as far back as his positronic robot story “Satisfaction Guaranteed,” in which the human knew that the TN prototype (Tony) was a machine, but the emotional component overwhelmed that knowledge. I wonder what Dr. Park would make of this story 60 years later.
He also did a lot of that throughout the Robots stories in general. There’s definitely some sexism and some weirdness assumed in those stories, but there’s also certainly some truth to it.
Regarding psychoanalytic theory, I would point out that the modern use of it in sociology and a lot of philosophy isn’t really based on assuming that it’s actually capturing fundamental phenomena well. Freud and Jung were seeing something, and they could often capture some interesting psychodynamics. But insofar as there’s any validity to their approach, it has historically not been with actual treatment, or prediction of behavior, because there’s too much tea leaf reading. Instead, what it’s useful for is thinking about how people think about themselves. People really do have these myths and archetypes that have these psychodynamic elements. Some of this is still almost certainly cruft that needs to be sorted through, but generally when someone like a postcolonialist is using psychodynamic theory they are making a much more restrictive argument about things like media content.
As for implicit bias, it’s in a complicated state right now in the literature. It’s definitely something that’s there: I don’t think it’s like Dunning-Kruger where we’ll start finding that it’s at best ephemeral and at worst an artifact. But the (or rather, a) problem is that any behavior that one could attribute to implicit bias could also potentially be attributed to explicit bias that is being covered up. It’s pretty clear that implicit bias is a thing, but there’s a lot of people pushing back against its prevalence. Some of that is definitely politically or ideologically motivated, and I have definitely not been wholly convinced by their arguments, but in particular it’s increasingly clear that bias training on its own can sometimes have limited effects because biases are themselves nestled in social interactions.
This is actually one of the very rare times where I will take an evopsych theory seriously: I’m actually convinced that the uncanny valley really is a fundamentally rooted part of human nature. It makes a lot of sense that the really intricately complicated system we have to track whether something is human or not will produce horror when it’s being triggered incompletely. That having been said, Park is almost certainly correct that an additional factor, and probably one of major importance, is the way we view the marginalized, the powerless, etc. Robots have been used as metaphors for slaves and underclass people since literally the beginning of the term’s coinage. I agree with your analysis that her argument can be read superficially as conflating our complex suite of feelings about the potentially uncanny in total with the specific uncanny valley sensation, which is very visceral and is something that people experience even in contexts where biases are not terribly likely. I’m not going to be made meaningfully more uncomfortable by an improperly designed moving face modeled on a Black woman than by one modeled on a white man (or vice versa), and I suspect only the most odious and conscious of racists would be.
However, I do suspect that there is probably an activation threshold that can change as a result of these kinds of social factors. Her point about the voice in Her is a good one. And I bet that there’s specific ways that the revulsion and discomfort with the uncanny appear that will track with general social senses of the Other.
I thus agree that the uncanny as a sense does not necessarily carry substantial benefit to moral epistemology. It potentially can, but almost any human feeling can. I’m curious if she can develop a more specific case out of this.