The Good Robot

N. Katherine Hayles on Feminism, Embodied Cognition and AI Regulation

June 01, 2021 University of Cambridge Centre for Gender Studies

In this episode, we chat with N. Katherine Hayles, Distinguished Professor of English at UCLA and James B. Duke Professor of Literature Emerita at Duke University, about feminism, embodiment, cognition, and human-AI relationships. We explore the role of feminism in science and technology, what productive conversations between engineers and humanities scholars look like, literary depictions of non-human embodiment and cognition, and the distribution of cognition across human-AI systems.   

This episode includes an ad for the What Next: TBD podcast.

KERRY MACKERETH (0:01):
Hi! We're Eleanor and Kerry. We're the hosts of The Good Robot podcast. Join us as we ask the experts: what is good technology? Is it even possible? And what does feminism have to bring to this conversation? If you wanna learn more about today's topic, head over to our website, where we've got a full transcript of the episode and a specially curated reading list with work by, or picked by, our experts. But until then, sit back, relax, and enjoy the episode.

ELEANOR DRAGE (0:32):
Today, we’re speaking with N. Katherine Hayles about feminism, embodiment, cognition, and human-AI relationships. Hayles is Distinguished Professor of English at UCLA and James B. Duke Professor of Literature Emerita at Duke University. We explore the role of feminism in science and technology, how to start productive conversations between engineers and humanities scholars, literary depictions of non-human embodiment and cognition, and the distribution of cognition across human-AI systems. We hope you enjoy the show. 

KERRY MACKERETH (1:05):
Hi, thank you so much for joining us, it really is such an honour. Would you mind introducing yourself, telling us what you do, and also what brings you to the subject of gender and technology and feminism? 

KATHERINE HAYLES (1:18):
Sure, thank you. Thank you for having me. So my name is Katherine Hayles, and when I publish, I often go by the name N. Katherine Hayles. But Katherine Hayles is fine. I taught for 15 years at UCLA, and then I moved to Duke University in 2008, at which time I officially retired from UCLA. I taught for 10 years at Duke, and then retired from there, whereupon my husband and I moved back full-time to Los Angeles. And I now have a courtesy appointment as Distinguished Research Professor in English at UCLA.

And what brought me to feminism and technology? Well, to me, it's a slightly odd way to ask that question. I've been a feminist at least since graduate school, and I am somewhat shamefaced to admit that's half a century ago. So that's quite a long time of being a feminist. And when I started scholarly work, I basically had a choice between focusing on Shakespeare, which was the topic of my dissertation, or beginning what was then an entirely new field of literature and science. So I chose the latter path and broadened it so that it's not only literature and science but literature, science, and technology. And since feminism is very much an integrated part of my worldview, that meant bringing feminism into that picture as well.

ELEANOR DRAGE (3:03):
Super, thank you! So our podcast is called The Good Robot and we ask, what does good technology look like? And we also ask what good human-technology relationships look like in the context of, for example, customer service agents that are paired with AI systems. This to me seems to speak to your work on how agency is distributed between human and technical cognisers - so different kinds of thinking creatures - which process information at very different speeds. So can you tell us then what you think a good and equitable human-technology relationship looks like?

KATHERINE HAYLES (3:42):
I think that a good human-technology relationship is one that foregrounds human values: what's beneficial for humans, what fosters human welfare and equity, and what is also good for the planet. So part of what I think feminism involves is a biophilic orientation to the environment and to the planet, reaching beyond humans to the nonhuman world as well. And my recent book, Unthought, focused, as you suggested, on the idea of cognitive assemblages and on distributed cognition and distributed agency. Part of the argument that I was making there was about the cognitive capacity of non-human living forms. My argument is that all living forms possess cognitive capacities, but also that artificial life forms possess cognitive capacities as well. So one of the great challenges of our era is to find ways to integrate these other cognisers into our psychological, social, and economic schemes without losing the human values that I think should be paramount in those relationships.

And as you suggested, artificial cognisers are vastly different from humans in many respects, but I'll just mention two here. One is temporality: as you suggested, artificial life forms operate at enormously faster speeds than humans are capable of. The other issue that I think is extremely important is the differences in embodiment. Sometimes people speak of AI as though these cognisers don't have bodies. But of course, they all have bodies; it's impossible to exist without having a body. It's just that they're embodied in radically different forms than humans are, which leads to many misunderstandings, misrepresentations and misalliances.

So how do we move forward in this complex environment? I think this is particularly difficult for Americans, because Americans have a tradition of individualism and self-reliance. And I think one of the first things that we have to accept is that in the contemporary world, in developed societies, cognition is distributed and agency is distributed. So we can start at the outset by setting aside those prized qualities like free will, like individualism, like self-reliance - not that they're altogether irrelevant, but they don't serve very well in environments where cognition and agency are distributed. So I think the sooner we accept the fact that we're not the only cognisers on the planet, the better off we will be.

But that doesn't really answer the question of how these relationships should go forward. There are researchers who are very interested in developing AI life forms for their own sake; they're interested in the thrill of discovery, of innovation, and so forth. I tend to look at that entire project somewhat sceptically. I don't think that it's a good idea to develop advanced AI if that means a diminishment or even an extinction of the human race. There are people, like Hans Moravec, who would say: well, fine, if we can develop AI, we should recognise them as our children, and it doesn't matter so much what happens to the human species. I differ in that regard; I think it matters very much what happens to the human species. So that's why I say that I think AI development should accept as a primary premise that the purpose of AI is to enhance and foster the human values that are at the centre of the human species. That said, it's a lot easier said than done, of course. So I'll mention here just one book, Sarah Spiekermann's Ethical IT Innovation.
And Spiekermann is rather traditional, because she wants to base the idea of ethical IT innovation on consensual values. I'm not necessarily opposed to that, but a lot of what she suggests specifically really doesn't hold up in the real world. For example, she notes that in AI development, there is a period when the technology has a lot of flash and everybody wants to use it, regardless of whether it's really useful or helpful or not. And so she draws this graph where you get a huge initial uptake in the technology; then, as people realise it's not all it's cracked up to be, they abandon it, and then slowly the real value of the technology begins to emerge. So we could say that chatbots fall into that category: you could develop chatbots just because you're thrilled with the technology - "hey, look what I can do". But that's different from asking in what contexts chatbots will actually be useful and helpful, and not simply an irritation, or a seduction, to the people who encounter them. Her approach boils down to this: don't design for functions ("I'm going to do this because it has all these cool functions that have never been possible before"). Design from the outset for value: what am I trying to achieve? What is the value here? And how can I design this AI, or really any technological object, in such a way that it fosters these values and doesn't hinder or undercut them? That, I think, is a very sound approach.

KERRY MACKERETH (10:27):
Wow, thank you so much, there were so many amazing and rich insights. I certainly resonate with what you were saying in that last section around the uptake of technology because of the excitement, the hype around it, and I feel like we certainly see this in the AI industry, where there's this pressure to be using this kind of technology. Now hopefully we're moving onto that stage of [asking] what values are important when we design AI and what AI can bring to these particular situations. I did want to draw out something else that you mentioned earlier, which was the importance of embodiment, and I was wondering: what does feminism or feminist thought bring to your understanding of the centrality of embodiment when we're thinking about technology?

KATHERINE HAYLES (11:12):
Well, I think feminism is absolutely crucial here, because there have been many feminist studies showing that the tendency to abstract from embodied realities has typically been a masculinist pursuit, and that the feminine in general has been denigrated because it is associated with the body, with more earthly things, giving birth and so on, whereas masculine intellect is celebrated precisely because it is removed from those essentialised feminine values. So yes, I think feminism is crucial. But I think the problem actually goes deeper than that: it stems from the very human tendency to want to project an anthropocentric personality or subjectivity onto anything that is recognised as having cognitive abilities. We see this with animals, certainly. But we see it also with technological objects, which are increasingly responsive, autonomous, and intelligent on their own.

The area that I'm working on currently has to do with deepfakes. As you may know, deepfakes use recurrent artificial systems to learn, over and over and over, the voice patterns, the movements, all those elusive qualities that enable humans to communicate, and then use them to create a video or recording or some other technological artefact that has the face of, say, a celebrity pasted onto a porn star's body, performing sexual acts, or ... it can be used in a multitude of ways. And not all of these are bad; some are amusing and some are used for entertainment. But it does raise deep questions about how we evaluate when another being is conscious.

Right now, as far as I know, there are no artificial life systems that are conscious. They're responsive, they can perform cognitive acts, for sure. But I know of no AI system that has been demonstrated to be conscious. They can model consciousness, given the proper algorithms, but of course that's different from actually being conscious. And as you know, this is a famous problem in philosophy. How do I know that anyone is conscious? How do I know that you're conscious? How do I know that I'm conscious? The practical answer has always been: well, I know that I'm conscious, and because you're a human like me, I can make the assumption that you're conscious as well. But what happens when we encounter an artificial intelligence that has created a simulacrum that is amazingly like another human being? The temptation there is inevitably to project consciousness onto that entity. And so our novels, our films, are full of conscious AI beings. I'm beginning to think that there are a lot of art forms, like the novel, for example, which cannot exist without consciousness - that is, they cannot exist without portraying consciousness. And if that's so, then the challenge for a writer, for example, is: how do you depict an artificial entity which is not conscious but has high cognitive abilities? I know of no successful novel that has done that. I can give you lots of novels that presume the artificial entity is conscious, and then go on to speculate about what artificial consciousness would be like; the most recent one I've read was Annalee Newitz's Autonomous, about an autonomous, conscious robot. But I think that this area, which is my area of expertise, literary studies, illustrates a more general problem.
And that is: what do we, as humans, embodied creatures who have evolved to project a subjectivity like ours onto everything that's cognitive, do when we encounter these systems, which are not conscious and which are profoundly different from us in embodiment and in whatever kinds of cognition they have? So there's an enormous clash here between our assumptions about cognitive abilities and the increasingly pervasive reality of creating cognitive beings which are not only different in embodiment, but different in the way that they achieve cognition.

ELEANOR DRAGE (16:36):
I'm glad that you've raised the issue of how we narrativise non-conscious artificial cognisers, because I'm halfway through Ishiguro's Klara and the Sun and I think that his depiction of the humanoid Klara is really compelling. I'm also midway through Pharmako-AI, a book co-authored with GPT-3, which I think gives some really interesting insight into what themes are of mutual interest to humans and machines, and perhaps how collaborations between different kinds of cognisers can give rise to new perspectives on, for example, ethical issues. I want to loop back to what you were saying about the harms that can arise from human-AI interactions. How do you think feminism can help make AI less harmful for certain groups and individuals? What is it in particular that feminism offers?

KATHERINE HAYLES (17:33):
Well, am I correct, Eleanor, that you mentioned Louise Amoore's Cloud Ethics? Yeah? Well, I think that study is a wonderful example of how feminism can weigh in on these complex situations. And just to mention very quickly a couple of others: Mireille Hildebrandt's Smart Technologies and the End(s) of Law is one of those, and Antoinette Rouvroy has also written on algorithmic governmentality from a feminist perspective. So I think that what feminism can do is to draw on the deep and very broad archive of feminist strategies to begin to analyse closely these complex situations that involve algorithms of various kinds, and then to suggest ways to remediate them. For example, both Rouvroy and Hildebrandt really emphasise the importance of regulatory frameworks: there's a huge necessity for laws governing things like surveillance and anticipatory disciplining, where an algorithm anticipates what you're going to do before you do it and then intervenes at speeds that elude your consciousness - it slips under the horizon of consciousness because it can work so much faster than the human brain - and deludes you with all kinds of suggestions, which operate as subliminal pointers to direct your behaviour in ways that it might not otherwise be directed. It's situations like that, I think, that lend themselves to regulatory frameworks that would rein in what we could call the excesses of capitalism, where all of these wonderful tools are directed simply to sell you something, or to guide your behaviour in ways that are going to make money for the corporations. And those, I think, are all very legitimate areas for the kind of regulatory frameworks that Hildebrandt and others are suggesting.

ELEANOR DRAGE (19:57):
Yes, I completely agree, and we'll put Amoore, Hildebrandt and Rouvroy in the reading list to go with this episode - thank you, these are really great examples of feminist approaches to understanding what kind of regulation is necessary and sufficient in AI. I think what feminist works have most given me is a really good understanding of what it is about a system's functionality that makes it particularly harmful. In particular, Louise Amoore's analysis of the Boston Police Department's collaboration with Geofeedia has been extremely influential on the way I'm thinking about how information that produces actionable intelligence is spread across multiple settings and stakeholders, and what this means for regulation. These are the texts that motivate much of our industry work. I'm actually finding that the engineers and practitioners that we're working with are really interested in the questions and methods that the humanities offer AI. It's a complete myth that engineers aren't interested in this kind of thinking; they absolutely are, and we're excited to see that.

KATHERINE HAYLES (21:09):
I'm so glad you say that, because that has been my experience as well. I've had numerous encounters with engineers, roboticists, and so forth, and I've always found them very welcoming. But I'll add a caveat here. If you're a humanist, like me, you can't expect them to show up at your office; you have to go where they live and where they hang out, make the first move, say, "gee, can I come visit you in your lab". And then once you're there, you can begin to ask questions, and on the basis of that begin making suggestions and so forth - intervene at the point where designs are still being forged and still being thought through. Don't try to add on ethics at the end as if it were icing on the cake; it has to be baked in from the beginning. And to make those suggestions and those kinds of productive interventions, you have to respect your colleagues; first of all, you have to go where they hang out, and you have to be open to what they're doing. If those prerequisites are met, my experience has been that they're more than happy to see you. They're more than happy to talk with you. They're thrilled that you're interested in their work. And they'll be quite open to suggestions. Of course, not all your suggestions are going to be useful, because you may lack the technical knowledge. But they're the beginning, the basis, the foundation for further enlightening exchanges.

ELEANOR DRAGE (22:47):
Yes, this is exactly what's happening at the moment, with humanities scholars making the first moves and meeting practitioners on their turf, and with an understanding of what problems are most important to them. I think one of the productive conversations we can have between industry and academia concerns when the right time is to consider ethics, because we know that slapping on an ethical framework at the end is too late. The potential harms must be taken into account before something is built, so that you still have time to ask key questions like: is the AI that we're building the right tool for this job, and in what context should it be used?

KATHERINE HAYLES (23:34):
Yes, exactly. And a perfect illustration of that currently, for me, is face recognition technology, where not only are there all kinds of biases, but there are also really scary things happening where law enforcement is teaming up with corporations like Microsoft, so that they're beginning to use face recognition technology on something like driver's licence photos and so forth. So yeah, I think that's another case where regulatory frameworks would really be useful, to simply say: when someone has their picture taken to get a driver's licence, they're authorising one use for that photo, but they're not authorising other uses, like having law enforcement scan their photos to find a perpetrator of some kind - they never agreed to that. So the idea of data privacy and the right to opt out of surveillance is an important issue here.

ELEANOR DRAGE (24:36):
I think that practitioners are now beginning to think about how the systems that they build can be deployed for secondary uses in ways that might compromise the safety and privacy of the people who encounter these technologies. We know AI is dual use, but it's still important to consider, right from the problem recognition or design phase, the risks of the system being used for other purposes. But as you've just said, the responsibility to build a system that doesn't reproduce or exacerbate existing social harms doesn't lie only with developers; it also lies with the other stakeholders who are responsible for making sure the technology or the data isn't also being used by, for example, law enforcement. And this raises interesting questions about who is responsible or accountable when people suffer unduly because of AI. So creating responsible AI, I think, also means accounting for the distribution of knowledge and responsibility in any given system.

KATHERINE HAYLES (25:41):
Well, the issue that you raise here, about the kind of responsibility that designers take for the technology, which is part of their desire to design it in an appropriate way, I think speaks to a very troubling issue when you get to autonomous technologies like self-driving cars, and that is: where does responsibility lie? Is it with the software? Is it with the designer? Is it with the car owner, the car manufacturer? I can't begin to untangle all those complexities. But I'll just mention an important distinction that two researchers, Floridi and Sanders, make between responsibility and accountability. They argue that in a case of distributed agency, a given agent may be accountable but not responsible. So to give an example that they don't give: say that you have a dog that's been trained to be vicious, to fight other dogs in an illegal dog-training operation. What they would say, using their terminology, is that the dog is accountable for its actions, but not responsible for its actions. Because yes, the dog is capable of harming people, but in a sense, it's not the dog's fault; that's how it's been trained. So they argue that the appropriate action towards an entity that is accountable but not responsible is censure. In the case of the dog, maybe the dog would be sequestered until it could be trained otherwise; or in the case of a piece of malware that's infecting your computer, it might be appropriate to isolate it and then to dismantle it. The responsibility would lie with the people who designed the system in the first place, who were responsible for setting up the parameters within which the distributed agent acted. That would also apply to something like drone warfare. The drone itself may be delivering fatal strikes, but the drone is accountable, not responsible, because it's a piece of machinery that's been programmed to operate in a certain way. So I think that's a beginning to disentangling those two possibilities, accountability and responsibility. Of course, it doesn't answer all the questions - how do you determine whether an agent is accountable or responsible, and so forth? But at least it's a start in beginning to disentangle all the complexities.

ELEANOR DRAGE (28:31):
Completely, though disentangling issues of accountability and responsibility also involves asking how descriptive and self-explanatory these terms actually are. I don't know whether you've encountered the new AI lexicons and dictionaries that are coming out, but they're really interesting - there's AI Now's A New AI Lexicon, which Kerry is writing an entry on AI Nationalism for, and which highlights how conversations around harms and responsibility often originate in the global North and inadvertently take the infrastructural and regulatory landscapes and histories of Euro-America as the baseline for thinking about ethics and regulation in AI.

I think that’s all we have time for so thank you so much for joining us, it was a real honour. 

KATHERINE HAYLES (29:21):
Well, thank you and thank you for the opportunity.

READING LIST

By the Guest: 

Hayles, N. Katherine (1999) How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. Chicago: University of Chicago Press. 

Hayles, N. Katherine (2005) My Mother Was a Computer: Digital Subjects and Literary Texts. Chicago: University of Chicago Press.

Hayles, N. Katherine (2017) Unthought: The Power of the Cognitive Nonconscious. Chicago: University of Chicago Press.

The Guest is Reading: 

Amoore, Louise (2020) Cloud Ethics: Algorithms and the Attributes of Ourselves and Others. Durham: Duke University Press.

Floridi, Luciano (2016) "Faultless responsibility: on the nature and allocation of moral responsibility for distributed moral actions". Phil. Trans. R. Soc. A 374: 20160112. http://dx.doi.org/10.1098/rsta.2016.0112

Hildebrandt, Mireille (2016) Smart Technologies and the End(s) of Law: Novel Entanglements of Law and Technology. Cheltenham: Edward Elgar.

Newitz, Annalee (2017) Autonomous. London: Orbit.

Rouvroy, Antoinette and Thomas Berns (2013) "Algorithmic governmentality and prospects of emancipation: disparateness as a precondition for individuation through relationships?". Réseaux, vol. 177, no. 1, pp. 163-196.

Spiekermann, Sarah (2016) Ethical IT Innovation: A Value-Based System Design Approach. New York: Taylor & Francis.