The Good Robot

Jack Halberstam on Tech, Resistance, and Invention

June 18, 2021, University of Cambridge Centre for Gender Studies

In this episode, we chat to Jack Halberstam, Professor of Gender Studies and English at Columbia University, about the relationship between resistance and invention, and why social media is even worse than we already know. He asks: Why does the state need to know your gender? Why are bodies subjected to technological recognition, and how can we evade it? How are homophobia and transphobia operating under the banner of “security” in, for example, AI used in airports? What’s the glitch in Ex Machina? How has the family shifted during lockdown? And much more.

Content Warning: This episode contains references to invasive and transphobic practices at the airport. 


This episode includes an ad for the What Next|TBD podcast. 

KERRY MACKERETH:
Hi! We're Eleanor and Kerry. We're the hosts of The Good Robot podcast. Join us as we ask the experts: what is good technology? Is it even possible? And what does feminism have to bring to this conversation? If you wanna learn more about today's topic, head over to our website, where we've got a full transcript of the episode and a specially curated reading list with work by, or picked by, our experts. But until then, sit back, relax, and enjoy the episode.

ELEANOR DRAGE:
Today, we’re talking with Jack Halberstam, who's a Professor of Gender Studies and English at Columbia University and a celebrated scholar, writer, and activist. We discuss the relationship between resistance and invention, and why social media is even worse than we already know. He asks why the state needs to know your gender, why bodies are subjected to technological recognition and how we can evade it, and how homophobia and transphobia are operating under the banner of “security” in, for example, airports. We also talk about the glitch in the film Ex Machina and how the family has shifted during lockdown. We hope you enjoy the show.

KERRY MACKERETH:
Thank you so much for being here with us, it really is such an honour. Could you please introduce yourself and tell us what brought you to the intersections of gender and technology? 

JACK HALBERSTAM:
Yeah, thank you so much for having me. I am Jack Halberstam. I'm a professor at Columbia University, where I teach in English and Gender Studies. And I'd say, generally speaking, I work in the area of cultural studies, and questions about feminism, gender and technology have been at the forefront of this field for a long time. One of my first published essays ever was called Automating Gender, and was about the vexed relationship between what was, at the time, considered to be the female body and new forms of technology.

ELEANOR DRAGE:
Our podcast is called The Good Robot which is more of a provocation or a question than a statement, so we’d like to ask you what you think ‘good technology’ is or might look like, whether it’s even possible, and if so, how are we supposed to work towards it?

JACK HALBERSTAM:
Yeah, I mean, I like this title anyway, The Good Robot, because it's such a fantasy from an earlier moment, I think, of future technology: the idea that humans would invent a kind of robot world that would help us to automate labour and create, you know, a kind of substrata of intelligent technologies who would take some of the bite out of all the things that humans didn't like to do. And of course, that has not really come to pass. And I think the questions about technology now fall far less into this sort of moral realm of good and bad, and much more into the crisis realm of environmental decline. So good technologies right now, I think, really need to refer to definitions of technology that are oriented towards solid environmental practices on the one hand, but on the other hand, have some kind of vision of, you know, social - I don't want to say progress - betterment, maybe. Technology can mean anything from a knife and fork, after all, to new forms of digital connection. So good technology is kind of too capacious a term. Any technology could be good or bad. So our metrics have to fall out of the good and bad dichotomy and, you know, enter firmly into the realm of how we want to rethink technology in the era that we're in: an era, obviously, defined by global pandemic, but also an era in which technology, for all its promise and all of its potential and all that it seemed to offer humans in the way of cleaning up the environment and improving relations between people and reducing social inequality - none of those things have played out. So the only good technology, in my opinion, at this point is technology that is, you know, a platform for doing some of the hard work within a project of social justice and social transformation.

KERRY MACKERETH:
Thank you so much for that answer. I’d love to hear a bit more about how you think queer and feminist methodologies can help us understand the harms produced by technology? So for example, how can we re-wild queerness in the face of projects like automated gender recognition, which attempt to tame sexuality into something observable and quantifiable? 

JACK HALBERSTAM:
Yes, so in this question, notice how technology has, you know, sort of shifted from the idea of a robotic form, or a very specific kind of technological invention, into a, you know, a general project that has definitely been laden with expectations around gender. You know, for many of us the gateway drug, if you like, to thinking about technology was Donna Haraway’s “A Cyborg Manifesto”. It was such a brilliant intervention at the moment that it circulated, if you think about the fact that this essay was published, I believe, in the mid 1980s in its original essay form. That manifesto really intervened in simplistic dichotomies between men and technology, women and nature, and that nature-culture binary was suddenly cleaving in the mid 80s into a sort of nature-technology binary. Haraway made that early intervention just to say: this is not going to go well if we continue to assign technology to men, and make women into the, you know, remaining representation of something called the natural world. And so feminist methodologies, for the most part, have had to make some kind of nod to this early work that Haraway did, in order to interrupt, you know, the kind of lovefest between men that has tended to, you know, represent technology in Euro-American contexts. So queer and feminist methodologies pay much more attention, I think, to the uses of technology, as opposed to just the virtuosic invention of things, and to the ways in which already existing ideas of the body, of social relations, of gendered behaviour simply get transferred into whatever new technology we're talking about. So, for example, I know of some folks at Google who are engaged in a really interesting project where they're trying to create software to scan any kind of media content and then give it a rating on the basis of its, you know, appropriate gender or sexual representation. And the intentions behind this are really good: the intentions are to give a quantified account of how women, trans* people, non-normatively gendered bodies, and so on are represented in the media, and to then be able to present back to media companies this, you know, very clear account of how you have done very poorly or very well in terms of your representational output in relationship to a wide range of social diversities. So this is a really good idea. Unfortunately, what happens with this kind of software production is that it goes on while retaining many of the sedimented ideas, many of which are quite toxic, about what a woman is, about quantification in the first place. Right. So this media software might just say, oh, there were 59 appearances of a female-bodied person standing alone on the stage or on the screen, therefore, you know, you've scored very highly on our, you know, gender and media quotient score or whatever, when in fact that woman who was alone on the screen might have been talking about her boyfriend the whole time, or she might have been enclosed in a domestic environment, or she might have been established only in relationship to children and not in relationship to a larger public. So I think, you know, feminist and queer technologies are not a value-added-later kind of way of thinking about, you know, how technology can be improved with these methodologies.
The only way for these methodologies and these new technologies to have any kind of relevance, I think, to a rapidly changing world is by allowing these foundational understandings to get changed in the process of quantifying them. And so we'd have to ask: what's your definition of woman? And how can we think in unconventional ways about a woman on screen, beyond, you know, what Laura Mulvey, some 30 years ago, called the male gaze, beyond the idea that the woman is on the screen to be looked at. So I think, you know, there are just so many opportunities right now to use technology to rapidly and radically transform representation. And it's amazing, if you just watch mainstream film, to see how rarely queer and feminist methodologies are really taken into account in a very, very foundational way. So I think that's a bit of the struggle right now.

I mean, you're asking about very, yeah, very important technologies as well, like automated gender recognition. And automated gender recognition, again, does not account for the massive social changes in how we have decided to define gender over just the last few decades, right? So the technology of gender has changed rapidly in our society, with new visibilities of various kinds of trans* people, and with increasingly young people identifying as either trans or non-binary. And yet, when you go through a scanner at the airport, or you look into a machine that’s scanning your face with recognition software, that machine is still working in a binary recognition mode. And, of course, as many, many theorists have pointed out, computer languages operate in a binary mode anyway, using the building blocks of zero and one. And so the question is, you know, can we transform modalities like gender recognition to recognise multiple genders on the one hand, but on the other hand, can we use queer and feminist methodologies to question how these technologies are being used? And why do we even have such technologies in the first place? Why, when you go through a body scanner at the airport, is it coloured pink or blue? You know, as somebody who is ambiguously gendered, every time I go through that scanner I am in a no man's land, where the machine itself constantly finds inconsistencies in my bodily presentation. And the last three times I've gone through the body scanner, the, you know, male and female TSA workers have been unsure as to who's to pat me down, number one. But secondly, they always say there's a problem in the crotch area, and we're going to have to pat you down. You know, so here we are in an era in which this machine is supposed to scan the body efficiently and find out whether you have any concealed weapons, when what it is actually doing is policing non-normative gender presentation and unfairly targeting racially profiled individuals. So we don't just want to sort of incorporate queer and feminist ideas into existing technology: a) we have to transform the technology from the ground up, with some of these foundational assumptions being questioned; but b) we have to ask what the technology is for, and whether we as feminists and queer scholars even want to sign on to this massively distributed network of gender recognition devices that are spread out across the social landscape.

KERRY MACKERETH:   
Thank you so much, that was such a rich answer, and so many things in it will have resonated with Eleanor and me in the field of AI, where gender is often taken as a final variable to be laid over the top of these systems to check that there is no bias, rather than a foundational principle to be built in from the very bottom up. I’d love to loop back to something you were talking about at the beginning of that answer, this idea of the nature/culture or the nature/technology binary that Donna Haraway so beautifully problematizes in her piece from the 80s, the Cyborg Manifesto, and it does interest us because AI does sediment this nature/culture binary in all sorts of ways. An interesting technical one that Eleanor’s flagged before is that forms of unsupervised machine learning extract information from the data to put that data in classes, and this is described in computer science terms as a system being able to automatically find natural classes in that data. So we can see why the assumption arises that these categories are ‘natural’ and that they emerge autonomously from the data. What to you are the most damaging ways that forms of technology sediment the nature/culture binary, and how do you think this can be resisted?

JACK HALBERSTAM:
Mm, you know, again, that's a tricky question, just because technology in that question is such a big term. I mean, there are technologies that sediment the nature-culture binary that come in the form of pencils, you know: oh here, little girls, you can have these pink pencils with tassels, and here, little boys, you can have retractable pens with, you know, lead within them or something. So that's the problem, isn't it: the binary of gender is already part of the algorithm of embodiment from which many scientific investigations begin. And I don't think that we want to just sort of lay the responsibility for the persistence of the nature-culture binary at the doorstep of technology. What we need to do, and this is a sort of really crude answer in many ways, is make sure that there are multiple forms of gendered and racialized people who are a) writing about technology in our culture, and b) inventing new forms of technology, and then c) that the way in which we understand technology has already taken these gendered and racially specific critiques into account. Because for a long time, I think, you know, we've seen an exponential growth in the way in which technology, you know, sort of saturates human life, in the last, some people would say, 40 years, and certainly since we became embedded in internet culture. And since we've had, you know, multiple mobile technologies at our fingertips, it is increasingly the case that the human is completely interpellated by technology. And for that reason, if these divisions between male and female, passive and active, nature and culture persist through these technologies, which of course they do, then there's an increasingly unlikely chance that we will shake those foundations. So I think the more that we become so thoroughly scripted by our machines, the less likely we are to shake free of binary orientations to the world. Even as, you know, people look around the culture and they say, wow, gender has changed so much in the last 10 years, and then you get these articles about the transgender tipping point, and it seems as if real progress has been made, until you realise that trans* people are just a new niche market for neoliberal capitalism. And in fact, trans* people have simply been folded into existing structures of, you know, profit and capitalization and so on; it's not that the emergence of a very strong and vocal trans* community has completely changed how we see gender. And these are the dangers of technology: people are simply absorbed into the existing algorithmic structures, and those algorithmic structures were created with all of these other binaries firmly in place, and then you're in a loop where there's no way out. And how can this be resisted? I mean, it has to be resisted. It can't be resisted simply in the technology; technology is not a fix for a whole set of nested social problems. Those social inequities have to be resolved in many places at once: politically, socially, psychologically, structurally, and technologically. But it's not like Elon Musk, you know, inventing a battery or something will, you know, create way less carbon emissions. I mean, we could already say, if you even take an example like that, the new car batteries that you find in the Tesla apparently have a shelf life of eight years.
So often technological fixes look transformative on the outside, but when you investigate them further, you find out that these massive batteries, which have all kinds of, you know, damaging potential for the environment when they're trashed, are having to be recycled every eight years. And a battery, after all, doesn't run magically; it runs on electricity. So no, you're not putting gas in the car, but you are putting an enormous amount of pressure on the electrical grid, right, and the electrical grid is not without a carbon footprint. So with so many of the technologies that are marketed today as green, or as gender-considerate, or as socially responsible, when you investigate further, you find that they proceed without the consideration of many of the deep questions about human interaction that are debated today in universities, in the humanities mostly, but also in the social sciences, in cultural studies, gender studies, Black Studies and so on. Unless those questions are at the forefront, we will end up with a technologically rich world that is imposed upon a completely stratified population, where the same divisions that divided rich from poor 100 or 200 years ago continue to apply. And that's why technology, you know, is not exactly the answer to anything. The resistance has to come alongside technological invention.

ELEANOR DRAGE:
Absolutely, and I’d like to turn now to thinking a bit more about functionality and failure: when is a machine understood to be functional, and when is it seen to fail? My friend, the media theorist Federica Frabetti, draws on your work to argue that norms and harms often manifest as failure in AI. For example, I’m thinking about the polemic around soap dispensers that only worked for white bodies, white hands, or the UK government’s photo checker, which I actually used myself to upload a picture that’s now on my passport; that system was revealed to be only half as good at authenticating Black women’s passport photos as the photos of white men. Failure in these cases is both incredibly distressing for the people involved and also the point at which the power dynamics embedded in these systems become evident. So what can you tell us about how failure exposes the norms that are performed and developed through technology?

JACK HALBERSTAM:
So this is a good example of what I was saying in relation to the last question, which is, you know, that the question here is: oh, isn't this distressing when a soap dispenser doesn't recognise Black hands, or a photo checker fails to authenticate Black women's passport photos. Okay. But there's a pre-existing set of conditions that precede, let's say, the Home Office photo checker that fails to authenticate any number of photos from people of colour, and that is: why are we subjected to all of these forms of technological recognition and scanning in the first place? So, you know, as my friend, the trans* theorist Dean Spade, always says about, you know, trans* people who are trying to make changes on their driver's licences and their passports to reflect the gendered identity within which they live, rather than the sexed identity that was assigned to them at birth: there's much debate about how difficult it is for trans* people to make these shifts and present themselves at the Department of Motor Vehicles, and then have confusion emerge when they ask to change the gender on the licence and so on. But Dean Spade asks a bigger question, which is: why does the state need to know your gender? Why are we given these two choices? And then why must we adhere to those choices? And for Spade, this question precedes the question of whether a machine is able to read a face, or whether that machine has been set up to read white facial markers, so that white people are quickly passing back and forth across borders and, you know, biometric scanning technologies recognise them immediately as legitimate kinds of people. All of those questions have to first of all be framed by: why are we under such tight surveillance from the state? And do we always agree to and consent to the multiple forms of surveillance that we are under? These are big questions in a world full of CCTV cameras and strict border control, while we all supposedly operate under freedom of information acts at the same time. Just to give you a personal example of that: I had something called Global Entry, where a deep background check would be carried out on you, and in return you would get a card that expedites your passage back and forth across the US border, which for people who travel a lot is a really, really important identification card to have. So this year, when my Global Entry renewal was up, it was denied, denied after five years, and usually you're denied because you've been arrested for something or there's a felony charge. Neither of those things has happened to me. And the only thing I can come up with is that I am sometimes recognised under two names: my passport says Judith, but my public persona is Jack. I have no other way of explaining why a federal department would not simply renew a card that I have held for five years. So these failures in the system are often a little bit more pernicious than a failure. They are the very active wing of policing that the biometric machines have concealed. The biometric machine is represented as: oh, this is just a security scan, don't worry about it, you just pass through this, and then we know that you're not going to blow our plane up, or you're not coming into the country for seditious purposes, or whatever it may be. But in fact, very big decisions about you are already made prior to that scan.
And they are made in relation to much less tangible metrics that the state uses and applies on a regular basis, in ways that constantly give white businessmen around the world a pass, and then catch in their webs of surveillance all kinds of non-normative bodies. And, you know, precisely because of American and British white supremacy, bodies of colour are particularly vulnerable to capture by supposedly neutral forms of technology. And that's where all of these questions that we're asking today have to come into focus: you cannot change the technological bias of a machine if it is embedded in a culture where those biases have been inscribed into every part of everyday life. And those biases go under the name of racism, sexism, and, you know, transphobia, for the most part. So the failure that is embedded in those machines tells us a different story altogether. It's not that we need a better machine that can scan Black women's photographs better. What we need is a different orientation to security, and to biometric systems of recognition, in the first place. And those are the questions that we should be asking, rather than being drawn into a debate about better technology.

ELEANOR DRAGE:
That definitely speaks to how Kerry and I understand gender and race as systems that are created in order to better surveil certain bodies, to whom border crossing is then differentially accessible. We see the term ‘identification system’ - whether that’s gender or voice identification - as a misnomer: these systems don’t identify gender and race, they're control mechanisms, right. Norms are unstable, insecure, and have to be iteratively enforced, and the repeated instructions encoded in these systems are really good at naturalising and fixing race and gender.

When I was reading your book Wild Things, it made me think about how humans use technology to rein in gender and race, to attempt to make these systems stable and legible and interpretable. This is so important to us as humans because we define ourselves through technologies of race and gender, and so it’s kind of brilliant and telling that these systems are always on the brink of failure. Technology in this way is a wild space: it’s never stable, it’s constantly being reined in from bugs, from errors, unexpected behaviours, interference, and other random elements. Technology is, I think, like the wild, a space over which humans at least try to exercise their dominion. So exposing how technology exceeds these attempts to be reined in can be really generative in revealing how failure is always at the heart of all human systems. I love the work of the art history scholar Christina Grammatikopoulou, who theorizes the glitch as the “wilderness in the machine”, a space which resists predictability. So I’d like to ask you what you think the benefits, and perhaps also the limitations, are of thinking with the digital wildness of the glitch?

JACK HALBERSTAM:
Yeah, I've written a little bit about the glitch, and the way in which this idea of a technology that just has a little hiccup in it, you know, nothing to be concerned about, is in fact potentially a portal to the undoing of the technological determinism that seems built into the machine. And, you know, I know that there's a book called Glitch Feminism, and there the glitch is a very provocative occurrence. And remember, the glitch represents a kind of time-space function in many senses: it happens in a time that we can’t account for, and it happens in a space that we can’t access. So it has a lot of really kind of imaginative capacity, and in the essay that I wrote on the glitch, I really liked thinking about it. But my guess is that, you know, like everything in capitalism, the glitch is easily incorporated back into various kinds of profit regimes, because the glitch can be incorporated into what we might see as the friendliness of the machine. Right? The reason that the machine is disturbing in so many films, from 2001: A Space Odyssey to Blade Runner, is because of its seeming perfection. And it's only when the machine begins to break down that the human sees how foolish they were to invest in the fantasy of the perfect machine, on the one hand; but the breakdown of the machine also seems as if it makes the human and the machine into both rivals and equals. So the glitch is easily incorporated into a kind of humanising of machine worlds, and can easily just become another component that makes the technology seem user-friendly rather than a problem. That said, you know, the power of the glitch is its unpredictability on the one hand, and the idea that something that was programmed into a machine is fallible, and that its fallibility has outcomes that were not predicted, or were not allowed for, by the inventor. A film that I actually quite like, but also have critiques of, about exactly this is Ex Machina, a great film by, I believe, Alex Garland. It was quite a prescient film in that it understood that social media was being misread by huge numbers of the population as a connection tool. The inventor in the film understands that social media is not a connection tool; it's simply a way to collect massive amounts of data that can then be uploaded into new and better forms of AI. And this is where the Good Robot sort of comes into play. Both the patriarchal failure of imagination of the film and its genius is that all this idiot guy, who's, you know, touted as this big genius, this Bill Gates type of character, can think of to do with the robots that he creates is to create an army of sex workers. Right. And this is really interesting to me. And I commented on this in an essay I wrote about Ex Machina: going back to, you know, very early films about female cyborgs, like Fritz Lang’s Metropolis, for example, or recently the TV show Humans, it's very difficult for the male imagination to think about the robot as anything other than a wholly submissive technology for his sexual pleasure. Think of all the things you could do if you actually invented an army of intelligent machines; you could set about doing all kinds of, you know, socially beneficial things in relation to the environment, and work, and other things.
All he can think of to do with it is come up with a whole bunch of spectacular-looking, often Asian-inflected robots with whom he can have sex and who can serve him meals and so on. It's like The Stepford Wives. But there is a glitch, and what saves the film is the glitch in the machine, which is that the robots are so intelligent that they are able to develop emotional responses like anger. And the beauty of the film is that the director allows the genius hero to be unceremoniously killed through the collaborative efforts of the army of female robots that he's invented. And it's a great moment. And the ending of the film, where one of the robots leaves the compound and goes off to a helicopter waiting to fly one of the scientists back to the city, is an opening: there's the glitch, you know, the female-coded robot has escaped, has killed, and is off using human technology to wreak her vengeance on the world. And I love this open-ended conclusion, which I think is the most, you know, hopeful kind of impact of the glitch: a glitch in the narrative, or the glitch in the machine, a glitch in the technology, opens the technology up for purposes that were not originally intended. And that's probably the best we can say about the glitch.

ELEANOR DRAGE:
I’m really interested in secondary uses, partly because all technologies are dual use and partly because when we build a bit of software we often have no idea what its future application will be. You also mentioned Metropolis as the antecedent of Ex Machina, and I remember seeing Fritz Lang’s glorious Maschinenmensch for the first time and wondering what the world would look like through her eyes. What does she see? Of course she sees what her makers see, what the data she’s programmed with allows her to see. So really it’s not a question of what we want machines to see but how we want them to see. This is why, when we think about and build machine vision, there are so many questions that need asking, and you’ve raised lots of them already when thinking about the use cases of AI and when it is and isn’t appropriate for machines to attempt to read bodies. I wanted to ask you whether glitches can revitalise understandings of what it means to correctly read a body through technology, but given what we’ve discussed already, can you also tell us what you think we should be concerned with when we develop machine vision?

JACK HALBERSTAM:
I think the problem is the idea that there's a correct way to read the body, and then whether the machine can capture that; you know, exactly as you're questioning your own premise, I think that's the way to go here. The question is: why are we wanting to read bodies all the time? I think it's important, as feminists, as queer people, as people who are interested in technology but also deeply suspicious of it, to see how technologies are often for something that we as users, as just quotidian users, don't always recognise. You know, whether it's the: oh, hey, we have better scanners at the airport to keep you safe in the plane, which is not really about keeping you safe in the plane; it's about shoring up the nation's borders, and intensifying a war that's being fought every day against people of colour and people of Middle Eastern origin and so on within the borders of the US. It's got nothing to do with your safety and everything to do with the sort of building of the wall that was so much of a triumphant part of the Trump regime, for example. So similarly, this idea of recognising bodies is deeply suspicious to me. And we don't want a technology that's better at recognising bodies; we want bodies that are better at evading technological recognition. We want to think more in terms of opacity and illegibility and autonomy from some of the technological regimes that seek to capture us. And we want different and less regulated ways of actually making connections that are not constantly being monetized while selling you on staying connected to people you know. I mean, the best example of massive misrecognition of technological function has been social media. It really is. At this point, how people can continue to believe that when they post on Facebook they're just sharing content is beyond me. I think Facebook, Twitter and even Instagram are incredibly manipulative, pernicious technologies that are doing all kinds of things - in terms of surveillance, in terms of collecting information, in terms of selling, in terms of, you know, saturating people's worlds with new forms of consumption, and so on - that most of us are not particularly aware of, because we are casual users who wander into these spaces without the requisite amount of suspicion. And, you know, I've even written a little manifesto called “Get Off”, which is whimsical more than anything, in which, you know, I argue that maybe - and I'd like to think of this as a kind of solidarity with something that the Argentinian feminist Verónica Gago calls the general strike, which for her is not simply a strike of workers, but a strike of domestic labourers who don't even get recognised for their work, of homeless people who aren't seen as workers, of the unemployed who don't enter into the sphere of labour, right; the general strike is all of these people who are being marginalised by new forms of governance and technology - I'd like to think of getting off, getting offline, as part of a general strike where we simply withhold our content, our digital labour, our streams of data from all of these voracious and invisible companies that are just soaking up everything that people put out.
And, you know, I've been reading this week in the paper about outrageous amounts of money that are being paid to people on Substack, you know, where people pay to access various writers' content, and the company makes itself look legit and benevolent by handing out paychecks very visibly to very visible, often queer and trans*, writers, in order to, again, make it seem as if the project here is to get people's writing out into the world, when that's not the project at all. The project is to soak up mini-markets, micro-markets that individuals have created with their writing, and then turn the individual into a content producer. You know, it's the way that people who have thousands and millions of followers on Twitter serve as influencers, and they think, oh, people are so interested in my opinion. No, people are interested in your followers; they don't care about your opinion, they care that your opinion is pithily expressed to your followers, and once the followers are there, it's a well-placed ad that's the entire purpose of you being on Twitter in the first place, right? I think we're so naive. Maybe I'm just speaking generationally, and maybe younger people are less naive, but I have a feeling that they're way more naive, because some of them are being paid, and it makes it seem as though there's a distributive logic in how these platforms are being funded and paid out. But in fact very particular people are being paid on Substack, and it's not the people writing very radical tracts about changing the world; it's people who are producing pithy commentary that will draw subscribers who want to be entertained. It's kind of like plus ça change: the more things change, the more things just make money. And any of us who are flattered into thinking we have become great content providers have in fact become very vital vectors for capital accumulation - these are the naivetes that we have to confront in the next period of time.

KERRY MACKERETH:
This is so fascinating, and I wish we could continue considering this huge debate you’ve raised around the promises of supposedly neutral technologies that, as you said, then take on a capital life of their own in ways that we might not fully understand or predict. But sadly we’re now drawing to the end of our wonderful interview with you, and Eleanor and I wanted to know what you think the study of machine intelligence can do for feminism and queer theory, because we’ve heard a lot in this interview about everything that feminist and queer approaches bring to our understanding of technology, very broadly conceived, but we’re particularly interested in what you said in one of your first essays, “Automating Gender”, when you talked about the ways in which “postmodern feminism […] can find positive and productive ways in which to theorise gender, science and technology and their connections within the fertile and provocative field of machine intelligence”. So could you explore, just in these last few minutes, the fertile interplay between these two fields?

JACK HALBERSTAM:
Well, you know, I published that essay, Automating Gender, God, I don't know, maybe in the early 90s anyway, and we just don't live in that world anymore, do we? I mean, that's, what, 30 years ago; we barely use the term postmodern anymore. It's not a question of just, you know, accessing the promise of machine technology, because that promise has already expired and turned into something way more sinister. I do think, however, that there are new generations of scholars in queer and feminist theory who are figuring out how to produce media content, for example, and circulate it outside of the usual frames and platforms. And I'm, you know, deeply interested in the work that people are doing online - web TV shows, or movies that people are able to circulate to thousands of people separate from a studio system. There are possibilities for individual users to access niche markets, and to access them without the usual channels being open to them. It's just that we're continuing on the path of sort of rewarding individuals, and not really changing the mode of production, for want of a better phrase. So I'm probably not as hopeful now as I was 30 years ago about the promises of technology. I think that we're all so tethered to the machine that the idea that the machine is going to work for us is a little bit naive at this point. So maybe I've gone from a kind of machine optimism to machine pessimism. And, you know, especially after the pandemic, honestly, I think we are all interested in IRL; we would like to be back in physical space with other bodies, and less mediated in our interactions. And, of course, after something so massive has happened, there's always a hope that we will not go back to normal. But I'm afraid that the signs seem to be pointing in the opposite direction. I mean, this pause that we've been in is very long for a world that was spinning so fast; this very long pause should have given us time to rethink our relationships to technology, to machines, even as we became ever more dependent on them in a world of lockdowns, and separation, and social distance. But I think that the opposite is probably true. I think that we have become so reliant that we will not break from the machines in the way that we probably should. And think of something like the family: the family is a technology at this point. It's a technology for organising social relations, stratifying reproduction, forming very clear lines of inheritance and so on. That's what the family is. It's a technology for distributing wealth through the proper channels and making sure it never leaves those channels, which are mostly white and already rich, right? Those kinds of technologies, like the family, have been under duress, and I'm a bit more hopeful that potentially people have seen the limitations of this mythical structure that was supposed to just sort of magically distribute happiness and contentment. In fact, what we found during the pandemic was that people actually don't want to spend more time with their family, whatever they may say in public; they actually want to get away from this, you know, lousy technology. And now I have a little bit more hope. You know, I think that people have seen the limitations in some of the social mechanisms that we've adopted for relation, and I wonder whether people will go right back into that mythology, and my sense is that they will not.
And, you know, as you begin to unpick the locks on some of these social forms, it is possible that we will then, you know, express our dissatisfaction across wider and wider parameters. So, not a happy ending, but hopeful in terms of ending things - ending things that have become, you know, deeply problematic or suffocating. At this moment, when we're on the verge of opening up, the hope is that we want something different from what was happening when we went into lockdown in the first place.

ELEANOR DRAGE:
Mmm, and while it’s been a terrible time for people in unhappy living situations, or who have been really ill or have loved ones who have been ill, it’s been interesting to see these new lockdown bubbles that have emerged outside of family units. I’ve been really lucky this time round, in the UK’s third lockdown, to be part of a strange non-family unit who I spent Christmas with, and we could support each other.

JACK HALBERSTAM:
Yeah, yeah, I think that's so important, the way people have tried to think about, you know, social life outside of the family and social responsibility. But also, COVID has made very clear that despite, you know, massive amounts of technology, medical and otherwise, the inequities of race and class play out in disastrous ways under something like a pandemic, where the distribution of death, mortality and disease across populations of colour and poor people is, you know, so intense, and so much worse than across affluent white populations. Again, with this kind of information - like the information that was delivered to the American population by the nine-minute video of George Floyd being killed by a police officer - you want to see these kinds of technological interventions have immediate impact, and in some ways they have, and in other ways that impact is still to come.

ELEANOR DRAGE:
And it’s been really disappointing to see that instead of information from camera phones, from social media, from people’s use of these new technologies being used for good, for having this really positive impact, it's being used for completely monstrous applications. I don’t know whether you’ve been following, but the ACLU found that Geofeedia, a search-by-location tool that allows customers to access and filter real-time social media feeds anywhere in the world, was collaborating with the Baltimore Police Department in 2015 to make arrests at the Freddie Gray protests. What they were doing was this: the Baltimore Police Department had access through Geofeedia to Instagram’s APIs, and they were using that to see who was present at the Freddie Gray protests, to see people’s faces, and then to run those faces through their own facial recognition system against people with warrants for arrest, which meant they could go out into the protests, execute those arrest warrants, and effectively shut down the protests. And to make it worse, Geofeedia didn’t stop there with that collaboration; they were keen to use the Freddie Gray protests to train their algorithms to identify what a violent protest looks like. So which are the protests in the future that are going to be shut down because Geofeedia’s algorithms have identified them as dangerous or anomalous? Ones that have a disproportionate number of Black bodies in them.

JACK HALBERSTAM:
Right. But how much did the police use that for the white supremacist, you know, uprising, the riot of white supremacists on January 6? I mean, that entire mob should be in jail by that same logic, and yet we've heard so little about the prosecution of people whose faces were completely uncovered, because they're anti-maskers, who, had they been Black, would have been shot on sight. So isn't that interesting? You know, it's not just the technology, is it? It's the way in which the technology is deployed over and over again against the same population, leaving other populations - white people mostly, white supremacists in particular - free to engage in whatever kind of protest, whatever kind of, you know, fascist activities they may please. So it's the technology, and then it's the combination with absolutely legitimised forms of social distress, that creates exactly this kind of disaster.

ELEANOR DRAGE:
Absolutely, and Simone Browne’s Dark Matters is fantastic on this. Although, I want to push back a little bit on what you said: I think people are aware. We’ve been teaching what it means to be a prosumer for a long time now; I remember hearing that word in school, and you have an implicit understanding of generating content and how you can be both a producer and a consumer, and that is the core of what Facebook’s business model was and why it did so well. So people aren’t incentivised to leave; they like producing their own content, and they don’t mind so much where it goes. And what worries me so much about the Freddie Gray protest example, where people were targeted because they uploaded photos of themselves onto Instagram, is: how many of those people are actually going to leave Instagram? I would say not a lot.

JACK HALBERSTAM:
Yeah. Well, what would it take to, you know, engage in social revolution? I mean, leaving these social technologies, you know, is kind of the least of it in some ways. But this is why I find Paul Preciado’s work so helpful, because it kind of explains the way that we're addicted to the very technologies that regulate us, and it's the cycle of addiction, frustration, excitation, desire for recognition that pulls us exactly back into the very zones that are policing us, you know. And it's that work, plus Ruha Benjamin's work, Simone Browne’s work, all the work that's being done at the intersections of race, sexuality and technology, that is giving us complex accounts of why we're so stuck and how hard it's going to be to get out.

ELEANOR DRAGE:
Well, thank you so much. Kerry and I are not afraid of an unhappy ending.

JACK HALBERSTAM:
Okay, even the good robot is, you know, sometimes down.

ELEANOR DRAGE:
The good robot is the utopia beyond the pages, it’s the place from which we work. Thank you very much for joining us today, it’s been brilliant. 

JACK HALBERSTAM:
Thank you.


Reading List 

By the Guest

Mentioned in this episode:

Halberstam, Jack (1991) “Automating Gender: Postmodern Feminism in the Age of the Intelligent Machine”. Feminist Studies 17(3): 439-460. doi:10.2307/3178281

---. (2011) The Queer Art of Failure. Durham, NC: Duke University Press.

---. (2019) “This is Not a Love Story”, in Ex Machina Screenplay Book. New York: A24.

---. (2020) Wild Things: The Disorder of Desire. Durham, NC: Duke University Press.

---. (2014) “Off Manifesto”. Feminist Art Coalition. Big Machine/Republic. https://static1.squarespace.com/static/5c805bf0d86cc90a02b81cdc/t/5db8b20e583bec29713ab2c7/1572385294504/NotesOnFeminism-1_JackHalberstam.pdf

Other:

Halberstam, Jack (1995) Skin Shows: Gothic Horror and the Technology of Monsters. Durham, NC: Duke University Press.

---. (1998) Female Masculinity. Durham, NC: Duke University Press.

---. (2005) In a Queer Time and Place: Transgender Bodies, Subcultural Lives. New York: New York University Press.

---. (2012) Gaga Feminism. Boston: Beacon Press.

---. (2018) Trans*: A Quick and Quirky Account of Gender Variability. Oakland: University of California Press.


What the Guest is Reading 

Ahmed, Maryam (2020) “UK passport photo checker shows bias against dark-skinned women”. BBC News, 8 October 2020. https://www.bbc.co.uk/news/technology-54349538

Barad, Karen (2015) “Transmaterialities: Trans*/Matter/Realities and Queer Political Imaginings”. GLQ: A Journal of Lesbian and Gay Studies 21(2–3): 394.

Preciado, Paul B. (2018) “Baroque Technopatriarchy: Reproduction”. Artforum International, 1 January 2018. www.artforum.com/print/201801/baroque-technopatriarchy-reproduction-7318

Preciado, Paul B. (2014) Pornotopia: An Essay on Playboy’s Architecture. New York: Zone Books.

Benjamin, Ruha (2019) Race After Technology: Abolitionist Tools for the New Jim Code. Cambridge: Polity.

Browne, Simone (2015) Dark Matters: On the Surveillance of Blackness. Durham, NC: Duke University Press.

Haraway, Donna (2016) Staying with the Trouble: Making Kin in the Chthulucene. Durham, NC: Duke University Press.

Haraway, Donna (1985) “A Cyborg Manifesto: Science, Technology, and Socialist-Feminism in the Late Twentieth Century”, Socialist Review 80: 65–108.

Keeling, Kara (2019) Queer Times, Black Futures. New York: NYU Press.

Russell, Legacy (2020) Glitch Feminism: A Manifesto. London: Verso. 

Snorton, C. Riley (2018) Black on Both Sides: A Racial History of Trans Identity. Minneapolis: University of Minnesota Press. 

Spade, Dean (2009) “Documenting Gender”. Dukeminier Awards: Best Sexual Orientation & Gender Identity Law Review 8: 137. https://digitalcommons.law.seattleu.edu/faculty/670

Gago, Verónica (2019) Feminist International: How to Change Everything. London: Verso.