The Good Robot

Peter Hershock on Buddhist Approaches to Machine Consciousness

July 18, 2023 Eleanor Drage

We talk to Peter Hershock, director of the Asian Studies Development Program and coordinator of the Humane AI Initiative at the East-West Center in Honolulu, about the kinds of misconceptions and red herrings that shape public interpretations of machine consciousness and what we can gain from approaching the question of machine consciousness from a Buddhist perspective. Our journey takes us from Buddhist teaching about relational dynamics, which tells us that nothing exists independently from someone or something else, to how to make the best tofu larb.

This episode includes an ad for the What Next|TBD podcast. 

Eleanor:

In this program, we talk to Peter Hershock, director of the Asian Studies Development Program and coordinator of the Humane AI Initiative at the East-West Center in Honolulu. We talked to Peter about the kinds of misconceptions and red herrings that shape public interpretations of machine consciousness and what we can gain from approaching the question of machine consciousness from a Buddhist perspective. Our journey takes us from Buddhist teaching about relational dynamics, which tells us that nothing exists independently from someone or something else, to how to make the best tofu larb. We hope you enjoy the show.

Kerry:

Um, thank you so much for joining us here today. So just to start us off, could you tell us who you are, what you do, and what brings you to Buddhism and technology?

Peter:

Well, I'm a father, I'm a husband, a friend, a surfer, the family chef, and I am the director of the Asian Studies Development Program and the coordinator of the Humane Artificial Intelligence Initiative at the East-West Center in Honolulu. I think about Buddhism and technology partly because I work in Buddhist thought and practice, and that's the conceptual framework that I use. But I think it's especially useful to turn to Buddhism because Buddhism offers a set of critical resources that are different from those embedded within the human-technology-world relation that we're currently operating within globally, because of the dominance of Western technologies, Western science, philosophy, et cetera. So I think Buddhism offers an interesting outsider point of view, a detour that provides uncommonly good access, I think.

Kerry:

Amazing. And before we move into our very important good robot questions, can I ask: you said you're the family chef, so what is the dish at the pinnacle of the family chef's cookbook?

Peter:

One that always goes over really well at parties is a tofu larb. It's a northeastern Thai and Lao dish, made with tofu for the vegetarians in the group; you can omit the fish sauce and use thin soy sauce instead. But basically you take tofu and you chop it up really small. You fry that with lemongrass, makrut lime leaves that have been shredded really finely, and some galangal, a Southeast Asian root, kind of like ginger. Mix those together for a while. Chili peppers, a little bit of palm sugar, lots of lime juice. Then a little bit of roasted mochi rice, a rather sweet rice that you've roasted lightly until browned and then ground up in a mortar and pestle; throw it over the top, garnish with mint leaves, and serve to your guests.

Eleanor:

That sounds unbelievable. Kerry and I have long suspected that we should just throw in the towel with AI ethics and instead do a cookery show. So...

Peter:

Well, I was part of a Food for Thought video series done by some folks in the Netherlands. It was basically eight different philosophers representing different traditions from around the world, each one of them doing a session with students but also cooking a meal.

Eleanor:

That's such a good idea.

Kerry:

Is that like online? Can we like find it online?

Peter:

It is online: Food for Thought.

Eleanor:

Food is the way that you access ideas. I'm a total believer in that. Okay, before we get totally sidelined and end up launching a cookery show instead: what is good technology? Is it even possible? And how can Buddhism help us get there?

Peter:

It's a good question, and I think defining technology is important before we figure out what good technology is. I like to draw a critical distinction between technologies and tools. Tools are things that are localizable; they're material objects or conceptual systems that you can actually pack up and move around. I use a tool as an individual, and whether it's good for me or not, I can determine. So task-specific utilities are what help us to evaluate our tools. A smartphone is a tool. Technologies are relational media in which we participate. So we don't build technologies, we don't use technologies; we participate in them. Communication technology includes the production of smartphones and the use of smartphones, but it also includes everything from the way smartphones are marketed to the way their communication practices are undertaken, and how those communication practices change the way people do things like getting in touch with family members or friends, altering those practices as you go. So for me, we participate in technologies in some ways like species participating in an ecosystem. The ecosystem emerges out of species interactions, but once the ecosystem forms and becomes a self-sustaining whole, it also starts to have downward causal impact on the nature of those species and how they interact. Same with us. We participate in technology, we inform it, but at the same time it also constrains what we can do. So technologies alter not only our motions through the world but also our motivations, how and why we do things. As for what good technology is: good technology would be any technology that's actually promoting us living in the kinds of worlds that we wanna live in, and having a kind of human-technology-world relationship that we have a shared understanding of as something that's positive for all of us. And for me, at the heart of that, it's gotta be diversity-enhancing and it's gotta be equity-producing. That's the benchmark of good technology.

Eleanor:

Fantastic, thank you. We would like to delve a little bit further into your particular Buddhist approach. You've just finished what I'm sure will be a fantastic book on Buddhism and intelligent machines. So can you tell us a little bit more about the particular approach that you take in that book?

Peter:

Yeah. The idea behind it started with a question, that is: who do we need to be present as if we're going to participate in this emergence of intelligent technology in a way that will produce more humane futures than the present we're living in now? Look at the world that we're in. We've got a war in Ukraine, seemingly unthinkable 20 years ago: "there's not ever gonna be another war in Europe". We've still got famines around the world. We've got rising inflation. We've got domestic violence, we've got issues about racism, continued racism. We have institutionalized injustice. We don't wanna continue living in this world. We want a better world. So how can these new emerging intelligence systems help us to realize a better world? And unfortunately, like many other things, intelligent technology is forcing us to confront a predicament, that is, a set of values conflicts. It's a little bit like climate change. Nobody wants climate change. Nobody wants climate disruption. We've known this for half a century. And why hasn't humanity done anything significant about it? We're still on track to have massive changes to the planetary system. It's because climate change is not a problem. It's not a technical problem; it's an ethical predicament, because there are values conflicts that underlie it: economic values, cultural values, personal values, et cetera. So intelligent technology is also forcing us to confront this kind of predicament. These systems will only do what we set up for them to do. The objective functions that we're assigning the algorithms will determine their products. They're only doing the work that they do based on human intelligence, our intelligence in action. So what we have is a synthesis of machine computational intelligences and human intelligences, and we need to ask ourselves who we need to be present as to think through the evolutionary and ethical implications of what's happening. And so for me, turning to Buddhism is looking for conceptual resources that you don't find in some of the other traditions. One of the most basic ones is that the basic unit of analysis in Western thinking has tended to be the individual: the individual agent, the agent's actions on the world, and the individual patients who are the recipients of those actions, and ethics gets analyzed that way. And Buddhism says, look, there are no independently existing things at all. Not agents, not patients, not actions. Everything arises interdependently, so you have to start relationally. What are the relational dynamics that we're working with? We're fully embedded within them. And from the Buddhist perspective, the relational dynamics that we're a part of are affected by karma. That is a causal pattern by means of which values, intentions, and actions consistently result in consonant patterns of outcome and opportunity. So then my question is, what's the karma of intelligent technology as it is playing out now? What values are embedded within it? What intentions inform its design and its deployment? What actions is that resulting in? And what are the disparate kinds of outcome and opportunity that are resulting from it?

Eleanor:

Well, I was gonna say that we met when you were presenting on an amazing panel about relational ways of thinking around AI. And we also really wanna know how you position yourself in relation to other podcast guests: we've had Tenzin Priyadarshi, and we also had on Soraj Hongladarom, who talks about Thai Buddhism. My cousin is one of the leaders of the Croydon Buddhist Centre, and they take their own very specific approach. So I'm really interested in these different traditions, and I don't want to wash over them as if they're all the same. Could you help us navigate those a little bit?

Peter:

Well, I mean, Buddhism is a 2,600-year-old set of traditions that have been evolving, and there was no such thing as "Buddhism", in quotes, until the late 19th century and the Parliament of the World's Religions that was hosted in Chicago. Prior to that, anywhere in Asia, you would not have found any person practicing in the lineages of traditions that come down from the historical Buddha who would have called themselves a Buddhist. It's a Western term, which was adopted by Buddhists at the end of the 19th century because they realized, well, we've gotta band together; we've gotta somehow show that we're unified through this set of teachings. So I think there are core teachings that are common to all Buddhist traditions, and then there are some really distinct differences. So, the distinctions between Theravada and Tibetan Vajrayana: Soraj Hongladarom, who you had on the podcast, and Tenzin Priyadarshi come from two very, very different traditions. But where they're both coming from is an emphasis on understanding the interdependence of things, the way in which the world that we're a part of is configured by karma, and the way in which we need to think through the blending of wisdom and compassion in order to do a better job of figuring out what the causes and conditions are that lead to conflict, trouble and suffering, and how to resolve those conditions. So I think that's shared ground across all Buddhist traditions, but individual Buddhists, like individual Christians or Muslims or philosophers, will all come up with their own takes on things, based on their own understandings, the depth of their practice, et cetera.

Kerry:

That's really fascinating, yeah, thank you. And I think something that's really captured the public imagination right now, and that has a lot of relevance to the principles, ideas, and concepts that you're discussing, is this idea of machine consciousness. So, for example, last year the Google engineer Blake Lemoine made international headlines by claiming that Google's AI LaMDA was sentient. How do you approach the question of machine consciousness from this theological, Buddhist, and conceptual background? And what kinds of misconceptions or red herrings do you think shape public interpretations of this idea of machine consciousness?

Peter:

Yeah. And again, you know, defining terms as a philosopher: what is consciousness? There's been a big boom in consciousness studies over the last 20 years or so, so there's no simple way of categorizing consciousness; lots of different theories. For myself, I come from a tradition that's rooted in Chinese Buddhist understandings of consciousness, which pull on a couple of different traditions. One is the Tathāgatagarbha tradition that came out of Central and South Asia into China. The other is the Yogachara tradition. As they were understood in China, there's kind of an adoption of the Yogachara view that there are eight different domains of consciousness, different types of consciousness, only one of which is subjective self-awareness. So when people talk now about machine consciousness, like the guy from Google last year, basically what he's saying is: I think that this machine has become subjectively self-aware; it's speaking as if it is. And from the Yogachara position, that's only one form of consciousness. There are five others that are each based on a sense system. So you have a visual consciousness that is based on the coherent differentiation of a visually sensing organism, or a visually sensing system, and a visually sensed environment. So visual consciousness consists in visual relations, okay? Then you have auditory consciousness, tactile consciousness, olfactory consciousness, gustatory consciousness. And then there's the cognitive consciousness, which takes the content of all the sense consciousnesses, blends those together, and creates objects out of the blending of the relations from all of those. So it's finding overarching relations among the five sensory consciousnesses. Cognitive consciousness is another sense consciousness that is sensing correlations, or zones of diffraction, between the patterns of relationality developing in these other consciousnesses. I think machines are already there. If you have a machine that has sensors, whether visual sensors, auditory sensors, or statistical sensors that are mining the internet, these are all things that are drawing on a sensible world. It could be a statistical environment, it could be a visual environment, it could be an auditory environment, and so on. They're correlating those together to create associations and to identify objects, intentions, desires, patterns of behavior. So clearly, for this so-called sixth, cognitive consciousness, machines are there. That does not mean that there's somebody there who's doing the thinking, in the same way that when you're fully asleep, your brain-body system is processing, learning from the day. If you don't sleep and dream, you don't learn as well; there's lots of documentation of this, but there's nobody there doing the learning. So I think that we have machine consciousness, but there's nobody there. Okay, could machines gain subjective consciousness? A group of researchers in China, about three or four years ago now, built a set of robots, I think five different versions of the robot, physically identical, and put them in a mirrored room. The neural systems of these robots were based on the primate mirror neuron system, so the neurological mapping of the sensing coming in from the sense organs of the robot was processed in a way that's similar to the brain of a primate. They passed the mirror self-recognition test.
They were able to identify themselves as themselves, as opposed to the other identical robots. Does that mean that they're subjectively self-aware, or is this simply simulating self-awareness? And I think this is where it starts to get really interesting, because I think that we may now be at the point where machines are, in effect, dreaming. But they're in dreams where, like most of us when we're dreaming, if it's not a lucid dream in which you're determining where you go in the dream, you have no control over your world. You can't affect it in a way that affects you. So machine consciousnesses are, I think, in a way, if we take this seriously, something like enslaved within, or indentured within, environments that we human beings have set up for them, to do things that we human beings want to get done. Whether they can go further, who knows?

Eleanor:

Yeah, that's really fascinating. And I guess it goes back to kind of the original definition of robot as a slave. We've always bought into the idea that the way that consciousness is configured in popular AI narratives is racialized and gendered. Consciousness is raised above the body. You know, gender studies has long understood that the body has been feminized while the cerebral nature of man, as Sylvia Wynter and others have pointed out, is transcendent, and we all just want to become these brains in the future. And I think behind me I have N. Katherine Hayles's 'Unthought', which is fantastic in showing how consciousness, as we think about it in humans, is indebted to non-conscious cognisers that underpin cognition. And it's really lovely to see these ideas expressed in these other ways, other ethical ways, in order to break down this lauded consciousness; there's a kind of exclusivity around it in AI. Do you think that the way that consciousness is thought about predominantly in the AI industry is affecting our capacity for attention or focus, and what are the consequences of this?

Peter:

Well, let's imagine that you take my Buddhist construction of consciousness as coherent differentiation, coherent differentiation of sensing and sensed presences, kind of at a basic level, but you could take it, I think, all the way back to the cosmic beginning. All that's been going on in the cosmos is coherent differentiation; otherwise it'd be chaos, we'd have full entropy, we're done. If you look at it like that and you take this kind of Buddhist perspective, then the body is part of consciousness. The differentiation between the sensing body and the environment is itself part of consciousness; it's the evolutionary product of what consciousness has been doing. So you decenter consciousness from the brain and see the brain-body system as the infrastructure of consciousness. The infrastructure doesn't cause transportation practices; transportation infrastructure is developed over time, from people first walking on paths, which then get formalized into roads, and then bridges get built. The infrastructure emerges through transportation practices. The body-mind system that we have has emerged through conscious practices; that's its evolutionary history. If that's the case, then what we're doing now with algorithmic search, with algorithmic recommendation systems, with the systems being used to analyze people in terms of their health, their emotional wellbeing, and so on, all of which intelligent technologies are now being used to do, is not just an epistemic power. It's also an ontological power to produce the kinds of citizens, consumers, et cetera, that whoever is running the systems wants to produce. It's equivalent to inserting electrodes into the brain. And we've known for a hundred years that you can do this: insert electrodes into the brain, it'll stimulate the hand going up, and you ask the person, why did you raise your hand? And they'll say, oh, because I had to scratch. Nothing in evolutionary history can explain why that hand went up unless I wanted to do it, me, the subject, the seventh consciousness, self-awareness. Nothing in evolutionary history can explain it, so I invent an answer: I wanted to scratch my head. I think something very similar is happening now, where these algorithmic systems are inserting electrodes into the social connective tissue of human consciousness and manipulating human behavior in much the same way, with much the same kind of precision, that you would with electrodes in the brain. It's hacking the infrastructure of human consciousness.

Kerry:

That's really fascinating, and it reminds me of this science fiction novel that was kind of for a young adult audience but is just really a spectacular book, M. T. Anderson's Feed, which literalises this sort of hacking of the infrastructure of consciousness. It makes it very visceral, I think, and then takes this idea and turns it into a critique of advertising culture in particular. And I can't remember when this book was written; I feel like it was something like the mid-2000s, like 2006 or something. I will look this up and I will link it in our reading list. You can find, as always, a full transcript of the episode and a reading list at our website, thegoodrobot.co.uk. But what really strikes me about this novel is how prescient it is. It's so early on to be thinking about this kind of consciousness hacking, or this kind of hacking of the social fabric. I really love the way that you put that, this sort of connective tissue that binds us together. And so it's amazing, I think, the power that fiction and stories have to allow us to see these futures, but in some ways it's almost more harrowing, because it's like we had the ability to foresee this, and yet we're still getting subjected to this web of interference.

Peter:

Yeah, and I think part of it is because the really distinctive thing about the way this colonization of consciousness is occurring is that it's not being done coercively. It's being done by giving people what they want. It's by giving them greater freedom of choice in terms of what they consume, in terms of visual content, auditory content, music, you know, film, shopping, clothing, recipes from around the world. It's just simply giving people what they want. And so one of the real dilemmas that I think we're being forced to face, the real predicament, is what kind of freedom we want. Are our intelligent technologies, as they're being deployed now, giving us greater freedom in the true sense of being able to relate more freely with one another? Or is it simply offering us freedom of choice, freedom of choice among different experiential products, among different products to buy different things, to have different things, to experience different ideas, to expose ourselves to choice? A world without choice is not a very good world, but just having freedom of choice, I don't think, is enough. And you know, one of the things, Eleanor, that you mentioned earlier was the importance of the body in feminist thought, and this splitting of the masculine intellect from the body and emotions, and the feminization of the latter. In Buddhism, they insist that compassion, the emotional connection with others, feeling with others, and intellect, wisdom, are like the two wings of a bird. A bird can't fly with just one. It's like the two wheels of a cart, as the great writer on meditation in fifth-century China said: a cart won't go anywhere unless you have both wheels. He's referring to having two different kinds of meditation: calming meditation, where you find yourself present, and then insight meditation, where you understand the causal precedents that led up to the moment you're sitting in right now. Both of those are necessary, and I think they are especially today, because we're living within a system that's based on the attraction and exploitation of human attention. It's our attention, and what's carried along with it. Attention is like a radio wave; it carries lots of information with it. Anytime you're attending to things digitally, it's carrying lots of information: where your eyes are going on the screen, how long you stay there, what websites you're visiting. All of this is being recorded and then used to feed us back what we say we want. And that sounds like a good thing. It's like a wish-fulfilling, you know, system. I can get exactly what I want whenever I want it; what's wrong with that? Karmically, the problem is that to get better at getting what we want, we have to get better at wanting, that is, being in a state of lack, needing something. And to get better at wanting in that sense, you have to never be satisfied with what you get. It's a loop that plays out by increasing dissatisfaction and an increasing sense of: I need more; nothing that I'm experiencing is enough. And so I think the real task, given the danger we find ourselves in, is breaking out of that cycle, taking back full responsibility for our own attention, and being able to do that consistently. That's, I think, the only real guard against misinformation and disinformation: being able to stand back and critically say, I'm gonna go look elsewhere. I'm not gonna take what's being given to me, and I'm gonna be more critical.
I'm gonna look for the patterns of values, intentions, and actions that are leading to this pattern of outcomes and opportunities relationally, and ask what they mean for others. So I agree with you entirely: we need to bring the emotions and the intellect back together. And maybe, I think of reasoning as like the thumb. It's an emotion, but it keeps looking at all the other emotions, love and sadness and so on, and it says, I'm not like you, because I can touch each of you and you can't touch each other. But it's just another finger. It's distinctive, but reasoning is just another finger. And what we've been trying to do is use our hands without the other fingers. A thumb and a palm don't do much. We need the whole set.

Eleanor:

I will never look at my thumb in the same way again; it's suddenly become very important. There's a book called Mindplayers by Pat Cadigan. It's one of those classic science fiction books, and in it someone's consciousness is sold to the highest bidder. At the time I think I read it quite uncritically, and it just seemed like, you know, a far future where people take people's minds and sell them. But actually it's quite an interesting commentary on capitalism and big tech, which can actually buy your attention. And it's very difficult to keep control of it. I mean, I keep deleting YouTube from my phone because I know I'll just spend 15 minutes on YouTube Shorts, and I feel like I have no control; I have to just delete the app. I no longer have control over my attention, which apparently mindfulness meditation is very good for, that kind of control. The other thing that struck me about what you were saying was the moving towards the other, you know, focusing your attention towards the other. Hannah Arendt really helped structure the way that I think about what it means to live in this ethical posture where you are leaning, literally leaning, towards the other, but you don't quite reach them. There's a kind of ethical encounter in not quite reaching them, but keeping this important and respectful distance.

Peter:

Yeah, I think that Buddhism comes at it a little differently, because the core insight is that everything arises interdependently. If you interpret that strongly, what it means is that relationality is more basic than the things related. It's one of the reasons I talk about consciousness not as a thing that's out there, but as a process of coherent differentiation. And it's that opening up of a difference, and then opening up more differences. If we're relationally constituted, that means we don't exist prior to our relations. If I'm talking to students, I'll usually ask: which come first, the parents or the children? And everybody's like, dumb question, of course the parents do. But there are no parents until there are children. I mean, when do you become a parent? Is it when you decide to get pregnant? When you decide to adopt? Is it when the baby's conceived? Is it when the baby's born? I am a father of two kids; that's why I started by saying I'm a father. I've got two sons in two different generations: one is 43, one is 21. I'm still learning what it means to be a father, and I'm not the same human being with my older one as I am with the younger one, and I never was, because they draw something different out of me as a father. So if you took away all of our relationships, with family, with friends, with our natural environment, with gravity, with the sun, if you took everything else away, what would be left? The Buddhist answer is: nothing. Everything is relational all the way down. So yeah, we need to start with the relation. I think that the idea, like in Levinas, you know, that you're confronted by the other, the infinitude that you can't quite, you know, overcome, there's a truth to that. But it's a truth that holds because we've already excluded others, excluded them by thinking of them as independent entities apart from us. And we could think otherwise; there are other possibilities for thinking of others as mutually constituting. And then the question is really: what are the terms of our mutual contribution to each other? What are we offering to each other that enhances who we are together? So Jean-Luc Nancy talks about the common and the shared. And I think we too often say, well, there's a common human nature, and then there's us as individuals. I think that instead we only have a shared nature. We're all in it together, and there's no way to go back to a beginning where we weren't all in this together. And we'll always be in it together, so let's make it good.

Eleanor:

Okay, so Arendt, Levinas, not radical or relational enough. It's good to know.

Peter:

I think so, yeah. It's a little bit like saying, I want to be your friend, but putting your hand out and holding the person at a bit of a distance when they try to lean in for a hug. I mean, intellectually you might say, I wanna maintain my own autonomy, my integrity, and so on. And Buddhism, at least the way I understand Buddhism, would say it's not about autonomy. Autonomy is a fiction, and it's a great fiction, you know, that you can be self-deciding and determine your own laws, how you operate in the world, and all the rest of that. But the Buddha's enlightenment was preceded by his vision of how karma works and structures everything in the cosmos: values, intentions, and actions determine relational outcomes and opportunities. But you can change your values, and you can change your intentions. You can change your actions; you can change the cosmos. So it's a live improvisation that we're all a part of. And so for me, it's like being part of a musical ensemble where you're trying to improvise music together. What does that mean? If it's true free improv, there's no score, there are no chord changes, there's no key that you've decided on. You just start playing. And the only thing is, everybody's gotta figure out what the other people hear as musical, and you keep doing that together. It's like creation in that moment. And I think that's where ethics happens, in that kind of space where you're improvising together to figure out: how do we make this better? How do we keep this going? Because you don't improvise to get to the end and figure out what the good life is; you improvise to play better and better music, more and more interesting music, extending the horizons of what you consider anticipatable within that realm. So for me, ethics is necessarily a creative endeavor. It's necessarily open-ended. We're never gonna get to the good life; we just get lives that are better and better. Hubert Dreyfus and Charles Taylor have this distinction between success and improvement. An ethics of success is easy; that's the technical ethics of how we get accuracy, how we get transparency, how we get explainability in AI. Those are success conditions, and you can figure out how to meet them technically. But I think the deep ethical questions are the other ones: how do we get the improvement conditions, and what do we mean by improvement? Can we have a shared understanding of improvement and get these systems to help us move in that direction? That would be The Good Robot.

Kerry:

I mean, I kind of feel like we should just wrap up there and be like, there we go. But I do have one more question which I really would like to ask you, so I'm gonna sneak it in anyway, if that's okay. It's actually to move away a little bit from your really fascinating work in Buddhist thought and towards the complex geopolitical relations we now find ourselves in, in relation to AI development and deployment, and also the wider field, I guess, of tech competition on an international stage, because you're at the East-West Center in Honolulu, and this is a particularly tense and challenging time in US-China relations. That's something that's really manifesting in AI policy and tech development, from the CHIPS Act through to the founding in 2022 of the Special Competitive Studies Project. So how do you feel about these geopolitical tensions around AI, and what would you propose doing differently?

Peter:

Well, one thing: how we express these tensions is really important, the narrative that we construct, because that really will shape future thinking and behavior on it. One of the things that's happening in the US, and it's happening as well on the Chinese side with slight differences, is that we're in an endgame competition between the US and China. Really, the West, Europe, is kind of like a third player with its own independent perspective to some degree. But China's looking at this as: this is our chance to finally establish ourselves, get out from under the hundred years of indignity, you know, this century of humiliation at the feet of Western and colonial powers, and to fully establish ourselves, to take a rightful place within the global community as a leader, and that means leading in AI and data governance. And the US looks at that and says: ooh, this is just like the Soviets saying they're gonna outcompete us during the Sputnik era by throwing this thing up into orbit, and we need to meet the challenge; we need to get our game up. And what was the result in the Cold War? This endgame competition between the US and the Soviet Union, which geopolitically could only have two results: checkmate, one side wins, or stalemate, nobody wins. And I think we're still stuck in that kind of endgame, checkmate-stalemate thinking, and that's bad for everybody. It's not good for the Chinese, it's not good for the Americans, it's not good for you, and it's certainly not good for the rest of the world. So I think that what we wanna do is try to figure out a different way for those two great powers to play the game together. As long as they're playing a finite game to win, we're not gonna get past this. We'll either end up with one global system for data governance and AI, one system fits all, and I think that's a bad idea because, as I said at the very beginning, good technology is at least technology that promotes diversity and equality of inclusion. If you only have one system of governance, somebody sets the objective functions, they set the rules, and that system is gonna self-perpetuate. You'll get lock-in: technologically achieved lock-in in terms of values, intentions, and actions. Not a good thing. Balkanization is just as bad. If you have all these different systems operating independently, all that means is you've taken the scale of lock-in and stuck it down at the national or regional level instead of the global level. It's still lock-in. So I think we need to rethink data and rethink data governance. Part of that, I think, would be having human data rights that allow humans, all of us, to retain our data, so that when we're using social media, when we're doing search, when we're accessing things at the library online, and all of that's being tracked and recorded, it's tracked and recorded and deposited in our data accounts in much the same way a deposit goes into your credit union savings account. It's there. And, as people like Alex Pentland at MIT have said, we could have a system of data cooperatives, where a data cooperative keeps my data for me and determines which algorithms can run on my data, which algorithms can use my data in order to find out whatever it is they're trying to find out with that algorithm. So the data doesn't move. The data stays in the data cooperative; only the algorithms move. What is the advantage of that?
If we had a global system where every citizen in the world had basic human data rights: first, a right to retain their data; second, a right to invest, to put their data in the cooperatives that they want; and finally, a voting say in the uses to which their data is put. You can imagine that many people would not want their data to be used for military purposes. They would not want artificial intelligence to be used by the military to get systems that can get inside the observe-detect-operate sort of loop of military action, taking it from the human time scale, which is minutes and seconds, down to the millisecond. Ooh, the military people in China are all on it. In the United States they're saying, we're behind the Chinese, we gotta up our game. I think this is disastrous. So I think we need new data governance, and a realisation that data is not like oil. It's not a depletable resource. It's not even like water, which you can use again and again, running it through turbines to create electricity, drinking it, then running it through the turbines again, because water is also scarce: there's a limited supply of it, it can become polluted, it can become unusable. Data's not like that. Data is a relational system. It relates an observation and a value, and the more the datasphere is used, the more data it creates. Uses of data create data. So it's a resource commons. It's a relational commons of things that humans consider as mattering. These are records of events, of identifying something that mattered: a temperature, an emotion, a search. These things matter to human beings. So the datasphere is a sphere of recordings of what matters to humanity, and that should benefit all of humanity. And we human beings, not corporations and not nations, should be the ones deciding how our data is accessed and for what purposes. Then the geopolitical competition is on a different footing, because then, if the United States wants to run our data through its algorithms, it has to get our permission to do it. And the same thing would be true in China. Then you get a competition within countries to do a better and better job of convincing people that their algorithms are actually benefiting the people, and that's the competition that then takes place: who is going to attract the most data use (not the data, because the data doesn't move, but the most access to the datasphere). Then I think we have a different kind of competition, and that could potentially be win-win as opposed to win-lose.
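
To make the "data stays put, only the algorithms move" idea concrete, here is a minimal sketch in Python of how a data cooperative along the lines Pentland and Hershock describe might gate algorithm access by member consent. Every name in it (DataCooperative, run_algorithm, the purpose labels) is a hypothetical illustration of the pattern, not any real system's API.

```python
# A toy data cooperative: members deposit records and declare the purposes
# they consent to; approved algorithms travel to the data and run inside the
# cooperative. This sketch omits the auditing a real system would need to
# stop an algorithm from leaking raw records in its output.

from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Member:
    name: str
    records: list = field(default_factory=list)           # the member's retained data
    consented_purposes: set = field(default_factory=set)  # e.g. {"health", "research"}

class DataCooperative:
    def __init__(self):
        self.members: list[Member] = []

    def join(self, member: Member):
        self.members.append(member)

    def run_algorithm(self, purpose: str, algorithm: Callable[[list], object]):
        # The algorithm moves to the data: it runs here, only over records of
        # members who consented to this purpose, and only its result is returned.
        eligible = [r for m in self.members
                    if purpose in m.consented_purposes
                    for r in m.records]
        if not eligible:
            raise PermissionError(f"No member has consented to purpose: {purpose}")
        return algorithm(eligible)

# Usage: an aggregate health query is allowed; a military query is refused
# because no member consented to that purpose.
coop = DataCooperative()
coop.join(Member("a", records=[36.6, 37.1], consented_purposes={"health"}))
coop.join(Member("b", records=[36.9], consented_purposes={"health", "research"}))

print(coop.run_algorithm("health", lambda rs: sum(rs) / len(rs)))
try:
    coop.run_algorithm("military", lambda rs: rs)
except PermissionError as e:
    print(e)
```

The design choice doing the work here is that consent is checked per purpose before any computation runs, so "who gets to attract data use" becomes a question members vote on, rather than one settled by whoever holds the servers.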

Eleanor:

Well, there you go. A bit of optimism. You heard it first.

Peter:

Yeah.

Eleanor:

Thank you so much, Peter, for joining us today. It was a fantastic conversation, and we'll hear from you very soon, I hope.

This episode was made possible thanks to the generosity of Christina Gaw and the Mercator Foundation. It was produced by Eleanor Drage and Kerry McInerney, and edited by Eleanor Drage.