The Good Robot

The Venerable Tenzin Priyadarshi on Buddhism and AI Ethics

June 01, 2021 | University of Cambridge Centre for Gender Studies

In this episode, we chat with The Venerable Tenzin Priyadarshi, President and CEO of the Dalai Lama Centre for Ethics and Transformative Values at MIT, about how Buddhism can inform AI ethics. We discuss the problem with metrics, how to make meaningful contributions to ethics and avoid virtue signalling, why self-driving cars coming out of Asia and Euro-America prioritise the safety of different road users, and whether we should be trying to make machines intelligent, wise, or empathetic. 


This episode includes an ad for the What Next|TBD podcast. 

KERRY MACKERETH (0:01):
Hi! We're Eleanor and Kerry. We're the hosts of The Good Robot podcast. Join us as we ask the experts: what is good technology? Is it even possible? And what does feminism have to bring to this conversation? If you wanna learn more about today's topic, head over to our website, where we've got a full transcript of the episode and a specially curated reading list with work by, or picked by, our experts. But until then, sit back, relax, and enjoy the episode. 

ELEANOR DRAGE (0:32):
Today, we’re chatting with The Venerable Tenzin Priyadarshi, who's the President and CEO of the Dalai Lama Centre for Ethics and Transformative Values at MIT. We discuss how Buddhism can inform AI ethics, including the problem with metrics, how we can make meaningful contributions to ethics and avoid virtue signalling, why self-driving cars coming out of Asia and Euro-America may prioritise the safety of different road users, and whether we should be trying to make machines intelligent, or wise, or empathetic. We hope you enjoy the show.

KERRY MACKERETH (1:06):
Thank you so much for joining us here, it’s such an honour. Would you mind introducing yourself and telling us what you do, and what brought you to the study of Buddhism and technology?  

TENZIN PRIYADARSHI (1:15):
My name is Tenzin Priyadarshi. I am a Buddhist monk. I run the Dalai Lama Centre for Ethics and Transformative Values at MIT. And I guess it's just by virtue of curiosity and interest. I've been a monk since I was 10, and so my interest in Buddhism began fairly early in my life. And I had an interest in philosophy and philosophy of science, and that led me to the study and design of technology, so I tried to blend the two.

KERRY MACKERETH (1:48):
Fantastic. And could you explain a little bit more to us: what does Buddhism mean to you, and what does it mean in the context of technology?

TENZIN PRIYADARSHI (1:55):
I guess Buddhism is a vast sort of field of knowledge, and people derive different things from it. But for me it's about understanding the nature of reality; it is about having principles and disciplines to live a wholesome life that allows for growth, that allows for becoming a better human, that allows for thinking about how to bring more sanity to the chaos that we live in.

ELEANOR DRAGE (2:30):
Like Kerry and I, you're interested in shaping the conversation around ethics. I worry that some of the ethical frameworks that continue to proliferate in AI spaces are at best bland non-performatives and at worst can supercharge the already misdirecting values imparted by capitalism. Our podcast is called The Good Robot, so with this in mind we'd like to ask you: what kinds of ethics frameworks should we be developing, and what does ethical technology even look like? And finally, can ethical technology be built, or even make commercial sense, within a capitalist system?

TENZIN PRIYADARSHI (3:09):
Those are all very, you know, good questions, all very loaded questions. I think the first thing is that we have to be observant that there's a lot of virtue signalling that happens in the tech industry, just like it continues to happen on Wall Street and in banking industries, and so on: at any time, you know, if somebody's critical of it, they try to spin off social responsibility and ethics-related messages. But I think it requires a sincere reflection. If we are to take ethical principles seriously, then the first thing we need to do is get off this horse of public relations and public perception; that should not be the thing driving our desire to be ethical and to design ethical technologies. The second thing is that, you know, there are aspects of timeliness and cost involved. Most often, capitalism as well as the tech industries are used to thinking in terms of compliance frameworks, which is: you design a product and deploy it, and then, if something goes wrong, legally or otherwise, you try to make amends. What I have been trying to push for, or propose, is using ethics at the design stage, so that we are not, you know, waiting until the end of things, because obviously everything depends on the scale of things. More and more technologies are built with the idea of scaling up fast, and when you're scaling up fast, chances are the damage can also scale up fast. And so it's just not a very efficient thing to wait and just look at the compliance framework. So we need to rethink how we want to introduce ethical framing into the design process itself, which implies asking the right questions, which also implies having the right group of diverse designers, of diverse participants, in the room. You cannot just leave questions of wellbeing, questions around optimisation, around wellness, just to technical folks; these problems are interdisciplinary in nature, and therefore they require a diverse team of people.

ELEANOR DRAGE (5:30):
You’ve said that metrics can do violence to the spiritual landscape - can you explain to our listeners what you meant by that and what kinds of harm might result from ascribing value to whatever it is that we want to make quantifiable?  

TENZIN PRIYADARSHI (5:19):
I think that the challenge is that, you know, I'm not against quantitative analysis of things. I think it's useful; data gathering in that process is useful. It helps us gain certain kinds of information and then iterate on whatever it is that we are trying to work on. But humans are not simply quantifiable objects, which is why there needs to be a balance of both qualitative analysis and quantitative analysis. And qualitative analysis means that we look at stories, we look at narratives, and we look at certain values that we want human society or civic society to exemplify, but recognise that we don't have very good quantitative measures yet. Questions around happiness, questions around wellbeing, around kindness, around civic trust: these are all important aspects, we can all agree, in order to create a wholesome fabric in civic society, but we do not have a very good quantitative understanding of these issues. And sometimes the challenge with metrics is, you know, it's like the problem of when you have a hammer, everything appears to be a nail. And so when you're seeking just the quantitative aspects of things, then chances are that we only work in spaces where metrics are available, where quantitative analysis is available, which leaves a vast aspect of human behaviour, organisational behaviour, up to [chance], because we are not willing to engage with it, because we simply say to ourselves that, oh, you know, there are no metrics for it.

KERRY MACKERETH (7:17):
That's really fascinating, and it's really wonderful to hear as someone who works in qualitative research and really believes in what it can bring to the production of new and better technologies. I was really interested in what you were saying before about optimisation and wanted to ask you who and what we should optimise technology for. So, for example, in the field of AI, when we're trying to address some of the harms that emerge from these technologies, one of the problems, I think, is that we're often optimising machines for a certain kind of cognitive capacity that we call 'intelligence' without understanding what intelligence means or critically interrogating what we're doing there. So, yes, back to that question of optimisation: who should these technologies be used for, and what should they be trying to do?

TENZIN PRIYADARSHI (8:01):
I think, again, you know, that is a vast question, because society doesn't have an agreement on shared values that we should be optimising for. However, as I mentioned earlier, it requires an honest and sincere reflection so that we are not layering false narratives onto our design process. So if you're optimising for efficiency, which is what much of the AI industry thus far is driven by - how fast [speed] and how much convenience can be embedded in these things - that has its place, you know; I mean, that's what augmentation helps us to do. And there are places, there are judicial systems, healthcare systems, which could utilise certain kinds of efficiency optimisations. But you have to get back again, you know, to ground zero and ask yourself: is that all that we want out of these technologies? And if we were to frame it well, in terms of, you know, should we be optimising to promote more trust, more kindness, more empathy, then what would these algorithms look like? How would we design for those things? It is questions around empathy and trust-building that also give us more sensitivity to the fact that our own data is corrupt, that our old data is biased. Yes, it is available, it is available in vast amounts, but just because it's available in vast amounts and ready for use, should we be using it to train our algorithms? Or should we create new sets of data? These are, again, questions that we need to ask ourselves, because, you know, one of the biggest challenges of such kinds of processes is that humans already get very creative about moral agency, about whose responsibility it is, and so on. And so when you design a machine learning platform or AI for certain kinds of systems that are directly influencing civic society, like the judicial system, or how somebody qualifies for bank loans or credit scores and things of that nature, it's very easy for a human being to simply blame the machine learning platform and say, well, I want to help you, but my computer doesn't; you know, I want to help you, but the scores that you have given, we entered them, and it's the computer that's making the decision. And that would pose, I think, a major challenge in civic society if we continue to operate in that framework.

ELEANOR DRAGE (10:35):
Thank you, and I was thinking about what the relationship is between intelligence and wisdom and you talked a bit about wisdom before, and earlier this morning I was reading about how early machine learning research also contributed to theories of human intelligence, so how then can we build systems that contribute to understandings of human wisdom, or that develop understandings of human wisdom? What is the relationship between technology and wisdom and how can we develop that?

TENZIN PRIYADARSHI (11:05):
You know, one of the simple examples that I give is, you know, from a simple data perspective and genus sorting (11:20), one recognises that a tomato is labelled as a fruit, but you never put it in your fruit salad. And that is the distinction between intelligence and wisdom: you understand from a data perspective that this qualifies as such, but wisdom requires a certain degree of actualisation, that is perhaps sometimes gained through trial-and-error methods, and sometimes intuitive, and sometimes otherwise. Meaning that intelligence, in its most conservative form, does not account for the other kinds of things by virtue of which humans make decisions. Intuitive knowledge, you know, we don't have very good ways of replicating it. Even things like how humans make ethical decisions, or ethical choices, we don't have a very good way of replicating [that]. I mean, recognise the fact that it has been 3,000 to 4,000 years since, you know, in different civilisations, we have tried to code ethics and law, and we don't have a very good way of teaching it yet. So to try to replicate it in a machine learning platform is kind of a daunting task. Meaning that if it was just, you know, simple syllogisms of either/or, or a syntax of those natures, where you're thinking in binary modes, then it's something else, but human choices and human decision making are not just binary. And so wisdom comes in there: it allows us to deal with certain degrees of complexity that we may not be able to navigate just by virtue of pure data, or pure intelligence.

ELEANOR DRAGE (12:56):
We’re particularly interested in Buddhist approaches to conceptualising Human-AI ecosystems, so relationships between humans and technologies broadly speaking, and I’m thinking here of all sorts of jobs where humans are now working really closely with AIs, take the chatbots that help customer service teams or the automated decision making systems that inform a verdict made in a court of law. These are really intimate human-techno working relationships. What’s your take on how Buddhism can uniquely help to build better human-AI ecosystems? 

TENZIN PRIYADARSHI (13:33):
I think, you know, any ecosystem is only as good as its ability to self-correct and grow; otherwise, it becomes a stagnant system. And similarly, when you're looking at, you know, any kind of synergy between humans and AI, or creating human-AI ecosystems, there are multiple issues to be concerned about. One is: does it allow humans to grow in one form or the other? Or are we simply delegating certain kinds of tasks to it, and that's the sole purpose of designing such things? Meaning that you have to ask the question of mutual growth. Again, I don't know whether we are able to create any form of awareness or self-awareness in AI systems. But for humans, you know, that is a preoccupation; that is something that they have to keenly reflect on: questions of meaning, questions of purpose. And the issue becomes, you know, does AI help contribute to that? Will it help us become more self-aware in certain forms? Chances are that yes, we can create certain kinds of reflective algorithms; there are certain kinds of feedback loops that we can establish to do that. But that's what it means, you know: what is the functionality of that ecosystem, and how does it promote self-correction and aspects of mutual growth? We know, for example, that humans are highly emotional when it comes to choices, perception and decision making; perhaps certain forms of machine learning or AI systems can offer more objectivity in how we look at the world or how we make certain kinds of decisions, and that, again, is an ability in terms of self-correction. But my colleagues in Silicon Valley and elsewhere, you know, when they try to hypothesise the idea that, you know, such … delegating relationships in this ecosystem, where we delegate much of our work and so on to AI systems, will leave us with a lot of free time - I think that's a very presumptuous thing to say, because, again, humans, by and large, as a group, do not have a very good history of what they do with their free time. So I think there are a lot of presumptions built in that somehow we will end up with free time, and we'll do only good things, or we'll only do things that will be, you know, helpful in creating a more wholesome world. And I think this pandemic has given us some insights into that sort of experimentation in some ways.

KERRY MACKERETH (16:13):
Thank you, that's really interesting, and I like the way that you draw out, firstly, this idea that we might have a lot of free time as a result of these kinds of task distributions - and I feel like the emergence of other kinds of new technologies has really shown the way that new technologies also generate new forms of work, which can become more and more invasive in many ways - but also, secondly, whether or not we'll actually be able to do good things with that time is a very open question, if it even does happen. I wanted to change tack very slightly in thinking about the ways in which different values are embedded into AI and other kinds of new and emerging technologies, and I was wondering, do you think it's possible to embed a heterogeneity of cultural values into AI?

TENZIN PRIYADARSHI (16:55):
Is it possible? Yes. You know, is it easy? Probably not, because, again, remember that we humans ourselves are not very good with contextual decision making and contextual perceptions and things of that nature. So it requires, you know, embedding context; it requires a system to understand in what context what things might be useful, and, again, [to understand] that our ethical choices are not just rule-based. So I think, you know, one of the challenges is the variations in cultural understandings and social values that are not entirely rule-based. And even humans have difficulty in contextualising how they make choices and how they make decisions. So for an AI to account for all the context to be able to make a good decision is something that would be challenging. I don't think it's impossible, but it would be challenging. And again, we should not take a reductionist approach in trying to embed something of this nature into a machine learning system. So let me give you an example. When we work with automotive industries, you know, one of the scenarios that we present is: there's a self-driving car that is going at a certain velocity, and it is going to hit an obstacle. In order to avoid it, it has to take a left swerve or a right swerve. Now, on the right, there is a motorbike where the rider has a helmet on. On the left, there's a motorbike where the rider does not have a helmet on. Now, which way should the car swerve? If you give that scenario in many of the Western countries, they would say, oh, the car should swerve to the right, because the biker on the right has a helmet on, has extra layers of protection and so on. But if you give that example in Asian countries, they will say no, the car should swerve to the left, because the guy on the right is following the rule - he has the helmet on - but the guy on the left doesn't have a helmet on, so he's breaking the rule already. So why are you punishing the guy who's actually following the rules? So, you see, people make decisions, you know, very differently. And it becomes a challenge when you have to make quick decisions informed by your biases, informed by how you understand rules, informed by cultural norms, and so on. So I think perspective-taking and contextualising are two important modalities that we need to figure out, you know, how we can embed in machine learning and AI systems.

KERRY MACKERETH (19:41):
Fantastic, and I’m really interested in what you’ve said around the question of context and around how different forms of decision-making patterns need to be embedded into AI. So I almost want to loop back to something you said earlier about human-AI ecosystems and the ways in which the potentially more objective character of these machines might help mitigate or audit forms of human decision-making which are very emotional. But I wanted to press on that a bit harder and say, do you think machines can be objective or do you think they are inherently more objective in their decision making processes than humans are? 

TENZIN PRIYADARSHI (20:16):
I think they can be more objective in the sense that, you know, emotions are at times useful, and at times not so useful, you know, as we know, from even our day-to-day experiences of those things. You know, when we make certain kinds of decisions and choices in situations of duress, distress or anger, it leads to qualitatively different kinds of decisions. And chances are that an AI system will not be experiencing those things. And so yes, they will make much more objective decisions in those kinds of contexts. You know, but the reverse is also true that if, you know, humans make decisions based on kindness, altruism, compassion, and are able or willing to override certain kinds of rules, in order to aid other humans … I'm not entirely sure if AI systems would be able to do that either, meaning, you know, you start asking things like how do you embed values such as forgiveness in a criminal justice system, where we have examples of lawyers or judges or legal systems not always doing it well, but sort of deploying this idea of forgiveness, giving somebody a second chance and things of that nature, despite what the historical data about that individual might be, we are willing to forgive, we are willing to give a second chance. Those are, again, partly emotional responses. The issue becomes, you know, will AI systems be able to do that on the spot, based on the kind of, you know, again, algorithms that we're designing, in terms of efficiency and objectivity, and so on.

ELEANOR DRAGE (21:57):
We've talked about what Buddhism can do for technology, and at this point we usually ask how technology can shape Buddhism. We're not just thinking here about meditation apps - although they now exist in abundance - but also about the kinds of technologies that have been used for centuries in Buddhist practice, and we're defining technology in our work broadly as techne, as craft. How has Buddhism evolved through lots of different kinds of technologies?

TENZIN PRIYADARSHI (22:29):
You know, I think when the word technology was introduced into our vocabulary, we used to make a differentiation between inner technologies and outer technologies - meaning anything that is a study of techniques and disciplines over a long period of time. You know, Buddhism too started with a trial-and-error method, but it has had the benefit of 2,500 years or more to go through a process of elimination and validation, to understand, you know, what works best. I think, you know, there is a tendency where we are trying to augment certain things to facilitate practices, or learning around these things. That may be useful - you know, I am yet to see it completely - but, I mean, it's mostly useful in the context of a body of literature now being available at your fingertips rather than you going to a library and, you know, dealing with 10,000 manuscripts at a time. And perhaps it might give certain kinds of feedback mechanisms that allow you to reflect more deeply, or reflect with certain kinds of aids that might be helpful. But I'm yet to see those things.

KERRY MACKERETH (23:43):
Thank you so much for coming on our podcast. It was really so wonderful to hear you talk about your work and the perspectives you're bringing into the field of AI ethics; they're so valuable, and we really appreciate your time.

READING LIST:
Nilsson, Nils J. (2009) The Quest for Artificial Intelligence. Cambridge: Cambridge University Press, p. 404. Nilsson discusses how Edward Feigenbaum’s EPAM programme modelled the performance of humans, and “is still regarded as a major contribution both to theories of human intelligence and to AI research”.

Feigenbaum, Edward A. and Herbert A. Simon (1984) “EPAM-like Models of Recognition and Learning,” Cognitive Science, Vol. 8, No. 4, pp. 305–336.

Richman, Howard B., Herbert A. Simon, and Edward A. Feigenbaum (2002) “Simulations of Paired Associate Learning Using EPAM-VI,” Complex Information Processing Working Paper #553, Department of Psychology, Carnegie Mellon University, March 7, 2002. http://www.pahomeschoolers.com/epam/cip553.pdf