In this talk, UWCSEA parent Dr Ayesha Khanna speaks about the importance of communication and transparency, and the problem of bias, in the design and application of artificially intelligent tools. She also explores the role of risk in planning the use of AI from a governance point of view, and what it means to place digital justice at the centre of our thinking.
Watch this session if you are interested in learning about the ethics of using AI in what we do.
Webinar recording
Transcript
Dr. Ayesha Khanna: Hello, everyone. It's such a pleasure to speak to you today about the importance of artificial intelligence and ethics. So over the next 25 minutes, I'm going to walk you through some examples to illustrate how important it is to think about an ethical framework when we build AI-powered products and services. So let's start with the first one, which is the importance of communication.
I don't know if you remember, but during the pandemic there was a dog, a robot dog, that was deployed in many of the parks in Singapore. This dog was bought by the Singapore government from Boston Dynamics, which made it, and brought into our parks. And whenever people were standing too close to each other, it would use its computer vision and it would tell people, "Please, sir, madam, you're less than one metre apart." Now, this was covered in all the media, and in all the communication there were children around it. People were taking pictures. Nobody seemed disturbed by it.
But something very different happened in New York City with the exact same dog. The robot dog was brought in by the New York Police Department, and the NYPD was going to use it to help defuse bombs, which would have been great, because you could have saved human lives and the robot could have gone in and taken on the risk of checking whether an explosive device was dangerous or not. But this was never communicated properly to the public. Instead of accepting it and thinking of it as a safety measure, the public thought this dog, this robot dog, had a gun and was going to accompany the police on raids. There was a huge uproar and the NYPD actually had to stop using the dog.
So what is the most important thing? There are several important things that I speak about, but communication and the buy-in of citizens is very important. If you're going to use data, you're going to use AI, you're going to use robots, then you must communicate why you're doing it, how it works, and where the data has been collected. And it is your responsibility to communicate that to the public, whether as a government, as an entrepreneur or employee who makes such AI-powered products, or even as a student experimenting with them.
Now, there are times when we encounter AI but we don't even know that it's AI. In fact, we see chatbots all the time. These chatbots are on websites, and let's be honest, they're a little bit annoying because they don't understand what we're saying. But that's about to drastically change. Natural language processing, the ability for machines to understand what humans are saying, no matter which language, increasingly not just English, is going to become exponentially better in the next five years. It's the most exciting area in AI, and soon everything from your fridge to your phone to your car will understand what you're saying and be able to follow your instructions. But that also means that you may encounter an agent that you're speaking to on the phone, or chatting with online, or meet in a virtual environment. Maybe you even end up getting into a relationship with an AI agent that sounds very human. And my point of contention is not whether it's a problem to have a relationship with an AI or not.
That's something different, to be discussed separately. The problem is when the AI does not declare itself an AI and pretends to be human. This is not ethical. If you are going to build machines, then you want each machine to make very clear that it is not human. That means if it's a customer service agent, it would say, "Hello, I'm Ayesha, and I am an AI agent," or "Hello, I am Robert, and I am an AI football player in the metaverse." It always has to declare itself. This transparency is incredibly important, and these are encounters that we'll have more and more as artificial intelligence becomes prevalent in our lives. Now, when you are making the artificial intelligence, and not just on the receiving end of it, it's very important that it isn't biased in any way.
So we talked about the importance of communication, and of transparency in terms of it being an AI, not a human. Now I want to talk about the way it behaves once it has declared itself, because sometimes what it does is unfair to the very people it's supposed to serve. I'll give you an example. Amazon decided at some point that it wanted an AI that would help human resources recruit individuals for engineering jobs, and that seems like a fine idea. It would save time. The human recruiters could spend more time trying to understand the strategy behind why certain roles needed to be filled, and spend more time with a candidate. But that first step of filtering the resumes was left to the AI. The AI then just started looking at all the data and the history of the company.
The data had bias in it. And why did it have bias in it? Because the male human resources managers had a biased approach and had preferred men over women for engineering jobs. So it is very important for anyone building an AI to pay attention to the data. And in order to do that, there are very clear processes that you can follow. You can check the data using fairness metrics that show you whether it is neglecting a particular minority or a certain class of people. It's the same reason why women or minorities are often flagged by facial recognition systems as suspicious, even criminal: the system doesn't recognise them, because the data it was trained on came primarily from white male populations. Google and others are now trying to rectify this problem, now that they've become aware of it. So you have to look out for bias; otherwise, the system will be very unfair to certain parts of the population, which is unethical. Now, the next thing that you may think is, well, I've checked for bias and it's completely fine.
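To make the idea of a fairness check concrete, here is a minimal sketch in Python of one common fairness metric, demographic parity, which simply compares selection rates across groups. The dataset, column names and 80% threshold are hypothetical illustrations, not details from the talk.

```python
import pandas as pd

# Hypothetical resume-screening results: one row per candidate,
# with a group attribute and the model's yes/no decision.
data = pd.DataFrame({
    "gender":   ["F", "M", "F", "M", "M", "F", "M", "M"],
    "selected": [0,   1,   0,   1,   1,   1,   0,   1],
})

# Demographic parity: compare selection rates across groups.
selection_rates = data.groupby("gender")["selected"].mean()
print(selection_rates)  # F: 0.33, M: 0.80 in this made-up sample

# A common rule of thumb (the "four-fifths rule"): flag the model
# if one group's selection rate is below 80% of the highest rate.
ratio = selection_rates.min() / selection_rates.max()
if ratio < 0.8:
    print(f"possible bias: selection-rate ratio is {ratio:.2f}")
```

Real audits go further, of course, but even a simple check like this would have surfaced the skew in the Amazon example.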
Now you have to put on your critical thinking hat, something all of our kids are taught a lot about at UWC but that sometimes, as adults, we forget. You need to think about the unintended consequences of artificial intelligence. So, for example, a large number of the population of the world is under-banked or unbanked. That means they don't have access to a bank account. They can't get a mortgage. They can't get a loan to send their kids to college. They can't even afford health insurance. And because they are often uneducated, or don't have a record of financial transactions or a bank account, the system punishes them by not giving them loans, loans without which they cannot start their companies, for example.
So there is a new area of providing loans to people based on an assessment of their credit risk, their risk of default, using signals other than their education status, for example. And that's perfectly fine. From her phone, you can see a woman who wants to open a beauty salon. She doesn't have an education, but you see that she always pays her bills on time. You see that she's been taking beauty salon courses. You see that she's being very judicious and responsible and organised on her phone. Even the fact that she charges her battery on time is a sign of an organised person, and it can be used as a criterion to start giving small loans to that woman. All well and good. It provides a great way for the under-banked of the world, hundreds of millions of people, to get loans.
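As a rough sketch of how such an alternative credit score might work, here is a minimal Python example that combines behavioural signals into a weighted score. The signals, weights and cutoff are all made-up assumptions for illustration; real lenders use far more data and statistically fitted models.

```python
# Hypothetical behavioural signals for an unbanked applicant,
# of the kind described in the talk (all values are made up).
signals = {
    "pays_bills_on_time": 1.0,         # 1.0 = always, 0.0 = never
    "completed_courses": 0.8,          # e.g. beauty salon courses
    "charges_battery_regularly": 0.9,  # proxy for being organised
}

# Hypothetical weights expressing how much each signal matters.
weights = {
    "pays_bills_on_time": 0.5,
    "completed_courses": 0.3,
    "charges_battery_regularly": 0.2,
}

# Weighted score between 0 and 1; approve a small loan above a cutoff.
score = sum(weights[k] * signals[k] for k in weights)
print(f"credit score: {score:.2f}")
if score >= 0.7:
    print("eligible for a small starter loan")
```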
But here's the thing. The next step that they thought about was, well, why don't we just give them loans for everything? Not just for a fridge, which they need to keep their food fresh, or for a course that they want to take. She wants to buy jewellery? Let's give her a loan. She wants to buy a car? Let's give her a loan. And in fact, we know that there are certain things she really wants to buy, and we know she has a weakness for expensive makeup, so let's just start by offering more of that. And then we earn that little bit of interest. That is buy now, pay later. Some companies are being very ethical and responsible about it. But some are just capitalising on the fact that you can keep giving loans to people.
And at some point, you have to think about the unintended consequence of what may have been born out of a genuine desire to give people loans: they become increasingly indebted. And if they have a weakness for spending beyond what would be considered financially healthy, then you are actually causing them financial distress in the long run. So this is the next evolution, where you think about the unintended consequences of some of the things that you may be putting into place. The best way, when you're building a product, especially an AI product, is to systematically sit with your team and think about all the intended and unintended consequences. Really do scenario analysis. What could go wrong? Why do all this questioning?
Because this is new. Because it's our responsibility to be ethical and to think about these things. We traditionally have not thought about them much, because we don't realise that, unlike traditional software, AI can nudge and manipulate human emotions. So these things come under the bigger umbrella of governance.
So a lot of what I've talked about, bias, algorithmic manipulation, unintended consequences, transparency, communication, these should not be ad hoc. They need to be a systematic set of policies that as citizens you demand of your private companies, as governments you regulate, and as responsible entrepreneurs, intrapreneurs, employees and employers you make sure are set in the course of building an AI product. You make sure of them from the moment an idea springs up, to the point that the data is collected, to the point that it is saved and checked for bias, to the point that the algorithms are made, and then a conversation happens on whether they are manipulative or not. And remember, the law will not stop you unless you're actually being manipulative. But sometimes you will stop yourself, because even though it's not against the law, you see the long-term unintended consequences and you just don't feel comfortable with it. It's important to trust that instinct and feeling. Don't think the machine is so smart that it will do everything right. It could optimise for the wrong benefit, and in the long run it could hurt you and hurt your customers.
So one of the things that's very important is the privacy of the information that the AI is getting. I talked about governance and how to make sure that it doesn't do harm. But one of the things you have to be transparent about, and communicate to your customers and citizens, is what you are doing with their data. Where are you putting it? Where are you storing it? Are you selling it to someone? Will it live forever in your database? By the way, nobody was asking these questions until the European Union really put a stake in the ground and said that you have a right to be forgotten. That means if I did something rash in my twenties, I didn't hurt anyone, but I'm embarrassed by it, why should it live forever in Google? I should have the right to have it deleted, so that as an adult it doesn't affect my chances of getting a job. There was nothing like this before. I have a right to ask where my data is stored. And that question is very difficult for companies to answer, because they create all of these algorithms.
And my personal data may be in many different places. But California has a new law called the CCPA, where I could go to any company and say, where is my data kept? Tell me exactly where it's kept. And if the company doesn't do that, the government will fine it. So these are the ways in which regulation and corporate governance come together with civic activism. And civic activism is very important. I'll give you an example. In California, the judicial system was using AI to help decide which prisoners should get parole. The prisoner would come in front of the judge, and the judge would have an AI assistant, a programme that would make a recommendation on whether to give the person parole, with some assessment of whether they would be a danger to society and ready for parole or not. An activist discovered that it was consistently recommending against parole for African-Americans. So there was bias in it, and nobody realised it. But civic activists did.
So you and I have a responsibility. That's what I'm talking about; it's not just for companies. We have to be aware, because tomorrow it could be one of us that is affected by unethical practices or carelessness, and to be careless is to be unethical, given the power of AI to impact a person's life. That's why the European Commission's latest rules say that you need a risk-based framework for AI. If it's really going to affect your well-being, like facial recognition systems, then the private sector cannot use it. If it's going to make a decision on your mortgage or your student loan or your health insurance, that's high risk, which means governments should audit you, and if you don't have proper governance they can fine you up to 6% of annual revenue, which is significant and a deterrent to misbehaviour. But if I'm on a site and it's recommending a book to me, that's low risk, and so they're not going to audit it as much. And I like that. I'm a techie, I'm an optimist, but I am a big fan of governance. Unless we mitigate the risks, how can we really continue to use AI more and more and amplify its benefits?
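To illustrate the shape of such a risk-based framework, here is a minimal sketch in Python that maps example use cases to risk tiers, loosely following the examples in the talk. The tier names, mappings and obligations are simplified assumptions for illustration, not the actual text of the European rules.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"  # e.g. certain facial recognition uses
    HIGH = "audited; fines up to 6% of annual revenue for poor governance"
    LOW = "light-touch oversight"

# Hypothetical mapping of use cases to tiers, echoing the talk's examples;
# the real regulation is far more detailed and nuanced.
USE_CASE_TIERS = {
    "facial recognition in public spaces": RiskTier.UNACCEPTABLE,
    "mortgage approval": RiskTier.HIGH,
    "student loan decision": RiskTier.HIGH,
    "health insurance pricing": RiskTier.HIGH,
    "book recommendation": RiskTier.LOW,
}

def obligations(use_case: str) -> str:
    """Return the (simplified) regulatory treatment for a use case."""
    tier = USE_CASE_TIERS.get(use_case, RiskTier.LOW)
    return f"{use_case}: {tier.name} risk -> {tier.value}"

for case in USE_CASE_TIERS:
    print(obligations(case))
```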
And lastly comes the big question of digital justice. Digital justice means something happens to you in the digital world. It applies to bullying and other harms by humans, but it also applies to mistakes that an AI made. Maybe an AI bullied you in the metaverse, or punched you. There has been, unfortunately, harassment in certain digital environments. It could come from a bot. It could come from a human. When you are a digital victim, what's your recourse? What does the law say? Where do you go?
That's a question I worked on in a paper with the World Economic Forum, and I think it's really an emerging area to consider, because we need a proper system of justice when artificial intelligence is becoming so ubiquitous, everywhere from the cars we drive to our mobile phones to, potentially, the colleagues that we have.
In China, the Employee of the Year at one of the largest real estate companies was an AI bot, because she was so effective at speaking to customers who were delaying their payments, encouraging them and convincing them to make their payments on time. So don't be surprised if your colleagues and your assistants are robots or AI agents, maybe even some of your children's friends, maybe even your friends. But when they are wrong, who is responsible? Who owns them? There should be clear lines of accountability. Who wrote the code? Which company created that software? It is that company's job to implement a governance framework in line with the ethics and values of the society in which it is operating, and to suffer the consequences if it does not abide by those ethical regulations.
Which brings me to my final point: AI ethics is no doubt important, but where does it start? Where does this self-awareness start? Well, it starts right here at UWC, at school. It starts right here in grades 1, 2 and 3, all the way up to the International Baccalaureate and beyond, where we teach students and children to live in a world of technology, to appreciate it, to use it, to be imaginative and creative, but also to be critical thinkers: to constantly evaluate the pros and cons, to put humans first and sustainability first, and to take all of the values that they learn in school right back into how they design their processes, to create ethical AI for everyone.
So these are just some of the things that I consider important. I'm sure you have some things on your mind as well, and I hope one day we can meet in person to have a chat about them. Thank you so much for having me here today. Bye.