LØRN Case #C1167
The Challenge of Bias
In this episode of #LØRN, Silvija and Christian Fieseler talk to Elin Hauge, a professional speaker on AI. This is part of our series on artificial intelligence created with BI. They discuss what AI is and is not, why bias is a natural part of AI, and the challenges of AI going forward.

Elin Hauge

Public speaker

Elin Hauge

Christian Fieseler

Professor of Communication Management

BI

"Even though we talk about AI in the market all the time these days, I think a lot of leaders still struggle to understand what it really is."

Duration: 37 min


Topic: Digital ethics and politics
Organisation: Elin Hauge
Perspective: Large enterprise
Date: 220407
Location: OSLO
Host: Silvija Seres

This is what you will learn:


What is AI?

How does bias impact the use of AI?

What are the challenges with AI?

More learning:

Lena Lindgren's Ekko: et essay om algoritmer og begjær (an essay on algorithms and desire)


This is LØRN Cases

A LØRN CASE is a short and practical, light and fun innovation story. It is told in 30 minutes, is conversation-based, and works equally well as podcast, video or text. Listen and learn wherever it suits you best! We cover 15 thematic areas within technology, innovation and leadership, and 10 perspectives such as founder, researcher, etc. On this page you can listen, watch or read for free, but we recommend that you register, so that we can create personalised learning paths just for you.

We would like to help you get started and keep going with lifelong learning.



More Cases in the same topic

#C0061
Digital ethics and politics

Glenn Weyl

Professor

Princeton

#C0147
Digital ethics and politics

Hans Olav H Eriksen

CEO

Lyngsfjorden

#C0175
Digital ethics and politics

Hilde Aspås

CEO

NCE iKuben

Finn Amundsen

CEO

ProtoMore

Transcript of the conversation: The Challenge of Bias

Welcome to Lørn.Tech - a learning initiative about technology and society. With Silvija Seres and friends.

 

Silvija Seres: Hello and welcome to a case by LØRN and BI Norwegian Business School. This is part of the series we are creating together to explore the main challenges and opportunities in creating good AI, as part of BI's course Responsible AI Leadership. My co-host, as in the rest of the series, is Christian Fieseler, who is a professor of communication management at BI, and our guest is Elin Hauge, who is a strategic business adviser, professional speaker and startup mentor. The topic today will be the challenge of bias in responsible AI. As in the other discussions in this series, I will first ask the two of you to briefly introduce yourselves, and then I would like to ask Christian to introduce the topic: why are we talking about this, and why are we talking with Elin? But first, Christian, who are you?

 

Christian Fieseler: Yeah, wonderful. Thank you, Silvija. So my last name is German. 

 

Silvija: And Elin. Who are you?

 

Elin Hauge: I'm a Norwegian medical physicist and operational researcher by education, from a couple of decades ago, not very long ago. I have worked about half my career in the insurance industry and the other half in consulting and the IT industry. Lately, I've worked a lot in the startup ecosystem in Europe, not just in Norway. And I also work as a professional speaker, both in Norway and internationally.

 

Silvija: Very, very good. So, Christian, what are we going to talk about today, and why did you want to invite Elin to be our guest lecturer?

 

Christian: Many different reasons. First of all, Elin has years of experience wearing many different hats, so many different experiences. Essentially, she is someone who can take very complex subject matter, such as AI. Describing what AI is, especially in its newer iterations like machine learning and deep learning, can be somewhat challenging, and you are really good at breaking it down, to explain to people why it matters, how it works, and what the different elements are that go into that challenge. We talked previously, and you had a really wonderful way of explaining to me how bias, for instance, works in machine learning and AI, and why that is not always a bad thing, why it is sometimes a thing that just is. I think in order for us to understand what responsible AI is, we also need to talk about the notion of bias: whether we want to avoid it, whether we can always avoid it, and how we deal with the idea that the faults in our data are not really faults in the data, but that artificial systems like artificial intelligence are often a mirror of our often not-so-perfect world. And I found it really interesting in our conversations that you were able to break that down for people like me to understand.

 

Elin: So as a speaker, I very often need to first explain to leaders: what is AI about? Because even though we talk about AI in the market all the time these days, I think a lot of leaders still struggle to understand what it really is. So that's where I typically start. And the way I try to explain it is that AI is about taking mathematical recipes and applying these recipes to the data that we have in our systems, data we have maybe generated over five, ten or twenty years. And then out comes a model. Now, the purpose of doing this is to replicate human decision making, and the model then helps us to predict potential future outcomes or behaviours based on the data we have. So basically, the algorithms look for patterns in the data that can be used to predict future outcomes. And it is as simple as that, and as complicated as that. We can come back to the complications a bit later.
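To make the idea of a mathematical recipe applied to historical data concrete, here is a minimal sketch, assuming Python with scikit-learn; the data is entirely synthetic and the feature values and numbers are made up purely for illustration.

```python
# A minimal sketch of "mathematical recipe + historical data -> predictive model".
# All data here is synthetic; the features and numbers are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Pretend these are years of recorded human decisions:
# two features per case, and a yes/no outcome decided by people in the past.
X_history = rng.normal(size=(1000, 2))
y_history = (X_history[:, 0] + 0.5 * X_history[:, 1]
             + rng.normal(scale=0.5, size=1000) > 0).astype(int)

# The "recipe": fit a model that finds patterns in the historical decisions.
model = LogisticRegression().fit(X_history, y_history)

# The model now predicts new cases the way the past data would have decided them.
new_case = np.array([[0.3, -1.2]])
print(model.predict_proba(new_case))  # probability of each outcome for the new case
```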

 

Silvija: As an AI person myself as well, I want to expand on what you just said, because I think a really important part of AI now is that it not only replicates human decision making, but actually is able to surpass it. Sometimes it is able to see patterns in the data that the human couldn't see, and it also has now access to enormous amounts of data. So some of these data are gathered by physical sensors and are physical and therefore objective. And some of the data are based on past human decision making and behavior. And I guess that's where the worm gets into paradise.

 

Elin: Yes. And that's where the complications come in, because, as you say, these algorithms can churn through much more data and find details that the human brain cannot. That's why we use these algorithms for what I would call replication of human observation skills, because it's about replicating our ability to see and, as you say, surpassing our ability to see, because the algorithms can go into finer details than the human eye can. We use them to find patterns in language and to generate language, and again, the algorithms are better than humans at going through large volumes of words, and across languages, and also through enormous amounts of structured data. The algorithms can churn through much larger amounts of data in a shorter time and again surpass human decision making. But still, it is about replicating how the human brain works in reasoning and making decisions. So it is not magic.

 

Silvija: I just want to give a couple of examples here, to give people some pictures to imagine. One of the examples is Google's conference a couple of years ago, where they started showing off the DeepMind products that they now have. There was a recording of a conversation between a computer making an appointment with a hairdresser and a real hairdresser on the other side accepting this appointment. And the thing that made the whole world stop and take a deep breath is that the human hairdresser, like the whole of the audience, did not understand that this was a computer making a booking. The interesting thing is that in order to make it sound human, they made it a bit messy, and then it was really impossible to tell that it was not a human. So it's getting really good at copying humans, or as you said, replicating human behaviour. And then the other example I wanted to give is the negative side of that.

 

Silvija: And that is where, I guess, the questions you and Christian raised in the introduction come in: why is bias there, why is bias unavoidable, and why must we learn to live with it, but in a good way? For example, there have been attempts to make AI-driven judges that would hand out sentences to petty offenders very quickly. They were trained on past offender data and tried to calculate the likelihood that somebody would commit a new crime once they got out. They were much harder on young men of colour than on others, so we saw it wasn't fair; they were judging them harder. Similarly, Google tried a project with hiring: they wanted to use past hiring data and professional experience data to see which people they should employ who had the biggest potential of doing great at Google. And surprisingly, it was young white men that seemed to be doing best, and of course they looked the most hirable. But that was based on past behaviour, which isn't always fair or neutral.

 

Elin: So, back to why I wanted to explain how these algorithms work on data, because that is the core. We train algorithms on the data that we have collected through many years, and these data are basically just documentation of human behaviour, whether it is a judiciary system, hiring, sales data, school entrance records of some kind, or medical treatments. It is only documentation of human behaviour. And if we were to assume that those data were fair, unbiased and of high quality, we would be assuming that humans are rational decision makers all the time. We are not; only a very small fraction of the decisions we as humans make every day are in fact rational. Which means that all our not-so-rational decisions are also documented in the data that we use to train these algorithms. And that's where we find ourselves looking into a huge mirror: when we come to situations where bias becomes a topic, we are actually looking at a replication of our own human behaviour, which isn't always all that nice. We still have some of those stone-age behaviours with us. We tend to take care of our own tribe first. We don't like the other ones, because they might be dangerous. We make sure that we feed our own first, and if we can get power, we will try to get that power. All these stone-age behaviours are somehow still with us; we haven't developed that much over the last 100,000 years or so. Then we are training these algorithms to replicate human behaviour, and we end up replicating human stupidity as well.

 

Silvija: Not only replicate it. The problem with AI, I guess, is that it can magnify it and scale it up big time.

 

Elin: We are scaling human thoughtlessness actually. Yes.

 

Silvija: But why is it unavoidable? Or is bias ever good? Or is it more a matter of learning to live with it and then learning to keep correcting for it?

 

Elin: Well, so bias only means a preference. So in any given dataset, there will always be a bias just because any data set is only a subset of the entire world, which means there will always be a bias. But bias in itself isn't positive or negative. It just is. It is a preference within that dataset. But the question we need to ask ourselves is what kind of bias is there? Is it acceptable and how much bias is there? Can we live with it? And if it's not acceptable and we can't live with it, then we need to do something. And this is also where the coming AI Act from the European Commission is very important and the foundation of that act is to protect the fundamental rights of the individual, which means that we are actually stating in that act that we cannot accept bias on gender, ethnicity and other very sensitive personal parameters. So we have then in law stated we cannot accept that kind of bias, but there will always be other types of bias. And sometimes it could be, for example, in the manufacturing process, a bias towards small screws versus large screws. And does it matter? Well, maybe not. It's just a consequence of the data you have.
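As a concrete illustration of bias as a preference within a dataset, here is a small sketch in Python; the column names, groups and numbers are hypothetical and chosen only to show how one might measure such a preference in historical data.

```python
# "Bias is a preference in the data": measure how often each group received a
# positive outcome in a historical dataset. All values below are hypothetical.
import pandas as pd

history = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B", "B", "A"],
    "hired": [ 1,   1,   0,   0,   0,   1,   0,   1 ],
})

# Selection rate per group: a large gap is a bias we may or may not accept.
rates = history.groupby("group")["hired"].mean()
print(rates)
print("gap between groups:", rates.max() - rates.min())
```

Whether the gap this kind of check reveals is acceptable is exactly the judgment Elin describes: the number itself is neutral; the decision about it is human.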

 

Silvija: I will leave the floor to Christian in a second, but I just want to pull this into another example; it's good to give people these images so that they can grapple with the dilemmas themselves. I was sitting and thinking of the driving ethics of a self-driving car. There is an algorithm in there, and the algorithm makes that car drive as safely as possible. And then we have many of these old ethical dilemmas in AI, the trolley problem being one of them. So let's say a kid jumps in front of the car: should the car hit the kid, with a 60% chance it might die, or should the car swerve onto a track next to you and kill you? All of this has to be built into that algorithm somehow. And I guess these decisions are also not always global. In Norway, we protect the pedestrians, the weak, the soft traffic participants; in Saudi Arabia, there might not be the same preference for who you prioritise in traffic. So I guess many of these things also show that AI can't always be global.

 

Elin: No. And if you extend that analogy a bit further and take more parameters into it, you see that it becomes impossible to think of it in a global perspective. So, for example, if you were to develop a self driving algorithm for the Indian market, you would train it to avoid cows.

 

Silvija: Right.

 

Elin: But if you take that to the Norwegian market, the AI could very easily mistake a moose for a cow. So it will end up avoiding the moose and hitting a tree, or killing me, and that makes no sense. And I think if you dig too much into those discussions, you end up thinking we will never have autonomous cars, because how are we going to solve this?

 

Silvija: But there is a super important management perspective in what you just said, and it is that we can't leave all the AI development to global companies. Some of this needs to be managed and regulated at a national or local level, and it will have to be related to our values and our religion and our culture in some way in the future, right?

 

Elin: We might even find ourselves in a situation where I choose a car based on what kind of algorithm it has. Is this algorithm trained to kill me or the moose, just to give a kind of absurd example? But if it is transparent what the algorithm will choose, then I, as a user of the car, a buyer of the car, will also have access to that information, which means my decision might actually be impacted by whatever the algorithm is trained to do. So yes, I agree we end up with more local decisions. But at the same time, that means that the leaders, the decision makers in the local market, need to understand why we need to have this discussion. They can't just say, well, somebody else will have to solve it. No, you need to understand why we have this discussion, because you will eventually, potentially, also be responsible for the outcome. You can't leave it to the machine.

 

Silvija: And the citizens, the voters, might actually demand to see: how have you actively formed these tools that will be forming our society? Right.

 

Elin: Yes. And with the coming act again, there will be a very strict demand for transparency of the algorithms with this level of importance for our life and well-being. Right. So there will have to be transparency.

 

Silvija: Christian, you had the point about computational complexity versus precision. What was that about?

 

Christian: Good question. And I think I have to break it down, simply because the idea is somewhat complex. What I found really interesting in the discussion you were having was the point of complexity in a very general sense. When we talk about these types of complex systems and adaptation to them, it reminds me a little bit of the idea of the butterfly effect: a very small action can have a very large outcome. What I find interesting is that when we talk about transparency, having open discussions and giving the consumer the choice, we are also introducing another layer of complexity, where essentially people play the algorithm. This is why we are very reluctant, for instance, to explain clearly how a credit scoring algorithm works, or how the algorithms in social media work. If you are an aspiring Instagram creator, you would be very eager to learn how you get first placement, or any type of visibility, in the content-serving algorithm: who gets to be seen first. So whenever people have incentives, whether it is not getting harmed by their car or getting something by being the first one to be recommended, this layer of people playing the algorithm also comes into play. So it's an interesting discussion: how much transparency can you really have before human nature essentially kicks in and people start to play with that? That is a really interesting aspect of the complexity you point out, and I think the solution most likely cannot be one global system which is adapted every other year or so. I guess it needs some type of collaborative, open systems which are very much open for quicker adaptation at local levels, to deal with these complexities. And now to the question about complexity versus precision, or this idea of where you want to make the trade-off: having the best possible algorithm, the one which serves the best possible content, makes the best decision, and is free from error. I think that is a very interesting discussion too. You are correct to say that our world is biased, but it is also biased for the very simple reason that people have certain preferences, that people are used to something. And my question then essentially boils down to: how okay are we with algorithms replicating current human preferences, such as that people who like this type of content prefer to have male employees at a technology company? Do we really want to fiddle with the algorithm? Is that a good thing to do or not?

 

Elin: Good question. Now, with the rise of the metaverse, I've asked myself the question several times: how far into this digital landscape do we really want to go? Is all digitalisation good for us? Should we want it? On the other hand, a lot of people have asked themselves those kinds of questions over many hundreds of years, so maybe I should just stop being so negative. But I still think we need to have this discussion with ourselves every now and then: how much do we really need or want to leave to algorithms and digital solutions? Where do we lose the humans in this?

 

Silvija: I want to answer that question as well. What I was thinking is, first of all, that it's important that we have an active, moral review of how we are participating in this new digital hybrid life. But I think that we can't simply decide yes or no. It's a little bit like gravity: you don't like gravity? Well, too bad, you live with it. You don't like digitalisation? Well, too bad, it's coming, and AI is coming. So it's very much about learning to work with this new level of tech and learning to live with it. And I think the complex moral landscape that you and Christian are painting here is super important. We can't just passively wander into this metaverse and hope that somebody figures out a new, better future for us; we have to be asking those questions. Christian and I also had a wonderful discussion, which will be coming in one of the later sessions, with a very interesting lawyer, who was telling us how regulation needs to be done more at the level of principles rather than concrete rules for exactly how the self-driving car should behave. If it's wildlife, you as a country should be able to decide, maybe on a slider, what priority you would give that versus a duck on the road versus a human on the road. There should be some sort of higher-level principles. And I think this is very important, because the attitudes behind these principles will be changing; we can't rewrite the AI at a detailed level every second year, and the data will be what it was. Take the whole MeToo development of the last five years: what is absolutely a no-go today was treated differently ten years ago, so people wouldn't have been creating the same data. So I think that's why it's super important to have some sort of parametric approach to AI, because the world is changing and we need to be adjusting to it, right?

 

Elin: Yes. And if I can continue on that reasoning, I very much agree with you that we sometimes need to say: well, maybe not in this direction. Because I think there are so many use cases where we really need these new technologies for the future of our planet, the future of our societies, or our businesses, and then there are quite a few use cases where we should maybe rethink: do we really need to apply these technologies here? And then I want to bring in one more perspective, and that is that we tend to think that anything digital is sustainable, right? Because it doesn't leave any footprints; it just floats around, and it's wireless, and there's data we can't see. But that is not correct. Anything digital has a very physical footprint, and we need to think more in terms of computational efficiency: what is the footprint of the computations I need to run my AI for this use case? And we need to take that computational efficiency into the ESG reporting and into the business cases. I can give you a tiny startup example that I think is kind of funny, but to me it explains where maybe we should say: this is too much decadence, we don't need this. It was a startup that had come up with the idea of a hydroponic flower vase. It was just a vase for one plant, a herb or a salad or something like that. And then they had added a chip to it, or maybe more; anyway, it was a digital solution in the pot to monitor the health of the plant, so that you could get the status of the plant on your mobile phone. To me, this is an example of what we do not need. We do not need an app with a data chip to monitor a plant in a pot. It has nothing to do with the future of food production. It has nothing to do with sustainability. It is only a design item, a cool gadget, but it still has a very physical footprint because of the data chip, and mind you, the data chip, meaning the semiconductor industry, is really dirty. So it definitely has an environmental footprint. And then we're using data to monitor and predict the future life of this plant, but it doesn't really serve any purpose other than being a cool gadget. On the other hand, we have all these examples within renewable energy, health technologies and food production where we really need these new technologies for a sustainable future.
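To show what taking computational efficiency into an ESG estimate could look like in practice, here is a back-of-the-envelope sketch; every figure in it is a hypothetical placeholder, not a measurement from any real system.

```python
# A rough sketch of putting compute into an ESG-style estimate.
# All numbers below are hypothetical placeholders, not measurements.
gpu_power_kw = 0.4          # assumed average draw of one accelerator while running
training_hours = 200        # assumed compute time for the use case
pue = 1.4                   # assumed data-centre overhead factor
grid_kg_co2_per_kwh = 0.3   # assumed emission intensity of the local grid

energy_kwh = gpu_power_kw * training_hours * pue
emissions_kg = energy_kwh * grid_kg_co2_per_kwh
print(f"{energy_kwh:.0f} kWh, about {emissions_kg:.0f} kg CO2e for this workload")
```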

 

Silvija: I won't even get started on blockchain and its environmental impact. But going back to Christian and his computational complexity, and his point that these computers can calculate so much: with all these many layers in deep learning and so on, the models get more and more precise, but it comes at quite a big cost, as he was saying, both computationally and precision-wise. There are errors, there are mistakes. Sometimes these learning algorithms actually find patterns that are not there; I think those are called phantom patterns. I guess it's as Christian was saying: perfect precision doesn't exist, right? And we need to learn to live with that.

 

Elin: Yes. Algorithms just make assumptions based on a pattern; it is not a factual cause-and-effect answer. So there will always be errors, just like with humans. Humans make mistakes too, and we accept that. So we also need to accept that the algorithms trained to replicate and surpass humans also make mistakes, because they won't be better than the data we have given them to train on. Right.
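As an illustration of how such phantom patterns can arise, here is a small sketch; it simply generates random noise and shows that, with enough variables, some of them will appear correlated with an outcome purely by chance (the sizes and seed are arbitrary).

```python
# A toy demonstration of "phantom patterns": with enough unrelated variables,
# some will correlate with the outcome purely by chance.
import numpy as np

rng = np.random.default_rng(1)
outcome = rng.normal(size=50)                 # 50 observations of some outcome
noise_features = rng.normal(size=(1000, 50))  # 1000 variables of pure noise

# Correlation of each noise variable with the outcome.
correlations = np.array([np.corrcoef(f, outcome)[0, 1] for f in noise_features])
print("strongest spurious correlation:", np.abs(correlations).max())
# Typically well above 0.4, even though every variable is random noise.
```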

 

Silvija: So, what I'm bringing with me as the biggest impression from our conversation, and then I'd like Christian to do the same afterwards: I'm thinking it's been a wonderful exploration of bias in data, where data is created by past human behaviour and AI then replicates our decision making, sometimes putting a magnifying glass on our unfair preferences from the past. So rather than calling the algorithms and the data faulty, perhaps we as humans need to take our human responsibility to keep adjusting and improving, and also to remember that even with all the amazing computational power and computational complexity that Christian was talking about, there will always be a need for a human in the loop. As much as we create the faulty data and the algorithms do their job as well as they can, we are the last link that needs to make a decision, and we are the only ones that should be exercising ethics and morality in this future that is algorithm-driven and data-driven. That's basically where I'm thinking managers, but also all citizens and all consumers, need to remember that AI doesn't take away that responsibility from them. It can't think for them; it can't be moral instead of them. They need to find a way to exercise that responsibility themselves.

 

Silvija: What do you think, Christian?

 

Christian: Everything you said. In addition, what I took from your ideas, Elin, is also this idea of thinking in layers. It sometimes seems a little bit daunting to talk about bias and say hiring is biased, or our self-driving vehicles might have some sort of bias down the road. But what you explained really well to us, I think, is the idea that we can break problems of bias down; they have many different components. So the idea of unfairness in hiring might need to be broken down into global systems, regional systems, local systems; it might have adaptations for different industries and so on. Essentially, we break a problem or challenge of bias down into its different constituent parts and then think about where we need to tweak at our specific layer. A related idea which you also raised is the question of whether we need computation, and by extension this potential for error, for this particular sub-problem on our layer at all. Or is there anywhere we can, for instance, do better with some type of human oversight, or with a system which maybe has less precision but also less potential for harmful outcomes? So this idea of layers, and of thinking about trade-offs and decisions of whether we really need this right now, given all the external circumstances, is something I took from your explanations.

 

Elin: If I can give one last comment, I have a wish for leaders: stop telling each other that AI is a black box that we don't understand, because that is not true. We understand the mathematics behind it very well, and we have understood it for about 100 years, or at least 80. We really need to take responsibility for the fact that these algorithms train on the data we have given them. So we as humans are responsible for the data, the data quality, and understanding the data. Only when we say yes, I accept that it is not a black box, I accept that there is a mathematical recipe, and I accept that I have a responsibility for those data, only then can I, as a leader, really understand my responsibility for the decision making around how we apply these algorithms as well.

 

Silvija: I think that's a great conclusion. As a very faulty human, I pressed the wrong button just now, so I couldn't find you on my screen. Sorry. You have a quote that I really like, and I'd like to throw it in at the very end of our conversation. We asked you, basically: what is your greatest worry? And you said something like: that greed will surpass responsibility. I think that's a super important thought when we are creating systems that are bigger than our markets, bigger than our financial mechanisms of self-regulation. So what do you think, Elin, will we use AI for good?

 

Elin: We will use AI for good. But we will also, as humanity as a whole, use AI for bad. It's just the way humans are. If we had all been good, fair and rational, we wouldn't have a war in Europe right now. I think that is just human nature.

 

Silvija: Don't blame it on the tool. Blame it on the user is what you're saying.

 

Elin: Yes, AI is just maths applied on data. You can't blame the mathematical recipes. They just do what we have told them to do and we have trained them on the data. We as humans need to take the responsibility for what we train them on and how we apply these algorithms, which can be really powerful tools for good and for bad.

 

Silvija: Thank you both very much for a very inspiring and educational chat.

 

Christian: Thanks so much.

 

You have now listened to a podcast from Lørn.Tech - a learning initiative about technology and society. You can also get a learning certificate for having listened to this podcast at our online university, lorn.university.

 
