LØRN Case #C1169
Autonomy in transportation
In this episode of #LØRN, Silvija and Christian Fieseler talk to Loek Vredenberg, a seasoned professional working with implementing and operationalizing AI. This is part of our series on Artificial Intelligence created with BI. They discuss the different use cases and considerations when implementing AI.

Loek Vredenberg

CTO

IBM

Christian Fieseler

Professor of Communication Management

BI

"AI will be the best or worst thing ever for humanity, so let’s get it right"

Duration: 38 min


Topic: Digital ethics and politics
Organization: IBM
Perspective: Large enterprise
Date: 220407
Location: OSLO
Host: Silvija Seres

What you will learn:


How to approach AI

What considerations to take when implementing AI

What the challenges with AI are

Further learning:

Life 3.0: Being Human in the Age of Artificial Intelligence by Max Tegmark

Competing in the Age of AI: Strategy and Leadership When Algorithms and Networks Run the World by Iansiti & Lakhani

Det store spillet: Hvordan overleve i algoritmenes tidsalder by Bår Stenvik



About LØRN Cases

A LØRN CASE is a short and practical, light and fun innovation story. It is told in 30 minutes, is conversation-based, and works equally well as a podcast, video or text. Listen and learn wherever it suits you best! We cover 15 thematic areas within technology, innovation and leadership, and 10 perspectives such as founder, researcher, etc. On this page you can listen, watch or read for free, but we recommend registering, so that we can create personalized learning paths just for you.

We would like to help you get started and keep up lifelong learning.


More Cases in the same topic

#C0061
Digital ethics and politics

Glenn Weyl

Professor

Princeton

#C0147
Digital ethics and politics

Hans Olav H Eriksen

CEO

Lyngsfjorden

#C0175
Digital ethics and politics

Hilde Aspås

CEO

NCE iKuben

Finn Amundsen

CEO

ProtoMore

Transcript of the conversation: Autonomy in transportation

Welcome to LØRN.Tech, a collective learning effort about technology and society, with Silvija Seres and friends.

 

Silvija Seres: Hello and welcome to this case from LØRN and BI Norwegian Business School. This is part of the series we are creating together, in which we try to explore the main challenges and opportunities in creating good, fair and responsible AI, and this conversation will serve as a sort of guest lecture for the BI course Responsible AI Leadership. My co-host, as in the whole series, is Christian Fieseler, professor of communication management at BI, and our joint guest is Loek Vredenberg, who is the CTO of the Norwegian part of IBM. We're going to do this over 30 minutes, in a format where you have to introduce yourself first. Loek, we want to get to know you a little bit personally as well, so if you have some sort of exotic hobby, you will have to let us know. Then we are going to chat and find out what you mean by practical and fair AI, and then Christian and I will try to conclude with what we both think is the best idea we managed to extract from your head in this informal chat.

 

Loek Vredenberg: Good plan.

 

Silvija: Very good. Well, then I have to ask you gentlemen to introduce yourselves, and we will start with our dear professor first.

 

Christian Fieseler: Yeah, hello. I'm Christian Fieseler, one of the hosts of this podcast, together with my colleague Sampson, whom you might meet later. And yeah, very happy to interview a few thinkers and practitioners about how we can responsibly build AI in Norway and beyond.

 

Silvija: Very cool. So you've been introduced as a thinker and a practitioner of AI. What do you say?

 

Loek: Well, I think that's correct. I've dabbled with a number of implementation projects in Norway over the last, I would say, seven years, both directly involved and as a sponsor of the project. And I also read a lot. So I have opinions, as do my colleagues and my family, by the way; I have a lot of opinions on everything. Whether they're good or bad, I don't know, that's for others to judge. But I think it's important to have opinions so you can have a dialogue and a discussion with others, and then form that opinion over time. I'm a Dutchman living in Norway; my colleagues call me the Flying Dutchman, even if I can't fly. I'm a technology optimist, as I call myself. But there are obviously, and that is also what makes this talk interesting, negative effects of all kinds of technology research, development and implementation that we need to take care of. And that is something that I'm very, very focused on, both in the projects where I'm involved and in general. I don't think all the technology we're developing at the moment is good or is driving us in the right direction as a society. And that's something that we need to discuss.

 

Silvija: So, here's my reaction. I am also very opinionated, as my family and my colleagues would say. I believe that we actually need to take positions. Waiting and seeing is not a strategy; it's basically cowardly. Sometimes we need to think carefully about things, and we need to think, rethink and reposition as the situation changes. But I think we actively need to think about the future. It's like David Bowie says: the future belongs to those who can hear it coming. And I think we need to be active listeners, which is what the students are doing now as well. The other thing I want to say is that I'm not a technology optimist, by the way; I'm a technology opportunist. I think that's even more active. So it's like you say: we need to think about the opportunities. We need to believe that the future is good, I think, in order to be able both to live and to want to build something. But I think we also have to be a little impatient in finding the right opportunities and openings for our companies, and then exploiting them in a way that supports our cultural values as well. What do you think about that?

 

Loek: Yeah, I totally agree. I think that we need to take a stand on what we believe is proper implementation of technology versus improper implementation of technology. And we should not be like Groucho Marx, who said: "I'm a man of principles. I have many. But if you don't like them, I have many more." That's not the way we should treat principles; they should mean something. So there are some things that I've seen internally in our organization when we talk about ethical development and research on AI types of technologies and how you do that. There has to be some sort of governance structure around it. Not too tight, because otherwise you stymie innovation and development, but at the same time there has to be some sort of correction. In my role as lead architect on many, many projects, where I was also responsible for this governance in the project, somebody from the organization, from the client I was working with, a large public sector client, said: it's not the fact that you can say yes to things that is important; it's the fact that you can say no to things and stop them. And that's where we should do much more, I think, where we say: this is right, let's promote it; this is wrong, let's stop it. And we don't do that enough.

 

Silvija: I think you're right. And I think that sometimes we also cop out a little bit, because we say: this is something we don't know enough about, this is something I don't know enough about. So I think it's also a joint responsibility to figure things out as quickly as the world changes, and that's partly what we're trying to do here. Christian and I are beginning to converge on an idea, and Christian, please stop me if you disagree. But on this idea of regulating technology, etc., I think the word principles that you use, Loek, is extremely good, because the way we try to manage projects and growth, and even the regulation of technologies, is a little too rules-based, a little too specific. So what I'm hoping for is that maybe in this course we can also discover a few guiding principles for developing responsible AI and new technologies related to it. And maybe we can now play a little bit with some ideas around those principles. What do you think, Christian?

 

Christian: Absolutely agree, yes. I think the idea that you both point out right now, that on the one hand we should be principled, but also opportunistic, is not necessarily just a position. Because being opportunistic, maybe understood as figuring out what works, what doesn't, and where we can essentially create the greater good, should itself be governed by principles. And I would really, really love to hear from you, Loek, as a practitioner, what are good principles to live by, to design by or to manage by.

 

Loek: Well, in my organization I'm quite old; I'm a lifer in IBM, as they say. I started there in 1985, so it's a long time ago. And in IBM we have been focused really on how we do this properly. As you may know, we have had a research department for a very, very long time, and they do basic research. Basic research is quite interesting, but it could also potentially be dangerous. So we have from the outset been very focused on doing that type of research properly and having the right boundaries around it, also for AI. When we started really in earnest to work with this technology, maybe in the mid-2000s (we did a lot of stuff before that, but we really began focusing in the mid-2000s when we started working on the Jeopardy solution in research), we basically established three main principles. One is the notion that AI should augment human intelligence, not replace it, i.e. positioning AI as a tool. The second is that the data, which is always important in these kinds of solutions, and the insight gleaned from that data, are owned by the creator of the data; that never transfers. I know there are a number of organizations in the marketplace today that don't look at it this way, but we do. And thirdly, that new technology, not just AI but new technology in general, has to be transparent and explainable. If we don't know what's going on, then it is in essence, from the outset, dangerous. It could be a good result, it could be a wrong result; we don't know, because we don't know how it works. So those were the three main principles. But we thought that's not enough. You can't run a business based on those three principles; you need to operationalize them in some way. So then we said: okay, how do we do that? We need some characteristics of what, as we started out, fair or good AI is. And there are some characteristics, you could say. First, it obviously needs to be fair. That means that it gives equitable treatment to the individuals that are subjected to the AI models being used. And that means that you have to have a training data set that encompasses all the relevant segments of the population that you want to use the model for. If it's not correctly trained, we call that bias. Second, it needs to be robust. When you expose an AI model on the Internet, for example, as a service, we need to make sure that it cannot be attacked by anyone who wants to bombard it with data reflecting a specific view of the world and thereby change the way the AI acts. So you need to ensure, and there are technologies to do that, that the AI is robust. And then it needs to be transparent.
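
To make the fairness point concrete, here is a minimal, illustrative Python sketch of one common check, the disparate-impact ratio between a privileged and an unprivileged group. It is not IBM's tooling, and all names, data and thresholds below are assumptions made for illustration; toolkits such as IBM's open-source AI Fairness 360 cover this far more thoroughly.

```python
# Illustrative sketch only, not IBM's tooling. Computes the disparate-impact
# ratio: the rate of favourable outcomes for the unprivileged group divided
# by the rate for the privileged group. A common rule of thumb flags ratios
# below 0.8 as a sign of possible bias.
import numpy as np

def disparate_impact(y_pred: np.ndarray, group: np.ndarray,
                     privileged_value) -> float:
    """y_pred: binary model decisions (1 = favourable outcome).
    group: protected attribute per individual (e.g. age band).
    privileged_value: which group value counts as privileged."""
    privileged = group == privileged_value
    rate_priv = y_pred[privileged].mean()
    rate_unpriv = y_pred[~privileged].mean()
    return rate_unpriv / rate_priv

# Hypothetical example: decisions for eight applicants from two groups.
decisions = np.array([1, 1, 0, 1, 0, 0, 1, 0])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
ratio = disparate_impact(decisions, groups, privileged_value="A")
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.33 here, likely biased
```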

 

Loek: So what we've done, and this is just a proposal from IBM Research, is basically to copy what is done with foodstuffs. When you buy canned food, you have on the label what's inside the can. It's a very good comparison, because you don't know exactly what's in the can; it's opaque, you can't see through it. So you need something that describes what the model does, how it was trained, with which data, and so on. This is what we call a fact sheet, which is basically a certificate of the AI model being produced. And then, last but not least, AI needs to maintain data privacy. Even if your training data contains personal information about people, once the AI model has been trained and is using that knowledge, that insight from the training data, for the individuals who use the model afterwards, there should not be any possibility of working out what data was used to train the model, because that needs to be private. Obviously, the data from the person who is subjected to the AI model also has to be private, so it needs to be protected. So from that perspective, we're talking maybe not so much about fair and good AI as about AI that we implement with trust: AI where we as human beings, as customers, as patients, trust that the AI model will do good for us when we are subjected to it. That's what we're trying to achieve.
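
As an illustration of the food-label idea, here is a minimal sketch of what such a fact sheet could contain, written as a plain Python data structure. The field names, the model and every value are hypothetical assumptions made for this example; this is not IBM's FactSheets schema.

```python
# Illustrative sketch of an AI "fact sheet" (the food-label analogy above).
# All field names and values are hypothetical, not IBM's FactSheets schema.
import json

fact_sheet = {
    "model_name": "claims-triage-classifier",      # hypothetical model
    "intended_use": "Prioritise incoming insurance claims for human review",
    "out_of_scope_uses": ["Automated final decisions without human review"],
    "training_data": {
        "source": "Internal claims 2018-2021 (hypothetical)",
        "size": 120_000,
        "known_gaps": ["Few examples from applicants over 80"],
    },
    "evaluation": {
        "accuracy": 0.91,
        "disparate_impact_ratio": 0.86,   # see the fairness sketch above
    },
    "explainability": "Per-decision feature attributions available",
    "privacy": "Trained on pseudonymised records; no raw PII retained",
    "contact": "model-governance@example.com",
}

print(json.dumps(fact_sheet, indent=2))
```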

 

Silvija: Can you give us some examples? Because IBM has been at the forefront of developing AI in many cases. So some of your favorite examples of where this works well?

 

Loek: I think we have a lot of good examples in health care, actually, even if we have struggled and faltered there. We are doing a lot of good work now with models in health care, especially around image recognition of different types of radiology images, for example, trying to identify cancer or lesions and those kinds of things. There's a lot of progress, and I know there's a lot of investment going in from many companies in the industry, not just IBM. I think there's been really, really good progress, and there's a real incentive to do it, because there are a lot of problems in the way that type of work is done today. There's a lack of radiologists, they don't have enough time to do a proper investigation of the information, and the medical technical equipment, like scanners, gets higher and higher resolutions, so there's more and more detail in the pictures that may not be seen by the doctor. So using AI in that area is really, really good. But on the other hand, if you look at facial recognition, which you could say is basically the same type of technology, we did not do well. We had technology doing facial recognition, and we were actually told by a customer of ours that our facial recognition technology did not perform well. It was biased; it did not capture all the nuances of the different types of faces. So we worked together with the customer and IBM Research, and we produced a new data set that we thought was a non-biased, all-encompassing data set, with images of over a million people in it. And we still didn't get it right. We were still not able to create an AI model that was unbiased, that actually was fair and balanced in the way it performed. So after our internal AI review board said we should stop this, Arvind Krishna, our new CEO, announced, as one of the first things he did, that we were going to get out of facial recognition altogether. We stopped developing, we stopped the research and we stopped the products. And that actually gave us a useful perspective on regulation as well, because if regulation simply said "image recognition is bad, stop it" in general, then we wouldn't have been able to do that type of image recognition in health care; without the technology, we couldn't use it. So we started thinking that there has to be a way where we can say: we can use this type of technology in some ways, in some use cases, in some contexts, but not in others.

 

Loek: So we started thinking about more risk-based types of regulation. Our chief ethics officer, Francesca Rossi, is part of this EU expert group on AI and is working with the EU members on the AI Act, the regulation that is coming from the EU around implementing and developing AI in the EU. And they started talking about having to be more precise about regulation: saying we can use it here, we can't use it there, and if we use it, then we should use it in this way. That is much more precise than just saying "image recognition, you can't do that". So I think that was a good example of how we tried to do it right, found out that there was no way we were going to get there from a technical perspective, and said: okay, then we have to stop it and do something else, something different. Another area where we use image recognition, for example, is in the concept of Industry 4.0, where we do product-quality image analysis on production lines. We have cameras in the factory looking at the products going down the conveyor belt, taking images, doing all kinds of spectral analysis, and then saying: is this product correct, yes or no? Now, there is maybe not as much of an ethical or fairness dimension in those kinds of applications of image recognition. But at the same time, there are repercussions if the model doesn't capture faulty products, and that could be anything: a bicycle, food. We're actually working with DG Foods on frozen foods in research programs here in Norway, not only on the image recognition side but also on getting value out of that data using blockchain. The interesting thing is that not having a good model identifying those problems with the products has consequences for the consumers who buy those products afterwards. So I think there are a lot of good use cases and good examples of implementations there. Then we have a lot of focus on natural language processing, where we can speed up the way unstructured data, process-based documents I should say, can be processed within a company, with higher quality than if you didn't use those kinds of models. There's actually a great example made by a student, called Do Not Pay. Have you heard about that? It's a crazy idea from a student who said he was sick and tired of getting parking tickets, so he created a chatbot that had basically read the parking laws in his state. That chatbot can ask a number of control questions when you've gotten a parking ticket and then work out whether that parking ticket was correct or not. If not, it creates a letter with all the right legal terms that you can send to the owner of the parking lot, saying "I contest this parking ticket", and it has saved millions of dollars. He created an AI type of app; it's not really what we do, but still, it shows more and more sophisticated use cases. And he did it in the course of a week: he had the first version during the weekend and then finished it during the week. If you look at the adoption of that technology, it gives people legal access they otherwise wouldn't have had. It eases processes where there are constraints in society at the moment, and it's available for ordinary people, not just for the rich. And that's why I say I'm a technology optimist.
I love those kinds of use cases, where we actually can speed up those kinds of processes, free up capacity, and have the people who work in those processes use their capacity elsewhere.

 

Silvija: You are Dutch and you have lots of opinions, and I need to summarize them. What I've heard you speak about is that you believe in technology's power to be inclusive and democratizing, and there are lots of really good examples of that in what you just said. And then it's really important, when there is an example of the opposite, that the technology provider takes responsibility to backtrack, and being able to say no is super important. But, you know, all of this is very difficult. AI is very complex in itself, and now we're talking about lots of other things, ethics, etc., surrounding it. So what advice would you give to companies, managers and future managers listening to this course, so they get started with AI in the right direction? I'm looking for something relatively simple, say three steps, that doesn't scare them breathless from the start.

 

Loek: I think first and foremost, obviously, it starts with the problem. What's the problem you want to solve? Understand that problem really well, and whether solving it has value both for your own organization and for your users or customers. Second, look at the data that you need to solve that problem. Is this a problem that can be solved using this type of technology, and then what type of data do you need? Do you have that data? Is it personal information, or sensitive personal information, or not? Is it a use case where you actually decide whether people get money, like in social services, for example, where you take decisions, or at least make recommendations, on a person's application: approved, yes or no? Those kinds of things you should be aware of, and then act accordingly with regard to risk. So in one word, it's a risk assessment of the idea that you have. And if, for example, it is just "I want to speed up work that I do manually now, I want to automate it and speed it up"...

 

Loek: Then most likely there's not a lot of risk of creating an AI model that is unfair or biased or whatever, and I would say: experiment with it, do it. At the same time, if you feel that there are restrictions, then at least make sure that you have the right skills in-house that understand that risk, and if not, work with partners. There are a lot of good service partners in the market in Norway. Work with IBM; IBM gives support. We're working together with the Norwegian Data Center, for example, in that capacity, and give both advice and support to clients that want to start with AI and implement it in operations, and we actually do a lot of that work for free. And obviously there are the academics: NORA is a great partner, Norse as well; there are a lot of groups that are willing and interested to work with commercial organizations, and also public sector organizations, on this. So three things: understand the problem; understand the risk of the problem; understand whether you have the data, and if you don't have the data, whether you can license it or buy it from somebody else legally. Do this properly and make sure that you have data ownership resolved, even if you use external data. Thirdly, make sure that you have the right skills in-house, or align yourself with somebody who has the skills and the experience. It's the same as with handling data after GDPR came: there are still a lot of companies, I think, that don't handle data accordingly. They are not aware of the risk they're running by not handling personal information properly and according to the law, the risk of getting into the papers for the wrong reasons and hurting their brand. The same can happen by applying AI the wrong way, especially if the AI Act, which the EU is still working on, goes into effect. Then we need to make sure that when we start projects, we actually do those projects in accordance with the law. But otherwise, I think, as you said, we need to experiment. We need to be able to do these things in experimental ways, maybe even with synthetic data or other types of test data that are not privacy-sensitive.

 

Silvija: I have to summarize a little bit and unpack before we get into synthetic data, because now you're at a different level altogether. What you said is basically that the most important thing is to identify the problem: figure out what you want to solve. And I think this is so important, because so many companies start from the opposite end and say, oh, let's figure out what AI can do for me. It's actually: what do you need to solve, and how could good statistics and good data help you figure it out? So identify the problem, but then also think about the risks, and be very, very truthful and fair about those risks, because you might be getting onto a road you really don't want to be on. Then find partners, find data or buy data, get the skills, I heard you say, and then build your model, your hypothesis, use the model and continue learning. Make sure that you're legal in terms of both GDPR and the upcoming regulation. Perhaps the most important last point is that you said experiment and play. This is an amazing tool, and part of it is exploring what it can do for you within the principles that we've defined for safety. Now, if you agree with that summary, I'd like you to say two words about the point of synthetic data, and then we need to ask Christian to make sure that he has got the most important idea out of your head.

 

Loek: What's the point of synthetic data? In a lot of use cases, we need to use real data, specifically about people, and that's where the issue usually lies. One of the things that we've been struggling with, and this is the industry, not just IBM, is that the data AI uses is in a lot of cases unstructured data. So how do you anonymize or de-personalize unstructured data? There are ways to do that, but how do you prove that you have really taken out all the identifiable data from the data set? That's one problem: we can't prove it. Mathematically, it's not possible to prove it if you have a document of 60 pages with information on a person. Is it just the name, the address and the age that you should remove? Or are there actually things said about that person that you could use to do a Google search and then find the person online, because he or she has written in a Twitter feed or on a Facebook page about an episode in their life that the document also touches on?
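
To illustrate why this is so hard, here is a toy Python sketch of naive, rule-based redaction. Everything in it, the patterns, the note and the names, is a made-up assumption for illustration: the obvious identifiers are caught, but the indirect clue about a unique life event, the kind of detail Loek describes, slips straight through, and nothing in the approach lets you prove completeness.

```python
# Toy sketch: naive rule-based redaction of unstructured text. It catches
# obvious identifiers (an 8-digit phone number and an email address), but
# the quasi-identifying sentence about a unique life event is untouched,
# which is exactly why completeness is so hard to prove.
import re

PATTERNS = {
    "PHONE": re.compile(r"\b\d{8}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

note = ("Patient can be reached at 91234567 or kari@example.com. "
        "She mentioned winning a local baking contest covered by the "
        "regional newspaper last May.")  # indirect identifier, not redacted
print(redact(note))
```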

 

Silvija: So the only way to do that is to make fake people.

 

Loek: And that's where synthetic data comes in. The research now is much more focused on using real data but turning it into a fake person: figuring out what the characteristics of that data are and translating them into a fake person with the same types of characteristics, obviously with different values, but ones that can trigger the same type of reaction from a model. The problem with anonymizing is that you filter out important characteristics of the data, which is then used to train the model, and the model doesn't understand or doesn't react properly, because you've taken out the characteristics that basically define that person. So if you are able to synthesize those characteristics from the person into a new data set, which describes a fictitious person but has similar characteristics, then you can still use it to train the model and you get the right results. So that's an important area of research, I think; there's a lot of work to do still, but it's very promising.
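
Here is a minimal Python sketch of that idea, under heavy simplification: draw fictitious records whose per-column statistics mirror the real ones, so a model trained on them reacts similarly. All data and column names are hypothetical, and real synthetic-data generators also have to preserve correlations between columns, which this toy version deliberately ignores.

```python
# Minimal sketch of the synthetic-data idea described above: generate
# fictitious people whose per-column statistics mirror the real records.
# Real generators also preserve correlations between columns; this toy
# version only matches marginal distributions.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

# Hypothetical "real" records (in practice these would be sensitive).
real = pd.DataFrame({
    "age": rng.normal(45, 12, size=500).round(),
    "income": rng.lognormal(mean=12.8, sigma=0.4, size=500).round(),
    "has_claim": rng.binomial(1, 0.18, size=500),
})

def synthesize(df: pd.DataFrame, n: int) -> pd.DataFrame:
    """Draw n fake rows column by column, matching each column's
    mean/std (numeric) or category frequencies (binary)."""
    out = {}
    for col in df.columns:
        if df[col].nunique() <= 2:          # treat as categorical
            probs = df[col].value_counts(normalize=True)
            out[col] = rng.choice(probs.index, size=n, p=probs.values)
        else:                               # treat as numeric
            out[col] = rng.normal(df[col].mean(), df[col].std(), size=n)
    return pd.DataFrame(out)

synthetic = synthesize(real, n=500)
print(real.describe().loc[["mean", "std"]])
print(synthetic.describe().loc[["mean", "std"]])
```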

 

Silvija: Christian, what's the one thought you want to hear more about, and what's the one thought you will remember?

 

Christian: I will remember many, many thoughts, and I think you will as well. But it's a little bit hard to say "this is the one thought", because you had this wonderful manner of essentially breaking it down into several interconnected steps: we have to approach it this and this and this way. But maybe to put a bit of a tie around what we are discussing today: I like the point that you both raised, this idea of experimenting and playing where it can do no harm. In one of our upcoming lectures we will also talk a bit more about the idea of sandboxing, finding some type of collaborative learning space to figure this technology out. And what you also just mentioned with the idea of synthetic data, or earlier on, making the conscious decision that this is a field where we can currently experiment and learn, for instance maybe more in food processing than in facial recognition. I think both of these approaches reflect the right thinking: where is experimenting, playing and learning feasible and safe, given the current way we understand the technology? And, on the other hand, doing that with tools like synthetic data, which are inherently safer. That is, I think, a really good point that stuck in my head, which I hadn't considered as such before.

 

Silvija: From my perspective, I actually love this idea of experiment and play as well, because I think we can get so heavy talking about all the mathematical complexities of AI and all the ethical complexities of AI. I think it's a technology that you only understand once you start playing, sandboxing, like Christian was talking about now. So there is a bigger risk in not doing it at all than in doing it and stepping slightly in the wrong direction, and then, you know, learning quickly and adjusting the way you apply it. So I guess responsible AI, in my head, is also partly about actively using AI and then thinking responsibly.

 

Loek: I think a good reflection is always to ask: if that AI model were used on me, would I like that or not? If the answer to that question is no, don't do it, or at least rethink, remodel or redesign the model until I can say yes to that question. So ask those kinds of questions both when you create your AI solution and when you do research, in both areas. Right from the start, and right where you are applying the technology, you need to ask that same question: if I were subject to the result of that model, would I have a problem with that? Yes or no?

 

Silvija: You used this quote by Max Tegmark: "AI will be the best or worst thing ever for humanity. So let's get it right." And I think your chat with us now proves the necessity of that point really well. Christian, would you like to conclude?

 

Christian: I think that's a wonderful conclusion. I would conclude by saying thank you so much, Loek, for spending the last half hour with us and sharing all your insights and experience.

 

Silvija: Thank you very much.

 

Loek: Happy to be here and thanks for having me.

 

Silvija: Thank you.

 

You have now listened to a podcast from LØRN.Tech, a collective learning effort about technology and society. You can also get a learning certificate for having listened to this podcast at our online university, LØRN.University.

 

 

 
