LØRN Case #C1182
Gaps in the regulatory requirements vs modelling
How should the public sector work with artificial intelligence, and how far has NAV come in its exploration of AI? We learn how NAV works with AI to help solve future challenges. Our guest, Robindra Prabhu, is a data scientist with NAV IT Data & Insights, and our co-host, Samson Esaias, is an Associate Professor in the Department of Law and Governance at BI Norwegian Business School. This is part of the series we are creating together to explore the main challenges and opportunities in creating good AI, as part of the BI course Responsible AI Leadership.

Robindra Prabhu

Data scientist

NAV

Samson Yoseph Esaias

Associate Professor, Department of Law and Governance, BI Norwegian Business School.

BI

"I think many countries would be envious of us if we look at the sort of quality of the data that we have in the public sector, and that gives us opportunities."

Duration: 45 min


Topic: Digital ethics and politics
Organisation: NAV
Perspective: Large enterprise
Date: 220617
Place: OSLO
Host: Silvija Seres

This is what you will learn:


How to develop AI-based technology for services in the public sector

What kinds of technologies the public sector is working with, and what they are being used for

What legal challenges NAV is facing, and how it is trying to tackle them

How fair the use of AI in the public sector is

What the requirements for explainability and transparency in the public sector are


About LØRN Cases

A LØRN CASE is a short and practical, light and fun innovation story. It is told in 30 minutes, is conversation-based, and works just as well as a podcast, video or text. Listen and learn wherever it suits you best! We cover 15 thematic areas in technology, innovation and leadership, and 10 perspectives such as founder, researcher, etc. On this page you can listen, watch or read for free, but we recommend that you register, so that we can create personalised learning paths just for you. We would like to help you get started and keep up lifelong learning.


More cases on the same topic

#C0061
Digital ethics and politics

Glenn Weyl

Professor

Princeton

#C0147
Digital ethics and politics

Hans Olav H Eriksen

CEO

Lyngsfjorden

#C0175
Digital ethics and politics

Hilde Aspås

CEO

NCE iKuben

Finn Amundsen

CEO

ProtoMore

Transcript of the conversation: Gaps in the regulatory requirements vs modelling

Welcome to Lørn.Tech, a learning collaboration about technology and society, with Silvija Seres and friends.

 

Silvija Seres: Hello and welcome to a case with LØRN and BI Norwegian Business School. This is part of a series we're creating together to explore the main challenges and opportunities in creating good AI, as part of a course on responsible AI leadership. Welcome to our guest, Robindra Prabhu, who is a data scientist with NAV Data and Insights, and my co-host Samson Esaias, who is an associate professor at the Department of Law and Governance at BI. Lovely to see you, Robindra. On a personal note, we used to work together on wonderful technology. I remember you as a digital geek, if I may say so. You have your PhD, and you are not talking about digitalization just for fun. You are super ambitious on behalf of our society when it comes to digitalization. So I'm super excited now.

 

Robindra Prabhu: Well, thank you for that kind introduction.

 

Silvija: I'm going to ask you both to introduce yourselves, please, and then tell our listeners, both the students from BI and others, what you want them to be listening for in this conversation. We have about 30 minutes to talk about responsible AI, fairness and explainability in the use of data and AI crucial to public services. AI is at the heart of our society, and how it is applied will be super important. So first of all, please, Samson and then Robindra. Who are you, what do you want to talk about, and why?

 

Samson Esaias: I lead the Responsible AI Leadership course together with Professor Christine Fisher. I have a legal background and have been working with law and technology for several years. I also teach courses on the intersection between law and technology, including data protection law and a bit of competition law, and we are also looking at some of the new legislative initiatives that we now have, including the AI Act, the Data Act, and data governance.

 

Silvija: You’re one of the lawyers we need in the future, because you don't dismiss digitalisation, you actually try to fix it.

 

Samson: The objective with this podcast was to bring in NAV. We came into contact with Robindra through you and through Carrie Lohmann from the Data Protection Authority, and the idea was to have one case from the public sector where they are trying to develop AI-based technology for their services. We wanted to see what kinds of technologies they are working with and what they are using them for, what legal challenges they are facing, and how they are trying to tackle those challenges. You alluded to some of the issues in relation to fairness, especially when we are talking about the use of AI in the public sector. We basically have to make sure that the technology is fair. And then we also have requirements for explainability and transparency. So I think Robindra will give us an insight into some of those challenges and how they are trying to deal with them.

 

Robindra Prabhu: I'm a data scientist with NAV. I have a background, perhaps a geeky background, in experimental particle physics, where I spent a lot of time analyzing huge amounts of data and realized that a lot of the techniques we were applying to that data were being applied, in not very different forms, elsewhere, because so much data was being produced in society outside of our research centre. I remember it was a bit of an eye-opener that my colleagues were starting to apply those same techniques elsewhere. Super interesting. After a stint in research, I joined the Norwegian Board of Technology to explore these issues further. I always had an eye for the public sector; there are huge opportunities there. I also realized, and I think that was an eye-opener for me, that it's not just sunshine. There are definitely some challenges here, and they are very thorny issues where it is not obvious how we navigate them. So that brings me to how I landed here, because NAV was seeking data scientists to explore this field, and with a passion for the public sector, for data, and for doing this in a responsible way, I couldn't think of a better place to do it. And I've been here since.

 

Silvija: I want to just serve up a couple of scenarios, examples. When it comes to fairness and public services, we've heard about cases where people were denied insurance or some welfare money because of a stricter interpretation of their position by AI than what's fair. Maybe there were some assumptions that shouldn't have been there. Maybe there was some biased data. And then if you want to ask why you didn't get this service or this money or this loan, you might get the answer that we don't know, it's just the AI that says so. And that's the explainability problem. In Norway, a lot of our public services are delivered by NAV. When you go on maternity leave, when you're born, when you die, and actually everything in between: work, unemployment benefits, etc. All of this is managed by one institution, which is very rare, and that is NAV. That means you have a lot of data about what the population of Norway is doing, but also a huge responsibility to supply these services in a good way. So can you help me understand where the problems arise with data and AI related to this?

 

Robindra: Well, I think you touched on it. That is the essence of what you just said. We have a unique institution in Norway. We've put all these public services, everything from early childcare to your unemployment benefits and your sick leave benefits, everything that makes up the backbone of the Norwegian welfare system, under one roof. I know there are a lot of opinions about what NAV does and how it operates, but there's also a lot of trust that goes into this. Trust in managing all this in a way that is congruent with the way we want it to operate. Now, what happens is that we have all this data from all these different services under one roof. It doesn't mean that we can connect it all together. It's not that straightforward, but the data is there, and it's actually not half bad. I think many countries would be envious of us if we look at the quality of the data that we have in the public sector, and that gives us opportunities. But it also raises the question of what kind of welfare state we want in the future. What should it be like? What kinds of values do we want to shape the future?

 

Silvija: Sorry, I have to ask you, and this goes back to Samson, who can help remind our students, both from BI and otherwise, of the two concepts of fairness and explainability, and perhaps also data bias. But you said, Robindra, that the data is not half bad, which means we have quite good silos, relatively structured data, all connectable across the silos with our personal ID number, going back maybe 70 years, which is really quite unique. Maybe Iceland beats us, but not many more. You were saying it raises some normative questions. What kind of a state do we want? So do you mean, should our state, our public sector, give equal rights to everybody? Do we ensure that, or do we want to stop them from knowing too much about us? What do you think are the thorniest normative issues?

 

Robindra: Okay. Well, if I had to pick, that's a hard one. There are many issues that arise. But I think one that is very pertinent is that we have the opportunity, in a sense, and I'm not saying this is straightforward, there are definitely legal conundrums here, to find ways to be more event-driven, if you like: to understand the circumstances around a person and what kinds of services that person then requires. That allows us to perhaps be more proactive. I'm not saying that we should be, but it perhaps allows us to be more proactive.

 

Silvija: Say I lose my job, or I lose my child, or I lose my husband. You could be serving me proactively, and I might think this is great or I might think this is terrible.

 

Robindra: Exactly. And there are many ways we can do this right. We could be very proactive, where we could say you don't need to apply for this benefit, because we know that you qualify, and can we make sure that you get it? These kinds of things have their issues, but it's something that I think we should discuss more widely: what kind of role do we want to take in the future? Now, coming back one step, because that is maybe somewhat in the future, there is also the question of what we do right now. If you look at, for example, the task that we have now of matching unemployed people to jobs, we have tremendous opportunities in that space with digital technologies. It's something that we have been tasked to do, and we need to decide whether this is one of the ways we should be doing it, with all that entails. We have data that gives us some idea of what works and what doesn't. And maybe we should use that information to say: I think this service would fit you, because people in a similar position have benefited from this service, for example. Or we think, as we did in one of our cases, that this is going to be a prolonged sick leave. If we know that in advance, maybe we can prepare for that and follow it up in a very different way from what we can do right now.

 

Samson: I think it's very interesting. I think we can make it a bit concrete in terms of the case that you mentioned, Robindra. Can you tell us what this AI-based tool that NAV has developed is? It was also of interest for the sandbox at the Data Protection Authority. So if you can give us the functions of the service, then we can perhaps go to these normative questions: what kind of challenges did you see coming, and how did you try to address them?

 

Robindra: I should say upfront, before we dive into this, that we actually considered a handful of different cases for the sandbox. They all had their merits. But we picked this particular one because it allowed us to dig into some very deep, fundamental questions in law and also provided some very practical handles on how we should think about governance and inspection of public sector algorithms in the future. So what does this tool do? Well, we all fall ill from time to time, sadly. And sometimes it takes a little longer to get well. One of the beauties of the Norwegian welfare system is that if that happens, it's fine: we've got your back covered financially, but we also assist you back to full employment if you need assistance. Sometimes you need that assistance because you can't really return to the job that you initially had. It's just not possible after your illness. Maybe you need to reorient towards a different job. Maybe you need your workplace to adapt to new needs that you have. These are the things that we are supposed to assist with and do assist with. One of the problems is that we have a huge volume of incoming sick leave notices, and we actually have one of the highest sick leave rates in Europe, I've heard. We don't know which of those are going to develop into long-term sick leave and which are just going to fizzle out after a given time. So right now, there's no way we can actually plan and prepare other than to just sit and wait and see if it develops into a long-term sickness. So this team looked into this problem and thought: what if we can try to create a model that feeds on your type of illness, your profession, your work history, etc., and then tries to predict whether this will be a long-term sick leave, or how long it is going to be? That prediction can be used for a lot of things in principle, but in this particular case we wanted to use it to assist in deciding whether we need to call that person in for a so-called dialogue meeting. A dialogue meeting is a meeting between the employer, NAV, and the person on sick leave to discuss what kinds of adaptations we need to make in the workplace to assist with the return to work. We have a lot of these meetings, and we think a lot of them are unnecessary just because we don't know which cases will develop into long-term sick leave and which will be short. This tool would then, along with other kinds of information, help decide whether we should call for a meeting or not.

 

Samson: If you can then give us an idea of what kind of data you are using in developing the technology, and what kind of concerns you have when using that data to build this model?

 

Robindra: The data that goes into this model, and I have to say that I'm not on the team that actually developed the model, I worked on the explainability and the fairness assessment of it, but like I said, it's trained on data from previous sick leave cases: your type of illness, your profession, things like that, that we think are relevant for the duration. So that's what's fed into the model; information that goes into your sick leave follow-up process is what this feeds on. It's not straightforward, though, and it raises some very interesting questions. For example, this is a decision support system, but it's using a lot more information than is currently being used by NAV caseworkers. That is part of the beauty of these models: they can actually process a lot more information than we can today. But that is also a legal problem, because does that mean we have a right to be looking at all of this? Caseworkers today follow cases up in a certain way, looking at certain things like your profession or your age. And now, if you're looking at all kinds of other things in addition, like your full history, or whether something increased or decreased over the last month, things like that, then what does that mean in a legal sense? Another very interesting thing, I think, is that you're training a model here. So I'm using my data and your data, Silvija, to say something about somebody else. And that is absolutely unprecedented in public administration. We're not supposed to do that.
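
To make the kind of model described here a little more tangible, here is a minimal, hypothetical sketch in Python. The feature names, the training file, the threshold, and the choice of a gradient boosting regressor are all illustrative assumptions; this is not NAV's actual implementation.

```python
# Hypothetical sketch of a sick-leave duration model (not NAV's actual system).
# Feature names, the CSV file and the threshold are illustrative assumptions.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

df = pd.read_csv("historical_sick_leaves.csv")  # assumed historical case data
features = ["diagnosis_group", "profession", "age", "prior_sick_leave_days"]
target = "sick_leave_duration_days"

X_train, X_test, y_train, y_test = train_test_split(
    df[features], df[target], test_size=0.2, random_state=42
)

# One-hot encode the categorical features, pass numeric ones through unchanged.
preprocess = ColumnTransformer(
    [("cat", OneHotEncoder(handle_unknown="ignore", sparse_output=False),
      ["diagnosis_group", "profession"])],
    remainder="passthrough",
)

model = Pipeline([("prep", preprocess), ("reg", GradientBoostingRegressor())])
model.fit(X_train, y_train)

# A predicted duration above some threshold could flag the case as a candidate
# for a dialogue meeting; the caseworker still makes the actual decision.
predicted_days = model.predict(X_test)
needs_meeting = predicted_days > 56  # arbitrary illustrative threshold
```

In the design described in the conversation, such a prediction is decision support only: it is combined with other information, and the caseworker can contest it.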

 

Silvija: This is probably where I'm being a little too naive and blue-eyed, but I'm hoping that we as a country can continue this wonderful welfare system that we have. It depends on an efficient administration of the welfare system. The reason Norwegians pay tax is that we believe it is both collected properly and used properly, and similarly with the welfare system. So why do we consider this such an ethically troubling thing? I mean, if this were something that would suddenly make my insurance cost much more, fine. But basically it's just trying to ensure that we divide this welfare wealth in the fairest possible manner. And I do not understand why we are so worried about the state having this data and using this data, as long as there is some sort of democratic process about what welfare really means and what fairness means, etc. I don't want people to be faking long-term sick leave with my tax money.

 

Robindra: No, and the efficiency argument is one of the arguments that motivated this as well. We want to focus on the long-term sick leaves that need a follow-up rather than spend time on those that never qualify for a meeting anyway. That's a waste of time and resources for us, and also for the employer and maybe the doctor and for the person on sick leave. If you don't need anything, why hold the meeting? Nobody wants to have this meeting unless you actually have to and it's beneficial. That is definitely part of it. But what one also has to appreciate is that we need to do this in an efficient way while finding avenues for efficiency that maintain the underlying principles that form public administration. And one such principle is that the data on my sick leave informs decisions about me; it is not your data that informs decisions about me. That is a fundamental, thorny issue. It might be fine that we do this because, in a sense, it's a statistical aggregate: we're looking at people that look like you, etc. You could argue like that, but we need to debate it and we need to agree on it. Like you said, it needs to be democratically codified.

 

Samson: I can also add a bit in relation to Silvija's question. You mentioned this idea of why we need to really worry; I mean, we have trust in the Norwegian government, in the Norwegian democratic system. One of the reasons we have trust is that there are actual existing rules that make sure our data is not abused. And of course, Robindra alluded to one of these principles: if NAV is going to make a decision about me, then it must be based on my data, and that data has to be accurate as well. This is basically one of the core principles under the General Data Protection Regulation. So they have to make sure that this data is accurate and relevant for the decision they are trying to make. Plus, we have also seen experiences in very democratic countries such as the Netherlands, where the social service agency tried to use a kind of AI tool to detect fraud in childcare benefits, and because it wasn't robust, it wasn't tested, and they didn't engage in this discussion of what concerns might arise, it led to discrimination: many of the people identified as trying to cheat the system were people with immigrant backgrounds. So immigrant background effectively became one of the factors for suspecting cheating. This played a role in causing the government to resign. So this is actually one of the few cases where the use of AI and concerns about discrimination have had significant consequences for a government. The mere fact that we have a democracy and rules is not going to make sure that things don't go wrong, unless you go through a robust process of testing. And that's why the AI sandbox that the Data Protection Authority is developing is quite useful in that sense.

 

Silvija: I have two worries here. One is: why don't we then say, okay, fine, this service was biased, it was unfair, but it has to do with historical data and maybe old biases, so let's correct it. How do we use this experience to make AI better? That's one question I have, rather than saying, oh, it was wrong, we'll just stop doing anything more. And the other thing that really worries me is this differentiation between use and abuse of data, because frankly, everything that Silicon Valley does is about the use of data for future profiteering and monetization. And then if the state tries to do that, even to give you services that are free, that are welfare, health, education, then suddenly we talk about abuse of data. I don't know. I just want to challenge both of you and the students to think about why we are so much stricter with the public sector and our own state than with Silicon Valley or China.

 

Robindra: Well, I think that's a fair question. I think we should bear in mind that a lot of the things that we are trying to do are happening around us, elsewhere. That is the reason I think we need to grab the bull by the horns and handle these issues, because it is also a matter of relevance: being relevant as a public sector institution in the future. So I think we need to explore these issues. But at the same time, I think there is a case for holding us to slightly different standards. That is because people who come to us do not come to us voluntarily. I mean, I hope some do, but usually there's an asymmetric power relationship. I'm not saying it's not there with some of the big Silicon Valley actors as well; there is also an asymmetric power relationship there, and that is also deeply problematic. But we should be cognizant that they are not subject to the sort of democratic control that we are, and I think that's something we should pursue. So that is exactly why we decided to take this particular case to the sandbox. We had lots of cases with other thorny issues, some less mature than this and some in different areas, but this was a fairly mature case. We had developed a model. We were ready to test it. We'd also applied some of our internal principles, which are always in flux and developing. But we had, as part of this, done something that we thought was unprecedented, namely tried our best to fairness-assess the model before we deployed it. We built in explainability from the very beginning, so it's not an add-on that we do at the end; it's something that we think about from the very beginning. And that was a lesson I think we will repeat, because it was indeed very useful. So that was in order to do just what you said: we want to shorten the route to value, to reap the benefits this technology affords, but in a way that decreases the uncertainty regarding the legal playing field, because while we have a law that governs the use of data, it's not entirely clear what that entails when it comes to machine learning development in practice.

 

Robindra: And this sounds vague, I know, but we don't really even know whether we can use the data to train a model for this particular purpose. We have a history of specifying these things in law, and we just do not have a legal framework that gives us any clear indication about whether this is okay or not, and if it's okay, how far we can go and where it goes too far. We want to open that debate, but we want to do that in a safe space rather than end up on the front page. We wanted to give assurances, both for ourselves and for the country at large, that what we're doing is actually in line with current thinking and by and large correct, and to foster and nourish the trust that we know we enjoy every single day, also when we do these new things with technology. And if I may make a final point: Silicon Valley also does this, but we are a large public sector player in Norway, and with that, I think, we have a responsibility for this.

 

Samson: I think we can spend a couple of minutes just making this fairness and explainability challenge concrete. So in your case, what were your concerns? What kind of fairness issues, for example, were driving the work you were doing? Give us a concrete example. And then of course to explainability: there is often this debate about how some models learn on their own based on the data, and for many people, even for those who developed the technology, it would be very difficult, if I were called in for a meeting based on your tool, to explain to me why. What were the criteria for selecting me and not another person? So if you can elaborate a bit with examples on those two issues.

 

Robindra: Okay. So this is a huge, huge topic, and I always find it very challenging to take a position because it's hard. The reason is that there is no such thing as a fair model. It's not an accidental feature that a model is unfair; it's a feature of the model. It doesn't matter whether your data is biased or not: unless you have a completely homogeneous population where everyone is exactly the same, you are very likely to get a model that will, in some way or another, treat a certain group of people differently. It is an unavoidable reality. And that's the other side of it: you also want differential treatment, that's also built into it. Otherwise, why would you do it?

 

Silvija: Sorry. I think this is a super important point and I'd like you both to kind of just stop for a second, maybe repeat what you just said. So what you're saying is that this differentiation is actually the feature. What you're trying to say is that we ask the AI to look for patterns in our data, and if it can't find any patterns, if it says everybody's the same, everybody does the same, then it didn't do anything for us. Right.

 

Robindra: Right. So there are two sides to this. One side is exactly that: the reason you do this in the first place is that you want to differentiate. In this case, we want to differentiate between long-term and short-term absences, and we think that certain salient features point to long-term and short-term ones. That's what we want to do; we want that differentiation. Now, we want the differentiation to be lawful.

 

Samson: For example, in your case, if you are going to call someone in for a meeting, it should be based on the sickness, on the occupation, not on my gender.

 

Silvija: Or color.

 

Samson: Color. Yeah.

 

Robindra: That's one side of it. We have a lot of interesting data. We also know that that data is coloured by all kinds of collection processes, right? Interactions between humans in offices, caseworkers and users, and the systems that we have designed over the years. It's huge, it's complicated. There's no such thing as neutral data. And added to that data challenge, we also know that even if that weren't the case, there are always going to be majorities and minorities in a population. You can always find a minority, and the minority does not have to have the same properties as the majority. But when you train a model, you're basically saying: we expect you to behave in this particular way. There could be very real reasons that have nothing to do with bias for a minority to behave in a different way. We know that Parkinson's disease is, I think, more prevalent in men than in women; I think that's the right way around. So there are differences. Some of them might be due to societal bias, some of them might be due to other things; it doesn't really matter. The point is that the differences are there, and once those differences are there, it's very hard to create a model that treats everyone in the same way. So our starting point is always that it's much better to assume that your model is going to be unfair. Do the tests, check, and show that it isn't unfair in the ways that are important to you. And so the challenge in this particular case was to say: look, we have some tools. That's one of the beauties of these models as well, I think. It's not that we don't have biased systems today; humans are biased too. But these models allow for inspection in a way that the systems we have today don't. That gives us an opportunity to dissect and probe these issues. Not perfect, but it gives us one tool, and I think we would be remiss if we didn't use that tool. So we say: let's just assume that we have a bias, probe that bias, and then decide whether it is tolerable or not. And to dive into that question of whether this is okay or not, you have to go back to the law. What is the notion of fairness that is built into the law that governs sick leave in Norway? And that's hard. That is hard.
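
One simple way to work with the "assume the model is unfair and probe it" approach described here is to compare prediction errors and flag rates across groups. A minimal sketch, assuming hypothetical column names and the same illustrative 56-day threshold as before; which groups and metrics actually matter is exactly the legal question being discussed.

```python
# Illustrative group-wise fairness probe; column names and threshold are assumptions.
import pandas as pd

def fairness_report(df: pd.DataFrame, group_col: str,
                    y_true: str = "actual_days",
                    y_pred: str = "predicted_days",
                    threshold: float = 56.0) -> pd.DataFrame:
    """Compare prediction error and flag rate per group."""
    df = df.copy()
    df["abs_error"] = (df[y_pred] - df[y_true]).abs()
    df["flagged"] = df[y_pred] > threshold
    return df.groupby(group_col).agg(
        n=("flagged", "size"),
        mean_abs_error=("abs_error", "mean"),
        flag_rate=("flagged", "mean"),
    )

# Hypothetical usage: large gaps in flag_rate or mean_abs_error between groups
# are the kind of differential treatment that has to be assessed against the
# notion of fairness in the governing law.
# print(fairness_report(predictions_df, group_col="gender"))
```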

 

Silvija: Samson, we have maybe two or three minutes more, and I don't know if you covered your question about regulatory gaps. Where do you want to go?

 

Samson: I think that's a good point, but I think we can relate it to explainability. One requirement under the law, and many other laws, is transparency: you need to tell the user how you're making decisions, how you're using the data. And of course, in the data science and machine learning literature, there is this idea of the black box: even the developers are unable to explain what's happening. How are you trying to deal with this? You have these models and this obligation to be transparent and explain what's happening inside the black box.

 

Robindra: That's absolutely right. It is an issue that these models are often harder to explain. Again, there are also a bunch of tools out there that allow for some kinds of explanations. They have their issues, but they allow for new ways of explaining and shining a light on these models. So we thought, let's try to harness those opportunities that come with this technology to address this challenge. And this challenge manifests itself in many ways in this particular model, but in particular, we want it to be a decision support system. If you want a decision support system, you don't want the caseworker just to follow the recommendation blindly. That's not decision support. Decision support is something that allows for critical thinking. And so we realized from the functionality of this that we need the caseworker to be able to contest it. Otherwise, there's no way of buying into it either; there's no trust. You need to be able to contest it. And to do that, we need to be able to say: look, for this particular sick leave we are predicting 120 days, and the reason we predict 120 days is that these factors contribute to increasing it while those factors contribute to decreasing it. The mathematical framework for doing that is fairly advanced, but we took that, put it into a decision support system, and tried to test it with caseworkers, and also found that without anything like this, it was very hard to get any buy-in whatsoever.
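
The per-case explanation described here, a predicted duration decomposed into factors that push it up or down, is the kind of additive feature attribution that libraries such as shap provide. A minimal sketch, reusing the hypothetical model and pipeline from the earlier example; this is not NAV's actual explanation framework.

```python
# Illustrative per-case explanation via additive feature contributions (SHAP values).
# Reuses the hypothetical `model` pipeline and `X_test` from the earlier sketch.
import shap

regressor = model.named_steps["reg"]
X_test_encoded = model.named_steps["prep"].transform(X_test)
feature_names = model.named_steps["prep"].get_feature_names_out()

explainer = shap.TreeExplainer(regressor)
shap_values = explainer.shap_values(X_test_encoded)

# For one sick leave case: a baseline prediction plus contributions that push the
# predicted duration up or down, which is what a caseworker can then contest.
case = 0
print("baseline prediction (days):", explainer.expected_value)
for name, contribution in zip(feature_names, shap_values[case]):
    direction = "increases" if contribution > 0 else "decreases"
    print(f"{name}: {direction} the prediction by {abs(contribution):.1f} days")
```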

 

Samson: So we are taking into account perhaps your type of sickness or the kind of job you're doing.

 

Robindra: But it's not just what information you're taking into account, because we can give that information upfront to the user: this is what we're looking at. It's also: in your particular case, this is the prediction, because of X, Y and Z. That allows you to say: no, that is not right, that's just not correct, or I don't agree with that, or that doesn't really make sense. It allows you to contest it, because you have a rationale and an explanation for that prediction. And that was the idea.

 

Samson: Yeah. Okay. Very interesting. Yeah.

 

Robindra: We built that into the decision support, and I think the key lesson we've learned is that you need to start thinking about this at the very beginning. And that goes for all developers out there: it's not something you can add on at the end. Just as you optimize for outcomes and performance metrics, you have to start thinking about fairness and explainability from the very beginning and build them into your development cycle.

 

Silvija: We have to conclude this fireside chat, but I'm taking away many really interesting examples and ideas. Maybe the most important is just as you summarized it now. Number one, you have to design for explainability and fairness from the very start. But then, number two, it means that even before that you have to make some really difficult decisions about what you mean by those things. And this regulation is not in place yet, so you might have to make some normative, challenging decisions while we await that regulation.

 

Robindra: Sometimes the regulation might not be there; sometimes it might be there, but it's not laid out in the mathematical terms used in the literature. You need to translate between the law and the very mathematical expressions that govern these models. I don't know if that's something that can be done, but something I really wish for is some sort of mapping in that space. And it needs to be said, and I cannot stress this enough, that this is quintessentially a cross-disciplinary exercise. You cannot do this alone. You cannot let just data scientists do this, and it's not something the lawyers, the legal experts, can do alone either. You need to sit together, and we've only been able to do anything in this space because we've all sat down together: the legal experts, the data scientists, the product owners, the domain experts, to see what makes for the right choices here. Design is also hugely important. You can have the best mathematical methods in the world, but if you cannot create a tangible framework for explaining your model, you're no further along.

 

Silvija: Very nice. Samson, if you were to choose one thing you'd like your students to remember, what would it be?

 

Samson: Normally in discussions about new technologies, you have this ambition of how it's going to revolutionize how we do things, in a positive way, and sometimes you bake in some of the concerns that need to be addressed. But the starting point Robindra talked about is that you need to assume things are going to be unfair, and actually going to benefit those in power or exacerbate the problems we have. If you take that assumption as the starting point, I think you have a better chance of coming up with a better and fairer product. So the mentality we need is: this is going to be unfair. That starting point would, I think, be a very good mentality to have. It gives you a good basis for being very critical and for building your systems in a robust, ethical, and legally compliant way.

 

Robindra: Yeah. Perhaps I was a bit too strong. What I want to say is that these models are definitely imperfect, and for us it has proven very useful to just accept that from the very onset, and then instead try to show that this is imperfect in ways that we can live with, or that it is not imperfect in ways we cannot live with. In most respects, at our very first assessment, this model was absolutely fine, and it was only in one particular area that we found a real issue. But it's so much better to uncover that issue before you actually deploy your model.

 

Silvija: Very nice. I learned a lot. Thank you both very much for that. And on behalf of our students as well, thank you for a great, inspirational and educational conversation.

 

Robindra: Thank you.

 

Samson: Thank you.

 

You have now listened to a podcast from Lørn.Tech, a learning collaboration about technology and society. You can also earn a learning certificate for listening to this podcast at our online university, lorn.university.

 
