LØRN Case #C1180
Sandbox for responsible artificial intelligence
When creating a sandbox environment, who do you invite in, and why? Once they are invited, what rules of the game need to be in place? This is part of the series we are creating together with BI Norwegian Business School to explore the main challenges and opportunities in creating good AI, as part of the BI course Responsible AI Leadership. Our guest today is Kari Laumann, section leader at Datatilsynet, with a mission to help take care of people's privacy.

Kari Laumann

Sandbox programme lead, Head of Section for Research, Analysis and Policy

Datatilsynet

Samson Yoseph Esaias

Associate Professor - Department of Law and Governance BI.

BI

Duration: 42 min


Topic: Digital ethics and politics
Organisation: Datatilsynet
Perspective: Large enterprise
Date: 220602
Place: Oslo
Host: Silvija Seres

What you will learn:


What is the motivation behind the AI sandbox, and what are the criteria for selecting participants?

The public and private sector angles: are there differences in the issues they encounter and in how you work with the two sectors?

Challenges and risks of participatory regulation: how do you tackle the risk of regulatory capture?

Further learning:

Coded Bias, Netflix documentary

Atlas of AI by Kate Crawford

Ekko by Lena Lindgren

Digital revolusjon by Hilde Nagell

Datatilsynet's report on AI and privacy (2018), which identifies some unique challenges for AI.

This is LØRN Cases

A LØRN Case is a short and practical, light and fun innovation story. It is told in 30 minutes, is conversation-based, and works equally well as a podcast, video or text. Listen and learn wherever it suits you best! We cover 15 thematic areas within technology, innovation and leadership, and 10 perspectives such as founder, researcher, etc. On this page you can listen, watch or read for free, but we recommend that you register, so that we can create personalised learning paths just for you.

We would like to help you get started and keep going with lifelong learning.



Transcript of the conversation: Sandbox for responsible artificial intelligence

Welcome to Lørn.Tech - a collaborative learning effort about technology and society. With Silvija Seres and friends.

 

Silvija Seres: Hello and welcome to a case by Lørn and BI Norwegian Business School. This is a part of the series we are creating together to explore the main challenges and opportunities in creating good AI, as part of the BI course Responsible AI Leadership. In this conversation, I will be the host, Silvija Seres from LØRN, and my co-host is Samson Yoseph Esaias, who is an associate professor at the Department of Law and Governance at BI. So welcome to my co-host. And our guest is Kari Laumann, who is the head of the Section for Research, Analysis and Policy at the Norwegian Data Protection Authority. Kari is also a good old friend, so it is super lovely to see you again. A very warm welcome to both of you.

 

Kari Laumann: Thank you. Thank you for having me.

 

Silvija: So Kari, the way this works is that Samson, who co-teaches this new and very important course at BI, has said that he really wants you as a guest. So I'm going to ask both of you first to just very briefly introduce yourselves personally, and I will start with Samson and then go to you, Kari. And after that, I would really like Samson to comment on why he wants to talk with Kari.

 

Samson Esaias: I mainly work with law and technology, so I have worked at the Department of Law, particularly the Norwegian Research Centre for Computers and Law, for several years, looking into basically the intersection between law and technology, addressing specific issues on data protection and, of course, a bit of competition law.

 

Silvija: So you're a lawyer by training?

 

Samson: Yes, yes, I'm a lawyer by training. I did my PhD at the Department of Law at the University of Oslo and moved to BI a couple of years ago, just before the pandemic. Okay. So the reason we want to have Kari as a guest in this course is that, as you know, BI is in the business of training managers who will be in charge of the next wave of technologies. And the trend shows that artificial intelligence is getting wider adoption both by private entities and by public agencies. So our goal with developing this course was basically to equip students with the knowledge and tools they need to develop and use artificial intelligence in a way that respects existing legal rules, but also ethical principles. And of course, data protection is central to ensuring that this technology is developed responsibly. That's why we have Kari here as a guest: she is the project manager of the AI sandbox, which is quite interesting as an idea in itself, but which also gives us insight into the main challenges in relation to data protection when one develops and uses this technology. So it will be very interesting to hear what she has to say about this experimentation with the sandbox, but also about the main challenges in data protection.

 

Silvija: Very good. Kari, who are you?

 

Kari: Oh, that's a difficult question.

 

Silvija: And why did you become like that?

 

Kari: Yeah, I should know. Because I'm a social scientist, a sociologist, maybe I could do an analysis of why I ended up here. No, I don't know. But I am a social scientist, and I've been working with the intersection between technology and society for many years. I've been with the Data Protection Authority for about ten years now. I also had a little bit of a break from the Data Protection Authority, when I was with Telenor in Asia working with privacy and data protection in their Asian business units. That was also a very interesting perspective: working on the same topic I have for many years, but in a different culture and from another side of the table. And then I came back to the Data Protection Authority to work specifically with the AI sandbox that we will talk about today.

 

Silvija: So I've known you for a while, but I know really nothing about you personally. What's your most exotic hobby?

 

Kari: Oh my God, I dread this question. I'm afraid I'm probably the most boring person ever to have been on this podcast, unfortunately. Do I have any hobbies? I'm not sure, but I do have a book club that I started with some friends from high school during COVID, and it is still continuing, which I really enjoy. We do it digitally because we're spread all over the country. So that's the most exotic hobby; I'll have to disappoint you on this point, I think.

 

Silvija: I think we'll have to get you on our LØRN book club, and maybe you can lead that for us. The idea is that there are so many good books published in Norway, but also internationally, that people need to read, and somebody who could drive that would be amazing. So let's get back to that idea later. Now, going back to what Samson was saying. Or no, maybe before we do that, could you please just define it in one minute: what is Datatilsynet, why do we have to have it, and why did you actually choose to work for them?

 

Kari: So all European countries that have adopted this law, the GDPR, which Norway has also adopted even though we're not part of the EU, are required to have a supervisory authority to enforce this law. That is why we exist. And even before the GDPR, which came in 2018, we had a data protection law in Norway that required Norway to have a data protection authority. So our role is to enforce the law, to receive complaints and handle them, and also to do investigative actions, enforcement of the law. We are also required by law to give guidance and to help advise companies and governments on data protection questions. Those are among our most central tasks according to the GDPR, but there is a whole list of things.

 

Silvija: I'm going to give you two examples, and I'm going to start my provocations right away. What I want to provoke around is two things. One is: why do we have to have this Data Protection Act at all? What's the point? And the other is: is it actually as much a hindrance to innovation as it is a supporting mechanism for a stable future? So example number one: we have a coastal path going over our property in Fornebu, and there is a boathouse. Things have gone missing from that boathouse a couple of times, and some nice little statues have also disappeared. I really wanted to install a camera on that boathouse so I could see who was taking these things. But because that would be capturing images from a public path, I'm not allowed to do that. I understand this is data about people, it infringes on their privacy, and it's difficult to get consent from people to be filmed while they walk on a coastal path. But still, how do we make sure that we capture the data necessary to protect our businesses and our security needs in this country? And this is a big question that I think the police have, that the tax authorities have, and that many of the public bodies trying to develop solutions as smart as the ones Google or Amazon or Alibaba are delivering have, while they are hindered by their need to respect data protection.

 

Silvija: So that's my first provocation. The second provocation is: are we being stricter here in Europe and in Norway than what American companies and Asian companies have to deal with? An example I'd like to use there is FHI's tracing app, the first one in COVID, which was in many ways stopped because of uncertainties around the usage of people's location data: how much should the state know, and do they need to know it? That kind of necessity argumentation around it stopped the thing, and politicians are very easily stopped when you wave the GDPR flag. But at the same time, we send the same data very promiscuously to people in Silicon Valley and people in China, and they can then develop both tracking, but also health services and many other services that we can't compete with. So, can you just get us started, and maybe Samson can then continue with the questions, just to frame the whole conversation: why do we have to think about data protection? Why can't people just gather data? What is data protection really about, if you can make it very simple? And then, how do we make sure that it works for Norway, and not just for those who don't have it?

 

Kari: So you want a simple answer to quite complex questions? Well, you also asked why I work at the Data Protection Authority and why we have this law. I think the law tries to do different things, but maybe two main things. The first one is to protect people's integrity, the basic human right to privacy, which is closely connected to democratic values, such as being your own person and being allowed to express your opinions freely, and the freedom to search for information without being tracked and traced. So I think one of the core ideas of the law is to protect the individual's rights. And the other core goal, I think, is to enable a free flow of data within a responsible framework. That is a project the law has, but you also see it in some of the European strategies and the Norwegian strategies, among others the national strategy for AI in Norway, where you want innovation when it comes to AI, but you want it to happen in a responsible way that still makes sure the values we have in Europe about the right to privacy and other fundamental human rights are taken care of. So I think these two things should go together: you can use data, but use it responsibly. And then you gave some examples, about camera use and... what was the second one again?

 

Silvija: The COVID tracking app?

 

Kari: I think these two are examples of things that are regulated in the law. And it shows that the law is not absolute. It doesn't say you cannot use camera surveillance, or you cannot develop an app that collects data, but it sets requirements for how to do this, to do it in a responsible way that respects people's rights. And then you need to do this balancing test: okay, I want to put up camera surveillance because my garden gnomes were stolen and I don't like that, but I would also be surveilling a public path. Is the action I want to take proportionate? If I can sit and live-stream people walking past on their trip to the green areas, is that proportionate to what I want to achieve, which is to keep my garden gnomes safe?

 

Silvija: You said a lot, so I'm just going to stop you and try to summarize and underline some of it, because you said a lot of really important things, and then I want to strengthen some of what you said. My role here is to translate what you guys are saying into Lego language. I heard you say that it's about using data, but using it wisely. And there is a balancing act between creating value based on that data, for example protecting my garden or helping people understand the development of COVID in Norway, and the possibility of abuse of that data. I think that's the really important thing that I don't think our society, our politicians, our businesses have really taken in. It's this future accrual of extreme value and growth opportunities for those who have data today, and it's this future optionality of data that you're trying to control in a way that is good for society. Because I can collect data today about people who walk on my path, and at the moment I really don't care, I just want to be able to look back to yesterday if something got stolen today. But if I save this data, I can start building products based on it that I can sell to, I don't know, municipalities or hackers, and I don't think people understand how much you can do with data if it falls into the hands of people who are uncritically building businesses or bad services on top of it. And this sort of future consideration, I believe, is one of the most important things you're trying to balance with the necessity of creating good services today.

 

Kari: I agree, but I also think it's here and now. Imagine the Norwegian government issued this app and people feel it's too intrusive: they don't only ask for my data related to COVID, but maybe also all my geolocation, and they want to use it for tracking disease spread but also for other purposes. Maybe this doesn't feel okay, maybe it feels too intrusive, and I don't know. That, I think, could also have a here-and-now effect, in that people maybe don't want to use it. That might affect the trust people have in that particular app, but also the trust people have in the Norwegian government, and are they then willing to share data?

 

Silvija: So here comes my main question to you, Kari, and then I'm leaving the floor to Samson, because it has to do basically with this Norwegian perception of trust and privacy. We are very concerned that nobody should know too much about us in some areas. On the other side, we are really unconcerned about people being able to openly access our tax returns; my American friends get a complete shock when they hear about that kind of openness. So the balance between being open about many things and then suddenly being very restrictive with the Norwegian public sector, especially those who are supposed to deliver health and welfare services to us, is, I believe, off, because we trust Google and Amazon and Alibaba and Huawei more than we do the Norwegian state. Yes, because we are giving this data to those companies.

 

Kari: People say they trust the public sector in Norway to handle their personal data, and at the bottom of the list you have all the big Internet companies, Google, Facebook. So people don't actually trust them.

 

Samson: They don't have many options. So I think it's part of a competition problem as well.

 

Silvija: Yes. So this is a competition problem. And when an expert from China was visiting Norway, one of the things he said was that he believes we can't catch up with China and the US on AI because we have the GDPR, and because basically we are not able to collect the data that AI needs. So my question is, and I'm provoking: why are we stopping our own developers from gathering the data, when we can control them and believe there are more white hats among them, while we have no way of controlling the data gathered by the international mega-monopolies and the services they can deliver on top of it? Nothing stops Amazon or Apple from selling me a package of health insurance, health services and welfare services based on all the data they have on me: super personalized, but also priced very much according to the super intimate health data they have on me. I guess my question to you is, are we being a little too controlling of the actors we can regulate, relative to the ones we can't?

 

Kari: Yeah, but I also think it's not like there are no rules for these companies. The same rules apply to them as well, but only for their customers in Europe. So of course a Chinese company may have some rules in China, but they can develop their services and train on their own data, and that's probably an advantage for them. But for European customers, the GDPR rules apply to them as well; whether it's easy to enforce is another question. And yes, European companies have stricter rules in terms of data use, and that can be a hindrance. But I also think it can be an opportunity to develop products that are actually more sustainable when it comes to data use. You're building trust. I can definitely see how it can be a limitation, but it doesn't have to be. And I also think that with the GDPR, as we talked about, you have to make these assessments, so it gives quite a bit of room. In some instances companies are hesitant because they're unsure whether they will breach the law, and this is something we're also trying to address in the sandbox, so that we at least remove those types of hindrances that shouldn't be there if they're based on some kind of misconception.

 

Silvija: You're saying it's a healthier process and will probably, in the long term, give better results in terms of privacy versus personalization. We are going to talk with investors in another conversation, and they have a lot to say about all these acceptance processes we have with the foreign data service providers. But Samson, you wanted to talk about the sandbox.

 

Samson: So Kari is leading this sandbox, so I think we can structure our talk into two main parts, and we can start with the idea of the sandbox itself. What was the motivation behind launching this sandbox, and what benefits do you see from having it? So just talk about that, and then we'll get to the data protection issues a bit later.

 

Kari: Maybe not everyone knows what a sandbox is, because it's a little bit of a new topic, and I think there's no standard definition of what it is. It can be different in different circumstances; in the financial area they've had sandboxes for some time now, and it's quite new in the data protection space. For us, the sandbox is a space where we're able to have a dialogue with specific companies and where we can dig into real use cases, where we, together with our external partners, try to find responsible AI solutions. And then you ask what the motivation is for doing this. The British data protection authority was actually the first authority to start a sandbox; it's not an AI sandbox, it can handle different issues. So we got inspiration there, and then we started the dialogue with our Norwegian ministry about establishing a sandbox. And when the government launched the national AI strategy in 2020, the sandbox was part of this strategy. So it's quite nice that it's not only our own project, but actually connected to the government's direction in the AI area.

 

Silvija: I just want to paint a picture from what I heard you say. Basically, we can think of a sandbox that children play in. The idea is that it has a limit; it's a very defined area. And you define these limitations in terms of who gets to test a new idea, for how long, and in what geographies. And then you learn as much as you can from this process. And you have had a competition, and many companies have wanted to join this sandbox project in order to learn. Right?

 

Kari: That's true. And an important part of the frame is also that we're working within the GDPR. We don't have the mandate to give exceptions from the rules because you're part of the sandbox. So the whole idea of the sandbox is to find a good solution, dig for a good solution, within this frame. And this is guidance; we call it dialogue-based guidance. We don't make decisions, we don't do pre-approval, we don't give a stamp of approval, we give guidance, and then it's up to the company what to do with that guidance. And we can come and do an inspection the day after if we want. So it's not a free pass, it's more like in-depth guidance. Another very important part of the sandbox is to be open. In the selection process we try to select companies that have questions that are relevant for many people or many companies. Then we publish exit reports, and we also try to share the experiences with the different sectors and other types of actors that could be interested in the same type of questions that we dealt with in that particular sandbox project.

 

Samson: I think that's quite interesting. I see the benefit for companies of being part of this sandbox, because you have many uncertainties when you're trying to develop a new product or service, and of course getting the view of the most relevant expert in the area would be very useful. But do you think it is also useful for the Data Protection Authority to see what kind of challenges exist, so it's actually beneficial for you as well?

 

Kari: We are evaluating these projects, and you can ask the participants whether it was useful for them, and I think it has been. We've been running it for one and a half years now. But I really see the effects of the sandbox internally: the types of discussions that we are having internally in the authority today are ones we could only dream about having a year ago. So I really see that this is increasing our knowledge about the technology, but also about how to apply the law, and about the business context, what it is like to be on the other side trying to find solutions. We're also learning about the context. So this is super useful for us, and I think it's really important, because this is happening now and it will be happening in the future. We see all the possibilities, but we also see that it might go wrong, and we need to find good answers to how we make sure the algorithm isn't biased, how we communicate, how we can be transparent in a good way. So we're learning a lot from this as well.

 

Samson: Before we go to the main substantive issues, the data protection problems you're dealing with, perhaps you could say a few words about what you see as the challenges in doing this. You are a watchdog, a supervisory authority, so your main role is to see whether people actually comply with the law, and then, if they are not complying, to take measures. But this is a different approach from what is normally your role as a data protection authority. So do you see any risk? Do you see that perhaps those who gain access to your sandbox might have an advantage over others who do not get the chance? If you can just say something about those.

 

Kari: Yes, maybe they can get a little bit of an advantage. But we try to mitigate that by having an open application process, by being open about the criteria for applying and so on, and also by being transparent about the process and the outcome, so that we share the experience and say: okay, these were the questions we discussed, these are the assessments, and this is what we concluded. By doing that, we hope there is a more level playing field. And of course they get special advice, but they also need to do a lot of the work themselves; we're not a consultancy firm doing it for them. They normally provide assessments, and then we discuss them together and give feedback. And it's not entirely new compared to what we already do: we do give guidance, but the sandbox gives us the opportunity to go more in-depth and have more dialogue with the partners in the sandbox projects.

 

Samson: Okay, great. Perhaps then we can dive a bit into the substantive issues. You decided to have a sandbox specifically for artificial intelligence. An alternative could have been to have just a general sandbox where anyone can come with ideas. So my question would be: why focus on artificial intelligence? Do you see that this technology actually raises unique challenges to data protection, and if so, what would those unique challenges be?

 

Kari: Yes, why an AI sandbox? We issued a report in 2018 about AI and privacy where we identified some unique challenges for AI. When we wrote this report, we talked to Norwegian businesses, and I was a bit underwhelmed by how far AI implementation had actually come in Norway; it was mostly some chatbots, and it wasn't really what I expected given all the hype I had heard. But now we really see applications from all sectors: health, education, gaming, retail. Everyone seems to be starting to use AI, maybe not super advanced, but this is really happening. So I think it's really important for us, as an authority that has responsibility for all sectors and all data use, to be up to date on AI. For us, it's important to upskill and also to help companies do this right. It can have a huge impact on society for good, but possibly also for bad, so we want to help get this right. But I also think our sandbox could benefit other topics as well. We see other technologies, privacy-enhancing technologies, blockchain, that it would be super interesting to explore, but also areas like health or children's privacy and so on that could be interesting topics for the sandbox. So let's see, maybe in the future it's not only an AI sandbox, but a broader sandbox, I think.

 

Samson: Okay, great. So if Silvija doesn't have any other questions, I will move on and ask if you can give us some examples of the issues you have dealt with in the sandbox. I would be very happy if you could, for example, give us one specific issue involving a public agency, some of the unique data protection challenges that you've looked at in the sandbox there, and then one from the private sector, where another piece of software or technology has been part of it.

 

Silvija: Yeah. I mean, more important challenges than garden gnomes.

 

Samson: Yes, yes, definitely. Yeah.

 

Kari: We see some topics recurring in the applications that people want help with: data minimization, transparency, fairness and legal basis are some topics that we see, and also risk assessments that people want help with.

 

Silvija: Can you just say two words about each of these?

 

Kari: Yes. So data minimization is a requirement: you shouldn't use more data than strictly necessary to achieve your purpose. In AI this is a challenge, because sometimes you want to use as much data as possible to see whether you can discover some new connections, or because your results might be better with more data. So there might be a tension there to explore. Then there are requirements about being transparent: how are you transparent, is this a black box, do you need to open the box, how detailed do you need to be to give an individual explanation? A lot of questions around that. Legal basis, especially in the public sector, I think was quite interesting, because you of course need to handle people's personal data in order to give them good services, but a quite novel question is: can you handle everyone's data? One of the projects, for instance, was predicting the length of sick leave. Can you use data about people who were on sick leave three or five years ago? Can you feed that data into an algorithm, train it, and use it on individuals who come into the office tomorrow? So that was another question about legal basis: when you use data in new ways, it also raises new questions. Then there are the risk assessments and DPIAs: since AI is quite new, how do you do these risk assessments? And fairness is a quite general requirement; it's one of the fundamental principles in the GDPR that processing should be accurate and fair. But what is fair?

 

Silvija: Don't judge people who might look Arabic as a bigger security risk, or women as a risk; those kinds of biases, etc.

 

Kari: Yes, but what does it mean in practice? What is a fair algorithm? How do you build a fair algorithm? How do you audit it? How do you discover if it's not fair? Do we have any AI-relevant cases? Very few, same as in Europe. So I think that as a method the sandbox is quite interesting, because you get to anticipate a little bit the questions that are coming and dig into them, and although we don't make decisions, we get to explore them and start digging into issues where there are often not easy or clear answers yet, because we have little experience with them.
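To make the audit question Kari raises a bit more concrete: one simple, purely illustrative way to look for unfairness is to compare an algorithm's selection rates across groups, a demographic-parity style check. The sketch below uses made-up predictions, group labels and numbers; it is not something the sandbox or the GDPR prescribes, only an example of the kind of signal such an audit might surface.

import numpy as np

# Hypothetical model output: 1 = flagged for extra follow-up, 0 = not flagged
predictions = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
# Hypothetical group labels (say, two occupation categories), aligned with predictions
groups = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

def selection_rates(predictions, groups):
    """Share of positive (flagged) decisions per group."""
    return {g: float(predictions[groups == g].mean()) for g in np.unique(groups)}

rates = selection_rates(predictions, groups)
gap = max(rates.values()) - min(rates.values())
print(rates)                                  # {'A': 0.6, 'B': 0.4}
print(f"demographic parity gap: {gap:.2f}")   # 0.20

# A large gap does not by itself prove the model is unfair (base rates may differ),
# but it is the kind of signal a fairness audit would flag for closer inspection.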

 

Silvija: So just say two words about the difference between the public and private sector. And then Samson, we have two more minutes. So then I suggest after that you conclude with what you would really like your students to remember.

 

Kari: We have had exciting big public actors in the sandbox, and also small startups. NAV, for example, worked on transparency, and in this case not transparency towards the end user, but towards the case handler: what do the case handlers need to know in order to explain to the user why an algorithm said this person needs extra follow-up, because it predicted that their sick leave will last a long time? I think that was quite interesting, because this is a decision support tool, and if the algorithm just says this person doesn't need follow-up, the case handler is left with: okay, but why? It's difficult to make a good decision based on decision support if you don't know the reason. So now NAV has a plan to say: okay, we recommend no follow-up because of, for example, type of illness, gender and occupation. Maybe in that way you give the case handler some context so that they can make a good decision. So lots of interesting questions there. Then among the small startups you have Secure Practice, which wants to profile employees in order to give them better information security training. There we did focus groups to understand data minimization: what type of data are employees comfortable with being used to profile them in order to give them tailored training? And the feedback we got from the focus groups with employees and also with labour unions was quite interesting. If the employee trusts the company, they're willing to give a lot of data. We talked to employees in a big Norwegian company, and they were willing to share everything, because: I trust that my employer will not use this data for other purposes, it will not harm me, I can answer truthfully. But then the labour unions said no, we've seen a lot of examples of this not working in practice. If it's not clear to the employee how this data will be used, they will not answer truthfully. So for example, if you answer a question about whether you have opened any suspicious emails last week, you might not answer truthfully if you don't know whether this will affect your performance review next month and your pay rise. If you think this will affect you, you might say: no, I didn't do that. And the consequence of that is inaccurate information and an inaccurate prediction or recommendation from the algorithm. So this is not only a legal question, it's also about the functionality of the algorithm itself, in terms of how you structure the data flow, but also how you communicate with and inform the employees. So lots of interesting topics in the sandbox.
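As a purely hypothetical sketch of the kind of case-handler transparency Kari describes (this is not NAV's actual system; the field names, threshold and rule are invented for illustration), a decision-support output could bundle the recommendation with the factors behind it, so the case handler is never left with a bare yes/no.

from dataclasses import dataclass

@dataclass
class Recommendation:
    """Decision-support output shown to a case handler (hypothetical structure)."""
    needs_follow_up: bool
    reasons: list[str]  # top contributing factors, so the case handler can judge the advice

def recommend(predicted_sick_leave_days: int, factors: list[str]) -> Recommendation:
    # Hypothetical rule: a long predicted sick leave triggers a follow-up recommendation
    return Recommendation(needs_follow_up=predicted_sick_leave_days > 90, reasons=factors)

rec = recommend(120, ["type of illness", "gender", "occupation"])
print(rec)
# Recommendation(needs_follow_up=True, reasons=['type of illness', 'gender', 'occupation'])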

 

Samson: Yes, I think this was very insightful, very interesting. I do actually believe that data protection authorities should be very proactive, as you are doing now with the sandbox. My feeling is that we missed that opportunity in relation to some earlier business models, for example those built on big data, where the business practice was largely shaped before data protection authorities were able to contribute actively. Those business models developed even though we already had data protection rules, and we see that a lot of that business practice is really shady and does not really respect existing rules. So taking this proactive approach gives you, I think, the opportunity to really shape the technology and the business practice going forward, and hopefully that will also show in practice, in businesses that are well aligned with ethical and legal rules. I also think this is useful from a data protection authority's point of view, because you get insight into the concrete problems and uncertainties that people face in dealing with data protection issues, and then you can provide better guidance to everyone else who is developing or using this technology. And I also believe that the fact that many people in Norway have really strong trust in governments and regulators makes it a good opportunity to work on sandboxes like this here, because sandboxes require trust: all participants need to be willing to be really open about the uncertainties and challenges they are facing. So I think that is something interesting to look forward to.

 

Silvija: Final words then from me. I want to underline basically the three things Samson just said. Being proactive and experimental is necessary when the world is developing as fast as it is, and having public authorities working the way you do helps. Our trust in the government and in each other, and therefore our ability to share data, is a strength. And the bottom line is not that more data is always good, but that it's about the right data and the right use of that data. And then my final comment to the students is that you can see very good examples here of how important it is to be cross-functional and cross-professional. In a way, we have a social scientist, a lawyer and a technologist here, thinking together about these problems of the future. Some of the most interesting jobs in the future, I think, are going to be found exactly in this intersection, and probably very much related to the problems we were talking about just now. So, both of you, thank you so much for helping us understand more about the sandbox and how data protection works in Norway.

 

Kari: Thank you.

 

You have now listened to a podcast from Lørn.Tech – a collaborative learning effort about technology and society. You can now also get a learning certificate for having listened to this podcast at our online university, lorn.university.

 
