A LØRN CASE is a short and practical, light and fun innovation story. It is told in 30 minutes, is conversation-based, and works equally well as a podcast, video, or text. Listen and learn wherever suits you best! We cover 15 thematic areas in technology, innovation, and leadership, and 10 perspectives such as entrepreneur, researcher, etc. On this page you can listen, watch, or read for free, but we recommend that you register, so that we can create personalized learning paths just for you.
We would like to help you get started and keep up lifelong learning.
Who legislates AI?
What are the challenges of legislation and innovation?
How do we move forward?
Orly Lobel: https://www.sandiego.edu/law/faculty/biography.php?profile_id=2844
Susana Borras: https://www.cbs.dk/en/research/departments-and-centres/department-of-organization/staff/sbioa
Mariana Mazzucato: https://marianamazzucato.com/
Welcome to Lørn.Tech – a learning initiative about technology and society. With Silvija Seres and friends.
Silvija Seres: Hello and welcome to this case with LØRN and BI Norwegian Business School. This is part of a series we are creating together to explore the main challenges and opportunities in creating good AI, as part of the BI course Responsible AI Leadership. Our guests today are Christian Fieseler, a professor of communication management, and Sofia Ranchordas, a professor of public law and innovation from the University of Groningen. We are going to talk about empathy and automation, or the necessity of thinking about social driving forces when automating and digitizing societies, and how law and those kinds of skills can also help us be better technocrats. Welcome, Sofia, and welcome, Christian.
Sofia Ranchordas: Thank you.
Christian Fieseler: Thanks for having us, Silvija.
Silvija: I'm also going to introduce Christian as my co-host in this series of conversations. So basically, the game plan now is that Christian is our subject matter expert host, Sofia is his guest, and I am the podcast director. So I'm pushing us into being as, what shall I say, easily understandable as possible, given that you academics can get very enthusiastic about the deep conversations. We're going to start the conversation, as always, with an introduction to our guests. And I would really like our listeners to get to know you a little bit as people, not just as professors. So, Christian, I'm afraid you're going to have to repeat your story a few times, but we're going to start with you, and then we're going to ask Sofia: who are you, and what are your main driving forces related to this topic?
Christian: Let me begin. So I guess the average Scandinavian listener can probably place the accent. I'm a German national who is now with BI in Norway, and my work is mostly on ideas of designing technology well and creating management systems where we are using and employing digital technology as well. And I think the reason why we are having these conversations, and why you should listen to this podcast, is essentially this question: when we look at technologies, they arguably have a lot of potential. At the same time, we all feel a little bit of unease about what automation means for us. What happens when artificial intelligence becomes ever better? Will I still have a job? Will I still be a meaningful part of our democracies? And I think what really interests me in these conversations is exploring with our guests the different ideas and different fields where people have really excellent ideas about how to do digital transformation and artificial intelligence well. Because, of course, there is programming, data science, machine learning and all of these engineering factors, which we are also talking about in this podcast, but there are also our societal systems like the law, the way that we manage, the way that we do advocacy and give representation to people.
Silvija: Very cool. So not just basically how to make money on new technologies, which is what drives us very much from Silicon Valley and not just how to control societies, which is what drives us from China, but how to create technology that makes better societies. And how can different disciplines participate in that? And Sofia, where are you from?
Sofia: So my name is very tricky because when you see my name, I think it's kind of weird. So I was born in Portugal, Portuguese passport, but I have an Indian surname, so I'm half Indian, half Portuguese. But I have been for many years in the Netherlands and now I divide my time between the Netherlands and Italy. So I'm quite complicated.
Silvija: I don't dare to ask you how complicated you are, and I don't dare to ask you how many languages you speak. But I assume it's a few.
Sofia: Yeah, but not Norwegian yet. Maybe one day. I watch a lot of Scandinavian series, so I definitely think that helps. So, like Christian, I work from an interdisciplinary perspective. And what really drives me is that I find that we now have so much innovation, but the innovation we have is not shared by everyone equally. So basically it's developed in Silicon Valley, as you said, and all these other innovation hubs around the world. But the average citizen may have an iPhone, yet they are still not empowered to the same degree as people who are more tech savvy. And what really drives me is to make sure that citizens can really be more powerful, that we don't have things like poverty in terms of access to services. That the digital divide is really bridged, and while doing so, that we actually can help them exercise their rights as citizens. Because after all, I'm a lawyer, so I'm interested in more inclusive access to technology and use of technology. And I would say that's what drives me and what drives my study of automation and these rights.
Silvija: Very cool. Before we go on, I usually ask people if they have some sort of an eccentric hobby. And Christian, sooner or later you'll have to share yours in these many conversations we have. So, if we were at a party, what would be the most fun thing to talk with you about?
Christian: That's a very spontaneous question. I'm not a spontaneous person. I guess being German.
Silvija: What about you, Sofia? Do you have a hobby?
Sofia: I'm not sure it's eccentric, but I do have a hobby that takes up quite a lot of time. I am a wannabe novelist. I write fiction. I'm currently writing a novel. So I guess that's something I love to talk about, and I hope one day I'll get published. And I know that in Scandinavia people love books and stories. So yeah, that would be my eccentric hobby, I would say.
Silvija: Very, very cool. I look forward to reading your book, Sofia, and I look forward to talking to you about it at the party. Back to our topic about the interconnections between law, innovation, and, as part of that, innovation in an AI-driven society, because the course, after all, is about responsible AI. So, Christian, you threw in what we in Norwegian would call a brannfakkel, basically a firebrand of a statement, when you asked: how can I stay a responsible and active democratic member of our society? You said something like: this technology, if not introduced in the right way, might challenge some very, very basic rights and functions we have as members of society. Can I ask you, what did you mean by that? Why do you worry about that?
Christian: I partly worry about that, but I'm also a management professor first and foremost. Right. And there is a saying, a little bit horribly translated from German: you are committing suicide because you're afraid of death. And what I find interesting, or concerning, however we want to put it, when it comes to new technologies, is that on the one hand, we have so much potential to do good, right? I think we shouldn't lose sight of that. Technology normally is something that can make everyone better off, can help with illness, can make us richer and all that. But I think everyone is feeling uneasy, and the unease often comes from this feeling of: I have no real say in this, nobody is really helping me, and should something happen which is to my disadvantage, where can I get help? I think it's a bit of a feeling of alienation when it comes to digital transformation. I find it very interesting to strike that balance, because we don't want to throw out the baby with the bathwater. We still want to make technological progress. But how do we make it so that people feel that this is for everyone, right? And that there is someone, somewhere, who cares for me as an individual, that I'm not essentially just the object of anonymous market forces, or bureaucratic forces for that matter.
Silvija: So I heard you say several things here. I'm just going to simplify your very intellectual statements into something very colorful and Lego-like. What I heard you say is that you are concerned that we, out of fear of damaging things, might hold back on producing new systemic solutions in technology. And that's on two levels. One is basically making sure that technologies don't do too much bad, so we understand the negative consequences, the negative effects, that they might have. But the other thing, which Sofia was also very concerned about when we started the intro to this conversation, is making sure that it's accessible equally to everybody, right, so that it's a useful, positive force for good in our society. Did I understand you correctly?
Christian: Yeah. Wonderful. Better than I ever could put it.
Silvija: So you're talking about it at two levels. One is that we make sure that these technologies don't do harm. And then, Sofia, you are also concerned that we make sure that they are actually equally accessible and that they don't polarize the society. Right.
Sofia: Yes. Because what we see comes back to the point of democratic participation, which is a big word. But basically it means, as you already explained, that people have a say. And right now, what we see is that many people in our society feel that they don't matter, that only elites have a say. This has actually polarized society very much in recent years. And what we see is that digital innovation started out by claiming that it would help democratize participation. Everyone would have a say on social media. So you can just have a Twitter account and have a say, or use one of these applications and have a say in your local elections or whatever. But the truth is that not all voices are heard equally, or equally loudly. So we know, for example, that when women try to speak on social media or use one of these applications, women usually are not heard because they just speak differently. We as a gender use different words. Women are also trolled more on social media, which means that they are afraid of continuing the conversation, because they are trolled just for being women; the same goes for people of color. And in general, older white men actually feel that they can participate more, they often have more time, or they just have more self-confidence. Which brings us again back to the idea that only the same kind of people participate, people who already have a lot of participation and voice. So what I find interesting is that when developing all these AI systems for more participation, all these algorithms that can help us navigate information, it is important that we take into account the diversity of people out there, because otherwise we're just giving a louder voice, again, to the ones who already speak. What we have right now is a lot of digital innovations, all these applications that you can use to inform your representatives and your municipalities; you have hackathons, you have fab labs.
But let's be honest: the people who participate in the hackathons and the fab labs are basically the usual suspects, the male, tech-savvy, very young people, and not just everyone. So that's what interests me, I would say.
Silvija: So you're talking about democratizing access, or as you actually say, democratic participation, and you're worried about a society where we might move from haves and have-nots to cans and cannots. And that's something quite urgent. So Christian, does your weight in this conversation have to do with law as well? Where does law come into the picture?
Christian: So I'm not a lawyer, but I can say something about law, or what I find fascinating about law. I was a very mediocre law student. I had law as a secondary degree, and I never really understood law until I met law researchers, or legal scholars, because I come from management. And when it comes to technology matters, network neutrality, platform regulation and all of that, I find that for whatever reason, unbeknownst to me, legal scholars are five years ahead of the management field. You have all these wonderful discussions, from which we essentially learn only five years later that it matters how free our systems are or how neutral our infrastructure is. And I learned that the law is more than, and I'm using a little bit of inflammatory language now, just a dry legal code. It's essentially also a very interesting, very lively, discursive space which works with more than just the courthouse. It's a very inspiring and very intellectual space, too, in which to essentially make all these tradeoffs. I think that's a good way of putting it, because what Sofia just said is that it's wonderful to have representation, but in the implementation we have all of these caveats: we have representative spaces, but then they are used by people who are already overrepresented. Or maybe we have workarounds which can be somewhat exploited. Planning for eventualities, I think that's something which has always fascinated me about the law, as an outsider or more an interested onlooker.
Silvija: Sofia, I want to play that ball over to you. And basically, my question is, and Christian, by the way, "dry legal code" is not inflammatory language, it's colorful language, very well chosen.
Christian: It's a bit of an argument from ignorance, however. Because what I don't understand can be very interesting to others.
Silvija: Yeah. I had a similar experience actually, where a couple of really excellent legal scholars showed me that it's almost like a game you're playing. You have to understand the basic elements, which is the legal code, and then it's how you put it together, which argument you want to pull, where you want to go, and what you really care about that drives it. And that's when I had a similar epiphany to yours, that it's a super exciting field. And so, Sofia, I am worried. I'm more worried than Christian, and I'm worried especially after reading a book like Surveillance Capitalism by Shoshana Zuboff, which argues that our futures are in many ways owned by the people who own the data and the algorithms. And there seem to be very different cultural and regulatory systems: the US lets companies get on with their market-driven innovation competition, and in China it's very much state-driven and shaped by their social needs. But the European Union seems to be one of the very few places in the world where we are actively using law to drive the sort of forces that you talk about. Is that your impression, and why is it so? Or, you know, what good are we doing at the moment?
Sofia: Well, first of all, I also agree that law is definitely not just going to the courthouse. Please. If you ever watch Modern Family, there's a very funny episode where Mitchell actually goes to court, and he has never been to court, so he just plays the role of an actor in court, shouting "Shame" at the court. And this is the expectation of the average law student: when they come into law school, they think that they will only go to court. Actually, most of them never go to court when they graduate. So it's funny, because our role in law is exactly to draft frameworks, like puzzles, and also policies that we can stretch. And what happened was that for many years the European Commission had this framework for digital innovation that was being stretched to the maximum. It worked in the beginning, but then it got really outdated, and the European Commission realized that it was getting outdated and was being used to violate citizens' data and the right to privacy. Because of that, there was a growing concern about the need to redefine this puzzle-like framework, because you can't just go to court and shout "Shame" at Google. That doesn't work. You won't get anything out of it. To be honest, you might get some money, but they have plenty of money; it's not a way of shaming them, I'd say. So the European Commission is definitely the regulator here, the key regulator in Europe. They are definitely a trendsetter for the rest of the world. There's something called the Brussels effect, a term that Professor Bradford coined, and that is the idea that in Europe we are concerned about the protection of European values like privacy much more than the Americans. But I would say that right now we are not where we were some years ago, when we were very focused on protecting our values and our rights and being precautionary. I think now we see that Europe also cares about innovation.
So you see that there is a growing need to balance this approach to the big five, but also in general to innovation, and also fundamental rights. So I would say that gradually the US and Europe will probably be brought together when it comes to regulation. But right now you're right that Europe is really the trendsetter, trying to protect consumers, our values, our rights, our privacy much more than the Americans. And the Americans are slowly getting there. There are some signs; I think having Lina Khan at the Federal Trade Commission now will help, because she has been someone who has really pushed against big tech. So I think we're moving in the direction of convergence. But of course, it's very different from China, because China has a system where everything is basically conceived by the state. And as I said, the needs of the state and of the citizens are always merged together. But at least the difference is that in China you know what you have, in the sense that everything and everyone is being controlled, and we know that. In Europe we are also being controlled, and in the US as well; it's just more opaque, because the ones who profit from all the control and data are mostly private parties, and they can use it in ways that we don't really see and understand. This actually goes back to the book The Black Box Society by Frank Pasquale. So in a way, I think it's important to have more regulation on this, but it's also important to have regulation that, again, balances the development of digital innovation with the protection of European values. In that sense, I think the European Commission is getting there, but the US is also trying to balance its approach.
Silvija: I think I'm going to try to make Lego of your story now. And what I'm thinking is that you're actually talking about what Christian also introduced as his main concern: that innovation and good management of values can both be done by law, and they are not always in contradiction. I think back ten years, when I remember hearing people from Google talk about the fight between privacy and personalization, because basically all of us are looking for personalized services in these automated AI times. And they said that in that fight you can't have both. Right? They balance against each other, and people often give up privacy in order to get personalization. And this is where Europe is trying to help us not give up too much, basically. And it's really, really interesting to see how you are now building this into a tool for growth rather than a tool that stops innovation.
Sofia: Yeah, exactly. Also, because what we see is that people trade their privacy for convenience, because we don't see the price of privacy, and so we don't see the price of personalization. Personalization is efficient. I mean, there is economic evidence of this. It's very efficient to have personalization of prices, and even advertisements are incredibly efficient. But the thing is, we as consumers have become so blinded by convenience and by the value of having something for free that we are basically willing to sell our soul to the devil, because we just think, okay, we consented, so it's okay, they don't know anything about us. They know everything about us, but we just don't care because we don't see it. We don't see the long-term impacts of it, and there is just not enough information for citizens. Our algorithmic literacy and data literacy are just very limited. So we remain ignorant. And ignorance is bliss.
Silvija: First of all, Christian, I saw you smile for a split second, and then you put your German professor face back on. Second of all, I would love you to ask Sofia, as your guest: really, what is the main idea you would like to extract from her head in this conversation?
Christian: Extract from her head, that sounds very exploitative, like what we normally accuse some players in the industry of. I think what I find interesting in your work, Sofia, is this idea that, especially as a lawmaker, somebody who needs to be, or is supposed to be, objective and benevolent, we are arguably talking about a moving target, right? We're talking about technology which changes ever so quickly, where maybe even the people who design the technology don't understand all its implications. What do we do as lawmakers, or how do we make these systems work, so that we somehow safeguard the goals that we have? We want to protect the people who might not have all the literacy; people should feel included. Is there anything we can do as legal scholars or legal practitioners to deal with a moving-target technology, where not everyone understands, or can understand, where this is going or what implications it might have down the road?
Sofia: Yeah, I think we can do a few things. As I said, one of the features of law is that it can be elastic. It's not always elastic. In law we basically regulate people either by setting principles, very open norms, like the principle of equality, for example, that everyone should be treated equally. That is a principle. It's very big, it can be very elastic, and it is more future-proof. But we also regulate by offering rules, which can be very specific and become outdated very easily. For example, rules on all the little features an iPhone should have can become obsolete when a new model comes out. So I think the first idea is that when designing rules for digital innovation, we need to make sure that we place our bets more on principles, because they are more elastic and we can stretch them more. Also because the key idea of regulation right now is mainly to manage risks: how risky is an activity for humans? I mean, we cannot prohibit AI from being efficient. It will not go away. It's good for our society, but we can manage its risks. So law is all about managing risks these days. The idea is basically that we develop systems and mechanisms that are elastic enough to manage these risks, so that we don't end up in the hands of Google or whoever, in a way that can be extremely extractive. But at the same time, these rules have to be good enough and specific enough for people to enforce and implement them. So I think it's about having this balance, and making sure that regulation is about managing risks. I would say that's the first point. And the second one is that, in the same way that AI is personalizing advertisements, services, prices, and everything else, my question is: why don't we also use that idea for law, in the sense that we should also take into account the diversity of the population? Not everyone is equally tech savvy.
Not everyone can read and write the same way. Not everyone has the same digital skills. So maybe we should take that into account when implementing the law. This is a controversial idea that I set out in one of my articles on empathy, because empathy, the idea of having compassion for people, of placing yourself in the shoes of others, is very controversial in law, since law should be equal for everyone. But when we actually study law, we see that equality means not that the law is applied identically to everyone, but that many inequalities have to be taken into account. So basically the idea of introducing empathy into the regulation of digital innovation is that law should be equal for everyone, but when you have some extra trouble gaining access to digital innovation or using digital innovation, that extra trouble, which could be your financial situation, a disability, or anything else that is relevant, will be taken into account to make sure that your specific situation is considered. So if, for example, you are a mother of three children on social welfare, you might not be able to engage with digital technology in the same way, because you are overwhelmed with work and struggling with financial needs. So perhaps it would be good to have some extra assistance there. And my idea is basically that we design regulation for digital innovation that balances principles, that balances different needs, but also takes into account the situation of citizens. That is what I basically call empathy in our digital state.
Silvija: We have 2 minutes left, and I'm going to spend ten of them summarizing what I just heard you say, Sofia. And then I would like our host to tell us what's the most important idea he's now stuck with after this conversation. So I heard you talk about elasticity in law, and that's a new idea for me. I thought law is the law and it's based on history, and we just develop it going forward based on things we've seen. What I heard you say is: think about law based on principles, not very specific rules. It gives you flexibility; it gives you maybe more ability to interpret things in new contexts as well. Second is to think about unknown risks and manage them as much as the known risks and things that are well established. And then you're talking about some sort of relative law that really introduces empathy into innovation and digital access for everybody, in order to create a fair future. That's my short summary. What's your nugget of gold from this conversation, Christian?
Christian: My nugget of gold, I hope it's at least somewhat of a nugget. So I reread a lot of Sofia's writings in the last few days. And I found, and it's maybe a very, very whimsical thing, this idea that even in law you need to be somewhat of a good "designer". I find that very interesting because in design science, or user interface design, we have this saying that the inmates are running the asylum. So when systems are designed by people who never face the hardships of actually using them, in our case, or in Sofia's case, legal practitioners who have never felt discrimination or alienation, then this idea of empathy, which Sofia proposes, I think is a very interesting and very needed point. It essentially makes sure that when we now have systems which increasingly remove the human from the equation, or at least from the immediate act of being governed, right, like having a decision made on you by a system, there is not just a human in the loop but this institutional idea of empathy, to essentially stay with the people.
Silvija: Excellent. Thank you both for a very inspiring and educational chatty conversation. Thank you.
Christian: Thanks so much.
Sofia: Thank you.
You have now listened to a podcast from Lørn.Tech – a learning initiative about technology and society. Now you can also get a learning certificate for having listened to this podcast at our online university, lorn.university.