LØRN Case #C1183
Security and privacy as business enablers
How can focusing on security and privacy not only help you become a trusted partner, but also improve operational efficiency, gain competitive advantage, access new markets and foster innovation? We have invited Erlend Andreas Gjære, CEO of Secure Practice, and our co-host, Samson Esaias, Associate Professor in the Department of Law and Governance at BI Norwegian Business School, to explore this. This is part of a series we are creating together to explore the main challenges and opportunities in creating good AI, as part of the BI course Responsible AI Leadership.

Erlend Andreas Gjære

Co-founder

Secure Practice

Samson Yoseph Esaias

Associate Professor, Department of Law and Governance

BI

"We find people are great resources. They need to believe that they're able to deal with digital security"

Duration: 40 min


Topic: Digital ethics and politics
Organization: Secure Practice
Perspective: Large enterprise
Date: 220617
Location: OSLO
Host: Silvija Seres

This is what you will learn:


How to enhance customer trust and loyalty

How to improve operational efficiency

What competitive advantages you get by focusing on security and privacy

How focusing on security and privacy helps you access new markets

How you can create an environment for innovation without fear of data breaches or privacy violations

 


This is LØRN Cases

A LØRN CASE is a short and practical, light and fun innovation story. It is told in 30 minutes, is conversation-based, and works equally well as a podcast, video or text. Listen and learn wherever it suits you best! We cover 15 thematic areas within technology, innovation and leadership, and 10 perspectives such as founder, researcher, etc. On this page you can listen, watch or read for free, but we recommend that you register, so that we can create personalized learning paths just for you.

We would like to help you get started and keep up your lifelong learning.



Transcript of the conversation: Security and privacy as business enablers

Welcome to Lørn.Tech, a collective learning effort about technology and society, with Silvija Seres and friends.

 

Silvija Seres: Hello and welcome to a case with LØRN and BI Norwegian Business School. This is a part of a series we're creating together to explore the main challenges and opportunities in creating good A.I. as part of a course on responsible A.I. leadership. My guest today is the CEO of Secure Practice, Erlend Andreas Gjære and my co-host is Samson Esaias, an associate professor of law at the Department of Law and Governance at BI. Welcome to both of you as well.

 

Samson Esaias: Thank you.

 

Silvija: So I'm going to ask you very briefly to introduce yourselves, because we really learn better from people that we feel we know at least a little bit. Then we're going to talk about security and privacy as business enablers with an entrepreneurial twist. I'm really curious. We'll start with Samson and then go to Erlend. Who are you and why do you care about this? 

 

Samson: I'm Samson Esaias. I'm an associate professor here at BI Norwegian Business School. Together with my colleague Kristin Fisher, I'm responsible for the executive master's course in Responsible AI Leadership that we are running together with LØRN. I have a law background and I have been working for several years in the intersection between law and technology. I have been looking into data protection concerns, fairness concerns, and several of the new legislative instruments in the EU, especially now with the Data Governance Act. Basically anything to do with the interplay between law and technology is of interest to me.

 

Silvija: I have a question for you, Samson, because you were relatively early in this field, this intersection of technology and law, but it seems like your time has come just now. This seems to be a super hot topic, right? Do you experience the same?

 

Samson: Well, I don't feel like I'm early because I have been part of the Norwegian Research Center for Computers and Law, which was actually established in the 1970s. 

 

Silvija: So it's becoming mainstream?

 

Samson: Definitely. We used to think of certain types of laws, like privacy and data protection, as the digital ones, in the sense that they mainly focused on digital markets. But now every law, including financial and banking regulation, becomes digital. There is quite a lot of interest in that sense; nearly everything is becoming law and technology now.

 

Silvija: Thank you. Erlend, who are you?

 

Erlend Gjære: I am a co-founder at Secure Practice, a company started in 2017, and we have this focus on security and people, which is a bit unique in our space, because security has mostly been about technology: information security, cybersecurity. There's so much security technology. When I entered the space as a researcher, ten or twelve years ago, if you had a firewall, antivirus and a spam filter, that was the definition of security for many companies, if they had that at all. Then I moved into the practitioner part of applied research and tried to train my fellow researchers at the company I was in about secure behaviour and secure practices at work, and in their private lives as well. We're all digital citizens now. We're all living in this space where we cannot ignore the fact that we live in a digital world, and it affects all of us. As a consequence, everyone needs to know something about information security and staying safe online. It has been a passion for me to really help people with this. We started Secure Practice to do this, to help people in order to help the companies, because companies also depend on the human factor. We cannot ignore the fact that people are people, both at work and otherwise. There is no security system that will work without the people part being in good health.

 

Samson: Just one question. My sense is that many of the security breaches actually happen because of human error. Is that right?

 

Erlend: The Verizon Data Breach Investigations Report says that 85 or 87 or 90% of all incidents are somehow related to human error. You could say 100%, because there's a human somewhere.

 

Samson: Yeah.

 

Erlend: Very often it's about a human clicking a link or doing something bad with an email or on a website. But you have a lot of other cases as well, and we're used to calling it bad luck, or saying it could happen to anyone, so what can you do about it? We really wanted to solve this problem and go beyond the de facto standard, which is doing annual security awareness training and e-learning and hoping that people get the best out of it and stay safe otherwise. I really believe we are able to do more. This is why we also entered the AI space with our company Secure Practice. We try to use AI to assist our human intelligence, but also the other way around, because we don't see people as weaknesses. It's not the case that if you just cut people out of your systems then everything would be great: just cut out your employees and your customers and you have no more problems. You can't really do that. You have to tackle it in another way. We find people are great resources. They need to believe that they're able to deal with digital security. Most people back off and think this is not really my concern, or they're a little bit scared when faced with Russian or Chinese hackers trying to attack their company. How do we stay safe from that?

 

Silvija: You have to explain to us your service. But I just want to give you another example to give some more color to this. I was speaking with the management of a company yesterday that had this security awareness program, and they were going to teach people not to respond to phishing attempts. So don't click on a link if the email has an extra digit or look for where the domain names are and look for misspellings. When they did an internal test more than 20%, I think, clicked. Quite a big percentage proceeded, after clicking, to provide the password or some such. In some cases they didn’t inform their own internal IT services. Some of these people actually did call their internal IT to ask should I click on this link? They were told to go ahead. I think that learning or education about cybersecurity still feels like one of those things we just want to get out of the way. We haven't really understood the incredible consequences it might have for our business. I learned a couple of weeks ago of a human error with one of the suppliers down the value chain to the National Mapping Institution. It shouldn't happen in the public sector. Human error.

 

Erlend: People are called the weakest link in the chain because we're not predictable as humans compared to computers. We tell computers what to do, but you cannot just tell humans what to do. Right? So we can all fail. And it's really human to fail. And you know what, Silvija? This week I actually clicked on a phishing simulation myself.

 

Silvija: What should you have noticed or why was it just extremely well done?

 

Erlend: We hired two new employees, two students from NTNU. On their first day after 2 hours, I went out for a seminar and on my way back, I got this email on my phone saying, “Hey, you didn't show up? So here's the fee you have to pay for not showing up.” And I thought, but I was there. And these two students, they actually tricked me on their first day.

 

Samson: Wow.

 

Erlend: This after so many years of telling people not to click. This time they got to me. Everyone can be tricked. 

 

Samson: How are you trying to address the problem? Erlend, can you tell us what Secure Practice does?

 

Erlend: What we really need to do is make people believe in their own capability to deal with security, and not make them intimidated by how technical digital security is. It can be frightening, because people don't understand the consequences or how it works; it's IT, and they are experts in other areas. What we want to do, and what we use AI for, is first of all our MailRisk tool to help people with suspicious emails. It's a button you can click to get help if you're suspicious of something, and you get feedback: this is safe, or this is bad. We use AI there, it has been in use for many years, and we have hundreds of companies using this service. We are now developing a platform for personalized, automated training, to address the fact that people are different. Security awareness has so much been one size fits all, which does not actually fit anyone well. What we want to do is build this in a way that finds out who is interested in what we have to tell them, and what they actually know from before. Because some people know a lot, while others don't know too much.

 

The experts who are really interested, like myself, are at least not a big risk. Then you have the people who know little but are interested. That's a good group to communicate with, because there is a training need. But if they are not interested, will it actually help to just push more and more of the same training, which you need to be really interested in to dive into? And then you have the people in the middle. We built this on research from University College London, and we developed it further ourselves. We did a research project with the engineers at the Institute for Design and investigated how to find out what level of interest people have, and how to profile people in terms of their security knowledge. Knowing this, we can target various groups of people with different communication messages, just like any communications expert would do. You need to target your communication: talking to the board, talking to people working in the field, engineers or accounting people, what are their interests and what is their knowledge? That's what our tool is doing.
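To make the segmentation idea above concrete, here is a minimal Python sketch of the kind of grouping this describes: each employee is mapped to a communication segment from an interest score and a knowledge score. The thresholds, field names and segment labels are illustrative assumptions, not Secure Practice's actual model.

```python
from dataclasses import dataclass

@dataclass
class EmployeeProfile:
    name: str
    interest: float   # 0.0 (not interested) to 1.0 (very interested)
    knowledge: float  # 0.0 (knows little) to 1.0 (expert)

def training_segment(p: EmployeeProfile) -> str:
    """Map a profile to a communication segment (illustrative thresholds)."""
    interested = p.interest >= 0.5
    knowledgeable = p.knowledge >= 0.5
    if knowledgeable and interested:
        return "advanced material, low priority"   # experts: low risk, keep them engaged
    if interested:
        return "basic training, high priority"     # clear training need, and receptive
    if knowledgeable:
        return "refreshers on new threats"         # knows a lot but currently disengaged
    return "short, motivational nudges"            # build interest before pushing content

for p in [EmployeeProfile("A", 0.9, 0.8),
          EmployeeProfile("B", 0.7, 0.2),
          EmployeeProfile("C", 0.1, 0.3)]:
    print(p.name, "->", training_segment(p))
```

The point of the sketch is simply that different cells of the interest-knowledge grid get different messages, rather than one message for everyone.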

 

Samson: I think it's quite interesting. This is one of the cases where you're using technology to address some of the security and privacy issues that might arise in organizations. I know that you've been a researcher and made this transition from researcher to entrepreneur with a startup company. How did new laws, such as the GDPR, provide inspiration for founding this company? Did you see opportunities there?

 

Erlend: First of all, you see this societal change where cybersecurity actually matters, which it didn't before. Companies get breached all the time, and boards are now scared. Cyber and ransomware are at the top of the enterprise risk chart, and then you have the regulatory risk: if you're breached, you also get a fine, personal data is spilled out, people's lives may be affected, and nobody wants to actually harm other people. The regulatory part is important. We saw this opportunity, when the landscape was changing, to build a company which puts cybersecurity on the agenda with the human in focus. The visibility of doing security is important for organizational culture. You cannot just say IT will solve this. If something bad happens, you need an organization that is not only able to prevent as much as possible, but also to respond to incidents and deal with them. Detecting and responding is really important, because nobody is hack-proof. You need to work on multiple levels. We also see the regulatory landscape with GDPR on a policy level. It's important for Europe to differentiate itself in market competition with America, and privacy was one way to differentiate. Information security is a market mainly dominated by American vendors, so when we build privacy-friendly products, privacy-friendly AI, that's a big advantage.

 

Samson: Do you pitch your service as something that will help companies also comply with the law or at least contribute to their compliance with the law?

 

Erlend: Yeah. Through our tool we collect a lot of data, but we do it in a privacy-friendly way, and we still provide the company with statistical insights. We give companies the eyes to manage their risk, to see their risk pain points and also the good spots: where do we have low risk? Being able to document this has been a challenge in security forever, as has building good KPIs and actually being able to feed them with data continuously. That's the challenge we have been approaching and trying to solve. If you have good data, you can show that you have been taking this seriously if something goes bad. It's like covering your ass: you can show that you have been working on these things, you have a track record.

 

Silvija: Just to understand a couple of use cases, Erlend. One thing I'm thinking of is that your data can show where the security problems usually arise. You can say that those processes, those groups, those kinds of tools seem to be the origin of 80% of the problems, and that's where you should focus. The other thing you can do is show that the areas that used to be the biggest problem last year are now under control. So you don't always have to have this red team, blue team; you can do it a little bit more continuously, right?

 

Erlend: Yeah. We have a data model that shows historical development, and we also have data aging. If we record a really high-risk incident, for instance, it will at first weigh heavily towards increased risk, but the weight decreases as the data point ages. And if an incident gets into the system because someone actually reports it, that is a good thing: you have become aware of it and are able to handle it. Building this insight is very important for strategic decisions and planning.
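As an illustration of the data-aging idea, here is a minimal Python sketch in which each recorded risk event contributes less to an overall score as it gets older. The exponential half-life and the severity scale are assumptions made for the example, not Secure Practice's actual model.

```python
from datetime import datetime, timedelta

HALF_LIFE_DAYS = 90  # assumed half-life: an event counts half as much after 90 days

def aged_weight(event_time: datetime, now: datetime) -> float:
    """Exponentially decay an event's contribution as it ages."""
    age_days = (now - event_time).total_seconds() / 86400
    return 0.5 ** (age_days / HALF_LIFE_DAYS)

def risk_score(events, now: datetime) -> float:
    """Sum of severity * age weight over all recorded (time, severity) events."""
    return sum(severity * aged_weight(t, now) for t, severity in events)

now = datetime(2022, 6, 1)
events = [
    (now - timedelta(days=3), 8.0),    # recent high-severity incident dominates
    (now - timedelta(days=200), 8.0),  # same severity, but heavily aged down
    (now - timedelta(days=30), 2.0),   # minor and fairly recent
]
print(round(risk_score(events, now), 2))
```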

 

Samson: Yeah. I see that at an organizational level, your tool actually provides quite good value in terms of ensuring security and also privacy. But at the individual level, it could also raise some really serious privacy concerns, because what you're trying to do, as I understand it, is to categorize people based on their behavior. We have seen big technology companies trying to categorize users based on their behavior, and there are lots of concerns related to that. I'm aware that you have also been part of this AI sandbox within the Data Protection Authority. Can you explain to us what kinds of concerns you were facing and what motivated you to take part in this AI sandbox?

 

Erlend: We saw vendors in America pitching products saying, we can identify your next cyber breach, and the solution was to name an employee. Maybe that is a really good sales pitch in America. We thought: let's do this in a trustworthy and safe way that people can feel part of, fit for use in Norway and the world. I think our main motivation has actually been to build a product where people can feel safe and get positive emotional stimuli from learning about security. We don't want to pollute that with worries like: am I being monitored now, what can they see, did it register that I missed this part, and so on. We approached the Data Protection Authority when they launched their regulatory sandbox for AI and said: we want to profile employees in companies with respect to the cyber risk they represent to the company. We were tabloid on purpose, because we really wanted to take part in this. It was a perfect match with our timeline for product development, because you don't want to retrofit privacy into your products afterwards. You cannot do that. You have to align the overall product strategy from the start. We got into the sandbox and we got a really good idea of the legal and regulatory issues that could come up.

 

Erlend: There are challenges and problems that we want to solve. For instance, how can we ensure that an employee is not exposed to their employer in terms of a risk score? We never wanted to create the kind of tool the Americans are building right now. We still need individual tracking of data to personalize and automate; we could aggregate without tracking individually, but it's better this way. Can we do this? Can we say to our customer, although they are the data controller and we're just a processor, no, you cannot access the risk score for Erlend or Samson? They would have legal grounds to say, we want that, because you're just a data processor. One of the solutions is that we have technical controls in our portal and our product: the employer cannot get this data on an individual basis, only statistical analysis that does not expose people. We also have a legal control where we say that for the individual risk scores, Secure Practice and the customer are joint controllers.

 

Erlend: This means we have a shared responsibility. Joint controllership is a legal construct that can apply in many situations, but it's not very widely used in practice. For everything else we remain an ordinary processor; you don't have to do joint controllership for everything, just for the individual risk scores. Because we are legally bound as joint controllers, we get to decide as well, and we can say: no, you cannot have this data. By putting up this legal firewall upfront, the customer actually safeguards their own company from doing employee surveillance, because that's what it would become if they could inspect the individual risk scores of employees. So you put this up in front to safeguard yourself as a company, using our tool and us as a third party to process the data. This is a bit of privacy innovation in legal terms, I think. One of the good outputs from the project.
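A minimal sketch of the kind of technical control described here, assuming a simple reporting function: the employer-facing side only receives aggregates, and reports are refused for groups so small that individual scores could be inferred. The minimum group size, data shapes and threshold for "high risk" are illustrative assumptions.

```python
from statistics import mean

MIN_GROUP_SIZE = 5  # assumed threshold below which aggregates are withheld

def department_risk_report(individual_scores: dict) -> dict:
    """Return only aggregate figures; individual risk scores never leave this function."""
    if len(individual_scores) < MIN_GROUP_SIZE:
        raise ValueError("Group too small to report without exposing individuals")
    values = list(individual_scores.values())
    return {
        "employees": len(values),
        "mean_risk": round(mean(values), 2),
        "share_high_risk": round(sum(v >= 7.0 for v in values) / len(values), 2),
    }

scores = {"emp1": 3.1, "emp2": 7.5, "emp3": 2.2, "emp4": 8.0, "emp5": 4.4}
print(department_risk_report(scores))  # aggregates only, no names or per-person scores
```

The minimum group size matters because even aggregate numbers can expose an individual when the group is tiny.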

 

Samson: Quite interesting. This is something you actually gained from being part of the AI sandbox, and that's quite interesting to learn: you came up with concrete solutions for how to address the privacy challenges you have. But one question: would the employer know who is being receptive to the education? When you decide to provide training, would they have a mechanism of knowing? Have you considered those challenges as well?

 

Erlend: Yeah. We did a series of workshops during the sandbox project. We did focus groups with employees and companies, and we did a focus group with a labor union to get the really tough questions up. We also did a workshop with the Equality and Anti-Discrimination Ombud, I think that's the right name, and tried to elicit all the objections you could have. One of the outputs from engaging with these people and organizations is that you get a chance to find the problems upfront, and then you can take mitigating actions in the product, like information and transparency. The really important challenge for us is not only to make it legally safe for our customers' organizations to use our products, but to build trust and confidence in our product from the end-user side, from the employee side, so they feel comfortable with actually learning about security.

 

Samson: And also sharing data with you. Yeah, I think they have to trust you first to engage with your tools. Yeah.

 

Erlend: Yes.

 

Silvija: Just a question from me. It sounds like you're seeing growing regulation around cybersecurity, and usually people feel that all this compliance just drives costs and limits business. But in your case, it seems like it's improving the quality of organizations, both in terms of being more secure, but also perhaps being transparent in the right way, or understanding privacy versus efficiency, and so on. So regulation, when you approach it in stages and learn as you go, can really be a driver of both innovation and value.

 

Erlend: Absolutely, I think so. You just need to do it well. It's like Steve Jobs: really getting into a problem and thinking so hard about it that you find the ultimate solution. You really have to dive into it. With security, it's a cost, it's a necessity, and maybe you can skip it because you've been doing fine so far. But developing your company culture never goes out of fashion, right? Developing people's skills never goes out of fashion, and people need these skills both at work and at home to stay safe. When we do this, we put a good purpose at the top of our projects, and nobody can question that the purpose of making people safer is good. Because if you are low on knowledge, if you don't know much, you are much more at risk of a breach, and that would actually cause you harm. As the Equality and Anti-Discrimination Ombud said, it's a good kind of discrimination to address these people. And you don't have to waste other people's time: you give the basic stuff to those who know little, and you give more advanced and interesting stuff to those who are interested. So it's time-productive. In companies, you can do better than one size fits all for training.

 

Samson: Yeah, I heard that the Equality and Anti-Discrimination Ombud was part of the sandbox in your case. What were the particular discrimination and bias concerns in relation to the tool, how did you manage those in the sandbox, and how did the sandbox help with those issues?

 

Erlend: We got to learn a bit about discrimination as well; there is much more to it than just age and gender. One of the challenges with detecting discrimination in AI algorithms is that you actually need data on the parameters you are checking, in order to check whether there is discrimination in your algorithm. So if we want to check for age discrimination, we would need information on people's age. We would actually need more data, and in our case we didn't want to go there, at least for now. But what they said is that as long as we can deliver on the legal and technical guarantees that people will not be exposed individually, there is a low risk of discrimination, because we are only providing statistical data. They cannot identify high-risk people and give them bad projects or lower salaries or whatever. But there are many risks and blind spots there; you need to be aware of them and work through them. That's also a regulatory risk you want to avoid. It's good to just get the problems on the table. Technology is so much more than just the code. That's one of the things that really intrigues me about working in this area: it's so much about people and people's lives.
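To illustrate why checking an algorithm for discrimination requires data on the very attribute being protected, here is a small, hypothetical demographic-parity style check: it cannot run without, for example, an age-group field for each person. The field names and data are invented for the example.

```python
def selection_rates(rows):
    """Share of people flagged high-risk within each (hypothetical) age group."""
    totals, flagged = {}, {}
    for r in rows:
        g = r["age_group"]  # the check is impossible without collecting this attribute
        totals[g] = totals.get(g, 0) + 1
        flagged[g] = flagged.get(g, 0) + int(r["high_risk"])
    return {g: flagged[g] / totals[g] for g in totals}

data = [
    {"age_group": "under_40", "high_risk": True},
    {"age_group": "under_40", "high_risk": False},
    {"age_group": "over_40", "high_risk": True},
    {"age_group": "over_40", "high_risk": True},
]
print(selection_rates(data))  # a large gap between groups would warrant a closer look
```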

 

Samson: Very interesting. So you think the sandbox was useful and you think that you would use your participation in the sandbox as a selling point to companies? Do you mention that you actually participated in the sandbox when you make these marketing pitches to companies?

 

Erlend: Yeah, we've actually gotten a bit of publicity. Early on, we met potential customers who were asking: what are you doing with the Data Protection Authority, is there an incident? We told them it's not like that; we're doing a proactive project and trying to be cutting edge on privacy. We're not the bad guys.

 

Samson: They actually thought that you were under investigation for that.

 

Erlend: That was kind of the negative aspect of being first.

 

Silvija: I want to add something to that. And then I would like to ask Samson to help us conclude on what he thinks are the main issues here. But I know at least three other companies that were desperate to get into that sandbox project with really interesting challenges in terms of privacy and ethics in the use of public data. And they didn't get in. So you must have had a really, really good problem formulation and something that is very socially critical for us.

 

Erlend: Yes, we chose a problem on privacy in the workplace, which represents nearly 30% of all incoming cases to the Norwegian Data Protection Authority. It's a really big topic for them, and many people are concerned with it: tracking of cars and vehicles, employers checking and inspecting people's email mailboxes, and many cases about how to deal with AI in the workplace. And recently there was this issue with Microsoft having a new tool that analyzes email logs to detect harassment in the workplace and people who might be about to leave.

 

Erlend: You don't want harassment, right, so it would be good to prevent that. But very often there is no legal basis for actually doing it that way. So being able to dive into the sandbox and take this hands-on, proactive, risk-reducing approach, getting to ask all the difficult questions and then design our solutions from that, is a much better way to approach this and make security and privacy business enablers for us and for our customers.

 

Silvija: Samson. What's the most important thing you'd like your students to take away?

 

Samson: I think a few points. The first would be how you can take inspiration from regulation to create value. I think Secure Practice is a good example, where they are using existing regulation, but also of course empirical data on breaches and security concerns, to build a product that helps companies be more secure and more privacy friendly. Another point is how this AI sandbox actually allows for innovative solutions. It forced the data protection authority to think really creatively about how to address the privacy concerns of employees who are being monitored. By allowing this joint controllership, and preventing the employer from accessing the individual risk scores, you balance those concerns against the value you're proposing. And the last point would be that you can use participation in the AI sandbox, as Erlend mentioned, both as a way to publicize your company and to gain concrete solutions, as Secure Practice has done. Those are at least my takeaways from the discussion.

 

Silvija: Lots of value creation and also business opportunities, I guess, in the space of privacy, ethics and cybersecurity.

 

Erlend: I think we're seeing a global trend where Europe is actually leading on privacy and regulation, although there are also market reasons why Europe wants to do this. I was in Brussels a couple of weeks ago and met with an EU Parliament member who chairs the committee for the new AI Act, and there will be more regulation of AI as a domain. He is a very big proponent of the sandbox concept, where you can take the pretty wide space of legal interpretation and narrow it down to understandable practices, because you gain this experience. I think the regulator actually learns from our project too; they say they do. They learn from hands-on work with AI solutions, ours and other projects, because they are similar.

 

Silvija: Open ground, I guess. A lot of open ground. 

 

Erlend: Absolutely. From our project we learned a lot, and we apply that directly in our products, but you can also learn from the process and how we do things in other contexts as well: transparency, trust, interacting with various groups of people. It's not just coding the tech, you actually need to talk with people. I think that's an important takeaway for everyone engaged in AI: get to know your stakeholders.

 

Silvija: I'd just like to add two things. First of all, every conversation we've had in this series has been very much a cross-functional, cross-subject-matter conversation. Here we have the techie and the entrepreneur, and the professor and the lawyer. I think the future is genuinely complex and will require you to know more than one subject. The other thing I'm really fascinated by is how much you actually learn by doing. We can't just sit completely still and analyze how to regulate this, how to make good AI, and so on. We have to experiment, we have to do, we have to build, and then we have to learn very, very quickly while we do it. That's, I guess, the only way to get there fast enough.

 

Erlend: I agree.

 

Samson: I completely agree with this. This kind of sandbox gives the regulatory authorities the ability to learn how things work, but also to shape the technology at an early stage. One of the problems with some of the big tech companies and their practices is that there was no such opportunity to shape them from an early stage, so it becomes very difficult to make them compliant afterwards. You have to be a bit ahead, proactive, and engage with the stakeholders. Of course there is a risk you have to be careful about: the regulator should not lose its watchdog role and its ability to enforce the law. But if you don't have insight into the technology, you won't be able to exercise that role either. So I think engagement should be a priority as well.

 

Silvija: Engagement and impatience. Thank you both very much for an inspiring and very educational conversation.

 

Samson: Thank you.

 

Erlend: Thank you, Silvija.

 

You have now listened to a podcast from Lørn.Tech, a collective learning effort about technology and society. You can also get a learning certificate for having listened to this podcast at our online university, lorn.university.

 
