LØRN Case #C0064
High-dimensional data
In this episode of #LØRN, Silvija speaks with Valeryia Naumova, Head of Machine Intelligence at Simula Metropolitan Center for Digital Engineering, a newly established research institute working on areas such as networks and communication, machine learning, and IT management. In the episode, Valeryia explains whether we should trust an AI system or a human expert more, how an algorithm can give accurate predictions of blood glucose levels, and how they work to strengthen artificial intelligence and machine learning.

Valeryia Naumova

Head of Machine Intelligence

Simula

"AI-teknologi gjør i stadig større grad avgjørende beslutninger på våre vegne. Dette gjelder innenfor en rekke områder, med alt fra autonome kjøretøy til kliniske diagnosesystemer."

Duration: 24 min


Who are you and how did you become interested in AI?
I head up the Machine Intelligence Department at a newly established research institute, Simula Metropolitan Center for Digital Engineering, which is a joint venture between Simula Research Lab and Oslo Metropolitan University. I became fascinated by machine learning and data-driven modelling during my PhD, when I worked on developing an algorithm that would provide an accurate short-term prediction of blood glucose levels in diabetes patients from current and previously observed data.

What is your role at work?
I primarily focus on research, but I also actively help promote formal education in ML/AI. My research focuses on developing new methodologies and numerical methods for the analysis of complex systems and learning from high-dimensional data in science and industry.

What are the most important concepts in AI?
The overall goal of AI is to create technology that allows machines to function in an intelligent way; in other words, making them capable of thinking, acting, and learning like humans.

Why is this exciting?
AI technologies are increasingly making far-reaching decisions on our behalf in a number of fields, from self-driving cars to clinical diagnostic systems.

What do you think are the most interesting controversies?
AI progress has raised various controversial topics that we need to address, including:
• Should AI development be heavily regulated?
• Should humanoid robots have rights?
• Will AI kill jobs?
• Can we combat AI cultural insensitivities?

What is your own favourite example of AI?
Probably AlphaGo.

Can you name any other good examples of big data, nationally or internationally?
I am fascinated by a project we are working on with the Norwegian Cancer Registry, where we analyse screening data, provide personalised predictions about when next to perform screening, and identify women at risk of cervical cancer based on their screening history and additional personal information.

How do you usually explain how it works, in simple terms?
I always start with the simplest concepts, since they are essential for understanding more complex and advanced concepts. It is also important to clearly explain how a machine reasons and how this differs from human reasoning.

Is there anything unique about what we do in AI here in Norway?
Norway has pioneered the digitalisation of various industries and, as a result, improved energy consumption, among other things. Moreover, Norway has uniquely well-preserved data sets, such as medical registries, that could be used for training ML algorithms to provide more personalised advice on treatment options.

Do you have a favourite big data quote?
Eliezer Yudkowsky's quote: "By far, the greatest danger of Artificial Intelligence is that people conclude too early that they understand it."


Topic: AI and data-driven platforms
Organisation: Simula
Perspective: Research
Date: 181020
Place: OSLO
Host: Silvija Seres

This is what you will learn:


Artificial intelligence
Machine learning
High-dimensional data

More learning:

Report by Teknologirådet, “Kunstig intelligens for Norge”; Deep Learning by Goodfellow, Bengio, and Courville.



This is LØRN Cases

A LØRN CASE is a short and practical, light and fun innovation story. It is told in 30 minutes, is conversation-based, and works equally well as a podcast, video, or text. Listen and learn wherever suits you best! We cover 15 thematic areas within technology, innovation, and leadership, and 10 perspectives such as founder, researcher, etc. On this page you can listen, watch, or read for free, but we recommend that you register, so that we can create personalised learning paths just for you.

We would like to help you get started and keep up lifelong learning.



More Cases in the same topic

#C0045 AI and data-driven platforms | Michael Link | Researcher | Kongsberg

#C0044 AI and data-driven platforms | Heidi Dahl | Researcher | Sintef

#C0043 AI and data-driven platforms | Sverre Kjenne | Leader | BaneNor

Transcript of the conversation: High-dimensional data

Welcome to Lørn.Tech - a collective learning effort about technology and society, with Silvija Seres, Sunniva Rose and friends.

 

Silvija Seres: Hello, and welcome to Lørn. Today we are going to learn about AI - artificial intelligence. I'm Silvija Seres, and we meet Dr. Valeryia Naumova. Welcome.

 

Valeryia Naumova: Hi! Thanks a lot, Silvija for the invitation. It's a pleasure to be here.

 

Silvija: Great. You rushed all the way from Lisbon to join us.

 

Valeryia: Yes, I was at a conference, and the flight was delayed a bit, but I made it.

 

Silvija: You made it. Thank you so much for coming. Valeryia, you do research within the area of artificial intelligence and machine learning at Simula Research Lab, which is now combined somehow with this wonderful Oslo Metropolitan University. You actually head the new department there?

 

Valeryia: Yes. So, I'm working as a research scientist, as you mentioned, and I'm also leading the Department of Machine Intelligence. It's at a newly established research institute called Simula Metropolitan Center for Digital Engineering, and as you correctly mentioned, it is a joint venture between Simula Research Lab and Oslo Metropolitan University. So, we moved downtown to Bislett.

 

Silvija: You moved from my dear Fornebu. I'm sad. Tell me about your research.

 

Valeryia: I'll keep it short. My background is in applied mathematics, but I work quite a lot on machine learning and data-driven modelling. I started working on this topic when I did my PhD at a research lab in Austria. In particular, during my PhD I worked on a European project which dealt with the development of tools and techniques that could help people with diabetes live a better life, or manage their disease in a better way.

 

Silvija: What does that have to do with machine learning? How does that work?

 

Valeryia: So the problem is that people with diabetes, of whom there are more than 400 million worldwide, face a so-called optimization problem: they have to know when to inject insulin or when to take a new portion of glucose in order to keep their blood glucose level in the normal range, to avoid episodes of high or low blood glucose, which could lead to acute illness or long-term complications.

 

Silvija: What I didn't realize before I actually spoke with one of these patients, who got onto an app probably powered by your research, is that to avoid very low levels they sometimes overmedicate, and these kinds of things, as you say, have long-term negative effects like loss of vision, loss of hearing, loss of limbs, or kidney disease. So, it's actually quite bad.

 

Valeryia: Yes, it is, it is really bad. And some acute complications could be coma or even death, that is, if blood sugar goes really low, which could happen during the night. And as you mentioned, some people are so afraid of these complications that they overeat in the evening. The goal of our work was to develop an algorithm which allows us to predict the evolution of blood glucose from past measurements.

 

Silvija: So, it develops in different ways, in different bodies?

 

Valeryia: Yes. So it's a patient-specific algorithm, and it adjusts to the patient's lifestyle. What we have seen is that we cannot use some sort of universal algorithm, because a patient's glucose evolution changes not only from patient to patient, but actually from day to day, because it depends quite a lot on your well-being: whether you are sick or healthy, how much exercise you did. Everything influences it. And actually, doctors do not know exactly what the causes of changes in blood glucose are. But what we did, using machine learning and mathematics in particular, is that we are able to provide short-term predictions, meaning 20-30 minutes ahead of time, which could warn the patient, saying that you should have something to eat, or inject more insulin, to be able to control the blood glucose in a better way. We also developed an algorithm which does long-term predictions overnight, saying what the risk of getting low blood glucose is. Our hope is that with these techniques patients would be able to feel safer and also feel better. The fascinating point of working on this project was not just using and developing machine learning techniques and doing mathematics, but that we worked in an extremely interdisciplinary team. We had teams of doctors and industry engineers. It was a very direct path from development to users. We developed techniques that were used in clinical trials in different hospitals throughout Europe to validate and evaluate the algorithm.
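To make that idea concrete, here is a minimal, hypothetical sketch of the kind of short-term prediction described above: fitting a simple ridge-regularised autoregressive model to lagged glucose readings and predicting about 30 minutes ahead. It is not the algorithm developed in the project; the five-minute sampling, lag window, and synthetic data are all assumptions made purely for illustration.

```python
# Illustrative only: short-term glucose prediction from past readings,
# assuming one reading every 5 minutes and a 30-minute (6-step) horizon.
import numpy as np

def make_lagged(series, n_lags, horizon):
    """Each row of X holds n_lags past readings; y is the value `horizon` steps later."""
    X, y = [], []
    for t in range(n_lags, len(series) - horizon):
        X.append(series[t - n_lags:t])
        y.append(series[t + horizon])
    return np.array(X), np.array(y)

def fit_ridge(X, y, lam=1.0):
    """Closed-form ridge regression with an intercept term."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.linalg.solve(Xb.T @ Xb + lam * np.eye(Xb.shape[1]), Xb.T @ y)

def predict(w, recent_readings):
    return np.append(recent_readings, 1.0) @ w

if __name__ == "__main__":
    # Synthetic CGM-like signal (mg/dL), purely for illustration.
    rng = np.random.default_rng(0)
    t = np.arange(2000)
    glucose = 120 + 30 * np.sin(2 * np.pi * t / 288) + rng.normal(0, 5, len(t))

    X, y = make_lagged(glucose, n_lags=12, horizon=6)   # 1 h of history, 30 min ahead
    w = fit_ridge(X[:1500], y[:1500])
    pred = predict(w, glucose[1588:1600])               # 12 most recent readings
    print(f"Predicted glucose 30 min ahead: {pred:.1f} mg/dL (actual: {glucose[1606]:.1f})")
```

A patient-specific model in this spirit would be fitted on each patient's own history and refreshed regularly, since, as noted above, glucose dynamics change from person to person and from day to day.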

 

Silvija: So basically, engineers because you needed to have some sort of a chip that measures this all the time?

 

Valeryia: Yes, so we used different techniques. We took measurements from different devices, devices which provided data to us in real time, with direct blood glucose measurements every five or ten minutes. This was the main device we used, but of course this device is not available for all patients, for different reasons. We also tried to address patients who only use fingerstick measurements, which is what most diabetes patients do, where they just prick their finger and measure blood glucose. In that case, you cannot prick a finger more than five or six times per day, so you're really in a very difficult situation when it comes to analyzing how your blood glucose behaves during the day.

 

Silvija: Very lumpy data. I have to ask you something before we go into where the A.I. and M.L. is in this. You said your background is applied mathematics. And I just wonder, are you the life and soul of the party when you say, "I'm an applied mathematician"? What does an applied mathematician do? And why do you think it's fun?

 

Valeryia: It's fun to do mathematics and machine learning. I think it's fun because you work not only on developing and understanding methods, but you can also apply them and see how they feed back into different lives or different technology. On the other hand, I really like having a background in mathematics because I can really go inside a method and try to understand how it works. Because now everyone is talking about artificial intelligence and machine learning, and many people are users of those techniques, and that's great. It's great that all these big technological companies like Google and DeepMind have developed these off-the-shelf techniques, but I think it's also important to understand how they work, how much you can trust them, and how to improve or extend them in order to be able to apply them to a new problem. Because you cannot have a universal predictor or universal algorithm which you can use everywhere.

 

Silvija: If you were to explain what A.I. is, the most important concepts, how would you structure your explanation?

 

Valeryia: So, the overall aim of artificial intelligence is creating technologies that allow machines to act in an intelligent manner, meaning being able to think, act, and learn like a human does. If you look back, an essential definition of intelligence was provided by Alan Turing in the 1950s. He developed an intelligence test where he tested a machine's intelligence by giving answers from both a machine and a person to a jury. If the jury was not able to distinguish whether the answer came from a person or from a machine, then the machine was called intelligent. So, this is considered to be an operational definition of intelligence. Using that, we define what intelligence means in a person, and we try to work out how we can translate it into intelligence in machines. In this case, intelligence incorporates aspects such as perceiving, being able to recognize objects, which is essentially now the so-called computer vision or face recognition field. Then machines should be able to memorize objects. They should also be able to recognize the language people speak, and be able to learn and move. All of these specific aspects of intelligence have now developed into subfields of artificial intelligence.

 

Silvija: So, it has to do with motion and robotics and with language processing, with image processing. Can you help us understand the difference between these things called narrow A.I. and broad A.I.?

 

Valeryia: I think narrow A.I. is mainly what people are doing when they develop algorithms for specific subfields and focus on specific problems.

 

Silvija: For example, financial modeling.

 

Valeryia: Financial problems, or, for instance, language processing, when algorithms work only on recognition of written language. For instance, if you look at Google Translate, this is an example of narrow A.I., because what it essentially does is recognition of language and translation, and also recognition of images, when we take a picture and it translates it into different languages. Broad A.I. is essentially, I would say, the ultimate goal of developing a machine which acts like a human.

 

Silvija: I have a personal opinion on that, because I'm actually not worried about the three scare scenarios of broad A.I., whether a Terminator or a Matrix or a Utopia. I'm not worried about that. But what I'm worried about is the collective power of all the narrow A.I.s. They are going to be so good in finance, so good in biology, so good in medicine. Especially if you connect the data across the fields, the people who own the data and the algorithms will have such good commercial use for them that I think there might be some really scary systemic power in that.

 

Valeryia: It could be. I think there will be some problems associated with A.I., like data availability and the need for extremely complex infrastructure in order to be able to provide good results, as you can see now.

 

Silvija: It's like somebody was saying, "Don't worry about that, it's like worrying about overpopulation on Mars". Listen, you mentioned also systems biology. And you said A.I. could be used for everything from image recognition to autonomous cars to systems biology and communication networks. Give us an example for the last two. What is systems biology?

 

Valeryia: Systems biology is essentially when we try to understand what happens in our body at different scales, starting from the molecular scale up to the organ scale.

 

Silvija: So, it's modeling our biology?

 

Valeryia: Modeling our biology. There is quite a lot of work on this, and a colleague, or former colleague, now working at the Allen Institute in Seattle, is essentially working on visualizing how molecules look and how they interact with each other in order to understand the causes of specific diseases. We are also working on several projects related to cancer research, where we are trying to understand how the interaction between different genes in our body can cause or lead to specific types of cancer.

 

Silvija: Or Marie E. Rognes with her fluid dynamics in brain and in heart.

 

Valeryia: They are using more simulations, but of course also producing a lot of data, which eventually leads to the need to analyze that data, and in that case we can also try to apply our techniques and get results.

 

Silvija: I asked you what you think are the biggest controversies. I have to just read your list, because I thought you were very spot on, and then you can explain each one very briefly. Should A.I. be strongly regulated? Should humanoid robots have rights? Will A.I. kill jobs? Can we combat A.I. cultural insensitivities? And whom should we trust more - an A.I. system or an expert? What do you mean by strong regulation of A.I. development?

 

Valeryia: I mean, now we are talking about self-driving cars. Who is essentially going to regulate, say, the behavior of such cars? Who is to blame if some accident happens with those cars? These regulatory aspects, and how to proceed, are still not clear.

 

Silvija: What about robot rights?

 

Valeryia: If we are developing robots, and they become more and more human-looking and behave more and more like people, can we give them the same rights? That is a question I still don't think is completely addressed in machine learning.

 

Silvija: People watch Westworld and then you suddenly start understanding. We were just looking at the video of the Boston Dynamics dancing dog and the Atlas robot doing parkour. They are being pushed over by a man continually, and people feel really bad about that.

 

Valeryia: Yes, I think quite a lot of the discussion is also focusing on whether AI can take jobs from people.

 

Silvija: What's your position on that?

 

Valeryia: Of course A.I. can quite soon be used to do some basic jobs, basic activities. I think that is quite easy to implement. But A.I. and machine learning will also create quite a lot of new jobs, not only in the information and communication sector, but also beyond it. So, we start producing new technologies, and new jobs will be created. Of course, it might challenge us, because now all basic operations will be done by a machine, meaning that we will have to challenge ourselves to be able to compete through constant learning.

 

Silvija: You can't stop learning.

 

Valeryia: Yes, always learning.

 

Silvija: I'll make your next job.

 

Valeryia: Yes, that's true.

 

Silvija: And this is why we do learn. What about these cultural insensitivities? What's that?

 

Valeryia: There was an example of a chatbot, a learning algorithm, which learned from previous Twitter accounts.

 

Silvija: Within six hours, I think.

 

Valeryia: And then it started to exhibit culturally insensitive behavior. And it's unclear how it is possible to remove these insensitivities from the learning algorithms right now.

 

Silvija: It's funny how we don't judge people as harshly when they do that as we judge robots. And the task of that robot was actually to try to pass a Turing test, you know, so not to be discovered. So, it started swearing and being racist and chauvinist and all that. Then people got very upset.

 

Valeryia: Actually, machines learn from people's messages, so that's the way we behave. It probably reflects our behavior, and then we kind of become judgmental.

 

Silvija: Actually, one of my favorite quotes on A.I. is actually this thing that it might hold up a mirror towards our humanity and it might challenge us, but also inspire us to be more aware of our humanity. The last point was, should we trust more in an A.I. system or an expert? Because there might be a need for some human ethical judgement. Is that what you mean?

 

Valeryia: I mean, when we say, for instance, that A.I. will come into medicine and we will develop some system which will help clinicians, or advise clinicians on specific treatment options: if a clinician disagrees with the suggested treatment option, whom should we believe? Should we follow the machine's advice, or should we follow the clinician's advice? And what if the machine, for instance by looking at the history of the patient, predicts some adverse illness or disease which is not obvious to a clinician? How should we behave then? Should we tell the patient about this, or should we keep it until the clinician sees it? All these aspects, especially in the medical domain, are extremely challenging, and they should be properly addressed before we can start using machines in those domains in the driver's seat.

 

Silvija: We are running out of time, and there are a million things I'd like to ask you about. I asked you about your favorite examples of research in this area, and you mentioned something related to the Norwegian Cancer Registry. Can you tell us about that?

 

Valeryia: Yes, so we are now working together with the Cancer Registry of Norway, which has very well-preserved data from the last 30 or 40 years. In particular, we are looking at the registered data on cervical cancer. Every woman in Norway between 25 and 69 years old is supposed to go to screening for cervical cancer every third year. All this data has been accumulated, but for some women screening every third year is not crucial: some women stay healthy throughout their life, while for other women, on the other hand, the screening should be done more frequently. The goal of our collaboration with the Cancer Registry is to develop a personalized screening procedure by looking at the historical data. And this is really fascinating, the way we look at all this data and try to make some tangible impact.
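As a rough illustration of that idea, here is a hypothetical sketch of turning a woman's screening history into a risk score and a suggested interval. The features, thresholds, synthetic data, and logistic model are all assumptions for illustration; this is not the Cancer Registry model.

```python
# Illustrative only: mapping a (hypothetical) screening history to a risk score
# and a suggested screening interval. Synthetic data stands in for registry data.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per woman: [age, years since last screen,
# number of past abnormal results, number of past normal results].
rng = np.random.default_rng(1)
X = rng.normal(loc=[45, 3, 0.3, 5], scale=[12, 1, 0.6, 3], size=(500, 4))
logits = 0.8 * X[:, 2] - 0.2 * X[:, 3]            # toy rule generating labels
y = (rng.random(500) < 1 / (1 + np.exp(-logits))).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

def suggest_interval(features, low=0.02, high=0.10):
    """Map predicted risk of an abnormal next result to an interval in years (hypothetical thresholds)."""
    risk = model.predict_proba([features])[0, 1]
    if risk > high:
        return risk, 1    # elevated risk: screen sooner than the default 3 years
    if risk < low:
        return risk, 5    # low risk: screening every third year may be less crucial
    return risk, 3        # default interval

print(suggest_interval([52, 3, 2, 4]))
```

In the actual project the input would be the full registry history rather than a handful of summary features, but the overall idea, learning from past screening outcomes to personalize when the next screening should happen, is the same as described above.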

 

Silvija: You mentioned that we have very good data, these unique and very well-preserved data sets. Why is that?

 

Valeryia: I don't know exactly, but for instance, as I mentioned with the Cancer Registry, the data is so well preserved and well kept, and it's really a big treasure for people who are doing machine learning. I think other registries also contain quite a lot of data which we can use. I actually know that people from other countries would like to get access to the Norwegian or Scandinavian registries. Probably, politically and in terms of regulation, the preservation of the data was a big focus. This makes a big difference.

 

Silvija: Towards the end, would you like to leave us with a quote about A.I.?

 

Valeryia: Yeah, I think there are a million quotes and you have probably heard a lot today, but I really like the one by Eliezer Yudkowsky, who is an American artificial intelligence researcher and writer known for popularizing the idea of friendly artificial intelligence, and essentially he says that by far the greatest danger of artificial intelligence is that people conclude too early that they understand it. And another one is from Elon Musk: "The pace of progress in artificial intelligence is incredibly fast, and unless you have direct exposure to groups like DeepMind, you have no idea how fast; it is growing at a pace close to exponential. The risk of something seriously dangerous happening is in the five-year time frame - ten years at most."

 

Silvija: So, we need to learn fast?

 

Valeryia: Yes, and fast.

 

Silvija: Thank you so much, Valeryia, for coming here and sharing your knowledge with us.

 

Valeryia: Thank you, Silvija. It was a great pleasure. 

 

Silvija: And thank you for listening.

 

You have been listening to a podcast from Lørn.Tech - a collective learning effort about technology and society. Follow us on social media and on our website: Lørn.Tech.

 
