LØRN Masterclass M0005
Rise of AI/KI
We talk with and learn from Morten Goodwin, professor at the University of Agder and OsloMet. Morten is one of the leading researchers on artificial intelligence in Norway, and he is also the author of the bestselling popular-science AI book «Myten om maskinene» ("The Myth of the Machines"). We cover the long history of AI in 5 short minutes and explore the difference between the central concepts of AI, especially general AI and narrow AI. We talk about the best-known examples of AI nationally and internationally. You will learn about the most important application areas for AI, how humans will work with AI rather than be replaced by it, and why games are such an important part of AI. We will also explore the use of AI during the Covid pandemic. In our workshop, we dive into the use of AI for learning: how it can help improve content by tagging, categorizing, and summarizing (e.g. via ChatGPT), and how it can suggest individualized learning paths on a broad platform of learning data.

Morten Goodwin

Professor

UiA

"You have to understand the data in respect to your problem"

This is LØRN Masterclass

Digital conversation-based courses – 4 x 30 minutes
We gather the best minds behind the new theoretical concepts in the leadership of digital innovation and transformation. We cover 15 thematic areas of new knowledge and experience in innovation and leadership, and 10 perspectives such as founder, researcher, etc. Within each of these themes and perspectives, we set up digital conversation-based courses in four parts, which always follow the same structure: introduction, examples, toolbox, and workshop. At roughly 30 minutes per lesson, you will learn new concepts and understand new opportunities in an accessible way.

Lesson 1 - Introduction (41 min)

What is AI, The history of AI in 5 minutes, General AI and Narrow AI, Some examples of AI

Lesson 2 - Examples (30 min)

Morten's best examples of AI, Application areas of AI, Human touch vs AI, Games and AI, Where AI was useful in the pandemic

Lesson 3 - Tools (20 min)

Tools and tactics to get you started, Why you should join AI competitions, Developments in Norway, How AI has rapidly grown

Lesson 4 - Workshop (16 min)

Personalized learning paths, AI in LØRN, AI in platforms, AI limitations

Finished all the lessons?

Take the quiz and get a learning certificate

You must be a member to take the quiz

Finished the quiz?

Answer the reflection task

You must be a member to do the reflection task.

Topic: AI and data-driven platforms
Organization: UiA
Perspective: Research
Date: August 20, 2021
Language: EN
Location: Kristiansand
Host: Silvija Seres

2000+ listens

Literature:

Check out the podcast we did with Morten on AI in 2018 here on LØRN. 

The episode is titled “#0053: Hva er greia med big data og AI?” (The episode is in Norwegian.)


What you will learn in this Masterclass

• What is AI, The history of AI in 5 minutes, General AI and Narrow AI, Some examples of AI
• Morten's best examples of AI, Application areas of AI, Human touch vs AI, Games and AI, Where AI was useful in the pandemic
• Tools and tactics to get you started, Why you should join AI competitions, Developments in Norway, How AI has rapidly grown
• Personalized learning paths, AI in LØRN, AI in platforms, AI limitations


Lesson 1 - ID:M0005a

Welcome to Lørn.Tech – a collaborative learning effort about technology and society. With Silvija Seres and friends.

 

 

Silvija Seres: Hello, and welcome to a Lørn Masterclass series. Today our topic is artificial intelligence, and our guest is Morten Goodwin, who is a professor at the University of Agder and OsloMet. Welcome.

 

Morten Goodwin: Thank you. So nice to be here.

 

Silvija: Morten, you are one of my favorites within AI in Norway, and I'm looking forward to this conversation. So, this is a conversation in four parts that results in four mini-lectures, intended to open up a topic to anyone willing to listen. In this case, the topic is artificial intelligence, and there are good courses on the topic on Coursera. There is Elements of AI, there are the university courses, there is a plethora of stuff, also TED Talks, but what we are trying to do is a slightly different pitch. Our style is informal. It is as if you met a person you know on the street, you're a professor of AI, and you're trying to teach them your subject over an extended lunch, if you wish. It's sound first, so we don't use any slides. Therefore, we must paint pictures with words over this lunch.

 

Morten: This sounds very exciting. I think the key to what you're doing is teaching the public about such an important topic as artificial intelligence. It's a revolutionary technology, and it needs to be understood by the masses, I think.

 

Silvija: I think your point here is extremely important, so I'll just add one more sentence before we go into the structure of it. As you just said, there exist good courses on AI, but they quickly get very technical, and people get scared. They think: I need to learn programming, or get a master's in data science; I need to know a lot before I can talk about AI. Yet AI is going to be the most revolutionary tool of every business in the next decade. It's the application of AI that we are very, very keen on, right?

 

Morten: Absolutely, and I often compare it to driving a car. Most people know how to drive a car. They know how to turn the wheel, they know when to press the gas pedal. Only a few people know how the motor works, how the gear shift works, or how the combustion engine works, and I think it's the same with artificial intelligence. Some people need to be the typical nerds that dive into the technology and do the programming, etc. But a lot of people will interact with artificial intelligence in their business, in their home environment, in their hobbies. And they need to know what artificial intelligence is, what it isn't, what the faults are, what the positives are, and what the drawbacks are of using a technique as intelligent as artificial intelligence in the public and private sector. There needs to be a lot of knowledge on the use of artificial intelligence. That is why a masterclass like this one fits the general public perfectly. But I agree with you: if you want to delve into it, there are a lot of courses online that you can use, on the mathematics for example.

 

Silvija: It's beautiful and it's amazing. I worked in AI some 20-25 years ago, basically on the algorithm side, before we called it AI. I'm just amazed at the development of the last five years; it has shocked all of us who have been working in the field. It is an amazing tool, it's a dangerous tool, and it's a necessary tool. I love your description of "what did you have to learn when you got your driver's license?" It's exactly how I try to talk about what people need to learn about other technologies as well as AI. Think about what you did when you got your driver's license. You need to learn the concepts, you need to learn how the thing works. You need to learn that there is an engine and that there is fuel in it, and that this is why you are pressing these different buttons; that sort of thing we need to know about AI. Then you need to learn the traffic rules, you need to learn where you have to be careful, what the driving etiquette is in the country. That's the sort of thing we're going to try to explore here. So, it'll be four parts. The first part will be what you described: what it is, what it isn't, what's good, what's bad, the basic concepts. The second part will be your favorite examples in depth. The third part will be the tools of the trade, or, you know, what the big parts of the engine are and how people get started. And for the fourth part, I'm abusing you a little bit by exploiting your experience and your intelligence on the topic of Lørn. We're going to have a workshop. I want people to start thinking about their own company, and we're going to do that through an example. So we're going to play with Lørn: what should Lørn do with AI? I'd like people to think about "what can I do with AI in my company and my job?"

 

Morten: It sounds like an exciting journey, and I very much look forward to every bit of it, but especially the last part. Because that's when you get your hands dirty, when you try to understand "what can I do, what can I do in my business?" That's when you start to learn what AI really is.

 

Silvija: Excellent. So then, let's get into the first lecture, which is "What is AI?" Before we start on AI, Morten, would you mind telling us, just in a minute, who Morten is and why he is interested in AI?

 

Morten: Good question. I'm a professor in computer science, mostly at the University of Agder, but I also have my hands in some businesses and some other places as well. I have been working with computer science since I was a little kid with my Commodore 64. But during my bachelor's, which is almost 20 years ago, I was introduced to this new fascinating topic, which turned out to be not only a revolution in the field but also a revelation for me, because I understood that artificial intelligence was a completely new way of making machines think. Up until then, I had been making everything in a software program by specifying absolutely everything. And that was very fun. But then I understood that the future is not like that; the future has to be that the software learns in some way, and that is mostly what artificial intelligence does. Then I did my Ph.D., I was an associate professor for some time, and a bit more than a year ago I became a full professor at the University of Agder.

 

Silvija: You have also written a great book.

 

Morten: Oh, thank you. It's only in Norwegian, unfortunately. It's called "Myten om maskinene", or "The Myth of the Machines", and it tries to do the same thing we are trying here, which is to teach the masses about artificial intelligence. Because I think the key to avoiding the hype of artificial intelligence, and avoiding the scare of it, is to learn and teach people what it really is. No robots are taking over. There's no job market without humans, for example. It is a powerful, powerful tool that most people use every day, if they use a cell phone, or social media, or if they watch Netflix or anything like that. And it is a tool that will become more and more powerful in the years to come.

 

Silvija: Very cool. So, AI. There are many places to start reading its story. In one way, we could start with, you know, the old Greek philosophers and the first ideas of automated slaves, which I think were rather thoughtless.

 

Morten: Probably, yeah.

 

Silvija: And then we have these technical developments, maybe some 50-60 years ago. But then something happens. There is this exponential growth that hit about five years ago. So how would you tell the story of AI in three minutes?

 

Morten: In three minutes? It's a big story, but a lot of the history of AI comes after the Second World War, in the 50s and 60s. Alan Turing is a person many people have heard about; he talked about artificial intelligence mathematically and philosophically, asking: can we make machines think? His general idea was that if we can make a machine act in a way that appears intelligent, we have made artificial intelligence. It's kind of a game of persuasion, making sure that other people believe that you act intelligently, something you assess from the outside. Some people disagree with him, but most say it makes sense: as long as my software or my robot does something smart, it is really intelligent. And then, of course, a lot of technical things happened in the same era. There are some famous names. One is Marvin Minsky, another is Frank Rosenblatt. They both worked with techniques that we today would call neural networks. The idea is that you have some sort of brain-inspired software or hardware with sensors that are connected to other sensors. They called those sensors neurons, artificial neurons, and the neurons were connected by synapses. The general idea was that these neurons and synapses should learn from the environment, to recognize images or recognize paths in a maze, etc. Amazing, revolutionary techniques. But I think the world around them didn't understand how revolutionary they were, because they did really rudimentary stuff: the system could look at a picture and see whether it showed a circle, a square, or a triangle, this type of thing. Very fun and nice, but what is the real application area for that? For that to become clear, time had to pass. So there was an eye-opening period for artificial intelligence in the 50s, and then it went into an AI winter, when people understood that these were not conscious robots at all, but something much simpler. Then my favorite scientist, who is still alive, Geoffrey Hinton, was in the 1980s, I think in 1986, still working on those old-fashioned neural network techniques, this time in software. He invented something called backpropagation, which is just a method to strengthen some synapses and weaken others, making sure that the network learns in a very, very efficient way. That means you can take software and make sure it learns, either on images (this is a cat, this is a dog), or on text, or on good student versus bad student, whatever you want, just based on these brain-inspired techniques. Again, nobody cared, because in the 80s it was very rudimentary, though some people kept working on it, of course. But I think what sparked the revolution we're in now was Hinton in 2012. He was in a competition on the internet called ImageNet. You could submit whatever algorithm you wanted; it could be AI-based or based on some other technique. The task was to recognize the difference between cats and dogs and horses and brooms, or whatever you like. Every technique had faults, meaning it wrongly categorized a dog as a cat, or something like that, in a quarter of the cases. So in three quarters of the cases it was correct, but a quarter were wrong, which is very bad. Hinton had a much lower error rate: I think it was 12%, not 25%. After some more work it was down to seven, and then down to three or four percent.
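To make the backpropagation idea concrete, here is a minimal sketch of a tiny two-layer neural network learning the XOR pattern from examples. Everything in it (the task, the layer sizes, the learning rate) is illustrative rather than anything from the episode; it only shows the principle of strengthening some synapses and weakening others based on the error.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # targets (XOR)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden "synapses"
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output "synapses"

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(20000):
    # Forward pass: activate the artificial neurons layer by layer.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass (backpropagation): send the error backwards and
    # nudge every synapse in the direction that reduces it.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2))  # approaches [[0], [1], [1], [0]]
```

No rule for XOR is ever written down; the weights simply drift toward it, which is the kind of learning Hinton's method made efficient.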

 

Silvija: Can I just stop you, Morten? 

 

Morten: Yes!

 

Silvija: I remember the competition. Knowing a dog from a cat seems like an easy task for humans. But of course, for a machine, sometimes a Chihuahua looks very much like a Siamese cat. And then there are all these funny examples that we don't think are very hard until we see those images: the Chihuahuas and the muffins, or the Chinese dogs and folded towels.

 

Morten: Absolutely. And if you really look into an image as it's seen from a computer's point of view, it's pixels, meaning it's numbers organized in a matrix. What is the characteristic of a cat? Can you formally define what a cat, or a dog, or a muffin is? It's very, very hard based on just these types of numbers. Evolution has made us humans perfect at this, but yes, for machines it's a little bit harder.

 

Silvija: The interesting thing is, as you just said, we humans don't think of it as hard, because millions of years of evolution have formed us into being really good at reading the world with our eyes, and we're trying to teach a computer to do that within a few days of training. And why is it important that a computer should know a cat from a dog? Well, self-driving cars need to recognize people on the street, but they also need to recognize traffic lights and other cars, and to tell snow from road markings. Or the computers that are now doing image analysis on cancer images or radiology images need to recognize normal cells from cancer cells. These are extremely applied problems that were unreachable only a few years ago, until Geoffrey Hinton started getting down to a couple of percent error rates.

 

Morten: That is absolutely right. And I think that when he was able to reduce the error rate this much, it opened up almost a Cambrian explosion. Everybody wanted to do AI then, and specifically neural networks, which we sometimes call deep learning. But the essence is that AI is a set of techniques for solving complex problems artificially, and much of it is about learning. That is exactly what happened here, and we often call it machine learning.

 

Silvija: I'm trying to simplify and reiterate what you said, so I'm sure that I understood, and more importantly that the audience understood. We talked about images here, but AI can be used for text, for sound, for images; it reads many different kinds of inputs and can then find these patterns. And then you also talked about different levels of AI. So we can tell it what things are and it can recognize them again, or, as you just said, it can start figuring things out on its own. Right?

 

Morten: Absolutely. There are levels of AI, as you say. Much of what is applied today is what we call supervised. That means there's some expert that says this is a cat, this is a dog, or this is a cancer cell, this is not a cancer cell, this is a traffic light, etc. And then it learns the patterns of a cat based on those labels. Or it can be unsupervised. Unsupervised means that you give it a lot of data and it's supposed to find the patterns alone, and say: here's a group of customers that are very similar, and here's a group of customers that are different from the others but still very similar to each other. That's typically what happens when you watch a streaming service and get a recommendation, "this is a video you'll like". The reason is that the AI has learned that you like these types of movies, you're in this group, and it gives you that type of recommendation.
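As a small illustration of this supervised/unsupervised split, here is a sketch using scikit-learn on synthetic data. The dataset and the particular model choices are ours, purely for illustration, not anything mentioned in the episode.

```python
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Synthetic "customers" that happen to fall into three hidden groups.
X, y = make_blobs(n_samples=300, centers=3, random_state=0)

# Supervised: an expert has labeled every example (y), and the model
# learns to reproduce those labels on new data.
clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict(X[:5]))

# Unsupervised: no labels at all; the algorithm groups similar points
# on its own, like a streaming service grouping viewers by taste.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(km.labels_[:5])
```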

 

Silvija: I just want to say one more thing for people who are not data people or computer science people, because I think there is still this kind of magic about AI. How on earth does it know where to drive? Or how does it know what I like when it comes to films? I think it's important to understand that, first of all, we have to convert everything into data. There are great chatbots now that can talk with you, and you don't even realize it's not a human. How does it do that? Well, it listens to what you say. It converts that into data that it then understands as words and sentences. Eventually, it has learned the right response to those words and sentences, and it converts that back into data that gets played on your phone or something. So this conversion of a sound, an image, health records, cancer, DNA, protein folding, financial concepts: everything gets turned into data patterns that computers are very good at sorting.

 

Morten: Exactly. If you look closely, you can see that everything is data to computer software, and it's all about finding patterns. Take the example of chatbots: there's AI that translates my sound waves into data, there's AI that finds the correct response to my question, and there's AI that turns the answer back into sound. And of course, this happens behind the scenes for everyone. When I ask Siri or Alexa or anything, there are several steps of AI in there. And the key to it all is that it's not limited to images or audio or health data. It's basically whatever you have that can be put into a computer or a cell phone in some way; then you can use AI on it. That is the key, because, like the internet or the computer itself, it is general-purpose: it can do almost everything, it's very versatile. You just have to train it correctly. That is also why it is such a revolutionary technology: you just have to imagine what it can do, and train it. But there's no magic to it, of course. There's heavy mathematics that you have to follow. And what it finds, if you think of images, for example, is that cats have these pointy ears and dogs have long ears. The same is true for cancer cells, or for my voice, or a red light, etc. So it's finding these patterns, inspired by the human brain and the vision and sound mechanisms in our brain, but again very different, because it's more mathematically sound.
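The voice-assistant pipeline described here can be sketched as three chained AI steps. The function names below are hypothetical placeholders, not the API of Siri, Alexa, or any real system.

```python
def speech_to_text(audio_waveform) -> str:
    """Hypothetical speech-recognition model: sound waves in, words out."""
    ...

def find_response(question: str) -> str:
    """Hypothetical language model: picks the best answer to the question."""
    ...

def text_to_speech(answer: str) -> bytes:
    """Hypothetical speech-synthesis model: words in, sound waves out."""
    ...

def assistant(audio_waveform) -> bytes:
    question = speech_to_text(audio_waveform)  # AI step 1: sound -> data
    answer = find_response(question)           # AI step 2: data -> data
    return text_to_speech(answer)              # AI step 3: data -> sound
```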

 

Silvija: We have about 10 minutes more in this first lecture, maybe 15; this is the most important lecture, so we can go a little longer. I want to throw out two topics that I'd like us to cover. One is to help us understand, with pictures almost, the different levels of AI. I think we understand that AI can be used for all kinds of things, from reading sensor data on an oil platform and balancing that platform in 19-meter-tall waves, to looking at people's faces on cameras to let them into their mobile phones. It doesn't matter whether it's an image or a temperature, a sound or financial data: in applied AI this data is read by the computer, it understands the context, and it can find the relevant patterns. But we can do that in very different ways, depending on how smart it is. I would especially like us to demystify neural networks and deep learning a little, because I think when people start hearing some of the more statistical concepts, they get terrified, and I would just like them to understand how they can use it as a tool. The other question I have for you is the topic of general AI versus narrow AI, and I'm just going to start with that right away. AI sounds like this amazing thing, and then I think people get a little lost in general AI. To be honest, I don't care about general AI. I don't even like the idea. I think we humans should be taking the big directional decisions, and I don't care whether the machine cares or has feelings or not; I look at it as a great tool. What worries me is that all this discussion about general AI removes the sense of urgency on narrow AI, or applied AI, which is the real revolution, I think.

 

Morten: A lot of questions here. So, let's first separate narrow and general intelligence. General intelligence is the hypothetical future world where AI software is at the same intelligence level as humans, meaning that Siri or Alexa or any of these robots or systems is at the same level as me or you. And we are nowhere near that. Those systems are very stupid. Maybe we will get there sometime in the future, but it doesn't look like that now at all.

 

Silvija: Can I just stop you for a second? I think what you said is really important: those systems are really stupid. They are extremely stupid in some areas where we humans are made to be strong, such as ethics and morals. The organic world is messy, and we are very robust in messy situations. Computers are amazingly much smarter than us in a structured world.

 

Morten: Exactly, but in a much more narrow world, and that is the key. Take chess, for example: AI is much better than any human at playing chess, but solving a chess game is a very narrow problem. If you compare that to general intelligence, you cannot ask that software to hold a conversation or tell a joke or anything like that. It's simply impossible. That is the difference. AI is absolutely becoming smarter than us, but in very narrow fields: game playing, for example, probably driving, a lot of the medical techniques, a lot of finance, as you mentioned, which stock to buy at what time. So it's smarter than us in small areas, but it is very narrow. That's the difference between narrow intelligence and general intelligence. As for the demystifying part: it is finding statistical trends in the data. My feeling is that it's easiest to understand with images, because we can see that there's a square or a circle or that type of thing. When an AI recognizes a face, or a red light or a green light, it is finding those types of patterns: if a self-driving car looks for a crossing, it looks for a sign that signals a crossing; if it looks for a red light, it looks for something round and red. And the point is that we don't have to tell a self-driving car that the traffic light has three circles; you just give it a lot of examples. This is red, this is red, this is red; this is green, this is green, this is green. Then it picks up these patterns. This is very similar to how you teach kids. You don't explicitly say it's a round thing, it's a green thing, etc. You say: when the red light is lit, stop. Or when your kids learn to talk, you don't tell them that there exist verbs and nouns; you just give them a lot of examples. Our kids find the trends in the language and learn it, and it's similar with AI. The difference is that AI can only do it narrowly, but within that narrow area it is very advanced.

 

Silvija: I love the examples of games, because that's easy for people to understand. Sometimes when I talk about the history of the subject, I like to talk about Deep Blue playing chess. I forget the year, but maybe some 30 years ago.

 

Morten: 90s, yeah, 96-97 I think.

 

Silvija: And Garry Kasparov lost. He was first shocked; then he decided that he would understand AI and the power of AI. Interesting, because at one point the computer made a move that was actually an error, and it put Kasparov completely off balance; he thought the computer was smarter than it was, and that contributed to him losing. But that's narrow, right?

 

Morten: Deep Blue is a very good piece of technology, but there's very little learning in it. What it does is search through the chessboard, meaning that if you put your knight there, or your rook there, the game changes. If you put your rook first, and then your knight, and then your pawn, etc., you get one particular game, which you might lose. It is searching through very many possible games. That is Deep Blue. Very impressive, and similar to what chess players do, they search, but it can search much further and much more quickly than Garry Kasparov.
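That try-every-move idea can be shown on a toy game. The sketch below plays Nim (take 1 to 3 stones; whoever takes the last stone wins) rather than chess, a simplification of ours; Deep Blue's search was vastly more sophisticated, but the principle of assuming both sides always pick their best reply is the same.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def best_score(stones, my_turn):
    """Search every continuation of a Nim position; +1 means 'I' win."""
    if stones == 0:
        # Whoever took the last stone has won, so the side to move lost.
        return -1 if my_turn else +1
    scores = [best_score(stones - take, not my_turn)
              for take in (1, 2, 3) if take <= stones]
    # Each side assumes the opponent also searches for the best reply.
    return max(scores) if my_turn else min(scores)

print(best_score(20, True))  # -1: 20 stones (a multiple of 4) is a lost
                             # position for the player to move
```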

 

Silvija: Before you go there, I'd like to mention a middle step: Jeopardy and Watson, IBM's successor to Deep Blue, which was in some sense smarter. We got a little bit surprised there, because it could understand natural language; it could tell that "the Big Apple" refers to New York. So how was that different from Deep Blue? And why wasn't it as impressive as what came five years ago?

 

Morten: I think the Jeopardy machine was also very impressive, and it has more elements of AI in it, because it has some sort of natural language understanding, meaning that a word such as "apple" could be something you eat, a type of computer, but also a big city in the eastern part of the United States. But at the core, it's still a search mechanism. It had some AI to parse the question, to find the pattern in the question as a human would understand it, and then it searched a big database of knowledge and responded. Truly impressive, but the learning mechanism, which I would say is the key to what we have seen in recent years, was there only to a limited degree. It's kind of like building a huge, complex database with some AI on top. Very good at playing Jeopardy; the challenge is if you want it to do something more. If you want to have a conversation with that Jeopardy machine, it's completely impossible built that way. I think what really put the focus on AI in the last five years is what happened with Google DeepMind. They first played a game called Go, which is similar to chess but has a bigger search space, meaning it's more complex. There the traditional Deep Blue-style search mechanism falls short; you cannot do it, because it would take the lifetime of the universe with today's computers. It's impossible. So what they did instead was to build a search mechanism combined with learning. The work was led by a person called David Silver, a very good scientist. He used deep learning, neural network mechanisms, the same as we use for image recognition, but recognizing board states: this board state is very good, this board state is very bad. It found the patterns in the pieces: this is a good pattern, this is a bad pattern. So in addition to searching through moves, it also asked the AI: is this a good path or a bad path? At first they gave it a lot of expert data, these are good games, these are bad games, and it picked up the trends from that. And it was able to beat the best Go player, by a lot: it won the five-game match, and it even appeared to make mistakes, or people believed it made mistakes. It turns out it didn't; it put the Go player off balance, and the machine won in the end. An amazing technique, with much more learning, in the form of machine learning, than the big Deep Blue mechanism. What truly amazes me is that David Silver and the team took it further with something called AlphaZero, which avoided using human data altogether. It just played against itself. In the beginning very badly; after a few seconds, much better than most people; and after a few hours or days, it was better than the human level of play.

 

Silvija: I think it's absolutely amazing. By the way, the sort of work David Silver did makes me regain my strong belief in financing that kind of research. If you think about the commercial effects of his breakthroughs, he has created gazillions in value for the world, and we need more David Silvers. But we also need people who know how to understand and apply the effect of what David Silver did, and spread it to all kinds of industries. What I want to say is that one of my big epiphanies here was when they used, I think it was DeepMind, on Atari, and it played this ping-pong-like game where you keep hitting the ball. They wouldn't tell the computer what the goal of the game is; it's not told you have to stay alive as long as you can, or you have to hit the bricks at the top, or you have to move this little lever here to kick back the ball. It had to figure that out on its own. And after a few hundred rounds of the game, it started playing better than humans, in the sense that it discovered strategies that humans don't think of. I thought that was amazing.

 

Morten: It truly is. And that happened a little before AlphaGo, but they used the same type of reinforcement learning technique. That means that you make a decision, this decision affects the environment in Breakout, which is what the game was called, and the environment affects you back. And you might ask: why do you care about AI that plays Breakout, Go, or chess? Well, it turns out that the same type of techniques can be applied to real-world situations like medicine, self-driving, etc. If you think of a game such as Breakout, which most people know, or Pong or any of these other games, there's a lot of image recognition, just as we humans do. We understand where the ball is, where the paddle is, where the bricks are, and we do that intuitively. And then we have to make a decision: do I move to the left, do I move to the right? Without programming, without saying "if the ball goes to the left, go to the left; if the ball goes to the right, go to the right", it only got reinforcement feedback, positive or negative, based on the number of points it got. If it got more points, it got positive feedback; fewer points, negative feedback. That is an amazing way to think of it, because it figured out all those details that are very natural to us but not natural to a computer. Basically, the sky is the limit there, I think. There are still some drawbacks, because Go, chess, and these types of games are very limited in scope, meaning that if you change things a little, if you add a new piece to your chessboard, or change things at the pixel level, etc., it falls completely short and you have to retrain it again and again. So even though it's immensely impressive, and I would guess one of the most impressive scientific discoveries of the last 10-15 years, this type of system still operates in a very narrow field. But it's truly impressive.
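A minimal sketch of that reinforcement-learning loop: the agent is never given the rules, only points, and it gradually learns which action pays off in each state. The tiny corridor world below is invented for illustration; DeepMind's Atari player used deep neural networks rather than a lookup table, but the feedback principle is the same.

```python
import random

n_states = 6        # positions 0..5 in a corridor; the goal is position 5
actions = [-1, +1]  # step left or step right
q = {(s, a): 0.0 for s in range(n_states) for a in actions}

def best_action(s):
    return max(actions, key=lambda a: q[(s, a)])

for episode in range(500):
    s = 0
    while s != n_states - 1:
        # Mostly act greedily, but try a random move 10% of the time.
        a = random.choice(actions) if random.random() < 0.1 else best_action(s)
        s_next = min(max(s + a, 0), n_states - 1)
        reward = 1.0 if s_next == n_states - 1 else -0.01  # points as feedback
        # Q-learning update: reinforce actions that led to more points.
        target = reward + 0.9 * max(q[(s_next, b)] for b in actions)
        q[(s, a)] += 0.1 * (target - q[(s, a)])
        s = s_next

print([best_action(s) for s in range(n_states - 1)])  # learned policy: all +1
```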

 

Silvija: Very good. We are going into lecture two in a minute, but I would just like to ask you, very briefly: what do you think are the greatest misunderstandings or myths you would like to debunk?

 

Morten: There are many of them, I think, and some of them you have touched upon already. One myth is that AI does not become smarter than us; we should know that, at least in narrow ways, it absolutely does. Another myth is that it becomes smarter just by giving it more data, meaning that if you have a system that works badly, you just give it more data and it becomes better. That's partly true, but you must be very careful, because it becomes only as good as the data you put into it. In chess games, etc., it's easy to generate more data, but in real-world scenarios it is often very difficult to find enough. In HR hiring, or in policing, it's very hard to find high-quality data.

 

Silvija: I think I'd like to summarize that in a way: you have to love the data, but you also have to love the problem.

 

Morten: Oh, absolutely, yes. And you have to understand the data in respect to your problem. We have heard the statement "data is the new oil" a lot, and I partly agree with it, but I also partly disagree, because you must understand the data you want to collect. If Tesla collects data, they must understand exactly what to collect, how to store it, and how to put it to use for their users. If I collect health data, I have to understand exactly what I am doing; if I get some data outside of my field, I cannot use it in any meaningful way. So in this sense it's a bit different from oil. I think of data collection much more like patents. I have to have a good understanding of my data to collect it, and when I collect it, I should have some sort of ownership of it, because I've spent so much time collecting it. And you cannot share data wealth the same way you can share oil wealth. You can share some of it, but there has to be some level at which you gain financially from the data you have collected.

 

Silvija: I think that's a great point, Morten. There is a responsibility around sharing your data: you should be the one defining how to give it to the world, you should be allowed to keep some of the data just for yourself if it's the core of your business, and you should share the rest of your natural resource with the world to help the world move forward. And I think that's going to be a political question. A technical-political question, where we need politicians and tech people to start talking in a somewhat more refined way than just "GDPR and we're done".

 

Morten: I think that's key, because you need some sort of financial gain for the data collector; if you have to share everything at once, people won't collect. But since you often collect public data, it also has to be shared in some way; otherwise you're putting all your money on one company and letting it control the data.

 

Silvija: Traffic rules for data sharing, I think, is a very important political discussion. And if you say "share everything", it's even worse: who's going to finance the continual gardening of this data, the upkeep of the data, the refinement of the data? Who's going to be responsible for mistakes in the data? If people expect you to have perfect data, perfectly shared, that is infinitely costly. And I think this is somewhere Norway could actually lead the way, because we have a lot of good public data, and we have this trust.

 

Morten: I think you're touching upon something very important there. There is also the question of responsibility. As you and I know, but it's important to tell the audience: AI makes mistakes. Oftentimes these mistakes come from faulty data, not always, but often. Then somebody has to hold the responsibility somewhere. If you're just sharing the data, you're diluting the responsibility; nobody is responsible. But if a company is collecting the data and they make a mistake, at least you have somewhere to place the blame.

 

Silvija: Very good. You have inspired me to pull two more examples into the next lecture. We're going to be talking about games, we're going to be talking about conversations, and we're going to talk about protein folding. And then I would like to ask you to talk about Corona and AI: two or three examples of where it actually was useful. I don't think people have realized enough about that. And maybe we'll stop there; I think public data is possibly another super interesting example, but let's see where it takes us. We are saying thanks for lecture one, and we'll be back with lecture two.

 

Morten: Thank you.

 

 

You have now listened to a podcast from Lørn.Tech, a collaborative learning effort about technology and society. You can now also get a learning certificate for listening to this podcast at our online university, Lørn.University.

 

Lesson 2 - ID:M0005b

Welcome to Lørn.Tech – a collaborative learning effort about technology and society. With Silvija Seres and friends.

 

 

Silvija Seres: Hello, and welcome to lecture two on AI with one of our masters of AI, Morten Goodwin. This second lecture will be a conversation about examples: your favorite examples, Morten. Which examples of AI do you think will help people understand the topic? So, where do you want to start?

 

Morten Goodwin: Well, we could start in many places, but I suggest we start with game playing, which is one of the areas artificial intelligence has mastered in recent years. It was one of the big challenges in the beginning, playing chess, playing checkers, and the researchers, John McCarthy and others, soon realized how complex these games were and how tremendously challenging it was to build an autonomous system to solve them. We touched slightly upon it in the first lecture, but I think the key milestones are the IBM system in the 90s, which had a search mechanism, and then DeepMind with the Alpha series, starting with AlphaGo, then AlphaZero, in 2017, 2018, etc. The reason games are so important is that they are very controlled environments: we can see exactly everything that happens in the game, we can understand the rules as humans, and the computer program, the AI, can understand them in a similar way. It is not as messy as the real world, not as messy as the medical world or the driving world, etc. That means we have tremendous control over the system. As we mentioned in the first lecture, we had a system that first trained on human examples, and then went on to train without human examples. It became even better, meaning that the data we humans gave it became an obstacle; we humans did not play Go well enough to train the AI. When we removed humans completely, it became much better. And that's a good lesson, because data is absolutely important; in this case, it turned out to be the actual obstacle in the system.

 

Silvija: I just want to interrupt for a second, because it makes me smile how philosophically interesting this is as well. We see it with self-driving cars, or autonomous cars: humans are the last bug in the system. But we don't want a system that is perfected to the point where humans are made completely irrelevant. So I guess this idea of finding a place for humans is a very important ethical area of AI research. And we'll get back to that: what's human and what's machine.

 

Morten: But I think we can already bring the human in here, because grandmaster players such as Magnus Carlsen use these AI systems to train themselves. So the circle has been completed, in a way. Instead of the masters giving chess examples to the chess computer, the chess computer now gives examples back to the human, because we want humans in the loop in those areas. And in many other places we want that too. What do you think, Silvija?

 

Silvija: I just can't stop being philosophical here, and I think it's a really important point. This is making them better. This is making their chess playing more interesting, more efficient, more fun. What's important here is that we have found a way to use the technology so it makes not just the world more efficient, but our lives better. And I think that's a really important part of what we should be thinking about: how do these self-driving cars not just make more money, but make life and the world better, in terms of fewer accidents and a better climate? Sometimes I feel people believe that if AI removes humans from the loop, then we've solved all the problems. If we do that, I think we've created a much bigger problem.

 

Morten: Yeah, the self-driving car is a tremendously interesting example, because it touches many of the areas of artificial intelligence at the same time. It has image recognition from these LIDAR scanners, which take, I don't know how many, but a lot of pictures at the same time, in 3D. It's complex to see, and it has to make decisions all the time. Should I turn left or right? When should I brake? What should my speed be? It's a very, very complex problem. I'm sure you're aware of the DARPA challenge from some time back, which essentially asked: can you make a self-driving car in some way? I'm not sure I remember the details of the story, but at least for a long time they used traditional techniques, which took a long time to learn and made bad mistakes; whenever a bush came in the way, the car went the wrong way, these types of things. But a couple of years ago it became much better, and the reason is that they applied deep learning to the problem. Deep learning is the same kind of AI that is used for chess playing, image recognition, and a lot of other areas, including self-driving cars. And then it becomes very apparent: you could say that chess playing may be very important for Magnus Carlsen, etc., but the application area here matters for you if you ever drive a car, if you're ever in traffic, or even walk near traffic. Environmentally too, absolutely, because we want to drive more efficiently and we want fewer cars, since the production of cars is also environmentally costly, and we want to share car rides. If you look a little into the future, I think the idea of everyone owning a car, or even two cars (as we do; living far out in the countryside, we have to have two cars), is a stupid way of thinking. We can all share, but AI is essential to making that work.

 

Silvija: Here you also exemplify an area where people need to make political decisions related to technology development. I know lots of people who say: well, I like driving, I don't want to be replaced by an algorithm. And some people's paradise is other people's hell, as it always is. This is where the majority has to decide: do you want more safety, or do you want people to decide whether they want to drive themselves or not? These are political, democratic negotiations, but we need to educate people that this is the sort of problem you're going to have to have an opinion on relatively soon.

 

Morten: Truly, I fully agree, and you can take that example even further. There are even people who like to ride horses. Before, everybody rode horses, but nobody would say we should only have horses in the streets anymore, because that would be ridiculous. Yet there are still places where people can ride horses, and of course, please go ahead. I think driving will be the same. Some people like to drive; I'm not one of them. I'm not even particularly good at driving, but I must. Every day I drive somewhere, mostly to my job, and I would much rather spend that time reading or doing something else. And that is the key, because we want to automate those parts of the world that we don't want to spend our time on. That is why the AI revolution is so important: we see that potential in many places. Game playing, true; self-driving, absolutely true. And we see the same in medicine, where you kind of play the same game with, for example, antibiotic materials. A bit before COVID, I think, a new antibiotic material was found this way, and it still needs to be tested in real clinical trials, etc. But it shows the potential of what AI can do with these game-playing systems.

 

Silvija: I want to go back to that idea of AI in health, so let's hold that thought. What I want to do is talk a little bit about it, but also think about the same ethical question we talked about just now. If much more of the job of the doctors and the nurses and the other staff will be done by some level of AI and automation, then on one hand we are dependent on it, because that will make health services accessible to more people. But at the same time, can people then say, "sorry, no, I want a person to diagnose me, not the machine"? I think this will be really interesting. But before we go there, and before I lose the other thread I have: we started with games. You mentioned AlphaGo, AlphaZero, and MuZero, and I don't know about that last one. So, would you please help us understand the three?

 

Morten: Yeah. AlphaGo was the first one; it learned from human Go games, and it was limited to playing only that one game, Go. It could not play anything else.

 

Silvija: Old Chinese game?

 

Morten: Old Chinese, yeah. It's kind of similar to Othello; maybe more people have heard of that. You play black and white pieces, and you're supposed to capture the white pieces if you're black, and the black pieces if you're white. You can compare it to chess because it's the same line of thought, but it's bigger and more complex. And it is older than chess, so a Go player would say that chess is just messy, new, and fancy; we want to play this old game, because that's the game of the gods. So it's a very complex game, and when David Silver's team solved it, meaning that the AI was able to play better than the best human, in 2016, it was a revolution. But that was AlphaGo. One of the criticisms was that it was limited to playing only Go, and that it depended on human data. So he did two things at the same time. He removed the human data, meaning the system learned only by playing against itself, and he made it applicable to Go, chess, and shogi. Shogi is like a Japanese version of chess, where a piece that reaches the other side of the board flips over and changes; similar, but different. He showed that the same technique, with minor differences, could solve all three at the same time, so it generalized a bit through learning. There was still a big limitation in all of those: you must write down the rules of the game. You have to say that the rook moves this way, the knight moves this way. For chess and Go, that's not a big problem, but for real-world examples, self-driving cars, etc., it's a big problem, because the rules of driving are immensely complex. They don't feel complex at a human level, but for a computer they are. MuZero is the next level after AlphaZero: you don't even include the rules. It has to learn the rules by itself from the game. You just interact and say "with this, you didn't win, you lost", and it iteratively learns the rules of the game. It turns out that when you do that, not only is it possible, it even plays better than when you add the rules. The rules themselves are a kind of limitation. The key is that this opens up many more application areas. And they didn't do only those three games; they went back to Atari, meaning you can play all those Atari games without adding the rules of any of them. Just let it be: please learn the rules, please learn how to win. The assumption is that you can then do the same in the real world, medicine, finance, etc., without having to write down the rules.
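A drastically simplified sketch of that last idea, learning the rules from interaction instead of being given them. This is not MuZero's actual algorithm (which learns a neural model and plans with Monte Carlo tree search); the 3-state world and reward numbers below are invented purely to show the principle.

```python
import random

# Hidden from the agent: (state, action) -> (next state, reward).
true_rules = {
    (0, "a"): (1, 0), (0, "b"): (0, 0),
    (1, "a"): (2, 1), (1, "b"): (0, 0),
}

# Phase 1: interact and record what happens, i.e. learn the "rules".
learned = {}
for _ in range(100):
    s, act = random.choice([0, 1]), random.choice(["a", "b"])
    learned[(s, act)] = true_rules[(s, act)]

# Phase 2: plan using only the learned model, never the true rules.
def plan(s, depth=3):
    if depth == 0 or s == 2:  # state 2 is terminal
        return 0
    return max(r + plan(s2, depth - 1)
               for (st, a), (s2, r) in learned.items() if st == s)

print(plan(0))  # 1: the agent finds the rewarding path without ever
                # having been told the rules
```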

 

Silvija: So, if I try to simplify: MuZero-type algorithms applied to health could then diagnose cancer without having been told what patterns to look for, without being told this is the color, this is the type, this is the shape. They can figure out on their own what the game is, or what they are looking for.

 

Morten: Exactly, yeah. The patterns themselves it figures out anyway; it's more about what affects a certain system. When you do diagnosis, you typically do it in what we call a supervised way, meaning there is one correct answer per example. But sometimes you will say that maybe this medicine affects this patient differently than that patient, for example. Those kinds of rules are very closely related to individualized medicine, which we think is the future of medicine. I'm different from you, so perhaps I should have different medicine than you if I'm sick for some reason. It is more those types of application areas that MuZero can potentially be useful for, because the human body is a complex game with complex rules, and the rules for you and me are different, because we're different people.

 

Silvija: Very cool. So, finding the critical rules, or understanding what matters. Now we are approaching the question of whether they understand what they're doing or not. They can be very efficient even when we don't know what the real drivers are, and that is exactly where we humans need them to solve the problem.

 

Morten: Yes, absolutely. And they also played some other games which I think are very fascinating: poker, and a game called Hanabi. It was not DeepMind that played poker; it was another research group using the same type of technology. Poker is a much more human game to think about. Chess you can think of almost as a mathematical problem to solve: you just have to find the correct move. But when I play poker, or Hanabi, which is quite similar at least in the way of thinking, I have to understand: why are you bidding so much? Is it because you have good cards, or is it because you're lying to me? It requires that type of human knowledge: can it understand what I understand? As a human, I have what is called a theory of mind, which means that I understand myself, to some level at least, and I can also put myself in your mind: why are you saying what you're saying now? In the chess world that's not so important, but in Hanabi, this kind of card game, it's much more important; it's the essence of the game. So what does the AI understand? At least it understands at the level where it can find the pattern, because that's still what it does. But it can "put its mind", in quotation marks, into "other people's minds", also in quotation marks, and play accordingly. That's important, because that's what medical doctors do, for example: why are you saying that you have a bad knee? Maybe it's your knee, but maybe it's because you went running last week. It's essentially human to do that.

 

Silvija: This human understanding, finding, again, patterns in our behavior and our psychology, I think will be super important and a very interesting part of applying AI to social problems. And again, the question is: to how large an extent do we want the computer to tell us, and to how large an extent is this the core of our politics?

 

Morten: Yes, I think there are different levels there. But I think AI and other advanced digital technology will play a more and more important role in the political and societal parts of problem-solving. I don't think it will be the only thing, because it should be decision support. If a politician has to make a choice, what typically happens today is that they ask some experts; in Norwegian it's called an "utredning", a sort of official study of, say, what the effect would be of changing the education system in a particular way, and some expert has to find that out. Here, AI could play a significant role, saying: if you change it this way, this may be the effect; if you change it that way, that may be the effect. Then, in the end, it's a political and, in a way, an ethical choice: do I think it's more important that businesses thrive, or that everybody has a good healthcare system? Those are the human, ethical decisions, and at those elements AI is not so good.

 

Silvija: If I rephrase what I hear from you: it is a human-only responsibility to ask the right questions. Then the computers can be an immensely useful analytical tool; they can show you scenarios, they can compute many different outcomes. And then it's still a human decision, based on different priorities, since these are often multi-dimensional problems, to choose the right solutions.

 

Morten: I think you can look at the recent example of COVID. Whether or not to shut down has to be a political decision. But the potential spread of the disease, the potential impact of shutting down or of not shutting down, the potential impact of sending vaccines to Oslo versus another area, for example, could be, and to some extent are, AI-based analyses. But in the end, there must be some political, or at least political-like, system that decides, because it is such a human way of thinking.

 

Silvija: Yeah, so I think we have in many ways answered my question about healthcare, AI, and ethics, or the social side of it. We will want people in roles that require this human understanding of relations, of history, of politics. It's messy, but it's uniquely human, and we can't leave that to the machines; we want to use the machines as really great tools. We have about 10 more minutes in this lecture, and I'd like us to talk a little about chatbots, as you mentioned, and protein folding. And I would like to ask you about a very recent and relevant example: Corona, or COVID vaccination. Where was AI useful in the pandemic, and where wasn't it? Where do you want to start?

 

Morten: Let's start at the end, I think. AI wasn't used that much during COVID; it was mostly other techniques. But there are absolutely examples of COVID-related AI, for example prediction of disease spread through social media and other sources: predictions of where future outbreaks will be. Even more important, although those systems can also be built without AI, are resource-allocation methods. I don't think it was used in Norway at all, but what you can do, and what was done elsewhere, is to say that vaccines need to be sent to this area now, because some logistical issue means it will take longer than in that area, etc. There have been projects on this, even at my university, to make the allocation more efficient. It tells you about the potential for AI there, but it also tells you a bit about the limitation, because in the end this was mostly a human-driven effort. We want control; we want to be completely, 100% sure that it works perfectly. And, fortunately I would say, there are not many examples of pandemics in recent history that can be used as training input for those types of systems. So for, let's say, COVID-25 or COVID-29, AI may be more applicable and more useful, because it would be the third or fourth example. And then we can track back to AlphaFold, which pushes the same boundaries as game playing, because it's a very similar technique: not AlphaZero for game playing, but AlphaFold for protein folding. Protein folding, as probably some of your listeners know, is about the proteins in your body. If you stretch out a protein, it folds back up in a certain way, and from how it folds, the final structure, biologists and people with medical expertise can tell what the protein does: it binds to this molecule, so this medicine will work, or it doesn't bind to that molecule, so it will have this type of side effect. But given the sequence of a protein, which is often known in medicine, predicting the way it folds has been a big problem for many years. There's a big competition every second year where you can use AI or any other technique you like, and in recent years people have been using AI to predict these folding structures. And then DeepMind again, one of the big tech labs of the world, Google DeepMind, was able to more or less solve this protein folding problem. That means you can give it a protein and it can tell you how it folds. Just a couple of weeks ago they released a public article on it, and even open-source code doing the same. People were a bit worried about whether such an important technology would be owned by Google and only Google, or given out openly; in the end, they made it openly available. And Nature, the most famous journal, said, I think, that this will change everything, that this is the most important scientific discovery of the last 10 years. They philosophized, or speculated, I guess, that it will be the future of medicine in a couple of years; that it can be the future of reducing plastic waste, because you can design proteins that chew up plastic in some way; and that for environmental problems you can bind proteins to specific carbon compounds, solving part of that too. The key is that this was learned by an AI system; it shows the potential of this type of revolutionary technique. You might think AI is limited to Go and chess, etc. It turns out you can apply it to almost any other area. A lot of human intelligence, of course, but a lot of artificial intelligence too. And then you have the next revolution, which is what happened there.

 

Silvija: So, we’ll go back to chatbots in a couple of minutes, but I have to put in two replies here, Morten. One of my favorite books, as I mentioned to you before, is “The Deep Learning Revolution” by Terry Sejnowski, who is in my mind one of the fathers of this field, and I adore him and I adore the book. The other book I’m thinking of, one I enjoyed, is “American Prometheus”, and it has nothing to do with AI. It’s about Oppenheimer and the making of the atomic bomb. It refers, of course, to Prometheus, who was punished by the gods for giving fire to people – forever having an eagle eating his liver, or something terrible like that. The American Prometheus is Oppenheimer, who gave this immensely powerful weapon and tool to humanity. When he saw the first successful test of the atomic bomb, at the Trinity site I think, he apparently regretted it and said, “Now I am become Death, the destroyer of worlds.” I wonder if these fathers of AI ever have thoughts like that. Because when you talk about these incredibly useful, necessary things – proteins folded specifically to eat all the plastic, or kill certain kinds of bacteria, or attack certain kinds of viruses – these things can easily go wild; there is this exponential effect of everything data-driven, right? Or they can simply be used by people who are not suited for tools that are so globally powerful.

 

Morten: No, it is a dilemma, and a well-known one in technology. It’s often referred to as the Collingridge dilemma: it is very hard to understand the effects of a technology before it is put in place. Take regulation of technology, for example – that is practically impossible before you’ve seen the effects. Atomic bombs, AI. But once you do see the effects, it’s too hard and challenging to stop it, because it’s already in place. The hydrogen bomb, for example, was very hard to regulate after it was invented. Same with AI after protein folding is there. It’s such a powerful technique to use for good – making medicine – or for bad – making artificial viruses that we could send out if we wanted to. You could say the AI founders are Shivas as well, destroyers if they want, but they’re also the builders and creators. And we should, at the very least, not stop creation just because the potential for bad is there, because then we might as well say: let’s not advance technology at all. But when we see problems, there needs to be regulation, either nationally or internationally. The EU has a proposal for AI regulation that came a couple of months ago, and they do something I think is very smart: they look only at the effects, not at the technology. They don’t say, don’t use deep learning or don’t use statistical techniques in this area. They say that if you have a high-risk system, such as medical diagnosis, it’s particularly important to explain why such a system makes a decision, for example. And if you’re at low risk, such as in a game-playing system, it’s not so important – you can make an AI that is the bad guy in games, that’s fine. If you do things like surveillance, that should be done only by the police or a governmental agency that is controlled. So you have these risk levels, as they say, and these ways of regulating it. It’s all about the effect on the people it is used on in some way. I think this is a very smart way of thinking, because you don’t limit yourself to today’s technology, which is changing so fast and rapidly becoming better – and worse, if you use it for bad things. At the same time, it is something we need to explore and understand. Is it like an atomic bomb that destroys us? Hopefully not, but it potentially could, so we need to be careful. But the important thing is that technology makes our lives better in so many ways, so we should be positive, I think.

 

Silvija: I completely agree. It’s a kind of Pandora’s box – you can’t close it. If one person doesn’t invent it, another one will; I think it has a force of its own. But we must think about the future optimistically and lovingly, as wishy-washy as that sounds. And we must learn about this technology; otherwise, we can’t have any opinions about what’s good and what’s bad, right? Or they will be irrelevant opinions. It’s all about educating people.

 

Morten: Yes. Which is why we have this masterclass, isn’t it?

 

Silvija: Exactly. Listen, Morten, we are going to finish lecture two now, and maybe we’ll save the chatbots for the next lecture, where we talk about the tools, tricks, and tactics of this trade and how they’re applied in chatbots. Does that sound okay?

 

Morten: Yeah, so you mean that we move the chatbots part to the next one?

 

Silvija: Yes.

 

Morten: Yes. That’s fine, we can talk about chatbots, absolutely.

 

Silvija: Cool!

 

 

You have now listened to a podcast from Lørn.Tech, a joint learning effort about technology and society. You can now also get a learning certificate for listening to this podcast at our online university, Lørn.University.

Lesson 3 - ID:M0005c


Welcome to Lørn.Tech – a joint learning effort about technology and society. With Silvija Seres and friends.

 

 

Silvija Seres: Hello, and welcome to Lørn Masters on AI. My guest is Morten Goodwin, and we are in lecture three. This lecture is going to be about: where do you go now? How do you get started using AI in practice? The tools and tactics of this trade. So, Morten, welcome back. What would your starting advice be? And I don’t mean for entrepreneurs, but for anybody thinking: maybe I should use AI?

 

Morten Goodwin: Well, there are many ways to start. I think the first is to ask: what is the problem you’re trying to solve? You should have this problem-solving mindset before pouring AI on top of it. Most people are experts in some domain, meaning they have professional experience in, I don’t know, boating or health or finance, etc., and that is often a good place to start. Understand what you’re trying to solve outside of the AI world first: what am I trying to predict? What am I trying to classify? What am I trying to categorize? Understand that at some level of detail, and then you can think about applying AI. When you’ve done that, it’s not necessarily so that you have to be a full-scale nerd who understands everything, but at least try to play with some examples. There are a lot of resources out there you can use to learn the tricks of the trade and the basic examples. What I personally often advise is to enter some of those online competitions, where you get data that is already well chewed and nicely cleaned up. There are many such competitions – in the last podcast we talked about the DARPA Challenge competition, and there’s also Kaggle, for example, which hosts these types of competitions on almost anything. Even NORA, the Norwegian AI society, has dataset competitions; there’s one out now, and they run regularly. The current one is some sort of medical diagnosis. It’s good to use those competitions as a starting point, because you can learn a lot from a very concrete example while avoiding the messiness of real-world data. It is very concrete about exactly what you’re solving, so you abstract away most of the real world, and then you become an expert on the AI.

 

Silvija: Just so I understand: it’s a sort of sandbox game where they clean and prepare the data within a particular problem, and then they give you that problem. They say: whoever solves this problem with the highest precision, or the most interesting solution algorithm, wins.

 

Morten: Yes, that’s roughly it, exactly. It’s not necessarily only the highest precision or accuracy; sometimes it’s accuracy plus a level of explainability, for example. That means you cannot just submit a black-box system from a company that doesn’t tell you anything – you have to explain at the same time. This NORA competition, for example, has explainability as a criterion. So you get scored on two counts: one is how accurately you diagnose malignant tumors, which is the task in this case, and the other is how good you are at explaining why something is a malignant tumor or not. Two important criteria, and competitions often measure more than just accuracy, depending on the problem area. Sometimes you just want a highly accurate system; sometimes you need a bit more, and competitions are very good at that. As a bonus, when you do these competitions, you become an expert on your own solution. And when the competition is finished, in most competitions – at least the NORA one – all the submissions are published openly, and then you can see what the other submissions did that you did not. Maybe you came in 10th, and there are nine people better than you; then you can learn a lot from those examples, already having expert knowledge of the dataset. So, it’s a learning experience, but a very controlled one.
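
For listeners who want to try this at home, here is a minimal sketch, in Python with scikit-learn, of a competition-style workflow on a bundled tumor dataset: score a model’s accuracy, then look at feature importances as a first, crude stand-in for the explainability criterion Morten describes.

```python
# A minimal sketch of a competition-style workflow: train a classifier on a
# clean tumor dataset, score its accuracy, and inspect a crude form of
# explainability via feature importances. Uses scikit-learn's bundled data.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("Accuracy:", accuracy_score(y_test, model.predict(X_test)))

# A first stab at "why": which features the forest relied on most.
names = load_breast_cancer().feature_names
for i in model.feature_importances_.argsort()[::-1][:3]:
    print(names[i], f"{model.feature_importances_[i]:.3f}")
```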

 

Silvija: Can I just stop you for a second? Before people dare to start playing with competitions – and I don’t know how many people with full-time jobs and too much other stuff going on have the time – you mentioned courses. I just want to comment, first of all, that we are not trying to compete with university courses. This is an inspirational conversation, to open up a subject and make people want to take one of those courses. And then my question is: which courses are available digitally, for somebody who can listen to them in the evening? I know that you have courses for grownups – continuing education, lifelong learning – at the University of Agder, but are they physical? And are they full-time? That is my question.

 

Morten: So, my courses at the university are for people who already have some computer science background but want to learn AI. They’re not full-time; they’re part-time. We have one physical session, and the rest is digital, so that we can meet and greet, etc. For that particular need it’s very good, but if you want something fully digital, there is a lot. You just have to Google. Coursera has a great series at different technical levels, and I think Stanford has a great series that is quite heavy; if you’re interested in that sort of thing, a person called Andrew Ng is key to that one. Those are more like free evening lectures: you watch video lectures, then submit some code or a thought experiment, get feedback on it, and get a small diploma or confirmation that you’ve completed it. Even my students do that, because they get input they cannot get from me. Then there is one that is rather basic, called “Elements of AI”, which is international – I think it started in Finland, but at least it is now available in Norway. That is a very good starting point. You have to think about what level you want to be at. Do you want to be a developer? Then you need to go heavy into the material. Are you a user? Then maybe Elements of AI is enough, or some YouTube videos or a Coursera course. Or if you want to be an entrepreneur, innovating in some area, maybe it’s good to get your hands a bit dirty and do a competition – a Kaggle competition, a NORA competition, or something like that.

 

Silvija: Yeah, I agree. My problem is that when people ask, and you start looking for courses for them on the net – my favorite is the Andrew Ng course from Stanford that you already mentioned, but it’s heavy, you know? It takes time, and it requires quite a lot of prior skills. Elements of AI is great, but it’s also a bit too technical for many people. The question is whether we should maybe start advising people to look for good TED talks on a particular topic as well. They’re usually fun, short, and very colorful. And if we could think together with NORA – and this is an invitation to dance – about some sort of AI playschool for grownups: a way into this topic without having to get your hands dirty with any programming. Because I think most of our audience will not be implementers of a solution; they will be buyers of a solution. We have to help them become good at understanding: what’s the problem? What data do I need? What are my hypotheses? And then, whom do I go to to get this implemented?

 

Morten: Yeah. I like your idea, and of course we want to dance. NORA partly does that already, as you said: we have these weekly webinars, and the key is that every second week an academic – a professor, associate professor, or lecturer – presents something, and every other week the startup environment has its turn, meaning people who want to do business, who want to innovate in some way. It may not be heavy research, maybe it’s research-inspired, but it’s more about the tools of the trade. I would recommend people follow the ones they find interesting. There’s probably a lot at Lørn as well that you can use. But it is about understanding: if I do voice recognition, these are the available libraries; if I do image recognition, these are the available libraries; if I want to do prediction in medicine, these are the available tools. There is a lot out there, and it’s a jungle.

 

Silvija: I want us to talk a little bit about libraries as well. I think many people think: if I’m going to make an AI do something interesting with medical images or traffic images, I don’t have the millions of dollars it takes. But there are these amazing libraries that have been built. If we could just learn about the four or five most famous ones internationally, and then maybe you could tell us a little bit about developments in Norway as well.

 

Morten: Absolutely. 

 

Silvija: I just want to give people an image to keep in mind, and please correct me if I’m wrong: TensorFlow by Google is a language AI library? Is it for both written and spoken language, or just spoken?

 

Morten: No, it’s a software library. So it’s written, yes – written code.

 

Silvija:  But is it for spoken language only?

 

Morten: It’s for anything. The key with TensorFlow from Google is that you can develop these neural networks from scratch if you want, but there are also a lot of pre-trained models – for language recognition, for example, or image recognition. When you train an AI system, you very seldom start from scratch, because it’s very costly. You usually start from somewhere, and TensorFlow, as a key example, gives you that starting point, whether you want to do image recognition or voice recognition.
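
To make that concrete, here is a minimal sketch, in Python, of starting from a pre-trained TensorFlow/Keras image model rather than training from scratch. The image path is a placeholder, and MobileNetV2 is just one of several bundled pre-trained networks.

```python
# A minimal sketch of starting from a pre-trained model in TensorFlow/Keras,
# rather than training from scratch. "photo.jpg" is a placeholder path.
import numpy as np
import tensorflow as tf

# Load an image classifier pre-trained on ImageNet.
model = tf.keras.applications.MobileNetV2(weights="imagenet")

# Prepare one image the way the network expects (224x224 RGB).
img = tf.keras.preprocessing.image.load_img("photo.jpg", target_size=(224, 224))
x = tf.keras.preprocessing.image.img_to_array(img)[np.newaxis, ...]
x = tf.keras.applications.mobilenet_v2.preprocess_input(x)

# Predict and decode the top-3 ImageNet labels.
preds = model.predict(x)
for _, label, score in tf.keras.applications.mobilenet_v2.decode_predictions(preds, top=3)[0]:
    print(label, f"{score:.2f}")
```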

 

Silvija: So again, please help my failing memory here. I remember, about two years ago I think, when Google turned on deep learning in Google Translate – using TensorFlow, I guess. Google Translate had been working for many years, but it was quite easy to see that a text was translated by Google Translate rather than by a human. And then once they turned on deep learning, I think using libraries from TensorFlow, people were shocked by the sudden, practically overnight improvement in quality, where it suddenly started translating almost as well as a human.

 

Morten: Yes, exactly. They turned on deep learning, and they used TensorFlow as a library – more specifically a part of it called BERT, a very powerful language model from Google. We often measure translations with something called the BLEU score, and it reached a BLEU score near human perfection – not at it, but near. That means that for language pairs where you have a lot of translation examples, it became very good, almost like a human. For cases where there are very few examples, it falls short, and it still does. That BERT, TensorFlow-based system is in Google Translate, and it is in Google Search. Search used to be a kind of word lookup, but now it’s a deep learning-based system, which means that if you search for something, it finds synonyms and information related to your search, not only the exact words, which was the old-fashioned way. So most people have already used these systems – not as libraries, but as tools. The competitor to TensorFlow is called PyTorch, which is from Facebook, so they’re competing with each other. TensorFlow develops some mechanism and pre-trains a lot of algorithms, then PyTorch does the same. They’re like Coca-Cola and Pepsi, continuously innovating against each other. Without Pepsi, Coca-Cola would be worse, and it’s the same here – they push each other.
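
As a small aside, the BLEU score Morten mentions can be computed with standard tools. Here is a minimal sketch in Python using NLTK, with made-up example sentences; real evaluations use many sentence pairs and often multiple references.

```python
# A minimal sketch of computing a BLEU score with NLTK, the kind of metric
# mentioned above for judging machine translation against a human reference.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = [["the", "cat", "sits", "on", "the", "mat"]]   # human translation(s)
candidate = ["the", "cat", "sat", "on", "the", "mat"]      # machine translation

# Smoothing avoids zero scores for short sentences with missing n-grams.
score = sentence_bleu(reference, candidate,
                      smoothing_function=SmoothingFunction().method1)
print(f"BLEU: {score:.2f}")  # 1.0 would be a perfect match with the reference
```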

 

Silvija: For people who are not masters of computer science, how do they get any value out of these libraries? Should they just know what’s possible and available, and talk to somebody who will do the programming for them, or do you expect people to learn Python? Because you and I can say it’s just another language, but for somebody who has never programmed, it’s insurmountable.

 

Morten: If you’ve never programmed, I think going straight into TensorFlow or PyTorch is too long a stretch. You can of course get there, because you can learn, but it’s challenging. For those libraries, it is more useful to understand what is available, and then ask someone to help you with the software programming. There are also some visual systems you can test – one is called Orange and one is called KNIME, spelled with a K at the beginning. They’re the same type of thing: you draw boxes and connect them, so it looks a bit like PowerPoint. You put in your data, you put in your algorithm, you put in what you want to do, and then it does it for you. At least for learning what is happening, that is very useful. When you put something into production, you probably want a proper development setup. But then there are a lot of possibilities and people to collaborate with: NORA, as we mentioned, and a lot of university groups, like CAIR, which I’m particularly fond of because it’s my home turf. We also have several AI hubs that help in this way, meaning that if you’re a company that wants to innovate with AI, I suggest you look for some of those places. NTNU has the Norwegian Open AI Lab, much of the rest of the academic world is connected through NORA, including NORA.startup, and there are several others as well. These are the ways to connect AI researchers and programmers with entrepreneurs and innovators, because we researchers often just want a problem to solve – and we have the competency.

 

Silvija: I think it’s good that these research environments are now connecting to real-world industrial needs. There is also a group of students at your university who can act as chaos navigators, chaos pilots, in the world of AI, so you can go to them. They can help you dissect your problem and come up with a first outline of a solution, which you then take to a programming environment. And I think most consulting companies by now know how to make AI part of your system.

 

Morten: I think you’re pointing to something very important, because the first step is always to understand: can I use AI at all? Do I have enough data? Is the data discriminative enough – can I separate between the classes I care about? That requires some knowledge, but students can do it. And the connection to real-world problems you mention has happened very recently, partly because the Norwegian Research Council funds a lot of projects and innovation in this area, but also because AI has moved far beyond the toy examples of 10 years ago. Now we are solving real-world problems, because they are so much more interesting – and because AI can actually do it. That is happening all over the world. As we talked about in another podcast, it’s like a Cambrian explosion: when people understand that they can solve these complex things with AI, everybody does. If you have an idea, go ahead, because soon somebody else will do it before you.

 

Silvija: Very good. I think we’re going to stop our third conversation, our third lecture, here, and in a couple of minutes we will continue with a workshop. I very much look forward to it, because it will help me start using AI on my own example, which is Lørn.

 

 

You have now listened to a podcast from Lørn.Tech, a joint learning effort about technology and society. You can now also get a learning certificate for listening to this podcast at our online university, Lørn.University.

 

Lesson 4 - ID:M0005d


Welcome to Lørn.Tech – a joint learning effort about technology and society. With Silvija Seres and friends.

 

 

Silvija Seres: Hello, and welcome back to the Lørn Masters series with Morten Goodwin on AI. This is our fourth lecture, our fourth chat. The topic of this fourth session is a workshop – a playshop, really – where we are sandboxing AI in Lørn. I have my own company that I need to worry about, and as we’ve heard, AI is useful for any company, any industry, any function. We are basically going to do a 15-minute exploration of what AI can do for Lørn. Where do I start? How do I get this going? And what I’m hoping is that the audience, as they hear me discuss this with Morten, think about their own company, their own function, and what this would mean for them. Sounds good?

 

Morten Goodwin: That’s excellent, exactly. Once you start with your own ideas, in the area where you’re the expert – in this case your Lørn environment – that’s where you see the potential. And that’s what most people should do. They should say: this is the area I really know; if I want to do AI, there’s no better place to start. Sounds perfect.

 

Silvija: I have to admit that Morten and I have already spoken quite a bit about where we should use AI in Lørn, but I’m going to start from scratch now. So: Morten, lovely seeing you, I haven’t seen you for a couple of years. I have this company called Lørn. It does corporate education in digital format, and we are trying to get as many people as possible to listen to the educational content we put out there. I would love to use AI. What are your thoughts? Where do I start?

 

Morten: My head is filled with thoughts and possibilities. The first thing to ask is: what problem are you really trying to solve? Then you delve down into it. One problem could be to personalize content for each person. If I go to Lørn, I may be interested in some parts – technology, probably – but maybe other parts as well; other people may be interested in political issues, health issues, etc. What comes very naturally is to say that, similarly to how Netflix, YouTube, and Facebook all use AI to personalize content, we have never really seen that done at a satisfactory level in education. When I teach, I teach a couple of hundred students at the same time, but I know they’re individuals, and they need the content produced and delivered differently. I think that is exactly one of the key areas where you could use AI in Lørn. If I come to Lørn’s system as a completely new student, maybe I first state a few of my interests – I’m interested in computer science, this, this, and this – and then the AI system can filter content accordingly. That’s relatively easy. But as you go along, you can collect a bit of data: what am I clicking on? How long am I watching a video – just the first two minutes, or the whole thing? Am I scrolling past videos I never click on? These things tell you a lot about the behavior of the students. Of course, you should tell them you’re collecting data, even when it isn’t obvious. Then you can find the patterns, and from the patterns understand each person’s preferences, both in content and in learning style. Some people learn better by reading, some by listening, some in collaboration – and in most cases it’s a little bit of each. Meaning that if you watch a video, maybe it’s time to discuss it with another person afterwards, and if you’ve discussed for some time, maybe it’s good to get some new content in.
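
As a toy illustration of the Netflix-style personalization Morten describes, here is a minimal sketch in Python of item-based collaborative filtering: recommend the lesson most similar, by viewing patterns, to one a learner already finished. The watch-time numbers are entirely made up.

```python
# A minimal sketch (hypothetical data) of item-based collaborative filtering:
# recommend content similar to what a learner already engaged with.
import numpy as np

# Rows = learners, columns = lessons; values = fraction of each video watched.
watch = np.array([
    [0.9, 0.8, 0.0, 0.1],   # learner 0
    [0.7, 0.9, 0.1, 0.0],   # learner 1
    [0.0, 0.1, 0.8, 0.9],   # learner 2
])

# Cosine similarity between lessons, based on who watched them.
normed = watch / (np.linalg.norm(watch, axis=0, keepdims=True) + 1e-9)
sim = normed.T @ normed

# For a learner who finished lesson 0, suggest the most similar other lesson.
scores = sim[0].copy()
scores[0] = -1                     # don't recommend the same lesson again
print("Suggest lesson", int(scores.argmax()))
```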

 

Silvija: So, if I’m breaking this down: you started from the understanding you already have of our business model, and I remember us discussing that. You were asking me: “What’s your business model? What’s your ambition? Where do you want to go?” We want growth in two areas. One is traffic and visibility – we want more people to see our great content. The second is the effect of our content: we want to see that people are really learning more, that they are getting out of their echo chambers and learning new areas, because the future is very cross-functional and complex. We want to see that they learn something every day, and in different modes – sometimes a podcast, sometimes a quiz, sometimes a video. So, it’s about driving those things. Then I remember you asking the other question: “Where are you unique? What can you use that is unique to you and will also give you unique data?” Our unique strengths are this library of 1000 cases of innovation, our subscribers, and our past traffic. We also have companies that come to us to make tailored series on innovation – content that is very tailored but relevant for their whole industry. So, there’s something unique about the content. And then I remember you asking: “Okay, so how do you gather data that can help you?” We can have student data, which we just talked about: how often is each person in the system, what do they learn, what’s their background? We can combine this to create personalized learning paths, and group students into collaborative learning groups. I think you also suggested finding a way to measure the learning experience and optimizing it, in some sort of dashboard. Those are the student things, and we can dive further into them. The other area we have dived into before in this workshop is the text, the content itself. We have a lot of content, like Coursera does, but when you search for something – you can search for AI – it’s not structured very well. You were asking me: if I want to learn about earthquakes and building technology, do you have anything for me? And I’m like: well, I have building technology, but I haven’t tagged anything with earthquakes. So we were saying, maybe AI could be useful for tagging your content, looking for sentiment, looking for clusters. So, let’s look at the two areas separately. One is: how do we help students? The other is: how do we make the most of our content?

 

Morten: Yeah, I think it’s a good division. Exactly as you say: one is to guide the student in the right way, and the other is content-wise, the systemizing of your platform. What immediately comes to mind is, for example, automatic transcription of these videos and podcasts, which is quite easily done with AI. It’s impossible, or at least very impractical, to think that somebody should tag absolutely everything manually – it’s way too much work. AI could do that. Whatever I’m saying right now could be transcribed and then categorized, etc. And through interaction with students, you can then say that this content is similar to that content: if you’re interested in earthquakes, maybe you’re interested in tornadoes, because there is some similarity; or if you’re interested in earthquakes, maybe you’re interested in agriculture, because that’s somehow related. You can see this both word-wise – earthquake relates to disaster, which relates again to tornado – and behavior-wise: what are people watching at the same time? A person who is an expert on disasters will maybe search for both at once, and if two items are accessed together, there’s probably some grouping there. For that part, you could use unsupervised learning, meaning you don’t need guidance saying this is correct and this is wrong. You just say: this material is used at the same time, so it probably belongs to the same general area; that material is also used together but differently from the first, so it probably forms another group. Of course, you could do all of this manually, but it’s very tough.
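
Here is a minimal sketch in Python of the unsupervised grouping Morten sketches, using TF-IDF text features and k-means from scikit-learn. The four mini “transcripts” are invented for illustration; no labels are given, yet related documents end up in the same cluster.

```python
# A minimal sketch of unsupervised grouping of content with scikit-learn:
# TF-IDF text features plus k-means clustering. The documents are toy examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = [
    "earthquake damage to buildings and construction standards",
    "tornado warnings and disaster preparedness",
    "machine learning for medical diagnosis",
    "deep learning in healthcare imaging",
]

# Turn each transcript into a TF-IDF vector.
X = TfidfVectorizer(stop_words="english").fit_transform(docs)

# Group the vectors into two clusters, with no labels given.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for doc, label in zip(docs, labels):
    print(label, "-", doc[:45])
```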

 

Silvija: Can I ask you one more question? This particular conversation is in English, but 90% of the content we’ve created at Lørn is in Norwegian. Is there anyone in Norway – and we’ve talked about this before – any really good groups that are good at doing this in Norwegian?

 

Morten: Yeah. In the previous lesson we talked about BERT, the English-language model. For a long time, AI was limited to English and some other big international languages, like French and Spanish. But now there’s something called NorBERT, a Norwegian version of BERT, trained mostly by the University of Oslo and some collaborators. Lilja Øvrelid, a very talented and intelligent professor at the University of Oslo, is leading it. This library is publicly available: you can just download and use it, you don’t have to contact Oslo – though a little bit of programming experience is needed, of course. With it you can automatically tag, systemize, and transcribe content nearly as well as in English. It’s an amazing opportunity that I think most people are not aware of. And when we say Norwegian, it’s not only Bokmål but also Nynorsk and the Sami languages. I’m not sure whether Lørn has content in those, but it’s a possibility if you want to go that way.
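
As a hedged sketch of how one might try a Norwegian BERT in Python: the Hugging Face transformers library can load published models by id. The model id “ltg/norbert” is an assumption here and may differ – check the University of Oslo language technology group’s published models – and the [MASK] token follows the standard BERT convention.

```python
# A minimal sketch of using a Norwegian BERT through the Hugging Face
# transformers library. The model id "ltg/norbert" is assumed and may
# differ; check the University of Oslo group's published models.
from transformers import pipeline

# Fill-mask is the task BERT-style models are trained on directly.
fill = pipeline("fill-mask", model="ltg/norbert")

# Ask the model to fill in the blank in a Norwegian sentence.
for candidate in fill("Oslo er en [MASK] i Norge."):
    print(candidate["token_str"], f"{candidate['score']:.2f}")
```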

 

Silvija: Very interesting. And I think Lilja would also be interested in getting her hands on more data for better training. So, if people have a lot of really good content in Norwegian, there might be a very good collaboration opportunity for research as well.

 

Morten: As a researcher, I would say probably yes, because that’s what we’re always looking for: where can I get data? Where can I solve a new, interesting problem that hasn’t been solved yet? So yes – both the language groups in Oslo, led by Lilja, and I would say also the technology-pedagogy groups in Oslo and elsewhere would probably be a good fit, because it is this interdisciplinary world you’re talking about. It’s a lot about technology, absolutely, but it’s also about language, which is closely related to AI, and about finding the best educational path, which is very related to psychology, but not the same. So, it probably boils down to: what are you solving? What is your business model? What makes you unique? And where can the technology support it? I would recommend an iterative process, because when you first develop something, it will work sometimes, but not always. And whenever something doesn’t work, you should have some reinforcement mechanism: “when I searched for earthquakes, I got cookie recipes – this is not relevant to me, so it’s wrong” – and then feed that back into the system.

 

Silvija: Two more minutes, Morten, and here is what I want to ask you. You also had this idea of anomalies – both in students’ learning and in content – so it’s cross-connecting the data. We could see which content gets used most, and which content makes people drop off at a certain point. AI could give us some ideas across these two topics as well.

 

Morten: Yes. Anomalies are a very interesting way to think about it, because you don’t need to say this is a good student or a bad student, a talented person or not, good content or bad content. You can just look at the data and see: this is a bit different from the system in general. Maybe there is someone with a different experience or background than we expect, and maybe he or she should be guided to another learning path. It could be a person with a learning disability, for example, which ideally should be detected early on. But it can also be content that falls out of place – maybe it’s, I don’t know, an AI talk where you talk too much about viruses, or whatever. There are a lot of these types of anomalies that are very useful to detect, especially when you have a lot of data and the data is a bit skewed: most of it works perfectly, but maybe 5% doesn’t. In disease prediction and emergency management this is often the case, and I think in learning as well. The majority of the data follows the main trend, a few data points don’t, and those are maybe the interesting ones, the ones you want to improve on. So, anomaly detection is central, I think.
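
To make the idea concrete, here is a minimal sketch in Python of flagging unusual learner behavior with scikit-learn’s IsolationForest. The engagement numbers are invented; a flagged learner is a candidate for a different learning path, not a verdict.

```python
# A minimal sketch of anomaly detection on engagement data with scikit-learn's
# IsolationForest. The feature values are made up for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

# Columns: minutes watched per week, fraction of quizzes passed.
engagement = np.array([
    [120, 0.90], [110, 0.85], [130, 0.95], [115, 0.90],
    [125, 0.88], [118, 0.92],
    [5, 0.10],    # a learner who may need a different path
])

model = IsolationForest(contamination=0.15, random_state=0).fit(engagement)
flags = model.predict(engagement)   # -1 marks an anomaly, 1 marks normal
print(flags)
```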

 

Silvija: So, if I’m summarizing what you have been playing through with me, now and before: understand your problem and your goals; understand your uniqueness and your strengths; based on this, create some hypotheses about how you want to drive this; and then figure out how to get data to support it. From there on, you can get help.

 

Morten: Yeah – or you can develop it yourself. But understand the problem, understand the data you have. Then, if you decide to talk to AI experts, your conversation will be much more meaningful, because those are the questions they will ask: what are you trying to solve? Do you have data? And if you already have the answers, you’re spending your consultancy money much, much better.

 

Silvija: Very good. Morten Goodwin, thank you so much for inspiring us to learn more about AI over these chats.

 

Morten: It has been a pleasure, thank you.

 

 

You have now listened to a podcast from Lørn.Tech, a joint learning effort about technology and society. You can now also get a learning certificate for listening to this podcast at our online university, Lørn.University.

 
