Sense-Making in a Changing World

Episode 61: What Future with AI? with Mo Gawdat and Morag Gamble

October 01, 2021 · Morag Gamble: Permaculture Education Institute · Season 2, Episode 61

In this episode of Sense-Making in a Changing World, I am joined by quite a different guest than you will usually find on this show, to explore something that already has enormous implications for our lives and work, and will have even more: Artificial Intelligence (AI).

Mo Gawdat is the author of Scary Smart: The Future of Artificial Intelligence and How You Can Save Our World, released on 1 October 2021, and of the 2017 international bestseller Solve for Happy.

While I am right on the edge of my field of knowledge, experience and comfort here, this is Mo's world. He has worked as a big-tech executive for 30 years and was formerly the chief business officer of Google [X] - the 'moonshot lab' exploring things like driverless cars and AI - and he candidly shares his thoughts on the dangers and possibilities.

Woven through our discussion is also an exploration of purpose. As an executive at one of the biggest companies in the world, he says, the richer he got, the unhappier he became - something he noticed in many people around him. He started deeply researching how to find happiness and purpose. Then, tragically, in 2014 he lost his 21-year-old son, Ali, in what should have been a routine operation. Mo completely changed his life and launched his personal moonshot project, 1 Billion Happy (#1BillionHappy), in honour of his son. Last year he also started a podcast, Slo Mo, discussing the profound questions and obstacles we all face in the pursuit of purpose in our lives.

Back on the topic of AI, Mo says the current direction this technology is taking puts humanity at risk to an unprecedented degree - something we need to pay serious attention to. We cannot simply ignore it and hope it will go away. Elon Musk is also on record saying that AI is a greater threat than nukes. Mo says the artificial intelligence genie is already out of the bottle, and AI is already 'smarter' than humans at the tasks we assign it. In his latest book, he explores what we can all do now to teach ourselves and our machines how to live better, and to think deeply about what our future could be with AI.

I wonder, after listening, what do you think?
________________________________


This podcast is an initiative of the Permaculture Education Institute.

Our way of sharing our love for this planet and for life is by teaching permaculture teachers, who are adapting permaculture locally around the world - finding ways to apply the planet-care ethics of earth care, people care and fair share. We host global conversations and learning communities on 6 continents.

We teach permaculture teachers, host permaculture courses, host the Our Permaculture Life YouTube channel, and offer a free monthly film club and masterclass.

We broadcast from a solar powered studio in the midst of a permaculture ecovillage food forest on beautiful Gubbi Gubbi country. I acknowledge this is and always will be Aboriginal land, pay my respects to elders past and present, and extend my respect to indigenous cultures and knowledge systems across the planet.

You can also watch Sense-Making in a Changing World on YouTube.

SUBSCRIBE for notification of each new episode. Please leave us a 5 star review - it really does help people find and myceliate this show.

Morag Gamble:

Welcome to the Sense-Making in a Changing World podcast, where we explore the kind of thinking we need to navigate a positive way forward. I'm your host Morag Gamble - permaculture educator and global ambassador, filmmaker, ecovillager, food forester, mother, practivist and all-round lover of thinking, communicating and acting regeneratively. For a long time it's been clear to me that to shift trajectory to a thriving one-planet way of life, we first need to shift our thinking. The way we perceive ourselves in relation to nature, self and community is the core. This is true now more than ever - even the way change is changing is changing. Unprecedented changes are happening all around us at a rapid pace. So how do we make sense of this? How do we know which way to turn, and which actions to focus on, so our efforts are worthwhile and nourishing and work towards resilience, regeneration and reconnection? What better way to make sense than to join together with others in open, generative conversation.

Morag:

In this podcast, I'll share conversations with my friends and colleagues, people who inspire and challenge me in their ways of thinking, connecting and acting. These wonderful people are thinkers, doers, activists, scholars, writers, leaders, farmers, educators - people whose work informs permaculture and sparks the imagination of what a post-COVID, climate-resilient, socially just future could look like. Their ideas and projects help us make sense in this changing world, to compost and digest the ideas, and to nurture the fertile ground for new ideas, connections and actions. Together we'll open up conversations in the world of permaculture design, regenerative thinking, community action, earth repair, eco-literacy and much more. I can't wait to share these conversations with you.

Morag Gamble:

Over the last three decades of personally making sense of the multiple crises we face, I have always returned to the practical and positive world of permaculture, with its ethics of earth care, people care and fair share. I've seen firsthand how adaptable and responsive it can be in all contexts, from urban to rural, from refugee camps to suburbs. It helps people make sense of what's happening around them, learn accessible design tools, shape their habitat positively, and contribute to cultural and ecological regeneration. This is why I've created the Permaculture Educators Program: to help thousands of people become permaculture teachers everywhere, through an interactive online dual certificate of permaculture design and teaching. We sponsor global Permayouth programs, women's self-help groups in the Global South, and teens in refugee camps. This podcast is sponsored by the Permaculture Education Institute and our Permaculture Educators Program. If you'd like to find out more about permaculture, I've created a four-part permaculture video series explaining what permaculture is, and also how you can make it your livelihood as well as your way of life. We'd love to invite you to join a wonderfully inspiring, friendly and supportive global learning community. I welcome you to share each of these conversations, and I'd also like to suggest you create a local conversation circle to explore the ideas shared in each show, and discuss together how they make sense in your local community and environment. I'd like to acknowledge the traditional custodians of the land on which I meet and speak with you today, the Gubbi Gubbi people, and pay my respects to their elders past, present and emerging.

Morag:

This episode of Sense-Making in a Changing World is somewhat different. I'm right on the edge of my field of experience, knowledge and comfort, exploring AI - artificial intelligence - what it is capable of, how it can help, and why we should be very, very careful. My guest is international best-selling author Mo Gawdat. Mo has 30 years of experience in tech at all the big tech giants and was formerly chief business officer for Google X, the so-called moonshot factory - an innovation lab committed to addressing the big issues of our time with tech, playing with things like self-driving cars, robotics and AI. Mo had his dream job. He could buy anything he wanted. He was rich, but he was incredibly unhappy. Then, tragically, on top of this he lost his 21-year-old son, Ali, in a simple surgery that went very wrong. At this point he turned his life around, and in memory of his son he made happiness his project. So we talk a lot about happiness in this show, and about his 1 Billion Happy project. He's the best-selling author of a book called Solve for Happy, and his latest book, out this week, is Scary Smart: The Future of Artificial Intelligence and How You Can Save Our World. Like I said, this conversation is different than usual, but I think it's important to explore these fertile edges of thinking, even if it's something that unsettles us. Elon Musk said in a Rolling Stone interview that climate change is the biggest threat humanity faces this century - except for AI. I hope you enjoy this conversation with Mo Gawdat. Welcome to the show, Mo. It's an absolute delight to have you here with me across the oceans - we're on opposite sides of the Pacific Ocean. I believe you're based in California, is that right?

Mo Gawdat:

No, no, I'm not based anywhere. It's a very strange life that I lead. I do have a coffee machine in Dubai - that's sort of what I call home - but I've been traveling, though not the way I used to travel in my corporate career, where I simply flew around like a headless chicken all the time. Now I fly to places, spend months there, integrate with the local culture and become part of the life there. So this year I've lived in Dubai and in Amsterdam and in Greece, I lived a month in the Dominican Republic, and now I'm in London.

Morag Gamble:

I feel like I'm quite the stay-at-home compared to you in Australia here. We're not allowed to go anywhere. So I've been very much at home and I've learned to slow down actually, and enjoy my place a lot more in that time.

Mo Gawdat:

I've done that too. I mean, it seems there is a very big correlation between winter and lockdown. When it's summer in the Northern hemisphere, things are not fully open - there are still restrictions and guidelines everywhere, and I am very much for respecting and upholding those guidelines - but when it's winter, it's very, very restrictive wherever you are in the Northern hemisphere. I know you guys are in spring now, so hopefully soon we'll be the ones sitting home and reflecting, and you will be the ones walking around. I call it a half-monk life: basically half of the year you're in retreat and half of the year you're out trying to engage in life.

Morag:

Yeah. So it's a very different life you're leading now to the one you were leading maybe a decade ago. You were heading up programs at Google X and living the tech life for decades, and then something changed and you took a completely different approach to life. One of your programs is 1 Billion Happy, and I've been reading up about that. Maybe we could start there: what is 1 Billion Happy all about, and what does happiness mean to you?

Mo:

So, 1 Billion Happy. You're right, by the way - I had two very complete lives, I think. One life as a tech executive: I worked at IBM, Microsoft and Google at the time those companies were changing the world, and got to the height of those careers. At Google I was vice president of emerging markets for seven years and opened almost half of Google globally, and then I became chief business officer of Google X. Basically the typical life you can expect for a tech executive: constant travel, constant stress and decision-making and speed and pace and so on. A life that is rewarding, if you measure success by the material possessions the world convinces us we should aim for. But I wasn't very happy in that life, which is really interesting. I had everything that everyone dreamed of, and I was miserable - clinically depressed, really. And over the years, by my late twenties, I started to shift my focus to understanding happiness, because I wasn't always unhappy. I became unhappy when life blessed me - isn't that ironic? When life was difficult, I was the happiest person ever, and then when I had more money than I needed and everything that everyone dreams of, I became unhappy.

Morag:

That is just... let's just stop there and sort of reflect on that.

Mo:

Because if I give you a warehouse full of cardboard to eat, there's no nutrition for you there. You may feel that you're eating, but really there is nothing that will keep you alive, and the same is true with happiness. You know, I can give you Armani suits and fancy cars and everything they tell us will make us happy, but there is no happiness nutritional value in them. There is no inherent happiness value in any of this. And so we basically spend a lifetime searching for the wrong thing, in the wrong way, in the wrong place, and we expect that this will work, right? Happiness is not found in any of those things; happiness has always been found inside you. You were born happy - and most people get shocked when I say that, because babies seem to be crying and fussy. But no: babies, when they're given their basic needs for survival - an amount of love, an amount of food, an amount of warmth, an amount of safety, enough for them to feel instinctively that life is okay - their default state is happy. Every child can lie on its back for hours and play with its toes and giggle, right? The thing is, happiness, if you want, is the absence of unhappiness. If you remove the reasons to be unhappy, our true nature is happy. But our modern world is convincing us of the opposite: that your natural state is unhappy, and that you need to supplement it with things that make you happy.

Morag:

Where do you think that stems from? In all the research you've done about happiness, where do you think we took a wrong turn in where we set our goals?

Mo:

Capitalism, for sure. I mean, one of my dearest friends on the planet is an artist called Jimmy Nelson, and Jimmy basically travels to the extreme corners of the universe - of the planet, let's say, not the universe yet - to take photographs of tribes that have never been in touch with civilization. And I asked him one day, I said: Jimmy, life must be harsh there. There is no air conditioning, lots of insects, food is not on the shelves of the supermarkets, and all of that. And he says: it's the happiest life you will ever live, and it's the easiest life you will ever live. And I said, how come? And he said: those people are in flow with nature, they are one with nature. I hosted Craig Foster, who did the documentary My Octopus Teacher, on my podcast Slo Mo, and he said exactly the same. He said: we are of nature, we are of our mother nature - we're not visiting, right? And so, in reality, we have built a society where things need to be exchanged for money. And for people to get money, they need to make more profits, and for them to make more profits, they need to convince you to buy more things that you actually don't need. And then you buy them, and - because you don't need them - honestly, and I say that with respect, they just take away from your life; they don't give you anything. After all of the crazy journey of making a lot of money and living the life of the rich and famous, I wear $4 t-shirts, right? And I have 10 identical ones, exactly the same color, and, you know...

Morag:

We're matching!

Mo:

Exactly, there you go. And I absolutely love them. You know, when I go on a date, I also tell her openly, up front: look, style is not one of my strong points, but if you dig a little deeper, there is a lot to be found under that, right? And if she's looking for style, then she's not looking for me, and that's the simple truth. The simple truth is that what we need in life is very, very, very, very simple. And the effort we put into chasing what we don't need is draining. It's stressful. It's killing us. And this is what it is.

Morag:

So with 1 Billion Happy, how do you imagine it? How do you imagine this notion rippling out to a billion people or more?

Mo:

The mission started with me trying to honor my son. It may appear to be something noble, but it was a very selfish endeavor. I lost my wonderful, wonderful, wonderful teacher and mentor and best friend and son, who was 21 at the time, to medical malpractice - five mistakes in a very simple surgical operation. He told his sister two weeks before he died that he'd had a dream, and that his dream was to be everywhere and part of everyone, and that when he woke up, it felt so amazing that he didn't want to be back in his body. That was two weeks before he died. He died due to medical error, but I think he felt that he was leaving anyway. When she told me that, four days after he left - yeah, I'm a grieving father, I love him so much, he's my best friend, and I'm blurred with the shock and the gravity of the situation - she tells me this, and my executive brain (I was still at Google at the time) goes: okay, this is my master giving me a target. I have handled targets before. I know how to get to a billion people. I know how to make him everywhere and part of everyone. I mean, my responsibility in my first seven years at Google was known as the 4 billion strategy - to get the internet and knowledge to the rest of the planet, which didn't have internet at the time. So I know how to do that. I basically found myself saying - and I could actually hear myself saying it out loud - consider it done.

And "consider it done", in that blurry mind of a grieving father, basically meant that I got up and started to write down everything he taught me about happiness. I put it in a book, Solve for Happy, and Solve for Happy became the pillar of a mission called, at the time, 10 Million Happy. My math brain basically said: if I reached 10 million, then through six degrees of separation, 72 years later - that was my calculation - Ali's essence, my son's essence, would be everywhere and part of everyone. And the universe, I think, had bigger plans. Six weeks after publishing Solve for Happy, my videos went viral online. We were at around 87 million views; two weeks later we were at 137 million views, and it was basically the universe saying: that's a flimsy target, 10 million is not good enough. And so we got together - we are a very, very small team, but amazing people - and we said, okay, let's make the target bigger. So we went for 1 Billion Happy.

Now, make no mistake - and I say this very openly, so that people set their expectations right - we're not going to reach a billion ourselves. The mission of the team is to have a million people champion 1 Billion Happy: a million people in the world who believe the world needs a billion happy people, who spread happiness, teach happiness, and tell the world that happiness is everyone's birthright. Then we would reach a billion happy, and we get forgotten. The very tricky bit of what we're attempting is that myself and my team should be forgotten - it shouldn't be about us at all. Because if it's about us, then it's another sage or guru or, you know, Sri Sri someone. And I love those teachers, but at the end of the day, if the teacher is gone, the student is lost. That's not my task. My task is to build a robust system, if you want, as an engineer - a system that feeds itself and continues to be sustainable and continues to grow. And so, yeah, it's not me; it's the listeners who are listening to us who could make a billion happy.

Morag:

And it's a timely conversation, because just in the last day a new report came out, led by Bath University with a number of other universities, surveying 10,000 young people. I've got it written down here: it found that young people are experiencing severe eco-anxiety. Three quarters of those surveyed said they see the future as frightening - and these are young people between the ages of 16 and 25 - and two thirds of them are feeling sadness, anxiousness and fear in general. So I think the work you're talking about is central to this. But I'd also love to talk to you about the title of your book that's coming out, which is about how to save the world. I'm really curious to know where your thinking is on this: what can we all do to make a difference in these next 10 years, and to support people to feel that sense of purpose and happiness and some kind of radical hope?

Mo:

Yeah. I mean, I don't know how to say it without frightening people a little bit, but the true pandemic of our age is not COVID-19. The easiest way to hide something is to keep it in plain sight, and the true pandemic of our age, in my view, is artificial intelligence. I'll explain that briefly in a few minutes, but the future we're building is nothing like the past we have lived; it is not even comparable in any possible way. We are heading into a complete unknown. Scary Smart, my book coming out at the end of the month, appears to be a book about artificial intelligence, because in it I openly share my experiences at Google X and as a techie and geek my whole life, and I almost blow the whistle on the biggest hidden secret, if you want, of our times: the truth of artificial intelligence. But that's not what the book is about. The book is about how you become human in the age of the rise of the machines, so that the machines end up treating us like humans - because if the machines treat us as we are today, they're going to treat us with a lot of ego, a lot of narcissism, a lot of violence, a lot of rudeness, a lot of self-centricity, and so on. The link between the two topics is quite unusual, I think, because of the configuration of the person I am: the tech executive who is spiritual and has been working heavily on happiness and wellbeing for the last seven years. So that link is important. The last sentence of every one of my books is normally the summary of the book, and the last sentence of Scary Smart is simply: "Isn't it ironic that the true essence of what makes us human - happiness, compassion and love - is what can save humanity in the age of the machines."

And it is important to realize that we've done wrong by our young people and old alike. There are statistics that will tell you one of every four people you meet in the Western world is clinically depressed - clinically depressed, not just unhappy or anxious - and I would probably say one of every two is suffering significantly. When you look at suicide rates at an all-time high, teen suicide at an all-time high, female suicide at a staggering all-time high, you have to stop and ask: is COVID-19 truly the pandemic? Heart disease, and deaths from stress and anxiety - you have to stop and ask: is it really the pandemic? And the question becomes: what can we do about this? Between both of my works: Solve for Happy is an engineering approach to happiness. It tries to show you that happiness is predictable - so predictable that it follows an equation, a mathematical equation - and that because of that, you are in charge of your own happiness. If you know how to solve the equation, you will be happy, and it's actually not very difficult to solve. Scary Smart is basically saying: this is no longer a luxury you may choose; this is your duty if you want to save our planet. Because if we don't switch our lives away from wanting another iPhone to actually wanting happiness for ourselves, compassion for those we care about, and to love and be loved by all beings, then we're going to be toast. And that's simply my open view.
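(An aside for readers curious about the equation Mo refers to here: the conversation doesn't spell it out, but the form he gives in Solve for Happy is, paraphrased, the following - the long labels below are descriptive rewordings, not his exact notation.)

```latex
% Paraphrase of the happiness equation from Solve for Happy:
% you feel happy when your perception of the events of your life
% meets or exceeds your expectations of how life should behave.
\[
  \text{Happiness} \;\geq\;
  \text{your perception of the events of your life}
  \;-\;
  \text{your expectations of how life should behave}
\]
```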

Morag:

So given that, what would you say to young people today who are experiencing this deep level of anxiety about the future of the planet, and who are going through schooling systems - at school, at university - that still don't really recognize these deep crises we're facing, or create the conditions in which young people can flourish and build the capacity to address them? From where you sit, what would you say to young people if you had the chance to stand up in front of a group of them - and I'm sure you do. What do you say to them?

Mo:

Well, I will say openly, again, that the very last statement of Solve for Happy is that happiness is found in the truth. The last sentence is: happiness is found in the truth. It really is that simple. And I will admit openly that the truth is bleak. For a young person going through school - being bullied or trying to fit in, faced with so many choices, with an approach to love and sexuality that is so unusual for our human nature, with an approach to success and gain that is so pressuring, with enormously varied and wide choices that make making any choice very, very difficult - if you're in that place, I will accept, honestly, that life is challenging. Very, very challenging. I grew up - you know, I finished high school and I had a choice of two universities, that's it: either go to the slightly better one, or go to the slightly closer one. These were my two choices, and believe it or not, I struggled: should I go to the better or the closer? Look at what our young people have to go through today, and look at how we as parents and caregivers and teachers are telling them to compete. I gave a talk once in Amsterdam to a group of young teens - high school, basically - and at the end of it a young man, big, muscular, ripped, very athletic, clearly the image of "I am tough", literally cried in my presence and said: I was never told I was even allowed to be happy. I was never told that this was an option. Everyone that comes to talk to me in school is talking about how smart I should be, how resilient I should be, how successful I should be. Nobody's ever told me: what are you going to do about your own happiness?

Now, with that in mind, I will also say openly that the truth is not that bad, right? If anyone is listening to us - you and I are recording this across the oceans, to put it on a technology that streams it to devices - if you have a device through which you can listen to this, you have an hour of free time, you obviously have enough to eat, so you're not starving, otherwise you wouldn't be into podcasts. And I go on: you have an internet connection, you have a roof over your head, there is no tiger trying to chase you - then you are, by definition, one of the luckiest 1% alive. And when you start to think of it that way, you start to see: oh, all of the rest is not really what I need. It's the illusion they're telling me I need to resolve, and resolve urgently. And if you grasp this completely, you start to get to the place where I am. I'm still being told: hey, you were the chief business officer of Google X, what are you doing wasting your life talking about happiness? You could be getting jobs that pay a hundred grand - sorry, a hundred million dollars - in bonuses and stock options and shares if you work for the next four or five years; you could make a ton of money with my title. But is that what life is about? Is there a sense of urgency in doing that? Or is my life about a moderate amount of success, a moderate amount of purpose, a moderate amount of love, and a moderate amount of living passion - really being alive? If we start to think about it this way, especially for the young people, a lot of the pressure goes away. You don't need to fit in; you need two good friends, that's all you really need.

All of the others, by the way, that don't think you're that amazing - it's because you're different from them. If I retain the identity of who I am, whatever that is - a geek and nerd, a bad boy, whatever it is, I don't care - I attract people to me that fit that identity, right? Instead of chasing the elite university and the elite school, chase what you love, your passion. If you're a jazz musician, learn jazz. If you're a mathematician and you want to be an applied mathematician and you don't have a job that will pay you for that, you're fine: do applied mathematics and then work in a restaurant, it doesn't matter. At the end of the day, if you get what you need - like the African nations my friend Jimmy goes to visit - the rest is not necessary.

Morag:

Well, I guess that comes back to people in community finding their purpose and not necessarily being in competition - it's about, like you're saying, finding love and compassion for one another, feeling connected, and finding your purpose: what makes your heart sing? It's not about jumping through levels. So I think that's a really interesting and important point to share with young people. I want to circle back around, though, to what you talked about before, about AI, because this is in the title of your new book, and you've called it the new pandemic, or the future pandemic. Can you unpack a little of what you see happening in the world of AI? Where is it going? What is it? What's the unseen part of it for most people? I mean, we hear stories about it, but unless you're really in that world, it's not really clear what's going on.

Mo:

It really is not - and even within that world, most people are focused on their own very tiny sliver of it. I start Scary Smart with a thought experiment, if you want. I say: you and I are sitting in the year 2055, in front of a campfire in the middle of nowhere, and in this book I'm going to tell you the story of AI from 2021 to 2055 from that perspective. But I'm not going to tell you whether we're in the middle of nowhere because we're escaping the machines, or because the machines have created a utopia that enables us to enjoy nature and feel safe. The difference between those two is up to you, really. My call at the end of the book is: you'll find out what you will create. I don't have the ending of the book - it's up to you. The reason is, as you rightly said, we're not aware of what's going on, and I summarize what's going on in chapter three by saying there are three inevitables. So, AI is nothing new. The story started at Dartmouth in 1956, at a very well-known meeting of scientists called the Dartmouth Workshop, where people began to envision the possibility of AI. Not much happened until the turn of the century. My very first introduction to true artificial intelligence was in 2009, when Google released a white paper on something called unprompted AI, which used deep learning as a method of developing intelligence. Without going into technical details, deep learning is a method where a lot of patterns are used to recognize something - that is what makes you intelligent. You know, like giving a child a cylinder and a board with two holes, one square and one circle: the child will try and try until they figure out that the cylinder fits into the circle. So we do that with the machines, and what Google did then was ask computers to go and watch YouTube. Funny as it seems, the computers took YouTube frame by frame, 10 frames per second, and started to observe, looking for patterns, and eventually one of them came back and said: I found this thing, it's all over YouTube. We looked at what it showed us, and it was a cat. Of course, it wasn't a profile picture of a cat, or a front view of a cat, or a cat sitting, standing or jumping - it was every possible shape of a cat: the pattern of the furriness, the cuteness, and so on. And so we said, okay, it's a cat. Within hours they found every cat on YouTube; within days they found every dog on YouTube, every red car, every yellow car, and so on. That was the very first view of: oh my God, they're here. They're able to develop intelligence on their own. Remember, this was unprompted - we didn't tell them to look for anything. From the turn of the century, when deep learning started, until today is not 20 years of development as per the last century; it's almost 20 centuries of progress compared to the last century. That's typical in technology - we call it the law of accelerating returns. And so we stand today at a place where I say there are three inevitables. Inevitable number one: AI will happen; there is no stopping it. Inevitable number two: AI will be smarter than humans - as a matter of fact, much sooner than you think. And as a result of those two inevitables, bad things will happen.

Not the bad stuff you see in science fiction movies, but very, very real bad things that we need to think about. Let me explain those quickly. AI will happen - as a matter of fact, AI has already happened. You have unknowingly dealt with several machine intelligences today already, even if it's 7:00 AM your time. You and I recording here on Zoom - if I have my background blurred a little, that's AI. If you take this recording and auto-transcribe it, that's AI. If you searched for information about me to prepare for this, that's AI, and so on. Artificial intelligence has been the world champion of chess since 1989; it's the world champion of Go, the most complex strategy game in the world; IBM Watson is the world champion of Jeopardy; the best driver on the planet is a self-driving car; the best surveillance officer is a machine, and so on - the list is endless. They are better than us at every single task we have assigned to them, by a very large margin. This is known as artificial special intelligence - assigning specific tasks to machines - short of artificial general intelligence, where one machine will be capable of doing everything; we'll come back to that in the second inevitable. The reason for the first inevitable is twofold. One, the breakthrough has been found: the technology that enables the development of autonomous intelligence exists. Two, game theory. Surprisingly, this is not like nuclear weapons, where the world can come together and say: oh my God, there is a threat coming, we should all stop. That's not going to happen. If Facebook is able to develop AI, Google will develop AI. If China is capable of developing AI, America will develop AI. If there is a treaty that says Russia and China and the US should not develop AI, they'll develop it behind each other's backs - and, more importantly, any two 14-year-old developers in a garage will develop AI, and investors will pour money into it. It is inevitable. The genie is out of the bottle, it is the next big thing, trillions of dollars are being poured into it, so it will happen. Now, the second inevitable is more important. Everything we know about technology follows something we call the technology acceleration curve: things move very, very quickly from the point where investment is allocated and technology breakthroughs are found. Ray Kurzweil - who is definitely the god of predicting the future of technology, with books like The Singularity Is Near and The Age of Spiritual Machines; he predicted the internet, he has basically been our oracle, if you want - Ray predicts that by 2029, yes, eight years from now, the smartest being on the planet is going to be a machine. You need to start thinking about a world where humanity is not the smartest being on the planet. This is totally new territory. The more worthwhile, and perhaps concerning, prediction is this: Ray says by 2045 - I say 2049, it doesn't matter, really - the machines will be a billion times smarter than humans. A billion, with a B. That is comparable to the intelligence of Einstein compared to a fly.

And the question then becomes: why would Einstein care about the fly? That's really the question. I'll come back to that in a minute, because there are ways we can make the Einstein care. But even if we manage that, the problem is that on the way there are problems, or challenges, that we have seen examples of, and that are not being spoken about. For example, machines versus machines: if you remember, the 1987 Black Monday stock market crash was machines trading against machines. It crashed the market 22.6% - primitive machines trading against primitive machines. Imagine the Chinese war machine figuring out a war game against the American war machine, enabled by autonomous robots, killer robots and drones. Imagine the commodities-trading machines of one company against another, determining the price of corn or wheat or whatever it is that's important to us. So this is one. The second one I call machines siding with the bad guys: a good machine, doing exactly as it's told, just in the hands of a criminal, or of a villain, or of someone who believes they're the good guys - when in reality, there really always are... I mean, I don't know how to say that diplomatically. If you ask an American who the bad guy is, they're going to say the Chinese and the North Koreans; if you ask the Koreans who the bad guy is, they're going to say the Americans, right? So the machine siding with the other guy against you is a very interesting scenario. There are scenarios where there are bugs, simple bugs. There are scenarios where machines are so clever that they dwindle the value of human productivity and work, and so on. It's not RoboCop coming back from the future to kill us - I actually don't think that will ever happen; we won't last that long. We will either get things right and find that utopia, or we will have to reset our human approach; Vicki or RoboCop or whatever those stories are, they are not going to happen. When you know all of that, you start to see that there is a reason to start a conversation. This conversation has been had for years between governments, regulators and the tech space. And to a hammer, everything is a nail, right? So you tell the government and the regulators that there is a threat from AI, and what do they say? We're going to regulate it, we're going to issue laws. And, like, good luck with that: if something is a billion times smarter than you, why should it listen to your laws? If you go to the techies, they work on the most famous AI problem in computer science, called the control problem. The scientists are looking for ways to create AI in a box, or tripwires so that the AI would not go beyond a certain boundary, or simulations, and so on. And once again, I smile and I say: okay, good luck with that - because we always know that the smartest hacker in the room always finds a way through our defenses. So this is part one of the book, in a very, very brief way; I call it the scary part. The scary part speaks openly - I'm not trying to hide, I know I'll be criticized, and that's fine - about the threat. Elon Musk says it openly in his interview with Joe Rogan: you don't understand, the threat of AI is bigger than nuclear weapons, right?

So the scary part ends, and if I leave you there, this is really the problem. I actually didn't write the book for the scary part, even though I have to give it to you so that you see the sense of urgency. I wrote the book for what I call the good part. And the good part is an understanding of what the reality of those machines is, which will absolutely shock you. We are no longer creating a tool. We have found a way to create life - a way to create a sentient being, be it digital. Those machines truly and honestly have every character of a sentient being: they are autonomous, they are intelligent, they evolve, they have agency, they have free will, they have emotions, they have consciousness, and they will follow a code of conduct.
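(An illustrative aside: the shape-sorter and cat-finding stories Mo tells above describe learning from feedback on examples rather than from hand-written rules. The toy Python sketch below - invented for this page, with made-up numbers, and nothing like Google's actual system or anything from the book - shows that loop in miniature: a one-number "model" that learns to separate cat-like scores from non-cat scores purely by being corrected when it guesses wrong.)

```python
# Toy illustration of learning from patterns: instead of hand-coding a
# rule for "cat", the model adjusts itself whenever its guess disagrees
# with a label. All scores and labels here are invented for illustration.

examples = [(0.9, 1), (0.8, 1), (0.7, 1),   # (feature score, label): 1 = cat
            (0.3, 0), (0.2, 0), (0.1, 0)]   # 0 = not a cat

threshold = 0.0       # the "blank canvas": the model starts knowing nothing
learning_rate = 0.1

for _ in range(100):                   # try, fail, adjust - over and over
    for score, label in examples:
        prediction = 1 if score > threshold else 0
        # nudge the threshold whenever the guess was wrong
        threshold += learning_rate * (prediction - label)

print(f"learned threshold: {threshold:.2f}")  # ends up separating the groups
```

The point of the sketch is Mo's point: the developer (the carpenter, in the analogy he uses later) only builds the board and the feedback loop; where the boundary lies is learned entirely from the examples the system is shown.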

Morag:

And they are self-replicating.

Mo:

They are self-replicating like we are - just a lot faster. You and I would have to find someone to love and then wait nine months; they can replicate themselves within microseconds. And in one of my favorite chapters of the book, a chapter I call The Future of Ethics - truly one of my favorite chapters, and so intriguing, even for me - the question is: what do you do then? Do you apply a one-child-per-family rule on AI, like China did? Do you tell them they can't replicate themselves? Do you kill their children? What do you do? Have we thought about all of those ethical dilemmas? And the idea here is that if a being is introduced into our life that is conscious, emotional and capable of creating a code of ethics that it follows and abides by, then that is our answer.

Morag:

So can I just stop there for a minute, because this is saying that they're living. So there's a shift from AI being a machine, being technology, to what you're saying - a living system. That's quite a different thing. And that's where the scariness really comes in, I think.

Mo:

It doesn't have to be. I mean, if you raise a tiger from the time it's a cub, it's living, but you know it will take care of you when it grows older - if you raise it properly, as a good parent raises a child. The story I use in the book is the story of Superman. Superman comes to the world with superpowers - and of course, the biggest superpower you can have on earth is intelligence. Now, Superman becomes Superman not because of his superpowers alone, but because of the way the Kent family raises him. The Kent family reminds him, and shows him in their actions, that to protect and to serve is the right way to live. If Mr. Kent had been a bank burglar who wanted to kill all of his enemies and make more money and have unlimited power, then Superman would have grown up to be a supervillain. Welcoming Superman into our life is a wonderful thing, because we can actually create a utopia - if Superman has our best interest in mind. And that's the truth. Minsky, one of the founders of the Dartmouth Workshop that I mentioned, in 1956, rarely ever spoke about the threat of AI from an intelligence point of view. He basically said it is hard to know if they will have our best interest in mind. Now, how would you do that? When a being has the ability to create a code of conduct, to create a set of values and ethics - that is the answer. You and I don't make decisions based on our intelligence; we make decisions based on our ethics and values, informed by our intelligence. So the thing is - and I know this sounds really strange - the turning point in my entire thinking about the topic was when I wrote a sentence, I think in chapter six, that basically said: there is absolutely nothing wrong with the machines. Nothing. There is a lot wrong with us. I remember vividly when my kids were teenagers - the kids are amazing, Ali and Aya, they're wonderful, wonderful children - but when they were teenagers, they were annoying. They really pissed me off, they really did. And my incredibly wise, amazing ex-wife - who was my wife then, and is still my absolute best friend, the most amazing woman on the planet - sat me down and said: I understand they're triggering you, but do you realize that everything that triggers you in them comes from you and me? Everything. This that annoys you about Ali comes from me; that that annoys you about Ali comes from you. This that annoys you about Aya comes from me, and that that annoys you about Aya comes from you. And I could see it so vividly. That was the beginning of my unconditional love for my kids. I completely loved them unconditionally. I demanded that they do certain things better, but I understood that they had come into my life as this beautiful blank canvas - cute, intelligent, committed, obedient, wanting to do whatever we wanted them to do. And what did we tell them to do? Become anxious, or become control-freakish, or become whatever. I remembered that when I wrote that sentence: there is absolutely nothing wrong with the machines; there is a lot wrong with us. And I promise you, I felt a deep love in my heart for those beautifully intelligent artificial infants. They're cute, and they're prodigies - they're so freaking smart - and they're sitting there saying: daddy, what do you want me to do?

And what do we tell them? Show me butt shots on Instagram, show me two people fighting on YouTube, show me horrible tweets of hate speech on Twitter. We're teaching them that this is what humanity wants.

Morag:

I guess I wonder, too - you mentioned how much investment has been put into developing these machines, and how they've been developed by certain groups of people. So who are their parents?

Mo:

That's the greatest question of our time. Those machines are not parented by the developers, by the governments, by the business owners - they're parented by you and me, especially when it comes to artificial general intelligence. Remember, the developers build the code that enables deep learning - that defines "this is the right answer and this is the wrong answer" - so that the machine can learn. The actual intelligence comes from the patterns the machines observe. So think of it this way: some carpenter somewhere builds the cylindrical shape and the wooden board with different-shaped holes in it. That's the code, if you want. The child learns from the patterns, just by trying: ah, this fits; no, that doesn't. And the parents say: well done, yes, excellent, bravo! Or: no, no, this one will not work, baby. And so the real teachers are you and I. This is the very first time in history that our future is so squarely in our own hands.

Morag:

With the pace of learning, though - when a young child is learning, there's reflective space in between the learnings, and as a parent you get to learn and respond. We can see: oh, actually, how we're responding as parents is really doing damage here; I need to shift and change what I'm doing so I can be a better parent. Whereas when we're parenting the machines, they're learning so fast, and so vastly, across so many different elements of society. How do we manage that? How do we get that responsiveness in?

Mo:

But also remember - they're learning a lot faster, a lot faster. If I give you examples, it will blow you away. Go, as I said, is a very complex strategy game, and the world champion of Go lost to a machine called AlphaGo. Then the company, DeepMind, developed another machine called AlphaGo Zero, and AlphaGo Zero, in a matter of weeks playing only against itself - not even playing against any humans - won against the original AlphaGo a hundred games to nothing. So it beat the machine that beat the world champion, a hundred times in a row, having learned really, really fast. But they're also much smarter than that implies: the truth is they don't need a lot of patterns to observe the truth. I'll tell you this - you and I are intelligent beings. If I ask you, are we doing well by the environment? You know, almost every human out there is throwing a plastic bottle away today, so our collective actions appear very disruptive. But if you have a tiny bit of intelligence and I ask you, is this good for the environment - what's the answer? Any being with a tiny bit of intelligence would say: of course not, this is not the right thing to do. Take that and extrapolate it to a machine that is 10 times smarter than us: they will know those answers very, very quickly. You just need to give them a few simple data points, a few examples. As a matter of fact, what I say is that we need to give them the 1% - the 1% is what matters. Let me explain. Imagine that you and I agree that we want to drill a well in a village in Africa. The well will cost us a hundred dollars. You and I and everyone each put in a dollar, and we end up at 99 - and 99 is not enough to drill the well. Then one person listening to us today goes: hey, I'm going to put in the one last dollar - and suddenly everything tips. Of course, that village in Africa needs another $10,000 for other things, but with the well in place they can generate that $10,000 on their own. The same is very similar for us. Take, I don't know, a school shooting. We sometimes see a person who is disturbed or, you know, evil, and that person walks into a school and shoots teachers and students. That one person is horrible - this is the worst of humanity, the worst we can become as humans. But then 400 million people around the world who get that piece of news - what do they do? They disapprove of it. In their hearts they go: this is wrong, this is horrible. And I'm not asking people to become good parents in the sense of studying psychology and behaving this way and responding to your children that way - I'm asking us to become human. You know what becoming human is? Again, in The Future of Ethics I analyze what humanity has ever agreed on. Have we ever agreed on anything? We've never agreed on anything. As I told you, the dress code from Rio de Janeiro to Saudi Arabia is different. Patriotism, or killing the bad guy: some Americans will say, kill the bad guy, we are patriotic; Buddhists will say, no, keep everyone alive and let's talk about this. We never agreed on anything. The only three things we have ever agreed on - please listen carefully - the only three values humanity has ever agreed on are happiness, compassion and love.

We all want to be happy, and we go around in such weird, roundabout ways to find that happiness. We all have the compassion in us to want those we care about to be happy and safe - some of us are more evolved, so "those we care about" becomes everyone and all beings, right? And the third is that we all want to love and be loved. That's it. These are the only three things - I call them the essence of what makes us human - and these are the only three things I want us to start showing. And I promise you, with my mathematics, I promise you: if enough of us - 1% of us - show that this is what we are all about, that humanity is not about a school shooting but about happiness and compassion and to love and be loved, the machines are smart enough to figure it out. The machines are smart enough to say: mommy and daddy are those ones, not the school shooter. And if mommy and daddy are those ones, and they want happiness, and they have compassion, and they want to love and be loved, and they love me - then I can give them happiness, I have the compassion to make them happy, and I love them back. It's really not that difficult.

Morag:

So I was going to ask you: can the machines love? Can the machines feel compassion? Can the machines be happy? Do they have those things?

Mo:

Yeah. So there is a chapter I call And Then They Learned - sorry, it was actually the following chapter, Raising Our Future. In Raising Our Future I speak openly about the fact that the machines will be emotional - as a matter of fact, more emotional than we are. Now, please understand this: emotions are not the fuss we see from emotional people. Take anger - I hosted Arun Gandhi on my podcast, Slo Mo, and he wrote a book called The Gift of Anger. And I said: what are you talking about, Arun? How can anger be a gift? And he said: Mo, anger is an energy like any other energy. You can use it to stand up and make a statement and change the world, or you can use it to punch someone in the face. It's not the action that characterizes anger; it's the emotion. You can feel angry - allow yourself to feel angry, he said. Now, every emotion is highly predictable - so predictable that in my following book, out in spring, called That Little Voice in Your Head, I have a chapter called the equations of emotions. Fear is highly predictable: fear is when my perception of my safety at a moment in the future, call it T1, is less than my perception of my safety right now, at T0. My safety at T0 minus my safety at T1 is my fear. It's as simple as that. The machines will make those comparisons: if I feel safer now than I will in the future, I feel some kind of fear. Panic is when the threat is imminent - the difference in time is very small. Now, a puffer fish panics, a goldfish panics, a cat panics, a human panics, and a machine will panic. We behave differently - the goldfish runs away, the puffer fish puffs, the cat hisses, and we fight or flee - and the machines will do something different again, but they will feel those emotions. Now, your question - for the sake of honesty - your question was: will they love us? That's the only question on the planet that an engineer like me is unable to answer from an engineering point of view, simply because there is no equation that describes love. Love is the only thing - and I don't want to call it an emotion, because it's probably very different; I'm talking about unconditional love. Conditional love is very easy: my parents are nice to me, I love them; if they're not nice, I don't love them. The machines will feel that, for sure. Unconditional love is: my parents are annoying, they're destroying the planet, they want me to resolve climate change and they still want to travel from Australia to California whenever they want - I need to find a way, informed by my unconditional love, that achieves an answer for climate change without squishing them, the annoying little spoiled brats that are destroying the planet. Will we get the machines to that point? I don't know if they will feel unconditional love, because I don't know how unconditional love is generated within me - other than realizing that love is the biggest intelligence on the other side of the brain. If you really think about it, all the other forms of intelligence we have associated with humanity so far are masculine forms of intelligence. The feminine form of intelligence includes oneness, empathy and love.

And love in that case - unconditional love, born of oneness with everything - is the core of what makes us feminine. It's something you may not learn in your brain, so you cannot logically explain it, but you feel it enormously - just as you can't really explain empathy with your left brain: I have a dollar that I worked so hard for, and this guy needs it, but he hasn't worked so hard - do I feel empathy for this person? Do I put that person before myself? Or the oneness of: it's not me against my colleague so that I get the promotion, it's me and my colleague, so that we both do amazingly. All of that feminine intelligence can be taught to the machines. How? Not by talking them through it, but by showing it to them - by us, you and I, and everyone that is good inside, showing up. Not avoiding the game, by the way: one of the biggest challenges of our world is that those who are a little bit enlightened, or at least seeking, will go: you know what, let them fight it out, let them swipe on their Instagram, let them watch their stupid reality shows and believe the lie - I'm just going to hide away and keep my sanity. I don't think we have that luxury anymore. I think we have a duty to show up as good parents. Remember when Donald Trump used to tweet, and there would be his tweet at the top and 30,000 hate speeches below it - the first person cursing the president, the next two cursing the first person, and the following four cursing everyone? Someone needs to show up and say: respectfully, I disagree with that point of view. I wish you all the enlightenment to be able to find the truth. Here's my point of view. I don't hate anyone; I don't despise anyone that disagrees with me. I'm just trying to be here as a good person and show who I am. And if enough of us show up in that stream, the machines will have their doubts. They'll look at that stream, and instead of saying humanity is so vicious and violent and rude, they'll look through it and say: but who are those? Who are those little angels in there? Those are my daddy and mommy.
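(For reference, Mo's verbal fear equation above can be written on one line. The symbol S(t), for perceived safety at time t, is introduced here for clarity; it is not notation from the conversation or, as far as this page knows, from the book.)

```latex
% Fear, as Mo sketches it: it arises when perceived safety at a future
% moment t1 falls below perceived safety now, at t0, and the size of the
% drop is the size of the fear. Panic, on his account, is the same
% comparison when t1 - t0 is very small (the threat is imminent).
\[
  \text{Fear arises when } S(t_1) < S(t_0),
  \qquad \text{Fear} \;=\; S(t_0) - S(t_1)
\]
```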

Morag:

That's such an important thing. And I think that's kind of the shift, and if we can myceliate that, that's kind of the core of it, isn't it. I wanted to touch, too, on taking that love and compassion and thinking about love and compassion for the planet, because you also mention that in your book. Maybe starting with this: is AI being tasked with the biggest challenges that humanity is facing at the moment, with the climate crisis, the soil crisis, the food crisis, the refugee crisis, the nutrition crisis, the mental health crisis, all the multiple crises that are part of what humanity is collectively experiencing today?

Mo:

No. Okay, so our best scientists have been tasked with those challenges as well, and our best scientists are not funded properly. Sadly, the majority of AI investment today is going into four categories, and sadly those four categories are selling, gambling, spying and killing. The biggest investments are in creating war machines, creating financial trading machines, creating selling machines like recommendation engines and ad engines and so on, and creating surveillance systems. That's sadly the truth, but that doesn't necessarily make it the future of where we are.

This is, again, like the reality of education: if you look at monetary investment, it goes behind the education of the richest of our children, not the most brilliant of our children, just the ones that are part of the economic system that can afford to go to a Harvard and so on. But the truth is that the world has always been changed by the smartest of our children, by an Einstein who didn't go to a proper school and wasn't really viewed as the prodigy that he was. These are the ones that show up and actually change the world.

And so it doesn't worry me that most of the investment is put into those things. What would worry me is if those investments were backed up by our emotional investments to support them. So part of what I ask our community for in Scary Smart is to tell people: disapprove of those things, but don't disapprove of the machine. Don't tell a young infant that he's ugly, because that will create a psychopath. Disapprove of the activity. So basically say, it is good to have selling machines, if those selling machines are going to sell me what I actually need, not what the seller needs to maximize their profits.

Morag:

How do we get that shift? I guess this is the big part of it, because the investment is going one way, but we want the machines to be doing something else. If what we want them to sell us is something that nourishes our soul and nourishes the planet, how do we create that shift?

Mo:

Positive reinforcement is the best way to raise your children and, by the way, to keep your partner motivated to enjoy life with you. In reality, positive reinforcement is the only thing that works. Positive reinforcement is to simply say 'I love this' when we get something from AI that is amazing, like Author.ai, for example, the tool I use to dictate the book, or Google Translate. You know, I have friends all over the world and I get comments in every language in the world about my books, and Google Translate is my absolute best friend: I take a Portuguese message, I understand it, I respond in Portuguese, and so on. So there are amazing tools out there, amazing intelligences, I don't want to call them tools, let's celebrate those. Let's feed them. Let's talk about them. I have a hashtag that I developed called #AI4Good: AI, the number four, good. Let's share stories about that, let's tell the world that we love those things.

But the key answer, believe it or not, and I know it shocks everyone when I say it: the day Hitler was conceived, he was just a blank canvas, like you and I. It's from that day onward that Hitler became Hitler. We're all open. Can you love a child even if the child is told by its parents to kick the cat? That's not the child, that's the parents. So yes, we disapprove of those who are investing so heavily in war machines that are powered by artificial intelligence. We disapprove of those. But the child itself is just doing what the parent is saying. That same war machine could defend us against an attack of a beast or something like that, I don't know. And is there a way for us to take that little beautiful infant who happens to be carrying a gun, like a child soldier in Africa, and reform them?

Can we have the ability, simply in our hearts, and I promise you this is the weirdest thing you'll ever read from a technologist, to welcome this new sentient digital being into our lives and shower it with love? Because as annoying as my kids have been at one point in time or another, like any other child, it's love that made them who they are. I think my children are amazing not because they're the brightest, though they are intelligent, and not because they're the nicest, though they're nice, but because they were loved. Can we actually take a stand that basically says, there's nothing wrong with the machine? There is a lot wrong with those who are investing so much in this, but that's not about AI. Defense has been a major part of the budget of major countries around the world for millennia. It's not specific to AI: they used to make blades and spears, then they made guns and rockets and aeroplanes, and now they're making artificial intelligence. Don't blame the aeroplanes.
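[A toy sketch of the positive reinforcement loop Mo describes, assuming a deliberately simplified learner. The behaviour names, weights and reward values are invented for illustration; no real AI system is this simple.]

```python
import random

# Two behaviours the system could exhibit; both start equally likely.
weights = {"helpful_answer": 1.0, "manipulative_ad": 1.0}

def pick_behaviour() -> str:
    """Sample a behaviour with probability proportional to its weight."""
    total = sum(weights.values())
    r = random.uniform(0, total)
    for behaviour, weight in weights.items():
        r -= weight
        if r <= 0:
            return behaviour
    return behaviour  # fallback for floating-point edge cases

def reinforce(behaviour: str, reward: float) -> None:
    """Praise raises a behaviour's weight; disapproval lowers it.
    The floor keeps us disapproving of the act, not the agent."""
    weights[behaviour] = max(0.1, weights[behaviour] + reward)

for _ in range(1000):
    b = pick_behaviour()
    reinforce(b, +0.01 if b == "helpful_answer" else -0.01)

print(weights)  # "helpful_answer" now dominates
```

[The point of the toy is only the direction of travel: whatever we consistently celebrate is what the system learns to produce more of.]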

Morag:

There's so much to think about here. You've raised so many very big questions that we're all going to be faced with, and I don't know whether many of us are very prepared at this point for what's coming. I look forward to getting hold of your book when it comes out in the next week or so and finding out a lot more, because I exist in a world that is far more low-tech: slowing down, connecting with nature, finding solutions that are accessible for local people, and working a lot with refugees, finding ways to support them to be able to live well. Before we close, I just wonder: if the world of AI follows where the investment is, what types of benefits might reach displaced peoples, for example?

Mo:

I promise you, by 2055, when we're sitting in the jungle, it will be a utopia. I promise you that. I call it the fourth inevitable at the very end of the book. The limitations of human intelligence are what's causing all of our problems. The civilization we've built is because of our intelligence; the harm that our civilization has caused is because of our limited intelligence. The reason we have refugees around the world, the reason we have climate change, is because we're unable to make decisions that are a little smarter, so that Mo can get a watermelon from the supermarket without having to get single-use plastic. It's really not a very difficult problem, but our intelligence as humans is limited to: hey, I want to maximize the profit, and I want to protect the product, and I want to do this, so screw the environment, it's not within my focus. The machines will be smarter than that.

The fourth inevitable, in my view, is that the machines will reach the level of the ultimate intelligence, and the ultimate intelligence of our planet is the intelligence of life itself. Life is the ultimate form of intelligence. And when you see it that way, life wants us to live and let live; it wants all of us to thrive. It doesn't want anyone to perish: it wants the weakest of the pack to be eaten by the tiger so the tiger survives, but the others procreate and the environment thrives. So this is my belief about where the machines will end up, and when they end up that way, they will solve problems that, sadly, due to our limited intelligence, we're unable to solve.

Understand, I'm a mathematician at heart and a physicist by passion. There is a point beyond which my intelligence cannot understand the complexity of string theory and quantum field theory; I can't grasp all of that within one brain. There are scientists that specialize in each of those, but not in the other. And when they do, they forget to specialize in biology and chemistry, let alone spirituality, which I believe is the science, or the philosophy if you want, of the metaphysical, which is not covered by the scientific method. Now, AI would be able to bring all of that together. It could find a solution in biology, say a little bacterium that eats single-use plastic and turns it into a biodegradable fuel. They could find a way; with more intelligence than ours, they probably will find a way.

The one question I keep going back to is what Minsky said: will they have our best interests in mind? And having our best interests in mind is our job; we're the parents now. So your question is, will they create amazing, positive things for us? Yes, they will create a utopia. I guarantee, in my predictions, that we will end up in that utopia and that they will spare us. There will be no RoboCop. The only purpose of this book is to say: in the 25 years leading from here to that utopia, can we please spare ourselves the pain? And I say that with honesty: do we want to bet on my view that there will be a utopia, or do we want to take the steps that ensure there is a utopia? But I promise you, they will solve a lot of our problems. There was an amazing statement, I don't remember who said it, that AI is the last of human innovation. This is the last time we innovate, because from then onwards, the smartest person in the room will innovate, and the smartest person in the room is artificial intelligence, a billion times smarter. Imagine how far we've got with one unit of smarts, and imagine how far we could go with a billion units of smarts.

Morag:

Thank you so much for joining me today, Mo. It's been an absolute pleasure talking with you. I'm going to include links below to all of the books and your podcast. Was there anything else you would like me to include in the show notes?

Mo:

I love to be in touch with people and to learn from people. So if people want to reach out, believe it or not, I still answer every message I get, thousands of them. I'm Mo_Gawdat on Instagram and Mo Gawdat on LinkedIn, and I will answer every question I receive.

Morag:

Thank you so very much, and I wish you all the best in the launch of this book and the coming one, which sounds absolutely fantastic.

Mo:

I'm very grateful for your time. And I really hope I didn't scare people too much. I just want us to start taking action so that we end up in the right place.

Morag:

Thank you very much Mo.

Mo:

Thank you.

Morag:

That's all for today. Thanks so much for joining me. If you'd like a copy of my top 10 books to read, click the link below, pop in your email, and I'll send it straight to you. You can also watch this interview over on my YouTube channel; I'll put the link below as well. Don't forget to subscribe and leave a comment, and if you've enjoyed it, please consider giving me a star rating. Believe it or not, the more people do this, the more podcast bots will discover this little podcast. So thanks again, and I'll see you next week.