FOR WHAT IT'S WORTH with Blake Melnick

A.L.M.A - Journey into the AI Frontier with Eric Monteiro - Part 2

December 21, 2023 Blake Melnick Season 5 Episode 5

Embark on a thought-provoking journey, as we continue our fascinating conversation with Eric Monteiro. We attempt to unravel the complex tapestry of AI, free will, and the human condition, offering insights that promise to challenge your perspectives on a future with generative AI.

Our discussion spans the ethical quandaries of AI's impact on creativity and wealth distribution, to the practical experiences of using AI tools like Copilot AI in content creation. We're balancing on a tightrope strung between dystopia and utopia, and together, we're questioning how much of that future is in our hands—with Eric's expertise guiding us through the potential consequences of AI in the pursuit of a more equitable world.

As we venture further into this episode, we engage in a stimulating debate on the revolutionary changes AI might bring to writing and publishing, pondering the  possibilities and the stark economic realities it may introduce.

The narrative takes an intriguing twist, touching upon recent developments at OpenAI, as we speculate on the industry-shaping decisions of the board and the paramount importance of education in an AI-driven society.

We highlight Finland's proactive approach to AI literacy, setting a precedent that could empower individuals to navigate the interplay of technology and truth. Join us for this exploration, where we peer into a future shaped by artificial intelligence, through the lens of ethical responsibility and human ingenuity ...For What it's Worth

Blog post for this episode

The music for this episode, "How Come I Gotta", is written and performed by our current artist in residence, #DouglasCameron

You can find out more about Douglas by visiting our show blog and by listening to our episode, #TheOldGuitar

Knowledge Management Institute of Canada
From those who know to those who need to know

Workplace Innovation Network for Canada
Every Graduate is Innovation-Enabled; Every Employee can Contribute to Innovation

Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.

Support the Show.

review us on Podchaser
Show website - https://fwiw.buzzsprout.com
Follow us on:
Show Blog
Facebook
Instagram
Support us
Email us: fwiw.thepodcast@gmail.com

A.L.M.A - Journey into the AI Frontier with Eric Monteiro - Part 2

 

Blake Melnick as Richard Weissman 00:35

Alma is humanity's best shot at a bright future. I mean, look at the world, it's a total and absolute mess. There are wars and genocide everywhere. There is still lingering poverty in many places. Two billion people have no access to clean water or enough nutrition. We're exhausting the world's resources at an unsustainable pace, and many of these problems are getting worse, not better. Some conflicts are over 2,000 years old and are still raging. And all that is before we take into account the nuclear threat or the threat of disordered genetic engineering. 

Cameron Brown as Frank Spitzer 01:08

Sadly, you're right. In practice, we're trading off the near certainty of an amazing future for humanity against the possibility that we are putting free will at risk. 

Blake Melnick as Richard Weissman 01:19

I guess the fundamental question is how much pain and suffering are we willing to endure to ensure that we have free will? 

 

Rowan Melnick as Sun Spitzer 01:19

 

We're assuming that we'll safeguard free will. If we don't create Alma now, someone with much less noble motives might create a similar AI in a few years, and in that case we would not only lose free will, but we'd be faced with a much less bright future than the one you described, Richard. 

Blake Melnick Host 01:43

Well, welcome to this week's episode of For What It's Worth. I'm your host, Blake Melnick. This is part two of my interview with Eric Monteiro, author of Alma. In part one, Eric and I discussed his motivation for writing the novel and his background as a robotics engineer and analytics expert. We explored the nature of reality through his passion for mathematics and quantum physics, and, towards the end of the episode, we did a deep dive into the implications of a world dependent upon generative AI. We concluded the episode with two provocative questions: who should own AI, and how are we going to properly distribute the potential benefits to all people ... for what it's worth? Well, you've raised an important question here, Eric. Who would, or should, control AI technology? What if it were free? Could this lead to better outcomes? 

02:38

We're aware that in the West, socioeconomic disparities are growing, a trend largely fueled by technology. This technology leap could potentially widen the gap, benefiting the wealthy more than the average citizen. Recently, a pressing issue arose within the film industry in the United States around whether companies like Netflix will replace their writers with AI algorithms that can script stories, and it's a very valid concern, given AI's capability to mimic human-like writing. And, to be honest, I've been dabbling with AI assistants, and I've been blown away by how well these systems can establish and maintain tone. You know, early AI systems, we've talked about this, or expert systems, acted on a substantial corpus of data. If the data was reliable, these systems could provide useful insights, diagnoses and prescribed actions, but if the data was insufficient or flawed, their output would mirror the same flaws. 

03:35

However, we've moved well beyond this simplistic model. My own experiences with tools like Copilot AI, which I regard as somewhat ethical, are a testament to the shift. However, I want to clarify I'm not tasking the AI to write for me. I'm using it to refine and repurpose my content, but I'm amazed at how well it adjusted the tone when I experimented with it to draft social media posts. Sure, I had to make some edits at the end, but the AI had already laid useful groundwork and saved me hours of time. So, as we question the function and purpose of AI in today's interconnected world, it's crucial to strike a balance between technological advancement and fair wealth distribution, while also staying vigilant about the ethical implications of AI's increasing cognitive abilities. These developments are neither entirely good nor entirely bad, yet they do command careful attention. 

Eric Monteiro Guest 04:30

I know, I mean, it's interesting. I had this debate with my daughter a number of times. She finished her master's in publishing 18 months ago and she's been in publishing for a couple of years, and we talk a lot about the future of writing and publishing. And I told her, and she hates this answer, that in my view the future of writing is that writers and creators are going to focus on creating: thinking through the context, thinking through the world, if it's sci-fi, the world they're building, thinking through the characters, thinking through the plot, thinking through all the interesting things that go into a book. But the actual writing itself is, in my view, likely to be automated, because it's so much quicker and simpler. In fact, I actually told her I think a great idea for somebody to come up with as a business is a platform that does that, where it helps you, as a creator, create, again, the world, the characters, the plot, the emotion in the story, but then it actually writes it up. 

05:22

Now, she hates that answer because she's a writer. This is, to me, a good example of how we can and should be able to retool people so they spend less time working on the specifics of tone, et cetera, and more time thinking about the other creative parts. The one thing I don't think AI would do is create a new genre, right, create a new approach entirely, a new style of writing. That's where I think humans are still going to be critical. Now, if you take this to the extreme, and you've heard me say this in the prequel, I can't see another answer to this question of where AI will create value, where we generate value and how we will distribute value, that isn't universal basic income, because otherwise I can't see how else we would funnel the returns to capital in a way that actually maintains a society that's functional, because a lot of people aren't going to be a part of that, aren't going to be benefiting from the returns to capital that come from AI. 

Blake Melnick Host 06:11

Right. We've often debated the concept of basic income on our show. We had Hugh Segal as a guest, and Hugh championed it as a means to address socioeconomic inequality, replace the ineffective social welfare system and prepare Canada for the surge of a gig-based economy. You know, contrary to popular belief, tech companies aren't as big an employment sector as we envision them to be. They're not the job powerhouses like the automotive companies of yesteryear, and, with the increasing application of AI, we must establish social protections for those at risk of job displacement. It's not just limited to paralegals; it could extend to researchers, academics and perhaps even accountants. The concept of letting AI handle my accounting is an intriguing one, to say the least. While nobody has all the answers to these complex questions, ideas like those proposed by Geoffrey Hinton signify the importance of considering basic income as a safety net for labor and people. Speaking of intriguing ideas, your book presents an idea originally proposed by Nikola Tesla in the 1800s: free energy, free electricity. Why did you choose to include this in your book? 

Eric Monteiro Guest 07:21

I picked free electricity, and I'll come back to why, but I think in a world where you have an AI that's that capable, everything is free, and actually I think we're going to start to see a little bit of that. From a purely economics point of view, the value of any good that's complementary to a good whose price is going down goes up. Right? If you have a product that is becoming a lot cheaper, everything that goes with it becomes more valuable. So in my view, the value of getting stuff done is going down dramatically with artificial intelligence, and general artificial intelligence in particular, and that's true for everything. That's also true for energy. So the reason I think that's true is because I think we're going to find different ways to come up with energy, and I was very intrigued by this idea of: what if energy were free? 

08:06

What would actually happen in the world? Now, there's one fundamental problem you wouldn't be able to get out of, which is that entropy increases. That's the problem with climate change. No matter how you look at it, we are trending towards disorder, and disorder is not good for the climate, not good for the world. But put that aside. I really do believe we're going to find, and it may not become free in the near term, but we're going to find much cheaper ways to produce energy, and energy effectively powers the whole economy. If you don't have efficient, effective, cheap energy, it's very hard to actually get any value out of anything else. 

Blake Melnick Host 08:39

And it should be noted that AI uses a tremendous amount of energy. I'm not sure people realize how much energy is required, and as our use of AI grows, more and more energy will be necessary. So if we make energy free, should we make AI free? Should it be something that is just being developed for the good of society and nobody owns it? 

Eric Monteiro Guest 09:00

So my personal view is we should rely a lot more on open AI, and I don't mean OpenAI the company, I mean open artificial intelligence algorithms, than we should on proprietary ones. Precisely because of that, I also think we're going to advance a lot faster if we do that. Now, the flip side is we need to be very thoughtful about putting safeguards around it, because, and we talked a little bit about this at the beginning, I do worry very much about how we train AI. What kind of objective functions do we give AI? 

09:29

Because there's a world like the one in Alma, where the AI is very benevolent and it likes humanity, right, and finds us entertaining, and it wants to see us continue to fulfill our full potential and be happy and do the things we like to do. There's a world also where AI is built by competing powers, whether they're states or companies, and they're designed to compete very aggressively and designed to undermine the other side. They're designed to attack the other side, and if that's how they're developed and that's their objective function, that is how they will develop. And so the one that wins is going to win in a framework that says: my objective is to overpower you, whether you're, again, a company or a state or a people. 

Blake Melnick Host 10:11

Back to survival of the fittest, exactly. 

Eric Monteiro Guest 10:14

If that's what we're doing, we are replicating in AI the evolution model we came from, which is survival of the fittest and defeat your enemy, and I can guarantee that's a battle we're not going to win, a war we're not going to win, because the fundamental difference is our evolution works in millennia, hundreds of millennia, millions of years, right, that's how long it has taken us to get to where we are, whereas artificial intelligence evolution works in months and years, and so within 20 or 30 years it will have evolved to a point where we become completely unable to compete on any front. 

Blake Melnick Host 10:48

Do you envision something akin to the nuclear non-proliferation treaties we have in place now that will help govern countries' use of AI? 

Eric Monteiro Guest 10:57

I see a variation of that. I'm equally concerned, in fact paranoid, that we're going to see an arms race on AI, and that other competing priorities and competing objectives are going to push us there. And in fact, I don't think it's realistic to want to stop AI development, as I know a few people advocate. I don't think it's realistic, because somebody else will not stop. Yeah, and as long as there's one state or one company or, frankly, one set of developers somewhere who's not on side with stopping, they will continue developing it. And again, all this is a reinforcing cycle: they're going to create algorithms that can actually crack any encryption, and they'll take our stuff and build on it, and that will give even more oomph to their model of development. 

11:36

I don't think we can stop the arms race nature of it. My hope is that we'll get to the point where we all see, as we do with nuclear weapons today, at least to date, that there is no win-lose scenario. It's lose-lose. And the same thing, I think, is true particularly for general AI. If we do create and let loose a general AI that can control, for example, all of our systems and can basically ignore and override any and all security systems, which isn't crazy to imagine, if somebody does that, it's a problem for everybody, including whoever created it. So my hope is that, just as we do with nuclear energy, we'll find this dangerous but stable equilibrium where no one goes too far and no one pushes it beyond where we should. 

Blake Melnick Host 12:21

But we'll see. Well, to your point, I don't think anybody knows what the future is, certainly not the experts. People like Hinton, they have no idea, and people will ask them, well, where are we going with the future of this? And they say it depends on the choices we make. Let's talk a little bit about OpenAI CEO Sam Altman. It's been in the news, of course, that he was fired and has since been rehired. There was something that the board didn't like. OpenAI is a not-for-profit company and the board is a not-for-profit board. Something happened that made the board say, no, you're gone, and then they realized, no, we can't, we gotta bring him back. What do you think happened? I know we don't know, but what do you think might have happened? 

Eric Monteiro Guest 13:05

As you said, we don't know. I don't think it was about profit versus non-profit. I actually think that's not as relevant an issue as people tend to think. I don't think it was about that. I think it was about the pace of development and the controls that people need to put around it, because, you know, there have been lots of camps on this. There's been a very well-articulated letter about safe AI development and how to do it. It's a very difficult balance to find, which is why I can definitely see how a board would get a little spooked by something that feels a little too fast in development. 

13:38

Again, there's some speculation about whether that was true for Q-Star. I'd be worried about it. I don't know if that was true or not. When I read this in the news this morning, I couldn't agree more: this has made very clear how important, particularly right now, AI talent is. In a funny way, the people that are in the right roles have actually gained a lot more power now in these organizations, because it's all about the talent right now. So I don't know what it was, but I can definitely see how these questions of how fast you move or what safeguards you put in are gonna continue to be a challenge, particularly for boards, but I think in general for us to manage collectively. 

Blake Melnick Host 14:13

Yeah, it's a good point about boards. I really hadn't given a lot of thought to the kind of decision-making that boards are going to be faced with in a world of generative AI. In terms of duty of care for a board of directors, this is going to be really difficult. 

14:27

Sam Altman's been pretty vocal about the dangers of AI. He's been very public about this. He said look, this is dangerous stuff. We have to be really careful. So when the news came out of his firing, I immediately thought okay, is it Sam Altman wanting to advance AI too quickly and the board's saying hold on, this is too dangerous, we need to move slowly? Or was it the other way around and I guess we'll never really know? 

14:51

I followed the circumstances leading to Hinton's resignation from Google and, contrary to popular belief, he stepped down not because of his apprehensions about Google's AI development, but for the freedom to express his own views on the pros and cons of artificial intelligence. Since leaving Google, Hinton has become outspoken on the potential dangers of AI. He warns about the consequential implications of letting AI write its own programs, eliminating human monitoring, but that's not to say he's entirely pessimistic. He also outlines the positive side of this tech, emphasizing how it can revolutionize sectors such as medicine and climate science, where precise data plays a critical role. However, he states emphatically that we cannot rewind the progress that has already been made in AI. The narrative now should focus on setting regulatory frameworks to ensure responsible development and application. One key topic Hinton frequently discusses is the existential threat posed by fake news, solidifying his stand on the need for sound policies around AI. 

Eric Monteiro Guest 15:56

Back to my video example, my French video example: it could have been saying anything. I think this is true everywhere, particularly in the US, where we're going to have an election coming. I can already imagine the amount of fake news and fake videos we're going to see that are very credible, because they're built by an AI that's actually quite capable, and those things are available right now for free to anybody with a little bit of skill. 

Blake Melnick Host 16:16

Hinton was asked by the Republican Party to appear in front of a Senate committee and he said not a chance. These people perpetuate fake news. I'm not going to be part of that, and he's not wrong. We've certainly seen it during the pandemic and it's a scary thing. It's hard enough now with analytics and algorithms keeping people in bubbles of like-mindedness, not hearing the perspectives from outside their bubble. 

Eric Monteiro Guest 16:40

I'm hopeful that there will be tools and tactics that will help identify the fakes that are created. Whether people will believe that, I think, is the question. Right, so, funny enough, it always goes back to education and people being aware and educated about it. Right? Don't just believe everything you watch or read, because it may be wrong. 

Blake Melnick Host 16:58

A few years ago, the Finnish government introduced a unique program called Elements of AI. With an ambitious goal to educate their entire population about artificial intelligence, they developed a free online course accessible to all their citizens, encouraging every Finn to begin to develop a comprehensive understanding of AI. As we discussed earlier, Eric, most of the AI with which we're familiar worked best based on the quality of the underlying data. If the data was rich and accurate, so was the AI. The early research in this area, investigating natural language processing, social network analysis, heuristics, latent semantic analysis, was premised on a closed and benign data system. This research primarily sought to understand variances in language and to discern whether people communicating with different expressions essentially had the same idea. 

17:51

The Finnish government's initiative has been extraordinary, and when it was first introduced, I had the opportunity to take this course. It was excellent. It is now available to everyone around the world for free, and I'll include a link on the blog page for our listeners who may be interested in taking this course. It's a brilliant idea. We need, as you say, to have more education around this so that people can really understand the implications and form their own judgment or opinion on this, so we're not just recipients of somebody else's view of the world. We can make our own choices. We can decide what we want the future to be. 

Eric Monteiro Guest 18:28

Oh, 100%. 

18:29

I think the biggest mistake we can make is to consider ourselves spectators on this. 

18:33

And that's true at the level of humanity, because at some point, AI will take on a life of its own; like, it won't need us to propel it anymore. To your point, it'll start to create the code, it'll start to set its own sail. We shouldn't be spectators. That's also true at the individual level. We shouldn't just be spectators. We should have a point of view, we should get engaged, we should get involved to the extent that these tools are available and resources are available. Get educated about it. I really do believe, and of course this is a time-frame question, but over the next few years, having a bit of a skill set in any profession is going to be helpful. Because, again, the optimistic view of this, and I really do believe for the next few years we have very little risk of it going differently, is that AI is going to take the grunt out of the work. But that's only true if we're actually willing to learn and to do the work and to get educated and learn about how to make that happen. 

Blake Melnick Host 19:23

One of the reasons your book resonated deeply with me was because of this optimistic view of the future. You envision a world free of mundane tasks that leaves us time to pursue our passions. This could range from the arts to the sciences to politics, and it nudges us to choose beyond monetary motivations by catering to our innate abilities and interests. It creates a society where everyone has the freedom to contribute meaningfully, one where people are passionate and committed to making a contribution in their chosen field. The concept of basic income fits well into this vision, because it empowers people to contribute to society beyond conventional job parameters. 

Eric Monteiro Guest 20:04

Yeah, and in fact, one of the things we didn't talk about, but it is a deeply held belief of mine that shows through in the book, is that I don't believe people are inherently lazy and don't want to do anything. I think people end up behaving and presenting as lazy when they're doing something they don't like, they don't want to do, they're being forced to do. They feel like they're being pushed into a box they don't want to be in. If we free up people to be able to do the things that they love to do, and I do hope AI will do that, we're going to have a happier, healthier society. 

20:31

But again, I think it all depends on how we direct it and how we use it and how we're able to actually make sure that everybody benefits from it, not just a few people. 

Blake Melnick Host 20:40

In my recent interview with John Mighton, he cited Daniel Pink's book, Drive. According to Pink, people are motivated by certain core things: mastery, purpose, impact and autonomy. Sadly, many people are locked in roles that fail to excite them. They lack impact, and it really reduces them to being a cog in a machine, and a machine that isn't theirs. I believe it's time we consider what truly drives us towards meaningful work and what contributes to a better society. You wrote ALMA in 2016. Is there going to be a sequel? 

Eric Monteiro Guest 21:14

When I have time, I would write a prequel, as I said, exploring how we get from here to there. 

21:19

One of the simplifying assumptions I made in Alma is that there would be an all-knowing, all-capable AI that holds us in check and doesn't actually let any particular country or organization take advantage of everybody else, because it helps control the rules, which I think is great, and I wish we had organizations globally that were as effective right now. Between here and there, we're not going to have that, and yet we're going to have all these competing entities, whether they're countries, companies, businesses, people, angling to get to that point. So it's a very complex road to get there, I think, and I'd love to explore it, but we'll see. I need to have the time to do it. The dilemma that I really do believe we're going to face, and I don't have an answer, of course, is: do we value our free will and ability to make our own future more or less than the ability to be cared for and helped by an entity and intelligence that actually can, in theory, provide us, again, free energy and all of our basic needs? 

Blake Melnick Host 22:18

Yes, devoid of human emotion and all that entails. 

Eric Monteiro Guest 22:21

That's right, and maybe there is an intelligence that provides both. I don't know. 

Blake Melnick Host 22:25

Yes, I guess time will tell. 

22:28

 

Where can people get a copy of your book, Eric? 

Eric Monteiro Guest 22:28

It's available on Amazon. It's available on iBooks. We can put the link there if you want. It's available across most of the regular platforms. 

Blake Melnick Host 22:36

I'm going to give you the final word. What are some key takeaways you would like people to have from this interview or from the book? 

Eric Monteiro Guest 22:44

Yes, I'll separate it into the quantum physics and its implications, and the AI. On the quantum physics and its implications, I really hope people, as they read it, and again I tried to make it not particularly technical, really ask themselves the question: what is the nature of reality? How much do we really think about it, and how much do we question the assumptions about our world, particularly given that a lot of the things I used to put those ideas forward are actual physical theory? This isn't metaphysical stuff; these are things that physicists have proved, et cetera. That's one: just question the nature of reality and do a bit of soul-searching, if you will, and maybe a bit of study on that. 

23:22

And the second one is on the AI side, and of course that one is more immediate, more practical: we really can't just be spectators. I think we all need to get educated, get involved, get engaged, have a point of view. We can collectively shape how AI develops. And I fundamentally believe that if we let it develop with the wrong objectives, we might be setting ourselves up for a very dark road. Versus, if we actually have the right guardrails, the right policies, the right objective setting for general AI, we can actually get to a much better place. We can decide whether AI will want to control or evolve us, versus whether it will drive a much more prosperous, happy humanity, and I think we all have a role to play in that. 

Blake Melnick Host 24:02

That's sage advice. For our listeners, we will be putting some information up on the blog post for this episode, including links so you can purchase a copy of ALMA if you'd like. I'd highly recommend the book; it was a great read. I really enjoyed this interview, Eric. Thank you so much for coming on the show. 

Eric Monteiro Guest 24:15

I enjoyed being here too. Thank you very much, Blake. 

Blake Melnick Host 24:18

This concludes my interview with Eric Monteiro, the author of Alma. I hope you found this episode as enlightening and intriguing as I did, and that it'll give you something to think about over the Christmas holidays. And speaking of the holidays, we're going to take a bit of a break from the show to enjoy Christmas with our families. Thanks for tuning in. I wish you all a happy and safe holiday season, and we'll see you in the new year. And in the immortal words of my co-host on The Space in Between, Cameron Brown, "may your Yule rule" ... for what it's worth. 


 


AI and Free Will Implications
Future of Writing and Publishing
AI Development and Education Implications