
Is AI an Existential Risk? With Trond Arne Undheim, Research Scholar in Global Systemic Risk at Stanford University

Trond and Adel explore the multifaceted risks associated with AI, the cascading risks lens, the likelihood of runaway AI, the role of governments and organizations in shaping AI's future, and more.
Updated Oct 2023

Guest
Trond Arne Undheim

Trond Arne Undheim is a Research scholar in Global Systemic Risk, Innovation, and Policy at Stanford University, Venture Partner at Antler, and CEO and co-founder of Yegii, an insight network with experts and knowledge assets on disruption. He is a nonresident Fellow at the Atlantic Council with a portfolio in artificial intelligence, future of work, data ethics, emerging technologies, and entrepreneurship. He is a former director of MIT Startup Exchange and has helped launch over 50 startups. In a previous life, he was an MIT Sloan School of Management Senior Lecturer, WPP Oracle Executive, and EU National Expert.


Host
Adel Nehme

Adel is a Data Science educator, speaker, and Evangelist at DataCamp where he has released various courses and live training on data analysis, machine learning, and data engineering. He is passionate about spreading data skills and data literacy throughout organizations and the intersection of technology and society. He has an MSc in Data Science and Business Analytics. In his free time, you can find him hanging out with his cat Louis.

Key Quotes

I wish we could all take a deep breath and realize that what AI is going to do is largely what we are going to let it and allow it to do, as opposed to what AI is going to be doing to us. And this is relevant for any emerging technology. There is no force called emerging technology that just lands on us like it came from Mars. It comes from humans. Humans created it. We are responsible for it.

AI is not some sort of positive civilizational force that only will lead to good things if you do it right. It will lead to terrible things no matter what we do. But that doesn't mean we're going to stop it. No one has the power to stop it. So let's stop discussing stopping a technology. That doesn't mean we can't modify it, regulate it, work with it, anticipate it, develop scenarios around it and develop our own flexibility and mitigation strategies. That's what we have to do through experimental AI approaches, but also through approaches that have nothing to do with the depths of tech expertise in AI.

Key Takeaways

1

Risks in the AI domain don't operate in isolation. Understanding the concept of "cascading risks" is crucial, where one event or decision can lead to multiple subsequent events or outcomes; see the illustrative sketch after these takeaways.

2

Both the public and private sectors need to develop dynamic and reactive governance systems as AI becomes more integrated into various sectors. Proper governance ensures responsible and ethical AI deployment.

3

The future impact of AI on the workforce will be significantly worse for those who are not adaptable or lack the resources for continuous learning. Organizations should focus on upskilling their workforce to prepare for AI-driven changes.
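To make the cascading-risks idea in the first takeaway concrete, here is a minimal, purely illustrative Python sketch: risks are nodes in a small directed graph, and a triggered risk can probabilistically set off downstream risks. The risk names, the graph structure, and the probabilities are invented for illustration only; they are not taken from the Stanford study or from the episode.

```python
# Illustrative sketch of "cascading risks": one triggered risk raises the
# chance that downstream risks fire too. All names and probabilities below
# are made-up assumptions for illustration, not data from the study.
import random

# edges: risk -> list of (downstream risk, probability the cascade propagates)
CASCADE = {
    "geopolitical_conflict": [("ai_arms_race", 0.6), ("financial_shock", 0.4)],
    "ai_arms_race":          [("runaway_ai_deployment", 0.3)],
    "climate_disaster":      [("infrastructure_failure", 0.5), ("financial_shock", 0.3)],
    "financial_shock":       [("infrastructure_failure", 0.2)],
    "runaway_ai_deployment": [],
    "infrastructure_failure": [],
}

def simulate_cascade(trigger, seed=None):
    """Return the set of risks that end up firing after a single trigger event."""
    rng = random.Random(seed)
    fired, frontier = {trigger}, [trigger]
    while frontier:
        current = frontier.pop()
        for downstream, p in CASCADE.get(current, []):
            # a downstream risk fires at most once, with probability p
            if downstream not in fired and rng.random() < p:
                fired.add(downstream)
                frontier.append(downstream)
    return fired

if __name__ == "__main__":
    # One geopolitical shock can end up touching AI, finance, and infrastructure.
    print(simulate_cascade("geopolitical_conflict", seed=42))
```

Running the simulation with different seeds makes the point of the takeaway: the same single trigger can touch very different combinations of downstream risks, which is why studying any one risk in isolation can be misleading.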


Transcript

Adel Nehme: Hello everyone, welcome to DataFramed. I'm Adel, Data Evangelist and Educator at DataCamp. And if you're new here, DataFramed is a weekly podcast in which we explore how individuals and organizations can succeed with data and AI. It's been almost a year since ChatGPT was released, mainstreaming AI into the collective consciousness in the process.

Since that moment, we've seen a really spirited debate emerge within the data and AI communities, and really in public discourse at large. The focal point of this debate is whether AI is, or will lead to, an existential risk for the human species at large. We've seen thinkers such as Eliezer Yudkowsky, Yuval Noah Harari, and others sound the alarm bell on how AI is as dangerous as, if not more dangerous than, nuclear weapons.

We've also seen AI researchers and business leaders sign petitions and lobby governments for strict regulation on AI. On the flip side, we've also seen luminaries within the field, such as Andrew Ng and Yann LeCun, calling for, rather than against, the proliferation of open source AI. So how do we maneuver this debate?

And where does the risk spectrum actually lie with AI? More importantly, how can we contextualize the risk of AI against other systemic risks humankind faces, such as climate change, the risk of nuclear war, and so on? How can we regulate AI without falling into the trap of regulatory capture, where a select and mighty few benefit from regulation, drowning out the competition in the meantime?

Here to answer these questions is Trond Arne Undheim. Trond Arne Undheim is a research scholar in global systemic risk, innovation, and policy at Stanford University, where he leads the Cascading Risks Study, a study aimed at understanding how the different systemic risks we face interplay with each other.

He is also a venture partner at Antler, a global early-stage venture capital firm investing in technology companies. He is the CEO and co-founder of Yegii, an insight network with experts and knowledge assets on disruption. He is also a nonresident fellow at the Atlantic Council, with a portfolio in AI, future of work, data ethics, emerging technologies, entrepreneurship, and more. In a previous life, he was an MIT Sloan School of Management senior lecturer, an Oracle executive, and an EU national expert.

Throughout the episode, we spoke about how AI can become an existential risk for humanity, why the cries of existential risk we see in today's media are premature and oftentimes unhelpful, how the risk of AI interplays with other forms of systemic risk we face as a species, the risk of regulatory capture with AI, how best to approach regulating AI, and a lot more.

If you enjoyed this episode, make sure to let us know in the comments, on social, or elsewhere. And now, on to today's episode. Trond, hi, it's great to have you on the show.

Trond Arne Undheim: I'm excited too. Thanks so much for having me.

Adel Nehme: So maybe to set the stage for today's discussion, is AI today an existential risk for humanity?

Trond Arne Undheim: Yes and no. There are already things that we definitely need to worry about. Mostly because once you start worrying, it's not like the problem goes away, right? So we have to worry about it now. Even if it is a risk later, it definitely will become a very, very serious risk if we don't handle it well. But I think the current discussion is very overblown.

There is no AI takeover imminently happening. And I think it was the wrong moment to call out this existential threat moment. Even though existential threats are what I study, I do not think this is the year and month that AI will take over in any way, shape, or form that we can talk about.

Adel Nehme: Yeah, we'll definitely unpack that and discuss how you view existential risk from AI developing over time. So I'm very excited to talk to you about this, because you're currently leading a study at Stanford University, what you describe as the Cascading Risk Study, which we'll get into more deeply, and the existential and systemic risks associated with it.

But maybe first walk us through what the study entails and the motivation behind the study.

Trond Arne Undheim: Sure. The cascading risk concept is a metaphor, obviously, but a very simple way of thinking about it is to think of a water cascade or, you know, a waterfall. You can even just think of a river and a river delta. And I use the Amazon river to illustrate it, because there are many, many, many tributaries, but the effect of the whole thing is monumental.

It affects large parts of that continent and much beyond. So that's where the cascade frame comes from. Our project looks 50 years into the future. We have created five scenarios. And just a reminder, scenarios are plausible ideas and concepts about how futures might emerge. There are always numerous ones, so we chose five; it could have been three or fifteen hundred different scenarios, you know, you're never going to nail exactly what happens.

But then in our project we are using all kinds of drivers, particularly technology drivers, and we're looking not only at where the future is going and what risks we might face, but also at the mitigation opportunities. And especially in this cascading lens, the important thing becomes: don't get too fascinated and perhaps bogged down in any individual risk, however large it may seem to you.

Because you might be wrong about the configuration of risks. And if you then invest all of your time and money and energy in the wrong risk, particularly in one or two risks, and you're wrong about that, now you are really putting perhaps even humanity at risk.

Adel Nehme: That's great. And you talked about choosing five risks. Walk us through what these risks are and what led you to choose these particular five when designing the study.

Trond Arne Undheim: Yeah, I don't lead with that, just because the whole study is about cascades. We chose these five lenses, but then we of course fall back into this whole dichotomy logic where, you know, one is a climate risk lens. So it's very, very important to me that, while there are these five, and I'm now just going to list them quickly.

So one's about climate. One is about sort of a financial downfall. One is a classic sort of World War scenario, so it's like geopolitics as the culprit. And then we have synthetic biology gone wild. And then we have an AI risk scenario, an AI takeover scenario. But it's very, very important to me that in each of these scenarios, even though there is one main factor, we just did that to kick this off.

But the whole thrust of the effort is really to look at the interrelationships of these five and many, many other factors. The long and short of it is we have these five different scenarios that could happen, and they are, let's call it, mainly either driven or affected by one sort of key systemic risk.

But it's very important to say that it might look like AI is the driver, when it is the hundred other things surrounding it, influencing or shaping AI or climate, that actually are the real driver.

Adel Nehme: Yeah, and you mentioned here the interplay between the risks, and what I really like about the framing behind the study is that it puts different risk drivers in the context of each other. For example, the risk of runaway AI does not exist alone in a vacuum. It also exists in context with other systemic risks such as climate change, war, as you mentioned, synthetic biology, and other types of risks.

For example, geopolitical pressure leads to faster innovation in AI, which could lead to runaway AI. And you mentioned how, when these risks are studied together, you get this cascading risk effect where the destructive potential of one risk is exacerbated or enhanced because of another, seemingly unrelated risk.

So maybe walk us through that thinking a bit more deeply. How do these different risks interplay with each other and create this cascading effect?

Trond Arne Undheim: So the whole cascading lens comes from disaster research, right, where it has been used very successfully to show what happens when a natural disaster occurs. And by the way, it's a big discussion why we call it natural when, in fact, it's really a cultural disaster, because all the choices we make are what create disasters.

That's a whole other discussion. But anyway, cascades: when you look at just an isolated natural disaster, or some disaster occurring through nature, it becomes pretty obvious that it's not just one thing happening, because one thing leads to the next. A flood could lead to the collapse of electricity installations, human infrastructure decay, and stuff like that.

So it's been a lens that we have been using in the research field, or among disaster professionals, in order to organize relief efforts, even, and to look at what might happen next and what typically has to be mobilized in order to rescue people in very acute situations. Now, the broader cascading effects that we are talking about in our study go much wider, so they don't necessarily have to do with a discrete event; they have to do with the fact that society is one thing, right?

It consists of many, many different systems, arguably. And it is all these systems that we have a very poor handle on. Most of us don't really think in systems; it's not very easy for humans to think in systemic ways. And the cascading metaphor helps us do that, because we can try to isolate some of those factors and then figure out, well, what in this case might we need to focus on, and what subsequently might we want to focus on. But the big systemic system of systems takes us rapidly into what people call complexity theory, and it becomes very unmanageable for most traditional scientific approaches, which are based on the opposite: isolate things down to the very, very smallest detail, and then become an expert in that one little aspect.

I think this is the challenge of this paradigm in and of itself: it deeply challenges science and engineering, which traditionally cannot deal with this type of complexity, or chooses to deal with it by chopping it up. But here the point is, if you chop it up, you lose the whole, which is the point.

So it is a very, very challenging tool to use. But it has to do with a lot more than disasters, things that are very visible; it also captures a lot of more invisible features. And when we talk about AI specifically, let's talk about some of those details.

Adel Nehme: Yeah, and let's jump into that. One of the main risks that you discuss in the Cascading Risk Study is runaway AI, the risk of runaway AI, which is what I really want to center today's episode around. You know, I think a lot of folks in the industry, myself included, are more on the optimistic side when it comes to AI.

Speaking for myself here, I don't think I have a strong enough intuition or grasp of how AI can be an existential risk to humanity. So to clarify, I think there are deeply urgent issues we need to fix with AI: for example, bias perpetuation, misinformation amplification, the degradation of the quality of the internet when everything is auto-generated, or the misuse of AI by evil actors.

But I have a hard time imagining how runaway AI can happen in practice. And I think the cascading risk lens, when I started going into the study to understand it more deeply and prepare for this episode, really gave me a good way to build up that imagination. So maybe walk us through scenarios you've thought deeply about when it comes to runaway AI, and what do you think needs to be true for runaway AI to become a reality?

Trond Arne Undheim: Sure. Let me first say that I am quite skeptical of the whole concept of runaway AI. I consider it pretty unlikely, but I think the only way that one can conceive of it today is as a phenomenon where it is a collusion between various, very real, tangible human actors. They could even be governments or factions of governments.

They could even be well-meaning, well-intended people and groups, but of course also malignant actors, terrorist groups, perhaps large criminal networks, or crazy groups of scientists, or even companies that just want to push the envelope and develop things. And they have this idea of innovation, which we need to discuss, that as long as I do something crazy, I'm innovating. Which we need to get away from.

But anyway, in my mind, the cascade that could explain the runaway, meaning how it impacts many, many other factors, is not some AI system, I think, that in and of itself decides it's going to take over the world. I think that is a very overly simplistic way of looking at any kind of governance or change. Whenever revolutions happen, they don't happen because one person has decided X happens.

It is a confluence of different things in place. No change in human history has ever happened because one factor intervened, one actor or one technology or one thing just happened; it is a confluence of many, many different things. So we are not looking out for this one thing that's going to go wrong.

It is a hundred thousand things that have to go wrong over a bit of time, which perhaps then become a hybrid enemy. And in AI's case, it is much more the case that we are already integrating advanced technology into government procedures, into private sector endeavors and products, things that are becoming platform technologies.

They're very hard to remove. We can't even afford to remove them. We possibly could not even remove them if we wanted to. If we agreed to remove them, it might take 25 years and might set us back civilizationally or financially to a point where we would cause recessions and depressions, or worse.

So I think the cascading argument here is simply: be very careful when you make yourself dependent on something, because you may not be able to undo it. And that's really the cascading lens. And in that scenario, we talk about it in a much more event-driven way, where there's a kind of group of actors that collude and use AI for ill.

And I think that could happen.

Adel Nehme: Kind of unpacking what you mentioned here, you need a hundred different factors and a hundred different small decisions that kind of lead towards a more dire risk when it comes to runaway AI in the future. And I love how you lay that out in the discussion here, because of what you mentioned, for example, about AI becoming embedded in critical infrastructure.

And when you discuss the risk of runaway AI in 2070, right, it comes off the back of decisions that we can all see as sensible right now, that we may make as a species in the 2030s and 40s. So maybe walk us through how you think AI can become embedded in critical infrastructure, and what those sensible decisions to leverage more powerful AI over the next 10 years look like, that could lead us to a more dire place in the future.

Trond Arne Undheim: Well, first of all, I don't think this dichotomy of AI or no AI makes any sense at all. For one, AI is not one thing. We haven't talked about how to define it, but you know, AI is just a name. It's a marketing brand name for a panoply of different functionalities that are continuously evolving.

This year it was fantastic for people to explore the chat options, and suddenly people discovered large language models. Before that, there was the image use case that people were excited about, recognizing cat photos and stuff. These are what I would call epiphenomena; they're really just not the heart of what's going on.

The reality is, of course, we have already for decades been including digital algorithms in decision making in governments, in financial infrastructure and products, in transportation, even in consumption, in e-commerce, in many, many areas of our life. And those are the areas that matter, not to forget health, right?

We haven't come very far, but nowadays I think it's pretty apparent that the health application, and not to forget the environmental application, all the sensors, all the data that we can now garner from the environment, is going to be instrumental. So we have so many big challenges in our world

where we will have to rely on actually simpler systems, but better algorithms, and AI provides us with that. So we don't have a choice. We are going to be using all these techniques, but we have to be mindful. And I think the most important thing for me is that our governance systems, both in the public sector and in the private sector, need to become much more dynamic and reactive.

Everybody has to take responsibility. Whether you're a vendor or a government, if you're affecting millions of people, you are a governance actor. You are already a de facto government. You have to take that responsibility seriously.

Adel Nehme: What's interesting is when you mention a lot of these sensible decisions that could potentially lead us to long-term risk, and the importance of governance as we make these decisions, thinking about adding digital systems and AI into our critical infrastructure, like healthcare, transportation, et cetera.

What do you think good governance needs to look like right now, or within the next 10 years, as we build this capability into our infrastructure even more, to avoid long-term risk?

Trond Arne Undheim: Ideally, our governance in this particular instance would be a global system. But given how difficult that has been to establish, and the understandable resistance and hesitancy towards something like that, a comprehensive global framework, I think what we're looking at is regional approaches. The big economies in this case, and what we're really interested in here, is what the EU, the US, and China do in order to regulate their technologies and the technologies that are selling into those continents and systems.

These three actors have to not only get their act together, but they have to get it truly together. They have to align some of their principles, not all; I'm sure there'll be three very separate, different regulatory systems, but that will evolve, I think, over this next decade.

And what's really crucial is to do it, I guess, fast enough, within 10 years, so that you can then have an adaptive system that can change and flex to whatever developments are heading our way, not just in AI, but in terms of the cascading risk challenges that we will face, and that is able to incorporate the addition of many, many other emerging technologies that are actually acting on AI.

For example, in my scenario, quantum technology is going to be what escalates and makes some of these very drastic, but also very positive, scenarios possible. The amount of calculations that you can do over a network is going to change dramatically, and that will launch new applications. So I think we have to remain innovative and catch the opportunities that are showing up for us as humanity, as inventors, as governments, and as private actors, but we have to have a system where we try to keep these systems explainable.

Right. So explainable AI, explainable systems; then secondly, transparent systems; and then verifiable ones. Transparent because they are standardized and understandable, and theoretically at least communicate with each other. But then I have one more aspect, and this again goes a little counter to the global idea, but I wrote this in my health tech book.

I think it's really a big challenge to put all your eggs in one basket. We now have such advanced systems that it's actually good, from certain perspectives, that we don't have a global system. The internet as such is actually a risk. If it were as easy as just taking down one system and the whole thing would be gone, that would be a problem; luckily it's not like that with the internet.

But other technologies might be easier to sort of corner. So I think one of the aspects we really have to look into is perhaps to regionalize technology and isolate certain things, so that we know that if a catastrophe were to happen in one part, one node, it wouldn't take down the entire network or infrastructure. Because it's not that it would necessarily destroy each node, but it could just be too expensive to rebuild, because it only makes sense if a lot of actors have financed it. So for the systems that only make sense when basically everybody goes in together, once you have trouble in those systems, it may not be worth it to rebuild them if you have a big catastrophe.

So I think we have to build more resilient systems that are scaled at appropriate levels and can be rebuilt regionally or nationally, or even at smaller scales in a very distributed fashion. And this centralizing trend that we're now seeing in a lot of AI is, from that perspective, quite scary to me.

Because the algorithms might flex and be able to run even on small devices and such. But the systems that we put them on, and the data that we let them run on, are sometimes centralized, and I think it is a very scary paradigm to only have centralized systems. As much as you save and gain in terms of data power from central systems, I think that could be a very significant risk.

Adel Nehme: Yeah, I couldn't agree more here, and there's a fine balance, it seems, that as a species we need to strike between creating robust governance and creating antifragile governance that is quite resilient to any failure in the nodes, as you say. And I think what you mentioned here ties really well into the cascading risk lens, because you contextualize runaway AI, or the risk of AI, within other risks as well.

For example, you mentioned how evolutions in quantum computing could accelerate the risk of runaway AI if not managed correctly. But you also mentioned, for example, how an increased reliance on digital infrastructure and VR due to the degradation of nature, which is the climate risk here, or how geopolitical competition between nations, can prompt even higher dependence on and use of AI, which also creates a wider possibility of its misuse or harmful use.

So walk us through how we should think about the interplay of these risks, these trends more deeply when thinking about AI risk.

Trond Arne Undheim: The first thing is that just because there's an interplay doesn't mean that that interplay necessarily is negative, right? So cascading effects are both positive and negative. So you could think of it as like spirals, but they can also spiral in a positive way. So we, of course, need to use this awareness of how things cascade and work in a systemic fashion to our advantage.

And as long as we're using them as positive cascades, right, this is what innovation scholars, and people who earn money in the stock market, and whoever else, do when they use network effects. This is generally great for all involved, because you're spreading benefits in a very, very efficient manner.

So I think we have to be able to design systems that do that, and keep innovating products that allow us to do these things over networks. But then, at the same time, be very mindful that because these networks are so efficient, if we don't fully understand how they work, or if someone understands it all too well and starts to exploit it without telling anybody else, these are really, really big things to watch out for.

So I think we need to create more watchdogs, informally, and we need to use sensors and open principles to build those systems. And it is not as simple as saying, let's just outsource it to some third party that is like a watchdog. These are things we need to embed in the innovation process.

I'm really eager to reconceptualize what we think of as innovation. And I think we need to become more responsible for the potential future end uses and end use cases of our technologies. So you're not done with your innovation just by having come up with something great that people like and that you can sell for a huge premium. You need to actually embed into that package a risk analysis of what this could be used for, especially when it becomes commonplace and available across the world as a globally available product. And that is not systematically done right now. Of course, if you're a startup, the priority is to make a product that works.

And if you're a large corporation, you might be lucky enough to have penetration all across the globe; even then, sometimes your product fails. So once you have something that seems to work and people love, why would you go through the trouble? But as a society, as humanity, we can't afford to have mess-ups in technologies as consequential as AI.

Or quantum technology, or indeed a lot of the new synthetic biology technologies that will embed with AI, because they're using AI for their compute power, but of course with very real consequences: you're creating artificial life through technology that mimics nature. Well, that's really serious business, and some of those things, once they are really released into nature, you can't undo.

This is not just about AI per se. I think, for me, the lens is that we are living in a society where, perhaps for the first time, you know, these last 30, 40 years, we are gaining a renewed power over life as such. And we have to use it responsibly, because we will not get a do-over. Nature might survive over the next 100,000 years, but that's not so relevant for current humans, right?

So whether nature survives over the next 100,000 years and regenerates biodiversity doesn't help us much if we destroy it for ourselves over just the next 2,000 years. I believe we are in deep trouble then. This is certainly not an option we want to consider.

Adel Nehme: It's definitely not an option we want to consider. And you mentioned how we need to redefine what innovation looks like in an age where there could be hugely consequential risks to unfettered innovation. So maybe walk us through how you would like that definition to evolve over time, and which players within the AI industry, if any, model at least that definition of innovation as they roll out AI products and services.

Trond Arne Undheim: The first thing I want to say, and I have a new book out called Ecotech where I go a little bit into this debate, is whether everything now should become gigascale and gigaplatform, or whether innovation now needs to size down to smaller, distributed, little effects, and whether we really need to go into a complete degrowth mindset, meaning smaller scale, less growth perhaps.

And I think the dichotomy is in some ways wrongheaded, because it's not that innovation should become less efficient or smaller scale per se. It's just that it needs to alter its core objective. The objective shouldn't just be to carve out economic niches for small social groups, whether they be startups or wealthy individuals or corporate networks.

We need to think in a much bigger picture. That doesn't mean that innovation shouldn't happen. So, for the way I think of innovation, I think degrowth is a useful word. I don't know if you relate to that notion, this idea that has come up over the last few years in environmental economics.

They're thinking that not only does the innovation process need to change, but the whole logic by which we innovate has to change. Now, there's a disagreement in that community. Some people would say we have to consume less, travel less, do less, just less. In my book, I go through that argument and I conclude it is destined to fail.

This is not how we operate as humans. Certainly powerful, expansive, creative humans don't want that lens. So that lens is not going to fly. I want an expansive innovation paradigm, an optimistic innovation paradigm, but one where you always think about the end use case and where you're prepared to defend it, almost like in a court of law, saying: even if a bad actor gets access to the technology, here is how we will proceed.

Adel Nehme: And I think this connects really well to my next question on the state of open source AI and the potential ramifications that we see in the short-term future. Especially since you mentioned earlier in our discussion that it was maybe the wrong time to echo the existential risk alarms with the release of ChatGPT, foundation models, and large language models.

And I definitely agree with that point. Among a lot of folks within the industry, we see that kind of dichotomy today: between those who would like closed source models, foundation models that are centralized within certain actors, to be essentially the de facto only available large language models that you see today,

and others, such as Andrew Ng and Yann LeCun, who are discussing the need to accelerate open source AI and the development of AI. How do you see the argument playing out, especially in the future, and the risks associated with it? I'd love to hear your thinking around the trade-off between open source and closed source AI from a risk perspective.

Trond Arne Undheim: Well, there are risks and then there are agendas. I've worked in big tech, and I've even been a lobbyist, I guess, for some part of my life. So I understand the temptation to frame your argument in terms of "I'm trying to save humanity." There's a very big appeal to that argument, whichever side you are on; you know, I was working as a regulator as well.

People come to you from the private sector seemingly wanting to save the world. It's very compelling. And on the other side, whether you are on the open source side of the argument or you are selling some sort of closed source product, if you take the higher road, saying, you know, I'm thinking about everyone else, initially you are listened to, but then the proof is in the pudding.

Were you actually thinking about anybody else and everyone else? And maybe you were right, and maybe these people just picked this time. But what I'm saying, though, is that you don't get to scream and cry wolf many times. If you're an actor that has cried wolf, you have now used your card. So I think the timing was interesting.

And I think there were agendas there. They wanted to capture the regulatory process. They wanted to regulate now, rather than in 10 years, because their products are coming out now. I'm a big believer in openness in technology. People who have followed my work know that I've worked on fostering open source, and certainly interoperable, technologies for 20 years.

Both inside and outside of government, and in private and startup companies, so all of the above. That doesn't mean that I think only open source technology is the solution. I think there's a trade-off here, and with these federated models, where you are sharing an appropriate amount without disclosing the underlying identity of the data, there's value there as well.

Now, do I worry about large data sets getting lost, or large sets becoming monopolized by vendors? Yes, of course, that is a massive, massive issue. I feel like every decade we worry about one thing. Last decade we worried about big data and who owns all this data. Well, it turns out that now, I think, we are in the decade of algorithms.

It's about who has the right algorithms. If you have the algorithm, who cares about the data? You can even have crappy data, because you have the best algorithm, so you're going to win. And then we are now also starting to realize that the infrastructure we're building all of this stuff on is horrible.

So we want quantum because we want a new architecture, not silicon, to build our stuff on top of, even though NVIDIA has come up with better chips. Eventually we're going to need to escalate and get like a thousand X on our efficiencies, and we will need different technologies. So this obsession with individual pieces of, call it AI or digital platforms, is always wrongheaded.

So I think, yes, I obsess over openness in data. But I think openness in algorithms, and openness and transparency all across, is important. And this is, in fact, how we created our legal system, right, to protect IPRs. We said patents and such all need to be open; everybody needs to understand them. In terms of patents, when those were valuable, which was a long time ago, mostly in the non-digital space, they were useful because people could see what was happening.

They obviously still have to pay that patent owner. Now in the digital sphere, it works very differently. Patents are not so useful at all. But that doesn't mean that you can't claim ownership and get benefits from having developed technology. So I think we need to rethink how all these things work.

They're not going to work in one way for 30 years. We have to be adaptive. We have to allow different licensing models for technology. We have to be aware that if we adopt viral licensing all across the world, that does tamper with relationships between technologies in ways we may not be prepared for.

So it's not as simple as adopting one viral license for all technology either, right? That would at least be something we need to prepare for. So I think we need to get out of this myopic space where everything now has to be open, or everything should be closed because then innovators can innovate.

Neither is correct.

Adel Nehme: What's interesting about what you mentioned here is the potential risk of regulatory capture, which I think you really nailed on the head. What's also fascinating about the discussion of open source is the potential impact of AI on the other cascading risks that you've discussed in your study. You mentioned earlier in our discussion that not all cascades are negative, not all interplays are negative, and I think in a lot of ways we see AI playing a big role in maybe decelerating other large risks, right? For example, leaps in AI can help us unlock new materials that can act as new energy sources, or enable better, cheaper, faster healthcare, right?

Which tends to impact how nations think about war, for example, right? There's an entire set of complicated relationships here that can be alleviated and improved with AI. So, when looking at the potential use cases of AI in driving positive impact across different risks, maybe walk us through what you see as the best route to optimizing for a future where AI plays a positive role in alleviating these potential long-term risks.

Trond Arne Undheim: The first thing, I think, is just to simply say that I've studied the internet for 20, 30 years, right? And in a similar way, there were so many optimists who thought that the internet was going to equalize everything: there would be no poor people, there would be no people without information anymore, everybody would be educated, right?

Everything was going to change. But no technology has the potential to shake up everything in the world, whether physical, mental, or otherwise. So in the same way that the internet, of course, went into existing power structures and amplified those, in addition to giving others some new opportunities, the exact same thing can be said about AI, however you define it.

It will lead to some good things, some not so good things, and a lot in between. So when we think about its potential, and I agree with you, the biggest potential of AI is for monitoring other risks and amplifying innovations that we truly need for this world to go forward. And health is a great example.

Environmental monitoring is another massive and enormously important example. And those are just starts. But I don't think we should assume or expect or even hope that AI in and of itself is either positive or negative. It is our responsibility to make sure that it basically is as good as it can be.

But it is not some sort of positive civilizational force that only will lead to good things if you do it right. It will lead to terrible things no matter what we do. But that doesn't mean we're going to stop it. No one has the power to stop it. So let's stop discussing stopping a technology.

That doesn't mean we can't modify it, regulate it, work with it, anticipate it, develop scenarios around it, and develop our own flexibility and mitigation strategies. That's what we have to do through experimental AI approaches, but also through approaches that have nothing to do with the depths of tech expertise in AI.

So those guys who are working on AI safety right now, it's laudable work, but a lot of their work is very, very myopic and deep into the current algorithm du jour. That algorithm might be irrelevant in three years. So then, if we haven't thought about the bigger social dynamics, or the governance implications, or built a new United Nations that actually understands technology, what are we going to do?

Right, so we have to do all of the above.

Adel Nehme: I couldn't agree more. And you mentioned something here, the social implications, and something I think is especially fascinating in the decade to come with the rise of more potent AI systems. One thing that I think about quite often is the potential impact of AI on the labor market, because potential volatility in the labor market could have many knock-on effects: the election of authoritarians, a potential economic downturn that would lead to more friction between countries and add to geopolitical stress.

So there are a lot of different relationships happening here in the cascading risk lens. Maybe walk us through how you view the impact of AI on productivity, labor, and the labor market, and its potential negative or positive impacts in the years to come.

Trond Arne Undheim: Yeah, I mean, the typical thing, right, before this latest AI hype was to say that robots and technology are going to take all the jobs. But then that argument was largely debunked. It didn't look like that. There was a big MIT study that just asked, where are all the robots, right?

It took longer. The current robots weren't able to take all the jobs, and maybe humans weren't interested in even future robots taking over some of those jobs, and their jobs were more complicated than they seemed. Now, large language models have perhaps altered that perception somewhat.

And I think there are many more jobs that are open for grabs for technology. But largely, and this goes to the argument in our book Augmented Lean, which came out last year and where we looked at manufacturing technology per se, it is the combination of humans and technology that wins out every time.

It's very visible in the manufacturing sector, actually, because every technology that gets into that sector initially automates a lot, and then, as it adjusts into the system, humans take over higher and higher value functions and just alter the way that they use the technologies. And then some technologies get stacked on the shelf or sit in the back, and they're very expensive.

They were a major time and energy sink, they're hard to train workers on, and they fail despite being very advanced. So I think this obsession with advanced technology that's as expensive as possible just leads to unnecessary complexity. So I think basically the labor market, yes, will have to adjust.

And we are probably only seeing some of the possibilities now; the implications of AI are likely to be even wider, perhaps worse for certain groups of workers that are not adaptable, not learning new skills. It will probably be far worse than we even imagined. But on the other hand, there are these lingering effects that humans have where we actually take over and manage, and what we would call in my field domesticate, technology.

We make it our own, which means we actually are in charge of it. And then we make use of it and become even more dynamic. However, this is a very rosy picture. If you are uneducated, don't have any resources, can't go to school, and you're just a recipient of all these changes, if you are on that end of the spectrum, it is a very challenging world that we're going into.

Adel Nehme: Yeah, and there would need to be some form of response to alleviate that risk, right, and optimize for a better outcome. Within the dynamic that you're describing, what do you think would be a good response from government institutions, for example, to enable reskilling and to enable people to find their paths in this new world?

Trond Arne Undheim: I think government's role in upskilling and reskilling workers, and indeed changing the educational system, remains. I think technologies such as AI will have enormous productivity effects, but that doesn't discount an entire class of citizens and workers just because the technologies become more advanced.

And it's also, I think, government's responsibility that even if technologies have complicated and exciting kinds of outcomes, their interfaces are easy to operate. And I think that can be mandated. We wrote that in Augmented Lean, my book on the future of digital manufacturing. It is an opportunity not to be relinquished by governments to basically mandate simple technologies with interfaces that both users and managers of technology can adapt to almost instantly, with no code and no training required.

This is really, really important. And if we have that as a principle, then no matter how advanced the underlying technology is, the use will be so simple that there is really no excuse. Well, if you don't educate yourself, if you don't have the right attitude, if you don't want to change, then you may be that worker who becomes superfluous. But I think we all should have the ability to reinsert ourselves into the labor market, and I see no reason why governments couldn't ensure that that's the case, no matter how advanced AI, or other sorts of cascading technologies working together, become.

Adel Nehme: And as we end on this hopeful message here, what are your hopes for AI over the short term? What do you think are the strong regulatory steps that we need to take to curb some of the risks that we've been discussing today?

Trond Arne Undheim: I still hope for a global regulatory framework for AI. I hope for it; I don't think it's very realistic, and I think we can deal with that even if we have regional systems. I hope for, and I am already doing, some re-education of government decision makers. As a former regulator myself, I know how valuable it was to try to stay up to date, to try very, very hard to understand the technologies as they were emerging.

And I don't think our governmental systems have very good strategies to do on-the-job education that way. But there are technology boards and advisory councils and universities. We just had congressional staffers over at Stanford. We had a fantastic three days with them, and we both learned from each other.

These things need to become institutionalized. And in our case, it was mostly younger staffers. Given that we have very, very experienced decision makers in regulatory bodies around the world who are perhaps of a different age, those people would also need a very different, catered approach to understanding how technology evolves.

And it's not an easy thing, and it's not their fault that they were educated in a day when technology lasted 30 years or more. In a time when technology might last maybe three months, you need to adapt your education system to that. You can't just obsess over one technology. You have to understand the why of how this is generated, and realize that if you invest a year in understanding something, it could be gone and irrelevant the next month.

So you need to have a very different approach, and regulators need to do this. But I think managers at all levels in society need to have that awareness as well: learn to learn, not just learn something.

Adel Nehme: And I couldn't agree more. As we wrap up today's episode, Trond, do you have any final call to action for listeners?

Trond Arne Undheim: I wish we could all take a deep breath and realize that what AI is going to do is largely what we are going to let it and allow it to do, as opposed to what AI is going to be doing to us. And this is relevant for any emerging technology. There is no force called emerging technology that just lands on us like it came from Mars.

It comes from humans. Humans created it. We are responsible for it. We need to deal with it. And it's an ongoing project, so there's nothing to be afraid of, but inaction is what I fear the most.

Adel Nehme: That's really great. And it's definitely all in our hands. Thank you so much, Trond, for coming on DataFramed.

Trond Arne Undheim: It was a pleasure.
