
Trust and Regulation in AI with Bruce Schneier, Internationally Renowned Security Technologist

Richie and Bruce explore the definition of trust, how AI mimics social trust, AI and deception, AI regulation, why AI is a political issue and much more.
May 2024

Guest
Bruce Schneier

Bruce Schneier is an internationally renowned security technologist, called a “security guru” by The Economist. He is the author of over one dozen books—including his latest, A Hacker’s Mind—as well as hundreds of articles, essays, and academic papers. His influential newsletter “Crypto-Gram” and his blog “Schneier on Security” are read by over 250,000 people. He has testified before Congress, is a frequent guest on television and radio, has served on several government committees, and is regularly quoted in the press. Schneier is a fellow at the Berkman Klein Center for Internet & Society at Harvard University; a Lecturer in Public Policy at the Harvard Kennedy School; a board member of the Electronic Frontier Foundation and AccessNow; and an Advisory Board Member of the Electronic Privacy Information Center and VerifiedVoting.org. He is the Chief of Security Architecture at Inrupt, Inc.


Host
Richie Cotton

Richie helps individuals and organizations get better at using data and AI. He's been a data scientist since before it was called data science, and has written two books and created many DataCamp courses on the subject. He is a host of the DataFramed podcast, and runs DataCamp's webinar program.

Key Quotes

AI pretends to be a person. AI pretends to have a relationship with you. It doesn't; what you have with it is social trust. So in the same way you trust your phone, your search engine, your email provider, it is a tool. And like all of those things, it is a pretty untrustworthy tool, right? Your phone, your email provider, your social networking platform, they all spy on you, right? They are all operating against your best interest. My worry is that AI is going to be the same, that AI is fundamentally controlled by large for-profit corporations.

I push for the notion of a public AI. I think we should have at least one model out there. I don't know. I just need one that is not built by a for-profit corporation for their own benefit. It could be a university doing it, a government, or a consortium led by an NGO, as long as it is public. And by this, I don't mean a corporate model that has been made public domain.

Key Takeaways

1

Consider the specific context and potential consequences of AI applications. Evaluate the cost of failure and the trust environment to ensure the technology is used appropriately and safely.

2

Encourage competition in the AI industry to prevent monopolistic practices. Diverse and competitive markets drive innovation and responsiveness to users' needs.

3

Advocate for AI-related issues to be part of political debates and policymaking. Public pressure is essential to drive government action towards creating fair and ethical AI regulations.

Transcript

Bruce Schneier:

AI pretends to be a person. AI pretends to have a relationship with you. It doesn't; what you have with it is social trust. So in the same way you trust your phone, your search engine, your email provider, it is a tool. And like all of those things, it is a pretty untrustworthy tool, right? Your phone, your email provider, your social networking platform, they all spy on you, right? They're all operating against your best interest. My worry is that AI is going to be the same, that AI is fundamentally controlled by large for-profit corporations.

Richie Cotton:

Hi Bruce. Welcome to the show.

Bruce Schneier:

Thanks for having me. 

Richie Cotton:

I'd love to talk about trust in AI, but before we get to that, can you just tell me what trust means to you?

Bruce Schneier:

I wrote a book about trust. I don't see it on my bookshelf right now. It is an incredibly complex concept and it has many meanings. It's an overloaded word, kind of like security. So asking what trust means, I think, is hard, because it means many things in many contexts to everybody. There's a difference between trust and trustworthiness. In a security context, trust is often something you have to trust, not something that's trustworthy. I do write about the difference between a more intimate personal trust and social trust, but there are a lot of scholars of trust and they have very different definitions, and it really depends on what angle you're taking. So I tend to keep it open, because many people have different needs when they talk about the need for trust, and as a security person, we often have to provide for them all. But in terms of AI, I think we should differentiate between interpersonal trust, where you might trust a friend, and a more social trust, where you might trust a bank teller or an Uber driver.

Richie Cotton:

Can you just elaborate on that? What's the difference between interpersonal trust and social trust? When would you need one or the other?

Bruce Schneier:

It's not really about need; it's about the relationship. So interpersonal trust is you trusting a friend, right? It's based on your knowledge of them. It's less about their behavior and more about their inner selves. I trust a friend, I trust a spouse, I trust a relative. We know what that means. I don't know what they're going to do, but I kind of know it'll be informed by who they are. When I say I trust an Uber driver, it's a very different form of trust. I don't know this person. I don't even know their last name. I just met them. But I trust that their behavior means I'm going to get taken to my destination safely. They could be a bank robber at night, I don't know, but it doesn't matter. For social trust, there are systems in society that enable that.

I mean, that's the reason I don't just get in the car of random strangers, but I do get in the car of random strangers who are Uber drivers. There's an entire system, based on surveillance, based on star rankings, based on whatever background checks Uber does, based on my history, based on their history, that enables us to trust each other in that interaction, right? Same thing when I hand cash over to a bank teller. I don't know who they are, and I'm giving them my money, but I know there's an entire system that the bank has that allows me to trust that person in that circumstance. If we walked outside the bank and down the block, I would never give that person my money, ever. But in the bank I would. And that's the difference. It's a really important difference, because social trust scales. Interpersonal trust is only based on who I know; it's not going to be more than a hundred people, and it'll be less than that. But with social trust, I can trust thousands, millions of people. I flew in on an airplane yesterday. Think of all the people I trusted during that process, including all the passengers, not to leap over and attack me. And I mean that's a little bit funny, but if we were chimpanzees, we couldn't do that. So social trust is a big deal. It is unique to humans, and it makes society work.

Richie Cotton:

So it seems like social trust is then going to be incredibly important when it comes to AI. In the same way that you mentioned the bank example, where there are hundreds of different people involved in creating all these systems to make sure that social trust works, what's the equivalent that's needed for trusting AI?

Bruce Schneier:

AI is interesting, because AI pretends to be a person. AI pretends to have a relationship with you. It doesn't; what you have with it is social trust. So in the same way you trust your phone, your search engine, your email provider, it is a tool. And like all of those things, it is a pretty untrustworthy tool. Your phone, your email provider, your social networking platform, they all spy on you. They're all operating against your best interest. My worry is that AI is going to be the same, that AI is fundamentally controlled by large for-profit corporations, that surveillance capitalism will be unavoidable as a business model, and these systems will be untrustworthy. We want social trust with them, but we won't have it. And because they are relational, because they are conversational, they will fool us. We will be fooled into thinking of them as friends and not as services, whereas at best they're going to be services. So I worry a lot about people's misplaced trust in AI.

Richie Cotton:

That certainly seems like it could be a very bad problem when you start giving away all your most personal details or intimate thoughts to some AI that isn't actually as trustworthy as…

Bruce Schneier:

So, yes and no. Already, your search engine knows more about you than your spouse, than your best friends. We never lie to our search engines. They know about our hopes, our dreams, our fears, whatever we're thinking about. Similarly, our social networking platforms know a lot about us. Our phones know where we go, who we're with, what we're doing. So we're already giving a lot of personal data to untrustworthy services. AI is going to be like that. And when I think about AI digital assistants, I think that's one of the holy grails of personal AI: an AI that will be my travel agent and secretary and life coach and relationship counselor and concierge and all of those things. We're going to want it to know everything about us. We're going to want it to be intimate, to do a better job. And now, how do we make it so that it's not also spying on us? That's going to be hard.

Richie Cotton:

In terms of being able to know when it's appropriate to have those sorts of trusted or intimate interactions, is there any way to gauge how trustworthy some AI is, or have you just got to assume that it's not trustworthy?

Bruce Schneier:

We have to assume that the AIs that are run by Meta and by Google and by Microsoft and Amazon are going to be no more trustworthy than all of their other services. I mean, it would be foolish to think that Google, whose business model is spying on you, would make an AI that doesn't spy on you. Maybe they won't, but that's not the way to bet. Surveillance capitalism is the business model of the internet. Pretty much everything spies on us. Your car spies on you, your refrigerator spies on you, your drone, whatever it is. We see again and again how surveillance is pervasive in all of these systems. So we can't expect different here, and I think we'd be foolish if we did. If we want different, we're going to have to legislate. It's our only hope.

Richie Cotton:

I think you're right that at the moment all the most powerful AIs are created by these large technology companies. So you mentioned the idea of legislation. Is there an alternative to this sort of corporate AI?

Bruce Schneier:

I push for the notion of a public AI. I think we should have at least one model out there, I don't know, I just need one, that is not built by a for-profit corporation for their own benefit. It could be a university doing it, it could be a government doing it, it could be a consortium or an NGO, as long as it is public. And by this I don't mean a corporate model that has been made public domain. So the Llama model doesn't count; that is still a corporate model, even though we have access to its details. It needs to be a model built from the ground up on nonprofit principles. This feels important. The other thing is, you get a different kind of AI. I don't need it to dominate. I don't need it to supplant all the corporate AIs. I need it in the mix. I need it to be an option that people can choose, and an option that we researchers can study in opposition to the corporate AIs, to sort of understand where the contours are. It's not a big ask. Models are expensive to build, but in the scheme of government expenditures they're cheap, and models are getting cheaper all the time. But this feels important: if we're to understand how much and how we can trust corporate AI, we're going to need non-corporate AI to compare it to, otherwise we're not going to be able to make good decisions.

Richie Cotton:

So this is a really interesting idea, having this sort of counterweight to the corporate AIs, just having a public model. Now, there are different levels of openness. I know, for example, the Meta models are sort of open weights, but they don't give you enough details about anything else. So just talk me through: how much of it needs to be open and publicly available?

Bruce Schneier:

I think all of it: built from the ground up by the public, for the public, and in a way that requires political accountability, not just market accountability. So openness, transparency, responsiveness to public demands. We know the training data, we know how it's trained, we have the weights, we have the model, and that now becomes something that anybody can build on top of. So universal access to the entire stack. Now this becomes a foundation for a free market in AI innovations. We're getting some of that with both the Hugging Face models and with the Llama model out of Meta, but they are still proprietary: built and then given to the public, which is not enough. You're not going to get a model that isn't responsive in the same way to corporate demands. And maybe that doesn't matter, but we don't know if it doesn't matter yet.

And I think we need to know. This is just too important to solely give to the near-term financial interests of a bunch of tech billionaires. We are going to need a better way to think about this. And the goal isn't to make AIs into friends; you're never going to get interpersonal trust with AI. All I want is reliable service, in the way Uber is a reliable service, even though I don't know, or in this interpersonal way trust, anybody involved in that system. The way the mechanism works allows me to use Ubers anywhere on the planet without even thinking about it. And that's kind of the way trust works. When I get on an airplane, I don't even think about: do I trust the pilot, the plane, the maintenance engineers? I know, because of this social trust, that Delta Airlines puts well-trained and well-rested crews in cockpits on schedule. I don't have to think about it. When I go to lunch in about an hour or so, I'm not going to walk into the kitchen and check their sanitation levels. I know that there are health codes here in Cambridge, Massachusetts that will ensure that I'm not going to die of food poisoning. Are these perfect? No. People do occasionally get sick in restaurants, airplanes occasionally fall out of the sky, and there are Uber drivers that commit crimes against passengers. But largely it's good enough. And actually, I think Uber drivers are interesting. Taxi driver used to be one of the country's most dangerous professions. It was incredibly risky to be a taxi driver in a big city. And Uber changed that, through surveillance that enables this social trust.

Richie Cotton:

There's a lot to unpack there. I hadn't realized that taxi driver was such a dangerous occupation. But it's interesting that these sorts of additional regulations, like the ones you mentioned with restaurants around sanitation and making sure the food is healthy and isn't going to give people food poisoning, are sort of beneficial and help things scale. I think a lot of people, when they hear "more regulation," their gut reaction is going to be: oh, well, regulation stops innovation. So what sort of regulation can you have for AI that is going to…

Bruce Schneier:

So let's stop for a second. "Regulation stops innovation" is a bullshit argument given to you by people who don't want to be regulated. It is not true. Do we have problems with innovation in healthcare? Do we have problems with innovation in automobile design? We put regulation in place because unfettered innovation kills people. So we're okay with it taking a few years for a drug to hit the market, because if you get it wrong, people die. Same thing with airplanes. And if you wander around your city, what is the lack of innovation in restaurants because of health codes? None. Zero. That is a fundamentally bullshit argument; do not fall for it. That's one. Two is: if it inhibits innovation, maybe we're okay with that. If innovation means people die, we want slower innovation. You can only move fast and break things if the things you break are not consequential.

Once the things you break are life and property, then you can't move that fast. And if you don't like it, get into a different industry. I don't care. So I have no sympathy for companies that don't want to be regulated. That is the price for operating in society. And sometimes we regulate people out of business. 150 years ago we said to an entire industry: you can't send five-year-olds up chimneys to clean them, and if that hurts your business, too fricking bad. We no longer send five-year-olds up chimneys to clean them, because we are more moral than that. You can't sell pajamas that catch on fire; if that hurts your business model, get a new business model. You can't do it. So I'm okay with putting restrictions on corporations. They exist because it is a clever way to organize capital and markets. That's it. They have no moral imperative to exist. If Facebook can't exist without being regulated... I mean, I want Facebook to stop spying on its users. If it can't exist because of that, maybe it goes away and gets replaced by a company that can do social networking without spying on its users. There's no rule that Facebook has to exist. Sorry, I'm a little strident on this.

Richie Cotton:

No, I'm glad you're giving your opinions. So in that case, what sort of regulations do you think should apply to AI?

Bruce Schneier:

In general, my feeling is we don't need a lot of new regulations for AI, because what we want to regulate is human behavior. So if an AI at a university is racist in its admissions policies, that's illegal. But if a human admissions official is racist, that's also illegal. I don't care whether it's an AI, or an AI plus a human, or a non-AI computer, or an entirely human system; it's the output that matters. And the same thing is true for loan applications or policing or any other place you're going to see bias. Now, if I am concerned about AIs creating fake videos in political campaigns, I'm equally concerned if those videos are made with actors on a sound stage. There's no difference. So in general, my feeling is that we want to regulate the behavior of the humans and not the tools they are using. Now, after saying that: AI is a certain type of tool, and we'll need some AI-specific rules, just like poisoning someone is illegal and we also make it harder for average people to buy certain poisons. We do both. So there will need to be some AI-specific rules, but in general, regulate the human behavior and not the AI, because the humans are the ones who are morally responsible for what's going on.

Richie Cotton:

That certainly seems to help, I guess, people or managers become more accountable: to say, okay, we're putting this AI tool into action, but actually we are responsible for what goes on.

Bruce Schneier:

And it's the same if it's a non-AI tool. You put a tool in, and a tool is an extension of your power and responsibility. So if a tool does damage, then it's your fault. And we have experience with this. A lot of talk is about what happens when AIs start killing people. Robots have been killing people for decades. There's a steady stream, not a lot, of industrial robot accidents where people die, in the US and Europe and Asia. This is not new. So we have experience with robots and AI systems taking human life. And in general, it's the company, it's the maintainers. Courts are good at figuring out who's at fault, and we as a society have experience with that. That is not going to be a new thing.

Richie Cotton:

So it seems like existing laws are going to largely cover the use cases of AI, if…

Bruce Schneier:

If we were to enforce them.

Richie Cotton:

Okay. So how do you feel about the new raft of AI regulations, then? We've got the EU AI Act, which is sort of the most recent, but there are quite a few on the way.

Bruce Schneier:

The EU AI Act is really, I think, the only one that is real. I mean, in the US we have an executive order, but no one thinks we'll be able to pass any AI regulation. We can't even regulate social media, and we've been trying for, what, a decade? So I'm not optimistic about regulating AI. The EU AI Act is good. I mean, it's a good first attempt. You can tell it exhibits the problems with regulating technology: the technology changes. The EU AI Act was being written before generative AI became a thing; then GPT hits the mainstream, and they're frantically rewriting the laws, kind of half and half. But what is the big thing next year, and will the law cover it? This is the problem with writing tech-specific laws: the tech changes, but the humans don't. But in general, I like the EU AI Act.

It's a really good attempt. I like the idea that they're breaking applications into four different threat levels and regulating them differently. I think we can do more. Banking is a good example: we regulate large banks more heavily than small banks, so banking regulation is keyed to how big you are. We want the regulation we need, but we don't want to strangle the little banks. Now, in tech, the big companies like regulation, because it weeds out the competition. Facebook will be able to meet any regulation any government throws at it; its competition won't. So that kind of thinking is needed here. But the tech is moving so fast, so we are going to need a regulatory environment that is flexible and agile, and we're not good at that as a society.

Richie Cotton:

So should it be the larger companies that have more regulation, or the larger AI models?

Bruce Schneier:

I think that's always something we should at least think about. I don't have answers here, but I've kind of been poking at the questions. But yes, looking at some kind of tiered approach, I think, would be interesting here.

Richie Cotton:

If we're just looking at regulations, then that's going to put a lot of burden on governments to build trust in AI. Are there any things beyond regulation we can do to increase trust?

Bruce Schneier:

I think not. I mean, corporations are fundamentally untrustworthy, meaning you cannot have an interpersonal trust relationship with them. You can have a social trust relationship with them, but corporations are precisely as immoral as the law will let them get away with. That's the way it works: government establishes the rules, and the market plays on top of the rules. If we want to force more trust, you need regulation. No company, no industry has improved safety or security without being mandated to by the government, like, ever. And it's planes, trains, automobiles, pharmaceuticals, workplaces, food and drugs, consumer goods. I mean, these are all industries that produced dangerous things until they were forced not to. The market doesn't reward safety and security in the same way, and the market rewards fake trust just as much as real trust, especially when you have individuals working on a near-term reward horizon. So you've got these market failures where you need non-market mechanisms.

Richie Cotton:

So if we need regulation, is it going to have to be every country making its own regulations, or do we need some kind of global synchronization here?

Bruce Schneier:

Global synchronization doesn't exist, so it doesn't matter what you need; you're not getting it. I am fine with countries doing their own things. Again, companies complain: there are lots of regulations, it's hard to meet them all. And again, my response is: too fricking bad, get used to it. You're a corporation on the internet; you're a multinational corporation. If you can't handle it, don't be a multinational corporation. This isn't hard. And so I'm okay with the patchwork. I'm okay with the patchwork of states: we have state privacy rules that are different in every state, and companies are doing just fine. They complain a lot, but they're doing just fine. So I'm good with a patchwork. I don't think you're going to get international harmonization anytime soon; we can't get it on even easier things, and this is just moving so fast now. We don't have planetary government.

Richie Cotton:

I suppose there are certainly pros and cons of having a planetary government, and maybe the patchwork is best, at least for now.

Bruce Schneier:

I'm actually a fan of planetary government. I think, in general, we as a species have two sets of problems: we have global problems and we have local problems. We no longer have France-sized problems. So I tend toward government that is very big and very small right now. The medium size... I mean, it was very important in an industrial age, but it seems less important today. That is a bigger conversation than this one.

Richie Cotton:

I feel like that's a whole separate podcast episode. Alright, so one thing that seems to have come up in conversation a few times already is social media. It seems like social media went from being the darling of "this is going to save democracy," with things like the Arab Spring 10, 15 years ago, to being seen as causing a lot of problems. There's widespread disillusionment, anyway. So are there any lessons that the AI industry can learn from social media?

Bruce Schneier:

So yes, and actually I wrote an essay, which appeared in MIT Technology Review last month, called "Five Lessons AI Can Learn From Social Media," where I talk about our inability to regulate social media causing all these problems, and how we can learn from those mistakes. And it's things like virality, it's things like surveillance, lock-in, monopolization. I think that's the big one: the biggest problem with social media is that they're monopolies, and that means they just don't have to be responsive to their users or their customers, which are actually the advertisers, but their users most importantly. And anything we can do to break up the tech monopolies will be incredibly valuable in all of this. That's the biggest lesson.

Richie Cotton:

So, I guess for anyone wanting to think about regulations, you think: okay, these AI companies are going to be monopolies, and therefore you need to regulate them as monopolies. Is that the idea?

Bruce Schneier:

Yeah. I mean, monopolization: if you are a monopoly, you have a lot more power to shape the markets and to basically not respond to market demands. You can operate outside the basic tenets of a capitalist market system, and that breaks the system. So I need there to be competition. I need sellers to compete for buyers; that is how I get a vibrant market. And if sellers aren't competing with each other for buyers, you don't have that dynamic, and that's what's really important.

Richie Cotton:

And do you think that monopoly market is going to be inevitable for AI, particularly for generative AI, or do you think there will be competition?

Bruce Schneier:

Of course it's not inevitable. It's only enabled by the ability of corporations to ignore antitrust law. We have laws in place to try to prevent monopolies; we didn't enforce them for a few decades, which is why we have the big tech monopolies. They're starting to be enforced again now, but the power imbalance is great, so we'll see how it goes. But the EU is doing better than the US; in a sense, the EU is the regulatory superpower of the planet. I look to them more than the US to keep these companies in check.

Richie Cotton:

It occurs to me that we've been talking about corporations as a sort of single entity, but actually they consist of people. And in terms of people…

Bruce Schneier:

Yeah, that's what, what's his name, Romney said. I mean, it's true and it's bullshit. Charlie Stross, the science fiction writer, talks about corporations as slow AIs. It's a really interesting parallel. Yes, they're people, but they are this sociotechnical organism; the things they do are greater than the individual people. It's like saying that a car is just metal and screws. I mean, yes and no. And if you think about it, let's take Meta. Meta could decide tomorrow not to surveil its users. And if they did, the CEO would be fired and replaced with a less ethical CEO. The people can't operate with full autonomy because of the system they're embedded in. So corporations are not just a bunch of people in a room. They are a highly structured multi-human entity, and you cannot reduce them to just people.

Richie Cotton:

I suppose that's true. There are certainly limits on the things I could say. I would never tell people it's a terrible idea to learn about data and AI, because…

Bruce Schneier:

And they're immortal in a way that the people aren't, right? They outlive the people in them. The people in them come and go. It's like your skin cells, right? The cells in your body come and go, but who you are is greater than a pile of cells. Maybe that's a better analogy.

Richie Cotton:

With the skin cells as part of the human, or the corporation, or something. Alright, so staying with the human beings: within the corporations there are people who are building these AI tools and working on these things, and we have a lot of them listening in the audience. So do you have any advice for the people who are building AI or making use of AI at work? What do they need to do to create more trustworthy AI? Can they have some sort of effect?

Bruce Schneier:

Really pay attention to applications; the application matters. If I have an AI that's a political candidate chatbot and it says we should nuke New Zealand, that's a gaffe. If I have an AI chatbot that's helping someone fill out immigration paperwork and it makes a mistake, they get deported. So the use matters a lot. What is the cost of failure? What is the trust environment? Is it adversarial, is it not? If I have an AI that is advising me in corporate negotiations, how much does it matter if the AI's parent company is on the other side of those negotiations? So pay attention to the use case a lot, because that really determines whether it makes sense. AI assistants are doing a lot of legal work; as long as there are human lawyers reviewing it, that's fantastic. It makes lawyers more effective. So think about how the human and the AI interact. Think about the systems, the trust that needs to be in the system, and the cost of getting it wrong, in addition to how often things go wrong. And notice that that will change all the time. This field is moving incredibly fast; what is true today won't be true in six months. So any decision you make needs to be constantly revisited.

Richie Cotton:

I think that's quite important: think about what goes wrong. Because often, when you're building something, you think, well, can I just make something that gives the right answer? But thinking about how it can be misused, thinking about what happens when it fails, is equally important. And how about for people who are just using AI? Is there anything they can do to make sure that AI becomes more trustworthy over time?

Bruce Schneier:

Better laws are the answer, but really, no. I mean, AI is already embedded in your mapping software; the AI is giving you directions. Well, you either use it or you don't. AI is controlling your feed on TikTok or Facebook; what are your options? There aren't any. So really, for us as consumers, the AIs are handed to us embedded in systems, and that's pretty much like all tech: we either choose to use the systems or not. This is where the monopolies are our problem, because often we have no choice. I mean, I can tell you: don't get a cell phone. But that's, like, dumb advice in the 21st century; that's not viable. So again, I need government to step in and ensure that you can use the cell phone without it being too bad. Most people believe they have more protections in their consumer goods than they do, with phones and internet-connected cars. There's a story that broke a couple of weeks ago about GM spying on its drivers and selling the information to insurance companies. People were surprised. I was surprised. Kashmir Hill, who writes about privacy for the New York Times, was surprised. But should we be surprised? No. We believe we have more protections than we do.

Richie Cotton:

So it seems like if we don't have those protections, then that's going to break down the social trust, to go back to your original point.

Bruce Schneier:

And it does. And I think this is why you're seeing what we mentioned a little bit earlier, this backlash against social media. We thought it was all good; now we think it's all bad. The truth is in the middle, but we thought we could trust them.

Richie Cotton:

It seems like there are some good possible futures and some bad possible futures. What's your ideal situation here? What happens next that you think will make things go well?

Bruce Schneier:

I think that AI as assistive tech is phenomenal, and a lot of what goes well is human plus AI. A lot of what goes poorly is AI replacing a human, at least today; in a few years it might be different. So the more we leverage these technologies to enhance humans, and really to enhance human flourishing, the better we're going to do. Now, that is not necessarily where the market is going to push. The market will push towards replacement, because that is cheaper, but that has a lot of downstream effects. Massive job loss is just not good for society, and we might want to think about ways to help people that aren't tied to their jobs. In the United States, all of your stuff is tied to your job, unlike Europe. In Europe, they got healthcare through the political process; in the US, we got healthcare through collective bargaining. That didn't matter in the fifties; they both worked. But here we are in this decade, and it means your healthcare is tied to your job in a way it's not in Europe, and that's not serving us well right now. So if we're going to see massive unemployment because of AI, we need to figure out some other way to deliver basic human services that aren't tied to your job. And that's not great, because we as a society really have trouble doing all these things. So that's kind of meandered a bit here; get me back on track.

Richie Cotton:

No, we started off going down the happy path and then it sort of turned into disaster.

Bruce Schneier:

I think there's an enormous power in AI. I am mostly optimistic. I know I'm a security person and I say a lot of pessimistic things, but I am mostly optimistic here. I think this will be incredibly powerful for democracy, for society. And that's what I'm writing about now, and I think it's true.

Richie Cotton:

Maybe we'll try and finish on a happy note. So what is it that you are most optimistic about?

Bruce Schneier:

I think there's enormous power in AI as a mediator and moderator and consensus builder. And a lot of the things I think AI will do for us are things humans are perfectly capable of doing; we just don't have enough humans. So putting an AI moderator in every Reddit group, in local online government meetings, and in citizen assemblies on different issues would be incredibly powerful. AI doing adjudication. I think AI as a sensemaker, explaining political issues to people. AI is an infinitely patient teacher, so instead of reading a book, you engage in a conversation, and it makes you a better person. If we can get that working at scale, enormous value. AI as a doctor: there are parts of this planet where people never see a doctor, because there aren't enough doctors, but an AI-assisted nurse will be just as good in almost all cases. I mean, there's phenomenal potential there. AI doing research, especially research that is big-data pattern matching; we're already seeing articles about AI and drug discovery. There's a lot of potential there. So really, I look for places where there aren't enough humans to do the work, and AI can make the humans that are doing the work more effective.

Richie Cotton:

Lots of incredibly positive things there. I really like the idea of an AI moderator. That's something that hasn't cropped up in many discussions, really. So that's pretty…

Bruce Schneier:

Of course, I just said that you are replaceable in this podcast.

Richie Cotton:

Yeah, there's a story.

Bruce Schneier:

A few months ago, back last summer, I was interviewed by a podcaster. ChatGPT was just becoming a big thing, and they said: I went to ChatGPT and asked it to come up with interview questions for you, and here they are. And they were fantastic interview questions. One of them was: if you were an action figure, what would your accessories be? I've never gotten that question before.

Richie Cotton:

That's brilliant. Okay, so I can't ask that one now. I guess my days are numbered as a podcast host; maybe look out for AI Richie.

Bruce Schneier:

Isn't there an NPR episode on AI where they had an AI come up with a podcast? It asked the questions, and it came up with a little sketch on the topic, almost like a three-parter. Look it up; I think it was some NPR program. It might have been All Things Considered.

Richie Cotton:

Absolutely. Actually, I just recently saw Reid Hoffman interviewing an AI version of himself, and I…

Bruce Schneier:

That was exciting.

Richie Cotton:

Yeah, it's getting very close to "Richie is replaceable." Alright, so just to finish up: do you have any advice, or some action you think people should take, in order to get towards this happy path of AI being good?

Bruce Schneier:

I mean, to me this has to become a political issue. Nothing will change unless government forces it, and government will not force it unless we, the people, demand it. I want these things discussed at presidential debates, and I want them to be political issues that people campaign on, that matter, in the same way that inflation matters, unemployment matters, and US-China policy matters. It needs to matter to us. Otherwise the tech monopolies are going to just roll over the government. It's what they do: they have the money, they have the lobbying, and it's very hard to get a policy that the money doesn't want. It's really hard.

Richie Cotton:

I think the call to action there is for everyone to start writing letters to their local representative, or to get their AIs to write letters to their local representative. Oh yeah, write letters. There we go. Nice. Alright, thank you so much for coming on the show, Bruce. That was brilliant.

Bruce Schneier:

Good luck. Thank you.
