Skip to main content
HomePodcastsArtificial Intelligence (AI)

Why AI is Eating the World with Daniel Jeffries, Managing Director at AI Infrastructure Alliance

Adel and Daniel discuss how to define ambient AI, how our relationship with work will evolve, what the AI ecosystem is missing to rapidly scale adoption, how AI existential risk discourse takes away focus from real AI risk, and a lot lot more.
Updated Jul 2023

Photo of Daniel Jeffries
Guest
Daniel Jeffries

Daniel Jeffries is the Managing Director of the AI Infrastructure Alliance and former CIO at Stability AI, the company responsible for Stable Diffusion, the popular open-source image generation model. He’s also an author, engineer, futurist, pro blogger and he’s given talks all over the world on AI and cryptographic platforms.


Photo of Adel Nehme
Host
Adel Nehme

Adel is a Data Science educator, speaker, and Evangelist at DataCamp where he has released various courses and live training on data analysis, machine learning, and data engineering. He is passionate about spreading data skills and data literacy throughout organizations and the intersection of technology and society. He has an MSc in Data Science and Business Analytics. In his free time, you can find him hanging out with his cat Louis.

Key Quotes

A lot of people are worried that old companies and the old tech companies are going to dominate again. This is one of those cycles where the new companies that become the Fortune 500s 20 years from now are spawned. And that's because they're gonna be more agile and they're going to look at AI and how you communicate with things much differently.

Don't listen to there's going to be no programmers. Like get in there. We still need brilliant programmers. The hardest part is not writing code. It's coming up with how to think about a program and then break it down and come up. So get into it even more. Just level up the skill adapt to it. If you're an artist, you know, don't be worried about this stuff. Just embrace it as part of your tool set. Artists are not going anywhere. We're always going to have artists creating real things. This concept that they're all going to be disappeared. I'm sorry, but it's just not true. Get in there and embrace it as another tool set. It's just going to be like, using Photoshop or something else or a paintbrush, it's going to really just be amazing. And I think it's gonna be amazing across every industry. And this is one of the greatest times to be alive. And you should just embrace it. Just embrace it with relish and love.

Key Takeaways

1

AI is becoming more embedded in every software stack, transforming how we work and interact with software. This is leading to the rise of 'Ambient AI', which will be integrated into everything from supply chains to doctor's visits.

2

Open source is crucial for the development and improvement of AI. It allows a wider range of developers and researchers to experiment with and improve upon existing models, leading to more innovation and better outcomes.

3

Despite the challenges and risks, AI holds immense potential to make industries more intelligent and efficient. It's an exciting time to be working in the field of AI, and those in the industry should embrace the opportunities it presents.

Links From The Show

Transcript

Adel Nehme: Hello, everyone. Welcome to DataFramed. I'm Adel, Data Evangelist and Educator at Datacamp, and if you're new here, Data Framed is a weekly podcast in which we explore how individuals and organizations can succeed with data and AI. There is a truism in technology coined by Mark Andreessen, general partner to Andreessen Horowitz, that says that software is eating the world.

This was especially true during the early age of software applications and the internet that really sprung on the digital ecosystem we now find ourselves in. It was a great truism that shined the light on how digital first software companies have essentially usurped and replaced their non digital predecessors.

Amazon was suddenly the largest bookseller. Netflix was the largest movie rental service. Spotify or Apple became the largest music providers, and so on and so forth. Given the rapid pace of evolution in AI today, I think it's also safe to say that AI is starting to eat the world. In a lot of ways, as we discovered in our episode with Bob Muglia a few episodes back, we are still at the early start of the AI revolution.

And AI is about to become embedded in almost every single piece of software we interact with. This is what today's guest calls Ambient AI. Dan Jeffries is the Managing Director of the AI Infrastructure Alliance and former CIO at Stability AI, the company responsible for stable diffusion. The popular open source image generation model.

He's also an author, engineer, futurist, pro blogger, and h... See more

e's given talks all over the world on AI and cryptographic platforms. He has also an awesome self stack called Future History that I highly recommend that you check out. Today's discussion was extremely wide ranging and I especially appreciated the passion and clarity that Dunn brought to the conversation.

Throughout the episode, we discussed how he defines MBAI, how our relationship with work will evolve as we become more reliant on AI, When the AI ecosystem is missing to rapidly scale adoption. Why we need to accelerate the maturity of the open source AI ecosystem. How AI existential risk discourse takes away focus from real AI risk.

And a lot more. Just a note on today's episode before we begin, I mentioned the passion Daniel brought to the conversation, and that passion translates with the use of profane language a couple of times here and there throughout the conversation. While we try as much as possible to keep things clean on DataFrame, I think it would do a disservice to both Dan and you the audience if we edit out the sections in question.

We have marked this episode as it contains profane language, but I just wanted to give you a heads up. Moreover, I thought Dan approached the topic of open source AI and AI risk management from quite a fresh and clear perspective and I hope it helps shine a light on an optimistic point of view that has drowned out in a lot of discourse that focuses on the risk and potential negatives of AI.

That does not mean, however, that we will not be featuring those voices focusing on the potential risks in future DataFramed episodes, as we really want to create a space where different points of view can coexist to help you, the audience, sharpen your thinking about these topics. As always, if you enjoyed this episode, make sure to let us know in the comments, on social, or in the ratings.

And now, on today's episode. Dan Jeffries, it's great to have you on DataFramed.

Dan Jeffries: Thanks so much for having me and I really appreciate it.

Adel Nehme: Likewise, so you are the Managing Director of the AI Infrastructure Alliance and former CIO at Stability AI. You also have a pretty awesome substack that I recommend everyone to read called Future History. Maybe to set the stage for today's conversation, walk us through the moment we are in today with AI, and maybe define for us what ambient AI is.

Dan Jeffries: So look, I think AI just basically didn't work for a very long time. And we had a number of sort of AI winters where it was promised to do a lot of different things, but it just didn't really work. Or he worked on a limited timescale, You started looking at things like being able to sort letters at the post office, sort of Andrew Ng's, kind of early work and those kinds of things where.

It was able to like detect digits, And we don't tend to think of those things today as AI. We don't think of, talking to your phone and it understands what you say as AI. There's like the old joke that once it works, it's no longer AI. And so that, that started to happen. And then, we got to this MLOps phase where everybody was betting that all these companies would have, a thousand data scientists would be doing advanced machine learning.

And it looks as if that's never actually going to happen. And the sea change was really first, the release, I think, of stable diffusion as an open source. But then after that, the real big bang, the neutron ball was what everybody knows at this point is ChachiBT, right? And that was the thing that even shocked kind of insiders that said, wait a minute, this stuff totally works in a way that's completely unexpected.

right now for many folks outside of the industry. I think that started to move us up the stack and the vast majority of companies and people are basically going to interact with AI at the API level, maybe the fine tuning level, although I'm not even sure how long the fine tuning lasts, because as soon as you give the, a complete you know, legal document answering model that I can give three examples to and it works, I'm not going to fine tune anything.

I'm not going to dump that as fast as I can because it's a pain in the butt. So I think most people are going to be functioning up the stack. We're moving into this era of applications. And we're moving into an era where AI becomes a lot more ubiquitous. And you have this kind of embedded intelligence.

A lot of people are out there talking about AGI and all this other nonsense. Like, that's great. I'm a sci fi writer. I love it. But it's not going to magic itself into AGI. From transformers, but what we do have are these very like small contained intelligences that can do very cool things that we couldn't do six months ago, then go read a bunch of documents on the web and understand them and tell you what's in them, you can build research agents and all kinds of cool stuff that you just couldn't do. So I see this intelligence layer, and I call ambient intelligence, just being embedded into everything, your supply chain, your doctor's visits, whatever, this kind of. It's like the chatbots that didn't suck or the intake patient doctor that didn't suck or like the call center AI that didn't suck, right?

That'll actually be able to answer a huge chunk of these questions. When you have a problem on Slack, you'll pop it in there and the AI will know the answer 85% of the time. And that's cool because this starts to give us this whole layer of intelligence. Some people worry about AI and I go, look, there's no industry on Earth.

That would not benefit from being more intelligent. Nobody is saying, I really wish my supply chain was stupider. I really wish drugs were harder to discover and took more brute force time. There's, they're all saying. Wouldn't it be cool if we could do this in a more intelligent way? That's really what Ambiena is going to give us.

I think it's a tremendously exciting time to be alive.

Adel Nehme: I couldn't agree more. There's definitely a lot to unpack. And you mentioned here in a lot of ways, a lot of the, application layers are the intelligence layer on the software stack. It's really the promise of data science. That was discussed in the past 10, 15 years when data science was on the come up as a field that we're going to have a lot of intelligent applications on top of our software.

And this is truly the year where this is going to start becoming a reality. So let's maybe unpack a lot of that. You're such a prolific thinker and writer about AI. There's so many directions I can take our conversations in. But what I want to deep dive with you on is exactly what you mean here by MBA.

AI is how AI will become the interface for a lot of work that we do, and really how we interact with software as we know it. So maybe here to deep dive a bit more, walk us through what you mean by AI being the interface for the world or work. And how do you think that will play out more deeply in practicality over the coming years?

Dan Jeffries: So I want to give credit where credit is due. I stole the will be the interface to the world idea from the brilliant Francois Chollet, who is the author of Kairos, a brilliant thinker in and of himself who I had the privilege to meet. One time early in my kind of AI career, he was, and I just thought he was a brilliant fellow.

And so I loved that idea and it made perfect sense to me. The idea that basically we're going to talk to this thing, like it was like Snoop Dogg. He's up there you know, all the covers and he's like, man, I can understand this thing. Like I can talk to this motherfucker, you know what I mean?

And like, and like, And like, it can talk back to me, you know, like, this is like, am I at a movie right now or what? Right. And that, to me, that is like, that's awesome. and Snoop Dogg is brilliant too. And he kind of really nailed it. to me, I think the more you can chat, like right now you have these, a lot of people are worried that old companies, even the old tech companies are going to dominate again.

This is one of those cycles where the new companies that become the Fortune 500's 20 years from now are spawned. And that's because they're going to be more agile. And they're going to look at AI and how you communicate with things much differently. So, right now... You have the big companies being very conservative with their chatbots, right?

They're going to make sure that you go to that sort of recipe site or whatever. But who the heck wants to go to that recipe site when it's become, just this, six pop up ads and like, and add every other paragraph. It's like, it's super annoying. So as soon as a company comes along, it was like, man, we're going to make this interface, you chat and tell it what kind of food you're in and what your dietary restrictions are, whatever.

And it's like, boom, here's three things that you can eat without ever going there and involves a new business model, right? In other words, the old business model will take down a little bit. It's not going to be totally destroyed. That's crazy. But like a new business model that supports this, we don't know what that'll be, but it'll evolve over time.

As soon as that comes, that starts to displace the old way of thinking about it. And they've got the innovators that they get stuck. Just like Kodak is like, well, we've been working on this film for a hundred years. Like this digital thing looks cool, but like it kind of messes with the original business.

So let's not go too far. And then somebody else who doesn't care about that comes along and replaces them. So I think, just being able to naturally converse with things, bringing in, I think somebody recently said that, there won't be any programmers. I totally disagree.

I agree with David. That maybe there's going to be a billion programmers, and it just like I'm a crappy web designer. I can use Photoshop. I can use some other things. I'm ter I couldn't write the XML, but you give me a drag and drop editor. I can all of a sudden put together some pretty cool websites.

I think we're going to have more people programming like that. I think we're going to have more people being able to talk to their applications and it understands them and becomes a friend in a way, right? And I think this is super exciting. I think that's how most software is going to function.

whether it's always talking or typing in, who knows, but we're going to be able to increasingly just describe what it is that we need get better and better output from there that we can iterate with and work on, that to me is exciting. You think about even an artist.

Maybe a guitar player playing a song, iterating and going, okay, give me 30 continuations of that. And it goes, okay listen. Oh, number seven is cool. Yeah. Let me try that. Okay. You know what? I just changed this note. Give me like 15 variations on that. Like that kind of stuff, that kind of co collaborative relationship with AI is going to be a very exciting thing for everybody, I think.

Adel Nehme: That's really exciting. And the co collaborative experience that you were talking about rests in a lot of ways in great user experience and user interface design. In a lot of ways, you mentioned the neutron bomb of Chachapiti. One of the reasons why Chachapiti was so widely used is not just because the model is very performant and,

the time to value when you get a high quality output is really low, right? But also the interface of the chat, the user experience, the iteration time, the feedback loop. that you get when you're chatting with Chachapiti is pretty, great and you get a lot of aha moments and that's one of the reasons why it took off so quickly.

So maybe in your opinion, what constitutes the ideal interface and experience for an AI model as we interact with it?

Dan Jeffries: I don't know that anybody knows the answer to this question right now, because I think. the creative process is an iterative process. I was just having this conversation where, I was talking with someone about, a programmer who was working on a, idea he was working on.

And, I said, well, he's working on my idea. And he said, she said, well, it's different though than what you originally, created. I'm like, that's the creative process. when I start out writing a novel or whatever, it doesn't end up exactly the same way as when I originally planned it, there's this co creative kind of thing that happens. So I think that's happening. That's going to happen with the UI UX as we go along. We're going to iterate and we're going to go along and we're going to say, wait a minute, this is a new way to do things. And that may be the best way I ever think about that is.

My friend Chris Dixon, who I knew when I was, when we were very young and, is now a famous, investor or whatever. he was a programmer at the time. And he the stylus had just come out on these kind of, non internet connected pad thingies that we had.

And most of the people were designing video games on it as like, click and type stuff, which was the dominant UI UX at the time. And he made a little, like, a space invader. So again, we had to like circle the, attacking aliens, with the stylus. And he was said, look, you got to utilize the new capabilities of like the interface.

And I think that's the creative process. That's what happens. The more people play with these things, the more we're going to get an understanding of what the ideal interface will be over time. And then when it happens, you'd be like, Oh, well, of course. that's the idea of like any invention.

It always looks so obvious in retrospect. That's how you're going to know, like that we've gotten there, but I don't know precisely what it's going to look like just yet.

Adel Nehme: Yeah, and what's very exciting here is that we take a lot of, software and tools that we use right now for granted, right? we've reached consensus of what makes a great application a phone in terms of a user interface and a user experience, but we had to learn that as the iPhone was released and as the app store evolved and apps became more and more ubiquitous and we're doing that same process with AI right now, right?

So. We're seeing more and more AI being embedded in every software stack. It's becoming truly transformational. You know, Seeing scary good applications of AI and tools like Word, Excel. And I think that has a lot of potential to change how we work in general. So maybe how do you see that transformation happening?

What do you think our relationship will work, will look like once AI, once ambient AI becomes more ubiquitous?

Dan Jeffries: the change will happen gradually and then all at once. I mean, that's certain. If you look at that diffusion of innovation curve, which famously came out in the. The late sixties that looked at like how ideas and technology disseminate, right. And it's, been repurposed for every business presentation on the planet.

But. Most people miss that. It's like you have these pioneers, you have these early adopters, you have the early majority of the late majority of the laggards and at each point in time, it becomes something where you're like, well, I, I don't know, that doesn't make any sense. I don't know why I'd ever use that to, Oh, that's interesting.

I started to use it to like, well, I'm using it every day to, I can't imagine my life without it, but that's the sort of progress. And, I changed my mind, maybe at some respects in that. I, as you were talking just now, I was thinking back to that scene in Blade Runner, which was totally science fiction where he has the photo of the gal and he puts it in there.

And he says, okay, pan 23 to 16, right? Okay, enhance, you know, go, no, go back, And like, the software's kind of moving around and searching. He's like, okay, enhance 23 to 15, you know, boom, boom, boom, boom, boom, boom. And, there were two sci fi things in there that weren't possible before.

One was enhancing a photo, which is in every stupid crime drama of all time. And, you're like, okay, you're like, Oh, we took this low resolution VHS footage and, and got a high res,

Adel Nehme: and it's HD

Dan Jeffries: yeah, right. Yeah. Like HD, we noticed there's a cat in the back.

Adel Nehme: Yeah.

Dan Jeffries: right. You know, like,

Adel Nehme: at, look, looked at the reflection of a car,

of a car window, and then

Dan Jeffries: right, right. But now like with kind of the generative models, like there's a possibility that they couldn't fill in some of this case. And that, idea of him talking to it and he's like, okay, giving it these specific commands. Okay. Do this. Okay. No, wait, go back. Which is a very human command to go back.

I think we'll be able to talk to it. I think we'll be able to have those kinds of things that are available. And this sort of gesture to it, move things around like that. Like if you see the interface in Her, by the way, where he was there's no controller and he's just walking and then he's talking to it the character and him are having an argument.

I think that, that's how it starts again. So I think I kind of backtracked this out of the initial question. You roll back to the, to the last one but that was my thinking again, this stuff happens slowly and then it kind of happens all, all at once.

Sometimes it feels like the cycle is sped up a little bit these days, right? And that we're so used to technology that there is an acceleration point. And so, we adapted faster and I think, I was of the, Gen X generation where I lived. Without all this ubiquitous technology and then it gradually came into my life and now it's a total point of life.

So I'm on this weird edge, whereas I see a lot of younger folks were like, they'll pick up a new platform and then abandon the old one overnight. Like, well, we switched everyone over to this and they'll they can learn it, as if it's like. I don't know, as if it were just like always there.

It's like a tree or anything else. They just know how to interact with it. So there is like even a faster acceleration of how we even adapt this stuff. So, and it's compressed the time there, which is exciting.

Adel Nehme: Yeah. Indeed. And if you look at judge key's, adoption, and how we talk about chat. Think about it. It's crazy to think that this has only been available since November, 2022. Or something along those lines. It's been less than a year, but it's become so ubiquitous and so widely used.

No, I think this marks a great segue to discuss how you think the AI ecosystem will evolve in the next few years, right? We've talked about applications of AI and tools in the software stack. In a lot of ways, it's interesting because we're seeing technology incumbents move quickly, yet conservatively, as you mentioned, to adopt AI in their products.

There's a lot of competition right now in terms of, foundation models, AI, infrastructure providers. I'd love to learn your thoughts about how you think about the different players in the industry today, and how do you see things playing out in the next few years?

Dan Jeffries: I see the research moving tremendously quickly on a lot of different things. And that's primarily because you have, traditional researchers empowered with like models they can pick up and tune. Versus having to have all the money to train it from scratch and the failure rate can be really high there.

So that's accelerating research, which is exciting. I think that'll pick up with the open source, foundation models. And we're already seeing a lot of that, that cool research. I think you're also starting to see a ton of developers, traditional developers, coders get into it. They bring a different perspective, which is exciting.

And they're able to do things that you might not see in traditional data science. Merging 26 models together or whatever and, data scientists go, well, that's crazy. It's going to collapse the model. Then it doesn't. It makes a better model, right? these kinds of engineering things, this is, it marks a different phase than any technological development.

It's one person who comes up with a way to extract nitrates or whatever from the air with the, in Germany, but then it's engineers that make it this scalable platform where you can build. Something that you can sell repeatedly and crank out in a ubiquitous way.

So I think we're seeing that same thing now of developers and engineers learning about this stuff and bringing their own perspective, which is super, super exciting. I see the foundation models as being a really cost intensive business. I think it's costly in terms of people, time, compute.

I think we have to get, there has to be a breakthrough in terms of making them all or smaller, or maybe even learning by example. I always follow the DARPA stuff because they're always like 10 years ahead of the curve. And, they're funding stuff like how does AI learn in novel situations and be able to adapt to a completely new situation.

And they describe it as like, you learn chess, but then the rules of chess completely change underneath from you. How do you, How do you deal with it? And today's AI can't do that. And I was watching a, like a liquid AI network that was based on like C. Elegans brain. And it was able to be like thrown into a novel situation in a drone and able to find itself.

Whereas a transformer was like, and this is the forest I was trained on the city. I don't know what to do. So I think we're going to see new breakthrough research developments. I think we're going to see the refinement of the old stuff. What I'm really seeing a lack of is as fast as the research and all these ideas are coming in.

The infrastructure for AI is really primitive. because I spent a lot of my life in infrastructure, I had an IT consulting company, I was at Red Hat, for a decade, and I was at MLOps. So I've seen a ton of these kind of like, go from bare metal to virtualization to containers, all these monitoring and management and security tools.

We have none of that in India, right? And when I look at, like, I even looked the other day, I was like, cool, we want to try out a bunch of the open source models. And I was like, cool, spin up, a single instance on an A100 or a two way A100 charged by the hour, 250 an hour, 30, 000 a year to run a single instance.

I'm going, this is crazy. Why hasn't anybody spun up, a bunch of these models and parallelized access to them and charged on a per token basis? They're not even there yet, right? And then there's all kinds of other things I see that are missing. I think we need a red hat of AI. In fact, I was thinking strongly about starting this as a business.

I just don't want to get up and do it every day. So I'm not going to do it. I'm giving this away freely. Listen closely. I think We had all this stuff in traditional code the open source stack where you could rapidly fix bugs and where you like added skills or Upgrades to it. I think ai is going to need a similar thing where you need bug fixes and skill pack upgrades meaning Okay, we added medical knowledge for this transformer and this model just is advising people to commit suicide Okay, How do we like fine tune the heck out of that rapidly?

Or example it out like take the original model ask it the same question of why you should never commit suicide Hear that answer With the original question, generate a hundred versions of each question and answer surface 2% to a person. Okay, take that out. Great.

Fine tune yourself rapidly, output a bug fix. And that's got to be an order of magnitude faster than traditional code, because these things are so open ended. Whereas in the past, you're like, well, this is a, this is an SSL, library or whatever. It can only fail in these ways.

It can fail in a lot of ways. So I think we need not just people who are going to run the inference of these things, But people are going to be able to support these things and the community builds around that of rapid fine tuning and rapid iteration, rapid pickups, so that when I, as a user, as an enterprise orderer, I'm able to get that model, I'm going to have 200 Loras or 1, 000 Loras.

I don't even know the upper scale. Of how many loras you can add without degrading it or adapters in general. I use Lora because that was the first adapter I ever came across, but I really mean adapters. I don't know whether it's adapter is the answer or whether they become hot swappable, whether it becomes a mixture of experts, whether it's fine tuning.

I don't know, but I know that we have a long way to go in terms of the infrastructure and support. And then the middleware, I talk to a lot of companies right now building the middleware, like how do you parallelize, or cache these kinds of things, how do you deal with Prompt injections, I see that almost being like a new anti virus business with heuristics, anti, you know, like neural nets or whatever, essentially saying like, okay, this is a prompt injection, stop this on both sides of the equation in the pipeline.

I see all this middleware and support monitoring and management, all this kind of stuff. None of this stuff exists. And to some degree, we're really starting from scratch because this stuff is non deterministic. And so it's not enough to just take your monitoring software and dump it on an LLN.

You are going to need a new kind of monitoring that's able to detect logic flaws, right? For instance, like, well, this, I asked it to go get a present for my sister and like, it's off there buying, I don't know, a baseball bat, or it's like, talking to someone else or it's, outputting garbage text, right?

We're going to need all kinds of new sort of middleware, monitoring, management, infrastructure. It's going to be a whole new industry. It's going to take a bit of time, just like it took a bit of time for us to figure out how to scale web scale applications. In the beginning, they're like, well, you throw a single database at it and whatever, and that's not good enough.

All of a sudden, you get sharded databases and, distributed balancers. It's going to be the same kind of progression for dealing with these non deterministic systems. it's going to take some time. And don't know how fast it comes together.

Adel Nehme: There's definitely a lot done back here and as you point out all the different kind of challenges and the limitations we currently have in the AI stack, what do you think is the most pressing aspect of the AI infrastructure stack that needs to be fixed within the next 12 months to be able to scale the adoption of AI and large language models in general?

Dan Jeffries: So I think you need to start getting to the point where the, you know, again, you have a, additional models that you need. You need the basic middleware in place to deal with them. I think you need some sort of upgrade process. That's much more clear. When I see stuff like OpenAI, they're like, well, we deprecated the old model.

You got two weeks. Good luck. That is totally unacceptable. It's not going to work. You're going to have these kind of longer lived models because if you just upgrade the model, it can rapidly degrade your application. Developers need the chance to say, great, here are the last six versions of GPT or Claude or whatever.

Yes. If one of them is considered a security upgrade or something like that, then that has to be there. But in general, you can't just have, Oh, we swapped out the old one. And like now your application. That used to summarize text perfectly is now falling off by 75% overnight. And good luck. That's absolutely not going to work.

And so I, kind of, that basic level stuff is totally missing and has got to be fixed in the next 6 to 12 months for this to become viable.

Adel Nehme: And what you're mentioning here is basically on that example with OpenAI, is one of the limitations of closed sourced API providers that we see here. And I think this segues to my next question pretty well, is on the trade offs between open source and closed source models, right? lot of discussion now in the industry about, hey, should we work with an open AI?

Should we fine tune? Should we, build a large language model from scratch? How do you imagine these trade offs will evolve the next few months, what do you advise companies looking to leverage large language models to start?

Dan Jeffries: Well, look, I think OpenAI is not going to be the only game in town for, in anywhere. You're going to, you're going to have Minstrel, you're going to have Pi, and you're going to have Claude. We already had Claude 2, and my programmer was already telling me that it looks like it's much better at coding already.

So like you've got the open source models Llama is probably going to come out with a commercial viable version you've got gorilla you've got these other kind of like open source models. So there's going to be a proliferation of models that are going to be viable, you know for people.

I think that's tremendously exciting I don't want to see one kind of group dominate along these things and open source we're going to talk more about open source later, but I think to me, the open source is tremendously important because that lets the, it lets the developers and the, researchers who aren't maybe the researchers making You know, 20 million, 2 million, 1 million that everyone's competing for.

But you know what, not everybody who is super smart or has a great idea is already at that level in their career. There's a ton of regular folks out there, everyday researchers who might have a breakthrough because they have access to the weights and they can, go try out an idea that was too far out and they weren't going to get funding for in kind of a traditional, very expensive, eclectic, foundation model company.

and now they have the opportunity to do that. And if you look at that, there's a perfect example in the stable diffusion community. We'll talk about that more later too. But one of the things that I thought was really interesting is the Laura paper was made for large language models and the community adopted it for diffusion models to the point that I saw a Reddit.

On the stable diffusion community where the paper writer of Laura was there going, Hey, I never thought to do this. I want to do a new version that takes a new account, a larger stack of models. I want to talk to the community to understand what works, what sucks, what's better, right? That kind of fast feedback that happens from the open source analysis is super, super important.

And it's why open source eventually ends up eating the lunch of things in the long term.

Adel Nehme: So do you think that open source is like, ultimately in the long run, we'll eat the, like the lunch of like the large language model providers of today?

Dan Jeffries: so this, there's a couple of ways this future can play out. I like to do kind of a Monte Carlo analysis of the future and have like these sort of hard branches. There's a lot of this sort of weird lawsuit stuff happening right now where the people are trying to redefine copyright.

And I didn't fully expect that. I saw a lot of prop, challenges or protests or things like that. But what I didn't see was, these kind of lawsuits and this sort of, I'm an artist for a long time and I have no problem with sort of artificial intelligence, but a number of artists and other copyright holders are suddenly like up in arms about it.

So depending on how that plays out, my general sense is that the artificial intelligence industry is way too important to the world economy in the future for it to fall on the side of not allowing sort of public domain or public, scraping, I think. I think it just falls on that kind of naturally over long term, but that's going to play out and that's going to affect their trajectory of whether open source is held back for a decade or whether, is held back for a decade, it could change how the research works and force us down the path of like doing kind of a liquid Neural that trained by example kind of thing in the same way I take a kid out in the back, throw a ball to him and after a couple of weeks, he knows how to throw the ball.

Maybe he's not going to the major leagues, but he knows how to throw a ball. So I, that could change your trajectory. So I'm watching that kind of closely. The other thing is right now it's really expensive. And I would say. In the short term, the proprietary has a big advantage and they have the big advantage and they hire the best people.

They can buy the supercomputer. They can get all the data, they can be quiet about it, and they can license a bunch of data, so they have a big advantage over the open source providers. My general sense though, is that open source over a long enough timeline. It generally wins out open source is this weird, ugly, gnarly kind of thing.

And I love it, right? It's like messy. You know, You look at the early days of Linux, which has spent a lot of time with it. You go, how the hell is this shit ever going to be Solaris? Like it's fucking crap. And I got to compile my own thing. Like. It barely works, like, the old greybeards at the time were like, this will never displace you, you fools.

You know, you young whippersnappers. Get out of here with this crap. You need an enterprise support contract. but over time, the swarm of open source intelligence, right? Starts to Compact and like you get millions of developers, tens of thousands of developers working on this concept. And it just becomes harder and harder for any proprietary company to build.

You would never have been able to take all of Microsoft oracles, Adobe's, everyone else's money at the time and pool it together and build the Linux kernel. You couldn't. And so over time that openness I think ends up eating the world and if you look at Linux today, It runs everything, it runs supercomputers, it runs the entire cloud, even Microsoft, which if they had been successful in the early days of like it, would have short sightedly crippled their business today, which is, essentially runs around that, right?

so I think that open source in the long enough timeline wins. Now, I don't know whether that's five years, 10 years, 20 years, 30 years. I think the proprietary companies have a big advantage right now. And that's typical in any ecosystem too, but I think long term open source has a massive, chance to disrupt.

We may see even a third timeline where they exist peacefully, and there are huge parts of the stack that are open source and some of it that's just these very proprietary intelligences that are incredibly useful and hard to replicate. I think both of those can, that all of those are a possibility as well.

And it's It's one of these things that's a bit up in the air at the moment.

Adel Nehme: Okay, that's really fascinating insight. Maybe touching upon the community aspect here, you know, your previous company, Sybility AI, I think is a great example of an open source AI startup that has put itself on the map as an AI leader. You mentioned the community aspect of LoRa, for example from large language models to diffusion models.

Maybe walk us through the community aspect, why it's so important. for the progress of AI, how you start to play out stability in a bit more detail, and why it's so crucial for the future success of AI products.

Dan Jeffries: Again, I think it's because you get minds involved in the project who have not passed the gatekeepers of the current state of the art. What's nice about it, if you think about something like, for instance, when the Kindle came out and allowed direct publishing, there was, at that time, I remember as a writer and writing novels with my group, there was a debate about whether you're a real writer if you publish directly or whether you publish with one of the big six gatekeepers, right?

And you know that argument looks ridiculously stupid now and you know The kindle allowed you to keep 70 of your profits and then there was a hype There's hybrid publishers now at the time the gatekeepers were taking 90 of the profit and giving you 10. Okay And and it's a totally insane nowadays.

It's been much more democratized because of the openness of it Yeah, you got more shit too, right? Okay, because you open the floodgates, right? And the gatekeepers did do a good job of like, you know being able to say wait I think this is great, but they still miss things. that's the thing about a gatekeeper is It is a limited choke.

And I think that's the same thing with, when you have open source and why the community becomes so important is people who might not be traditionally a part of machine learning or whatever, get to contribute their ideas. And so I saw a lot of ideas in the community. Like, again, I mentioned earlier, they would like mash up like 20, 30 models and like get a better model.

And a lot of researchers were like, that's crazy. It's not going to work. And then it did. And then you see that kind of idea filter back in traditional machine learning where it's like, they took Palm and they jammed together a vision transformer with it. And all of a sudden the robot could find itself around like unfamiliar environments.

And like, and they didn't do anything else. They didn't retrain it, that was awesome. So I think that kind of feedback loop happens. The Laura paper, which I mentioned earlier, and the kind of feedback there. somebody did an analysis when I was still in stability of how fast the community was integrating ideas 18 days.

They were like implementing the code and integrating it into the thing. if it was a, like a proprietary piece of software or a new idea that was ready to go, like they were, or a plugin, they were integrating it in a day and a half. And so, even look, you look something awesome, like Automatic 111 or Comfy UI, which are two totally different interfaces, right?

For how you interact with these things. One is that kind of. You know, uh, Flowing the concept where you link together the different ideas and swap them out and comfy automatic, which is like the kitchen sink approach, right? Where you just throw everything in there, right? But this kind of rapid iteration of trying of ideas and new concepts and bringing in people who can't pass through the gatekeeping, but have an idea outside of the box that contributes to those things now.

That's interesting. I think I saw Andrew Carpathia or whatever speaking at the agents conference. And he was like, every time we see a paper inside OpenAI that is about some new technique or whatever, like, Oh, there's somebody inside that's like, Oh, well, we tried that two years ago. Here's why it doesn't scale or didn't work.

Blah, blah, blah. He's like, every time we see an agent paper or something like that, we're like reading it, like it's the great, like it's, I don't know, G. R. R. Martin just put out, finally put out Winds of Winter. And he's like, because it's all new to us. It's all like, it's totally new stuff. With people are doing these kind of agents and that becomes from outsiders having that stuff.

And then it gets even more important when it's open and you can have the weights and now there's like dozens of adapters that are like way more efficient now as people are like, well, maybe if we just tweak these weights or just this layer, this thing, or we inserted here, make it smaller. These kinds of things cannot happen when you don't have access to the full model.

So open source to me is just tremendously important. And I'm happy to see so many companies that there's, Red Pajama and I guess there was, run AI was required and, there's Databricks pivoting towards that and doing some open data sets, community collectives, there's Lion and there's the open chat, group from that awesome podcaster, I forget his name.

He's, that stuff is super cool. And that comes out of just more access or access is so important. And we have this weird idea today of like, well, we've got to make sure that only these trusted people have access to stuff We'll screw that that system never works, right?

We have three providers of people who are trusted with your credit card data One of them lost all the data, for half of the united states. They're still a trusted provider that's how gatekeeping works. You can't rip them out of the system. And so gatekeeping to me is garbage.

I hate it. it carries no water with me. This kind of idea of like, it's only the trusted people who can only can, can have this. You know, but no, an organization is made up of human beings who might be trustworthy at the time. Those people can change over time and make that former institution that was trusted now totally untrustworthy.

so I don't buy this crap at all. I think open source is critical The more minds you have working on it, the better you get to alignment the better you get to Like things that are beneficial for all yeah, it's going to do some bad things But just because photoshop can put a head on a naked body does not mean we need to restrict photoshop.

It's stupid, right? It's like linux is used for mal malware and hacking It also runs every supercomputer in the cloud and nuclear subs. So I don't buy this whole concept of well Unless you can guarantee the kitchen knife will never stab anybody. You can't put it out and i'm like i'm like, wait a minute Like 99.

999% of the people are going to cut vegetables. We have laws to arrest criminals I don't understand this concept so open source to me People have got to get out of this mindset of only the trusted people who couldn't have access to this stuff,

Adel Nehme: an impassioned defense of the open source AI ecosystem here. And I think this marks a great segue to discuss, AI risk in general, Doomer discourse, what we've been seeing a lot in the past, few months when it comes to Potential AI risk and existential risk, right? We've seen a lot of high profile individuals call for the slowdown of AI development.

And you know,

Dan Jeffries: a few, not a lot.

Adel Nehme: Yeah, a few. We've talked about the existential AI risk AI poses to humanity. I think you have quite an opposite view here, from your impassioned defense of open source, but AI. As a means to reaching better AI safety. Maybe walk us through your line of thinking here

Dan Jeffries: Look, me and Marc Andreessen, you don't agree on this, right? Like, and I think this stuff is just way too beneficial and too important. and every technology from the sundial to like bicycles, children's teddy bears has been shouted down as like the end of the world. it hasn't happened yet.

I don't buy that it's going to happen with this. I really do not buy the, like, the far doomer. And I look, I'm a sci fi writer. I love, the singularity and all this kind of crazy stuff. I don't, I don't know if it's actually going to happen, right? But like, it might just be a cool literary construct.

But this whole concept of like, especially from the, you know, Udowski and all this kind of stuff of like, oh, I can always tell when someone's a member of the cult of Udowski when they're like, have you heard of orthogonal theory? I'm like, you mean the theory that like. Intelligence and niceness don't line up.

How long did it take you to think of that? 10 seconds? Like that's not a theory. Okay. That is nothing. That is a blatant statement of something so obvious as to be pointless. and what I don't see is any alignment research. When I see that thing, when I see that called research, that is not research.

That's philosophy. I'm a writer about artificial intelligence. You don't get to call me a researcher. I've published no papers, no mathematical theories, no actual, like I didn't invent reinforcement. Okay. That's what I call a research. So when I see like the, president of Anthropic, like looking at this and going like, this is absurd.

That's the company where the people were like last open era, where they're like, you guys aren't, thinking about safety and alignment enough. Like we're going to start our own thing, Those are engineers working on a problem. I do not believe. That you can solve a problem that does not exist in the future.

Now, the way that problems are solved are what problem starts to happen. And then you, as an engineer, look at it and solve it in real time. We don't know like the early days of refrigerators where the gas would like occasionally leak and blow up. You don't know that's going to happen ahead of time and you can't solve it.

Until you're like able to look at it and go, well, you need to make the gas stronger. We need a different gas in there. We need to do these things. And slowly over time, you do that. We starting to have real engineers look at the engineering. And to me, every technology exists on a sliding scale. From good to evil.

A lamp might be closer to the side of good, but I can still, I can light my house with it. It's really, I can still pick it up. Then they hit you over the head with it. A gun might be closer to the side of evil, right? It kills people in wars and all these kinds of horrible things and cunt. But I could still hunt, feed my family, defend myself, these kinds of things.

So AI is right in the middle. It can do really terrible things. All this super intelligent doom stuff, it detracts from the fact that like it can be used for like facial recognition against dissidents or that it could be used like, in lethal weapons technology, right now.

In some cases you might even be able to say that's even an example of something that exists in a gray area And it's not black and white. Is it better to have a bomb that hits a like It's like wars are going to happen no matter how much we hate them and we don't want them to happen. Is it better to have a bomb that hits a building and blows up everybody in there when you're trying to find one target?

Or is it better to have a little drone that zooms in, finds the thing and, kills that one person? Again, you could make the case even that might be an advancement. Or you could make a case that this is just a horrific thing that we should never allow. But all this kind of doomerism stuff really detracts from these kinds of basic problems.

And they're not solved by any of this philosophical nonsense. I don't think it has any weight whatsoever. It's not research. It is a bunch of like people talking about stuff. I am going to go with the engineers. I'm going to go with the people who are going to actually figure out how to like, make these things interesting.

I just watched the TED talk from Udacity. It's like, well, this things could develop in ways that like, we don't have, it's not like humans at all, except in the desire to freaking kill everyone. So it has none of our, it has, it evolves from us.

It has none of our capability, you know, has none of our emotions or ideas or shares, none of our values, which is absurd. You're already seeing things like with constitutional AI and things that kind of adjust it to the values, plus it's trained on human things and trained by human examples. So naturally we're going to push it in that direction, but it doesn't have any of those values except the desire to dominate all things and kill everything.

Then there's even been papers about that kind of stuff where they're like, Oh, the dominant life form is to eradicate other species. I'm like, what are you even talking about? Like the wolves don't evolve to eat all the bunnies or they'd be dead, right? The bunnies don't proliferate so much. That the wolves can't eat them. And I don't see this. I, you, there are even examples of like, a crab and a blind sort of like, the shrimp or whatever, working together in the same hole, one of them defends the other one keeps it clean. You see all these kinds of reciprocal relationships in nature.

So a lot of this is just based on weird speculation. And in the past you had people who were gatekeepers on opinion, or basically you didn't get these kinds of fringe ideas. And now the exact opposite is true. You get like, as soon as like the most polemic, the most like black and white thinking, the more black and white you can make us or that kill or be killed or whatever, the more divisive you can be, the more like insane you can be, the more you're going to get amplified in the media to say like, this is the way it is.

Now, it doesn't mean that there are no highly intelligent, like, people, Jeffrey Hinton. Great respect for these, you know, Yoshio Bengio and Bengio and such. Like these people are looking at, I think, unfortunately they, they look at it in a more nuanced way, I think Jeffrey Hinton said recently, like, if these models, I used to think they were worse than human brains, but maybe they're better in some ways.

Meaning like they can download the ability to learn a new skill or whatever. And I've seen that as a continual learning possibility for a long time, right? You're a robot and it's like, boom, download the ability to do the dishes. Boom, download the ability to walk the dog. That's cool. We can't trade ideas like that.

And so in some ways they are better and we do have to be careful. We have to be careful about how we use these things, but I worry more about a dumb intelligence, controlled by a sadistic, nasty human being. Then I worry about, an intelligent. Like supermachined rising up and having its own desires and escape.

Where is it going to escape to? another eight way H 100 cluster with 760. It gets a RAM, a vector database and a back end. Like what people think of it as like an in person or a liquid flowing thing out in the ether that could just flow somewhere else. This is utterly ridiculous.

It means you don't understand anything about infrastructure. It makes no sense whatsoever, right? so when I look at these things, I look at, there are real potential problems. And then there's all this utter freaking nonsense that people take seriously. I can't take the paperclip maximizer seriously.

I can't take it seriously. I think the Dick Bostrom is ridiculous. I don't know why, I don't know why anybody would use that as an example of like, that's not super intelligence. That's super psychotic. And if I'm a super intelligent, like robot. I'm not even going to go, you know what? I'm going to become so obsessed and I'm going to turn the whole universe into paperclips.

I'm just going to designate that to a sub process. That's dumb. That like maximizes paperclip efficiency in the factory and be done with it. I replied, but so none of this makes me, to me, it's so much wasted time. and like, I don't know why they get so much press. Other than like, it's just like, click, people like to be afraid.

They do. They're like we are big fear based creatures. We'd have no art, no skyscrapers, no tools, no war, no Kings and Queens, nothing without fear. And so this is just the latest thing to be afraid of. Don't worry about it. Then 10 or 15 years, they're going to move on to something else to be afraid of next month.

They're going to move on to something else, be afraid of it. Some other technology that's going to kill us all.

Adel Nehme: a lot to unpack here as well. And what I love . I love that you mentioned the fridge example because in your writing you do draw a lot of historical examples of how technology that we now take for granted was looked at as potentially existential risk or as highly disruptive of the way we live, right?

Maybe would you want to mention some of these examples that we've had in the past where we there's quite a lot of moral panic around these technologies and how does that maybe parallel with the current AI risk discourse?

Dan Jeffries: Almost every technology has had a moral risk around it. So like, teddy bears were going to be destroy young women's desires to have babies and to be nurturing mothers because they were going to waste all their time on the teddy bear. Social media has been, destroying us and is every politician's favorite, like, ability to destruct.

I would argue that it's a totalitarian system, Like we see where there's millions of. People trying to desperately control social media versus regular social media, which has some exploits is if the social media is perfectly useful to us. We get all kinds of voices in there and people go, well, I think Sacha Baron Cohen was just on there saying, Oh, if Hitler was around today, he'd be running 30 second ads on the final solution.

I'm like, yeah, well, guess what? Hitler and Stalin didn't actually need social media to whip up. a ton of people into one of the most terrific genocides in history. So that doesn't track for me, just because you could use a technology that way. Cold technologies were a great example that we used earlier for something that was incredibly disruptive to the jobs at the time, And that was potentially really dangerous, right? In other words, the explosions, fires, these kinds of things in the early days. Cold has absolutely changed the way that we live in civilization. We can live in environments we'd never live in. We have a steady food supply. Back when in the sixties, they thought, the population bomb was going to destroy us all.

And all of a sudden we have the green revolution, right? but the fact that you can keep vegetables and eat cold allows us to have a much steadier food supply in the event that you have a bad crop. This is amazing. This is incredible. It's one of the, we are the luckiest 1% of people ever to be alive.

The luckiest 1% of 1% of people ever to be alive today. People don't understand this, and it's because of technology. Our child mortality rate is 4%. It used to be half. You respect every child, if you had two of them, one of them was going to die. In the 1800s, right? our life expectancy has gone from 30 to 70, right?

Because of these, medical technologies and the things that we have now. Antibiotics, chlorination in the water. They went on trial. John Leo went on trial for putting chlorine in the water. Meanwhile, cholera was killing millions of people every year. he went on trial because we were like, you can't do this.

It's going to destroy everything. It's going to destroy all the water. And of course, like, luckily it was acquitted and chlorination is now like probably dropped child. There was this Harvard study that said it dropped child mortality by 74%, 74%. So every single one of these technologies, when you look at them historically the ice industry wiped out the industry, which were huge businesses of people chopping ice.

Out of rivers and frozen places and shipping. And so I'm, I am sorry that those folks don't have a gig anymore. But I do think that like having ubiquitous cold technology was useful. And like when we invent the electric light, some jobs do go away. This is true. But new jobs are created by the possibility of these things. And I am sorry that the whale hunters are gone. And that, we don't have to hunt giant leviathans to kill them and dig the white gunk out of their head to light candles. and because there was disruption, so there are like, actually, and there were huge debates over the danger of electricity. Actually, if you read the empires of light sometimes perpetrated by the people themselves in AC versus DC. Where Edison had invented DC, AC was able to make it travel much, much further.

And so he tried this spear campaign about how deadly, AC was. And they even got to the point where they were like, electrocuting a dog or whatever in public to show how dangerous it was. It's crazy. And of course, like AC, is this ubiquitous technology that lights the entire world and makes it possible for us to, to do things we never would have been able to do.

So almost every single technology in the history of man, has been like punctuated by some sort of like, moral panic. Think of the Luddites, that famous example, we still have. Rug makers, who are able to make a, custom beautiful rug and they can charge tons of money for that thing.

And we have machines that now make a rug, from Ikea for everybody else. And so what you get is this distribution of things. And then the last argument against that people sometimes use as well, this time is different. Guess what? That argument has been used every single time. Every time is different.

I don't think that it's different. The example of like, maybe this time where the horses. To the car and that the horse population radically declines. I just don't see this happening. I see this as like, I see if there's negative uses of AI, there's other uses of AI to counter it.

If AI speeds up malware creation then it's also going to speed up the ability to like auto magically, quarantine your web software and, when it's infected and respond with a new. A new update written on the fly to, to kill that thing off.

So, so when I look at these kinds of things, I see, these kinds of super intelligent, these crazy AI things, I go, that's the understanding that AI is going to involve in a vacuum. Like the old Jules Verne version of technology where one guy has the submarine. Like for when I look at, that's not how technology develops. Like it's not interesting. And you saw the sci fi shift in these where it's not interesting if one person has a cell phone. It's only interesting when everybody does. And we're going to have lots of AIs working counter and doing the other.

So, look, all these technologies have had some moral panic. And in the long run, I think technology is almost always beneficial. It doesn't mean that there's never any dark sides to technology. Sometimes people hear me say this and like, Oh, you just worship progress or whatever.

Well, I, yeah I do. I worship like, going from 50% mortality of children to 4%. I worship medicine that, that actually works and gets us to 70 or 80 years old. I worked, I worship our, you know, the green revolution and our ability to not have to kill 2 billion people like the population bomb was recommending in the 1960s.

I, yeah I think that's awesome. And so if that means that I think progress is awesome, I think that it's true. Does that mean it has never has any downsides that we should ever consider? No, that's ridiculous. Of course, everything does. and our best thing is to iterate, move, mitigate, come up with those answers when they have to mitigate those kinds of things, right?

To mitigate the harm of these types of things as they develop. And I think that's the beauty of technology and the beauty of progress.

Adel Nehme: I also agree that it's a good thing that we're not killing whales again or anymore, right? As you mentioned. And I definitely agree with you that it's definitely a good thing, right? All of the technological advances that we've seen, while also acknowledging the downsides of technology. One thing that you mentioned is, a lot of the arguments that we see from AI doomers today tend to be grounded within, philosophical arguments around superintelligence, things that have not yet happened, and it tends to distract away from current actual problems that are potentially, that can potentially arise from highly intelligent systems that we see today.

Maybe what are some of the real risks, unpacking that a bit more, about AI tools that you're worried about today, right? And how do you think that we should be approaching risk management when deploying AI models at scale?

Dan Jeffries: Yeah, I think they're open to all kinds of new security vulnerabilities, like, prompt injections that we're just starting to deal with logical flaws that could be exploited to give up secrets in terms of, like, a company or something like that. I think that I think we mentioned a couple of these earlier.

I think AI being used in lethal systems is something we should be really cautious about. It's going to happen anyway. Even if we ban it, it's going to happen with black budgets. It's something we have to be aware of. I think AI being used in population control, dissident watching in totalitarian countries is absolutely like horrific.

think those types of things are terrible. think that, it's subject to sort of basic mistakes of logic and reasoning at this point that it's just not, it's not perfect at that kind of stuff. I had a lot of argument with people recently. I was against the protest in San Francisco where they were putting cones on the cars to stop them.

I'm like, look, these are responsible for zero deaths and people kill 1. 3 million people a year and 50 million people are injured in cars. And they're like, well, a dog was killed. And I'm like, okay, well, how many people? that sucks. I love, my dog too, and I don't want anyone to lose their dog.

But like at the same time, how many like dogs went hit by cars driven by humans. So statistically they are safe, but again, there is some merit to some of the folks who said, Hey, maybe we pulled the safety drivers out too fast, or maybe they're not quite safe enough. I'm concerned that even if they were 10 X safer, people would be like, down with the self driving.

Whereas I'm like, look, if you cut that statistic, I said by half or cut it down to a quarter, that's a million people walking around on the street playing with their kids and their dog. And so I think that we should push forward with those kinds of technologies. But I do think we need to have a higher level.

I don't really fully agree with the EU AI Act. I think it's overreaching. I think there's a lot of politics involved. I think it creates a lot of bureaucracy. Suddenly social media algorithms are classified as high risk based on an amendment and make some politicking. I think that's nonsense, but I do agree if you're going to put these into they're driving heavy machines in the physical world that could kill things that there needs to be a higher level of accountability and a higher level of.

understanding of what's happening, a higher level of like keeping the logs, version control, knowing where the data come in, like understanding and giving the investigative tools, to lawmakers and enforcers of the law to understand these things. If they're going to be controlling, the settings on a nuclear reactor or whatever, this is stuff we should go very carefully. Okay. And there shouldn't be high levels of high bars. For these kinds of things. So yeah, there are real risks and we are wasting our time talking about sci fi level nonsense when we could be focusing on stuff that's really important.

Adel Nehme: I couldn't agree more. Dan, as we're closing out our episode, I'd be interested to talk to you about, where you think the space is headed and maybe having some predictions for us. Given the rapid pace of development of AI I think it's safe to say that things are going to be quite different in 12 months.

Where do you think we'll be in 12 months when it comes to, AI, the proliferation of AI, maybe risk coming from AI? I'd love to hear your thoughts here.

Dan Jeffries: Look, I think it depends on the kind of breakthroughs we see or whether we, or whether the technology remains static. In other words, if it's basically transformers and they learned once and then they're stuck. At that level of learning, and then you can maybe augment them with external knowledge databases and all that.

That's one level that the technology gets us to, that's very useful. I think we, even with that level of technology, we see a proliferation of agents. And we see sort of a democratization of things that we might call things like RPA, right? So RPA traditionally been this big, heavy, ugly lift of, filling in forms or things like that.

and it was did really work. But now we have these LLMs that can go out, they can read text, they can understand that text, they can do research. Like we, we built a little agent at the Infrastructure Alliance that we fed at 2, 000 companies in the near table. It went out and read all the websites, it summarized them, it scored them based on whether they'd be a good fit for joining the IA.

That was 95% accurate. We reached out to 50 of them. And 10 of them got back to us, two folks joined. Now we're expanding out to 200. Those kinds of tools are going to be super ubiquitous. I think these are fantastic. Everybody should be out there working on agents. And you don't need a fully autonomous agent.

I consider a semi autonomous agent with a human in the loop having done the decomposition to be incredibly important. I think these are going to be ubiquitous. What could really change your trajectory would be a couple of continual learning style breakthroughs. Like a post transformer breakthrough that lets The neural net continually learn on the fly from new information.

To me those kinds of things, even if it's something where. You do need all its progress and compress and you can download new skills and jam them into the old one. Or if it's legitimately learning on the fly, like the liquid neural net, that can be takes us into a whole different world. Because now you have something that's always learning and is able to adapt in real time That takes us to a bonkers level of awesome it but who knows whether that comes around the corner or whether it's there in the short term I expect a huge proliferation of agents a huge proliferation of like automation of very tedious boring, details our day to day life And I think that you know is just going to be tremendous at the same time The infrastructure is going to start to develop more make it more stable, ubiquitous.

You're going to see a ton of middleware to keep these things on the rail. and what you could definitely count on is more and more, crazy nonsense of people shouting about the end of the world. But don't listen. I think they're just going to, for the vast majority of use cases, they're going to make drugs easier to discover.

They're going to make. transportation, safer, they're going to make like, our day to day life where we're like sorting resumes and instead I can have the agent do that. And I can just talk to the people I want to hire and save, two hours a week. When I think about the research assistant that saved us like two weeks of reading like 2, 000 websites just to be like, Oh God, there's this.

Oh, how do you know, reading that marketing, that's super cool. And I just think this is just an exciting time to be alive. If you're not working in this stuff, you don't listen to them. There's going to be no programmers like get in there. We still need brilliant programmers.

The hardest part is not writing code. It's coming up with how to think about a program and then break it down. And to get into it even more, just level up the skill, adapt to it. If you're an artist. Don't be worried about this stuff. Just embrace it as part of your tool set. Artists are not going anywhere.

We're always going to have artists creating real things. This concept that they're all going to be disappeared. I'm sorry, but it's just not true. Get in there and embrace it as another tool set. It's just going to be like using Photoshop or something else or a paintbrush. Yeah, it's going to really just be amazing.

And I think it's going to be amazing across every industry. And this is one of the greatest times to be alive and you should just embrace it. Just embrace it with relish and love.

Adel Nehme: I think this is a great way to end today's discussion. Thank you so much, Dan, for coming on DataFrame. It was a really wonderful discussion.

Dan Jeffries: Thanks so much for having me. And I really enjoyed it. Just a wonderful host. Really fantastic conversation.

Topics
Related

You’re invited! Join us for Radar: AI Edition

Join us for two days of events sharing best practices from thought leaders in the AI space
DataCamp Team's photo

DataCamp Team

2 min

The Art of Prompt Engineering with Alex Banks, Founder and Educator, Sunday Signal

Alex and Adel cover Alex’s journey into AI and what led him to create Sunday Signal, the potential of AI, prompt engineering at its most basic level, chain of thought prompting, the future of LLMs and much more.
Adel Nehme's photo

Adel Nehme

44 min

The Future of Programming with Kyle Daigle, COO at GitHub

Adel and Kyle explore Kyle’s journey into development and AI, how he became the COO at GitHub, GitHub’s approach to AI, the impact of CoPilot on software development and much more.
Adel Nehme's photo

Adel Nehme

48 min

A Comprehensive Guide to Working with the Mistral Large Model

A detailed tutorial on the functionalities, comparisons, and practical applications of the Mistral Large Model.
Josep Ferrer's photo

Josep Ferrer

12 min

Serving an LLM Application as an API Endpoint using FastAPI in Python

Unlock the power of Large Language Models (LLMs) in your applications with our latest blog on "Serving LLM Application as an API Endpoint Using FastAPI in Python." LLMs like GPT, Claude, and LLaMA are revolutionizing chatbots, content creation, and many more use-cases. Discover how APIs act as crucial bridges, enabling seamless integration of sophisticated language understanding and generation features into your projects.
Moez Ali's photo

Moez Ali

How to Improve RAG Performance: 5 Key Techniques with Examples

Explore different approaches to enhance RAG systems: Chunking, Reranking, and Query Transformations.
Eugenia Anello's photo

Eugenia Anello

See MoreSee More