
[AI and the Modern Data Stack] Accelerating AI Workflows with Nuri Cankaya, VP of AI Marketing & La Tiffaney Santucci, AI Marketing Director at Intel

Richie, Nuri, and La Tiffaney explore AI’s impact on marketing analytics, how AI is being integrated into existing products, the workflow for implementing AI into business processes and the challenges that come with it, the democratization of AI, what the state of AGI might look like in the near future, and much more.
Feb 2024

Photo of Nuri Cankaya
Guest
Nuri Cankaya

Nuri is VP of AI Marketing at Intel. Prior to Intel, Nuri spent 16 years at Microsoft, starting out as a Technical Evangelist, and leaving the organization as the Senior Director of Product Marketing. He ran the GTM team that helped generate adoption of GPT in Microsoft Azure products.


Photo of La Tiffaney Santucci
Guest
La Tiffaney Santucci

La Tiffaney is Intel’s AI Marketing Director, specializing in their Edge and Client products. La Tiffaney has spent over a decade at Intel, focusing on partnerships with Dell, Google, Amazon, and Microsoft.


Photo of Richie Cotton
Host
Richie Cotton

Richie helps individuals and organizations get better at using data and AI. He's been a data scientist since before it was called data science, and has written two books and created many DataCamp courses on the subject. He is a host of the DataFramed podcast, and runs DataCamp's webinar program.

Key Quotes

In the next five to 10 years, we will merge with AGI. So we will be a part of artificial general intelligence, and all of us as humanity will benefit from that. So if you look at it from that perspective, everything we interact with today from an industry perspective will definitely drastically change. So we need engineers, AI engineers, really looking at the phases of development and maybe parsing them out and making them more efficient. In the industrial engineering age, we did it perfectly with manufacturing, right? Like Henry Ford's Model T and mass production. It changed what we get today from a mass production and industrialization perspective. Similarly, I think the AI engineers will change the future of AI in terms of how we interpret the data, how we collect the data in secure, private, and consistent ways, and how we generate the right level of output in a well-sequenced way.

If someone is wanting to really develop their skills in AI, I think one is going to be having that proficiency in machine learning and AI and all these other disciplines that are under this AI umbrella that we've talked about today. That's number one. Secondly, is really going to be having a good understanding of how all of these different models can kind of work together. Right? What we're gonna see with these engineers in the future is they're gonna be putting together all these different aspects into one solution. So it's not gonna be a one size fits all. So really having that flexibility and honestly, creativity to come up with these solutions based on whatever challenges they are faced with, I think that's really going to be essential. And then, you know, also I would say that just keeping an attitude of continuous learning because everything in this AI space is changing so rapidly from, you know, the new developments happening to the new software and hardware coming out. You have to just keep learning to stay up to date on this. So I think if someone wants to get in the AI field, those are some of the key areas they should focus on.

Key Takeaways

1

Ensure you have access to high-quality and sufficient data for your AI models, as the success of AI initiatives heavily depends on the underlying data.

2

AI implementation requires a clear problem definition. Starting with a well-defined problem is vital. AI solutions should be outcome-based, focusing on making, saving, or protecting money for businesses.

3

Collaborating with experienced partners can provide valuable guidance and support throughout the AI implementation process, from problem definition to solution deployment.

Transcript

Richie Cotton: Welcome to DataFramed. This is Richie. Often on DataFramed, we talk about the high level applications of generative AI. We'll have a bit of that today, but the hardware for generative AI is also fascinating, and I'd like to dig into some details of the infrastructure for these models. To teach us today, I have two Intel executives.

They both have a background in marketing, so I'm also keen to pick their brains on AI for marketing analytics. Firstly, Nuri Cankaya is the VP of AI Marketing. His work focuses on helping businesses overcome their challenges through the use of AI. He's also the author of six technology books. Most of them are written in Turkish, but you can pick up an English copy of his Web 3.0 book, Everything is Naked. In his previous role as Senior Director of Product Marketing at Microsoft, Nuri ran the go to market team for adopting GPT on Azure. Secondly, La Tiffaney Santucci is the AI Marketing Director for Edge and Client. That is, she runs a team overseeing the marketing for AI on PCs and on devices for low latency computation.

Like Nuri, she has a deep expertise in helping businesses make use of AI technology. Let's hear what they have to say.

Hi, Nuri and La Tiffaney. Great to have you on the show.

Nuri Cankaya: Hi, Richie.

Richie Cotton: So I'd love to talk a little bit about adopting in the enterprise. And since you both work in marketing, let's start off with how AI is changing marketing analytics.

Nuri Cankaya: It's definitely, like, the year of AI, maybe the decade of AI in the upcoming years. I feel AI is infusing our marketing execution deeply. Starting with the data analysis, I think we see today we are heavily involved in getting some of the insights and predicting some of the trends before they happen.

And previously we weren't able to analyze the data models and come up with some of the learning. So I believe understanding customer behavior and really getting to the next prediction level, before they even knew they wanted the next product, and using the marketing tactics to really help them, like, hey, we know your minds are all going this way and this is the best fit for you. I think that's a really tactical way of using marketing and AI together, but La Tiffaney, I don't know if you want to comment more on this.

La Tiffaney: So, I think the data piece that we see with AI is really what's going to change how marketing is done. You're already seeing things like personalization, like product recommendations, those sorts of things. It's only going to get better, and the future trends that we're going to be able to forecast from that information are also going to be at the forefront.

Richie Cotton: So both great points there. Nuri, I really like the idea of being able to predict what customers want before the customers even know it. And La Tiffaney, yeah, the idea that data is going to be the key to success with AI seems incredibly important. These are things I'd love to get into in a bit more detail later.

But for now, I think one of the big trends is companies trying to include AI in their existing products. So maybe can you just give me some examples of how companies are trying to do this?

La Tiffaney: Yes, that is absolutely true. We see that 58 percent of CEOs from leading public companies are actively investing in AI. This is what is on the top of everyone's minds. And I think when you look at what is most common right now, it's really customer service with chatbots.

It's very popular right now with the rise of all of the LLMs that we've been seeing lately. But beyond just customer service with chatbots, we're also seeing a lot of vertical use cases. So depending on which vertical you're in: AI in healthcare, looking at treatment plans, diagnostics, things like that, but also retail, like we mentioned earlier, right?

A lot of those personalization engines. And then finally, there are even vertical-specific use cases in automotive, right? We're seeing this with the self-driving cars and enhancing the safety features as well there.

Richie Cotton: That's brilliant. And I think, yeah, maybe chatbots is the thing that comes to everyone's mind initially, but I do like the idea that there really are vertical-specific use cases. Nuri, do you have any more examples of ways AI is being incorporated into products?

Nuri Cankaya: I think generative AI will touch many aspects of all those industries that La Tiffaney mentioned. And you'll see more disruption in many of the other untouched industries. So if you look at today, I mean, jobs is always the top question that I get. And initially, when I thought of AI, I thought, hey, this will start from process automation, some of the basic things. But if you look at where the generative AI solutions are, it started with the white-collar jobs: writing code, generating images, generating videos.

Every day when I log into social media, I see a new AI solution coming up. So it's really unpredictable what the next major workflow and industry will be, but in a nutshell, I think all the industries in the next couple of years will be heavily impacted by AI. And as Intel, we are trying to be ready with our solutions to make that transition happen.

Richie Cotton: Absolutely. I mean, the, the impact of AI is certainly very wide ranging. I do think most industries are going to be impacted. In terms of some of the practical details, can you talk me through what a typical workflow is for going from, hey, we should probably do something with AI to it actually becoming implemented?

Nuri Cankaya: It all starts with the problem definition, right? So, like, today, what we really want to achieve with AI at the customer level is to solve some of the problems or mitigate some of the issues. I always approach this as an outcome-based solution. All the customers really look for three outcomes.

They want to make more money, they want to save more money, or they want to protect their money. That's the purpose of the business. And if you look at the identification of the problem, let's say they want to do a new kind of innovation in their business lineup, the first thing they have to do is data preparation.

Without the data, AI will not work. It's really dependent on it. And you cannot just dump the data; there are models that you need to choose from. If you look at the large language models and how they work today, it's really predicting the next word in the sequence, and that's what makes it really effective.

But in order to start with that, you need to have the data. So you need to agree on what training data you will provide and what data model you will train, and you start. The training is a big process. And people think, on the AI side, I will train once and I'm done. Unfortunately, that's not the case.

Because AI will always transform the business, it will predict the next thing and will come back with even better accuracy every time you do the modeling. And of course, the next level is inferencing. We need to make sure that this is one of the workflows that is integrated into existing products.

At that point, you have the outcomes from the AI algorithms and can make more sense of them, but the journey starts again, so it never ends. You have to take all the outputs from, let's say, your inferencing, you have to do all the training once again, and then you have to fine-tune the model. Fine-tuning is a big part of the journey, and in some cases, you have to be the moderator.

So you see some of the user inputs, and you have seen, on ChatGPT and OpenAI, when you ask specific questions like, give me the predictions for the next big boom, it will not answer that, because that's the moderated learning. It says, hey, I'm just a chatbot. It intervenes in the model output and gives a moderated response.

So the companies need to put some gates in place before it's shown to the users. But that's kind of the journey: from data collection to training and inferencing, and all this process repeating itself for a long time.
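
The cycle Nuri describes (prepare data, train, run inference, gate the output, then fold feedback back into training) can be sketched in a few lines. This is a deliberately toy illustration: the "model" here is just a running mean and all the numbers are made up, but the shape of the loop is the same one a real pipeline follows.

```python
# Toy sketch of the data -> train -> inference -> moderate -> fine-tune loop.
# A real pipeline swaps in an LLM or other estimator; the cycle is the same.

def prepare_data(raw):
    """Data preparation: drop records that fail a basic quality check."""
    return [x for x in raw if x is not None and x >= 0]

def train(data):
    """Training: fit the toy model (here, just the mean of the data)."""
    return sum(data) / len(data)

def predict(model):
    """Inference: the toy model predicts its fitted mean."""
    return model

def moderate(prediction, lower=0.0, upper=100.0):
    """Moderation gate: clamp outputs before they reach users."""
    return max(lower, min(upper, prediction))

# One pass through the loop...
raw = [10, None, 20, -5, 30]
data = prepare_data(raw)           # [10, 20, 30]
output = moderate(predict(train(data)))  # 20.0

# ...and the journey starts again: feedback is folded in and we retrain.
feedback = [40, 50]
model = train(data + feedback)     # fine-tuning round: 30.0
```

In production this retraining step never stops, which is the "journey starts again, so it never ends" point above.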

Richie Cotton: I love that there's a pretty well-defined process now for adopting AI. I want to highlight the thing that you said first there: you really need to start with a business problem and have a well-defined problem definition. Otherwise, it's all going to be meaningless. So maybe we'll talk a bit about things that can go wrong.

So I guess not having a well-defined problem is one of them. Can you talk about any of the pain points in terms of adopting AI?

La Tiffaney: So one of the main pain points that we really see with companies is that they have this excitement around AI and they want to include AI in their current workflows, but they don't really know what AI is needed for, right? And AI is such a broad term in terms of what it could actually do for them.

So I think that is number one. Secondly, whether they even really need AI, right? Sometimes there are certain solutions that we can come up with that might not include something in the AI arena at all. So that's another thing: just figuring out what is the outcome that they really want before we back into what that solution is or what it's going to look like.

One of the things we always tell our customers is that that's why it's so key to work with partners like Intel, to really understand the customer outcome that they're looking for. And then we can work with them along the way to find out those business requirements, and find out what hardware or software is needed to really bring that solution to life.

But having a partner is definitely key in this journey. It's new for everyone. And we want to be at your side every step of the way, for sure.

Richie Cotton: Excellent. Yeah. So talking about the software and hardware requirements, I'd love to get into that in more depth. So, I remember Satya Nadella, the CEO of Microsoft, recently saying that Microsoft is having to change the whole infrastructure of Azure just to cope with all these new AI workloads.

The costs are pretty eye-watering. And large language models in particular are notoriously expensive to train and to run. So, do you see this trend continuing? Or do you think that they're going to get cheaper? What's going to happen, Nuri?

Nuri Cankaya: I think we are in the transition phase for AI everywhere, and that's why maybe you've heard that Intel is trying to bring AI everywhere. I think the Azure example that you give is a good example. I worked at the company for 17 years before joining Intel, and I have seen the early days of AI; again, my team worked heavily on the collaboration with the OpenAI team.

In the early days, I think the applications made it more tangible. We had the models on the backend, but you didn't have a client to interact with, like ChatGPT, which is a brilliant idea because it uses natural language, any language actually, to really get a response to a request from the AI. It made it obvious that this is going to impact everyone. And we are seeing that from AWS last week at the re:Invent event, and also with our OEM partners like Dell, Lenovo, HP. Everyone is trying to be ahead of the game. The market dynamics are changing in the on-prem world too, where there are still customers' data centers.

They have to rethink: hey, am I going to be in the AI game? And is my hardware really compatible with the needs of AI? Because again, I will remind you that from data preparation to training to deployment, it's like a circle, and you need AI compute. I'm almost going to say the new compute is AI compute.

Everybody needs it. So, in the early days of cloud solution providers, we were always thinking about virtual machines: running your own virtual machines from your data center on the cloud in a cost-effective way. I think there's value in having hardware innovation. Back to your question.

I think this will just be accelerated in the upcoming months, and I think we will also maybe share some of the news. As an example, Intel acquired a company called Habana Labs a couple of years back, and our hero solution is Intel Gaudi. Gaudi is an AI accelerator: a purpose-built accelerator specifically for AI. In the early days of AI, I think people were using GPUs because graphics processing is really fast, and you need that speed for all the training, inferencing, and deployment solutions. But now you need to focus on acceleration.

So the simple answer to your question: we will see a huge wave of innovation in the upcoming months, not even years, months, on this journey. And there will be new hardware from Intel and also from many of the ecosystem players.

Richie Cotton: So, this is really interesting, because my mental model for this is that a lot of people are using GPUs at the moment to train these models. And Intel is perhaps more famously known for your CPUs. But you also have a product called Gaudi, which is a chip specifically for AI training.

Can you talk me through, is that an alternative to a GPU? What's going on there?

Nuri Cankaya: As I said, it's an AI accelerator, so Gaudi is specifically designed for AI, with the early days of deep learning in mind. Now we're at Gaudi 2, the second version, which we already introduced, and in 2024, this year, we'll have Gaudi 3 in the market. So again, it's not years but almost months between new products, matching the pace of AI. And the reason we specifically have an AI accelerator is that the models require more than just the GPU. Again, there are specific tasks where a GPU can be a really good, let's say, solution for an AI problem.

But moving forward, we will see more AI-specific hardware: not only graphics acceleration, but acceleration in neural processing, acceleration in training models, acceleration in inferencing models. Everything will be part of the journey. So when you have a dedicated AI accelerator which covers all those solutions, you will get faster results. And it's going to be much cheaper. One of the things is that it's hard to find GPUs in the market nowadays. The reason why is, again, everybody wants to do something with AI, and they require all the models to use a GPU. But in the long run, we believe this is not going to be the case.

Before handing over to La Tiffaney: we also have a GPU product, by the way, and it has specific purposes. If you look at high-performance computing, for example, which has been there for a long time, predicting the weather in seconds, not days, is possible today with solutions like Intel GPU Max.

So it's a special product. We recently introduced some of it at the Supercomputing event in Denver a couple of months back. And you can predict what's going to happen in space exploration, in cancer treatment. So you can really do high-performance computing for AI using GPUs, because that's where they're really successful.

But handing over to La Tiffaney, it's also possible to do it even on the PC today. So we call it the AI PC.

La Tiffaney: Exactly, it really comes down to the customer's needs. And based on these needs is where they'll be able to choose which hardware is best for them. So whether that's an accelerator or a GPU or a CPU, it's really going to be based on what their needs are. That's why we are excited here at Intel to bring AI everywhere and to really have options for our customers, because, again, not everyone is going to need that AI horsepower that we sometimes talk about, right?

It's really going to be based on that. And in terms of making AI more accessible, we're so excited that we now have our Core Ultra processors coming to the AI PC that we recently launched. With that, customers are going to be able to run those models on their PC and utilize everything we've put into these new CPUs, so they can start going beyond just the traditional GPUs for their AI.

Richie Cotton: Okay. So this is interesting. Can you maybe talk me through when you would want to use each of these different hardware solutions? Like, when would you want to use an AI accelerator? When would you want to use a GPU? When would you want to use a CPU? What are the different use cases for each?

La Tiffaney: Yeah, I can start on that. So, for example, where you want an accelerator such as Gaudi is where you're doing some of that deep learning and training of the models up front. So again, something where you really need that horsepower. Then as you move down the line, if you want general-purpose AI, one of our GPUs would be sufficient there.

And then finally, going back to the AI PC use case, that is a great way to do, again, general-purpose AI. However, this is going to be really useful if you have sensitive data, if you want to make sure your data does not leave your device and doesn't go up to the cloud. A lot of times, specific organizations or governments don't want things to go up to the cloud for security reasons. Or sometimes, if you're offline and you don't have access to the Internet, having your CPUs do a lot of this work is going to be really helpful as well.
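
The routing La Tiffaney describes can be summarized in a small sketch. The tier names and the rules here are illustrative assumptions paraphrased from this conversation, not official Intel sizing guidance.

```python
def pick_hardware(workload: str, sensitive_data: bool = False,
                  offline: bool = False) -> str:
    """Illustrative routing from workload traits to a hardware tier:
    accelerators for heavy up-front training, GPUs for general-purpose AI,
    and client CPUs / the AI PC when data must stay local or offline."""
    if sensitive_data or offline:
        # Data never leaves the device, or there's no Internet:
        # run locally on the client CPU / AI PC.
        return "cpu (AI PC, local)"
    if workload == "deep-learning training":
        # Horsepower-heavy training of models up front.
        return "ai accelerator (Gaudi-class)"
    if workload == "general-purpose ai":
        return "gpu"
    # Default: lighter workloads can stay on the client.
    return "cpu (AI PC, local)"

print(pick_hardware("deep-learning training"))           # ai accelerator (Gaudi-class)
print(pick_hardware("general-purpose ai"))               # gpu
print(pick_hardware("general-purpose ai", offline=True)) # cpu (AI PC, local)
```

Note how the sensitivity and connectivity checks come first: in this framing, where the data is allowed to go decides the tier before raw horsepower does.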

Nuri Cankaya: And just to add on top of that, I think we haven't covered edge AI specifically until now, but thanks a lot, La Tiffaney, for mentioning it. I think that's a very important part of the AI journey, because we always think AI will be in the cloud, AI will be on the GPUs, but it's not true. In most of the cases today where it's impacting our life, it's actually happening on the edge.

So if you think about self-driving cars, you cannot just ask the cloud, shall I pass this lane or not? There is no tolerance for latency. So you need that at the edge, on the car, running AI instantly.

So the industrial use of self-driving cars is a good fit for edge scenarios, and similarly, it's impacting many other industries; manufacturing is another example. But as was just mentioned, security is a big concern. You don't want to train your models in the cloud if you are, let's say, a government, or imagine a ministry of defense.

You're really protecting your IP, and you need to be secure. So you need to add that layer on top. That's what we call hybrid AI. We'll see more of those scenarios where there's some AI at the edge, some AI on the client side, but also, whenever needed, you go to the data center or the cloud to get that enablement.

So it really brings us back to bringing AI everywhere, because there's no one-size-fits-all in AI. There need to be solutions at the edge, at the client, at the data center, and in the cloud. And I think Intel is really positioned well in terms of covering all those four components of AI.

Richie Cotton: That's actually kind of interesting, because I think maybe one of the big pushbacks in terms of saying, well, okay, I need all this hardware in order to run AI, is that actually a lot of the time you can just call an API and have the hardware be someone else's problem. So those are really interesting examples: the idea that if you're doing something in your self-driving car you can't call the cloud, or if you're a government agency then maybe you want to build things yourself.

Do you have any more examples of the trade-off between working with other people's cloud models and building your own?

Nuri Cankaya: There are different models today in the market. If you look at the large language models, there are the OpenAI-like ones, where all the training and everything is done by the vendor. You just use the output of that and build your solution on top. It's kind of a closed AI solution: you don't have any influence on the parameters and everything that is used by that model.

Or you can go to Hugging Face and download, let's say, a Llama model, which is completely open source. Then you can see what it's trained for, and you can add your own training components and customize it. So there are pros and cons to both models. If you want to be fast in the market and get all the API support from OpenAI, Stability AI, Scale AI, and many others, then you can use that route to be fast in the market. I mean, it boils down to developers, right? They have to choose what's right for the company and what's right for the solution.

I'm just making this up, but if you are looking at a healthcare solution, if you want an LLM that focuses on cancer, let's say early detection in specific industry use cases, then there might be, let's say, a 3-billion-parameter large language model, open source, which might be really helpful for you. So you don't have to train the model and pay for a lot of compute power for AI. I call it nimble AI.

So there will be solutions that are really nimble: you don't need a GPT-4.5 or, moving forward, trillions of parameters; you can shrink the number of parameters to your domain and it will be more accurate. And again, it will be open source, so you can keep everything within your ecosystem.

So again, depending on the sensitive data and security, those are the concerns I think we will be discussing moving forward.
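
The closed-API-versus-open-model trade-off Nuri lays out can be written down as a rough decision sketch. The function and its criteria are invented for illustration; they just paraphrase the conversation.

```python
def choose_model_route(fast_to_market: bool,
                       sensitive_data: bool,
                       narrow_domain: bool) -> str:
    """Toy sketch of the closed-API vs open-source trade-off above."""
    if sensitive_data or narrow_domain:
        # Open weights (say, a Llama-family model from Hugging Face) can be
        # inspected, fine-tuned on your domain, and kept entirely inside
        # your own ecosystem: the "nimble AI" route.
        return "open-source model, self-hosted and fine-tuned"
    if fast_to_market:
        # A vendor API (OpenAI and the like) trades control for speed:
        # no training or serving to manage, but no influence on the model.
        return "closed vendor API"
    return "open-source model, self-hosted and fine-tuned"
```

A healthcare early-detection product, for instance, would hit the narrow_domain branch and favor a small open model over a frontier API, which is exactly the 3-billion-parameter example above.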

Richie Cotton: That's actually interesting, this idea of nimble AI: that you don't necessarily need the best, all-powerful, cutting-edge model. You just need something that's good enough for your purposes, but maybe is a bit cheaper to run. La Tiffaney, what about that? Like, when might you care about having a smaller, less powerful AI rather than having the cutting edge?

La Tiffaney: So it's really going to depend on the organization and its needs. And it's also going to depend on, again, what does that company want to do, and what sort of outcome do they want to come up with? When you think about AI, everyone just gets so creative, and the ideas are endless of what they can do with this.

And so, looking at what that company is capable of as well is something that we're going to need to give some attention to. The other thing we always like to talk about is the data: the quantity of the data that a company might have, as well as the quality of that data.

As we spoke about earlier, AI is all about the data piece, right? So if we don't have enough of it, or if the quality isn't there, then we might find that certain companies aren't ready yet, right? Everyone's AI journey is going to look different, and every company is going to be at a different stage.

And that's okay. But it's something that we definitely want to think about as we're looking for the right solution.

Richie Cotton: Okay. And adding on to this idea of solutions: we've talked a bit about hardware, and I'd love to talk about software as well. So, can you talk a little bit about what the most common software stacks are that people use for working with AI?

La Tiffaney: The stack, again, is going to really vary based on what their needs are. But for AI overall, commonly what you're going to have within that stack starts with whatever the programming language is, and then choosing which libraries and frameworks are going to be best for the machine learning portion, as Nuri spoke about earlier.

And then finally, you want to process that data and start to choose which visualization tools might be right for you. It might be different analysis tools and more. But again, it's really going to be based on what their needs are. That stack can look so different.

It's just really going to be based on what the solution is.

Richie Cotton: Excellent. Yeah. So all those components seem to make sense. Nuri, can you fill in any of the details on what people might go for in terms of programming languages, frameworks, database tools, all that sort of stuff?

Nuri Cankaya: I think there are a couple of frameworks already in the market. TensorFlow is one of them; PyTorch is a big framework. And these have really big foundations behind them. As an example, Intel is now a part of the PyTorch Foundation, because we completely support open source and we want the developers to get the best out of the solutions there. And maybe I can unpack where Intel sits on the software side, because when I joined Intel, it was like, yeah, is Intel in the software business? Like, I didn't know that.

So I really did my deep dive into the topic, and I was really surprised. Especially on AI, I want to highlight two solutions that Intel provides, which are really important for all the developers who are listening to this podcast.

One is OpenVINO, and it's a runtime of choice for developers that really helps them deploy AI solutions everywhere. It started heavily as an edge deployment solution, it's expanded to the client, and of course with our data center capabilities, OpenVINO provides a really wide variety of AI deployment options as a core foundation.

And the big thing is oneAPI. oneAPI is, again, open-source, standards-based software that simplifies programming across architectures so that developers can have flexibility. You might have heard about CUDA, for example. NVIDIA came up with that idea and brought in the developers, but that's a closed-source platform.

So if you're a CUDA developer, you have to develop on NVIDIA. What we believe in with oneAPI is that it's open source, so you should be able to develop wherever you want and deploy wherever you want. That gives flexibility and power to the developers and architects. And maybe the final thing, which I was actually a part of the launch process for, is Intel Developer Cloud.

It's really exciting: almost every six months you have a new product coming into the market, just matching the pace of AI innovation. For example, Intel Gaudi 2, and in the near future Intel Gaudi 3. Whenever those products launch, they will be first available on Intel Developer Cloud. It's a cloud infrastructure provided by Intel to developers.

People can just go to Intel Developer Cloud, create an account, and immediately try those things without really needing to purchase millions of dollars of hardware. It's really easy to get access. You can use the platform and develop your solutions, and this applies to a wide variety of solutions, including our Xeon hardware, GPU Max, and GPU Flex.

So Intel has wide solutions on the AI side. And connecting to that, you can use oneAPI and OpenVINO on top of Intel Developer Cloud, which makes it easy for developers to try before they commit to the platform they want to run on. It's an easy port, because they have done all the testing and development on Intel Developer Cloud.

We are not really planning to steal any customers from our vendors, let's say. We are trying to incubate them at an early stage, and when they are ready, Gaudi is available on AWS, as an example. They can go to AWS; they can go to fifth-generation Xeon on GCP. So again, those are all the customers that have a journey on their AI continuum.

Richie Cotton: So it sounds like there's a lot of software in a lot of different areas. So that's changing. So everything from like the low level sort of data center infrastructure stuff up to sort of tools for people who are developing with AI. I'm gonna have to make you both pick your favorites. So go on what do you think is kind of the biggest impact?

La Tiffaney, go first? Which bit of software do you think is going to have the most impact in the next year or so?

La Tiffaney: Yeah, I think it's really going to be OpenVINO for me. As we mentioned, this is really going to be the client runtime of choice, and it's just going to open up what's possible for developers in terms of where they're coding and where they're deploying. I think you're really going to see an explosion of different use cases and solutions that come from this.

Richie Cotton: Okay, so easier deployment. That sounds like a very useful thing to have. Nuri, do you have a favorite bit of software you think is going to have high impact?

Nuri Cankaya: I'll also plus-one La Tiffaney on this one: OpenVINO. Especially since, again, we recently launched the AI PC, which is bringing AI to computers. And this is a drastic change for the industry. AI was somewhere in the clouds — like the Greek mythical gods — and you needed to have GPUs.

Now you have a PC and you can run, as an example, Stable Diffusion on your PC. You don't need any expensive hardware or a long-term commitment to a cloud provider. So this is going to change things, and in order to deploy that kind of solution, OpenVINO is the only solution in the market right now,

which is really exciting for 2024. I think this is going to be a big innovation for Intel, and also a great help to developers for deployment.

Richie Cotton: Okay, we'll have to look out for that then. I'd like to take a little sidestep now into talking about privacy, since this is a big deal for a lot of enterprises. So, can you talk me through how privacy requirements affect the use of AI? What do organizations need to consider?

Nuri Cankaya: I think it starts with really classifying the data first. Privacy and, again, security are a foundation for a company's data processes. When I meet with customers, I ask: who's your chief data officer? If you don't have one, there's a problem — because then you don't know where your data is or what classification has been done.

And in AI, the problem is that LLMs will never forget. If you accidentally train a public-facing LLM on something, it becomes part of the training data for the upcoming model, and it will live there forever. And we have seen some customers — unfortunately, I will not name the companies — who accidentally trained on all of their private data.

And now you can just prompt it: hey, give me this company's internal code, write code the way they do — and boom, it gives it all back. So it's really important to put guardrails in front of your data, do the right classification, and control who has access during data preparation, because that's a very important step.

So today, in marketing at least, PII is really important for us to keep private — customers' names, emails, phones, everything. We don't really do anything with that data; very few people can access it, and only if it's opt-in. What we see is companies throwing all their internal data into training a large language model, or fine-tuning one, and it's a little bit risky.

So you have to make sure you understand that this data will be ingested by the machine learning and become part of the deep learning algorithms — and with simple prompting, it might be unveiling your trade secrets or other company-confidential information. That's why I think secure AI is an important step: before jumping into this AI journey, you have to know what your security posture is, what your privacy model is, and what your approach to this AI implementation will be — and then move on.
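The guardrails Nuri describes — classifying data and stripping PII like names, emails, and phone numbers before it reaches a training or prompting pipeline — can be sketched minimally. This is an illustrative sketch only, not a complete PII solution; the patterns, placeholder tokens, and `redact_pii` function name are all hypothetical, and real deployments would use a proper classification service.

```python
import re

# Hypothetical sketch: strip common PII patterns (emails, phone numbers)
# from text before it is used for fine-tuning or prompting an LLM.
# The regexes below are deliberately simple and illustrative.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact_pii(text: str) -> str:
    """Replace matched PII with placeholder tokens like [EMAIL]."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

sample = "Contact Jane at jane.doe@example.com or +1 (555) 010-9999."
print(redact_pii(sample))  # Contact Jane at [EMAIL] or [PHONE].
```

The point of the sketch is where the redaction sits: at the data-preparation step, before anything enters the model — because, as Nuri notes, once trained in, the data "will live there forever."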

If you drive a car today, or you produce a car, you go through all these certifications and assessments that government bodies approve. But if you look at AI deployment today, companies are deploying back and forth everywhere. I think establishing ethical usage of AI data sources, getting the privacy right, and making sure AI is secure is an important thing.

I think it will impact the next couple of years for all of us.

Richie Cotton: Yeah, definitely. I like that analogy — if you're building anything else, like a phone or a car, there's a ton of regulatory hurdles you have to go through, but there's not that much for AI. La Tiffaney, you look like you had something you wanted to add there.

So do you have any more advice on how to think about when you need to care about privacy in the context of AI?

La Tiffaney: So Intel has a long history of always following GDPR, and it's going to be the same with AI for us. You're going to see that integrated within our products and within everything that we do when it comes to AI. So I am very confident about where Intel is going with the privacy requirements for AI.

And you've probably even seen this in the news: Intel collaborated with VMware on a new venture called Private AI, where we worked with them to build a program that provides privacy from edge to cloud. And this is just the beginning. We're going to see other initiatives like this happen; we're working with all different types of companies to ensure that security comes along with the performance.

Richie Cotton: And you talked about it from edge to cloud there. Is there going to be a difference, then, in terms of privacy requirements if you're not doing something in the cloud and you want your computation to happen where your data is?

La Tiffaney: We think it's going to be both. You can have the security that you need and want in the cloud, but there are also additional protections you can have by keeping some of your data local. So this is a great example of what we see as hybrid AI.

We talked about this a little bit earlier, right? It's a seamless transition from the data that you have on-prem, and then also utilizing the cloud when you need to — when you need to get additional data that might not be private. So this is really going to enable organizations to use their AI PCs or their edge infrastructure for some of that sensitive data, and then pull from the cloud the additional data that might not be so sensitive.

Richie Cotton: I'd like to talk a little bit about processes, just to touch on it briefly. It seems like most of the processes you want for AI development are going to be similar to standard software development. Are there any things that are different here? What's peculiar to developing for AI?

Nuri Cankaya: I think the key differentiation is the data piece. We have seen data science jobs getting incrementally more popular in the market, because AI is built on top of data — without the input, there's no output — and you need the right data science skills to develop that machine learning model.

We have seen these machine learning engineering roles, and — I'm a computer science graduate, but I believe there will be artificial intelligence engineers in the near future. It will be a major study area, because on top of those data science layers, you develop everything from a modeling base.

I just covered TensorFlow and PyTorch, those foundations. But on top of that, development needs to think through synthetic data creation — you need to feed the engine all the time with the latest and upcoming learning. And we covered ethics and compliance, but that's also an important piece of the development journey.

You cannot just — I mean, historically I'm a developer; I developed websites at one point in time, and it was easy. You just used the .NET Framework, and the client was a web browser. Everything is on the server side, so you don't worry. But AI is multifaceted. You have the models in a private environment.

You have to get the right training on top of that, and you have to deploy. And as we discussed with the edge, you need to do edge inferencing in some cases. It's a lifecycle journey, and you cannot miss the ethics and compliance piece. It's not just a server-side thing anymore.

The data is in every location. So those are, I think, the additional levels of complexity for developers to think through. The final thought is that writing in a programming language used to be a big task. But now, if you're building some blocks, you can just ask any GPT: hey, please write me Python code to enable this.

So you are more enabled from a time-consumption perspective. You have to spend more time as a developer on the right architecture and the right model, work more collaboratively with the data scientists to brief them on what you want, and work with the compliance team on what the outcome is going to be.

So you don't just bury yourself in lines of code to test and deploy the solution. I think AI is going to give developers good breathing room to interact with the rest of the organization in a better way.

Richie Cotton: Absolutely. And so you mentioned this idea of the AI engineer being a fairly new career. That's a ton of things they have to do. Can you break it down — what sort of skills do you need to be that AI engineer?

Nuri Cankaya: Yeah, let me go first, and then maybe we'll get into a more in-depth conversation. It's all about analytical thinking. With artificial intelligence, we are trying to imitate the intelligence that we have as humans, right? We want to replicate some of the underlying processes, and then really surpass some of the things we achieve with our human intelligence.

As an AI engineer — I think this is, first of all, a really interesting area to work in, because we don't know what the next big thing is going to be, right? Maybe we will cover that, but we are all heading toward artificial general intelligence, and in my opinion, in the next five to ten years we will merge with AGI.

So we will be a part of artificial general intelligence, and all of us as humanity will benefit from that. If you look at it from that perspective, everything we interact with today, from an industry perspective, will definitely and drastically change. So we need engineers — and the term "engineer" really means looking at the phases of development, maybe parsing them out and making them more efficient.

In the industrial engineering age, we did this perfectly with manufacturing, right? Henry Ford's Model T and mass production — it changed what we get today from a mass-production and industrialization perspective. Similarly, I think AI engineers will change the future of AI in terms of how we interpret the data, how we collect the data in secure, private, and consistent ways, and how we generate the right level of output in a well-sequenced way — which will really accelerate humanity's growth on this journey.

Richie Cotton: So for people listening who want a career as an AI engineer, or for managers wanting to hire an AI engineer, what do you think are the top skills you need here?

La Tiffaney: I think if someone wants to really develop their skills in AI, number one is going to be proficiency in machine learning and all these other disciplines under the AI umbrella that we've talked about today. And secondly, it's really going to be having a good understanding of how all of these different models can work together. As Nuri mentioned, what we're going to see with these engineers in the future is that they're putting together all these different aspects into one solution. It's not going to be one-size-fits-all, so having the flexibility — and honestly, the creativity — to come up with solutions based on whatever challenges they're faced with, I think that's really going to be essential.

And then also, I would say, keeping an attitude of continuous learning, because everything in this AI space is changing so rapidly — from the new developments happening to the new software and hardware coming out. You have to just keep learning and stay up to date. So I think if someone wants to get into the AI field, those are some of the key areas they should focus on.

Richie Cotton: Absolutely. I do like that idea of continuous learning, because you're right — it is changing so fast. I keep teaching things, and then a few months later it's all out of date again. So, related to that, do you think there are any skills that are going to be made obsolete by AI?

La Tiffaney: So, I don't think there are going to be skills made obsolete. I really think what's going to happen is we're going to see a transformation of those skills. AI is really just going to unlock additional time and resources for us — it's going to take away some of those automated, routine, and repetitive tasks you were doing and give you more time to build additional skills, or use your skill set to learn new things. I think that's really what we're going to see: more of a transformation, rather than one skill or another becoming obsolete.

Richie Cotton: And so I want to talk a little bit about, like, what you're most excited about for 2024. But before that, I'd like to take a little step back. Because Nuri, I know you were involved when ChatGPT was first coming out, and you were involved in the marketing of that from your time at Microsoft.

Can you just talk me through a little bit about what happened there and what you were aiming to do in terms of bringing AI to the masses then?

Nuri Cankaya: Absolutely. And I think it's a good story to share with everyone, because this was a journey where Microsoft partnered with OpenAI, and then initially there were some components — I will give GitHub Copilot as a specific example. When we ran the first phase with GitHub Copilot, we saw tremendous results:

this thing can write code, sometimes better than many coders. We observed what was going on for a while, and then — I think even before ChatGPT, it was easy to generate some of that coding. DALL-E was another example. When we interacted with DALL-E's first version: wow, this is really creating images that are creative in a way that surpasses some human ingenuity and really goes beyond some of our thinking.

In almost 18 months, the share of code lines written on GitHub by AI became 51 percent, so human-generated content was 49 percent — in 18 months. That's a huge hit across the whole history of software engineering and programming. And again, I think it's going to be maybe 90 to 10 moving forward.

And back to your question to La Tiffaney: I'm not really afraid that software engineers will lose their jobs. On some of those repetitive tasks, it's also a company time-and-efficiency problem, because you're asking developers to write code that can be written by AI — they're trying to solve the same problem over and over again.

The basic foundations of the models are the same, and I think this will have an effect in many industries. When I was at Microsoft, we used OpenAI for our internal marketing team. There's a term called MPF — the messaging and positioning framework. We said: imagine you are launching a new product.

So what will the MPF be? Boom, boom, boom — and you can upload some documents to train it even further. And it was so good. People said: hey, we will save most of the time spent writing documents or creating messaging frameworks, versus really marketing with them.

And it accelerated our journey. Then on November 30th, 2022, when ChatGPT launched to the world, running on Azure, that was an aha moment — everyone could interact with it using their own language. You can start in English, switch to Spanish, whatever, because the large language model understands and responds in kind. And the visual components — video, images — everything has been happening so fast if you look at the last year.

It really drastically made AI the number one topic for every leader in the world. If they want to be in the game in the next five to ten years, they have to do AI. The learning I had was this: it was taking my team a long time to explain that AI is going to change the world.

But when people saw it in action — boom, instantly it was: yes, we have to be in it; otherwise this is going to disrupt our industry. You have to do it before your competitor does. That's a good example, and I'm trying to apply it at Intel — we are trying to be ahead of the game. We've covered a lot of products so far, but for all those products I'll go back to the first question you asked: what are we trying to solve?

We are trying to help customers with outcomes — accelerating innovation, securing data, which we covered a lot, and also TCO. If you look at GPU prices over the last year, everybody added margins into the game, and the customer is not benefiting from that.

They are not saving dollars. So as Intel, we want to bring AI everywhere — that's a big change. With the Core Ultra processors we made it to the PC, and we are making it to data centers with Xeon. But I will also re-highlight that Intel Gaudi is a big innovation — it's really bringing AI to large language models.

Richie, you asked a question about where we position Gaudi. Wherever there's large language model work, that's the area for an AI accelerator, and Intel Gaudi will definitely help. We have already seen some great examples of that solution in the market. If you have kids, for example: Roblox is running Intel Gaudi on the back end.

If you are watching Netflix, it's running Intel Gaudi on the back end. So all these recommendation systems and everything in our lives are already powered by Intel technology today.

Richie Cotton: It's amazing how pervasive AI is throughout everyone's lives. And I do like the point you mentioned about promoting ChatGPT: until people saw it, it wasn't obvious this was going to change the world. But once you've seen it, it's like — oh yeah, I get it now.

Brilliant. Okay, so before we wrap up, can you tell me what you're most excited about for 2024 and AI?

La Tiffaney: For me, I am most excited about seeing this democratization of AI really come to fruition, and we're really seeing that through the AI PC. I mean, this is truly putting AI into the hands of anyone who wants it, right? So I'm so excited about that, and this just underscores Intel's mission of bringing AI everywhere as well.

Now it's accessible — not only to the big companies, but to everyday people who want to start using these models. So I'm just so excited about the opportunities that's going to create, as well as seeing what people make.

Richie Cotton: Nice. I do like the fact that it's coming to everyone — the democratization of AI. Brilliant. Nuri, what are you most excited about?

Nuri Cankaya: I think I gave some clues before, but I'm excited about AGI. I think artificial general intelligence is not that far away, and I'm looking forward to thinking through, as Intel, how we collaborate in this massive change that humanity will go through. Because I believe we will merge with AGI.

For example, if you leave your phone at home today, you are at a disadvantage, right? You cannot get a taxi ride, you cannot order food — you can still live, but you're not advantaged. Similarly, I think in the next five to six years, when AGI comes, people will use AGI and merge with AGI, so you will really utilize the power of AI embedded in your daily life. I'm excited about that discussion, because it requires everything we discussed: How do we make it secure? How do we avoid creating a digital divide between countries over who owns the AI? How do we bring AI everywhere, from devices to software? How do we make it really easy to deploy and easy to train?

So again, I'm so excited about Intel's portfolio of solutions to enable AGI in the long term.

Richie Cotton: Excellent. Yeah, definitely looking forward to seeing some super powerful — well, maybe — artificial general intelligence. That's going to be an exciting thing. All right, on that note, I think we'll wrap up. So thank you both for joining me. Thank you, La Tiffaney. Thank you, Nuri.

It's been great to have you on the show.

Nuri Cankaya: Thanks a lot for having us.
