
Causal AI in Business with Paul Hünermund, Assistant Professor, Copenhagen Business School

Richie and Paul explore Causal AI, how Causal AI contributes to better decision-making, the role of domain experts in getting accurate results, exciting new developments within the Causal AI space and much more.
Dec 2023

Photo of Paul Hünermund
Guest
Paul Hünermund
Paul Hünermund is an Assistant Professor of Strategy and Innovation at Copenhagen Business School. In his research, Dr. Hünermund studies how firms can leverage new technologies in the space of machine learning and artificial intelligence for value creation and competitive advantage. His work explores the potential for biases in organizational decision-making and ways for managers to counter them. It thereby sheds light on the origins of effective business strategies in markets characterized by a high degree of technological competition and the resulting implications for economic growth and environmental sustainability. 
 
His work has been published in The Journal of Management Studies, the Econometrics Journal, Research Policy, Journal of Product Innovation Management, International Journal of Industrial Organization, MIT Sloan Management Review, and Harvard Business Review, among others. His work has been covered by Frankfurter Allgemeine Zeitung, Süddeutsche Zeitung, Politiken, and Neue Zürcher Zeitung. Dr. Hünermund serves on the editorial board of the Journal of Causal Inference and on the executive team of the Technology and Innovation Management division at the Academy of Management. He earned a Ph.D. in business economics at KU Leuven in Belgium and graduated from the University of Mannheim in Germany with a master’s degree in economics.

Richie helps individuals and organizations get better at using data and AI. He's been a data scientist since before it was called data science, and has written two books and created many DataCamp courses on the subject. He is a host of the DataFramed podcast, and runs DataCamp's webinar program.

Key Quotes

The areas where causal ML can really contribute to current AI practice are actually the big challenges that we face in AI, which are fair decision making, fairness, robust decision making, and also explainable AI. So, understanding why we're taking a certain decision and not another

Causality is such a fundamental concept for us as humans. So if we want to develop artificial intelligences, they probably should also be able to reason in terms of cause and effect, simply to be able to have a better conversation with us

Key Takeaways

1

Causal AI offers a more comprehensive approach compared to traditional AI, emphasizing causal reasoning over mere correlation, which is crucial for deeper insights and more accurate predictions in AI applications.

2

Causal AI is not limited to tech or data-centric fields; it has diverse applications across various sectors like healthcare, marketing, human resources, and even defense, showcasing its adaptability and wide-ranging utility.

3

Causal AI enables us to understand and predict the effects of different actions in various contexts. This facilitates more strategic decision-making in business and product development.

Links From The Show

Transcript

Richie Cotton: Welcome to DataFramed. This is Richie. Artificial intelligence has many applications in business, whether it's machine learning being used to make predictions or more recently, generative AI being used to create text and images. One area where AI hasn't made that much progress to date is decision making.

It's notable that on the show, we talk a lot about data-driven decision making, but not AI-driven decision making. This is a big problem for managers and executives, since they have to make many decisions, and it would be helpful if some of those could be automated, or at least have some AI suggestions behind them.

The problem is that in order to make predictions about the impact of your decision, you need a model that understands cause and effect. The solution is a set of techniques known as causal AI. It's a fairly new field, but some early adopter companies are already having big successes with the techniques.

I'm convinced that this is one of the up-and-coming frontiers of AI, and that there are big opportunities for companies making use of causal AI. Our guest is Paul Hünermund, an assistant professor at Copenhagen Business School. Paul's research focuses on how firms can use causal AI for organizational decision making and value creation.

In short, he's the perfect candidate to explain how to use this technology. Let's hear what he has to say.

Hi, Paul, great to have you on the show.

Paul Hunermund: Thanks for having me, Richie.


Richie Cotton: I'd love to just get started with: what is causal AI?

Paul Hunermund: Well, it's a framework for AI, artificial intelligence, that allows you to reason causally. I think that's the main differentiator. Probably listeners have heard the saying that correlation doesn't imply causation. If you're working with data, that's something you're familiar with. But then making the next step, setting up a framework that allows artificial learners, but also us as human learners, to reason causally, that's the main differentiator.

Richie Cotton: And how would you say causal AI is different from more traditional causal statistical techniques?

Paul Hunermund: Yeah, I mean, we have a long tradition in statistics of, for example, experimental methods. And that's certainly an important tool for causal learning and causal AI. But causal AI is much broader. First of all, it's an overarching framework that also includes what we call observational methods, where an analyst would not necessarily manipulate a treatment him- or herself.

So, working with ex-post observed data. Then, it's a tradition coming mostly from computer science, so these days we also have great tools for automating these processes. And on a more philosophical, epistemological level, it takes seriously what we call the ladder of causation, which is a concept saying that the different inference tasks are distinct: the associational, correlational level is distinct from the causal level, and you need different inputs for causal learning. Causal AI takes this very seriously and also teaches it.

Richie Cotton: you give me an example of how this is used then? Just so we can get it a bit more concrete.

Paul Hunermund: Yeah. So the use cases are vast. We've seen this in drug development, for example, drug testing, where of course we traditionally relied on experimental methods, randomized controlled trials, in order to get a drug approved. But then, based on that, all sorts of other questions arise. For which groups are drugs most effective?

We've seen that, for example, with the COVID vaccines. When we rolled them out, we had experimental evidence about their effectiveness, but we didn't necessarily know which age group to target first. We had a hunch that it was older people in that case, but that wasn't a given, right?

With the Spanish flu 100 years ago, it was, for example, more younger people. And it's used in product development, marketing, advertising. We've seen applications in HR: if you roll out different HR procedures, will that affect your employees' satisfaction? Will working from home affect the productivity of your employees? Manufacturing, the defense industry.

So, various use cases all across the board.

Richie Cotton: It does sound like there are a lot of great sort of business use cases there. Just going back to your example about the vaccines. So I think that's something maybe a lot of people can understand. So you're saying that you're going to be able to understand which groups you're going to want to target with vaccines.

So, if I've understood this correctly, traditional machine learning techniques are going to give you a prediction of which groups are going to be best, but the causal layer is going to tell you why they're better. Is that correct? Or have I misunderstood?

Paul Hunermund: Yeah. So, I mean, in a drug approval or drug testing framework, I think the causal effect that you're after is immediately clear. You want to know: is this drug effective, and in which group? We're testing this in the form of a randomized controlled trial, and a lot of resources went into that.

But what we, for example, didn't do in the vaccine trials was stratify according to age. So we didn't have a separate RCT for, let's say, 80-plus, 70-plus, and so forth. But once that was done, follow-up questions immediately arose: we are resource constrained, we cannot vaccinate everyone immediately, so who do we target first? And this decision was then actually made based on ex-post observed data.

So we saw people picking up the vaccine, and mostly older people were picking it up first. There are, for example, good studies from Israel, where the vaccine rollout was pretty fast. But whenever people choose, it's likely that those who already expect higher effectiveness, or are in a risk group, go out and take the vaccine first, and other groups might be more hesitant.

And that creates all sorts of confounding. I remember in the early days we had this discrepancy between the RCTs, which said effectiveness was around 90%, and the field, where we saw effectiveness dropping to maybe 60 or 70%. And that could have been, well, I'm not an epidemiologist, so take this with a grain of salt, but that could have been because of a different variant of the COVID strain, but also because, for risk groups like the older people getting the vaccine first, the prevalence of COVID-19 was just much higher. So, to summarize: we had experimental evidence, but then also ex-post observed data, and you have to be very careful about what kind of questions you can answer with what kind of data.

Richie Cotton: Okay. It does seem like it's a pretty good way of untangling some of these confounding factors when you're not entirely sure how they're all related. All right. So, you mentioned lots of business use cases before, and I'd like to dig into those a little bit more. Do you have any stories about how businesses have used causal AI successfully?

Paul Hunermund: Yes. I mean, as always when we're talking about AI and machine learning, it's big tech companies that are at the forefront, that are also doing a lot of research on methods and algorithms in this case. But it's not just them. Outside of, let's say, the big five tech firms, we have companies like Booking.com, which is an industry leader in A/B testing on their platform, and they're increasingly thinking about using causal methods to optimize this process. We have Spotify in Sweden, Netflix, LinkedIn experimenting on their platforms, and then you have questions like: you cannot just run a standard experiment, because all these data points are clustered. People are in a cluster, in a network. Then you need special kinds of experimentation techniques.

I've worked together with Zalando in Berlin, which is a fashion retailer, an online retail store, and they're using causal ML techniques quite heavily, and not just experiments but also observational techniques.

I've seen cases of, for example, big supermarket chains using causal machine learning to optimize delivery times when you get your groceries delivered at home, but also using it to reduce food waste in their stores. That's one interesting use case because it relates to sustainability.

Companies like McKinsey are investing in causal ML and in open-source libraries for causal ML. I've seen applications in the aviation industry. So again, it's very broad. And it's not just big firms, but also smaller startups investing in it and specializing in causal ML.

Richie Cotton: It's cool that such a wide range of organizations are actually using this. And it sounds like perhaps the most common pattern is: if a company has a product where they do a lot of A/B testing, the simple experimentation, then causal AI, causal ML, is the next thing they need to think about.

Is that fair?

Paul Hunermund: Yeah, that's fair. We've talked with a lot of practitioners and data scientists in the industry, and one of them told us that A/B tests are the big hammer that tech companies swing around. That's how he expressed it. Because if you have the possibility to experiment, let's say, on your platform, you can try out different variants of your website, different shades of a button.

That's, first of all, easy to implement; there are dedicated services for setting up an experimentation platform on your website. And it's also easy to understand, let's say, for people who are not necessarily heavily trained in this space. So it's definitely a first use case, but then you often realize that many of these questions are not possible to answer with a simple A/B test, because of limitations.

Well, it might be too costly to run an experiment, right? The risk of losing sales in the B group might be too high. It might simply be unethical to run an experiment. One limitation we've also seen a lot is that the actual business metrics you care about are long term, let's say retention in a year or two, but an A/B test usually runs for two, three, four weeks, not longer.

So then you have to rely on some kind of proxy metric, click-through rates and so on. And in these cases, observational techniques, like working with ex-post data, have a lot of advantages, because this data is often lying around; you just have to harvest it. You can really observe your customers' past behavior and draw inferences from it.

People choose for themselves, which creates a challenge for causal inference, but it alleviates some of these ethical problems, right? If people choose a vaccine by themselves, you're not withholding the vaccine from one group, which could be an ethical challenge. So these kinds of examples.
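For readers who want the mechanics, the basic A/B comparison described here boils down to a two-proportion test. This is a minimal sketch with made-up conversion numbers, not any company's real data:

```python
import math

# Hypothetical A/B test results: (conversions, visitors) per arm.
conv_a, n_a = 480, 10000   # control
conv_b, n_b = 560, 10000   # variant

p_a, p_b = conv_a / n_a, conv_b / n_b

# Pooled two-proportion z-test: is the lift larger than chance would explain?
p_pool = (conv_a + conv_b) / (n_a + n_b)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
z = (p_b - p_a) / se

print(f"lift: {p_b - p_a:.3%}, z = {z:.2f}")  # |z| > 1.96 is significant at the 5% level
```

Even when the arithmetic checks out, the result only answers the short-horizon question the test was run on, which is exactly the proxy-metric limitation mentioned above.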

Richie Cotton: That list of challenges with A/B tests is really interesting, particularly the point about measuring something long term, something that takes months or years. At basically every company, whenever they run an A/B test, there's always someone asking: can we just make a decision yet?

Even when the test is only halfway through, just a few days in, someone will be crying out: we need to just go and pick one or the other. So I do like the idea that these techniques can be used in cases where you want to think a bit more long term, but you don't have the patience to run the test for that long.

Okay. So, we talk about data-driven decision making a lot on this show, and it seems like causal AI might be helpful for making decisions. Have you seen any examples of this?

Paul Hunermund: On a high level, I think the areas where causal ML can really contribute to current AI practice are actually the big challenges that we face in AI, which are fair decision making, fairness, robust decision making, and also explainable AI. So, understanding why we're taking a certain decision and not another. And that's because explainability is related to the why question.

And whenever we ask why questions, it immediately relates to counterfactual thinking. We're comparing two possibilities, two states of the world. Why, for example, did my headache improve? Was it because of the pill that I took in the morning? So that immediately relates to causal AI. And robustness, in the sense that when we're talking about A/B testing, for example, there's always the question of external validity, or what the computer scientists call transportability.

We've seen that in the COVID pandemic quite a lot. We've run all these A/B tests in the past; now the marketplace has fundamentally changed, because a pandemic is going on. Can we still rely on the experimental knowledge that we have accumulated in the past, or do we need to repeat those A/B tests?

And currently we have really good tools for transporting causal knowledge across, let's say, markets or time; causal ML does contribute on that front, making decision making more robust. And then the third pillar is fairness. We all know the stories about bias in algorithmic decision making, and causal AI techniques are able to incorporate, for example, protections for a protected attribute like gender or race much more cleanly.

Richie Cotton: tHat's really interesting. I'd maybe like to get into that in a bit more detail then. So, can you maybe give me an example of one of these stories where there has been a problem using traditional techniques and talk about how causal AI might mitigate those problems?

Paul Hunermund: Yeah, so my favorite example, which I'm also using when I'm teaching causal machine learning here at Copenhagen Business School, is the Google controversy. That's how it was described, at least on Twitter, a few years back. In a nutshell, Google was accused of underpaying female employees, and there was even an investigation by the Department of Labor in the U.S. Then, two years later, in 2019, Google published a blog post, together with an associated white paper describing the methodology, saying they had investigated this in detail, using large data sets, lots of variables going into the analysis, HR data from Google offices all across the world.

And surprisingly, they found exactly the opposite. They found that, according to them, it was male employees, and especially senior male employees, senior software engineers, who were underpaid. And then they took action and actually raised the salaries of these employees. If you investigate this case closely, and I don't think we have the time for it here, it's very likely that this was a case of the so-called Simpson's paradox, which maybe statisticians know about. It's essentially a causal inference problem, because you're holding certain variables fixed in a regression, something like seniority.

And that can lead to this kind of paradoxical situation where a causal effect that goes in one direction actually flips around, but this is due to spurious correlations that you basically create. And that is one example, I think, where things went wrong. Google was already trying to do a fairly sophisticated analysis, a big-data regression analysis, to get at something like a gender wage gap in their organization, but not having the right tools to set up this model correctly, they got completely the wrong answer out of it.
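Simpson's paradox is easy to reproduce numerically. This sketch uses the classic kidney-stone treatment numbers, purely as an illustration (nothing to do with Google's data): option A wins inside every stratum, yet loses in the pooled comparison:

```python
# (successes, total) for two options across two strata.
groups = {
    "small": {"A": (81, 87),  "B": (234, 270)},
    "large": {"A": (192, 263), "B": (55, 80)},
}

def rate(successes, total):
    return successes / total

# Within each stratum, A has the higher success rate.
for name, g in groups.items():
    a, b = rate(*g["A"]), rate(*g["B"])
    print(f"{name}: A={a:.2f}, B={b:.2f}")

# Pooling the strata flips the comparison: B looks better overall,
# because A was applied disproportionately to the harder stratum.
a_succ = sum(g["A"][0] for g in groups.values())
a_tot = sum(g["A"][1] for g in groups.values())
b_succ = sum(g["B"][0] for g in groups.values())
b_tot = sum(g["B"][1] for g in groups.values())
print(f"pooled: A={rate(a_succ, a_tot):.2f}, B={rate(b_succ, b_tot):.2f}")
```

Whether you should stratify or pool is not a statistical question; it depends on the causal structure, which is exactly why a causal model is needed.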

Richie Cotton: Fascinating, but also slightly worrying, when you think about how many decisions are made where someone has done the analysis in a suboptimal way and come to exactly the opposite conclusion to the one they should have reached. So I'd like to go into a bit of depth about what these tools are. Can you give me some examples of the sorts of techniques that causal AI encompasses?

Paul Hunermund: So, we already talked about A/B testing, which is the standard experimentation framework. But then, going beyond that, there's reinforcement learning, and in particular causal reinforcement learning, which is about not just mindlessly running experiments, because the space of potential hypotheses you could investigate on a platform like Booking.com is almost infinite, but using causal knowledge you've acquired in the past, using the kind of transportability techniques I briefly mentioned, to optimize which experiments to run and where to optimally intervene on certain variables. That's on the experimental side of things.

Then there are causal modeling techniques using directed acyclic graphs, for example, from the computer science literature, where I've already heard of examples where data scientists sit together with stakeholders and domain experts to draw causal models on a flip chart in order to investigate a topic, for example the gender wage gap.

Then we have different intellectual traditions coming from epidemiology and economics: the entire suite of quasi-experimental methods, like difference-in-differences, regression discontinuity designs, instrumental variables. Those are used in industry too, for example to optimize the delivery times I talked about at the supermarket chains. Causal discovery is another technique firms use, for example not just to find failure points in data warehouses and predict where likely failure points are, but also to find the causes of failures and immediately find a fix for them.

So I would say these are the broad classes of algorithms that are mostly used.
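Of the quasi-experimental methods listed here, difference-in-differences is the simplest to show. A minimal sketch with made-up group averages (say, average delivery time in minutes, where a rollout reached only the treated stores):

```python
# Hypothetical group averages: the rollout reached only the treated stores.
pre = {"treated": 42.0, "control": 45.0}    # before rollout
post = {"treated": 36.0, "control": 44.0}   # after rollout

# A naive before/after change on the treated group mixes the rollout's
# effect with whatever trend affected everyone; subtracting the control
# group's change removes that common trend.
naive = post["treated"] - pre["treated"]
trend = post["control"] - pre["control"]
did = naive - trend

print(f"naive change: {naive}, common trend: {trend}, DiD estimate: {did}")
```

The estimate is only credible under the parallel-trends assumption: absent the rollout, both groups would have moved the same way. That assumption is exactly the kind of domain knowledge no package can supply for you.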

Richie Cotton: I really like your example of people just drawing graphs on a flip chart. Real pen and paper is still a good low-tech solution. But you mentioned that it involves getting experts in the domain to contribute. Can you tell me a bit about how that works?

Paul Hunermund: Yes, so that's an important point, and something of a perspective change from standard machine learning. For causal inference you will always need some form of expert domain knowledge, or, well, you could call it a causal model, in order to get causal effects out. There is no way to do causal inference in a purely data-driven way.

And this is because correlation is easy to show in the data, but we know that many things correlate with each other. For example, ice cream sales and shark attacks, the standard example, or Nicolas Cage movies and people drowning in a pool; maybe some people have seen this example. But in order to claim a causal effect, you need to rule out alternative explanations, you need to rule out confounding factors, and that requires background knowledge, expert domain knowledge.

So in causal inference, it's not enough that a team of data scientists is very savvy on the algorithm side. You need to understand very clearly what you are investigating, and you need to understand the processes behind it. For that you need domain experts, for example in marketing, depending on the context you're analyzing.

But I think managerial input is also important, and we've seen these kinds of mixed teams being set up. And yeah, it can start with a flip chart; it's a low-tech solution. It doesn't have to be low tech. You can also, for example, complement this with causal discovery techniques. But in the end, you will need to bring in this expert knowledge.

And that's also a hurdle for adoption, I think, from what we have seen, because in standard machine learning you always have a ground truth, right? You can take out a holdout sample, train your algorithm, and then check how well the algorithm performs on the holdout sample, where you have a ground truth.

This is not necessarily the case in causal inference. My students often ask me about this, and then I tell them the perspective you need to adopt is like an if-then expression: certain assumptions go into it, and you get an answer out. But if you change the assumptions, you will get a different answer.

At least then we have, if done right, a rigorous way of debating these assumptions and drawing the right conclusions.
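The ice-cream-and-sharks example can be simulated in a few lines. In this sketch a common cause drives both variables (temperature is an invented stand-in): they correlate strongly in pooled data, and the association essentially vanishes once you condition on the confounder:

```python
import random

random.seed(1)

def corr(xs, ys):
    # Pearson correlation, stdlib only.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Temperature causes both; neither causes the other.
temps = [random.uniform(10, 35) for _ in range(5000)]
ice_cream = [2.0 * t + random.gauss(0, 5) for t in temps]
sharks = [0.5 * t + random.gauss(0, 5) for t in temps]

print("pooled correlation:", round(corr(ice_cream, sharks), 2))

# Condition on the confounder: within a narrow temperature band,
# the spurious association largely disappears.
band = [(i, s) for t, i, s in zip(temps, ice_cream, sharks) if 19 < t < 21]
print("within-band correlation:",
      round(corr([i for i, _ in band], [s for _, s in band]), 2))
```

Knowing that temperature is the variable to condition on is precisely the domain knowledge that no amount of data replaces.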

Richie Cotton: That's fascinating stuff. And I'm curious as to what the output of these models looks like. So, for a standard machine learning model, probably the thing you're going to get out of this is a prediction. But it sounds like with the causal models that what you get is going to be something different.

So can you talk me through what's the output from the models?

Paul Hunermund: So, there is a definition of causal inference by James Woodward, who is a philosopher of science. He said that causal inference is a special kind of prediction task: it's predicting the likely impact of an action or intervention. And there's a lot to unpack in this definition, I think. First of all, it's the likely impact.

So causal inference is also a probabilistic framework; it's not a deterministic answer. The causal effect is a probability change: we increase the probability of retaining consumers by X percent, for example. But then, predicting the likely impact of an action or intervention: I think there you immediately see why causal inference is useful and important for decision making in all sorts of domains, because decision making relates to taking an action.

Technically, what you get out is hopefully a valid causal effect estimate, like I said: we increased the click-through rate in the A group versus the B group by X percent. That is one number. Increasingly, we're moving away from estimating just average causal effects toward conditional causal effects, which would be the idea that we know the effectiveness of the vaccine, which is on average 90%, at least for the alpha strain of COVID, right?

But then it varies across different age groups. So, the causal effect for different groups. And that is of course very important for targeting consumers, because we increasingly realize that a metric like customer lifetime value is not the right metric. What we really want to know is which consumers react most to the ad we're running.

The customers with the highest customer lifetime value might sign up to our platform anyway, because they like the product so much. We want to target the customers at the margin.

Richie Cotton: That's pretty amazing. I think that's something a lot of people in marketing don't get, because maybe they're looking at the wrong metrics. They're looking at standard things like customer lifetime value, and actually that's not the same as campaign performance, like how well it is converting people.

That's absolutely fascinating. And it does sound like causal AI is quite useful for scenario planning. You mentioned that you can say: if we take this action, we can see what the consequences are. Is that correct? You can plan different strategies that way and say, let's do different things and see what the effects are.

Paul Hunermund: Yes, that's right. You're targeting, or you want to understand, which kind of action works best under what kind of circumstances. And there, prediction and causal inference actually meet, because if you're thinking about estimating conditional causal effects, so in which age group is the drug most effective, then that brings in a prediction problem again through the back door, because you want to predict where the causal effect is highest. And techniques like causal forests, causal random forests, bring in exactly this idea of predicting the groups for which a certain action is most effective, and that can be used in scenario planning, yes.
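In its simplest form, a conditional (group-wise) effect estimate from a randomized experiment is just a difference in means within each subgroup; causal forests generalize this by learning the subgroups from data. A toy simulation of the vaccine-style setting discussed above, with invented effect sizes:

```python
import random

random.seed(0)

def simulate(n=20000):
    # Made-up ground truth: the treatment lifts the outcome probability
    # by +0.40 in the "65+" group and by +0.10 in the "under 65" group.
    rows = []
    for _ in range(n):
        group = "65+" if random.random() < 0.5 else "under 65"
        treated = random.random() < 0.5          # randomized assignment
        effect = 0.40 if group == "65+" else 0.10
        p = 0.30 + (effect if treated else 0.0)
        rows.append((group, treated, 1 if random.random() < p else 0))
    return rows

def conditional_effect(rows, group):
    # Difference in mean outcomes, treated minus control, within a group.
    t = [y for g, tr, y in rows if g == group and tr]
    c = [y for g, tr, y in rows if g == group and not tr]
    return sum(t) / len(t) - sum(c) / len(c)

data = simulate()
for g in ("65+", "under 65"):
    print(g, round(conditional_effect(data, g), 2))
```

Randomization makes the within-group comparison valid here; with ex-post observational data, the same quantity would first need confounding adjustment.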

Richie Cotton: A quick question on tooling. If any data practitioners want to get started with this, what sort of tools are they going to be using?

Paul Hunermund: Yeah, it depends on what kind of algorithms you want to use; for experimental and observational techniques there are different packages, for example in Python and R. I think what became somewhat of an industry standard is the DoWhy package in Python. The library is maintained by Microsoft Research and is pretty well developed by now.

I think a big plus of DoWhy, especially if you're already using Python, is that it has this causal inference pipeline, from setting up a causal model, to estimating a causal effect, to doing specification tests and refutation of these causal models, all together in one pipeline.

I think that's very helpful for moving things into production, for example. Then, since causal inference is a topic that comes mostly out of academia, we also see a lot of packages in R, which is used more, I guess, by statisticians and data scientists.

So, dagitty in R and ggdag would be packages used for working with these causal graphs. For causal discovery there's the pcalg package, which is named after the PC algorithm, one of the first algorithms for causal discovery. So these are the main packages, or libraries, that are used.

Richie Cotton: Okay, so, sounds like you've got options whether you're using Python or R. So, sorry, do y in tcl, gm, gg.

Paul Hunermund: Exactly. Directed acyclic graph.

Richie Cotton: so, let's get into how businesses can go about adopting this. And are there any roles or teams or sort of areas of business that you think should be paying particular attention to causal AI?

Paul Hunermund: Yeah. So, what we do see in practice, and I mentioned already that we've done interviews, we also run surveys among data scientists on where the industry is in terms of adoption. We see more and more interest. We're probably still at an early phase of the adoption curve, but that can go fast.

So better move fast if you want to be an early adopter. And since those techniques come mostly out of academia, it is data scientists, often with a fairly good technical background, maybe a PhD, who are adopting those methods and spreading the word within an organization.

So, within an organization it seems to be bottom-up adoption. But for the reasons I mentioned earlier, a data science team will hit a wall at some point if they don't bring in stakeholders with the necessary background knowledge. And those stakeholders, of course, also need to understand what we are doing on this flip chart, what kind of strange diagram we're drawing there.

I think there's also a lot of value for stakeholders and managers in getting at least a preliminary understanding of causal ML. You don't need to be an expert in all the algorithms, but you need to understand why causal inference and causal decision making is important. So I would say: yes, data scientists, CTOs, but then also executives and managers more broadly, who are perhaps on the user side of data science in general; they should also have an eye on this topic.

Richie Cotton: It sounds like, from some of the examples you gave earlier that people like product managers or maybe people involved in marketing, they're also going to need to know about this stuff as well just to make sure the experiments are running correctly.

Paul Hunermund: Yes, exactly. That is something we've seen. Often, data scientists use A/B tests not just because it's a great tool for causal learning, but because it's also easy to communicate the results. In a proper A/B test, you compare, let's say, two bars in a bar chart; one option is higher, and then you go with that option.

But that tells you that in the broader organization there needs to be some appreciation, some understanding, of causal inference techniques, and it can be based on the standard experimental mindset. It would be good, though, if more people actually understood, let's say, the basic principles of experimental versus observational causal inference techniques.

And what are the things to look out for if you're working with ex-post observed data compared to an experiment? Because then data science teams also have an easier time communicating their results. And that relates to exactly what you said: product managers, executives, and so forth.

Richie Cotton: I think that's often a danger with data scientists using new techniques or techniques that are unfamiliar to the organizations, they go and do something and then people aren't quite sure what to do with the results. So having that kind of broad organizational awareness does sound very useful indeed.

And from a technical point of view, is this something you might expect business analysts to be able to perform, or do you need a really strong statistical machine learning background to get started with causal AI?

Paul Hunermund: I don't think so, because for me, causal inference is, first of all, a new perspective on data analytics and data science. So in an ideal case, any kind of data science or statistics curriculum would start with these basic principles, really understanding what kind of data we need in order, for example, to get a causal estimate versus a prediction, before we jump into more sophisticated algorithms and running regressions.

And that's the way I also teach this topic here at Copenhagen Business School, where I'm an assistant professor: more on a conceptual level. We do sophisticated causal analysis, and a lot of technicalities go into it, but that's not the main point. So if you want to use more advanced tools, let's say in the area of causal discovery, of course at one point you will need a proper technical background and perhaps a PhD in that area.

But before you get there, you will have already made a lot of progress, I think. So, yes, I hope that these techniques, and just this perspective, will diffuse more broadly, and you don't have to be a PhD scientist in order to understand causal ML principles.

Richie Cotton: Okay, that's good to know. No PhD needed to get going. I think it's going to be particularly important for managers, since often managers are going to need to know this stuff, but they're not necessarily from a data background or from an AI background. So, what do managers need to know in order to make use of causal AI?

Paul Hunermund: Yes, so I think Judea Pearl, who's an eminent figure in that area and has contributed to the field over many decades, published The Book of Why in 2018. And that has created a lot of attention in industry around causal AI and causal machine learning techniques. So I think that would be a great way to start, because it's also a very engaging read, just to understand the difference in perspectives, the difference between the associational level and the causal level. And then I think it depends on your personal style.

So, of course, you can take a course on causal machine learning. I am offering, for example, online classes people can sign up to, and there are others out there. There's good documentation for packages like the DoWhy package; if you're, for example, already working with Python, you could just install it and work through the examples that they give you.

There are many good books out there these days for various technical levels. So that's the way to start. But I think the most crucial part is that, as a manager, you actually realize that you have an important role to play in this process. It's not just something that you can outsource; your contribution to this process is important, and then the rest will follow.

Richie Cotton: Absolutely, I like your idea of online courses. We're obviously big fans of online learning at DataCamp. In terms of what you're going to do with causal AI, I think quite often organizations say, "Well, I don't know where to start." So do you have any examples of what a good first project might be?

So something either simple or something high impact that companies can get started with?

Paul Hunermund: Yeah, I think one of the problems is that people get too ambitious at the beginning and then get overwhelmed. I think you should start slow and perhaps analyze a data set that's already out there. You can try to run some preliminary analysis where you run regressions in order to rule out some confounding influence factors, for example. Or, if there's a possibility to set up an A/B test, that would be a good starting point.
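To make the "rule out confounding factors" idea concrete, here is a toy sketch in pure Python, with all numbers invented: a binary confounder drives both treatment and outcome, so the naive difference in means overstates the true effect, while stratifying on the confounder (a simple backdoor adjustment, the kind of thing a regression with controls or a library like DoWhy automates) recovers it.

```python
import random

random.seed(0)

# Toy simulation: a binary confounder Z drives both treatment T and outcome Y.
# All numbers are invented for illustration.
n = 100_000
data = []
for _ in range(n):
    z = random.random() < 0.5                   # confounder
    t = random.random() < (0.8 if z else 0.2)   # Z makes treatment more likely
    y = 2.0 * t + 3.0 * z + random.gauss(0, 1)  # true causal effect of T is 2.0
    data.append((z, t, y))

# Naive difference in means is biased upward, because treated units
# disproportionately have Z = True.
treated = [y for z, t, y in data if t]
control = [y for z, t, y in data if not t]
naive = sum(treated) / len(treated) - sum(control) / len(control)

# Backdoor adjustment: compare within strata of Z, then average over P(Z).
def stratum_effect(zval):
    tr = [y for z, t, y in data if z == zval and t]
    co = [y for z, t, y in data if z == zval and not t]
    return sum(tr) / len(tr) - sum(co) / len(co)

adjusted = 0.5 * stratum_effect(True) + 0.5 * stratum_effect(False)
print(f"naive: {naive:.2f}, adjusted: {adjusted:.2f}")  # adjusted is near 2.0
```

The naive estimate lands well above 2.0 because treatment and outcome share the common cause Z; conditioning on Z closes that backdoor path. In practice you rarely observe every confounder, which is exactly the limitation discussed later in the conversation.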

So, in that sense, causal ML is maybe the next step in the machine learning or AI pipeline, because all the challenges that come with machine learning and AI apply here too: you need good data, you need a good data pipeline, and you need to move beyond, let's say, standard descriptive analysis to more modeling.

So you need to make sure that you have the right data available. But then, if you have a standard project that is already ongoing, try to use some of the causal inference techniques to complement what you've done before, which might be more in the predictive space.

Richie Cotton: Okay. So starting slow sounds good. Are there any mistakes that either people or organizations commonly make when they are trying to adopt causal AI?

Paul Hunermund: Yeah, again, I think getting too ambitious. The 80 percent rule applies here too. I think people at one point perceive causal inference as this zero-one thing: either I get it 100 percent right, or there's no use to it at all. Either it's correlation or it's causation; there's nothing in between.

I think for many applications, it's already good if you can rule out some confounders and get closer to something causal. And there will be influence factors that you cannot control for, because you don't have the data on them, or it's a decision taken by customers where you observe their behavior, but not necessarily their characteristics, for example.

But there's already a lot of value in taking the first steps. So I think that's a common mistake. You will need to have data readily available, so you need to have good data pipelines; in that sense it's maybe complementary to other AI projects within the organization. But take the first steps and then improve from there.

Richie Cotton: Can we talk a bit more about the decision making angle of causal AI? With the rise in popularity of AI this last year, there have been a lot of people worrying about decisions being made by AI, and it sounds like causal AI is going to be very good for this. So, how can you safely incorporate causal AI into decision making processes?

Paul Hunermund: Yeah, so causal inference is predicting the likely impact of an action, so there's an immediate connection to decision making. A correlation is not that. Using the silly example of ice cream sales and shark attacks: we should not shut down the ice cream vendors on the beach in order to prevent shark attacks, because there's no causal effect.

This is driven just by, well, the weather, most likely: people go to the beach and then get bitten by sharks. In this case, yeah, that's a silly example, but a lot of similar examples are lurking behind the scenes, I think, and they're not immediately visible anymore. And we've talked about explainability before, right?

Especially when we do a sophisticated machine learning analysis, there's almost this kind of black box that gives us an answer, but we don't really understand where this answer is coming from, why this prediction has been made that way. That is very dangerous. And here, causal inference is hopefully improving decision making.

There should always be a human in the loop, but the hope is that we can get more robust decisions, fairer decisions, and better-explainable decisions.

Richie Cotton: You mentioned the idea that there should be a human in the loop to assist with any decisions that an AI is making. So what happens when the human and the AI disagree?

Paul Hunermund: I think that is complementary to similar discussions that we have in standard machine learning; causal AI and machine learning are not special in that sense. And I think most people would agree that, in the end, it should be the human that has the final say, and also a human overseeing this process. But yeah, the hope is that an AI that is able to reason causally will likely propose better decisions. An AI that is able to reason causally will, for example, be less likely to discriminate based on protected attributes like gender and race. A hypothetical AI that would take salary decisions at Google would hopefully not make the mistake of raising salaries for senior male software engineers, for example, right?

Yeah, so the discussion around the human in the loop in AI is, to a large degree, I think, due to the fact that there is this mismatch between what a correlational machine learning algorithm outputs and the actual decision that we later need to take.

And once we can align these two things, we might actually be able to grant more autonomy to an AI, for example, experimenting on a platform. We still need to have oversight, and humans should have the final say, but maybe it's actually safer then to let this AI system take decisions, because it's optimized for reasoning in that causal way.

Richie Cotton: In that case, do you think there are any cases where it would be safe to completely outsource decision making to an AI, if it's using this causal thinking?

Paul Hunermund: Yes, I do. I'm thinking, for example, about low-stakes experimentation on an online platform, something like Booking.com. If we're thinking, for example, about the layout of a platform, the shape of a button, round corners versus sharp corners, right? Sometimes we actually do find effects of this on consumer behavior, and the potential for screwing things up is quite low.

I think with these kinds of decisions, nowadays, organizations like Booking try to have a very low hierarchy in this process, so basically anyone can run experiments on the platform with little red tape.

Because they want to reduce the cost of A/B testing; they want to make it more efficient. And I think these kinds of things we can actually outsource at one point to a causal AI that is not just running experiments, but also optimally learning from past experiments, maybe incorporating past learnings and ex-post data analysis in these kinds of settings.
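The idea of a system "optimally learning from past experiments" is often implemented with multi-armed bandits. Here is a minimal Thompson-sampling sketch in Python; the two button designs echo the round-versus-sharp-corners example above, and the click-through rates are invented:

```python
import random

random.seed(1)

# Hypothetical true click-through rates for two button designs.
true_rates = {"round": 0.12, "sharp": 0.08}

# Beta(1, 1) priors over each design's rate. Thompson sampling allocates
# traffic by drawing a plausible rate per arm and showing the highest draw.
successes = {arm: 1 for arm in true_rates}
failures = {arm: 1 for arm in true_rates}
pulls = {arm: 0 for arm in true_rates}

for _ in range(20_000):
    draws = {arm: random.betavariate(successes[arm], failures[arm])
             for arm in true_rates}
    arm = max(draws, key=draws.get)
    pulls[arm] += 1
    if random.random() < true_rates[arm]:  # simulate one visitor's click
        successes[arm] += 1
    else:
        failures[arm] += 1

# Traffic concentrates on the better design as evidence accumulates.
print(pulls)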

Richie Cotton: Okay, that's a pretty cool idea, having AI run experiments so that eventually you don't need people involved. Okay. So, how do you think AI assistance for decision making is going to change the role of managers?

Paul Hunermund: I think it will contribute to evidence-based management: using data science and analytics more effectively to take managerial decisions and relying less on gut feeling. And I think that has, again, to do with the fact that many people nowadays still perceive data science and AI as not completely useful for managerial decisions, because many of the outputs of standard machine learning are correlations that are not immediately usable for taking management decisions: which customers to target, which markets to enter, what to invest in, the high-stakes decisions in strategic management, for example.

And I think causal AI, because it's optimized for predicting the impact of actions, will play a role there. At the same time, I think it's also very useful to realize where the limitations of data-driven decision making are, because in many situations we will actually find that the cause-effect relationship we're interested in might not be obtainable from the data we have, because we're missing an important piece.

And then we might still need to take a decision based on gut feeling or on theoretical considerations. But at least we'll understand better where data helps and where it doesn't. So it's also curing the hype around AI, the idea that everything will be taken care of by AI. I think if you understand causal inference better, you also realize that there are limitations to data-driven decision making and, yeah, to purely data-based machine learning.

Richie Cotton: That's interesting, the idea that there are limitations to data-driven decision making. So, yeah, sometimes more data isn't going to be the answer, which makes me a little bit sad, but I guess that's a harsh fact of life. Okay, so, what are the most interesting developments going on in causal AI research at the moment?

Paul Hunermund: Yeah, what I'm most excited about is that we have many different traditions. Causal inference is almost this kind of general-purpose technology used in many different fields, also within industry: you have the health sciences and epidemiology, you have statistics, computer science. I myself have an econometrics background.

Increasingly, all these different fields are coming together, sitting at one table at joint conferences: not just academics, but also practitioners from industry, coming together to share their latest advances but also their problems with adoption. I think there's a lot of potential in that. It also has challenges, because different traditions also mean different vocabulary.

We're using different words to describe the same thing, but I think we're increasingly overcoming these challenges. And then, on a more concrete technical level, I think transportability techniques, so thinking about the external validity of experiments, are a very promising area, because they combine experimental tools, which many see as the gold standard in causal inference for the reasons we talked about, with other data sources, observational data, in order to make more effective use of causal knowledge and, for example, save experimentation costs because we can use past learnings.

Causal discovery, with this idea of how far we can get with, let's say, purely observational data, or sometimes also together with experimentation, in order to learn a complicated causal network. And I already mentioned applications in predictive maintenance: not just being predictive, like finding out where the likely failure points are, but then also immediately getting at the root causes.

Another technique in that area is root cause analysis, which is really interesting for many organizations: really getting at, like the name says, the root causes of certain failure points. That's very important in manufacturing; the defense industry is using this.
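To give a flavor of root cause analysis, here is a deliberately simple sketch in pure Python, with invented sensor readings: it just flags which upstream node deviates most from its own historical behavior. Real causal RCA (for example, the graphical-causal-model tooling in the DoWhy package) goes further and attributes the anomaly through a causal graph of the system.

```python
import statistics

# Hypothetical historical readings from two upstream machines feeding one KPI.
history = {
    "machine_a": [10.1, 9.9, 10.0, 10.2, 9.8, 10.0, 10.1, 9.9],
    "machine_b": [5.0, 5.1, 4.9, 5.0, 5.2, 4.8, 5.0, 5.1],
}

# New readings taken when the downstream KPI went anomalous.
current = {"machine_a": 10.0, "machine_b": 7.4}

def z_score(node):
    """How many historical standard deviations the current reading is off."""
    mean = statistics.mean(history[node])
    sd = statistics.stdev(history[node])
    return abs(current[node] - mean) / sd

# The node with the most anomalous reading is the prime root-cause candidate.
root_cause = max(current, key=z_score)
print(root_cause)
```

This heuristic ignores the causal structure between the machines; the point of causal RCA is precisely to distinguish the node whose mechanism actually broke from nodes that merely look anomalous because they sit downstream of it.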

Amazon Science had a very influential paper on this two years ago, I believe. So those are, for me, the most interesting developments in causal AI, but there are plenty.

Richie Cotton: It sounds like there's a going on. but it's perhaps fairly early stages research a of it's still happening at, of academic level and of research labs. But yeah, lots promising stuff happening.

Paul Hunermund: That's true. As an academic topic, it's very encouraging to see how many young minds, fresh PhD students for example, are getting into this area. I was recently at a conference in Tübingen, Germany; that was really, really encouraging to see. But we as a field also try to reach out. There are many practitioners and data scientists in industry who are working with these techniques, because the questions they're dealing with often have this kind of causal component to them.

So, in 2020 we set up the Causal Data Science Meeting for the first time, which is an online conference with the goal of really bringing together practitioners and academics to exchange ideas. I think we're learning a lot from the various academic disciplines, but then also from practice.

And it's really interesting to see what are the challenges and obstacles practitioners face, and maybe we can together overcome them.

Richie Cotton: Blending academia and industry. I like that combination. Alright, so just to wrap up, do you have any final advice for any organizations wanting to get started adopting causal AI?

Paul Hunermund: Have a look at The Book of Why. I think it's a really great resource, and it motivated many people to look more closely into this area. Especially the last chapter, which has an outlook on how causality is important in AI, because causality is such a fundamental concept for us as humans.

So if we want to develop artificial intelligences, they probably should also be able to reason in terms of cause and effect, simply to be able to have a better conversation with us. Then, yeah, as I said earlier, it depends on what you like. There are many good online resources out there, packages that you can start with and play around with if you prefer a hands-on approach.

There are online courses that you can take if you want more of a structured approach. And yeah, meet us at the Causal Data Science Meeting, I would say, because that's where we really want to bring the different backgrounds to the table.

Richie Cotton: Alright, brilliant. Lots of ways to get started then. Thank you so much for your time, Paul.

Paul Hunermund: Thank you very much. It was great talking to you.
