Foundations of Inference in Python
Get hands-on experience drawing sound conclusions from data in this four-hour course on statistical inference in Python.
4 hours · 14 videos · 48 exercises
Course Description
Truly Understand Hypothesis Tests
What happens after you compute your averages and make your graphs? How do you go from descriptive statistics to confident decision-making? How can you apply hypothesis tests to solve real-world problems? In this four-hour course on the foundations of inference in Python, you'll get hands-on experience drawing sound conclusions from data. You'll learn all about sampling and discover how improper sampling can throw statistical inference off course.
Analyze a Broad Range of Scenarios
You'll start by working with hypothesis tests for normality and correlation, as well as both parametric and non-parametric tests. You'll run these tests using SciPy and interpret their output to inform decision-making. Next, you'll measure the strength of an outcome using effect size and statistical power, all while guarding against spurious results by applying multiple-comparison corrections. Finally, you'll use simulation, randomization, and meta-analysis to work with a broad range of data, including re-analyzing results from other researchers.
Draw Solid Conclusions From Big Data
Following the course, you will be able to take big data and use it to make principled decisions that leaders can rely on. You'll go well beyond graphs and summary statistics to produce reliable, repeatable, and explainable results.
In the following Tracks
Applied Statistics in Python
1. Inferential Statistics and Sampling (Free)
In this chapter, we'll explore the relationship between samples and statistically justifiable conclusions. Choosing a sample is the basis of sound statistical decision-making, and we'll examine how the choice of a sample affects the outcome of your inference. An illustrative code sketch follows the exercise list below.
- Statistical inference and random sampling (50 xp)
- Sampling and point estimates (100 xp)
- Repeated sampling, point estimates and inference (100 xp)
- Sampling and bias (50 xp)
- Visualizing samples (100 xp)
- Inference and bias (100 xp)
- Confidence intervals and sampling (50 xp)
- Normal sampling distributions (100 xp)
- Calculating confidence intervals (100 xp)
- Drawing conclusions from samples (100 xp)
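The exercises use the course's own datasets, but as a rough, illustrative sketch of this chapter's ideas (made-up data, not the course's), here is how drawing a random sample and computing a normal-approximation confidence interval for a mean might look with NumPy and SciPy:

```python
import numpy as np
from scipy import stats

# Hypothetical skewed "population" standing in for the course's datasets.
rng = np.random.default_rng(42)
population = rng.exponential(scale=10, size=100_000)

# Simple random sample and a point estimate of the population mean.
sample = rng.choice(population, size=200, replace=False)
point_estimate = sample.mean()

# 95% confidence interval based on the normal sampling distribution of the mean.
standard_error = stats.sem(sample)
low, high = stats.norm.interval(0.95, loc=point_estimate, scale=standard_error)

print(f"Point estimate: {point_estimate:.2f}")
print(f"95% CI: ({low:.2f}, {high:.2f})")
```

Repeating the sampling step many times and watching how the point estimate and interval move is exactly the intuition about sampling distributions that this chapter builds.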
2. Hypothesis Testing Toolkit
Learn all about applying normality tests, correlation tests, and parametric and non-parametric tests for sound inference. Hypothesis tests are tools, and choosing the right tool for the job is critical for statistical decision-making. While you may be familiar with some of these tests from introductory courses, this chapter goes deeper to enhance your inferential toolkit. A short SciPy sketch follows the exercise list below.
- Normality tests (50 xp)
- Testing for normality (100 xp)
- Distribution of errors (100 xp)
- Fitting a normal distribution (100 xp)
- Correlation tests (50 xp)
- Testing for correlation (100 xp)
- Autocorrelation (100 xp)
- Explained variance (100 xp)
- Parametric tests (50 xp)
- Equal variance (100 xp)
- Normality of groups (100 xp)
- ANOVA (100 xp)
- Non-parametric tests (50 xp)
- Comparing rankings (100 xp)
- Comparing medians (100 xp)
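The specific datasets and test choices in the exercises may differ, but the chapter's toolkit maps onto standard `scipy.stats` functions. A minimal sketch on made-up data, assuming Shapiro-Wilk for normality, Pearson for correlation, one-way ANOVA as the parametric test, and Mann-Whitney U as a non-parametric alternative:

```python
import numpy as np
from scipy import stats

# Hypothetical measurements for three groups (illustration only).
rng = np.random.default_rng(0)
group_a = rng.normal(loc=5.0, scale=1.0, size=40)
group_b = rng.normal(loc=5.5, scale=1.0, size=40)
group_c = rng.normal(loc=6.0, scale=1.0, size=40)

# Normality test (Shapiro-Wilk): the null hypothesis is that the data are normal.
print(stats.shapiro(group_a))

# Correlation test (Pearson): returns the correlation coefficient and a p-value.
x = np.arange(40)
print(stats.pearsonr(x, group_a))

# Equal-variance check (Levene) before leaning on ANOVA's assumptions.
print(stats.levene(group_a, group_b, group_c))

# Parametric test: one-way ANOVA across the three groups.
print(stats.f_oneway(group_a, group_b, group_c))

# Non-parametric alternative: Mann-Whitney U compares two groups via ranks.
print(stats.mannwhitneyu(group_a, group_b))
```

In each case the p-value is read the same way: choose a significance level beforehand and reject the null hypothesis only if the p-value falls below it.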
3. Effect Size
In this chapter, you'll measure and interpret effect size in various situations, encounter the multiple comparisons problem, and explore the power of a test in depth. While p-values tell you whether a significant effect is present, they don't tell you how strong it is: effect size measures the strength of a treatment's effect. Master the factors underpinning effect size in this chapter. An illustrative sketch follows the exercise list below.
- Effect size (50 xp)
- Effect size for means (100 xp)
- Effect size for correlations (100 xp)
- Effect size for categorical variables (100 xp)
- Multiple comparisons and corrections (50 xp)
- Multiple comparisons problem (100 xp)
- Bonferroni-Holm correction (100 xp)
- Power of a test (50 xp)
- What is power anyway? (100 xp)
- Power for experimental design (100 xp)
- Computing power and sample sizes (100 xp)
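Neither effect size nor power is a single SciPy call, so as an illustrative sketch only (the course may compute these differently), here is Cohen's d by hand, a Holm correction, and a power-based sample-size calculation; the correction and power pieces assume the statsmodels package:

```python
import numpy as np
from statsmodels.stats.multitest import multipletests
from statsmodels.stats.power import TTestIndPower

# Hypothetical treated/control outcomes (illustration only).
rng = np.random.default_rng(1)
treated = rng.normal(loc=5.5, scale=1.2, size=50)
control = rng.normal(loc=5.0, scale=1.2, size=50)

# Cohen's d: difference in means divided by the pooled standard deviation.
pooled_sd = np.sqrt((treated.var(ddof=1) + control.var(ddof=1)) / 2)
cohens_d = (treated.mean() - control.mean()) / pooled_sd
print(f"Cohen's d: {cohens_d:.2f}")

# Holm (step-down Bonferroni) correction applied to a family of p-values.
p_values = [0.001, 0.02, 0.04, 0.30]
reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="holm")
print(reject, p_adjusted)

# Sample size per group needed to detect a medium effect (d = 0.5) with 80% power.
n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"Participants per group: {n_per_group:.0f}")
```

A tiny p-value can accompany a practically negligible effect, which is why this chapter pairs significance with effect size and power.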
4. Simulation, Randomization, and Meta-Analysis
You'll expand your inferential statistics toolkit further with a look at bootstrapping, permutation tests, and methods of combining evidence from p-values. Bootstrapping provides a first look at statistical simulation. In the lesson on meta-analysis, you'll learn how to combine results from multiple studies. You'll end with permutation tests, a powerful and flexible non-parametric tool. A brief sketch follows the exercise list below.
- Bootstrapping (50 xp)
- Bootstrap confidence intervals (100 xp)
- Bootstrapping vs. normality (100 xp)
- Combining evidence from p-values (50 xp)
- Fisher's method in SciPy (100 xp)
- Inference using Fisher's method (50 xp)
- Summarizing Fisher's method (100 xp)
- Permutation tests (50 xp)
- Permutation tests for correlations (100 xp)
- Permutation tests and bootstrapping (100 xp)
- Analyzing skewed data with a permutation test (100 xp)
- Course wrap-up video (50 xp)
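SciPy ships all three of this chapter's resampling and meta-analysis tools. As a hedged sketch on made-up, skewed data (the course's exercises use their own datasets), here are `scipy.stats.bootstrap`, `combine_pvalues` with Fisher's method, and `permutation_test`:

```python
import numpy as np
from scipy import stats

# Hypothetical skewed samples (illustration only); bootstrap and
# permutation_test require a reasonably recent SciPy (1.8 or later).
rng = np.random.default_rng(2)
skewed_a = rng.lognormal(mean=1.0, sigma=0.5, size=60)
skewed_b = rng.lognormal(mean=1.2, sigma=0.5, size=60)

# Bootstrap confidence interval for the mean (no normality assumption needed).
boot = stats.bootstrap((skewed_a,), np.mean, confidence_level=0.95, random_state=rng)
print(boot.confidence_interval)

# Fisher's method: combine p-values from independent studies into a single test.
statistic, p_combined = stats.combine_pvalues([0.04, 0.10, 0.02], method="fisher")
print(statistic, p_combined)

# Permutation test for a difference in means between the two skewed samples.
def mean_diff(x, y):
    return np.mean(x) - np.mean(y)

perm = stats.permutation_test((skewed_a, skewed_b), mean_diff,
                              vectorized=False, n_resamples=5000, random_state=rng)
print(perm.statistic, perm.pvalue)
```

Because the permutation test builds its null distribution from the data itself, it stays valid for skewed samples like these, which is why the chapter closes by analyzing skewed data with a permutation test.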
Prerequisites
Hypothesis Testing in Python
Paul Savala
Assistant Professor of Mathematics
Paul joined St. Edward's University as an Assistant Professor of Mathematics after working as a Data Scientist in industry. His research interests include the use of recurrent neural networks to reason mathematically.
Join over 15 million learners and start Foundations of Inference in Python today!