
Statistical Thinking in Python (Part 2)

4.6 · 15 reviews · Intermediate

Learn to perform the two key tasks in statistical inference: parameter estimation and hypothesis testing.

4 hours · 15 videos · 66 exercises
88,941 learners · Statement of Accomplishment




Course Description

After completing Statistical Thinking in Python (Part 1), you have the probabilistic mindset and foundational hacker stats skills to dive into data sets and extract useful information from them. In this course, you will do just that, expanding and honing your hacker stats toolbox to perform the two key tasks in statistical inference: parameter estimation and hypothesis testing. You will work with real data sets as you learn, culminating with an analysis of beak measurements of Darwin's famous finches. You will emerge from this course with new knowledge and lots of practice under your belt, ready to attack your own inference problems out in the world.
  1. Parameter estimation by optimization

    Free

    When doing statistical inference, we speak the language of probability. A probability distribution that describes your data has parameters, so a major goal of statistical inference is to estimate the values of these parameters, which lets us concisely and unambiguously describe our data and draw conclusions from them. In this chapter, you will learn how to find the optimal parameters, those that best describe your data; a short code sketch of the idea follows the exercise list below.

    Optimal parameters (50 xp)
    How often do we get no-hitters? (100 xp)
    Do the data follow our story? (100 xp)
    How is this parameter optimal? (100 xp)
    Linear regression by least squares (50 xp)
    EDA of literacy/fertility data (100 xp)
    Linear regression (100 xp)
    How is it optimal? (100 xp)
    The importance of EDA: Anscombe's quartet (50 xp)
    The importance of EDA (50 xp)
    Linear regression on appropriate Anscombe data (100 xp)
    Linear regression on all Anscombe data (100 xp)
  2. Bootstrap confidence intervals

    To "pull yourself up by your bootstraps" is a classic idiom meaning that you achieve a difficult task by yourself with no help at all. In statistical inference, you want to know what would happen if you could repeat your data acquisition an infinite number of times. This task is impossible, but can we use only the data we actually have to get close to the same result as an infinitude of experiments? The answer is yes! The technique to do it is aptly called bootstrapping. This chapter will introduce you to this extraordinarily powerful tool.

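As a minimal illustration of the bootstrapping idea described above (the synthetic data and the choice of the mean as the statistic are assumptions made for this sketch):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a measured sample
data = rng.normal(loc=10.0, scale=2.0, size=100)

# Bootstrap: resample the data with replacement many times and
# recompute the statistic of interest (here, the mean) each time.
bs_replicates = np.array([
    np.mean(rng.choice(data, size=len(data)))
    for _ in range(10_000)
])

# A 95% bootstrap confidence interval from the percentiles of the replicates
ci_low, ci_high = np.percentile(bs_replicates, [2.5, 97.5])
print(f"95% CI for the mean: [{ci_low:.2f}, {ci_high:.2f}]")
```
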
  3. Introduction to hypothesis testing

    You now know how to define and estimate parameters given a model. But the question remains: how reasonable is it to observe your data if a model is true? This question is addressed by hypothesis tests. They are the icing on the inference cake. After completing this chapter, you will be able to carefully construct and test hypotheses using hacker statistics.

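One common hacker-stats version of a hypothesis test is a permutation test. The sketch below uses made-up samples and a difference of means as the test statistic, both chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two synthetic samples, e.g., a measurement taken under two conditions
a = rng.normal(5.0, 1.0, size=40)
b = rng.normal(5.4, 1.0, size=35)

observed = np.mean(a) - np.mean(b)

# Null hypothesis: the two samples come from the same distribution.
# Simulate it by pooling the data and scrambling the labels.
pooled = np.concatenate([a, b])
perm_reps = np.empty(10_000)
for i in range(perm_reps.size):
    shuffled = rng.permutation(pooled)
    perm_reps[i] = np.mean(shuffled[:a.size]) - np.mean(shuffled[a.size:])

# p-value: fraction of permutation replicates at least as extreme
# (in absolute value) as the observed difference.
p_value = np.mean(np.abs(perm_reps) >= np.abs(observed))
print(f"p-value: {p_value:.4f}")
```
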
  4. Hypothesis test examples

    As you saw in the last chapter, hypothesis testing can be a bit tricky. You need to define the null hypothesis, figure out how to simulate it, and define clearly what it means to be "more extreme" in order to compute the p-value. Like any skill, practice makes perfect, and this chapter gives you some good practice with hypothesis tests.

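For example, one way to simulate a null hypothesis about a single mean is to shift the sample so it obeys the null and then bootstrap from the shifted data; the numbers below are made up purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

sample = rng.normal(7.5, 0.6, size=30)   # synthetic measurements
null_mean = 7.8                          # hypothesized population mean

# Simulate the null hypothesis: shift the sample so its mean equals the
# hypothesized value, then draw bootstrap replicates from the shifted data.
shifted = sample - np.mean(sample) + null_mean
bs_means = np.array([
    np.mean(rng.choice(shifted, size=len(shifted)))
    for _ in range(10_000)
])

# "More extreme" here means a bootstrap mean at least as far below the
# null value as the observed sample mean (a one-sided test).
p_value = np.mean(bs_means <= np.mean(sample))
print(f"p-value: {p_value:.4f}")
```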

Datasets

Anscombe data, Bee sperm counts, Female literacy and fertility, Finch beaks (1975), Finch beaks (2012), Fortis beak depth heredity, Frog tongue data, Major League Baseball no-hitters, Scandens beak depth heredity, Sheffield Weather Station

Collaborators

Yashas Roy
Hugo Bowne-Anderson

Justin Bois

Lecturer at the California Institute of Technology

Justin Bois is a Teaching Professor in the Division of Biology and Biological Engineering at the California Institute of Technology. He teaches nine different classes there, nearly all of which heavily feature Python. He is dedicated to empowering students in the biological sciences with quantitative tools, particularly data analysis skills. Beyond biologists, he is thrilled to develop courses for DataCamp, whose students are an excited bunch of burgeoning data scientists!

Don’t just take our word for it

4.6 out of 5, from 15 reviews (5 stars: 73%, 4 stars: 20%, 3 stars: 0%, 2 stars: 7%, 1 star: 0%)
  • I T.
    10 months

    The instructor is excellent, and the course is interesting. The practice could have been more challenging so the learner could learn more about how to code.

  • Allan M.
    over 1 year

    It is a very good course.

  • Tuan N.
    over 1 year

    no

  • Greg T.
    over 1 year

    Really gave me a greater understanding of Probability and Statistics.

  • Jennifer V.
    over 1 year

    This was an excellent course. The instructor really knew how to communicate the concepts to the student (something which cannot be said for all of the instructors on DataCamp). It included a lot of vital information and I learned more in this course than most of my previous courses on this platform.

"The instructor is excellent, and the course is interesting. The practice could have been more challenging so the learner could learn more about how to code."

I T.

"It is a very good course."

Allan M.

"no"

Tuan N.
