Machine Learning with Tree-Based Models in R
Learn how to use tree-based models and ensembles to make classification and regression predictions with tidymodels.
4 hours · 16 videos · 58 exercises · 8,091 learners · Statement of Accomplishment
Course Description
Tree-based machine learning models can reveal complex non-linear relationships in data and often dominate machine learning competitions. In this course, you'll use the tidymodels package to explore and build different tree-based models—from simple decision trees to complex random forests. You’ll also learn to use boosted trees, a powerful machine learning technique that uses ensemble learning to build high-performing predictive models. Along the way, you'll work with health and credit risk data to predict the incidence of diabetes and customer churn.
Part of the following tracks:
- Machine Learning Fundamentals in R
- Machine Learning Scientist in R
- Supervised Machine Learning in R
Classification Trees
Free

Ready to build a real machine learning pipeline? Complete step-by-step exercises to learn how to create decision trees, split your data, and predict which patients are most likely to suffer from diabetes. Last but not least, you’ll build performance measures to assess your models and judge your predictions.
- Welcome to the course! (50 xp)
- Why tree-based methods? (100 xp)
- Specify that tree (100 xp)
- Train that model (100 xp)
- How to grow your tree (50 xp)
- Train/test split (100 xp)
- Avoiding class imbalances (100 xp)
- From zero to hero (100 xp)
- Predict and evaluate (50 xp)
- Make predictions (100 xp)
- Crack the matrix (100 xp)
- Are you predicting correctly? (100 xp)
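The chapter-1 workflow above can be sketched with tidymodels. This is a minimal, hypothetical example: `diabetes_df` and its factor column `outcome` stand in for the course's actual dataset.

```r
# Sketch of the chapter-1 pipeline; `diabetes_df` and `outcome` are
# hypothetical stand-ins for the course data.
library(tidymodels)

set.seed(42)
# Stratified split helps avoid class imbalances between train and test
split <- initial_split(diabetes_df, prop = 0.8, strata = outcome)
train <- training(split)
test  <- testing(split)

# Specify and train a classification tree
tree_spec <- decision_tree() %>%
  set_engine("rpart") %>%
  set_mode("classification")

tree_fit <- fit(tree_spec, outcome ~ ., data = train)

# Predict on the test set and evaluate
preds <- predict(tree_fit, new_data = test) %>%
  bind_cols(test)

conf_mat(preds, truth = outcome, estimate = .pred_class)   # confusion matrix
accuracy(preds, truth = outcome, estimate = .pred_class)
```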
Regression Trees and Cross-Validation
Ready for some candy? Use a chocolate rating dataset to build regression trees and assess their performance using suitable error measures. You’ll overcome the statistical uncertainty of a single train/test split by applying sweet techniques like cross-validation and then dive even deeper by mastering the bias-variance tradeoff.
- Continuous outcomes (50 xp)
- Train a regression tree (100 xp)
- Predict new values (100 xp)
- Inspect model output (50 xp)
- Performance metrics for regression trees (50 xp)
- In-sample performance (100 xp)
- Out-of-sample performance (100 xp)
- Bigger mistakes, bigger penalty (100 xp)
- Cross-validation (50 xp)
- Create the folds (100 xp)
- Fit the folds (100 xp)
- Evaluate the folds (100 xp)
- Bias-variance tradeoff (50 xp)
- Call things by their names (100 xp)
- Adjust model complexity (100 xp)
- In-sample and out-of-sample performance (100 xp)
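The fold-based workflow above (create, fit, evaluate the folds) can be sketched as follows. `chocolate_train` and the outcome column `rating` are hypothetical names, not the course's actual objects.

```r
# Sketch of regression-tree cross-validation; `chocolate_train` and
# `rating` are hypothetical stand-ins for the course data.
library(tidymodels)

tree_spec <- decision_tree() %>%
  set_engine("rpart") %>%
  set_mode("regression")

# Create the folds
set.seed(42)
folds <- vfold_cv(chocolate_train, v = 5)

# Fit the folds; RMSE squares errors, so bigger mistakes get a bigger penalty
cv_results <- fit_resamples(
  tree_spec,
  rating ~ .,
  resamples = folds,
  metrics = metric_set(rmse, mae)
)

# Evaluate the folds: metrics averaged across all five resamples
collect_metrics(cv_results)
```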
Hyperparameters and Ensemble Models
Time to get serious with tuning your hyperparameters and interpreting receiver operating characteristic (ROC) curves. In this chapter, you’ll leverage the wisdom of the crowd with ensemble methods like bagging and random forests, and build ensembles that forecast which credit card customers are most likely to churn.
- Tuning hyperparameters (50 xp)
- Generate a tuning grid (100 xp)
- Tune along the grid (100 xp)
- Pick the winner (100 xp)
- More model measures (50 xp)
- Calculate specificity (100 xp)
- Draw the ROC curve (100 xp)
- Area under the ROC curve (100 xp)
- Bagged trees (50 xp)
- Create bagged trees (100 xp)
- In-sample ROC and AUC (100 xp)
- Check for overfitting (100 xp)
- Random forest (50 xp)
- Bagged trees vs. random forest (50 xp)
- Variable importance (100 xp)
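The grid-tuning steps above (generate a grid, tune along it, pick the winner) can be sketched for a random forest. `churn_train` and the factor column `churn` are hypothetical stand-ins for the credit-card dataset.

```r
# Sketch of hyperparameter tuning for a random forest; `churn_train`
# and `churn` are hypothetical stand-ins for the course data.
library(tidymodels)

rf_spec <- rand_forest(mtry = tune(), trees = 500) %>%
  set_engine("ranger", importance = "impurity") %>%  # enables variable importance
  set_mode("classification")

# Generate a tuning grid over mtry
grid <- grid_regular(mtry(range = c(2, 8)), levels = 4)

set.seed(42)
folds <- vfold_cv(churn_train, v = 5)

# Tune along the grid, scoring each candidate by area under the ROC curve
tuned <- tune_grid(
  rf_spec,
  churn ~ .,
  resamples = folds,
  grid = grid,
  metrics = metric_set(roc_auc)
)

# Pick the winner and finalize the specification
best <- select_best(tuned, metric = "roc_auc")
final_spec <- finalize_model(rf_spec, best)
```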
Boosted Trees
Ready for the high society of tree-based models? Apply gradient boosting to create powerful ensembles that outperform everything you have built so far. Learn how to fine-tune them and how to compare different models to pick a winner for production.
- Introduction to boosting (50 xp)
- Bagging vs. boosting (50 xp)
- Specify a boosted ensemble (100 xp)
- Gradient boosting (50 xp)
- Train a boosted ensemble (100 xp)
- Evaluate the ensemble (100 xp)
- Compare to a single classifier (100 xp)
- Optimize the boosted ensemble (50 xp)
- Tuning preparation (100 xp)
- The actual tuning (100 xp)
- Finalize the model (100 xp)
- Model comparison (50 xp)
- Compare AUC (100 xp)
- Plot ROC curves (100 xp)
- Wrap-up (50 xp)
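The final steps above (train a boosted ensemble, compare AUC, plot ROC curves) can be sketched like this. `churn_train`, `churn_test`, and a `churn` factor whose first level is the positive "yes" class are all hypothetical assumptions, not the course's actual objects.

```r
# Sketch of a gradient-boosted ensemble and its evaluation;
# `churn_train`, `churn_test`, and `churn` are hypothetical stand-ins,
# and `churn` is assumed to have "yes" as its first factor level.
library(tidymodels)

boost_spec <- boost_tree(trees = 500, learn_rate = 0.1) %>%
  set_engine("xgboost") %>%
  set_mode("classification")

boost_fit <- fit(boost_spec, churn ~ ., data = churn_train)

# Class probabilities are needed for ROC and AUC
preds <- predict(boost_fit, new_data = churn_test, type = "prob") %>%
  bind_cols(churn_test)

roc_auc(preds, truth = churn, .pred_yes)                 # compare AUC
roc_curve(preds, truth = churn, .pred_yes) %>% autoplot()  # plot the ROC curve
```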
Prerequisites
- Modeling with tidymodels in R

Instructor: Sandro Raabe, Data Scientist