Hyperparameter Tuning in R
Learn how to tune your model's hyperparameters to get the best predictive results.
4 hours · 14 videos · 47 exercises · 7,108 learners · Statement of Accomplishment
Course Description
For many machine learning problems, simply running a model out-of-the-box and getting a prediction is not enough; you want the best model with the most accurate prediction. One way to perfect your model is with hyperparameter tuning, which means optimizing the settings for that specific model. In this course, you will work with the caret, mlr and h2o packages to find the optimal combination of hyperparameters in an efficient manner using grid search, random search, adaptive resampling and automatic machine learning (AutoML). Furthermore, you will work with different datasets and tune different supervised learning models, such as random forests, gradient boosting machines, support vector machines, and even neural nets. Get ready to tune!
In the following tracks:
- Machine Learning Scientist in R
- Supervised Machine Learning in R
1. Introduction to hyperparameters (Free)
Why do we use the strange word "hyperparameter"? What makes it hyper? Here, you will understand what model parameters are and why they are different from hyperparameters in machine learning. You will then see why we would want to tune them, and how the default settings of caret automatically include hyperparameter tuning.
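As a taste of what this chapter covers, here is a minimal sketch of caret's default behavior: train() tunes a small default grid of hyperparameters via resampling without you specifying anything. The built-in iris data and the gbm model are used purely for illustration and are not necessarily the course's exact setup.

```r
# Minimal sketch: caret tunes a small default hyperparameter grid
# automatically when you call train(). iris is used only for illustration.
library(caret)

set.seed(42)
gbm_model <- train(
  Species ~ .,
  data = iris,
  method = "gbm",                                      # stochastic gradient boosting
  trControl = trainControl(method = "cv", number = 5), # 5-fold cross-validation
  verbose = FALSE
)

gbm_model$results   # performance for every default hyperparameter combination
gbm_model$bestTune  # the combination caret picked
```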
- Parameters vs hyperparameters (50 xp)
- Model parameters vs. hyperparameters (100 xp)
- Hyperparameters in linear models (50 xp)
- What are the coefficients? (100 xp)
- Recap of machine learning basics (50 xp)
- Machine learning with caret (100 xp)
- Resampling schemes (50 xp)
- Hyperparameter tuning in caret (50 xp)
- Hyperparameters in Stochastic Gradient Boosting (50 xp)
- Changing the number of hyperparameters to tune (100 xp)
- Tune hyperparameters manually (100 xp)
2. Hyperparameter tuning with caret
In this chapter, you will learn how to tune hyperparameters with a Cartesian grid. Then, you will implement faster and more efficient approaches: random search and adaptive resampling, which concentrates the search on values in the neighborhood of the optimal settings.
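For orientation, the sketch below shows the three caret patterns side by side; the hyperparameter values and the gbm/iris setup are illustrative assumptions, not the course's exact grids.

```r
# Minimal sketch of Cartesian grid search, random search, and adaptive
# resampling in caret; all values are illustrative.
library(caret)

# 1) Cartesian grid search: every combination in tuneGrid is evaluated.
gbm_grid <- expand.grid(
  n.trees = c(100, 200),
  interaction.depth = c(1, 3),
  shrinkage = 0.1,
  n.minobsinnode = 10
)
set.seed(42)
grid_model <- train(
  Species ~ ., data = iris, method = "gbm", verbose = FALSE,
  trControl = trainControl(method = "cv", number = 5),
  tuneGrid = gbm_grid
)

# 2) Random search: sample tuneLength random combinations instead.
set.seed(42)
random_model <- train(
  Species ~ ., data = iris, method = "gbm", verbose = FALSE,
  trControl = trainControl(method = "cv", number = 5, search = "random"),
  tuneLength = 5
)

# 3) Adaptive resampling: discard unpromising combinations early and
#    concentrate resampling on the best-performing ones. Pass this
#    control object to train() as trControl.
adaptive_control <- trainControl(
  method = "adaptive_cv", number = 5, repeats = 3,
  adaptive = list(min = 3, alpha = 0.05, method = "gls", complete = TRUE),
  search = "random"
)
```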
- Hyperparameter tuning in caret (50 xp)
- Finding hyperparameters (50 xp)
- Cartesian grid search in caret (100 xp)
- Plot hyperparameter model output (100 xp)
- Grid vs. Random Search (50 xp)
- Grid search with range of hyperparameters (100 xp)
- Find train() option for random search (50 xp)
- Random search with caret (100 xp)
- Adaptive resampling (50 xp)
- Advantages of Adaptive Resampling (50 xp)
- Adaptive Resampling with caret (100 xp)
3. Hyperparameter tuning with mlr
Here, you will use mlr, another machine learning package with very convenient hyperparameter tuning functions. You will define a Cartesian grid or perform random search, and apply more advanced tuning techniques. You will also learn different ways to plot and evaluate models with different hyperparameters.
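The basic mlr tuning workflow looks roughly like the sketch below; the task, learner, search space, and control settings are illustrative choices, not the course's exact setup.

```r
# Minimal sketch of hyperparameter tuning with mlr; values are illustrative.
library(mlr)

task    <- makeClassifTask(data = iris, target = "Species")
learner <- makeLearner("classif.rpart")

# Search space for the hyperparameters to tune.
param_set <- makeParamSet(
  makeIntegerParam("minsplit", lower = 2, upper = 30),
  makeNumericParam("cp", lower = 0.001, upper = 0.1)
)

# Random search with 20 iterations, evaluated by 3-fold cross-validation.
ctrl       <- makeTuneControlRandom(maxit = 20)
resampling <- makeResampleDesc("CV", iters = 3)

set.seed(42)
tune_result <- tuneParams(
  learner, task = task, resampling = resampling,
  par.set = param_set, control = ctrl
)

tune_result$x  # best hyperparameter combination found
tune_result$y  # its cross-validated performance
```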
- Machine learning with mlr (50 xp)
- Machine Learning with mlr (50 xp)
- Modeling with mlr (100 xp)
- Grid and random search with mlr (50 xp)
- Random search with mlr (100 xp)
- Perform hyperparameter tuning with mlr (100 xp)
- Evaluating hyperparameters with mlr (50 xp)
- Why to evaluate tuning? (50 xp)
- Evaluating hyperparameter tuning results (100 xp)
- Advanced tuning with mlr (50 xp)
- Define advanced tuning controls (50 xp)
- Define aggregated measures (100 xp)
- Setting hyperparameters (100 xp)
4. Hyperparameter tuning with h2o
In this final chapter, you will use h2o, another package for machine learning with very convenient hyperparameter tuning functions. You will use it to train different models and define a Cartesian grid. Then, you will implement a random search that uses stopping criteria. Finally, you will learn about AutoML, an h2o interface that allows for very fast and convenient model and hyperparameter tuning with just one function.
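The h2o equivalents look roughly like the sketch below; the columns, grid values, stopping settings, and runtime limit are illustrative assumptions rather than the course's exact configuration.

```r
# Minimal sketch of grid search, random search with stopping criteria,
# and AutoML in h2o; all values are illustrative.
library(h2o)
h2o.init()

iris_hf  <- as.h2o(iris)
splits   <- h2o.splitFrame(iris_hf, ratios = 0.8, seed = 42)
train_hf <- splits[[1]]
valid_hf <- splits[[2]]

# Cartesian grid search over GBM hyperparameters.
gbm_grid <- h2o.grid(
  "gbm", x = 1:4, y = "Species",
  training_frame = train_hf, validation_frame = valid_hf,
  hyper_params = list(ntrees = c(50, 100), max_depth = c(3, 5)),
  search_criteria = list(strategy = "Cartesian")
)

# Random search with stopping criteria: stop early once new models
# no longer improve the chosen metric.
gbm_random <- h2o.grid(
  "gbm", x = 1:4, y = "Species",
  training_frame = train_hf, validation_frame = valid_hf,
  hyper_params = list(ntrees = c(50, 100, 200), max_depth = 3:8),
  search_criteria = list(strategy = "RandomDiscrete", max_models = 10,
                         stopping_metric = "logloss",
                         stopping_rounds = 3, stopping_tolerance = 0.001)
)

# AutoML: train and tune many models with a single call.
aml <- h2o.automl(x = 1:4, y = "Species", training_frame = train_hf,
                  max_runtime_secs = 60, seed = 42)
aml@leaderboard  # models ranked by performance
```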
- Machine learning with h2o (50 xp)
- Prepare data for modelling with h2o (100 xp)
- Modeling with h2o (100 xp)
- Grid and random search with h2o (50 xp)
- Grid search with h2o (100 xp)
- Random search with h2o (100 xp)
- Stopping criteria (100 xp)
- Automatic machine learning with H2O (50 xp)
- AutoML in h2o (100 xp)
- Scoring the leaderboard (50 xp)
- Extract h2o models and evaluate performance (100 xp)
- Wrap-up (50 xp)
Datasets
- Bc test data
- Bc train data
- Breast cancer data
- Breast cancer data orig
- Datasets descriptions
- Knowledge data
- Knowledge orig
- Knowledge test data
- Knowledge train data
- Seeds data
- Seeds dataset
- Seeds test data
- Seeds train data
- Voters data
- Voters orig
- Voters test data
- Voters train data

Collaborators
Prerequisites
Machine Learning with caret in R

Instructor: Shirin Elsinghorst (formerly Glander), Data Scientist @ codecentric