1. This is illustrated in Figure 7.5 where 27 features have \(>0\) importance values while the rest of the features have an importance value of zero since they were not included in the final model. Maybe you are not a big fan of losing time redoing the same task again and again? If provided, all Stacked Ensembles produced by AutoML will be trained using Blending (a.k.a. Holdout Stacking) instead of the default Stacking method based on cross-validation. Defaults to FALSE. Although H2O has made it easy for non-experts to experiment with machine learning, there is still a fair bit of knowledge and background in data science that is required to produce high-performing machine learning models. As explained above, both data and label are stored in a list. Examining performance characteristics (e.g., prediction speed) gives a sense of what might be practically useful in your specific use case, and then you can turn off algorithms that are not interesting or useful to you. Generalized Cross-Validation as a Method for Choosing a Good Ridge Parameter. Technometrics 21 (2). This argument only needs to be specified if the user wants to exclude columns from the set of predictors. If with your own dataset you do not have such results, you should think about how you divided your dataset into training and test sets. Data leakage is a big problem in machine learning when developing predictive models. Alternatively, variables such as JobSatisfaction, OverTime, and EnvironmentSatisfaction reduced this observation's probability of attriting. Irrelevant or partially relevant features can negatively impact model performance. It's important to realize that variable importance will only measure the impact of the prediction error as features are included; however, it does not measure the impact for particular hinge functions created for a given feature. As seen below, the data are stored in a dgCMatrix, which is a sparse matrix, and the label is a numeric vector ({0,1}): This step is the most critical part of the process for the quality of our model. # Create training (70%) and test (30%) sets for the rsample::attrition data. eval.metric allows us to monitor two new metrics for each round, logloss and error. When running AutoML with XGBoost (it is included by default), be sure you allow H2O no more than 2/3 of the total available RAM. It provides a parallel tree boosting algorithm that can solve many machine learning tasks. In the real world, it would be up to you to make this division between train and test data. The Python package consists of 3 different interfaces: the native interface, the scikit-learn interface, and the dask interface. The individual PDPs illustrate that our model found that one knot in each feature provides the best fit. Feature importance is similar to the R gbm package's relative influence (rel.inf). H2OAutoML can interact with the h2o.sklearn module. More models can be trained and added to an existing AutoML project by specifying the same project name in multiple calls to the AutoML function (as long as the same training frame is used in subsequent runs). XGBoost has several features to help you view the learning progress internally. R has emerged over the last couple of decades as a first-class tool for scientific computing tasks, and has been a consistent leader in implementing statistical methodologies for analyzing data. Be it a decision tree or xgboost, caret helps to find the optimal model in the shortest possible time. Consequently, once the full set of knots has been identified, we can sequentially remove knots that do not contribute significantly to predictive accuracy. 
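To make the attrition example above concrete, here is a minimal sketch (assumed code, not the original) of creating the 70/30 split and fitting a MARS model whose unused features receive zero importance. It assumes the attrition data is exposed by rsample (newer releases ship it in the modeldata package) and that the earth package is installed.

```r
# A minimal sketch: 70/30 stratified split of the attrition data and a MARS
# fit; predictors the model never uses simply get zero importance.
library(rsample)
library(earth)

set.seed(123)
churn_split <- initial_split(attrition, prop = 0.7, strata = "Attrition")
churn_train <- training(churn_split)
churn_test  <- testing(churn_split)

# MARS with a logistic link for the binary Attrition response
mars_fit <- earth(Attrition ~ ., data = churn_train,
                  glm = list(family = binomial))

# Variable importance; unused predictors do not appear in the output
evimp(mars_fit)
```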
## 19 Overall_CondAbove_Average * h(2787-Gr_Liv_Area) 5.80. In a sparse matrix, cells containing 0 are not stored in memory. Refer to the Extremely Randomized Trees section in the DRF chapter and the histogram_type parameter description for more information. The feature importance is given as below. A benefit of using ensembles of decision tree methods like gradient boosting is that they can automatically provide estimates of feature importance from a trained predictive model. Negative weights are not allowed. AutoML will always produce a model which has a MOJO. Uses Alan Miller's Fortran utilities with Thomas Lumley's leaps wrapper. We will train a decision tree model using the following parameters: objective = "binary:logistic": we will train a binary classification model; max.depth = 2: the trees won't be deep, because our case is very simple; nthread = 2: the number of CPU threads we are going to use; nrounds = 2: there will be two passes on the data, the second one will enhance the model by further reducing the difference between ground truth and prediction. In Chapter 5 we saw a slight improvement in our cross-validated accuracy rate using regularized regression. Introduction to Boosted Trees. The cross-validated RMSE for these models is displayed in Figure 7.4; the optimal model's cross-validated RMSE was $26,817. The default is 0 (no limit), but dynamically sets to 1 hour if neither max_runtime_secs nor max_models is specified by the user. In recent years, the demand for machine learning experts has outpaced the supply, despite the surge of people entering the field. Mushroom data is cited from the UCI Machine Learning Repository. The H2O AutoML algorithm was first released in H2O 3.12.0.1 on June 6, 2017. This chapter discusses multivariate adaptive regression splines (MARS) (Friedman 1991), an algorithm that automatically creates a piecewise linear model which provides an intuitive stepping block into nonlinearity after grasping the concept of multiple linear regression. In this post, I will show you how to get feature importance from an XGBoost model in Python. If all columns (other than the response) should be used in prediction, then this does not need to be set. So what is the feature importance of the IP address feature? Figure 7.7: Cross-validated accuracy rate for the 30 different hyperparameter combinations in our grid search. In R there are pre-built functions to plot feature importance of a Random Forest model. The main difference is that above it was after building the model, and now it is during the construction that we measure errors. There are many types and sources of feature importance scores, although popular examples include statistical correlation scores, coefficients calculated as part of linear models, decision trees, and permutation importance scores. XGBoost is short for the eXtreme Gradient Boosting package. The h2o.sklearn module exposes 2 wrappers for H2OAutoML (H2OAutoMLClassifier and H2OAutoMLRegressor), which expose the standard API familiar to sklearn users: fit, predict, fit_predict, score, get_params, and set_params. exploitation_ratio: Specify the budget ratio (between 0 and 1) dedicated to the exploitation (vs exploration) phase. Also, if we look at the interaction terms our model retained, we see interactions between different hinge functions. 
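The training parameters listed above can be combined into a single call. The following is a hedged sketch using the agaricus (mushroom) data bundled with the xgboost R package; the error computation mirrors the simple average-error metric discussed elsewhere in this section.

```r
# A hedged sketch of the call implied by the parameter list above, using the
# agaricus (mushroom) data shipped with the xgboost R package.
library(xgboost)

data(agaricus.train, package = "xgboost")
data(agaricus.test,  package = "xgboost")

bst <- xgboost(data = agaricus.train$data, label = agaricus.train$label,
               objective = "binary:logistic",
               max.depth = 2, nthread = 2, nrounds = 2)

# Average classification error on the held-out test set
pred <- predict(bst, agaricus.test$data)
err  <- mean(as.numeric(pred > 0.5) != agaricus.test$label)
print(paste("test-error =", err))
```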
It provides parallel boosting trees algorithm that can solve Machine Learning tasks. Allowed options include: training_time_ms: A column providing the training time of each model in milliseconds. However, when each model picks up unique signals in variables that the other models do not capture (i.e. Data leakage is when information from outside the training dataset is used to create the model. Explanations can be generated automatically with a single function call, providing a simple interface to exploring and explaining the AutoML models. An interesting test to see how identical our saved model is to the original one would be to compare the two predictions. As a first step you could leave all the algorithms on, and examine their performance characteristics (e.g. XGBoost 1.5 . Basic training . Provides convenient approaches to compare results across multiple models. For linear model, only weight is defined and its the normalized coefficients without bias. Using the previous code example, you can generate test set predictions as follows: The AutoML object includes a leaderboard of models that were trained in the process, including the 5-fold cross-validated model performance (by default). Therefore, in a dataset mainly made of 0, memory size is reduced.It is very common to have such a dataset. Alternatively, you can put your dataset in a dense matrix, i.e. To measure the model performance, we will compute a simple metric, the average error. Copyright 2022, xgboost developers. Holdout Stacking) instead of the default Stacking method based on cross-validation. Following are explanations of the columns: year: 2016 for all data points month: number for month of the year day: number for day of the year week: day of the week as a character string temp_2: max temperature 2 days prior temp_1: max temperature Generally speaking, it is unusual to use \(d\) greater than 3 or 4 as the larger \(d\) becomes, the easier the function fit becomes overly flexible and oddly shapedespecially near the boundaries of the range of \(X\) values. Heres an example showing basic usage of the h2o.automl() function in R and the H2OAutoML class in Python. AutoML objects are fully supported though the H2O Model Explainability interface. log transformation). So what is the feature importance of the IP address feature. If the oversampled size of the dataset exceeds the maximum size calculated using the max_after_balance_size parameter, then the majority classes will be undersampled to satisfy the size limit. 2016. As explained above, both data and label are stored in a list.. We are using the train data. As in previous chapters, well perform a CV grid search to identify the optimal hyperparameter mix. This value defaults to -1. stopping_tolerance: This option specifies the relative tolerance for the metric-based stopping criterion to stop a grid search and the training of individual models within the AutoML run. You can even add other meta data in it. Each model has a similar prediction that the new observation has a low probability of predicting: However, how each model comes to that conclusion in a slightly different way. For information about how previous versions of AutoML were different than the current one, theres a brief description here. Again 0? H2O offers a number of model explainability methods that apply to AutoML objects (groups of models), as well as individual models (e.g. \end{cases} While all models are importable, only individual models are exportable. 
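For the AutoML workflow described above, a minimal R sketch might look as follows. The frames `train` and `test`, the response name "y", and the `max_models` budget are placeholder assumptions, not values from the original text.

```r
# A minimal sketch of h2o.automl() usage in R with placeholder data objects.
library(h2o)
h2o.init()

aml <- h2o.automl(x = setdiff(names(train), "y"), y = "y",
                  training_frame = train,
                  max_models = 20,
                  seed = 1)

# Leaderboard of all trained models, ranked by the default metric
print(h2o.get_leaderboard(aml, extra_columns = "ALL"))

# Test set predictions from the leader model
pred <- h2o.predict(aml@leader, test)
```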
One of the simplest way to see the training progress is to set the verbose option (see below for more advanced techniques). If these models also have a non-default value set for a hyperparameter, we identify it in the list as well. Alternatively, you can put your dataset in a dense matrix, i.e. The models are ranked by a default metric based on the problem type (the second column of the leaderboard). In this post, I will show you how to get feature importance from Xgboost model in Python. The usefulness of R for data science stems from the large, active, and growing ecosystem of third-party packages: tidyverse for common data analysis activities; h2o, ranger, xgboost, and others for fast and scalable machine learning; iml, pdp, vip, and others for machine learning interpretability; and many more tools will be mentioned throughout the pages that follow. The purpose of this Vignette is to show you how to use XGBoost to build a model and make predictions. This metric is 0.02 and is pretty low: our yummly mushroom model works well! The following applies a basic MARS model to our ames example. Speed: it can automatically do parallel computation on Windows and Linux, with OpenMP. However, often, we also need to perform local interpretation which allows us to understand why a particular prediction was made for an observation. Who should read this 16.3 Permutation-based feature importance. 1979. Vol. For the purpose of this tutorial we will load XGBoost package. eval.metric allows us to monitor two new metrics for each round, logloss and error. Introduction to Boosted Trees . One of the special features of xgb.train is the capacity to follow the progress of the learning after each round. 2016. As explained above, both data and label are stored in a list.. You can check if XGBoost is available by using the h2o.xgboost.available() in R or h2o.estimators.xgboost.H2OXGBoostEstimator.available() in Python. This procedure continues until many knots are found, producing a (potentially) highly non-linear prediction equation. The more complex the relationship between your features and your label is, the more passes you need. GBMxgboostsklearnfeature_importanceget_fscore() 2) Can I use the feature importance returned by XGBoost classifer to perform Recursive Feature elimination and evaluation of kNN classifer manually with a for loop. The DALEX architecture can be split into three primary operations:. XGBoost stands for Extreme Gradient Boosting, where the term Gradient Boosting originates from the paper Greedy Function Approximation: A Gradient Boosting Machine, by Friedman.. Deep Learning. Copyright 2022, xgboost developers. Since these variables do not provide consistent signals across all models we should use domain experts or other sources to help validate whether or not these predictors are trustworthy. Residual diagnostics: allows you to compare residual distributions. (Note that this doesnt include the training of cross validation models.). \begin{cases} Figure 16.3 presents single-permutation results for the random forest, logistic regression (see Section 4.2.1), and gradient boosting (see Section 4.2.3) models.The best result, in terms of the smallest value of \(L^0\), is obtained for the generalized One of the special features of xgb.train is the capacity to follow the progress of the learning after each round. if you provide a path to fname parameter you can save the trees to your hard drive. 
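Progress can be monitored on both the training and test sets by passing a watchlist and extra evaluation metrics to xgb.train, as described above. This sketch continues the agaricus example; the xgb.DMatrix objects are built here rather than taken from earlier code.

```r
# Monitor learning with a watchlist and two evaluation metrics (error, logloss).
dtrain <- xgb.DMatrix(data = agaricus.train$data, label = agaricus.train$label)
dtest  <- xgb.DMatrix(data = agaricus.test$data,  label = agaricus.test$label)

watchlist <- list(train = dtrain, test = dtest)

bst <- xgb.train(data = dtrain, max.depth = 2, nthread = 2, nrounds = 2,
                 watchlist = watchlist,
                 eval.metric = "error", eval.metric = "logloss",
                 objective = "binary:logistic", verbose = 1)
```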
Considering many data sets today can Although LIME and SHAP (1, 2) values have recently become popular for local ML interpretation, DALEX uses a process called break down to compute localized variable importance scores. The information is in the tidy data format with each row forming one observation, with the variable values in the columns.. no multinomial support). The plot method for MARS model objects provides useful performance and residual plots. lm, randomForest), it does not have native many of the preferred ML packages produced more recently (i.e. There are two important tuning parameters associated with our MARS model: the maximum degree of interactions and the number of terms retained in the final model. This document gives a basic walkthrough of the xgboost package for Python. This step is the most critical part of the process for the quality of our model. More details about the hyperparameter ranges for the models in addition to the hard-coded models will be added to the appendix at a later date. Mushroom data is cited from UCI Machine Learning Repository. This dataset is very small to not make the R package too heavy, however XGBoost is built to manage huge datasets very efficiently. For example, as homes exceed 2,787 square feet, each additional square foot demands a higher marginal increase in sale price than homes with less than 2,787 square feet. GBMGBM XGBoost(max_depth) Intro to AutoML + Hands-on Lab (1 hour video) (slides), Scalable Automatic Machine Learning in H2O (1 hour video) (slides). Figure 7.3: Model summary capturing GCV \(R^2\) (left-hand y-axis and solid black line) based on the number of terms retained (x-axis) which is based on the number of predictors used to make those terms (right-hand side y-axis). For the following advanced features, we need to put data in xgb.DMatrix as explained above. \end{equation}\], An alternative to polynomials is to use step functions. Defaults to NULL/None, which means a project name will be auto-generated based on the training frame ID. max_runtime_secs_per_model: Specify the max amount of time dedicated to the training of each individual model in the AutoML run. Multiclass classification works in a similar way. Although these models have distinct AUC scores, our objective is to understand how these models come to this conclusion in similar or different ways based on underlying logic and data structure. The length of the remaining variables represent the variable importance. 2001. We can use DALEX::model_performance to compute the predictions and residuals. This process is known as pruning and we can use cross-validation, as we have with the previous models, to find the optimal number of knots. As explained before, we will use the test dataset for this step. Until now, all the learnings we have performed were based on boosting trees. The gradient boosted trees has been around for a while, and there are a lot of materials on the topic. As a next step, we could perform a grid search that focuses in on a refined grid space for nprune (e.g., comparing 4565 terms retained). This step is the most critical part of the process for the quality of our model. For introduction to dask interface please see Distributed XGBoost with Dask. Basic Training using XGBoost . The larger the line segment, the larger the loss when that variable is randomized. y: This argument is the name (or index) of the response column. The model that provides the optimal combination includes second degree interaction effects and retains 56 terms. 
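A hedged sketch of the cross-validated grid search over the two MARS tuning parameters (interaction degree and number of retained terms) is shown below. The `ames_train` object and the grid bounds are illustrative assumptions, not the chapter's exact values.

```r
# CV grid search over degree and nprune for a MARS model via caret.
library(caret)

hyper_grid <- expand.grid(
  degree = 1:3,
  nprune = floor(seq(2, 100, length.out = 10))
)

set.seed(123)
cv_mars <- train(
  x = subset(ames_train, select = -Sale_Price),
  y = ames_train$Sale_Price,
  method = "earth",
  metric = "RMSE",
  trControl = trainControl(method = "cv", number = 10),
  tuneGrid = hyper_grid
)

cv_mars$bestTune  # optimal degree / nprune combination
```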
It is generally over 10 times faster than the classical gbm. XGBoost 1.5 . It is an efficient and scalable implementation of gradient boosting framework by @friedman2000additive and @friedman2001greedy. An object to be used as a cross-validation generator. For the purpose of this example, we use watchlist parameter. To examine the trained models more closely, you can interact with the models, either by model ID, or a convenience function which can grab the best model of each model type (ranked by the default metric, or a metric of your choosing). Basic training . We need to perform a simple transformation before being able to use these results. In this post you will discover how you can estimate the importance of features for a predictive modeling problem using the XGBoost library in Python. We invite you to learn more at page linked above. TIP: You can incorporate custom loss functions using the loss_function argument. preprocessing: The list of preprocessing steps to run. Particular algorithms (or groups of algorithms) can be switched off using the exclude_algos argument. Note that this requires balance_classes set to True. \tag{7.2} What results is known as a hinge function \(h\left(x-a\right)\), where \(a\) is the cutpoint value. \end{equation}\]. The package is made to be extendible, so that users are also allowed to define their own objective functions easily. The data features that you use to train your machine learning models have a huge influence on the performance you can achieve. Most of the features below have been implemented to help you to improve your model by offering a better understanding of its content. However, one disadvantage to MARS models is that theyre typically slower to train. See include_algos below for the list of available options. OReilly Media, Inc. #> Session info , #> version R version 3.6.2 (2019-12-12), #> Packages , #> ! As advanced machine learning algorithms are gaining acceptance across many organizations and domains, machine learning interpretability is growing in importance to help extract insight and clarity regarding how these algorithms are performing and why one prediction is made over another. Rather, these algorithms will search for, and discover, nonlinearities and interactions in the data that help maximize predictive accuracy. We can extend linear models to capture any non-linear relationship. SageMaker XGBoost allows customers to differentiate the importance of labelled data points by assigning each instance a weight value. ## .. ..@ i : int [1:143286] 2 6 8 11 18 20 21 24 28 32 ## .. ..@ p : int [1:127] 0 369 372 3306 5845 6489 6513 8380 8384 10991 ## .. .. ..$ : chr [1:126] "cap-shape=bell" "cap-shape=conical" "cap-shape=convex" "cap-shape=flat" ## .. 
..@ x : num [1:143286] 1 1 1 1 1 1 1 1 1 1 ## $ label: num [1:6513] 1 0 0 1 0 0 0 1 0 0 # verbose = 2, also print information about tree, ## [11:41:01] amalgamation/../src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 0 pruned nodes, max_depth=2, ## [11:41:01] amalgamation/../src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 4 extra nodes, 0 pruned nodes, max_depth=2, # limit display of predictions to the first 10, ## [1] 0.28583017 0.92392391 0.28583017 0.28583017 0.05169873 0.92392391, ## [0] train-error:0.046522 test-error:0.042831, ## [1] train-error:0.022263 test-error:0.021726, ## [0] train-error:0.046522 train-logloss:0.233376 test-error:0.042831 test-logloss:0.226686, ## [1] train-error:0.022263 train-logloss:0.136658 test-error:0.021726 test-logloss:0.137874, ## [0] train-error:0.024720 train-logloss:0.184616 test-error:0.022967 test-logloss:0.184234, ## [1] train-error:0.004146 train-logloss:0.069885 test-error:0.003724 test-logloss:0.068081, ## [11:41:01] 6513x126 matrix with 143286 entries loaded from dtrain.buffer, ## [2] "0:[f28<-1.00136e-05] yes=1,no=2,missing=1,gain=4000.53,cover=1628.25", ## [3] "1:[f55<-1.00136e-05] yes=3,no=4,missing=3,gain=1158.21,cover=924.5", ## [6] "2:[f108<-1.00136e-05] yes=5,no=6,missing=5,gain=198.174,cover=703.75", ## [10] "0:[f59<-1.00136e-05] yes=1,no=2,missing=1,gain=832.545,cover=788.852", ## [11] "1:[f28<-1.00136e-05] yes=3,no=4,missing=3,gain=569.725,cover=768.39". Both variable importance measures will usually give you very similar results. In simple cases, this will happen because there is nothing better than a linear algorithm to catch a linear link. AutoML performs a hyperparameter search over a variety of H2O algorithms in order to deliver the best model. 
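Tying together the importance output and the saved-model comparison mentioned earlier, a minimal sketch (continuing the agaricus example, with `bst` and `pred` from the training code above) might be:

```r
# Feature importance plus a save/reload round trip whose predictions should be
# identical to the original model's.
importance_matrix <- xgb.importance(model = bst)
print(importance_matrix)
xgb.plot.importance(importance_matrix)

# Dump the trees to a text file and persist the model itself
xgb.dump(bst, fname = "dump.raw.txt")
xgb.save(bst, "xgboost.model")

bst2  <- xgb.load("xgboost.model")
pred2 <- predict(bst2, agaricus.test$data)
print(sum(abs(pred2 - pred)))  # expected to be 0
```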
To demonstrate DALEXs capabilities well use the employee attrition data that has been included in the rsample package. 1. test: will be used to assess the quality of our model. DistanceFromHome, NumCompaniesWorked), its important to be careful how we communicate these signals to stakeholders. The purpose of this Vignette is to show you how to use XGBoost to build a model and make predictions. Hereafter we will extract label data. The metalearner used in all ensembles is a variant of the default Stacked Ensemble metalearner: a non-negative GLM with regularization (Lasso or Elastic net, chosen by CV) to encourage more sparse ensembles. Therefore, in a dataset mainly made of 0, memory size is reduced. The printed output just provides a data frame with the output and plotting the three variable importance objects allows us to compare the most influential variables for each model. There are several advantages to MARS. Looking forward to applying it into my models. However, comparing our MARS model to the previous linear models (logistic regression and regularized regression), we do not see any improvement in our overall accuracy rate. Plots similar to those presented in Figures 16.1 and 16.2 are useful for comparisons of a variables importance in different models. XGBoost, which is included in H2O as a third party library, requires its own memory outside the H2O (Java) cluster. ## H2O cluster uptime: 4 hours 30 minutes, ## H2O cluster timezone: America/New_York, ## H2O cluster version: 3.18.0.11, ## H2O cluster version age: 1 month and 17 days, ## H2O cluster name: H2O_started_from_R_bradboehmke_gny210, ## H2O cluster total memory: 1.01 GB, ## H2O Connection ip: localhost, ## H2O API Extensions: XGBoost, Algos, AutoML, Core V3, Core V4, ## R Version: R version 3.5.0 (2018-04-23), # create train, validation, and test splits, # convert feature data to non-h2o objects, # make response variable numeric binary vector, ## [1] 0.18181818 0.27272727 0.06060606 0.54545455 0.03030303 0.42424242, ## Length Class Mode, ## model 1 H2OBinomialModel S4, ## data 30 data.frame list, ## y 233 -none- numeric, ## predict_function 1 -none- function, ## link 1 -none- function, ## class 1 -none- character, ## label 1 -none- character, ## 0% 10% 20% 30% 40% 50%, ## -0.99155845 -0.70432615 0.01281214 0.03402030 0.06143281 0.08362550, ## 60% 70% 80% 90% 100%, ## 0.10051641 0.12637877 0.17583980 0.22675709 0.47507569, ## -0.96969697 -0.66666667 0.00000000 0.03030303 0.06060606 0.09090909, ## 0.12121212 0.15151515 0.18181818 0.27272727 0.66666667, ## -0.96307337 -0.75623698 0.03258538 0.04195091 0.05344621 0.06382511, ## 0.07845749 0.09643740 0.11312648 0.18169305 0.66208105, # create comparison plot of residuals for each model, # compute permutation-based variable importance, # compute PDP for a given variable --> uses the pdp package, ## [1] "prediction_breakdown_explainer" "data.frame", # check out the top 10 influential variables for this observation, ## variable contribution, ## 1 (Intercept) 0.0000000000, ## JobRole + JobRole = Laboratory_Technician 0.0377083508, ## StockOptionLevel + StockOptionLevel = 0 0.0243714089, ## MaritalStatus + MaritalStatus = Single 0.0242334088, ## JobLevel + JobLevel = 1 0.0318770608, ## Age + Age = 32 0.0261924164, ## BusinessTravel + BusinessTravel = Travel_Frequently 0.0210465713, ## RelationshipSatisfaction + RelationshipSatisfaction = High 0.0108111555, ## Education + Education = College 0.0016911550, ## PercentSalaryHike + PercentSalaryHike = 13 0.0001157596, ## variable_name 
variable_value, ## 1 Intercept 1, ## JobRole JobRole Laboratory_Technician, ## StockOptionLevel StockOptionLevel 0, ## MaritalStatus MaritalStatus Single, ## JobLevel JobLevel 1, ## Age Age 32, ## BusinessTravel BusinessTravel Travel_Frequently, ## RelationshipSatisfaction RelationshipSatisfaction High, ## Education Education College, ## PercentSalaryHike PercentSalaryHike 13, # filter for top 10 influential variables for each model and plot. Session info: R version 3.5.0 (2018-04-23), with package versions from CRAN. UC Business Analytics R Programming Guide.
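The DALEX workflow that produced the break-down output above can be sketched as follows. This is written against the older DALEX 0.2.x API used here (newer releases replace prediction_breakdown() with predict_parts()); `fit`, `x_valid`, `y_valid`, and the predict function are placeholders that must match your own model.

```r
# A sketch of the DALEX explainer workflow: residual diagnostics, permutation
# importance, and a local break-down explanation for one observation.
library(DALEX)

explainer <- explain(
  model = fit,
  data  = x_valid,
  y     = y_valid,
  predict_function = function(m, newdata) predict(m, newdata, type = "response"),
  label = "attrition model"
)

# Residual diagnostics across the validation set
model_performance(explainer)

# Permutation-based variable importance
vi <- variable_importance(explainer)
plot(vi)

# Local break-down explanation for a single observation
pb <- prediction_breakdown(explainer, observation = x_valid[1, ])
plot(pb)
```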