evaluate_models.Rd
`evaluate_models` is a wrapper function for evaluating predictions from multiple fitted models that were trained using the caret, tidymodels, or h2o backends. It standardizes the input arguments so that you can switch between the different modeling packages without changing your evaluation code.
evaluate_models(pred_df, ytest, metrics = NULL, na_rm = TRUE)
`pred_df`: Data frame of predictions to evaluate against `ytest`. Typically the output of `predict_models()`.
`ytest`: Vector of test responses against which the predictions are evaluated.
`metrics`: A `metric_set` object indicating the metrics to evaluate. See `yardstick::metric_set()` for details; a short sketch follows the argument list below. The default `NULL` uses a default set of metrics that depends on the type of problem (e.g., classification vs. regression).
`na_rm`: Logical indicating whether `NA` values should be stripped before the computation proceeds.
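For example, a custom metric set can be built with yardstick and passed via `metrics`. This is a minimal sketch, assuming the yardstick package is installed; only `metric_set()` and the individual metric functions below come from yardstick, and the choice of metrics is illustrative.

    library(yardstick)

    # Classification metrics: accuracy and Cohen's kappa use hard class
    # predictions, while roc_auc needs predicted class probabilities.
    cls_metrics <- metric_set(accuracy, kap, roc_auc)

    # Regression metrics: RMSE, MAE, and R-squared.
    reg_metrics <- metric_set(rmse, mae, rsq)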
A list with the following elements:
A tibble containing the metric name and its evaluated value for each method in `pred_df`.
In a classification problem, this is a tibble with the confusion matrices for each method (see output of `yardstick::conf_mat()`). This element is omitted for regression problems.
In a classification problem with the predicted probabilities provided in `pred_df`, this is a ggplot object with the ROC evaluation plot. This element is omitted for regression problems or if the predicted probabilities are not provided.
In a classification problem with the predicted probabilities provided in `pred_df`, this is a ggplot object with the PR evaluation plot. This element is omitted for regression problems or if the predicted probabilities are not provided.
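A minimal end-to-end sketch, assuming `preds` is the data frame returned by `predict_models()` and `y_test` is the held-out response for a binary classification problem; the object names `preds`, `y_test`, and `res` are illustrative only.

    library(yardstick)

    # Evaluate every method in `preds` against the held-out response,
    # here using accuracy and AUROC.
    res <- evaluate_models(
      pred_df = preds,
      ytest   = y_test,
      metrics = metric_set(accuracy, roc_auc),
      na_rm   = TRUE
    )

    # `res` is a list; in a classification problem it contains the metrics
    # tibble, the confusion matrices, and (when predicted probabilities are
    # supplied in `pred_df`) the ROC and PR plots described above.
    str(res, max.level = 1)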