Plot testing error evaluation results according to various metrics.
Source: R/visualizer-lib-inference.R
plot_testing_err.Rd
Plot the raw or summarized testing errors as a boxplot, scatter plot, line plot, or bar plot, with or without 1 SD error bars.
Usage
plot_testing_err(
  fit_results,
  eval_results = NULL,
  evaluator_name = NULL,
  vary_params = NULL,
  metrics = NULL,
  show = c("point", "line", "errorbar"),
  ...
)
Arguments
- fit_results: A tibble, as returned by the fit method.
- eval_results: A list of result tibbles, as returned by the evaluate method.
- evaluator_name: Name of the Evaluator containing the results to plot. Set to NULL to compute the evaluation summary results from scratch or if the evaluation summary results have not yet been evaluated.
- vary_params: A vector of parameter names that are varied across in the Experiment.
- metrics: A metric_set object indicating the metrics to plot. See yardstick::metric_set() for more details. The default NULL uses the default metrics in yardstick::metrics().
- show: Character vector with elements being one of "boxplot", "point", "line", "bar", "errorbar", "ribbon", indicating which plot layer(s) to construct.
- ...: Additional arguments to pass to plot_eval_summary(). This includes arguments for plotting and for passing into summarize_testing_err().
Value
If interactive = TRUE, returns a plotly object if plot_by is NULL and a list of plotly objects if plot_by is not NULL. If interactive = FALSE, returns a ggplot object if plot_by is NULL and a list of ggplot objects if plot_by is not NULL.
See also
Other inference_funs: eval_reject_prob(), eval_testing_curve_funs, eval_testing_err_funs, plot_reject_prob(), plot_testing_curve()
Examples
# generate example fit_results data
fit_results <- tibble::tibble(
  .rep = rep(1:2, times = 2),
  .dgp_name = c("DGP1", "DGP1", "DGP2", "DGP2"),
  .method_name = c("Method"),
  feature_info = lapply(
    1:4,
    FUN = function(i) {
      tibble::tibble(
        # feature names
        feature = c("featureA", "featureB", "featureC"),
        # true feature support
        true_support = c(TRUE, FALSE, TRUE),
        # estimated p-values
        pval = 10^(sample(-3:0, 3, replace = TRUE))
      )
    }
  )
)
# generate example eval_results data
eval_results <- list(
  `Testing Errors` = summarize_testing_err(
    fit_results,
    nested_data = "feature_info",
    truth_col = "true_support",
    pval_col = "pval"
  )
)
# create bar plot using pre-computed evaluation results
plt <- plot_testing_err(fit_results = fit_results,
                        eval_results = eval_results,
                        evaluator_name = "Testing Errors",
                        show = c("bar", "errorbar"))

# or alternatively, create the same plot without pre-computing evaluation results
plt <- plot_testing_err(fit_results,
                        show = c("bar", "errorbar"),
                        nested_data = "feature_info",
                        truth_col = "true_support",
                        pval_col = "pval")

# can customize plot (see plot_eval_summary() for possible arguments)
plt <- plot_testing_err(fit_results = fit_results,
                        eval_results = eval_results,
                        evaluator_name = "Testing Errors",
                        show = c("bar", "errorbar"),
                        plot_by = ".alpha")
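As a further sketch, the metrics argument can restrict which error metrics are plotted. The following is illustrative only (it assumes the yardstick package is installed and reuses the fit_results and eval_results objects built above); yardstick::sens and yardstick::spec are standard yardstick metric functions, and metric_set() bundles them for plot_testing_err().

```r
# restrict plotted metrics to sensitivity and specificity (illustrative sketch,
# assuming yardstick is available)
err_metrics <- yardstick::metric_set(yardstick::sens, yardstick::spec)
plt <- plot_testing_err(fit_results = fit_results,
                        eval_results = eval_results,
                        evaluator_name = "Testing Errors",
                        metrics = err_metrics,
                        show = c("bar", "errorbar"))
```

Passing a custom metric_set this way mirrors the default behavior (metrics = NULL uses yardstick::metrics()) but gives explicit control over which columns appear in the plot.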