Create a new Evaluator.
Usage
create_evaluator(
  .eval_fun,
  .name = NULL,
  .rmd_options = list(),
  .rmd_show = TRUE,
  ...
)
Arguments
- .eval_fun: The evaluation function.
- .name: (Optional) The name of the Evaluator. The argument must be specified by position or typed out in whole; no partial matching is allowed for this argument.
- .rmd_options: (Optional) List of options to control the aesthetics of the displayed Evaluator's results table in the knitted R Markdown report. See vthemes::pretty_DT() for possible options. The argument must be specified by position or typed out in whole; no partial matching is allowed for this argument.
- .rmd_show: If TRUE (default), show the Evaluator's results as a table in the R Markdown report; if FALSE, hide the output in the R Markdown report.
- ...: Arguments to pass into .eval_fun().
Details
When evaluating or running the Experiment (see evaluate_experiment() or run_experiment()), the named arguments fit_results and vary_params are automatically passed into the Evaluator function .eval_fun() and serve as placeholders for the fit_experiment() results (i.e., the results from the method fits) and the name of the varying parameter, respectively. Thus, to evaluate the performance of the method fits, the Evaluator function .eval_fun() should almost always take in the named argument fit_results. See Experiment$fit() or fit_experiment() for details on the format of fit_results; a direct call on a toy fit_results is sketched in the Examples below. If the Evaluator is used for Experiments with varying parameters, vary_params should be used as a stand-in for the name of this varying parameter.
Examples
# create an example Evaluator function
reject_prob_fun <- function(fit_results, vary_params = NULL, alpha = 0.05) {
  group_vars <- c(".dgp_name", ".method_name", vary_params)
  eval_out <- fit_results %>%
    dplyr::group_by(dplyr::across({{ group_vars }})) %>%
    dplyr::summarise(
      `X1 Reject Prob.` = mean(`X1 p-value` < alpha),
      `X2 Reject Prob.` = mean(`X2 p-value` < alpha)
    )
  return(eval_out)
}
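
# sketch: call the Evaluator function directly on a toy fit_results tibble to
# see the protocol from Details in action; the `X1 p-value`/`X2 p-value`
# columns and the "n" varying-parameter column here are illustrative
# assumptions, not the exact fit_experiment() output format (see
# fit_experiment() for that)
toy_fit_results <- tibble::tibble(
  .dgp_name = rep("DGP1", 4),
  .method_name = rep(c("Method1", "Method2"), each = 2),
  n = rep(c(100, 200), times = 2),
  `X1 p-value` = c(0.01, 0.20, 0.03, 0.60),
  `X2 p-value` = c(0.40, 0.02, 0.08, 0.01)
)
# grouped by DGP and method only
reject_prob_fun(fit_results = toy_fit_results)
# additionally grouped by the varying parameter "n" via vary_params
reject_prob_fun(fit_results = toy_fit_results, vary_params = "n")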

# create Evaluator using the default arguments (i.e., alpha = 0.05)
reject_prob_eval <- create_evaluator(.eval_fun = reject_prob_fun,
                                     .name = "Rejection Prob (alpha = 0.05)")

# create Evaluator using non-default arguments (here, alpha = 0.1)
reject_prob_eval2 <- create_evaluator(.eval_fun = reject_prob_fun,
                                      .name = "Rejection Prob (alpha = 0.1)",
                                      # additional named parameters to pass to reject_prob_fun()
                                      alpha = 0.1)

# create Evaluator from a function in the built-in Evaluator library
pred_err_eval <- create_evaluator(.eval_fun = summarize_pred_err,
                                  .name = "Prediction Error",
                                  # additional named parameters to pass to summarize_pred_err()
                                  truth_col = "y", estimate_col = "predictions")

# set rmd options for displaying Evaluator in Rmd report to show 3 decimal points
pred_err_eval <- create_evaluator(.eval_fun = summarize_pred_err,
                                  .name = "Prediction Error",
                                  .rmd_options = list(digits = 3),
                                  # additional named parameters to pass to summarize_pred_err()
                                  truth_col = "y", estimate_col = "predictions")
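
# sketch: set .rmd_show = FALSE to hide the Evaluator's results table in the
# R Markdown report (the .name value here is illustrative)
pred_err_eval_hidden <- create_evaluator(.eval_fun = summarize_pred_err,
                                         .name = "Prediction Error (hidden)",
                                         .rmd_show = FALSE,
                                         # additional named parameters to pass to summarize_pred_err()
                                         truth_col = "y", estimate_col = "predictions")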