Create an Evaluator which can evaluate() the performance of methods in an Experiment.
Usage
create_evaluator(
  .eval_fun,
  .name = NULL,
  .doc_options = list(),
  .doc_show = TRUE,
  ...
)
Arguments
- .eval_fun
  The user-defined evaluation function.
- .name
  (Optional) The name of the Evaluator, helpful for later identification. The argument must be specified by position or typed out in whole; no partial matching is allowed for this argument.
- .doc_options
  (Optional) List of options to control the aesthetics of the displayed Evaluator's results table in the knitted R Markdown report. See vthemes::pretty_DT() for possible options. The argument must be specified by position or typed out in whole; no partial matching is allowed for this argument.
- .doc_show
  If TRUE (default), show the Evaluator's results as a table in the R Markdown report; if FALSE, hide the output in the R Markdown report.
- ...
  User-defined arguments to pass into .eval_fun().
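For illustration, a user-defined argument supplied through ... is stored with the Evaluator and forwarded to .eval_fun() each time it is run. A minimal sketch, assuming the reject_prob_fun() evaluation function defined in the Examples below (which has an alpha argument defaulting to 0.05):

# sketch: alpha = 0.10 is passed through ... and overrides the default
# alpha = 0.05 of reject_prob_fun() whenever this Evaluator is evaluated
reject_prob_eval10 <- create_evaluator(.eval_fun = reject_prob_fun,
                                       .name = "Rejection Prob (alpha = 0.10)",
                                       alpha = 0.10)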
Value
A new Evaluator object.
Details
When evaluating or running the Experiment (see evaluate_experiment() or run_experiment()), the named arguments fit_results and vary_params are automatically passed into the Evaluator function .eval_fun() and serve as placeholders for the fit_experiment() results (i.e., the results from the method fits) and the name of the varying parameter(s), respectively.
To evaluate the performance of the method fits, the Evaluator function .eval_fun() should therefore almost always take in the named argument fit_results. See Experiment$fit() or fit_experiment() for details on the format of fit_results. If the Evaluator is used for Experiments with varying parameters, vary_params should be used as a stand-in for the name of the varying parameter(s).
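A minimal sketch of an evaluation function following these conventions is shown below; the body is illustrative only, and threshold is a hypothetical user-defined argument that would be supplied via ... in create_evaluator().

# sketch: fit_results and vary_params are filled in automatically when the
# Experiment is evaluated; `threshold` is a hypothetical extra argument
my_eval_fun <- function(fit_results, vary_params = NULL, threshold = 0.5) {
  # summarize the tibble of method fit results here, e.g., grouping by
  # .dgp_name, .method_name, and any varying parameters in vary_params
  fit_results
}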
Examples
# create DGP
dgp_fun <- function(n, beta, rho, sigma) {
  cov_mat <- matrix(c(1, rho, rho, 1), byrow = TRUE, nrow = 2, ncol = 2)
  X <- MASS::mvrnorm(n = n, mu = rep(0, 2), Sigma = cov_mat)
  y <- X %*% beta + rnorm(n, sd = sigma)
  return(list(X = X, y = y))
}
dgp <- create_dgp(.dgp_fun = dgp_fun,
                  .name = "Linear Gaussian DGP",
                  n = 50, beta = c(1, 0), rho = 0, sigma = 1)
# create Method
lm_fun <- function(X, y, cols) {
  X <- X[, cols]
  lm_fit <- lm(y ~ X)
  pvals <- summary(lm_fit)$coefficients[-1, "Pr(>|t|)"] %>%
    setNames(paste(paste0("X", cols), "p-value"))
  return(pvals)
}
lm_method <- create_method(
  .method_fun = lm_fun,
  .name = "OLS",
  cols = c(1, 2)
)
# create Experiment
experiment <- create_experiment() %>%
  add_dgp(dgp) %>%
  add_method(lm_method) %>%
  add_vary_across(.dgp = dgp, rho = seq(0.91, 0.99, 0.02))
fit_results <- fit_experiment(experiment, n_reps = 10)
#> Fitting experiment...
#> 10 reps completed (totals: 10/10) | time taken: 0.389395 minutes
#> ==============================
# create an example Evaluator function
reject_prob_fun <- function(fit_results, vary_params = NULL, alpha = 0.05) {
  fit_results[is.na(fit_results)] <- 1
  group_vars <- c(".dgp_name", ".method_name", vary_params)
  eval_out <- fit_results %>%
    dplyr::group_by(across({{group_vars}})) %>%
    dplyr::summarise(
      n_reps = dplyr::n(),
      `X1 Reject Prob.` = mean(`X1 p-value` < alpha),
      `X2 Reject Prob.` = mean(`X2 p-value` < alpha)
    )
  return(eval_out)
}
reject_prob_eval <- create_evaluator(.eval_fun = reject_prob_fun,
                                     .name = "Rejection Prob (alpha = 0.05)")
reject_prob_eval$evaluate(fit_results, vary_params = "rho")
#> `summarise()` has grouped output by '.dgp_name', '.method_name'. You can
#> override using the `.groups` argument.
#> # A tibble: 5 × 6
#> .dgp_name .method_name rho n_reps `X1 Reject Prob.` `X2 Reject Prob.`
#> <chr> <chr> <dbl> <int> <dbl> <dbl>
#> 1 Linear Gaussian… OLS 0.91 10 0.9 0.1
#> 2 Linear Gaussian… OLS 0.93 10 0.8 0
#> 3 Linear Gaussian… OLS 0.95 10 0.6 0.1
#> 4 Linear Gaussian… OLS 0.97 10 0.6 0.2
#> 5 Linear Gaussian… OLS 0.99 10 0.3 0
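In a complete workflow, the Evaluator is typically added to the Experiment rather than evaluated directly. A minimal sketch, assuming an add_evaluator() helper analogous to the add_dgp() and add_method() calls above and the evaluate_experiment() function referenced in Details:

# sketch (not run): attach the Evaluator to the Experiment and evaluate the
# previously computed fit_results with it
experiment <- experiment %>%
  add_evaluator(reject_prob_eval)
eval_results <- evaluate_experiment(experiment, fit_results)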