In this vignette, we show how to implement a custom tuner for
mlr3tuning. The main task of a tuner is to iteratively
propose new hyperparameter configurations that we want to evaluate for a
given task, learner and validation strategy. The second task is to
decide which configuration should be returned as a tuning result -
usually it is the configuration that led to the best observed
performance value. If you want to implement your own tuner, you have to implement an R6 class that offers an `.optimize()` method implementing the iterative proposal, and you are free to overwrite `.assign_result()` to deviate from the aforementioned default process of determining the result.

Before you start with the implementation, make yourself familiar with the main R6 objects in bbotk (Black-Box Optimization Toolkit). This package not only provides basic black-box optimization algorithms but also the objects that represent the optimization problem (`OptimInstance`) and the log of all evaluated configurations (`Archive`). There are two ways to implement a new tuner: a) If your new tuner can be applied to any kind of optimization problem, it should be implemented as an `Optimizer`. Any `Optimizer` can be easily transformed into a `Tuner`. b) If the new custom tuner is only usable for hyperparameter tuning, for example because it needs to access the task, learner or resampling objects, it should be implemented directly in mlr3tuning as a `Tuner`.

This is a summary of steps for adding a new tuner. The fifth step is only required if the new tuner is added via bbotk.

1. Check that the tuner does not already exist as an `Optimizer` or `Tuner` in the GitHub repositories.
2. Use one of the existing optimizers / tuners as a template.
3. Overwrite the `$.optimize()` private method of the optimizer / tuner.
4. Optionally, overwrite the default `$.assign_result()` private method.
5. Use the `mlr3tuning::TunerFromOptimizer` class to transform the `Optimizer` into a `Tuner`.
6. Add unit tests for the `Tuner` and optionally a second one for the `Optimizer`.

If the new custom tuner is implemented via bbotk, use one of the existing optimizers as a template, e.g. `bbotk::OptimizerRandomSearch`. There are currently only two tuners that are not based on an `Optimizer`: `mlr3hyperband::TunerHyperband` and `mlr3tuning::TunerIrace`. Both are rather complex, but you can still use their documentation and class structure as a template. The following steps are identical for optimizers and tuners.

Rewrite the meta information in the documentation and create a new class name. Scientific sources can be added in `R/bibentries.R`; they are added under `@source` in the documentation. The example and dictionary sections of the documentation are auto-generated based on `@templateVar id <tuner_id>`. Change the parameter set of the optimizer / tuner and document the parameters under `@section Parameters`. Do not forget to change `mlr_tuners$add()` (or `mlr_optimizers$add()` for an optimizer) in the last line, which adds the optimizer / tuner to the dictionary.

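As an illustration, a trimmed documentation header could look like the following sketch. The class name `TunerRandomSketch`, the id `random_sketch` and the `batch_size` parameter are made up for this example, and the available roxygen templates differ between packages.

```r
#' @title Hyperparameter Tuning via Random Sketch
#'
#' @name mlr_tuners_random_sketch
#'
#' @description
#' Toy tuner that proposes one random configuration per iteration.
#'
#' @templateVar id random_sketch
#'
#' @section Parameters:
#' \describe{
#'   \item{`batch_size`}{`integer(1)`\cr
#'     Number of configurations evaluated per batch.}
#' }
#'
#' @export
TunerRandomSketch = R6::R6Class("TunerRandomSketch"
  # class body as sketched in the following sections
)

# last line of the file: add the new tuner to the dictionary
mlr_tuners$add("random_sketch", TunerRandomSketch)
```
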
The `$.optimize()` private method is the main part of the tuner. It takes an instance, proposes new points and calls the `$eval_batch()` method of the instance to evaluate them. Here you can go two ways: implement the iterative process yourself or call an external optimization function that resides in another package.

Usually, the proposal and evaluation are done in a repeat-loop, which you have to implement yourself (a minimal sketch follows the list below). Please consider the following points:

- You do not have to care about termination: `$eval_batch()` won't allow more evaluations than allowed by the `bbotk::Terminator`. This implies that code after the repeat-loop will not be executed.
- If you want to log additional information for each evaluation of the `Objective` in the `Archive`, you can simply add columns to the `data.table` object that is passed to `$eval_batch()`.

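The following is a minimal sketch of such a repeat-loop, written as a bbotk `Optimizer` (route a) above) so that it can later be wrapped into a `Tuner`. The class name is made up, and the constructor arguments reflect the bbotk API at the time of writing.

```r
library(R6)
library(bbotk)

# Toy optimizer: proposes one random configuration per iteration.
OptimizerRandomSketch = R6Class("OptimizerRandomSketch",
  inherit = Optimizer,
  public = list(
    initialize = function() {
      super$initialize(
        param_set = paradox::ps(),
        param_classes = c("ParamLgl", "ParamInt", "ParamDbl", "ParamFct"),
        properties = c("dependencies", "single-crit", "multi-crit")
      )
    }
  ),
  private = list(
    .optimize = function(inst) {
      repeat {
        # propose a single random point from the search space
        xdt = paradox::generate_design_random(inst$search_space, 1)$data
        # eval_batch() evaluates the point and stores it in the archive;
        # it also stops the loop once the Terminator budget is exhausted,
        # so code after the repeat-loop is never reached
        inst$eval_batch(xdt)
      }
    }
  )
)
```

Proposing points with `paradox::generate_design_random()` is just one choice for this sketch; any proposal mechanism can be substituted in its place.
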
Optimization functions from external packages usually take an objective function as an argument. In this case, you can pass `inst$objective_function`, which internally calls `$eval_batch()`. Check out `bbotk::OptimizerGenSA` for an example.

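For illustration, an `.optimize()` method that delegates to `stats::optim()` could look like the following fragment. This is a sketch, not the actual `OptimizerGenSA` code, and it assumes a purely numeric, bounded search space.

```r
.optimize = function(inst) {
  # bounds of the (assumed purely numeric) search space
  lower = inst$search_space$lower
  upper = inst$search_space$upper
  # optim() repeatedly calls inst$objective_function, which forwards
  # every candidate to $eval_batch(); the Terminator ends the run, so
  # the return value of optim() is never used
  stats::optim(
    par = (lower + upper) / 2,
    fn = inst$objective_function,
    method = "L-BFGS-B",
    lower = lower,
    upper = upper
  )
}
```
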
The `$.assign_result()` private method simply obtains the best performing result from the archive. The default method can be overwritten if the new tuner determines the result of the optimization in a different way. The new function must call the `$assign_result()` method of the instance to write the final result to the instance. See `mlr3tuning::TunerIrace` for an example of a custom `$.assign_result()` implementation.

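A sketch of an overwritten `$.assign_result()`, to be placed in the private list of the class, might look like this. The rule of picking the most recently evaluated configuration is purely hypothetical, and the fragment assumes a single-criterion instance.

```r
.assign_result = function(inst) {
  # hypothetical rule: return the most recently evaluated configuration
  # instead of the best-performing one
  last = inst$archive$data[.N]
  xdt = last[, inst$search_space$ids(), with = FALSE]
  y = unlist(last[, inst$archive$cols_y, with = FALSE])
  # the final result must always be written back via $assign_result()
  inst$assign_result(xdt, y)
}
```
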
This step is only needed if you implement the tuner via bbotk. The `mlr3tuning::TunerFromOptimizer` class transforms an `Optimizer` into a `Tuner`. Just add the `Optimizer` to the `optimizer` field. See `mlr3tuning::TunerRandomSearch` for an example.

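Continuing the sketch from above, wrapping the hypothetical `OptimizerRandomSketch` could look like this:

```r
library(R6)

TunerRandomSketch = R6Class("TunerRandomSketch",
  inherit = mlr3tuning::TunerFromOptimizer,
  public = list(
    initialize = function() {
      # hand the optimizer to the wrapper; it takes care of translating
      # between tuning instances and plain optimization instances
      super$initialize(optimizer = OptimizerRandomSketch$new())
    }
  )
)
```
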
The new custom tuner should be thoroughly tested with unit tests. Tuners can be tested with the `test_tuner()` helper function. If you added the `Tuner` via an `Optimizer`, you should additionally test the `Optimizer` with the `test_optimizer()` helper function.

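As a rough sketch of such a test, written directly against a tuning instance rather than the helpers (whose signatures may change between versions), and using the hypothetical `TunerRandomSketch` from above:

```r
library(testthat)
library(mlr3)
library(mlr3tuning)
library(paradox)

test_that("TunerRandomSketch finds a result", {
  instance = TuningInstanceSingleCrit$new(
    task = tsk("iris"),
    learner = lrn("classif.rpart", cp = to_tune(1e-4, 1e-1)),
    resampling = rsmp("holdout"),
    measure = msr("classif.ce"),
    terminator = trm("evals", n_evals = 4)
  )
  tuner = TunerRandomSketch$new()
  tuner$optimize(instance)

  # every evaluated configuration is logged in the archive ...
  expect_equal(nrow(instance$archive$data), 4)
  # ... and a final result has been assigned to the instance
  expect_true(nrow(instance$result) == 1)
})
```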