Last updated on 2026-01-22 00:52:16 CET.
| Package | ERROR | OK |
|---|---|---|
| autonewsmd | | 13 |
| BiasCorrector | | 13 |
| DQAgui | | 13 |
| DQAstats | | 13 |
| kdry | | 13 |
| mlexperiments | | 13 |
| mllrnrs | 1 | 12 |
| mlsurvlrnrs | 1 | 12 |
| rBiasCorrection | | 13 |
| sjtable2df | | 13 |
autonewsmd: Current CRAN status: OK: 13
BiasCorrector: Current CRAN status: OK: 13
DQAgui: Current CRAN status: OK: 13
DQAstats: Current CRAN status: OK: 13
kdry: Current CRAN status: OK: 13
mlexperiments: Current CRAN status: OK: 13
mllrnrs: Current CRAN status: ERROR: 1, OK: 12
Version: 0.0.7
Check: tests
Result: ERROR
Running ‘testthat.R’ [90s/100s]
Running the tests in ‘tests/testthat.R’ failed.
Complete output:
> # This file is part of the standard setup for testthat.
> # It is recommended that you do not modify it.
> #
> # Where should you do additional test configuration?
> # Learn more about the roles of various files in:
> # * https://r-pkgs.org/tests.html
> # * https://testthat.r-lib.org/reference/test_package.html#special-files
> # https://github.com/Rdatatable/data.table/issues/5658
> Sys.setenv("OMP_THREAD_LIMIT" = 2)
> Sys.setenv("Ncpu" = 2)
>
> library(testthat)
> library(mllrnrs)
>
> test_check("mllrnrs")
CV fold: Fold1
CV fold: Fold1
Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
... reducing initialization grid to 10 rows.
Saving _problems/test-binary-225.R
CV fold: Fold1
Classification: using 'mean classification error' as optimization metric.
Parameter settings [=============================>---------------] 2/3 ( 67%)
Classification: using 'mean classification error' as optimization metric.
Parameter settings [=============================================] 3/3 (100%)
Classification: using 'mean classification error' as optimization metric.
CV fold: Fold2
CV progress [==================================>-----------------] 2/3 ( 67%)
Classification: using 'mean classification error' as optimization metric.
Parameter settings [=============================>---------------] 2/3 ( 67%)
Classification: using 'mean classification error' as optimization metric.
Parameter settings [=============================================] 3/3 (100%)
Classification: using 'mean classification error' as optimization metric.
CV fold: Fold3
CV progress [====================================================] 3/3 (100%)
Classification: using 'mean classification error' as optimization metric.
Parameter settings [=============================>---------------] 2/3 ( 67%)
Classification: using 'mean classification error' as optimization metric.
Parameter settings [=============================================] 3/3 (100%)
Classification: using 'mean classification error' as optimization metric.
CV fold: Fold1
Parameter settings [=============================>---------------] 2/3 ( 67%)
Parameter settings [=============================================] 3/3 (100%)
CV fold: Fold2
CV progress [==================================>-----------------] 2/3 ( 67%)
Parameter settings [=============================>---------------] 2/3 ( 67%)
Parameter settings [=============================================] 3/3 (100%)
CV fold: Fold3
CV progress [====================================================] 3/3 (100%)
Parameter settings [=============================================] 3/3 (100%)
CV fold: Fold1
Parameter settings [=============================>---------------] 2/3 ( 67%)
Parameter settings [=============================================] 3/3 (100%)
CV fold: Fold2
CV progress [==================================>-----------------] 2/3 ( 67%)
Parameter settings [=============================>---------------] 2/3 ( 67%)
Parameter settings [=============================================] 3/3 (100%)
CV fold: Fold3
CV progress [====================================================] 3/3 (100%)
Parameter settings [=============================>---------------] 2/3 ( 67%)
Parameter settings [=============================================] 3/3 (100%)
CV fold: Fold1
Classification: using 'mean classification error' as optimization metric.
Parameter settings [=============================>---------------] 2/3 ( 67%)
Classification: using 'mean classification error' as optimization metric.
Parameter settings [=============================================] 3/3 (100%)
Classification: using 'mean classification error' as optimization metric.
CV fold: Fold2
CV progress [==================================>-----------------] 2/3 ( 67%)
Classification: using 'mean classification error' as optimization metric.
Parameter settings [=============================>---------------] 2/3 ( 67%)
Classification: using 'mean classification error' as optimization metric.
Parameter settings [=============================================] 3/3 (100%)
Classification: using 'mean classification error' as optimization metric.
CV fold: Fold3
CV progress [====================================================] 3/3 (100%)
Classification: using 'mean classification error' as optimization metric.
Parameter settings [=============================>---------------] 2/3 ( 67%)
Classification: using 'mean classification error' as optimization metric.
Parameter settings [=============================================] 3/3 (100%)
Classification: using 'mean classification error' as optimization metric.
CV fold: Fold1
Parameter settings [=============================>---------------] 2/3 ( 67%)
Parameter settings [=============================================] 3/3 (100%)
CV fold: Fold2
CV progress [==================================>-----------------] 2/3 ( 67%)
Parameter settings [=============================>---------------] 2/3 ( 67%)
Parameter settings [=============================================] 3/3 (100%)
CV fold: Fold3
CV progress [====================================================] 3/3 (100%)
Parameter settings [=============================>---------------] 2/3 ( 67%)
Parameter settings [=============================================] 3/3 (100%)
CV fold: Fold1
Saving _problems/test-regression-107.R
CV fold: Fold1
Parameter settings [=============================>---------------] 2/3 ( 67%)
Parameter settings [=============================================] 3/3 (100%)
CV fold: Fold2
CV progress [==================================>-----------------] 2/3 ( 67%)
Parameter settings [=============================>---------------] 2/3 ( 67%)
Parameter settings [=============================================] 3/3 (100%)
CV fold: Fold3
CV progress [====================================================] 3/3 (100%)
Parameter settings [=============================>---------------] 2/3 ( 67%)
Parameter settings [=============================================] 3/3 (100%)
CV fold: Fold1
Regression: using 'mean squared error' as optimization metric.
Parameter settings [=============================>---------------] 2/3 ( 67%)
Regression: using 'mean squared error' as optimization metric.
Parameter settings [=============================================] 3/3 (100%)
Regression: using 'mean squared error' as optimization metric.
CV fold: Fold2
CV progress [==================================>-----------------] 2/3 ( 67%)
Regression: using 'mean squared error' as optimization metric.
Parameter settings [=============================>---------------] 2/3 ( 67%)
Regression: using 'mean squared error' as optimization metric.
Parameter settings [=============================================] 3/3 (100%)
Regression: using 'mean squared error' as optimization metric.
CV fold: Fold3
CV progress [====================================================] 3/3 (100%)
Regression: using 'mean squared error' as optimization metric.
Parameter settings [=============================>---------------] 2/3 ( 67%)
Regression: using 'mean squared error' as optimization metric.
Parameter settings [=============================================] 3/3 (100%)
Regression: using 'mean squared error' as optimization metric.
CV fold: Fold1
Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
... reducing initialization grid to 10 rows.
Saving _problems/test-regression-309.R
CV fold: Fold1
Parameter settings [=============================>---------------] 2/3 ( 67%)
Parameter settings [=============================================] 3/3 (100%)
CV fold: Fold2
CV progress [==================================>-----------------] 2/3 ( 67%)
Parameter settings [=============================>---------------] 2/3 ( 67%)
Parameter settings [=============================================] 3/3 (100%)
CV fold: Fold3
CV progress [====================================================] 3/3 (100%)
Parameter settings [=============================>---------------] 2/3 ( 67%)
Parameter settings [=============================================] 3/3 (100%)
[ FAIL 3 | WARN 0 | SKIP 3 | PASS 25 ]
══ Skipped tests (3) ═══════════════════════════════════════════════════════════
• On CRAN (3): 'test-binary.R:57:5', 'test-lints.R:10:5',
'test-multiclass.R:57:5'
══ Failed tests ════════════════════════════════════════════════════════════════
── Error ('test-binary.R:225:5'): test nested cv, bayesian, binary - lightgbm ──
Error: Package "rBayesianOptimization" must be installed to use 'strategy = "bayesian"'.
Backtrace:
▆
1. └─lightgbm_optimizer$execute() at test-binary.R:225:5
2. └─mlexperiments:::.run_cv(self = self, private = private)
3. └─mlexperiments:::.fold_looper(self, private)
4. ├─base::do.call(private$cv_run_model, run_args)
5. └─mlexperiments (local) `<fn>`(train_index = `<int>`, fold_train = `<list>`, fold_test = `<list>`)
6. ├─base::do.call(.cv_run_nested_model, args)
7. └─mlexperiments (local) `<fn>`(...)
8. └─hparam_tuner$execute(k = self$k_tuning)
9. └─private$select_optimizer(self, private)
10. └─BayesianOptimizer$new(...)
11. └─mlexperiments (local) initialize(...)
── Error ('test-regression.R:107:5'): test nested cv, bayesian, regression - glmnet ──
Error: Package "rBayesianOptimization" must be installed to use 'strategy = "bayesian"'.
Backtrace:
▆
1. └─glmnet_optimizer$execute() at test-regression.R:107:5
2. └─mlexperiments:::.run_cv(self = self, private = private)
3. └─mlexperiments:::.fold_looper(self, private)
4. ├─base::do.call(private$cv_run_model, run_args)
5. └─mlexperiments (local) `<fn>`(train_index = `<int>`, fold_train = `<list>`, fold_test = `<list>`)
6. ├─base::do.call(.cv_run_nested_model, args)
7. └─mlexperiments (local) `<fn>`(...)
8. └─hparam_tuner$execute(k = self$k_tuning)
9. └─private$select_optimizer(self, private)
10. └─BayesianOptimizer$new(...)
11. └─mlexperiments (local) initialize(...)
── Error ('test-regression.R:309:5'): test nested cv, bayesian, reg:squarederror - xgboost ──
Error: Package "rBayesianOptimization" must be installed to use 'strategy = "bayesian"'.
Backtrace:
▆
1. └─xgboost_optimizer$execute() at test-regression.R:309:5
2. └─mlexperiments:::.run_cv(self = self, private = private)
3. └─mlexperiments:::.fold_looper(self, private)
4. ├─base::do.call(private$cv_run_model, run_args)
5. └─mlexperiments (local) `<fn>`(train_index = `<int>`, fold_train = `<list>`, fold_test = `<list>`)
6. ├─base::do.call(.cv_run_nested_model, args)
7. └─mlexperiments (local) `<fn>`(...)
8. └─hparam_tuner$execute(k = self$k_tuning)
9. └─private$select_optimizer(self, private)
10. └─BayesianOptimizer$new(...)
11. └─mlexperiments (local) initialize(...)
[ FAIL 3 | WARN 0 | SKIP 3 | PASS 25 ]
Error:
! Test failures.
Execution halted
Flavor: r-release-linux-x86_64
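All three mllrnrs failures share the same cause: the nested-CV tests request `strategy = "bayesian"`, which needs the suggested package rBayesianOptimization, and that package is not installed on this check machine. A common way to make such tests robust is to skip them when the optional dependency is missing. The snippet below is only a sketch of that testthat pattern, not the package's actual test code; the test name and body are placeholders.

```r
library(testthat)

test_that("nested cv, bayesian, binary - lightgbm", {
  # Skip gracefully (instead of erroring) when the suggested package is
  # unavailable, e.g. on check machines that install Depends/Imports only.
  skip_if_not_installed("rBayesianOptimization")

  # ... construct the tuner with strategy = "bayesian" and run it here ...
  # Placeholder expectation standing in for the real nested-CV checks.
  expect_true(requireNamespace("rBayesianOptimization", quietly = TRUE))
})
```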
mlsurvlrnrs: Current CRAN status: ERROR: 1, OK: 12
Version: 0.0.7
Check: tests
Result: ERROR
Running ‘testthat.R’ [12s/14s]
Running the tests in ‘tests/testthat.R’ failed.
Complete output:
> # This file is part of the standard setup for testthat.
> # It is recommended that you do not modify it.
> #
> # Where should you do additional test configuration?
> # Learn more about the roles of various files in:
> # * https://r-pkgs.org/tests.html
> # * https://testthat.r-lib.org/reference/test_package.html#special-files
>
> Sys.setenv("OMP_THREAD_LIMIT" = 2)
> Sys.setenv("Ncpu" = 2)
>
> library(testthat)
> library(mlsurvlrnrs)
>
> test_check("mlsurvlrnrs")
CV fold: Fold1
Parameter 'ncores' is ignored for learner 'LearnerSurvCoxPHCox'.
CV fold: Fold2
Parameter 'ncores' is ignored for learner 'LearnerSurvCoxPHCox'.
CV fold: Fold3
Parameter 'ncores' is ignored for learner 'LearnerSurvCoxPHCox'.
CV fold: Fold1
Saving _problems/test-surv_glmnet_cox-99.R
CV fold: Fold1
Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
... reducing initialization grid to 10 rows.
Saving _problems/test-surv_ranger_cox-110.R
CV fold: Fold1
Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
... reducing initialization grid to 10 rows.
Saving _problems/test-surv_rpart_cox-108.R
CV fold: Fold1
Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
... reducing initialization grid to 10 rows.
Saving _problems/test-surv_xgboost_aft-121.R
CV fold: Fold1
Parameter settings [=============================>---------------] 2/3 ( 67%)
Parameter settings [=============================================] 3/3 (100%)
CV fold: Fold2
CV progress [==================================>-----------------] 2/3 ( 67%)
Parameter settings [=============================>---------------] 2/3 ( 67%)
Parameter settings [=============================================] 3/3 (100%)
CV fold: Fold3
CV progress [====================================================] 3/3 (100%)
Parameter settings [=============================>---------------] 2/3 ( 67%)
Parameter settings [=============================================] 3/3 (100%)
CV fold: Fold1
Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'...
... reducing initialization grid to 10 rows.
Saving _problems/test-surv_xgboost_cox-118.R
CV fold: Fold1
Parameter settings [=============================================] 3/3 (100%)
CV fold: Fold2
CV progress [==================================>-----------------] 2/3 ( 67%)
Parameter settings [=============================================] 3/3 (100%)
CV fold: Fold3
CV progress [====================================================] 3/3 (100%)
Parameter settings [=============================================] 3/3 (100%)
[ FAIL 5 | WARN 0 | SKIP 1 | PASS 9 ]
══ Skipped tests (1) ═══════════════════════════════════════════════════════════
• On CRAN (1): 'test-lints.R:10:5'
══ Failed tests ════════════════════════════════════════════════════════════════
── Error ('test-surv_glmnet_cox.R:99:5'): test nested cv, grid - surv_glmnet_cox ──
Error: Package "rBayesianOptimization" must be installed to use 'strategy = "bayesian"'.
Backtrace:
▆
1. └─surv_glmnet_cox_optimizer$execute() at test-surv_glmnet_cox.R:99:5
2. └─mlexperiments:::.run_cv(self = self, private = private)
3. └─mlexperiments:::.fold_looper(self, private)
4. ├─base::do.call(private$cv_run_model, run_args)
5. └─mlexperiments (local) `<fn>`(train_index = `<int>`, fold_train = `<list>`, fold_test = `<list>`)
6. ├─base::do.call(.cv_run_nested_model, args)
7. └─mlexperiments (local) `<fn>`(...)
8. └─hparam_tuner$execute(k = self$k_tuning)
9. └─private$select_optimizer(self, private)
10. └─BayesianOptimizer$new(...)
11. └─mlexperiments (local) initialize(...)
── Error ('test-surv_ranger_cox.R:110:5'): test nested cv, bayesian - surv_ranger_cox ──
Error: Package "rBayesianOptimization" must be installed to use 'strategy = "bayesian"'.
Backtrace:
▆
1. └─surv_ranger_cox_optimizer$execute() at test-surv_ranger_cox.R:110:5
2. └─mlexperiments:::.run_cv(self = self, private = private)
3. └─mlexperiments:::.fold_looper(self, private)
4. ├─base::do.call(private$cv_run_model, run_args)
5. └─mlexperiments (local) `<fn>`(train_index = `<int>`, fold_train = `<list>`, fold_test = `<list>`)
6. ├─base::do.call(.cv_run_nested_model, args)
7. └─mlexperiments (local) `<fn>`(...)
8. └─hparam_tuner$execute(k = self$k_tuning)
9. └─private$select_optimizer(self, private)
10. └─BayesianOptimizer$new(...)
11. └─mlexperiments (local) initialize(...)
── Error ('test-surv_rpart_cox.R:108:5'): test nested cv, bayesian - surv_rpart_cox ──
Error: Package "rBayesianOptimization" must be installed to use 'strategy = "bayesian"'.
Backtrace:
▆
1. └─surv_rpart_cox_optimizer$execute() at test-surv_rpart_cox.R:108:5
2. └─mlexperiments:::.run_cv(self = self, private = private)
3. └─mlexperiments:::.fold_looper(self, private)
4. ├─base::do.call(private$cv_run_model, run_args)
5. └─mlexperiments (local) `<fn>`(train_index = `<int>`, fold_train = `<list>`, fold_test = `<list>`)
6. ├─base::do.call(.cv_run_nested_model, args)
7. └─mlexperiments (local) `<fn>`(...)
8. └─hparam_tuner$execute(k = self$k_tuning)
9. └─private$select_optimizer(self, private)
10. └─BayesianOptimizer$new(...)
11. └─mlexperiments (local) initialize(...)
── Error ('test-surv_xgboost_aft.R:121:3'): test nested cv, bayesian - surv_xgboost_aft ──
Error: Package "rBayesianOptimization" must be installed to use 'strategy = "bayesian"'.
Backtrace:
▆
1. └─surv_xgboost_aft_optimizer$execute() at test-surv_xgboost_aft.R:121:3
2. └─mlexperiments:::.run_cv(self = self, private = private)
3. └─mlexperiments:::.fold_looper(self, private)
4. ├─base::do.call(private$cv_run_model, run_args)
5. └─mlexperiments (local) `<fn>`(train_index = `<int>`, fold_train = `<list>`, fold_test = `<list>`)
6. ├─base::do.call(.cv_run_nested_model, args)
7. └─mlexperiments (local) `<fn>`(...)
8. └─hparam_tuner$execute(k = self$k_tuning)
9. └─private$select_optimizer(self, private)
10. └─BayesianOptimizer$new(...)
11. └─mlexperiments (local) initialize(...)
── Error ('test-surv_xgboost_cox.R:118:3'): test nested cv, bayesian - surv_xgboost_cox ──
Error: Package "rBayesianOptimization" must be installed to use 'strategy = "bayesian"'.
Backtrace:
▆
1. └─surv_xgboost_cox_optimizer$execute() at test-surv_xgboost_cox.R:118:3
2. └─mlexperiments:::.run_cv(self = self, private = private)
3. └─mlexperiments:::.fold_looper(self, private)
4. ├─base::do.call(private$cv_run_model, run_args)
5. └─mlexperiments (local) `<fn>`(train_index = `<int>`, fold_train = `<list>`, fold_test = `<list>`)
6. ├─base::do.call(.cv_run_nested_model, args)
7. └─mlexperiments (local) `<fn>`(...)
8. └─hparam_tuner$execute(k = self$k_tuning)
9. └─private$select_optimizer(self, private)
10. └─BayesianOptimizer$new(...)
11. └─mlexperiments (local) initialize(...)
[ FAIL 5 | WARN 0 | SKIP 1 | PASS 9 ]
Error:
! Test failures.
Execution halted
Flavor: r-release-linux-x86_64
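The mlsurvlrnrs failures are the same missing-Suggests issue, surfacing through the Bayesian tuning strategy provided by mlexperiments. One way to catch this before submission is to run the checks with only the declared Depends/Imports available, which mimics the CRAN machine where rBayesianOptimization is absent. This is a rough local sketch; the tarball filename is an assumption based on the version reported above.

```r
# Simulate a check environment where suggested packages are not available,
# so tests that hard-require them either fail (as above) or get skipped.
Sys.setenv("_R_CHECK_DEPENDS_ONLY_" = "true")
system2("R", c("CMD", "check", "--as-cran", "mlsurvlrnrs_0.0.7.tar.gz"))
```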
rBiasCorrection: Current CRAN status: OK: 13
sjtable2df: Current CRAN status: OK: 13