iter_evaluate¶
Evaluates the performance of a forecaster on a time series dataset and yields results.
This does exactly the same as evaluate.progressive_val_score. The only difference is that this function returns an iterator, yielding results at every step. This can be useful if you want to have control over what you do with the results; for instance, you might want to plot them.
Parameters¶
- dataset
  Type → base.typing.Dataset
  A sequential time series.
- model
  Type → time_series.base.Forecaster
  A forecaster.
- metric
  Type → metrics.base.RegressionMetric
  A regression metric.
- horizon
  Type → int
  The number of steps ahead to forecast at each step.
agg_func
Type → typing.Callable[[list[float]], float] | None
Default →
None
- grace_period
  Type → int | None
  Default → None
  Initial period during which the metric is not updated. This is to fairly evaluate models which need a warm-up period to start producing meaningful forecasts. The value of this parameter is equal to the horizon by default.
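Since this page has no usage example, here is a minimal self-contained sketch of the idea behind progressive forecast evaluation. `iter_evaluate_sketch` and `MeanForecaster` are hypothetical stand-ins, not part of the library's API: the real function takes a dataset, a forecaster, and a metric object, whereas this sketch hard-codes mean absolute error and a toy running-mean forecaster.

```python
from collections import deque


class MeanForecaster:
    """Toy stand-in forecaster: predicts the running mean of past values."""

    def __init__(self):
        self.total = 0.0
        self.n = 0

    def learn_one(self, y):
        self.total += y
        self.n += 1

    def forecast_one(self):
        return self.total / self.n if self.n else 0.0


def iter_evaluate_sketch(series, model, horizon=1):
    """Yield (step, running MAE) pairs: at each step the model forecasts
    `horizon` steps ahead, and each forecast is scored once the true
    value arrives."""
    pending = deque()  # (due_step, forecast) pairs awaiting their true value
    abs_err_sum = 0.0
    n_scored = 0
    for t, y in enumerate(series):
        # Score every queued forecast whose target step has now arrived.
        while pending and pending[0][0] <= t:
            _, y_pred = pending.popleft()
            abs_err_sum += abs(y - y_pred)
            n_scored += 1
        # Learn from the new observation, then queue a forecast for
        # `horizon` steps ahead.
        model.learn_one(y)
        pending.append((t + horizon, model.forecast_one()))
        if n_scored:
            yield t, abs_err_sum / n_scored
```

With a constant series the running error is zero, which makes for an easy sanity check: `list(iter_evaluate_sketch([5.0] * 10, MeanForecaster(), horizon=2))` ends with `(9, 0.0)`. Note also that the first score only appears `horizon` steps in, which mirrors why `grace_period` defaults to the horizon.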