iter_evaluate

Evaluates the performance of a forecaster on a time series dataset and yields results.

This does exactly the same as evaluate.progressive_val_score. The only difference is that this function returns an iterator, yielding the results at every step. This is useful when you want control over what is done with the results, for instance to plot them as they arrive.
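For illustration, here is a minimal sketch of how the iterator might be consumed. The HoltWinters model, AirlinePassengers dataset, and MAE metric are incidental choices, and the exact structure of each yielded step is an assumption here, so the sketch prints steps periodically rather than unpacking them.

```python
from river import datasets, metrics, time_series

model = time_series.HoltWinters(
    alpha=0.3, beta=0.1, gamma=0.6, seasonality=12, multiplicative=True
)

steps = time_series.iter_evaluate(
    dataset=datasets.AirlinePassengers(),
    model=model,
    metric=metrics.MAE(),
    horizon=12,
)

# Each yielded step reflects the evaluation state at that point in the
# stream; print every 24th step to watch the metric evolve over time.
# (Assumption: the precise contents of a step are left opaque here.)
for i, step in enumerate(steps):
    if i % 24 == 0:
        print(step)
```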

Parameters

  • dataset ('base.typing.Dataset')

    A sequential time series.

  • model ('time_series.base.Forecaster')

    A forecaster.

  • metric ('metrics.base.RegressionMetric')

    A regression metric.

  • horizon ('int')

    The number of steps ahead for which a forecast is made at each iteration.

  • agg_func ('typing.Callable[[list[float]], float]') – defaults to None

    An optional function for aggregating the errors measured at each step of the horizon into a single value. If None, a separate metric is maintained for each step of the horizon.

  • grace_period ('int') – defaults to None

Initial period during which the metric is not updated. This allows a fair evaluation of models that need a warm-up period before they start producing meaningful forecasts. If None, this defaults to the value of the horizon.
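Tying these parameters together, here is a sketch under the same assumptions as the example above, aggregating the per-step errors with statistics.mean and setting the grace period explicitly:

```python
import statistics

from river import datasets, metrics, time_series

model = time_series.HoltWinters(
    alpha=0.3, beta=0.1, gamma=0.6, seasonality=12, multiplicative=True
)

steps = time_series.iter_evaluate(
    dataset=datasets.AirlinePassengers(),
    model=model,
    metric=metrics.MAE(),
    horizon=12,
    agg_func=statistics.mean,  # collapse the 12 per-step errors into one value
    grace_period=12,           # skip metric updates while the model warms up
)

# Consume the whole stream, keeping only the last yielded step.
for step in steps:
    last_step = step

print(last_step)  # final evaluation state after the full dataset
```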