iter_evaluate

Evaluates the performance of a forecaster on a time series dataset and yields results.

This does exactly the same as evaluate.progressive_val_score; the only difference is that this function returns an iterator, yielding a result at every step. This is useful when you want control over what happens to the results. For instance, you might want to plot them.
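As a minimal sketch of consuming the iterator, the following collects the observed values and the one-step-ahead forecasts, which could then be plotted. The unpacking assumes each step yields a tuple of the current features, the ground truth, the list of forecasts, and the running metric, which matches recent versions of River; check the source of your installed version to confirm.

    from river import datasets, metrics, time_series

    dataset = datasets.AirlinePassengers()
    model = time_series.HoltWinters(
        alpha=0.3, beta=0.1, gamma=0.6, seasonality=12, multiplicative=True
    )
    metric = metrics.MAE()

    truths, one_step_forecasts = [], []

    # Assumed yield structure: (x, y, y_pred, metric), where y_pred is a
    # list holding one forecast per step of the horizon. Steps that fall
    # inside the grace period are not yielded.
    for x, y, y_pred, running_metric in time_series.iter_evaluate(
        dataset, model, metric, horizon=12
    ):
        truths.append(y)
        one_step_forecasts.append(y_pred[0])

The two lists can then be handed to any plotting library to compare the forecasts against the observed series.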

Parameters

  • dataset (Iterable[Tuple[dict, Any]])

    A sequential time series.

  • model (river.time_series.base.Forecaster)

    A forecaster.

  • metric (river.metrics.base.RegressionMetric)

    A regression metric.

  • horizon (int)

    The number of steps ahead to forecast at each iteration.

  • grace_period (int) – defaults to None

    Initial period during which the metric is not updated. This allows for a fair evaluation of models that need a warm-up period before they start producing meaningful forecasts. By default, this parameter is equal to the horizon.
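
Because the running metric is yielded at every step, the aggregate result of the non-iterative counterpart (time_series.evaluate in recent versions of River) can be reproduced by exhausting the iterator and keeping the final metric. A sketch, assuming the metric is the last element of each yielded tuple:

    from river import datasets, metrics, time_series

    dataset = datasets.AirlinePassengers()
    model = time_series.HoltWinters(
        alpha=0.3, beta=0.1, gamma=0.6, seasonality=12, multiplicative=True
    )
    metric = metrics.MAE()

    # Exhaust the iterator and keep only the last yielded metric, which
    # reflects every update made after the grace period.
    horizon_metric = None
    for *_, horizon_metric in time_series.iter_evaluate(
        dataset, model, metric, horizon=12
    ):
        pass

    print(horizon_metric)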