evaluate

Evaluates the performance of a forecaster on a time series dataset.

To understand why this function is useful, it's important to understand the difference between nowcasting and forecasting. Nowcasting is about predicting the value at the next time step, which can be seen as a special case of regression where the target is the next value in the series. In that case, the evaluate.progressive_val_score function may be used to evaluate a model via progressive validation.
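
As a point of reference, here is a minimal sketch of nowcasting evaluation via progressive validation. The dataset and model are illustrative choices, not part of this function's API.

```python
from river import datasets, evaluate, linear_model, metrics, preprocessing

# A plain regression model: at each step the next value is predicted,
# the metric is updated, and only then does the model learn from it.
model = preprocessing.StandardScaler() | linear_model.LinearRegression()

evaluate.progressive_val_score(
    dataset=datasets.TrumpApproval(),  # illustrative regression dataset
    model=model,
    metric=metrics.MAE(),
)
```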

Forecasting models can also be evaluated via progressive validation, which is the purpose of this function. At each time step t, the forecaster is asked to predict the values at t + 1, t + 2, ..., t + horizon. The performance at each step of the horizon is measured and returned.
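
For instance, here is a minimal sketch of calling this function. The SNARIMAX orders and the AirlinePassengers dataset are illustrative assumptions, not recommendations.

```python
from river import datasets, metrics, time_series

# An autoregressive forecaster; the (p, d, q) orders are arbitrary here.
model = time_series.SNARIMAX(p=12, d=1, q=0)

# At each time step, the forecaster predicts the next `horizon` values;
# with no agg_func, a separate MAE is kept for each step of the horizon.
horizon_metric = time_series.evaluate(
    dataset=datasets.AirlinePassengers(),
    model=model,
    metric=metrics.MAE(),
    horizon=12,
)
print(horizon_metric)
```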

Parameters

  • dataset ('base.typing.Dataset')

    A sequential time series.

  • model ('time_series.base.Forecaster')

    A forecaster.

  • metric ('metrics.base.RegressionMetric')

    A regression metric.

  • horizon ('int')

    The number of steps ahead to forecast at each time step.

  • agg_func ('typing.Callable[[list[float]], float]') – defaults to None

    A function, such as statistics.mean, used to aggregate the performance over the horizon into a single value. If None, a separate metric is maintained for each step of the horizon (see the sketch after this list).

  • grace_period ('int') – defaults to None

    Initial period during which the metric is not updated. This is to fairly evaluate models that need a warm-up period before producing meaningful forecasts. The value of this parameter is equal to the horizon by default.
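
To make these parameters concrete, here is a hedged sketch combining agg_func and grace_period. The HoltWinters hyperparameters are illustrative guesses, not tuned values.

```python
import statistics

from river import datasets, metrics, time_series

# A seasonal forecaster; the smoothing coefficients are arbitrary here.
model = time_series.HoltWinters(
    alpha=0.3,
    beta=0.1,
    gamma=0.6,
    seasonality=12,
    multiplicative=True,
)

# statistics.mean collapses the per-step metrics into one figure, and the
# metric is left untouched for the first 12 steps while the model warms up.
time_series.evaluate(
    dataset=datasets.AirlinePassengers(),
    model=model,
    metric=metrics.MAE(),
    horizon=12,
    agg_func=statistics.mean,
    grace_period=12,
)
```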