iter_progressive_val_score

Evaluates the performance of a model on a streaming dataset and yields results.

This does exactly the same thing as evaluate.progressive_val_score. The only difference is that this function returns an iterator, yielding the running results every step predictions. This can be useful if you want control over what you do with the results. For instance, you might want to plot them, as sketched at the end of the Examples section.

Parameters

  • dataset

Type: base.typing.Dataset

    The stream of observations against which the model will be evaluated.

  • model

    The model to evaluate.

  • metric

Type: metrics.base.Metric

    The metric used to evaluate the model's predictions.

  • moment

Type: str | typing.Callable | None

Default: None

    The attribute used for measuring time. If a callable is passed, then it is expected to take as input a dict of features. If None, then the observations are implicitly timestamped in the order in which they arrive.

  • delay

Type: str | int | dt.timedelta | typing.Callable | None

Default: None

The amount to wait before revealing the target associated with each observation to the model. This value is expected to be summable with the moment value. For instance, if moment is a datetime.date, then delay is expected to be a datetime.timedelta. If a callable is passed, then it is expected to take as input a dict of features and the target. If a str is passed, then it will be used to access the relevant field from the features. If None is passed, then no delay will be used, which amounts to standard online validation. A sketch of delayed validation follows this parameter list.

  • step

Default: 1

The number of predictions between two yields. Only predictions are counted, not training steps.

  • measure_time

Default: False

    Whether or not to measure the elapsed time.

  • measure_memory

Default: False

    Whether or not to measure the memory usage of the model.

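To illustrate how moment and delay work together, here is a minimal sketch of delayed progressive validation. It assumes the Bikes dataset, whose observations carry a datetime under the 'moment' feature, and reveals each target 30 minutes after its features arrive; the feature selection and pipeline are illustrative choices, not part of this function.

import datetime as dt

from river import compose
from river import datasets
from river import evaluate
from river import linear_model
from river import metrics
from river import preprocessing

# Illustrative pipeline: keep the numeric features, scale them, then regress.
model = compose.Select('clouds', 'humidity', 'pressure', 'temperature', 'wind')
model |= preprocessing.StandardScaler()
model |= linear_model.LinearRegression()

steps = evaluate.iter_progressive_val_score(
    dataset=datasets.Bikes(),
    model=model,
    metric=metrics.MAE(),
    moment='moment',                 # each observation's datetime feature
    delay=dt.timedelta(minutes=30),  # reveal the target 30 minutes later
    step=20000
)

for checkpoint in steps:
    print(checkpoint)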
Examples

Take the following model:

from river import linear_model
from river import preprocessing

model = (
    preprocessing.StandardScaler() |
    linear_model.LogisticRegression()
)

We can evaluate it on the Phishing dataset like so:

from river import datasets
from river import evaluate
from river import metrics

steps = evaluate.iter_progressive_val_score(
    model=model,
    dataset=datasets.Phishing(),
    metric=metrics.ROCAUC(),
    step=200
)

for step in steps:
    print(step)
{'ROCAUC': ROCAUC: 89.80%, 'Step': 200}
{'ROCAUC': ROCAUC: 92.09%, 'Step': 400}
{'ROCAUC': ROCAUC: 93.13%, 'Step': 600}
{'ROCAUC': ROCAUC: 93.99%, 'Step': 800}
{'ROCAUC': ROCAUC: 94.74%, 'Step': 1000}
{'ROCAUC': ROCAUC: 95.03%, 'Step': 1200}
{'ROCAUC': ROCAUC: 95.04%, 'Step': 1250}
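
Since the iterator hands you each checkpoint, you can collect the values and plot them instead of printing. The following is a minimal sketch; it assumes matplotlib is installed and uses the metric object's get() method to extract the raw value from each yielded dict.

import matplotlib.pyplot as plt

from river import datasets
from river import evaluate
from river import linear_model
from river import metrics
from river import preprocessing

model = (
    preprocessing.StandardScaler() |
    linear_model.LogisticRegression()
)

# Collect the step count and the running ROC AUC at each checkpoint.
xs, ys = [], []
for checkpoint in evaluate.iter_progressive_val_score(
    model=model,
    dataset=datasets.Phishing(),
    metric=metrics.ROCAUC(),
    step=200
):
    xs.append(checkpoint['Step'])
    ys.append(checkpoint['ROCAUC'].get())  # get() returns the metric's current value

plt.plot(xs, ys)
plt.xlabel('Number of predictions')
plt.ylabel('ROC AUC')
plt.show()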