ClassificationReport

A report for monitoring a classifier.

This class maintains a set of metrics and updates each of them every time update is called. You can print the report at any time during a model's lifetime to get a tabular visualization of the metrics.

You can wrap a metrics.ClassificationReport with utils.Rolling in order to obtain a classification report over a window of observations. You can also wrap it with utils.TimeRolling to obtain a report over a period of time.
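
For instance, here is a minimal sketch of a rolling report, assuming utils.Rolling accepts the object to wrap together with a window_size, and that printing the wrapped object delegates to the underlying report (utils.TimeRolling would be used the same way, with a period instead of a window size):

from river import metrics, utils

y_true = ['pear', 'apple', 'banana', 'banana', 'banana']
y_pred = ['apple', 'pear', 'banana', 'banana', 'apple']

# Only the 3 most recent observations contribute to the report.
rolling_report = utils.Rolling(metrics.ClassificationReport(), window_size=3)

for yt, yp in zip(y_true, y_pred):
    rolling_report.update(yt, yp)

print(rolling_report)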

Parameters

  • decimals

    Default: 2

    The number of decimals to display in each cell.

  • cm

    Default: None

    This parameter allows sharing the same confusion matrix between multiple metrics. Sharing a confusion matrix reduces the amount of storage and computation time. A short sketch of sharing one is given right after this list.

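Here is a minimal sketch of sharing a confusion matrix, assuming metrics.ConfusionMatrix, the cm keyword of the individual classification metrics, and that each metric's get derives its value from the shared matrix. The matrix is updated once per observation instead of once per metric.

from river import metrics

y_true = ['pear', 'apple', 'banana', 'banana', 'banana']
y_pred = ['apple', 'pear', 'banana', 'banana', 'apple']

# A single confusion matrix backs both metrics.
cm = metrics.ConfusionMatrix()
precision = metrics.MacroPrecision(cm=cm)
recall = metrics.MacroRecall(cm=cm)

for yt, yp in zip(y_true, y_pred):
    # Update the shared matrix once; both metrics read from it.
    cm.update(yt, yp)

print(precision.get(), recall.get())
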
Attributes

  • bigger_is_better

    Indicates whether a higher value is better than a lower one.

  • requires_labels

    Indicates if labels are required, rather than probabilities.

  • works_with_weights

    Indicates whether the metric takes sample weights into account.

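These flags can be inspected directly on an instance; a tiny sketch (the exact boolean values are not asserted here, as they depend on the implementation):

from river import metrics

report = metrics.ClassificationReport()

# Properties describing the metric itself rather than its current value.
print(report.bigger_is_better, report.requires_labels, report.works_with_weights)
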
Examples

from river import metrics

y_true = ['pear', 'apple', 'banana', 'banana', 'banana']
y_pred = ['apple', 'pear', 'banana', 'banana', 'apple']

report = metrics.ClassificationReport()

for yt, yp in zip(y_true, y_pred):
    report.update(yt, yp)

print(report)
               Precision   Recall   F1       Support
<BLANKLINE>
   apple       0.00%    0.00%    0.00%         1
  banana     100.00%   66.67%   80.00%         3
    pear       0.00%    0.00%    0.00%         1
<BLANKLINE>
   Macro      33.33%   22.22%   26.67%
   Micro      40.00%   40.00%   40.00%
Weighted      60.00%   40.00%   48.00%
<BLANKLINE>
                 40.00% accuracy

Methods

get

Return the current value of the metric.

is_better_than

Indicate if the current metric is better than another one.

Parameters

  • other

revert

Revert the metric.

Parameters

  • y_true
  • y_pred
  • w — defaults to 1.0
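
A minimal sketch of revert, using only calls shown on this page: it undoes a previous update made with the same arguments, which is what utils.Rolling relies on to drop observations that leave the window.

from river import metrics

report = metrics.ClassificationReport()

report.update('banana', 'banana')
report.update('banana', 'apple')

# Undo the second observation; the report again reflects a single correct prediction.
report.revert('banana', 'apple')

print(report)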

update

Update the metric.

Parameters

  • y_true
  • y_pred
  • w — defaults to 1.0

works_with

Indicates whether a metric can work with a given model.

Parameters

  • model
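
As a sketch of works_with, assuming the compatibility check is based on the model's type; the expected result for a classifier is an assumption noted in the comment.

from river import metrics, tree

report = metrics.ClassificationReport()

# A classifier such as HoeffdingTreeClassifier is expected to be compatible (True).
print(report.works_with(tree.HoeffdingTreeClassifier()))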