MicroFBeta

Micro-average F-Beta score.

This computes a global F-Beta score by pooling the predictions and true labels across all classes, rather than averaging per-class scores.
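The pooling can be sketched in plain Python: sum the true positives, false positives, and false negatives over every class, then apply the usual F-Beta formula to the pooled counts. This is an illustrative sketch of the computation, not river's implementation.

```python
def micro_fbeta(y_true, y_pred, beta):
    """Micro-averaged F-Beta: pool TP/FP/FN over all classes first."""
    classes = set(y_true) | set(y_pred)
    tp = fp = fn = 0
    for c in classes:
        tp += sum(yt == c and yp == c for yt, yp in zip(y_true, y_pred))
        fp += sum(yt != c and yp == c for yt, yp in zip(y_true, y_pred))
        fn += sum(yt == c and yp != c for yt, yp in zip(y_true, y_pred))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)
```

On the example data used below (`y_true = [0, 1, 2, 2, 0]`, `y_pred = [0, 1, 1, 2, 1]`) this gives 0.6, matching the 60.00% reported by the metric.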

Parameters

  • beta (float)

    Weight of recall in the harmonic mean: values of beta greater than 1 favor recall, while values less than 1 favor precision.

  • cm – defaults to None

    This parameter allows sharing the same confusion matrix between multiple metrics. Sharing a confusion matrix reduces the amount of storage and computation time.
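The idea behind sharing can be illustrated with a plain `Counter` of `(true, predicted)` pairs standing in for the confusion matrix: several metrics read from the same counts, so each observation is stored and updated only once. This is a hedged sketch of the concept, not river's internal data structure.

```python
from collections import Counter

# Shared "confusion matrix": counts of (true label, predicted label) pairs.
cm = Counter()

def update(cm, y_true, y_pred, sample_weight=1.0):
    cm[(y_true, y_pred)] += sample_weight

def micro_precision(cm):
    # Pooled TP over pooled predicted positives (= all samples, single-label).
    tp = sum(n for (yt, yp), n in cm.items() if yt == yp)
    total = sum(cm.values())
    return tp / total if total else 0.0

def micro_recall(cm):
    # Pooled TP over pooled actual positives; in a single-label stream the
    # denominator is again the total number of samples.
    tp = sum(n for (yt, yp), n in cm.items() if yt == yp)
    total = sum(cm.values())
    return tp / total if total else 0.0

for yt, yp in zip([0, 1, 2, 2, 0], [0, 1, 1, 2, 1]):
    update(cm, yt, yp)  # one update feeds both metrics
```

Both metrics above derive their value lazily from the shared counts, which is what makes sharing cheap: updating the stream touches one structure, not one per metric.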

Attributes

  • bigger_is_better

    Indicates whether a high value is better than a low one.

  • requires_labels

    Indicates if labels are required, rather than probabilities.

  • works_with_weights

    Indicates whether the model takes sample weights into account.

Examples

>>> from river import metrics

>>> y_true = [0, 1, 2, 2, 0]
>>> y_pred = [0, 1, 1, 2, 1]

>>> metric = metrics.MicroFBeta(beta=2)
>>> for yt, yp in zip(y_true, y_pred):
...     metric = metric.update(yt, yp)

>>> metric
MicroFBeta: 60.00%

Methods

get

Return the current value of the metric.

is_better_than

Indicate if the current metric is better than another metric.

revert

Revert the metric.

Parameters

  • y_true
  • y_pred
  • sample_weight – defaults to 1.0
update

Update the metric.

Parameters

  • y_true
  • y_pred
  • sample_weight – defaults to 1.0
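`update` and `revert` are symmetric: reverting an observation with the same `sample_weight` exactly undoes the corresponding update, which is what enables rolling-window variants of a metric. A minimal sketch of this mechanism over a weighted confusion matrix (hypothetical class, not river's actual implementation):

```python
from collections import Counter, deque

class MicroMetricSketch:
    """Hypothetical streaming metric with update/revert over a
    weighted confusion matrix. get() returns pooled TP / total weight."""

    def __init__(self):
        self.cm = Counter()
        self.total = 0.0

    def update(self, y_true, y_pred, sample_weight=1.0):
        self.cm[(y_true, y_pred)] += sample_weight
        self.total += sample_weight
        return self

    def revert(self, y_true, y_pred, sample_weight=1.0):
        # Exact inverse of update: subtract the same weight.
        self.cm[(y_true, y_pred)] -= sample_weight
        self.total -= sample_weight
        return self

    def get(self):
        tp = sum(n for (yt, yp), n in self.cm.items() if yt == yp)
        return tp / self.total if self.total else 0.0

# Rolling window of size 3: revert the sample that falls out of the window.
metric, window = MicroMetricSketch(), deque(maxlen=3)
for yt, yp in zip([0, 1, 2, 2, 0], [0, 1, 1, 2, 1]):
    if len(window) == window.maxlen:
        metric.revert(*window.popleft())
    window.append((yt, yp))
    metric.update(yt, yp)
```

After the loop, the metric reflects only the last three observations, of which one is correct.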
works_with

Indicates whether a metric can work with a given model.

Parameters

  • model (river.base.estimator.Estimator)

References

  1. Why are precision, recall and F1 score equal when using micro averaging in a multi-class problem?
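The reference above addresses why, under micro averaging in a single-label multi-class problem, precision, recall, and F-Beta all collapse to accuracy: each misclassified sample contributes exactly one pooled false positive (for the predicted class) and one pooled false negative (for the true class), so the pooled denominators coincide. A quick check on illustrative data:

```python
y_true = [0, 1, 2, 2, 0, 1]
y_pred = [0, 2, 2, 0, 0, 1]
classes = set(y_true) | set(y_pred)

tp = sum(yt == yp for yt, yp in zip(y_true, y_pred))
fp = sum(yp == c and yt != c for c in classes for yt, yp in zip(y_true, y_pred))
fn = sum(yt == c and yp != c for c in classes for yt, yp in zip(y_true, y_pred))

# Every error counts once as a FP and once as a FN when pooled over classes.
assert fp == fn == len(y_true) - tp

precision = tp / (tp + fp)
recall = tp / (tp + fn)
accuracy = tp / len(y_true)
```

Since micro precision equals micro recall, their harmonic mean (for any beta) is that same value, which is also the accuracy.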