MicroFBeta¶
Micro-average F-Beta score.
This computes the F-Beta score by pooling the predictions and true labels across all classes, and then computing a single, global F-Beta score from the pooled counts of true positives, false positives and false negatives.
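For intuition, here is a minimal sketch of that computation from pooled counts. The function name micro_fbeta and its structure are illustrative only and are not part of river's API.

from collections import Counter

def micro_fbeta(y_true, y_pred, beta):
    # Count true positives, false positives and false negatives per class.
    tp, fp, fn = Counter(), Counter(), Counter()
    for yt, yp in zip(y_true, y_pred):
        if yt == yp:
            tp[yt] += 1
        else:
            fp[yp] += 1
            fn[yt] += 1
    # Pool the counts over all classes, then compute one global F-Beta score.
    tp_sum, fp_sum, fn_sum = sum(tp.values()), sum(fp.values()), sum(fn.values())
    precision = tp_sum / (tp_sum + fp_sum)
    recall = tp_sum / (tp_sum + fn_sum)
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)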
Parameters¶
- beta — Type → float
  Weight of recall in the harmonic mean: values above 1 favour recall over precision, while values below 1 favour precision.
- cm — Default → None
  This parameter allows sharing the same confusion matrix between multiple metrics. Sharing a confusion matrix reduces the amount of storage and computation time (see the sketch after this list).
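As a sketch of the sharing pattern, assuming metrics.ConfusionMatrix and metrics.MacroF1 are available in your river version: the shared matrix is updated once per sample and each metric reads its value from the shared counts (updating every metric separately would record each sample more than once).

from river import metrics

cm = metrics.ConfusionMatrix()
micro_f2 = metrics.MicroFBeta(beta=2, cm=cm)
macro_f1 = metrics.MacroF1(cm=cm)

for yt, yp in zip([0, 1, 2, 2, 0], [0, 1, 1, 2, 1]):
    # Update the shared matrix once; both metrics see the new counts.
    cm.update(yt, yp)

print(micro_f2.get(), macro_f1.get())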
Attributes¶
- bigger_is_better
  Indicates whether a high value is better than a low one.
- requires_labels
  Indicates whether labels are required, rather than probabilities.
- works_with_weights
  Indicates whether the metric takes sample weights into account.
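These flags can be inspected directly on an instance. The values noted in the comments are what one would expect for a label-based classification metric such as this one; verify them against your river version.

from river import metrics

metric = metrics.MicroFBeta(beta=1)
print(metric.bigger_is_better)    # expected True: a higher F-Beta is better
print(metric.requires_labels)     # expected True: the metric compares hard labels
print(metric.works_with_weights)  # whether weighted updates are taken into account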
Examples¶
from river import metrics
y_true = [0, 1, 2, 2, 0]
y_pred = [0, 1, 1, 2, 1]
metric = metrics.MicroFBeta(beta=2)
for yt, yp in zip(y_true, y_pred):
    metric.update(yt, yp)
metric
MicroFBeta: 60.00%
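The value can be checked by hand. Three of the five predictions are correct, and in single-label classification every error contributes exactly one pooled false positive and one pooled false negative, so micro precision and micro recall are both 3/5 = 0.6. Because precision equals recall here, the F-Beta score is 0.6 for any beta:

(1 + 2**2) * 0.6 * 0.6 / (2**2 * 0.6 + 0.6)  # 0.6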
Methods¶
get
Return the current value of the metric.
is_better_than
Indicates whether the current metric is better than another one.
Parameters
- other
revert
Revert the metric.
Parameters
- y_true
- y_pred
- w — defaults to 1.0
update
Update the metric.
Parameters
- y_true
- y_pred
- w — defaults to 1.0
works_with
Indicates whether or not a metric can work with a given model.
Parameters
- model — 'base.Estimator'
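A brief illustration of how update, revert and get interact; the values noted in the comments follow from the micro-average definition:

from river import metrics

metric = metrics.MicroFBeta(beta=2)
metric.update(0, 0)   # one correct prediction
metric.update(1, 2)   # one error
print(metric.get())   # 1 correct out of 2, so 0.5
metric.revert(1, 2)   # undo the last update
print(metric.get())   # only the correct prediction remains, so 1.0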