ExactMatch¶
Exact match score.
This is the most strict multi-label metric, defined as the number of samples that have all their labels correctly classified, divided by the total number of samples.
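For intuition, the score can be sketched in a few lines of plain Python. This is only an illustration of the definition above, not River's internal implementation, and the function name is made up:

```python
# Illustrative sketch of the definition above (not River's implementation).
def exact_match(y_true, y_pred):
    # A sample counts only if *every* label in it is predicted correctly,
    # which for dict-encoded labels is plain dict equality.
    matches = sum(yt == yp for yt, yp in zip(y_true, y_pred))
    return matches / len(y_true)
```

Applied to the example further down, only the third sample matches on all labels, giving 1 out of 3, i.e. 33.33%.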
Attributes¶
- bigger_is_better — Indicate if a high value is better than a low one.
- requires_labels — Indicates if labels are required, rather than probabilities.
- works_with_weights — Indicate whether the metric takes sample weights into account.
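These attributes can be read directly from a metric instance. A minimal check (the printed values are not shown here, since they depend on the metric):

```python
from river import metrics

metric = metrics.multioutput.ExactMatch()
print(metric.bigger_is_better)     # a higher exact match score is better
print(metric.requires_labels)
print(metric.works_with_weights)
```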
Examples¶
```python
from river import metrics

y_true = [
    {0: False, 1: True, 2: True},
    {0: True, 1: True, 2: False},
    {0: True, 1: True, 2: False},
]

y_pred = [
    {0: True, 1: True, 2: True},
    {0: True, 1: False, 2: False},
    {0: True, 1: True, 2: False},
]

metric = metrics.multioutput.ExactMatch()
for yt, yp in zip(y_true, y_pred):
    metric = metric.update(yt, yp)

metric
```

```
ExactMatch: 33.33%
```
Methods¶
get
Return the current value of the metric.
is_better_than
Indicate if the current metric is better than another one.
Parameters
- other
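A small, illustrative comparison between two ExactMatch instances; the samples are made up for the sketch:

```python
from river import metrics

metric_a = metrics.multioutput.ExactMatch()
metric_b = metrics.multioutput.ExactMatch()

metric_a.update({0: True, 1: False}, {0: True, 1: False})  # fully correct
metric_b.update({0: True, 1: False}, {0: True, 1: True})   # one label wrong

# ExactMatch has bigger_is_better set, so the higher score wins.
print(metric_a.is_better_than(metric_b))  # expected: True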
revert
Revert the metric.
Parameters
- y_true — 'dict[str | int, base.typing.ClfTarget]'
- y_pred — 'dict[str | int, base.typing.ClfTarget] | dict[str | int, dict[base.typing.ClfTarget, float]]'
- sample_weight — defaults to 1.0
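A sketch of how revert undoes a previous update; the samples are illustrative:

```python
from river import metrics

metric = metrics.multioutput.ExactMatch()
metric.update({0: True, 1: False}, {0: True, 1: False})
before = metric.get()

# Score one extra sample, then roll it back.
metric.update({0: True, 1: True}, {0: False, 1: True})
metric.revert({0: True, 1: True}, {0: False, 1: True})

print(before, metric.get())  # the two values should coincide
```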
update
Update the metric.
Parameters
- y_true — 'dict[str | int, base.typing.ClfTarget]'
- y_pred — 'dict[str | int, base.typing.ClfTarget] | dict[str | int, dict[base.typing.ClfTarget, float]]'
- sample_weight — defaults to 1.0
works_with
Indicates whether a metric can work with a given model.
Parameters
- model
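A sketch of the compatibility check, assuming a classifier chain from river.multioutput as one example of a multi-label model:

```python
from river import linear_model, metrics, multioutput

metric = metrics.multioutput.ExactMatch()
model = multioutput.ClassifierChain(linear_model.LogisticRegression())

# works_with reports whether ExactMatch can score this kind of model.
print(metric.works_with(model))
```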