FBeta

Binary F-Beta score.

The F-beta score is a weighted harmonic mean of precision and recall. The higher the beta value, the more weight is given to recall. When beta equals 1, precision and recall are weighted equally, which yields the F1 score (see metrics.F1).
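The weighting can be written as F_beta = (1 + beta²) · P · R / (beta² · P + R). A minimal pure-Python sketch of this formula (the function name `fbeta_score` is illustrative, not river's API):

```python
def fbeta_score(precision: float, recall: float, beta: float) -> float:
    """Weighted harmonic mean of precision and recall.

    F_beta = (1 + beta^2) * P * R / (beta^2 * P + R)
    """
    b2 = beta ** 2
    denom = b2 * precision + recall
    if denom == 0:
        return 0.0
    return (1 + b2) * precision * recall / denom
```

With beta=1 this reduces to the plain harmonic mean 2·P·R / (P + R), i.e. the F1 score; beta=2 weighs recall more heavily, beta=0.5 weighs precision more heavily.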

Parameters

  • beta

    Type: float

    Weight of recall relative to precision in the harmonic mean; values above 1 favor recall, values below 1 favor precision.

  • cm

    Default: None

    This parameter allows sharing the same confusion matrix between multiple metrics. Sharing a confusion matrix reduces the amount of storage and computation time.

  • pos_val

    Default: True

    Value to treat as "positive".
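The cm parameter above lets several metrics read from one confusion matrix so each sample is counted once. A toy sketch of the idea (the `SharedConfusionMatrix`, `precision`, and `recall` names are illustrative, not river's implementation):

```python
from collections import Counter

class SharedConfusionMatrix:
    """Toy confusion matrix that several metrics can read from."""

    def __init__(self):
        self.counts = Counter()  # keys are (y_true, y_pred) pairs

    def update(self, y_true, y_pred):
        self.counts[(y_true, y_pred)] += 1

def precision(cm, pos_val=True):
    tp = cm.counts[(pos_val, pos_val)]
    predicted_pos = sum(n for (yt, yp), n in cm.counts.items() if yp == pos_val)
    return tp / predicted_pos if predicted_pos else 0.0

def recall(cm, pos_val=True):
    tp = cm.counts[(pos_val, pos_val)]
    actual_pos = sum(n for (yt, yp), n in cm.counts.items() if yt == pos_val)
    return tp / actual_pos if actual_pos else 0.0

cm = SharedConfusionMatrix()
for yt, yp in zip([False, False, False, True, True, True],
                  [False, False, True, True, False, False]):
    cm.update(yt, yp)  # one update serves both metrics below
```

Both `precision(cm)` and `recall(cm)` now derive their values from the same counts, which is what sharing a confusion matrix saves in storage and computation.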

Attributes

Examples

from river import metrics

y_true = [False, False, False, True, True, True]
y_pred = [False, False, True, True, False, False]

metric = metrics.FBeta(beta=2)
for yt, yp in zip(y_true, y_pred):
    metric.update(yt, yp)

metric
FBeta: 35.71%
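The 35.71% above can be reproduced by hand from the confusion counts of the same example (plain Python, independent of river):

```python
y_true = [False, False, False, True, True, True]
y_pred = [False, False, True, True, False, False]

tp = sum(yt and yp for yt, yp in zip(y_true, y_pred))        # true positives: 1
fp = sum((not yt) and yp for yt, yp in zip(y_true, y_pred))  # false positives: 1
fn = sum(yt and (not yp) for yt, yp in zip(y_true, y_pred))  # false negatives: 2

precision = tp / (tp + fp)  # 1/2
recall = tp / (tp + fn)     # 1/3

beta = 2
f2 = (1 + beta**2) * precision * recall / (beta**2 * precision + recall)
```

Here `f2` comes out to 5/14 ≈ 0.3571, matching the 35.71% reported by the metric.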

Methods

get

Return the current value of the metric.

is_better_than

Indicate if the current metric is better than another one.

Parameters

  • other

revert

Revert the metric.

Parameters

  • y_true — 'bool'
  • y_pred — 'bool | float | dict[bool, float]'
  • w — defaults to 1.0

update

Update the metric.

Parameters

  • y_true — 'bool'
  • y_pred — 'bool | float | dict[bool, float]'
  • w — defaults to 1.0
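Because the score is derived from running counts, update can add a sample's weight and revert can subtract it again. A hypothetical minimal streaming sketch of that mechanism (not river's implementation; true negatives are ignored since F-beta does not use them):

```python
class StreamingFBeta:
    """Toy streaming F-beta with weighted update and revert."""

    def __init__(self, beta: float):
        self.b2 = beta ** 2
        self.tp = self.fp = self.fn = 0.0

    def update(self, y_true: bool, y_pred: bool, w: float = 1.0):
        if y_true and y_pred:
            self.tp += w
        elif (not y_true) and y_pred:
            self.fp += w
        elif y_true and (not y_pred):
            self.fn += w  # true negatives are not needed for F-beta

    def revert(self, y_true: bool, y_pred: bool, w: float = 1.0):
        self.update(y_true, y_pred, -w)  # undo by subtracting the weight

    def get(self) -> float:
        # F_beta = (1 + b^2) * TP / ((1 + b^2) * TP + b^2 * FN + FP)
        num = (1 + self.b2) * self.tp
        denom = num + self.b2 * self.fn + self.fp
        return num / denom if denom else 0.0

m = StreamingFBeta(beta=2)
for yt, yp in zip([False, False, False, True, True, True],
                  [False, False, True, True, False, False]):
    m.update(yt, yp)
```

After the loop, `m.get()` gives 5/14 ≈ 0.3571, the document's example value; reverting one false negative with `m.revert(True, False)` restores the counts as if that sample had never been seen.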

works_with

Indicates whether a metric can work with a given model.

Parameters