VariationInfo

Variation of Information.

Variation of Information (VI) [1] [2] is an information-based clustering measure. It is presented as a distance for comparing two partitions (or clusterings) of the same data, and as such it does not distinguish between the hypothesised and the target clustering. VI has a number of useful properties:

  • VI satisfies the metric axioms; in particular, it is symmetric in its two arguments (see the sketch after this list).

  • VI is convexly additive. This means that, if a cluster is split, the distance from the new clustering to the original one is the distance induced by the split, weighted by the relative size of the split cluster. This guarantees that all changes to the metric are "local".

  • VI is not affected by the number of data points: it depends only on the relative sizes of the clusters. It is, however, bounded by the logarithm of the maximum number of clusters in the true and predicted labels.
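
A quick numerical check of the symmetry property: exchanging the roles of the true and predicted labels must leave the distance unchanged. This is a minimal sketch reusing the running example from the Examples section below (the rounding only guards against floating-point noise):

>>> from river import metrics

>>> y_true = [1, 1, 2, 2, 3, 3]
>>> y_pred = [1, 1, 1, 2, 2, 2]

>>> vi = metrics.VariationInfo()
>>> vi_swapped = metrics.VariationInfo()
>>> for yt, yp in zip(y_true, y_pred):
...     vi = vi.update(yt, yp)
...     vi_swapped = vi_swapped.update(yp, yt)  # roles exchanged

>>> round(vi.get(), 6) == round(vi_swapped.get(), 6)
True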

The Variation of Information is calculated using the following formula:

\[ VI(C, K) = H(C) + H(K) - 2 I(C, K) = H(C|K) + H(K|C) \]

where \(H(\cdot)\) denotes the entropy of a clustering, \(H(\cdot|\cdot)\) the conditional entropy, and \(I(C, K)\) the mutual information between the two clusterings.

The bound on the variation of information [3] can be written in terms of the number of elements \(n\), \(VI(C, K) \leq \log(n)\), or with respect to the maximum number of clusters \(K^*\), \(VI(C, K) \leq 2 \log(K^*)\). Both bounds are checked in the sketch below.
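
The formula and both bounds can be verified against the final value of the example further down. The following batch sketch is independent of River; it assumes base-2 logarithms, which is what the doctest values in the Examples section correspond to:

>>> from collections import Counter
>>> import math

>>> y_true = [1, 1, 2, 2, 3, 3]
>>> y_pred = [1, 1, 1, 2, 2, 2]
>>> n = len(y_true)

>>> def H(counts):
...     """Shannon entropy (base 2) of a distribution given as counts."""
...     return -sum(c / n * math.log2(c / n) for c in counts.values())

>>> # VI(C, K) = 2 H(C, K) - H(C) - H(K), equivalent to H(C) + H(K) - 2 I(C, K)
>>> vi = 2 * H(Counter(zip(y_true, y_pred))) - H(Counter(y_true)) - H(Counter(y_pred))
>>> round(vi, 6)
1.251629

>>> vi <= math.log2(n)  # VI(C, K) <= log(n)
True
>>> vi <= 2 * math.log2(max(len(set(y_true)), len(set(y_pred))))  # VI(C, K) <= 2 log(K*)
True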

Parameters

  • cm – defaults to None

    This parameter allows sharing the same confusion matrix between multiple metrics. Sharing a confusion matrix reduces the amount of storage and computation time.
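
For instance, the confusion matrix behind VariationInfo can also back another confusion-matrix-based metric. A minimal sketch of the idea, assuming that metrics.ConfusionMatrix and metrics.MutualInfo support this sharing pattern and that updating a single metric is enough, since the update lands in the shared matrix (updating both metrics would count every sample twice):

>>> from river import metrics

>>> cm = metrics.ConfusionMatrix()
>>> vi = metrics.VariationInfo(cm=cm)
>>> mi = metrics.MutualInfo(cm=cm)

>>> for yt, yp in zip([1, 1, 2, 2, 3, 3], [1, 1, 1, 2, 2, 2]):
...     vi = vi.update(yt, yp)  # also fills the shared confusion matrix

>>> mi.get() > 0.0  # reads from the shared matrix without being updated itself
True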

Attributes

  • bigger_is_better

Indicates whether a high value is better than a low one.

  • requires_labels

    Indicates if labels are required, rather than probabilities.

  • sample_correction

  • works_with_weights

Indicates whether the metric takes sample weights into account.

Examples

>>> from river import metrics

>>> y_true = [1, 1, 2, 2, 3, 3]
>>> y_pred = [1, 1, 1, 2, 2, 2]

>>> metric = metrics.VariationInfo()

>>> for yt, yp in zip(y_true, y_pred):
...     print(metric.update(yt, yp).get())
0.0
0.0
0.9182958340544896
1.1887218755408673
1.3509775004326938
1.2516291673878228

>>> metric
VariationInfo: 1.251629

Methods

clone

Return a fresh estimator with the same parameters.

The clone has the same parameters but has not been updated with any data. This works by looking at the parameters of the class signature. Each parameter is either recursively cloned, if it is a River class, or deep-copied via copy.deepcopy otherwise. If the calling object is stochastic (i.e. it accepts a seed parameter) and has not been seeded, then the clone will not be idempotent. Indeed, this method's purpose is simply to return a new instance with the same input parameters.

get

Return the current value of the metric.

revert

Revert the metric.

Parameters

  • y_true
  • y_pred
  • sample_weight – defaults to 1.0
  • correction – defaults to None
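
For example, reverting the last (y_true, y_pred) pair restores the value the metric had one step earlier. The numbers below are taken from the Examples section above; it is assumed that revert, like update, returns the metric itself:

>>> from river import metrics

>>> metric = metrics.VariationInfo()
>>> for yt, yp in zip([1, 1, 2, 2, 3, 3], [1, 1, 1, 2, 2, 2]):
...     metric = metric.update(yt, yp)

>>> metric.get()
1.2516291673878228

>>> metric.revert(3, 2).get()  # undo the last pair; back to the fifth value above
1.3509775004326938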
update

Update the metric.

Parameters

  • y_true
  • y_pred
  • sample_weight – defaults to 1.0

works_with

Indicates whether or not a metric can work with a given model.

Parameters

  • model (river.base.estimator.Estimator)

References

  1. Andrew Rosenberg and Julia Hirschberg (2007). V-Measure: A conditional entropy-based external cluster evaluation measure. Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pp. 410-420, Prague, June 2007.

  2. Marina Meila and David Heckerman (2001). An experimental comparison of model-based clustering methods. Machine Learning, 42(1/2):9-29.

  3. Wikipedia contributors. (2021, February 18). Variation of information. In Wikipedia, The Free Encyclopedia, from https://en.wikipedia.org/w/index.php?title=Variation_of_information&oldid=1007562715