ComplementNB

Naive Bayes classifier for multinomial models.

The Complement Naive Bayes model learns from the co-occurrence of features (such as word counts) and discrete classes. ComplementNB is well suited to imbalanced datasets. The input vector must contain positive values, such as counts or TF-IDF values.
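
The gist of the complement approach, sketched below in plain Python, is that each class's feature log-weights are estimated from the pooled counts of every other class, with additive smoothing (the alpha parameter described under Parameters). This is only an illustration of the technique from Rennie et al. (2003), not River's internal implementation; the helper complement_log_weights and its input format are invented for the example.

import math
from collections import Counter, defaultdict

def complement_log_weights(docs, alpha=1.0):
    """Per-class log-weights estimated from the counts of all *other* classes."""
    counts = defaultdict(Counter)        # class -> token -> frequency
    vocab = set()
    for bag, label in docs:              # each doc is ({token: count}, class)
        counts[label].update(bag)
        vocab.update(bag)

    weights = {}
    for c in counts:
        comp = Counter()                 # counts pooled over every class except c
        for other, bag in counts.items():
            if other != c:
                comp.update(bag)
        total = sum(comp.values())
        weights[c] = {
            t: math.log((alpha + comp[t]) / (alpha * len(vocab) + total))
            for t in vocab
        }
    return weights

A document x is then scored per class as sum(x[t] * weights[c][t] for t in x); since the weights describe the complement of each class, the chosen class is the one whose complement fits the document worst, which is what makes the estimate less sensitive to skewed class frequencies.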

Parameters

  • alpha – defaults to 1.0

    Additive (Laplace/Lidstone) smoothing parameter (use 0 for no smoothing).

Attributes

  • class_dist (proba.Multinomial)

    Class prior probability distribution.

  • feature_counts (collections.defaultdict)

    Total frequencies per feature and class.

  • class_totals (collections.Counter)

    Total frequencies per class.
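
These attributes are ordinary Python containers, so they can be inspected directly on a fitted model. A brief illustration, assuming the pipeline trained in the examples below (only the attribute names listed above are used; outputs are omitted):

>>> nb = model["nb"]
>>> priors = nb.class_dist       # prior distribution over the observed classes
>>> totals = nb.class_totals     # total feature frequency seen per class
>>> counts = nb.feature_counts   # per-feature, per-class frequencies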

Examples

>>> import pandas as pd
>>> from river import compose
>>> from river import feature_extraction
>>> from river import naive_bayes

>>> docs = [
...     ("Chinese Beijing Chinese", "yes"),
...     ("Chinese Chinese Shanghai", "yes"),
...     ("Chinese Macao", "maybe"),
...     ("Tokyo Japan Chinese", "no")
... ]

>>> model = compose.Pipeline(
...     ("tokenize", feature_extraction.BagOfWords(lowercase=False)),
...     ("nb", naive_bayes.ComplementNB(alpha=1))
... )

>>> for sentence, label in docs:
...     model = model.learn_one(sentence, label)

>>> model["nb"].p_class("yes")
0.5

>>> model["nb"].p_class("no")
0.25

>>> model["nb"].p_class("maybe")
0.25

>>> model.predict_proba_one("test")
{'yes': 0.275, 'maybe': 0.375, 'no': 0.35}

>>> model.predict_one("test")
'maybe'

You can train the model and make predictions in mini-batch mode using the learn_many and predict_many methods.

>>> df_docs = pd.DataFrame(docs, columns=["docs", "y"])

>>> X = pd.Series([
...    "Chinese Beijing Chinese",
...    "Chinese Chinese Shanghai",
...    "Chinese Macao",
...    "Tokyo Japan Chinese"
... ])

>>> y = pd.Series(["yes", "yes", "maybe", "no"])

>>> model = compose.Pipeline(
...     ("tokenize", feature_extraction.BagOfWords(lowercase=False)),
...     ("nb", naive_bayes.ComplementNB(alpha=1))
... )

>>> model = model.learn_many(X, y)

>>> unseen = pd.Series(["Taiwanese Taipei", "Chinese Shanghai"])

>>> model.predict_proba_many(unseen)
      maybe        no       yes
0  0.415129  0.361624  0.223247
1  0.248619  0.216575  0.534807

>>> model.predict_many(unseen)
0    maybe
1      yes
dtype: object

Methods

clone

Return a fresh estimator with the same parameters.

The clone has the same parameters but has not been updated with any data. This works by looking at the parameters from the class signature. Each parameter is either recursively cloned if it is a River class, or deep-copied via copy.deepcopy otherwise. If the calling object is stochastic (i.e. it accepts a seed parameter) and has not been seeded, then the clone will not be idempotent. Indeed, this method's purpose is simply to return a new instance with the same input parameters.
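
A minimal illustration, assuming the fitted pipeline from the examples above: the clone keeps the hyperparameters (here alpha=1) but none of the learned counts, and is a distinct object.

>>> fresh = model.clone()
>>> fresh is model
False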

joint_log_likelihood

Computes the joint log likelihood of input features.

Parameters

  • x (dict)

Returns

dict: Mapping between classes and joint log likelihood.
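
For illustration, the method can be called on the naive Bayes step directly; since that step sits behind a BagOfWords transformer, the input has to be a bag-of-words dict rather than raw text. A sketch assuming the fitted pipeline from the examples above (exact values not shown):

>>> jll = model["nb"].joint_log_likelihood({"Chinese": 2, "Tokyo": 1})
>>> best = max(jll, key=jll.get)   # same argmax that predict_one applies after normalisation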

joint_log_likelihood_many

Computes the joint log likelihood of input features.

Parameters

  • X (pandas.core.frame.DataFrame)

Returns

DataFrame: Input samples joint log likelihood.
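
The batch variant takes a count matrix instead. The sketch below builds one by hand with pandas, assuming the pipeline fitted with learn_many in the examples above; how tokens unseen during training are handled is not shown, and the one-column-per-class shape of the result is inferred from the predict_proba_many example.

>>> X_counts = pd.DataFrame([
...     {"Chinese": 2, "Shanghai": 1},
...     {"Tokyo": 1, "Japan": 1, "Chinese": 1},
... ]).fillna(0.0)
>>> jll = model["nb"].joint_log_likelihood_many(X_counts)   # one row per sample, one column per class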

learn_many

Learn from a batch of count vectors.

Parameters

  • X (pandas.core.frame.DataFrame)
  • y (pandas.core.series.Series)

Returns

MiniBatchClassifier: self

learn_one

Updates the model with a single observation.

Parameters

  • x (dict)
  • y (Union[bool, str, int])

Returns

Classifier: self

p_class
p_class_many
predict_many

Predict the outcome for each given sample.

Parameters

  • X (pandas.core.frame.DataFrame)

Returns

Series: The predicted labels.

predict_one

Predict the label of a set of features x.

Parameters

  • x (dict)

Returns

typing.Union[bool, str, int]: The predicted label.

predict_proba_many

Return probabilities using the log-likelihoods in a mini-batch setting.

Parameters

  • X (pandas.core.frame.DataFrame)

predict_proba_one

Return probabilities using the log-likelihoods.

Parameters

  • x (dict)
