HOFMRegressor

Higher-Order Factorization Machine for regression.

The model equation is defined as:

\[\hat{y}(x) = w_{0} + \sum_{j=1}^{p} w_{j} x_{j} + \sum_{l=2}^{d} \sum_{j_1=1}^{p} \cdots \sum_{j_l=j_{l-1}+1}^{p} \left(\prod_{j'=1}^{l} x_{j_{j'}} \right) \left(\sum_{f=1}^{k_l} \prod_{j'=1}^{l} v_{j_{j'}, f}^{(l)} \right)\]

For efficiency, this model automatically one-hot encodes string features, treating them as categorical variables.
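The nested sums in the model equation can be unrolled literally with `itertools.combinations`: for each interaction order `l` from 2 to `degree`, every size-`l` feature combination contributes the product of its feature values times a factorized weight. The sketch below is purely illustrative, not River's implementation (which evaluates the interactions far more efficiently); `hofm_predict` and its weight layout are hypothetical names:

```python
import itertools
import math

def hofm_predict(x, w0, w, V, degree, n_factors):
    """Evaluate the HOFM equation naively.

    x  -- dict of feature name -> value
    w0 -- intercept
    w  -- dict of per-feature linear weights
    V  -- dict mapping each order l (2..degree) to {feature: latent vector}
    """
    # intercept + linear terms
    y = w0 + sum(w.get(j, 0.0) * xj for j, xj in x.items())
    # interaction terms of each order l = 2..degree
    for l in range(2, degree + 1):
        for combo in itertools.combinations(sorted(x), l):
            x_prod = math.prod(x[j] for j in combo)
            # factorized weight: sum over the k_l latent factors of the
            # product of each feature's order-l latent entry
            weight = sum(
                math.prod(V[l][j][f] for j in combo)
                for f in range(n_factors)
            )
            y += x_prod * weight
    return y
```

With `degree=2` this reduces to the classic factorization machine: each pairwise weight is the dot product of the two features' latent vectors.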

Parameters

  • degree – defaults to 3

    Polynomial degree or model order.

  • n_factors – defaults to 10

    Dimensionality of the factorization or number of latent factors.

  • weight_optimizer (optim.base.Optimizer) – defaults to None

The sequential optimizer used for updating the feature weights. Note that the intercept is handled separately.

  • latent_optimizer (optim.base.Optimizer) – defaults to None

    The sequential optimizer used for updating the latent factors.

  • loss (optim.losses.RegressionLoss) – defaults to None

    The loss function to optimize for.

  • sample_normalization – defaults to False

    Whether to divide each element of x by x's L2-norm.

  • l1_weight – defaults to 0.0

    Amount of L1 regularization used to push weights towards 0.

  • l2_weight – defaults to 0.0

    Amount of L2 regularization used to push weights towards 0.

  • l1_latent – defaults to 0.0

    Amount of L1 regularization used to push latent weights towards 0.

  • l2_latent – defaults to 0.0

    Amount of L2 regularization used to push latent weights towards 0.

  • intercept – defaults to 0.0

    Initial intercept value.

  • intercept_lr (Union[optim.base.Scheduler, float]) – defaults to 0.01

    Learning rate scheduler used for updating the intercept. An instance of optim.schedulers.Constant is used if a float is passed. No intercept will be used if this is set to 0.

  • weight_initializer (optim.base.Initializer) – defaults to None

    Weights initialization scheme. Defaults to optim.initializers.Zeros().

  • latent_initializer (optim.base.Initializer) – defaults to None

    Latent factors initialization scheme. Defaults to optim.initializers.Normal(mu=.0, sigma=.1, random_state=self.random_state).

  • clip_gradient – defaults to 1000000000000.0

    Clips the absolute value of each gradient value.

  • seed (int) – defaults to None

    Randomization seed used for reproducibility.
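When `weight_optimizer`, `latent_optimizer`, or `loss` are left as `None`, sensible defaults are used. A hedged sketch of configuring them explicitly, assuming River's `optim` module (the specific learning rates and regularization amounts here are arbitrary, for illustration only):

```python
from river import facto, optim

# Separate SGD instances so weights and latent factors can
# learn at different rates, plus a squared loss and light L2
# regularization on both parameter groups.
model = facto.HOFMRegressor(
    degree=2,
    n_factors=5,
    weight_optimizer=optim.SGD(0.025),
    latent_optimizer=optim.SGD(0.05),
    loss=optim.losses.Squared(),
    l2_weight=0.001,
    l2_latent=0.001,
    seed=42,
)
```

Using a smaller learning rate for the feature weights than for the latent factors is a common heuristic, since the linear terms typically converge faster than the factorized interactions.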

Attributes

  • weights

    The current weights assigned to the features.

  • latents

    The current latent weights assigned to the features.

Examples

>>> from river import facto

>>> dataset = (
...     ({'user': 'Alice', 'item': 'Superman', 'time': .12}, 8),
...     ({'user': 'Alice', 'item': 'Terminator', 'time': .13}, 9),
...     ({'user': 'Alice', 'item': 'Star Wars', 'time': .14}, 8),
...     ({'user': 'Alice', 'item': 'Notting Hill', 'time': .15}, 2),
...     ({'user': 'Alice', 'item': 'Harry Potter ', 'time': .16}, 5),
...     ({'user': 'Bob', 'item': 'Superman', 'time': .13}, 8),
...     ({'user': 'Bob', 'item': 'Terminator', 'time': .12}, 9),
...     ({'user': 'Bob', 'item': 'Star Wars', 'time': .16}, 8),
...     ({'user': 'Bob', 'item': 'Notting Hill', 'time': .10}, 2)
... )

>>> model = facto.HOFMRegressor(
...     degree=3,
...     n_factors=10,
...     intercept=5,
...     seed=42,
... )

>>> for x, y in dataset:
...     _ = model.learn_one(x, y)

>>> model.predict_one({'user': 'Bob', 'item': 'Harry Potter', 'time': .14})
5.311745

>>> report = model.debug_one({'user': 'Bob', 'item': 'Harry Potter', 'time': .14})

>>> print(report)
Name                                  Value      Weight     Contribution
                          Intercept    1.00000    5.23495        5.23495
                           user_Bob    1.00000    0.11436        0.11436
                               time    0.14000    0.03185        0.00446
                    user_Bob - time    0.14000    0.00884        0.00124
user_Bob - item_Harry Potter - time    0.14000    0.00117        0.00016
                  item_Harry Potter    1.00000    0.00000        0.00000
           item_Harry Potter - time    0.14000   -0.00695       -0.00097
       user_Bob - item_Harry Potter    1.00000   -0.04246       -0.04246

Methods

clone

Return a fresh estimator with the same parameters.

The clone has the same parameters but has not been updated with any data. This works by looking at the parameters from the class signature. Each parameter is either recursively cloned if it is a River class, or deep-copied via copy.deepcopy otherwise. If the calling object is stochastic (i.e. it accepts a seed parameter) and has not been seeded, then the clone will not be idempotent. Indeed, this method's purpose is simply to return a new instance with the same input parameters.

debug_one

Debugs the output of the FM regressor.

Parameters

  • x (dict)
  • decimals (int) – defaults to 5

Returns

str: A table which explains the output.

learn_one

Fits to a set of features x and a real-valued target y.

Parameters

  • x (dict)
  • y (numbers.Number)
  • sample_weight – defaults to 1.0

Returns

Regressor: self

predict_one

Predicts the target value of a set of features x.

Parameters

  • x

Returns

The prediction.