LinearRegression

Linear regression.

This estimator supports learning with mini-batches. On top of the single instance methods, it provides the following methods: learn_many and predict_many. Each method takes as input a pandas.DataFrame where each column represents a feature.
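
Here is a minimal mini-batch sketch; the feature names and values in the DataFrame below are made up for illustration and are not part of this page:

import pandas as pd
from river import linear_model

X = pd.DataFrame({"x1": [0.0, 1.0, 2.0], "x2": [1.0, 0.5, 0.2]})
y = pd.Series([1.0, 2.5, 4.0])

model = linear_model.LinearRegression()
model.learn_many(X, y)    # update the weights with the whole mini-batch
model.predict_many(X)     # returns a pandas.Series of predictions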

It is generally a good idea to scale the data beforehand in order for the optimizer to converge. You can do this online with a preprocessing.StandardScaler.

Parameters

  • optimizer

    Type: optim.base.Optimizer | None

    Default: None

    The sequential optimizer used for updating the weights. Note that the intercept updates are handled separately. A configuration sketch is given after this parameter list.

  • loss

    Type: optim.losses.RegressionLoss | None

    Default: None

    The loss function to optimize for.

  • l2

    Default: 0.0

    Amount of L2 regularization used to push weights towards 0. For now, only one type of penalty can be used. The joint use of L1 and L2 is not explicitly supported.

  • l1

    Default: 0.0

    Amount of L1 regularization used to push weights towards 0. For now, only one type of penalty can be used. The joint use of L1 and L2 is not explicitly supported.

  • intercept_init

    Default: 0.0

    Initial intercept value.

  • intercept_lr

    Type: optim.base.Scheduler | float

    Default: 0.01

    Learning rate scheduler used for updating the intercept. An optim.schedulers.Constant is used if a float is provided. The intercept is not updated when this is set to 0.

  • clip_gradient

    Default: 1000000000000.0

    Clips the absolute value of each gradient to this threshold.

  • initializer

    Type: optim.base.Initializer | None

    Default: None

    Weights initialization scheme.
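
The sketch below shows how these parameters fit together; the particular optimizer, loss, and values are arbitrary choices for illustration rather than recommended settings.

from river import linear_model
from river import optim

model = linear_model.LinearRegression(
    optimizer=optim.SGD(lr=0.01),    # sequential optimizer for the weights
    loss=optim.losses.Squared(),     # regression loss to minimize
    l2=0.001,                        # L2 penalty (pick either l1 or l2, not both)
    intercept_lr=0.1,                # constant learning rate for the intercept
)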

Attributes

  • weights (dict)

    The current weights.
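
As a minimal sketch, weights maps each feature name to its learned coefficient; the single training instance below is made up:

from river import linear_model

model = linear_model.LinearRegression()
model.learn_one({"x": 1.0}, 2.0)    # one optimization step on a single instance
model.weights                       # {'x': <learned coefficient for "x">}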

Examples

from river import datasets
from river import evaluate
from river import linear_model
from river import metrics
from river import preprocessing

dataset = datasets.TrumpApproval()

model = (
    preprocessing.StandardScaler() |
    linear_model.LinearRegression(intercept_lr=.1)
)
metric = metrics.MAE()

evaluate.progressive_val_score(dataset, model, metric)
MAE: 0.558735

model['LinearRegression'].intercept
35.617670

You can call the debug_one method to break down a prediction. This works even if the linear regression is part of a pipeline.

x, y = next(iter(dataset))
report = model.debug_one(x)
print(report)
0. Input
--------
gallup: 43.84321 (float)
ipsos: 46.19925 (float)
morning_consult: 48.31875 (float)
ordinal_date: 736389 (int)
rasmussen: 44.10469 (float)
you_gov: 43.63691 (float)
<BLANKLINE>
1. StandardScaler
-----------------
gallup: 1.18810 (float)
ipsos: 2.10348 (float)
morning_consult: 2.73545 (float)
ordinal_date: -1.73032 (float)
rasmussen: 1.26872 (float)
you_gov: 1.48391 (float)
<BLANKLINE>
2. LinearRegression
-------------------
Name              Value      Weight      Contribution
      Intercept    1.00000    35.61767       35.61767
          ipsos    2.10348     0.62689        1.31866
morning_consult    2.73545     0.24180        0.66144
         gallup    1.18810     0.43568        0.51764
      rasmussen    1.26872     0.28118        0.35674
        you_gov    1.48391     0.03123        0.04634
   ordinal_date   -1.73032     3.45162       -5.97242
<BLANKLINE>
Prediction: 32.54607

Methods

debug_one

Debugs the output of the linear regression.

Parameters

  • x ('dict')
  • decimals ('int') — defaults to 5

Returns

str: A table which explains the output.

learn_many

Update the model with a mini-batch of features X and real-valued targets y.

Parameters

  • X ('pd.DataFrame')
  • y ('pd.Series')
  • w ('float | pd.Series') — defaults to 1

learn_one

Fits to a set of features x and a real-valued target y. A usage sketch follows the parameter list.

Parameters

  • x ('dict')
  • y ('base.typing.RegTarget')
  • w — defaults to 1.0
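
A rough single-instance training sketch on the dataset used in the examples above, scaling each instance before it is learned:

from river import datasets
from river import linear_model
from river import preprocessing

scaler = preprocessing.StandardScaler()
model = linear_model.LinearRegression()

for x, y in datasets.TrumpApproval():
    scaler.learn_one(x)
    x = scaler.transform_one(x)    # scale the features online
    model.learn_one(x, y)          # then take one optimization step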

predict_many

Predict the outcome for each given sample.

Parameters

  • X

Returns

The predicted outcomes.

predict_one

Predict the output of features x (see the sketch below).

Parameters

  • x

Returns

The prediction.
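
A minimal prediction sketch; the feature dicts are made up for illustration:

from river import linear_model

model = linear_model.LinearRegression()
model.learn_one({"x1": 1.0, "x2": 0.5}, 3.0)
model.predict_one({"x1": 2.0, "x2": 1.0})    # returns a single float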