Pipelines
Pipelines are an integral part of River. We encourage their use and apply them in many of the examples.
The `compose.Pipeline` class contains all the logic for building and applying pipelines. A pipeline is essentially a list of estimators that are applied in sequence. The only requirement is that the first n - 1 steps be transformers. The last step can be a regressor, a classifier, a clusterer, a transformer, etc.
Here is an example:
from river import compose
from river import linear_model
from river import preprocessing
from river import feature_extraction

model = compose.Pipeline(
    preprocessing.StandardScaler(),
    feature_extraction.PolynomialExtender(),
    linear_model.LinearRegression()
)
You can also use the `|` operator, like so:
model = (
    preprocessing.StandardScaler() |
    feature_extraction.PolynomialExtender() |
    linear_model.LinearRegression()
)
Or, equally:
model = preprocessing.StandardScaler()
model |= feature_extraction.PolynomialExtender()
model |= linear_model.LinearRegression()
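The three constructions above are equivalent. Whichever way the pipeline is built, each part can be accessed by its class name, something we rely on further down. A quick illustrative snippet:

scaler = model["StandardScaler"]
lin_reg = model["LinearRegression"]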
A pipeline, like any River estimator, has a `_repr_html_` method, which can be used to visualize it in Jupyter-like notebooks:
model
StandardScaler (
    with_std=True
)
PolynomialExtender (
    degree=2
    interaction_only=False
    include_bias=False
    bias_name="bias"
)
LinearRegression (
    optimizer=SGD (
        lr=Constant (
            learning_rate=0.01
        )
    )
    loss=Squared ()
    l2=0.
    l1=0.
    intercept_init=0.
    intercept_lr=Constant (
        learning_rate=0.01
    )
    clip_gradient=1e+12
    initializer=Zeros ()
)
`compose.Pipeline` implements a `learn_one` method, which calls the `learn_one` of each component in sequence, as well as a `predict_one` (resp. `predict_proba_one`) method, which calls `transform_one` on the first n - 1 steps and `predict_one` (resp. `predict_proba_one`) on the last step.
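To make this delegation concrete, here is a conceptual sketch of what the pipeline's `predict_one` boils down to. It is not River's actual implementation; it merely assumes the parts are reachable through the pipeline's `steps` mapping:

def pipeline_predict_one(pipeline, x):
    # Split the pipeline into its transformers and its final predictor.
    *transformers, predictor = pipeline.steps.values()
    # Run the sample through the first n - 1 steps...
    for transformer in transformers:
        x = transformer.transform_one(x)
    # ...then let the last step make the prediction.
    return predictor.predict_one(x)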
Here is a small example to illustrate the previous point:
from river import datasets
dataset = datasets.TrumpApproval()
x, y = next(iter(dataset))
x, y
(
    {
        'ordinal_date': 736389,
        'gallup': 43.843213,
        'ipsos': 46.19925042857143,
        'morning_consult': 48.318749,
        'rasmussen': 44.104692,
        'you_gov': 43.636914000000004
    },
    43.75505
)
We can predict the target value of a new sample by calling the `predict_one` method. However, by default, `predict_one` does not update any model parameters. The predictions will therefore be 0 and the model parameters will keep their default values (0 for the `StandardScaler` component):
for (x, y) in dataset.take(2):
    print(f"{model.predict_one(x)=:.2f}, {y=:.2f}")
    print(f"{model['StandardScaler'].means = }")
model.predict_one(x)=0.00, y=43.76
model['StandardScaler'].means = defaultdict(<class 'float'>, {'ordinal_date': 0.0, 'gallup': 0.0, 'ipsos': 0.0, 'morning_consult': 0.0, 'rasmussen': 0.0, 'you_gov': 0.0})
model.predict_one(x)=0.00, y=43.71
model['StandardScaler'].means = defaultdict(<class 'float'>, {'ordinal_date': 0.0, 'gallup': 0.0, 'ipsos': 0.0, 'morning_consult': 0.0, 'rasmussen': 0.0, 'you_gov': 0.0})
`learn_one` updates the pipeline's stateful steps; the parameters, and hence the predictions, change accordingly:
for (x, y) in dataset.take(2):
    model.learn_one(x, y)
    print(f"{model.predict_one(x)=:.2f}, {y=:.2f}")
    print(f"{model['StandardScaler'].means = }")
model.predict_one(x)=0.88, y=43.76
model['StandardScaler'].means = defaultdict(<class 'float'>, {'ordinal_date': 736389.0, 'gallup': 43.843213, 'ipsos': 46.19925042857143, 'morning_consult': 48.318749, 'rasmussen': 44.104692, 'you_gov': 43.636914000000004})
model.predict_one(x)=9.44, y=43.71
model['StandardScaler'].means = defaultdict(<class 'float'>, {'ordinal_date': 736389.5, 'gallup': 43.843213, 'ipsos': 46.19925042857143, 'morning_consult': 48.318749, 'rasmussen': 45.104692, 'you_gov': 42.636914000000004})
Each component of the pipeline has been updated with the new data point.
A pipeline is a very powerful tool that can be used to chain together multiple steps in a machine learning workflow.
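Putting it all together, a typical online learning loop with a pipeline interleaves prediction, metric updates, and learning, one sample at a time. Below is a minimal sketch of that pattern; it works on a fresh, untrained copy (via `clone`) so that the pipeline trained above is left untouched, and the same idea reappears in the performance comparison further down:

from river import metrics

# Fresh copy with the same hyperparameters, so the pipeline above keeps its state.
fresh_model = model.clone()
metric = metrics.MAE()

for x_i, y_i in datasets.TrumpApproval():
    # Predict first, score the prediction, then learn from the ground truth.
    y_pred = fresh_model.predict_one(x_i)
    metric.update(y_i, y_pred)
    fresh_model.learn_one(x_i, y_i)

metric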
Notice that it is also possible to call `transform_one` on a pipeline. This runs `transform_one` on each transformer and returns the output of the last transformer, which is the penultimate step if the last step is a predictor or clusterer, and the final step if the last step is itself a transformer:
model.transform_one(x)
{
    'ordinal_date': 1.0,
    'gallup': 0.0,
    'ipsos': 0.0,
    'morning_consult': 0.0,
    'rasmussen': 1.0,
    'you_gov': -1.0,
    'ordinal_date*ordinal_date': 1.0,
    'gallup*ordinal_date': 0.0,
    'ipsos*ordinal_date': 0.0,
    'morning_consult*ordinal_date': 0.0,
    'ordinal_date*rasmussen': 1.0,
    'ordinal_date*you_gov': -1.0,
    'gallup*gallup': 0.0,
    'gallup*ipsos': 0.0,
    'gallup*morning_consult': 0.0,
    'gallup*rasmussen': 0.0,
    'gallup*you_gov': -0.0,
    'ipsos*ipsos': 0.0,
    'ipsos*morning_consult': 0.0,
    'ipsos*rasmussen': 0.0,
    'ipsos*you_gov': -0.0,
    'morning_consult*morning_consult': 0.0,
    'morning_consult*rasmussen': 0.0,
    'morning_consult*you_gov': -0.0,
    'rasmussen*rasmussen': 1.0,
    'rasmussen*you_gov': -1.0,
    'you_gov*you_gov': 1.0
}
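As a sanity check, chaining the two transformers by hand should give the same result as calling `transform_one` on the pipeline (a quick sketch, output not shown):

scaled = model["StandardScaler"].transform_one(x)
features = model["PolynomialExtender"].transform_one(scaled)
assert features == model.transform_one(x)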
In many cases, you might want to connect a step to multiple steps. For instance, you might want to extract different kinds of features from a single input. An elegant way to do this is to use a `compose.TransformerUnion`. Essentially, the latter is a list of transformers whose results will be merged into a single `dict` when `transform_one` is called.
As an example, let's say that we want to apply a `feature_extraction.RBFSampler` as well as the `feature_extraction.PolynomialExtender`. This can be done like so:
model = (
    preprocessing.StandardScaler() |
    (feature_extraction.PolynomialExtender() + feature_extraction.RBFSampler()) |
    linear_model.LinearRegression()
)
model
StandardScaler (
    with_std=True
)
PolynomialExtender (
    degree=2
    interaction_only=False
    include_bias=False
    bias_name="bias"
)
RBFSampler (
    gamma=1.
    n_components=100
    seed=None
)
LinearRegression (
    optimizer=SGD (
        lr=Constant (
            learning_rate=0.01
        )
    )
    loss=Squared ()
    l2=0.
    l1=0.
    intercept_init=0.
    intercept_lr=Constant (
        learning_rate=0.01
    )
    clip_gradient=1e+12
    initializer=Zeros ()
)
Note that the `+` symbol acts as a shorthand notation for creating a `compose.TransformerUnion`, which means that we could have declared the above pipeline like so:
model = (
    preprocessing.StandardScaler() |
    compose.TransformerUnion(
        feature_extraction.PolynomialExtender(),
        feature_extraction.RBFSampler()
    ) |
    linear_model.LinearRegression()
)
Pipelines provide the benefit of removing a lot of cruft by taking care of tedious details for you. They also enable you to clearly define what steps your model is made of. Finally, having your model in a single object means that you can move it around more easily.
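For instance, since the whole workflow lives in a single Python object, it can be serialized and restored in one go. Here is a minimal sketch using `pickle`; the file name is just an example:

import pickle

# Persist the scaler, the feature extractors, and the regressor all at once.
with open("model.pkl", "wb") as f:
    pickle.dump(model, f)

# Later on, or on another machine, load it back and keep learning/predicting.
with open("model.pkl", "rb") as f:
    model = pickle.load(f)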
Note that you can include user-defined functions in a pipeline by using a `compose.FuncTransformer`.
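As a quick, hypothetical sketch, the function below derives an extra feature from the raw input dict of the TrumpApproval dataset (the feature name and the transformation are made up for illustration); note that, in this sketch, only the features returned by the function flow into the subsequent steps:

import math

def log_ordinal_date(x):
    # Derive a single, illustrative feature from the raw input dict.
    return {"log_ordinal_date": math.log(x["ordinal_date"])}

model_with_func = (
    compose.FuncTransformer(log_ordinal_date) |
    preprocessing.StandardScaler() |
    linear_model.LinearRegression()
)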
Learning during predict
In online machine learning, we can update the unsupervised parts of our model when a sample arrives. We don't really have to wait for the ground truth to arrive in order to update unsupervised estimators that don't depend on it. In other words, in a pipeline, `learn_one` updates the supervised parts, whilst `predict_one` (or `predict_proba_one` for that matter) can update the unsupervised parts, which often yields better results.
In River, we can achieve this behavior using a dedicated context manager: `compose.learn_during_predict`. Here is the same example as before, the only difference being that learning during predict is activated:
model = (
    preprocessing.StandardScaler() |
    feature_extraction.PolynomialExtender() |
    linear_model.LinearRegression()
)

with compose.learn_during_predict():
    for (x, y) in dataset.take(2):
        print(f"{model.predict_one(x)=:.2f}, {y=:.2f}")
        print(f"{model['StandardScaler'].means = }")
model.predict_one(x)=0.00, y=43.76
model['StandardScaler'].means = defaultdict(<class 'float'>, {'ordinal_date': 736389.0, 'gallup': 43.843213, 'ipsos': 46.19925042857143, 'morning_consult': 48.318749, 'rasmussen': 44.104692, 'you_gov': 43.636914000000004})
model.predict_one(x)=0.00, y=43.71
model['StandardScaler'].means = defaultdict(<class 'float'>, {'ordinal_date': 736389.5, 'gallup': 43.843213, 'ipsos': 46.19925042857143, 'morning_consult': 48.318749, 'rasmussen': 45.104692, 'you_gov': 42.636914000000004})
Calling `predict_one` within this context will update each transformer of the pipeline. For instance, we can see here that the means of the standard scaler step have been updated. On the other hand, the supervised part of our pipeline, the linear regression, has not learned anything yet. Hence the prediction for any sample will be nil, because every weight is still equal to 0.
model.predict_one(x), model["LinearRegression"].weights
(0.0, {})
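Once the ground truth is available and `learn_one` is called, the supervised part catches up as well. After the call below, the weights dictionary should no longer be empty (a quick sketch, output not shown):

model.learn_one(x, y)
model["LinearRegression"].weights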
Performance Comparison
One may wonder what the advantage of learning during predict is. Let's compare the performance of a pipeline with and without learning during predict, while varying the fraction of samples for which the ground truth is available and `learn_one` is actually called:
from contextlib import nullcontext

from river import metrics

import pandas as pd


def score_pipeline(learn_during_predict: bool, n_learning_samples: int | None = None) -> float:
    """Scores a pipeline on the TrumpApproval dataset.

    Parameters
    ----------
    learn_during_predict : bool
        Whether or not to learn the unsupervised components during the prediction step.
        If False, the model only learns when `learn_one` is explicitly called.
    n_learning_samples : int | None
        Number of samples used to `learn_one`.

    Returns
    -------
    MAE : float
        Mean absolute error of the pipeline on the dataset.
    """
    dataset = datasets.TrumpApproval()

    model = (
        preprocessing.StandardScaler() |
        linear_model.LinearRegression()
    )

    metric = metrics.MAE()

    ctx = compose.learn_during_predict if learn_during_predict else nullcontext
    n_learning_samples = n_learning_samples or dataset.n_samples

    with ctx():
        for _idx, (x, y) in enumerate(dataset):
            y_pred = model.predict_one(x)
            metric.update(y, y_pred)

            if _idx < n_learning_samples:
                model.learn_one(x, y)

    return metric.get()
max_samples = datasets.TrumpApproval().n_samples

results = [
    {
        "learn_during_predict": learn_during_predict,
        "pct_learning_samples": round(100 * n_learning_samples / max_samples, 0),
        "mae": score_pipeline(learn_during_predict=learn_during_predict, n_learning_samples=n_learning_samples),
    }
    for learn_during_predict in (True, False)
    for n_learning_samples in range(max_samples, max_samples // 10, -(max_samples // 10))
]

(
    pd.DataFrame(results)
    .pivot(columns="learn_during_predict", index="pct_learning_samples", values="mae")
    .sort_index(ascending=False)
    .style.format_index("{0}%")
)
| pct_learning_samples | learn_during_predict=False | learn_during_predict=True |
|---|---|---|
| 100.0% | 1.314548 | 1.347434 |
| 90.0% | 1.629333 | 1.355274 |
| 80.0% | 2.712125 | 1.371599 |
| 70.0% | 4.840620 | 1.440773 |
| 60.0% | 8.918634 | 1.498240 |
| 50.0% | 15.112753 | 1.878434 |
| 40.0% | 26.387331 | 2.105553 |
| 30.0% | 42.997083 | 3.654709 |
| 20.0% | 90.703102 | 3.504950 |
| 10.0% | 226.836953 | 4.803600 |
As the table above shows, the scores are comparable only when the percentage of learning samples is above 90%. Below that, the score without learning during predict degrades quickly as the percentage of learning samples decreases, and the gap becomes very large (one order of magnitude or more) when less than 50% of the samples are used for learning.
Although this is a simple case, it illustrates how powerful it can be to learn unsupervised components during predict.