# warm_up_mode
A context manager for training pipelines during a warm-up phase.
You don't have to worry about anything when you call `predict_one` and `learn_one` with a pipeline in a training loop: the methods at each step of the pipeline are called in the correct order. However, during a warm-up phase, you might only be calling `learn_one`, because you don't need the out-of-sample predictions. In that case the unsupervised estimators in the pipeline won't be updated, because they are usually updated when `predict_one` is called.

This context manager overrides that behavior, so that unsupervised estimators are also updated when `learn_one` is called.
## Examples
Let's first see which methods are called if we just call `learn_one`.
```python
import io
import logging

from river import anomaly
from river import compose
from river import datasets
from river import preprocessing
from river import utils

model = compose.Pipeline(
    preprocessing.MinMaxScaler(),
    anomaly.HalfSpaceTrees()
)

class_condition = lambda x: x.__class__.__name__ in ('MinMaxScaler', 'HalfSpaceTrees')

logger = logging.getLogger()
logger.setLevel(logging.DEBUG)

logs = io.StringIO()
sh = logging.StreamHandler(logs)
sh.setLevel(logging.DEBUG)
logger.addHandler(sh)

with utils.log_method_calls(class_condition):
    for x, y in datasets.CreditCard().take(1):
        model = model.learn_one(x)

print(logs.getvalue())
```

```
MinMaxScaler.transform_one
HalfSpaceTrees.learn_one
```
Now let's use the context manager and see which methods get called.
```python
logs = io.StringIO()
sh = logging.StreamHandler(logs)
sh.setLevel(logging.DEBUG)
logger.addHandler(sh)

with utils.log_method_calls(class_condition), compose.warm_up_mode():
    for x, y in datasets.CreditCard().take(1):
        model = model.learn_one(x)

print(logs.getvalue())
```

```
MinMaxScaler.learn_one
MinMaxScaler.transform_one
HalfSpaceTrees.learn_one
```
We can see that the scaler got updated before transforming the data.
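Under the hood, this kind of context manager can be implemented with a module-level flag that pipeline steps consult when deciding whether to update themselves. Below is a minimal, self-contained sketch of that pattern; it is not River's actual code, and the `WARM_UP` flag and the toy `Scaler` class are illustrative assumptions.

```python
import contextlib

# Illustrative module-level flag; a pipeline step checks it to decide
# whether to update itself during learn_one.
WARM_UP = False


@contextlib.contextmanager
def warm_up_mode():
    """Temporarily raise the flag, restoring it even if an error occurs."""
    global WARM_UP
    WARM_UP = True
    try:
        yield
    finally:
        WARM_UP = False


class Scaler:
    """Toy unsupervised step: normally updates during predict_one only."""

    def __init__(self):
        self.n_seen = 0

    def learn_one(self, x):
        # Outside warm-up mode, an unsupervised step does nothing here.
        if WARM_UP:
            self.n_seen += 1

    def predict_one(self, x):
        # The usual place where an unsupervised step updates itself.
        self.n_seen += 1
        return x


scaler = Scaler()
scaler.learn_one({'a': 1})      # no update outside warm-up mode
with warm_up_mode():
    scaler.learn_one({'a': 1})  # update happens here
print(scaler.n_seen)            # -> 1
```

The `try`/`finally` ensures the flag is reset even if the training loop inside the `with` block raises, so a failed warm-up phase can't leave the pipeline stuck in warm-up behavior.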