Perceptron¶
Perceptron classifier.
In this implementation, the Perceptron is viewed as a special case of logistic regression: the loss function is the hinge loss with its threshold set to 0, and the learning rate of the stochastic gradient descent procedure is fixed to 1 for both the weights and the intercept.
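To make this concrete, the snippet below sketches how a roughly equivalent model could be assembled by hand from LogisticRegression, SGD and the Hinge loss. It illustrates the stated equivalence and is not the library's actual source code.

from river import linear_model, optim

# Sketch of the equivalence described above: a logistic regression driven by
# SGD with a constant learning rate of 1 (for both the weights and the
# intercept) and a hinge loss whose threshold is 0 behaves like a Perceptron.
perceptron_like = linear_model.LogisticRegression(
    optimizer=optim.SGD(1),
    intercept_lr=1,
    loss=optim.losses.Hinge(threshold=0),
    l2=0.0,
)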
Parameters¶
- l2 — Default → 0.0
  Amount of L2 regularization used to push weights towards 0.
- clip_gradient — Default → 1e12
  Clips the absolute value of each gradient value.
- initializer — Type → optim.initializers.Initializer | None — Default → None
  Weights initialization scheme.
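As a quick illustration, the three parameters above might be set as follows; the concrete values and the choice of a Normal initializer are arbitrary and only serve the example.

from river import linear_model as lm
from river import optim

model = lm.Perceptron(
    l2=0.01,                # push weights towards 0
    clip_gradient=1e3,      # clip each gradient value to [-1e3, 1e3]
    initializer=optim.initializers.Normal(mu=0.0, sigma=0.01, seed=42),
)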
Attributes¶
- weights
  The current weights.
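After at least one update, the weights can be inspected as a mapping from feature names to coefficients; the feature names below are made up for the example.

from river import linear_model as lm

model = lm.Perceptron()
model.learn_one({"x1": 1.0, "x2": -2.0}, True)
model.weights  # dict-like mapping, e.g. {'x1': ..., 'x2': ...}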
Examples¶
>>> from river import datasets
>>> from river import evaluate
>>> from river import linear_model as lm
>>> from river import metrics
>>> from river import preprocessing as pp

>>> dataset = datasets.Phishing()

>>> model = pp.StandardScaler() | lm.Perceptron()

>>> metric = metrics.Accuracy()

>>> evaluate.progressive_val_score(dataset, model, metric)
Accuracy: 85.84%
Methods¶
learn_many
Update the model with a mini-batch of features X and boolean targets y.
Parameters
- X — 'pd.DataFrame'
- y — 'pd.Series'
- w — 'float | pd.Series' — defaults to 1
Returns
MiniBatchClassifier: self
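A hypothetical mini-batch update, with made-up feature names and labels, could look like this:

import pandas as pd
from river import linear_model as lm

model = lm.Perceptron()

# Toy mini-batch: one row per sample, one column per feature.
X = pd.DataFrame({"x1": [0.2, -0.7, 1.3], "x2": [1.5, 0.3, -0.4]})
y = pd.Series([True, False, True])

model.learn_many(X, y)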
learn_one
Update the model with a set of features x and a label y.
Parameters
- x — 'dict'
- y — 'base.typing.ClfTarget'
- w — defaults to 1.0
Returns
Classifier: self
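For instance, observations can be fed in one at a time, each as a dict of features together with its boolean label (illustrative values):

from river import linear_model as lm

model = lm.Perceptron()
model.learn_one({"x1": 0.2, "x2": 1.5}, True)
model.learn_one({"x1": -0.7, "x2": 0.3}, False)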
predict_many
Predict the outcome for each given sample.
Parameters
- X — 'pd.DataFrame'
Returns
pd.Series: The predicted labels.
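A small usage sketch with made-up data; the call returns a pd.Series of labels indexed like X:

import pandas as pd
from river import linear_model as lm

model = lm.Perceptron()
model.learn_many(
    pd.DataFrame({"x1": [0.2, -0.7], "x2": [1.5, 0.3]}),
    pd.Series([True, False]),
)

predictions = model.predict_many(pd.DataFrame({"x1": [0.1], "x2": [1.0]}))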
predict_one
Predict the label of a set of features x.
Parameters
- x — 'dict'
- kwargs
Returns
base.typing.ClfTarget | None: The predicted label.
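For example (illustrative features):

from river import linear_model as lm

model = lm.Perceptron()
model.learn_one({"x1": 0.2, "x2": 1.5}, True)

label = model.predict_one({"x1": 0.1, "x2": 1.0})  # a single label, or None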
predict_proba_many
Predict the outcome probabilities for each given sample.
Parameters
- X — 'pd.DataFrame'
Returns
pd.DataFrame: A dataframe with probabilities of True and False for each sample.
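Sketch of a call, again with made-up data; the result has one row per sample and one column per class (False and True):

import pandas as pd
from river import linear_model as lm

model = lm.Perceptron()
model.learn_many(
    pd.DataFrame({"x1": [0.2, -0.7], "x2": [1.5, 0.3]}),
    pd.Series([True, False]),
)

probas = model.predict_proba_many(pd.DataFrame({"x1": [0.1], "x2": [1.0]}))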
predict_proba_one
Predict the probability of each label for a dictionary of features x.
Parameters
- x
Returns
A dictionary that associates a probability with each label.
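For example (illustrative features; the exact probabilities depend on the data seen so far):

from river import linear_model as lm

model = lm.Perceptron()
model.learn_one({"x1": 0.2, "x2": 1.5}, True)

probas = model.predict_proba_one({"x1": 0.1, "x2": 1.0})  # e.g. {False: ..., True: ...}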