The actual implementation is the latter, i.e. Perceptron() is equivalent to
SGDClassifier(loss="perceptron", eta0=1, learning_rate="constant",
penalty=None). I agree that the claim in the documentation that training
the Perceptron is "slightly faster" and that the resulting model will be
sparser is not really accurate, since yi_pred will be in the domain of
real numbers. It might be worth updating the linear model documentation.
A small sketch contrasting the two update rules you describe is included
below the quoted message.

On Mon, Feb 23, 2015 at 11:14 AM, Sebastian Raschka <se.rasc...@gmail.com>
wrote:

> Hi all,
>
> I find the description of the perceptron classifier a little bit ambiguous
> and was wondering if it would be worthwhile to clarify it a little bit.
> What do you think?
>
>
> While browsing through the documentation at
> http://scikit-learn.org/stable/modules/linear_model.html I found the
> following paragraph about perceptrons:
>
> > The Perceptron is another simple algorithm suitable for large scale
> learning. By default:
> >       • It does not require a learning rate.
> >       • It is not regularized (penalized).
> >       • It updates its model only on mistakes.
> > The last characteristic implies that the Perceptron is slightly faster
> to train than SGD with the hinge loss and that the resulting models are
> sparser.
>
>
> To me, it sounds like the "classic" Rosenblatt Perceptron update rule
>
> weights = weights + eta(yi - yi_pred)xi
>
> where yi_pred = sign(w^T.x)     [yi_pred ele in {-1, 1 }]
>
>
> However, when I read the documentation on
> http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Perceptron.html#sklearn.linear_model.Perceptron
>
> > Perceptron and SGDClassifier share the same underlying implementation.
> In fact, Perceptron() is equivalent to SGDClassifier(loss=”perceptron”,
> eta0=1, learning_rate=”constant”, penalty=None).
>
> it sounds more like the slightly more modern online learning variant of
> gradient descent (i.e. stochastic gradient descent):
>
> weights = weights + eta(yi - yi_pred)xi
>
> where yi_pred = w^T.x     [yi_pred ele Real]
>
>
>
> Best,
> Sebastian
>
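
For reference, here is a minimal NumPy sketch of the two update rules
described in the quoted message, assuming labels yi in {-1, +1}; the
function names, the fixed learning rate eta, and n_epochs are illustrative
choices, not scikit-learn's internals:

import numpy as np

def fit_classic_perceptron(X, y, eta=1.0, n_epochs=10):
    # Rosenblatt rule: yi_pred = sign(w^T x) in {-1, +1}, so the
    # update is nonzero only for misclassified samples.
    w = np.zeros(X.shape[1])
    for _ in range(n_epochs):
        for xi, yi in zip(X, y):
            yi_pred = 1.0 if w @ xi >= 0 else -1.0
            w += eta * (yi - yi_pred) * xi
    return w

def fit_continuous_variant(X, y, eta=0.01, n_epochs=10):
    # Online gradient-descent-style rule: yi_pred = w^T x is
    # real-valued, so every sample contributes an update.
    w = np.zeros(X.shape[1])
    for _ in range(n_epochs):
        for xi, yi in zip(X, y):
            yi_pred = w @ xi
            w += eta * (yi - yi_pred) * xi
    return w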