So indeed, in sklearn's perceptron update, yi_pred is in {-1, 1}, not real-valued, right?

On 02/23/2015 08:35 AM, Mathieu Blondel wrote:
Rosenblatt's Perceptron is a special case of SGD, see:
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/tests/test_perceptron.py

The perceptron loss leads to sparser weight vectors than the hinge loss, in the sense that it updates the weight vector less aggressively: it updates only on mistakes, whereas the hinge loss also updates the model whenever the prediction is correct but not "good enough" (i.e., within the margin).
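
For example, the equivalence with SGDClassifier can be checked directly
(a minimal sketch; the dataset, epoch count, and no-shuffle setting are
arbitrary choices of mine):

    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.linear_model import Perceptron, SGDClassifier

    iris = load_iris()
    X, y = iris.data, iris.target

    # Identical training settings so the two runs are comparable.
    common = dict(max_iter=10, tol=None, shuffle=False, random_state=0)

    p = Perceptron(**common).fit(X, y)
    s = SGDClassifier(loss="perceptron", eta0=1.0, learning_rate="constant",
                      penalty=None, **common).fit(X, y)

    print(np.allclose(p.coef_, s.coef_))  # should print True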

Mathieu

On Mon, Feb 23, 2015 at 7:14 PM, Sebastian Raschka <se.rasc...@gmail.com> wrote:

    Hi all,

    I find the description of the perceptron classifier a bit
    ambiguous and was wondering whether it would be worthwhile to
    clarify it. What do you think?


    While browsing through the documentation at
    http://scikit-learn.org/stable/modules/linear_model.html I found
    the following paragraph about perceptrons:

    > The Perceptron is another simple algorithm suitable for large
    > scale learning. By default:
    >   • It does not require a learning rate.
    >   • It is not regularized (penalized).
    >   • It updates its model only on mistakes.
    > The last characteristic implies that the Perceptron is slightly
    > faster to train than SGD with the hinge loss and that the
    > resulting models are sparser.


    To me, it sounds like the "classic" Rosenblatt Perceptron update rule

    weights = weights + eta * (yi - yi_pred) * xi

    where yi_pred = sign(w^T x)     [yi_pred ∈ {-1, 1}]
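
    In code, the classic rule might look roughly like this (a NumPy
    sketch of my own; the bias term is omitted for brevity):

        import numpy as np

        def rosenblatt_step(w, xi, yi, eta=1.0):
            """One classic Perceptron update; yi in {-1, 1}."""
            yi_pred = 1.0 if w @ xi >= 0.0 else -1.0  # thresholded prediction
            # (yi - yi_pred) is 0 on a correct prediction, so the
            # weights move only on mistakes.
            return w + eta * (yi - yi_pred) * xi

        # On a mistake (here sign(0) = +1 but yi = -1) the weights move:
        w = rosenblatt_step(np.zeros(3), np.array([1.0, -2.0, 0.5]), yi=-1.0)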


    However, when I read the documentation on
    http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Perceptron.html#sklearn.linear_model.Perceptron

    > Perceptron and SGDClassifier share the same underlying
    > implementation. In fact, Perceptron() is equivalent to
    > SGDClassifier(loss="perceptron", eta0=1, learning_rate="constant",
    > penalty=None).

    it sounds more like the more modern online-learning variant of
    gradient descent (i.e., stochastic gradient descent):

    weights = weights + eta * (yi - yi_pred) * xi

    where yi_pred = w^T x     [yi_pred ∈ ℝ]
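
    In the same sketch style (my own names again; this is the raw
    gradient-style step with a real-valued prediction):

        import numpy as np

        def gradient_style_step(w, xi, yi, eta=0.01):
            """Update with a real-valued prediction; w, xi are arrays."""
            yi_pred = w @ xi  # real-valued, not thresholded
            # The error (yi - yi_pred) is rarely exactly 0, so the
            # weights move on (almost) every sample, not only on mistakes.
            return w + eta * (yi - yi_pred) * xi

        w = gradient_style_step(np.zeros(3), np.array([1.0, 2.0, 3.0]), yi=1.0)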



    Best,
    Sebastian
    