Hi all,

I have a question about the multiclass perceptron in scikit-learn. I noticed
that I only get one decision boundary (http://i.imgur.com/CfyxPbt.png) in a
simple 3-class setting.

from sklearn import datasets

iris = datasets.load_iris()
X = iris.data[:, [0, 2]]  # sepal length and petal length
y = iris.target
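
For context, here is a minimal sketch of how I plot the decision regions with
the plain Perceptron (the meshgrid/contourf part below is an illustrative
reconstruction, not my exact script):

import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import Perceptron

# Plain multiclass perceptron (newer scikit-learn versions use max_iter
# instead of n_iter).
clf = Perceptron(eta0=1.0, n_iter=20, random_state=123, shuffle=False)
clf.fit(X, y)

# Evaluate the classifier on a grid over the two features and shade the
# predicted regions.
xx, yy = np.meshgrid(np.arange(X[:, 0].min() - 1, X[:, 0].max() + 1, 0.02),
                     np.arange(X[:, 1].min() - 1, X[:, 1].max() + 1, 0.02))
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)

plt.contourf(xx, yy, Z, alpha=0.3)
plt.scatter(X[:, 0], X[:, 1], c=y, edgecolors='k')
plt.xlabel('sepal length [cm]')
plt.ylabel('petal length [cm]')
plt.show()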

I tried the perceptron both with and without OneVsRestClassifier, e.g.:

from sklearn.linear_model import Perceptron
from sklearn.multiclass import OneVsRestClassifier

ovr_clf = OneVsRestClassifier(
    estimator=Perceptron(alpha=1e-05, eta0=1.0, fit_intercept=True,
                         n_iter=20, penalty=None, random_state=123,
                         shuffle=False),
    n_jobs=1)
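
Both variants fit one binary weight vector per class (the plain Perceptron
already does one-vs-rest internally), so there should be three separating
hyperplanes rather than one. A quick check along these lines would show that,
continuing from the snippets above:

# Sanity check: one binary separator per class in both variants.
ovr_clf.fit(X, y)
print(len(ovr_clf.estimators_))   # 3 one-vs-rest perceptrons
print(clf.coef_.shape)            # (3, 2): one weight vector per class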

I also ran a grid search over the alpha and n_iter parameter space (rough
sketch below), but the results didn't improve, and now I am wondering why
that is. Shouldn't the multiclass perceptron produce something similar to
(but worse than) the linear SVM? Since the weights are updated for every
misclassified sample at each iteration, shouldn't there be a second
hyperplane that at least roughly separates the green and red classes in the
figure, given that the updates should drive the number of misclassifications
down?
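
For reference, the grid search was roughly along these lines (a sketch; the
grid values below are placeholders, not the exact ones I used):

from sklearn.grid_search import GridSearchCV  # sklearn.model_selection in newer versions

param_grid = {'alpha': [1e-5, 1e-4, 1e-3, 1e-2],
              'n_iter': [5, 10, 20, 50]}

gs = GridSearchCV(Perceptron(random_state=123, shuffle=False),
                  param_grid=param_grid, cv=5)
gs.fit(X, y)
print(gs.best_params_, gs.best_score_)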


Thanks,
Sebastian 