Hi again,

I looked at Oliver's diagrams. I plot the error, while Oliver plots the
classification score, which starts low (i.e., high error). So it's the same
problem. Or is this not a problem at all?
I'm getting confused.

I'm trying to grasp the concept based on
http://jakevdp.github.com/_images/plot_bias_variance_examples_4.png
where, in both cases, the training error starts small.

But then I looked at this picture:
http://jakevdp.github.com/_images/plot_bias_variance_examples_3.png
I see that for small degrees, the classifier always has a high
bias (all sets show high error rates).
When I use the Bayes classifier, what is the degree? Isn't it effectively
always 1, which would lead to the observed behavior?
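For concreteness, here is a minimal sketch of the kind of curve I mean, assuming a recent scikit-learn with `learning_curve` available (the synthetic dataset and the training-size grid are just made-up examples):

```python
# Sketch: training vs. validation accuracy of GaussianNB as the
# training set grows. GaussianNB has no complexity knob (no "degree"),
# so it behaves like a fixed, potentially high-bias model: both curves
# should converge to a similar plateau fairly quickly.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import learning_curve
from sklearn.naive_bayes import GaussianNB

# Hypothetical synthetic data standing in for the real problem.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

sizes, train_scores, val_scores = learning_curve(
    GaussianNB(), X, y, cv=5,
    train_sizes=np.linspace(0.1, 1.0, 5))

train_mean = train_scores.mean(axis=1)  # mean accuracy on the training folds
val_mean = val_scores.mean(axis=1)      # mean accuracy on the held-out folds

print(train_mean)
print(val_mean)
```

Plotting `train_mean` and `val_mean` against `sizes` would give the analogue of the linked bias/variance figures, but in score space rather than error space.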

Thanks!

_______________________________________________
Scikit-learn-general mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/scikit-learn-general
