>>> [(0.4, 0.5), (0.7, 0.3), (0.8, 0.2), (0.9, 0.1), (0.6, 0.4)] for five 
>>> classes
>>> showing the probability that the input does not belong/does belong to that
>>> class, respectively.
>>>
>> Yes, if you don't normalize.
>> You are aware that this is inconsistent when you are doing multi-class,
>> not multi-label, right?
>> If there is only one correct label, it cannot be label 2 with
>> probability .7 and label 3 with probability .8.
>>
> Those are "does not belong"/"does belong" pairs; the first number is the
> probability that the input is NOT part of the class. :)
Ok, but then the second entries should sum to one ;)
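
For concreteness, a minimal NumPy sketch of that normalization, using the
pair values from the example above (only the second, "does belong" entries
are kept and rescaled to sum to one):

    import numpy as np

    # (does-not-belong, does-belong) pairs from the example above.
    pairs = [(0.4, 0.5), (0.7, 0.3), (0.8, 0.2), (0.9, 0.1), (0.6, 0.4)]

    # Keep only the "does belong" entries and rescale them so they sum
    # to one, as a single-label multi-class distribution requires.
    belong = np.array([p[1] for p in pairs])
    probs = belong / belong.sum()

    print(probs)        # [0.333 0.2 0.133 0.067 0.267]
    print(probs.sum())  # 1.0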

>
> I'm starting to understand what you mean; the "[(0.4, 0.5), (0.7, 0.3), ..."
> values are achieved by taking the sigmoid of each value in the decision
> function, right? And if I then normalize that, I'll get something in the
> form of "[0.5, 0.3, 0.1, 0.05, 0.05]"? Apologies, I'm still new to some
> of this stuff!
That is exactly what I meant. So just do that; it will give you exactly
the same result as OvR.
You should train with loss="log" for it to be "meaningful".
Sorry for being brief, I should have been working ;)
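
A short sketch of the whole recipe, assuming an SGDClassifier on the iris
data (the dataset and hyperparameters here are just placeholders; note that
newer scikit-learn versions spell the loss "log_loss"):

    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.linear_model import SGDClassifier

    X, y = load_iris(return_X_y=True)

    # loss="log" fits one logistic (one-vs-rest) model per class, so the
    # sigmoid of each decision value is a per-class probability estimate.
    clf = SGDClassifier(loss="log", random_state=0).fit(X, y)

    scores = clf.decision_function(X[:3])    # shape (3, n_classes)
    sigmoid = 1.0 / (1.0 + np.exp(-scores))  # per-class "does belong" probs
    probs = sigmoid / sigmoid.sum(axis=1, keepdims=True)  # rows sum to 1

    print(probs)
    print(clf.predict_proba(X[:3]))  # should match: same OvR normalization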

