Actually, in mainstream computer vision there is ongoing work on comparing
the outputs of different classifiers in the form of (calibrated) probabilities.
Have a look at the paper "Ensemble of Exemplar-SVMs for Object Detection
and Beyond" by Tomasz Malisiewicz (ICCV 2011).
For example, in the Exemplar-SVM (ESVM) approach, a separate linear SVM
classifier is trained for every exemplar in the training set. Each of these
Exemplar-SVMs is thus defined by a single positive instance and millions of
negatives. At test time, a calibration step is used to make the outputs
comparable and to choose the best one. In terms of calibration, differently
sized templates will produce scores on different scales, so you have to
rescale them with something like Platt's method.
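
If it helps, here is a rough sketch of Platt-style calibration with
scikit-learn. This is only an illustration, not the ESVM pipeline itself;
the toy data, split sizes and SGDClassifier settings are placeholders. The
idea: train the raw classifier, fit a one-dimensional logistic regression
(the sigmoid) on its decision_function scores over held-out data, and use
it to map scores to probabilities. Repeating this per classifier puts their
outputs on a comparable [0, 1] scale.

from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier, LogisticRegression

# Toy data standing in for one of the individual training sets.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, y_train = X[:1400], y[:1400]   # used to fit the raw classifier
X_calib, y_calib = X[1400:], y[1400:]   # held out for calibration

# Uncalibrated margin-based classifier (hinge loss has no predict_proba).
raw_clf = SGDClassifier(loss="hinge", random_state=0)
raw_clf.fit(X_train, y_train)

# Platt's method: fit a sigmoid (a 1-D logistic regression) that maps the
# raw decision_function scores on held-out data to probabilities.
calib_scores = raw_clf.decision_function(X_calib).reshape(-1, 1)
platt = LogisticRegression()
platt.fit(calib_scores, y_calib)

# Calibrated probability of the positive class for new samples; doing the
# same per classifier gives probabilities you can compare across models.
new_scores = raw_clf.decision_function(X_calib).reshape(-1, 1)
proba_positive = platt.predict_proba(new_scores)[:, 1]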





On Tue, Nov 26, 2013 at 2:51 PM, Jaques Grobler <[email protected]> wrote:

> I believe this message is from Abhi too - regarding the same question:
>
> How can we combine probabilities from multiple classifiers in sklearn?
>> [The classifiers are trained on datasets of a similar type, the differences
>> being their sizes and the way each result might be used.] I am using
>> SGDClassifier to train the individual classifiers, and need to choose the
>> best amongst them.
>> But as I understand it, I would need to normalize the outputs before
>> comparing them, and I was not sure how to calibrate them as such. Any
>> pointers would be helpful.
>
>
>
>
>
> 2013/11/26 Olivier Grisel <[email protected]>
>
>> 2013/11/26 Abhi <[email protected]>:
>> > How can we normalize and compare probabilities from different classifier
>> >  models in scikit?
>>
>> Why do you want to do that?
>> To be more specific, how do you quantify success of your task?
>>
>> By definition, probabilities are "normalized" in the sense that they
>> are guaranteed to live in the [0, 1] range. However, the classifier
>> models can predict arbitrarily bad probabilities. For instance, a
>> badly trained or badly parameterized binary classifier could predict a
>> probability of 0 for the positive class 100% of the time.
>>
>>
>> --
>> Olivier
>> http://twitter.com/ogrisel - http://github.com/ogrisel
>>
>


-- 
    Warm Regards
    Yogesh Karpate