In relation to the above, I see now that my PR has resulted in a
slightly broken auto-generated doc page and a missing image.

New page, with problems:
http://scikit-learn.org/dev/auto_examples/ensemble/plot_forest_iris.html#example-ensemble-plot-forest-iris-py

Old version, with the image:
http://scikit-learn.org/0.13/auto_examples/ensemble/plot_forest_iris.html#example-ensemble-plot-forest-iris-py

I've commented on the (closed) issue, but I'm not sure of the correct
procedure. Please tell me if I should open a new bug instead. I've
noted how to fix three formatting issues on that page, but I don't know
how to include a replacement image:
https://github.com/scikit-learn/scikit-learn/pull/2146

Cheers,
Ian.

On 7 July 2013 21:40, Ian Ozsvald <i...@ianozsvald.com> wrote:
> A PR for the demo will follow in a day or so, once the weighting issues
> from the other email are figured out. Much obliged for the notes.
> i.
>
> On 7 July 2013 19:20, Olivier Grisel <olivier.gri...@ensta.org> wrote:
>> 2013/7/7 Ian Ozsvald <i...@ianozsvald.com>:
>>> Following on from the previous post, I thought (from reading alone,
>>> with no prior experience with AdaBoost) that the main goal of
>>> AdaBoost is to combine weak classifiers (e.g. a depth-restricted
>>> DecisionTree) rather than to build an ensemble of strong classifiers
>>> (as in e.g. a RandomForest).
>>>
>>> The example on the site:
>>> http://scikit-learn.org/dev/auto_examples/ensemble/plot_forest_iris.html
>>> uses DecisionTrees with max_depth=None for each of the 4 classifiers.
>>> Using a depth-restricted classifier (e.g. max_depth=3) for AdaBoost
>>> results in the same classification quality in this example.
>>>
>>> Might the example say more about AdaBoost's ability to use weak
>>> classifiers if we used a depth-restricted DecisionTree?
>>
>> +1, PR accepted :)
>>
>> Boosting is good for ensembling a large number of underfitting models
>> and thus correcting their individual bias.
>> Bagging and other randomized voting aggregates are good for ensembling
>> a large number of overfitting models and thus correcting their
>> individual variance.
>>
>> --
>> Olivier
>> http://twitter.com/ogrisel - http://github.com/ogrisel
>>
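To make the contrast above concrete, here is a rough sketch of the kind of
change being suggested (a minimal sketch only, assuming the 0.13/0.14-era
API where AdaBoostClassifier takes a base_estimator argument; the max_depth
and n_estimators values are illustrative, not taken from the example):

    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
    # moved to sklearn.model_selection in later releases
    from sklearn.cross_validation import cross_val_score

    iris = load_iris()

    # Boosting: many shallow, underfitting trees; boosting corrects their bias.
    ada = AdaBoostClassifier(
        base_estimator=DecisionTreeClassifier(max_depth=3),
        n_estimators=30)

    # Forest: many deep, overfitting trees; averaging corrects their variance.
    forest = RandomForestClassifier(max_depth=None, n_estimators=30)

    for name, clf in [("AdaBoost, max_depth=3", ada),
                      ("RandomForest, max_depth=None", forest)]:
        scores = cross_val_score(clf, iris.data, iris.target, cv=5)
        print("%s: mean CV accuracy %.3f" % (name, scores.mean()))

On the iris data both should score comparably, which is the point of the
suggestion above: AdaBoost does not need full-depth trees to reach the same
quality in this example.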



-- 
Ian Ozsvald (A.I. researcher)
i...@ianozsvald.com

http://IanOzsvald.com
http://MorConsulting.com/
http://Annotate.IO
http://SocialTiesApp.com/
http://TheScreencastingHandbook.com
http://FivePoundApp.com/
http://twitter.com/IanOzsvald
http://ShowMeDo.com

