On 30.11.2012 13:56, Gael Varoquaux wrote:
> On Fri, Nov 30, 2012 at 01:33:42PM +0100, Philipp Singer wrote:
>> I decided to stick with Leave-One-Out for now, and I'm doing grid search
>> with cross-validation using Leave-One-Out.
>
> Don't. This is not a good model selection strategy, and it is very
> costly. Use a stratified k-fold with k between 5 and 10.
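
For reference, I take that suggestion to mean something like the sketch 
below (placeholder SVC and parameter grid, not my actual setup; the 
sklearn.model_selection paths used here have moved around between 
scikit-learn versions):

# Grid search with stratified k-fold cross-validation, as suggested above.
# Placeholder dataset, estimator and parameter grid.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
grid = GridSearchCV(SVC(), {"C": [0.1, 1, 10]}, scoring="accuracy", cv=cv)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)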

Well, I only have a few samples, and I am explicitly interested in seeing 
how each individual sample is classified by a model trained on all the rest.
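
Concretely, what I am after is one held-out prediction per sample, roughly 
like this (again a placeholder SVC on toy data; cross_val_predict lives in 
sklearn.model_selection in recent releases):

# Every sample gets predicted by a model trained on all the other samples,
# which gives exactly the per-sample view I want.
from sklearn.datasets import load_iris
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

pred = cross_val_predict(SVC(C=1.0), X, y, cv=LeaveOneOut())
for i, (true_label, predicted) in enumerate(zip(y, pred)):
    if true_label != predicted:
        print("sample %d: true=%s, predicted=%s" % (i, true_label, predicted))
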
>
> With regards to your question, I don't know, as I haven't used the scikit
> in this setting.
Well, I think standard accuracy is fine for this, as each per-fold score is 
always 0 or 1 anyway.
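
To spell it out: with Leave-One-Out every test fold holds exactly one sample, 
so each per-fold accuracy is 0 or 1, and their mean is just the fraction of 
correctly classified samples. A quick sketch with the same placeholder setup 
as above:

# Each LOO fold scores a single held-out sample, so every entry in `scores`
# is 0.0 or 1.0; the mean over folds is the overall LOO accuracy.
from sklearn.datasets import load_iris
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

scores = cross_val_score(SVC(C=1.0), X, y, cv=LeaveOneOut(), scoring="accuracy")
print(scores[:10])    # each entry is 0.0 or 1.0
print(scores.mean())  # overall LOO accuracy
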
>
> G

