Sorry about the delay.
Nothing strikes me as an obvious reason... also, we don't know anything
about the data preprocessing/nature (are there inherent groups, etc.?).

A blind guess: it could also be related to leave-one-out... what if you
group the samples (randomly, if you like) into 4 groups (chunks) and then
do leave-one-chunk-out cross-validation -- does the bias persist?
See the sketch below.
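
A minimal sketch of what I mean, assuming the 0.4-era API (Dataset,
NFoldSplitter with nperlabel, TransferError, CrossValidatedTransferError);
'samples' and 'labels' below are placeholders for your own data, so adapt
the names to your version:

  import numpy as N
  from mvpa.suite import Dataset, LinearNuSVMC, NFoldSplitter, \
       TransferError, CrossValidatedTransferError

  # randomly spread the 140 samples over 4 chunks, so that
  # leave-one-chunk-out becomes a 4-fold CV instead of leave-one-scan-out
  nchunks = 4
  chunks = N.arange(len(labels)) % nchunks
  N.random.shuffle(chunks)

  # 'samples'/'labels' stand in for your data matrix and class labels
  dataset = Dataset(samples=samples, labels=labels, chunks=chunks)

  # same per-label balancing as before, but now splitting by the 4 chunks
  cv = CrossValidatedTransferError(
      TransferError(LinearNuSVMC()),
      NFoldSplitter(nperlabel='equal'))
  error = cv(dataset)
  print "mean 4-fold CV error: %s" % error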

> > I have 140 structural images: 78 are in class A and 62 are in class B. To 
> > ensure that the training algorithm (LinearNuSVMC) doesn't build a biased 
> > model, I am using the nperlabel='equal' option in my splitter. I know this 
> > part of my code is working (see below), so I'm confused why my CVs 
> > (leave-one-scan-out) are biased with random data (e.g., 55.71%). Can 
> > someone please clarify why I'm not getting 50% with random data? I suspect 
> > I'm just not understanding something simple...

> > Thanks!
> > David

-- 
=------------------------------------------------------------------=
Keep in touch                                     www.onerussian.com
Yaroslav Halchenko                 www.ohloh.net/accounts/yarikoptic

