Hi all,
I have a question regarding feature selection when more than one
classifier is involved, because there are more than two classes. If I
understand correctly, in a multi-class problem PyMVPA will train a
classifier for every possible pair of classes and the result is decided by
vote. So if
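PyMVPA's multi-class handling follows this pairwise ("one-vs-one") scheme. As a plain-Python sketch of the voting step only (the function name is mine, not PyMVPA's API):

```python
from itertools import combinations
from collections import Counter

def one_vs_one_predict(pair_predictions):
    """Return the class with the most votes among the pairwise
    classifiers' predictions for one sample."""
    return Counter(pair_predictions).most_common(1)[0][0]

# Three classes give 3 pairwise classifiers: (A,B), (A,C), (B,C).
print(list(combinations(['A', 'B', 'C'], 2)))
# If those classifiers predicted A, A and C for a sample, A wins:
print(one_vs_one_predict(['A', 'A', 'C']))  # A
```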
This is achieved through the 'searchlight' command of the command line
interface (http://www.pymvpa.org/generated/cmd_searchlight.html),
which is a different thing than the 'sphere_searchlight' you are calling
from Python. In the code it is done in the dosl.sh script.
You can see that preprocessing and cv setup
My sample is not balanced (there happen to be 22 responders
and 13 non-responders) and is not particularly large. I would like,
if possible, to use all the data and adjust the classifier to the
unbalanced set rather than selecting a subset of the responders.
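One common way to adjust a classifier to an unbalanced set is to give each class an inverse-frequency weight (libsvm, for example, accepts per-class weights). A sketch of that formula using the counts above; the helper name is mine:

```python
from collections import Counter

def balanced_class_weights(targets):
    """Inverse-frequency weights w_c = n_samples / (n_classes * n_c),
    so each sample of the minority class counts more."""
    counts = Counter(targets)
    n, k = len(targets), len(counts)
    return {c: n / float(k * n_c) for c, n_c in counts.items()}

w = balanced_class_weights(['resp'] * 22 + ['nonresp'] * 13)
print(w['resp'], w['nonresp'])  # ~0.795 for the majority, ~1.346 for the minority
```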
You don't have to
Hello, I cannot help you with the inference on the subject level, however we do
have a nice example for group-level inference. It is what we used in the
pandora data paper http://f1000research.com/articles/4-174/v1 (review
pending) to produce figure 4 and table 3. Code for the replication of the
analysis
Thank you so much Richard! This was super helpful!
One last question: do you know if the averaging can be done using the
command line without sparse ROIs? Maybe by using --scatter-rois 0? Or is
it the default regardless of the input to --scatter-rois?
I am sorry, but I don't know. The feature
Hello,
I was playing with permutation schemes. There is one case I don't know
how to do:
If I want to shuffle targets within chunks, I would use limit='chunks' in
my permutator.
If I want to shuffle targets only in the test set, I would use
limit={'partitions': 1}.
How can I shuffle targets within
If you don't want to do the fancy dance, you can simply do:
permutator = AttributePermutator(attr='targets')
permuted_dataset = permutator(original_dataset)
print("Here are your new labels:", permuted_dataset.sa.targets)
then you don't need to put the permutator into your CV, just use
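For intuition, what AttributePermutator's limit argument restricts is which samples may exchange labels: shuffling happens only within groups of samples sharing the same limit value. A plain-Python sketch of that idea (not PyMVPA code; the helper name is mine):

```python
import random

def permute_within(targets, limit):
    """Shuffle targets only within groups of samples that share
    the same limit value (e.g. within each chunk)."""
    targets = list(targets)
    groups = {}
    for i, g in enumerate(limit):
        groups.setdefault(g, []).append(i)
    for idx in groups.values():
        labels = [targets[i] for i in idx]
        random.shuffle(labels)
        for i, lab in zip(idx, labels):
            targets[i] = lab
    return targets

targets = ['a', 'b', 'a', 'b']
chunks = [0, 0, 1, 1]
# Labels can move only within chunk 0 and within chunk 1:
print(permute_within(targets, chunks))
```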
> Hi,
>
> Oh, I had skimmed over the page and had not noticed the algorithm was
> expecting accuracy maps, so I was passing it error maps (thus lower is
> better), hence my confusion.
> I'm performing searchlight support vector regression, not classification,
> so my error goes from 0 to 2
Hi,
I am pretty sure that they are added to a cluster if they are higher than
a threshold.
You are looking for voxels with accuracy significantly higher than
chance.
Best,
Richard
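To make the cluster-forming step concrete: above-threshold voxels are grouped into connected clusters, whose sizes are then tested. A 1-D plain-Python sketch (real cluster forming works on 3-D neighborhoods; the function name is mine):

```python
def clusters_above(values, threshold):
    """Group consecutive above-threshold positions into clusters
    and return the cluster sizes (1-D toy version)."""
    sizes, run = [], 0
    for v in values:
        if v > threshold:
            run += 1
        elif run:
            sizes.append(run)
            run = 0
    if run:
        sizes.append(run)
    return sizes

acc = [0.5, 0.8, 0.9, 0.5, 0.7, 0.5]
print(clusters_above(acc, 0.6))  # [2, 1]: one 2-voxel and one 1-voxel cluster
```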
___
Pkg-ExpPsy-PyMVPA mailing list
Hi Bill,
You might take a look at the Relief algorithm (also implemented in PyMVPA),
which is a less hacky approach to your feature weighting problem.
BW,
Richard
Bill Broderick wrote:
> However, to determine which timecourse is contributing the most to the
> classifier's performance,
> see which timecourses or which combination
> of timecourses caused the greatest drop in performance when removed.
I wrote:
> You might take a look at Relief algorithm
> ...which I assume you mean by RSA) may also be
> suitable, depending on your hypotheses.
>
> Jo
>
> On May 30, 2016 4:31:50 PM Richard Dinga <ding...@gmail.com> wrote:
>
>> > Do you have to do within-subjects classification, or could you
>> cross-validate
You cannot use sensitivity analysis with kNN because kNN does not use or
produce any feature weights.
On Tue, May 31, 2016 at 5:39 PM, marco tettamanti
wrote:
> Dear all,
> since kNN performs best on a particular dataset, I am trying to obtain a
> sensitivity map based
> On Sat, Jan 16, 2016 at 5:40 AM, Kaustubh Patil
wrote:
> BTW another way to handle imbalanced data (and perhaps easier to
> implement and test) could be to assign weights in libsvm. This has to be
> done for each partition separately; any ideas on how this can be done?
>
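For what it's worth, computing such weights per partition could look like this sketch, which derives inverse-frequency weights from each training partition's label counts (plain Python; not an existing PyMVPA or libsvm helper):

```python
from collections import Counter

def weights_per_partition(targets, partitions):
    """Compute inverse-frequency class weights separately for each
    partition, as one might pass to libsvm's per-class weighting
    for that fold."""
    out = {}
    for p in set(partitions):
        part = [t for t, q in zip(targets, partitions) if q == p]
        counts = Counter(part)
        n, k = len(part), len(counts)
        out[p] = {c: n / float(k * nc) for c, nc in counts.items()}
    return out

targets = ['a', 'a', 'a', 'b', 'a', 'b']
partitions = [1, 1, 1, 1, 2, 2]
w = weights_per_partition(targets, partitions)
print(w[1]['b'])  # 2.0: 'b' is rare in partition 1, so it gets more weight
print(w[2]['a'])  # 1.0: partition 2 is balanced
```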
You can just do the same searchlight cross-validation as shown in the
tutorial or example scripts, but combine your two datasets into one, make
dataset A partition 1 and dataset B partition 2, and don't use the
postproc=mean_sample() argument.
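The bookkeeping amounts to concatenating the two datasets and adding a partition attribute to each sample; schematically, with made-up toy data (in PyMVPA the attribute would live in ds.sa):

```python
# Toy stand-ins for the two datasets' sample arrays.
samples_a = [[0.1, 0.2], [0.3, 0.4]]  # dataset A: 2 samples
samples_b = [[0.5, 0.6]]              # dataset B: 1 sample

# Combine them and record which partition each sample belongs to,
# so a splitter can train on partition 1 and test on partition 2.
combined = samples_a + samples_b
partitions = [1] * len(samples_a) + [2] * len(samples_b)

print(partitions)  # [1, 1, 2]
```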
On Fri, Jan 22, 2016 at 7:15 PM, Alyson Saenz
I might be wrong, but it sounds like you have invariant features in your
data. You can get a better mask or just remove them with
remove_invariant_features()
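Conceptually, remove_invariant_features() drops feature columns that never vary across samples; a plain-Python sketch of that idea (not PyMVPA's implementation):

```python
def drop_invariant_columns(samples):
    """Drop feature columns whose value never changes across samples;
    such columns break any normalization that divides by the variance."""
    keep = [j for j in range(len(samples[0]))
            if len(set(row[j] for row in samples)) > 1]
    return [[row[j] for j in keep] for row in samples]

data = [[1.0, 7.0, 0.2],
        [2.0, 7.0, 0.9]]  # the middle column is constant
print(drop_invariant_columns(data))  # [[1.0, 0.2], [2.0, 0.9]]
```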
On Sat, Jan 23, 2016 at 5:37 PM, Maria Hakonen
wrote:
>
> Hi,
>
> Many thanks for your answers!
> I would like to
; 80%, 85% and 90%) and viewed the results.
>
> -Maria
>
> 2016-01-23 20:31 GMT+02:00 Richard Dinga <ding...@gmail.com>:
>
>> I might be wrong, but it sounds like you have invariant features in your
>> data. You can get a better mask or just remove them with
>> remove_invariant_features()
Hi,
I am sorry about your below-chance accuracy, it's always very annoying. Do
you also have below-chance accuracy with classifiers other than GNB?
So is it just speed that is your concern? You can try M1NNSearchlight, which
should also be efficiently implemented, but I think the results
I guess you have invariant features in your dataset, so you will get
problems when trying to divide by 0. There is a function to remove them.
On Fri, Sep 16, 2016 at 8:01 PM, Liang, Guangsheng wrote:
> Hello PyMVPA community,
>
>
>
> I am currently working on a
Hi,
Does this answer your question?
http://www.pymvpa.org/tutorial_significance.html#avoiding-the-trap-or-advanced-magic-101
On Fri, May 19, 2017 at 8:18 PM, Michael Bannert
wrote:
> Dear all,
>
> I would like to use permutation testing for spatially aligned
>
> well -- all the disbalanced cases are "tricky" to say the least ;)
Imbalanced cases are fine; just use AUC or some other measure that does not
threshold the predictions, and all your problems will go away.
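AUC avoids thresholding because it depends only on the ranking of the decision values. A sketch via the pairwise-comparison identity (the probability that a random positive outranks a random negative; the helper is mine):

```python
def auc(labels, scores):
    """Area under the ROC curve via the rank-sum identity:
    fraction of positive/negative pairs where the positive scores
    higher (ties count half)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / float(len(pos) * len(neg))

labels = [1, 1, 0, 0, 0]
scores = [0.9, 0.4, 0.6, 0.3, 0.2]
print(auc(labels, scores))  # 5/6 ~ 0.83
```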
On Wed, Oct 18, 2017 at 5:21 AM, Anna Manelis
wrote:
> Thanks for
Yes, you can do it: each permutation is independent, so it doesn't matter
whether they are computed in series or in parallel. Unless you specify the
same RNG seed for your splits, there shouldn't be any problem.
BW,
Richard
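To make the seeding point concrete: give each parallel job its own explicit seed and the shuffles stay independent across jobs yet reproducible within a job. A plain-Python sketch (the names are mine):

```python
import random

def permutation_nulls(targets, n_perms, seed):
    """Generate null permutations with an explicitly seeded RNG, so
    parallel jobs with different seeds draw independent shuffles."""
    rng = random.Random(seed)
    nulls = []
    for _ in range(n_perms):
        t = list(targets)
        rng.shuffle(t)
        nulls.append(t)
    return nulls

# Two parallel "jobs" use different seeds; the same seed reproduces
# exactly the same permutations:
a = permutation_nulls(['a', 'b', 'c', 'd'], 5, seed=1)
b = permutation_nulls(['a', 'b', 'c', 'd'], 5, seed=2)
print(a == permutation_nulls(['a', 'b', 'c', 'd'], 5, seed=1))  # True
```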
On Sat, Dec 9, 2017 at 3:33 AM, Regina Lapate wrote:
> Hello all: