Hi Gavin,

Sorry for the late reply. We are working on multi-subject analysis from
confusion matrices, so you may see updates in the coming months.

The priors over the hypotheses are not in log scale. See, for example, what
happens if you don't specify them, i.e. you accept uniform priors:
https://github.com/PyMVPA/PyMVPA/blob/master/mvpa2/clfs/transerror.py#L1308
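Just to illustrate the point (a minimal sketch, not actual PyMVPA code): you supply plain probabilities that sum to 1, and any conversion to log scale happens internally, roughly like this:

```python
import math

# Hypothetical illustration: priors are given in linear scale ...
n_hypotheses = 3
prior_Hs = [1.0 / n_hypotheses] * n_hypotheses  # uniform prior, plain probabilities

# ... and only the internal computation works in log scale
log_prior_Hs = [math.log(p) for p in prior_Hs]
```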

About the order of targets, I believe it does not matter. The test of independence
(implemented in PyMVPA) from my paper is based on confusion matrices, which
are summaries of targets vs. predictions. If you change their order, the
confusion matrix does not change, provided that both targets and predictions
are in the same order.
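A quick sanity check of that invariance (plain Python, not PyMVPA): changing the label order permutes the rows and columns of the confusion matrix together, so the cell counts themselves, and any summary computed from matched (target, prediction) pairs, are unchanged:

```python
from collections import Counter

targets     = ['a', 'b', 'a', 'c', 'b', 'c']
predictions = ['a', 'b', 'b', 'c', 'b', 'a']

def confusion(targets, predictions, labels):
    # Rows: true labels, columns: predicted labels
    counts = Counter(zip(targets, predictions))
    return [[counts[(t, p)] for p in labels] for t in labels]

m1 = confusion(targets, predictions, ['a', 'b', 'c'])
m2 = confusion(targets, predictions, ['c', 'a', 'b'])

# Reordering the labels only permutes rows and columns consistently;
# the multiset of cell counts is identical.
cells1 = sorted(x for row in m1 for x in row)
cells2 = sorted(x for row in m2 for x in row)
assert cells1 == cells2
```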

The small numerical differences that you observe after permuting the participants
are expected, because of the slight numerical instability of the algorithm. I did
my best to keep the algorithm as stable as possible, and if you see a perfect
match up to 8 decimal places, then I guess I did pretty well :).

The idea of doing inference at the group level by chaining the subjects is
indeed meaningful, and we are doing something similar; it is work in progress.
I'd like to be in touch with you on this topic. I'll send you an email soon.
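For what it's worth, the chaining Gavin describes amounts to a sequential Bayesian update, and since the per-subject likelihoods simply multiply, the subject order cannot matter beyond floating-point noise. A minimal sketch with hypothetical likelihood values (not PyMVPA output):

```python
import numpy as np

# Hypothetical per-subject likelihoods P(confusion_s | H) for 2 hypotheses
likelihoods = np.array([
    [0.7, 0.3],   # subject 1
    [0.6, 0.4],   # subject 2
    [0.8, 0.2],   # subject 3
])
prior = np.array([0.5, 0.5])  # flat prior over hypotheses

def chain(likelihoods, prior):
    post = prior.copy()
    for lik in likelihoods:
        post = post * lik          # posterior for one subject ...
        post = post / post.sum()   # ... becomes the prior for the next
    return post

p1 = chain(likelihoods, prior)
p2 = chain(likelihoods[::-1], prior)  # reversed subject order

# Order only changes the result by floating-point noise
assert np.allclose(p1, p2)
```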

Best,

Emanuele

On 02/17/2014 04:43 PM, Hanson, Gavin Keith wrote:
Hi!
I did actually go ahead and try to implement this kind of analysis, and it
appears to work very well!
I shuffled the participant order and found that it made no difference to
8 decimal places in the final posterior probabilities, which I assume is just
rounding/floating-point error.

I’d love to hear from Emanuele or another author of the tool to see if they 
have tried this kind of group analysis approach with the Bayes Confusion 
Hypothesis and what they might say about its validity, but it appears to be a 
viable approach to group analysis of classification performance and information 
content!

- Gavin

On Feb 16, 2014, at 3:21 AM, Michael Hanke <[email protected]> wrote:

Hi,

On Tue, Feb 11, 2014 at 02:28:53AM +0000, Hanson, Gavin Keith wrote:
Hi all - I just want to make sure that I’m doing this right.
In the 'default' implementation of the BayesConfusionHypothesis node, i.e.
        cv = CrossValidation(clf, NFoldPartitioner(), errorfx=None,
                             postproc=ChainNode([Confusion(labels=np.unique(ds.targets)),
                                                 BayesConfusionHypothesis()]))
it’s set up to use a flat prior, correct?
If that’s the case, then if I have a set of participants and a nice anatomical 
ROI for each one, could I do a “group” bayesian hypothesis test for the 
information encoded in that ROI by chaining the posterior probabilities for one 
into the prior for the next, and the posterior for that into the prior of the 
next, etc?
Sounds like a viable approach to me. You'd probably need to account for
potential order effects.

And if yes, then is the prior_Hs argument looking for an array of 
probabilities, or log probabilities?
Finally, if I ensure that my targets are in the same relative order for each
subject - that is, ds.UT is in the same order for everyone - can I safely assume
that I don't need to explicitly pass a list of hypotheses to the 'hypotheses'
argument every time to make sure my array of priors is lining up with the
appropriate hypotheses?
Maybe Emanuele -- the author of this functionality -- could comment on the
details?

Cheers,

Michael

--
Michael Hanke
http://mih.voxindeserto.de

_______________________________________________
Pkg-ExpPsy-PyMVPA mailing list
[email protected]
http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/pkg-exppsy-pymvpa
