Dear all,
I have to analyze some fMRI data with PyMVPA: my images are in Analyze
format, every volume is a .hdr/.img pair, and I have to load them as a
time series. How can I do this?
One solution is to convert my datasets with some software such as MRIcro
or similar, but I want to know if there is a library.
With best regards,
Yarik
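If it helps, the time-series ordering part can be sketched in plain Python. `order_analyze_volumes` below is a made-up helper for illustration, not PyMVPA API, and the actual per-volume loading would still go through NiftiImage or a similar reader:

```python
import re

def order_analyze_volumes(img_files):
    """Sort .img volume files numerically and pair each with its .hdr.

    Plain lexicographic sorting would put 'vol_10.img' before 'vol_2.img',
    so we extract the trailing number as the sort key.
    """
    def volume_index(name):
        m = re.search(r'(\d+)\.img$', name)
        return int(m.group(1)) if m else -1
    ordered = sorted(img_files, key=volume_index)
    # each .img has a sibling .hdr with the same basename
    return [(f[:-4] + '.hdr', f) for f in ordered]

pairs = order_analyze_volumes(['vol_10.img', 'vol_2.img', 'vol_1.img'])
# load the .img of each pair in this order and stack the resulting
# 3D volumes along a new (time) axis to build the 4D time series
```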
On Thu, 03 Mar 2011, Roberto Guidotti wrote:
Thank you Michael, it works perfectly.
Sorry, but I'm a novice with Python and this technique; I'm glad to have
the support of you and the other authorities on MVPA techniques!!
Thank you
Roberto
On 08 Mar 2011, Roberto Guidotti wrote:
Checking my images with MRIcro and viewing the fields in PyMVPA with the
command mydataset.niftiheader.get() (images loaded via a list of
NiftiImages!) I found the same values:
scl_inter = 19 and scl_slope = 17; now my question is: could I proceed
2011/3/28 Yaroslav Halchenko deb...@onerussian.com
not sure what is wrong with MDP -- its code in mdp 3.0 looks ok
to overcome it, just disable mdp and check for mdp ge 2.4 at once (or do
you need MDP functionality?) by adding
[externals]
have mdp = no
have mdp ge 2.4 = no
to your ~/.pymvpa.cfg
appropriately upon loading from 1 or
multiple volumes (unless you explicitly instruct it not to do so with
'scale_data=False')
On Wed, 09 Mar 2011, Roberto Guidotti wrote:
Well, you're right Yaroslav, only the first .img file has the right
scl_slope and scl_inter fields set.
So, with your last
and what was the right value? in the first image?
scl_slope = 17.0 and scl_inter = 19.0
Probably I've misunderstood your ANNOUNCEMENT of the new release of the
software!! I'm sorry
Thank you
Roberto
On Tue, 29 Mar 2011, Roberto Guidotti wrote:
So, I've tested my images
the first values would be offset by
19 and divided by 17 (i.e. scaling was not applied).
in any case you should be safe if you are using pymvpa 0.6.* or 0.4.7
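For reference, the NIfTI/Analyze intensity scaling convention with the values from this thread can be sketched as follows (the helper names here are made up for illustration, not library API):

```python
def apply_nifti_scaling(raw_values, scl_slope, scl_inter):
    """NIfTI intensity scaling: true value = raw * scl_slope + scl_inter."""
    return [v * scl_slope + scl_inter for v in raw_values]

def undo_nifti_scaling(scaled_values, scl_slope, scl_inter):
    """Inverse transform: subtract the offset, divide by the slope."""
    return [(v - scl_inter) / scl_slope for v in scaled_values]

# the values reported in this thread: scl_slope = 17.0, scl_inter = 19.0
scaled = apply_nifti_scaling([0.0, 1.0, 2.0], scl_slope=17.0, scl_inter=19.0)
# scaled == [19.0, 36.0, 53.0]
```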
On Tue, 29 Mar 2011, Roberto Guidotti wrote:
and what was the right value? in the first image?
scl_slope = 17.0
deb...@onerussian.com
On Tue, 29 Mar 2011, Roberto Guidotti wrote:
I have only my images in Talairach space after basic preprocessing
steps (slice timing, motion correction, etc.); when I load my dataset I
print the scl_* values
From an image or from a dataset? How do you load them?
Dear all,
I'm trying to analyze a dataset with PyMVPA, but I want to know how the
classifier acts across folds in cross-validation, i.e. how to get a
ConfusionMatrix for every CV fold.
I've tried enabling 'training_stats' in the cv class, but the result is
incomprehensible to me!!
Is there a method or
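The counting behind a per-fold confusion matrix can be sketched in plain Python like this (just the idea; in PyMVPA the per-fold statistics would come from the cross-validation's conditional attributes):

```python
from collections import Counter

def fold_confusions(folds):
    """For each fold, given (targets, predictions), build a confusion
    matrix as a Counter over (target, prediction) pairs."""
    return [Counter(zip(t, p)) for t, p in folds]

folds = [
    (['A', 'A', 'B', 'B'], ['A', 'B', 'B', 'B']),  # fold 1
    (['A', 'A', 'B', 'B'], ['A', 'A', 'B', 'A']),  # fold 2
]
per_fold = fold_confusions(folds)
# per_fold[0][('A', 'B')] counts 'A' samples misclassified as 'B' in fold 1
```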
Dear all,
today after updating my computer (Ubuntu 10.10), I have this issue importing
mvpa.suite:
from mvpa.suite import *
/usr/lib/pymodules/python2.6/matplotlib/numerix/__init__.py:18:
DeprecationWarning:
**
matplotlib.numerix and all
edit your ~/.pymvpa.cfg and place
[externals]
have nipy = no
have nipy.neurospin = no
to disable that single function we take from nipy atm ;)
Done!!
sorry about that
No problem man!!
Thank you!!
RG
On Tue, 12 Apr 2011, Roberto Guidotti wrote:
Dear all,
today after
2011/4/12 Yaroslav Halchenko deb...@onerussian.com
On Tue, 12 Apr 2011, Roberto Guidotti wrote:
yes -- sorry, there was refactoring in nipy which managed to affect
pymvpa. it is fixed in pymvpa GIT ...
Ok, now I attach my version to git!!! :D
eh -- and I just started
Francisco
On Fri, Apr 15, 2011 at 1:45 PM, Roberto Guidotti robbenso...@gmail.com
wrote:
Thank you for the quick response and for the advice.
For the first question: in the review papers I've read, this question is
never treated in depth, but only when the study is across-subject. Thank
you
Dear all,
I don't know if this is a known issue (probably it affects only me!!) or if,
as always, I was doing something wrong, but running the eyemovement.py example
I get the following error:
print mclf.ca.confusion
File /usr/local/lib/python2.7/dist-packages/mvpa/base/collections.py,
line
Hi all,
I would like to know if there is a method to extract the feature ranking
from ANOVA feature selection, or a method to drop the first 1% of
voxels in the ranking and then select further voxels as usual (with
FractionTailSelection()).
Thank you
Roberto
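One way to sketch the "drop the top 1%, keep the next fraction" idea in plain Python (a hypothetical helper, not FractionTailSelection itself):

```python
def select_after_dropping(scores, drop_fraction, keep_count):
    """Rank features by score (descending), drop the top `drop_fraction`
    of the ranking, then keep the next `keep_count` feature indices."""
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    n_drop = int(len(scores) * drop_fraction)
    return order[n_drop:n_drop + keep_count]

# toy ANOVA-like scores for 10 features
scores = [0.1, 0.9, 0.5, 0.8, 0.3, 0.7, 0.2, 0.6, 0.4, 1.0]
picked = select_after_dropping(scores, drop_fraction=0.1, keep_count=3)
# the single best feature (index 9, score 1.0) is dropped,
# the next three in the ranking are kept
```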
Thank you for the responses!
It doesn't sound wrong ;-) However, I'm also not 100% sure what kind of
feature selection you are aiming at. Do you want to select features that
vary to a certain degree within an event, and come from the same voxel?
Exactly, I would like to have a feature
Hi all,
a two-second question about searchlight maps.
I did a whole-brain searchlight analysis with two classes, using an SVM and
cross-validation with a half partitioner. As a result I obtained a map with
accuracies for each voxel, in two volumes; does this mean that I have a volume
for each CV fold?
Thank you! (You spent a little bit more than two seconds!) ;)
RG
On 7 September 2012 16:22, Yaroslav Halchenko deb...@onerussian.com wrote:
usually -- mean (if your splits are balanced) ;-)
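The "mean" suggestion amounts to voxel-wise averaging of the per-fold accuracy maps, e.g. (plain-list sketch, made-up helper name):

```python
def mean_accuracy_map(fold_maps):
    """Average voxel-wise accuracy maps across CV folds.

    fold_maps: one list of per-voxel accuracies per fold,
    all of the same length."""
    n_folds = len(fold_maps)
    return [sum(vals) / n_folds for vals in zip(*fold_maps)]

fold1 = [0.5, 0.8, 0.6]   # accuracy per voxel, fold 1
fold2 = [0.7, 0.6, 1.0]   # fold 2
combined = mean_accuracy_map([fold1, fold2])
# combined is one accuracy per voxel, averaged over the two folds
```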
On Fri, 07 Sep 2012, Roberto Guidotti wrote:
Hi all,
a two second question about
Hi,
is there a Python method (or a kind of package/module/tool) to store the
SWIG-wrapped trained classifiers instead? I've tried with h5save and with
pickling, but nothing works... :(
Thank you
Roberto
On 7 January 2012 03:30, Yaroslav Halchenko deb...@onerussian.com wrote:
unfortunately only some. E.g.
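A common workaround when the trained model itself cannot be serialized is to store the classifier parameters plus the training data, and retrain after loading. A plain-Python sketch (all helper names hypothetical):

```python
import pickle

def pack_for_storage(clf_params, train_samples, train_targets):
    """SWIG-wrapped libsvm models typically cannot be pickled directly,
    so serialize what is needed to rebuild them instead."""
    return pickle.dumps({
        'params': clf_params,
        'samples': train_samples,
        'targets': train_targets,
    })

def unpack_and_retrain(blob, train_fn):
    """train_fn(params, samples, targets) rebuilds and retrains the
    classifier; it stands in for constructing e.g. an SVM and training it."""
    state = pickle.loads(blob)
    return train_fn(state['params'], state['samples'], state['targets'])

blob = pack_for_storage({'C': 1.0}, [[0.0, 1.0], [1.0, 0.0]], ['A', 'B'])
# toy train_fn just echoes the parameters and the number of samples
clf = unpack_and_retrain(blob, lambda p, s, t: (p, len(s)))
```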
Dear all,
I have a problem with a classifier's conditional attributes (possibly due
to my misunderstanding of the command, I admit!):
clf = LinearCSVMC(C=1, probability=1, enable_ca=['probabilities'])
cvte = CrossValidation(clf, NFoldPartitioner(cvtype = 1),
enable_ca=['stats',
need a manual CrossValidation (and sometimes a
Feature Selection)?
Thank you!
Roberto
On 27 November 2012 15:49, Yaroslav Halchenko deb...@onerussian.com wrote:
On Tue, 27 Nov 2012, Roberto Guidotti wrote:
Dear all,
I've a problem with classifier's conditional attributes
Dear all,
I have a question that does not strictly concern PyMVPA.
I trained a classifier to discriminate two classes (e.g. bananas and
apples) using SVM, cross-validation, etc.; then I would like to try it on
some unlabelled fruits, which could be bananas and apples but also melon,
lemon,
this using mahalanobis distance or something else.
I'll try SMLR, which is probably good enough!
Thank you Yaroslav!
R
PS: If anyone would like to enrich the discussion!!! :))
On 18 January 2013 15:23, Yaroslav Halchenko deb...@onerussian.com wrote:
On Fri, 18 Jan 2013, Roberto Guidotti wrote
Hi all,
I have a simple (for you) question about SVM output, in particular about
the *estimates* attribute. Is this the distance from the SVM-built
hyperplane? (I think yes!) And in the multiclass case, is it the distance
from each of the binary hyperplanes found?
Is there any kind of normalization in that
Hi all,
excuse me if I am persistent with questions, but it's that kind of period!
I have a 3-class problem; I've used a linear SVM and I have to predict
unlabeled examples.
The last three predictions are: [zip(clf.ca.predictions, clf.ca.estimates)]
('2', {(0, 1): -0.09890229020627583,
(0, 2):
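For intuition, a linear SVM decision value and its geometric normalization can be sketched as follows (plain Python, made-up helper; in one-vs-one multiclass there is one such value per class pair, matching the (0, 1), (0, 2), ... keys above):

```python
import math

def signed_distance(w, b, x):
    """Signed distance of sample x from the hyperplane w.x + b = 0.

    The raw SVM decision value is w.x + b; dividing by ||w|| turns it
    into a geometric distance (sign encodes the side of the hyperplane)."""
    decision = sum(wi * xi for wi, xi in zip(w, x)) + b
    norm = math.sqrt(sum(wi * wi for wi in w))
    return decision / norm

# decision = 3*1 + 4*3 - 5 = 10, ||w|| = 5, so the distance is 2.0
d = signed_distance(w=[3.0, 4.0], b=-5.0, x=[1.0, 3.0])
```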
(remove_invariant_features?) and re-run
the hyperalignment.
Best,
Swaroop
On Mon, Jul 1, 2013 at 9:16 AM, Roberto Guidotti robbenso...@gmail.com
wrote:
Hi all,
I would like to run the HyperAlignment algorithm on my dataset, but when I
try to run it from the example,
hypmaps = hyper(my_ds_train
Hi all,
I'm trying to figure out a problem about HyperAlignment.
As explained, HyperAlignment tries to align brain response trajectories in
a common representation space.
Now I'd like to use hyperalignment and cross-decoding in combination, thus
training a classifier with hyperaligned data and then using it
Dear Marco,
From your snippet I notice that you get the sensitivity analyzer without
running any CrossValidation/Classification.
So try to run
---
err = fclfsvm(fds) # this lets you cross-validate/classify/select features
on your dataset
---
This lets you train your object, and then you
Hi,
thank you for the quick response!
It will be the classifier trained on the last fold.
Ok!
I don't think it would make much sense. Best is higher accuracy when
predicting a particular test dataset compared to all others. It doesn't
necessarily mean that that particular model fit is
Hi,
have you tried using the vstack command implemented in pymvpa?
http://www.pymvpa.org/generated/mvpa2.base.dataset.vstack.html#mvpa2.base.dataset.vstack
Roberto
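For intuition, mvpa2's vstack is a sample-wise concatenation; here is a plain-list analogue (not the real function, which also merges sample and feature attributes):

```python
def vstack_samples(datasets):
    """Sample-wise concatenation of datasets, analogous to mvpa2's vstack:
    all datasets must share the same features (row length)."""
    n_features = len(datasets[0][0])
    assert all(len(row) == n_features for ds in datasets for row in ds), \
        "all datasets must have the same number of features"
    return [row for ds in datasets for row in ds]

run1 = [[1, 2], [3, 4]]   # 2 samples, 2 features
run2 = [[5, 6]]           # 1 sample, same 2 features
stacked = vstack_samples([run1, run2])
# 3 samples total, features unchanged
```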
On 1 October 2014 05:15, Shane Hoversten shanusmag...@gmail.com wrote:
Hi -
I have some code that creates a dataset from a
Dear all,
Suppose I have two conditions, n runs with m trials each, and each trial is
composed of t volumes (frames). I would like to do time-wise
(time-resolved) decoding, to obtain a decoding accuracy curve for each time
frame.
I saw several approaches to do this:
1) the first is to train the
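The per-frame training idea can be sketched in plain Python (the `classify` stand-in below replaces a full cross-validation at each frame; all names are made up for illustration):

```python
def time_resolved_accuracy(trials, labels, classify):
    """Time-resolved decoding sketch: at each time frame t, run the
    supplied train/test routine on the samples from that frame only,
    yielding one accuracy per frame (the decoding accuracy curve).

    trials: list of trials, each a list of t per-frame feature vectors.
    classify: function (samples, labels) -> accuracy, standing in for a
    full cross-validation restricted to that frame."""
    n_frames = len(trials[0])
    curve = []
    for t in range(n_frames):
        frame_samples = [trial[t] for trial in trials]
        curve.append(classify(frame_samples, labels))
    return curve

# toy stand-in for cross-validation: fraction of samples whose first
# feature sign agrees with the label
toy_classify = lambda samples, labels: sum(
    1 for s, l in zip(samples, labels) if (s[0] > 0) == (l == 'A')
) / len(samples)

trials = [[[1.0], [-1.0]], [[2.0], [2.0]], [[-1.0], [-2.0]]]
labels = ['A', 'A', 'B']
curve = time_resolved_accuracy(trials, labels, toy_classify)
# one accuracy value per time frame
```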
have runs with unbalanced conditions (e.g. 8 A
vs 2 B, etc.).
Then if it is suggested to run different cv scheme and Stelzer's method, I
will... :)
Thank you
Roberto
> ...
>
> Jo
> On 2/25/2016 9:43 AM, Roberto Guidotti wrote:
>
>> Hi all mvpaers,
>>
sional data where all (or the
majority of) samples are support vectors, thank you (and Jo) for the
explanation!
>
> BW,
> Richard
>
Thank you,
Bests,
Roberto
>
> On Tue, Mar 1, 2016 at 6:43 PM, Roberto Guidotti <robbenso...@gmail.com>
> wrote:
>
>> So, t
thought that SVM was not so sensitive to unbalancing, because
it uses only a few samples as support vectors!
Thank you,
Roberto
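Subsampling the majority class down to the minority size is one simple balancing scheme for cases like 8 A vs 2 B; here is a plain-Python sketch (made-up helper; PyMVPA ships its own balancing machinery, this only shows the idea):

```python
import random

def subsample_balance(samples, labels, seed=0):
    """Balance an unbalanced set by randomly subsampling every class
    down to the size of the smallest one; returns (sample, label) pairs."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    by_class = {}
    for s, l in zip(samples, labels):
        by_class.setdefault(l, []).append(s)
    n_min = min(len(group) for group in by_class.values())
    balanced = []
    for l, group in sorted(by_class.items()):
        for s in rng.sample(group, n_min):
            balanced.append((s, l))
    return balanced

samples = list(range(10))
labels = ['A'] * 8 + ['B'] * 2   # 8 vs 2, as in the thread
balanced = subsample_balance(samples, labels)
# 2 samples per class remain
```

In practice one would repeat this over many random subsamplings and average, so no majority-class sample is systematically ignored.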
On 1 March 2016 at 16:00, Jo Etzel <jet...@wustl.edu> wrote:
> Here's a response to the second part of your question:
>
> On 2/29/2016 11:30 AM, Roberto
Hi all,
I'm writing the second part of my story, begun with the thread "Balancing
with searchlight and statistical issue"!
I have a problem with searchlight results, I ran some whole brain
searchlight to decode betas of an fMRI dataset regarding a memory task
(this time the dataset is balanced).