Re: [pymvpa] Surface searchlight taking 6 to 8 hours

2015-07-30 Thread Nick Oosterhof

 On 29 Jul 2015, at 20:57, John Baublitz jb...@bu.edu wrote:
 
 Thank you very much for the support. Unfortunately I have tried using this 
 GIFTI file that it outputs with FreeSurfer as an overlay and surface

Both at the same time?

 and it throws errors for all FreeSurfer utils and even AFNI utils. FreeSurfer 
 mris_convert outputs:
 
 mrisReadGIFTIfile: mris is NULL! found when parsing file f_mvpa_rh.func.gii
 
 This seems to indicate that it is not saving it as a surface file. Likewise 
 AFNI's gifti_tool outputs:
 
 ** failed to find coordinate/triangle structs
 
 How exactly is the data being stored in the GIFTI file? It seems that it is 
 not saving it as triangles and coordinates: even in the code you linked 
 to in the GitHub commit, the NIFTI intent codes are neither 
 NIFTI_INTENT_POINTSET nor NIFTI_INTENT_TRIANGLE by default.

For your current purposes (visualizing surface-based data), note that there are 
two types of surface GIFTI files:

1) functional node data, where each node is associated with the same number 
of values. Examples are time series data or statistical maps. Typical 
extensions are .func.gii or .time.gii.
2) anatomical surfaces, which have coordinates in 3D space (with 
NIFTI_INTENT_POINTSET) and faces as triples of node indices (with 
NIFTI_INTENT_TRIANGLE). The typical extension is .surf.gii.

In PyMVPA:
(1) functional surface data is handled through mvpa2.datasets.gifti; data is 
stored in a Dataset instance.
(2) anatomical surfaces are handled through mvpa2.support.nibabel.surf (for 
GIFTI, mvpa2.support.nibabel.surf_gifti); vertex coordinates and face indices 
are stored in a Surface instance (from mvpa2.support.nibabel.surf).

(I'm aware that documentation about this distinction can be improved in PyMVPA).
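
For concreteness, a minimal sketch of the two code paths (gifti_dataset, 
map2gifti, and surf.read follow the modules mentioned above and the GIFTI 
support added recently, but exact names and signatures may differ across 
PyMVPA versions):

    from mvpa2.datasets.gifti import gifti_dataset, map2gifti
    from mvpa2.support.nibabel import surf

    # (1) functional node data: same number of values per node
    ds = gifti_dataset('f_mvpa_rh.func.gii')    # -> Dataset instance
    map2gifti(ds, 'f_mvpa_rh_copy.func.gii')    # write it back out

    # (2) anatomical surface: coordinates + triangle faces
    s = surf.read('rh_pial.surf.gii')           # -> Surface instance
    print(s.nvertices, s.nfaces)                # vertex and face counts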

 I've also run into a problem where the dataset that I've loaded has no intent 
 codes and unfortunately it appears that this means that the NIFTI intent code 
 is set to NIFTI_INTENT_NONE.

Why is that a problem? What are you trying to achieve? If the dataset has no 
intent, then NIFTI_INTENT_NONE seems valid to me, as the GIFTI standard 
describes this as "Data intent not specified".




Re: [pymvpa] Surface searchlight taking 6 to 8 hours

2015-07-29 Thread John Baublitz
Thank you very much for the support. Unfortunately I have tried using this
GIFTI file that it outputs with FreeSurfer as an overlay and surface and it
throws errors for all FreeSurfer utils and even AFNI utils. FreeSurfer
mris_convert outputs:

mrisReadGIFTIfile: mris is NULL! found when parsing file f_mvpa_rh.func.gii

This seems to indicate that it is not saving it as a surface file. Likewise
AFNI's gifti_tool outputs:

** failed to find coordinate/triangle structs

How exactly is the data being stored in the GIFTI file? It seems that it is
not saving it as triangles and coordinates: even in the code you linked to in
the GitHub commit, the NIFTI intent codes are neither NIFTI_INTENT_POINTSET
nor NIFTI_INTENT_TRIANGLE by default. I've also run into a problem where the
dataset that I've loaded has no intent codes; unfortunately, it appears that
this means that the NIFTI intent code is set to NIFTI_INTENT_NONE. Is there
any way to work around these problems?

On Jul 28, 2015 8:01 AM, Nick Oosterhof n.n.ooster...@googlemail.com
wrote:


  On 23 Jul 2015, at 17:38, Nick Oosterhof n.n.ooster...@googlemail.com
 wrote:
 
 
  is there a utility to convert from SUMA surface files to FreeSurfer
 surface files included in PyMVPA?
 
  ConvertDset (included with AFNI) can convert between NIML and GIFTI, and
 mris_convert (included with FreeSurfer) can convert between GIFTI and a
 variety of other file formats used in FreeSurfer.

 With the latest code on github [1] there is now basic support for GIFTI
 datasets in PyMVPA [2].

 [1] https://github.com/PyMVPA/PyMVPA
 [2]
 https://github.com/PyMVPA/PyMVPA/commit/05ebdda025401148425a7894b3a14ea73b932dfc


Re: [pymvpa] Surface searchlight taking 6 to 8 hours

2015-07-28 Thread Nick Oosterhof

 On 23 Jul 2015, at 17:38, Nick Oosterhof n.n.ooster...@googlemail.com wrote:
 
 
 is there a utility to convert from SUMA surface files to FreeSurfer surface 
 files included in PyMVPA?
 
 ConvertDset (included with AFNI) can convert between NIML and GIFTI, and 
 mris_convert (included with FreeSurfer) can convert between GIFTI and a 
 variety of other file formats used in FreeSurfer. 

With the latest code on github [1] there is now basic support for GIFTI 
datasets in PyMVPA [2].

[1] https://github.com/PyMVPA/PyMVPA
[2] 
https://github.com/PyMVPA/PyMVPA/commit/05ebdda025401148425a7894b3a14ea73b932dfc


Re: [pymvpa] Surface searchlight taking 6 to 8 hours

2015-07-23 Thread John Baublitz
Thank you for the quick response. I earlier tried outputting a surface file
using both niml.write() and surf.write(), as my lab would prefer to
visualize the results on the surface. I mentioned this in a previous email
and was told that I should be using niml.write() and visualize using SUMA.
I decided against this because not only would it fail to open with our
version of SUMA (I can include the error if that would be helpful) but I
have found no evidence that .dset files are compatible with FreeSurfer. My
lab has a hard requirement that whatever we are outputting from the
analysis must be able to be visualized in FreeSurfer. Is there any way to
output a FreeSurfer-compatible surface file using PyMVPA? If not, is there
a utility to convert from SUMA surface files to FreeSurfer surface files
included in PyMVPA?

On Thu, Jul 23, 2015 at 6:45 AM, Nick Oosterhof 
n.n.ooster...@googlemail.com wrote:


  On 22 Jul 2015, at 20:11, John Baublitz jb...@bu.edu wrote:
 
  I have been battling with a surface searchlight that has been taking 6
 to 8 hours for a small dataset. It outputs a usable analysis but the time
 it takes is concerning given that our lab is looking to use even higher
 resolution fMRI datasets in the future. I profiled the searchlight call and
 it looks like approximately 90% of those hours is spent mapping in the
 function from feature IDs to linear voxel IDs (the function
 feature_id2linear_voxel_ids).

 From mvpa2.misc.surfing.queryengine, you are using the
 SurfaceVoxelsQueryEngine, not the SurfaceVerticesQueryEngine? Only the
 former should be using the feature_id2linear_voxel_ids function.

 (When instantiating a query engine through disc_surface_queryengine, the
 Vertices variant is the default; the Voxels variant is used when
 output_modality='volume').

 For the typical surface-based analysis, the output is a surface-based
 dataset, and the SurfaceVerticesQueryEngine is used for that. When using
 the SurfaceVoxelsQueryEngine, the output is a volumetric dataset.

 I looked into the source code and it appears that it is using the 'in'
 keyword on a list, which has to search through every element of the list for
 each iteration of the list comprehension, and then calls that function for
 each feature. This might account for the slowdown. I'm wondering if there
 is a way to work around this or speed it up.

 When using the SurfaceVoxelsQueryEngine, the Euclidean distance between
 each node (on the surface) and each voxel (in the volume) is computed. My
 guess is that this is responsible for the slow-down. This could probably be
 made faster by dividing the 3D space into blocks, assigning nodes and
 voxels to each block, and then computing distances between nodes and voxels
 only within each block and across neighbouring ones. (A somewhat similar
 approach is taken in
 mvpa2.support.nibabel.Surface.map_to_high_resolution_surf.) But that would
 take some time to implement and test. How important is this feature for
 you? Is there a particular reason why you would want the output to be a
 volumetric, not surface-based, dataset?

Re: [pymvpa] Surface searchlight taking 6 to 8 hours

2015-07-23 Thread Christopher J Markiewicz
On 07/23/2015 06:45 AM, Nick Oosterhof wrote:
 
 On 22 Jul 2015, at 20:11, John Baublitz jb...@bu.edu wrote:

 I have been battling with a surface searchlight that has been taking 6 to 8 
 hours for a small dataset. It outputs a usable analysis but the time it 
 takes is concerning given that our lab is looking to use even higher 
 resolution fMRI datasets in the future. I profiled the searchlight call and 
 it looks like approximately 90% of those hours is spent mapping in the 
 function from feature IDs to linear voxel IDs (the function 
 feature_id2linear_voxel_ids).
 
 From mvpa2.misc.surfing.queryengine, you are using the 
 SurfaceVoxelsQueryEngine, not the SurfaceVerticesQueryEngine? Only the former 
 should be using the feature_id2linear_voxel_ids function. 
 
 (When instantiating a query engine through disc_surface_queryengine, the 
 Vertices variant is the default; the Voxels variant is used when 
 output_modality='volume').
 
 For the typical surface-based analysis, the output is a surface-based 
 dataset, and the SurfaceVerticesQueryEngine is used for that. When using the 
 SurfaceVoxelsQueryEngine, the output is a volumetric dataset.
 
 I looked into the source code and it appears that it is using the 'in' keyword 
 on a list, which has to search through every element of the list for each 
 iteration of the list comprehension, and then calls that function for each 
 feature. This might account for the slowdown. I'm wondering if there is a 
 way to work around this or speed it up.
 
 When using the SurfaceVoxelsQueryEngine, the Euclidean distance between each 
 node (on the surface) and each voxel (in the volume) is computed. My guess is 
 that this is responsible for the slow-down. This could probably be made 
 faster by dividing the 3D space into blocks, assigning nodes and voxels 
 to each block, and then computing distances between nodes and voxels only 
 within each block and across neighbouring ones. (A somewhat similar approach 
 is taken in mvpa2.support.nibabel.Surface.map_to_high_resolution_surf.) But 
 that would take some time to implement and test. How important is this 
 feature for you? Is there a particular reason why you would want the output 
 to be a volumetric, not surface-based, dataset?
 

Nick,

To clarify, are you saying that using SurfaceVerticesQueryEngine runs
the classifiers (or other measure) on sets of vertices, not sets of
voxels? I'm not familiar enough with AFNI surfaces, but the ratio of
vertices to intersecting voxels in FreeSurfer is about 6:1. If a
searchlight is a set of vertices, how is the implicit resampling
accounted for?

Sorry if this is explained in documentation. I have my own
FreeSurfer-based implementation that uses the surface only to generate
sets of voxels, so I haven't been keeping close tabs on how PyMVPA's
AFNI-based one works.

Also, if mapping vertices to voxel IDs is a serious bottleneck, you can
have a look at my query engine
(https://github.com/effigies/PyMVPA/blob/qnl_surf_searchlight/mvpa2/misc/neighborhood.py#L383).
It uses FreeSurfer vertex map volumes (see: mri_surf2vol --vtxvol),
where each voxel contains the ID of the vertex nearest its center. Maybe
AFNI has something similar?
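
For illustration, a rough numpy sketch of that lookup (the file name and the
assumption that background voxels hold a value that matches no vertex ID are
hypothetical):

    import numpy as np
    import nibabel as nib

    # vertex-map volume from: mri_surf2vol --vtxvol ...
    # each voxel stores the ID of the vertex nearest its center
    vtx = np.asarray(nib.load('rh.vtxvol.mgz').dataobj).ravel()

    def linear_voxel_ids(vertex_ids):
        # linear IDs of voxels whose nearest vertex is in the searchlight
        return np.flatnonzero(np.in1d(vtx, vertex_ids))

    print(linear_voxel_ids([10, 11, 42]))       # example vertex IDs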

-- 
Christopher J Markiewicz
Ph.D. Candidate, Quantitative Neuroscience Laboratory
Boston University




Re: [pymvpa] Surface searchlight taking 6 to 8 hours

2015-07-23 Thread Nick Oosterhof

 On 23 Jul 2015, at 17:43, John Baublitz jb...@bu.edu wrote:
 
 The error upon executing suma -spec sl.dset is:
 
 Error SUMA_Read_SpecFile: Your spec file contains uncommented gibberish:
   AFNI_dataset
 Please deal with it.
 Error SUMA_Engine: Error in SUMA_Read_SpecFile.

You cannot use NIML .dset files as SUMA .spec files. 

If the anatomical surface file (with node coordinates and face indices) is 
stored in a file my_surface.asc (some other extensions are supported, including 
GIFTI), you can view that surface in SUMA using:

suma -i my_surface.asc

and then, in the SUMA viewer's object-controller window (ctrl+s), click 'load 
set' to select a NIML .dset file.

A .spec file defines file names and other properties for a set of anatomical 
surfaces that can be shown in SUMA. An example of how a .spec file is 
organised is lh_ico16_al.spec, included in the tutorial_data_surf*.gz [1].

[1] http://data.pymvpa.org/datasets/tutorial_data/
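
Putting the pieces together from Python, a hedged sketch (niml.write() is the
call used earlier in this thread; the mvpa2.datasets.niml path and the sl_ds
variable are assumptions):

    from mvpa2.datasets import niml

    # sl_ds: a surface-based searchlight result Dataset
    niml.write('sl.niml.dset', sl_ds)

    # then, from the shell:
    #   suma -i my_surface.asc
    # and load sl.niml.dset via 'load set' in the object-controller.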

Re: [pymvpa] Surface searchlight taking 6 to 8 hours

2015-07-23 Thread Nick Oosterhof

 On 23 Jul 2015, at 16:46, Christopher J Markiewicz effig...@bu.edu wrote:
 
 To clarify, are you saying that using SurfaceVerticesQueryEngine runs
 the classifiers (or other measure) on sets of vertices, not sets of
 voxels?

No, the *input* for classification (or other measure) is from voxels (without 
interpolation); the output (such as classification accuracy) is assigned to 
nodes. Distances are measured along the cortical surface, meaning that the 
shape of each searchlight region (in voxel space) resembles that of a curved 
cylinder with the top and bottom part lying on the pial and white surfaces, and 
the side connecting those two surfaces.

 I'm not familiar enough with AFNI surfaces, but the ratio of
 vertices to intersecting voxels in FreeSurfer is about 6:1. If a
 searchlight is a set of vertices, how is the implicit resampling
 accounted for?

As above, there is no resampling of data. All unique voxels contained in the 
'curved cylinder' searchlight are used for classification.

 
 Also, if mapping vertices to voxel IDs is a serious bottleneck, you can
 have a look at my query engine
 (https://github.com/effigies/PyMVPA/blob/qnl_surf_searchlight/mvpa2/misc/neighborhood.py#L383).
 It uses FreeSurfer vertex map volumes (see: mri_surf2vol --vtxvol),
 where each voxel contains the ID of the vertex nearest its center. Maybe
 AFNI has something similar?

Thanks for the reference. It is possible that AFNI has something similar, but 
in PyMVPA we try to be independent from AFNI where possible (the 
pymvpa2-prep-afni-surf script is a clear exception). But a similar approach 
could possibly be used to speed up the mapping between voxels and nearest 
nodes.

Re: [pymvpa] Surface searchlight taking 6 to 8 hours

2015-07-23 Thread Nick Oosterhof

 On 23 Jul 2015, at 16:47, John Baublitz jb...@bu.edu wrote:
 
 Thank you for the quick response. I earlier tried outputting a surface file 
 using both niml.write() and surf.write(), as my lab would prefer to 
 visualize the results on the surface. I mentioned this in a previous email 
 and was told that I should be using niml.write() and visualize using SUMA. I 
 decided against this because not only would it fail to open with our version 
 of SUMA (I can include the error if that would be helpful)

Indeed, that would be helpful.

 but I have found no evidence that .dset files are compatible with FreeSurfer. 
 My lab has a hard requirement that whatever we are outputting from the 
 analysis must be able to be visualized in FreeSurfer. Is there any way to 
 output a FreeSurfer-compatible surface file using PyMVPA?

Not at the moment, but it would be nice to support GIFTI. Currently surface 
anatomy can be exported as GIFTI, but there is no support yet for 
functional files. I've added an issue [1], so it may be added in the future.
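
Until then, as a stop-gap outside PyMVPA, per-node data can be written to a
functional GIFTI directly with nibabel; a minimal sketch, assuming one float
per node and a recent nibabel (constructor details vary across versions):

    import numpy as np
    import nibabel as nib
    from nibabel import gifti

    data = np.zeros(163842, dtype=np.float32)   # hypothetical per-node values
    darray = gifti.GiftiDataArray(data, intent='NIFTI_INTENT_NONE')
    nib.save(gifti.GiftiImage(darrays=[darray]), 'sl_rh.func.gii')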

 If not, is there a utility to convert from SUMA surface files to FreeSurfer 
 surface files included in PyMVPA?

ConvertDset (included with AFNI) can convert between NIML and GIFTI, and 
mris_convert (included with FreeSurfer) can convert between GIFTI and a variety 
of other file formats used in FreeSurfer. 


[1] https://github.com/PyMVPA/PyMVPA/issues/347

Re: [pymvpa] Surface searchlight taking 6 to 8 hours

2015-07-23 Thread John Baublitz
The error upon executing suma -spec sl.dset is:

Error SUMA_Read_SpecFile: Your spec file contains uncommented gibberish:
  AFNI_dataset
Please deal with it.
Error SUMA_Engine: Error in SUMA_Read_SpecFile.

I followed the tutorial on how to generate this, so I'm not sure whether this
is a bug or a version problem.

On Thu, Jul 23, 2015 at 11:38 AM, Nick Oosterhof 
n.n.ooster...@googlemail.com wrote:


  On 23 Jul 2015, at 16:47, John Baublitz jb...@bu.edu wrote:
 
  Thank you for the quick response. I earlier tried outputting a surface file
 using both niml.write() and surf.write(), as my lab would prefer to
 visualize the results on the surface. I mentioned this in a previous email
 and was told that I should be using niml.write() and visualize using SUMA.
 I decided against this because not only would it fail to open with our
 version of SUMA (I can include the error if that would be helpful)

 Indeed, that would be helpful.

  but I have found no evidence that .dset files are compatible with
 FreeSurfer. My lab has a hard requirement that whatever we are outputting
 from the analysis must be able to be visualized in FreeSurfer. Is there any
 way to output a FreeSurfer-compatible surface file using PyMVPA?

 Not at the moment, but it would be nice to support GIFTI. Currently
 surface anatomy can be exported as GIFTI, but there is no support yet
 for functional files. I've added an issue [1], so it may be added in the
 future.

  If not, is there a utility to convert from SUMA surface files to
 FreeSurfer surface files included in PyMVPA?

 ConvertDset (included with AFNI) can convert between NIML and GIFTI, and
  mris_convert (included with FreeSurfer) can convert between GIFTI and a
 variety of other file formats used in FreeSurfer.


 [1] https://github.com/PyMVPA/PyMVPA/issues/347


Re: [pymvpa] Surface searchlight taking 6 to 8 hours

2015-07-23 Thread Nick Oosterhof

 On 22 Jul 2015, at 20:11, John Baublitz jb...@bu.edu wrote:
 
 I have been battling with a surface searchlight that has been taking 6 to 8 
 hours for a small dataset. It outputs a usable analysis but the time it takes 
 is concerning given that our lab is looking to use even higher resolution 
 fMRI datasets in the future. I profiled the searchlight call and it looks 
 like approximately 90% of those hours is spent mapping in the function from 
 feature IDs to linear voxel IDs (the function feature_id2linear_voxel_ids).

From mvpa2.misc.surfing.queryengine, you are using the 
SurfaceVoxelsQueryEngine, not the SurfaceVerticesQueryEngine? Only the former 
should be using the feature_id2linear_voxel_ids function. 

(When instantiating a query engine through disc_surface_queryengine, the 
Vertices variant is the default; the Voxels variant is used when 
output_modality='volume').

For the typical surface-based analysis, the output is a surface-based dataset, 
and the SurfaceVerticesQueryEngine is used for that. When using the 
SurfaceVoxelsQueryEngine, the output is a volumetric dataset.
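
In code, the choice between the two looks roughly like this (the positional
arguments after the radius are assumptions about the call signature, not
checked against a specific PyMVPA version):

    from mvpa2.misc.surfing.queryengine import disc_surface_queryengine

    # default: SurfaceVerticesQueryEngine, surface-based output
    qe_surf = disc_surface_queryengine(radius, fmri_ds,
                                       white_surf, pial_surf)

    # SurfaceVoxelsQueryEngine, volumetric output -- the variant that
    # calls feature_id2linear_voxel_ids, i.e. the slow path profiled above
    qe_vol = disc_surface_queryengine(radius, fmri_ds,
                                      white_surf, pial_surf,
                                      output_modality='volume')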

 I looked into the source code and it appears that it is using the 'in' 
 keyword on a list, which has to search through every element of the list for 
 each iteration of the list comprehension, and then calls that function for 
 each feature. This might account for the slowdown. I'm wondering if there is 
 a way to work around this or speed it up.

When using the SurfaceVoxelsQueryEngine, the Euclidean distance between each 
node (on the surface) and each voxel (in the volume) is computed. My guess is 
that this is responsible for the slow-down. This could probably be made faster 
by dividing the 3D space into blocks, assigning nodes and voxels to each 
block, and then computing distances between nodes and voxels only within each 
block and across neighbouring ones. (A somewhat similar approach is taken in 
mvpa2.support.nibabel.Surface.map_to_high_resolution_surf.) But that would take 
some time to implement and test. How important is this feature for you? Is 
there a particular reason why you would want the output to be a volumetric, not 
surface-based, dataset?
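
A rough illustration of the blocking idea in plain numpy (not PyMVPA code;
'block' is chosen as the maximum distance of interest, so any node/voxel pair
within that distance must share a grid cell or sit in adjacent cells):

    import numpy as np
    from collections import defaultdict

    def close_pairs(nodes, voxels, block):
        # assign each 3D point to a grid cell of side length 'block'
        cell = lambda p: tuple(np.floor(p / block).astype(int))
        vox_by_cell = defaultdict(list)
        for j, v in enumerate(voxels):
            vox_by_cell[cell(v)].append(j)
        for i, n in enumerate(nodes):
            cx, cy, cz = cell(n)
            # check only the node's own cell and its 26 neighbours
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    for dz in (-1, 0, 1):
                        for j in vox_by_cell.get((cx + dx, cy + dy, cz + dz), ()):
                            if np.linalg.norm(n - voxels[j]) <= block:
                                yield i, j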

[pymvpa] Surface searchlight taking 6 to 8 hours

2015-07-22 Thread John Baublitz
Hi all,
I have been battling with a surface searchlight that has been taking 6 to 8
hours for a small dataset. It outputs a usable analysis but the time it
takes is concerning given that our lab is looking to use even higher
resolution fMRI datasets in the future. I profiled the searchlight call and
it looks like approximately 90% of those hours is spent mapping in the
function from feature IDs to linear voxel IDs (the
function feature_id2linear_voxel_ids). I looked into the source code and it
appears that it is using the 'in' keyword on a list, which has to search
through every element of the list for each iteration of the list
comprehension, and then calls that function for each feature. This might
account for the slowdown. I'm wondering if there is a way to work around
this or speed it up.
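
For illustration, the kind of change that removes the repeated O(n) scans
(names are hypothetical, not the actual PyMVPA internals):

    # list: each membership test scans the whole list
    hits = [i for i in feature_ids if i in lin_vox_list]

    # set: hashing makes each membership test O(1) on average
    lin_vox_set = set(lin_vox_list)
    hits = [i for i in feature_ids if i in lin_vox_set]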

Thanks,
John