Re: [caret-users] performance issues

2006-10-30 Thread John Harwell


Caret Users,

Caret uses the QT toolkit (www.trolltech.com), which allows Caret to run
on Linux, Mac OS X, and Windows.  In 2005, Trolltech released QT 4 and
announced that support for QT 3 would end in 2007.  So, starting in
December 2005, after the release of QT 4.1, I began converting Caret to
QT 4.  The first release of Caret using QT 4 was Caret 5.33 (03 March 2006).


Unfortunately, QT 4's input/output library (used for file reading and
writing) contained some critical bugs.  The functions that report
end-of-file and the current position in the file did not operate properly
(http://www.trolltech.com/developer/task-tracker/index_html?id=99383&method=entry),
and the function that reads a file one line at a time
(http://www.trolltech.com/developer/task-tracker/index_html?method=entry&id=104776)
did not handle lines containing many characters correctly.


Due to the problems with QT 4's I/O library, extra code was added to
Caret so that file reading and writing would function correctly, and it
is this extra code that is causing some of the performance issues you
are experiencing.  QT 4.2, which contains fixes for some of the bugs,
was just released.  I will not upgrade the version of QT used by Caret
until QT 4.2.1 is released.  Hopefully QT 4.2.1 will contain the needed
bug fixes so that the extra, performance-hampering code can be removed
from Caret.
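
For illustration, here is a minimal sketch of one way such a fallback
can be written (this is only an illustration, not necessarily the code
in Caret); it avoids the unreliable readLine()/atEnd() calls by pulling
one character at a time with getChar(), which is the kind of
character-at-a-time access that shows up as the "Length: 1" reads in the
Filemon trace quoted below:

#include <QFile>
#include <QByteArray>

// Read one line without relying on QIODevice::readLine() or atEnd().
// getChar() returns false at end-of-file (or on error), so the loop
// terminates without ever asking the device for its file position.
QByteArray readLineWorkaround(QFile& file)
{
   QByteArray line;
   char c = 0;
   while (file.getChar(&c)) {
      if (c == '\n') {
         break;
      }
      line.append(c);
   }
   return line;
}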


At one time, the Display Control Dialog was very slow to launch on some
systems.  Removing the use of the QT 3 support library included in QT 4
eliminated this problem.  That fix was included in Caret 5.4 (09 June 2006).


--
John Harwell
[EMAIL PROTECTED]
314-362-3467

Department of Anatomy and Neurobiology
Washington University School of Medicine
660 S. Euclid Ave., Box 8108
St. Louis, MO 63110   USA

On Oct 26, 2006, at 10:45 AM, Johannes Klein wrote:


Hi Graham,
Thanks for the tip. I've run Filemon, and it turns out that v5.5
produces a log that is 35 times as big as the v5.31 one.
It seems that 5.5 reads its files byte by byte, whereas 5.31 reads
data in chunks of 4096 bytes. That surely explains the performance
degradation when opening files, although I think there are other
issues too; it seems the latency of opening dialogues has increased
(I suppose this could well be due to the Qt version switch Donna
mentioned).

For reference, here are a few lines of output.
v5.5:
7877	16:05:09	caret5.exe:2296	READ	F:\caret52\Human.colin.Cerebral.L.ATLAS.ALL-EXTENDED.71785\HUMAN.LEFT_HEM\Human.colin.Cerebral.L.ATLAS.ALL-EXTENDED.71785.spec	SUCCESS	Offset: 852 Length: 1
7878	16:05:09	caret5.exe:2296	READ	F:\caret52\Human.colin.Cerebral.L.ATLAS.ALL-EXTENDED.71785\HUMAN.LEFT_HEM\Human.colin.Cerebral.L.ATLAS.ALL-EXTENDED.71785.spec	SUCCESS	Offset: 853 Length: 1
7879	16:05:09	caret5.exe:2296	READ	F:\caret52\Human.colin.Cerebral.L.ATLAS.ALL-EXTENDED.71785\HUMAN.LEFT_HEM\Human.colin.Cerebral.L.ATLAS.ALL-EXTENDED.71785.spec	SUCCESS	Offset: 854 Length: 1
7880	16:05:09	caret5.exe:2296	READ	F:\caret52\Human.colin.Cerebral.L.ATLAS.ALL-EXTENDED.71785\HUMAN.LEFT_HEM\Human.colin.Cerebral.L.ATLAS.ALL-EXTENDED.71785.spec	SUCCESS	Offset: 855 Length: 1
7881	16:05:09	caret5.exe:2296	READ	F:\caret52\Human.colin.Cerebral.L.ATLAS.ALL-EXTENDED.71785\HUMAN.LEFT_HEM\Human.colin.Cerebral.L.ATLAS.ALL-EXTENDED.71785.spec	SUCCESS	Offset: 856 Length: 1
7882	16:05:09	caret5.exe:2296	READ	F:\caret52\Human.colin.Cerebral.L.ATLAS.ALL-EXTENDED.71785\HUMAN.LEFT_HEM\Human.colin.Cerebral.L.ATLAS.ALL-EXTENDED.71785.spec	SUCCESS	Offset: 857 Length: 1

... and so forth.

v5.31:
1238	16:29:43	caret5.exe:1600	QUERY INFORMATION	F:\caret52\Human.colin.Cerebral.L.ATLAS.ALL-EXTENDED.71785\HUMAN.LEFT_HEM\Human.colin.Cerebral.L.ATLAS.ALL-EXTENDED.71785.spec	BUFFER OVERFLOW	FileAllInformation
1239	16:29:43	caret5.exe:1600	READ	F:\caret52\Human.colin.Cerebral.L.ATLAS.ALL-EXTENDED.71785\HUMAN.LEFT_HEM\Human.colin.Cerebral.L.ATLAS.ALL-EXTENDED.71785.spec	SUCCESS	Offset: 0 Length: 4096
1240	16:29:43	caret5.exe:1600	READ	F:\caret52\Human.colin.Cerebral.L.ATLAS.ALL-EXTENDED.71785\HUMAN.LEFT_HEM\Human.colin.Cerebral.L.ATLAS.ALL-EXTENDED.71785.spec	SUCCESS	Offset: 0 Length: 512
1241	16:29:43	caret5.exe:1600	QUERY INFORMATION	F:\caret52\Human.colin.Cerebral.L.ATLAS.ALL-EXTENDED.71785\HUMAN.LEFT_HEM\Human.colin.Cerebral.L.ATLAS.ALL-EXTENDED.71785.spec	BUFFER OVERFLOW	FileFsVolumeInformation

Is there any chance of a fix?
Thanks a lot,
Johannes




Re: [caret-users] Fishy brainfish sampling

2006-10-30 Thread Donna Dierker

Graham,

1.  You're right:  BrainFish isn't mapping negative activity as 
intended.  I just mapped 
CARET_TUTORIAL_SEPT06/BURTON_04_VibroTactile_EARLY_BLIND+orig.HEAD onto 
PALS_B12 711-2C using BrainFish, and although this volume has big 
regions of negative activity, the metric file gets assigned 0.0 in these 
regions:


VOXEL IJK(78, 171, 96)   XYZ(-10.5, 47.5, 21.5)
   Anatomy: 1048.8
   Functional: -16.0
Node 30291
Metric: 0.0 0.0

MCW was happy with it back when we implemented it, but evidently they 
were mapping just positive data.  I'll figure out what lines need 
changing, but I'm not sure when this will be implemented.


Nor will I guarantee satisfaction, since these lines in the code suggest 
an MCW priority not entirely aligned with your suggestion #2 (which I 
interpret as the "most extreme voxel" method -- not a bad alternative to have):


//
// Allow positive activity to override negative activity
// Negative only overrides "less negative"
//
if (nearestNode >= 0) {
   assigned[nearestNode] = true;
   const float nodeValue = activity[nearestNode];
   if (voxel > 0.0) {
      if (voxel > nodeValue) {
         activity[nearestNode] = voxel;
      }
   }
   else if (nodeValue < 0.0) {
      if (voxel < nodeValue) {
         activity[nearestNode] = voxel;
      }
   }
}
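
For what it's worth, here is a rough sketch of what that "most extreme 
voxel" rule could look like, reusing the variable names from the excerpt 
above (voxel, activity, assigned, nearestNode) and assuming <cmath> is 
available for std::fabs -- this is just an illustration of the idea, not 
code that exists in Caret:

//
// "Most extreme voxel": keep whichever value has the larger
// absolute magnitude, regardless of sign
//
if (nearestNode >= 0) {
   assigned[nearestNode] = true;
   const float nodeValue = activity[nearestNode];
   if (std::fabs(voxel) > std::fabs(nodeValue)) {
      activity[nearestNode] = voxel;
   }
}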

Regarding converting Metric files to ASCII, DVE pointed out the 
caret_file_convert utility and the option when saving the metric, but 
there's a third alternative: File: Convert Data File Formats.


Also, on the D/C metric menu, there is a histogram with Min, Max, Range, 
etc.  (Also, the utilities under Attributes: Metric are handy.)


2.  Besides the comments above, I confess I'm not a big fan of the 
BrainFish algorithm.  I'm trying not to tell you what you want, but 
there are only a handful of cases in my experience where I deemed this 
voxel-centric algorithm the best tool for the job.  I suspect your 
choice of this method is related to your separate message regarding the 
average fiducial surface itself, so I'm going to focus my efforts on 
helping you better understand its nature (in a separate reply).


On 10/30/2006 08:53 AM, John Harwell wrote:


The MCW BrainFish algorithm was developed by a group at the Medical 
College of Wisconsin to meet a specific need they had.  Unless they 
tell me it is not working correctly, I am not going to dig into it.  I 
believe the enclosing voxel algorithm is the most popular; you might 
consider it.


--
John Harwell
[EMAIL PROTECTED]
314-362-3467

Department of Anatomy and Neurobiology
Washington University School of Medicine
660 S. Euclid Ave., Box 8108
St. Louis, MO 63110   USA

On Oct 27, 2006, at 9:20 PM, Graham Wideman wrote:


Folks:

The brainfish algorithm is described here:
http://brainmap.wustl.edu/caret/html4.6/map_fmri_to_surface/map_fmri_to_surface_dialog.html

... as picking up negative voxels if there are no positive ones 
around.  A couple of issues:


1. We don't seem to be able to get negative voxels to appear at all 
when using the brainfish mapping algorithm. This is for a volume that 
has been thresholded, hence lots of zero voxels, and it has well 
separated islands of positive and negative values.


(And applying mapping other than brainfish indeed shows the negative 
regions in caret -- so we *think* we know how to get the rest of the 
display settings right...)


So the question is: are you sure that the negative aspect of this 
mapping algorithm is currently working?


Also, is there a tool for inspecting metric files? (Or conversion to 
text?)


2. Would it make sense to have an alternative brainfish strategy 
which allocates to a node the voxel value with the highest *absolute* 
value, instead of "any positive beats any negative"?  This is 
particularly a concern for fMRI input that has not been thresholded.


Thanks,

Graham

--
Donna L. Dierker
(Formerly Donna Hanlon; no change in marital status -- see 
http://home.att.net/~donna.hanlon for details.)



Re: [caret-users] Fishy brainfish sampling

2006-10-30 Thread John Harwell


The MCW BrainFish algorithm was developed by a group at the Medical  
College of Wisconsin to meet a specific need they had.  Unless they  
tell me it is not working correctly, I am not going to dig into it.   
I believe the enclosing voxel algorithm is the most popular; you  
might consider it.


--
John Harwell
[EMAIL PROTECTED]
314-362-3467

Department of Anatomy and Neurobiology
Washington University School of Medicine
660 S. Euclid Ave., Box 8108
St. Louis, MO 63110   USA

On Oct 27, 2006, at 9:20 PM, Graham Wideman wrote:


Folks:

The brainfish algorithm is described here:
http://brainmap.wustl.edu/caret/html4.6/map_fmri_to_surface/map_fmri_to_surface_dialog.html

... as picking up negative voxels if there are no positive ones  
around.  A couple of issues:


1. We don't seem to be able to get negative voxels to appear at all  
when using the brainfish mapping algorithm. This is for a volume  
that has been thresholded, hence lots of zero voxels, and it has  
well separated islands of positive and negative values.


(And applying mapping other than brainfish indeed shows the  
negative regions in caret -- so we *think* we know how to get the  
rest of the display settings right...)


So the question is: are you sure that the negative aspect of this  
mapping algorithm is currently working?


Also, is there a tool for inspecting metric files? (Or conversion  
to text?)


2. Would it make sense to have an alternative brainfish strategy  
which allocates to a node the voxel value with the highest  
*absolute* value, instead of "any positive beats any negative"?   
This is particularly a concern for fMRI input that has not been  
thresholded.


Thanks,

Graham
