Hello, I am running a searchlight analysis and I've encountered a strange feature of the resulting classification accuracies: all of the non-zero accuracies have only one decimal place of precision (e.g., 0.5, 0.6, 0.7, 0.8). Unfortunately, I haven't been able to figure out what aspect of my code is truncating or rounding the accuracies.
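To make the puzzle concrete, here is a minimal numeric sketch in plain NumPy (not my actual PyMVPA script; the per-fold counts are made up) of the precision I would expect when fold-wise accuracies are averaged under the setup described below:

```python
import numpy as np

# Hypothetical setup matching the analysis described below: ~240 samples
# split evenly into 10 cross-validation folds of 24 samples each.
n_samples, n_folds = 240, 10
fold_size = n_samples // n_folds  # 24

# If each fold's accuracy is (correct / 24), the mean over 10 folds is a
# multiple of 1/240 -- i.e., much finer than one decimal place.
correct_per_fold = np.array([14, 15, 13, 16, 12, 17, 14, 15, 13, 16])
fold_accs = correct_per_fold / fold_size
mean_acc = fold_accs.mean()
print(mean_acc)  # 0.6041666..., not a clean 0.6

# By contrast, values like 0.5, 0.6, 0.7 would arise if each fold
# contributed only 0.0 or 1.0 (e.g., if only a single sample per fold
# effectively entered the cross-validation).
binary_fold_accs = np.array([1, 1, 0, 1, 1, 0, 1, 0, 1, 0], dtype=float)
print(binary_fold_accs.mean())  # 0.6
```

This is only an illustration of the arithmetic; the question is what in the attached script could be collapsing each fold's accuracy to 0 or 1.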
I attached my code below, but here is some information about my analysis; please let me know if you would like any other details. The searchlight repeats a 10-fold cross-validation procedure with a linear support vector classifier using default parameters. There are 2 classes and roughly 240 samples in total. The sample sequence is randomized and balanced so that each of the 10 folds contains an equal number of instances of the two classes. The searchlight also applies the mean_sample() postproc, so the resulting classification accuracies are averaged over the cross-validation folds.

As mentioned above, I'm unsure what in my script is causing the rounding/truncation of the accuracies. Most of the script is copied from one of the PyMVPA searchlight tutorials, which adds to my confusion, since the tutorial searchlight clearly outputs accuracies with more than one decimal place of precision.

I would greatly appreciate any ideas you might have about what could be causing this problem or how to address it. Thank you for your time.

Best,
Tyler Adkins
PhD Pre-candidate | Cognition and Cognitive Neuroscience
University of Michigan Department of Psychology
530 Church Street
Ann Arbor, MI 48109-1043
Email: adkin...@umich.edu
Office: 3036 East Hall
Lab: B018 East Hall
_______________________________________________
Pkg-ExpPsy-PyMVPA mailing list
Pkg-ExpPsy-PyMVPA@alioth-lists.debian.net
https://alioth-lists.debian.net/cgi-bin/mailman/listinfo/pkg-exppsy-pymvpa