Hello,

I am currently using a linear SVM to perform classification analyses on 
fMRI data. I would like to extract the weight of each feature in order 
to map the weights back onto the brain. Since I am doing a 4-class 
classification (presumably under the 'one vs. one' scheme), there are 3 
weights for each feature. I do not have a maths background, so I do not 
know how to reduce these to a single weight per feature for the map.
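
For context, here is a minimal sketch of my setup with synthetic data 
(the array sizes, labels, and numbers are placeholders, not my actual 
fMRI features), just to show where the several weights per feature come 
from:

import numpy as np
from sklearn.svm import SVC

rng = np.random.RandomState(0)
X = rng.randn(80, 500)           # 80 scans x 500 voxels (toy numbers)
y = rng.randint(0, 4, size=80)   # 4 classes

clf = SVC(kernel='linear').fit(X, y)

# coef_ has one row per underlying binary classifier (the exact number
# of rows depends on the multiclass scheme), so each voxel ends up with
# several weights rather than a single one
print(clf.coef_.shape)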

I am also using RFE. Since it simply runs several linear SVMs and keeps 
only the best-weighted features at each step, I looked at how the 
features are ordered by their weights and found it is done via this 
line of code:

ranks = np.argsort(np.sum(estimator.coef_ ** 2, axis=0))
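
To check that I am reading it correctly, here is a toy version of what 
I think that line computes (the coef_ values below are made up, not 
taken from a real model):

import numpy as np

# each row is one binary classifier, each column is one feature (voxel)
coef = np.array([[ 0.5, -0.1, 2.0],
                 [-1.0,  0.2, 0.1],
                 [ 0.3, -0.3, 0.5]])

# squared weights summed over the rows give one score per feature
scores = np.sum(coef ** 2, axis=0)
print(scores)   # [ 1.34  0.14  4.26]

# argsort puts the feature with the smallest score first, which, if I
# understand correctly, is the one RFE eliminates at this step
ranks = np.argsort(scores)
print(ranks)    # [1 0 2]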

My question is: why is the sum of the squared weights used? What is the 
logic behind it?

I would be glad if you could point me to articles or mathematical 
references that would help me grasp it.

Mathieu Ruiz

PhD Student
+0033 04 56 52 06 03
Grenoble Institut des Neurosciences (GIN, Centre de Recherche Inserm U 
836 - UJF - CEA - CHU)
Centre de Recherche Cerveau et Cognition (CerCo, UMR 5549, 
CNRS-Université Paul Sabatier Toulouse 3)
