Hi All,

I would like to compute the norm of the weight vector w for the Support Vector Regression algorithm. Am I correct in thinking that it can be computed in the following way, using ||w||^2 = sum_ij alpha_i alpha_j K(x_i, x_j) over the support vectors?

import numpy as np
from sklearn.svm import SVR
from sklearn.metrics.pairwise import rbf_kernel

clf = SVR(C=1.0, epsilon=0.2, kernel="rbf", gamma=g)
clf.fit(X, y)
v = np.squeeze(clf.dual_coef_)

SV = clf.support_vectors_

K = rbf_kernel(SV, SV, gamma=g)

norm = np.sqrt(v.T.dot(K).dot(v))


Is there some way to get the norm without recomputing the kernel matrix entries of the support vectors?
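One idea I had: if I fit with kernel="precomputed", the full Gram matrix is already in hand, so the support-vector block can be sliced out with clf.support_ rather than recomputed. A rough sketch of what I mean (synthetic data, purely illustrative):

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.RandomState(0)
X = rng.rand(40, 3)
y = rng.rand(40)
g = 0.5

# Precompute the full Gram matrix once and fit on it directly.
K_full = rbf_kernel(X, X, gamma=g)
clf = SVR(C=1.0, epsilon=0.2, kernel="precomputed")
clf.fit(K_full, y)

# Slice the support-vector block out of the cached Gram matrix
# instead of recomputing those kernel entries.
idx = clf.support_
K_sv = K_full[np.ix_(idx, idx)]

v = clf.dual_coef_.ravel()
norm = np.sqrt(v.dot(K_sv).dot(v))
```

Of course this trades the recomputation for storing the full n x n Gram matrix, so I am not sure it is a win for large training sets.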

In addition, I would like to use the same norm when training on the whole set of examples. For example, with 5-fold cross validation, parameters are selected using 4/5 of the training set, but the selected parameters are then used in conjunction with the whole training set. I would like model selection to pick the norm of w with the lowest error, and then to reproduce that same norm when training on the entire training set. How might this be achieved? One way I can think of is to try a number of C values on the whole training set and pick the one whose norm is closest to that found during model selection.
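The last idea would look roughly like this, I think (a sketch only: target_norm stands in for the norm found during model selection, and weight_norm is a hypothetical helper wrapping the computation above):

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics.pairwise import rbf_kernel

def weight_norm(clf, gamma):
    """||w|| in feature space, from the dual coefficients and support vectors."""
    v = clf.dual_coef_.ravel()
    K = rbf_kernel(clf.support_vectors_, clf.support_vectors_, gamma=gamma)
    return np.sqrt(v.dot(K).dot(v))

rng = np.random.RandomState(0)
X = rng.rand(60, 4)
y = rng.rand(60)
g = 0.5
target_norm = 1.0  # placeholder for the norm selected during cross validation

# Refit on the full training set over a grid of C values and keep the C
# whose resulting ||w|| is closest to the selected norm.
Cs = 10.0 ** np.arange(-2, 3)
norms = []
for C in Cs:
    clf = SVR(C=C, epsilon=0.2, kernel="rbf", gamma=g).fit(X, y)
    norms.append(weight_norm(clf, g))

best_C = Cs[np.argmin(np.abs(np.asarray(norms) - target_norm))]
```

But this feels indirect, so I would be glad to hear of a cleaner way.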

Thanks in advance for any help,

Charanpal

_______________________________________________
Scikit-learn-general mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/scikit-learn-general
