Thanks for your response, Andy, and sorry for being unclear.

Here is the idea behind what I wish to study: suppose we have a set S 
of examples and I wish to find the best SVR penalty parameter C from a 
set T of candidate values. As you probably know, one way of doing this 
is to run, say, 5-fold cross-validation for each value of C in T and 
then pick the C with the smallest error. In each iteration of 
cross-validation one trains on 4/5ths of the examples in S; however, 
once a value of C is chosen we train on all the examples in S, and that 
value of C might no longer be the best one now that we are training 
with more examples (I noticed a thread about this here: 
http://sourceforge.net/mailarchive/forum.php?thread_name=4F9DAE79.3010202%40ais.uni-bonn.de&forum_name=scikit-learn-general).
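
For concreteness, this is the standard procedure I mean, sketched with 
scikit-learn's grid search (written against the current scikit-learn 
API; the toy data and the particular grid of C values are just 
placeholders):

from sklearn.datasets import make_regression
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR

X, y = make_regression(n_samples=100, n_features=5, noise=0.1)  # stand-in for S
Cs = [0.1, 1.0, 10.0, 100.0]                                    # stand-in for T

# 5-fold CV picks the C with the smallest held-out error; GridSearchCV
# then refits on all of S using that C, which is exactly the step in
# question.
search = GridSearchCV(SVR(epsilon=0.2, kernel="rbf", gamma=0.1),
                      {"C": Cs}, cv=5, scoring="neg_mean_squared_error")
search.fit(X, y)
print(search.best_params_["C"])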
 


The idea was that the norm of w is a measure of the "complexity" of the 
SVR, and we should use a similar norm of w both for model selection and 
when training on all of S. I realise we can't fix the norm of w 
directly, but we can record the norm corresponding to the lowest error 
during model selection. Then one can train on all of S with every value 
of C in T and pick the model whose norm of w is closest to the one 
found during model selection. What do you think - does it seem like a 
reasonable thing to do?
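
To make this concrete, here is a rough sketch (continuing with X, y, 
and Cs from the snippet above; weight_norm is just an illustrative 
helper based on the computation from my first mail, not anything in 
scikit-learn):

import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.model_selection import KFold
from sklearn.svm import SVR

gamma = 0.1

def weight_norm(clf):
    # ||w||^2 = v^T K v, with v the dual coefficients and K the kernel
    # matrix over the support vectors (see the computation quoted below)
    v = np.squeeze(clf.dual_coef_)
    K = rbf_kernel(clf.support_vectors_, clf.support_vectors_, gamma=gamma)
    return np.sqrt(v.dot(K).dot(v))

# Model selection: record the mean error and mean norm of w per C.
errs, norms = {}, {}
for C in Cs:
    e, n = [], []
    for train, test in KFold(n_splits=5).split(X):
        clf = SVR(C=C, epsilon=0.2, kernel="rbf", gamma=gamma)
        clf.fit(X[train], y[train])
        e.append(np.mean((clf.predict(X[test]) - y[test]) ** 2))
        n.append(weight_norm(clf))
    errs[C], norms[C] = np.mean(e), np.mean(n)

target_norm = norms[min(errs, key=errs.get)]  # norm of w at the best C

# Retrain on all of S for every C in T and keep the model whose norm
# of w is closest to the one found during model selection.
models = {C: SVR(C=C, epsilon=0.2, kernel="rbf", gamma=gamma).fit(X, y)
          for C in Cs}
final_C = min(Cs, key=lambda C: abs(weight_norm(models[C]) - target_norm))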

Thanks,
Charanpal


On 13/07/12 19:24, Andreas Müller wrote:
> Hi Charanpal.
> I think your formula is correct, and I don't think it is possible
> to do this without recomputing the kernel. You definitely need the kernel
> values, and they are internal to LibSVM.
>
> I don't really understand what you try to accomplish, though.
> The SVR has the length of w in the objective, so I don't see how you can 
> further use that for model selection.
>
> In particular, I don't understand what you mean by "using the same norm when 
> training on the entire training set"
> Do you want to fix the length of w? That seems a bit weird to me and I don't 
> think this is possible
> in the standard SVM setup (as the whole point in SVMs is to find the maximum 
> margin plane).
>
> Cheers,
> Andy
>
>
> ----- Original Message -----
> From: "Charanpal Dhanjal" <[email protected]>
> To: [email protected]
> Sent: Thursday, 12 July 2012 12:34:23
> Subject: [Scikit-learn-general] Norm of SVR weight vector
>
>
> Hi All,
>
> I would like to compute the norm of the weight vector w for the Support Vector 
> Regression algorithm. Am I correct in thinking that it can be computed in the 
> following way?
>
> import numpy as np
> import sklearn.metrics.pairwise
> from sklearn.svm import SVR
>
> # X, y and the RBF kernel width g are assumed to be defined
> clf = SVR(C=1.0, epsilon=0.2, kernel="rbf", gamma=g)
> clf.fit(X, y)
>
> # ||w||^2 = v^T K v, where v holds the dual coefficients and K is the
> # kernel matrix over the support vectors
> v = np.squeeze(clf.dual_coef_)
> SV = clf.support_vectors_
> K = sklearn.metrics.pairwise.rbf_kernel(SV, SV, g)
> norm = np.sqrt(v.T.dot(K).dot(v))
>
> Is there some way to get the norm without recomputing the kernel matrix 
> entries of the support vectors?
>
> In addition, I would like to use the same norm in model selection for 
> training on the whole set of examples. For example, if I use 5 fold cross 
> validation for model selection, then parameters are selected using 4/5 of the 
> training set, but the selected parameters are used in conjunction with the 
> whole training set. I would like to use model selection to pick the norm of w 
> with the lowest error and then use the same norm when training on the entire 
> training set. How might this be achieved? One way I can think of is to try a 
> number of C values on the whole training set and then pick the one with norm 
> closest to that found during model selection.
>
> Thanks in advance for any help,
>
> Charanpal
>
>


