Hi,
Is it possible to combine kernels in the SVM in scikit-learn, e.g. a sum of different kernels, or different kernels applied to different dimensions?
Or would I need to define a new kernel?
Best,
James McMurray
Hi James,
If by chance your combination of kernels yields one of the built-in kernels
for SVM ('linear', 'poly', 'rbf', 'sigmoid', see
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/svm/classes.py#L221),
then yes ... :)
Otherwise you will have to pass a custom kernel function, or a precomputed Gram matrix.
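For instance, scikit-learn's SVC accepts a callable as the kernel; a minimal sketch summing a linear and an RBF kernel (the gamma value and dataset here are purely illustrative):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics.pairwise import linear_kernel, rbf_kernel
from sklearn.svm import SVC

# A sum of valid (positive semi-definite) kernels is itself a valid kernel,
# so we can combine built-in pairwise kernels in a custom callable.
def sum_kernel(X, Y):
    return linear_kernel(X, Y) + rbf_kernel(X, Y, gamma=0.1)

X, y = make_classification(n_samples=100, random_state=0)
clf = SVC(kernel=sum_kernel)
clf.fit(X, y)
train_acc = clf.score(X, y)  # training accuracy
```

The callable receives two arrays and must return the Gram matrix between them; SVC calls it both at fit and at predict time.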
Okay, I think the precomputed kernel is the easiest way to do this then.
However, I would note that the GPy Gaussian Processes package
(https://github.com/SheffieldML/GPy/)
has a built-in way of summing and multiplying its built-in kernels (since
these operations always yield valid kernels). This could be a useful model for such an API.
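The idea behind that kind of API can be sketched in plain Python (this is a hypothetical mini-implementation, not GPy's actual classes): kernel objects overload `+` and `*`, and closure properties of PSD kernels guarantee the results stay valid.

```python
import numpy as np

class Kernel:
    """Minimal composable kernel object (illustrative sketch, not GPy's API)."""
    def __init__(self, fn):
        self.fn = fn
    def __call__(self, X, Y):
        return self.fn(X, Y)
    def __add__(self, other):
        # A sum of PSD kernels is PSD.
        return Kernel(lambda X, Y: self(X, Y) + other(X, Y))
    def __mul__(self, other):
        # An elementwise (Schur) product of PSD kernels is PSD.
        return Kernel(lambda X, Y: self(X, Y) * other(X, Y))

linear = Kernel(lambda X, Y: X @ Y.T)
rbf = Kernel(lambda X, Y: np.exp(-0.1 * np.sum((X[:, None] - Y[None]) ** 2, axis=-1)))

combo = linear + rbf * linear      # still a valid kernel
X = np.random.RandomState(0).randn(5, 3)
K = combo(X, X)
assert np.allclose(K, K.T)         # Gram matrix must be symmetric
```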
Hi James.
One issue here is that custom kernels are pure Python, which is a bit
problematic when using machine learning with SVMs.
Using a Python kernel function means that the whole kernel matrix needs to be
precomputed, which can be very time-consuming or might not
be feasible because of memory constraints.
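For completeness, this is what the precomputed route looks like with SVC: you build the Gram matrix yourself (O(n^2) memory) and pass `kernel="precomputed"`. The gamma value and data are illustrative only.

```python
from sklearn.datasets import make_classification
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, random_state=0)
X_train, X_test = X[:150], X[150:]
y_train, y_test = y[:150], y[150:]

# Precompute the full train-vs-train Gram matrix: this is the O(n^2)
# time/memory cost mentioned above.
K_train = rbf_kernel(X_train, X_train, gamma=0.1)
clf = SVC(kernel="precomputed").fit(K_train, y_train)

# At predict time the matrix is test samples vs. *training* samples.
K_test = rbf_kernel(X_test, X_train, gamma=0.1)
pred = clf.predict(K_test)
```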
Perhaps you have become aware of this by now,
but only K-1 subset tests are needed to find the best
categorical split, not 2^(K-1)-1. This was a central
result proved in Breiman's book (Classification and Regression Trees).
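A sketch of how that result is used in practice (function and variable names here are hypothetical, not from scikit-learn): for a binary target, sort the K categories by their mean of y; the optimal subset split is then one of the K-1 splits of that ordering, so the 2^(K-1)-1 subsets never need enumerating.

```python
import numpy as np

def candidate_splits(categories, x, y):
    """Return the K-1 candidate left/right category partitions (sketch)."""
    # Order categories by the mean of the binary target y within each.
    means = {c: y[x == c].mean() for c in categories}
    order = sorted(categories, key=means.get)
    # Split i puts the first i categories on the left, the rest on the right.
    return [(set(order[:i]), set(order[i:])) for i in range(1, len(order))]

x = np.array(["a", "a", "b", "b", "c", "c"])
y = np.array([0, 0, 1, 1, 0, 1])
splits = candidate_splits(["a", "b", "c"], x, y)
n_candidates = len(splits)  # K-1 = 2, instead of 2^(K-1)-1 = 3
```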
+1
Just wanted to point out that the K-1 subset proof only holds for binary
classification. Such heuristics still perform reasonably well for multiclass
classification criteria, though.
On Monday, November 17, 2014, Alexander Hawk tomahawkb...@gmail.com wrote: