Hi Andy,
I have since been able to resolve the pickling issue, though check_estimator
now fails with a message saying that a raised error message does not include
the expected string 'fit'. In general, I am trying to have the fit() method
of my classifier instantiate a separate SVC() classifier with a custom
kernel, fit THAT to the data, and store the fitted instance so that the new
classifier can delegate to it. Is this possible in theory? If so, what is
the best way to implement it?
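
In case it helps, here is a minimal sketch of the pattern I mean (the class
and attribute names are illustrative, not my actual code):

from sklearn.base import BaseEstimator, ClassifierMixin
from sklearn.svm import SVC
from sklearn.utils.validation import check_X_y, check_array, check_is_fitted

class DelegatingSVC(BaseEstimator, ClassifierMixin):

    def __init__(self, C=1.0):
        self.C = C

    def fit(self, X, y):
        X, y = check_X_y(X, y)
        # Build and fit the inner estimator here; the trailing underscore
        # marks it as state learned during fit.
        self.inner_svc_ = SVC(C=self.C)
        self.inner_svc_.fit(X, y)
        return self

    def predict(self, X):
        # Raises NotFittedError (whose message contains 'fit') when fit
        # has not been called, which I believe is the string that
        # check_estimator looks for.
        check_is_fitted(self, 'inner_svc_')
        X = check_array(X)
        return self.inner_svc_.predict(X)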

As before, the requisite code and a .ipynb file are attached.

Best,
Sam

On Thu, Aug 3, 2017 at 6:35 PM, Andreas Mueller <t3k...@gmail.com> wrote:

> Hi Sam.
> You need to put these into a reachable namespace (possibly as private
> functions) so that they can be pickled.
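>
> For example (an illustrative sketch, not your code): under Python 3, a
> module-level function with its parameters bound by functools.partial
> pickles fine, while a function defined inside another function does not:
>
>     import pickle
>     from functools import partial
>
>     def _my_kernel(x, y, gamma=1.0):    # module level, so picklable
>         return (gamma * x * y) ** 2
>
>     pickle.dumps(partial(_my_kernel, gamma=0.5))   # works
>
>     def make_kernel(gamma):
>         def k_inner(x, y):              # lives in a local namespace
>             return _my_kernel(x, y, gamma)
>         return k_inner
>
>     pickle.dumps(make_kernel(0.5))      # fails: can't pickle local object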
> Please stay on the sklearn mailing list; I might not have time to reply.
>
> Andy
>
>
> On 08/03/2017 01:24 PM, Sam Barnett wrote:
>
> Hi Andy,
>
> I've since tried a different solution: instead of a pipeline, I've simply
> created a classifier that is for the most part like svm.SVC, though it
> takes a few extra inputs for the sequentialisation step. I've written a
> Python function that computes the Gram matrix between two datasets of any
> shape and passed it into SVC(), though I'm now having trouble with
> pickling on the check_estimator test. It appears that pickling fails when
> functions are defined locally inside SeqSVC.fit(). Can you see how to
> pass this test? (The .ipynb file shows the error.)
>
> Best,
> Sam
>
> On Wed, Aug 2, 2017 at 9:44 PM, Sam Barnett <sambarnet...@gmail.com>
> wrote:
>
>> You're right: it does fail without GridSearchCV when I change the size of
>> seq_test. I will look at the transform tomorrow to see if I can work this
>> out. Thank you for your help so far!
>>
>> On Wed, Aug 2, 2017 at 9:20 PM, Andreas Mueller <t3k...@gmail.com> wrote:
>>
>>> Change the size of seq_test in your notebook and you'll see the failure
>>> without GridSearchCV.
>>> I haven't looked at your code in detail, but transform is supposed to
>>> work on arbitrary new data with the same number of features.
>>> Your code requires the test data to have the same shape as the training
>>> data.
>>> Cross-validation will lead to training data and test data having
>>> different sizes. But I feel like something is already wrong if your
>>> test data size depends on your training data size.
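>>>
>>> For example (a sketch, not your actual transformer): transform should
>>> accept any number of test rows and compute the kernel against the
>>> training data stored during fit, so the output has shape
>>> (n_test_samples, n_train_samples):
>>>
>>>     from sklearn.metrics.pairwise import rbf_kernel
>>>
>>>     def transform(self, X):
>>>         # cross-kernel between the new rows and the training data
>>>         # remembered from fit (X_train_ is an illustrative name)
>>>         return rbf_kernel(X, self.X_train_)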
>>>
>>>
>>>
>>> On 08/02/2017 03:08 PM, Sam Barnett wrote:
>>>
>>> Hi Andy,
>>>
>>> The purpose of the transformer is to take an ordinary kernel (in this
>>> case I have taken 'rbf' as a default) and return a 'sequentialised' kernel
>>> using a few extra parameters. Hence, the transformer takes an ordinary
>>> data-target pair X, y as its input, and the fit_transform(X, y) method will
>>> output the Gram matrix for X that is associated with this sequentialised
>>> kernel. In the pipeline, this Gram matrix is passed into an SVC classifier
>>> with the kernel parameter set to 'precomputed'.
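>>>
>>> Schematically, the pipeline looks like this (SeqKernelTransformer is an
>>> illustrative name for my custom transformer, not the actual class):
>>>
>>>     from sklearn.pipeline import Pipeline
>>>     from sklearn.svm import SVC
>>>
>>>     pipe = Pipeline([
>>>         ('seq', SeqKernelTransformer(kernel='rbf')),  # outputs Gram matrix
>>>         ('svc', SVC(kernel='precomputed')),
>>>     ])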
>>>
>>> Therefore, I do not think your hacky solution would be possible.
>>> However, I am still unsure how to implement your first solution: won't the
>>> Gram matrix from the transformer contain all the necessary kernel values?
>>> Could you elaborate further?
>>>
>>>
>>> Best,
>>> Sam
>>>
>>> On Wed, Aug 2, 2017 at 5:05 PM, Andreas Mueller <t3k...@gmail.com>
>>> wrote:
>>>
>>>> Hi Sam.
>>>> GridSearchCV will do cross-validation, which requires "transforming"
>>>> the test data.
>>>> The shape of the test data will be different from the shape of the
>>>> training data.
>>>> You need to have the ability to compute the kernel between the training
>>>> data and new test data.
>>>>
>>>> A more hacky solution would be to compute the full kernel matrix in
>>>> advance and pass that to GridSearchCV.
>>>>
>>>> You probably don't need it here, but you should also check out what
>>>> the _pairwise attribute does in cross-validation, because that is
>>>> likely to come up when playing with kernels.
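>>>>
>>>> For example (a sketch of the hacky route, assuming X, y are the full
>>>> training data): with kernel='precomputed', SVC sets _pairwise to True,
>>>> so cross-validation slices the square kernel matrix along both axes:
>>>>
>>>>     from sklearn.svm import SVC
>>>>     from sklearn.model_selection import GridSearchCV
>>>>     from sklearn.metrics.pairwise import rbf_kernel
>>>>
>>>>     K = rbf_kernel(X, X)   # full (n, n) kernel, computed once
>>>>     grid = GridSearchCV(SVC(kernel='precomputed'), {'C': [0.1, 1, 10]})
>>>>     grid.fit(K, y)         # CV indexes K on both rows and columns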
>>>>
>>>> Hth,
>>>> Andy
>>>>
>>>>
>>>> On 08/02/2017 08:38 AM, Sam Barnett wrote:
>>>>
>>>> Dear all,
>>>>
>>>> I have created a 2-step pipeline with a custom transformer followed by
>>>> a simple SVC classifier, and I wish to run a grid search over it. I am
>>>> able to successfully create the transformer and the pipeline, and each
>>>> of these elements works fine. However, when I try to use the fit()
>>>> method on my GridSearchCV object, I get the following error:
>>>>
>>>>      57         # during fit.
>>>>      58         if X.shape != self.input_shape_:
>>>> ---> 59             raise ValueError('Shape of input is different from what was seen '
>>>>      60                              'in `fit`')
>>>>      61
>>>>
>>>> ValueError: Shape of input is different from what was seen in `fit`
>>>>
>>>> For a full breakdown of the problem, I have written a Jupyter notebook
>>>> showing exactly how the error occurs (the attachment also contains all
>>>> .py files necessary to run the notebook). Can anybody see how to work
>>>> through this?
>>>>
>>>> Many thanks,
>>>> Sam Barnett
>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>>
>>>
>>>
>>
>
>
import numpy as np
from functools import partial

from sklearn.base import BaseEstimator, ClassifierMixin
from sklearn.utils.validation import check_X_y, check_array, check_is_fitted
from sklearn.utils.multiclass import unique_labels
from sklearn.svm import SVC

from SeqKernelLucy import seq_kernel_free

# Base (pointwise) kernels; each operates on a pair of observations.
def kPolynom(x, y, coef0=0, gamma=1, degree=1):
    return (coef0 + gamma * np.inner(x, y)) ** degree

def kGauss(x, y, scale=1, gamma=1):
    return scale * np.exp(-gamma * np.sum(np.square(x - y)))

def kLinear(x, y, scale=1):
    return scale * np.inner(x, y)

def kSigmoid(x, y, gamma=1, coef0=0):
    return np.tanh(gamma * np.inner(x, y) + coef0)

def kernselect(kername, coef0, gamma, degree, scale):
    # Return the base kernel matching `kername`, with its parameters
    # bound via functools.partial so the result remains picklable.
    switcher = {
        'poly': partial(kPolynom, coef0=coef0, gamma=gamma, degree=degree),
        'rbf': partial(kGauss, scale=scale, gamma=gamma),
        'linear': partial(kLinear, scale=scale),
        'sigmoid': partial(kSigmoid, gamma=gamma, coef0=coef0),
    }
    if kername not in switcher:
        raise ValueError("Unknown kernel name: %r" % kername)
    return switcher[kername]
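
# For example, kernselect('rbf', coef0=0.0, gamma=0.5, degree=3, scale=1.0)
# returns a two-argument callable evaluating the scaled Gaussian kernel.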


class SeqSVC(BaseEstimator, ClassifierMixin):

    def __init__(self, C=1.0, kernel='rbf', degree=3, gamma=1.0,
                 coef0=0.0, shrinking=True, probability=False, tol=0.001,
                 cache_size=200, class_weight=None, verbose=False, max_iter=-1,
                 decision_function_shape=None, random_state=None, normalise=False,
                 scale=1.0, cut_ord_pair=(2, 1)):

        self.C = C
        self.shrinking = shrinking
        self.probability = probability
        self.tol = tol
        self.cache_size = cache_size
        self.class_weight = class_weight
        self.verbose = verbose
        self.max_iter = max_iter
        self.decision_function_shape = decision_function_shape
        self.random_state = random_state
        self.normalise = normalise
        self.kernel = kernel
        self.degree = degree
        self.gamma = gamma
        self.coef0 = coef0
        self.scale = scale
        self.cut_ord_pair = cut_ord_pair


    def fit(self, X, y=None):
        # Check that X and y have correct shape
        X, y = check_X_y(X, y)

        cut_off = self.cut_ord_pair[0]
        order = self.cut_ord_pair[1]

        # Store the classes seen during fit
        self.classes_ = unique_labels(y)

        self.X_ = np.array(X)
        self.y_ = np.array(y)
        self.input_shape_ = X.shape

        # Build the inner SVC with the sequentialised kernel. The kernel
        # is assembled with functools.partial from module-level functions
        # only, so the fitted estimator remains picklable.
        self.ord_svc_ = SVC(
            C=self.C,
            kernel=partial(seq_kernel_free,
                           pri_kernel=kernselect(self.kernel, self.coef0,
                                                 self.gamma, self.degree,
                                                 self.scale),
                           cut_off=cut_off, order=order),
            degree=self.degree, gamma=self.gamma,
            coef0=self.coef0, shrinking=self.shrinking,
            probability=self.probability, tol=self.tol,
            cache_size=self.cache_size, class_weight=self.class_weight,
            verbose=self.verbose, max_iter=self.max_iter,
            decision_function_shape=self.decision_function_shape,
            random_state=self.random_state)

        self.ord_svc_.fit(X, y)

        return self


    def predict(self, X):
        # Raise NotFittedError (whose message contains 'fit') when called
        # before fit, as check_estimator expects, then delegate to the
        # fitted inner SVC.
        check_is_fitted(self, 'ord_svc_')
        X = check_array(X)
        return self.ord_svc_.predict(X)
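
# A minimal usage sketch (assumption: each row of X is one sequence of
# scalar observations, so X has shape (n_samples, n_timesteps)):
if __name__ == '__main__':
    rng = np.random.RandomState(0)
    X = rng.randn(20, 10)
    y = rng.randint(0, 2, 20)
    clf = SeqSVC(kernel='rbf', gamma=0.5, cut_ord_pair=(2, 1))
    clf.fit(X, y)
    print(clf.predict(X[:5]))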

Attachment: Sequential Kernel SVC GridSearchCV Test.ipynb
Description: Binary data

import numpy as np


def shift(A, j, m):
    """Shift A by m positions along axis j, filling the vacated
    entries with zeros (m is assumed positive)."""
    newA = np.roll(A, m, axis=j)
    for index, x in np.ndenumerate(newA):
        if index[j] < m:
            newA[index] = 0
    return newA


def getDiffMatrix(x, y, primary_kernel):
    # Evaluate the primary kernel on every pair of points from the two
    # sequences, then difference along both axes (via the bidiagonal
    # `left` and `right` matrices) to obtain the kernel increments.
    shape = list([x.shape[0] - 1, y.shape[0] - 1])
    left = np.zeros([shape[0], shape[0] + 1])
    right = np.zeros([shape[1] + 1, shape[1]])
    l = max(shape) + 1
    ones = np.diag(np.ones([l]))
    ones_up = np.diag(np.ones([l - 1]), 1)
    ones_down = np.diag(np.ones([l - 1]), -1)
    left += (ones - ones_up)[tuple(slice(0, n) for n in left.shape)]
    right += (ones - ones_down)[tuple(slice(0, n) for n in right.shape)]
    diff = np.zeros([shape[0] + 1, shape[1] + 1])
    for i in range(shape[0] + 1):
        for j in range(shape[1] + 1):
            diff[i, j] = primary_kernel(x[i], y[j])
    return np.dot(left, np.dot(diff, right))


def kernel(K, M, D):
    # First-order sequential kernel built from the increment matrix K,
    # truncated at level M. (D is accepted for interface symmetry with
    # kernelHO but is not used here.)
    (lens, lent) = K.shape
    A = np.zeros([M, lens, lent])
    A[0] = K
    for m in range(1, M):
        Q = np.cumsum(np.cumsum(A[m - 1], axis=0), axis=1)
        Qshifted = shift(Q, 0, 1)
        Qshifted = shift(Qshifted, 1, 1)
        A[m] = K * (Qshifted + 1)

    return 1 + np.sum(A[M - 1])



def kernelHO(K, M, D):
    # Higher-order variant of `kernel`: B holds the level-m intermediate
    # matrices for orders (d, d_). B must be zero-initialised (np.ndarray
    # would leave it uninitialised), because several entries are only
    # ever updated in place.
    (lens, lent) = K.shape
    B = np.zeros([M, D, D, lens, lent])
    B[0, 0, 0] = K
    for m in range(1, M):
        D_ = min(D, m)
        K1 = np.sum(B[m - 1], axis=0)
        K2 = np.sum(K1, axis=0)
        K3 = np.cumsum(np.cumsum(K2, axis=0), axis=1)
        K3shifted = shift(K3, 0, 1)
        K3shifted2 = shift(K3shifted, 1, 1)
        B[m, 0, 0] = K * (K3shifted2 + 1)
        for d in range(1, D_):
            K1 = np.sum(B[m - 1, d - 1], axis=0)
            K2 = np.cumsum(K1, axis=1)
            K2shifted = shift(K2, 1, 1)
            # 1.0/ guards against integer division under Python 2
            B[m, d, 0] = (1.0 / (d + 1)) * K * K2shifted

            K2_ = np.sum(B[m - 1], axis=0)
            K4_ = np.cumsum(K2_[d - 1], axis=0)
            K4shifted_ = shift(K4_, 0, 1)
            B[m, 0, d] = (1.0 / (d + 1)) * K * K4shifted_

            for d_ in range(1, D_):
                B[m, d, d_] += (1.0 / ((d + 1) * (d_ + 1))) * K * B[m, d - 1, d_ - 1]

    return 1 + np.sum(B[M - 1])


def seq_kernel_free(A, B, pri_kernel=np.dot, cut_off=2, order=1):
    # Gram matrix between datasets A and B: entry (i, j) is the
    # sequential kernel between sequence A[i] and sequence B[j].
    gram_matrix = np.zeros((A.shape[0], B.shape[0]))

    for row1ind in range(A.shape[0]):
        for row2ind in range(B.shape[0]):
            K_temp = getDiffMatrix(A[row1ind], B[row2ind], pri_kernel)
            gram_matrix[row1ind, row2ind] = kernelHO(K_temp, cut_off, order)

    return gram_matrix
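
# Usage sketch (assumption: rows of A and B are sequences of scalar
# observations):
#
#     A = np.random.randn(5, 10)
#     B = np.random.randn(3, 10)
#     G = seq_kernel_free(A, B, pri_kernel=np.inner, cut_off=2, order=1)
#     # G has shape (5, 3)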


# The closure-based variant below is kept for reference; it cannot be
# pickled because seq_kernel_temp is defined in a local namespace.
# def seq_kernel_fixed(pri_kernel, cut_off, order):
#     def seq_kernel_temp(A, B):
#         return seq_kernel_free(A, B, pri_kernel, cut_off, order)
#     return seq_kernel_temp
_______________________________________________
scikit-learn mailing list
scikit-learn@python.org
https://mail.python.org/mailman/listinfo/scikit-learn
