That all sounds good. Looking forward to more concrete versions of this:
class diagrams, proof-of-concept PRs, etc. :)

On Mon., 23 March 2020 at 15:53, sai_ng <jonpsy...@gmail.com> wrote:

> I'll do that as part of GSoC :). In my proposal I'll include both the
> previous class diagram and my proposed one. I'd be glad if you took a
> look at it, Heiko! :D
> Basically, what I'd go with is this: we could have separate classes for
> RFF, RBF (random binning, ...), KRR and Nystrom, and have them inherit
> from a superclass, say "CKernelApproximator". CKernelApproximator could
> in turn inherit from CKernel. How does that sound?
> Doing this gives us more freedom to add methods suited to different
> situations, like primal and dual formulations. It would also make the
> code more structured and, in general, more object-oriented.
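> To make this concrete, here is a rough sketch of the hierarchy I have in
> mind, written as Python-style pseudocode (in Shogun it would of course be
> C++, and every name and signature below is a placeholder, not final):
>
> # Placeholder sketch only; names and signatures are not final.
> # In Shogun, KernelApproximator would be CKernelApproximator : public CKernel.
> class KernelApproximator:
>     """Shared interface for kernel approximation methods."""
>
>     def __init__(self, num_components):
>         self.num_components = num_components
>
>     def fit(self, features):
>         """Learn the approximation (sample landmarks, frequencies, ...)."""
>         raise NotImplementedError
>
>     def transform(self, features):
>         """Map data into the approximate (primal) feature space."""
>         raise NotImplementedError
>
>
> class NystromApproximator(KernelApproximator):
>     def fit(self, features):
>         ...  # choose landmark points and factorise K_mm
>
>
> class RandomFourierApproximator(KernelApproximator):
>     def fit(self, features):
>         ...  # sample random frequencies for the RFF feature map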
>
> Thoughts?
>
> On Mon, Mar 23, 2020 at 9:08 PM Heiko Strathmann <
> heiko.strathm...@gmail.com> wrote:
>
>> Yes, that is a good point. It would be cool to have it implemented, say,
>> for all subclasses of KernelMachine. All it really does is change the
>> basis set used to represent the kernel function: landmark points rather
>> than the training data. If you want to implement it in this manner, that
>> would be a very welcome contribution. However, doing this in general is
>> difficult; e.g. SVMs will need a different implementation, since the
>> solver itself changes, so checking out sklearn's abstractions would
>> definitely help here.
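>> To spell out the basis change: a KernelMachine represents the learned
>> function as f(x) = sum_i alpha_i * k(x, x_i) (plus a bias) over the n
>> training points; with an approximation you instead pick m landmark
>> points z_j and represent f(x) = sum_j beta_j * k(x, z_j). A tiny NumPy
>> illustration of the Nystrom variant of this idea (not Shogun code, all
>> numbers are arbitrary):
>>
>> import numpy as np
>> from scipy.spatial.distance import cdist
>>
>> def rbf(A, B, gamma=1.0):
>>     return np.exp(-gamma * cdist(A, B, "sqeuclidean"))
>>
>> rng = np.random.default_rng(0)
>> X = rng.standard_normal((500, 5))             # training data
>> Z = X[rng.choice(len(X), 50, replace=False)]  # landmark points
>>
>> K_mm = rbf(Z, Z)  # kernel among landmarks
>> K_nm = rbf(X, Z)  # kernel between data and landmarks
>>
>> # Feature map phi(x) = K_mm^{-1/2} k(x, Z), so phi(X) phi(X)^T ~= K.
>> U, s, _ = np.linalg.svd(K_mm)
>> K_mm_inv_sqrt = U @ np.diag(1.0 / np.sqrt(np.maximum(s, 1e-12))) @ U.T
>> Phi = K_nm @ K_mm_inv_sqrt
>>
>> print(np.abs(Phi @ Phi.T - rbf(X, X)).max())  # worst-case approximation error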
>> Any ideas how to go ahead with this?
>> H
>>
>>
>> On Mon., 23 March 2020 at 15:06, sai_ng via shogun-list <
>> shogun-list@shogun-toolbox.org>:
>>
>>> Hi again,
>>> It's me, Nanubala Gnana Sai. I was checking out sklearn's implementation
>>> of Nystrom and compared it with our own. I was wondering why this
>>> approximation technique is bound to a specific method (for example,
>>> KRR); it should be implemented as a separate class, which is at least
>>> what's done in sklearn. There is already an implementation of RFF that
>>> works this way, so I thought it only logical to have a standalone
>>> Nystrom implementation as well. Looking forward to hearing from you! :D
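>>> PS: for reference, this is roughly how sklearn decouples the
>>> approximation from the learner: the approximator is just a transformer
>>> that can be combined with any downstream model (illustrative snippet,
>>> hyperparameters are arbitrary):
>>>
>>> from sklearn.datasets import make_regression
>>> from sklearn.kernel_approximation import Nystroem
>>> from sklearn.linear_model import Ridge
>>> from sklearn.pipeline import make_pipeline
>>>
>>> X, y = make_regression(n_samples=500, n_features=10, random_state=0)
>>>
>>> # Nystroem (like RBFSampler for RFF) only produces features; it is not
>>> # tied to KRR, and any linear model can be plugged in after it.
>>> model = make_pipeline(
>>>     Nystroem(kernel="rbf", n_components=100, random_state=0),
>>>     Ridge(alpha=1.0),
>>> )
>>> model.fit(X, y)
>>> print(model.score(X, y))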
>>>
>>
