Hello,

I think this looks good!

I like that it is flexible in the sense of supporting algorithms with and
without the dispatcher.

Just to be sure: CAlgorithm, CKernelAlgorithm, and CKernelAlgorithm2 showcase
three different ways of implementing an algorithm, right? Namely, dispatching
on DenseFeatures only, dispatching on both Dense and String features, and no
dispatching at all, respectively.

Cheers,
Fernando.

On Fri, 29 Jun 2018 at 11:02, Heiko Strathmann <heiko.strathm...@gmail.com>
wrote:

> This mix-in idea doesn't work, as it turned out in discussions with
> Fernando and Shubham.
>
> Here is an attempt to do the same thing using a macro. Slightly ugly, but
> well within Shogun's general style:
> https://gist.github.com/karlnapf/2dd6a23001242cf01a45c99103b736d6
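>
> Schematically, the flavour is something along these lines (this is NOT the
> gist code, just an illustration; the macro and method names are made up,
> only the usual Shogun types/enums are assumed):
>
>     // Expands to a non-templated train_machine() that dispatches on the
>     // primitive type of the passed features and forwards to the algorithm's
>     // templated method. If the author forgets to implement that method,
>     // the class does not compile.
>     #define DISPATCH_DENSE_TRAIN(ClassName)                                 \
>         virtual bool train_machine(CFeatures* features)                     \
>         {                                                                   \
>             switch (features->get_feature_type())                           \
>             {                                                               \
>             case F_DREAL:                                                   \
>                 return train_machine_templated(                             \
>                     static_cast<CDenseFeatures<float64_t>*>(features));     \
>             case F_SHORTREAL:                                               \
>                 return train_machine_templated(                             \
>                     static_cast<CDenseFeatures<float32_t>*>(features));     \
>             default:                                                        \
>                 SG_ERROR("%s: unsupported feature type\n", #ClassName);     \
>                 return false;                                               \
>             }                                                               \
>         }
>
>     class CMyAlgorithm : public CMachine
>     {
>     protected:
>         DISPATCH_DENSE_TRAIN(CMyAlgorithm)
>
>         template <class T>
>         bool train_machine_templated(CDenseFeatures<T>* features);
>     };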
>
> On Tue, 26 Jun 2018 at 18:05, Heiko Strathmann <
> heiko.strathm...@gmail.com> wrote:
>
>> The main thing is that we still want templated specialisations of the
>> train methods. I'm not sure that works with this other approach, but we
>> would have to check...
>> I think we could view the proposed solution as a mix of double dispatching
>> (for the feature type: string, dense, etc.) and mix-ins for the templated
>> method overloading... or?
>>
>> On Tue, 26 Jun 2018 at 12:07, Fernando J. Iglesias García <
>> fernando.iglesi...@gmail.com> wrote:
>>
>>> An alternative to using mix-ins for dispatching:
>>> https://en.wikipedia.org/wiki/Double_dispatch#Double_dispatch_in_C++
>>> In a nutshell, the idea is that on calling CMachine::train(CFeatures* f)
>>> we use some method in the CFeatures hierarchy that works out the
>>> "downcast". It feels a bit like Shogun's obtain_from_generic, though a key
>>> difference is that obtain_from_generic is static.
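>>>
>>> Roughly, in toy form (not actual Shogun code; dispatch_train and
>>> train_with are invented names, just to illustrate the pattern):
>>>
>>>     class CDenseFeatures;   // stand-ins for the real feature hierarchy
>>>     class CStringFeatures;
>>>     class CMachine;
>>>
>>>     class CFeatures
>>>     {
>>>     public:
>>>         virtual ~CFeatures() {}
>>>         // first dispatch: virtual call resolves the dynamic feature type
>>>         virtual bool dispatch_train(CMachine* machine) = 0;
>>>     };
>>>
>>>     class CMachine
>>>     {
>>>     public:
>>>         bool train(CFeatures* f) { return f->dispatch_train(this); }
>>>         // second dispatch: the concrete features class calls back with
>>>         // its static type, so overload resolution picks the right method
>>>         virtual bool train_with(CDenseFeatures* f) = 0;
>>>         virtual bool train_with(CStringFeatures* f) = 0;
>>>     };
>>>
>>>     class CDenseFeatures : public CFeatures
>>>     {
>>>     public:
>>>         bool dispatch_train(CMachine* m) override { return m->train_with(this); }
>>>     };
>>>
>>>     class CStringFeatures : public CFeatures
>>>     {
>>>     public:
>>>         bool dispatch_train(CMachine* m) override { return m->train_with(this); }
>>>     };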
>>>
>>> What do you think?
>>>
>>> On Mon, 25 Jun 2018 at 17:55, Heiko Strathmann <
>>> heiko.strathm...@gmail.com> wrote:
>>>
>>>> feedback welcome.
>>>>
>>>> https://gist.github.com/karlnapf/95a9c72a642d61ec268a39407f8761b2
>>>>
>>>> Problem:
>>>> Currently, dispatching happens inside train_machine of each algorithm
>>>> specialization: redundant code, error prone, and it might be forgotten
>>>> altogether. In fact, only LARS and LDA do this; most other classes do
>>>> nothing, i.e. they crash/error when something other than float64 is
>>>> passed (bad). LARS/LDA solved this by making the train method templated
>>>> and then dispatching inside train_machine.
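>>>>
>>>> For reference, that LDA/LARS pattern looks roughly like this (simplified
>>>> from memory, not the exact code) -- this is the kind of block that would
>>>> have to be copy-pasted into every algorithm:
>>>>
>>>>     bool CLDA::train_machine(CFeatures* data)
>>>>     {
>>>>         REQUIRE(data, "Features are required to train\n");
>>>>         // dispatch by hand on the primitive type, then call the
>>>>         // templated worker that does the actual training
>>>>         switch (data->get_feature_type())
>>>>         {
>>>>         case F_DREAL:
>>>>             return train_machine_templated<float64_t>(
>>>>                 static_cast<CDenseFeatures<float64_t>*>(data));
>>>>         case F_SHORTREAL:
>>>>             return train_machine_templated<float32_t>(
>>>>                 static_cast<CDenseFeatures<float32_t>*>(data));
>>>>         default:
>>>>             SG_ERROR("Unsupported feature type\n");
>>>>             return false;
>>>>         }
>>>>     }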
>>>>
>>>> Solution we propose (result of a discussion with Viktor last week,
>>>> refined in a meeting with Giovanni and Shubham today): use mix-ins to
>>>> keep the dispatching code with the algorithm specializations. This allows
>>>> for templated train methods and gives a compile error if they are not
>>>> implemented by the algorithm author. At the same time, we can centralize
>>>> the code to make algorithm specialisations nicer and less error prone.
>>>> See the gist.
>>>> We will have to think about how all this works with multiple feature
>>>> types (string, etc.), and also about how multiple mix-ins can be combined
>>>> (e.g. LARS is a LinearMachine, IterativeMixIn, and DenseTrainMixIn, and
>>>> it would be the 'iteration' method that would be templated).
>>>> Shubham will draft a compiling minimal example for this.
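>>>>
>>>> Schematically, the idea is something like the following (heavily
>>>> simplified; the method names here are placeholders, see the gist for the
>>>> actual proposal):
>>>>
>>>>     // The mix-in holds the dispatch code exactly once. Derived must
>>>>     // provide train_machine_templated<T>(), otherwise the call below
>>>>     // does not compile.
>>>>     template <class Derived>
>>>>     class DenseTrainMixIn
>>>>     {
>>>>     protected:
>>>>         bool dispatch_dense_train(CFeatures* features)
>>>>         {
>>>>             auto* self = static_cast<Derived*>(this);
>>>>             switch (features->get_feature_type())
>>>>             {
>>>>             case F_DREAL:
>>>>                 return self->template train_machine_templated<float64_t>(
>>>>                     static_cast<CDenseFeatures<float64_t>*>(features));
>>>>             case F_SHORTREAL:
>>>>                 return self->template train_machine_templated<float32_t>(
>>>>                     static_cast<CDenseFeatures<float32_t>*>(features));
>>>>             default:
>>>>                 return false;
>>>>             }
>>>>         }
>>>>     };
>>>>
>>>>     // An algorithm specialization then only implements the templated
>>>>     // method; its train_machine just forwards to the mix-in.
>>>>     class CMyLDA : public CLinearMachine, public DenseTrainMixIn<CMyLDA>
>>>>     {
>>>>         friend class DenseTrainMixIn<CMyLDA>;
>>>>
>>>>     protected:
>>>>         bool train_machine(CFeatures* data) override
>>>>         {
>>>>             return dispatch_dense_train(data);
>>>>         }
>>>>
>>>>         template <class T>
>>>>         bool train_machine_templated(CDenseFeatures<T>* features);
>>>>     };
>>>>
>>>> Forgetting to implement train_machine_templated is then a compile error
>>>> rather than a runtime crash.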
>>>>
>>>>
>>>> First attempt (doesn't work):
>>>> Move the dispatching into the base class CMachine and call templated
>>>> train methods there, which are overridden in the subclasses. BUT C++ does
>>>> not allow virtual templated methods, so this won't fly.
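>>>>
>>>> For the record, the thing C++ rejects is roughly:
>>>>
>>>>     class CMachine
>>>>     {
>>>>     protected:
>>>>         // ill-formed: member function templates cannot be virtual
>>>>         template <class T>
>>>>         virtual bool train_machine(CDenseFeatures<T>* features);
>>>>     };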
>>>>
>>>>
>>>> H
>>>>
>>> --
>> Sent from my phone
>>
>
