Yes, sure.
Ignite ML algorithms actively send data between nodes and in several
cases use the peer-class-loading mechanism.
I want to exclude failures when algorithms use non-serializable data or
try to send lambdas with a large captured context, etc.
From this point of view, we can simply run ML examples on a small
cluster whose server nodes are started from a binary build.
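
For example, here is a minimal plain-Java sketch (names are mine, not
Ignite-specific) of the kind of failure I mean: a lambda that captures a
non-serializable object fails standard Java serialization, while a
stateless lambda serializes fine.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.NotSerializableException;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class LambdaSerializationDemo {
    // A functional interface whose lambdas are serializable.
    interface SerTask extends Runnable, Serializable {}

    static byte[] serialize(Object o) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(o);
        }
        return bos.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        // A lambda with no captured state serializes without problems.
        SerTask clean = () -> System.out.println("ok");
        System.out.println("clean lambda bytes: " + serialize(clean).length);

        // Capturing a non-serializable object (Thread is not Serializable)
        // makes the whole lambda non-serializable.
        Thread ctx = new Thread();
        SerTask dirty = () -> System.out.println(ctx.getName());
        try {
            serialize(dirty);
            System.out.println("unexpectedly serialized");
        } catch (NotSerializableException e) {
            System.out.println("failed as expected: " + e.getMessage());
        }
    }
}
```

Tests against a binary build would catch exactly this class of problem
before it reaches users.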

Sat, Mar 2, 2019 at 08:39, Ivan Pavlukhin <vololo...@gmail.com>:

> Hi Alexey,
>
> Could you please share some background? What problem are you solving
> by running tests against binary builds? Perhaps we need something
> similar for other Ignite sub-projects as well.
>
> Fri, Mar 1, 2019 at 19:04, Alexey Platonov <aplaton...@gmail.com>:
> >
> > Hello, Igniters!
> > I would like to create several tests for ML algorithms using binary
> builds.
> > These tests should work as follows:
> > 1) Get the latest master (or a user-defined branch) from the Git
> > repository;
> > 2) Build Ignite with the release profile and create a binary build;
> > 3) Run several Ignite instances from the binary build;
> > 4) Run examples or synthetic tests that train ML algorithms and
> > perform inference;
> > 5) Accumulate failure statistics on some dashboard.
> >
> > Currently, I'm working in my own public Git repository, which
> > contains Docker and Travis scripts as a prototype. I want to complete
> > these tests and contribute them to Ignite.
> >
> > Should I adapt these tests for TC once the prototype is complete, or
> > can Travis be reused? Maybe such a process already exists for other
> > Ignite modules and I could use it for ML. What do you think?
> >
> > Best regards
> > Alexey Platonov.
>
>
>
> --
> Best regards,
> Ivan Pavlukhin
>
