I think a proper fix would be to reuse the logic of the automatic baseline
change that is planned to be implemented, and automatically activate the
cluster once a certain timeout is reached.
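
For illustration, such a timeout-based activation could look roughly like
the sketch below. This is only my assumption of how it might work, not the
planned implementation; the config path, timeout value, and class name are
made up.

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    import org.apache.ignite.Ignite;
    import org.apache.ignite.Ignition;

    public class TimeoutActivation {
        public static void main(String[] args) {
            // Illustrative config path.
            Ignite ignite = Ignition.start("config/ignite.xml");

            ScheduledExecutorService scheduler =
                Executors.newSingleThreadScheduledExecutor();

            // If neither a saved baseline nor an administrator has
            // activated the cluster within the timeout, activate it
            // from this node. Activation is a no-op if the cluster
            // is already active.
            scheduler.schedule(() -> {
                if (!ignite.cluster().active())
                    ignite.cluster().active(true);

                scheduler.shutdown();
            }, 30, TimeUnit.SECONDS);
        }
    }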

Note that immediate cluster activation on first node start won't work:
with concurrent node starts and an existing baseline, a new node that
starts first could form a new baseline and prevent the existing nodes
with the old baseline from joining the cluster.
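
As a starting point for the documentation snippet Ivan and Pavel discuss
below, here is a rough sketch of the node-count activation policy. It is
my own illustration, not Ivan's ActivationWatcher; the class name,
threshold, and config path are invented for the example.

    import org.apache.ignite.Ignite;
    import org.apache.ignite.Ignition;

    public class NodeCountActivation {
        // Made-up threshold: activate once this many server nodes joined.
        private static final int EXPECTED_SERVER_NODES = 3;

        public static void main(String[] args) throws InterruptedException {
            // Illustrative config path.
            Ignite ignite = Ignition.start("config/ignite.xml");

            // Poll until enough server nodes have joined, then activate.
            // The activation call is harmless if another node already
            // activated the cluster concurrently.
            while (!ignite.cluster().active()) {
                if (ignite.cluster().forServers().nodes().size() >= EXPECTED_SERVER_NODES) {
                    ignite.cluster().active(true);

                    break;
                }

                Thread.sleep(1000);
            }
        }
    }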

On Tue, Jun 26, 2018 at 14:49, Ivan Rakov <ivan.glu...@gmail.com> wrote:

> Guys,
>
> We have auto-activation on restart of a persistent cluster, when the last
> baseline node joins. On first start there is no baseline topology, and
> thus no auto-activation.
>
> I think it would be useful to add a Java snippet to the Ignite
> documentation that safely activates the cluster on a certain condition.
> We already did this as an intermediate solution for the baseline
> auto-change issue; see the
>
> https://apacheignite.readme.io/docs/baseline-topology#section-triggering-rebalancing-programmatically
> page.
>
> I'll share my version of ActivationWatcher in this thread soon.
>
> Best Regards,
> Ivan Rakov
>
> On 26.06.2018 14:36, Dmitriy Govorukhin wrote:
> > Vladimir,
> >
> > Auto-activation on the first start?
> > Please share an issue link if you have one.
> >
> > On Tue, Jun 26, 2018 at 11:29 AM Vladimir Ozerov <voze...@gridgain.com>
> > wrote:
> >
> >> Pavel,
> >>
> >> As far as I know, we agreed to implement auto-activation in one of the
> >> upcoming releases. Am I missing something?
> >>
> >> On Tue, Jun 26, 2018 at 0:56, Pavel Kovalenko <jokse...@gmail.com> wrote:
> >>
> >>> Igniters,
> >>>
> >>> Based on the results of the recent Ignite meeting in St. Petersburg,
> >>> I've noticed that some of our users get stuck when activating a
> >>> cluster for the first time.
> >>> At the moment we have only manual options to do it (control.sh, Visor,
> >>> etc.), but that's not enough. Manual activation may be fine when users
> >>> have a dedicated production cluster in a stable environment.
> >>> But the problem becomes harder when users deploy embedded Ignite (with
> >>> persistence) inside other services, or frequently deploy to a
> >>> temporary staging / test environment.
> >>> It's inconvenient to manually invoke control.sh after every deployment
> >>> to a clean environment, and hard to write a custom script that does it
> >>> automatically. This is clearly a usability problem.
> >>>
> >>> I think we should introduce an example of how to write such a policy
> >>> using the Ignite API, similar to what we did with the Baseline Watcher.
> >>>
> >>> I've created a ticket regarding the problem:
> >>> https://issues.apache.org/jira/browse/IGNITE-8844
> >>> I think we should provide an example of one of the simplest and most
> >>> useful policies: activating when the number of server nodes in the
> >>> cluster reaches some threshold.
> >>>
> >>> Moreover, I think it would be nice to have some sort of automatic
> >>> cluster management service (external or internal), like the Spark
> >>> Driver or Storm Nimbus, which would do such things without user
> >>> intervention.
> >>>
> >>> What do you think?
> >>>