Igniters,

Based on the results of the recent Ignite meetup in St. Petersburg, I've
noticed that some of our users get stuck when a cluster is activated for
the first time.
At the moment we have only manual options to do it (control.sh, Visor,
etc.), but that's not enough. Manual activation may be fine when users have
a dedicated cluster in production with a stable environment.
But the problem becomes harder when users deploy embedded Ignite (with
persistence) inside other services, or frequently deploy to a temporary
staging / test environment.
It's inconvenient to invoke control.sh manually after every deploy to a
clean environment, and hard to write a custom script to do it automatically.
This is clearly a usability problem.

I think we should introduce an example of how to write such a policy using
the Ignite API, similar to what we did with the Baseline Watcher.

I've created a ticket regarding the problem:
https://issues.apache.org/jira/browse/IGNITE-8844
I think we should provide an example of one of the simplest and most
useful policies: activating the cluster once the number of server nodes
reaches some threshold.
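A minimal sketch of what such a policy could look like (the class name,
config path, and threshold are hypothetical, and this assumes
EVT_NODE_JOINED is enabled via includeEventTypes in the node config):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.events.EventType;

public class ActivationPolicyExample {
    // Assumed threshold: activate once this many server nodes have joined.
    private static final int REQUIRED_SERVER_NODES = 3;

    public static void main(String[] args) {
        Ignite ignite = Ignition.start("ignite-config.xml");

        // Re-check the condition on every topology change.
        ignite.events().localListen(evt -> {
            if (!ignite.cluster().active()
                && ignite.cluster().forServers().nodes().size() >= REQUIRED_SERVER_NODES)
                ignite.cluster().active(true);

            return true; // keep listening
        }, EventType.EVT_NODE_JOINED);
    }
}
```

This is only a sketch; a real example would also have to handle the race
where several nodes trigger activation concurrently.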

Moreover, I think it would be nice to have some sort of automatic cluster
management service (external or internal), like the Spark Driver or Storm
Nimbus, which would do such things without user action.

What do you think?
