Hello!

I don't think it is a problem. Please provide the actual logs from the node failure.

I recommend configuring failureDetectionTimeout to the same value on all
server nodes.
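For reference, this is set via the `failureDetectionTimeout` property of
IgniteConfiguration. A minimal Spring XML sketch (the 30000 ms value is an
illustrative assumption, not a recommendation; the default is on the order of
10 seconds):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- Timeout (ms) after which an unresponsive node is considered failed.
         30000 is an example value; tune it for your environment. -->
    <property name="failureDetectionTimeout" value="30000"/>
</bean>
```

If you keep one shared configuration file for the server nodes and visor, this
value will be the same everywhere automatically.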

Regards,
-- 
Ilya Kasnacheev


Wed, 19 Aug 2020 at 21:30, tschauenberg <[email protected]>:

> To use visor we typically ssh onto a server node and run visor there.
> When doing so, we launch visor with the exact same configuration that
> the server node is running.
>
> Two questions regarding this:
> * Is running visor from a server node problematic?
> * Should we be using a different configuration for visor such as one that
> sets IgniteConfiguration.clientMode to true?
>
> Additionally, related to this, we see that running visor often causes one of
> N server nodes to be terminated in Ignite 2.7.0 (we haven't tried reproducing
> in 2.8.1 as we need to upgrade first). I think this is related to us not
> having failureDetectionTimeout set anywhere in the config.
>
> Two questions regarding this:
> * Why are the server nodes able to stay connected just fine without
> failures, but as soon as visor connects, one of them gets kicked out for
> responding too slowly? The servers and visor all use the same config and
> are all in the same network. Visor in this scenario is running on one of
> the server nodes.
> * When setting failureDetectionTimeout, does it have to be set to the same
> value on all server nodes, all client nodes, and on visor? Or is
> failureDetectionTimeout a setting on the local node only, determining how
> long that local node will wait when talking to remote nodes? For example,
> if we see the problem only when starting visor, is it reasonable to
> increase failureDetectionTimeout just in the visor configuration?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
