John,
This is usually required for the server nodes.
-
Denis
On Wed, Jul 3, 2019 at 10:28 AM John Smith wrote:
Should I do this on the server nodes or the client nodes?
On Tue, 25 Jun 2019 at 10:18, Maxim.Pudov wrote:
You could increase failureDetectionTimeout [1] from the default value of 1 to
6 or so.

[1] https://apacheignite.readme.io/docs/tcpip-discovery#section-failure-detection-timeout
--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/
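For context, the suggestion above would look roughly like this in a Spring XML node configuration. This is only a sketch: the 60000 ms value is illustrative, the property takes milliseconds, and the documented defaults are 10000 ms for failureDetectionTimeout and 30000 ms for clientFailureDetectionTimeout.

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- Failure detection timeout used for server nodes (milliseconds). -->
    <property name="failureDetectionTimeout" value="60000"/>
    <!-- The equivalent setting applied to client nodes (milliseconds). -->
    <property name="clientFailureDetectionTimeout" value="60000"/>
</bean>
```

The split between the two properties matches Denis's point that this is usually required for the server nodes.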
How do I turn it off?
Also I think I know what may have been the Visor issue. I was connecting to
the cluster without specifying ports 47500..47509. But once I added that it
seems more stable. I can even see the Wi-Fi node and everything.
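For anyone searching later: the port range mentioned above corresponds to Ignite's default TCP discovery ports, and with the static IP finder it can be spelled out explicitly. A sketch (the hostnames are placeholders, not from this thread):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="discoverySpi">
        <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
            <property name="ipFinder">
                <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
                    <property name="addresses">
                        <list>
                            <!-- The range covers the default discovery ports 47500-47509. -->
                            <value>server1.example.com:47500..47509</value>
                            <value>server2.example.com:47500..47509</value>
                        </list>
                    </property>
                </bean>
            </property>
        </bean>
    </property>
</bean>
```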
On Fri, 21 Jun 2019 at 06:01, Ilya Kasnacheev wrote:
Hello!
It is recommended to turn off failure detection since its default config is
not very convenient. Maybe it is also fixed in 2.7.5.
This just means some operation took longer than expected and Ignite
panicked.
Regards,
On Thu, 20 Jun 2019 at 19:28, John Smith wrote:
Actually this happened when the Wi-Fi node connected. But it never happened
before...
[14:51:46,660][INFO][exchange-worker-#43%xx%][GridDhtPartitionsExchangeFuture]
Completed partition exchange
[localNode=e9e9f4b9-b249-4a4d-87ee-fc97097ad9ee,
exchange=GridDhtPartitionsExchangeFuture
Ok, where do I look for the Visor logs when it hangs? And it's not a "no
caches" issue; the cluster works great. It's when Visor cannot reach a
specific client node.
On Thu., Jun. 20, 2019, 8:45 a.m. Vasiliy Sisko wrote:
Hello @javadevmtl
I failed to reproduce your problem.
In case of any error in the cache command, Visor CMD shows the message "No
caches found".
Please provide logs of the Visor, server, and client nodes after the command
hangs.
Correct, and this is a purely practical issue. I can even imagine a scenario
where you have a cluster and, for compliance reasons, Visor is running in a
demilitarized zone.
All I'm saying is that the Visor CACHE command, or any command for that
matter, should not hang waiting to connect to specific clients.
John,
Sure, you’re right that Visor is the tool for management and monitoring.
Not sure that Ilya's statement makes practical sense.
Looping in our Visor experts. Alexey, Yury, could you please check out the
issue?
Denis
On Tuesday, June 18, 2019, John Smith wrote:
Ok, but Visor is used to get info on caches etc... So it just hangs on
clients it cannot reach. Maybe it should have a timeout if it can't reach
the specific node? Or does it have one but it's super high?
Or, if it knows it's a client node, could it handle it differently?
On Tue, 18 Jun 2019, Ilya Kasnacheev wrote:
Hello!
Visor is not the tool to debug a cluster. control.sh probably is.
Visor is a node in the topology (a daemon node, but still) and as such it
is subject to the same limitations as any other node.
Regards,
--
Ilya Kasnacheev
On Fri, 14 Jun 2019 at 22:41, John Smith wrote:
Hi, it's 100% that.
I'm just stating that my applications run inside a container network and
Ignite is installed on its own VMs. The networks see each other and
this works. Also Visor can connect. No problems.
It's only when, for example, we have a dev machine connect from Wi-Fi and
while a
Hello!
Please enable verbose logging and share logs from the Visor, client, and
server nodes, so that we can check that.
There should be messages related to connection attempts.
Regards,
--
Ilya Kasnacheev
On Thu, 13 Jun 2019 at 00:06, John Smith wrote:
The clients are in the same low-latency network, but they are running
inside a container network, while Ignite is running on its own cluster. So
from that standpoint they all see each other...
On Wed, 12 Jun 2019 at 17:04, John Smith wrote:
Ok thanks
On Mon, 10 Jun 2019 at 04:48, Ilya Kasnacheev wrote:
Hello!
As a rule, a faulty thick client can destabilize a cluster. Ignite's
architecture assumes that all clients are collocated, i.e. that the network
between any two nodes (including clients) is reliable, fast and low-latency.
It is not recommended to connect thick clients from different
Correct. Should it not at least time out and show what it has
available? Basically we have a central cluster and various clients connect
to it from different networks, for example Docker containers.
We make sure that the clients are client nodes only and we avoid creating
any caches on
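For completeness, keeping a node a pure client is a one-line setting in the node configuration. A sketch, assuming the standard Spring XML format:

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- Join the topology as a (thick) client node: no cache data is hosted here. -->
    <property name="clientMode" value="true"/>
</bean>
```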
Hello!
I think that Visor will talk to all nodes when trying to run the caches
command, and if it can't reach the client nodes the operation will never
finish.
Regards,
--
Ilya Kasnacheev
On Wed, 5 Jun 2019 at 22:34, John Smith wrote:
Hi, any thoughts on this?
On Fri, 31 May 2019 at 10:21, John Smith wrote:
I think it should at least time out and show stats of the nodes it could
reach? I don't see why it's dependent on client nodes.
On Thu, 30 May 2019 at 11:58, John Smith wrote:
Sorry, pressed enter too quickly.
So basically I'm 100% sure that if the Visor cache command cannot reach the
client node, it just sits there not doing anything.
On Thu, 30 May 2019 at 11:57, John Smith wrote:
> Hi, running 2.7.0
>
> - I have a 4 node cluster and it seems to be running ok.
> - I