> avoid stretching between multiple availability zones and some persistence
> tuning,

Like disabling MMAP for the WAL (IGNITE_WAL_MMAP=false)? We don't use
persistence, so that tuning doesn't apply in our case.
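
For reference, should we ever enable persistence: a minimal sketch of applying
that setting programmatically, via the system property Ignite reads for it
(the config path below is hypothetical):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;

public class StartNodeNoWalMmap {
    public static void main(String[] args) {
        // IGNITE_WAL_MMAP is read during node start, so set it before
        // Ignition.start(); exporting it as an environment variable in the
        // pod spec has the same effect.
        System.setProperty("IGNITE_WAL_MMAP", "false");

        Ignite ignite = Ignition.start("config/node.xml"); // hypothetical path
    }
}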

> Do you mean that switching to GKE made it work, or was that the initial
> setup and nothing has changed since then?
I will clarify the details: after an unsuccessful deployment on our corporate
VMware and OpenShift, we simply created a VM on Google Cloud for comparison
and started an identically configured Kubernetes cluster there in less than
5 minutes.

I found many "Ignoring duplicate message" entries in the debug logs:
[2021-04-30
11:31:27,075][DEBUG][tcp-disco-msg-worker-[]-#2%datanode%-#36%datanode%][TcpDiscoverySpi]
Ignoring duplicate message: TcpDiscoveryCustomEventMessage [msg=null,
super=TcpDiscoveryAbstractMessage
[sndNodeId=7c2bc47c-a33f-4a32-9d1d-75f9f7c4ba45,
id=78f73e12971-92f0b0a3-1eae-4505-b08b-52b6f7f60780, verifierNodeId=null,
topVer=0, pendingIdx=0, failedNodes=null, isClient=false]]

Is there any benchmark that could show us possible freezes in internode
networking?
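
As a crude self-check (not a proper benchmark), one could broadcast a no-op
compute job to all remote nodes in a loop and time the round trips; spikes in
the reported times would point at freezes in the internode networking. A
minimal sketch using the public compute API (the config path is an
assumption):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.lang.IgniteRunnable;

public class InternodePingProbe {
    public static void main(String[] args) throws InterruptedException {
        try (Ignite ignite = Ignition.start("config/client.xml")) { // hypothetical path
            for (int i = 0; i < 60; i++) {
                long start = System.nanoTime();

                // Send a no-op job to every remote node and wait for all replies.
                ignite.compute(ignite.cluster().forRemotes())
                      .broadcast((IgniteRunnable)() -> { /* no-op */ });

                System.out.printf("round-trip to all remotes: %d ms%n",
                    (System.nanoTime() - start) / 1_000_000);

                Thread.sleep(1_000);
            }
        }
    }
}
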
I have attached two archives with thread dumps and discovery debug logs.
Could you take a look at them?

thread-dumps-10-nodes.7z
<http://apache-ignite-users.70518.x6.nabble.com/file/t2921/thread-dumps-10-nodes.7z>
  
discovery-debug-logs.7z
<http://apache-ignite-users.70518.x6.nabble.com/file/t2921/discovery-debug-logs.7z>