Finally found the issue.
In the thin client configuration I had provided only a single port for connecting to the server nodes. As soon as I provided the default range (10800-10900), the connections behaved much better and the issue went away.
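
For reference, this is roughly what my thin client setup looks like now. The host names, the cache name and the ".." range syntax are just how I wrote it down here, so please don't take them as verified:

#include <ignite/thin/ignite_client_configuration.h>
#include <ignite/thin/ignite_client.h>

#include <cstdint>
#include <string>

using namespace ignite::thin;

int main()
{
    IgniteClientConfiguration cfg;

    // Before: a single port per server, e.g. "server1:10800".
    // Now: the full default range, so the client can reach whichever
    // port in 10800-10900 the server actually bound.
    // ("server1"/"server2" and the ".." range syntax are assumptions.)
    cfg.SetEndPoints("server1:10800..10900,server2:10800..10900");

    // The connection resources are released again when this object
    // goes out of scope.
    IgniteClient client = IgniteClient::Start(cfg);

    cache::CacheClient<int32_t, std::string> cache =
        client.GetOrCreateCache<int32_t, std::string>("testCache");

    cache.Put(1, "hello");
    std::string value = cache.Get(1);

    return 0;
}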

However, when I tried it out on the cluster on campus, with 50 nodes acting as thin clients connecting to two server nodes, the servers were completely overwhelmed.

Then I thought: why not let every node run its own Ignite server instance?
However, the Ignite servers consume a lot of resources on the nodes even when there is no request load at all.

Then I realized that starting up all the servers is a nightmare. It takes several minutes, and just activating the cluster takes more than 3 minutes without even reliably bringing up all of the nodes.

Does anybody know how to define the consistent IDs in the XML file?
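
I assume it is just a property on the IgniteConfiguration bean, something like the sketch below (with a different value on every node), but I could not find a complete example:

<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- Assumption: consistentId is set as a plain property and has to be
         unique per node, e.g. "node-01" on the first node, "node-02" on
         the second, and so on. -->
    <property name="consistentId" value="node-01"/>

    <!-- rest of the node configuration ... -->
</bean>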

Regards,

Wolfgang





On 02.12.20 at 10:58 PM, akorensh wrote:
Hi,
   The connection parameters are set by the thin client connector.
see:
https://ignite.apache.org/docs/latest/thin-clients/getting-started-with-thin-clients#configuring-thin-client-connector

  all attributes are listed here:

https://ignite.apache.org/releases/2.9.0/javadoc/org/apache/ignite/configuration/ClientConnectorConfiguration.html
see:
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/configuration/ClientConnectorConfiguration.html#getThreadPoolSize--
(this specifies the number of threads used to process client requests)
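
For example, a server-side configuration sketch might look like this (the numbers are only placeholders to show where the attributes go):

<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="clientConnectorConfiguration">
        <bean class="org.apache.ignite.configuration.ClientConnectorConfiguration">
            <!-- the server listens for thin clients on port .. port+portRange -->
            <property name="port" value="10800"/>
            <property name="portRange" value="100"/>
            <!-- threads used to process thin client requests (placeholder value) -->
            <property name="threadPoolSize" value="16"/>
        </bean>
    </property>
</bean>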

Simple C++ put/get example:
https://github.com/apache/ignite/blob/master/modules/platforms/cpp/examples/thin-client-put-get-example/src/thin_client_put_get_example.cpp

Connection resources should be released when you destroy the IgniteClient
object.

Monitor the server logs to see what is happening, and use netstat to see
if the app is still connected.

Try connecting via other thin clients to see whether connection requests
are honored or not.

Thanks, Alex



