Re: Why does Thin client connect to all servers in connection string???

2022-09-05 Thread Alex Plehanov
Hello,

When "partition awareness" is enabled, the client tries to connect to all
server nodes from the connection string. If "partition awareness" is
disabled, only one connection is maintained. Since Ignite 2.11 "partition
awareness" is enabled by default. See [1] for more information.

[1]:
https://ignite.apache.org/docs/latest/thin-clients/getting-started-with-thin-clients#partition-awareness
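
For example, a client configured like this (just a minimal Java sketch; the
class name and addresses are placeholders) will open a connection to each
listed server:

import org.apache.ignite.Ignition;
import org.apache.ignite.client.IgniteClient;
import org.apache.ignite.configuration.ClientConfiguration;

public class ThinClientSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder addresses; list your servers here.
        ClientConfiguration cfg = new ClientConfiguration()
            .setAddresses("server1:10800", "server2:10800", "server3:10800");

        // Partition awareness is on by default since 2.11, so the client
        // connects to every address above and sends each key request
        // directly to the node that owns the key's partition.
        try (IgniteClient client = Ignition.startClient(cfg)) {
            client.getOrCreateCache("test").put(1, "value");
        }
    }
}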


On Tue, Sep 6, 2022 at 2:09 AM Gregory Sylvain wrote:

> Hi,
>
> I am running an Ignite 2.12.0 Service Grid with 5 server nodes and 32 thin
> client nodes in a private cloud of RHEL 7.9 VMs (no Docker images).
>
> Doing a sqlline CLI query for connected clients, I get the expected 32
> connected clients. However, if I execute netstat (or ss) on all server
> nodes, I see an ESTABLISHED connection from each client to each server
> node on port 10800. These connections seem to be maintained (i.e. they do
> not time out after 2 hours).
>
> I am using Spring XML to configure both client and server nodes. The
> server nodes also run under systemd for reliability.
>
> Any idea what is going on?
>
> Thanks in advance.
>
> Greg


Re: Why does Thin client connect to all servers in connection string???

2022-09-05 Thread Pavel Tupitsyn
Hi, this is called "partition awareness" [1]: the thin client establishes
connections to all known nodes so that it can send requests directly to the
primary node for a given entry.
Multiple connections are also useful for load balancing and improved
reliability.

You can disable this behavior with
ClientConfiguration#setPartitionAwarenessEnabled(false) [2]

[1]
https://ignite.apache.org/docs/latest/thin-clients/java-thin-client#partition-awareness
[2]
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/configuration/ClientConfiguration.html#setPartitionAwarenessEnabled-boolean-
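
For example (a minimal sketch; the class name and addresses are
placeholders):

import org.apache.ignite.Ignition;
import org.apache.ignite.client.IgniteClient;
import org.apache.ignite.configuration.ClientConfiguration;

public class NoPartitionAwareness {
    public static void main(String[] args) throws Exception {
        ClientConfiguration cfg = new ClientConfiguration()
            .setAddresses("server1:10800", "server2:10800") // placeholders
            .setPartitionAwarenessEnabled(false); // keep a single connection

        try (IgniteClient client = Ignition.startClient(cfg)) {
            // All requests now go through one connection; the remaining
            // addresses are used only for failover if that connection drops.
        }
    }
}

The trade-off is an extra network hop: with a single connection, requests
for keys owned by other nodes are proxied through the connected server.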

On Tue, Sep 6, 2022 at 2:09 AM Gregory Sylvain wrote:

> Hi,
>
> I am running an Ignite 2.12.0 Service Grid with 5 server nodes and 32 thin
> client nodes in a private cloud of RHEL 7.9 VMs (no Docker images).
>
> Doing a sqlline CLI query for connected clients, I get the expected 32
> connected clients. However, if I execute netstat (or ss) on all server
> nodes, I see an ESTABLISHED connection from each client to each server
> node on port 10800. These connections seem to be maintained (i.e. they do
> not time out after 2 hours).
>
> I am using Spring XML to configure both client and server nodes. The
> server nodes also run under systemd for reliability.
>
> Any idea what is going on?
>
> Thanks in advance.
>
> Greg


Why does Thin client connect to all servers in connection string???

2022-09-05 Thread Gregory Sylvain
Hi,

I am running an Ignite 2.12.0 Service Grid with 5 server nodes and 32 thin
client nodes in a private cloud of RHEL 7.9 VMs (no Docker images).

Doing a sqlline CLI query for connected clients, I get the expected 32
connected clients. However, if I execute netstat (or ss) on all server
nodes, I see an ESTABLISHED connection from each client to each server
node on port 10800. These connections seem to be maintained (i.e. they do
not time out after 2 hours).

I am using Spring XML to configure both client and server nodes. The
server nodes also run under systemd for reliability.

Any idea what is going on?

Thanks in advance.

Greg


Deadlock analysis

2022-09-05 Thread Thomas Kramer

I'm experiencing a transaction deadlock and would like to understand how
to find its cause.

Here is a snippet from the log:

Deadlock detected:

K1: TX1 holds lock, TX2 waits lock.
K2: TX2 holds lock, TX1 waits lock.

Transactions:

TX1 [txId=GridCacheVersion [topVer=273263429, order=1661784224309,
nodeOrder=4, dataCenterId=0],
nodeId=8841e579-43b5-4c23-a690-1208bdd34d8c, threadId=30]
TX2 [txId=GridCacheVersion [topVer=273263429, order=1661784224257,
nodeOrder=14, dataCenterId=0],
nodeId=f08415e4-0ae7-45cd-aeca-2033267e92c3, threadId=3815]

Keys:

K1 [key=e9228c01-b17e-49a5-bc7f-14c1541d9916, cache=TaskList]
K2 [key=e9228c01-b17e-49a5-bc7f-14c1541d9916, cache=MediaSets]

I can see that the same key (e9228c01...) is involved in transactions on
two different nodes.

Ignite documentation says: "One major rule that you must follow when
working with distributed transactions is that locks for the keys
participating in a transaction must be acquired in the same order.
Violating this rule can lead to a distributed deadlock."

If locks for the keys in a transaction must be acquired in the same order,
how can a single key cause a deadlock here? Is it because the key appears
in two different caches? Maybe I don't fully understand how transaction
locking works.

Is there a code sample that demonstrates such a violation? And how can I
find the places in my source code where this happens on both nodes?
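
Here is my current guess at what such a violation might look like (only a
sketch; the method names are hypothetical, and I'm assuming pessimistic
transactions - the cache names and key are taken from the log above):

import java.util.UUID;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.transactions.Transaction;
import org.apache.ignite.transactions.TransactionConcurrency;
import org.apache.ignite.transactions.TransactionIsolation;

public class DeadlockGuess {
    // Node A: touches the key in TaskList first, then in MediaSets.
    static void runOnNodeA(Ignite ignite, UUID key) {
        IgniteCache<UUID, Object> taskLists = ignite.cache("TaskList");
        IgniteCache<UUID, Object> mediaSets = ignite.cache("MediaSets");
        try (Transaction tx = ignite.transactions().txStart(
                TransactionConcurrency.PESSIMISTIC,
                TransactionIsolation.REPEATABLE_READ)) {
            taskLists.get(key);  // acquires the lock on K1 (TaskList)
            mediaSets.get(key);  // then waits for K2 (MediaSets)
            tx.commit();
        }
    }

    // Node B: same key, but the caches are accessed in the opposite order.
    static void runOnNodeB(Ignite ignite, UUID key) {
        IgniteCache<UUID, Object> taskLists = ignite.cache("TaskList");
        IgniteCache<UUID, Object> mediaSets = ignite.cache("MediaSets");
        try (Transaction tx = ignite.transactions().txStart(
                TransactionConcurrency.PESSIMISTIC,
                TransactionIsolation.REPEATABLE_READ)) {
            mediaSets.get(key);  // acquires the lock on K2 (MediaSets)
            taskLists.get(key);  // then waits for K1 (TaskList) -> deadlock
            tx.commit();
        }
    }
}

If these run concurrently for the same key, TX1 holds K1 and waits for K2
while TX2 holds K2 and waits for K1, which seems to match the log above. Is
this the kind of pattern I should be looking for?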

Thanks,
Thomas.