Hello all,

We're developing a multithreaded application that uses a single C++ Thin Client to connect to a cluster with a single Server Node. The C++ Thin Client version is "master" from January 21.
We have implemented a "lock-and-update" scheme based on the GetAndPut function and PESSIMISTIC + READ_COMMITTED transactions. The idea is to lock a set of cache entries, update them, and commit them atomically. In our tests we have detected a deadlock when the following piece of code is executed by more than one thread in our application:

    ...
    ClientTransactions transactions = client.ClientTransactions();
    ClientTransaction tx = transactions.TxStart(PESSIMISTIC, READ_COMMITTED);

    // This call should atomically get the current value for "key" and put
    // "value" instead, locking the "key" cache entry at the same time.
    auto oldValue = cache.GetAndPut(key, value);

    // Only the thread that managed to lock "key" should reach this code.
    // Other threads have to wait for tx.Commit() to complete.
    cache.Put(key, newValue);

    // After this call, another thread waiting in GetAndPut for "key" to be
    // released should be able to continue.
    tx.Commit();
    ...

The thread that reaches the cache.Put(key, newValue) call gets blocked there, specifically in the lockGuard object created at the beginning of the DataChannel::InternalSyncMessage function (data_channel.cpp:108). After debugging, we realized that this lockGuard is owned by a different thread, which is currently waiting on the socket while executing GetAndPut. Based on this, my guess is that data routing in the C++ Thin Client is not multithread-friendly.

I ran a test creating a separate C++ Thin Client for each thread, and the problem disappeared. However, this is something I would like to avoid, since our threads are created and destroyed on the fly.

So, my question is: do I have to create a C++ Thin Client for each thread, or is there a workaround?

Thanks in advance!

--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/
