Hi,
According to the plan described in the corresponding ticket [1] we are going
to extend TransactionConfiguration with parameters related to deadlock
detection. As you might remember, the suggested deadlock detection approach
is going to be enabled for MVCC caches only. Let's discuss what
properties should
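To make the discussion concrete, below is a minimal sketch of what such parameters might look like. The property name, type and default are my assumptions for discussion only, not an agreed API.

/**
 * Hypothetical deadlock detection parameters to be added to
 * TransactionConfiguration (sketch for discussion; the name and default
 * are assumptions, not the final public API).
 */
public class DeadlockDetectionConfigSketch {
    /** Delay (ms) a tx must wait on a lock before detection starts; 0 disables detection. */
    private long deadlockDetectionDelay = 10_000;

    /** @return Delay in milliseconds before the detection procedure is triggered. */
    public long getDeadlockDetectionDelay() {
        return deadlockDetectionDelay;
    }

    /** @param delay Delay in milliseconds; 0 turns deadlock detection off. */
    public DeadlockDetectionConfigSketch setDeadlockDetectionDelay(long delay) {
        this.deadlockDetectionDelay = delay;
        return this;
    }
}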
Hi,
Continuing the discussion summarized in [1], I would like to
highlight some points related to avoiding rollback of non-deadlocked
transactions. In [1] it is suggested to maintain a lock counter per
transaction in order to avoid rolling back a transaction
which has made progress since a
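One way to express the lock counter idea is sketched below; class and method names are hypothetical and only illustrate the check, not the actual patch.

import java.util.concurrent.atomic.AtomicLong;

/**
 * Sketch of the per-transaction lock counter: a deadlock probe remembers the
 * counter value observed when the wait-for edge was recorded; if the
 * transaction has acquired more locks since then, it has made progress and
 * must not be chosen as a rollback victim based on that stale probe.
 */
class TxLockCounter {
    private final AtomicLong acquiredLocks = new AtomicLong();

    /** Called whenever the transaction successfully acquires a lock. */
    void onLockAcquired() {
        acquiredLocks.incrementAndGet();
    }

    /** Value to embed into an outgoing deadlock probe. */
    long snapshot() {
        return acquiredLocks.get();
    }

    /** True if the transaction has not made progress since the probe was sent. */
    boolean unchangedSince(long probeSnapshot) {
        return acquiredLocks.get() == probeSnapshot;
    }
}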
Igor, thanks for your feedback! Let's proceed according to your proposal.
Mon, Dec 24, 2018 at 14:57, Seliverstov Igor:
>
> Ivan,
>
> We have to design configuration carefully.
> There are possible issues with compatibility, and this API is public; it
> might be difficult to redesign it after a while.
Ivan,
We have to design configuration carefully.
There are possible issues with compatibility, and this API is public; it
might be difficult to redesign it after a while.
Since there is a ticket about MVCC-related configuration parts [1] (TxLog
memory region/size, vacuum thread count and intervals,
Hi,
I prepared a patch with an implementation of the deadlock detection
algorithm (you can find it in ticket [1]).
Now I would like to discuss options for configuring deadlock
detection. From a usability standpoint, the following needs to be supported:
1. Turn deadlock detection on/off (on by default).
2.
Igor,
I see your points, and I agree to start with a "lightweight
implementation". Today we have 2PL and there is no activity on
implementing rollback to savepoint. And if we do it in the future we
will have to return to the subject of deadlock detection anyway.
I will proceed with the "forward-only"
Ivan,
I would prefer a forward-only implementation even knowing it allows
false-positive check results.
Why I think so:
1) From my experience, whenever we have a future waiting for a reply, we
have to take failover into consideration.
Usually failover implementations are way more complex than an
Hi folks,
During implementation of the edge-chasing deadlock detection algorithm in
the scope of [1] it has been realized that we basically have 2 options for
the "chasing" strategy. I will use the term Near when GridNearTxLocal is
assumed and Dht when GridDhtTxLocal (a tx which updates a primary
partition) is assumed. So,
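For reference, a rough sketch of the probe that gets "chased" along wait-for edges is below; the class and field names are hypothetical. The choice between the two strategies is essentially which node the probe is relayed to next: the node running the blocker's near tx (Near) or the primary node running its dht tx (Dht).

import java.io.Serializable;
import java.util.UUID;

/**
 * Hypothetical edge-chasing probe (sketch only). The initiator sends it to
 * the transaction it waits for; each receiver either relays it further along
 * its own wait-for edges or detects a cycle when the probe returns to the
 * initiator.
 */
class DeadlockProbe implements Serializable {
    /** Transaction that initiated detection; the probe returning to it means a cycle. */
    final UUID initiatorTxId;

    /** Transaction the probe is currently being forwarded to (the blocker). */
    final UUID blockerTxId;

    DeadlockProbe(UUID initiatorTxId, UUID blockerTxId) {
        this.initiatorTxId = initiatorTxId;
        this.blockerTxId = blockerTxId;
    }
}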
Vladimir,
I think it might work. So, if nobody minds, I will start prototyping the
edge-chasing approach.
Mon, Nov 26, 2018 at 14:32, Vladimir Ozerov:
>
> Ivan,
>
> The problem is that in our system a transaction may wait for N locks
> simultaneously. This may form complex graphs which spread
Ivan,
The problem is that in our system a transaction may wait for N locks
simultaneously. This may form complex graphs which spread across many
nodes. Now consider that I have a deadlock between 4 nodes: A -> B -> *C*
-> D -> A. I've sent a message from A and it never reached D because C failed.
Hi Vladimir,
Regarding fault tolerance: it seems that it is not a problem for
edge-chasing approaches. A found deadlock is identified by a message
returned to the detection initiator carrying the initiator's identifier. If
there is no deadlock, then such a message will never arrive. If some node
containing a
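The fault tolerance argument can be illustrated with a small sketch of the probe handler (all names are hypothetical): a deadlock is declared only when a probe arrives back at its initiator, so a crashed node on the path simply means a lost probe and no report, never a spurious one.

import java.util.Collection;
import java.util.UUID;

/** Sketch of edge-chasing probe handling; interfaces are hypothetical. */
class ProbeHandler {
    /** Supplies outgoing wait-for edges of a local transaction. */
    interface WaitGraph {
        Collection<UUID> waitingFor(UUID txId);
    }

    /** Sends probes to remote nodes and reports detected deadlocks. */
    interface ProbeSender {
        void relay(UUID initiatorTxId, UUID nextBlockerTxId);
        void reportDeadlock(UUID initiatorTxId);
    }

    private final WaitGraph waitGraph;
    private final ProbeSender sender;

    ProbeHandler(WaitGraph waitGraph, ProbeSender sender) {
        this.waitGraph = waitGraph;
        this.sender = sender;
    }

    /** Handles a probe delivered to the local transaction receiverTxId. */
    void onProbe(UUID initiatorTxId, UUID receiverTxId) {
        if (initiatorTxId.equals(receiverTxId)) {
            // The probe travelled the whole cycle: a deadlock involving the initiator exists.
            sender.reportDeadlock(initiatorTxId);
            return;
        }

        // Otherwise relay the probe along every outgoing wait-for edge.
        for (UUID next : waitGraph.waitingFor(receiverTxId))
            sender.relay(initiatorTxId, next);
    }
}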
Hi Ivan,
Great analysis. Agree that edge-chasing looks like a better candidate. First,
it will be applicable to both normal and MVCC transactions. Second, in MVCC
we probably will also need to release some locks when doing rollbacks. What
we should think about is failover - what if a node which was
Vladimir,
Thanks for the articles! I studied them and a couple of others, and I
would like to share the knowledge I found.
BACKGROUND
First of all, our algorithm implemented in
org.apache.ignite.internal.processors.cache.transactions.TxDeadlockDetection
is not an edge-chasing algorithm. In essence
Ivan,
This is an interesting question. I think we should spend some time on formal
verification of whether this algorithm works or not. Several articles you may
use as a starting point: [1], [2]. From what I understand, Ignite falls
into the "AND" model, and the currently implemented algorithm is a variation
Hi,
Next part as promised. A work item for me is a deadlock detector
for MVCC transactions [1]. The message is structured in 2 parts. The first
is an analysis of the current state of affairs and possible options going
forward. The second is a proposed option. The first part is going to be not
so short, so some
Hi Igniters,
I would like to resume the discussion about a deadlock detector. I will start
with the motivation for further work on the subject. As I see it, the current
implementation (entry point IgniteTxManager.detectDeadlock) starts
detection only after a transaction has timed out. In my mind it is not
On Mon, Nov 20, 2017 at 10:15 PM, Vladimir Ozerov
wrote:
> It doesn’t need all txes. Instead, other nodes will send info about
> suspicious txes to it from time to time.
>
I see your point, I think it might work.
It doesn’t need all txes. Instead, other nodes will send info about
suspicious txes to it from time to time.
Tue, Nov 21, 2017 at 8:04, Dmitriy Setrakyan:
> How does it know about all the Txs?
>
> D.
>
> On Nov 20, 2017, at 8:53 PM, Vladimir Ozerov <
>
How does it know about all the Txs?
D.
On Nov 20, 2017, at 8:53 PM, Vladimir Ozerov
wrote:
>Dima,
>
>What is wrong with coordinator approach? All it does is analyze small
>number of TXes which wait for locks for too long.
>
>Tue, Nov 21, 2017 at 1:16, Dmitriy
Dima,
What is wrong with the coordinator approach? All it does is analyze a small
number of TXes which wait for locks for too long.
Tue, Nov 21, 2017 at 1:16, Dmitriy Setrakyan:
> Vladimir,
>
> I am not sure I like it, mainly due to some coordinator node doing some
>
Vladimir,
I am not sure I like it, mainly due to some coordinator node doing
periodic checks. For deadlock detection to work effectively, it has to
be done locally on every node. This may require that every tx request
carry information about up to N previous keys it accessed, but
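If I understand the idea correctly, it could look roughly like the sketch below (names are hypothetical): each transaction keeps a bounded history of the last N keys it touched and piggybacks it on subsequent requests, so every node can maintain wait-for information locally.

import java.util.ArrayDeque;
import java.util.Collection;
import java.util.Deque;

/** Sketch of a bounded per-transaction history of recently accessed keys. */
class RecentKeysTracker {
    private final int maxKeys;
    private final Deque<Object> recentKeys = new ArrayDeque<>();

    RecentKeysTracker(int maxKeys) {
        this.maxKeys = maxKeys;
    }

    /** Records a key touched by the transaction, evicting the oldest entry if needed. */
    void onKeyAccessed(Object key) {
        if (recentKeys.size() == maxKeys)
            recentKeys.pollFirst();

        recentKeys.addLast(key);
    }

    /** Bounded history to attach to the next transaction request. */
    Collection<Object> historyForRequest() {
        return new ArrayDeque<>(recentKeys);
    }
}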
Igniters,
We are currently working on transactional SQL, and distributed deadlocks are a
serious problem for us. It looks like the current deadlock detection mechanism
has several deficiencies:
1) It transfers keys! A no-go for SQL, as we may have millions of keys.
2) By default we wait for a minute. Way