On Tue, Jul 5, 2011 at 12:46 PM, Sanne Grinovero <sa...@infinispan.org> wrote:
> 2011/7/5 Galder Zamarreño <gal...@redhat.com>:
>>
>>
>> On Jul 4, 2011, at 11:25 AM, Sanne Grinovero wrote:
>>
>>> I agree they don't make sense, but only in the sense of the API exposed
>>> during a transaction: some time ago, I admit, I was expecting them to
>>> just work: the API is there, nice public methods in the public
>>> interface with javadocs explaining that this was exactly what I was
>>> looking for, no warnings, no failures. Even worse, everything works fine
>>> when running a local test, because of how the locks currently work: they
>>> are acquired locally first. So unless you're running such a test in DIST
>>> mode, and happen to *not* be the owner of the key being tested, you
>>> won't even notice that this is not supported.
>>>
>>> Still being able to use them is very important, also in combination
>>> with transactions: I might be running blocks of transactional code
>>> (like a CRUD operation via OGM) and still need to advance a
>>> sequence for primary key generation. That needs to be an atomic
>>> operation, and I really shouldn't have to remember to suspend the
>>> transaction first.
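(For the record, the sequence bump mentioned above boils down to a
compare-and-swap loop on a single key. A rough sketch only, where the key
name and helper class are made up:)

    import org.infinispan.Cache;

    public class Sequences {
       /**
        * Advances a numeric sequence stored under the given key and returns
        * the new value. Relies on replace(key, oldValue, newValue) being an
        * atomic compare-and-swap -- the exact guarantee under discussion when
        * a transaction is in progress.
        */
       public static long next(Cache<String, Long> cache, String key) {
          while (true) {
             Long current = cache.get(key);
             if (current == null) {
                // putIfAbsent returns null only if we initialized the sequence
                if (cache.putIfAbsent(key, 1L) == null) {
                   return 1L;
                }
             } else if (cache.replace(key, current, current + 1)) {
                return current + 1;
             }
             // lost the race against another thread or node: retry
          }
       }
    }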
>>
>> Fair point. At first glance, the best way to deal with this is suspending 
>> the tx, because that guarantees the API contract while not forcing locks 
>> to be held for too long.
>>
>> I'd advise, though, that whoever works on this goes over the existing use 
>> cases and checks whether the end result could differ once this change is 
>> applied. If any divergences are found or are to be expected, they need to 
>> be thoroughly documented.
>>
>> I've gone through some cases and, at first glance, the end results would 
>> not differ if the atomic ops suspend the txs. The only thing that would 
>> change is the expectation around lock acquisition timeouts for atomic ops 
>> within txs.
>>
>> For example:
>>
>> Cache contains: k1=galder
>>
>> 1. Tx1 does a cache.replace(k1, "galder", "sanne") -> suspends tx and 
>> applies change -> k1=sanne now
>> 2. Tx2 does a cache.replace(k1, "galder", "manik") -> suspends tx and is not 
>> able to apply change
>> 3. Tx2 commits
>> 4. Tx1 commits
>> End result: k1=sanne
>
> Right.
> To clarify, this is what would happen with the current implementation:
>
> 1. Tx2 does a cache.get(k1) -> it reads the value of k1 and gets back
> "galder"
> 2. Tx1 does a cache.replace(k1, "galder", "sanne") -> k1="sanne" in
> the scope of this transaction, but not seen by other txs
> 3. Tx2 does a cache.replace(k1, "galder", "manik") -> k1="manik" is
> assigned, since because of repeatable read Tx2 still sees "galder"
> 4. Tx2 & Tx1 commit
>
> ..and the end result depends on who commits first.
>
>>
>> 1. Tx1 does a cache.replace(k1, "galder", "sanne") -> acquires lock
>> 2. Tx2 does a cache.replace(k1, "galder", "manik") -> waits for lock
>> 3. Tx2 times out acquiring the lock -> rolls back
>> 4. Tx1 commits -> applies change
>> End result: k1=sanne
>
> I'm not sure we're on the same page here. 1) should apply the
> operation right away, so even if it might very briefly have to acquire
> a lock, it's immediately released (not at the end of the
> transaction), so why would Tx2 have to wait for it to the point that
> it needs to roll back?
>

I think it would make sense to make atomic operations pessimistic by
default, so they would behave as in Galder's second example.

Then, if you wanted to reduce contention, you could suspend/resume the
transaction around your atomic operations and make them behave the way
you're expecting them to.
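
In code that would look roughly like the sketch below; the
getAdvancedCache().getTransactionManager() lookup and the helper name are
just my assumptions for the example:

    import javax.transaction.Transaction;
    import javax.transaction.TransactionManager;

    import org.infinispan.Cache;

    public class AtomicOutsideTx {
       /**
        * Applies a replace() outside the caller's transaction so that the
        * change takes effect (and becomes visible) immediately, instead of
        * being deferred until the surrounding transaction commits.
        */
       public static boolean replaceNow(Cache<String, String> cache, String key,
                                        String expected, String update) throws Exception {
          TransactionManager tm = cache.getAdvancedCache().getTransactionManager();
          Transaction ongoing = tm.suspend();           // detach the current tx, if any
          try {
             return cache.replace(key, expected, update);
          } finally {
             if (ongoing != null) {
                tm.resume(ongoing);                     // reattach the caller's tx
             }
          }
       }
    }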

Here is a contrived example:

1. Start tx Tx1
2. cache.get("k") -> "v0"
3. cache.replace("k", "v0", "v1")
4. cache.get("k") -> ??

With repeatable read and suspend/resume around atomic operations, I
believe operation 4 would return "v0", and that would be very
surprising for a new user.
So I'd rather require explicit suspend/resume calls to make sure
anyone who uses atomic operations in a transaction understands what
results he's going to get.
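
Spelled out as code, it would be something like this sketch (assuming
repeatable read and that the suspend/resume happens implicitly inside
replace()):

    import javax.transaction.TransactionManager;

    import org.infinispan.Cache;

    public class SurprisingRead {
       // Assumes "k" is already mapped to "v0" and the cache uses repeatable read.
       public static void demo(Cache<String, String> cache) throws Exception {
          TransactionManager tm = cache.getAdvancedCache().getTransactionManager();
          tm.begin();                         // 1. start Tx1
          String first = cache.get("k");      // 2. returns "v0", entering Tx1's read set
          cache.replace("k", "v0", "v1");     // 3. would be applied immediately, outside Tx1
          String second = cache.get("k");     // 4. still "v0" because of repeatable read
          tm.commit();                        //    Tx1 never observes "v1"
          System.out.println(first + " / " + second);   // prints: v0 / v0
       }
    }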

Dan

_______________________________________________
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev
