Justin,

Good to know that. Some comments:

> Their SQL is built on top of the key-value interface so they get parallel 
> optimistic transactions "for free"
AFAIK Cockroach transactions do not use RocksDB's transactional
capabilities at all. Moreover, I am not sure your case is possible
with Cockroach: if transaction A updated a value for key 1, then
transaction B would block on reading the value for the same key (I
believe this is due to SERIALIZABLE isolation). There is also a special
case -- deadlock. In a deadlock (e.g. tx A locked key 1, tx B
locked key 2, A wants to update key 2, B wants to update key 1) one of
the transactions will be aborted shortly.
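To illustrate, the deadlock case can be sketched with a toy lock manager (illustrative Python only; names are made up, and this is not how any of these databases actually implement lock waits):

```python
# Toy lock manager illustrating the deadlock case above: tx A holds key 1,
# tx B holds key 2, then each requests the other's key. A real database
# breaks the cycle by aborting one transaction; here we just detect the
# wait-for cycle.

class LockManager:
    def __init__(self):
        self.owner = {}    # key -> tx currently holding its lock
        self.waits = {}    # tx -> tx it is waiting for

    def lock(self, tx, key):
        holder = self.owner.get(key)
        if holder is None or holder == tx:
            self.owner[key] = tx
            return "granted"
        # tx must wait for holder; walk the wait-for chain to find a cycle
        self.waits[tx] = holder
        seen = {tx}
        cur = holder
        while cur in self.waits:
            cur = self.waits[cur]
            if cur in seen:
                return "deadlock: abort one transaction"
            seen.add(cur)
        return "waiting"

lm = LockManager()
assert lm.lock("A", "key1") == "granted"
assert lm.lock("B", "key2") == "granted"
assert lm.lock("A", "key2") == "waiting"  # A blocks on B
assert lm.lock("B", "key1") == "deadlock: abort one transaction"
```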

And a bit of purism. I cannot call Cockroach transactions "optimistic"
because they employ locking and waiting during updates, while really
optimistic transactions do not.
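For contrast, a minimal sketch of what I mean by really optimistic transactions: no locks are taken during updates, and a conflict is detected only when a transaction tries to commit. This is illustrative code under simplified assumptions, not Ignite's or CockroachDB's actual implementation:

```python
# Optimistic concurrency control sketch: each transaction buffers its
# writes and remembers the version of every key it read; validation
# happens only at commit time.

class Store:
    def __init__(self):
        self.data = {}  # key -> (value, version)

    def begin(self):
        return {"reads": {}, "writes": {}}

    def read(self, tx, key):
        value, version = self.data.get(key, (None, 0))
        tx["reads"][key] = version   # remember version for validation
        return value

    def write(self, tx, key, value):
        tx["writes"][key] = value    # buffered locally; no lock, no blocking

    def commit(self, tx):
        # validate: every key we read must still be at the version we saw
        for key, seen in tx["reads"].items():
            if self.data.get(key, (None, 0))[1] != seen:
                return False         # conflict surfaces only now, at commit
        for key, value in tx["writes"].items():
            _, version = self.data.get(key, (None, 0))
            self.data[key] = (value, version + 1)
        return True

store = Store()
a, b = store.begin(), store.begin()
store.read(a, "k"); store.read(b, "k")            # both see version 0
store.write(a, "k", 1); store.write(b, "k", 2)    # neither blocks the other
assert store.commit(a) is True
assert store.commit(b) is False   # b aborts at commit, not during the update
```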

Sat, Oct 19, 2019 at 13:17, Justin Moore <[email protected]>:
>
> If I've read this right -- 
> https://www.cockroachlabs.com/blog/how-cockroachdb-distributes-atomic-transactions/
>  -- Cockroach achieves it by using a convention of writing "intents" to 
> provisionally "change" any given key with a timestamp suffix from the hybrid 
> logical clock which orders transactions.  When reading a given key, they'll 
> actually do a range scan of sorts over all the key's write intents to see 
> which is the latest that successfully committed -- subsequently they'll 
> promote write intents to genuine write "effects" and remove prior records.  
> Their SQL is built on top of the key-value interface so they get parallel 
> optimistic transactions "for free" -- I think YugaByte's "provisional writes" 
> work much the same way 
> (https://docs.yugabyte.com/latest/architecture/transactions/distributed-txns/).
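A rough sketch of the write-intent idea described above (plain integer timestamps stand in for the hybrid logical clock, and all names are illustrative, not CockroachDB's actual code):

```python
# Each key stores timestamped versions; a write first lands as a
# provisional "intent", a read scans for the latest committed version,
# and commit promotes the intent so it becomes visible.

versions = {}   # key -> list of [timestamp, value, committed?]

def write_intent(key, ts, value):
    versions.setdefault(key, []).append([ts, value, False])

def commit_intent(key, ts):
    for rec in versions[key]:
        if rec[0] == ts:
            rec[2] = True   # promote intent to a committed write

def read_latest_committed(key):
    committed = [r for r in versions.get(key, []) if r[2]]
    return max(committed, key=lambda r: r[0])[1] if committed else None

write_intent("x", 10, "old")
commit_intent("x", 10)
write_intent("x", 20, "new")   # provisional: not yet visible to readers
assert read_latest_committed("x") == "old"
commit_intent("x", 20)
assert read_latest_committed("x") == "new"
```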
>
> Seeing that Ignite supports optimistic transactions for its 
> key-value cache, I assumed you were already doing something similar, and I was 
> kind of surprised that SQL optimistic transactions weren't naturally 
> supported (hence my question).
>
> On Sat, Oct 19, 2019 at 12:06 AM Ivan Pavlukhin <[email protected]> wrote:
>>
>> Hi Justin,
>>
>> Thank you for sharing the details about your use case. Quite interesting.
>>
>> It seems that in Ignite something like that is possible with
>> optimistic cache transactions (key-value API). Technically it seems
>> achievable when the transaction protocol accumulates the transaction's
>> read-write set in a memory area allocated for each transaction; Ignite
>> key-value transactions behave this way. But MVCC transactions were designed
>> to support SQL update operations as well, which may be larger than
>> available memory. So, new row versions are written in the same place
>> where the regular (already committed) data resides. Supporting multiple
>> uncommitted versions would complicate the existing implementation a lot
>> (I do not have good ideas offhand for how to implement it).
>>
>> It would be really interesting if CockroachDB supports something like this.
>>
>> Sat, Oct 19, 2019 at 03:36, Justin Moore <[email protected]>:
>> >
>> > Thanks Ivan,
>> >
>> > First, apologies, I don't have a simple example to share...
>> >
>> > I'm trying to explore the possibility of answering "what-if" hypothetical 
>> > queries using (SQL) transactions that don't commit (i.e. transactions that 
>> > always rollback/abort).  Each what-if transaction would be like a 
>> > self-contained alternate reality that can surface information from an 
>> > alternate future -- but to do so effectively, the transaction ought to 
>> > process all statements, even though it will never (succeed to) commit, so 
>> > it must not abort early/unintentionally.
>> >
>> > I think a couple of other "distributed SQL" offerings 
>> > (cockroachDB/yugabyte) are architected in a way that makes this possible, 
>> > where traditionally "unclustered" databases (Postgres) generally seemed to 
>> > rely on locking that would prevent the capability.  I was -- and still am 
>> > -- looking for other (probably distributed) options that might make this 
>> > feasible, which led me to the Ignite docs.
>> >
>> > My specific use case is to do something kind of like a Log Sorted Merge 
>> > representation with a twist, using a shared database and a shared Write 
>> > Ahead Log (the twist is that the WAL is not strictly append-only).  So 
>> > concurrent clients would assemble the current state on demand by applying 
>> > the log to the database -- which is a "snapshot" practically speaking -- 
>> > in a transaction that would not be committed (a separate "compaction" 
>> > process would apply a prefix of the log to be persisted permanently).  As 
>> > such, concurrent clients are going to be trying to do the exact same 
>> > writes to the database in their transactions -- they need not commit but 
>> > all other statements should be executed.
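The replay idea above can be sketched minimally. Here the "transaction" is just a copied snapshot that is discarded instead of committed; the data shapes and names are purely illustrative:

```python
# Concurrent readers build the current state by replaying a shared
# write-ahead log on top of a persisted snapshot, then throw the result
# away (the rollback). A separate compaction step would fold a log
# prefix into the snapshot permanently.

snapshot = {"balance": 100}                          # compacted, persisted state
wal = [("set", "balance", 120), ("set", "fee", 5)]   # shared log, not yet compacted

def what_if_view(snapshot, wal):
    state = dict(snapshot)      # begin "transaction" on a snapshot copy
    for op, key, value in wal:  # apply the log prefix
        if op == "set":
            state[key] = value
    return state                # inspect, then discard (rollback)

view = what_if_view(snapshot, wal)
assert view == {"balance": 120, "fee": 5}
assert snapshot == {"balance": 100}   # nothing was committed
```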
>> >
>> > Sorry if it's a bit confusing...
>> >
>> > Cheers,
>> > Justin
>> >
>> > On Fri, Oct 18, 2019 at 6:31 AM Ivan Pavlukhin <[email protected]> wrote:
>> >>
>> >> Hi,
>> >>
>> >> Currently there is no activity on optimistic transaction support for SQL.
>> >>
>> >> A transaction will be aborted on the first write conflict. Maybe I got
>> >> it wrong, but what is the benefit of aborting later (on commit)
>> >> instead of earlier (on the write conflict)? Perhaps a scenario written in
>> >> terms of Ignite operations (cache.get, cache.put, cache.query) could
>> >> illustrate your problem better for my understanding.
>> >>
>> >> Fri, Oct 18, 2019 at 06:58, Justin Moore <[email protected]>:
>> >> >
>> >> > Hi,
>> >> >
>> >> > Very new to Ignite and just trying to assess its capabilities.
>> >> >
>> >> > tl;dr: are optimistic serializable sql transactions on the roadmap?
>> >> >
>> >> > It seems that currently only pessimistic repeatable_read SQL 
>> >> > transactions are supported (in beta as of the current version, 2.7).  
>> >> > I'm trying to confirm the following: suppose I start two concurrent 
>> >> > transactions (from the same snapshot), where one intends to execute 
>> >> > statements #1 (compare-and-set), #2 (read-only), and #3 (whatever 
>> >> > else), while the other intends to execute the exact same statements 
>> >> > #1, #2, and #3, plus a subsequent #4.  Understanding that (all but) 
>> >> > one of those transactions would probably fail to commit, I'm looking 
>> >> > to clarify whether the failing one would throw/abort even before 
>> >> > reaching statement #2 (which might be a read returning values).
>> >> >
>> >> > If I'm reading the docs correctly, in pessimistic repeatable_read 
>> >> > mode one of the transactions would fail at statement #1 (due to the 
>> >> > write conflict), but if the transactions could have been optimistic 
>> >> > serializable, the losing one would simply roll back at the point a 
>> >> > commit was attempted.  Please correct me if I'm wrong.
>> >> >
>> >> > Lastly, are optimistic serializable sql transactions on the roadmap?
>> >> >
>> >> > Thanks
>> >>
>> >>
>> >>
>> >> --
>> >> Best regards,
>> >> Ivan Pavlukhin
>>
>>
>>
>> --
>> Best regards,
>> Ivan Pavlukhin



-- 
Best regards,
Ivan Pavlukhin
