<snipped lots of interesting stuff about relative efficiency of Serializable
isolation>

Tom,

Good arguments for Serializable but... I think the real answer is "it
depends". If I recall correctly, Serializable* can be more "efficient", but
it can also be less "efficient", depending on your access patterns and
degree of concurrency. The main transactional semantic tradeoff is that
Serializable can give you one of three results from the DBMS when you try to
commit:

- the tx committed successfully (because nothing that you touched had
changed in the interim)

- the tx cannot commit (in other words, you asked for a Serializable tx but
your transactions aren't "really" Serializable). The programmer deals with
this just as with other update conflict verification techniques, typically by
rolling back and re-running the tx (see the sketch below).

- "don't know" (too many changes have taken place and the tx history no
longer goes back far enough to determine if the row has changed or not). The
programmer is going to have to figure out how to deal with this in the
context of the application. The amount of "history" is determined by the
INTRANS parameter Tom mentioned, so increasing it reduces the likelihood of
this happening at the cost of requiring greater system resources. Depending
on the degree of concurrency, duration of the tx, etc., you may need to
compromise on how high you set INTRANS.

This last outcome, "don't know", can't happen with the explicit verification
techniques discussed in other threads; but on the other hand, Serializable is
much easier to program against because the database does all the work for you.
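
For what it's worth, here is a rough JDBC sketch of what "the programmer deals
with this" can look like for the second outcome: catch the rejection, roll
back, and re-run the whole tx. The table and column names are invented, and
I'm assuming Oracle surfaces the rejection as ORA-08177 (vendor error code
8177) - check your own driver/docs before relying on that.

    import java.sql.*;

    public class SerializableRetry {
        static final int MAX_ATTEMPTS = 3;

        /** Runs a transfer under SERIALIZABLE, retrying if the commit is rejected. */
        public static void transfer(Connection con, long fromId, long toId, long cents)
                throws SQLException {
            con.setAutoCommit(false);
            con.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
            for (int attempt = 1; ; attempt++) {
                try {
                    try (PreparedStatement ps = con.prepareStatement(
                            "UPDATE account SET balance = balance + ? WHERE id = ?")) {
                        ps.setLong(1, -cents); ps.setLong(2, fromId); ps.executeUpdate();
                        ps.setLong(1,  cents); ps.setLong(2, toId);   ps.executeUpdate();
                    }
                    con.commit();                  // outcome 1: committed successfully
                    return;
                } catch (SQLException e) {
                    con.rollback();
                    // Assumption: "can't serialize access" arrives as vendor code 8177.
                    boolean rejected = (e.getErrorCode() == 8177);
                    if (!rejected || attempt == MAX_ATTEMPTS) {
                        throw e;                   // outcomes 2/3: hand it back to the app
                    }
                    // else fall through and re-run the whole transaction from the start
                }
            }
        }
    }
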

There are also other resource tradeoffs - IIRC, Serializable requires
retaining multiple versions of a row (i.e. the history), with the number
depending on how many concurrent users of that row there are, whereas explicit
verification requires only the most recent committed data to check against.
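
To make that contrast concrete, explicit verification can be as simple as an
UPDATE that re-checks a version column, so only the current committed row ever
matters. Again, the table and column names (account, balance, version) are
made up purely for illustration:

    import java.sql.*;

    public class ExplicitVerification {
        /** Returns true if the update applied; false means the row changed since we
            read 'expectedVersion', so the caller must re-read and decide what to do. */
        public static boolean updateBalance(Connection con, long id,
                                            long newBalance, long expectedVersion)
                throws SQLException {
            String sql = "UPDATE account SET balance = ?, version = version + 1 "
                       + "WHERE id = ? AND version = ?";
            try (PreparedStatement ps = con.prepareStatement(sql)) {
                ps.setLong(1, newBalance);
                ps.setLong(2, id);
                ps.setLong(3, expectedVersion);
                return ps.executeUpdate() == 1;   // 0 rows => somebody else got there first
            }
        }
    }
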

In my experience, Serializable works best when the transactions really are
(at least, most of the time) truly serializable, i.e. you don't get (many)
rejected transactions, and works worst when the length of transactions and the
degree of concurrency cause a high number of "don't know" results.

In general, I think you have to choose the verification technique and
isolation level that matches the characteristics (both performance and
business semantics) of the tx in question, and there is no "best" answer.


Now I have a question for the list (if anybody is still reading!). When
people write that VendorX's app server CMP "implements" serializable isolation
by default, do they mean (i) that the CMP is actually checking for update
conflicts, (ii) that the CMP is verifying that the database is implementing
serializable isolation for this transaction, or (iii) that the CMP /assumes/
that the database is serializing the tx and is therefore actually doing /no/
verification itself?

Carl Zetie


*At least as implemented by Oracle - I don't know anything about anybody
else's implementation. I should also hedge that my experience of
Serializable was with Oracle8; 9i may be different.
