To answer the question in the subject line:

Q.  BeginTransaction - EndTransaction section: does it really work?

A.  Yes


I will also attempt to address the statement quoted below, hopefully without
upsetting any of the real db gurus out there.

> In either case db deadlocks are a possibility (which is different from the
> previous version of the software).


As there are still quite a few people using T2K, I had a look back at the
Pervasive support logs to get a better perspective on all this debate.  It
seems that the standard edition of T2K would not cause this behaviour as it
would only have one db request active at any particular time.  Those who had
long sql queries running would have seen how this stalled the T2K server's
other db requests until the long query finished.

The professional edition of T2K could and did cause db deadlocks quite
often as the load increased, with multiple T2K processes hitting the db, and
the professional server was used mainly in high-volume sites.

Witango 5, being pre-emptively multi-threaded, allows multiple db requests
to be processed at the same time by a single Witango server.  This means
that the server does not stall when a processing thread is performing a long
db task, as all the worker threads run simultaneously and get a share of the
resources and CPUs.  A by-product of this is that the db load is pushed
through to the dbms by the Witango 5 server, rather than being throttled by
the threading model as in the T2K server.  It is like running 4-8 T2K
processes all doing db queries.

So regardless of what version of the server you are running (T3.6, T2K or
Witango 5), you need to look at how you are building a db transaction and
analyse the worst case scenario of what rows (or tables) are getting locked
and for how long the locks are held.  The more rows that are locked, and
the longer they are locked for, the greater the likelihood of a deadlock
with concurrent user requests.  One of the problems with deadlocks is that
the more deadlocks you encounter, the more transactions will have to be
rerun at a future point in time, which adds unnecessary load with each
retry.  It compounds your problem.
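That compounding retry load can at least be kept in check by backing off
between attempts rather than hammering an already-contended server.  As a
rough sketch outside of Witango (the helper name, retry count and timings
here are my own invention, shown in Python):

```python
import random
import sqlite3
import time

# Hypothetical retry helper: rerun a transaction when the database reports
# a lock/deadlock error, with exponential backoff plus jitter so the
# retries are spread out instead of adding a synchronised burst of load.
def run_with_retry(work, retries=5, base_delay=0.05):
    last_error = None
    for attempt in range(retries):
        try:
            return work()
        except sqlite3.OperationalError as exc:
            last_error = exc
            # Double the wait each attempt; jitter de-synchronises clients.
            time.sleep(base_delay * (2 ** attempt) * random.random())
    raise RuntimeError("transaction failed after %d retries" % retries) from last_error
```

The same shape applies whatever the dbms: catch only the "deadlock victim"
class of error, rerun the whole transaction, and give up after a bounded
number of attempts.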

If you need to use a lot of transactions you may want to consider the data
structure you are using.  If you normalise the data structure further you
will quite often find that the amount of locking on the data will be reduced
as you have a finer control over the precise data you need to modify or
read.  It can also mean that there are db queries that can be run outside of
the transaction safely.  In a high transaction database you need to be able
to utilise smaller selects, inserts, updates and deletes.  The drawback of
the extra normalisation is that as the degree of normalisation increases,
so too does the complexity of the data model you end up with.  So you need
to find a balance between the best level of normalisation and the
performance cost of the extra joins, versus the better performance of your
transactions in being able to avoid locks and contention.  In an
application with a lot of data retrieval it is common to de-normalise the
data structure to improve the performance of calculations over millions of
records.
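To make the locking point concrete, here is a small sketch using SQLite
from Python (the table names and data are invented for illustration):
changing a value that a denormalised table repeats on every row means
updating, and locking, every one of those rows, while the normalised layout
confines the same change to a single row.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Denormalised: the customer's address is repeated on every order row,
# so changing it rewrites (and locks) every order for that customer.
cur.execute("CREATE TABLE orders_flat (id INTEGER, customer TEXT, address TEXT)")
cur.executemany("INSERT INTO orders_flat VALUES (?, ?, ?)",
                [(i, "acme", "1 Old St") for i in range(1000)])
cur.execute("UPDATE orders_flat SET address = '2 New St' WHERE customer = 'acme'")
rows_touched_flat = cur.rowcount  # every order row is rewritten

# Normalised: the address lives in exactly one place, so the same change
# is a single small update and the transaction holds far fewer locks.
cur.execute("CREATE TABLE customers (name TEXT PRIMARY KEY, address TEXT)")
cur.execute("CREATE TABLE orders (id INTEGER, customer TEXT REFERENCES customers(name))")
cur.execute("INSERT INTO customers VALUES ('acme', '1 Old St')")
cur.executemany("INSERT INTO orders VALUES (?, 'acme')", [(i,) for i in range(1000)])
cur.execute("UPDATE customers SET address = '2 New St' WHERE name = 'acme'")
rows_touched_norm = cur.rowcount  # one row

print(rows_touched_flat, rows_touched_norm)
```

(SQLite itself locks at the database level rather than the row level, so
the example only illustrates how many rows each statement has to touch;
on a row-locking dbms that count is what the lock footprint follows.)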

The other thing to remember is that a transactional database and structure
is a different beast to a data warehouse database structure.  The server is
also tuned very differently, so work out what the database is predominantly
going to be used for before you design it.  Also make sure that your
indexes are optimal for all your queries.  Check each db action's criteria
and joins to make sure that an optimal index is available to the server
when it processes the sql query.
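That check can usually be done with the dbms's own query-plan tool.  As a
sketch using SQLite's EXPLAIN QUERY PLAN from Python (other servers have
equivalents such as EXPLAIN or showplan, and the exact plan wording varies
by product and version):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)")

def plan(sql):
    # EXPLAIN QUERY PLAN rows are (id, parent, notused, detail) tuples;
    # the human-readable plan text is the fourth column.
    return " ".join(row[3] for row in cur.execute("EXPLAIN QUERY PLAN " + sql))

# Without an index on the criteria column the query is a full table scan.
before = plan("SELECT total FROM orders WHERE customer = 'acme'")

# Add an index matching the WHERE criteria and re-check the plan.
cur.execute("CREATE INDEX idx_orders_customer ON orders(customer)")
after = plan("SELECT total FROM orders WHERE customer = 'acme'")

print(before)  # a SCAN of orders (wording varies by SQLite version)
print(after)   # a SEARCH using idx_orders_customer
```

Run the same check over each db action's criteria and join columns; any
plan that scans a large table is a candidate for a new or adjusted index.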

From a Witango programmer's point of view, if it is not possible to reduce
the factors causing the deadlocks and there is a high probability of a
deadlock occurring, make sure you use the Error HTML to catch the error,
use @CLEARERRORS to continue processing, and compensate for the transaction
that did not complete.  As a rule of thumb, keep the locks to a minimum in
your db actions, especially in read/write exclusive transactions.

Personally, I think good db design is a balance between functionality,
complexity, data integrity, readability, convenience, speed, concurrency and
time.  It is knowing when to break the rules, which rules to break and why
that makes a great database design.  It is an art form and not an
engineering science.

Just my 2 cents worth (US 1.4c).

Phil

________________________________________________________________________
TO UNSUBSCRIBE: Go to http://www.witango.com/developer/maillist.taf
