Thanks Karl,

In my view, the cases where fine-grained locks can make a difference are
very few. But since you've been working with databases with very high insert
and update loads, you might know of a scenario that makes them desirable.
Please let us know.

Currently we are using 40-year-old AVL trees. The paper by Lehman and Yao is
about 20 years old. I'm wondering to what extent their approach was influenced
by the state of hardware in their time. For the sake of simplicity, let's
assume the ratio between RAM and disk space (primary and secondary storage, as
they call it) has remained the same over the last 20 years. What has changed,
though, is that the number of nodes a process can hold in primary storage is
probably, on average, about 1000 times what it was 20 years ago (using the
same proportion of total primary storage). Could this influence our approach?
For example, if a node takes 16 bytes, then we can store the index for a
million rows in 16MB of memory. On a typical server, the indexes for a very
large database with multiple indexes can be processed mainly in memory and
written to disk using platform-backed methods, without any effort on our part.
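
As a rough back-of-the-envelope check in Java (the 16-byte node size and the
row count are just assumed figures, not measurements):

    // Back-of-the-envelope estimate of in-memory index size.
    // The 16-byte node size and the row count are assumptions.
    public class IndexSizeEstimate {
        public static void main(String[] args) {
            long bytesPerNode = 16L;      // assumed size of one index node
            long rows = 1000000L;         // one million indexed rows
            long indexBytes = bytesPerNode * rows;
            System.out.println("Index size: " + indexBytes + " bytes (~"
                    + (indexBytes / (1000 * 1000)) + " MB)");
        }
    }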

Something else to bear in mind is that we need on-the-fly generation of
indexes for result sets. Currently result sets are held entirely in memory and
have no index; as a result, queries with subqueries perform poorly. Also, if we
want to be able to process and return very large result sets, we need to create
an index for the result set as it is built and then fetch the rows from the
tables when the result set is returned to the client. In short, we need an
indexing solution that covers both permanent tables and temporary result sets.
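
To make the idea concrete, something along these lines might work (just a
sketch; the class and method names are made up, and duplicate sort keys are
ignored for simplicity):

    import java.util.Iterator;
    import java.util.TreeMap;

    // Sketch: index a result set on its sort key while it is built,
    // keeping only row identifiers in memory. The actual rows are
    // fetched from the tables when the result is returned to the client.
    class ResultSetIndex<K extends Comparable<K>> {
        // maps sort key -> row id in the underlying table
        private final TreeMap<K, Integer> index = new TreeMap<K, Integer>();

        // called once per row as the result set is built
        void add(K key, int rowId) {
            index.put(key, rowId);
        }

        // row ids in key order, so rows can be fetched lazily and in order
        Iterator<Integer> rowIdsInOrder() {
            return index.values().iterator();
        }
    }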


Fred Toussi

----- Original Message -----
From: "Karl Meissner" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Friday, April 19, 2002 9:39 PM
Subject: [Hsqldb-developers] fine grain threading




I implemented BLink Trees in a tiny stand-alone demo.
The algorithm turned out to be a little more
complicated than I thought, so I have not had time
to implement the nio paging system yet.

The big claim to fame for the BLink Tree algorithm
over regular B-Trees is that it allows multiple
reader and writer threads to access the tree at the same time.

The seminal paper on BLink Trees is
http://lambda.cs.yale.edu/cs424/lect/btree.pdf

In practice, though, the fine-grained locking turns out to be really slow.

With the help of some special data structures you lock
single B-Tree nodes as you move up and down the tree.
The cost of locking is enormous: with fine-grained
locking turned on, it is about 10X slower than without
locking.
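
Roughly, the per-node locking looks like this (only a sketch of the
lock-coupling pattern, not the code from my demo; the node layout here is
made up):

    import java.util.concurrent.locks.ReentrantLock;

    // Sketch of "lock coupling": hold the parent's lock only until the
    // child's lock is acquired, so at most two node locks are held while
    // descending. Every step down the tree pays for two lock operations.
    class Node {
        final ReentrantLock lock = new ReentrantLock();
        Node[] children;    // null in a leaf
        int[] keys;

        Node childFor(int key) {
            // pick the child whose key range covers 'key' (simplified)
            int i = 0;
            while (i < keys.length && key >= keys[i]) {
                i++;
            }
            return children[i];
        }
    }

    class LockCouplingSearch {
        // returns the leaf for 'key'; the caller still holds the leaf's lock
        Node descend(Node root, int key) {
            Node current = root;
            current.lock.lock();
            while (current.children != null) {     // until we reach a leaf
                Node child = current.childFor(key);
                child.lock.lock();                 // lock child before releasing parent
                current.lock.unlock();
                current = child;
            }
            return current;
        }
    }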

Doing a coarse lock that locks entire tables for
each query would be a lot faster, though it would
increase the wait time for some queries. Still, a
10X speedup is something you would notice.
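
The coarse version would be more like this (again just an illustration, not
real HSQLDB code):

    import java.util.TreeMap;

    // Coarse alternative: one lock for the whole table/index,
    // taken once per query instead of once per node visited.
    class CoarseLockedIndex {
        private final Object tableLock = new Object();
        private final TreeMap<Integer, Object> index =
                new TreeMap<Integer, Object>();

        Object lookup(int key) {
            synchronized (tableLock) {
                return index.get(key);
            }
        }

        void insert(int key, Object row) {
            synchronized (tableLock) {
                index.put(key, row);
            }
        }
    }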

Anyway, how keen are you guys to have fine-grained
locking?  I can't get it to go fast.

If you don't need fine-grained concurrency then I may
abandon BLink Trees and just do B-Trees, which are
simpler anyway.

I have an implementation (without nio) if you guys
want to look at code.
Oh, and with fine-grained locking turned off (and no
memory page misses) the BLink Tree is only 10%
slower than java.util.TreeMap.  Most of that speed
loss is from keeping the B-Tree nodes in a cache so
they can be used with nio.
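
A TreeMap baseline of the kind I compared against can be timed with something
along these lines (not my actual harness, just the shape of it; micro-timings
like this are approximate at best):

    import java.util.Random;
    import java.util.TreeMap;

    // Rough TreeMap baseline: time a batch of random-key inserts.
    public class TreeMapBaseline {
        public static void main(String[] args) {
            TreeMap<Integer, Integer> map = new TreeMap<Integer, Integer>();
            Random random = new Random(42);
            int n = 1000000;

            long start = System.currentTimeMillis();
            for (int i = 0; i < n; i++) {
                int key = random.nextInt();
                map.put(Integer.valueOf(key), Integer.valueOf(key));
            }
            long elapsed = System.currentTimeMillis() - start;
            System.out.println(n + " inserts in " + elapsed + " ms");
        }
    }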

Karl




=====
Karl Meissner

...........................................................
: Meissner Software Development, LLC                      :
: "Software development and project management services." :
: http://meissner.v0.net/resume.htm                       :
...........................................................
