You're talking about a DB size much smaller than 1 billion records.

In a 1-billion-record DB under the described scenario, the cache hit ratio
is so small that everything you're talking about makes very close to zero
difference in effect, because one uncached random IO operation costs about
10ms. Any reasonable calculation (in the current scenario), or even saving
a new page near the current page, costs far less than 10ms.
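
To put rough numbers on it (just an illustration in Python; the ~100 IOPS
figure and the ~1% cache hit ratio are assumptions, not measurements):

# Back-of-envelope: with ~10ms per uncached random read, the in-memory
# B-tree work per insert is noise by comparison. Illustrative numbers only.
RANDOM_IO_MS    = 10.0    # one uncached random read on a ~100 IOPS disk
CPU_WORK_MS     = 0.01    # generous estimate for per-insert B-tree CPU work
CACHE_HIT_RATIO = 0.01    # ~1% of a 1-billion-row index fits in cache

io_ms  = (1 - CACHE_HIT_RATIO) * RANDOM_IO_MS
avg_ms = io_ms + CPU_WORK_MS
print("average cost per random insert: %.2f ms" % avg_ms)
print("share of that spent on IO:      %.1f%%" % (100 * io_ms / avg_ms))
# -> IO dominates; shaving CPU or page-locality costs changes almost nothing.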

And that's what I said: that proposal won't help, and it will make things a
bit worse and more complex.

In the future, when we have all forgotten the 100 IOPS wall and are hitting
100K-1M IOPS, your assumptions might hold for a large DB. But I'm sure that
tricks like splitting a table into 100-10000 tables with a hash wheel in
mind won't give any additional significant benefit. Hash tables can be
faster when you don't need range operations, but placing a hash on top of a
B-tree to eliminate a single B-tree page shouldn't give any speedup.

If you have proof that this trick still works, I will be glad to see a code
sample with benchmarks.
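
To be concrete, this is roughly the comparison I have in mind - a sketch
using Python's standard sqlite3 module. The 100-table split, the key % N
"hash wheel", the row count and the schema are all just assumptions for
illustration; run it in an empty directory and scale N_ROWS up on real
hardware to actually hit the IO wall:

import random, sqlite3, time

N_BUCKETS = 100
N_ROWS = 1_000_000   # push this towards 1e9 to leave the cache behind

def insert_rows(db, table_for_key):
    # Random keys force uncached B-tree descents once the DB outgrows cache.
    start = time.time()
    for _ in range(N_ROWS):
        k = random.getrandbits(48)
        db.execute("INSERT OR IGNORE INTO %s VALUES (?, ?)" % table_for_key(k),
                   (k, b"x" * 100))
    db.commit()
    return time.time() - start

def bench_single(path):
    db = sqlite3.connect(path)
    db.execute("CREATE TABLE t (k INTEGER PRIMARY KEY, v BLOB)")
    return insert_rows(db, lambda k: "t")

def bench_partitioned(path):
    db = sqlite3.connect(path)
    for i in range(N_BUCKETS):
        db.execute("CREATE TABLE t%d (k INTEGER PRIMARY KEY, v BLOB)" % i)
    return insert_rows(db, lambda k: "t%d" % (k % N_BUCKETS))

print("single B-tree:        %.1f s" % bench_single("single.db"))
print("hash wheel of tables: %.1f s" % bench_partitioned("partitioned.db"))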

Thanks.
Max.


John Stanton-3 wrote:
> 
> Quite wrong.   Searching a B-Tree is relatively inexpensive but node 
> splits are expensive.
> 
> Inserting a non-terminal key into a part-filled leaf node is cheap, 
> inserting a terminal key is more expensive, and a split is more expensive 
> again.
> 
> The reason we spend the extra resources maintaining B-tree indices is 
> because they maintain the keys in sorted sequence.  If maintaining keys 
> in order is not required a hashing method can be faster.
> 
> Our fastest B-Tree indices use the virtual memory capability of the OS 
> as cache and perform very well by avoiding buffer shadowing and 
> maximizing utilization of physical memory.
