Doug Fajardo wrote:
> No, I admit I haven't tried this under SQLITE.
> 
> Whether this approach will help for the specific application will depend
> on data usage patterns, which we haven't delved into for this application.
> Call me simple: since the main issue is degraded performance with larger
> groupings of data, it seemed to make sense that breaking the data into
> smaller groupings would help. 
> 
> Of course it's very possible that the size of the database in question may
> mean that the number of hash buckets needed to reap significant benefits
> makes this approach counter-productive. That's why it is only a suggestion
> :-)

I think your initial assumptions are wrong. I can't imagine a scenario in
which your proposal would give any benefit, except with a poor B-Tree
implementation, which is not the case with SQLite.
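
For concreteness, here is a rough sketch (in Python) of how I read the
proposal; the table names, bucket count, and hash function are purely
illustrative:

    import sqlite3
    import zlib

    N_BUCKETS = 16  # hypothetical number of bucket tables

    def bucket_for(key: bytes) -> int:
        # Stable hash so the same key always maps to the same bucket table.
        return zlib.crc32(key) % N_BUCKETS

    conn = sqlite3.connect("big.db")
    for b in range(N_BUCKETS):
        conn.execute(
            f"CREATE TABLE IF NOT EXISTS data_{b} (key BLOB PRIMARY KEY, value BLOB)")

    def insert(key: bytes, value: bytes) -> None:
        conn.execute(
            f"INSERT INTO data_{bucket_for(key)} (key, value) VALUES (?, ?)",
            (key, value))

    def lookup(key: bytes):
        return conn.execute(
            f"SELECT value FROM data_{bucket_for(key)} WHERE key = ?",
            (key,)).fetchone()

Each bucket table is smaller, but with random keys every bucket's B-Tree is
still touched uniformly, so the total working set of pages, and therefore
the pressure on the cache, stays about the same.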

As I posted in my answer to the thread starter, the performance degrades
because the cache hit ratio drops as the amount of data grows. Your proposal
may help if the incoming keys are not random and hit only a few of the
tables, since that gives a higher cache hit ratio. But if that's the case,
the same thing will happen with a single B-Tree. And if that's the case, why
not feed the data in sorted order? Then, both by reasoning and as has been
demonstrated, SQLite will insert the data without any performance
degradation.
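
To make the sorted-feed idea concrete, a minimal sketch (again Python, with
illustrative names and no error handling) might be:

    import sqlite3

    conn = sqlite3.connect("big.db")
    conn.execute(
        "CREATE TABLE IF NOT EXISTS data (key BLOB PRIMARY KEY, value BLOB)")

    def insert_batch(rows):
        # rows: iterable of (key, value) pairs in arbitrary order
        batch = sorted(rows, key=lambda r: r[0])  # sort by key first
        with conn:  # one transaction for the whole batch
            conn.executemany(
                "INSERT INTO data (key, value) VALUES (?, ?)", batch)

Because the keys arrive in order, each new row lands on the same or the next
B-Tree leaf page, so the page cache stays effective no matter how large the
table grows.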

Could you describe a situation in which your proposal would help, and explain why?

-- 
View this message in context: 
http://www.nabble.com/very-large-SQLite-tables-tp24201098p24223839.html
Sent from the SQLite mailing list archive at Nabble.com.

_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
