No, I admit I haven't tried this under SQLite.

Whether this approach will help the specific application will depend on its 
data usage patterns, which we haven't delved into. Call me simple: since the 
main issue is degraded performance with larger groupings of data, it seemed to 
make sense that breaking the data into smaller groupings would help.

Of course it's quite possible that, at the size of the database in question, 
the number of hash buckets needed to reap significant benefits would make this 
approach counter-productive. That's why it is only a suggestion :-)
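And if it ever did make sense to spread the buckets across separate database 
files on different drives, as mentioned in the original suggestion quoted 
below, SQLite's ATTACH makes that fairly painless. Another untested sketch, 
with hypothetical file paths and a hypothetical "data" table in each file:

import sqlite3

# Hypothetical layout: each bucket database lives on a different drive.
conn = sqlite3.connect("/mnt/disk0/bucket0.db")
conn.execute("ATTACH DATABASE '/mnt/disk1/bucket1.db' AS b1")
conn.execute("ATTACH DATABASE '/mnt/disk2/bucket2.db' AS b2")

# Each file holds the same 'data' table; the key's hash bucket picks the
# schema name to query (main, b1, b2, ...).
row = conn.execute(
    "SELECT value FROM b1.data WHERE key = ?", ("some-key",)).fetchone()

Keep in mind that SQLite limits how many databases can be attached to a single 
connection (10 by default), so this part only works for a fairly small number 
of buckets.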

*** Doug Fajardo

-----Original Message-----
From: [email protected] [mailto:[email protected]] 
On Behalf Of Kosenko Max
Sent: Friday, June 26, 2009 4:06 AM
To: [email protected]
Subject: Re: [sqlite] very large SQLite tables


Have you ever tested such a proposal?
I believe that doesn't work.


Doug Fajardo wrote:
> 
> One approach might be to split the big, monolithic table into some number
> of hash buckets, where each 'bucket' is separate table. When doing a
> search, the program calculates the hash and reads only the bucket
> that is needed.
> 
> This approach also has the potential for allowing multiple databases,
> where tables would be spread across the different databases. The databases
> could be spread across multiple drives to improve performance.
> 
-- 
View this message in context: 
http://www.nabble.com/very-large-SQLite-tables-tp24201098p24218386.html
Sent from the SQLite mailing list archive at Nabble.com.

_______________________________________________
sqlite-users mailing list
[email protected]
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users