Have you ever tested such a proposal?
I believe that doesn't work.

Doug Fajardo wrote:
> 
> One approach might be to split the big, monolithic table into some number
> of hash buckets, where each 'bucket' is separate table. When doing a
> search, the program calculates the hash and reads only the bucket
> that is needed.
> 
> This approach also has the potential for allowing multiple databases,
> where tables would be spread across the different databases. The databases
> could be spread across multiple drives to improve performance.
> 
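For context, here is a minimal sketch of what I understand the proposal to mean. The bucket count, table names, and the choice of a stable hash (MD5 rather than Python's built-in, salted hash) are my assumptions, not from the original post:

```python
import hashlib
import sqlite3

NUM_BUCKETS = 16  # assumed; a real deployment would tune this


def bucket_table(key):
    # Stable hash -> one of the per-bucket tables data_0 .. data_15.
    # A stable hash matters so the same key maps to the same table
    # across program runs.
    h = int(hashlib.md5(key.encode("utf-8")).hexdigest(), 16)
    return "data_%d" % (h % NUM_BUCKETS)


conn = sqlite3.connect(":memory:")
for i in range(NUM_BUCKETS):
    conn.execute("CREATE TABLE data_%d (key TEXT PRIMARY KEY, value TEXT)" % i)


def put(key, value):
    conn.execute(
        "INSERT OR REPLACE INTO %s (key, value) VALUES (?, ?)"
        % bucket_table(key),
        (key, value),
    )


def get(key):
    # Only the single bucket table holding this key is queried.
    row = conn.execute(
        "SELECT value FROM %s WHERE key = ?" % bucket_table(key), (key,)
    ).fetchone()
    return row[0] if row else None
```

Note that SQLite already keeps a B-tree index on an indexed key, so a lookup in one big table is O(log N) anyway; that's the part of the proposal I'd want to see measured before believing the split helps.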
-- 
View this message in context: 
http://www.nabble.com/very-large-SQLite-tables-tp24201098p24218386.html
Sent from the SQLite mailing list archive at Nabble.com.

_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
