Hi all ...
a couple of days back, I had a question about the performance of large
tables. You really got me going there, thanks again!

But now there's another thing. I figured out how large my database
will become, and I'm scared of its size: up to 20GB and more! A single
table with 4 columns, each holding a 32-bit integer, will have
approximately 750 million rows. That adds up to ~11GB. Adding a
unique two-column index, I get another 10GB worth of data. That's an
awful lot.
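
(For reference, my back-of-envelope figure for the raw data, ignoring
per-row overhead:

    750,000,000 rows * 4 columns * 4 bytes/column = 12,000,000,000 bytes, i.e. roughly 11 GiB

The two-column index comes out in the same ballpark, hence the ~20GB total.)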

Was sqlite designed for those numbers? The docs state that sqlite
supports "databases up to 2 terabytes in size". OTOH, "supports" is
not the same as "works well with"?! Any suggestions on whether my
decision to use sqlite was appropriate for this table design?

Another question: is there any way to specify a comparison function
for blobs? I thought about packing those 128 bits (4 x 32-bit
columns); it should be possible to strip some overhead and reduce
them to 64 bits (using a bitset). This bitset could then be stored in
a single blob column. However, to still ensure uniqueness on two of
those four columns, I would have to specify my own comparison
function. Any chance to do this?
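
To make the question concrete, here is the kind of comparison hook I
have in mind. It's a rough, untested sketch: the collation name
"bitset64" and the helper names are made up, and whether sqlite lets
a user-defined collation apply to blob columns at all is exactly what
I'm asking.

#include <stdint.h>
#include <string.h>
#include <sqlite3.h>

/* Read 8 bytes as a little-endian unsigned 64-bit value; this assumes
 * that is how I would store the packed bitset. */
static uint64_t read_u64_le(const unsigned char *p)
{
    uint64_t v = 0;
    for (int i = 7; i >= 0; --i)
        v = (v << 8) | p[i];
    return v;
}

/* Order two 8-byte blobs as unsigned 64-bit integers instead of by
 * raw memcmp. */
static int bitset64_cmp(void *unused, int lenA, const void *a,
                        int lenB, const void *b)
{
    (void)unused;
    /* Fall back to plain byte comparison if a value is not 8 bytes. */
    if (lenA != 8 || lenB != 8) {
        int min = lenA < lenB ? lenA : lenB;
        int rc = memcmp(a, b, min);
        return rc != 0 ? rc : lenA - lenB;
    }
    uint64_t va = read_u64_le(a);
    uint64_t vb = read_u64_le(b);
    return (va > vb) - (va < vb);
}

/* Register the comparison as a named collation so it could be used in
 * a UNIQUE constraint. */
int register_bitset64(sqlite3 *db)
{
    /* The text-encoding flag is required by the API; the callback
     * still receives the raw bytes. */
    return sqlite3_create_collation(db, "bitset64", SQLITE_UTF8,
                                    0, bitset64_cmp);
}

If a collation can't be attached to blob columns, any other way to get
an equivalent effect would help just as well.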

Thanks for your time!

With kind regards

      Daniel
