Regarding:  "...is to use an MD5/SHA-1 or similar checksum of the record and 
use the last 32 bits of that checksum.  It is extremely unlikely for there to 
be a collision...."

Except that the OP wrote:  "...I don't think it works very well for 2^32 
possible values (when there may well be only a couple of hundred unused 
ones)...."
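
For concreteness, that scheme amounts to something like this -- a minimal 
sketch in Python, and the function name is mine, not anything from the 
original post:

    import hashlib

    def last32(record: bytes) -> int:
        # Take the low 32 bits (the last 4 bytes) of the SHA-1 digest.
        return int.from_bytes(hashlib.sha1(record).digest()[-4:], "big")

    print(last32(b"some record"))  # an unsigned value below 2**32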



I think that, with 500 unused values, you'll have consumed over 99.99998 
percent of your value space -- and you appear to be describing this situation 
as ordinary.  (At that point, of course, collisions would be the majority 
outcome.)
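
The arithmetic, for anyone checking (plain Python):

    total, unused = 2**32, 500
    print(100 * (total - unused) / total)   # ~99.9999884 percent consumed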

Is your application such that, being just a hair's breadth away already, it 
will almost certainly go on to exhaust the rest of your 2**32 values, 
resulting in "bad things happening"?

If there may well be only a couple of hundred unused values, then unless your 
rows and indices are tiny, wouldn't your database already be so large that a 
2GB table of integers would be small in comparison?  Anyway, Scott has beaten 
me to the suggestion that you could simply store the IDs of the *deleted* rows 
while monitoring your maximum ID to be sure it remains under 2**32 (i.e., 
4294967296).
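
If it helps, here is one way that suggestion might look from Python's sqlite3 
module -- a sketch only, and the table and function names are my own 
assumptions, not anything from Scott's message:

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE data(id INTEGER PRIMARY KEY, payload);
        CREATE TABLE free_ids(id INTEGER PRIMARY KEY);  -- IDs of deleted rows
    """)

    def next_id(con):
        # Prefer recycling an ID saved from a deleted row.
        row = con.execute("SELECT id FROM free_ids LIMIT 1").fetchone()
        if row:
            con.execute("DELETE FROM free_ids WHERE id = ?", (row[0],))
            return row[0]
        # Otherwise allocate max+1, watching the 2**32 ceiling.
        (candidate,) = con.execute(
            "SELECT COALESCE(MAX(id), 0) + 1 FROM data").fetchone()
        if candidate > 2**32:
            raise RuntimeError("ID space exhausted")  # the "bad things" case
        return candidate

    def delete_row(con, rowid):
        # Return the freed ID to the pool when its row goes away.
        con.execute("DELETE FROM data WHERE id = ?", (rowid,))
        con.execute("INSERT INTO free_ids VALUES (?)", (rowid,))

In the worst case, free_ids holds every ID ever freed -- that is the "table 
of integers" trade-off mentioned above.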


   