On 12/02/2010 10:47 AM, Ted Dunning wrote:
I would recommend that you increment the counter by 100 or 1000 and then increment a local counter over the implied range. This will drive the amortized ZK overhead down to tens of microseconds, which should be good for almost any application. Your final ids will still be almost entirely contiguous. You could implement a fancier counter in ZK that remembers returned chunks for reuse, to get perfect contiguity if you really wanted that.
This is what our library does: you request a chunk of, say, 1000 IDs, and then push back any remaining unused IDs from the chunk you took.
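
For reference, the chunk-reservation part can be sketched roughly like this against the plain ZooKeeper Java client. This is not our library's actual code; the ChunkedIdAllocator name and the /ids/counter path are made up, and the counter znode is assumed to hold the next free id as a UTF-8 long.

import java.nio.charset.StandardCharsets;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

/** Hands out ids locally, paying one ZK round trip per CHUNK_SIZE ids. */
public class ChunkedIdAllocator {
    private static final int CHUNK_SIZE = 1000;  // ids reserved per ZK update
    private final ZooKeeper zk;
    private final String counterPath;            // e.g. "/ids/counter" (illustrative)
    private long next;                           // next id to hand out locally
    private long limit;                          // exclusive end of the reserved chunk

    public ChunkedIdAllocator(ZooKeeper zk, String counterPath) {
        this.zk = zk;
        this.counterPath = counterPath;
        // next == limit (both 0) forces a chunk reservation on first use
    }

    /** Returns the next id, reserving a fresh chunk when the local one is used up. */
    public synchronized long nextId() throws KeeperException, InterruptedException {
        if (next >= limit) {
            reserveChunk();
        }
        return next++;
    }

    /** Versioned read-modify-write loop: bump the shared counter by CHUNK_SIZE. */
    private void reserveChunk() throws KeeperException, InterruptedException {
        while (true) {
            Stat stat = new Stat();
            byte[] data = zk.getData(counterPath, false, stat);
            long current = Long.parseLong(new String(data, StandardCharsets.UTF_8));
            try {
                // Fails with BadVersion if another client bumped the counter first.
                zk.setData(counterPath,
                           Long.toString(current + CHUNK_SIZE).getBytes(StandardCharsets.UTF_8),
                           stat.getVersion());
                next = current;
                limit = current + CHUNK_SIZE;
                return;
            } catch (KeeperException.BadVersionException e) {
                // Lost the race; re-read and retry.
            }
        }
    }
}

The push-back of unused ids isn't shown here; one way to add it would be to record returned sub-ranges under a separate znode and have reserveChunk() consume those before bumping the main counter.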
DR
