I have a CGI program that needs to generate a unique identifier each time it gets executed. The problem is that it can get executed multiple times per second (duh ... CGI), and requirements prevent me from having a central source from which to generate a unique ID. Besides, I have a much simpler solution ... well, I thought I did. Take the time in seconds since the beginning of the epoch, the number of microseconds in the current second, and a 3-digit random number, and concatenate them with delimiters. Sounds reasonable, right? Maybe even a little excessive with the random number. Well, three times in the past month we've seen the same ID generated by two requests running simultaneously!
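
For concreteness, here is a minimal sketch of the scheme as described. The "-" delimiters, the sprintf field widths, and the explicit /dev/urandom seeding are my guesses at the details, not necessarily what the real code does:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Time::HiRes qw(gettimeofday);

    # Seed rand() from /dev/urandom, as the original code reportedly does.
    open my $ur, '<', '/dev/urandom' or die "can't open /dev/urandom: $!";
    read $ur, my $seed, 4 or die "short read from /dev/urandom";
    close $ur;
    srand(unpack 'L', $seed);

    # Seconds and microseconds since the epoch, via gettimeofday(2).
    my ($sec, $usec) = gettimeofday();

    # 3-digit random component.
    my $rnd = int(rand(1000));

    # Concatenate with delimiters, e.g. "1130448000-123456-042".
    my $id = sprintf '%d-%06d-%03d', $sec, $usec, $rnd;
    print "$id\n";
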
It's Perl code, but according to the documentation the seconds and microseconds are grabbed with the standard gettimeofday system call, and the random number generator is seeded from /dev/urandom. So both should work pretty well, and they seem to when tested. The only partial explanation I can think of is that this is a dual-CPU system and both requests were literally running at the same time, down to the microsecond. Does anyone know whether there is any locking on /dev/urandom that prevents two processes from grabbing the same data at the same time?

Anyway, I have a simple solution: add the process ID to the mix. That should be unique among concurrently executing processes, right? ;-) (A sketch of the amended generator is below.)

Owen
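
P.S. Here is roughly what the amended generator looks like with the PID ($$) folded in; again, the delimiters and field widths are illustrative assumptions:

    use strict;
    use warnings;
    use Time::HiRes qw(gettimeofday);

    my ($sec, $usec) = gettimeofday();
    my $rnd = int(rand(1000));

    # $$ is the current process ID. Two processes alive at the same
    # instant on one host can never share a PID, so a collision now
    # requires the kernel to recycle a PID within a single microsecond.
    my $id = sprintf '%d-%06d-%03d-%d', $sec, $usec, $rnd, $$;
    print "$id\n";
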
