Interesting. Thanks for pointing that out. I definitely would have been interested at the time of the initial implementation, but I'm afraid it's too much of a PITA to get additional Perl modules installed now.
And, unless somebody can point out a flaw in my logic, adding the
process id to the mix should be a simple, clean solution to my problem.

Owen

On Thu, Aug 17, 2006 at 02:30:58PM -0400, Lance A. Brown wrote:
> What about Data::UUID?
>
> http://search.cpan.org/~rjbs/Data-UUID-0.14/UUID.pm
>
> This module provides a framework for generating UUIDs (Universally
> Unique Identifiers), also known as GUIDs (Globally Unique
> Identifiers). A UUID is 128 bits long, and is guaranteed to be
> different from all other UUIDs/GUIDs generated until 3400 CE.
>
> UUIDs were originally used in the Network Computing System (NCS) and
> later in the Open Software Foundation's (OSF) Distributed Computing
> Environment. Currently many different technologies rely on UUIDs to
> provide unique identity for various software components. Microsoft
> COM/DCOM, for instance, uses GUIDs very extensively to uniquely
> identify classes, applications and components across
> network-connected systems.
>
> The algorithm for UUID generation used by this extension is described
> in the Internet Draft "UUIDs and GUIDs" by Paul J. Leach and Rich
> Salz
> (http://hegel.ittc.ku.edu/topics/internet/internet-drafts/draft-l/draft-leach-uuids-guids-01.txt).
> It provides a reasonably efficient and reliable framework for
> generating UUIDs and supports fairly high allocation rates -- 10
> million per second per machine -- and is therefore suitable for
> identifying both extremely short-lived and very persistent objects
> on a given system as well as across the network.
>
> Owen Berry wrote:
> > I have a CGI program that needs to generate a unique identifier
> > each time it gets executed. The problem is that it can get
> > executed multiple times per second (duh ... CGI), and requirements
> > prevent me from having a central source from which to generate a
> > unique id. Besides, I have a much simpler solution ... well, I
> > thought I did.
> > Take the time in seconds since the beginning of the epoch, the
> > number of microseconds in the current second, and a 3-digit random
> > number, and concatenate them together with delimiters. Sounds
> > reasonable, right? Maybe even a little excessive with the random
> > number. Well, 3 times in the past month we've seen the same id
> > generated by 2 requests running simultaneously!
> >
> > It's Perl code, but according to the documentation the seconds and
> > microseconds are grabbed using the standard gettimeofday system
> > function, and the random number generator is seeded by
> > /dev/urandom. So they should both work pretty well, and seem to
> > when tested.
> >
> > The only partial explanation I can think of is that this is a
> > dual-CPU system and both requests were literally running at the
> > same time, down to the microsecond. Does anyone know if there is
> > any locking on /dev/urandom to prevent 2 processes grabbing the
> > same data at the same time?
> >
> > Anyway, I have a simple solution ... add the process id to the
> > mix. That should be unique amongst concurrently executing
> > processes, right? ;-)
> >
> > Owen

--
TriLUG mailing list : http://www.trilug.org/mailman/listinfo/trilug
TriLUG Organizational FAQ : http://trilug.org/faq/
TriLUG Member Services FAQ : http://members.trilug.org/services_faq/
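The id scheme discussed in the thread -- epoch seconds, microseconds within the current second, a 3-digit random number, and (the proposed fix) the process id, joined with delimiters -- can be sketched as follows. This is a Python port for illustration only; the original is Perl, where `Time::HiRes::gettimeofday` and `$$` would supply the timestamp and pid, and the function name `make_request_id` is made up for this sketch.

```python
import os
import random
import time

def make_request_id():
    """Sketch of the thread's scheme: seconds since the epoch,
    microseconds within that second, a 3-digit random number,
    and the process id, joined with dashes."""
    # Split a nanosecond timestamp into the two fields gettimeofday()
    # would return: whole seconds and microseconds within the second.
    secs, rest_ns = divmod(time.time_ns(), 1_000_000_000)
    usecs = rest_ns // 1_000
    rand3 = random.randint(0, 999)
    return f"{secs}-{usecs:06d}-{rand3:03d}-{os.getpid()}"
```

The pid component is what rescues the scheme: even if two concurrent requests land in the same microsecond and draw the same random number, the kernel never assigns one pid to two live processes, so their ids still differ.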
