>> I think I can give you a reason for random numbers versus autonumbers.
I was once asked to look into interfacing with a very sophisticated contact
management system. This system used no autonumbers. It used something it
called a GUID. I forget what GUID stood for, but they were very large
random numbers. The reason for the GUIDs in this application was
replication. The system was built to support offline processing. When the
user came back online and merged their changes, the randomness of the IDs
made the merging process much easier. The developer I talked to said the
odds of duplicating one of these very large numbers were extremely low. <<
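For what it's worth, GUID stands for Globally Unique Identifier: a 128-bit
value, of which the random ("version 4") variety fills 122 bits with random
data. Python's standard uuid module can generate one, as a quick
illustration (not necessarily the scheme that contact manager used):

```python
import uuid

# A version-4 GUID: 128 bits total, 122 of them random.
guid = uuid.uuid4()
print(guid)                           # e.g. 'd4f0b3a2-...' - different every run
print(guid.int.bit_length() <= 128)   # always fits in 128 bits
```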
Assuming the developer of that app was right that the chance of duplication
is small, is it so small that no checking needs to be done at all? In any
case, that is an issue when you need to be able to merge datasets. But does
the application under discussion here have that problem? Absent that issue,
is it still "good" or desirable? It probably would not be a big problem -
IF it did not mean writing extra code to do the checking and worrying that
someone might be able to bypass it...
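On the first question - whether the odds are small enough to skip checking
- the usual back-of-the-envelope tool is the birthday approximation. A
sketch in Python, assuming 122 random bits per ID (the version-4 GUID
layout):

```python
import math

def collision_probability(n, bits=122):
    """Birthday approximation: probability of at least one duplicate
    among n IDs drawn uniformly at random from a 2**bits space."""
    # expm1 keeps precision when the probability is vanishingly small
    return -math.expm1(-n * (n - 1) / (2.0 * 2 ** bits))

# Even a billion records leaves the odds around 1e-19.
print(collision_probability(10**9))
```

By that estimate the developer was right about the odds; whether you still
add a unique index as a belt-and-braces check is a separate judgment call,
since the database engine can enforce that without any hand-written code.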
>> There is a minor performance reason for using random numbers. The way
indexes are built, if your ID is indexed, the performance of your indexes
will degrade as new records are added with the values skewed towards the
same range of numbers. <<
I'd have to say minor indeed. I have a database with a master table indexed
on clientid. We have approximately 16K master records with IDs running from
1 - 17,500. The service table linked to it has approximately 600K records,
also indexed on clientid, with the same clientid set. Searches are fast.
How big a database is needed before this degradation shows up?
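For what the anecdote is worth, the layout is easy to reproduce at a
smaller scale. The sketch below (hypothetical table and index names, row
counts scaled down from the 16K/600K described) builds the same
master/service structure with purely sequential clientids and confirms
that lookups still resolve through the index:

```python
import random
import sqlite3

# In-memory stand-in for the master/service layout described above.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE master (clientid INTEGER PRIMARY KEY)")
con.execute("CREATE TABLE service (id INTEGER PRIMARY KEY, clientid INTEGER)")
con.execute("CREATE INDEX ix_service_client ON service (clientid)")

# Sequential master IDs, many service rows per client.
con.executemany("INSERT INTO master VALUES (?)",
                [(i,) for i in range(1, 1001)])
con.executemany("INSERT INTO service (clientid) VALUES (?)",
                [(random.randint(1, 1000),) for _ in range(10000)])

# The lookup uses the index, sequential IDs or not.
plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM service WHERE clientid = ?", (42,)
).fetchone()
print(plan[3])  # mentions ix_service_client
```

That only exercises the read path, not the insert-time skew the quoted
poster describes, but it matches the observation that searches on a
sequentially keyed index stay fast at this scale.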