I have several *very* large files on which I need to perform some file
sizing diagnostics.  Rather than repeatedly running HASH.AID against these
files, is there a good way to sample, say, 2-3 million records to copy into
a test file?  SAMPLE will only grab the first n records in hash order, and
I'm thinking that wouldn't necessarily be a representative sample of the
file's contents.  Am I up in the night thinking this is the case?  Is there
a better way to get a good sample of records for this purpose?
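
For context, the sort of thing I've been toying with is a single BASIC pass
that keeps roughly one record in N at random, rather than the first n in
hash order.  This is an untested sketch; BIG.FILE and SAMPLE.FILE are
placeholder names, and RND(10) keeps roughly one record in ten:

    * Untested sketch: copy a random ~10% of records into a test file.
    * BIG.FILE and SAMPLE.FILE are placeholder names.
    OPEN 'BIG.FILE' TO F.IN ELSE STOP 'Cannot open BIG.FILE'
    OPEN 'SAMPLE.FILE' TO F.OUT ELSE STOP 'Cannot open SAMPLE.FILE'
    SELECT F.IN
    LOOP
       READNEXT ID ELSE EXIT
       IF RND(10) = 0 THEN   ;* keep roughly 1 id in 10
          READ REC FROM F.IN, ID THEN
             WRITE REC ON F.OUT, ID
          END
       END
    REPEAT

The obvious objection is that it still walks every key, but it only does so
once to build the test file, and the divisor could be tuned to the source
file's record count to land near the 2-3 million I'm after.  Is there
something smarter than this?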

Thanks.
Perry

Perry Taylor
Senior MV Architect
ZirMed
888 West Market Street, Suite 400
Louisville, KY 40202
www.zirmed.com


