The SAMPLED keyword (with a D on the end) will read through the entire file. SAMPLED 1000 takes every thousandth record as it reads through the file. SAMPLED 1000 SAMPLE 2000 does the same, but stops after it has built a list of 2000 records. That is, after it has read through the first 2 million keys.
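For example, a select along these lines should give a spread-out sample (LARGEFILE is a hypothetical file name here):

```
SELECT LARGEFILE SAMPLED 1000 SAMPLE 2000
```

That takes every thousandth record until 2000 keys have been collected, so it scans roughly the first 2 million records in hash order; the resulting select list can then be used to copy those records into a test file.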
On Tue, Jun 11, 2013 at 11:01 AM, Perry Taylor <[email protected]> wrote:

> I have several *very* large files on which I need to perform some file
> sizing diagnostics. Rather than repeatedly running HASH.AID against these
> files, is there a good way to sample say 2-3 million records to copy into a
> test file? SAMPLE will only grab the first n records in hash order and I'm
> thinking that would not necessarily be a good representative sample of the
> file's contents. Am I up in the night thinking this is the case? Is there
> a better way to get a good sample of records for this purpose?
>
> Thanks.
> Perry
>
> Perry Taylor
> Senior MV Architect
> ZirMed
> 888 West Market Street, Suite 400
> Louisville, KY 40202
> www.zirmed.com

_______________________________________________
U2-Users mailing list
[email protected]
http://listserver.u2ug.org/mailman/listinfo/u2-users
