Hi Rob,
Can you get rid of the duplicates? You could do that with SORT and then
do the match on the files with the duplicates removed. It's not
intuitive, but it might actually be faster than a single pass with a
user-written program, and there is no code to maintain, only utility
statements.
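A minimal sketch of the dedup step, assuming the match key is in
columns 1-10 in character format (the dataset names, key position, and
length are placeholders; adjust them to your layout). SORT with
SUM FIELDS=NONE keeps one record per key and deletes the rest:

//DEDUP   EXEC PGM=SORT
//SYSOUT  DD SYSOUT=*
//SORTIN  DD DISP=SHR,DSN=your.input2
//SORTOUT DD DSN=your.input2.nodups,DISP=(NEW,CATLG),
//           UNIT=SYSDA,SPACE=(CYL,(50,10))
//SYSIN   DD *
  SORT FIELDS=(1,10,CH,A)
  SUM FIELDS=NONE
/*

Run the same step against the other input if it may also contain
duplicates, then feed the deduped files into the match step.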
Best Regards,
Sam Knutson, GEICO
Performance and Availability Management
mailto:[EMAIL PROTECTED]
(office) 301.986.3574
"Think big, act bold, start simple, grow fast..."
-----Original Message-----
From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On
Behalf Of Roberto R.
Sent: Wednesday, April 26, 2006 6:23 PM
To: [email protected]
Subject: Re: DFSORT match problem
If I understand this correctly, I need a different solution depending
on whether I have duplicates in input1 or input2; is that true? My
situation is that input1 consists of roughly 10,000 records and input2
of about 350,000 records, and I have no idea how or where duplicates
exist. I take this to mean there is no universal solution in DFSORT for
matching records based on a key in this manner... Guess I need to write
a program instead :-( But thanks anyway.
Rob
====================
This email/fax message is for the sole use of the intended
recipient(s) and may contain confidential and privileged information.
Any unauthorized review, use, disclosure or distribution of this
email/fax is prohibited. If you are not the intended recipient, please
destroy all paper and electronic copies of the original message.
----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html