One must learn to use "PIPE AHELP stage" to see the author's help;
that is where AUTOADD and the like are documented.

"HELP PIPE stage" and "PIPE HELP stage" both produce Endicott's help 
files, which are less good.
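
For example, to see the author's help for the stage being discussed in this
thread (any other stage name works the same way):

   PIPE AHELP LOOKUP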

Kris,
IBM Belgium, VM customer support




"Nix, Robert P." <[EMAIL PROTECTED]> 
Sent by: The IBM z/VM Operating System <[email protected]>
2006-06-28 14:53
Please respond to
The IBM z/VM Operating System


To
[email protected]
cc

Subject
Re: Finding DUPS in a CMS file





Fanout your input. Using the lookup stage previously given, you can get
a list of the duplicate keys. Feed that list of keys into a second lookup
on input stream 2, as the keys to look for, and feed the second output
stream of the fanout into that lookup as its primary input. Output only
the records whose keys match.

The online help for lookup doesn't show the autoadd or keyonly options,
and says that input stream 2 MUST be connected, which I don't see in any
of the examples that have been shown. However, assuming that these options
exist and are just undocumented, the above pattern should give you a
complete list of the non-unique records, in the order they existed in the
original file.
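
A rough sketch of that pattern in REXX follows. The file names (ONCALL DATA A,
DUP RECORDS A), the stem, and the 20-byte key in columns 1-20 are only
placeholders, and the AUTOADD behavior is taken on trust from this thread, so
check "PIPE AHELP LOOKUP" for the exact operands before relying on it:

   /* Sketch only: file names, stem name and key columns are placeholders. */
   /* AUTOADD is assumed (per this thread) to add the first record with a  */
   /* given key as a master, so later records with that key come out on    */
   /* the "matched" output.                                                 */
   empty.0 = 0                              /* start with no masters        */
   'PIPE (endchar ?)',
      '< oncall data a',                    /* the schedule file            */
      '| f: fanout',                        /* copy every record to two legs*/
      '| buffer',                           /* keep the fanout from stalling*/
      '| pick: lookup 1-20 detail',         /* pass records whose key repeats */
      '| > dup records a',                  /* non-unique records, file order */
      '?',
      'f:',                                 /* second leg of the fanout     */
      '| dup: lookup autoadd 1-20 detail',  /* matched = 2nd, 3rd, ... occurrences */
      '| sort unique 1-20',                 /* one master per repeated key  */
      '| pick:',                            /* feed those to pick as masters*/
      '?',
      'stem empty.',                        /* empty master set, so input stream 2 */
      '| dup:'                              /* of dup is connected as required     */
   say 'dup search ended with rc' rc

The BUFFER stage is there so the fanout can keep feeding the duplicate-finding
leg while the second lookup is still collecting its masters; without it the
pipeline can stall.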

-- 
Robert P.  Nix           Mayo Foundation
RO-OC-1-13               200 First Street SW
507-284-0844             Rochester, MN 55905
-----
"In theory, theory and practice are the same, but
 in practice, theory and  practice are different."
 


From: The IBM z/VM Operating System [mailto:[EMAIL PROTECTED]] On Behalf Of Colin Allinson
Sent: Wednesday, June 28, 2006 7:30 AM
To: [email protected]
Subject: Re: Finding DUPS in a CMS file


That is fairly easy. 

PIPE the records through a UNIQUE stage into a hole. Label the UNIQUE
stage and take the secondary output into a STEM (for instance). Then, if
the stem has any entries you know there are duplicates (and have some
idea what to look for).
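
A minimal sketch of that, again with placeholder names (ONCALL DATA A, a
20-byte key in columns 1-20, stem DUPS.); UNIQUE only compares adjacent
records, so the input is sorted first:

   'PIPE (endchar ?)',
      '< oncall data a',          /* read the schedule                     */
      '| sort 1-20',              /* UNIQUE only spots adjacent duplicates */
      '| u: unique 1-20',         /* one record of each key passes here    */
      '| hole',                   /* ... and is thrown away                */
      '?',
      'u:',                       /* secondary output: the discarded dups  */
      '| stem dups.'              /* capture them in a REXX stem           */
   if dups.0 > 0 then
      say dups.0 'duplicate record(s) found; first is:' dups.1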

Regards 

Colin Allinson 

"Wakser,  David" <[EMAIL PROTECTED]> Wrote: 

All: 

        I appreciate the answers (and was relatively surprised at how
easily this could be accomplished). HOWEVER, I need the option of merely
issuing an error message instead of "blindly" deleting one of the records,
because the entire records are not duplicates, only the first 20 bytes are.
And someone would need to look at the input data to see why the data is
duplicated.

        For those who are curious and want more details, I am putting
together a method for our Help Desk to know who is on call at any given
time. Considering the varied schedules that my co-workers and I have,
this gets sort of complicated, and the Help Desk always seems to get it
wrong!

        In general, one of us is on call for an entire week, 24 hours a
day. However, since I get up at 4:00 AM, I would prefer that my co-workers
not be called at that time, since I am up anyway!

        The identification of duplicates is to ensure that two people
have not been assigned duty in the same time period! These would be
"exceptions" (or overrides) to the default on-call person.

        For example, let's assume I am "on call" July 1 through July 7.
However, on July 3rd I have an appointment from 6:00 PM to 7:00 PM. I
would set the "default" on-call record for the entire week with my name,
and then have specific date/time records for the times that OTHER people
would cover for me, which override the "default". I need to ensure that,
except for the "default" record, no two people are assigned coverage for
the same time period.

        Records consist of a date, a time (all times assume a
thirty-minute interval), and the on-call person's name. Once I have
sorted them by date and time, there should NOT be any duplicates. If
there are, how can I identify them? I MIGHT possibly want to selectively
delete one, but at this point I need to IDENTIFY the fact that there
are dups and issue a message.

        I hope this explains things so that your solutions are more
helpful!

David Wakser 
InfoCrossing 
