Re: dataset allocation

2020-10-07 Thread Wayne Bickerdike
Joe,

As previously mentioned, set up an ISPF JCL skeleton and build your JCL
using File Tailoring.
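
For what it's worth, a rough REXX sketch of that approach (untested; the
skeleton name DSNSKEL, the list dataset 'YOUR.DSN.LIST' and the &DSN skeleton
variable are placeholders I made up, and the skeleton would have to be in
ISPSLIB):

  /* REXX - sketch: tailor one copy of skeleton DSNSKEL (which refers  */
  /* to &DSN) per dataset named in 'YOUR.DSN.LIST', then submit it.    */
  address tso
  "ALLOC FI(DSNLIST) DA('YOUR.DSN.LIST') SHR REUSE"
  "EXECIO * DISKR DSNLIST (STEM list. FINIS"
  address ispexec
  "FTOPEN TEMP"                     /* tailoring output -> temp file   */
  do i = 1 to list.0
     dsn = strip(list.i)            /* the skeleton sees this as &DSN  */
     "FTINCL DSNSKEL"               /* one tailored copy per dataset   */
  end
  "FTCLOSE"
  "VGET ZTEMPF"                     /* dsname of the tailored output   */
  address tso "SUBMIT '"ztempf"'"   /* or review it first              */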

Sometimes you have to slow down and break down the problem too.

DFSORT or Syncsort are very fast, but first off, test against one dataset
end to end and get some indicative timings. That will give you a baseline;
then just add up all the elapsed times and CPU.

You perhaps have an IEFUSI exit that is limiting your job or CPU time.
TIME=1440 is often disabled, so talk to a systems programmer. You should
have a job class for long-running/high-CPU jobs.



On Thu, Oct 8, 2020 at 11:52 AM Jeremy Nicoll wrote:

> On Thu, 8 Oct 2020, at 01:10, Seymour J Metz wrote:
> > No, I'm saying that I know what the CHANGE command does. Did the OP say
> > that the relevant lines are contiguous?
>
> No, he said nothing at all except that
>
>   "On a different note. I just compared EDIT macro performance
>versus IPOUPDTE. IPOUPDTE was about 600 times faster."
>
> Of course I'm not surprised that a specific utility is faster, but how much
> faster depends on lots of things that weren't stated - which I thought
> made the comparison more or less worthless.
>
> It reminds me of the discovery I made back in the 1980s, on a VM/CMS
> system, that one could copy a file more quickly using Xedit to load the
> file then write it elsewhere (with a macro governing that), than by using
> the CMS file copy command.
>
> As far as I remember, IBM admitted that Xedit I/O had been optimised
> to make it as fast as possible, to help sell VM, CMS and Xedit as a
> development tool.
>
> --
> Jeremy Nicoll - my opinions are my own.
>
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
>


-- 
Wayne V. Bickerdike

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Ipoupdte

2020-10-07 Thread Steve Beaver
At the shop where I worked, we barred IPOUPDTE because it broke the Endevor locks
in production PDSs.

Sent from my iPhone

I promise you I can’t type or
Spell on any smartphone 

> On Oct 7, 2020, at 19:51, Jeremy Nicoll  wrote:
> 
> On Thu, 8 Oct 2020, at 01:10, Seymour J Metz wrote:
>> No, I'm saying that I know what the CHANGE command does. Did the OP say 
>> that the relevant lines are contiguous?
> 
> No, he said nothing at all except that 
> 
>  "On a different note. I just compared EDIT macro performance 
>   versus IPOUPDTE. IPOUPDTE was about 600 times faster."
> 
> Of course I'm not surprised that a specific utility is faster, but how much
> faster depends on lots of things that weren't stated - which I thought 
> made the comparison more or less worthless.
> 
> It reminds me of the discovery I made back in the 1980s, on a VM/CMS
> system, that one could copy a file more quickly using Xedit to load the
> file then write it elsewhere (with a macro governing that), than by using 
> the CMS file copy command.
> 
> As far as I remember, IBM admitted that Xedit I/O had been optimised
> to make it as fast as possible, to help sell VM, CMS and Xedit as a 
> development tool. 
> 
> -- 
> Jeremy Nicoll - my opinions are my own.
> 
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: dataset allocation

2020-10-07 Thread Jeremy Nicoll
On Thu, 8 Oct 2020, at 01:10, Seymour J Metz wrote:
> No, I'm saying that I know what the CHANGE command does. Did the OP say 
> that the relevant lines are contiguous?

No, he said nothing at all except that 

  "On a different note. I just compared EDIT macro performance 
   versus IPOUPDTE. IPOUPDTE was about 600 times faster."

Of course I'm not surprised that a specific utility is faster, but how much
faster depends on lots of things that weren't stated - which I thought 
made the comparison more or less worthless.

It reminds me of the discovery I made back in the 1980s, on a VM/CMS
system, that one could copy a file more quickly using Xedit to load the
file then write it elsewhere (with a macro governing that), than by using 
the CMS file copy command.

As far as I remember, IBM admitted that Xedit I/O had been optimised
to make it as fast as possible, to help sell VM, CMS and Xedit as a 
development tool. 

-- 
Jeremy Nicoll - my opinions are my own.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: dataset allocation

2020-10-07 Thread Seymour J Metz
No, I'm saying that I know what the CHANGE command does. Did the OP say that 
the relevant lines are contiguous?


--
Shmuel (Seymour J.) Metz
http://mason.gmu.edu/~smetz3



From: IBM Mainframe Discussion List  on behalf of 
Jeremy Nicoll 
Sent: Wednesday, October 7, 2020 7:34 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: dataset allocation

On Wed, 7 Oct 2020, at 19:37, Seymour J Metz wrote:
> Using global change command would work in, e.g., SuperWylbur, but the
> change command in ISPF doesn't have the requisite functionality.

Are you saying you know what the macro (that Wayne referred to) does?

It's been a long time since I wrote any edit macros, but a quick peek at one
of those that I have a copy of shows that (to make a specific change to all
of a subset of lines) I typically used

- a find command to find the first line of the subset (or something just
 before it), which I then labelled

- a find command to find the final line of the subset (or something just
  after it), which I then labelled

- a change command along the lines of

   change " one thing" "to another thing" .first  .final all


The point with this is that the searches for the first/final lines are done
by the editor, not by the macro's own logic, and the change command
likewise.


--
Jeremy Nicoll - my opinions are my own.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: dataset allocation

2020-10-07 Thread Jeremy Nicoll
On Thu, 8 Oct 2020, at 00:12, Joseph Reichman wrote:
> I would like to issue IGGCSI00 and see how many datasets are involved;
> doing it in multiple steps I would have to code 4,400 DD statements,
> which would take forever.

You can't... surely you can't mean that you'd hand-write that many DD
statements?

All you'd have to do is read the dataset list a line at a time and use
a CLIST or REXX exec or whatever to generate JCL DD statements for
each one.
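
For illustration only, a rough REXX sketch of that (DSNLIST and JCLOUT here
are pre-allocated DDs I'm assuming for the name list and the generated
statements; nothing below comes from the OP's job):

  /* REXX - rough sketch.  DD DSNLIST holds one dataset name per line; */
  /* DD JCLOUT receives the generated DD statements.                   */
  "EXECIO * DISKR DSNLIST (STEM dsn. FINIS"
  do i = 1 to dsn.0
     queue "//IN"right(i, 5, '0')" DD DISP=SHR,DSN="strip(dsn.i)
  end
  "EXECIO" queued() "DISKW JCLOUT (FINIS"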

-- 
Jeremy Nicoll - my opinions are my own.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: dataset allocation

2020-10-07 Thread Jeremy Nicoll
On Wed, 7 Oct 2020, at 22:04, Paul Gilmartin wrote:
> On Wed, 7 Oct 2020 11:36:12 -0400, Joseph Reichman wrote:
> >
> >There is a maximum of 5 min CPU time for job step 
> 
> On Wed, 7 Oct 2020 18:15:56 +0100, Jeremy Nicoll wrote:
> >On Wed, 7 Oct 2020, at 18:06, Joseph Reichman wrote:
> >> I work for the IRS  ... 
> >> 
> >And you've said that multiple times.  No-one cares who you work
> >for, ...
> >
> Yes, however the two statements above taken together, and
> without other context, are astonishing.

Yes, absolutely.  It beggars belief that anyone would be expected 
to process vast amounts of data with jobs (or steps) not able to 
access adequate resources.

And also, not knowing how (i.e. whom to speak to) to solve that.

-- 
Jeremy Nicoll - my opinions are my own.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: dataset allocation

2020-10-07 Thread Jeremy Nicoll
On Wed, 7 Oct 2020, at 19:37, Seymour J Metz wrote:
> Using global change command would work in, e.g., SuperWylbur, but the 
> change command in ISPF doesn't have the requisite functionality.

Are you saying you know what the macro (that Wayne referred to) does?

It's been a long time since I wrote any edit macros, but a quick peek at one
of those that I have a copy of shows that (to make a specific change to all
of a subset of lines) I typically used

- a find command to find the first line of the subset (or something just 
 before it), which I then labelled

- a find command to find the final line of the subset (or something just 
  after it), which I then labelled

- a change command along the lines of  

   change " one thing" "to another thing" .first  .final all


The point with this is that the searches for the first/final lines are done
by the editor, not by the macro's own logic, and the change command 
likewise. 
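
A bare-bones REXX edit macro along those lines might look like this (a
from-memory sketch, not a tested macro; the search strings are placeholders):

  /* REXX edit macro - sketch of the label-then-change approach.       */
  address isredit
  "MACRO"
  "FIND 'BEGIN-MARKER' FIRST"       /* placeholder search argument     */
  "(row,col) = CURSOR"              /* FIND leaves the cursor there    */
  "LABEL" row "= .FIRST 0"
  "FIND 'END-MARKER' LAST"          /* placeholder search argument     */
  "(row,col) = CURSOR"
  "LABEL" row "= .FINAL 0"
  "CHANGE 'one thing' 'another thing' .FIRST .FINAL ALL"

All the searching and changing is done by editor commands; the macro itself
only marks the range.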


-- 
Jeremy Nicoll - my opinions are my own.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: dataset allocation

2020-10-07 Thread Sri h Kolusu
>
> I would like to issue IGGCSI00 and see how many datasets are involved;
> doing it in multiple steps I would have to code 4,400 DD statements,
> which would take forever.

Route the output of IGGCSI00 to a sequential dataset, and DFSORT can
generate dynamic JCL by parsing the contents. But first you need to
explain the requirement.

Once you explain the requirement clearly, you can send me a sample of the
output from IGGCSI00 (mask the data if you need to) and I can show you
how to create the dynamic jobs.


Thanks,
Kolusu
DFSORT Development
IBM Corporation


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: dataset allocation

2020-10-07 Thread Sri h Kolusu
> If DFSORT will do the trick I’m all for it
> I have been looking at the manual

To be honest, I for one have absolutely no idea what the real
requirement is. We already have 44 posts on this, but very little
information on the real requirement.
You have been telling us that you work for the IRS and have about 4,000+
datasets that you need to search, but you have never explained what you
are searching for or how the search needs to be done.

You really need to explain the requirement to us rather than wasting your
time as well as other members' time.


Thanks,
Kolusu
DFSORT Development
IBM Corporation


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: dataset allocation

2020-10-07 Thread Joseph Reichman
I would like to issue IGGCSI00 and see how many datasets are involved; doing it
in multiple steps I would have to code 4,400 DD statements, which would take
forever.



> On Oct 7, 2020, at 7:08 PM, Joseph Reichman  wrote:
> 
> If DFSORT will do the trick I’m all for it 
> I have been looking at the manual 
> 
> I would assume it’s in the section running DFSORT from a program 
> 
> 
> 
>> On Oct 7, 2020, at 7:00 PM, Mike Hochee  wrote:
>> 
>> Hi Joseph, 
>> 
>> I like your idea, especially if this is a one-off, you already have it 
>> written, and the system it's running on is not totally i/o or cpu 
>> constrained. If it becomes something that needs to run regularly, maybe 
>> that's a different story and you rewrite using DFSORT or whatever. 
>> 
>> HTH, 
>> Mike
>> 
>> -Original Message-
>> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On 
>> Behalf Of Joseph Reichman
>> Sent: Wednesday, October 7, 2020 6:20 PM
>> To: IBM-MAIN@LISTSERV.UA.EDU
>> Subject: Re: dataset allocation
>> 
>> 
>> S322
>> 
>> I IMHO breaking up the job submitting to INTRDR may help
>> 
>> What do you think ?
>> 
>> 
>> 
 On Oct 7, 2020, at 6:10 PM, Seymour J Metz  wrote:
>>> 
>>> Do you have TIME=1440 on both JOB and EXEC? What's the ABEND code?
>>> 
>>> I'm in Annandale, just inside the Beltway.
>>> 
>>> 
>>> --
>>> Shmuel (Seymour J.) Metz
>>> http://mason.gmu.edu/~smetz3
>>> 
>>> 
>>> 
>>> From: IBM Mainframe Discussion List  on 
>>> behalf of Joseph Reichman 
>>> Sent: Wednesday, October 7, 2020 4:01 PM
>>> To: IBM-MAIN@LISTSERV.UA.EDU
>>> Subject: Re: dataset allocation
>>> 
>>> 1440 it’s bombing on time
>>> 
>>> Seymour you live in Virginia never worked for the IRS you cannt be 
>>> that far from NCFB the code here is all Assembler
>>> 
>>> Large many VB files
>>> 
>>> 
>>> 
> On Oct 7, 2020, at 2:52 PM, Seymour J Metz  wrote:
 
 The limit is the same for static and dynamic allocation.
 
 The limit is higher for extended TIOT.
 
 What TIME did you specify on JOB and EXEC?
 
 What DYNAMNBR did you specify on EXEC?
 
 
 --
 Shmuel (Seymour J.) Metz
 http://mason.gmu.edu/~smetz3
 
 
 
 From: IBM Mainframe Discussion List  on 
 behalf of Joseph Reichman 
 Sent: Wednesday, October 7, 2020 1:28 PM
 To: IBM-MAIN@LISTSERV.UA.EDU
 Subject: Re: dataset allocation
 
 There are two main issues here
 
 1) I can not allocate this many datasets to
 A job step that’s includes using SVC 99
 
 2) The job step times out because I have reached a 5 minute CPU time 
 limit on the job step
 
 Sri from my understanding said DFSORT can overcome these two problems
 
 I’m looking at the DFSORT manual
 
 Thank You
 
>>> On Oct 7, 2020, at 1:16 PM, Jeremy Nicoll 
>>>  wrote:
>>> 
>>> On Wed, 7 Oct 2020, at 18:06, Joseph Reichman wrote:
>>> I work for the IRS I have to search thru year 2020 data that’s 
>>> 4,467 files about 240,000 records per file and a record length 
>>> could be
>>> 10,000 bytes
>>> VB files
> 
> And you've said that multiple times.  No-one cares who you work for, 
> but we do care about the technical issues you're facing.
> 
> 
> Every single time you ask for help, no matter on what topic, it's 
> nearly impossible for anyone to find out what exactly you're trying to do.
> 
> Why don't you just answer the questions?
> 
> Are the records in the file in any particular order?
> 
> Are you looking for particular values in fixed locations in the records?
> 
> Are you looking for records where there's definable relationships 
> between values in specific records?
> 
> Is there any way that - say - you can do a first scan to make 
> subsets of records before you then examine those in much more detail?
> 
> --
> Jeremy Nicoll - my opinions are my own.
> 
> 
> -- For IBM-MAIN subscribe / signoff / archive access instructions, 
> send email to lists...@listserv.ua.edu with the message: INFO 
> IBM-MAIN
 
 --
 For IBM-MAIN subscribe / signoff / archive access instructions,
 send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


 --
 For IBM-MAIN subscribe / signoff / archive access instructions,
 send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
>>> 
>>> --
>>> For IBM-MAIN subscribe / signoff / archive access 

Re: dataset allocation

2020-10-07 Thread Joseph Reichman
If DFSORT will do the trick, I'm all for it.
I have been looking at the manual.

I would assume it's in the section on running DFSORT from a program.



> On Oct 7, 2020, at 7:00 PM, Mike Hochee  wrote:
> 
> Hi Joseph, 
> 
> I like your idea, especially if this is a one-off, you already have it 
> written, and the system it's running on is not totally i/o or cpu 
> constrained. If it becomes something that needs to run regularly, maybe 
> that's a different story and you rewrite using DFSORT or whatever. 
> 
> HTH, 
> Mike
> 
> -Original Message-
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On 
> Behalf Of Joseph Reichman
> Sent: Wednesday, October 7, 2020 6:20 PM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: Re: dataset allocation
> 
> 
> S322
> 
> I IMHO breaking up the job submitting to INTRDR may help
> 
> What do you think ?
> 
> 
> 
>> On Oct 7, 2020, at 6:10 PM, Seymour J Metz  wrote:
>> 
>> Do you have TIME=1440 on both JOB and EXEC? What's the ABEND code?
>> 
>> I'm in Annandale, just inside the Beltway.
>> 
>> 
>> --
>> Shmuel (Seymour J.) Metz
>> http://mason.gmu.edu/~smetz3
>> 
>> 
>> 
>> From: IBM Mainframe Discussion List  on 
>> behalf of Joseph Reichman 
>> Sent: Wednesday, October 7, 2020 4:01 PM
>> To: IBM-MAIN@LISTSERV.UA.EDU
>> Subject: Re: dataset allocation
>> 
>> 1440 it’s bombing on time
>> 
>> Seymour you live in Virginia never worked for the IRS you cannt be 
>> that far from NCFB the code here is all Assembler
>> 
>> Large many VB files
>> 
>> 
>> 
 On Oct 7, 2020, at 2:52 PM, Seymour J Metz  wrote:
>>> 
>>> The limit is the same for static and dynamic allocation.
>>> 
>>> The limit is higher for extended TIOT.
>>> 
>>> What TIME did you specify on JOB and EXEC?
>>> 
>>> What DYNAMNBR did you specify on EXEC?
>>> 
>>> 
>>> --
>>> Shmuel (Seymour J.) Metz
>>> http://mason.gmu.edu/~smetz3
>>> 
>>> 
>>> 
>>> From: IBM Mainframe Discussion List  on 
>>> behalf of Joseph Reichman 
>>> Sent: Wednesday, October 7, 2020 1:28 PM
>>> To: IBM-MAIN@LISTSERV.UA.EDU
>>> Subject: Re: dataset allocation
>>> 
>>> There are two main issues here
>>> 
>>> 1) I can not allocate this many datasets to
>>>  A job step that’s includes using SVC 99
>>> 
>>> 2) The job step times out because I have reached a 5 minute CPU time 
>>> limit on the job step
>>> 
>>> Sri from my understanding said DFSORT can overcome these two problems
>>> 
>>> I’m looking at the DFSORT manual
>>> 
>>> Thank You
>>> 
>> On Oct 7, 2020, at 1:16 PM, Jeremy Nicoll 
>>  wrote:
>> 
>> On Wed, 7 Oct 2020, at 18:06, Joseph Reichman wrote:
>> I work for the IRS I have to search thru year 2020 data that’s 
>> 4,467 files about 240,000 records per file and a record length 
>> could be
>> 10,000 bytes
>> VB files
 
 And you've said that multiple times.  No-one cares who you work for, 
 but we do care about the technical issues you're facing.
 
 
 Every single time you ask for help, no matter on what topic, it's 
 nearly impossible for anyone to find out what exactly you're trying to do.
 
 Why don't you just answer the questions?
 
 Are the records in the file in any particular order?
 
 Are you looking for particular values in fixed locations in the records?
 
 Are you looking for records where there's definable relationships 
 between values in specific records?
 
 Is there any way that - say - you can do a first scan to make 
 subsets of records before you then examine those in much more detail?
 
 --
 Jeremy Nicoll - my opinions are my own.
 
 
 -- For IBM-MAIN subscribe / signoff / archive access instructions, 
 send email to lists...@listserv.ua.edu with the message: INFO 
 IBM-MAIN
>>> 
>>> --
>>> For IBM-MAIN subscribe / signoff / archive access instructions,
>>> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
>>> 
>>> 
>>> --
>>> For IBM-MAIN subscribe / signoff / archive access instructions,
>>> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
>> 
>> --
>> For IBM-MAIN subscribe / signoff / archive access instructions, send 
>> email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
>> 
>> 
>> --
>> For IBM-MAIN subscribe / signoff / archive access instructions, send 
>> email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
> 
> 

Re: dataset allocation

2020-10-07 Thread Seymour J Metz
If putting TIME on both JOB and EXEC doesn't help, then just break the job up 
into multiple steps or multiple jobs; there's no need to mess with INTRDR.


--
Shmuel (Seymour J.) Metz
http://mason.gmu.edu/~smetz3



From: IBM Mainframe Discussion List  on behalf of 
Joseph Reichman 
Sent: Wednesday, October 7, 2020 6:19 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: dataset allocation

S322

I IMHO breaking up the job submitting to INTRDR may help

What do you think ?



> On Oct 7, 2020, at 6:10 PM, Seymour J Metz  wrote:
>
> Do you have TIME=1440 on both JOB and EXEC? What's the ABEND code?
>
> I'm in Annandale, just inside the Beltway.
>
>
> --
> Shmuel (Seymour J.) Metz
> http://mason.gmu.edu/~smetz3
>
>
> 
> From: IBM Mainframe Discussion List  on behalf of 
> Joseph Reichman 
> Sent: Wednesday, October 7, 2020 4:01 PM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: Re: dataset allocation
>
> 1440 it’s bombing on time
>
> Seymour you live in Virginia never worked for the IRS you cannt be that far 
> from NCFB the code here is all Assembler
>
> Large many VB files
>
>
>
>> On Oct 7, 2020, at 2:52 PM, Seymour J Metz  wrote:
>>
>> The limit is the same for static and dynamic allocation.
>>
>> The limit is higher for extended TIOT.
>>
>> What TIME did you specify on JOB and EXEC?
>>
>> What DYNAMNBR did you specify on EXEC?
>>
>>
>> --
>> Shmuel (Seymour J.) Metz
>> http://mason.gmu.edu/~smetz3
>>
>>
>> 
>> From: IBM Mainframe Discussion List  on behalf of 
>> Joseph Reichman 
>> Sent: Wednesday, October 7, 2020 1:28 PM
>> To: IBM-MAIN@LISTSERV.UA.EDU
>> Subject: Re: dataset allocation
>>
>> There are two main issues here
>>
>> 1) I can not allocate this many datasets to
>>   A job step that’s includes using SVC 99
>>
>> 2) The job step times out because I have reached a 5 minute CPU time limit 
>> on the job step
>>
>> Sri from my understanding said DFSORT can overcome these two problems
>>
>> I’m looking at the DFSORT manual
>>
>> Thank You
>>
> On Oct 7, 2020, at 1:16 PM, Jeremy Nicoll  
> wrote:
>
> On Wed, 7 Oct 2020, at 18:06, Joseph Reichman wrote:
> I work for the IRS I have to search thru year 2020 data that’s 4,467
> files about 240,000 records per file and a record length could be
> 10,000 bytes
> VB files
>>>
>>> And you've said that multiple times.  No-one cares who you work
>>> for, but we do care about the technical issues you're facing.
>>>
>>>
>>> Every single time you ask for help, no matter on what topic, it's nearly
>>> impossible for anyone to find out what exactly you're trying to do.
>>>
>>> Why don't you just answer the questions?
>>>
>>> Are the records in the file in any particular order?
>>>
>>> Are you looking for particular values in fixed locations in the records?
>>>
>>> Are you looking for records where there's definable relationships between
>>> values in specific records?
>>>
>>> Is there any way that - say - you can do a first scan to make subsets of
>>> records before you then examine those in much more detail?
>>>
>>> --
>>> Jeremy Nicoll - my opinions are my own.
>>>
>>> --
>>> For IBM-MAIN subscribe / signoff / archive access instructions,
>>> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
>>
>> --
>> For IBM-MAIN subscribe / signoff / archive access instructions,
>> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
>>
>>
>> --
>> For IBM-MAIN subscribe / signoff / archive access instructions,
>> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
>
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
>
>
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: dataset allocation

2020-10-07 Thread Mike Hochee
Hi Joseph, 

I like your idea, especially if this is a one-off, you already have it written,
and the system it's running on is not totally I/O- or CPU-constrained. If it
becomes something that needs to run regularly, maybe that's a different story
and you rewrite it using DFSORT or whatever.

HTH, 
Mike

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Joseph Reichman
Sent: Wednesday, October 7, 2020 6:20 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: dataset allocation


S322

I IMHO breaking up the job submitting to INTRDR may help

What do you think ?



> On Oct 7, 2020, at 6:10 PM, Seymour J Metz  wrote:
>
> Do you have TIME=1440 on both JOB and EXEC? What's the ABEND code?
>
> I'm in Annandale, just inside the Beltway.
>
>
> --
> Shmuel (Seymour J.) Metz
> http://mason.gmu.edu/~smetz3
>
>
> 
> From: IBM Mainframe Discussion List  on 
> behalf of Joseph Reichman 
> Sent: Wednesday, October 7, 2020 4:01 PM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: Re: dataset allocation
>
> 1440 it’s bombing on time
>
> Seymour you live in Virginia never worked for the IRS you cannt be 
> that far from NCFB the code here is all Assembler
>
> Large many VB files
>
>
>
>> On Oct 7, 2020, at 2:52 PM, Seymour J Metz  wrote:
>>
>> The limit is the same for static and dynamic allocation.
>>
>> The limit is higher for extended TIOT.
>>
>> What TIME did you specify on JOB and EXEC?
>>
>> What DYNAMNBR did you specify on EXEC?
>>
>>
>> --
>> Shmuel (Seymour J.) Metz
>> http://mason.gmu.edu/~smetz3
>>
>>
>> 
>> From: IBM Mainframe Discussion List  on 
>> behalf of Joseph Reichman 
>> Sent: Wednesday, October 7, 2020 1:28 PM
>> To: IBM-MAIN@LISTSERV.UA.EDU
>> Subject: Re: dataset allocation
>>
>> There are two main issues here
>>
>> 1) I can not allocate this many datasets to
>>   A job step that’s includes using SVC 99
>>
>> 2) The job step times out because I have reached a 5 minute CPU time 
>> limit on the job step
>>
>> Sri from my understanding said DFSORT can overcome these two problems
>>
>> I’m looking at the DFSORT manual
>>
>> Thank You
>>
> On Oct 7, 2020, at 1:16 PM, Jeremy Nicoll  
> wrote:
>
> On Wed, 7 Oct 2020, at 18:06, Joseph Reichman wrote:
> I work for the IRS I have to search thru year 2020 data that’s 
> 4,467 files about 240,000 records per file and a record length 
> could be
> 10,000 bytes
> VB files
>>>
>>> And you've said that multiple times.  No-one cares who you work for, 
>>> but we do care about the technical issues you're facing.
>>>
>>>
>>> Every single time you ask for help, no matter on what topic, it's 
>>> nearly impossible for anyone to find out what exactly you're trying to do.
>>>
>>> Why don't you just answer the questions?
>>>
>>> Are the records in the file in any particular order?
>>>
>>> Are you looking for particular values in fixed locations in the records?
>>>
>>> Are you looking for records where there's definable relationships 
>>> between values in specific records?
>>>
>>> Is there any way that - say - you can do a first scan to make 
>>> subsets of records before you then examine those in much more detail?
>>>
>>> --
>>> Jeremy Nicoll - my opinions are my own.
>>>
>>> 
>>> -- For IBM-MAIN subscribe / signoff / archive access instructions, 
>>> send email to lists...@listserv.ua.edu with the message: INFO 
>>> IBM-MAIN
>>
>> --
>> For IBM-MAIN subscribe / signoff / archive access instructions,
>> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
>>
>>
>> --
>> For IBM-MAIN subscribe / signoff / archive access instructions,
>> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
>
> --
> For IBM-MAIN subscribe / signoff / archive access instructions, send 
> email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
>
>
> --
> For IBM-MAIN subscribe / signoff / archive access instructions, send 
> email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN

--
For IBM-MAIN subscribe / signoff / archive access instructions, send email to 
lists...@listserv.ua.edu with the message: INFO IBM-MAIN

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: dataset allocation

2020-10-07 Thread Joseph Reichman
I meant submitting many jobs under the covers; the input datasets can remain VB.

> On Oct 7, 2020, at 6:33 PM, Clark Morris  wrote:
> 
> [Default] On 7 Oct 2020 10:03:05 -0700, in bit.listserv.ibm-main
> skol...@us.ibm.com (Sri h Kolusu) wrote:
> 
>>> Yes at this point but since the file is variable
>>> I may need an exit to get the right spot at times to do a compare
>> 
>> Joseph,
>> 
>> You still haven't explained us as to what the real requirement is.  DFSORT
>> can handle VB file with ease. Substring search will make sure you can
>> search anywhere within the record.
>> 
>>> If these files are normally accessed by either COBOL or PL1, using a
>>> COBOL or Pl1 program in batch to do what you need to do will be faster
>>> to code.  Both languages have reference modification so variably
>>> located fields can be easily dealt
>> 
>> Clark,
>> 
>> OP has different LRECL files and COBOL will not be able handle it, unless
>> you code all the files have a common LRECL
>> 
> Since they are VB at worst you would have to code overrides to handle
> minimum LRECL for the program to handle them.
> 
> Clark Morris
>> 
>> Thanks,
>> Kolusu
>> 
>> --
>> For IBM-MAIN subscribe / signoff / archive access instructions,
>> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
> 
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: dataset allocation

2020-10-07 Thread Clark Morris
[Default] On 7 Oct 2020 10:03:05 -0700, in bit.listserv.ibm-main
skol...@us.ibm.com (Sri h Kolusu) wrote:

>> Yes at this point but since the file is variable
>> I may need an exit to get the right spot at times to do a compare
>
>Joseph,
>
>You still haven't explained to us what the real requirement is. DFSORT
>can handle VB files with ease, and a substring search will let you
>search anywhere within the record.
>
>>  If these files are normally accessed by either COBOL or PL1, using a
>> COBOL or Pl1 program in batch to do what you need to do will be faster
>> to code.  Both languages have reference modification so variably
>> located fields can be easily dealt
>
>Clark,
>
>The OP has files with different LRECLs, and COBOL will not be able to handle
>them unless you code as if all the files have a common LRECL.
>
Since they are VB, at worst you would have to code overrides for the
minimum LRECL for the program to handle them.

Clark Morris
>
>Thanks,
>Kolusu
>
>--
>For IBM-MAIN subscribe / signoff / archive access instructions,
>send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: dataset allocation

2020-10-07 Thread Joseph Reichman
S322.

IMHO, breaking up the job and submitting the pieces to INTRDR may help.

What do you think?



> On Oct 7, 2020, at 6:10 PM, Seymour J Metz  wrote:
> 
> Do you have TIME=1440 on both JOB and EXEC? What's the ABEND code?
> 
> I'm in Annandale, just inside the Beltway.
> 
> 
> --
> Shmuel (Seymour J.) Metz
> http://mason.gmu.edu/~smetz3
> 
> 
> 
> From: IBM Mainframe Discussion List  on behalf of 
> Joseph Reichman 
> Sent: Wednesday, October 7, 2020 4:01 PM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: Re: dataset allocation
> 
> 1440 it’s bombing on time
> 
> Seymour you live in Virginia never worked for the IRS you cannt be that far 
> from NCFB the code here is all Assembler
> 
> Large many VB files
> 
> 
> 
>> On Oct 7, 2020, at 2:52 PM, Seymour J Metz  wrote:
>> 
>> The limit is the same for static and dynamic allocation.
>> 
>> The limit is higher for extended TIOT.
>> 
>> What TIME did you specify on JOB and EXEC?
>> 
>> What DYNAMNBR did you specify on EXEC?
>> 
>> 
>> --
>> Shmuel (Seymour J.) Metz
>> http://mason.gmu.edu/~smetz3
>> 
>> 
>> 
>> From: IBM Mainframe Discussion List  on behalf of 
>> Joseph Reichman 
>> Sent: Wednesday, October 7, 2020 1:28 PM
>> To: IBM-MAIN@LISTSERV.UA.EDU
>> Subject: Re: dataset allocation
>> 
>> There are two main issues here
>> 
>> 1) I can not allocate this many datasets to
>>   A job step that’s includes using SVC 99
>> 
>> 2) The job step times out because I have reached a 5 minute CPU time limit 
>> on the job step
>> 
>> Sri from my understanding said DFSORT can overcome these two problems
>> 
>> I’m looking at the DFSORT manual
>> 
>> Thank You
>> 
> On Oct 7, 2020, at 1:16 PM, Jeremy Nicoll  
> wrote:
> 
> On Wed, 7 Oct 2020, at 18:06, Joseph Reichman wrote:
> I work for the IRS I have to search thru year 2020 data that’s 4,467
> files about 240,000 records per file and a record length could be
> 10,000 bytes
> VB files
>>> 
>>> And you've said that multiple times.  No-one cares who you work
>>> for, but we do care about the technical issues you're facing.
>>> 
>>> 
>>> Every single time you ask for help, no matter on what topic, it's nearly
>>> impossible for anyone to find out what exactly you're trying to do.
>>> 
>>> Why don't you just answer the questions?
>>> 
>>> Are the records in the file in any particular order?
>>> 
>>> Are you looking for particular values in fixed locations in the records?
>>> 
>>> Are you looking for records where there's definable relationships between
>>> values in specific records?
>>> 
>>> Is there any way that - say - you can do a first scan to make subsets of
>>> records before you then examine those in much more detail?
>>> 
>>> --
>>> Jeremy Nicoll - my opinions are my own.
>>> 
>>> --
>>> For IBM-MAIN subscribe / signoff / archive access instructions,
>>> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
>> 
>> --
>> For IBM-MAIN subscribe / signoff / archive access instructions,
>> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
>> 
>> 
>> --
>> For IBM-MAIN subscribe / signoff / archive access instructions,
>> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
> 
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
> 
> 
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: dataset allocation

2020-10-07 Thread Seymour J Metz
Do you have TIME=1440 on both JOB and EXEC? What's the ABEND code?

I'm in Annandale, just inside the Beltway.


--
Shmuel (Seymour J.) Metz
http://mason.gmu.edu/~smetz3



From: IBM Mainframe Discussion List  on behalf of 
Joseph Reichman 
Sent: Wednesday, October 7, 2020 4:01 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: dataset allocation

1440 it’s bombing on time

Seymour you live in Virginia never worked for the IRS you cannt be that far 
from NCFB the code here is all Assembler

Large many VB files



> On Oct 7, 2020, at 2:52 PM, Seymour J Metz  wrote:
>
> The limit is the same for static and dynamic allocation.
>
> The limit is higher for extended TIOT.
>
> What TIME did you specify on JOB and EXEC?
>
> What DYNAMNBR did you specify on EXEC?
>
>
> --
> Shmuel (Seymour J.) Metz
> http://mason.gmu.edu/~smetz3
>
>
> 
> From: IBM Mainframe Discussion List  on behalf of 
> Joseph Reichman 
> Sent: Wednesday, October 7, 2020 1:28 PM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: Re: dataset allocation
>
> There are two main issues here
>
> 1) I can not allocate this many datasets to
>A job step that’s includes using SVC 99
>
> 2) The job step times out because I have reached a 5 minute CPU time limit on 
> the job step
>
> Sri from my understanding said DFSORT can overcome these two problems
>
> I’m looking at the DFSORT manual
>
> Thank You
>
>>> On Oct 7, 2020, at 1:16 PM, Jeremy Nicoll  
>>> wrote:
>>>
>>> On Wed, 7 Oct 2020, at 18:06, Joseph Reichman wrote:
>>> I work for the IRS I have to search thru year 2020 data that’s 4,467
>>> files about 240,000 records per file and a record length could be
>>> 10,000 bytes
>>> VB files
>>
>> And you've said that multiple times.  No-one cares who you work
>> for, but we do care about the technical issues you're facing.
>>
>>
>> Every single time you ask for help, no matter on what topic, it's nearly
>> impossible for anyone to find out what exactly you're trying to do.
>>
>> Why don't you just answer the questions?
>>
>> Are the records in the file in any particular order?
>>
>> Are you looking for particular values in fixed locations in the records?
>>
>> Are you looking for records where there's definable relationships between
>> values in specific records?
>>
>> Is there any way that - say - you can do a first scan to make subsets of
>> records before you then examine those in much more detail?
>>
>> --
>> Jeremy Nicoll - my opinions are my own.
>>
>> --
>> For IBM-MAIN subscribe / signoff / archive access instructions,
>> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
>
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
>
>
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: dataset allocation

2020-10-07 Thread Paul Gilmartin
On Wed, 7 Oct 2020 18:51:58 +, Seymour J Metz wrote:
>...
>
>What DYNAMNBR did you specify on EXEC?
>
Allocation by BPXWDYN, for example, is exempt from the DYNAMNBR limit.
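
For example, a minimal REXX sketch (the dataset name and ddname below are
made up, not anything from this thread):

  /* REXX - minimal sketch; dataset name and ddname are placeholders */
  dsn = 'SOME.INPUT.DATASET'
  rc = bpxwdyn('alloc dd(in00001) da('dsn') shr reuse msg(wtp)')
  if rc <> 0 then say 'BPXWDYN alloc failed, rc='rc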

-- gil

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: dataset allocation

2020-10-07 Thread Paul Gilmartin
On Wed, 7 Oct 2020 11:36:12 -0400, Joseph Reichman wrote:
>
>There is a maximum of 5 min CPU time for job step 

On Wed, 7 Oct 2020 18:15:56 +0100, Jeremy Nicoll wrote:
>On Wed, 7 Oct 2020, at 18:06, Joseph Reichman wrote:
>> I work for the IRS  ... 
>> 
>And you've said that multiple times.  No-one cares who you work
>for, ...
>
Yes, however the two statements above taken together, and
without other context, are astonishing.

-- gil

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


DFSMShsm SETSYS BACKUP(DASD)

2020-10-07 Thread Buckton, T. (Theo)
Hi There,

I have set the HSM BACKUP to DASD, and disabled the tape handling parameters 
except for SETSYS NOUSERUNITTABLE which remains as is. However, an HBACK 
command keeps allocating a tape for the backup. A backup volume is added as:

ADDVOL HSMBK0 UNIT(3390)
   BACKUP(DAILY)

Not sure why HSM keeps allocating a tape volume for the backup. Please advise.

Regards





--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: dataset allocation

2020-10-07 Thread Joseph Reichman
1440. It's bombing on time.

Seymour, you live in Virginia and never worked for the IRS? You can't be that
far from NCFB. The code here is all Assembler.

Many large VB files.



> On Oct 7, 2020, at 2:52 PM, Seymour J Metz  wrote:
> 
> The limit is the same for static and dynamic allocation.
> 
> The limit is higher for extended TIOT.
> 
> What TIME did you specify on JOB and EXEC?
> 
> What DYNAMNBR did you specify on EXEC?
> 
> 
> --
> Shmuel (Seymour J.) Metz
> http://mason.gmu.edu/~smetz3
> 
> 
> 
> From: IBM Mainframe Discussion List  on behalf of 
> Joseph Reichman 
> Sent: Wednesday, October 7, 2020 1:28 PM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: Re: dataset allocation
> 
> There are two main issues here
> 
> 1) I can not allocate this many datasets to
>A job step that’s includes using SVC 99
> 
> 2) The job step times out because I have reached a 5 minute CPU time limit on 
> the job step
> 
> Sri from my understanding said DFSORT can overcome these two problems
> 
> I’m looking at the DFSORT manual
> 
> Thank You
> 
>>> On Oct 7, 2020, at 1:16 PM, Jeremy Nicoll  
>>> wrote:
>>> 
>>> On Wed, 7 Oct 2020, at 18:06, Joseph Reichman wrote:
>>> I work for the IRS I have to search thru year 2020 data that’s 4,467
>>> files about 240,000 records per file and a record length could be
>>> 10,000 bytes
>>> VB files
>> 
>> And you've said that multiple times.  No-one cares who you work
>> for, but we do care about the technical issues you're facing.
>> 
>> 
>> Every single time you ask for help, no matter on what topic, it's nearly
>> impossible for anyone to find out what exactly you're trying to do.
>> 
>> Why don't you just answer the questions?
>> 
>> Are the records in the file in any particular order?
>> 
>> Are you looking for particular values in fixed locations in the records?
>> 
>> Are you looking for records where there's definable relationships between
>> values in specific records?
>> 
>> Is there any way that - say - you can do a first scan to make subsets of
>> records before you then examine those in much more detail?
>> 
>> --
>> Jeremy Nicoll - my opinions are my own.
>> 
>> --
>> For IBM-MAIN subscribe / signoff / archive access instructions,
>> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
> 
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
> 
> 
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: dataset allocation

2020-10-07 Thread Seymour J Metz
The limit is the same for static and dynamic allocation.

The limit is higher for extended TIOT.

What TIME did you specify on JOB and EXEC?

What DYNAMNBR did you specify on EXEC?


--
Shmuel (Seymour J.) Metz
http://mason.gmu.edu/~smetz3



From: IBM Mainframe Discussion List  on behalf of 
Joseph Reichman 
Sent: Wednesday, October 7, 2020 1:28 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: dataset allocation

There are two main issues here

1) I can not allocate this many datasets to
A job step that’s includes using SVC 99

2) The job step times out because I have reached a 5 minute CPU time limit on 
the job step

Sri from my understanding said DFSORT can overcome these two problems

I’m looking at the DFSORT manual

Thank You

> On Oct 7, 2020, at 1:16 PM, Jeremy Nicoll  
> wrote:
>
> On Wed, 7 Oct 2020, at 18:06, Joseph Reichman wrote:
>> I work for the IRS I have to search thru year 2020 data that’s 4,467
>> files about 240,000 records per file and a record length could be
>> 10,000 bytes
>> VB files
>
> And you've said that multiple times.  No-one cares who you work
> for, but we do care about the technical issues you're facing.
>
>
> Every single time you ask for help, no matter on what topic, it's nearly
> impossible for anyone to find out what exactly you're trying to do.
>
> Why don't you just answer the questions?
>
> Are the records in the file in any particular order?
>
> Are you looking for particular values in fixed locations in the records?
>
> Are you looking for records where there's definable relationships between
> values in specific records?
>
> Is there any way that - say - you can do a first scan to make subsets of
> records before you then examine those in much more detail?
>
> --
> Jeremy Nicoll - my opinions are my own.
>
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: dataset allocation

2020-10-07 Thread Seymour J Metz
Using global change command would work in, e.g., SuperWylbur, but the change 
command in ISPF doesn't have the requisite functionality.


--
Shmuel (Seymour J.) Metz
http://mason.gmu.edu/~smetz3



From: IBM Mainframe Discussion List  on behalf of 
Jeremy Nicoll 
Sent: Wednesday, October 7, 2020 10:28 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: dataset allocation

On Wed, 7 Oct 2020, at 14:49, Paul Gilmartin wrote:

> On Wed, 7 Oct 2020 13:45:04 +0100, Jeremy Nicoll wrote:
> >...
> >Alternatively, maybe you never wrote any edit macros in anything other
> >than REXX?  ISTR that one could use any SAA language, eg COBOL or
> >Assembler, apart from CLIST/REXX.
> >
> If so, I'd expect the limiting factor to be Edit's parsing the command string.

But only if the macro made intelligent use of editor commands, for
example issuing change commands to affect all matching lines in a
file.  If it instead iterated through the file a line at a time, looking
for things and maybe replacing whole lines itself, then much more
of the CPU use could be down to the macro's own logic.
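
To make that concrete, an illustrative sketch (not the OP's macro; the
strings are made up):

  /* REXX edit macro - illustrative sketch (placeholder strings).      */
  /* The cheap way is to let the editor do everything in one command:  */
  /*    ISREDIT CHANGE 'old-text' 'new-text' ALL                       */
  /* The costly way is to drive it line by line from the macro, e.g.   */
  /* scanning every line yourself:                                     */
  address isredit
  "MACRO"
  "(last) = LINENUM .ZLAST"        /* number of the last data line     */
  hits = 0
  do i = 1 to last
     "(txt) = LINE" i              /* one ISREDIT call per line        */
     if pos('old-text', txt) > 0 then hits = hits + 1
  end
  say hits "line(s) would need changing"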

That's the point.  The OP's contention that a macro was much less
efficient depends a great deal on what the macro was doing and
how it was written.

--
Jeremy Nicoll - my opinions are my own.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


RMM Retention Method EXPDT

2020-10-07 Thread John Benik
I am wondering if others on this list have made the transition from Retention
Method(VRSEL) to Retention Method(EXPDT)? We have done so but ran into some
unexpected behaviors, and I am just trying to get an idea of what the experience
was like for others who have done this. If you also took advantage of the tape
retention policy fields that were added to the SMS management class attributes,
that would be great; those were made available in z/OS 2.3. Finally, are there
any things I need to be aware of in RMM when we go to z/OS 2.4?

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: dataset allocation

2020-10-07 Thread Joseph Reichman
Thank you I did not know that 


> On Oct 7, 2020, at 2:13 PM, DAL POS Raphael  
> wrote:
> 
> Hi Joseph, 
> 
> Ref : 
> 1) I can not allocate this many datasets to 
>A job step that’s includes using SVC 99
> 
> This is not true. 
> 
> When using SVC99 you can use S99TIOEX flag to use extended TIOT. I will need 
> to run authorized for this. 
> 
> If you are not running authorized you can use flag S99ACUCB instead. It will 
> cause the creation of an extended TIOT even if S99TIOEX is not coded and it 
> does not require authorization. 
> 
> Ciao,  
> 
> -- 
> Raphael Dal Pos / z/OS Support
> Generali Shared Services S.c.a.r.l.
> GSS\CIN-MF (Central Infrastructure Mainframe)
> 11-17, Avenue François Mitterrand
> 93200 Saint Denis / France
> Wilo W 03 B1 029C  
> raphael.dal...@generali.com +(33)1-58-38-59-67 
>   or mobile +(33)6.24.33.20.87 
> -- 
> "MVS: Guilty, until proven innocent !!" RDP 2009 
> 
> 
> 
> -Message d'origine-
> De : IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] De la 
> part de Joseph Reichman
> Envoyé : mercredi 7 octobre 2020 19:28
> À : IBM-MAIN@LISTSERV.UA.EDU
> Objet : Re: dataset allocation
> 
> There are two main issues here 
> 
> 1) I can not allocate this many datasets to 
>A job step that’s includes using SVC 99
> 
> 2) The job step times out because I have reached a 5 minute CPU time limit on 
> the job step 
> 
> Sri from my understanding said DFSORT can overcome these two problems 
> 
> I’m looking at the DFSORT manual 
> 
> Thank You
> 
>>> On Oct 7, 2020, at 1:16 PM, Jeremy Nicoll  
>>> wrote:
>>> 
>>> On Wed, 7 Oct 2020, at 18:06, Joseph Reichman wrote:
>>> I work for the IRS I have to search thru year 2020 data that’s 4,467 
>>> files about 240,000 records per file and a record length could be 
>>> 10,000 bytes 
>>> VB files 
>> 
>> And you've said that multiple times.  No-one cares who you work
>> for, but we do care about the technical issues you're facing.
>> 
>> 
>> Every single time you ask for help, no matter on what topic, it's nearly 
>> impossible for anyone to find out what exactly you're trying to do.
>> 
>> Why don't you just answer the questions?
>> 
>> Are the records in the file in any particular order?
>> 
>> Are you looking for particular values in fixed locations in the records?
>> 
>> Are you looking for records where there's definable relationships between
>> values in specific records?
>> 
>> Is there any way that - say - you can do a first scan to make subsets of 
>> records before you then examine those in much more detail?
>> 
>> -- 
>> Jeremy Nicoll - my opinions are my own.
>> 
>> --
>> For IBM-MAIN subscribe / signoff / archive access instructions,
>> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
> 
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
> 
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: dataset allocation

2020-10-07 Thread DAL POS Raphael
Hi Joseph, 

Ref : 
1) I can not allocate this many datasets to 
A job step that’s includes using SVC 99

This is not true. 

When using SVC 99 you can set the S99TIOEX flag to use an extended TIOT. You
will need to run authorized for this.

If you are not running authorized you can use the S99ACUCB flag instead. It will
cause the creation of an extended TIOT even if S99TIOEX is not coded, and it
does not require authorization.

Ciao,  

-- 
Raphael Dal Pos / z/OS Support
Generali Shared Services S.c.a.r.l.
GSS\CIN-MF (Central Infrastructure Mainframe)
11-17, Avenue François Mitterrand
93200 Saint Denis / France
Wilo W 03 B1 029C  
raphael.dal...@generali.com +(33)1-58-38-59-67 
  or mobile +(33)6.24.33.20.87 
-- 
"MVS: Guilty, until proven innocent !!" RDP 2009 



-Message d'origine-
De : IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] De la part 
de Joseph Reichman
Envoyé : mercredi 7 octobre 2020 19:28
À : IBM-MAIN@LISTSERV.UA.EDU
Objet : Re: dataset allocation

There are two main issues here 

1) I can not allocate this many datasets to 
A job step that’s includes using SVC 99

2) The job step times out because I have reached a 5 minute CPU time limit on 
the job step 

Sri from my understanding said DFSORT can overcome these two problems 

I’m looking at the DFSORT manual 

Thank You

> On Oct 7, 2020, at 1:16 PM, Jeremy Nicoll  
> wrote:
> 
> On Wed, 7 Oct 2020, at 18:06, Joseph Reichman wrote:
>> I work for the IRS I have to search thru year 2020 data that’s 4,467 
>> files about 240,000 records per file and a record length could be 
>> 10,000 bytes 
>> VB files 
> 
> And you've said that multiple times.  No-one cares who you work
> for, but we do care about the technical issues you're facing.
> 
> 
> Every single time you ask for help, no matter on what topic, it's nearly 
> impossible for anyone to find out what exactly you're trying to do.
> 
> Why don't you just answer the questions?
> 
> Are the records in the file in any particular order?
> 
> Are you looking for particular values in fixed locations in the records?
> 
> Are you looking for records where there's definable relationships between
> values in specific records?
> 
> Is there any way that - say - you can do a first scan to make subsets of 
> records before you then examine those in much more detail?
> 
> -- 
> Jeremy Nicoll - my opinions are my own.
> 
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: dataset allocation

2020-10-07 Thread Gibney, Dave
> 2) The job step times out because I have reached a 5 minute CPU time limit
> on the job step
>

This is a site and environment choice. Use a JOBCLASS (or how ever your site 
controls this) with a greater or no time limit.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: dataset allocation

2020-10-07 Thread Mazer Ken G
Joseph,

I know for a fact that there are job classes that are available for long 
running jobs.  Have you tried to use those?
You'll want to read up on Syncsort from Precisely as we are not licensed for 
DFSORT from IBM.

Ken Mazer
IRS Systems Programmer 25 years

-Original Message-
From: IBM Mainframe Discussion List  On Behalf Of 
Joseph Reichman
Sent: Wednesday, October 07, 2020 1:28 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: dataset allocation

There are two main issues here 

1) I can not allocate this many datasets to 
A job step that’s includes using SVC 99

2) The job step times out because I have reached a 5 minute CPU time limit on 
the job step 

Sri from my understanding said DFSORT can overcome these two problems 

I’m looking at the DFSORT manual 

Thank You

> On Oct 7, 2020, at 1:16 PM, Jeremy Nicoll  
> wrote:
> 
> On Wed, 7 Oct 2020, at 18:06, Joseph Reichman wrote:
>> I work for the IRS I have to search thru year 2020 data that’s 4,467 
>> files about 240,000 records per file and a record length could be
>> 10,000 bytes
>> VB files
> 
> And you've said that multiple times.  No-one cares who you work for, 
> but we do care about the technical issues you're facing.
> 
> 
> Every single time you ask for help, no matter on what topic, it's 
> nearly impossible for anyone to find out what exactly you're trying to do.
> 
> Why don't you just answer the questions?
> 
> Are the records in the file in any particular order?
> 
> Are you looking for particular values in fixed locations in the records?
> 
> Are you looking for records where there's definable relationships 
> between values in specific records?
> 
> Is there any way that - say - you can do a first scan to make subsets 
> of records before you then examine those in much more detail?
> 
> --
> Jeremy Nicoll - my opinions are my own.
> 
> --
> For IBM-MAIN subscribe / signoff / archive access instructions, send 
> email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN

--
For IBM-MAIN subscribe / signoff / archive access instructions, send email to 
lists...@listserv.ua.edu with the message: INFO IBM-MAIN

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: dataset allocation

2020-10-07 Thread Charles Mills
I am not an expert, but I would think the "5 CPU minutes/job" limit would apply
irrespective of whether the jobstep program were DFSORT or your homegrown
program. I think you are going to have to get some sort of special dispensation
from your WLM or similar sysprogs.

I suppose you might get around the restriction by splitting the work up into 
multiple jobs. Each job would process some subset of the full repertoire of 
input datasets and produce some sort of a summary dataset. The summary datasets 
from the multiple jobs would then be combined by some final job.
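A rough sketch of that final combining step, assuming purely for illustration
that each partial job wrote a summary dataset with a 10-byte character key in
column 1 (the dataset names, key layout and space figures are invented):

//COMBINE EXEC PGM=SORT
//SYSOUT  DD SYSOUT=*
//SORTIN  DD DISP=SHR,DSN=HLQ.SUMMARY.JOB1    partial summaries, concatenated
//        DD DISP=SHR,DSN=HLQ.SUMMARY.JOB2
//        DD DISP=SHR,DSN=HLQ.SUMMARY.JOB3
//SORTOUT DD DSN=HLQ.SUMMARY.ALL,DISP=(NEW,CATLG,DELETE),
//           UNIT=SYSDA,SPACE=(CYL,(50,50),RLSE)
//SYSIN   DD *
  SORT FIELDS=(1,10,CH,A)       assumed key position and length
/*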

Charles


-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Joseph Reichman
Sent: Wednesday, October 7, 2020 10:28 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: dataset allocation

There are two main issues here:

1) I cannot allocate this many datasets to a job step, even using SVC 99.

2) The job step times out because I reach the 5-minute CPU time limit on 
the job step.

From my understanding, Sri said DFSORT can overcome these two problems.

I'm looking at the DFSORT manual.

Thank You

> On Oct 7, 2020, at 1:16 PM, Jeremy Nicoll  
> wrote:
> 
> On Wed, 7 Oct 2020, at 18:06, Joseph Reichman wrote:
>> I work for the IRS I have to search thru year 2020 data that’s 4,467 
>> files about 240,000 records per file and a record length could be 
>> 10,000 bytes 
>> VB files 
> 
> And you've said that multiple times.  No-one cares who you work
> for, but we do care about the technical issues you're facing.
> 
> 
> Every single time you ask for help, no matter on what topic, it's nearly 
> impossible for anyone to find out what exactly you're trying to do.
> 
> Why don't you just answer the questions?
> 
> Are the records in the file in any particular order?
> 
> Are you looking for particular values in fixed locations in the records?
> 
> Are you looking for records where there's definable relationships between
> values in specific records?
> 
> Is there any way that - say - you can do a first scan to make subsets of 
> records before you then examine those in much more detail?
> 
> -- 
> Jeremy Nicoll - my opinions are my own.
> 
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: dataset allocation

2020-10-07 Thread Joseph Reichman
There are two main issues here:

1) I cannot allocate this many datasets to a job step, even using SVC 99.

2) The job step times out because I reach the 5-minute CPU time limit on 
the job step.

From my understanding, Sri said DFSORT can overcome these two problems.

I'm looking at the DFSORT manual.

Thank You

> On Oct 7, 2020, at 1:16 PM, Jeremy Nicoll  
> wrote:
> 
> On Wed, 7 Oct 2020, at 18:06, Joseph Reichman wrote:
>> I work for the IRS I have to search thru year 2020 data that’s 4,467 
>> files about 240,000 records per file and a record length could be 
>> 10,000 bytes 
>> VB files 
> 
> And you've said that multiple times.  No-one cares who you work
> for, but we do care about the technical issues you're facing.
> 
> 
> Every single time you ask for help, no matter on what topic, it's nearly 
> impossible for anyone to find out what exactly you're trying to do.
> 
> Why don't you just answer the questions?
> 
> Are the records in the file in any particular order?
> 
> Are you looking for particular values in fixed locations in the records?
> 
> Are you looking for records where there's definable relationships between
> values in specific records?
> 
> Is there any way that - say - you can do a first scan to make subsets of 
> records before you then examine those in much more detail?
> 
> -- 
> Jeremy Nicoll - my opinions are my own.
> 
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: dataset allocation

2020-10-07 Thread Jeremy Nicoll
On Wed, 7 Oct 2020, at 18:06, Joseph Reichman wrote:
> I work for the IRS I have to search thru year 2020 data that’s 4,467 
> files about 240,000 records per file and a record length could be 
> 10,000 bytes 
> VB files 

And you've said that multiple times.  No-one cares who you work
for, but we do care about the technical issues you're facing.


Every single time you ask for help, no matter on what topic, it's nearly 
impossible for anyone to find out what exactly you're trying to do.

Why don't you just answer the questions?

Are the records in the file in any particular order?

Are you looking for particular values in fixed locations in the records?

Are you looking for records where there's definable relationships between
values in specific records?

Is there any way that - say - you can do a first scan to make subsets of 
records before you then examine those in much more detail?

-- 
Jeremy Nicoll - my opinions are my own.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: dataset allocation

2020-10-07 Thread Joseph Reichman
I work for the IRS. I have to search through year 2020 data: 4,467 files, 
about 240,000 records per file, with record lengths that can reach 10,000 bytes. 
They are VB files.

> On Oct 7, 2020, at 1:03 PM, Sri h Kolusu  wrote:
> 
> 
>> 
>> Yes at this point but since the file is variable
>> I may need an exit to get the right spot at times to do a compare
> 
> Joseph,
> 
> You still haven't explained us as to what the real requirement is.  DFSORT
> can handle VB file with ease. Substring search will make sure you can
> search anywhere within the record.
> 
>> If these files are normally accessed by either COBOL or PL1, using a
>> COBOL or Pl1 program in batch to do what you need to do will be faster
>> to code.  Both languages have reference modification so variably
>> located fields can be easily dealt
> 
> Clark,
> 
> OP has different LRECL files and COBOL will not be able handle it, unless
> you code all the files have a common LRECL
> 
> 
> Thanks,
> Kolusu
> 
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: dataset allocation

2020-10-07 Thread Sri h Kolusu
> Yes at this point but since the file is variable
> I may need an exit to get the right spot at times to do a compare

Joseph,

You still haven't explained to us what the real requirement is.  DFSORT
can handle VB files with ease. A substring search will make sure you can
search anywhere within the record.
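As a minimal sketch of that kind of substring scan over a VB file (the search
string, field length and dataset names below are made up for illustration):

//SCAN    EXEC PGM=SORT
//SYSOUT  DD SYSOUT=*
//SORTIN  DD DISP=SHR,DSN=YOUR.VB.INPUT       one file, or a concatenation
//SORTOUT DD DSN=YOUR.VB.HITS,DISP=(NEW,CATLG,DELETE),
//           UNIT=SYSDA,SPACE=(CYL,(100,100),RLSE)
//SYSIN   DD *
* VLSCMP pads short VB records for the compare instead of failing.
* Position 5 skips the 4-byte RDW; SS scans the rest of the record.
  OPTION COPY,VLSCMP
  INCLUDE COND=(5,9996,SS,EQ,C'SEARCH-VALUE')
/*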

>  If these files are normally accessed by either COBOL or PL1, using a
> COBOL or Pl1 program in batch to do what you need to do will be faster
> to code.  Both languages have reference modification so variably
> located fields can be easily dealt

Clark,

The OP has files with different LRECLs, and COBOL will not be able to handle
that unless you code as though all the files have a common LRECL.


Thanks,
Kolusu

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: dataset allocation

2020-10-07 Thread Clark Morris
[Default] On 7 Oct 2020 08:50:04 -0700, in bit.listserv.ibm-main
reichman...@gmail.com (Joseph Reichman) wrote:

>Yes at this point but since the file is variable 
>I may need an exit to get the right spot at times to do a compare  
>There are 4,644 files an average of 240,000 records the file is VB the record 
>size can be 10,000 rough estimates 
>
If these files are normally accessed by either COBOL or PL/I, using a
COBOL or PL/I program in batch to do what you need will be faster
to code.  Both languages have reference modification, so variably
located fields can be dealt with easily (I have written programs in
COBOL to process SMF 30 records and to parse COBOL source statements
for CALL and COPY usage).  You can increase sequential performance by
coding a BUFNO on the file such that a cylinder can be read at a
time.  The advantage of COBOL is that the program(s) can easily be
used as templates for future jobs like this.
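As a small illustration of the BUFNO point (the DD name, dataset name and
buffer count are made up; the right number depends on block size and device):

//INFILE   DD DISP=SHR,DSN=YOUR.VB.INPUT,DCB=BUFNO=30
//* QSAM's default is normally only 5 buffers; a larger BUFNO lets more
//* blocks be read per I/O request.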

Clark Morris  
>> On Oct 7, 2020, at 11:44 AM, Sri h Kolusu  wrote:
>> 
>> ?
>>> 
 There is a maximum of 5 min CPU time for job step
 In order to increase the TIOT the  allocxx member had to be modified
>> 
>> 
>> You don't have to change TIOT limit, we can cap the concatenation limit to
>> whatever value we decide. Since you only have 5 mins of cpu time for each
>> job, we probably can limit to 800-1000 dataset per job.
>> 
>> So far you haven't explained as to what you are trying to do? Is it just
>> picking records that match a condition?
>> 
>> Thanks
>> Kolusu
>> 
>> --
>> For IBM-MAIN subscribe / signoff / archive access instructions,
>> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
>
>--
>For IBM-MAIN subscribe / signoff / archive access instructions,
>send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: dataset allocation

2020-10-07 Thread Joseph Reichman
Yes, at this point. But since the file is variable, I may need an exit to get 
to the right spot at times to do a compare.
There are 4,644 files with an average of 240,000 records each; the files are VB 
and the record size can be about 10,000 bytes (rough estimates).


> On Oct 7, 2020, at 11:44 AM, Sri h Kolusu  wrote:
> 
> 
>> 
>>> There is a maximum of 5 min CPU time for job step
>>> In order to increase the TIOT the  allocxx member had to be modified
> 
> 
> You don't have to change TIOT limit, we can cap the concatenation limit to
> whatever value we decide. Since you only have 5 mins of cpu time for each
> job, we probably can limit to 800-1000 dataset per job.
> 
> So far you haven't explained as to what you are trying to do? Is it just
> picking records that match a condition?
> 
> Thanks
> Kolusu
> 
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: dfsort file split

2020-10-07 Thread Ron Thomas
Thanks a lot Kolusu, it worked like a charm :)

Massimo, the solution provided did not work for me.

Regards
Ron T

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: dataset allocation

2020-10-07 Thread Sri h Kolusu
>> There is a maximum of 5 min CPU time for job step
>> In order to increase the TIOT the  allocxx member had to be modified


You don't have to change the TIOT limit; we can cap the concatenation at
whatever value we decide. Since you only have 5 minutes of CPU time for each
job, we can probably limit it to 800-1000 datasets per job.

So far you haven't explained what you are trying to do. Is it just
picking records that match a condition?

Thanks
Kolusu

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: dataset allocation

2020-10-07 Thread Joseph Reichman
Thanks, good to know. My job is timing out:
there is a maximum of 5 minutes of CPU time for a job step.

In order to increase the TIOT, the ALLOCxx member had to be modified.

> On Oct 7, 2020, at 11:22 AM, Sri h Kolusu  wrote:
> 
> 
>> 
>> You may be surprised at how much SORT can do for you though.
> 
> SORT can easily accomplish this by generating JCL on the fly for the 4000+
> datasets.  The maximum number of dd's per job is 3273 (assuming TIOT is
> 64k).  So DFSORT can take a list of the datasets and generate 3 different
> jobs and submit them in parallel via INTRDR.
> 
> 
> Thanks,
> Sri Hari Kolusu
> DFSORT Development
> IBM Corporation
> Email: skol...@us.ibm.com
> Phone: 520-799-2237 Tie Line: 321-2237
> 
> 
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: dataset allocation

2020-10-07 Thread Sri h Kolusu
> You may be surprised at how much SORT can do for you though.

SORT can easily accomplish this by generating JCL on the fly for the 4000+
datasets.  The maximum number of DD statements per job step is 3273 (assuming
the TIOT is 64K).  So DFSORT can take a list of the datasets, generate 3
different jobs, and submit them in parallel via INTRDR.
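For what it's worth, a minimal sketch of that JCL-generation trick for a single
generated job (the dataset names, job card and control-card library are invented;
splitting the list across two or three generated jobs needs a little more logic
than shown here):

//GENJCL  EXEC PGM=SORT
//SYSOUT  DD SYSOUT=*
//SORTIN  DD DISP=SHR,DSN=MY.DSN.LIST          one dataset name per record, cols 1-44
//SORTOUT DD SYSOUT=(*,INTRDR)                 generated job goes to the internal reader
//SYSIN   DD *
* First DSN becomes the SORTIN DD of the generated job; the rest become
* unnamed concatenation DDs. Records are padded to 80 bytes for INTRDR.
  INREC IFTHEN=(WHEN=INIT,OVERLAY=(81:SEQNUM,8,ZD)),
        IFTHEN=(WHEN=(81,8,ZD,EQ,1),
                BUILD=(C'//SORTIN   DD DISP=SHR,DSN=',1,44,80:X)),
        IFTHEN=(WHEN=NONE,
                BUILD=(C'//         DD DISP=SHR,DSN=',1,44,80:X))
* HEADER1 supplies the JOB, EXEC and SYSIN cards of the generated job.
  OUTFIL FNAMES=SORTOUT,REMOVECC,
    HEADER1=(C'//SCAN0001 JOB (ACCT),''SCAN'',CLASS=L',/,
             C'//STEP1    EXEC PGM=SORT',/,
             C'//SYSOUT   DD SYSOUT=*',/,
             C'//SORTOUT  DD SYSOUT=*',/,
             C'//SYSIN    DD DISP=SHR,DSN=MY.SEARCH.CNTL')
/*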


Thanks,
Sri Hari Kolusu
DFSORT Development
IBM Corporation
Email: skol...@us.ibm.com
Phone: 520-799-2237 Tie Line: 321-2237


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: dfsort file split

2020-10-07 Thread Sri h Kolusu
> Here , i want to add the header in the OUT01-05 files . is there a
> way we can do in the same step.

Ron,

You used KEYBEGIN on 40,5, so the header record will get group number 1
and the detail records will get group numbers starting at 2.  If your
input file always has a header, then you need to use group number 1 to
include the header.  You also need to remove the temporary group ID number
that you created; use BUILD for that.

//SYSIN    DD *
  OPTION COPY
  OUTREC IFTHEN=(WHEN=GROUP,KEYBEGIN=(40,5),PUSH=(651:ID=2))
  OUTFIL FNAMES=OUT01,BUILD=(01,650),INCLUDE=(651,2,SS,EQ,C'01,02')
  OUTFIL FNAMES=OUT02,BUILD=(01,650),INCLUDE=(651,2,SS,EQ,C'01,03')
  OUTFIL FNAMES=OUT03,BUILD=(01,650),INCLUDE=(651,2,SS,EQ,C'01,04')
  OUTFIL FNAMES=OUT04,BUILD=(01,650),INCLUDE=(651,2,SS,EQ,C'01,05')
  OUTFIL FNAMES=OUT05,BUILD=(01,650),INCLUDE=(651,2,SS,EQ,C'01,06')
/*

>> for instance if the header record is the only with double-blank at 651:


Massimo,

That will NOT work. Keybegin will look wherever there is a change in the
key at position 40 for 5 bytes.  So the first key change is on the Header
record itself.


Thanks,
Kolusu
DFSORT Development
IBM Corporation

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: dfsort file split

2020-10-07 Thread Massimo Biancucci
Ron,

if you mean the very same header record of the file, and there's a condition
that allows you to recognize it, for instance if the header record is the
only one with a double blank at position 651:

SORT FIELDS=COPY
  OUTREC IFTHEN=(WHEN=GROUP,KEYBEGIN=(40,5),PUSH=(651:ID=2))
  OUTFIL FNAMES=OUT01,BUILD=(1,650),INCLUDE=(651,2,ZD,EQ,01,OR,651,2,CH,EQ,C'  ')
  OUTFIL FNAMES=OUT02,BUILD=(1,650),INCLUDE=(651,2,ZD,EQ,02,OR,651,2,CH,EQ,C'  ')
  OUTFIL FNAMES=OUT03,BUILD=(1,650),INCLUDE=(651,2,ZD,EQ,03,OR,651,2,CH,EQ,C'  ')
  OUTFIL FNAMES=OUT04,BUILD=(1,650),INCLUDE=(651,2,ZD,EQ,04,OR,651,2,CH,EQ,C'  ')
  OUTFIL FNAMES=OUT05,BUILD=(1,650),INCLUDE=(651,2,ZD,EQ,05,OR,651,2,CH,EQ,C'  ')

Best regards.
Max




On Wed, 7 Oct 2020 at 14:15, Ron Thomas  wrote:

> Hello-
>
> i have a file which has a header and detail records , i want to split the
> file based on value  and here below is the one i have coded
>
>
> //SPLIT    EXEC PGM=SORT
> //SYSOUT   DD SYSOUT=*
> //SORTIN   DD DSN=PYU678S.ITR1.FUTR.RTLDTA.UNLOAD,DISP=SHR
> //OUT01    DD SYSOUT=*
> //OUT02    DD SYSOUT=*
> //OUT03    DD SYSOUT=*
> //OUT04    DD SYSOUT=*
> //OUT05    DD SYSOUT=*
> //SYSIN    DD *
>   SORT FIELDS=COPY
>   OUTREC IFTHEN=(WHEN=GROUP,KEYBEGIN=(40,5),PUSH=(651:ID=2))
>   OUTFIL FNAMES=OUT01,BUILD=(1,650),INCLUDE=(651,2,ZD,EQ,01)
>   OUTFIL FNAMES=OUT02,BUILD=(1,650),INCLUDE=(651,2,ZD,EQ,02)
>   OUTFIL FNAMES=OUT03,BUILD=(1,650),INCLUDE=(651,2,ZD,EQ,03)
>   OUTFIL FNAMES=OUT04,BUILD=(1,650),INCLUDE=(651,2,ZD,EQ,04)
>   OUTFIL FNAMES=OUT05,BUILD=(1,650),INCLUDE=(651,2,ZD,EQ,05)
> /*
>
> Here , i want to add the header in the OUT01-05 files . is there a way we
> can do in the same step.
>
> Sample input data as follows
>
>
> *** Top of Data
> 
> RTPOSY_NBR   }  PRREM_DESC1  }  PSORE_NBR   }  RETAIL_TYPE_DESC   }
> RETAIL_AMT
> 500127657 }NECT CY ABACAXI  }1141   }SBH - Base Especial
> 500792452 }NECT CY UVA 1L   }1141   }SBH - Base Especial
> 500792451 }NECT CY MARACUJA 1L  }1141   }SBH - Base Especial
> 500827656 }NECT CY MANGA}1151   }SBH - Base Especial
> 500840785 }NECT CY PESSEGO L 1L }1151   }SBH - Base Especial
> 500759650 }ALC GEL GB CLAS  }1181   }SBH - Base Especial
> 500759651 }ALC GEL GB BLUE  }1181   }SBH - Base Especial
> 500766705 }ALC GEL GB CLASSIC   }1191   }SBH- Base Especial
> 500839893 }NECT CY LARANJ 1L}1191   }SHB - Base Especial
>
> Thanks
> Ron T
>
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
>

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: dataset allocation

2020-10-07 Thread Jeremy Nicoll
On Wed, 7 Oct 2020, at 14:49, Paul Gilmartin wrote:

> On Wed, 7 Oct 2020 13:45:04 +0100, Jeremy Nicoll wrote:
> >...
> >Alternatively, maybe you never wrote any edit macros in anything other
> >than REXX?  ISTR that one could use any SAA language, eg COBOL or
> >Assembler, apart from CLIST/REXX.
> >
> If so, I'd expect the limiting factor to be Edit's parsing the command string.

But only if the macro made intelligent use of editor commands, for 
example issuing change commands to affect all matching lines in a 
file.  If it instead iterated through the file a line at a time, looking 
for things and maybe replacing whole lines itself, then much more
of the CPU use could be down to the macro's own logic.  

That's the point.  The OP's contention that a macro was much less
efficient depends a great deal on what the macro was doing and 
how it was written.

-- 
Jeremy Nicoll - my opinions are my own.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: dataset allocation

2020-10-07 Thread Paul Gilmartin
On Wed, 7 Oct 2020 10:59:55 +0100, Jeremy Nicoll wrote:
>...
>How much of what the macro was doing was "glue logic" (if it was
>in REXX) or scanning through the file line by line, compared with
>calling editor commands (which one would expect to be fairly
>efficient)?

On Wed, 7 Oct 2020 13:45:04 +0100, Jeremy Nicoll wrote:
>...
>Alternatively, maybe you never wrote any edit macros in anything other
>than REXX?  ISTR that one could use any SAA language, eg COBOL or
>Assembler, apart from CLIST/REXX.
>
If so, I'd expect the limiting factor to be Edit's parsing the command string.

-- gil

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: IEASYS problem

2020-10-07 Thread Peter Relson

All our ieasys00 contains is CLPA, meaning that we always IPL with CLPA. 


At a minimum, you would think that ieasys00 should have in it whatever 
you want for all users of that ieasys00.
In Barbara's case, the only thing is CLPA.

Many would have in it the "site defaults" that they want, allowing for 
override by other ieasysxx's, to avoid having to change every one of those 
ieasysxx's if some site default is to be changed.
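A tiny illustration of that layering (the member suffixes and parameter values
below are invented, not taken from anyone's system):

IEASYS00  (read first - the site-wide defaults)
  CLPA,
  LNK=00,
  MAXUSER=4000

IEASYS01  (read second - one system's overrides)
  LNK=(00,01),
  MAXUSER=6000

Specifying SYSP=(00,01) (in LOADxx or at the IPL prompt) concatenates the two,
and a value coded in IEASYS01 overrides the one in IEASYS00 for any parameter
both members specify.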

Peter Relson
z/OS Core Technology Design


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: dataset allocation

2020-10-07 Thread Jeremy Nicoll
On Wed, 7 Oct 2020, at 15:40, Robert Prins wrote:
> On 2020-10-07 11:00, Jeremy Nicoll wrote:
> > On Wed, 7 Oct 2020, at 04:03, Wayne Bickerdike wrote:
> > 
> >> On a different note. I just compared EDIT macro performance versus
> >> IPOUPDTE. IPOUPDTE was about 600 times faster.
> > 
> > Is that a macro written in Assembler, or REXX?
> 
> It's an IBM program, a "reconstituted" version can be found @ 
> . From that page:

I'm pretty sure I remember using IPOUPDTE from working on CBIPOs 
about thirty years ago.

But I was asking about the edit macro that Wayne referred to.  Or are
you saying that PDSUPDTE is in fact an assembled edit macro? 
  
Alternatively, maybe you never wrote any edit macros in anything other
than REXX?  ISTR that one could use any SAA language, eg COBOL or
Assembler, apart from CLIST/REXX.  

-- 
Jeremy Nicoll - my opinions are my own.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


dfsort file split

2020-10-07 Thread Ron Thomas
Hello-

I have a file which has a header and detail records. I want to split the file 
based on a value, and below is what I have coded.


//SPLIT    EXEC PGM=SORT
//SYSOUT   DD SYSOUT=*
//SORTIN   DD DSN=PYU678S.ITR1.FUTR.RTLDTA.UNLOAD,DISP=SHR
//OUT01    DD SYSOUT=*
//OUT02    DD SYSOUT=*
//OUT03    DD SYSOUT=*
//OUT04    DD SYSOUT=*
//OUT05    DD SYSOUT=*
//SYSIN    DD *
  SORT FIELDS=COPY
  OUTREC IFTHEN=(WHEN=GROUP,KEYBEGIN=(40,5),PUSH=(651:ID=2))
  OUTFIL FNAMES=OUT01,BUILD=(1,650),INCLUDE=(651,2,ZD,EQ,01)
  OUTFIL FNAMES=OUT02,BUILD=(1,650),INCLUDE=(651,2,ZD,EQ,02)
  OUTFIL FNAMES=OUT03,BUILD=(1,650),INCLUDE=(651,2,ZD,EQ,03)
  OUTFIL FNAMES=OUT04,BUILD=(1,650),INCLUDE=(651,2,ZD,EQ,04)
  OUTFIL FNAMES=OUT05,BUILD=(1,650),INCLUDE=(651,2,ZD,EQ,05)
/*

Here, I want to add the header to the OUT01-05 files. Is there a way we can 
do that in the same step?

Sample input data as follows


*** Top of Data 
RTPOSY_NBR   }  PRREM_DESC1  }  PSORE_NBR   }  RETAIL_TYPE_DESC   }  RETAIL_AMT
500127657 }NECT CY ABACAXI  }1141   }SBH - Base Especial
500792452 }NECT CY UVA 1L   }1141   }SBH - Base Especial
500792451 }NECT CY MARACUJA 1L  }1141   }SBH - Base Especial
500827656 }NECT CY MANGA}1151   }SBH - Base Especial
500840785 }NECT CY PESSEGO L 1L }1151   }SBH - Base Especial
500759650 }ALC GEL GB CLAS  }1181   }SBH - Base Especial
500759651 }ALC GEL GB BLUE  }1181   }SBH - Base Especial
500766705 }ALC GEL GB CLASSIC   }1191   }SBH- Base Especial
500839893 }NECT CY LARANJ 1L}1191   }SHB - Base Especial

Thanks
Ron T

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: dataset allocation

2020-10-07 Thread Joseph Reichman
Thanks for the response 

The average number of records per file could be 240,000, each record could be 
close to 10,000 bytes, and there are 4,644 files.

I am trying to find a certain type of data.

(Doing testing for the new IRS filing season.)

Thanks 



> On Oct 7, 2020, at 7:32 AM, Robert Prins  wrote:
> 
> On 2020-10-07 11:00, Jeremy Nicoll wrote:
>>> On Wed, 7 Oct 2020, at 04:03, Wayne Bickerdike wrote:
>>> On a different note. I just compared EDIT macro performance versus
>>> IPOUPDTE. IPOUPDTE was about 600 times faster.
>> Is that a macro written in Assembler, or REXX?
> 
> It's an IBM program, a "reconstituted" version can be found @ 
> . From that page:
> 
> "PDSUPDTE is a reconstituted version of the IPOUPDTE program (also 
> distributed as CPPUPDTE).  It provides the ability to apply a group of 
> search/replace type modifications to JCL or control cards contained in all 
> members of one to several libraries (Partitioned Datasets).  It can also be 
> useful for searching for and changing fields in source code statements.
> 
> PDSUPDTE is located in File #65 of the CBT Overflow tape and is part of a 
> collection submitted by the Los Angeles User Group."
> 
>> How much of what the macro was doing was "glue logic" (if it was
>> in REXX) or scanning through the file line by line, compared with
>> calling editor commands (which one would expect to be fairly
>> efficient)?
> 
> If you need to change one member, it's OK. Run it in batch on a PDS with a 
> few hundred members, and it burns CPU like there is no tomorrow.
> 
> Robert
> -- 
> Robert AH Prins
> robert.ah.prins(a)gmail.com
> The hitchhiking grandfather @ https://prino.neocities.org/indez.html
> Some useful(?) REXX @ https://prino.neocities.org/zOS/zOS-Tools.html
> 
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: dataset allocation

2020-10-07 Thread Robert Prins

On 2020-10-07 11:00, Jeremy Nicoll wrote:

On Wed, 7 Oct 2020, at 04:03, Wayne Bickerdike wrote:


On a different note. I just compared EDIT macro performance versus
IPOUPDTE. IPOUPDTE was about 600 times faster.


Is that a macro written in Assembler, or REXX?


It's an IBM program, a "reconstituted" version can be found @ 
. From that page:


"PDSUPDTE is a reconstituted version of the IPOUPDTE program (also distributed 
as CPPUPDTE).  It provides the ability to apply a group of search/replace type 
modifications to JCL or control cards contained in all members of one to several 
libraries (Partitioned Datasets).  It can also be useful for searching for and 
changing fields in source code statements.


PDSUPDTE is located in File #65 of the CBT Overflow tape and is part of a 
collection submitted by the Los Angeles User Group."



How much of what the macro was doing was "glue logic" (if it was
in REXX) or scanning through the file line by line, compared with
calling editor commands (which one would expect to be fairly
efficient)?


If you need to change one member, it's OK. Run it in batch on a PDS with a few 
hundred members, and it burns CPU like there is no tomorrow.


Robert
--
Robert AH Prins
robert.ah.prins(a)gmail.com
The hitchhiking grandfather @ https://prino.neocities.org/indez.html
Some useful(?) REXX @ https://prino.neocities.org/zOS/zOS-Tools.html

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: dataset allocation

2020-10-07 Thread Jeremy Nicoll
On Wed, 7 Oct 2020, at 04:03, Wayne Bickerdike wrote:

> On a different note. I just compared EDIT macro performance versus
> IPOUPDTE. IPOUPDTE was about 600 times faster.

Is that a macro written in Assembler, or REXX?

How much of what the macro was doing was "glue logic" (if it was
in REXX) or scanning through the file line by line, compared with 
calling editor commands (which one would expect to be fairly 
efficient)?

-- 
Jeremy Nicoll - my opinions are my own.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: dataset allocation

2020-10-07 Thread Robert Prins

On 2020-10-07 03:03, Wayne Bickerdike wrote:

Give us an idea of how big each file is. OPEN/CLOSE is expensive. QSAM with
large buffers should go pretty quickly.

LOCATE instead of MOVE mode can speed things up when you are reading.

On a different note. I just compared EDIT macro performance versus
IPOUPDTE. IPOUPDTE was about 600 times faster.


Yes, but only if your data set is FB(80) and your changes are "trivial"...

For what it's worth, does anyone know of any IPOUPDTE equivalent on the CBTTape 
site that can handle any LRECL that you throw at it?


Robert
--
Robert AH Prins
robert.ah.prins(a)gmail.com
The hitchhiking grandfather - https://prino.neocities.org/
Some REXX code for use on z/OS - https://prino.neocities.org/zOS/zOS-Tools.html

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN