Joe,
As previously mentioned, set up an ISPF JCL skeleton and build your JCL
using file tailoring.
Sometimes you have to slow down and break the problem down, too.
DFSORT or Syncsort are very fast, but first off, test against one dataset
and get some indicative timings. End to end, that will give
At the shop where I worked, IPOUPDTE was barred because it broke the Endevor locks in
production PDSs.
> On Oct 7, 2020, at 19:51, Jeremy Nicoll wrote:
>
> On Thu, 8 Oct 2020, at 01:10, Seymour J Metz wrote:
>> No, I'm
On Thu, 8 Oct 2020, at 01:10, Seymour J Metz wrote:
> No, I'm saying that I know what the CHANGE command does. Did the OP say
> that the relevant lines are contiguous?
No, he said nothing at all except that
"On a different note. I just compared EDIT macro performance
versus IPOUPDTE.
No, I'm saying that I know what the CHANGE command does. Did the OP say that
the relevant lines are contiguous?
--
Shmuel (Seymour J.) Metz
http://mason.gmu.edu/~smetz3
From: IBM Mainframe Discussion List on behalf of
Jeremy Nicoll
Sent: Wednesday,
On Thu, 8 Oct 2020, at 00:12, Joseph Reichman wrote:
> I would like to issue IGGCSI00 and see how many datasets are involved;
> doing it in multiple steps I would have to code 4,400 DD statements, and
> that would take forever
You can't; surely you can't mean that you'd hand-write that many DD
On Wed, 7 Oct 2020, at 22:04, Paul Gilmartin wrote:
> On Wed, 7 Oct 2020 11:36:12 -0400, Joseph Reichman wrote:
> >
> >There is a maximum of 5 min CPU time for job step
>
> On Wed, 7 Oct 2020 18:15:56 +0100, Jeremy Nicoll wrote:
> >On Wed, 7 Oct 2020, at 18:06, Joseph Reichman wrote:
> >> I work
On Wed, 7 Oct 2020, at 19:37, Seymour J Metz wrote:
> Using global change command would work in, e.g., SuperWylbur, but the
> change command in ISPF doesn't have the requisite functionality.
Are you saying you know what the macro (that Wayne referred to) does?
It's been a long time since I
>
> I would like to issue IGGCSI00 and see how many datasets are involved;
> doing it in multiple steps I would have to code 4,400 DD statements, and
> that would take forever
Route the output of IGGCSI00 to a sequential dataset, and DFSORT can
generate the JCL dynamically by parsing the contents. But first
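A minimal sketch of that generate-the-JCL idea, assuming the CSI output has one 44-byte dataset name per record starting in column 1 (all dataset names, DD names, and space figures here are hypothetical):

```jcl
//GENJCL  EXEC PGM=SORT
//SYSOUT  DD SYSOUT=*
//SORTIN  DD DISP=SHR,DSN=MY.CSI.OUTPUT
//SORTOUT DD DISP=(NEW,CATLG),DSN=MY.GENERATED.JCL,
//           SPACE=(TRK,(5,5)),RECFM=FB,LRECL=80
//SYSIN   DD *
* COPY THE LIST, TURNING EACH 44-BYTE DATASET NAME INTO A
* DD STATEMENT WITH A SEQUENCE-NUMBERED DDNAME, PADDED TO 80
  SORT FIELDS=COPY
  OUTREC BUILD=(C'//IN',SEQNUM,4,ZD,C' DD DISP=SHR,DSN=',1,44,80:X)
/*
```

The generated member (IN0001, IN0002, ...) can then be wrapped with a job card and step and fed back through the internal reader or submitted by hand.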
> If DFSORT will do the trick I’m all for it
> I have been looking at the manual
To be honest, I for one have absolutely no idea what "the real
requirement" is. We already have 44 posts on this but very little
information on the real requirement.
You have been telling us that you work for
I would like to issue IGGCSI00 and see how many datasets are involved; doing it
in multiple steps I would have to code 4,400 DD statements, and that would take
forever
> On Oct 7, 2020, at 7:08 PM, Joseph Reichman wrote:
>
> If DFSORT will do the trick I’m all for it
> I have been looking at
If DFSORT will do the trick I’m all for it
I have been looking at the manual
I would assume it’s in the section running DFSORT from a program
> On Oct 7, 2020, at 7:00 PM, Mike Hochee wrote:
>
> Hi Joseph,
>
> I like your idea, especially if this is a one-off, you already have it
>
If putting TIME on both JOB and EXEC doesn't help, then just break the job up
into multiple steps or multiple jobs; there's no need to mess with INTRDR.
--
Shmuel (Seymour J.) Metz
http://mason.gmu.edu/~smetz3
From: IBM Mainframe Discussion List on
Hi Joseph,
I like your idea, especially if this is a one-off, you already have it written,
and the system it's running on is not totally i/o or cpu constrained. If it
becomes something that needs to run regularly, maybe that's a different story
and you rewrite using DFSORT or whatever.
HTH,
I meant submitting many jobs under the covers; the input datasets can remain VB
> On Oct 7, 2020, at 6:33 PM, Clark Morris wrote:
>
> [Default] On 7 Oct 2020 10:03:05 -0700, in bit.listserv.ibm-main
> skol...@us.ibm.com (Sri h Kolusu) wrote:
>
>>> Yes at this point but since the file is
[Default] On 7 Oct 2020 10:03:05 -0700, in bit.listserv.ibm-main
skol...@us.ibm.com (Sri h Kolusu) wrote:
>> Yes at this point but since the file is variable
>> I may need an exit to get the right spot at times to do a compare
>
>Joseph,
>
>You still haven't explained to us what the real
S322
IMHO, breaking up the job and submitting to INTRDR may help.
What do you think?
> On Oct 7, 2020, at 6:10 PM, Seymour J Metz wrote:
>
> Do you have TIME=1440 on both JOB and EXEC? What's the ABEND code?
>
> I'm in Annandale, just inside the Beltway.
>
>
> --
> Shmuel (Seymour J.)
Do you have TIME=1440 on both JOB and EXEC? What's the ABEND code?
I'm in Annandale, just inside the Beltway.
--
Shmuel (Seymour J.) Metz
http://mason.gmu.edu/~smetz3
From: IBM Mainframe Discussion List on behalf of
Joseph Reichman
Sent: Wednesday,
On Wed, 7 Oct 2020 18:51:58 +0000, Seymour J Metz wrote:
>...
>
>What DYNAMNBR did you specify on EXEC?
>
Allocation by BPXWDYN, for example, is exempt from the DYNAMNBR limit.
-- gil
--
For IBM-MAIN subscribe / signoff /
On Wed, 7 Oct 2020 11:36:12 -0400, Joseph Reichman wrote:
>
>There is a maximum of 5 min CPU time for job step
On Wed, 7 Oct 2020 18:15:56 +0100, Jeremy Nicoll wrote:
>On Wed, 7 Oct 2020, at 18:06, Joseph Reichman wrote:
>> I work for the IRS ...
>>
>And you've said that multiple times.
Hi There,
I have set the HSM BACKUP to DASD, and disabled the tape handling parameters
except for SETSYS NOUSERUNITTABLE which remains as is. However, an HBACK
command keeps allocating a tape for the backup. A backup volume is added as:
ADDVOL HSMBK0 UNIT(3390)
BACKUP(DAILY)
Not sure
1440; it’s bombing on time.
Seymour, you live in Virginia and never worked for the IRS? You can’t be that far
from NCFB. The code here is all Assembler.
Many large VB files.
> On Oct 7, 2020, at 2:52 PM, Seymour J Metz wrote:
>
> The limit is the same for static and dynamic allocation.
>
> The
The limit is the same for static and dynamic allocation.
The limit is higher for extended TIOT.
What TIME did you specify on JOB and EXEC?
What DYNAMNBR did you specify on EXEC?
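For reference, a minimal sketch of where those parameters live (job, program, and library names here are made up):

```jcl
//SCANJOB  JOB (ACCT),'VB SCAN',CLASS=A,TIME=1440
//STEP1    EXEC PGM=MYSCAN,TIME=1440,DYNAMNBR=200
//STEPLIB  DD DISP=SHR,DSN=MY.LOAD.LIBRARY
```

TIME=1440 is the traditional "no CPU limit" spelling (TIME=NOLIMIT is the modern equivalent), and DYNAMNBR reserves allocation slots for dynamically allocated datasets beyond the coded DD statements.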
--
Shmuel (Seymour J.) Metz
http://mason.gmu.edu/~smetz3
From: IBM
Using a global change command would work in, e.g., SuperWylbur, but the change
command in ISPF doesn't have the requisite functionality.
--
Shmuel (Seymour J.) Metz
http://mason.gmu.edu/~smetz3
From: IBM Mainframe Discussion List on behalf of
Jeremy
I am wondering if others on this list have made the transition from Retention
Method(VRSEL) to Retention Method(EXPDT)? We have done so but ran into some
unexpected behaviors, and I am just trying to get an idea of what the experience was
like for others who have done this. If you also took
Thank you I did not know that
> On Oct 7, 2020, at 2:13 PM, DAL POS Raphael
> wrote:
>
> Hi Joseph,
>
> Ref :
> 1) I cannot allocate this many datasets to
> a job step; that includes using SVC 99
>
> This is not true.
>
> When using SVC99 you can use S99TIOEX flag to use
Hi Joseph,
Ref :
1) I cannot allocate this many datasets to
a job step; that includes using SVC 99
This is not true.
When using SVC99 you can use the S99TIOEX flag to get an extended TIOT. You will need to
run authorized for this.
If you are not running authorized you can use the S99ACUCB flag instead.
> 2) The job step times out because I have reached a 5 minute CPU time limit
> on the job step
>
This is a site and environment choice. Use a job class (or however your site
controls this) with a greater limit, or no time limit.
--
Joseph,
I know for a fact that there are job classes that are available for long
running jobs. Have you tried to use those?
You'll want to read up on Syncsort from Precisely, as we are not licensed for
DFSORT from IBM.
Ken Mazer
IRS Systems Programmer 25 years
-Original Message-
From:
I am not an expert, but I would think the "5 CPU minutes/job" limit would apply
irrespective of whether the job-step program were DFSORT or your homegrown
program. I think you are going to have to get some sort of special dispensation
from your WLM or similar sysprogs.
I suppose you might get around the
There are two main issues here:
1) I cannot allocate this many datasets to
a job step; that includes using SVC 99
2) The job step times out because I have reached a 5-minute CPU time limit on
the job step
Sri, from my understanding, said DFSORT can overcome these two problems.
I’m
On Wed, 7 Oct 2020, at 18:06, Joseph Reichman wrote:
> I work for the IRS I have to search thru year 2020 data that’s 4,467
> files about 240,000 records per file and a record length could be
> 10,000 bytes
> VB files
And you've said that multiple times. No-one cares who you work
for, but we
I work for the IRS I have to search thru year 2020 data that’s 4,467 files
about 240,000 records per file and a record length could be 10,000 bytes
VB files
> On Oct 7, 2020, at 1:03 PM, Sri h Kolusu wrote:
>
>
>>
>> Yes at this point but since the file is variable
>> I may need an exit
> Yes at this point but since the file is variable
> I may need an exit to get the right spot at times to do a compare
Joseph,
You still haven't explained to us what the real requirement is. DFSORT
can handle VB files with ease. A substring search will make sure you can
search anywhere within
[Default] On 7 Oct 2020 08:50:04 -0700, in bit.listserv.ibm-main
reichman...@gmail.com (Joseph Reichman) wrote:
>Yes at this point but since the file is variable
>I may need an exit to get the right spot at times to do a compare
>There are 4,644 files an average of 240,000 records the file is
Yes at this point but since the file is variable
I may need an exit to get the right spot at times to do a compare
There are 4,644 files with an average of 240,000 records; the file is VB and the record
size can be 10,000, rough estimates
> On Oct 7, 2020, at 11:44 AM, Sri h Kolusu wrote:
>
>
>>
Thanks a lot, Kolusu, it worked like a charm :)
Mazimo, the solution provided did not work for me.
Regards
Ron T
>> There is a maximum of 5 min CPU time for job step
>> In order to increase the TIOT the allocxx member had to be modified
You don't have to change the TIOT limit; we can cap the concatenation limit to
whatever value we decide. Since you only have 5 minutes of CPU time for each
job, we probably can
Thanks, good to know; my job is timing out.
There is a maximum of 5 minutes CPU time for a job step.
In order to increase the TIOT, the ALLOCxx member had to be modified.
> On Oct 7, 2020, at 11:22 AM, Sri h Kolusu wrote:
>
>
>>
>> You may be surprised at how much SORT can do for you though.
>
>
> You may be surprised at how much SORT can do for you though.
SORT can easily accomplish this by generating JCL on the fly for the 4,000+
datasets. The maximum number of DDs per job is 3,273 (assuming the TIOT is
64K). So DFSORT can take a list of the datasets and generate 3 different
jobs and
> Here , i want to add the header in the OUT01-05 files . is there a
> way we can do in the same step.
Ron,
You used KEYBEGIN on 40,5; the header record will have group number 1
and the detail records will have group numbers starting at 2. So if your
input file always has a header, then you
Ron,
If you mean the very same header record of the file: if there's a condition
that allows you to recognize it, for instance if the header record is
the only one with a double blank at 651:
SORT FIELDS=COPY
OUTREC IFTHEN=(WHEN=GROUP,KEYBEGIN=(40,5),PUSH=(651:ID=2))
OUTFIL
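A hedged guess at how the truncated example above would continue, assuming fixed-length records of at least 650 bytes (the positions and OUT01/OUT02 ddnames are illustrative): once PUSH has stamped a two-digit group ID at position 651, each OUTFIL can select its groups and then strip the stamp:

```jcl
  SORT FIELDS=COPY
  OUTREC IFTHEN=(WHEN=GROUP,KEYBEGIN=(40,5),PUSH=(651:ID=2))
* SELECT EACH GROUP BY THE PUSHED ID, THEN DROP THE
* 2-BYTE STAMP AT 651 WITH BUILD
  OUTFIL FNAMES=OUT01,INCLUDE=(651,2,ZD,EQ,1),BUILD=(1,650)
  OUTFIL FNAMES=OUT02,INCLUDE=(651,2,ZD,EQ,2),BUILD=(1,650)
```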
On Wed, 7 Oct 2020, at 14:49, Paul Gilmartin wrote:
> On Wed, 7 Oct 2020 13:45:04 +0100, Jeremy Nicoll wrote:
> >...
> >Alternatively, maybe you never wrote any edit macros in anything other
> >than REXX? ISTR that one could use any SAA language, eg COBOL or
> >Assembler, apart from
On Wed, 7 Oct 2020 10:59:55 +0100, Jeremy Nicoll wrote:
>...
>How much of what the macro was doing was "glue logic" (if it was
>in REXX) or scanning through the file line by line, compared with
>calling editor commands (which one would expect to be fairly
>efficient)?
On Wed, 7 Oct 2020
All our IEASYS00 contains is CLPA, meaning that we always IPL with CLPA.
At the minimum, you would think that IEASYS00 should have in it whatever
stuff you want for all users of that IEASYS00.
In Barbara's case, the only thing is CLPA.
Many would have in it the "site defaults" that they
On Wed, 7 Oct 2020, at 15:40, Robert Prins wrote:
> On 2020-10-07 11:00, Jeremy Nicoll wrote:
> > On Wed, 7 Oct 2020, at 04:03, Wayne Bickerdike wrote:
> >
> >> On a different note. I just compared EDIT macro performance versus
> >> IPOUPDTE. IPOUPDTE was about 600 times faster.
> >
> > Is that
Hello-
I have a file which has a header and detail records. I want to split the file
based on a value; below is what I have coded:
//SPLIT   EXEC PGM=SORT
//SYSOUT  DD SYSOUT=*
//SORTIN  DD DSN=PYU678S.ITR1.FUTR.RTLDTA.UNLOAD,DISP=SHR
//OUT01   DD SYSOUT=*
//OUT02   DD SYSOUT=*
Thanks for the response
The average number of records in a file could be 240,000, each record could be
close to 10,000 bytes, and there are 4,644 files.
Trying to find a certain type of data
(doing testing for the new filing season, IRS).
Thanks
> On Oct 7, 2020, at 7:32 AM, Robert Prins wrote:
>
On 2020-10-07 11:00, Jeremy Nicoll wrote:
On Wed, 7 Oct 2020, at 04:03, Wayne Bickerdike wrote:
On a different note. I just compared EDIT macro performance versus
IPOUPDTE. IPOUPDTE was about 600 times faster.
Is that a macro written in Assembler, or REXX?
It's an IBM program, a
On Wed, 7 Oct 2020, at 04:03, Wayne Bickerdike wrote:
> On a different note. I just compared EDIT macro performance versus
> IPOUPDTE. IPOUPDTE was about 600 times faster.
Is that a macro written in Assembler, or REXX?
How much of what the macro was doing was "glue logic" (if it was
in REXX) or
On 2020-10-07 03:03, Wayne Bickerdike wrote:
Give us an idea of how big each file is. OPEN/CLOSE is expensive. QSAM with
large buffers should go pretty quickly.
LOCATE instead of MOVE mode can speed things up when you are reading.
On a different note. I just compared EDIT macro performance