Re: ASG Workload Scheduler?

2019-05-03 Thread Tim Hare
"the big diversion"?

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: COBOL 6.2 and ARCH(12)

2019-05-03 Thread TSpina
SmartTest  giving you problems?

Missing me?

Most of the technical guys are gone.  I think the only guy left is Ken someone 
or other.  If it's Frank A., you're f'd.

If you've got a name, let me know.

Tom Spina

On May 3, 2019, at 3:57 PM, Brian Chapman  wrote:

We have a vendor debugging product that is constantly causing 0C1 and 0C4
abends since we have upgraded to COBOL 6.2. It also caused these abends
when we were at COBOL 4.2, but the abend rate has grown considerably after
the upgrade.

The vendor has produced countless patches, but so far they have not
resolved the issues. We were notified today that they believe they
understand the issue. They are stating that even though our COBOL compiler
is set with ARCH(8) (to support our DRE machine), LE run-time is
recognizing that the program is COBOL 6.2, running on a z14, and
automatically switching the ARCH level to ARCH(12). They believe the run-time
execution is exploiting the new Vector Packed Decimal Facility and
producing erratic behavior.

I searched through several presentations and IBM manuals for COBOL 6.2, and
everything I have found states that a recompile with ARCH(12) is required
to take advantage of the new facility. Is the vendor correct?



Thank you,

Brian Chapman

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: COBOL 6.2 and ARCH(12)

2019-05-03 Thread Mike Schwab
Is the abend in the user compiled instructions?  Then check the
compiler processor settings.

Is the abend in the vendor compiled libraries or included subroutines?
 Then check the vendor's subroutine / runtime libraries.

On Fri, May 3, 2019 at 6:52 PM Charles Mills  wrote:
>
> I think I disagree.
>
> You compile the program for ARCH(8). IBM guarantees that it will run on a z10 
> (do I have that right?). They do NOT guarantee that the program plus LE will 
> behave on a z14 exactly as though it were running on a z10.
>
> No matter what ARCH the program were compiled for, I would expect that LE 
> running on a z14 might well exploit the actual hardware. I would be kind of 
> unhappy if it did NOT.
>
> The vendor product either supports z14's or it does not. If they do not 
> support z14 instructions, they should admit that they do not.
>
> > If LE really is doing this, why even have an ABO product
>
> To update ("optimize") the *compiled* object code. The OS-resident 
> support/library modules (LE) are a different matter. They are already (I am 
> guessing) at a current level.
>
> What is the z/OS release? I would expect LE to be built for the lowest level 
> hardware that that release supported, but LE might be clever enough to 
> dual-path, and I think that would be a good thing.
>
> Charles
>
>
> -Original Message-
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On 
> Behalf Of Mark Zelden
> Sent: Friday, May 3, 2019 3:35 PM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: Re: COBOL 6.2 and ARCH(12)
>
> On Fri, 3 May 2019 15:57:34 -0400, Brian Chapman  wrote:
>
> >We have a vendor debugging product that is constantly causing 0C1 and 0C4
> >abends since we have upgraded to COBOL 6.2. It also caused these abends
> >when we were at COBOL 4.2, but the abend rate has grown considerably after
> >the upgrade.
> >
> >The vendor has produced countless patches, but so far they have not
> >resolved the issues. We were notified today that they believe they
> >understand the issue. They are stating that even though our COBOL compiler
> >is set with ARCH(8) (to support our DRE machine), LE run-time is
> >recognizing that the program is COBOL 6.2, running on a z14, and
> >automatically switching the ARCH level to ARCH(12). They believe the run-time
> >execution is exploiting the new Vector Packed Decimal Facility and
> >producing erratic behavior.
> >
> >I searched through several presentations and IBM manuals for COBOL 6.2, and
> >everything I have found states that a recompile with ARCH(12) is required
> >to take advantage of the new facility. Is the vendor correct?
> >
> >
>
> I've never heard of that and I wouldn't expect IBM to ever do something like 
> that, but heck, what do I know.  ;-)   LE shouldn't be trying to outsmart the 
> person that compiled the code (IMHO).
>
> 1) Have you verified the options in a compile listing are as you expected?
>
> 2) Are you running ABO and could that be involved?  Although I know nothing 
> about configuring ABO (I have never "seen" or used it), even if you were I 
> wouldn't think you would have it configured to use z14 instructions.
>
> If LE really is doing this, why even have an ABO product?   I certainly would 
> open an SR with IBM LE support about it.
>
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN



-- 
Mike A Schwab, Springfield IL USA
Where do Forest Rangers go to get away from it all?

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: COBOL 6.2 and ARCH(12)

2019-05-03 Thread Charles Mills
I think I disagree.

You compile the program for ARCH(8). IBM guarantees that it will run on a z10 
(do I have that right?). They do NOT guarantee that the program plus LE will 
behave on a z14 exactly as though it were running on a z10.

No matter what ARCH the program were compiled for, I would expect that LE 
running on a z14 might well exploit the actual hardware. I would be kind of 
unhappy if it did NOT.

The vendor product either supports z14's or it does not. If they do not 
support z14 instructions, they should admit that they do not. 

> If LE really is doing this, why even have an ABO product

To update ("optimize") the *compiled* object code. The OS-resident 
support/library modules (LE) are a different matter. They are already (I am 
guessing) at a current level.

What is the z/OS release? I would expect LE to be built for the lowest level 
hardware that that release supported, but LE might be clever enough to 
dual-path, and I think that would be a good thing.

Charles


-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Mark Zelden
Sent: Friday, May 3, 2019 3:35 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: COBOL 6.2 and ARCH(12)

On Fri, 3 May 2019 15:57:34 -0400, Brian Chapman  wrote:

>We have a vendor debugging product that is constantly causing 0C1 and 0C4
>abends since we have upgraded to COBOL 6.2. It also caused these abends
>when we were at COBOL 4.2, but the abend rate has grown considerably after
>the upgrade.
>
>The vendor has produced countless patches, but so far they have not
>resolved the issues. We were notified today that they believe they
>understand the issue. They are stating that even though our COBOL compiler
>is set with ARCH(8) (to support our DRE machine), LE run-time is
>recognizing that the program is COBOL 6.2, running on a z14, and
>automatically switching the ARCH level to ARCH(12). They believe the run-time
>execution is exploiting the new Vector Packed Decimal Facility and
>producing erratic behavior.
>
>I searched through several presentations and IBM manuals for COBOL 6.2, and
>everything I have found states that a recompile with ARCH(12) is required
>to take advantage of the new facility. Is the vendor correct?
>
>

I've never heard of that and I wouldn't expect IBM to ever do something like 
that, but heck, what do I know.  ;-)   LE shouldn't be trying to outsmart the 
person that compiled the code (IMHO).

1) Have you verified the options in a compile listing are as you expected?

2) Are you running ABO and could that be involved?  Although I know nothing 
about configuring ABO (I have never "seen" or used it), even if you were I 
wouldn't think you would have it configured to use z14 instructions.

If LE really is doing this, why even have an ABO product?   I certainly would 
open an SR with IBM LE support about it.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: COBOL 6.2 and ARCH(12)

2019-05-03 Thread Mark Zelden
On Fri, 3 May 2019 15:57:34 -0400, Brian Chapman  wrote:

>We have a vendor debugging product that is constantly causing 0C1 and 0C4
>abends since we have upgraded to COBOL 6.2. It also caused these abends
>when we were at COBOL 4.2, but the abend rate has grown considerably after
>the upgrade.
>
>The vendor has produced countless patches, but so far they have not
>resolved the issues. We were notified today that they believe they
>understand the issue. They are stating that even though our COBOL compiler
>is set with ARCH(8) (to support our DRE machine), LE run-time is
>recognizing that the program is COBOL 6.2, running on a z14, and
>automatically switching the ARCH level to ARCH(12). They believe the run-time
>execution is exploiting the new Vector Packed Decimal Facility and
>producing erratic behavior.
>
>I searched through several presentations and IBM manuals for COBOL 6.2, and
>everything I have found states that a recompile with ARCH(12) is required
>to take advantage of the new facility. Is the vendor correct?
>
>

I've never heard of that and I wouldn't expect IBM to ever do something like 
that, but heck, what do I know.  ;-)   LE shouldn't be trying to outsmart the 
person that compiled the code (IMHO).

1) Have you verified the options in a compile listing are as you expected?

2) Are you running ABO and could that be involved?  Although I know nothing 
about configuring ABO (I have never "seen" or used it), even if you were I 
wouldn't think you would have it configured to use z14 instructions.

If LE really is doing this, why even have an ABO product?   I certainly would 
open an SR with IBM LE support about it.

Best Regards,

Mark
--
Mark Zelden - Zelden Consulting Services - z/OS, OS/390 and MVS
ITIL v3 Foundation Certified
mailto:m...@mzelden.com
Mark's MVS Utilities: http://www.mzelden.com/mvsutil.html
Systems Programming expert at http://search390.techtarget.com/ateExperts/
--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: COBOL 6.2 and ARCH(12) [EXTERNAL]

2019-05-03 Thread Brian Chapman
This article from IBM agrees with your thoughts and everything else I've
read. I can't find anything that confirms the vendor's statement.

https://www-01.ibm.com/common/ssi/cgi-bin/ssialias?subtype=ca=an=897=ENUS217-323


On Fri, May 3, 2019, 5:20 PM Feller, Paul 
wrote:

> It is my understanding that if you set the ARCH level to something lower
> than the machine type you are running on, it should not use any of the new
> machine instructions.  If what the vendor says is truly what is happening
> then I would think a question to IBM would be in order.
>
> Thanks..
>
> Paul Feller
> AGT Mainframe Technical Support
>
> -Original Message-
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On
> Behalf Of Brian Chapman
> Sent: Friday, May 03, 2019 2:58 PM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: COBOL 6.2 and ARCH(12) [EXTERNAL]
>
> We have a vendor debugging product that is constantly causing 0C1 and 0C4
> abends since we have upgraded to COBOL 6.2. It also caused these abends
> when we were at COBOL 4.2, but the abend rate has grown considerably after
> the upgrade.
>
> The vendor has produced countless patches, but so far they have not
> resolved the issues. We were notified today that they believe they
> understand the issue. They are stating that even though our COBOL compiler
> is set with ARCH(8) (to support our DRE machine), LE run-time is
> recognizing that the program is COBOL 6.2, running on a z14, and
> automatically switching the ARCH level to ARCH(12). They believe the run-time
> execution is exploiting the new Vector Packed Decimal Facility and
> producing erratic behavior.
>
> I searched through several presentations and IBM manuals for COBOL 6.2,
> and everything I have found states that a recompile with ARCH(12) is
> required to take advantage of the new facility. Is the vendor correct?
>
>
>
> Thank you,
>
> Brian Chapman
>
> --
> For IBM-MAIN subscribe / signoff / archive access instructions, send email
> to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
>
> --
> Please note:  This message originated outside your organization. Please
> use caution when opening links or attachments.
>
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
>

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: ASG Workload Scheduler?

2019-05-03 Thread Edward Finnell
We were early users of Smart Scheduler. It was a nice product and George Elliot 
was a whiz. He enhanced my knowledge several times on Ibm-main. Died of a 
massive coronary at age 38. We kept it thru the turmoil until the big 
diversion. 

In a message dated 5/3/2019 3:33:51 PM Central Standard Time, 
haresystemssupp...@comcast.net writes:
Just looking to see if there are others to SHARE stuff with.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: COBOL 6.2 and ARCH(12) [EXTERNAL]

2019-05-03 Thread Feller, Paul
It is my understanding that if you set the ARCH level to something lower than 
the machine type you are running on, it should not use any of the new machine 
instructions.  If what the vendor says is truly what is happening, then I would 
think a question to IBM would be in order.

Thanks..

Paul Feller
AGT Mainframe Technical Support

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Brian Chapman
Sent: Friday, May 03, 2019 2:58 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: COBOL 6.2 and ARCH(12) [EXTERNAL]

We have a vendor debugging product that is constantly causing 0C1 and 0C4 
abends since we have upgraded to COBOL 6.2. It also caused these abends when we 
were at COBOL 4.2, but the abend rate has grown considerably after the upgrade.

The vendor has produced countless patches, but so far they have not resolved 
the issues. We were notified today that they believe they understand the issue. 
They are stating that even though our COBOL compiler is set with ARCH(8) (to 
support our DRE machine), LE run-time is recognizing that the program is COBOL 
6.2, running on a z14, and automatically switching the ARCH level to ARCH(12). 
They believe the run-time execution is exploiting the new Vector Packed Decimal 
Facility and producing erratic behavior.

I searched through several presentations and IBM manuals for COBOL 6.2, and 
everything I have found states that a recompile with ARCH(12) is required to 
take advantage of the new facility. Is the vendor correct?



Thank you,

Brian Chapman

--
For IBM-MAIN subscribe / signoff / archive access instructions, send email to 
lists...@listserv.ua.edu with the message: INFO IBM-MAIN

--
Please note:  This message originated outside your organization. Please use 
caution when opening links or attachments.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Creating tar file for use on Linux

2019-05-03 Thread Grant Taylor

On 5/3/19 2:13 PM, Don Poitras wrote:

> Well, no one told me till today. :)


Better late than never?


> Seriously, what's wrong with scp?


10 hack
20 kludge
30 goto 10

My understanding is that scp uses a terminal connection between scp on 
one end talking to scp as a remote command on the other end, through the 
same type of connection that you would ssh through.  Control and data 
are mixed.  I'm not confident that it's true 8-bit clean.  (Think escape 
sequence.)


Conversely sftp actually establishes multiple separate channels in the 
ssh connection.  Control and data are independent.  The data is 8-bit clean.


I've found that scp works > 95% of the time for me.  But there are 
exceptionally rare (for me) cases where I have to use sftp.



> The problem with sftp is that it's interactive.


Yes, sftp is meant to be a drop-in replacement for interactive ftp.

It's also possible to pull / get files via sftp non-interactively.

sftp : 

I don't remember the syntax to push / put files remotely at the moment. 
But I'd be shocked if there wasn't a way to do it.
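For the record, OpenSSH's sftp takes -b to replay a batch of commands
non-interactively, which covers the push/put case too. A minimal sketch (the
host and remote path here are placeholders, not anything from this thread):

```shell
# Write the sftp commands to a batch file; sftp -b replays them without
# prompting, so the transfer can run from a script or cron job.
cat > /tmp/put.batch <<'EOF'
cd /u/incoming
put inc.tar
bye
EOF

# The actual transfer needs a reachable host, so it is shown commented out:
#   sftp -b /tmp/put.batch user@linuxhost
cat /tmp/put.batch
```

With `-b -` the batch can also be fed on stdin, which avoids the temp file.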


> I can write scripts where I call scp over and over with a syntax that 
> is simple and intuitive.


Yep.

sftp can also pull / get files the same way.


> Everywhere _but_ z/OS, it is the most useful way I know to transfer files.


I use scp extensively.  I just do so knowing that there are corner cases 
that can bite.




--
Grant. . . .
unix || die

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: COBOL 6.2 and ARCH(12)

2019-05-03 Thread Steve Thompson
Possibly. LE library routines may be smart enough to do that. But the compiler 
can't do that in the case where you compiled on a z14 to run on a lower-level 
supported architecture. 

Sent from my iPhone — small keyboarf, fat fungrs, stupd spell manglr. Expct 
mistaks 


> On May 3, 2019, at 3:57 PM, Brian Chapman  wrote:
> 
> We have a vendor debugging product that is constantly causing 0C1 and 0C4
> abends since we have upgraded to COBOL 6.2. It also caused these abends
> when we were at COBOL 4.2, but the abend rate has grown considerably after
> the upgrade.
> 
> The vendor has produced countless patches, but so far they have not
> resolved the issues. We were notified today that they believe they
> understand the issue. They are stating that even though our COBOL compiler
> is set with ARCH(8) (to support our DRE machine), LE run-time is
> recognizing that the program is COBOL 6.2, running on a z14, and
> automatically switching the ARCH level to ARCH(12). They believe the run-time
> execution is exploiting the new Vector Packed Decimal Facility and
> producing erratic behavior.
> 
> I searched through several presentations and IBM manuals for COBOL 6.2, and
> everything I have found states that a recompile with ARCH(12) is required
> to take advantage of the new facility. Is the vendor correct?
> 
> 
> 
> Thank you,
> 
> Brian Chapman
> 
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Creating tar file for use on Linux

2019-05-03 Thread Don Poitras
Yes, the z/OS scp is BAD. Grant seemed to think that scp on other platforms
was also not-good. That's what I was asking about.

In article <71e3e36b-a792-4307-b6b8-67f2ced4a...@hogstrom.org> you wrote:
> IBM's OpenSSH implementation "attempted" to fix transfers via scp by 
> treating all files as character data and doing a code conversion from 
> 1047 to 8859 or some such nonsense.  
> scp will not work without some calisthenics that are just plain frustrating 
> but Z makes sure it will be consistently frustrating for 50 years so there is 
> that :)   Don't expect a fix.
> Use sftp using binary transfer and your life will be better (not perfect, 
> there are code pages 1047, 37, … 
> Matt Hogstrom
> m...@hogstrom.org
> +1-919-656-0564
> PGP Key: 0x90ECB270
> "It may be cognitive, but, it ain't intuitive."
> — Hogstrom
> > On May 3, 2019, at 4:13 PM, Don Poitras  wrote:
> > 
> > In article 
> >  you 
> > wrote:
> >> On 5/3/19 1:00 PM, Don Poitras wrote:
> >>> z/OS scp is BAD. There's no way to tell it to do binary without doing 
> >>> something like Paul's piping conniptions.
> >> Many will tell you that scp itself is not-good and that you should use 
> >> sftp instead.
> >> Perhaps z/OS's scp is worse than scp by itself.
> >> -- 
> >> Grant. . . .
> >> unix || die
> > 
> > Well, no one told me till today. :) Seriously, what's wrong with scp?
> > The problem with sftp is that it's interactive. I can write scripts where
> > I call scp over and over with a syntax that is simple and intuitive. 
> > Everywhere _but_ z/OS, it is the most useful way I know to transfer 
> > files.

-- 
Don Poitras - SAS Development  -  SAS Institute Inc. - SAS Campus Drive
sas...@sas.com   (919) 531-5637Cary, NC 27513

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


ASG Workload Scheduler?

2019-05-03 Thread Tim Hare
I'm curious about how many here run ASG's Workload Scheduler, which was 
Beta-42, and before that was created by Pecan.  The shop I'm working for still 
runs it and has been happy with it for years, though they've asked me at times 
to do some specialized reporting about tasks and schedules (which I manage to 
do with SORT for the most part) that isn't part of the product.  

Just looking to see if there are others to SHARE stuff with.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Creating tar file for use on Linux

2019-05-03 Thread Matt Hogstrom
IBM's OpenSSH implementation "attempted" to fix transfers via scp by treating 
all files as character data and doing a code conversion from 1047 to 8859 
or some such nonsense.  

scp will not work without some calisthenics that are just plain frustrating but 
Z makes sure it will be consistently frustrating for 50 years so there is that 
:)   Don’t expect a fix.

Use sftp using binary transfer and your life will be better (not perfect, there 
are code pages 1047, 37, … 

Matt Hogstrom
m...@hogstrom.org
+1-919-656-0564
PGP Key: 0x90ECB270


“It may be cognitive, but, it ain’t intuitive."
— Hogstrom

> On May 3, 2019, at 4:13 PM, Don Poitras  wrote:
> 
> In article  
> you wrote:
>> On 5/3/19 1:00 PM, Don Poitras wrote:
>>> z/OS scp is BAD. There's no way to tell it to do binary without doing 
>>> something like Paul's piping conniptions.
>> Many will tell you that scp itself is not-good and that you should use 
>> sftp instead.
>> Perhaps z/OS's scp is worse than scp by itself.
>> -- 
>> Grant. . . .
>> unix || die
> 
> Well, no one told me till today. :) Seriously, what's wrong with scp?
> The problem with sftp is that it's interactive. I can write scripts where
> I call scp over and over with a syntax that is simple and intuitive. 
> Everywhere _but_ z/OS, it is the most useful way I know to transfer 
> files.
> 
> -- 
> Don Poitras - SAS Development  -  SAS Institute Inc. - SAS Campus Drive
> sas...@sas.com   (919) 531-5637Cary, NC 27513
> 
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Creating tar file for use on Linux

2019-05-03 Thread Don Poitras
In article  
you wrote:
> On 5/3/19 1:00 PM, Don Poitras wrote:
> > z/OS scp is BAD. There's no way to tell it to do binary without doing 
> > something like Paul's piping conniptions.
> Many will tell you that scp itself is not-good and that you should use 
> sftp instead.
> Perhaps z/OS's scp is worse than scp by itself.
> -- 
> Grant. . . .
> unix || die

Well, no one told me till today. :) Seriously, what's wrong with scp?
The problem with sftp is that it's interactive. I can write scripts where
I call scp over and over with a syntax that is simple and intuitive. 
Everywhere _but_ z/OS, it is the most useful way I know to transfer 
files.
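The scripted pattern Don describes can be sketched as below; the commands are
echoed rather than run because a real host is needed, and the host name and
remote path are placeholders:

```shell
# Stage a couple of files, then issue one scp per file in a loop.
mkdir -p /tmp/headers
touch /tmp/headers/a.h /tmp/headers/b.h

for f in /tmp/headers/*.h; do
  # Drop the echo to actually copy; user@linuxhost is a placeholder.
  echo scp "$f" "user@linuxhost:/u/incoming/"
done
```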

-- 
Don Poitras - SAS Development  -  SAS Institute Inc. - SAS Campus Drive
sas...@sas.com   (919) 531-5637Cary, NC 27513

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Concatenation and UNIT=(,n)

2019-05-03 Thread Tim Hare
Figuring that this is 'working as designed',  I have started a SHARE 
requirement, where discussion could take place.  My suggested improvement is 
that for multi-unit, multi-volume datasets (and I realize this probably mostly 
applies to tape) a consolidated list of volumes for the concatenated datasets 
be constructed, and the mounts happen from that list.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


COBOL 6.2 and ARCH(12)

2019-05-03 Thread Brian Chapman
We have a vendor debugging product that is constantly causing 0C1 and 0C4
abends since we have upgraded to COBOL 6.2. It also caused these abends
when we were at COBOL 4.2, but the abend rate has grown considerably after
the upgrade.

The vendor has produced countless patches, but so far they have not
resolved the issues. We were notified today that they believe they
understand the issue. They are stating that even though our COBOL compiler
is set with ARCH(8) (to support our DRE machine), LE run-time is
recognizing that the program is COBOL 6.2, running on a z14, and
automatically switching the ARCH level to ARCH(12). They believe the run-time
execution is exploiting the new Vector Packed Decimal Facility and
producing erratic behavior.

I searched through several presentations and IBM manuals for COBOL 6.2, and
everything I have found states that a recompile with ARCH(12) is required
to take advantage of the new facility. Is the vendor correct?



Thank you,

Brian Chapman

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Creating tar file for use on Linux

2019-05-03 Thread Grant Taylor

On 5/3/19 1:00 PM, Don Poitras wrote:
> z/OS scp is BAD. There's no way to tell it to do binary without doing 
> something like Paul's piping conniptions.


Many will tell you that scp itself is not-good and that you should use 
sftp instead.


Perhaps z/OS's scp is worse than scp by itself.



--
Grant. . . .
unix || die

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Creating tar file for use on Linux

2019-05-03 Thread Paul Gilmartin
On Fri, 3 May 2019 15:00:04 -0400, Don Poitras wrote:
>
>z/OS scp is BAD. There's no way to tell it to do binary without doing
>something like Paul's piping conniptions.
> 
With conniptioned ssh, you can archive, transfer, and extract in a single
ugly command, either push or pull.

On the first page of the Ref. for tar, I see:
You cannot use tar unless you specify -f.

Ouch!  That's so idiosyncratic.  Is it even true?  (IIRC, POSIX no
longer specifies tar, only pax.)

z/OS pax provides EBCDIC->ASCII conversion (and many others).
That may be useful to the OP.
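A sketch of what that pax conversion might look like; the -o from=/to=
converter keywords are my reading of the z/OS pax documentation, so verify them
on your release. The command is echoed here because it only applies on z/OS
UNIX System Services:

```shell
# z/OS-only sketch: write a USTAR-format archive while converting member
# text from EBCDIC (IBM-1047) to ASCII (ISO8859-1). Echoed for display
# off-platform; remove the echo to run it on z/OS UNIX.
echo "pax -w -x ustar -o from=IBM-1047,to=ISO8859-1 -f inc.tar ./headers"
```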

-- gil

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Creating tar file for use on Linux

2019-05-03 Thread Don Poitras
In article <0294809266169973.wa.nealesinenomine@listserv.ua.edu> you wrote:
> I use scp which I assumed defaulted to binary. So I did it with sftp and 
> explicitly used binary and all was good. The scp/sftp utility we wrote for 
> CMS defaults to binary so I had made an incorrect assumption. Thanks all for 
> the help.
> Neale

z/OS scp is BAD. There's no way to tell it to do binary without doing
something like Paul's piping conniptions.

-- 
Don Poitras - SAS Development  -  SAS Institute Inc. - SAS Campus Drive
sas...@sas.com   (919) 531-5637Cary, NC 27513

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Appending timestamp to the file

2019-05-03 Thread retired mainframer
I'm confused.

How does a BUILD statement with 12 comma constants produce output which 
contains only 3 commas?

> -Original Message-
> From: IBM Mainframe Discussion List  On
> Behalf Of Ron Thomas
> Sent: Friday, May 03, 2019 9:53 AM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: Appending timestamp to the file
> 
> Hi . We are appending the timestamp to a CSV file and when we do the same  we 
> see
> the microsecond part is different  for all the rows . Is there a way to make 
> it same for
> all of the rows ?
> 
> Here below is the control card we used
> 
> OUTREC PARSE=(%01=(ENDBEFR=C'|',FIXLEN=4),
>   %02=(ENDBEFR=C'|',FIXLEN=9),
>   %03=(ENDBEFR=C'|',FIXLEN=7),
>   %04=(ENDBEFR=C'|',FIXLEN=11),
>   %05=(ENDBEFR=C'|',FIXLEN=2),
>   %06=(ENDBEFR=C'|',FIXLEN=4),
>   %07=(ENDBEFR=C'|',FIXLEN=3),
>   %08=(ENDBEFR=C'|',FIXLEN=2),
>   %09=(ENDBEFR=C'|',FIXLEN=10),
>   %10=(ENDBEFR=C'|',FIXLEN=10),
>   %11=(ENDBEFR=C'|',FIXLEN=05)),
> BUILD=(%01,C',',%02,C',',
>%03,C',',%04,C',',
>%05,C',',%06,C',',
>%07,C',',%08,C',',
>%09,C',',%10,C',',DATE=(4MD-),C'-',TIME(24.),C'.',SEQNUM,6,ZD,
>C',',%11,C',')
> 
> The o/p we are getting as follows
> 
> ITBR800X  ,IT800X,2019-05-03-11.44.40.01,33
> ITBR800X  ,IT800X,2019-05-03-11.44.40.02,33
> ITBR800X  ,IT800X,2019-05-03-11.44.40.03,33
> ITBR800X  ,IT800X,2019-05-03-11.44.40.04,33
> ITBR800X  ,IT800X,2019-05-03-11.44.40.05,33
> ITBR800X  ,IT800X,2019-05-03-11.44.40.06,33
> ITBR800X  ,IT800X,2019-05-03-11.44.40.07,33
> ITBR800X  ,IT800X,2019-05-03-11.44.40.08,33
> ITBR800X  ,IT800X,2019-05-03-11.44.40.09,33

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Creating tar file for use on Linux

2019-05-03 Thread Neale Ferguson
I use scp which I assumed defaulted to binary. So I did it with sftp and 
explicitly used binary and all was good. The scp/sftp utility we wrote for CMS 
defaults to binary so I had made an incorrect assumption. Thanks all for the 
help.

Neale

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Creating tar file for use on Linux

2019-05-03 Thread Paul Gilmartin
On Fri, 3 May 2019 13:26:15 -0500, Neale Ferguson wrote:

>When I scp or sftp the tar ball to the Linux system it complains that it 
>doesn't recognize the file as an archive:
>
>$ tar -tf inc.tar 
>tar: This does not look like a tar archive
>tar: Skipping to next header
>tar: Exiting with failure status due to previous errors
>
>I had created it with tar -cf inc.tar ./headers
> 
What does "cksum inc.tar" show at both ends?

Is this an ASCII<->EBCDIC problem?

Can you view it with a simple viewer at both ends?
If it appears to contain some text on z/OS and also on Linux,
it has been translated EBCDIC->ASCII.  That breaks it.
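The breakage gil describes is easy to reproduce off-platform. In this sketch a
byte-wise case swap stands in for an unwanted character-set translation of the
archive; all paths are scratch-file placeholders:

```shell
# Build a tiny archive, then mangle its bytes the way a stray codepage
# conversion would; tar rejects the mangled copy because the header
# magic and checksum no longer match.
mkdir -p /tmp/hdrs && echo 'int x;' > /tmp/hdrs/a.h
tar -cf /tmp/inc.tar -C /tmp hdrs

tr 'a-z' 'A-Z' < /tmp/inc.tar > /tmp/inc.bad   # stand-in for codepage damage

tar -tf /tmp/inc.tar            # lists the hdrs/ entries normally
tar -tf /tmp/inc.bad || echo "mangled archive rejected"
```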

(Is this something similar to C #include files or HLASM macros?)

-- gil

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Creating tar file for use on Linux

2019-05-03 Thread Mark Jacobs
Simple question. Did you ftp in binary mode?

Mark Jacobs


Sent from ProtonMail, Swiss-based encrypted email.

GPG Public Key - 
https://api.protonmail.ch/pks/lookup?op=get=markjac...@protonmail.com

‐‐‐ Original Message ‐‐‐
On Friday, May 3, 2019 2:26 PM, Neale Ferguson  wrote:

> When I scp or sftp the tar ball to the Linux system it complains that it 
> doesn't recognize the file as an archive:
>
> $ tar -tf inc.tar
> tar: This does not look like a tar archive
> tar: Skipping to next header
> tar: Exiting with failure status due to previous errors
>
> I had created it with tar -cf inc.tar ./headers
>
> Neale
>



Re: Creating tar file for use on Linux

2019-05-03 Thread Neale Ferguson
When I scp or sftp the tar ball to the Linux system it complains that it 
doesn't recognize the file as an archive:

$ tar -tf inc.tar 
tar: This does not look like a tar archive
tar: Skipping to next header
tar: Exiting with failure status due to previous errors

I had created it with tar -cf inc.tar ./headers

Neale



Re: Creating tar file for use on Linux

2019-05-03 Thread Paul Gilmartin
On Fri, 3 May 2019 13:12:21 -0400, Don Poitras wrote:
>
>You don't say what error you're receiving. If it's a permission problem,
>use the -o option to disable tar from passing the owner across. ...
>
>If I don't do that, the sender's UID is set and I can't delete the
>files and I have to get someone with root to delete them.
> 
I had such a problem with NFS; Solaris server, MVS client.
MVS client had no facility to prevent such ownership change,
assuming it was the responsibility of the server.  Solaris
could enforce the rule on either the server or the client.
Solaris admins chose to do it on the client and adamantly
refused to change.

Was tar running setuid?

Did the OP need EBCDIC->ASCII translation?  Pax on z/OS,
at least, can do that.

-- gil



Re: Crazy concatenation mystery

2019-05-03 Thread Paul Gilmartin
On Fri, 3 May 2019 14:03:49 +, Seymour J Metz wrote:

>In OS/360, IEBCOPY couldn't reblock load modules. In OS/VS, it could, with the 
>appropriate control statement.
> 
When IEBCOPY reblocks a module, does it leave any audit trail?  That
might be of interest in case of the OP's problem.

-- gil



Re: Appending timestamp to the file

2019-05-03 Thread Ron Thomas
Ok thanks Kolusu. That worked ..



Re: Creating tar file for use on Linux

2019-05-03 Thread Don Poitras
In article <68c2e717-9481-48c0-b6a2-855c33c44...@sinenomine.net> you wrote:
> What incantation of tar or pax do I need to use to create a tar ball under 
> Unix System Services such that it can be transferred to a Linux system and 
> untarred there? Just the straight -cf doesn't seem to do the trick nor 
> using the -U -X option.
> Neale

You don't say what error you're receiving. If it's a permission problem,
use the -o option to disable tar from passing the owner across. I do

tar -cvof a.tar dir

and untar on linux with

tar -xvf a.tar

with no issues. 

Whenever I extract a tar file on z/OS, I do the same kind of thing. e.g.

tar -xvof a.tar

If I don't do that, the sender's UID is set and I can't delete the
files and I have to get someone with root to delete them. 

-- 
Don Poitras - SAS Development  -  SAS Institute Inc. - SAS Campus Drive
sas...@sas.com   (919) 531-5637   Cary, NC 27513
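
Putting the pieces of this thread together, a minimal round trip might look like the following (a sketch: `headers` and `demo.h` are stand-ins, and the `-o`-suppresses-owner behavior shown is z/OS tar's as described above; GNU tar spells the same idea `--no-same-owner` on extract):

```shell
# Stand-in input directory (the thread's real one is ./headers on z/OS USS)
mkdir -p headers
printf '#define DEMO 1\n' > headers/demo.h

# On z/OS USS: create the archive; -o keeps the sender's owner info out of it
tar -cvof inc.tar ./headers

# Transfer inc.tar in BINARY mode (scp, or sftp/ftp 'binary'); a text-mode
# transfer translates EBCDIC->ASCII and breaks the archive.

# On Linux: list the contents, then extract
tar -tf inc.tar
mkdir -p unpacked
tar -xvf inc.tar -C unpacked
```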



Re: Appending timestamp to the file

2019-05-03 Thread Sri h Kolusu
>>> Hi . We are appending the timestamp to a CSV file and when we do the
same  we see the microsecond part is different  for all the rows .
> Is there a way to make it same for all of the rows ?


You created your own timestamp using an incrementing 6-byte sequence number
with the following parms:

DATE=(4MD-),C'-',TIME(24.),C'.',SEQNUM,6,ZD,

If you want a constant value, then remove the seqnum parm and add a
constant value of whatever you prefer.

I am wondering why you did not simply use the DATE5 parm for the
timestamp instead of building your own.

Thanks,
Kolusu
DFSORT Development
IBM Corporation
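
If a single constant is wanted instead, the BUILD from the original control card might be changed like this (an untested sketch; C'000000' is an arbitrary stand-in for the microseconds, not a value from the thread):

```
BUILD=(%01,C',',%02,C',',
   %03,C',',%04,C',',
   %05,C',',%06,C',',
   %07,C',',%08,C',',
   %09,C',',%10,C',',DATE=(4MD-),C'-',TIME(24.),C'.',C'000000',
   C',',%11,C',')
```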




Appending timestamp to the file

2019-05-03 Thread Ron Thomas
Hi. We are appending a timestamp to a CSV file, and when we do we see that 
the microsecond part is different for every row. Is there a way to make it 
the same for all of the rows?

Here below is the control card we used 

OUTREC PARSE=(%01=(ENDBEFR=C'|',FIXLEN=4),
  %02=(ENDBEFR=C'|',FIXLEN=9),
  %03=(ENDBEFR=C'|',FIXLEN=7),
  %04=(ENDBEFR=C'|',FIXLEN=11),
  %05=(ENDBEFR=C'|',FIXLEN=2),
  %06=(ENDBEFR=C'|',FIXLEN=4),
  %07=(ENDBEFR=C'|',FIXLEN=3),
  %08=(ENDBEFR=C'|',FIXLEN=2),
  %09=(ENDBEFR=C'|',FIXLEN=10),
  %10=(ENDBEFR=C'|',FIXLEN=10),
  %11=(ENDBEFR=C'|',FIXLEN=05)),
BUILD=(%01,C',',%02,C',',
   %03,C',',%04,C',',
   %05,C',',%06,C',',
   %07,C',',%08,C',',
   %09,C',',%10,C',',DATE=(4MD-),C'-',TIME(24.),C'.',SEQNUM,6,ZD,
   C',',%11,C',')

The o/p we are getting as follows

ITBR800X  ,IT800X,2019-05-03-11.44.40.01,33
ITBR800X  ,IT800X,2019-05-03-11.44.40.02,33
ITBR800X  ,IT800X,2019-05-03-11.44.40.03,33
ITBR800X  ,IT800X,2019-05-03-11.44.40.04,33
ITBR800X  ,IT800X,2019-05-03-11.44.40.05,33
ITBR800X  ,IT800X,2019-05-03-11.44.40.06,33
ITBR800X  ,IT800X,2019-05-03-11.44.40.07,33
ITBR800X  ,IT800X,2019-05-03-11.44.40.08,33
ITBR800X  ,IT800X,2019-05-03-11.44.40.09,33

Thanks in Advance

Regards
Ron T



Creating tar file for use on Linux

2019-05-03 Thread Neale Ferguson
What incantation of tar or pax do I need to use to create a tar ball under Unix 
System Services such that it can be transferred to a Linux system and untarred 
there? Just the straight -cf doesn’t seem to do the trick nor using the -U -X 
option.

Neale



Re: Query for article on testing mainframe systems, applications, networks [SEC=UNOFFICIAL]

2019-05-03 Thread g...@gabegold.com
Thanks -- that's GREAT, much appreciated. (The silence was giving me a 
headache!)

May I quote you, with attribution? Editor likes quotes and they can't be 
anonymous.



Re: Crazy concatenation mystery

2019-05-03 Thread Seymour J Metz
In OS/360, IEBCOPY couldn't reblock load modules. In OS/VS, it could, with the 
appropriate control statement.


--
Shmuel (Seymour J.) Metz
http://mason.gmu.edu/~smetz3


From: IBM Mainframe Discussion List  on behalf of 
Greg Price 
Sent: Friday, May 3, 2019 3:41 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: Crazy concatenation mystery

On 2019-05-03 12:15 PM, David Spiegel wrote:
> Steve said: "... but the received wisdom is that all load libraries should
> have blksize=32K-8. ..."
>
> For optimal space usage, however, the BLKSIZE should be 27998 (i.e. 
> half-track blocking).

You might think that, but for load modules, you have to realize that
in-between the text blocks (which could be 27998 bytes long in your
scenario) there are RLD and/or CTL records which means that no single
track could contain 2 full-sized text blocks.

Because of the "random" sizes of CSECTs and RLD usage (where "random"
means not really knowable at load library data set creation time) it is
not possible to know the best block size to use to minimize the disk
space used by a set of programs without doing some sort of analysis on
the load modules to be housed in that library.

I mention CSECT because once a text block has some data to the end of a
section, the next section will not be started in that block unless the
whole section will fit in that block. That is why you see short text
blocks even though there is plenty more object text that follows on.

And even though the linkage editor may make good use of remaining track
space, what happens when the blocks are shifted around by a data set copy
or a compress?

So, it may be that BLKSIZE=32760 really is the best advice. At least you
could reasonably hope to minimize the amount of disk space wasted on
inter-block gaps.  (Of course, inter-block gaps may well be emulated
away these days, but they still exist for 3390 CKD accounting purposes.)

And as for PDSE program object libraries - how about this?

If the BLKSIZE value doesn't matter in terms of how programs are stored
in the PDSE and fetched at run time, what about using BLKSIZE=4096 for
PDSE load libraries?

Why? Because if you browse a program object in a PDSE and scroll right,
you will notice that all of the blocks end at column 4096. So to read
that member you have acquired 32760-byte buffers when 4096-byte buffers
would have sufficed.
:)

In practice, 32760 for all program libraries is probably the best choice
to remove any block size hassles even if occasionally it causes more
storage to be used. After all, I keep hearing that storage is cheap.

Just my thoughts, of course...

Cheers,
Greg



Re: Crazy concatenation mystery

2019-05-03 Thread Seymour J Metz
I'm not sure that was  ever true for the LE, although it took IEBCOPY a while 
to catch up.


--
Shmuel (Seymour J.) Metz
http://mason.gmu.edu/~smetz3


From: IBM Mainframe Discussion List  on behalf of 
David Spiegel 
Sent: Friday, May 3, 2019 6:08 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: Crazy concatenation mystery

Hi Greg,
If someone uses BLKSIZE=32760, isn't it true that only one physical
block fits on a (emulated) 3390 track, thereby definitely wasting
(2*27998)-32760=23236 bytes per track (regardless of any Program Binder
considerations)?

Thanks and regards,
David





Re: Crazy concatenation mystery

2019-05-03 Thread Seymour J Metz
Neither BINDER nor the Linkage Editor write object modules, although they read 
them. 

Both fill the track for load modules, and block size is meaningless for 
program objects.


--
Shmuel (Seymour J.) Metz
http://mason.gmu.edu/~smetz3


From: IBM Mainframe Discussion List  on behalf of 
Mike Schwab 
Sent: Friday, May 3, 2019 6:53 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: Crazy concatenation mystery

Not for OBJECT modules.  The Binder calls a routine to determine the
remaining space on the track, rounds down to the next multiple of 1K,
and writes no more than that amount on that track.

On Fri, May 3, 2019 at 5:08 AM David Spiegel  wrote:
>
> Hi Greg,
> If someone uses BLKSIZE=32760, isn't it true that only one physical
> block fits on a (emulated) 3390 track, thereby definitely wasting
> (2*27998)-32760=23236 bytes per track (regardless of any Program Binder
> considerations)?
>
> Thanks and regards,
> David
>



--
Mike A Schwab, Springfield IL USA
Where do Forest Rangers go to get away from it all?



Re: Crazy concatenation mystery

2019-05-03 Thread Tom Marchant
On Fri, 3 May 2019 10:08:02 +, David Spiegel wrote:

>Hi Greg,
>If someone uses BLKSIZE=32760, isn't it true that only one physical 
>block fits on a (emulated) 3390 track, thereby definitely wasting 
>(2*27998)-32760=23236 bytes per track (regardless of any Program Binder 
>considerations)?

No, it isn't. John Eells described this in considerable detail a while ago. 
Look for it in the archives.

For load libraries, the optimal blocksize is 32760. This is optimal both for 
space utilization and for performance. In many cases, it may not be better 
than some other blocksize, but it is never worse.

The reason is that when the binder writes out a TXT record, it uses TRKBAL 
to determine how much space is available on the track. If necessary, it will 
adjust the record size so that it will use the remaining space.

With a 27K blocksize, a 30K load module will always require two blocks for 
the TXT. With a 32K blocksize, two blocks might be required if there is not 
enough space at the end of the track for it. But if there is enough space 
on the track, that module will require only one 30K block for the TXT.
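
Tom's 30K example is easy to check with a little ceiling arithmetic (an illustrative sketch only; real track capacity also involves CKD block-overhead formulas, which the binder consults via TRKBAL):

```shell
txt=$((30 * 1024))                      # a 30K load-module text section
# ceil(txt / blksize) = number of blocks needed at a fixed maximum block size
echo $(( (txt + 27998 - 1) / 27998 ))   # prints 2: half-track blocking splits it
echo $(( (txt + 32760 - 1) / 32760 ))   # prints 1: one block, if the track has room
```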

-- 
Tom Marchant
"It ain't what you don't know that gets you into trouble. 
It's what you know for sure that just ain't so"



Re: Peter Frampton and IBM

2019-05-03 Thread Phil Smith III
Tony Harminc wrote:
>As it happens I heard a CBC Radio interview with Frampton a couple of days
>ago. I wasn't a great fan back in the day, but the interview was
interesting.

>https://www.cbc.ca/radio/q/nothing-s-gonna-keep-me-from-playing-peter-frampton-on-preparing-for-his-farewell-tour-1.5112895

Aye, lad, that's the same one I heard!



Re: Crazy concatenation mystery

2019-05-03 Thread Allan Staller
No. See my previous reply to an earlier email in this thread.

-Original Message-
From: IBM Mainframe Discussion List  On Behalf Of 
David Spiegel
Sent: Friday, May 3, 2019 5:08 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: Crazy concatenation mystery

Hi Greg,
If someone uses BLKSIZE=32760, isn't it true that only one physical block fits 
on a (emulated) 3390 track, thereby definitely wasting
(2*27998)-32760=23236 bytes per track (regardless of any Program Binder 
considerations)?

Thanks and regards,
David





Re: Crazy concatenation mystery

2019-05-03 Thread Allan Staller



-Original Message-
From: IBM Mainframe Discussion List  On Behalf Of 
David Spiegel
Sent: Thursday, May 2, 2019 9:16 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: Crazy concatenation mystery

Hi Steve,
You said: "... but the received wisdom is that all load libraries should have 
blksize=32K-8. ..."

For optimal space usage, however, the BLKSIZE should be 27998 (i.e. half-track 
blocking).

Regards,
David

On 2019-05-02 21:57, Steve Smith wrote:
> Well, Greg Price explained why the blksize issue doesn't arise in
> normal execution.
>
> In addition, PDSEs don't really have a blksize; that is faked up on
> the fly when BPAM or something similar is used.  Program Fetch uses
> something like DIV or paging I/O to load program objects.  For classic
> PDS, the blksize is "real", but again Program Fetch doesn't use access
> methods, and doesn't care what size the blocks are.
>
> It's a little late in the day, but the received wisdom is that all
> load libraries should have blksize=32K-8.  That predates PDSE by
> decades.  The old linkage-editor was smart enough to fill tracks up
> with whatever block size would fit.  As long as it wasn't artificially
> restricted to something less than the max.  RECFM=U does not work like FB.
>
> btw, why are you running FA?  Has it ever done anything useful for you?
>
> sas
>
>
>
> On Thu, May 2, 2019 at 8:42 PM Attila Fogarasi  wrote:
>
>> The Binder is not invoked by Db2 when executing your application
>> program -- hence no error message and successful execution.  Fault
>> Analyzer is invoking the Binder to get debugging info about the load
>> module as part of its processing for the prior problem.  Other
>> debugging tools handle this more elegantly but FA chooses to just
>> confuse you with the irrelevant cascaded error which has no bearing on the 
>> defect it is trying to report.
>>   Quick fix is to turn off Fault Analyzer as these "invalid" load
>> module block sizes are perfectly valid for execution or even for use
>> with the Binder with the right environment.  For better or worse the
>> Binder defaults to using 32760 (maximum device supported blksize)
>> whenever possible, unless directed otherwise.
>>
>> On Fri, May 3, 2019 at 8:43 AM Jesse 1 Robinson
>> 
>> wrote:
>>
>>> Thanks to the many contributions to this thread, I think we have it
>>> (mostly) figured out. The key was identifying what changed on 14 April.
>> No
>>> module changes. No JCL changes. But of course something happened
>>> that I didn't mention earlier because 'it could not be the cause'.
>>> What happened on the 14th was an error in the data that caused an
>>> SQL duplicate record condition, or 811. That led to a U3003 abend,
>>> which woke up Fault
>> Analyzer
>>> *for the first time*. Upon awakening, he looked around and saw the
>> invalid
>>> module block sizes and complained about them. For literally years FA
>>> had never peeped because there had never been an actual abend. Why
>>> did fetch not bellyache about BLKSIZE? I have no idea. The module
>>> named in the
>> message
> --
> For IBM-MAIN subscribe / signoff / archive access instructions, send
> email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN .
>



Re: Peter Frampton and IBM

2019-05-03 Thread Vernooij, Kees (ITOP NM) - KLM
If he asks us again "I want you, to show me the way", we will.

Kees


> -Original Message-
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On
> Behalf Of Mike Wawiorko
> Sent: 03 May, 2019 14:04
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: Re: Peter Frampton and IBM
> 
> I wonder how  he's feeling? Ringing in his ears?
> 
> Mike Wawiorko
> 



Re: Peter Frampton and IBM

2019-05-03 Thread Mike Wawiorko
I wonder how  he's feeling? Ringing in his ears?

Mike Wawiorko   



Re: Crazy concatenation mystery

2019-05-03 Thread Mike Schwab
Not for OBJECT modules.  The Binder calls a routine to determine the
remaining space on the track, rounds down to the next multiple of 1K,
and writes no more than that amount on that track.

On Fri, May 3, 2019 at 5:08 AM David Spiegel  wrote:
>
> Hi Greg,
> If someone uses BLKSIZE=32760, isn't it true that only one physical
> block fits on a (emulated) 3390 track, thereby definitely wasting
> (2*27998)-32760=23236 bytes per track (regardless of any Program Binder
> considerations)?
>
> Thanks and regards,
> David
>
> On 2019-05-03 03:41, Greg Price wrote:
> > On 2019-05-03 12:15 PM, David Spiegel wrote:
> >> Steve said: "... but the received wisdom is that all load libraries
> >> should
> >> have blksize=32K-8. ..."
> >>
> >> For optimal space usage, however, the BLKSIZE should be 27998 (i.e.
> >> half-track blocking).
> >
> > You might think that, but for load modules, you have to realize that
> > in-between the text blocks (which could be 27998 bytes long in your
> > scenario) there are RLD and/or CTL records which means that no single
> > track could contain 2 full-sized text blocks.
> >
> > Because of the "random" sizes of CSECTs and RLD usage (where "random"
> > means not really knowable at load library data set creation time) it
> > is not possible to know the best block size to use to minimize the
> > disk space used by a set of programs without doing some sort of
> > analysis on the load modules to be housed in that library.
> >
> > I mention CSECT because once a text block has some data to the end of
> > a section, the next section will not be started in that block unless
> > the whole section will fit in that block. That is why you see short
> > text blocks even though there is plenty more object text that follows on.
> >
> > And even though the linkage editor may make good use of remaining
> > track space, what happens when the blocks are shifted around by a data
> > set copy or a compress?
> >
> > So, it may be that BLKSIZE=32760 really is the best advice. At least
> > you could reasonably hope to minimize the amount of disk space wasted
> > on inter-block gaps.  (Of course, inter-block gaps may well be
> > emulated away these days, but they still exist for 3390 CKD accounting
> > purposes.)
> >
> > And as for PDSE program object libraries - how about this?
> >
> > If the BLKSIZE value doesn't matter in terms of how programs are
> > stored in the PDSE and fetched at run time, what about using
> > BLKSIZE=4096 for PDSE load libraries?
> >
> > Why? Because if you browse a program object in a PDSE and scroll
> > right, you will notice that all of the blocks end at column 4096. So
> > to read that member you have acquired 32760-byte buffers when
> > 4096-byte buffers would have sufficed.
> > :)
> >
> > In practice, 32760 for all program libraries is probably the best
> > choice to remove any block size hassles even if occasionally it causes
> > more storage to be used. After all, I keep hearing that storage is cheap.
> >
> > Just my thoughts, of course...
> >
> > Cheers,
> > Greg
> >



-- 
Mike A Schwab, Springfield IL USA
Where do Forest Rangers go to get away from it all?

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Crazy concatenation mystery

2019-05-03 Thread David Spiegel
Hi Greg,
If someone uses BLKSIZE=32760, isn't it true that only one physical
block fits on an (emulated) 3390 track, thereby definitely wasting
(2*27998)-32760=23236 bytes per track compared with half-track blocking
(regardless of any Program Binder considerations)?

Thanks and regards,
David

On 2019-05-03 03:41, Greg Price wrote:
> On 2019-05-03 12:15 PM, David Spiegel wrote:
>> Steve said: "... but the received wisdom is that all load libraries 
>> should
>> have blksize=32K-8. ..."
>>
>> For optimal space usage, however, the BLKSIZE should be 27998 (i.e. 
>> half-track blocking).
>
> You might think that, but for load modules, you have to realize that 
> in-between the text blocks (which could be 27998 bytes long in your 
> scenario) there are RLD and/or CTL records which means that no single 
> track could contain 2 full-sized text blocks.
>
> Because of the "random" sizes of CSECTs and RLD usage (where "random" 
> means not really knowable at load library data set creation time) it 
> is not possible to know the best block size to use to minimize the 
> disk space used by a set of programs without doing some sort of 
> analysis on the load modules to be housed in that library.
>
> I mention CSECT because once a text block has some data to the end of 
> a section, the next section will not be started in that block unless 
> the whole section will fit in that block. That is why you see short 
> text blocks even though there is plenty more object text that follows on.
>
> And even though the linkage editor may make good use of remaining 
> track space, what happens when the blocks are shifted around by a data 
> set copy or a compress?
>
> So, it may be that BLKSIZE=32760 really is the best advice. At least 
> you could reasonably hope to minimize the amount of disk space wasted 
> on inter-block gaps.  (Of course, inter-block gaps may well be 
> emulated away these days, but they still exist for 3390 CKD accounting 
> purposes.)
>
> And as for PDSE program object libraries - how about this?
>
> If the BLKSIZE value doesn't matter in terms of how programs are 
> stored in the PDSE and fetched at run time, what about using 
> BLKSIZE=4096 for PDSE load libraries?
>
> Why? Because if you browse a program object in a PDSE and scroll 
> right, you will notice that all of the blocks end at column 4096. So 
> to read that member you have acquired 32760-byte buffers when 
> 4096-byte buffers would have sufficed.
> :)
>
> In practice, 32760 for all program libraries is probably the best 
> choice to remove any block size hassles even if occasionally it causes 
> more storage to be used. After all, I keep hearing that storage is cheap.
>
> Just my thoughts, of course...
>
> Cheers,
> Greg
>


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Crazy concatenation mystery

2019-05-03 Thread Greg Price

On 2019-05-03 12:15 PM, David Spiegel wrote:

Steve said: "... but the received wisdom is that all load libraries should
have blksize=32K-8. ..."

For optimal space usage, however, the BLKSIZE should be 27998 (i.e. half-track 
blocking).


You might think that, but for load modules, you have to realize that 
in-between the text blocks (which could be 27998 bytes long in your 
scenario) there are RLD and/or CTL records which means that no single 
track could contain 2 full-sized text blocks.


Because of the "random" sizes of CSECTs and RLD usage (where "random" 
means not really knowable at load library data set creation time) it is 
not possible to know the best block size to use to minimize the disk 
space used by a set of programs without doing some sort of analysis on 
the load modules to be housed in that library.


I mention CSECT because once a text block has some data to the end of a 
section, the next section will not be started in that block unless the 
whole section will fit in that block. That is why you see short text 
blocks even though there is plenty more object text that follows on.


And even though the linkage editor may make good use of remaining track 
space, what happens when the blocks are shifted around by a data set copy 
or a compress?


So, it may be that BLKSIZE=32760 really is the best advice. At least you 
could reasonably hope to minimize the amount of disk space wasted on 
inter-block gaps.  (Of course, inter-block gaps may well be emulated 
away these days, but they still exist for 3390 CKD accounting purposes.)


And as for PDSE program object libraries - how about this?

If the BLKSIZE value doesn't matter in terms of how programs are stored 
in the PDSE and fetched at run time, what about using BLKSIZE=4096 for 
PDSE load libraries?


Why? Because if you browse a program object in a PDSE and scroll right, 
you will notice that all of the blocks end at column 4096. So to read 
that member you have acquired 32760-byte buffers when 4096-byte buffers 
would have sufficed.

:)

In practice, 32760 for all program libraries is probably the best choice 
to remove any block size hassles even if occasionally it causes more 
storage to be used. After all, I keep hearing that storage is cheap.


Just my thoughts, of course...

Cheers,
Greg

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
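For reference, the arithmetic behind the 23,236-byte figure quoted in this thread, assuming full-sized text blocks only. As Greg's post argues, RLD/CTL records and variable CSECT sizes make this best case unattainable for real load libraries, so treat the numbers as an upper bound, not a recommendation.

```python
# Per-track comparison of half-track blocking vs. BLKSIZE=32760 on a 3390,
# for full-sized text blocks only. The 27,998 and 32,760 figures are from
# the thread; count-field overhead and RLD/CTL records are ignored.

HALF_TRACK = 27998   # largest block size that still fits twice per 3390 track
MAX_BLK    = 32760   # 32K-8, the maximum BSAM block size; fits once per track

per_track_half = 2 * HALF_TRACK        # bytes stored per track, half-track blocking
per_track_max  = 1 * MAX_BLK           # bytes stored per track, one max-size block
waste = per_track_half - per_track_max # David's 23,236-byte figure

print(per_track_half, per_track_max, waste)
```

The output is 55996, 32760 and 23236: half-track blocking stores roughly 70% more data per track when, and only when, every block can actually be written full-sized.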