CEE3551S DLL xxxxxxxx does not contain any C functions.

2017-04-25 Thread Bill Woodger
On Tuesday, 25 April 2017 21:15:57 UTC+2, Dan Kalmar  wrote:
> I am calling an LE assembler routine from an Enterprise COBOL batch program and 
> receive the CEE3551S error message.
> 
> The call made in cobol is the dynamic type.
> 
> Not sure why LE thinks the target program is expected to contain C functions.

Why have you made it a DLL?

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: SYNCSORT SEQNUM not restarting at 1 or incrementing by 1 for changes in RESTART value

2017-04-24 Thread Bill Woodger
1. "This manual intentionally left blank". Try a DATASORT. If you have that, 
you probably have everything. 

2. Yes, symbols are great. SyncSORT doesn't call them symbols as such, I think 
they are "dictionary" or something like that.

I've been known to change Kolusu's inline comments to symbols...

There was mention of using WORD in Rexx to get the 10th word for something the 
other day.

//THIRDWRD EXEC PGM=SORT 
//SYSOUT   DD SYSOUT=* 
//SYMNAMES DD * 
THIRD-WORD,%01 
//SYMNOUT  DD SYSOUT=* 
//SORTOUT  DD SYSOUT=* 
//SYSIN    DD * 
 SORT FIELDS=COPY 
 INREC PARSE=(%=(ENDBEFR=BLANKS, 
 STARTAT=NONBLANK, 
 STARTAFT=BLANKS), 
  %=(ENDBEFR=BLANKS), 
  THIRD-WORD=(ENDBEFR=BLANKS, 
  FIXLEN=30)), 
   BUILD=(THIRD-WORD) 
//SORTIN   DD * 
 A120 B1230 C0 D12340 
E123450F1230  G120 H10 
  I0J0K0   L0 

Adapt by repeating the "%=(ENDBEFR=BLANKS)," an appropriate number of times, 
and change the symbol name (this is more the power of PARSE, than symbols, but 
I was reminded).
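For the tenth word mentioned above, the adaptation might look like this (a 
sketch, untested; TENTH-WORD is an invented symbol name, and FIXLEN=30 is kept 
from the example above): 

TENTH-WORD,%01 

 INREC PARSE=(%=(ENDBEFR=BLANKS, 
 STARTAT=NONBLANK, 
 STARTAFT=BLANKS), 
  %=(ENDBEFR=BLANKS), 
  %=(ENDBEFR=BLANKS), 
  %=(ENDBEFR=BLANKS), 
  %=(ENDBEFR=BLANKS), 
  %=(ENDBEFR=BLANKS), 
  %=(ENDBEFR=BLANKS), 
  %=(ENDBEFR=BLANKS), 
  %=(ENDBEFR=BLANKS), 
  TENTH-WORD=(ENDBEFR=BLANKS, 
  FIXLEN=30)), 
   BUILD=(TENTH-WORD) 

The first parsed field consumes word one, the eight dummy %= fields consume 
words two to nine, and TENTH-WORD picks up word ten.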

Also, it is very effective to generate a symbol-file in one step, and use the 
symbol-file in another step. It means, for instance, that you can have a 
modified value for INCLUDE/OMIT (like a formula to get a selection date), 
amongst other things.
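The two-step idea can be sketched like this (untested, DFSORT-flavoured; the 
temporary dataset name, the DATE1-7 "today minus seven days" constant, the input 
dataset name and the assumption of an eight-byte date in columns 1-8 are all 
invented for illustration, and SyncSORT syntax may differ): 

//GENSYMS  EXEC PGM=SORT 
//SYSOUT   DD SYSOUT=* 
//SORTIN   DD * 
X 
//SORTOUT  DD DSN=&&SYMS,DISP=(,PASS), 
//            DCB=(RECFM=FB,LRECL=80,BLKSIZE=0) 
//SYSIN    DD * 
 OPTION COPY 
* Build one symbol record: SELECT-DATE,C'yyyymmdd' (today minus seven) 
 INREC BUILD=(C'SELECT-DATE,C''',DATE1-7,C'''',80:X) 
//* 
//EXTRACT  EXEC PGM=SORT 
//SYSOUT   DD SYSOUT=* 
//SYMNAMES DD DSN=&&SYMS,DISP=(OLD,DELETE) 
//SORTIN   DD DSN=HLQ.INPUT.DATA,DISP=SHR 
//SORTOUT  DD SYSOUT=* 
//SYSIN    DD * 
 OPTION COPY 
 INCLUDE COND=(1,8,CH,EQ,SELECT-DATE) 

The INCLUDE in the second step never needs editing when the selection date 
moves; only the first step's date arithmetic defines it.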

Even for simply saving on the typos when using the same field for the sixth 
time... makes Sort Cards "self-documenting" :-)



Re: SYNCSORT SEQNUM not restarting at 1 or incrementing by 1 for changes in RESTART value

2017-04-24 Thread Bill Woodger
If you don't need the physical file for anything except to extract from it, you 
can just establish a SEQNUM for each selection, and just use an OUTFIL (or 
more) with your selection by number. Saves the SORT, the creation of the new 
file, and the processing of the new file, just cutting straight to the chase, 
the extract(s) you want.
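A sketch of that (untested; the four-byte key at columns 1-4, the 80-byte 
record and the output DDnames are invented, and SyncSORT syntax for SEQNUM may 
differ slightly): 

 OPTION COPY 
* Number the records, restarting at 1 when the key in 1,4 changes 
 INREC OVERLAY=(81:SEQNUM,8,ZD,RESTART=(1,4)) 
* Cut straight to the extracts: first and second record of each group 
 OUTFIL FNAMES=OUT1,INCLUDE=(81,8,ZD,EQ,1),BUILD=(1,80) 
 OUTFIL FNAMES=OUT2,INCLUDE=(81,8,ZD,EQ,2),BUILD=(1,80) 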



Re: SYNCSORT SEQNUM not restarting at 1 or incrementing by 1 for changes in RESTART value

2017-04-24 Thread Bill Woodger
Also, your initial BUILD creates the extra byte which is used. You can change 
the subsequent BUILDs to OVERLAY, which just changes the one byte at column 5 
(that will save you one BUILD per record, you'll notice).

If 220k+ were really large, you could also (since the test values are mutually 
exclusive) put the header and trailer processing before the WHEN=NONE. That you 
may not notice on 220k+ records.



Re: SYNCSORT SEQNUM not restarting at 1 or incrementing by 1 for changes in RESTART value

2017-04-24 Thread Bill Woodger
It doesn't matter where you physically locate the SORT control card, SORT will 
execute after INREC.

You need to put anything which the SORT relies upon in INREC, and anything 
which relies upon the SORT in OUTREC.
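A minimal illustration of that processing order (untested, positions invented): 

* INREC runs first: create the field the SORT relies upon 
 INREC OVERLAY=(81:SEQNUM,8,ZD) 
* The SORT runs next, wherever this card is physically placed 
 SORT FIELDS=(81,8,ZD,D) 
* OUTREC runs after the SORT: here, drop the work field again 
 OUTREC BUILD=(1,80) 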



Re: Purchasing opportunity...

2017-04-17 Thread Bill Woodger
On Mon, 17 Apr 2017 17:28:11 -0500, Mike Schwab  wrote:

>Works just fine for me.
>

Thanks. Tried again, same problem. Closing and restarting Firefox got it back 
for me. I'm in a different country today, and I clicked OK for a bunch of 
security updates, I guess there's no way of finding out exactly what caused 
that for me (or what I did to cause that), and even if I did find out, no-one would 
care. "Closed Permanently (Workaround)" is cheaper than root-cause analysis.



Purchasing opportunity...

2017-04-17 Thread Bill Woodger
"Firefox can't find the server at www.ibm.com."

Perhaps they've not paid the domain registration, and someone can snap it up?



Re: COBOL integrated pre-processor, API in general

2017-04-17 Thread Bill Woodger
Frank Swarbrick posted an example COBOL compiler input-exit last year, to allow 
the embedding of linkage-editor/binder statements. It is on IBM's COBOL Cafe. I 
can't provide a link because I currently get "Problem loading...".



Re: COBOL integrated pre-processor, API in general

2017-04-17 Thread Bill Woodger
On Mon, 17 Apr 2017 15:51:57 -0500, Paul Gilmartin  wrote:

[...]

>If there were such a fixed API, it would need to be able to:
>o Invoke external services (CALL, ATTACH, SVC, PC, ...).
>o Access and update COBOL variables.
>o Check syntax and validate data types.
>o Probably many other things (which?)
>
>-- gil
>

What each integrated processor does, descriptively, is documented in the 
Enterprise COBOL Programming Guide. See the description of each compiler option 
(CICS, SQL, SQLIMS), and the Related Concepts, Related Tasks and Related 
References for each.



Re: COBOL integrated pre-processor, API in general

2017-04-17 Thread Bill Woodger
John, you are after a general API to allow "something" to have a meaningful 
impact on code-generation by the (Enterprise COBOL) compiler?

I think we've had a "non-denial denial" (or similar) that there is a single API 
for EXEC. Even if similar (who knows?) they are different. Each interacts 
directly with its target sub-system (CICS, DB2, IMS).

I think they are "asking what your sub-system can do for you"-type, rather than 
"this is what you can do with the compiler if you set this, that and the 
other". Use of a DB2 API, rather than being an API to the COBOL compiler (I can 
be very wrong there).

If they were intended to be open to ISV use, they'd be documented as a "Vendor 
Interface". If they were so exposed, then there'd be "constraints" on what 
could be changed in the compiler ("you can't do that, it'll break XYQ product").

There is, of course, nothing which stops anyone writing a "pre-processor". It 
can also be "integrated", such that it does not require a separate step, using 
an "input exit". Any processing which may be analogous to the integrated 
processors would be done in the exit, and the "already COBOL-ified" code is 
what would be presented to the compiler.



Re: COBOL integrated pre-processor, API in general

2017-04-17 Thread Bill Woodger
On Mon, 17 Apr 2017 14:30:10 -0500, Allan Kielstra  wrote:

>I wouldn't necessarily assume that there is a fixed API for EXEC CICS, EXEC 
>SQL, EXEC SQLIMS.  It's always possible that each one of these gets some sort 
>of custom treatment from the compiler.
>

Hi Allan, brilliantly worded :-)



Re: Opinion: Using C "standard library" routines in COBOL.

2017-04-07 Thread Bill Woodger
I'm interested in how people deal with the case of the functions and what type 
of executables people create.

Relevant is the discussion of overflow: by the Standard it is fine for COBOL, not fine for C.

sprintf is fun but do I want to interpret a string 4m times to produce a report?



Re: What are mainframes

2017-04-06 Thread Bill Woodger
On Thu, 6 Apr 2017 19:02:23 +0800, David Crayford <dcrayf...@gmail.com> wrote:

>On 6/04/2017 6:35 PM, Bill Woodger wrote:
>> Just to note, the UK Weather Centre (The Meteorological Office, or Met 
>> Office) uses a big-boy LinuxONE and they were an early user of that.
>
>Do you know what they use it for? Probably not for weather forecasting
>algorithms.
>

There's other stuff out there as well, but here's a link: 
https://www-01.ibm.com/software/uk/system-z/case-studies.html

Doesn't look like they use it for Payroll.



Re: Cobol and decimal overflow overhead (was: Data/hiperspaces)

2017-04-06 Thread Bill Woodger
On Wed, 5 Apr 2017 13:09:53 +0200, Peter Hunkeler  wrote:

>>Firstly, Peter, I know you didn't write the code or come up with the idea of 
>>the definition of Fraction. Secondly, looks like you have some progress with 
>>your related topics from over the last few months.
>
>
>Yes, we have made good progress. I think I had posted something about the 
>outcome earlier. Anyway, here is a short summary: Smart/Restart is using ESPIE 
>and ESTAE to gain control in error situations. This is not compatible with 
>LE's usage of ESPIE and ESTAE. For the non-problem 00A program check 
>interruptions from Cobol code, LE was not gaining control first, so LE could 
>not let the program continue. Smart/Restart did not recognize the 00A from 
>Cobol *not* to be a problem, but was acting as if a real problem had occurred. 
>The vendor has supplied an intermediate fix, and is in the process of redesigning 
>Smart/Restart error handling so it no longer interferes with LE.
>
>
>
>
>> The implication is to think about it, and achieve the truncation without 
>> overflow. I was making the point about calculations: I'm surprised a MOVE 
>> can cause overflow.
>
>
>I was surprised, too, when I found out.
>
>
>>The idea of the definition of Fraction is "hey, if we define it to the 
>>maximum number of possible digits, then we never need to think what we MOVE 
>>to it to get the decimal part of a number".
>
>
>
>You don't have to think as long as this is valid Cobol. The compiler and 
>runtime should handle this behind the scenes. The bad thing about the 00As is 
>that firstly, the newer compilers seem to generate code that may cause a 00A 
>more often. Secondly, the decimal overflow mask in the PSW seems to become set 
>ON more often than before due to changes in the LE runtime. Finally, it may 
>have a severe performance impact if the overflows occur often due to the 
>program logic.

Well, there is valid COBOL, which it is, and there is the thinking. At times, 
many times, mostly, one-size-fits-all means extra processing for 99.99% of 
usage.

Things always (did I really say always?) work "better" if sizes are the same, 
and where the "source" size is the best fit for the business requirement (so 
the advice is not to make all the data have 18 decimal places just so the MOVE 
works better...).

With up to V4, it was possible to be accurate with expectations of what code 
would be generated, with knowledge of compile options, and to a greater or 
lesser extent, depending on skill and interest.

With V5+, multiple ARCH possibilities, several new compiler options which 
affect generated code, three levels of optimisation (only one with up to V4, 
and even OPT(0) with V5+ does some optimisations), potential use of an 
additional 600+ instructions and the possibility that the same code can be 
optimised in different ways in the same program, it is... less easy... to know 
what will be generated, but will (should) still be the case that thinking about 
data-definitions and code will give benefits.

Yes, I agree, an increase in the number of things which can get 00A, coupled 
with an increase in the likelihood that the overflow mask bit is on, is 
something for concern due to LE chewing up cycles to decide nothing needs to be 
done.

You could raise this on IBM's Compiler Cafe: COBOL Cafe Forum. There's a couple 
of developers from Markham who keep an eye open there. Or just raise it with 
IBM.

>
>
>
>> Second point.  V9(18). 18. Eighteen. If there is a commercial requirement 
>> for 18 decimal places, it will be a rare one. 18 digits happens to be 
>> horrible if much of anything is to be done with it. (If much of anything is 
>> to be done, changing it to 17 digits may reduce the code generated; changing 
>> it to 15 digits may reduce it further. By "may" I mean "I'd be surprised if 
>> it didn't").
>
>
>
>I agree, but I do not know why the programmer chose to define it that way in 
>this case.
>

Can't be sure, but I suspect it was a "no brainer", in the wrong sense.

>
>
>> 01 Fraction PIC V9(6).
>> 01 Decimal-Number PIC S9(9)V9(6).
> >
>> MOVE Decimal-Number to Fraction
> >
>> With that, I'd be hard put to imagine much more than whatever the compiler 
>> chooses for a six-byte MOVE - depending on compile options. What does V5.2 
>> produce?
>
>
>
>You made me curious, so I tried with V9(6), V9(8), V9(9), and V9(10). 
>Interesting. Different instruction sequence is generated for each case. The 
>SRP is found for the last one, but not for the first three.
>Anyway, we cannot ask the programmer to look at the pseudo-assembler listing to 
>decide if the Cobol code is reasonable, can we :-)
>

Well... it needs "management support" at times, but if a particular way of 
doing things is "local standard", and that has... issues... then those standard 
methods should be reviewed by someone who can look at and make use of the 
listing of the generated code. And even inspire others to have an interest...

>
>> I've no idea what IZGFCPC 

Re: What are mainframes

2017-04-06 Thread Bill Woodger
Just to note, the UK Weather Centre (The Meteorological Office, or Met Office) 
uses a big-boy LinuxONE and they were an early user of that.



Re: Cobol and decimal overflow overhead (was: Data/hiperspaces)

2017-04-04 Thread Bill Woodger
On Tue, 4 Apr 2017 07:43:05 +0200, Peter Hunkeler  wrote:

[...]
>
>>If unintended truncation, fix. If intended, do it in such a way as to not 
>>cause the overflow.
>
>
>The last point is not always easy to achieve. Each compiler version or 
>release may generate different code using different instructions.
>
>
>The following statements are valid Cobol code to extract the fraction part of 
>a decimal number:
>...
>77 Fraction PIC V9(18).
>77 Decimal-Number PIC S9(9)V9(6).
>...
>MOVE Decimal-Number to Fraction.
>
>
>Enterprise Cobol V4.2 generates code that will never cause a decimal overflow, 
>V5.2 will use SRP instruction, which will cause an overflow if 
>"Decimal-Number" is greater or equal to 1.
>
>
>And worse, we just stumbled over IZGFCPC which is called as part of the code 
>generated for a non-trivial COMPUTE with the ON SIZE ERROR clause. IGZCFPC 
>does use SRP instruction and may cause a decimal overflow. How would you avoid 
>this easily?
>
>--
>Peter Hunkeler
>

Firstly, Peter, I know you didn't write the code or come up with the idea of 
the definition of Fraction. Secondly, looks like you have some progress with 
your related topics from over the last few months.


"If [truncation is] intended, do it in such a way as to not cause the overflow."

The implication is to think about it, and achieve the truncation without 
overflow. I was making the point about calculations: I'm surprised a MOVE can 
cause overflow.

The idea of the definition of Fraction is "hey, if we define it to the maximum 
number of possible digits, then we never need to think what we MOVE to it to 
get the decimal part of a number".

My first point would be that having a "don't need to think" rule means what I 
suggest, which requires thinking, can't be done.

Second point.  V9(18). 18. Eighteen. If there is a commercial requirement for 
18 decimal places, it will be a rare one. 18 digits happens to be horrible if 
much of anything is to be done with it. (If much of anything is to be done, 
changing it to 17 digits may reduce the code generated; changing it to 15 digits 
may reduce it further. By "may" I mean "I'd be surprised if it didn't").

Third point. 9(9)V9(6) is pretty good for business purposes. Why, to get the 
fractional part, use anything other than six decimal places? (15 digits is 
nice, and even plenty for commercial treasury purposes in major currencies - 
and even the US GDP fits in with some to spare).

(Even though 9(9)V9(6) is pretty good, I still far prefer specific sizes for 
specific cases). 

01 Fraction PIC V9(6).
01 Decimal-Number PIC S9(9)V9(6).

MOVE Decimal-Number to Fraction

With that, I'd be hard put to imagine much more than whatever the compiler 
chooses for a six-byte MOVE - depending on compile options. What does V5.2 
produce?

I've no idea what IZGFCPC is. Search-engining reveals your post and nothing 
else, except this one, some time later.

Can you show the COMPUTE and the definitions of the fields it uses?



Re: Data/hiperspaces (was: ... 4G bar?)

2017-04-03 Thread Bill Woodger
On Thu, 30 Mar 2017 13:20:57 -0700, Alan Young  wrote:

[...]

>
>fread(), fwrite(), fclose(), clrmemf() etc. support it. And the routines
>are callable from COBOL.
>
>Alan
>

And if calling C/C++ from COBOL programs, be aware of the 00A/S0CA issue. Since 
COBOL truncates high-order by nature, the "don't bother with exception for 
this" bit is set for COBOL programs. However, for COBOL calling C/C++ (or using 
a DLL) the bit is "on".

If unhandled overflow occurs in the COBOL program thereafter, LE will spend 
some (noticeable) time ignoring it, because it comes from COBOL.

If you add C/C++ function-usage to a COBOL program and get an otherwise 
unexplained increase in CPU usage, look to overflow from calculations.

If unintended truncation, fix. If intended, do it in such a way as to not cause 
the overflow.



Re: [EXTERNAL] Re: IBM RFE's for z/OS - 2 to push

2017-03-30 Thread Bill Woodger
On Thu, 30 Mar 2017 12:02:40 -0700, Ed Jaffe  
wrote:

>On 3/30/2017 9:33 AM, Dyck, Lionel B. (TRA) wrote:
>> I have no idea as I can't get into the requirements area any longer. Perhaps 
>> because I haven't been to a SHARE for several years.  What I do know is that 
>> both of these requirements are now over 100 votes and growing.
>
>The SHARE Requirements System can be found here:
>
>http://www.share.org/page/requirements-&-advocacy
>
>Many RFEs are essentially nonsense, entered by random individuals
>without the benefit of real discussion or IBM input. And, RFE "votes"
>don't carry as much weight as you might think because:

Interesting, since IBM encourages votes for RFEs (for COBOL, anyway). Seems 
pointless to encourage votes and then ignore them. I do agree that RFE votes 
should actually be difficult to take seriously (when others vote for something 
I don't like, anyway...).

I followed your link to SHARE and clicked on Register to make an account. Oops. 
That was unexpected.

[...]



Re: How does ABO report its outcome? (was: Migrating Cobol)

2017-03-29 Thread Bill Woodger
There was no suggestion attempted (for anything) just an explanation that if 
you have used options which augment the LE abend output (which should also 
include CEEDUMP) then ABO will leave those alone.

This was in reference to Peter's question posted immediately prior: 

"So what [is being said] is that an optional chargeable product knows how to 
handle problems with ABO optimized code, but the standard runtime environment 
does not? Really? What is the rationale behind?"

Paid-for products can understand the ABO output. Base LE output with no TEST 
options is the same, ABO or non-ABO. You look at the listing. Just may involve 
looking at a different listing, or require looking at two listings. TEST 
options can augment LE abend-output.

There's no substantial difference between the handling of an abending ABO'd 
program and a non-ABO'd program.

"A little something" to merge original compile listing with ABO output listing 
could be useful for someone to write.

On Tue, 28 Mar 2017 18:02:43 -0500, Edward Gould <edgould1...@comcast.net> 
wrote:

>> On Mar 28, 2017, at 3:47 PM, Bill Woodger <bill.wood...@gmail.com> wrote:
>> 
>> Without any TEST option on the compile, LE gives you nothing but the offset 
>> of the failing instruction, then you find it in the compile listing. ABO 
>> gets you a new listing of the new code, a new place to consult for the 
>> offset.
>> 
>> If you compile with TEST options, the code generated for those options still 
>> exists in the ABO'd program, and if from the original program that results 
>> in additional information in the LE dump, then it will still do so after the 
>> program has been ABO'd.
>
>
>Bill:
>
>In the past we have found that compiling COBOL programs with test meant a hell 
>of a lot more run time and of course CPU usage went up as well.
>I would *NEVER* suggest TEST in a production environment.
>
>Ed



Re: AW: How does ABO report its outcome? (was: Migrating Cobol)

2017-03-28 Thread Bill Woodger
Without any TEST option on the compile, LE gives you nothing but the offset of 
the failing instruction, then you find it in the compile listing. ABO gets you 
a new listing of the new code, a new place to consult for the offset.

If you compile with TEST options, the code generated for those options still 
exists in the ABO'd program, and if from the original program that results in 
additional information in the LE dump, then it will still do so after the 
program has been ABO'd.



Re: How does ABO report its outcome? (was: Migrating Cobol)

2017-03-27 Thread Bill Woodger
The output listing from the ABO process is designed to mesh with the output 
from the original compile.

If it doesn't, or there are difficulties, problems, suggestions for 
improvement, let IBM know.



Re: Migrating Cobol

2017-03-26 Thread Bill Woodger
Danger of this becoming another ABO thread in disguise :-)

The question of the testing of ABO is not entirely cultural, or not necessarily 
so. Nor necessarily "compliance". There can be a technical basis on which to 
make decisions, which cultural and compliance issues may make moot. I'm keen to 
poke for the technical basis, of which there is little more than hints and 
generalisations for now :-)

It's like developing a strain of peas which pick themselves when ready, pack 
themselves and stack themselves in boxes on the trailer. Some will stick to "the 
One True Way to cultivate peas", some will stick to "the only way the rules 
allow to cultivate peas". Some will get on with more interesting stuff whilst 
the peas look after themselves. That latter group will be small if there are no 
detailed instructions on the sacks of seed.

IBM's intention going forward is that ABO and Enterprise COBOL are a 
complementary package. New ARCH level, new compiler (or PTF to existing 
compiler), new ABO. You don't need to recompile everything to use the new 
instructions immediately, you can ABO (even perhaps "on the fly"). New 
development/maintenance uses the new compiler. "Migration" becomes...

A cultural and real-world (compliance) impact for sure, but if the technical 
basis has no more known grounding than the Witchdoctory One True Way then it 
won't happen on any scale.



Re: Migrating Cobol

2017-03-26 Thread Bill Woodger
If you have very large programs and you want to optimise them to Level 1 or 
Level 2, then Enterprise COBOL V6.1 is your best bet. The optimizer was 
re-written with 64-bit addressing and is now much more comfortable with large 
programs (which may just fail to compile with V5.2).

V6.1 is now "continuous delivery". Meaning new functionality, not just fixes 
and retro-fitting, can and is being supplied by PTF.

Consider use of the Automatic Binary Optimizer (ABO). This can allow your COBOL 
programs to benefit from z/ARCH instructions without needing to be recompiled. 
This can allow you to rework your planning.

Biggest problem seems to have been the need for PDSE: no more load modules, now 
Program Objects, which must be in a PDSE.

ABO again offers some extra flexibility by not requiring PDSE for all 
executables.

IBM has done a lot of work over the last few months to reproduce V4.2 output 
where the results expected are "undefined" across all compilers, but have a 
specific outcome under a particular compiler.

You could choose ABO for "everything" and V5/V6 for new development/maintenance 
(an incremental "migration") through to Big Bang, V5/V6-only, and devil take 
the hind-most. Or anything in between.

ABO has not been around long, and V1.2 only since last November. 
 
ABO gives you some new ways to do migration, but be aware that there are still 
discussions (including on this list) as to how much testing is required for an 
ABO over a recompile. Ask your friendly IBM rep if you can talk to the ABO 
people in Markham about your specific site :-)



Re: thoughts on z/OS ftp server enhancement.

2017-03-25 Thread Bill Woodger
On Sat, 25 Mar 2017 11:38:50 -0500, Paul Gilmartin <paulgboul...@aim.com> wrote:

>On Sat, 25 Mar 2017 03:29:33 -0500, Bill Woodger wrote:
>
>>Micro Focus COBOL can read "Mainframe" and write ""PC", several ways. 
>>Enterprise COBOL can write the data as ASCII (that's just for information, as 
>>DASD-shortage rules that out for you).
>> 
>Again, could POSIX pipe circumvent that constraint?
>
>-- gil

I don't know enough about the actual task to know if the lack of DASD is a 
direct constraint :-)

With the knowledge that Micro Focus COBOL can read EBCDIC and write ASCII, 
perhaps just shipping the data as binary becomes the preferred solution? 

There's mention of multiple large files. The costs will be less just banging it 
out of the Mainframe as rapidly as possible (if it doesn't choke the receiving 
server).

I like pipes. I don't like that little caveat you made, except why would it be 
relevant here?

Either on the Mainframe or with Micro Focus, the COBOL would be trivial, so not 
a problem using Micro Focus (since they have it) even if the final destination 
isn't a COBOL system (if it is not, simply through data-definitions, make any 
binary, packed-decimal or (shock, horror) floating-point into "text" fields).

If the server chokes on large amounts of data (and I've seen enough questions 
on splitting files because "we can't send more than xMB to the server at once") 
to know that some types (never find out exactly what) of issues exist, that 
could be a factor in the solution.

Some combination from the options of split/pipe/COBOL/some other language can 
get the task done, depending on the specifics. ETL generally does this stuff. 
If they can get the ETL sufficiently honed, that may be a neater solution.



Re: thoughts on z/OS ftp server enhancement.

2017-03-25 Thread Bill Woodger
Micro Focus COBOL can read "Mainframe" and write "PC", several ways. 
Enterprise COBOL can write the data as ASCII (that's just for information, as 
DASD-shortage rules that out for you).

Find out what would be best, and other options, for the non-Mainframe people to 
receive. Other things than ASCII are possible.

You have a Support contract with Micro Focus. Contact them, give them the 
details, and they should be able to recommend the best way to do it for your 
situation.

Whatever they suggest will be a trivial COBOL program (very likely the 
conversion itself will not be done by your COBOL code), unless you are "doing 
the PC guys a favour" by giving them more of what they want (and even in the 
worst case, the logic will be trivial and the code little more than MOVEs and 
probably a CALL).

I'd produce audit totals from a pass of the data on the Mainframe, and those 
should be matched to the converted data on the PC (or Cloud, or server - I'm 
much less worried about the terminology of those who think that Mainframes are 
dusty things from the 1980s).



Re: Read SMF using COBOL

2017-03-22 Thread Bill Woodger
Ah, the diagnostic is only for OUTPUT/EXTEND for an F-type with RECORD CONTAINS 
0.

Since it doesn't produce a diagnostic message in your case, I think you are OK 
going forward, but I'd not code it like that.

Also, your FD:

FD  SMF-RECORDS-IN
RECORDING MODE S
LABEL RECORDS ARE STANDARD
BLOCK CONTAINS 32760 CHARACTERS
RECORD CONTAINS 14 TO 32760 CHARACTERS
DATA RECORDS ARE SMF-MIN-REC
 SMF-TYPE14-REC
 SMF-TYPE17-REC
 SMF-TYPE65-REC
 SMF-MAX-REC.

Can be simplified:

FD  SMF-RECORDS-IN
RECORDING MODE S
RECORD CONTAINS 0.

BLOCK CONTAINS is irrelevant for an input file (it is already in blocks, you 
can't change it, or get it changed).



Re: Read SMF using COBOL

2017-03-22 Thread Bill Woodger
The Language Reference says:

"The RECORD CONTAINS 0 CHARACTERS clause can be specified for
input QSAM files containing fixed-length records; the record size is
determined at run time from the DD statement parameters or the data set
label. If, at run time, the actual record is larger than the 01 record
description, then only the 01 record length is available. If the actual record
is shorter, then only the actual record length can be referred to. Otherwise,
uninitialized data or an addressing exception can be produced."

However, no diagnostic for non-F :-)

If you use SMF-MAX-REC without knowing that the record really is maximum-length, 
expect an S0C4. I guess you won't use it.

Better would be RECORD IS VARYING from minimum to maximum (which can be 32756 
if that is your actual maximum of data possible).
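That suggestion as an FD sketch (SMF-REC-LEN is an invented name; 14 and 32756 
follow the minimum and maximum mentioned above): 

FD  SMF-RECORDS-IN
    RECORDING MODE S
    RECORD IS VARYING IN SIZE
        FROM 14 TO 32756 CHARACTERS
        DEPENDING ON SMF-REC-LEN.
01  SMF-RECORD          PIC X(32756).

WORKING-STORAGE SECTION.
01  SMF-REC-LEN         PIC 9(8) BINARY.

After each READ, SMF-REC-LEN holds the actual record length, so only real data 
need be referenced.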



Re: Read SMF using COBOL

2017-03-22 Thread Bill Woodger
BLOCK CONTAINS 0 makes sense, if BLOCK CONTAINS is specified in a COBOL 
program, it overrides everything else up the line. But, won't affect your 
problem.

RECORD CONTAINS 0. That's for fixed-length records, where the actual length 
will be acquired at runtime.

For a variable-length record file, you'll get a diagnostic message. I thought 
it was even an RC=8.

It does, however, "work". Until IBM make a change to the compiler. Which means, 
since they've already changed it, it perhaps doesn't do the same thing in V5+.



Re: Read SMF using COBOL

2017-03-22 Thread Bill Woodger
And no, I don't know why the discrepancy is not four. Perhaps there is a 
problem when you specify variable-length data in the FD that is longer than the 
maximum possible value.

You should get the file open by reducing your 32760 to 32754 in the FILLER.



Re: Read SMF using COBOL

2017-03-22 Thread Bill Woodger
You have specified 32760 bytes of data. Remember that the LRECL does not only 
include data. If you have 32760 of data, your LRECL would be larger than 32760. 
If your LRECL is 32760, your data has to be shorter.



Re: Broken VBS and ICE141A

2017-03-20 Thread Bill Woodger
Paul, that doesn't answer the question of whether data was written to the 
dataset with LRECL=X, the question I asked. It only provides a way in which the 
data could actually be valid.

Otherwise the data is invalid. 

Are you really so sure that a program, with no knowledge of the expected 
structure of the data, can determine exactly how a VBS is pickled? Because if 
it is not exactly, it is no use. 

Someone will just fix what the message says, then find out later that indeterminable 
bits of the data were missing or otherwise nonsense. Or they'll waste time 
looking at "this is what it says the error is", when the error is something 
else.

If it's broke in a really weird way, a message to say "it's broke in a really 
weird way, you fix it, I don't want to give you a false sense of security" is 
fine with me. Mileage may vary.



Re: Broken VBS and ICE141A

2017-03-20 Thread Bill Woodger
Simple, John. You produce the message Paul wants. Can't be clearer.

Radoslav,

Was the data set written with LRECL=X, to get "long" records? What is the data? 
Why is it VBS?



Re: ComputerWorld Says: Cobol plays major role in U.S. government breaches

2017-03-20 Thread Bill Woodger
Martin, I doubt the US government was ever restricted, since they were 
effectively the source of the restrictions...



Re: ComputerWorld Says: Cobol plays major role in U.S. government breaches

2017-03-20 Thread Bill Woodger
On Mon, 20 Mar 2017 11:33:10 -0400, George Rodriguez 
 wrote:

>Does anyone think that Computerworld is going to write a retraction?
>
>


No. And it is not just Computerworld, a search-engine finds at least 23 similar 
references to the report. Maybe some are plagiarised from Computerworld :-)



Re: ComputerWorld Says: Cobol plays major role in U.S. government breaches

2017-03-18 Thread Bill Woodger
My gosh, ho-hum, what a bag of nonsense passing itself off as a contribution to 
research.

And then there's the journalism. 

Tom Marchant phrased it eloque... well, bluntly. The researchers who wrote the 
paper, dated March 7, 2017, used a media article to come up with "It's COBOL 
wot dun it" for OPM, whereas the report "The OPM Data Breach:  How the 
Government Jeopardized Our National Security for More than a Generation" by the 
Committee on Oversight and Government Reform, published September 7, 2016, 
doesn't even mention COBOL. Journalists (I'm assuming there are more articles, 
it's "easy" journalism) quote the report, quoting them. Self-referential, 
self-defining. Just meaningless. 

The report on the OPM breach doesn't even go as far as to say that access was 
gained to a Mainframe (in terms of a hack). What is clear is that the hackers 
(at least two) spent years, yes, years, wandering about various Windows servers 
belonging to OPM. They exfiltrated (my word of the day) documents relating to 
the Mainframe system. Mmm Powerpoint, Visio, XLSX, etc... Enough on OPM.

Now, Research is closely related to Brain Science and Rocket Surgery. If you 
can do it, you're really cool, and will be recognised above the mundane who 
only have to deal with known facts. 

However, Bad Research is related to what? Bad Journalism? Great.

Take "security by antiquity". Anyone ever heard of that? I put that into a 
search box along with the word computer and got 438 results. I put "security by 
obscurity", a term that I've heard of, and which includes being unaware of the 
enormous amount of documentation IBM provides publicly, and computer into the 
same search box and got 38,000 hits (38,417.83 in Research Terms).

So, build a Straw Man, then set fire to him, to general applause.

Take "legacy system". If you are writing for someone else, and you use jargon, 
or terminology, or concepts which are not clearly defined and accepted, then 
you define, exactly, how you use those terms. Because otherwise the use is 
meaningless.

Obviously "legacy" means Mainframe/COBOL. Except they talk of migrating 
"legacy" to the Cloud. So obviously they don't mean Mainframe/COBOL. Or, 
perhaps more accurately, they have a version of Lewis Carroll's Humpty Dumpty: 
"any word or phrase means exactly what I mean it to mean at that moment, even 
if contradicted shortly thereafter, and contradicted further several times 
later".

In Bad Research, look for figures with pin-point accuracy: "increased by 1,121 
percent". Increased by what? What does that final one percent mean? Or even the 
final 20%?

Let me define "information about computers ages very quickly" to mean "in 
situations where the fundamentals of what you are talking about change very 
rapidly, discussion that is five years old may be useless". Let me be generous 
and extend that to 10 years, else the main publication they refer to, from 
2009, is outside the range. Let's say every computer-related paper they 
reference which is older than 10 years would have to be seriously questioned for 
its use in this context. Whoops. That puts a lot of stuff under question.

Surely "criminal behaviour" doesn't change so fast? Oooh. Economic criminal 
behaviour. Relating to hacking. How much does it cost these days to get a 
domain, a laptop and some harddisks/sticks? So that has changed, as rapidly.

Oooh. Another problem. The whole OPM thing is supposed to be done by either 
"hacking groups" who just don't like government/business and material 
consequences are perhaps an aside, or "hacking groups" specifically backed by a 
certain foreign government. Neither of these fit into ordinary "criminal" 
analysis, and no case is made in the research for why anything should fit into 
the criminal analysis. So scratch all that junk.

Table 4. I can't make head or tail of it, but at least Table 2 defines 
incidents. Of the eight categories, four have nothing to do with "cyber 
criminals": Improper Usage; Unauthorized Equipment; Policy Violation; Non-Cyber 
Incidents. Taking out all those does what for 1,121 percent?

If the paper were coherent and internally consistent, I'd go on. But it isn't, 
so I won't. 



Re: Need help understanding use of CEEPIPI from non-LE assembler main programs

2017-03-16 Thread Bill Woodger
You have an Assembler program, and once it has called and returned from one 
COBOL program, it goes on to call and return from other COBOL programs (or 
other invocations of the same program)?

You'll want to look at start_seq and end_seq.



Re: IBM Enterprise COBOL for z/OS, V6.1 supports the continuous delivery model for new features

2017-03-13 Thread Bill Woodger
On Mon, 13 Mar 2017 10:23:12 -0500, John McKown  
wrote:

>​I took a quick look at XPLINK. And you're right, that's a whole 'nother
>kettle of fish. I basically understand the why, as explained in the LE
>manuals. But why COBOL decided to go with the same, other than
>inter-operation with C, is beyond my tiny (and shrinking) mind.​ Even with
>nested COBOL programs, I don't see COBOL programmers writing "tons" of
>"itty bitty" COBOL programs. But C/C++ programs do that a lot, especially
>C++ programmers.
>

COBOL programmers traditionally use paragraphs/SECTIONs and PERFORM for the 
itty-bitty.

There is an interesting development. For "contained/nested" programs, the 
optimiser can now "inline" the CALLs, so that a CALL to a contained program 
looks like a PERFORM of a paragraph/SECTION. Previously with Enterprise COBOL, 
there was a lesser overhead with a CALL to a contained/nested program than to 
an external program.

So you could now have lots of itty-bitty (contained) programs, which behave 
like paragraphs/SECTIONS but with "localised" data-names.

I don't know (no access to V5+) exactly how this pans out (there may be limits 
to how much can be inlined, as with paragraphs/SECTIONs themselves) but it may 
offer a "different" way to develop COBOL programs. 
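
For anyone who hasn't seen one, this is the shape of a contained program
(names invented; whether the optimiser actually inlines a given CALL is its
decision, not yours):

 IDENTIFICATION DIVISION.
 PROGRAM-ID. OUTER.
 DATA DIVISION.
 WORKING-STORAGE SECTION.
 01  WS-COUNT                 PIC 9(4) BINARY VALUE ZERO.
 PROCEDURE DIVISION.
     CALL 'INNER' USING WS-COUNT
     GOBACK.
* Contained program: its data-names are local to INNER
 IDENTIFICATION DIVISION.
 PROGRAM-ID. INNER.
 DATA DIVISION.
 LINKAGE SECTION.
 01  LS-COUNT                 PIC 9(4) BINARY.
 PROCEDURE DIVISION USING LS-COUNT.
     ADD 1 TO LS-COUNT
     GOBACK.
 END PROGRAM INNER.
 END PROGRAM OUTER.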



Re: IBM Enterprise COBOL for z/OS, V6.1 supports the continuous delivery model for new features

2017-03-13 Thread Bill Woodger
From the COBOL Cafe: 
https://www.ibm.com/developerworks/community/forums/html/topic?id=e055e63f-4e61-4502-b04c-db9b3d89d213

Allan Kielstra, from Markham, on a question of whether COBOL V5+ uses the 
FASTLINK calling convention.

'This is a bit confusing.  Even I was confused while I was implementing it.  
:-)  There are a number of variants of the basic "OS Linkage."  That's the 
linkage that uses R1 as a pointer to a block of incoming arguments (or outgoing 
parameters.)  One of those variants is called "C private," and it is the one 
that is used by C.  To tell that you are C private, you start by "pretending" 
that you are fastlink.  All the magic is here in the PPA1:

 0003F8  90 =AL1(144)   Flags


That's the program flags at offset hex 18 in the PPA1.  The value 1 in the top 
bit says "fastlink stack frame layout."  But that is "waved off" by the value 
001 in the next three bits.  That means "C private" convention.  So we always 
generate a 9x for that field of the PPA1.   (In fact a 90.)

That note about returning a doubleword binary item is something that did change 
between V4 and V5.  The C private calling convention requires that such items 
be returned in storage whose address is passed as a hidden first argument.  The 
V4 compiler didn't implement that quite right but it did everything else 
correctly for C private.  The V5/V6 compiler also implements C private and it 
adheres to the specification correctly even in this case.

So the bottom line is:  we use C private and not fastlink for COBOL.'



Re: IBM Enterprise COBOL for z/OS, V6.1 supports the continuous delivery model for new features

2017-03-13 Thread Bill Woodger
On Mon, 13 Mar 2017 09:37:15 -0500, Tom Marchant  
wrote:

>Unless IBM has changed their direction, 64-bit Cobol will only be useful for 
>new applications. It will not interact with existing code unless that code is 
>also converted to AMODE 64.
>
>The reason for that is that 64-bit Cobol will only be supported with 
>XPLINK-64. The design of XPLINK-64 makes it incompatible with 31-bit XPLINK. 
>XPLINK-64 can call non-XPLINK programs, but since it passes a save area 
>located above the bar, it can only call AMODE 64 programs.
>
>XPLINK is touted as a performance improvement over standard linkage. The small 
>improvement in performance makes a big difference with C programs, with its 
>tendency to create very small subroutines. However, the cost of calling a 
>program that uses standard linkage is considerably higher.
>
>Every time an XPLINK program issues a GET or PUT, it has to make that 
>transition.

Unless something really dramatic happens, which means unequivocal benefit for 
everything from 64-bit addressing for a COBOL program, a reasonable expectation 
is of at least one to two decades where almost all COBOL programs will continue 
to use 31-bit addressing.

As I understand it, if there are sufficient exceptional cases where clients 
want 64-bit addressing, and what they ask for is genuinely resolved by 64-bit 
addressing, and there are enough such requests, then Enterprise COBOL, in a 
future release, will have the option (probably as a compiler option) to 
generate 64-bit addressing executable code from COBOL source programs.

Enterprise COBOL: no longer only knows about ESA OP-codes; uses Program 
Objects; has a CALLINTERFACE compiler directive; is geared up to react quickly 
(relative term) in response to change (V6.1, supporting ARCH(11) was announced 
on announcement of z13); probably lots of other stuff. V5 wasn't only for 2013, 
but included a lot of "enablement" for future change.



Re: IBM Enterprise COBOL for z/OS, V6.1 supports the continuous delivery model for new features

2017-03-12 Thread Bill Woodger
Mike, is that the top of a list of performance/usability improvements for 
64-bit addressing in general and in isolation? Or for a combined 
31-bit-vs-64-bit, so that the difference in paging outweighs other losses?

A quote from Tom Ross, from a discussion here on 15 January 2015, which shortly 
afterwards went off-COBOL-topic...

"AMODE 64 COBOL is still being worked on here at IBM.

I (like the other poster) would like to know what you would do with AMODE 64 
COBOL?
Also, does everyone realize that AMODE 64 code will run slower than AMODE 31 
code?
We assume that AMODE 64 COBOL will be used for very specialized one-off cases
to solve specific business problems, and that in general 99% of code will be
compiled for AMODE 31 even after we ship AMODE 64 COBOL.

  Unlike AMODE 31, which we expected EVERYONE to move to (still waiting :-) we
do not think very many users will need AMODE 64 in the next 10-15 years.
We are gathering use cases and verifiable needs for AMODE 64 COBOL, so if
you know of any, please SHARE!  (get it? :-)"

Unfortunately Tom did not return to the conversation to take up some of the 
questions raised.

To turn around the questions asked about inter-language communication, what 
benefits are 64-bit addressing C/C++ and Java (and "software") bringing, and 
are they general or specialised?

At the moment, I agree with Tom Ross (which is by no means always the case) and 
expect that few requirements would actually need 64-bit addressing in COBOL. 
When it becomes available, I seriously hope that it is only used when needed, 
not "change that compiler option now! We must have 64 because it is sexy".

As well as plain slow-downs (unless offset by the no-paging and whatever else), 
for existing systems there will be subtle (worse) and not-so-subtle problems 
with pointers, indexes, index-data-items, and others. New calling-convention, 
fix all those Asm programs out there which are serving COBOL programs with 
information on number of parameters and name of CALLer, and such-like.

Large amorphous lumps of data may well work best with 64-bit addressing for 
COBOL. It is unlikely that existing code if recompiled with 64-bit addressing 
COBOL will benefit.



Re: IBM Enterprise COBOL for z/OS, V6.1 supports the continuous delivery model for new features

2017-03-12 Thread Bill Woodger
Decimal floating point is nothing to do with being "64-bit" or not.

The compiler is prepared for 64-bit when customer need arises.

V7 is coming. I don't know when, or what it contains, but it contains something 
to be V7 not V6.n. 

If it were to be 64-bit addressing, I doubt that... people... will be writing 
to unequivocally applaud that. Expect the snarky "16 years late" (or any other 
number, it doesn't really matter) at least. At worst it will be "more proof" of 
something-or-other (it doesn't matter what).



Re: IBM Enterprise COBOL for z/OS, V6.1 supports the continuous delivery model for new features

2017-03-11 Thread Bill Woodger
On Sat, 11 Mar 2017 09:46:30 -0600, Paul Gilmartin  wrote:

>>>
>>> IBMR Enterprise COBOL for z/OSR is a leading-edge, 
>>>
>(except for 64-bit)
>

Leading Edge is just sales waffle, no content.

Why, specifically, would "64-bit" (whatever you mean by that) make COBOL 
better? You afraid of the playground taunts of "our COBOL has 64-bits yours 
doesn't"?

Enterprise COBOL is now, with the re-write, prepared for 64-bit addressing. 
That doesn't mean there's just a button to press, but means that when someone 
(clients) think it is a good idea, for good reasons, to have the option to do 
that for COBOL, then it is now possible to change the Enterprise COBOL to do 
that.


Leaving that aside, why? V5+ took the maximum size of a single storage SECTION 
of a COBOL program from 128MB to 2GB. Even before that, you could (presuming 
you had access to it) reference every single byte of 31-bit-addressable storage 
in a single COBOL program.

You want flames spray-painted on the side otherwise you feel you are somehow 
being let down?

You think it is more "some phrase to make things seem cooler" to be "64-bit" 
even at the cost of program performance? 

You want something on like the spoiler on a non-racing car for Enterprise 
COBOL? Why?

Ah. You don't use COBOL, do you? It's not you afraid of the playground 
chanters, it's you being one.



Re: curious: why S/360 & decendants are "big endian".

2017-03-09 Thread Bill Woodger
Four-and-twenty is not poetic, it is archaic, with continuing regional use in 
the UK. Although probably originally more thorough, I've only heard it used 
with 20. I grew up with five-and-20-past and five-and-20-to for the time. I 
didn't pick it up myself. Also for non-time things, but only with 20.

What's the French for 83? Four-twenties-three. What if the 360 had been 
developed in Toulon, or Lincolnshire (the real one)?



Re: GSE UK - Large Systems Meeting

2017-03-08 Thread Bill Woodger
Thanks. After a brief period of DNS telling me the site didn't exist, it works. 
But, the agenda is still the same link.

Just a clarification, I wasn't trying to suggest GSE wouldn't let me attend, 
which I now see would be a reasonable interpretation of what I wrote :-) I was 
putting heavy emphasis on "they" in my mind, and casting mental glances towards 
The Management. However, none of it came out that way on the actual paper. 


On Thu, 9 Mar 2017 11:22:42 +1100, Clem Clarke <clementcla...@ozemail.com.au> 
wrote:

>Same!
>Here's a link to the main webpage:
>http://www.gse.org.uk/mainsite/content/content_events.php
>And a link to the news page which has more of a description.
>
>Clement Clarke, Author of Jol, JCL+
>http://www.oscar-jol.com/
>
>Bill Woodger wrote:
>> Well, it could be just me but I get 366k of something which provides links 
>> to local files which don't exist. Anyway, I doubt they'll let me attend.
>>



Re: GSE UK - Large Systems Meeting

2017-03-07 Thread Bill Woodger
Well, it could be just me but I get 366k of something which provides links to 
local files which don't exist. Anyway, I doubt they'll let me attend.



Re: GSE UK - Large Systems Meeting

2017-03-07 Thread Bill Woodger
The link doesn't work very well for me.



Re: Check out Massive Amazon cloud service outage disrupts sites

2017-03-03 Thread Bill Woodger
On Fri, 3 Mar 2017 09:30:11 -0600, Elardus Engelbrecht 
 wrote:

>Vernooij, Kees (ITOPT1) - KLM  wrote:
>
...
>
>...giving a link to this [honest] post mortem by the AWS:
>
>https://aws.amazon.com/message/41926/
>
>Just a simple lame typo... ;-)
>
>Groete / Greetings
>Elardus Engelbrecht
>

The link to the Amazon release was in the article mentioned yesterday. I'm not 
sure "honest" is the exact word I'd use to describe what Amazon writes :-). 
There's also some irony (for me) that the most obvious things on that web page 
are "by the way, take up our service" and "hey, you can even do it for free".

Here's an example of how "well crafted" the item is:

"Finally, we want to apologize for the impact this event caused for our 
customers. While we are proud of our long track record of availability with 
Amazon S3, we know how critical this service is to our customers, their 
applications and end users, and their businesses. We will do everything we can 
to learn from this event and use it to improve our availability even further."

Why "finally"? Isn't that the first thing they want to do? Why is it an 
"event", which doesn't sound very bad? After all, event happens, it's just 
often spelled differently in that phrase.

And it is not lessons learned to improve availability. It is to " improve our 
availability even further". So it was a good thing.

So, full disclosure, everything in the open. Whoops. Somehow it is convenient 
not to mention or address HOW DID THAT EVER HAPPEN IN THE FIRST PLACE. 

As has been said, don't you test it first? With something of 
ever-increasing-scale you don't even rely on "well, it worked OK six months 
ago".

"The Amazon Simple Storage Service (S3) team was debugging an issue causing the 
S3 billing system to progress more slowly than expected."

They were "debugging". It was a "billing" problem. Something causing the 
billing to "progress more slowly than expected" (does that really sound so 
bad?). Debugging billing on a live system, and they lose vast numbers of 
business-availability-hours across vast numbers of websites? Debugging? Really? 
Seriously? And they can get away with that? 

Yes, it's all in there. Sort of. Standard PR technique to reveal "everything" 
so that no-one digs into the revelations, because the revelatory work of the 
journalist is already done by Amazon themselves. Move along, please, nothing to 
see here. 



Re: DFHSORT - Date Display of Previous Month

2017-03-02 Thread Bill Woodger
Kolusu works for IBM's DFSORT. He is not going to comment on the particular 
competing product that you use.

What does it say in the manual? Can you show what you tried, how it failed, and 
more exactly what you want to do, with some sample input, expected output, and 
output you receive?



Re: Question about PARMDD

2017-02-27 Thread Bill Woodger
Sorry, Allan, one of those occasions when reading all of the words prior to 
jumping is good...



Re: Question about PARMDD

2017-02-27 Thread Bill Woodger
On Monday, 27 February 2017 15:00:03 UTC+1, Allan Staller  wrote:
> No. IBM chose not to break thousands upon thousands of programs that were 
> perfectly happy with 100 byte parm fields, provided via JCL.
> They added a new mechanism for those program, where 100 bytes was not 
> sufficient.
> 

Unless you change the JCL to use PARMDD on the EXEC instead of PARM on the 
EXEC, nothing changes.
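
To illustrate the change (step, program and data set names invented):

//STEP1    EXEC PGM=MYPROG,PARM='UP TO 100 BYTES HERE'
//*  becomes:
//STEP1    EXEC PGM=MYPROG,PARMDD=LONGPARM
//LONGPARM DD   DSN=MY.PARMS(MYPROG),DISP=SHR

The data set pointed to by PARMDD can supply up to 32760 bytes of parameter
data; the old PARM stays limited to 100.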

If you make that change for no purpose, and then the program is doing something 
which relies on there being 100 bytes of data as a maximum implicitly, then you 
may have a problem. But how is that IBM's fault? No-one forced the JCL change.

If you don't change the JCL, the program expecting a maximum of 100 bytes and 
never needing any more than that will work as designed for the next... well, 
forever.

Have you got an example from one of the thousands and thousands of breaks 
caused?



Re: Question about PARMDD

2017-02-24 Thread Bill Woodger
The exact possibilities for the content of a PARM in a JCL deck depend on an 
interaction between "JCL" and the data the parm represents, (as David W Noon 
was keen to impress in the other thread on this).

What happens for PARM-in-JCL need not happen for PARM-in-PARMDD. PARMDD did not 
exist previously, you have to change the JCL significantly to use it, PARMDD 
data contains no elements of JCL, nothing relating to JCL directly (unless you 
count the symbol-substitution and & compression) and "does some other 
stuff", stripping what it (it being the PARMDD processing) thinks is a line 
number.

Taking your original example and putting it in PARMDD would not work, because 
the program using the PARM data would not understand the extra information.

Taking Charles' example and removing the comments and putting it in PARMDD 
would not work, because some of the PARM formatting in that example is to allow 
for the presence of the comments (those apostrophes).

Taking any PARM which has required extra formatting because the data of the 
PARM conflicted with JCL's understanding of how a line is formed and putting it 
unchanged into PARMDD would not work, because those extra bits of formatting 
for the JCL are still there, and now part of the data, which the program using 
the PARM data will not understand.

PARM is data. You can "embed" a comment in the JCL, but it is embedded in the 
JCL, not the data (in your example the commas have a dual role, they are data, 
and the JCL-processor sees them as JCL).

PARM is data. Sometimes the data requires encapsulation so that the 
JCL-processor understands.

PARMDD is data. There is no requirement or capability for de-encapsulating JCL 
that in an original PARM existed only to enforce a distinction between the data 
and the JCL.



Re: Question about PARMDD

2017-02-23 Thread Bill Woodger
I don't think so either. There is documentation of the possibility of symbol 
substitution, but nothing about placement of commas, nothing about continuation 
symbols, and a piece about embedded blanks being possible. Particularly this 
latter could be affected by the embedding of comments in such a way.

For the processing of the PARMDD data set(s), explicit documentation of the 
processing of comments (if it is done) would be required.

As far as I can tell, if you want to put PARM='LUMP', you could have:

Dataset 1: L followed by blanks
Dataset 2: U followed by blanks
Dataset 3: M followed by blanks
Dataset 4: P followed by blanks

How would you tell, if that is possible, where a comment was? There's no 
specified requirement to "break" at a specific column on on a comma, as one 
long "string" of up to 32760 characters are created.



Re: Fujitsu Mainframe Vs IBM mainframe

2017-02-23 Thread Bill Woodger
Also note that if you see a current job-ad for Fujitsu Mainframe skills in the 
UK, it will be for an ICL Mainframe, running VME, and being distinctly 
different from... anything from IBM. The COBOL is to the 1974 Standard (with 
Extensions, including COMP-5 which allows the definition of "bits").

Various parts of "the government" have huge projects at the moment converting 
their old ICL systems to Microfocus COBOL on "servers".



Re: z390 (Corrected Typo)

2017-02-21 Thread Bill Woodger
"Define one or more directories for source MAC files.  If the option starts 
with + the directories listed will be concatenated with current list.  Multiple 
directories are always separated by +.  This option may also override suffix by 
adding *.sfx."

The way I read that, the first character inside the ( must be a + (as well as a 
+ for each concatenated directory).



Re: 31 vs 24 QSAM

2017-02-16 Thread Bill Woodger
Yes, don't just write using LBI from a program and expect to validate old vs 
new with ISRSUPC in batch.

I know that a PMR has been raised about whether ISRSUPC supports LBI, the 
IEC141I 013-E1 message it produces hints at not.

From what I've heard, using LBI, where it is possible to use it, leads to 
dramatic improvements in throughput. Not surprising, 32k blocks vs 256k blocks.



Re: QSAM using DCBE macro

2017-02-15 Thread Bill Woodger
"The large block interface (LBI) lets your program 

handle much larger blocks with BSAM or QSAM. 

On the current level of the system you can use LBI

with BSAM, BPAM, and QSAM for any kind of data set

except unit record or a TSO/E terminal. Currently 

blocks of more than 32 760 bytes are supported 

only on tape, dummy data sets, and BSAM UNIX files."

Poetry (c) IBM Documentation

Probably all fine for you, until the last line.
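
If you do want LBI, it is requested by coding BLKSIZE on the DCBE rather than
the DCB (labels and the 256KB size here are invented for illustration; see
DFSMS Using Data Sets for the details):

TAPEDCB  DCB   DDNAME=TAPEIN,DSORG=PS,MACRF=GL,DCBE=TAPEDCBE
TAPEDCBE DCBE  BLKSIZE=262144,RMODE31=BUFF

Coding BLKSIZE in the DCBE is what tells the system the program can handle
blocks over 32760.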



Re: New HMC user issue (z12BC)

2017-02-14 Thread Bill Woodger
"Moral: I never had to tell the boss..."

That's until now, right?



Re: COBDSACB control block

2017-02-14 Thread Bill Woodger
As far as I can tell, the only thing would be to get LE or IPCS to format it in 
an appropriate dump, look at it, look at the COBOL program, and then tell us...

It does seem to be a documentary deficiency.

On Mon, 13 Feb 2017 16:45:17 +0200, Steff Gladstone  
wrote:

>Does anybody know where we can find precise and complete information
>regarding the contents of the COBDSACB control block of COBOL?
>
>Thank you,
>Steff Gladstone
>
>--
>For IBM-MAIN subscribe / signoff / archive access instructions,
>send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: SORT - How can I put accumulated value on each output record

2017-02-12 Thread Bill Woodger
The Smart DFSORT Tricks doesn't have anything especially close.

Search-engineing *will* provide something very close. But, as the obvious post 
itself suggests, understanding the code allows the same or similar techniques 
to be known and therefore available for other circumstances. 

Gives me a chance to check if I missed something :-) 

Well, there's that BUILD in the OUTFIL. The point of that, and it may be needed 
here, is that if the report produced is longer than the OUTFIL source data, 
there's a run-time error. Its need depends on the length of the input records. 
If they end immediately after the data shown, a BUILD will be needed (doesn't 
need content, just length), because...

That mistake comes out of my first reply, the thought that you couldn't SUBTOTAL a 
UFF (or CS/CSF). So no need to pre-format, which will mean the record is likely 
short, and the BUILD will likely be needed. 

The other difference is the length of the sequence number. In this case I can't 
see the need of a sequence number beyond one in length, to cause the break each 
time.

dfsort "running total"

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: SORT - How can I put accumulated value on each output record

2017-02-12 Thread Bill Woodger
Yes, you can make this type of output with DFSORT.

DFSORT has "reporting features" on OUTFIL. This provides SUBTOT/SUBTOTAL, which 
is to provide running-totals.

SUBTOTAL is available on TRAILER1, TRAILER2 and TRAILER3. None of which do 
exactly what you want. If you look at what each does you'll pick TRAILER3 as 
the most likely.

TRAILER3 operates by providing a "total line" on a control-break defined with 
SECTIONS.

You don't have anything likely-looking which is going to securely produce a 
control-break for each record. So you make one.

You use INREC with OVERLAY (for fixed-length records; with variable-length 
records, prepend with BUILD instead) to append a SEQNUM to each record. Note 
that all you need is to guarantee that somewhere on the current record has a 
value which is different from the previous record. So, a SEQNUM with a length 
of one is fine.

SUBTOTAL only understands directly numeric data-types, so you need some 
additional preparation of those two items which are not numeric (although you'd 
get away with the first). This you can prepare with OVERLAY, doing a 
"conversion" (you can use TO= with LENGTH= or EDIT=). You'll need to look at 
data-types, and probably use CSF/FS and UFF. If your number of decimal places 
is not fixed, you'll need some extra.

On the TRAILER3 you include the original data, then the two SUBTOTALs. There 
are standard widths (8 and 15) or you can provide your own specific widths and 
formatting (you'll need to use EDIT to get your "." before a single decimal 
place).

Then, since "reporting features" produce reports, you'll need some things on 
OUTFIL. You don't want Carriage Control characters, so use REMOVECC. It is only 
the totals you want, so NODETAIL.

I'm sure there are examples out there, but you'll pick up a lot if you go 
through the above with the manual.
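As a sketch only (the positions, lengths, formats and 80-byte record length are all guesses, since we don't have the record layout), the steps above might come out something like:

```jcl
//RUNTOT  EXEC PGM=SORT
//SYSOUT  DD SYSOUT=*
//SORTIN  DD ...
//SORTOUT DD SYSOUT=*
//SYSIN   DD *
  SORT FIELDS=COPY
* One-byte SEQNUM to force a control-break per record, plus the two
* non-numeric amounts converted (UFF) to PD so SUBTOT can use them
  INREC OVERLAY=(81:SEQNUM,1,ZD,
                 82:11,10,UFF,TO=PD,LENGTH=8,
                 90:21,10,UFF,TO=PD,LENGTH=8)
  OUTFIL REMOVECC,NODETAIL,
         SECTIONS=(81,1,
           TRAILER3=(1,80,
             SUBTOT=(82,8,PD,EDIT=(IIIIIIT.T)),X,
             SUBTOT=(90,8,PD,EDIT=(IIIIIIT.T))))
//
```

Each record becomes its own section, so each TRAILER3 line carries the original data plus the running totals to that point.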

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Where's my dump?

2017-02-10 Thread Bill Woodger
For the PARM, the setting of Language Environment option CBLOPTS determines 
whether parm information on the left or the right of the / goes to the COBOL 
program.

The CEEOPTS DD is a very convenient way to do it.

Having said that, if it is a simple exercise program, you may be able to bust 
it just by looking at the line of COBOL the offset relates to. If it wasn't 
Friday, you'd have had a string of guesses already (perhaps).

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Program now working, but why?

2017-02-08 Thread Bill Woodger
"Thank Steve, I only mentioned it because Bill had discounted the idea and I 
was pleased with your confirmation it was possible. 
 
:) "

"Discounted" for a reason. The AMBLIST outputs were compared. AMODE is in the 
heading. I'd be more than mildly surprised if the OS/VS COBOL program with an 
Assembler program last assembled in 1989 has an AMODE other than 24. With an 
AMODE of 24, how could you possibly get 31-bit-addressed IO areas from a COBOL 
program? 

On the available evidence, and subject to review if the evidence changed, I 
didn't discount it, I *utterly dismissed it*. Following a fluffy puppy up a 
country lane because in an entirely different situation (causes of S0C4 are 
hardly of a limited number of specific things) a fluffy puppy was the answer 
is... how useful?

Multiple people concurring on a suggestion doesn't in itself make it any more 
valid. In other situations it is spot on, but here?

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Program now working, but why?

2017-02-08 Thread Bill Woodger
On Wed, 8 Feb 2017 10:34:28 -0600, Peter Ten Eyck 
 wrote:

>At this point I am thinking the coding change is required due a difference in 
>how the COBOL compilers work. I was attempting to identify what that 
>difference may be or find something in the migration guide that highlighted 
>it. ...

I think that is essentially correct. The compiler, or the runtime - a more 
slight chance. You've eliminated "data" as an issue, and, except as an 
incidental, the Assembler program.

If R4 is zero and you review my requests, I think you will be there. I don't 
think it is documented in a Migration Guide, but there is some "supporting 
material". 

Of course, I can be wrong :-) There's not enough information to fully support 
what I think, but nothing has countered it yet. Other evidence could do so.

If R4 is zero, then I'm still 100% (that's rounded), sure it is not DATA(31) 
being an issue (unless there is weird code in the Assembler program which 
coincidentally makes R4 zero when whatever circumstances cause this - never say 
never).

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: The "Cobol and 00A Program Check" Story - Some insight gained

2017-02-08 Thread Bill Woodger
Good progress Peter.

The debugger turning the bit on seems to be... wrong. Interestingly wrong ("oh, 
we're in the debugger, let's do something different...") :-)

The bit is only ever set (to zero or one, as appropriate) once by LE per 
"environment that needs to be established" (my wording). If the first program 
is C/C++, the bit is on, and left on. If it is explicitly turned off, it will 
stay turned off. 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Program now working, but why?

2017-02-08 Thread Bill Woodger
And Abend-Aid shows R4 as zero?

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Program now working, but why?

2017-02-08 Thread Bill Woodger
Ah, but we enjoy it. Please don't take it away from us...

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Program now working, but why?

2017-02-08 Thread Bill Woodger
Thanks Peter.

Can you provide the COBOL compile options, the linkedit map for the EXEC PGM= 
program and the AMBLIST output for the original program?

Did you discover why someone suspected the header?

Any code (including whether it is PERFORMed) which CALLs the Assembler program.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Program now working, but why?

2017-02-07 Thread Bill Woodger
"Rather than trying to infer the cause by tweaking everything in sight, how 
about this: Set a SLIP PER trap to catch an SVC dump for the first abend of any 
kind, S0C4 or otherwise. That dump may tell you the exact cause far more 
quickly than trial-and-error with recompiles/reassemblies/logic-changes."

Isn't there already a dump?

If there isn't, I absolutely agree. Todge about with things, and you change 
things.

Program A CALLs Program B. Program B goes into a Big Fat Loop.

LOOP.
do some stuff
IF loopcounter NOT GREATER THAN +5
ADD 1 TO loopcounter
GO TO LOOP.

Programmer "adds a DISPLAY" to B.
No loop.
Programmer removes added DISPLAY.
Loop.
Programmer adds DISPLAY to A.
No loop.
Programmer removes added DISPLAY.
Loop.
Programmer adds DISPLAY to both programs (yes, they did)
No loop.

Faffing-about continues for a couple of days, but the only time it loops is 
with the programs unchanged.

Comes to me with a big smile and "I've found a compiler bug ".

I looked at the above code, and ask what was the last code change he'd made 
before the first loop. He showed me in Program A. Utterly wild subscript, 
MOVEing LOW-VALUES to a one-byte field within a large group item subordinate to 
an OCCURS, conditionally.

Just happened to hit the low-order part of the binary one in the Literal Pool 
of Program B. So, adding the "constant" literal value one, was actually adding 
zero, so the loop could never terminate.

A handful of other scattered single-byte binary-zeros had had no obvious effect 
amongst all his "compiler bug" hunting.

But. Put in a DISPLAY, with a nice literal, prior to the use of +1, and the 
location of the +1 is changed (OS/VS COBOL doesn't define literals in reverse 
order), so now, for the loop, it is +1. Add a DISPLAY to Program A, and the 
Program B code "moves", and again the +1 in the literal pool is preserved.

Cancel, with a dump. Look at the dump. Look at the code for the loop, and 
there's X'0000' where X'0001' should be in the Literal Pool of Program B.

If ever overwriting of executable code is suspected, adding code in front of the 
problem is going to shift it at best, and mask it at worst.

Bust that dump. Guessing and DISPLAY I find of little value. Obtaining all the 
information possible, and going forward with what matches all the information 
(cue various Sherlock Holmes quotes) I find of great value. A dump (and for 
sure a full one) mostly has all the information needed.

Having said that, something that happens with record 17, which causes a failure 
when record 291,347 is reached, is more problematic - except that deduction 
leads you there, and at times surprisingly fast.

The information available so far does not indicate that this can be an 
addressing problem due to 31/24. The AMBLIST Module Summaries are the same. 

OK, if Peter Ten Eyck remains quiet on the subject from now on, there's that 
outside possibility that the same report was compared to itself, and the actual 
new report reveals exactly what people have been suggesting is possible :-)

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Program now working, but why?

2017-02-07 Thread Bill Woodger
Remember, the AMBLIST Module Summary has been checked and confirmed. Even if 
NORENT is not used, dynamic CALLs, multiple load modules, how would they look 
the same, and yet have such a difference?


On Tue, 7 Feb 2017 16:27:14 -0500, Farley, Peter x23353 
 wrote:

>But a COBOL V4.2 recompile with compiler option DATA(31) will put 
>WORKING-STORAGE in 31-bit address ranges rather than 24-bit, causing the 
>24-bit assembler program to fail  S0C4.
>
>I think that is the OP's real issue, but he has to verify the compiler options 
>actually used for the recompile.
>
>Peter
>

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Program now working, but why?

2017-02-07 Thread Bill Woodger
To get the S0C4, something has to reference something outside the buffer 
maintained by QSAM.

I know no details of the internals of QSAM, but it seems reasonable to believe 
that it allocates storage for buffers which are equal in size to the BLKSIZE.

Although it would be possible to construct a scenario where the header is in a 
block of its own (file written by a COBOL program without APPLY WRITE ONLY or 
compiler option AWO, LRECL plus length of header exceeds BLKSIZE, so header 
written in a block of its own), it would not be possible to have the header 
"overdefined" in the program such that referencing the entire header would 
cause storage outside the buffer to be accessed. OK, it could be done with 
reference-modification with an erroneously calculated length, but I can't think 
how otherwise. If it was the 17th record, 23rd, 492nd, possible. With the 
first? I can't get that.

If the header is just shorter than its definition, that won't matter at all, 
because the data picked up will just be from the next record. With the header 
the only record in the block, the data will be just whatever was lying around, 
but it can be referenced without the S0C4. I doubt there is any protection from 
QSAM for "outside the block, but inside the buffer".


On Tue, 7 Feb 2017 14:22:42 -0700, Sri h Kolusu  wrote:

>In this case, because it is the behavior of the header record (which 
>for one I'm assuming is the first record), these would only  be potential 
>issues if the file only consisted of the header, no  other records.
>
>Bill,
>
>Not really.  Assuming that OP's program is indeed dealing with VB files, 
>the header might be a Short record and the MOVE resulted in moving the 
>full length than the actual short record length and hence the S0C4. The 
>detailed records following the Header record *may* be of full length and 
>MOVE statements worked fine. Skipping the short header record *might* have 
>solved the issue. (Just a guess)
>
>Kolusu
>

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Program now working, but why?

2017-02-07 Thread Bill Woodger
In this case, because it is the behaviour of the header record (which for one 
I'm assuming is the first record), these would only be potential issues if the 
file only consisted of the header, no other records.



Of course, the header record can be a coincidence. In changing the code, the 
size of 

On Tue, 7 Feb 2017 13:39:11 -0700, Sri h Kolusu  wrote:

>The programmer added code to bypass the header record on the first 
>file read, now it runs at the development level.
>
>You may want to check this out
>
>http://www-01.ibm.com/support/docview.wss?uid=swg21242182
>
>
>Kolusu
>

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: IBM SR process -- brilliant

2017-02-07 Thread Bill Woodger
If it is any help, I know of another ICN from IBM that probably doesn't have 
anything to do with what you are talking about.

I guess even though extensive, the store of TLAs runs out, and they have to be 
reused, and reused again.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Program now working, but why?

2017-02-07 Thread Bill Woodger
"My guess is that the call for the header record passed the record from the FD 
buffer. And subsequent calls pass the record after it's moved to WORKING 
STORAGE".

If the AMBLIST output looks good (comparison of new to Production), then I'm 
pretty sure there is no direct problem with addressability of the data "under" 
the FD as far as the Assembler program is concerned. The IO areas will be 
within the addressable area of where the program is loaded.

However, it can't hurt to see the compiler options used, and the 
linkedit/binder options used (used, not just those supplied to override).

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Program now working, but why?

2017-02-07 Thread Bill Woodger
Let's guess that the program is statically bindered/linkedited.

Firstly, it is entirely possible that even 100% identical programs in 
Production and Test "work" and "fail". After all, that's how we get Production 
failures. So, same data, or not?

You seem just ever-so-slightly guarded about messages. Were there *any* 
messages from the COBOL program?

There are some "behavioural" changes from OS/VS COBOL to later ones, but they 
are in the Migration Guide (or at least "a" Migration Guide).

Why did someone decide to code-around the header record? Part of a strategy of 
ignoring all records one by one, or something more purposeful?

It sounds like the header is causing the problem. Is the header the "same" 
(length, types of values) in Production? My guess for now is that something 
about the header is causing an issue in the Assembler program.

Mmm... single CALL to the Assembler (in a PERFORMed procedure)? Multiple 
physical CALLs? If the latter, do all CALLs have the same number of parameters? 
There's a possibility there, but stretching to make it a guess with information 
so far.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: DFSORT Extract records

2017-02-07 Thread Bill Woodger
Changing the data to match the condition, rather than changing the condition to 
match the data, is... unusual. Was the request to "also change any lowercase in 
the data to uppercase"? If not, it is easier, and less resources are used, to 
just change the search value, as suggested a couple of times already, and get 
the correct output.

On Tue, 7 Feb 2017 09:57:03 -0600, Ron Thomas  wrote:

>Ok i have translated the whole to Uppercase  as below  and then extracted the 
>records .
>SORT FIELDS=COPY
>OUTREC FIELDS=(1:1,2996,TRAN=LTOU)
>
>Thanks
>Ron T

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: COBOL/LE question

2017-02-02 Thread Bill Woodger
The IGZ0268W is a warning message (no kidding). If you are using up to 
Enterprise COBOL V4.2 (which you are), it is just a warning that some time in 
the future (going to V5+, or perhaps with some future LE) you *will* have a 
problem. If you are using V5+ (which you are not) it is a problem right now.

You have a different case from it just being COBOL. Your combined COBOL/ASM was 
previously running non-destructively with LE. And now it doesn't.

First shot would be to recompile one COBOL program that uses the Asm, and see 
if you then get cooperation. If not, someone has to visit with the Asm.

If you are simply not allowed to recompile (cast-iron policy), then just pack 
up and go home early. It is highly unlikely that any simple magic exists to fix 
it in such a way that you can cease to wonder, "OK, but what the heck else 
could be going on while it 'apparently works' (RC0)?"

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: COBOL V5.2 question: INITCHECK option incompatible with OPTIMIZE(0)? (Msg IGYOS4021-W)

2017-01-30 Thread Bill Woodger
On Mon, 30 Jan 2017 09:27:36 -0800, Tom Ross  
wrote:

>>We are beginning the transition to COBOL V5.2 from V4.2 and exploring the
>>new options available for debugging.
>
>>We just discovered that the INITCHECK option is incompatible with
>>OPTIMIZE(0).  Using both options generates this warning-level message:
>
>>IGYOS4021-W   The "INITCHECK" option was discarded due to option conflict
>>resolution.  The "OPTIMIZE(0)" option took precedence.
>
>>There is no restriction documented for INITCHECK in the V5.2 Programmer's
>>Guide and no mention of this incompatibility in the section on incompatible
>>compiler options either.
>
>INITCHECK takes advantage of control flow analysis done by OPT(1|2) and so
>cannot function with OPT(0).
>
>We are trying to be aggressive about back-fitting migration assisting new
>features but we can't  update all documentation everywhere.  The Knowledge
>Center is updated every few months and is your best bet.  If we are missing
>documentation, please open a PMR and we will fix it!
>

But the documentation (both the Migration Guide and Programming reference for 
both V6.1 and V5.2) was updated. They even refer to the relevant APARs, and are 
included in "Summary of changes..." dated September 2016.

What those documentary changes don't include is the mutual-exclusivity between 
OPT(0) and INITCHECK.

Are we to read anything into this?:

"|
|
|
|
|
|
| v Invalid data might get different results with COBOL V5 and V6 than in 
earlier
COBOL versions. Some users have found that they get different results with the
newer compilers than with previous compilers, and/or that they get different
results with different OPT settings. These are normally due to invalid data that
is brought into the COBOL programs at run time. One way to find out whether
your programs will have this problem is to follow our new migration
recommendation:
|
| 1. Compile with SSRANGE, ZONECHECK, and INITCHECK, and then run
regression tests.
| 2. Check whether there are any problems:
|
|
| – If no problems found, recompile with NOSSRANGE, NOZONECHECK,
and NOINITCHECK, and then do a small test before moving into
production.
|
|
| – If problems are found, then either correct the programs and or data, or in
the case of bad data in zoned decimal data items, use the ZONEDATA
compiler option to tolerate the invalid data.
|
|
|
Note: The INITCHECK option is available in Enterprise COBOL V6.1 with the
PTF for APAR PI68226 installed, and available in Enterprise COBOL V5.2 with
the PTF for APAR PI69197 installed."

Gives "our new migration recommendation", yet also fails to mention that 
OPT(1/2) must be used to have INITCHECK.

Also, despite the reference in the final paragraph to V5.2, the V5.2 Migration 
Guide does not contain INITCHECK in the "recommendation". Is that because with 
V5.2, as Peter Farley has discovered, there seems to be a "memory usage" issue 
(hopefully alleviated by V6.1)? Or is it the recommendation, just not included 
in the actual text?

Looking at the KnowledgeCentre, no reference to INITCHECK is made in the 
migration recommendations:



Enterprise COBOL for z/OS
Enterprise COBOL for z/OS 6.1.0
Migration Guide
Migration strategies
Migration recommendations to Enterprise COBOL V5 and V6

"Compile with SSRANGE, ZONECHECK, and OPT(0) for initial code changes and unit 
tests."

If the KC is the go-to place for current documentation, does this mean the 
recommendation in the V6.1 Migration Guide is withdrawn, returning to the V5.1 
recommendation? Or is the V6.1 OK, the KC not, and V5.2 missing the current 
recommendation? Or is the V6.1 correct, the V5.2 correct, and the KC wrong?

There also seems to be an issue with a previous PMR on the subject anyway. 
Apparently the claim was (paraphrase) "we could do INITCHECK at OPT(0), but 
since OPT(0) is for compiler performance, INITCHECK would slow things down and 
defeat that". Now you say that INITCHECK is a side-effect of the work already 
done for OPT(1/2) so fat chance it could have been done at OPT(0) and just 
wasn't for "compiler performance".

In the interim to disentangling the existing documentation, could you clarify 
"our new migration recommendation" for V5.2 and V6.1, please? The KC casts 
further knots on the subject at the moment, so turning to that is just not 
giving an answer that can be used directly.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: COBOL V5.2 question: INITCHECK option incompatible with OPTIMIZE(0)? (Msg IGYOS4021-W)

2017-01-27 Thread Bill Woodger
"Actually, Tom Ross in his migration presentation recommends this procedure:..."


Yes, unfortunately that was May 2016, and INITCHECK appeared in September 2016.

The reference I was making was to the V6.1 Migration Guide. The advice seems 
not to be in the MG for V5.2, although INITCHECK is there.

With V6.1, the compiler "back end" uses 64-bit addressing, allowing larger 
programs (which are generally "generated" programs) to be optimised than with 
V5.2. The only reason it is V6, not V5.3, is because of this change.

Have a look at Youtube for Enterprise COBOL V6.1. There's an Introduction, 
which is only two weeks old, and a two-month old "Migration Assistant" video. 
I've not seen either yet...

If it were easy to use INITCHECK with OPT(0), but the compile would take 
longer, why didn't they just do it? It's a migration and perhaps "initial 
compile" thing? If it takes longer, it is then your choice to use it or not.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: COBOL V5.2 question: INITCHECK option incompatible with OPTIMIZE(0)? (Msg IGYOS4021-W)

2017-01-27 Thread Bill Woodger
On Thu, 26 Jan 2017 20:56:27 -0600, Mike Schwab  wrote:

>Initially, the numeric / zero checks would not work like before.  I
>know there is an parm to make it work like before in 6.1.  Not sure if
>they applied it to 5.2.
>
>IBM Cobol Documentation page.  Click on Version (6.1) then download
>Migration guide.  Compare to 5.2 guide.
>http://www-01.ibm.com/support/docview.wss?uid=swg27036733 & download PDF.
>
>
>

INITCHECK is new, so can't really have a different behaviour to before.

I think what you are referring to is the general problem that using "bad data", 
which then gives "undefined results" (perhaps rather "not the expected defined 
results") can change between compilers (up to V4.2 vs V5+).

IBM's recommendation is to compile with SSRANGE, ZONECHECK and INITCHECK and 
run regression-tests (actually, since INITCHECK is a compile-time-only thing, 
check the compile listing before running anything for that...). Even so, there 
should be a mention that OPT greater than zero is required for INITCHECK.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: COBOL V5.2 question: INITCHECK option incompatible with OPTIMIZE(0)? (Msg IGYOS4021-W)

2017-01-27 Thread Bill Woodger
Although I can't see it documented, I suspect that INITCHECK can only be 
offered as a side-effect of the complex analysis which is already done for the 
higher levels of optimisation, and which is not done at the lowest level of 
optimisation (OPT(0)). The message is probably correct, but the relationship 
should be documented in the Programming Guide.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Cobol and PreCompile for CICS and DB2

2017-01-11 Thread Bill Woodger
I'm not quite sure what you are asking. Do you mean, which release of COBOL 
first included the integrated CICS translator within the compiler itself? Or 
are you asking something else?

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: VSAM: Why MAXLRECL?

2017-01-11 Thread Bill Woodger
With the paucity of information in your original post it definitely seemed 
an... odd... idea.

I hate being "near" limits. To give an example, you have found something that 
says your can have 32761 for your data in VSAM (before extending yourself), and 
yet you can't have that amount of data for plain QSAM. Hit that limit, and next 
thing "rats we can't copy those bigguns, and rats immediately we need an extra 
byte for the log because of public holidays in Tanganiyka".

Buffers, CI-Size, perhaps freespace, index levels - but you have a log file, so 
those are less of an issue.

Personally I'd go with multiple-physical-one-logical and stick at your 
around-about 500-bytes. "Key" on each record, sequence number, just write out 
as many as you need (how many you need is known at the time, and there are no 
issues with intermixture, and even if there were, you have the information 
within the data to put things back together - (unlike "spanned" records)). 
Consider going fixed-length, ESDS. I assume that you are writing through a 
sub-program, and going for "write it quick, it's a log-record after all"?
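By way of illustration only (every name and size here is invented, not from the original post), the multiple-physical-one-logical layout can be as simple as:

```cobol
       01  LOG-RECORD.
           05  LOG-KEY.
      *        Timestamp plus sequence ties the pieces of one
      *        logical record together, in order.
               10  LOG-STAMP        PIC X(16).
               10  LOG-SEQ          PIC 9(3).
           05  LOG-DATA             PIC X(481).
```

Write as many 500-byte pieces as the logical record needs, incrementing LOG-SEQ for each; any reader reassembles on LOG-KEY, with no dependence on physical adjacency (unlike "spanned" records).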

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: "task level" TIOT & XTIOT? A crazy thought?

2017-01-10 Thread Bill Woodger
And what would you want in data-name-1? A DDNAME, like the PL/I? A data set 
name? A partial data set name (with some magic for full expansion)? Something 
else?

What do you feel, with your selected option for data-name-1, would be some 
example uses for this?

Again, for me, just because something is in COBOL 2014, doesn't naturally mean 
it should get Enterprise COBOL development resources assigned to it. Clearly, 
unlike in 1985, the number of "sites" using COBOL on Linux/Unix/Windows is 
greater than the number of Mainframe sites. COBOL 2014 and future developments 
of the COBOL Standard will reflect this.

As you've pointed out previously, with XML, Json and UNBOUNDED, IBM is happy to 
be COBOL-85-with-extensions-from-2002-2014-plus-our-own.

So, good choices from 2002/2014, yes. What is it that would make the ability to 
use a data-name in the ASSIGN good?

On Tue, 10 Jan 2017 23:01:59 +, Frank Swarbrick 
 wrote:

>For whatever its worth, the latest COBOL Standard offers the following syntax, 
>which I believe meets this requirement at the language level:
>
>
>SELECT file-name-1 ASSIGN USING data-name-1.
>
>
>"3) The ASSIGN clause specifies the association of the file connector 
>referenced by file-name-1 to a physical file identified  by  device-name-1,  
>literal-1,  or  the  content  of  the  data  item  referenced  by  data-name-1.
>
>3b) When the USING phrase of the ASSIGN clause is specified, the file 
>connector referenced by file-name-1 is associated with a physical file 
>identified by the content of the data item referenced by data-name-1 in the 
>runtime element that executes the OPEN, SORT, or MERGE statement."
>
>
>Someone want to open an RFE?
>
>
>Frank

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: COBOL5 and ceedump

2017-01-10 Thread Bill Woodger
Yes, a deliberate area left specifically to catch any overflow and for no other 
purpose, similar to a patch-area.

Technique arises before SSRANGE exists. If SSRANGE were subsequently used, it 
would/could catch the problem, but it is not as simple as the APAR text makes 
out (from memory of the text). So SSRANGE would "help" but not necessarily be 
sufficient, and remember that in some cases the original code is deliberately 
overflowing (to overcome an old compiler limitation). Then you get into "change 
that costs (development/testing/implementation) but achieves nothing", which is 
sometimes difficult (but important) to "sell" to management. Those who didn't 
buy (and who migrated early to V5+) reaped that whirlwind.

Note that with V5+, the LE runtime option CHECK no longer influences anything. 
You can't deploy with SSRANGE(ON) and run with CHECK(OFF). Amusingly (to me 
anyway) people who run CHECK(ON) in Production complained about the additional 
"overhead" of the check for CHECK being ON or OFF.


On Tue, 10 Jan 2017 04:10:12 -0600, Elardus Engelbrecht 
 wrote:

>Gibney, Dave wrote:
>
>>A frequent, even standard way to get past the size limit of a COBOL array, or 
>>more appropriately table,  was to define more "empty" space after it. Since 
>>subscript bounds checking was always turned off for performance reasons, you 
>>could effectively address substantially larger than the size limit of any 
>>single 01 item.
>
>Hmmm, it reminds me of a sort of patch area?
>
>I'm curious, what about usage of SSRANGE compile option and CHECK(ON) runtime 
>option to avoid going over an array/table and getting an ABEND? Or will that 
>subscript bounds checking not help you here?
>
>I'm also puzzled by that APARs and the comments in this thread. (We're still 
>at COBOL v4, 5655-S71)
>
>Just following this thread out of curiousity.
>



Re: AW: Re: COBOL5 and ceedump

2017-01-10 Thread Bill Woodger
There is a conflation of two issues, with the first being in two parts.

Issue 1) Part a) Can't define enough storage for a table due to COBOL limits.

This is only an issue for "old" programs, where limits for table-size were 
imposed by the compiler. The "expedient" approach was to define consecutive 
tables, the subsequent ones being "catch-all" for overflow from the main table. 
More resilient approaches would have been available, but would have required 
"more thought".

As a hedged illustration only (COBOL is the real setting; the buffer, sizes 
and names here are invented), the layout amounts to two logical regions carved 
from one declared piece of storage, so that indexing past the "main" table 
lands, by design, in the catch-all area rather than in the weeds:

```c
#include <assert.h>
#include <string.h>

/* One backing buffer: a "main table" of MAIN_ENTRIES records followed
 * immediately by a "catch-all" overflow area, mimicking the old COBOL
 * habit of defining a second table directly after the first. */
#define ENTRY_LEN     8
#define MAIN_ENTRIES  100
#define OVFL_ENTRIES  20

static char storage[(MAIN_ENTRIES + OVFL_ENTRIES) * ENTRY_LEN];

/* Store entry i (0-based). Indexing past MAIN_ENTRIES deliberately
 * lands in the overflow area -- well-defined here only because both
 * regions are carved from the single declared buffer. */
static void put_entry(int i, const char *val)
{
    memcpy(storage + (size_t)i * ENTRY_LEN, val, ENTRY_LEN);
}

int main(void)
{
    put_entry(99,  "LAST-MAI");   /* last slot of the main table     */
    put_entry(100, "OVERFLW1");   /* first slot of the overflow area */

    assert(memcmp(storage +  99 * ENTRY_LEN, "LAST-MAI", ENTRY_LEN) == 0);
    assert(memcmp(storage + 100 * ENTRY_LEN, "OVERFLW1", ENTRY_LEN) == 0);
    return 0;
}
```

The fragility is obvious in this form: nothing but the programmer's promise 
keeps index 100+OVFL_ENTRIES from landing somewhere else entirely.
<imports>
</imports>
<test>
put_entry(99, "LAST-MAI");
put_entry(100, "OVERFLW1");
assert(memcmp(storage + 99 * ENTRY_LEN, "LAST-MAI", ENTRY_LEN) == 0);
assert(memcmp(storage + 100 * ENTRY_LEN, "OVERFLW1", ENTRY_LEN) == 0);
</test>

Issue 1) Part b) This table can never fill, so it doesn't matter that it is 
used in 20 sub-programs with no checking - it just logically can't fill

Of course, for some given number of tables, it fills, and it does so at 02:30. 
Temporary expedient is to add an "overflow" table in the main program, and "fix 
it properly" some time later. Which only ever happens piecemeal, when one of 
the sub-programs has to change.

Issue 2) the inexperienced programmer

Prior to V5+, the indexes were (more) safely tucked away in the TGT, out of 
immediate danger of being trashed. It was not impossible to trash them, but 
less likely.

With the V5+ change, a simple overrun using an index could trash the index, 
leading to a complex issue for an inexperienced programmer, even if a more 
experienced programmer may expend less effort identifying the issue (and, 
ideally, not having the issue in the first place).
Prior to V5+, something else would be trashed, but the current value of the 
index would at least be pointing to the trashed value, rather than to somewhere 
wild (which may or may not have subsequently been trashed, and which may even 
be program code, or a system area, or whatever).


Yes, absolutely, never to access storage outside the storage you've defined is 
the best thing to do, but when a beginner does access something in error, it is 
less than helpful that something completely weird may happen (remembering that 
the absolute worst situation is when something "happens" but is not 
noticed/immediately connected to the problem).

The frustrating thing was that up to V4.2 the "extending a table" method was 
covered in the Programming Guide, where OPT(FULL) was discussed (a warning not 
to remove innocent-looking "unused" tables that were used for this purpose). 
With FULL being replaced by a new option, the reference disappeared from the V5 
documentation, but it cannot be argued that IBM (someone) was unaware of the 
issue before it was idly changed.

The reason given for the location was "performance", though I can't really 
visualise why an index "down there somewhere at the end of a table of up to 
999,999,999 bytes" can outperform an index "right here, immediately in front 
of the table", if the overwhelming need was to have the indexes, for the first 
time, intermingled with user-defined WORKING-STORAGE data.

There are also indexes for LOCAL-STORAGE and, of course, for the LINKAGE 
SECTION, where there is no possibility of locating them with the table.

I don't know for 100% where indexes (indices) are located, as I've not got 
access to V5+ nor seen a listing subsequent to the PTFs. They may or may not be 
in "as safe" a place as they were before, but I don't think they'll be moving 
again any time soon, as I think the change was substantial to apply the fix.

As well as the COBOL Cafe discussion, there was a discussion on LinkedIn, which 
got comment from Tom Ross. In the end, the "fix" was unexpected from the 
results of those discussions.



On Tue, 10 Jan 2017 11:27:34 +0100, Peter Hunkeler  wrote:

>
>>A frequent, even standard way to get past the size limit of a COBOL array, or 
>>more appropriately table,  was to define more "empty" space after it. Since 
>>subscript bounds checking was always turned off for performance reasons, you 
>>could effectively address substantially larger than the size limit of any 
>>single 01 item.
>
>
>
>I understand. I also read the head of the thread that Bill posted.
>
>
>
>It seems kind of ridiculous to me to justify all this with "less experienced 
>programmers". I remember when I was told how to program, I was told to 
>always make sure my coded does not go beyond the table. This is nothing 
>difficult to do. There is no excuse not to do it.
>
>
>And as for the "standard way" to cheat the Cobol table restriction (I'm no 
>Cobol programmer, sorry): Cheating is cheating. Shudder But it explains at 
>least why IBM agreed to change the code. Thanks.
>
>
>
>
>--
>Peter Hunkeler



Re: AW: Re: COBOL5 and ceedump

2017-01-09 Thread Bill Woodger
See also here, where a policy-shift was revealed (towards the end of the 
thread) to consider replicating V4 results where reasonable. 
https://www.ibm.com/developerworks/community/forums/html/topic?id=0f54483b-6f83-441d-a5fc-22a3d333dddf=25

Ironically this has included deliberate replication of the bug in that thread.



Re: AW: Re: COBOL5 and ceedump

2017-01-09 Thread Bill Woodger
Unfortunately an old technique was still in use, and hit many clients. See here 
for some detail: 
https://www.ibm.com/developerworks/community/forums/html/topic?id=c476d2c9-0d4e-4073-97c5-6384d8f381c0=25



Re: COBOL5 and ceedump

2017-01-09 Thread Bill Woodger
The full "Fix list for Enterprise COBOL for z/OS" for V5+ is here: 
http://www-01.ibm.com/support/docview.wss?uid=swg27041164

This is segregated by compiler release/version and also includes the runtime 
(Language Environment).



Re: COBOL5 and ceedump

2017-01-08 Thread Bill Woodger
On Sun, 8 Jan 2017 18:51:14 +0200, Steff Gladstone  
wrote:

"In a COBOL5 CEEDUMP, how do I locate the *index* of an array (i.e., an array 
that is defined with "indexed by") in the dump?"

If you consult the "* * * * *   S T A T I C   M A P   * * * * *" in the 
compile listing you should get an OFFSET (and a length; shoot me if it is not 
four).

For COBOL it is a Table, not an Array. The Programming Guide should describe 
the listing.



Re: Fwd: Apache Spark on z listserver or forum?

2017-01-06 Thread Bill Woodger
StackOverflow and other StackExchange sites (like SuperUser) are Question and 
Answer sites. That doesn't mean they don't work, it means they are different. 
They are not a Forum, nor a Mailing List. On SO you'd ask about programming. On 
SU about setting up software. You'd get answers. Which are specific, concrete, 
answers. Comments on questions and answers can be made (once you have 50 
reputation points) to elicit more information or make the speculative "Did you 
read...?".

The arrangement is "community regulated" with Moderators in addition.

A huge advantage (I'd hope it would be) for Apache Spark is that some 
developers of Spark-for-elsewhere attend and are active contributors.

It could work very well, but you have to remember (and be aware of first) the 
rules, for both asking and answering. It is not anything like 
IBM-MAIN-just-with-fancier-stuff.

For developers of a product, stating affiliation in an answer (you can also 
mention in your "profile") is always good, and at times mandatory. Questions on 
IBM's implementation of Spark will be very welcome, and will have a natural 
tendency to "promote". Outright promotion is problematic. Being clear with 
affiliation helps around the borders.


On Sat, 7 Jan 2017 11:16:09 +0800, David Crayford  wrote:

>>> For us, it will be Spark on z/OS..
>> Glad to hear it!  Sounds like you maybe have some questions.  Feel free to
>> mail me directly (ef...@us.ibm.com), or post here until I see what I can
>> get set up for a Spark-z/OS-specific mailing list.
>
>Why not just use StackOverflow? IBM already use SO for WAS Libery
>questions to good effect. IMO, SO is far superior to mailing list
>servers, especially for searching for answers.
>
>
>> Thanks,
>> Erin
>> ef...@us.ibm.com
>>



Re: IEC141I 013-A8: how to read VS data sets?

2017-01-06 Thread Bill Woodger
As part of my experimentation, the one record I wrote was for sure as small as 
80 bytes. That would seem to have been quite "favourable", and no, the 
remainder of the track was untouched by the data which was MODded to it.

I, weakly, remember there was some apparent "wrinkle" with "end of data" but 
assumed it was a "normal" thing, rather than anything specific to what I was 
doing. From memory, of four years ago, with no need of the detail since, when 
the track was full (could not contain another logical record) there was no "end 
of file" (file mark). Something like that. I could be wrong, having made the 
assumption (which seemed to be borne out by the experimentation), I didn't 
research it.

On Fri, 6 Jan 2017 15:41:53 +, J R <jayare...@hotmail.com> wrote:

>Sent from my iPhone
>
>> On Jan 6, 2017, at 10:27, Bill Woodger <bill.wood...@gmail.com> wrote:
>> 
>> 1) short or coincidentally-full  block, unfilled track; 
>
>So, in the case of a favorable TRKBAL, the access method doesn't back up over 
>the file mark and write the first new block in its stead?  
>
>It would have to remove the file mark in either eventuality.  



Re: IEC141I 013-A8: how to read VS data sets?

2017-01-06 Thread Bill Woodger
When DISP=MOD is used and there is existing data on the data set I think there 
are three possibilities (in likelihood order) with the existing data: 1) short 
or coincidentally-full  block, unfilled track; 2) short block, 
otherwise-coincidentally-filled track; 3) full block, filled track (double 
coincidence).

For 1), the first block written for "MODding" will be the first block on the 
"next" track, with the remainder of the final track of the previous content of 
the data set being "empty", screwing up a "seek" (there will obviously be a 
short block in one case, and a full one in the other).

For 2), the first block written for "MODding" will be the first block on the 
"next" track, and the previous block will be short, screwing up a "seek".

For 3), by the double coincidence, everything will be fine, and there will be 
no evidence from the content of the data set itself that it was ever MODded. 
"seek" works perfectly. May not, or may, on the next run.

It is unproblematic for "seek" if the final block is short, or the final track 
is unfilled, or both. But it is entirely problematic - at least potentially - 
for subsequent data (that which has been MODded) if there is any "gap" of any 
type: embedded short blocks or unfilled tracks.

Whether the "seek" would work sometimes, fail all the time, or the programmer 
noted something but did not diagnose but coded-around, and probably other 
scenarios, will be down to the actual data and program.

For ordinary sequential reads, nothing with DISP=MOD is problematic (assuming 
that RECFM/LRECL are consistent with how later used).

The three cases can be sketched as arithmetic, assuming an idealised geometry 
(a fixed number of records per block and blocks per track - real CKD track 
capacity is messier); `mod_case` is an invented helper for illustration, not 
anything from the access method:

```c
#include <assert.h>

/* Classify the state of the previously written data:
 * 1 = short (or coincidentally full) last block, unfilled track;
 * 2 = short last block, track coincidentally filled;
 * 3 = full last block, filled track (the double coincidence). */
static int mod_case(long nrecs, long recs_per_block, long blocks_per_track)
{
    long nblocks    = (nrecs + recs_per_block - 1) / recs_per_block;
    int  full_block = (nrecs   % recs_per_block)   == 0;
    int  full_track = (nblocks % blocks_per_track) == 0;

    if (!full_track) return 1;   /* unfilled track, block short or full */
    if (!full_block) return 2;   /* track "filled", but last block short */
    return 3;                    /* everything coincidentally exact */
}

int main(void)
{
    /* 5 records per block, 4 blocks per track (illustrative numbers) */
    assert(mod_case(7,  5, 4) == 1);  /* short block, track not full   */
    assert(mod_case(18, 5, 4) == 2);  /* 4th block short, track "full" */
    assert(mod_case(20, 5, 4) == 3);  /* double coincidence            */
    return 0;
}
```

Only case 3 leaves a data set that a position-calculating "seek" can trust.
<imports>
</imports>
<test>
assert(mod_case(7, 5, 4) == 1);
assert(mod_case(18, 5, 4) == 2);
assert(mod_case(20, 5, 4) == 3);
</test>



On Fri, 6 Jan 2017 14:55:22 +, J R <jayare...@hotmail.com> wrote:

>> On Jan 6, 2017, at 09:37, Bill Woodger <bill.wood...@gmail.com> wrote:
>> 
>> "with the exception of the last block or track." 
>
>Are we not talking about the "last (used) track" in the case of MOD?  
>



Re: IEC141I 013-A8: how to read VS data sets?

2017-01-06 Thread Bill Woodger
All the documentation I read suggested that a last, incomplete track, when 
present, is not written to with DISP=MOD. My experiments bore this out.

It is possible that I misread everything and borked all the experiments. I can 
perhaps re-check at some point if this (MOD backfills empty space on a 
partially used track) is the predominant feeling.

Here is the JCL Reference:

"S
(1) For fixed-length records, indicates that the records are written as 
standard blocks, that is, no truncated blocks or unfilled tracks within the 
data set, with the exception of the last block or track."

If tracks were backfilled, that "or unfilled tracks" would be redundant.

No, I've nothing to back that up.

Whilst it may seem profligate with space, the determining factor is probably 
"how fast can this data be slapped onto DASD", given that in 99.% (or more) of 
slaps, the particular slap is not the first slap to a DISP=MOD data set with 
prior data.

On Fri, 6 Jan 2017 14:12:16 +, J R  wrote:

>MOD, I believe, does start by filling the last track;  but it would be sheer 
>luck if the erstwhile last block were a standard block.  Consequently, the 
>odds are stacked against the dataset remaining properly standard.  
>
>In any event, DISP=MOD with RECFM=FBS leaves the dataset labelled as RECFM=FB 
>to obviate behavior unexpected by subsequent users.  
>
>> On Jan 6, 2017, at 08:13, Tom Marchant 
>> <000a2a8c2020-dmarc-requ...@listserv.ua.edu> wrote:
>> 
>> Are you sure? I'm not sure, but I thought that MOD would start by 
>> filling up the last track.
>



Re: IEC141I 013-A8: how to read VS data sets?

2017-01-06 Thread Bill Woodger
On Fri, 6 Jan 2017 07:12:52 -0600, Tom Marchant  
wrote:

[...]
>
>>As far as I know, it is simply that guarantee that is the difference, 
>>so it can be acted upon. S meaning "this data set has not been 
>>MODded
>
>Are you sure? I'm not sure, but I thought that MOD would start by 
>filling up the last track.
>

Unless it has changed with V2, I am sure, through reading the documentation and 
subsequent experimentation.

For instance, one experiment was OPEN, WRITE one record, CLOSE for DISP=NEW 
with TRK,1.

Run again with DISP=MOD. BANG!



Re: IEC141I 013-A8: how to read VS data sets?

2017-01-05 Thread Bill Woodger
The S in FS is the same "standard" as the S in FBS.

With an F which has been MODded there will, mostly, be an unfilled track (the 
last record from the previous output, then an empty remainder of the track, 
unless that record happens to be the last one which would fit on a track).

FS guarantees (by the person who coded it) that there is no partial track 
within the file/data set, so that a record can be read directly through the 
calculation of its position. As far as I know, it is simply that guarantee that 
is the difference, so it can be acted upon. S meaning "this data set has not 
been MODded, and if it has, it is my fault that something goes to pot at times".

As far as I know, the S in FS and FBS are not mandatory when writing, and 
everything works when it is specified for the data set used as input.

Although the "S" is a "performance thing", it is only so for "seek-type" usage 
(I'm sure there is a "calculate my physical record's location" Assembler 
macro). It otherwise makes no difference.

I think SAS has some stipulations about FBS for its libraries, or some of them.



Re: IEC141I 013-A8: how to read VS data sets?

2017-01-05 Thread Bill Woodger
Yet in modern times the S for F has its uses. If a C/C++ program is going to 
use a "seek" on a file, and the file is F/FB, then the file will be read from 
the start to satisfy the seek (because there may be embedded short blocks); 
but if the file is FS/FBS (a guarantee, by the person who put the S in the 
RECFM, that there are no embedded short blocks) then the seek is able to 
calculate the position of the block containing the sought record, and then 
only has to read within the block.

I'm sure all C/C++ programmers who want to use seek on z/OS know that, since 
it is documented. Yeah. Right. (At risk of starting a war) people who want to 
code seek to save a bit of thinking are exactly the ones who don't read the 
manuals.

What this means is: "if you are using seek in a C/C++ program to access 
fixed-length records, ensure RECFM=FS/FBS. If you haven't done that, do it, 
and compare the resource usage".



Re: IEC141I 013-A8: how to read VS data sets?

2017-01-05 Thread Bill Woodger
Paul,

For QSAM, there's F/FS/FB/FBS, U, V/VB, VS/VBS that you may see used in a 
business system (and business systems, in the main, are the reasons for having 
a Mainframe).

All have their specific "it's better in this case to do this". Of these, VS/VBS 
is the slowest way to read or write records, and the least likely that you will 
see, and the least "known" to programmers, in business systems.

"Slowest" means, for large numbers of records (the usual stuff of Mainframe 
business systems) "more expensive" and "running longer". Neither of these are 
good.

I've seen you suggest that things would be better with VS/VBS only: can you 
outline how, please?

QSAM has limits. You seem to be unable to accept that. It has the limits it 
has, not the limits that you think it should have. It ain't gonna change.

If by your "more and more mysterious" your point is "why didn't it abend, 
rather than apparently work, albeit other than as I wanted", then raise it 
with IBM. I'm not sure it will get a high priority, because mostly people will 
be expected to respect the limits.

Since you are very much at home with HFS, why don't you use that, and just 
pretend that it is giving you VS/VBS? Won't that make you happy, and leave 
everyone else choosing the most effective RECFM for the specific task (or not, 
as it has been known to happen)?



Re: Addressing Question

2017-01-04 Thread Bill Woodger
ALL31(ON) is only relevant for dynamic CALLs, and it is as Frank has described 
- no switching, and if you CALL an AMODE(24), dynamically, you'll likely break.

Your resultant loadmodule is less than 16MB, and, when loaded, fits within the 
available memory below the line. How close you are to exhausting that memory 
with a minor increase in storage is unknown from what you have said.

You are "getting away with it". 

The map of the loadmodule, produced by the program binder, will confirm this. 

I think if you go over 16MB the binder will complain, but it has no way of 
knowing how much of the memory below 16MB will be available when your program 
is loaded, so you can have a loadmodule which binds successfully but won't fit 
in the memory available - if you have a change which adds sufficient storage.

You should be able to test this fairly easily. Find the size of the loadmodule, 
add storage so that it is close to, but not, 16MB, and try to load it.

If no other handy methods, a quick "binary chop" will get a reasonable 
"maximum" size you can load, and you can compare your actual loadmodule size to 
that, to know how much leeway you have.

If that is not enough leeway, you will have to make the assembler happy with 
31-bit addressing, which will allow you to bind greater than 16MB with 
everything static, or go with dynamic CALLs to the assembler and have LE do 
its work, using ALL31(OFF) so you get the switching.



Re: Addressing Question

2017-01-04 Thread Bill Woodger
No. Your CALLs are static.



Re: Interested in portable mainframes?

2017-01-04 Thread Bill Woodger
"Eating your own dogfood" is apparently consuming (only) your own, poor-quality 
(in relation to other things available) software.

"Drinking your own Champagne" is a proud counter, that you use your own 
products because they are the best.

IBM used the phrase, apparently coined in 2007, in the page accessed by the 
link provided.

I just can't think "Drinking your own..." without ending that with something 
that gives the fancy phrase a somewhat unsavoury... taste. 

It may just be me.


