Re: AW: Re: SDSF ISF121I message

2015-03-31 Thread Elardus Engelbrecht
Peter Hunkeler wrote:

And there is the (maybe) unexpected and (IMHO) unlucky relationship that 
REGION=0K or 0M implies MEMLIMIT=NOLIMIT.

True, as far as the word 'implies' is concerned [1]:

If no MEMLIMIT parameter is specified, the default is the value defined to 
SMF, except when REGION=0K/0M is specified, in which case the default is 
NOLIMIT.

But there is another catch:

Unlike the REGION parameter, MEMLIMIT=0M (or the equivalent in G, T, or P) means 
that the step cannot use virtual storage above the bar.

and 

A specification of REGION=0K/0M will result in a MEMLIMIT value being set to 
NOLIMIT, when a MEMLIMIT value has not been specified on either the JOB or EXEC 
statements, and IEFUSI has not been used to set the MEMLIMIT.

MEMLIMIT=NOLIMIT (specified or not) is really *no* limit, but MEMLIMIT=0 with 
any scale (M, G, T, or P) is simply zero, nil, nada: no above-the-bar memory at all.
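To keep the cases straight, here is a hypothetical sketch of the defaulting rules quoted above. This is plain shell, not real JCL processing, and it deliberately ignores IEFUSI overrides and JOB-vs-EXEC precedence:

```shell
#!/bin/sh
# Hypothetical sketch of the MEMLIMIT defaulting rules described above.
# Simplification: ignores IEFUSI and JOB-vs-EXEC statement precedence.
memlimit() {
  jcl_memlimit=$1   # MEMLIMIT coded on JOB/EXEC, or empty if not coded
  region=$2         # REGION value from the JCL
  smf_default=$3    # system default from SMFPRMxx
  if [ -n "$jcl_memlimit" ]; then
    echo "$jcl_memlimit"            # an explicit value always wins
  elif [ "$region" = "0K" ] || [ "$region" = "0M" ]; then
    echo "NOLIMIT"                  # REGION=0 implies no limit above the bar
  else
    echo "$smf_default"             # otherwise fall back to the SMF default
  fi
}

memlimit ""   0M 2G    # -> NOLIMIT
memlimit ""   4M 2G    # -> 2G
memlimit "0M" 0M 2G    # -> 0M (zero above-the-bar storage, despite REGION=0M)
```

Note the last case: an explicit MEMLIMIT=0M wins over the REGION=0M implication, which is exactly the mixup described above.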

I don't have limitless memory to handle this mixup... ;-D

You can use SMF30SFL, SMF30RGN, SMF30MEM, SMF30MES and friends to conduct 
JOB/TSO/STC autopsies.

Groete / Greetings
Elardus Engelbrecht

[1] - For me the word 'implies' means in this context: 'fall back to default, 
because nothing was specified at all.'

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Redbook on DFHSM and PDSEs Control datasets

2015-03-31 Thread Lizette Koehler
Ed

Thanks for pointing this out.

Even though it is specifically for PDSEs V2 under DFHSM, it has some very
nice detail on how to set up ACS, SG, SC, DC - how to TEST/VALIDATE - and a
few other basic HSM functions.  So it could be used as a good training doc on
basic ACS code and HSM functions.

Lizette


 -Original Message-
 From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU]
 On Behalf Of Ed Gould
 Sent: Monday, March 30, 2015 1:16 PM
 To: IBM-MAIN@LISTSERV.UA.EDU
 Subject: Redbook on DFHSM and PDSEs Control datasets
 
 http://www.redbooks.ibm.com/redpieces/pdfs/redp5160.pdf
 
 



Re: OT: Digital? Cloud? Modern And Cost-Effective? Surprise! It's The Mainfra...

2015-03-31 Thread Paul Gilmartin
On Mon, 30 Mar 2015 21:06:52 -0400, Shmuel Metz (Seymour J.)  wrote:

>>This is one main reason why I prefer the UNIX fork() philosophy
>>rather than threading.

>That philosophy led to the insanity that a command can't pass back an
>environment variable to its caller.

Much alleviated by the mechanism of command substitution.

And hauntingly similar to the JCL insanity that one job step can't
pass back a JCL variable to be used in the PARM to a subsequent
step.  And nothing similar to command substitution available as
an alternative.
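For readers who don't live in the shell, command substitution is the mechanism meant here: the child's stdout is captured as a value by the caller, which is the closest the fork() world gets to "passing back" a variable (the variable name below is illustrative):

```shell
#!/bin/sh
# A child process cannot set its parent's environment variables directly,
# but the parent can capture the child's stdout and export the result itself.
CHILD_RESULT=$(echo "computed-by-child")   # child runs; parent captures stdout
export CHILD_RESULT
echo "$CHILD_RESULT"                       # prints: computed-by-child
```

JCL offers nothing equivalent: a step's output cannot become a symbol in a later step's PARM.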

-- gil



Re: Redbook on DFHSM and PDSEs Control datasets

2015-03-31 Thread Scott Ford
Liz,

Do you think an old Dino like me could use it to set up DFHSM for the first
time? I need to set it up in our development environment.

Regards,
Scott





Re: Redbook on DFHSM and PDSEs Control datasets

2015-03-31 Thread Richards, Robert B.
I feel an old Alka-Seltzer commercial coming on: "Try it, you'll like it!"  :-)

https://www.youtube.com/watch?v=9qdfMYFl0Ic

Bob



Re: SDSF ISF121I message

2015-03-31 Thread Tom Marchant
On Mon, 30 Mar 2015 14:37:57 -0400, Mark Jacobs wrote:

>1MB for MEMLIMIT is very small.

That's an understatement. 1MB is the minimum increment for an allocation 
above the bar. (Yes, I know that there are now ways for a program to 
obtain less than that. But first an allocation in 1M increments must be done.)

-- 
Tom Marchant



Re: OT: Digital? Cloud? Modern And Cost-Effective? Surprise! It's The Mainfra...

2015-03-31 Thread Shmuel Metz (Seymour J.)
In
CAAJSdjgcPHDb60=apm36kvymoddqmd2fiefavq6my5zuqxw...@mail.gmail.com,
on 03/30/2015
   at 08:34 AM, John McKown john.archie.mck...@gmail.com said:

>This is one main reason why I prefer the UNIX fork() philosophy
>rather than threading.

That philosophy led to the insanity that a command can't pass back an
environment variable to its caller.

>But I still like the isolation of protect keys.

What isolation? With everybody and his brother running key 8, the
storage key mechanism is worthless for shared memory.

Ignorance of Multics considered harmful
 
-- 
 Shmuel (Seymour J.) Metz, SysProg and JOAT
 ISO position; see http://patriot.net/~shmuel/resume/brief.html 
We don't care. We don't have to care, we're Congress.
(S877: The Shut up and Eat Your spam act of 2003)



Re: SDSF ISF121I message

2015-03-31 Thread Tom Marchant
On Tue, 31 Mar 2015 16:45:55 +, Jousma, David wrote:

>I think the other thing that should be said, is that it is a LIMIT, 
>not an allocation.

Same as REGION.

>And then it's worth mentioning that an application that runs 
>authorized can bypass the limit

Likewise with REGION, though it is more difficult.

-- 
Tom Marchant



Re: migrating compiler versions

2015-03-31 Thread Ed Gould

As always, it depends.

We had decent COBOL standards in our shop and no programmer funny
business (no pushing the bounds). We dropped in every version of
COBOL and had very few issues. We also had no issues with LE per se;
it wasn't quite "no oops", but pretty much.
One shop I was at hit practically every LE issue in the world (that
is where I learned to hate LE, which I won't comment on too much here).
One issue we had with COBOL was that it insisted on putting out
"self-explanatory" messages. We had programmers calling our phones
solid for several months. Another COBOL error was a really
mysterious one (it's been 15 years, so pardon me if I forget what the
error was). I had to call IBM myself as it didn't ring any bells.
IBM's response was something like "oh yeah, how about that" (or words
to that effect). I suggested to the programmer that he might drop some
of the FDs that weren't used. He did so and the error went away.
Overall, if you have decent programmers who don't push the bounds of
reality, I would say you will have more issues with LE than with
COBOL. If you don't, God help you.


Ed
On Mar 31, 2015, at 5:51 PM, Frank Swarbrick wrote:

Does anyone have a good list of best practices when migrating to a  
new version of a compiler?  While our shop is 35+ years old, we  
have rarely actually migrated compilers (or even compiler  
versions), so we don't have a lot of experience in this area.


When migrating from DOS/VS COBOL to VS COBOL II there were almost  
always source code changes, so all programs were tested before they  
were migrated, and no program was simply automatically migrated,  
as it were.  I'm sure we had COBOL II for at least ten years before  
all of our DOS/VS COBOL programs were eliminated.  (From what I  
hear that's actually better than many shops, which STILL have
pre-COBOL II programs out there!)


Going from VS COBOL II to IBM COBOL for VSE/ESA was a more normal  
migration.  If I remember correctly (and I should because I was  
pretty much in charge of the migration) we at first only migrated  
to the new compiler if other changes were being made to a program  
and a special flag was set to indicate that this program should  
be compiled with the new compiler instead of the older one.  Only  
after many, many programs were migrated in this fashion (and it  
probably took us several years to get to this point, though I  
honestly do not recall) did we finally eliminate the COBOL II  
compiler altogether.  But I believe at no point did we do any kind  
of mass recompile.  We simply used COBOL for VSE/ESA for all  
programs going forward (old-COBOL had all been converted to COBOL  
II or COBOL for VSE/ESA already).
Then we migrated to z/OS and IBM Enterprise COBOL 4.2.  Of course  
in this situation EVERYTHING was recompiled and regression tested.
So now we're pondering how to get to Enterprise COBOL 5.2.  The
easiest thing, of course, would be to simply change our compile  
procs to use COBOL 5.2.  But being as how COBOL 5.2 has been out  
for only a month that's probably not a good idea.  Probably the  
best thing to do would be to have developers choose to use COBOL  
5.2 by this 'setting a flag' for an individual program indicating  
that their test compiles should use 5.2, and in turn so should  
their production compiles.  Should we also, at least at first, have  
them do a regression test of the current 4.2 results comparing  
against the results of the program compiled with 5.2?  And for how  
long should we keep that up?  At some point (6 months? a year?)  
we'd want to stop using 4.2 altogether.  But we also (I don't  
think!) don't want to regression test every single program.


All advice appreciated.  Thanks!
Frank Swarbrick
Application Development Architect
FirstBank -- Lakewood, CO USA







Re: Environment scope (was: OT: Digital? Cloud? ... )

2015-03-31 Thread Paul Gilmartin
On Tue, 31 Mar 2015 19:37:23 -0400, Shmuel Metz (Seymour J.) wrote:

>>In CMS the environment is absolutely global.

>Well, within that virtual machine[1], assuming you're referring to
>GLOBALV.

>[1] The lifetime depends on the option used.

Not only GLOBALV, but LINKs, ACCESSes, options set by CP SET and CMS SET
commands, spooling options (although CMS Pipelines somewhat relaxes
those constraints), ... others?

-- gil



Re: Environment scope (was: OT: Digital? Cloud? ... )

2015-03-31 Thread Shmuel Metz (Seymour J.)
In 6894540274965773.wa.paulgboulderaim@listserv.ua.edu, on
03/31/2015
   at 09:26 AM, Paul Gilmartin
000433f07816-dmarc-requ...@listserv.ua.edu said:

>In CMS the environment is absolutely global.

Well, within that virtual machine[1], assuming you're referring to
GLOBALV.

[1] The lifetime depends on the option used.
 
-- 
 Shmuel (Seymour J.) Metz, SysProg and JOAT
 ISO position; see http://patriot.net/~shmuel/resume/brief.html 
We don't care. We don't have to care, we're Congress.
(S877: The Shut up and Eat Your spam act of 2003)



Re: migrating compiler versions

2015-03-31 Thread Lizette Koehler
Big thing when going to COBOL V5: your production jobs will need a PDSE in 
their STEPLIBs to receive the newly compiled/linked COBOL V5 programs.  I think 
you will also need other datasets to be PDSEs during the compile process.  Just 
review the SHARE presentations on COBOL V5 migration.

Lizette





Re: Redbook on DFHSM and PDSEs Control datasets

2015-03-31 Thread nitz-...@gmx.net
 Do you think an old Dino like me could use it to setup DFHSM for the first
 time. I need to on our development environment.
 
 SYS1.SAMPLIB(ARCSTRST) is the HSM starter set and will get you going.

Having gone through this exercise from scratch on an ADCD system about 2 years 
ago, I can say that setting up HSM is not all that hard. Admittedly, I only set 
it up for migration (not for backup). I also used the starter set and then 
went over all the parms there are to see if they applied to me. The problem 
with the starter set is that it references defined storage classes, storage 
groups and management classes, and it alludes to ACS routines, but it does not 
show the ACS routines, so *that* took me a while without a good example to 
start from.

Barbara



Re: IDEAL SIZE OF VTOC & VVDS FOR PAGE VOLUME

2015-03-31 Thread Thomas Conley

On 3/31/2015 9:48 AM, John Dawes wrote:

G'Day,

I have to initialise a 3390-9 volume as a PAGE volume.  What would be the ideal 
size of the VTOC & VVDS?  I was thinking 29 tracks for the VTOC and 15 for the 
VVDS.  Any suggestions?

Thanks.




I usually go 7 track VTOC, 2 track index, 5 track VVDS.  Everything else 
is page dataset.


Regards,
Tom Conley



Re: OT: Digital? Cloud? Modern And Cost-Effective? Surprise! It's The Mainfra...

2015-03-31 Thread John McKown
On Mon, Mar 30, 2015 at 8:06 PM, Shmuel Metz (Seymour J.)
shmuel+ibm-m...@patriot.net wrote:
 In
 CAAJSdjgcPHDb60=apm36kvymoddqmd2fiefavq6my5zuqxw...@mail.gmail.com,
 on 03/30/2015
at 08:34 AM, John McKown john.archie.mck...@gmail.com said:

>This is one main reason why I prefer the UNIX fork() philosophy
>rather than threading.

 That philosophy led to the insanity that a command can't pass back an
 environment variable to its caller.

OTOH, it stops the child from corrupting the parent's environment
variable space. And I do have a technique which can do _something_
about that. A short example would be to run the following in a UNIX
shell.

$(echo export BUBBA=bubba) # run the stdout of the enclosed command as
commands in the parent shell.

Yes, this is simplistic, but it results in the environment variable
BUBBA being set to bubba in the parent's shell. Of course, the
application needs to be written with this approach in mind. It can't
just use the setenv() or putenv() function.


>But I still like the isolation of protect keys.

 What isolation? With everybody and his brother running key 8, the
 storage key mechanism is worthless for shared memory.

The fact that programmers are too lazy to use protect keys does not
make them worthless. If _I_ were writing APF code which required me
to store data in _common_ memory (ECSA for example), then I would most
definitely _not_ use key 8. Given that I'm a paranoid person, I would
likely use fetch protected key 10 storage. Of course, I imagine that
we'd both agree that using a data space and AR mode would be superior.
The problem with that _might_ be if the data truly needed to be,
potentially, addressable in _every_ address space. That could be quite
tricky to do. Or at least a bit complicated compared to ECSA
storage.


 Ignorance of Multics considered harmful

 --
  Shmuel (Seymour J.) Metz, SysProg and JOAT

-- 
If you sent twitter messages while exploring, are you on a textpedition?

He's about as useful as a wax frying pan.

10 to the 12th power microphones = 1 Megaphone

Maranatha! 
John McKown



Re: IDEAL SIZE OF VTOC & VVDS FOR PAGE VOLUME

2015-03-31 Thread Staller, Allan


From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On 
Behalf Of Mark Jacobs - Listserv
Sent: Tuesday, March 31, 2015 8:53 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: IDEAL SIZE OF VTOC & VVDS FOR PAGE VOLUME

With one dataset (I'm assuming) on the volume, a one track VTOC is sufficient, 
and you really don't need an index on it either.




I don't really think you want only 1 page dataset on a 3390-9. However, I agree 
that a small vtoc and no vtoc index is appropriate.
15 tracks for the VVDS will be overkill.

Once IPL'ed, on a dedicated paging volume, the system will (most likely) 
*NEVER* reference the vtoc again, so the vtoc index is not needed.

Since there will be (at most) 1-3 datasets on the volume,  a 1 or 2 track vtoc 
would be sufficient. (PAV will handle IO contention).

Since you can't allocate page datasets on other than a cylinder boundary 
(without committing unnatural acts), tracks 1-14 will be unusable for anything 
else.

I would go with VTOC (000,1,9), no vtoc index, and a VVDS of 5 tracks. This 
will fill up the 1st cylinder of the volume.

Of course, if you put things other than the  1-3 page datasets on the volume, 
all of the above discussion is moot.

BTW, in the last year or so there were several PTFs regarding creating page 
datasets w/more than 2048k(?) slots. Sorry, I don't have the APAR numbers 
available.

HTH,



Re: Redbook on DFHSM and PDSEs Control datasets

2015-03-31 Thread Thomas Conley

On 3/31/2015 8:00 AM, Scott Ford wrote:

Liz,

Do you think an old Dino like me could use it to set up DFHSM for the first
time? I need to set it up in our development environment.

Regards,
Scott


SYS1.SAMPLIB(ARCSTRST) is the HSM starter set and will get you going.

Regards,
Tom Conley



Re: SDSF ISF121I message

2015-03-31 Thread J O Skip Robinson
This thread motivated me to check my SMFPRMxx. On most systems we do not 
specify MEMLIMIT. Turns out that the default is 2G. (Sorry if someone else 
pointed that out.) Should cover most cases. ;-)

.
.
.
J.O.Skip Robinson
Southern California Edison Company
Electric Dragon Team Paddler 
SHARE MVS Program Co-Manager
626-302-7535 Office
323-715-0595 Mobile
jo.skip.robin...@sce.com

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Neal Eckhardt
Sent: Tuesday, March 31, 2015 6:36 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: SDSF ISF121I message

Just to close this out for anybody searching for this message, increasing 
MEMLIMIT in SMFPRMxx solved the problem. The fact that you can change it with a 
SET command is a plus.

Thanks for pointing me in the right direction.



Re: IBM-MAIN Digest - 29 Mar 2015 to 30 Mar 2015 (#2015-89)

2015-03-31 Thread Tim Full
LOGOFF IBM-MAIN



Environment scope (was: OT: Digital? Cloud? ... )

2015-03-31 Thread Paul Gilmartin
On Tue, 31 Mar 2015 09:11:32 -0500, John McKown wrote:

$(echo export BUBBA=bubba) # run the stdout of the enclosed command as
commands in the parent shell.
 
In most cases, I'd be more comfortable with:

export BUBBA=$(echo bubba)  # limit the scope of the damage.

Note the serialization concerns.  In neither case is the set value available
to concurrent processes.

In CMS the environment is absolutely global.  In consequence, CMS
Pipelines provides considerable concurrency for its built-in stages,
but CMS command stages are relentlessly single-threaded.

-- gil



Re: Environment scope (was: OT: Digital? Cloud? ... )

2015-03-31 Thread John McKown
On Tue, Mar 31, 2015 at 9:26 AM, Paul Gilmartin
000433f07816-dmarc-requ...@listserv.ua.edu wrote:
 On Tue, 31 Mar 2015 09:11:32 -0500, John McKown wrote:

$(echo export BUBBA=bubba) # run the stdout of the enclosed command as
commands in the parent shell.

 In most cases, I'd be more comfortable with:

 export BUBBA=$(echo bubba)  # limit the scope of the damage.

That is safer. But it is restricted to only setting one environment
variable. My example was the smallest, simplest that I could come up
with quickly. If I wanted to set multiple environment variables, I'd
protect myself a bit with something like:

$(somecommand ... | egrep '^export +(VAR1|VAR2|VAR3)=')

This would ensure that regardless of what somecommand wrote to
stdout, I'd just do export commands for the environment variables that
I was interested in. It is a _little bit_ safer. After setting them, I
might want to validate them as well, somehow.


 Note the serialization concerns.  In neither case is the set value available
 to concurrent processes.

snip
 -- gil



-- 
If you sent twitter messages while exploring, are you on a textpedition?

He's about as useful as a wax frying pan.

10 to the 12th power microphones = 1 Megaphone

Maranatha! 
John McKown



Re: SDSF ISF121I message

2015-03-31 Thread Jousma, David
I think the other thing that should be said is that it is a LIMIT, not an 
allocation.  And then it's worth mentioning that an application that runs 
authorized can bypass the limit (like DB2).

_
Dave Jousma
Assistant Vice President, Mainframe Engineering
david.jou...@53.com
1830 East Paris, Grand Rapids, MI  49546 MD RSCB2H
p 616.653.8429
f 616.653.2717







Re: IDEAL SIZE OF VTOC & VVDS FOR PAGE VOLUME

2015-03-31 Thread Mark Jacobs - Listserv
With one dataset (I'm assuming) on the volume, a one track VTOC is 
sufficient, and you really don't need an index on it either.


Mark Jacobs






Re: IDEAL SIZE OF VTOC & VVDS FOR PAGE VOLUME

2015-03-31 Thread John Dawes
Great.  Thanks for the help.



Re: Redbook on DFHSM and PDSEs Control datasets

2015-03-31 Thread Lizette Koehler
Depends on what you are setting up.

There are several Share presentations on ACS Code and HSM, Best Practices, and 
a few Redbooks on DFHSM and DFSMS.

They go hand in hand.

I find the DFHSM Primer Redbook handy.

So the process is one you will coordinate:
the DFSMS MGT class tells how long to keep datasets and what to do with them;
DFHSM sees the MGT class and actions the requirements.

Please post questions, or write me offline, if you have questions.

Lizette


 -Original Message-
 From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU]
 On Behalf Of Scott Ford
 Sent: Tuesday, March 31, 2015 5:00 AM
 To: IBM-MAIN@LISTSERV.UA.EDU
 Subject: Re: Redbook on DFHSM and PDSEs Control datasets
 
 Liz,
 
 Do you think an old Dino like me could use it to setup DFHSM for the first
 time. I need to on our development environment.
 
 Regards,
 Scott
 
 On Tuesday, March 31, 2015, Lizette Koehler stars...@mindspring.com
 wrote:
 
  Ed
 
  Thanks for pointing this out.
 
  Even though it is specifically for PDSE V2 under DFHSM, it has some very
  nice detail on how to setup ACS, SG, SC, DC, - how to TEST/VALIDATE
  and a few other basic HSM functions.  So could be used as a good
  training doc on basic ACS Code and HSM functions.
 
  Lizette
 
 
   -Original Message-
   From: IBM Mainframe Discussion List [mailto:IBM-
 m...@listserv.ua.edu
  javascript:;]
   On Behalf Of Ed Gould
   Sent: Monday, March 30, 2015 1:16 PM
   To: IBM-MAIN@LISTSERV.UA.EDU javascript:;
   Subject: Redbook on DFHSM and PDSEs Control datasets
  
   http://www.redbooks.ibm.com/redpieces/pdfs/redp5160.pdf
  
  



Re: Environment scope (was: OT: Digital? Cloud? ... )

2015-03-31 Thread John McKown
On Tue, Mar 31, 2015 at 12:22 PM, Paul Gilmartin
000433f07816-dmarc-requ...@listserv.ua.edu wrote:
 On Tue, 31 Mar 2015 12:08:26 -0500, John McKown wrote:

... If I wanted to set multiple environment variables, I'd
protect myself a bit with something like:

$(somecommand ... | egrep '^export +(VAR1|VAR2|VAR3)=')

This would ensure that regardless of what somecommand wrote to
stdout, I'd just do export commands for the environment variables that
I was interested in. It is a _little bit_ safer. After setting them, I
might want to validate them as well, somehow.

 And if somecommand is echo 'export VAR1=bubba; sudo rm -rf /'?

 http://xkcd.com/327/

 Beware the semicolon!

Good point, that's the UNIX shell equivalent of the SQL injection attack.
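A minimal sketch of hardening the filter (the variable names and the value whitelist are illustrative, not from the thread): accept only lines that are exactly an `export NAME=value` for a whitelisted NAME, so an injected semicolon disqualifies the whole line.

```shell
# "output" simulates untrusted stdout: one legitimate assignment
# plus a line carrying an injected command.
unset VAR1 VAR2

output='export VAR2=ok
export VAR1=bubba; rm -rf /tmp/should_not_run'

# Only a strict assignment shape survives: whitelisted name, value
# limited to safe characters, nothing else on the line.
safe=$(printf '%s\n' "$output" | grep -E '^export (VAR1|VAR2|VAR3)=[A-Za-z0-9_./-]+$')

eval "$safe"
echo "VAR1=${VAR1-unset} VAR2=${VAR2-unset}"   # prints: VAR1=unset VAR2=ok
```

The tainted line never reaches eval, so neither the assignment nor the trailing command runs.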


 -- gil

-- 
If you sent twitter messages while exploring, are you on a textpedition?

He's about as useful as a wax frying pan.

10 to the 12th power microphones = 1 Megaphone

Maranatha! 
John McKown



Re: Environment scope (was: OT: Digital? Cloud? ... )

2015-03-31 Thread Paul Gilmartin
On Tue, 31 Mar 2015 12:08:26 -0500, John McKown wrote:

... If I wanted to set multiple environment variables, I'd
protect myself a bit with something like:

$(somecommand ... | egrep '^export +(VAR1|VAR2|VAR3)=')

This would ensure that regardless of what somecommand wrote to
stdout, I'd just do export commands for the environment variables that
I was interested in. It is a _little bit_ safer. After setting them, I
might want to validate them as well, somehow.

And if somecommand is echo 'export VAR1=bubba; sudo rm -rf /'?

http://xkcd.com/327/

Beware the semicolon!

-- gil



IDEAL SIZE OF VTOC & VVDS FOR PAGE VOLUME

2015-03-31 Thread John Dawes
G'Day,

I have to initialise a 3390-9 volume as a PAGE volume.  What would be the ideal 
size of the VTOC & VVDS?  I was thinking 29 tracks for the VTOC and 15 for the 
VVDS.  Any suggestions? 

Thanks.



Re: OT: Digital? Cloud? Modern And Cost-Effective? Surprise! It's The Mainfra...

2015-03-31 Thread Rob Schramm
XES & XCF for advances

PDSE - mixed bag .. Mostly advances.. And some .. Well ... just plain weird.

zFS over HFS - advance

Open Edition / UNIX System Services / USS / z/UNIX - mixed bag - B2 to C2
bad, TCPIP good, ported tools/open source good, Java JRIO - blunder, Java
JZOS - advance.

Rob Schramm
On Mar 31, 2015 10:12 AM, John McKown john.archie.mck...@gmail.com
wrote:

 On Mon, Mar 30, 2015 at 8:06 PM, Shmuel Metz (Seymour J.)
 shmuel+ibm-m...@patriot.net wrote:
  In
  CAAJSdjgcPHDb60=apm36kvymoddqmd2fiefavq6my5zuqxw...@mail.gmail.com,
  on 03/30/2015
 at 08:34 AM, John McKown john.archie.mck...@gmail.com said:
 
 This is one main reason why I prefer the UNIX fork() philosophy
 rather than threading.
 
  That philosophy led to the insanity that a command can't pass back an
  environment variable to its caller.

 OTOH, it stops the child from corrupting the parent's environment
 variable space. And I do have a technique which can do _something_
 about that. A short example would be to run the following in a UNIX
 shell.

 $(echo export BUBBA=bubba) # run the stdout of the enclosed command as
 commands in the parent shell.

 Yes, this is simplistic, but it results in the environment variable
 BUBBA being set to bubba in the parent's shell. Of course, the
 application needs to be written with this approach in mind. It can't
 just use the setenv() or putenv() function.

 
 But I still like the isolation of protect keys.
 
  What isolation? With everybody and his brother running key 8, the
  storage key mechanism is worthless for shared memory.

 The fact that programmers are too lazy to use protect keys does not
 make them worthless. If _I_ were writing APF code which required me
 to store data in _common_ memory (ECSA for example), then I would most
 definitely _not_ use key 8. Given that I'm a paranoid person, I would
 likely use fetch protected key 10 storage. Of course, I imagine that
 we'd both agree that using a data space and AR mode would be superior.
 The problem with that _might_ be if the data truly needed to be,
 potentially, addressable in _every_ address space. That could be quite
 tricky to do. Or at least a bit complicated compared to ECSA
 storage.

 
  Ignorance of Multics considered harmful
 
  --
   Shmuel (Seymour J.) Metz, SysProg and JOAT

 --
 If you sent twitter messages while exploring, are you on a textpedition?

 He's about as useful as a wax frying pan.

 10 to the 12th power microphones = 1 Megaphone

 Maranatha! 
 John McKown





Re: OT: Digital? Cloud? Modern And Cost-Effective? Surprise! It's The Mainfra...

2015-03-31 Thread Tony Harminc
On 31 March 2015 at 10:11, John McKown john.archie.mck...@gmail.com wrote:

 The fact that programmers are too lazy to use protect keys does not
 make them worthless. If _I_ were writing APF code which required me
 to store data in _common_ memory (ECSA for example), then I would most
 definitely _not_ use key 8. Given that I'm a paranoid person, I would
 likely use fetch protected key 10 storage.

Then you would likely get a B78-5C abend. Key 10 is a user key. Surely
most shops run with ALLOWUSERKEYCSA(NO) these days.

 Of course, I imagine that we'd both agree that using a data space and AR mode 
 would be superior.

Perhaps... Though some have compared it to the Intel 286 (and others)
segmented architecture that's long gone in modern IA systems.

 The problem with that _might_ be if the data truly needed to be,
 potentially, addressable in _every_ address space. That could be quite
 tricky to do. Or at least a bit complicated compared to ECSA
 storage.

There's Above The Bar shared storage (not to be confused with ATB
common, which might also be applicable).

Tony H.



Re: SDSF ISF121I message

2015-03-31 Thread Neal Eckhardt
Just to close this out for anybody searching for this message, increasing 
MEMLIMIT in SMFPRMxx solved the problem. The fact that you can change it with a 
SET command is a plus.
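For later searchers, a sketch of the knobs involved (the member suffix and values are illustrative; dynamic SETSMF changes require PROMPT(LIST) or PROMPT(ALL) in SMFPRMxx):

```
/* SMFPRMxx fragment: installation default for above-the-bar storage */
MEMLIMIT(4G)        /* used when a job omits MEMLIMIT=               */
PROMPT(LIST)        /* allows SETSMF changes without an IPL          */

Operator commands:
  D SMF,O              display the SMF options currently in effect
  SETSMF MEMLIMIT(8G)  change the value dynamically
  SET SMF=xx           activate a different SMFPRMxx member
```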

Thanks for pointing me in the right direction.



Re: migrating compiler versions

2015-03-31 Thread Stevet
Use the migrate option on 4.2. It will flag those things not supported by COBOL 
5.1. There should be an APAR for 5.2's changes that you should apply to your 
4.2 compiler. 

Next, IIRC, there is a migration guide. It will help you in getting there. 

HTH

Sent from iPhone - small keyboard fat fingers - expect spellinf errots.

 On Mar 31, 2015, at 6:51 PM, Frank Swarbrick 
 002782105f5c-dmarc-requ...@listserv.ua.edu wrote:
 
 Does anyone have a good list of best practices when migrating to a new 
 version of a compiler?  While our shop is 35+ years old, we have rarely 
 actually migrated compilers (or even compiler versions), so we don't have a 
 lot of experience in this area.
 
 When migrating from DOS/VS COBOL to VS COBOL II there were almost always 
 source code changes, so all programs were tested before they were migrated, 
 and no program was simply automatically migrated, as it were.  I'm sure we 
 had COBOL II for at least ten years before all of our DOS/VS COBOL programs 
 were eliminated.  (From what I hear that's actually better than many shops, 
 which STILL have pre-COBOL II programs out there!)
 
 Going from VS COBOL II to IBM COBOL for VSE/ESA was a more normal 
 migration.  If I remember correctly (and I should because I was pretty much 
 in charge of the migration) we at first only migrated to the new compiler 
 if other changes were being made to a program and a special flag was set to 
 indicate that this program should be compiled with the new compiler instead 
 of the older one.  Only after many, many programs were migrated in this 
 fashion (and it probably took us several years to get to this point, though I 
 honestly do not recall) did we finally eliminate the COBOL II compiler 
 altogether.  But I believe at no point did we do any kind of mass 
 recompile.  We simply used COBOL for VSE/ESA for all programs going forward 
 (old-COBOL had all been converted to COBOL II or COBOL for VSE/ESA already).
 Then we migrated to z/OS and IBM Enterprise COBOL 4.2.  Of course in this 
 situation EVERYTHING was recompiled and regression tested.
  So now we're pondering how to get to Enterprise COBOL 5.2.  The easiest 
 thing, of course, would be to simply change our compile procs to use COBOL 
 5.2.  But being as how COBOL 5.2 has been out for only a month that's 
 probably not a good idea.  Probably the best thing to do would be to have 
  developers choose to use COBOL 5.2 by 'setting a flag' for an 
 individual program indicating that their test compiles should use 5.2, and in 
 turn so should their production compiles.  Should we also, at least at first, 
 have them do a regression test of the current 4.2 results comparing against 
 the results of the program compiled with 5.2?  And for how long should we 
 keep that up?  At some point (6 months? a year?) we'd want to stop using 4.2 
 altogether.  But we also (I don't think!) don't want to regression test every 
 single program.
 
 All advice appreciated.  Thanks!
 Frank Swarbrick
 Application Development Architect
 FirstBank -- Lakewood, CO USA
 
 



migrating compiler versions

2015-03-31 Thread Frank Swarbrick
Does anyone have a good list of best practices when migrating to a new version 
of a compiler?  While our shop is 35+ years old, we have rarely actually 
migrated compilers (or even compiler versions), so we don't have a lot of 
experience in this area.

When migrating from DOS/VS COBOL to VS COBOL II there were almost always source 
code changes, so all programs were tested before they were migrated, and no 
program was simply automatically migrated, as it were.  I'm sure we had COBOL 
II for at least ten years before all of our DOS/VS COBOL programs were 
eliminated.  (From what I hear that's actually better than many shops, which 
STILL have pre-COBOL II programs out there!)

Going from VS COBOL II to IBM COBOL for VSE/ESA was a more normal migration.  
If I remember correctly (and I should because I was pretty much in charge of 
the migration) we at first only migrated to the new compiler if other changes 
were being made to a program and a special flag was set to indicate that this 
program should be compiled with the new compiler instead of the older one.  
Only after many, many programs were migrated in this fashion (and it probably 
took us several years to get to this point, though I honestly do not recall) 
did we finally eliminate the COBOL II compiler altogether.  But I believe at no 
point did we do any kind of mass recompile.  We simply used COBOL for VSE/ESA 
for all programs going forward (old-COBOL had all been converted to COBOL II or 
COBOL for VSE/ESA already).
Then we migrated to z/OS and IBM Enterprise COBOL 4.2.  Of course in this 
situation EVERYTHING was recompiled and regression tested.
So now we're pondering how to get to Enterprise COBOL 5.2.  The easiest thing, 
of course, would be to simply change our compile procs to use COBOL 5.2.  But 
being as how COBOL 5.2 has been out for only a month that's probably not a good 
idea.  Probably the best thing to do would be to have developers choose to 
use COBOL 5.2 by 'setting a flag' for an individual program indicating 
that their test compiles should use 5.2, and in turn so should their production 
compiles.  Should we also, at least at first, have them do a regression test of 
the current 4.2 results comparing against the results of the program compiled 
with 5.2?  And for how long should we keep that up?  At some point (6 months? a 
year?) we'd want to stop using 4.2 altogether.  But we also (I don't think!) 
don't want to regression test every single program.
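The 'setting a flag' idea might be sketched as a symbolic on the compile proc (the dataset names and the COBVER symbolic are hypothetical; IGYCRCTL is the Enterprise COBOL compiler program name):

```
//*  COBVER acts as the per-program flag; the default stays on 4.2.
//COMPILE PROC COBVER=420
//COB      EXEC PGM=IGYCRCTL,PARM='LIST,MAP'
//STEPLIB  DD  DSN=SYS1.COB&COBVER..SIGYCOMP,DISP=SHR
//*        ... SYSLIN, SYSUTn and SYSIN DDs as in the existing proc ...
//         PEND
//*  A program whose flag says it has moved up:
//CMP52    EXEC COMPILE,COBVER=520
```

Test and production compiles then pick up the same flag, and retiring 4.2 later is just a change of the default.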

All advice appreciated.  Thanks!
Frank Swarbrick
Application Development Architect
FirstBank -- Lakewood, CO USA

