Re: IBM brings down large axe on staff in the US

2016-03-02 Thread Martin Packer
On YOUR shores. :-) And another on MY shores. :-)

Cheers, Martin

Martin Packer,
zChampion, Principal Systems Investigator,
Worldwide Cloud & Systems Performance, IBM

+44-7802-245-584

email: martin_pac...@uk.ibm.com

Twitter / Facebook IDs: MartinPacker
Blog: 
https://www.ibm.com/developerworks/mydeveloperworks/blogs/MartinPacker



From:   "Jack J. Woehr" 
To: IBM-MAIN@LISTSERV.UA.EDU
Date:   03/03/2016 06:10
Subject:Re: IBM brings down large axe on staff in the US
Sent by:IBM Mainframe Discussion List 



Elardus Engelbrecht wrote:
> But then big blue is still recruiting "... currently has more than 
25,000 open positions."
The impression I got when contracting at IBM for 4 years was that IBM is 
sort of like an independent city-state, a
transnational entity composed of people from all over the world. Anyone 
who thinks IBM is sending "our jobs" overseas
doesn't realize that one leg of the world's greatest business computing 
company just happens to stand, like the Colossus
of Rhodes, on our shores as IBM straddles the world.

-- 
Jack J. Woehr # Science is more than a body of knowledge. It's a way 
of
www.well.com/~jax # thinking, a way of skeptically interrogating the 
universe
www.softwoehr.com # with a fine understanding of human fallibility. - Carl 
Sagan


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: IBM brings down large axe on staff in the US

2016-03-02 Thread Jack J. Woehr

Elardus Engelbrecht wrote:

But then big blue is still recruiting "... currently has more than 25,000 open 
positions."

The impression I got when contracting at IBM for 4 years was that IBM is sort 
of like an independent city-state, a
transnational entity composed of people from all over the world. Anyone who thinks IBM is 
sending "our jobs" overseas
doesn't realize that one leg of the world's greatest business computing company 
just happens to stand, like the Colossus
of Rhodes, on our shores as IBM straddles the world.

--
Jack J. Woehr # Science is more than a body of knowledge. It's a way of
www.well.com/~jax # thinking, a way of skeptically interrogating the universe
www.softwoehr.com # with a fine understanding of human fallibility. - Carl Sagan

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: IBM brings down large axe on staff in the US

2016-03-02 Thread Elardus Engelbrecht
Ed Gould wrote:

>http://www.channelregister.co.uk/2016/03/02/ibm_layoffs/
>IBM axed a wedge of workers today across the US as part of an "aggressive" 
>shakeup of its business.

Same story every few years. In fact, the term 'Dead wood' comes from IBM, 
according to an ex-IBMer.

Big blue is hiring and firing in a never-ending cycle due to technology 
changes (cloud), strategy, competition, cost cutting, etc.

Also, big blue is outsourcing and moving jobs offshore... hmmm, reminds me of 
current threads: "work moved to India".

The same hiring/firing is also happening in banks (branches closing in favour 
of mobile) and mining conglomerates (not much left in the earth to mine)...

But then big blue is still recruiting "... currently has more than 25,000 open 
positions."

If you fit, you can join ...

Groete / Greetings
Elardus Engelbrecht

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: COBOL Install

2016-03-02 Thread Timothy Sipples
Clark Morris wrote:
>Check your licensing requirements.  Normally there is only a short
>period when you can run 2 versions without having to pay for both.

It's normally 12 months for IBM Monthly License Charge products. However,
if you have IBM Country Multiplex Pricing there is no time limit, although
there are still support expiration dates. (Yes, CMP is available even if
you have one machine and even if you have machines in more than one
country. In the latter case you might have two or more CMP contract
agreements, but that's perfectly OK.)

Moreover, even if you have a 12 month Single Version Charge limit, if you
are taking advantage of sub-capacity licensing you should only be paying
based on your peak utilization of each product individually. Simply check
to make sure that's actually happening, particularly if you have Enterprise
COBOL Version 4.x or prior. It's quite common during a version-to-version
migration that runs longer than expected to need only "a little bit" of the
previous version.

Please make sure you understand and respect the requirements concerning
discontinuation of a product or version license, especially the requirement
to purge all copies.

Finally, "ask your friendly IBM representative" if you have questions or
concerns about any SVC limits. Particularly if IBM is materially holding
back your version-to-version migration through no fault or lack of effort
on your part, your friendly IBM representative might be able to help.


Timothy Sipples
IT Architect Executive, Industry Solutions, IBM z Systems, AP/GCG/MEA
E-Mail: sipp...@sg.ibm.com

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: CBU test

2016-03-02 Thread Timothy Sipples
Radoslaw Skorupka:
>Yes, there are other methods to skin the cat, but what's wrong with
>knowledge whether method #101 is possible?

I believe I offered an answer to your question. If there's no other, more
important test that you could be running instead, then have fun.


Timothy Sipples
IT Architect Executive, Industry Solutions, IBM z Systems, AP/GCG/MEA
E-Mail: sipp...@sg.ibm.com

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


IBM brings down large axe on staff in the US

2016-03-02 Thread Ed Gould

http://www.channelregister.co.uk/2016/03/02/ibm_layoffs/

IBM axed a wedge of workers today across the US as part of an  
"aggressive" shakeup of its business.





--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Check for dynamic HCD activation

2016-03-02 Thread Anthony Thompson
I'm  supposing that when you say HCD you mean IODF.

Further, when you mention REXX code, I'm guessing you are referring to Mark 
Zelden's IPLINFO REXX. 

It interrogates a couple of control blocks (IOVT and CDA) that I can't see 
documented, either in the Data Areas manuals or in MACLIB/MODGEN.

I'd try to verify your assertion that these control blocks aren't updated by an 
ACTIVATE, but my company is in the throes of a mainframe upgrade and the 
sandbox sysplex is currently unavailable.

If your REXX runs under a userid with the authority to issue system commands, 
I'm wondering if you can trap the output from a 'D IOS,CONFIG' command and 
analyse the results from that.
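Something along those lines might work - here is a minimal, untested TSO/E REXX 
sketch using the CONSOLE host command environment and GETMSG; the CART value and 
wait time are arbitrary, and it assumes the userid has CONSOLE (and OPERCMDS) 
authority:

/* REXX - capture the D IOS,CONFIG response with GETMSG                */
cart = 'IOSCFG01'                            /* command/response token  */
address TSO "CONSPROF SOLDISPLAY(NO) SOLNUM(400)"
address TSO "CONSOLE ACTIVATE"
address CONSOLE "CART('"cart"')"
address CONSOLE "D IOS,CONFIG"
if getmsg('resp.','SOL',cart,,30) = 0 then   /* wait up to 30 seconds   */
  do i = 1 to resp.0
    say resp.i             /* response lines include the active IODF and */
  end                      /* a token with a date/time stamp             */
else
  say 'No response captured - check CONSOLE authority and CONSPROF settings'
address TSO "CONSOLE DEACTIVATE"
exit 0

Comparing the date/time in that response with the time of the last IPL should 
tell you whether a dynamic ACTIVATE has happened since.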

Mr. Zelden may have more to say..


Ant.

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of White, Andy
Sent: Thursday, 3 March 2016 10:38 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Check for dynamic HCD activation

Does anyone have REXX code to check whether an HCD/IODF change has been activated 
dynamically between IPLs? My issue is that I can get the IPL Load Parms when the 
system is IPLed, using REXX code I have gotten from websites. What I can't detect 
is when the HCD changes between IPLs, because it seems the CVT area I'm checking 
doesn't get updated.

Thanks


Andy




--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: bootstrapping the GZIP for z/OS install

2016-03-02 Thread David Crayford

On 3/03/2016 11:13 AM, David Crayford wrote:
> On 3/03/2016 11:04 AM, Paul Gilmartin wrote:
>> On Thu, 3 Mar 2016 10:23:12 +0800, David Crayford wrote:
>>> I've got no idea why Rocket would choose to use tarballs. It would have
>>> been a much better idea to use compressed pax archives like the original
>>> IBM ported tools.
>>
>> Yes.  But on (some) GNU Linux:
>>
>> man pax
>>  ...
>>   -z  Use the gzip(1) utility to compress (decompress) the archive while
>>       writing (reading).  Incompatible with -a.
>>
>> Ouch!  Gnu's Not Unix.
>
> I think we can take it as a given that z/OS pax uses proprietary
> compression. Although I wonder if it will use gzip compression if the
> customer has zEDC?

Nope! It says in the book that it uses Lempel-Ziv compression, so I was wrong.








--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: bootstrapping the GZIP for z/OS install

2016-03-02 Thread David Crayford

On 3/03/2016 11:04 AM, Paul Gilmartin wrote:
> On Thu, 3 Mar 2016 10:23:12 +0800, David Crayford wrote:
>> I've got no idea why Rocket would choose to use tarballs. It would have
>> been a much better idea to use compressed pax archives like the original
>> IBM ported tools.
>
> Yes.  But on (some) GNU Linux:
>
> man pax
>  ...
>   -z  Use the gzip(1) utility to compress (decompress) the archive while
>       writing (reading).  Incompatible with -a.
>
> Ouch!  Gnu's Not Unix.



I think we can take it as a given that z/OS pax uses proprietary 
compression. Although I wonder if it will use gzip compression if the 
customer has zEDC?







--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: bootstrapping the GZIP for z/OS install

2016-03-02 Thread Paul Gilmartin
On Thu, 3 Mar 2016 10:23:12 +0800, David Crayford  wrote:
>
>I've got no idea why Rocket would choose to use tarballs. It would have
>been a much better idea to use compressed pax archives like the original
>IBM ported tools.
> 
Yes.  But on (some) GNU Linux:

man pax
...
 -z  Use the gzip(1) utility to compress (decompress) the archive while 
writing
 (reading).  Incompatible with -a.

Ouch!  Gnu's Not Unix.

-- gil

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: bootstrapping the GZIP for z/OS install

2016-03-02 Thread David Crayford

On 3/03/2016 1:24 AM, Paul Gilmartin wrote:

I'm calling Rocket remiss in providing a gzip that can't be bootstrapped
using only base z/OS facilities.  Hardly forgiven in that many desktop
systems (which Rocket may have used for packaging) provide uncompress
but not compress because of (expired) patent restrictions.  Even worse,
some provide gzip but misname it "compress".


I've got no idea why Rocket would choose to use tarballs. It would have 
been a much better idea to use compressed pax archives like the original 
IBM ported tools.


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Check for dynamic HCD activation

2016-03-02 Thread White, Andy
Does anyone have REXX code to check whether an HCD/IODF change has been activated 
dynamically between IPLs? My issue is that I can get the IPL Load Parms when the 
system is IPLed, using REXX code I have gotten from websites. What I can't detect 
is when the HCD changes between IPLs, because it seems the CVT area I'm checking 
doesn't get updated.

Thanks


Andy




--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: HSM RECYCLE INFLUENCE

2016-03-02 Thread Lizette Koehler
When we went to pure virtual tape, this is what we got back from IBM DFHSM L2:


PARTIALTAPE(REUSE) vs PARTIALTAPE(MARKFULL)
When using a virtual tape system, IBM usually recommends using 
PARTIALTAPE(MARKFULL).

RECYCLE
You (or automation) need to issue the RECYCLE command, it is not automatic.  
Also, if you want HSM to recycle multiple tape volumes onto one, you will need 
to use a GENERIC recycle command, for example:
Use 'ALL' to recycle both ML2 and backup tape volumes:

HSEND RECYCLE ALL PERCENTVALID(20) EXEC

Use 'ML2' to recycle only ML2 tape volumes:

HSEND RECYCLE ML2 PERCENTVALID(20) EXEC

Use 'BACKUP' to recycle only backup tape volumes:

HSEND RECYCLE BACKUP PERCENTVALID(20) EXEC

Suggested numbers for ML2RECYCLEPERCENT are 20% or 30%. This allows more 
volumes to become eligible to be recycled onto fewer volumes at the same time.  
For example:
VOL1 50%
VOL2 20%
VOL3 15%
VOL4  1%
With ML2RECYCLEPERCENT(1), only VOL4 will be recycled.  The next time you run 
RECYCLE again, the same tape would be recycled again and again.  With 
ML2RECYCLEPERCENT(25), VOL2, VOL3 and VOL4 would be recycled.  The resulting 
percent used for the output tape will become 36%, so, that tape will not be 
recycled again the next time around.

MAXRECYCLETASKS(nn).  Each recycle task requires an output tape.  If you allow 
5 recycle tasks to run concurrently and have 5 tapes to recycle, you will land 
up with 5 recycled tapes (no reduction).  If you increase the time between 
recycles, this should also allow more tapes to become eligible to be recycled 
onto fewer tapes.

If you have migrated to a new tape technology, the REUSE CAPACITY (this is an 
average) will be incorrect leading to unneeded recycles.  With time however, 
the REUSE CAPACITY will better reflect the reality of your new environment.

Hope this helps.

Lizette


> -Original Message-
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On
> Behalf Of Anthony Fletcher
> Sent: Wednesday, March 02, 2016 4:20 PM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: HSM RECYCLE INFLUENCE
> 
> We are using a VTS with the virtual tapes defined as 3490s, and we are using
> the recommended 'Mark TAPE Full' option. That's all well and good.
> It does, however, mean that there are a lot of 'tapes' and many of those may
> have a mixture of valid and no-longer-valid data sets - a call for Recycle.
> The VTS is probably under-configured, but that's life, and it's a financial
> institution so they keep everything for 7 years, and are paranoid about
> keeping things so there is really too much data floating around that has to
> continue to float around.
> 
> HSM can decide that it should do recycles, and if it does decide, then it will
> according to its built-in algorithm.
> 
> However, we find that the algorithm makes decisions based on the WHOLE set of
> volumes that it knows about. That is OK, except that if there are a lot of
> huge data sets that have been migrated, and they used full tapes (and probably
> more than one tape). That skews the data to a higher than the overall usage
> figure, and it doesn't conclude that recycles are needed. We can of course
> initiate manual recycles but working out which ones to choose is a pain. So we
> are looking at writing something to automate that. We are looking at reading
> the OCDS and using that to select which volumes (eg excluding the big ones)
> and then issue the RECYCLE commands.
> 
> It seems that HSM ought to be able to automatically do the selection of which
> volumes to do in a way  that works for us.
> 
> So the question is whether we can influence the recycle trigger point
> automatically to select what to recycle? e.g. tell it to ignore the full
> volumes and only assess based on the rest of the volumes or something like
> that.
> 
> We used to have real tapes and it didn't matter if the algorithm resulted in
> few recycles, but with the VTS we need to release the space in the VTS to stay
> alive.
> 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: HSM RECYCLE INFLUENCE

2016-03-02 Thread Graham Harris
With our VTS, we found ourselves in the position where the default
automatic DFHSM recycle was unable to keep up with scratch cartridge
release that was needed to maintain the required scratch count, which was
putting the whole VTS environment at risk (90+% of which was consumed by
DFHSM migration).  And VTS reclaim was also struggling to keep up (the MUCH
more dangerous aspect),  DFHSM recycle & VTS reclaim were pretty much
running 24x7 trying to perpetually de-honeycomb the whole environment, and
fighting a losing battle.

Hard to tell from your description whether you are asking this, because you
are finding yourself in that position too.

If you are, then it is a potentially solvable problem, but it ain't
straightforward, and doability and effectiveness may be heavily dependent
on a very good understanding of your physical VTS arrangement (e.g. grid
setup, virtual drive arrangements, etc.), and the ability to 'adjust
things' where necessary.

Reading the OCDS and determining the 'best' DFHSM recycle candidates is a
good first step of that journey (to ignore any chained volumes, thus
concentrating recycle focus on the lowest utilised singletons to free up),
driving as many manual recycles as required.
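As a starting point, something like the following (untested) TSO/E REXX sketch 
could do that selection from HSEND LIST TTOC output rather than reading the OCDS 
directly - the dataset name, the SELECT operands and the word positions parsed 
out of each line are assumptions that would need checking against the LIST 
layout on your system:

/* REXX - pick low percent-valid ML2 tape volumes as recycle candidates */
ods    = "'HLQ.HSM.TTOC.LIST'"               /* placeholder output dataset */
thresh = 20                                  /* recycle below 20% valid    */
address TSO "HSEND WAIT LIST TTOC SELECT(ML2 NOTFULL) ODS("ods")"
address TSO "ALLOC F(TTOC) DA("ods") SHR REUSE"
"EXECIO * DISKR TTOC (STEM line. FINIS"
address TSO "FREE F(TTOC)"
do i = 1 to line.0
  parse var line.i vol . pct .               /* ASSUMED column/word layout */
  if length(vol) = 6 & datatype(pct,'W') then
    if pct < thresh then
      say "HSEND RECYCLE VOLUME("vol") EXECUTE"   /* review, then issue */
end
exit 0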

Depending on how dire a situation you are in, that may be enough by itself,
but obviously manual recycles are single streamed, which is the slight
downside, although you can mitigate that to a certain extent by spreading
across multiple LPARs.



On 2 March 2016 at 23:20, Anthony Fletcher  wrote:

> We are using a VTS with the virtual tapes defined as 3490s, and we are
> using the recommended 'Mark TAPE Full' option. That's all well and good.
> It does, however, mean that there are a lot of 'tapes' and many of those
> may have a mixture of valid and no-longer-valid data sets - a call for
> Recycle. The VTS is probably under-configured, but that's life, and it's a
> financial institution so they keep everything for 7 years, and are paranoid
> about keeping things so there is really too much data floating around that
> has to continue to float around.
>
> HSM can decide that it should do recycles, and if it does decide, then it
> will according to its built-in algorithm.
>
> However, we find that the algorithm makes decisions based on the WHOLE set
> of volumes that it knows about. That is OK, except that if there are a lot
> of huge data sets that have been migrated, and they used full tapes (and
> probably more than one tape). That skews the data to a higher than the
> overall usage figure, and it doesn't conclude that recycles are needed. We
> can of course initiate manual recycles but working out which ones to choose
> is a pain. So we are looking at writing something to automate that. We are
> looking at reading the OCDS and using that to select which volumes (eg
> excluding the big ones) and then issue the RECYCLE commands.
>
> It seems that HSM ought to be able to automatically do the selection of
> which volumes to do in a way  that works for us.
>
> So the question is whether we can influence the recycle trigger point
> automatically to select what to recycle? e.g. tell it to ignore the full
> volumes and only assess based on the rest of the volumes or something like
> that.
>
> We used to have real tapes and it didn't matter if the algorithm resulted
> in few recycles, but with the VTS we need to release the space in the VTS
> to stay alive.
>

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: HSM RECYCLE INFLUENCE

2016-03-02 Thread Pinnacle

On 3/2/2016 6:19 PM, Anthony Fletcher wrote:

We are using a VTS with the virtual tapes defined as 3490s, and we are using 
the recommended 'Mark TAPE Full' option. That's all well and good.
It does, however, mean that there are a lot of 'tapes' and many of those may 
have a mixture of valid and no-longer-valid data sets - a call for Recycle. The 
VTS is probably under-configured, but that's life, and it's a financial 
institution so they keep everything for 7 years, and are paranoid about keeping 
things so there is really too much data floating around that has to continue to 
float around.

HSM can decide that it should do recycles, and if it does decide, then it will 
according to its built-in algorithm.

However, we find that the algorithm makes decisions based on the WHOLE set of 
volumes that it knows about. That is OK, except that if there are a lot of huge 
data sets that have been migrated, and they used full tapes (and probably more 
than one tape). That skews the data to a higher than the overall usage figure, 
and it doesn't conclude that recycles are needed. We can of course initiate 
manual recycles but working out which ones to choose is a pain. So we are 
looking at writing something to automate that. We are looking at reading the 
OCDS and using that to select which volumes (eg excluding the big ones) and 
then issue the RECYCLE commands.

It seems that HSM ought to be able to automatically do the selection of which 
volumes to do in a way  that works for us.

So the question is whether we can influence the recycle trigger point 
automatically to select what to recycle? e.g. tell it to ignore the full 
volumes and only assess based on the rest of the volumes or something like that.

We used to have real tapes and it didn't matter if the algorithm resulted in 
few recycles, but with the VTS we need to release the space in the VTS to stay 
alive.




Anthony,

The trigger is PERCENTVALID.  Try a higher percentage to recycle.  With 
a VTS, maybe go 50%.  CHECKFIRST can be used if you have a lot of 
connected sets.  You can also use SELECT to limit yourself to a range of 
tapes.  You should have enough flexibility to do what you want.


Regards,
Tom Conley

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


HSM RECYCLE INFLUENCE

2016-03-02 Thread Anthony Fletcher
We are using a VTS with the virtual tapes defined as 3490s, and we are using 
the recommended 'Mark TAPE Full' option. That's all well and good.
It does, however, mean that there are a lot of 'tapes', and many of those may 
have a mixture of valid and no-longer-valid data sets - a call for Recycle. The 
VTS is probably under-configured, but that's life; it's a financial 
institution, so they keep everything for 7 years and are paranoid about keeping 
things, so there is really too much data floating around that has to continue 
to float around.

HSM can decide that it should do recycles, and if it does decide, then it will 
do so according to its built-in algorithm.

However, we find that the algorithm makes decisions based on the WHOLE set of 
volumes that it knows about. That is OK, except when there are a lot of huge 
data sets that have been migrated and they used full tapes (probably more than 
one tape each). That skews the figure higher than the overall usage, and HSM 
doesn't conclude that recycles are needed. We can of course initiate manual 
recycles, but working out which ones to choose is a pain, so we are looking at 
writing something to automate that: reading the OCDS and using it to select 
which volumes (e.g. excluding the big ones), then issuing the RECYCLE commands.

It seems that HSM ought to be able to do the selection of which volumes to 
recycle automatically, in a way that works for us.

So the question is whether we can influence the recycle trigger point 
automatically to select what to recycle - e.g. tell it to ignore the full 
volumes and only assess based on the rest of the volumes, or something like that.

We used to have real tapes and it didn't matter if the algorithm resulted in 
few recycles, but with the VTS we need to release the space in the VTS to stay 
alive.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


multiple certificates and certificate expiration

2016-03-02 Thread Brad Wissink
We are running AT-TLS and have a keyring with a certificate that is about to 
expire. We have obtained a new certificate and added it to the keyring, but not 
as the default. The question I have is: if we leave the old certificate in the 
keyring as the default, will AT-TLS start using the new certificate when the old 
one expires, even though the new one is not marked as the default?

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: error adding volumes to DFSMS storage group

2016-03-02 Thread Neil Duffee
Caveat:  I'm a daily digester so list responses are always delayed... (plus 
someone's probably beat me to it. *grin*)

A few more items to scratch off the possibilities list:

1)  Verify you are *not* using 'ACTIVE' as the CDS name in the ISMF panels.  
You *cannot* alter the 'ACTIVE' CDS.  You must use the 'real' name, i.e. from the 
ListCat, then activate that CDS to make it 'ACTIVE'.

2)  use the Volume panel (=ISMF.2.1) [1] to verify the error message.  You can 
define volSer's to SMS that do not physically exist (yet). [2]  Using '*' for 
SG name will search them all.  Re-do the search with both 'ACTIVE' and 'real' 
SCDS name.  To use the Volume list in Storage Groups (=ISMF.6), you must search 
*each* SG by name. (or list them all with sub-option 1 and ListVol against each 
of them; arduous)

3)  I also couldn't find DGTSG079 in the Knowledge Centre. [3]  Did you re-type 
the error?  Most (*all*?) IBM messages have a trailing 'level' indicator such 
as 'I' for informational.

4)  Did you "Check the Volume Failures Panel to review the volsers you 
specified." as recommended & what was the result?

[1]  DGTDVVA1 VOLUME SELECTION ENTRY PANEL
Specify Source of the New List  . . 2 (SMS)

  Type of Volume List . . . 3 
  Volume Serial Number  . . BOB*
  Acquire Physical Data . . Y
  Acquire Space Data  . . . Y
  Storage Group Name  . . . * 
  CDS Name . . . . . . . 'ACTIVE'

[2]  *all* my pools have 50+ spare volSer's defined so I don't have to activate 
a new SMS configuration just to ICKDSF a new pack to the pool.  You can also 
remove a pack without removing the volSer from the SG.  Over the course of 
months, I'm always adding/removing volumes using ICKDSF in response to 
application use.

[3]  
http://www-01.ibm.com/support/knowledgecenter/SSLTBW_2.1.0/com.ibm.zos.v2r1/en/homepage.html

>  signature = 8 lines follows  <
Neil Duffee, Joe Sysprog, uOttawa, Ottawa, Ont, Canada
telephone:1 613 562 5800 x4585  fax:1 613 562 5161
mailto:NDuffee of uOttawa.ca http:/ /aix1.uOttawa.ca/ ~nduffee
“How *do* you plan for something like that?”  Guardian Bob, Reboot
“For every action, there is an equal and opposite criticism.”
“Systems Programming: Guilty, until proven innocent”  John Norgauer 2004
"Schrodinger's backup: The condition of any backup is unknown until a restore 
is attempted."  John McKown 2015

-Original Message-
From: Brent Snyder [mailto:brent.sny...@mainline.com] 
Sent: March 1, 2016 10:02
Subject: Re: IBM-MAIN Digest - 28 Feb 2016 to 29 Feb 2016 (#2016-60)

Dave, Lizette and Linda -

The SCDS is defined with REUSE option.
Also:
SHROPTNS(3,3) RECOVERY UNIQUE NOERASE NOWRITECHK NONORDERED REUSE NONSPANNED

LISTC ENT(scds) ALL shows
ALLOCATION
  SPACE-TYPE--CYLINDER HI-A-RBA66355200
  SPACE-PRI-90 HI-U-RBA31571968
  SPACE-SEC--0

Just tried to add a new volser (DASD), did not receive any IGD messages in the 
SYSLOG.

HELP-ISMF MESSAGE--HELP
COMMAND ===>

   MESSAGE NUMBER:  DGTSG079  (could not find explanation of this error message)

   SHORT MESSAGE:   NO VOLUMES DEFINED

   LONG MESSAGE:None of the specified volumes were DEFINED


   EXPLANATION:
 The requested function DEFINE was not successfully completed for any
 of the volumes specified.  In general, if DEFINE fails it means the
 volumes have already been defined in the specified CDS.
 It could also be that the SCDS is full.

   SUGGESTED ACTION:
 Check the Volume Failures Panel to review the volsers you specified.
 If you believe the operation should have been successful, retry it.
 If the problem still occurs, please contact your systems programmer
 for assistance.

I checked the VOLUME option, the volser is not defined to any storage groups.

I've been on this account since November, this is the first time I have needed 
to add volumes to any of the storage groups.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: bootstrapping the GZIP for z/OS install

2016-03-02 Thread Paul Gilmartin
On Wed, 2 Mar 2016 12:03:05 -0500, Rick Troth wrote:

>On 03/02/2016 12:50 AM, Jack J. Woehr wrote:
>>> So I go and download the GZip for z/OS package, and  it says use
>>> 'gzip' as the 1st step to install from the supplied '*.tar.gz' file.
>>
>> If you have Gnu tar on the system, tar takes a gzip switch
>>
>>tar zxvf myfile.tgz
>
>Right, but 'tar' handles GZipped archives by running 'gzip' in a
>pipeline under the covers.
>(I say ... basing that on first hand experience with Gnu tar. If IBM or
>MKS added some built-in GZip capability to their implementation of 'tar'
>that would be handy.)
> 
I'm calling Rocket remiss in providing a gzip that can't be bootstrapped
using only base z/OS facilities.  Hardly forgiven in that many desktop
systems (which Rocket may have used for packaging) provide uncompress
but not compress because of (expired) patent restrictions.  Even worse,
some provide gzip but misname it "compress".

Hmmm...  I believe gzip uses the Deflate algorithm and jar probably
embeds Deflate.  But I suppose the wrappers are incompatible.  In
extremis, Rocket could have zipped (or jarred) a .tar file and specified
jar followed by tar to extract.

Hmmm...  Is jar base z/OS, or separately ordered, or separately priced?

-- gil

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: bootstrapping the GZIP for z/OS install

2016-03-02 Thread Rick Troth

On 03/02/2016 12:50 AM, Jack J. Woehr wrote:
So I go and download the GZip for z/OS package, and  it says use 
'gzip' as the 1st step to install from the supplied '*.tar.gz' file.


If you have Gnu tar on the system, tar takes a gzip switch

   tar zxvf myfile.tgz 


Right, but 'tar' handles GZipped archives by running 'gzip' in a 
pipeline under the covers.
(I say ... basing that on first hand experience with Gnu tar. If IBM or 
MKS added some built-in GZip capability to their implementation of 'tar' 
that would be handy.)


-- R; <><




--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Problem applying UA71619 anyone ?

2016-03-02 Thread John Eells

ESHEL Jonathan wrote:

We are trying to apply the PTF's that install the new JSON parser support under 
z/OS 2.1 (as of 2.2 it's integrated into the base system), and have a problem 
with one of the prereqs - UA71619. It's an assembler error when SMPE is 
compiling SDSF module ISFJREAD and the usage of the CALL macro seems to be 
shaky (it's actually an SDSF macro ISFXB2C using ISFCALL using CALL using 
IHBOPLTX).
Has anyone had or seen something similar ?



UA71619 has been PE for quite some time (by OA44222 in January 2014).  I 
suggest that you RECEIVE current service and HOLDDATA and use 
GROUPEXTEND to pull in the fixes.


--
John Eells
IBM Poughkeepsie
ee...@us.ibm.com

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Problem applying UA71619 anyone ?

2016-03-02 Thread Staller, Allan
Check your SMP/E DDDEFs for SYSLIB. Ensure SMPMTS is the first dataset in the 
concatenation...

HTH,


We are trying to apply the PTF's that install the new JSON parser support under 
z/OS 2.1 (as of 2.2 it's integrated into the base system), and have a problem 
with one of the prereqs - UA71619. It's an assembler error when SMPE is 
compiling SDSF module ISFJREAD and the usage of the CALL macro seems to be 
shaky (it's actually an SDSF macro ISFXB2C using ISFCALL using CALL using 
IHBOPLTX).
Has anyone had or seen something similar ?



--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Problem applying UA71619 anyone ?

2016-03-02 Thread ESHEL Jonathan
We are trying to apply the PTF's that install the new JSON parser support under 
z/OS 2.1 (as of 2.2 it's integrated into the base system), and have a problem 
with one of the prereqs - UA71619. It's an assembler error when SMPE is 
compiling SDSF module ISFJREAD and the usage of the CALL macro seems to be 
shaky (it's actually an SDSF macro ISFXB2C using ISFCALL using CALL using 
IHBOPLTX).
Has anyone had or seen something similar ?

Thanks to everyone,

Jonathan Eshel
RSD S.A.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: CBU test

2016-03-02 Thread Parwez Hamid
The CBU test starts when the first processor is activated; that starts the 
10-day clock. As long as one processor remains active, the customer can go up 
and down as many times as they want within that 10-day period. As soon as all 
processors on the record are deactivated, the test ends.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: COBOL Install

2016-03-02 Thread Lizette Koehler
I think the OP's question was about the file system name during the COBOL 
installation.  Do you need to set up a new path or use an existing path, and how 
does COBOL 5.2 know to use the COBOL 5.2 path vs. the other path?  Or is the 
file system not needed during compile/link-edit?


MOUNT FILESYSTEM('#dsn')
MOUNTPOINT('/usr/lpp/IBM/cobol/igyv5r2')
MODE(RDRW) /* can be MODE(READ) */
TYPE(ZFS)

Or maybe I misunderstood the OP's question.


Lizette


-Original Message-
>From: Bill Woodger 
>Sent: Mar 2, 2016 2:21 AM
>To: IBM-MAIN@LISTSERV.UA.EDU
>Subject: COBOL Install
>
>The JCL for the compiles is substantially different, so your customer will 
>have to select the correct option from a panel you provide (presumably).
>
>There is no direct impediment to running two different COBOL compilers 
>concurrently. You'll probably have one just sitting entirely in a STEPLIB, and 
>the other (the more common) installed for more widespread use.
>
>With the release of V6.1 I'm sure that's been considered as an alternative to 
>V5.2...
>
>
>On Wednesday, 2 March 2016 06:08:20 UTC, Mainframe Mainframe  wrote:
>> Thanks for reply. You Mean, customer will be able to use both version of
>> cobol. v4.2 and v5.2
>> 
>> On Wed, Mar 2, 2016 at 11:35 AM, David Crayford  wrote:
>> 
>> > On 2/03/2016 2:02 PM, Mainframe Mainframe wrote:
>> >
>> >> Hello Group,
>> >>  We have COBOL V4.2 in our system and recently we
>> >> installed v5.2 as well. Now my customers want to use both of these
>> >> version.
>> >> Is it possible or as we installed v5.2 on same file system, they can use
>> >> only v5.2 not the v4.2
>> >>
>> >
>> > Of course. Just setup JCL that STEPLIB the compiler libraries. LE should
>> > have no problems at runtime.
>> >
>> > Any suggestion please.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: CBU test

2016-03-02 Thread R.S.

W dniu 2016-03-02 o 06:50, Timothy Sipples pisze:

Radoslaw Skorupka wrote:

That's another option, surely it is legal and counts as one CBU test,
but it does not allow to downsize from 7nn to 6nn or 5nn or 4nn capacity
marker.

True, but you have lots of LPAR constraint settings to accomplish
fundamentally the same thing if you wish.

I like Ken Porowski's idea very much. Keep in mind that a DR test needs to
be a DR test. If you're running out of ideas on what to test or rehearse,
then you probably just haven't done enough thinking yet. :-) (Hint: It
might not be the mainframe! Or try metaphorically incapacitating half or
more of the IT staff -- as they're off trying to save their spouses and
children instead of their employer's IT -- and discover what "hilarity"
ensues.) Every test you run is some other test you don't have time or
resources to run. Imagination is wonderful and fantastic, but imagination
well directed is even better.

Finally, this particular test (testing the "tallness" of engines) you have
already undoubtedly run, more than once. You previously ran this test when
you upgraded to your current machine model. A z13 capacity model 702 is not
a z12EC capacity model 702, for example. Engine "tallness" is never exactly
the same across model generations. A z12EC capacity model 702 is quite like
a hypothetical sub-capacity engine-equipped z13 -- somewhere between a 602
and a 702 z13, closer to the latter.


The only thing I want to accomplish now is to know whether multiple 
"Perform Model Conversion" operations during a CBU test are allowed or not.
Yes, there are other methods to skin the cat, but what's wrong with 
knowing whether method #101 is possible?
That clearly reminds me of Stanisław Lem's sepulki 
(https://en.wikipedia.org/wiki/Sepulka).


Regards
--
Radoslaw Skorupka
Lodz, Poland








--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: bootstrapping the GZIP for z/OS install

2016-03-02 Thread Vince Coen
Just for completeness, you can call the program directly, if you have not
yet set up the paths, by running:

> /bin/gzip -V [ etc ]

That said, the bin directory should already be in the search path, so
check it via:
echo $PATH

If not, add it to your profile (or to the profile of all users if wanted).
 
Vince


On 02/03/16 04:44, Bruce Hewson wrote:
> Hello David,
>
> thank you - that worked.
>
>
> 1st:-
> ITSXSA3:/u/bruce: >cd bin
> ITSXSA3:/u/bruce/bin: >gzip --version
> gzip 1.2.4 (18 Aug 93)
> Compilation options:
> DIRENT UTIME HAVE_UNISTD_H
>
>
> 2nd:
> ITSXSA3:/u/bruce/local/gzip/gzip-1.6-edc/bin: >gzip --version
> gzip 1.6
> Copyright (C) 2007, 2010, 2011 Free Software Foundation, Inc.
> Copyright (C) 1993 Jean-loup Gailly.
> This is free software.  You may redistribute copies of it under the terms of
> the GNU General Public License .
> There is NO WARRANTY, to the extent permitted by law.
>
> Written by Jean-loup Gailly.
>
>
>
>
> Regards
> Bruce Hewson
>
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
>



--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


COBOL Install

2016-03-02 Thread Bill Woodger
The JCL for the compiles is substantially different, so your customer will 
have to select the correct option from a panel you provide (presumably).

There is no direct impediment to running two different COBOL compilers 
concurrently. You'll probably have one just sitting entirely in a STEPLIB, and 
the other (the more common) installed for more widespread use.

With the release of V6.1 I'm sure that's been considered as an alternative to 
V5.2...


On Wednesday, 2 March 2016 06:08:20 UTC, Mainframe Mainframe  wrote:
> Thanks for reply. You Mean, customer will be able to use both version of
> cobol. v4.2 and v5.2
> 
> On Wed, Mar 2, 2016 at 11:35 AM, David Crayford  wrote:
> 
> > On 2/03/2016 2:02 PM, Mainframe Mainframe wrote:
> >
> >> Hello Group,
> >>  We have COBOL V4.2 in our system and recently we
> >> installed v5.2 as well. Now my customers want to use both of these
> >> version.
> >> Is it possible or as we installed v5.2 on same file system, they can use
> >> only v5.2 not the v4.2
> >>
> >
> > Of course. Just setup JCL that STEPLIB the compiler libraries. LE should
> > have no problems at runtime.
> >
> > Any suggestion please.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


rexx and tso alllocate

2016-03-02 Thread Bill Woodger
Joel Ewing has made a valid point about programs potentially having LRECL 
expectations. COBOL is good for that.

Tim Brown is silent on what he actually wants to do this for; until then it's 
difficult to suggest something concrete. "Ditch the blocksize" has been said, and 
"making the LRECL smaller has no obvious benefit" has been said. Just to add that 
the LRECL can always be "overridden" on the DD for a subsequent reference.
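
For the archives, a minimal (untested) TSO/E REXX sketch of the 
stem-then-allocate idea Kjell describes below; the dataset name, space figures 
and sample records are placeholders, and it assumes RECFM=VB output:

/* REXX - build the records first, then allocate with a fitted LRECL */
rpt.1 = 'SHORT LINE'
rpt.2 = 'A CONSIDERABLY LONGER LINE OF REPORT OUTPUT'
rpt.0 = 2

maxl = 0
do i = 1 to rpt.0
  maxl = max(maxl, length(rpt.i))
end
lrecl = maxl + 4                    /* allow for the 4-byte RDW (RECFM=VB) */

address TSO "ALLOCATE FILE(RPTDD) DATASET('HLQ.REXX.REPORT') NEW CATALOG" ,
            "RECFM(V B) LRECL("lrecl") SPACE(5,5) TRACKS REUSE"
                                    /* no BLKSIZE - let the system pick it */
"EXECIO" rpt.0 "DISKW RPTDD (STEM rpt. FINIS"
address TSO "FREE FILE(RPTDD)"
exit 0

The LRECL only has to be at least the longest record plus the RDW; anything 
bigger works too, which is Ted's point about simply using a generous LRECL.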


On Wednesday, 2 March 2016 08:24:22 UTC, Ted MacNEIL  wrote:
> Why not just create a VBA file with a very long LRECL and not worry about it 
> at all? Longer LRECLs don't introduce any more overhead than short ones.
> 
> -teD
>   Original Message  
> From: Kjell Holmborg
> Sent: Wednesday, March 2, 2016 02:54
> To: IBM-MAIN@LISTSERV.UA.EDU
> Reply To: IBM Mainframe Discussion List
> Subject: Re: rexx and tso alllocate
> 
> One suggestion might be that your rexx program writes records to a stem 
> variable and you could keep track of the longest record and then just before 
> writing the contents of the stem variables to the dataset you do a TSO 
> Allocate with the longest record as a variable to the ALLOCATE command.
> 
> /Kjell
> 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: rexx and tso alllocate

2016-03-02 Thread Ted MacNEIL
Why not just create a VBA file with a very long LRECL and not worry about it at 
all? Longer LRECLs don't introduce any more overhead than short ones.

-teD
  Original Message  
From: Kjell Holmborg
Sent: Wednesday, March 2, 2016 02:54
To: IBM-MAIN@LISTSERV.UA.EDU
Reply To: IBM Mainframe Discussion List
Subject: Re: rexx and tso alllocate

One suggestion might be that your rexx program writes records to a stem 
variable and you could keep track of the longest record and then just before 
writing the contents of the stem variables to the dataset you do a TSO Allocate 
with the longest record as a variable to the ALLOCATE command.

/Kjell

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN