Re: How many cost a cpu second?

2012-06-12 Thread Joel C. Ewing
Or, to expand on R.S.'s remarks, if your installation's software charges 
are based on the peak 4-hour MSU* average during the month, and this 
application not only must run during the peak period but is sufficient to 
raise that peak, then your software license charges will go up for that 
month and there is a real dollar cost involved.  On the other hand, if 
your installation is on peak 4-hour MSU billing and the application is 
easily scheduled for slack periods and does not contribute to the 
monthly peak, the CPU time used is essentially free.


If you are not on peak 4-hour MSU billing and the application is 
scheduled to use otherwise idle CPU and I/O resources, then the physical 
costs are $0; but if it must be run during peak loads and contributes to 
forcing an earlier upgrade of hardware to handle that load, then 
hardware upgrade costs could be significant and should be at least 
partially attributed to that application.


Another cost not mentioned is end-user waiting time.  If cutting the 
resources required by the application significantly improves the 
turnaround time for the end user, those improvements could be justified by 
gains in end-user productivity, which could have other desirable 
side effects.

   JC Ewing
*See other discussions on the hardware-dependent relationship between CPU 
seconds and Millions of Service Units (MSU).


On 06/12/2012 04:29 PM, Charles Mills wrote:

I think you are not getting an answer -- actually as I recall you got
several answers to this effect -- because the question is effectively how
long is a piece of string?

I use several LPARs for development. For various business reasons a CPU
second costs me and my employer nothing.

OTOH if you had a contract at a service bureau where you paid $5000 a month
up to 20,000 CPU seconds and $20,000 thereafter, then you just saved $15,000
per month.

If I improve my car's mileage from 20 to 30 miles per gallon how much money
do I save? Well, it depends on how many miles I drive and how much I pay for
gas.

Charles

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@bama.ua.edu] On Behalf
Of Salva Carrasco
Sent: Tuesday, June 12, 2012 2:13 PM
To: IBM-MAIN@bama.ua.edu
Subject: Re: How many cost a cpu second?

Apart from the syntax error (How many vs. How much), my question is not
having much success.

In other terms:
If I have an STC consuming 25,000 CPU seconds per month, and I spend two days
optimizing it to reduce that to 15,000,
am I gaining money or losing time?


...

--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: SMS Dataset Alloc Issue

2012-05-26 Thread Joel C. Ewing
And since no one else seems to have mentioned it yet, this whole thread 
is an excellent argument for NOT using a SYS1. high level for naming 
installation datasets you want system-specific, like the this SCDS 
dataset -- instead using some unique high-level qualifier associated 
with the master catalog of each system, say MCTA and MCTB.  Then you can 
simply create and maintain your installation System-B datasets from 
System-A by temporarily connecting the System-B MCAT as a User Catalog 
to System-A and defining MCTB (in System-A MCAT) as an ALIAS pointing 
to the System-B MCAT.  With those conventions and working with datasets 
with an MCTB high-level from System-A, everything gets correctly pointed 
to the System-B MCAT without having to play any bizarre games, and 
utilities like ISPF 3.4 will behave more predictably as well.  When 
System-B is up and running on its Master Catalog, there will be no ALIAS 
for MCTB defined in the System-B MCAT, so a search for MCTB.anything 
from System-B would default to the System-B Master Catalog.
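
A minimal IDCAMS sketch of that connect-and-alias setup, run from System-A 
(the System-B catalog name, device type, and volume serial below are 
placeholders for illustration):

//CONNECT  EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  /* SYSB.MASTER.CATALOG and SYSB01 are placeholders for the        */
  /* System-B master catalog name and the volume it resides on      */
  IMPORT CONNECT -
    OBJECTS((SYSB.MASTER.CATALOG -
             DEVICETYPE(3390) -
             VOLUMES(SYSB01)))
  DEFINE ALIAS(NAME(MCTB) RELATE(SYSB.MASTER.CATALOG))
/*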

   J C Ewing

On 05/26/2012 12:12 PM, Lizette Koehler wrote:

Hello,
Thanks for the help.  But I still have one confusion: when I am trying to
delete the Data component of these datasets, for which the cluster part is
not visible, the system is not allowing me to delete it.  I get the error
DATASET not Found.

I think I am getting this error because that particular dataset is not part
of the current master catalog.  But if I still want to delete the data
component I created, can you please help me perform this task?

Regards
Saurabh



Some questions
1) Is the catalog where this SYS1.SMS.SCDS is cataloged a USER CAT on the
system you are trying to do the display?
2) Is the system where the SYS1.SMS.SCDS is cataloged - up or able to be
IPL'D?

If you have a system up with MCATA, and MCATB is connected as a usercat in
MCATA, then you may be able to delete the dataset with the CATALOG parm on an
IDCAMS DELETE.
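
A minimal IDCAMS sketch of that approach (it assumes the other system's
master catalog has already been connected to the running system as a usercat;
MCATB here stands for that catalog's actual name):

//DELSCDS  EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  /* MCATB is a placeholder for the other system's master catalog name */
  DELETE SYS1.SMS.SCDS CLUSTER CATALOG(MCATB)
/*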

If you can bring up the system where MCATB is the Master Cat, then you can
make the changes from that running system.

IIRC the SETSMS command can point SMS to a different SCDS or ACDS dataset.  So
if you have a running system that you can log on to, you can fix it from that
system.

Or could you create a new SYS1.SMS2.SCDS dataset cataloged in the correct MCAT,
then update the SYS1.PARMLIB member to point to the new SCDS dataset, and
IPL?

You may have several options to correct this issue.


Lizette


...


--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: IEW4005I FETCH FOR MODULE Failed

2012-05-12 Thread Joel C. Ewing

On 05/12/2012 01:27 AM, Lim Ming Liang wrote:

Pardon my ignorance, where can I get this

Mods Tape

You referring to ?

Regards Lim ML

On 12-May-12 11:57 AM, Skip Robinson wrote:

Mods Tape



...
See PDS 8.6 at http://www.cbttape.org/

These mods haven't actually been distributed on a physical mainframe tape for 
years, but the collection is still called the CBT Tape because, for years prior 
to being converted to on-line and CD-ROM formats, versions 1 through 321 of 
this collection of code were managed and distributed on mainframe tape 
by Arnold Casinghino, who worked at Connecticut Bank and Trust Company.


--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: ### of GDG Entries

2012-05-11 Thread Joel C. Ewing

But,...
While mod'ing onto an existing file can work, for a production process 
this can create a restart nightmare after a process failure if you 
don't plan for restart up front -- it is highly recommended either to 
make a backup for recovery before each append process, or to copy the 
old file to a new one and append the additional records to the new file as 
part of that copy process (see the sketch below).
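
A minimal IEBGENER sketch of that copy-and-append approach using a GDG 
(the dataset names, unit, and space values below are placeholders):

//MAKENEW  EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSIN    DD DUMMY
//*  Current accumulated file followed by the records to be appended
//SYSUT1   DD DSN=RPT.MASTER(0),DISP=SHR
//         DD DSN=RPT.TODAY.ADDS,DISP=SHR
//*  The new generation becomes the new accumulated file
//SYSUT2   DD DSN=RPT.MASTER(+1),DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(CYL,(50,10),RLSE),DCB=*.SYSUT1

If the job fails, the (0) generation is untouched and the (+1) generation is 
simply not cataloged (or can be deleted), so the restart point is obvious.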


My personal preference for a report-retention requirement like this is 
to come up with some technique that allows the use of variable dataset 
names rather than trying to work around the limitations of GDGs.  Or, if 
all the reports have the same DS attributes and are relatively small, it 
may be advantageous to use a PDS or PDS/E with each report as a separate 
member, using dynamic allocation of a variable member name (possibly 
variable DS name also) from within the report writing program.  With 
variable names, you will most likely also need your own techniques for 
purging reports that are no longer needed.


It is not necessary to roll your own SVC99 interface to do dynamic 
allocation: just use facilities accessible from REXX or other scripting 
languages, or use a free subroutine like CBT's DYNALLOC from compiled 
languages.

  Joel C Ewing

On 05/10/2012 07:41 PM, Cris Hernandez #9 wrote:

does it have to be a gdg?

what about mod'ing onto the file then emptying it out when it's processed?

back it up with a gdg with that batch job at regular intervals suggestion.





  From: Donnelly, John <john.p.donne...@ti.com>
To: IBM-MAIN@bama.ua.edu
Sent: Thursday, May 10, 2012 5:36 PM
Subject: Re: ### of GDG Entries

Thankyou all

John Donnelly
Texas Instruments SVA
2900 Semiconductor Drive
Santa Clara, CA 95051
408-721-5640
408-470-8364 Cell
john.p.donne...@ti.com


-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@bama.ua.edu] On Behalf Of 
Linda
Sent: Thursday, May 10, 2012 2:28 PM
To: IBM-MAIN@bama.ua.edu
Subject: Re: ### of GDG Entries

Hi John,

I have had a similar GDG 'thing'.  It would be helpful to know more details...

In my case, we were receiving a widely varying number of reports that we were 
to back up and print for our customer. We scheduled the print job to run every 8 
hours. The operations staff fit the actual printing in to meet customer needs.

Would that work for you?

Another option might include merging the generations to another dataset.
HTH,

Linda

Sent from my iPhone

On May 10, 2012, at 1:12 PM, Donnelly, John <john.p.donne...@ti.com> wrote:


We have a business application that creates literally 100s of GDGs a day; 
please don't ask.
Is there any way to create or pretend to create a GDG base greater than 255...

John Donnelly
Texas Instruments SVA
2900 Semiconductor Drive
Santa Clara, CA 95051
408-721-5640
408-470-8364 Cell
john.p.donne...@ti.com


...
--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: RES: XPAF replacement

2012-05-09 Thread Joel C. Ewing
Another one of those message formats that appears to Thunderbird as 
garbage.  I can read it fine on the ibm-main online archive, where it 
shows Ituriel's readable message followed by the ibm-main trailer lines, 
but it must be arriving in a format that ibm-main doesn't fully 
understand, as its re-broadcast seems to have Ituriel's message encoded 
as base64, followed by the ibm-main trailer lines, also encoded as a 
separate base64 block but with no message heading structure appropriate 
to the sending of two base64 blocks.  I gather some Email clients may 
tolerate this, but Thunderbird does not and just displays the base64 
encoded data.


I think we've been down this path before.  The original message format 
must be partly responsible, but this also looks like a bug in the 
ibm-main list server logic:  it should never think it reasonable to 
append its trailer in a way that sends out two base64 blocks back-to-back, 
rather than, say, trying to merge the data into a single 
base64 block, or resend the whole thing un-decoded with 8-bit MIME Email 
conventions.

  JC Ewing

On 05/09/2012 10:34 AM, ITURIEL DO NASCIMENTO NETO wrote:


‰íz{S­©ì}êÄ�ÊŠxjǺà*...



--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: RES: XPAF replacement

2012-05-09 Thread Joel C. Ewing

On 05/09/2012 01:40 PM, Paul Gilmartin wrote:

On Wed, 9 May 2012 13:12:52 -0500, Walt Farrell wrote:


On Wed, 9 May 2012 12:37:18 -0500, Joel C. Ewing wrote:


Another one of those message formats that appears to Thunderbird as
garbage.  I can read it fine on the ibm-main online archive, where it
shows Ituriel's readable message followed by the ibm-main trailer lines,
but it must be arriving in a format that ibm-main doesn't fully
understand, as its re-broadcast seems to have Ituriel's message encoded
as base64, followed by the ibm-main trailer lines, also encoded as a
separate base64 block but with no message heading structure appropriate
to the sending of two base64 blocks.  I gather some Email clients may
tolerate this, but Thunderbird does not and just displays the base64
encoded data.


With Firefox 12.0 I see it garbled on the LISTSERV at:

 http://bama.ua.edu/cgi-bin/wa?A2=ind1205&L=ibm-main&P=320205


In my experience, Joel, it's often a question of -your- (that is, each 
recipient's) personal settings at the list server. If you query your settings, 
look at the header options you have. If you have anything other than FULLHDR 
you may find some messages that don't have enough header info to allow your 
client (Thunderbird) to process the message properly.


-- gil



I know about the FULLHDR option and did have problems with that many 
years back.  Did a list QUERY just to be sure I didn't somehow lose that 
option, and confirmed I still have FULLHDR on.  This appears to be 
something else.  On my received Email, all the usual initial message 
headers appear to be there (which isn't the case without FULLHDR), but 
it's as if the message is being sent as a multi-part MIME message but 
with all the internal MIME headers around the parts missing.  I vaguely 
remember reading at one time that the list server didn't support MIME 
and multi-part formats.  If that is true and this is what is being sent 
to the list in the problem cases, that might be the cause, and perhaps an 
issue with the sender's Email options.


Ituriel's latest 9 May @1234 and @1248 postings do show up as garbled 
for me on the LISTSERV archive also.  I must have been looking at the 8 
May posting on the archive instead, which is OK.  It looks like the only 
posts with the problem are replies to previous postings and that his 
initial post that started the thread came through clean.  Perhaps that 
is significant.  The only other thing I see unusual is that Ituriel's 
company adds a VERY lengthy appended legal disclaimer, which contains 
several html tags in a context where html is not appropriate.


--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: Progress Toward z/OS Personal Use License

2012-04-25 Thread Joel C. Ewing

On 04/25/2012 01:53 AM, Paul Gilmartin wrote:

On Tue, 24 Apr 2012 23:36:19 -0700, Edward Jaffe wrote:


The march toward a personal use z/OS license takes another step forward ...

http://www.ibm.com/common/ssi/rep_ca/5/897/ENUS212-145/ENUS212-145.PDF

... Additionally, the [Rational Developer and Test Environment for System z]
  product can now be purchased as a stand-alone entry point into the Rational
development solutions for System z. This lowers the cost of initial purchase,
opening the environment for use by developers, testers, and operations
personnel, and provides an easier path to adoption for traditional mainframe
developers looking to modernize their development and test processes
and infrastructure.


Dongle.  Intel/Linux hosted.  No APAR support.

-- gil

...
Dongle, don't like but could live with.  Intel/Linux hosting, not 
unreasonable.  The big unanswered question not mentioned anywhere in the 
pdf document is the cost.


The big problem with something like zPDT was that it still had a minimum 
$20K - $30K per year cost.  A quick read of this latest offering 
suggests it still has an annual license charge per user, but if there 
was any clue on price range I missed it.  Perhaps there are other 
on-line resources that clarify.


Having a personal z/OS to occasionally play with would be so cool.  But, 
the intended target here still appears to be businesses, which no doubt 
means it's priced accordingly and out of range for casual personal use! 
 Speaking for those of us not in the top 1%, even $5K per year would be 
way more than I currently budget for all my home personal computing, so 
I doubt this new offering yet approaches what I could justify as an 
entertainment expense.


I baulked at Microsoft's concept that, in response to MS's agenda and 
not mine, I should be willing to shell out $100's per home platform 
every several years for the privilege of having to re-learn all the 
familiar user interfaces, force-upgrade other application software, and 
still expend significant resources on protection software -- which is 
why my primary home systems have been SE Linux (Fedora) for several 
years.  A cost of $1000's per year for cool wouldn't fly for me.


--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: Progress Toward z/OS Personal Use License

2012-04-25 Thread Joel C. Ewing

On 04/25/2012 09:38 AM, Farley, Peter x23353 wrote:

Pardon me if I misinterpreted, but your very short responses, each followed by a period, 
said each of these is an issue for me.

Perhaps I need more coffee before I write such a question... :)

Peter

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@bama.ua.edu] On Behalf Of 
Paul Gilmartin
Sent: Wednesday, April 25, 2012 10:30 AM
To: IBM-MAIN@bama.ua.edu
Subject: Re: Progress Toward z/OS Personal Use License

On Wed, 25 Apr 2012 10:19:19 -0400, Farley, Peter x23353 wrote:


I'll grant you the dongle issue (but it's probably unavoidable) and possibly 
the APAR submission issue (which can be anything from a non-issue to a business 
killer), but why is Linux/Intel hosting a problem to you rather than a solution?

Just curious.


Who used the word problem or issue?
--

...

A dongle definitely could be an issue for some.  It might be less of an 
issue on Linux, but my experiences on Windoze have been less than ideal 
and make me regard any application that requires a dongle as more of a 
gamble.  While the dongle may be regarded as nice license insurance 
from the software vendor's standpoint, it is essentially just another 
point of failure for the user and lowers the value of the product.


My wife has some very expensive embroidery software that requires a 
dongle.  The license does entitle her to run the software on multiple 
platforms, both her laptop and desktop, since the dongle prevents 
concurrent use. After a year or so the dongle's case became too loose to 
pull the dongle from the USB port by hand - the only way now is to grasp 
the dongle base with a pair of needle-nose pliers, which works, but is 
certainly not the advertised convenience. The only support provided by 
the application vendor to remedy this situation is to re-purchase the 
software at full price to get a new dongle.


Other than using standard Windows GUI interfaces, this software does 
nothing that special at the Operating System level, except for the 
dongle support, which requires a hardware driver written by yet another 
vendor.  Logic would suggest that this application should be 
able to migrate from Win XP to Win 7 without a problem, provided one can 
find support for the dongle on Win 7.  My initial attempts to migrate 
have so far failed because the dongle vendor's current drivers for Win 7 
are not compatible with the older-version dongle that came with the 
application.  I haven't given up, but unless I can locate a driver that 
is compatible with both the old dongle and Win 7, this expensive 
application is toast on Win 7.  A nice result for the application vendor 
if I'm forced to do an otherwise unnecessary upgrade at great cost, but 
from the user's standpoint this is a very poor outcome, apparently forced 
by the decision to require a dongle.


--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: ZOS 1.13 SMPTABL Mystery

2012-04-24 Thread Joel C. Ewing

On 04/23/2012 04:35 PM, Shmuel Metz (Seymour J.) wrote:

In <4f90d295.6000...@acm.org>, on 04/19/2012
at 10:05 PM, Joel C. Ewing <jcew...@acm.org> said:


If you have multiple people who might have to back each other up and
be able to take over and eventually complete a maintenance project
started by another using SMP/E ISPF dialogs, then they had better
share the same SMPTABL dataset.  Otherwise, the only way to continue
their maintenance would be to manually start a new project,
determine what SYSMODS should be selected, determine the last step
done by the prior person, and spin through the already-done dialog
steps without actually submitting jobs in order to get to the right
starting step.


The flip side of that is that if you have multiple people working
concurrently the variables tracking the work of one are highly
inappropriate for the work of another. If I have to back up somebody
who has started to install service, the CSI will tell me what I need
to know. Chances are that I will receive updated HOLDDATA before doing
anything else, so I may not care what he had previously selected.

SMP/E dialogs do not work that way.  Users do not share the same 
variables directly; they share the same list of named maintenance 
projects within each distinct target zone name, and variable values are 
associated with a specific maintenance project, not with a user 
(internally, the SMP/E dialogs generate unique project ISPF table names in 
SMPTABL as needed for storing the variables).  If you want a new 
independent project, you start a new project and give it a name to 
distinguish it and it has its own set of variables.  If you want to 
resume/continue a previously started project (yours or someone else's), 
you instead select an existing project from the list and the dialog 
automatically picks up all the saved variables appropriate to the 
options and state of that project.  When projects are completed either 
by advancing to the final step or by an explicit CANCEL, the project 
variables and corresponding ISPF table members are discarded.


It works quite well for tracking the state of multiple projects to 
multiple zones from one maintainer or multiple maintainers, and I have 
also used it to pick up and resume maintenance started by another when 
the other went on vacation and priority of the project was raised in 
their absence.  It is especially useful when there is a relatively long 
lag between APPLY and ACCEPT and you want to be sure to ACCEPT the same 
SYSMOD set as in APPLY, because a maintenance configuration has proven 
to be a stable one and you want it as a potential RESTORE point.


There is nothing to prevent SMP/E ISPF dialogs with a shared SMPTABL DS 
from correctly tracking the distinct variables of two different SysProgs 
that are concurrently working in ISPF on two independent projects.  This 
is true even if both projects are within the same TZONE/DZONE; although, 
I wouldn't recommend that last level of concurrency as a standard 
practice as it complicates the issue of when affected libraries should 
be backed up, and potentially their batch jobs would waste initiator 
time with enqueue delays waiting for exclusive access to the same CSI 
datasets.


Yes, you can deduce most of the state information from the CSI, possibly 
with help from the SMP/E log datasets; but it takes much more work and 
adds unnecessary opportunity for human error.


--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: ZOS 1.13 SMPTABL Mystery

2012-04-19 Thread Joel C. Ewing

On 04/19/2012 08:16 AM, Shmuel Metz (Seymour J.) wrote:

In <4f8eb53d.9000...@us.ibm.com>, on 04/18/2012
at 08:36 AM, Kurt Quackenbush <ku...@us.ibm.com> said:


http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/gimusr51/3.6.3?SHELF=gim2bk90&DT=20110811181158


The only explanation given for installation-wide is SMP/E uses this
table data set to save process status information for the SYSMOD
management dialogs. What problems occur[1] if you don't use a shared
SMPTABL? I've run into problems when it's shared.

[1] Assuming that the unshared SMPTABL is in the
 ISPTLIB concatenation.

If you have multiple people who might have to back each other up and be 
able to take over and eventually complete a maintenance project started 
by another using SMP/E ISPF dialogs, then they had better share the same 
SMPTABL dataset.  Otherwise, the only way to continue their maintenance 
would be to manually start a new project, determine what SYSMODS 
should be selected, determine the last step done by the prior person, 
and spin through the already-done dialog steps without actually 
submitting jobs in order to get to the right starting step.  Also, if a 
different SMPTABL is used and the original person were to later use the 
SMP/E dialogs, it would still look like he had an incomplete project, and 
he might attempt to complete it and run maintenance jobs that 
are no longer needed or appropriate.


If you have multiple people applying maintenance to the same 
global/target/dlib zones, sharing SMPTABL may be advisable so you can be 
fully aware of other activity that might affect or have some impact on 
the same zones and libraries you are changing.


If you have other people doing maintenance with the SMP/E dialogs to 
other global zones, and there is any chance there are prereq/coreq 
requirements between those zones and the zones of interest to you, a shared 
SMPTABL is occasionally useful to make it easier to check whether there is 
maintenance in progress to those zones.


We had a small enough number of System Programmers that used the SMP/E 
dialogs that we just found it simpler to share the same SMPTABL among 
all SysProgs (and there is no reason for anyone other than a SysProg to 
have SMPTABL allocated).  It wasn't that difficult to manually 
coordinate on rare occasions when library compression or expansion was 
required.  YMMV.  Occasionally one may have to clean up abandoned 
maintenance projects and obsolete SMPTABL members associated with 
abandoned target zones, but that can happen whether the SMPTABL dataset 
is shared or not.


--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: Execute certain steps based on input parm

2012-04-18 Thread Joel C. Ewing

On 04/18/2012 08:46 AM, Paul Gilmartin wrote:

On Wed, 18 Apr 2012 09:30:59 -0400, Veilleux, Jon L wrote:


According to the JCL manual that won't work:

The following keywords are the only keywords supported by IBM and recommended 
for use in relational-expressions. Any other keywords, even if accepted by the 
system, are not intended or supported keywords.  Also you need to change DNS to 
DSN
...
You will have to get more creative. Perhaps pass a parm that causes an abend in 
a first step?


Or contrive a JCL symbol that evaluates to one of the supported forms.

-- gil



The syntax as formally described in the JCL Reference (z/OS 1.12) is 
demonstrably incomplete just based on the supplied examples.  Something like

//   IF (&SYM = value) THEN
is clearly valid by the manual's description if &SYM has a numeric value 
and value is also some numeric constant, or if &SYM resolves to a 
legal keyword value for which value is compatible, as legal operands 
are described as keywords or numeric values, and &SYM in this context 
would not be a keyword but simply whatever value replaces it.


 But, in their own examples IBM uses as valid constant values things 
like Unnn, Snnn, TRUE, FALSE, none of which are described as 
keywords and which clearly are not numeric in the normal sense of the 
word!  Obviously some special alphanumeric constants are acceptable, 
which begs the question why other arbitrary alphanumeric constants that 
can't be confused with keywords should not be explicitly allowed as 
operands as well.  (Maybe they work even though undocumented, but usage 
in that case would be risky!)


If the relational expressions directly supported by JCL are found too 
restrictive, one could always write a fairly trivial utility (perhaps 
CBT site already has one) that would accept as a PARM value or an input 
record a more general relational expression (which could include 
parameter references) with syntax that fully supports character string 
comparison and produces a 0|1 (false|true) step condition code that 
could be tested by subsequent conditional JCL statements.  Such a 
utility could even be generalized to allow conditional setting of 
arbitrary step completion codes or even conditionally ABENDing with some 
user ABEND code.
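
A minimal JCL sketch of how such a utility might be invoked (EVALEXPR is a 
hypothetical program name, and the PARM expression syntax is invented purely 
for illustration):

// SET RUNMODE=FULL
//*  Hypothetical utility: sets RC=1 if the PARM expression is true, RC=0 if not
//CHKMODE  EXEC PGM=EVALEXPR,PARM='&RUNMODE EQ FULL'
//  IF (CHKMODE.RC = 1) THEN
//FULLSTEP EXEC PGM=IEFBR14
//  ENDIF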


--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: Execute certain steps based on input parm

2012-04-18 Thread Joel C. Ewing

On 04/18/2012 10:51 AM, Paul Gilmartin wrote:

On Wed, 18 Apr 2012 09:47:05 -0500, Joel C. Ewing wrote:


On 04/18/2012 08:46 AM, Paul Gilmartin wrote:

On Wed, 18 Apr 2012 09:30:59 -0400, Veilleux, Jon L wrote:


According to the JCL manual that won't work:

The following keywords are the only keywords supported by IBM and recommended 
for use in relational-expressions. Any other keywords, even if accepted by the 
system, are not intended or supported keywords.


The syntax as formally described in the JCL Reference (z/OS 1.12) is
demonstrably incomplete just based on the supplied examples.  Something like
//   IF (&SYM = value) THEN
is clearly valid by the manual's description if &SYM has a numeric value
and value is also some numeric constant ...


No.  In:

 
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/iea2b6b0/17.1.4.2

 17.1.4.2 Comparison Operators
 Use comparison operators in a relational-expression to compare a keyword 
with a numeric value.
 ...

There is no mention of using a comparison operator to compare two numeric 
values.
The keywords are listed in:

 
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/iea2b6b0/17.1.4.5

Note that TRUE and FALSE are not themselves keywords nor constant
values; they appear only (and superfluously) in complex tokens such as
ABEND=TRUE (which means exactly the same as ABEND.  Quine would
shudder).

I think I'll submit another RCF.



  But, in their own examples IBM uses as valid constant values things
like Unnn, Snnn, TRUE, FALSE, none of which are described as


These are not used as constant values, but as lexical parts of keywords.


keywords and which clearly are not numeric in the normal sense of the
word!  Obviously some special alphanumeric constants are acceptable,
which begs the question why other arbitrary alphanumeric constants that
can't be confused with keywords should not be explicitly allowed as
operands as well.  (Maybe they work even though undocumented, but usage
in that case would be risky!)




If the relational expressions directly supported by JCL are found too
restrictive, one could always write a fairly trivial utility ...


JCL tailoring with ISPF skeletons?

-- gil


ISPF skeletons would only be an alternative if dynamically submitting 
another batch job is acceptable.


The referenced 17.1.4.2 section is very definitely in conflict with the 
actual IF JCL implementation.  I successfully used for years such forms as

IF  (&OPT = 1) THEN
and
IF (&OPT = 0) THEN
where &OPT was set on a prior SET statement to have a value of 0 or 
1, to allow various job steps to be executed conditionally based on a 
set of option statements at the start of the job stream.  Much easier 
than hunting for the job steps, and the expected results of 1 = 1 and 
0 = 0 being taken as TRUE, and 1 = 0 and 0 = 1 being taken as FALSE, 
were as intuitively expected.
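
A minimal sketch of that pattern (the symbol and step names are illustrative):

// SET OPT=1
//*  ... other option SET statements at the start of the job stream ...
//  IF (&OPT = 1) THEN
//OPTSTEP  EXEC PGM=IEFBR14
//  ENDIF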


So does IBM change the docs to agree with the implementation (the 
cheaper solution), or do they change the code to not conflict with the 
docs (which may break existing usage)?  In any event, the existing JCL 
documentation should not be incomplete, confusing, and inconsistent with 
the implementation.  There is a lot to be said for more formal syntax 
and semantic descriptions.


I confess to having less familiarity with the ABENDCC keyword forms, never 
having had a need to use one.  So, U001 in ABENDCC=U001 is a 
lexical part of a keyword?  Good Lord, who invented this syntax?  So, 
in the context of an IF statement relational expression, = is both a 
relational operator and also a lexical part of such keywords as the 
above.  A rational person familiar with practically any other 
programming language (and with other JCL relational expressions) would 
intuitively expect, given ABENDCC=U001 as valid, for ABENDCC = U001, 
ABENDCC EQ U001, and ABENDCC NE U001 all to be legal syntax, which 
they apparently are not.  It is a confusing and unnatural syntactic act 
to permit the same special symbol to serve both as an operator and as 
part of a keyword!


--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: JCL help (yes, with COND)

2012-04-18 Thread Joel C. Ewing

On 04/18/2012 12:43 PM, Frank Swarbrick wrote:

I have the following job (cut down to include only the relevant parts):


//VAPPROC4 JOB
//BACKUP1  EXEC PROC=SORTBKUP,COND=(4,LT)
//EP4INEXEC PROC=EP4IN,COND=(4,LT)
//E4INPRT  EXEC PROC=E4INPRT
//


The EP4IN PROC is a vendor supplied proc as follows:


//EP4IN  PROC
//STEP01 EXEC PGM=IEFBR14,COND=(20,LT)
//STEP01A  EXEC PGM=IEFBR14,COND=(20,LT)
//STEP02 EXEC PGM=EPZOS,COND=(20,LT)
//STEP03   EXEC PGM=IDCAMS,COND=(7,LT)
//STEP04   EXEC PGM=EPTBLUPD
//  IF ( STEP04.RC = 3 ) THEN
//STEP05 EXEC PGM=IEBGENER,COND=((7,LT),(4,LT,STEP03),(4,LT,STEP04))
//  ENDIF
//  IF ( STEP04.RC = 0 | STEP04.RC = 4 ) THEN
//STEPLAST EXEC EPLINK99
//  ENDIF
//EP4IN  PEND
//STEP06 EXEC PGM=IEBGENER,COND=(24,LT)
//STEP07 EXEC PGM=EPX99,COND=(20,LT)
//EP4IN  PROC

I've already solved my issue (which I will detail below), but I'm still unclear 
as to why what I had was wrong, and why my fix actually fixes it.

The issue is that EP4IN.STEP06 and EP4IN.STEP07 were not being executed:

IEF202I VAPPROC4 STEP06 EP4IN - STEP WAS NOT RUN BECAUSE OF CONDITION CODES
IEF272I VAPPROC4 STEP06 EP4IN - STEP WAS NOT EXECUTED.

IEF202I VAPPROC4 STEP07 EP4IN - STEP WAS NOT RUN BECAUSE OF CONDITION CODES
IEF272I VAPPROC4 STEP07 EP4IN - STEP WAS NOT EXECUTED.

STEPNAME PROCSTEPRC
BACKUP1  BACKUP  00
EP4INSTEP01  00
EP4INSTEP01A 00
EP4INSTEP02  00
EP4INSTEP03  05
EP4INSTEP04   FLUSH
EP4INSTEP05   FLUSH
STEPLAST STEP16   FLUSH
STEPLAST STEP17   FLUSH
EP4INSTEP06   FLUSH
EP4INSTEP07   FLUSH

(It's expected that EP4IN.STEP04, EP4IN.STEP05, STEPLAST.STEP16 and 
STEPLAST.STEP17 do not execute under these conditions, because EP4IN.STEP03 
resulted in RC=05; they execute only when RC=03 for this step.)

Anyway, the solution is to remove the COND parameter from the EXEC PROC=EP4IN.
My new result follows:

BACKUP1  BACKUP  00
EP4INSTEP01  00
EP4INSTEP01A 00
EP4INSTEP02  00
EP4INSTEP03  05
EP4INSTEP04   FLUSH
EP4INSTEP05   FLUSH
STEPLAST STEP16   FLUSH
STEPLAST STEP17   FLUSH
EP4INSTEP06  00
EP4INSTEP07  00
E4INPRT  REPORT  00

What really was the issue and why did my solution resolve it?
My reason for including this parameter is so that EP4IN should be bypassed if 
BACKUP1 fails.

Once again I ponder the sanity of the inventor of COND.

Thanks,
Frank



The EP4IN PROC seems to have an embedded PEND and then a spurious PROC 
statement at the end (where the PEND should be?).  I suspect the 
problem is that combining COND on an EXEC PROC statement with COND 
parameters coded inside the PROC does not interact well and seldom does 
what one might expect or want.  The COND on the EXEC PROC statement does 
not determine whether that EXEC is performed -- it instead overrides the 
COND parameter on every EXEC within the PROC and determines when steps 
within the PROC will be executed, completely ignoring any original, 
carefully thought out conditional logic using COND on EXECs within the 
PROC definition itself.


As a side issue, mixing IF/THEN/ELSE conditional execution with use of 
COND is so highly confusing that we always recommended not introducing the 
new forms without converting old EXEC COND usage to IF/THEN/ELSE 
statements at the same time (except possibly for COND on the JOB 
statement).  Also, one can use IF/THEN/ELSE in the main JCL stream 
without the bizarre override issues you get with COND on an EXEC PROC 
(see the sketch below).
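
For the stated intent -- bypass EP4IN if BACKUP1 fails -- a minimal IF/THEN 
sketch might look like this (it assumes, per the step listing above, that the 
SORTBKUP proc's step is named BACKUP, and treats RC greater than 4 as a 
failure):

//BACKUP1  EXEC PROC=SORTBKUP
//  IF (BACKUP1.BACKUP.RC LE 4) THEN
//EP4IN    EXEC PROC=EP4IN
//  ENDIF
//E4INPRT  EXEC PROC=E4INPRT

That leaves the carefully constructed COND logic inside the vendor PROC 
untouched, which is the point.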


--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: GO TO cobol

2012-04-16 Thread Joel C. Ewing
And of course there were languages of the time like FORTRAN, which 
encouraged unconstrained use of GO TO and the creation of spaghetti 
code.  FORTRAN at that point had very limited support for structured 
programming:  one very restrictive loop construct and a conditional 
branch which was essentially a three-way GO TO.  There were even cases 
where the conventions being followed might require a GO TO to reach 
corrective additions which would then use a GO TO to resume the 
original statement stream, either to preserve statement number 
sequencing conventions, or to avoid resequencing the entire program deck 
when all cards had sequence numbers.


 I think more people interpreted Dijkstra's remarks as a complaint 
against the unstructured use of GO TO and the lack of language 
support for structured programming constructs that forced  frequent GO 
TO use, rather than an arbitrary total ban.  The complaint was 
primarily one about use of the GO TO degrading comprehension of the 
program structure, not about program performance.  Ultimately, all code 
is reduced to the Assembler or machine language level, where the 
machine-level equivalent of the GO TO is essential and unavoidable.


ACM SIGPLAN (Special Interest Group on Programming Languages) Notices 
tended to publish humor, especially near April 1.  One proposal many 
years ago to totally eliminate the FORTRAN GO TO was to replace it 
with a COME FROM statement.  This totally eliminated the confusion 
over whether any FORTRAN statement with a statement number might be the 
target of some unseen remote branch -- and replaced it with the even 
more confusing concept that every statement with a statement number 
might actually be a disguised branch to some remote COME FROM statement.

  JC Ewing


On 04/16/2012 07:42 AM, McKown, John wrote:

What??? monopolizes the CPU??? GO TO was made a pariah by an article by Edsger 
Dijkstra.

http://en.wikipedia.org/wiki/Considered_harmful

And, of course, management went stupid (again) and came up with you cannot use the 
GOTO in any code at all. Which actually makes some COBOL more complicated due 
to the requirement of nesting IF statements within IF statements. And before END-IF 
existed, that could be very complicated. I've seen old code like:

IF ... THEN
...
IF ... THEN
...
ELSE
NEXT SENTENCE
...
IF ... THEN
...
   IF ... THEN
   ...
   ELSE
   NEXT SENTENCE
ELSE
...
.

Each internal IF had to have a corresponding ELSE with only NEXT SENTENCE in it.




--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: Batch utility to show DCB info for files

2012-04-11 Thread Joel C. Ewing

On 04/11/2012 11:36 AM, Bill Ashton wrote:

Hello!

Is there a standard IBM batch utility that can show the DCB and Space
attributes for a file? I tried LISTCAT, but it didn't give me this data.

I would like to generate a report for a whole list of files, so as we shift
these to another location, we can have the metadata, too.

Coding a Rexx or other program is not an option...the requirement is to use
standard, already existing utilities.

Thanks!
Billy



See the IBM IEHLIST utility, LISTVTOC statement.  You will have to list the 
VTOC separately for each DASD volume containing files of interest.  The output 
tends to be verbose, but at least it's a standard utility (a minimal sketch 
follows).
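
A minimal IEHLIST sketch (the volume serial and dataset names below are 
placeholders; repeat the volume DD and control statement for each volume of 
interest):

//LISTV    EXEC PGM=IEHLIST
//SYSPRINT DD SYSOUT=*
//*  PROD01 and the DSNAME list below are placeholders
//DASD1    DD UNIT=3390,VOL=SER=PROD01,DISP=OLD
//SYSIN    DD *
  LISTVTOC FORMAT,VOL=3390=PROD01,DSNAME=(PROD.FILE1,PROD.FILE2)
/*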


--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: Query about overcoming -- debunking, countering, and burying --mainframe myths

2012-04-11 Thread Joel C. Ewing

On 04/11/2012 11:12 AM, McKown, John wrote:

Not legally. But I've heard rumors that it does.

--
John McKown
Systems Engineer IV
IT

Administrative Services Group

HealthMarkets(r)

9151 Boulevard 26 * N. Richland Hills * TX 76010
(817) 255-3225 phone *
john.mck...@healthmarkets.com * www.HealthMarkets.com

Confidentiality Notice: This e-mail message may contain confidential or 
proprietary information. If you are not the intended recipient, please contact 
the sender by reply e-mail and destroy all copies of the original message. 
HealthMarkets(r) is the brand name for products underwritten and issued by the 
insurance subsidiaries of HealthMarkets, Inc. -The Chesapeake Life Insurance 
Company(r), Mid-West National Life Insurance Company of TennesseeSM and The 
MEGA Life and Health Insurance Company.SM




-Original Message-
From: IBM Mainframe Discussion List
[mailto:IBM-MAIN@bama.ua.edu] On Behalf Of Paul Gilmartin
Sent: Wednesday, April 11, 2012 8:50 AM
To: IBM-MAIN@bama.ua.edu
Subject: Re: Query about overcoming -- debunking, countering,
and burying --mainframe myths

On Wed, 11 Apr 2012 07:04:45 -0500, Tom Marchant wrote:



A renowned industry expert ...
[said] the 4341 might not be compatible with the 148


Not what you are looking for, but in the late 1970's, when I
was an Amdahl SE someone said that they understood that
the Amdahl was IBM-compatible, but would it run Cobol?


I understand that Hercules is zSeries compatible, but will
it run z/OS?

-- gil


...

I realize it would be ridiculous from a performance and cost standpoint 
to do so, but as a test of Hercules functionality I have always wondered 
whether it would be possible to run z/OS legally under Hercules if you did it 
under a Linux system running in an LPAR on GP processors on a z box 
already licensed for running z/OS natively.

--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: VSAM help wanted for random reads

2012-04-04 Thread Joel C. Ewing

On 04/04/2012 10:37 AM, Chip Grantham wrote:

We have an application like this, that is multiple record types in the
same KSDS.  We found that if we had a FD for the type '4' records and a FD
for the type '5' records (that is two DDs pointing to the same file), that
each kept a separate sequence set in storage and it ran faster.  You might
try it.

Chip Grantham  |  Ameritas  |  Sr. IT Consultant | cgrant...@ameritas.com
5900 O Street, Lincoln NE 68510 | p: 402-467-7382 | c: 402-429-3579 | f:
402-325-4030


...
Unless you have something at your installation that automatically tunes 
VSAM buffer allocation, some kind of manual tuning in the JCL is almost 
always recommended, as the default VSAM buffer allocations tend to be 
terrible for performance.  Just specifying a BUFNI index buffer count 
large enough to accommodate all index levels, plus additional buffers if 
the access pattern has multiple localities of reference, can do wonders 
for random access performance, even without going to BLSR.  The defaults 
used to guarantee that random access to any VSAM file with data in 
more than one CA (and hence at least two levels of index) would require 
re-reading CIs for all the various index levels for each data record 
access.  Just providing a few additional index buffers in such cases 
might be enough to cut the physical I/Os by a significant factor.
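
A minimal DD-level sketch of that kind of tuning (the dataset name and buffer 
counts below are placeholders; BUFNI should at least cover the number of index 
levels, with extras for multiple localities of reference, and BUFND covers 
data CIs):

//*  Dataset name and buffer counts are illustrative only
//CUSTKSDS DD DSN=PROD.CUSTOMER.KSDS,DISP=SHR,
//            AMP=('BUFNI=4,BUFND=8')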


--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: Malicious Software Protection

2012-03-27 Thread Joel C. Ewing
 that protects against malicious
software. It's called SAF, and it interfaces with ESM's like RACF,
or ACF2, or TopSecret.


SAF is not a product. It stands for System Authorization Facility and it
is nothing more than an interface within z/OS into which a security
system (such as ACF2, TopSecret and any ryo security system) can plug
in to receive and respond to security calls. It really has nothing
to do with anti-virus protection.


SAF is not a product, you're right. Please forgive my use of the term
product, I should have said feature. I do take issue with your
last sentence. SAF and an ESM have everything to do with anti-virus
protection, provided they are configured to correctly protect
APF-authorized resources.


It [z/OS] is the only operating system out there with built-in
anti-virus protection. On top of that, the hardware itself actively
protects against damage through storage keys, protected memory, etc.
You have to explain to the auditors that anti-virus software is not
needed on z/OS, because it's intrinsic to the operating system and
the hardware.


I think you seriously misunderstand what a virus is...

Yes, z/OS has exceptional security (and integrity and reliability)
features for protecting against non-authorized programs. But I must
emphasize... --NON--authorized programs!

When it comes to AUTHORIZED programs, z/OS's integrity (which is what
you are talking about with storage keys and such) is very good, but
of course not bulletproof. Worse though, when it comes to SECURITY,
there are some real problems! Because with the proper knowledge, it
is TRIVIALLY EASY FOR AN AUTHORIZED PROGRAM TO SUBVERT SECURITY
COMPLETELY!

This is what mainframers constantly forget regarding security. For
authorized programs there is no security. All that is necessary for a
malicious program to do is to Trojan-horse its way (with the AC(1)
attribute) into an authorized library, and you're done for!


I've never forgotten this. That's why my APF-authorized libraries are
severely limited in scope, and audited for any and all updates.



As far as I know there is no serious anti-virus program for
mainframes. I believe strongly that there needs to be one, but I
don't know of one. And at this stage of the mainframe culture, I
would be seriously suspicious of the efficacy of any program that
claimed to be anti-virus. I don't think that a serious mainframe
anti-virus program can exist unless and until IBM itself makes a
commitment to support an effort to make the mainframe anti-virus proof.


...

--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: Leaving IBM

2012-03-26 Thread Joel C. Ewing

Best wishes.  Your contributions over the years have been appreciated!
   JC Ewing

On 03/26/2012 08:52 AM, Walt Farrell wrote:

I mentioned this over on RACF-L the other day, so for some of you this will be 
old news.

I've been an IBMer for 28 years and have had a lot of fun with RACF and MVS,
and I've had a lot of fun interacting with you folks on RACF-L and IBM-MAIN.

But the time has come for me to retire and have fun with other things. I've
enjoyed the discussions here, and working with many of you to plan
enhancements or resolve problems.  I'm sure I'll still read both lists for
awhile, and probably even participate from a personal email address.

But after Wednesday morning I will no longer be an active IBM employee and
I'll speak about z/OS and RACF even less officially than than I do now.

It's been a great 28 years, but my family and other activities are calling
to me more and more strongly, and it's time to spend more time with them.

 Best wishes to you all,
 Walt

...


--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: A z/OS Redbook Corrected - just about!

2012-03-26 Thread Joel C. Ewing

On 03/26/2012 04:20 PM, John Gilmore wrote:

Yes, your pronunciation appears to be an acceptable one---'correct' is
too normative a word for most linguists---for an
educated-circa-2012-somewhere-in-Texas speaker.

Still, I don't suppose that I will be expected to forego my usual plea
for the use of the IPA instead of such expedients in these situations;
and I will not.

John Gilmore, Ashland, MA 01721 - USA



No doubt a subtle form of humor for this group to use another overloaded 
acronym, IPA.


I made the mistake of getting IPA by accident once when intending 
beer/ale.  Have never forgotten what India Pale Ale is like (yuk) and 
won't make that mistake again; but I'm sure in sufficient quantities it 
would put an end to any argument over acceptable or correct 
pronunciation - hence, I submit, this could be a plausible 
interpretation of IPA in this context -- though knowing John's posts, 
I'm sure the interpretation of one of the later Google links was 
intended. :)


--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: Pre-Friday fun: Halon dumps and POK Resets

2012-03-24 Thread Joel C. Ewing

On 03/23/2012 04:47 PM, Shmuel Metz (Seymour J.) wrote:

In <cafo-8tqy1ea0pispweiwomxpnajxzs2+rvadkzbd-2kva8q...@mail.gmail.com>,
on 03/22/2012
at 01:33 PM, zMan <zedgarhoo...@gmail.com> said:


Who else has stories to share?


EDS, at a government facility. Halon dumps, everybody ordered out. One
operator decides to be a hero and to shut down the equipment in an
orderly fashion, which he did. It turns out that non-toxic is a
relative term; he did require medical care. I don't know whether he
got a commendation or a reprimand.

There are a couple of other list members who were there at the time;
[perhaps they recall the details.



Halon required a minimum concentration of 5% to be effective and had a maximum 
safe concentration of 7%, above which it starts to have toxic effects on the 
nervous system.


When ozone-depletion concerns caused discontinuation of use of Halon, 
one of the replacement suppression agents was FM200, or 
heptafluoropropane.  FM200 has a slightly broader band of useful-safe 
concentration, 6.25% to 9%.  But above 9% it is described as causing 
cardiac sensitivity, which doesn't sound like a good thing for 
long-term exposure either.


With both of these suppression systems, if through design error the 
dispersal system is over-sized for the area, or if the agent isn't 
distributed uniformly, personnel remaining in the area could be at risk 
from overexposure.


The recommendations for maximum exposure to both of these agents are 
based on the assumption that you are not in a room with an active fire. 
These compounds break down while doing their suppression job in the 
presence of fire, and other compounds could be released that are much 
more toxic -- not to mention that the typical byproducts of the type of 
electrical fire one might expect in a computer room would by themselves 
be toxic in a closed area.


That's why one should always assume the worst in the event of a 
discharge, and exit.


--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: host codepge 0037 and the obscure not sign

2012-03-13 Thread Joel C. Ewing

On 03/13/2012 03:11 AM, Hunkeler Peter (KIUP 4) wrote:

Try
quote site sbdataconn(IBM-037,ISO8859-1)
before your get or put command. This tells the ftp server what
translation to apply.

If it help, find out what is setup at your location. Have a look at the
FTP data configuration file
(http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/f1a1b4a0/18.3?SHELF=f1a1bkc0&DT=20100607113411)

--
Peter Hunkeler


And if appropriate, you can make the sbdataconn value shown above the 
FTP default for your installation in TCPPARMS.  When I last checked, the 
default FTP translation table not only handled some characters 
incorrectly for us, but was not even a one-to-one mapping for 8-bit 
characters, so that FTP transmission of 8-bit text to z/OS and then back 
to the original non-EBCDIC system could result in changed characters 
from the original.
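
A one-line sketch of what that installation-wide default might look like as an 
FTP.DATA statement (statement form from memory -- verify it against the FTP 
data configuration documentation referenced above):

SBDATACONN  (IBM-037,ISO8859-1)

As noted below, IBM-1047 may be the more appropriate EBCDIC code page for some 
installations.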


In some cases you might want to specify the code set IBM-1047 rather 
than IBM-037.  There are unfortunately [at least] two conflicting 
EBCDIC-based code sets in common use on z/OS: IBM-037 for applications 
from the 3270 tradition and IBM-1047 for applications from the C Language 
or UNIX tradition, mainly differing in the code points for brackets, caret, 
and the not sign.


--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: Program FLIH backdoor - This is a criminal breach of security!

2012-03-05 Thread Joel C. Ewing
 only intended to affect SMP/E maintenance for CA products, the 
exit was typically globally accessible, and there was no easy way to 
verify that it did not have the potential to compromise SMP/E behavior 
for all other z/OS product maintenance.  Needless to say that approach 
was not well received once it became known (and was discontinued).


--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: JES2 STCCLASS and TSUCLASS - documented

2012-03-04 Thread Joel C. Ewing

On 03/04/2012 11:29 AM, Bob Rutledge wrote:

Bob Rutledge wrote:

Andrew Metcalfe wrote:

Folks

Anyone know in which manual can I find the STCCLASS and TSUCLASS
initialisation parameters documented please?
Not in 1.12 JES2 Init & Tuning. I suspect that they are synonymous
with JOBCLASS(STC) and JOBCLASS(TSU). I have inherited support for a
JES2 system that has STCCLASS and TSUCLASS specified in the init
deck, so just looking for definitive description.

STCCLASS AUTH=ALL,
BLP=YES,
COMMAND=EXECUTE,
MSGCLASS=W,
MSGLEVEL=(1,1),
REGION=512K,
TIME=(60,0)

TSUCLASS AUTH=ALL,
BLP=YES,
COMMAND=EXECUTE,
MSGCLASS=W,
MSGLEVEL=(1,1),
REGION=512K,
TIME=(60,0)


JES2 Init & Tuning Guide/Reference, SA22-7532/3


Also commands $DSTCCLASS and $DTSUCLASS don't appear to be documented.


JES2 Commands, SA22-7526


Sorry. The definition isn't expressly stated. It's hinted at in the
explanation for $HASP837.

Bob

...

The syntax for JES2 initialization parameters and commands has changed 
a number of times over the years, and some of those changes very 
definitely involved unifying extremely similar and related parameters 
and commands into a common syntax with an additional parameter to 
distinguish them.  I strongly suspect the STCCLASS and TSUCLASS to 
JOBCLASS() change is an example of this; while the old forms may still 
be supported for compatibility, they would not be documented if their 
use is deprecated, and the old forms may cease to be supported at some 
arbitrary point in the future.


I suspect someone there ignored some JES2 migration instructions in the 
past.


I believe the JOBCLASS parameters and corresponding commands others have 
mentioned are the current incarnation and the versions that should be in 
use rather than the obsolete forms.  If the sub-parameter migration is 
not obvious, you may have to find a manual old enough to have 
documentation on the old forms (or perhaps search for STCCLASS/TSUCLASS 
migration information on-line).
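
A sketch of what the quoted STCCLASS definition would look like in the current 
JOBCLASS(STC) form (assuming the parameters carry over one-for-one -- verify 
against the current Init & Tuning Reference; TSUCLASS maps to JOBCLASS(TSU) 
the same way):

JOBCLASS(STC) AUTH=ALL,
              BLP=YES,
              COMMAND=EXECUTE,
              MSGCLASS=W,
              MSGLEVEL=(1,1),
              REGION=512K,
              TIME=(60,0)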


--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: Writing article on telework/telecommuting

2012-03-04 Thread Joel C. Ewing
A session manager on the mainframe (CL/SuperSession or equivalent 
functionality) is your friend!  This protects you not only from remote 
failures but local glitches as well.  The current screen from multiple 
logged-in applications is maintained by the session manager and 
unaffected by drop outs occurring between the end user and the session 
manager.  Applications like TSO don't see a session failure that only 
affects the link between the user and the session manager (unless it 
lasts so long that application sessions time out from inactivity).

  Joel C Ewing

On 03/04/2012 12:25 PM, David Betten wrote:

One thing I'll add to that is that if your internet service periodically
drops, it's a real pain if you're connected to a host 3270 session.  For
example, my wife primarily does email and web browsing while working from
home.  So if our internet signal drops for a few minutes and then comes
back, she's not likely to even notice.  However, if I'm scrolling through
code or a hex dump and the service drops for just a few seconds, it's a
major headache getting logged back on and hoping my session reconnects to
where I was.  Our latest VPN client seems to offer a bit better recovery
from that by maintaining the session, but a few years ago it was a major
headache for me.

Have a nice day,
Dave Betten
DFSORT Development, Performance Lead
IBM Corporation
email:  bet...@us.ibm.com
DFSORT/MVSontheweb at http://www.ibm.com/storage/dfsort/

IBM Mainframe Discussion ListIBM-MAIN@bama.ua.edu  wrote on 03/04/2012
08:49:57 AM:


From: Martin Packermartin_pac...@uk.ibm.com
To: IBM-MAIN@bama.ua.edu,
Date: 03/04/2012 10:01 AM
Subject: Re: Writing article on telework/telecommuting
Sent by: IBM Mainframe Discussion ListIBM-MAIN@bama.ua.edu

One experience from teleworking which should appeal to mainframers: By and
large 3270 is the least demanding data stream - so TSO / ISPF goes fast
even on broadband as crummy as mine. (It's all the other junk that runs
really slowly when the wet string dries out.)

Now I may be in a minority but I bet this counts for lots of people.

Anyhow, having telecommuted for more than 10 years I'm looking forward to
this article: "You are not alone" is a useful thing to hear. :-)

Cheers, Martin

Martin Packer,
Mainframe Performance Consultant, zChampion
Worldwide Banking Center of Excellence, IBM

+44-7802-245-584

email: martin_pac...@uk.ibm.com

Twitter / Facebook IDs: MartinPacker
Blog:
https://www.ibm.com/developerworks/mydeveloperworks/blogs/MartinPacker



From:
Gabe Goldbergg...@gabegold.com
To:
IBM-MAIN@bama.ua.edu,
Date:
03/03/2012 21:43
Subject:
Writing article on telework/telecommuting
Sent by:
IBM Mainframe Discussion ListIBM-MAIN@bama.ua.edu



I'm writing article for Destination zhttp://destinationz.org/ on
telework/telecommuting. I think this partitions in two dimensions --
technology vs. mindset and worker vs. employer.

There's abundant information -- and blather -- about this subject. But
Destination z is mainframe focused so I'm especially interested in
relevant System z tips for all four quadrants:
technology/mindset/worker/employer.

Again, this is a tips article so won't include positive/negative
anecdotes. But they're still welcome -- they can suggest tips, they're
interesting, and I might write a longer piece on this sometime.

As usual, extra credit for sending to me directly (in addition to list,
if you're so inclined) so I needn't pluck from digests.

Thanks, as always..

--
Gabriel Goldberg, Computers and Publishing, Inc.   g...@gabegold.com
3401 Silver Maple Place, Falls Church, VA 22042   (703) 204-0433
LinkedIn: http://www.linkedin.com/in/gabegoldTwitter: GabeG0


...
--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: IKT100I USERID CANCELED immediately after TN3270 connection fail

2012-03-03 Thread Joel C. Ewing
This sounds like a user training problem.  Users should be taught in the 
scenario you describe to use EDIT on the new member name, within EDIT 
use COPY to pull in the old member contents, then do editing.  The 
design intent of VIEW is to view without altering the original contents, 
and default attributes such as RECOVERY should be expected to be set OFF 
accordingly for VIEW.  They should not be using VIEW when intending 
extensive modifications, but should instead be using EDIT, and have only 
themselves (and their trainer) to blame when they lose data using VIEW 
in an inappropriate context.  It shouldn't take a whole lot of thought 
to realize that the commands are named differently for a reason, and 
that the names actually do convey the intended use!
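
A minimal sketch of that sequence (data set and member names here are made 
up):

   EDIT 'userid.SOURCE.PDS(NEWMEM)'   open the new, empty member in EDIT
   COPY OLDMEM                        EDIT primary command pulls in the old
                                      member's contents (into an empty member
                                      no A/B line command is needed)
   ... make the changes, then SAVE or END as usual, with RECOVERY ON.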

J.  C. Ewing

On 03/03/2012 10:27 AM, chen lucky wrote:

YES, this is one option. But some users have the habit of doing a lot of
editing in VIEW mode and then creating a new member from the result. So if the
user could reconnect to the remaining session after a connection failure, that
would be the best choice.

thanks.

2012/3/3 retired-mainfra...@q.comretired-mainfra...@q.com


If the data to be saved is from ISPF Edit, have you considered setting the
Recover option for your users?

- Original Message -
From: chen luckychenluck...@gmail.com
To: IBM-MAIN@bama.ua.edu
Sent: Saturday, March 3, 2012 2:08:17 AM
Subject: IKT100I USERID CANCELED immediately after TN3270 connection fail

Hi List,

Thanks for your help.

Recently I encountered a problem where IKT100I USERID CANCELED is issued
immediately after a TN3270 connection failure. It is unacceptable in our shop,
because users lose any work they have not saved in time.


...

--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: Why _TZ put times 7 minutes off?

2012-03-02 Thread Joel C. Ewing

On 03/02/2012 06:44 PM, Charles Mills wrote:

And the answer from those who know is

It happened during the POR last Thursday and we're talking with IBM to
figure out why a POR would do that to us.

Thanks all for your patience with YATQ (yet another time question).

Charles



In absence of sysplex timer or the like, the processor TOD clock is set 
only at POR and is set based on the HMC clock, which may in turn sync 
once a day with the SE clock.  If you don't have any procedure to sync 
the HMC/SE clocks to UTC or verify they are reasonably accurate when you 
know a POR is imminent, odds are they have drifted from reality and a 
POR will propagate that error to the processor TOD clock, which in turn 
propagates to all LPAR TOD clocks as they are activated.  It would seem 
your operators must be setting local time explicitly at IPL (rather than 
using a fixed offset from TOD), or the error would have been more 
obvious after the IPL.  My recollection is that it has worked this way ever 
since IBM mainframes have had HMCs for control.


--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: Why _TZ put times 7 minutes off?

2012-03-02 Thread Joel C. Ewing

On 03/02/2012 09:41 PM, George Kozakos wrote:

On 02/03/2012 08:36 PM, Joel C. Ewing wrote:

In absence of sysplex timer or the like, the processor TOD clock is set
only at POR and is set based on the HMC clock, which may in turn sync
once a day with the SE clock.

The SE is sync'd to the CEC TOD and the HMC is sync'd to the SE.
The SE is sync'd to the CEC TOD every 24 hours, 4 hours or 1 hour depending
on MCL level. The current best practice is to keep the CEC TOD accurate via
an external time source which in turn keeps the SE and HMC accurate. This
requires STP or sysplex timers.

George Kozakos
z/OS Software Service, Level 2 Supervisor



The CEC sync on more recent machines had slipped my memory.  On those 
systems, to see a significant jump in TOD time at POR, one would have to 
manually set the SE time, do a poor job of it, and do it close enough to 
POR so SE time doesn't get reset by the CEC TOD, yet far enough away 
(several minutes?) to be sure it propagated to HMC.


Particularly now that STP is just a matter of code rather than hardware, 
it makes less and less sense (from the customer's viewpoint of course) 
for this to be a chargeable feature, which was still the case when I 
last checked.  As long as it is a chargeable feature it is hard to 
cost-justify unless you are running a multi-system sysplex environment 
that requires it or you have some unusual requirement that really 
demands your TOD clock be that accurate.  That it is the best practice 
and the only sure way for keeping the TOD clock accurate makes it nice 
to have for a number of reasons; but if management asks whether it is a 
must-have additional expense or a feature you can live without, in many 
cases the latter response must be given.


--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: Originality

2012-03-01 Thread Joel C. Ewing

On 03/01/2012 10:42 AM, Clark Morris wrote:

On 1 Mar 2012 08:21:46 -0800, in bit.listserv.ibm-main you wrote:


17 USC 106: Subject to sections 107 through 122, the owner of copyright under 
this title has the exclusive rights to do and to authorize any of the following:
(1)to reproduce the copyrighted work in copies or phonorecords;
(2)to prepare derivative works based upon the copyrighted work;
quoted from http://www.law.cornell.edu/uscode/text/17/106

Is anyone here using an IBM mainframe for personal work?  The rest of us 
work(ed) for companies which probably have legal departments.  That is where 
this discussion belongs.  Or, since mainframes are probably used in some 
hospitals and health insurance companies, we can also discuss the virtues and 
limitations of self-performed amputations.


Although I am semi-retired (will take contracts) and probably long
past any statute of limitations, I have used as is or modified SAMPLIB
members, placing them in production and in addition I may well have
put said modifications on the MICHMODS, JES3 or CBT tapes.  For those
of you still active, not using SAMPLIB members could be a drastic
change in the way things are done.  I believe the intent of those
members is that they be used as templates for organizations to
customize the system and that sharing can be a part of that
customization.  Since I am fairly certain that there are installations
within IBM and other vendors that have copies of various MODS tapes
and/or their successors, this probably is a non-issue.  However, I'm
just a bumped up DOS360 COBOL payroll programmer, not a lawyer.

Clark Morris


Considering what a small percentage of the totality that is z/OS is 
represented by SAMPLIB examples and default PARMLIB members, I would be 
astonished if even an anal-retentive lawyer would consider quoting 
unmodified or modified versions of these members, with attribution, in 
another published work to be anything other than fair use of z/OS (but 
then there are lawyers that surpass anal-retentive).


Considering that IBM advises customers to customize these for their own 
use, and that it is in IBM's advantage for customers to have access to 
as much assistance and insight from other installations as possible in 
that process, it would make no sense for IBM to contest such sharing.


If it were brought to IBM's attention that someone were attempting to 
distribute bad mods or other customization examples with the deliberate 
intent of tricking IBM customers into compromising their z/OS integrity, 
I'm sure IBM would take exception to that; but there surely are other 
statutes more potent than copyright infringement that could be brought 
to bear at that point.

  Joel C Ewing



- Original Message -
From: Shmuel Metz (Seymour J.)shmuel+ibm-m...@patriot.net
To: IBM-MAIN@bama.ua.edu
Sent: Thursday, March 1, 2012 9:52:04 AM
Subject: Re: Originality (was: Duplicating SYSOUT ...)

In040601ccf67d$d7ea4060$87bec120$@mcn.org, on 02/28/2012
   at 05:02 PM, Charles Millscharl...@mcn.org  said:


Creating derivative works is a right reserved to the copyright
holder.


In what country? Do you have a citation for the section of the US Code
that prohibits creating, as opposed to distributing or performing, a
copyrighted work?

...


--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: Stupid JCL trick?

2012-02-21 Thread Joel C. Ewing

On 02/21/2012 10:16 AM, McKown, John wrote:

Do you ever want to retire some step(s) from a job? But you don't really want to remove 
the step(s) just in case? I don't remember this being mentioned before, so I thought I 
would. It will work for any step, other than the first one in the job. Find a step, any step, 
before the step(s) you want to bypass. Wrap the step(s) you want to bypass with:

// IF (stepname.RUN=TRUE AND stepname.RUN=FALSE) THEN
 steps to be bypassed
// ENDIF

Replace stepname with the name of the step you selected which exists before 
the bypassed step(s). This works if the previous step ran or didn't run, regardless of 
the step's return code if it did run. Sorry if this was obvious. Maybe my brain has 
retired already.

John McKown
Systems Engineer IV
IT

Administrative Services Group

HealthMarkets(r)

9151 Boulevard 26 * N. Richland Hills * TX 76010
(817) 255-3225 phone *
john.mck...@healthmarkets.com * www.HealthMarkets.com

...

But be aware there can still be some interactions with the DSNs 
referenced in the DDs for the skipped step.  Whether a DSN enqueue is 
done, the type of enqueue, and how long the enqueue is held may still be 
influenced by the reference to the DSN in the skipped step.  Also, job 
restart managers (ZEBB, CA-11,...) that are configured for auto delete 
of existing datasets at start of job if the first reference to that DSN 
in the job is with DISP=(NEW...) may be fooled if that first reference 
is in a job step that will be skipped.


If you really want guaranteed zero effects from the unwanted step 
without complete deletion of step JCL, changing the statements to 
comments is the only sure way.
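
For example (step, program, and data set names below are made up), the 
retired step can be kept in the job purely as comment statements:

   //* STEP050  EXEC PGM=MYPGM            retired, kept for reference
   //* INFILE   DD   DSN=PROD.INPUT.FILE,DISP=SHR
   //* OUTFILE  DD   DSN=PROD.OUTPUT.FILE,DISP=(NEW,CATLG,DELETE),
   //*               SPACE=(CYL,(10,5),RLSE)

JCL comment statements are ignored entirely, so there are no data set 
enqueues from those DDs and nothing for a restart manager to misinterpret.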


--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: Batch process VS Started task

2012-02-19 Thread Joel C. Ewing
The described record arrival rate averages just under 58 records per 
second.  Somehow I don't think XBM was designed with that high a 
transaction arrival rate in mind, processing each record as a separate 
transaction.  I would think that a high-speed transaction processing 
environment (CICS, IMS) would do better than XBM, although those would 
probably still involve more overhead than a specialized 
transaction-processing STC optimized just for your own peculiar 
transactions (but I agree a separate specialized STC would be more of a 
maintenance headache).
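
(5,000,000 records / 86,400 seconds in a day works out to roughly 58 records 
per second averaged over 24 hours; the instantaneous peaks would of course be 
higher.)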


There is also an intermediate approach where you collect records by some 
means and then periodically fire off a process or transaction to process 
records collected since the last time the process was run.


Another consideration: If the current batch processing is done at a time 
of lower system load and involves significant work per record, 
moving this processing into peak processing times could have significant 
impact on your peak MSU requirement and a significant impact on your 
costs.  I suspect your record arrival rate is not a constant throughout 
the day but also has its peaks.  If those peak record arrival rates 
actually correspond to current periods of peak system load, then the 
impact on the peak system MSU requirements of moving this load would be 
even greater than one would expect from the average record rate alone.


In other words, if there is a perceived business need to process the 
records in a more timely manner, be sure those footing the bill are 
aware it may not be a free lunch.

   JC Ewing


On 02/19/2012 04:04 PM, Ed Gould wrote:

Magen:

Not sure if this is what you are looking for but
There is a facility called execution batch monitoring (at least in JES2)
You feed it input and it creates output (to your specifications). It
was originally designed for batch compiles but it could be adapted to
something like you want (?).

Ed

On Feb 19, 2012, at 3:25 PM, Magen Margalit wrote:


Hi list.

We have a daily batch job that processes as input records which
have been collected all day.
The volume of records is about 5 million per 24 hours.

In order to make the system more online, we are looking for a way to
run the process for each record all day long instead of as a daily run,
and to do so with as few application changes as possible.

One idea that came up is to convert the process to a self developed STC
which will be triggered by a record on an MQ queue and will run as STC
all the
batch process programs

To me it seems like a bad idea, because having a self-developed STC
in production
creates a maintenance gap (and where there is one STC a second one
will soon follow...)...

Are there other advantages / dis-advantages regarding a self
developed STC ?

Are there any self developed STC's at your shop?

Any other ideas on how to approach this issue?

Thanks in advance.

Magen




--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: O/T but curious (Re: Archaic allocation in JCL (Was: Physical record size query) )

2012-02-19 Thread Joel C. Ewing

On 02/19/2012 11:40 AM, Shmuel Metz (Seymour J.) wrote:

In
cajtoo59ducxpmrtvozjwjxbr26rbq1hdbdarsnfundxbhfw...@mail.gmail.com,
on 02/18/2012
at 07:06 PM, Mike Schwabmike.a.sch...@gmail.com  said:


Neither Windows or Linux have a Catalog concept to find a dataset on


What do you think a directory is?

Under Windows, a directory is closer functionally to the MVS/DOS concept 
of a VTOC, as each volume has its own directory and you have to somehow 
know which volume to consult -- although admittedly in a windows system 
the number of volumes is typically very low.  In Linux, if all volumes 
are mounted, the directory plays a similar functional role to that of 
the MVS catalog(s) and VTOCs combined.  But in either case they are 
obviously structurally different: finding a file entry in Windows or 
Linux requires a progressive search through multiple directory levels 
rather than just a single lookup of the full path name as with a data 
set name in an MVS catalog.  And in both Windows and Linux, in many 
cases the user thinks of a file by its file name and not its full path, 
and the onus in on the user to remember under what directory the file 
was placed.  That issue does not arise in MVS because dataset names are 
always referenced by the full name -- roughly the equivalent to always 
requiring the full path name in Win/Linux -- and that makes direct 
lookup in a catalog possible.


--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: Archaic allocation in JCL (Was: Physical record size query)

2012-02-13 Thread Joel C. Ewing

On 02/13/2012 09:19 AM, Chris Craddock wrote:

On Mon, Feb 13, 2012 at 8:56 AM, Paul Gilmartinpaulgboul...@aim.comwrote:


On Mon, 13 Feb 2012 07:21:11 -0600, McKown, John wrote:


Or, as the programmers at our shop would do:




SPACE=EAT-EVERYTHING-IN-SIGHT-AND-CAUSE-OTHER-JOBS-TO-ABEND-BECAUSE-MY-STUFF-IS-IMPORTANT-AND-YOUR-STUFF-ISNT.

In many other systems, such as Winblows, everybody gets their own

personal space. And if it is used up, it doesn't impact others. z/OS
shares DASD space.  ...



The z/OS cultural norm for HFS and zFS is to give each user a
dedicated filesystem for HOME.  This is similar to the behavior
of personal instances of those other systems.




I think it is fair to say that JCL and space management are areas where
z/OS truly is archaic. The other world manages to get by just fine
without having to figure out how much resource to give. There's no reason
z/OS couldn't do the same other than slavish adherence to legacy. IMHO it
is about time the system itself took care answering its own incessant how
big?, how many?, how often? questions. It's 2012 ferpetesakes. I'm all
in favor of making sure that existing applications continue to work. I am
far less impressed with continuing to impose 1960s thinking on new ones.


Requiring application programmers to think in terms of tracks and 
cylinders and to understand interaction between physical block size and 
track capacity is indeed archaic, as are artificial restrictions on 
number of extents or volumes.  Prior to emulated DASD and large DASD 
cache sizes, space/allocation sensitivity to tracks and cylinders was 
frequently necessary for performance reasons, but that is no longer the 
case.  It should be possible to just specify data set limits in terms of 
data bytes expected or records/average-record-length expected without 
regard for tracks, cylinders, extents, or volumes.  And given some 
simple mechanism for specifying such limits, z/OS should also provide 
support for monitoring whether application growth is causing data sets 
to be at risk of exceeding their limit.  Restricting sequential data set 
allocation to record allocation of SMS Extended Sequential data sets 
with space constraint relief and SD block size comes close, but is an 
incomplete solution and only works for sequential files.


The MVS allocation strategy, which generally requires dynamic secondary 
extensions to data sets when the size exceeds what can reliably be 
obtained on a single volume, has always been flawed. Even when the exact 
size of a large data set was known in advance, there was never a 
guarantee that space for required secondary extensions would be 
available on the selected volumes.  In effect there was no easy way to 
convey to z/OS via primary/secondary specifications what the true limit 
of the data set should be because the actual maximum number of secondary 
allocations was always an unknown, with no guarantee at the beginning of 
step execution that even one dynamic secondary could be allocated on any 
of the chosen volumes.


Perhaps an awareness of total data bytes involved in a data set is non 
essential for data sets below some (installation-dependent) total-byte 
threshold; but at some point for larger data sets those developing the 
batch application should have an awareness of approximate data set size 
and records involved so that concurrent space requirements for a job 
step may be at least approximated up front; and so application 
programmers don't choose an approach that might be appropriate for a toy 
application of 1000 records but totally inappropriate for a production 
application with a million records.  If a required batch application is 
going to consume a significant percentage of the total DASD farm, there 
also needs to be some means for awareness of that, as it will impact job 
scheduling and capacity planning.


The z/OS fixation on requiring data set SPACE specification for 
allocation rather than using some totally dynamic approach is no doubt 
an outgrowth of the desire for MVS to reliably support unattended batch 
and, as others have mentioned, to prevent one looping batch job from 
causing termination and denial-of-service of other unrelated jobs by 
exhausting available DASD space.  Properly designed JCL SPACE parameters 
(which admittedly takes some effort) can also ABEND a batch job step up 
front if sufficient DASD space does not exist for successful completion 
-- much more desirable than allowing a batch job step to run for hours 
and consume valuable resources, and then blow up because space for 
further secondary allocation is unavailable.  Operating systems that 
don't require space estimates for large file allocation are implicitly 
saying that reliable running of unattended batch processes is of 
lesser importance.


--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN

Re: Archaic allocation in JCL (Was: Physical record size query)

2012-02-13 Thread Joel C. Ewing

On 02/13/2012 02:43 PM, Paul Gilmartin wrote:

On Mon, 13 Feb 2012 11:38:44 -0600, Joel C. Ewing wrote:



It should be possible to just specify data set limits in terms of
data bytes expected or records/average-record-length expected without
regard for tracks, cylinders, extents, or volumes.  ...


And the user interface should be simplified.  I should be able to
code SPACE=(1,54000000000) and let SDB infer an average
block size and allocation spare me the algebra of factoring the
total space into halfword chunks.  This might require a new
alternative TU: a  64-bit (for future growth) extent size in bytes.

(I'd prefer, for legibility, SPACE=(1,54.000.000.000) with European
thousands separators.)

-- gil


Not sure about your reference to half-word chunks.  Although there are 
16Mi limits on the maximum numerical value for the primary-qty parameter,

   SPACE=(1,54000),AVGREC=M

is perfectly legitimate for allocating 54,000 MiB (only about 5% high if 
you really needed exactly 54,000 MB) for a sequential data set, provided 
you have a DATACLAS that also specifies EXTENDED, Storage Constraint 
Relief, and a high enough volume count.  Allocation will spread the 
dataset over as many volumes as necessary and up to as large a number 
(127?) of extents per volume as necessary to allocate the requested 
total space.  Although the first value in SPACE is described as average 
record length, it is really only used as a multiplier for primary-qty or 
secondary-qty and AVGREC for calculating total data bytes needed, under 
the assumption the actual BLKSIZE will give efficient track utilization.
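
As a concrete (and purely illustrative) example, assuming an 
installation-defined data class (here called DCEXTSEQ) that supplies extended 
format, space constraint relief, and an adequate volume count, a DD along 
these lines requests the 54,000 MiB above plus 10% secondaries without ever 
mentioning tracks or cylinders:

   //BIGOUT   DD  DSN=PROD.BIG.EXTRACT,DISP=(NEW,CATLG,DELETE),
   //             DATACLAS=DCEXTSEQ,RECFM=FB,LRECL=80,
   //             SPACE=(1,(54000,5400),RLSE),AVGREC=M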


The gotcha used to be that if you grossly over-requested space, got 
space dispersed over umpteen volumes, and only used a little of the space, 
RLSE would then release only the unused space on the last volume 
actually written and leave all the unneeded, unused space on subsequent 
volumes allocated until the data set was deleted.  One would hope that 
issue would eventually be resolved and that the concept would be extended to 
data set types other than DSORG=PS.


--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: Archaic allocation in JCL (Was: Physical record size query)

2012-02-13 Thread Joel C. Ewing

On 02/13/2012 04:26 PM, Paul Gilmartin wrote:

On Mon, 13 Feb 2012 16:11:53 -0600, Joel C. Ewing wrote:



(I'd prefer, for legibility, SPACE=(1,54.000.000.000) with European
thousands separators.)


Not sure about your reference to half-word chunks.  Although there are
16Mi limits on max numerical value for primary-qty parameter,
SPACE=(1,54000),AVGREC=M
is perfectly legitimate for allocating 54,000 MiB (only about 5% high if
you really needed exactly 54,000 MB) ...


I stand corrected; 24-bit rather than halfword.  But, still,
why can't the computer perform the algebra for me?  And what
is the etymology of AVGREC?  It seems to suggest Average
Record Size; it means nothing of the sort.

(Or, even, SPACE=(1,54000M), or even, SPACE=(1,54G)?)

(Is there a default for the SPACE unit?  If not, I'd suggest 1,
as in SPACE=(,54000000000).)

-- gil


The keyword's creator was apparently thinking about the changed 
interpretation of the SPACE parameter caused by the presence of the 
AVGREC parameter, and not about the meaning of the AVGREC value itself. 
That is inconsistent with the way all other parameter keywords appear 
to have been chosen, and it makes even less sense when one uses 1 instead 
of an average record length as the first SPACE subparameter.


I also agree it would be more intuitive if direct suffixes such as B, 
KB, and MB had been used in the quantity values instead of the separate 
AVGREC - and, while they're adding that enhancement to JCL, have the 
suffixes represent the correct powers of 10.  When discussing implementation 
design limits on the number of entries or records with an application 
programmer or an end-user, I never encountered anyone who said 10 
thousand and meant 10,240 rather than 10,000!!


--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: Turning on additional CPs

2012-02-09 Thread Joel C. Ewing
But would you really expect an effect of more than a few percentage 
points in the efficiency of one CP from only going from 3 to 4 engines?


If you are seeing a large difference in run time, perhaps you should 
also look for an explanation that can produce a much larger difference 
than the MP effect.  Maybe by removing a CPU bottleneck you have moved 
your major system constraint elsewhere, either to real memory or to DASD 
throughput, or some logical interlock.  Perhaps the longer running jobs 
are now doing significant paging because of greater contention for real 
memory, or are having to wait on physical I/O to DASD more -- because 
other things that used to be too starved for CPU to compete are now 
running and using resources other than CPU that used to be more 
plentiful in a CPU-starved environment.

  JC Ewing

On 02/09/2012 12:04 PM, Hal Merritt wrote:

I suppose that is reasonable for a single-threaded, CPU-bound job, as a little 
is lost from each engine as another is added. However, you should be able to 
run more concurrent work, giving better overall throughput.

Another benefit of another engine is that, if it is not needed for anything else, 
z/OS likes to direct I/O interrupts to just one engine. This allows the other 
engines to run a little more smoothly and again should increase your overall throughput.

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@bama.ua.edu] On Behalf Of 
gsg
Sent: Thursday, February 09, 2012 10:39 AM
To: IBM-MAIN@bama.ua.edu
Subject: Turning on additional CPs

We have a z10 with 4 engines.  Since upgrading to this box, we were only 
running 3 engines.  However, we recently turned on the 4th engine.  We noticed 
that several jobs started running longer, which we didn't expect.  Could 
turning on additional engines actually make a job run longer?
Also, where can I find any reading material on the effect of turning on/off 
engines?


TIA

...


--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: WLM Capping

2012-02-08 Thread Joel C. Ewing
One of the subtle misconceptions in the design of WLM was the implicit 
assumption that you would always have enough hardware resources to do 
the required workload with the required response time, and if not 
would quickly remedy the situation.  In the real world, this is not 
always the case and sharing the pain is at times more acceptable than 
an immediate expenditure to add horsepower (even if it's just a logical 
change that increases software cost).


I would submit that in the business world, while there are frequently 
workloads that are discretionary in the sense that they may be delayed, 
most of these are not discretionary in the sense that they may be 
delayed indefinitely or have no required completion window.  In a system 
that is approaching or at saturation, one finds that typical WLM 
definitions effectively convert workloads that, to the user, were 
discretionary only as to WHEN they might be run into workloads that are not 
run at all, which is rarely the intent.


WLM now has better tools than it did initially for defining loved ones that 
are only conditionally loved, and these can better address some of these 
situations.


When financial or political considerations force you to periodically run 
near system saturation for an extended time, you will invariably find 
that some of the workload priorities have to be rethought to allow 
discretionary, but non-optional, work to complete within required 
windows under that environment.  Conversely, if you rarely have to run 
in a resource constrained environment, it probably doesn't make sense to 
expend the effort to distinguish among discretionary workloads that are 
truly discretionary and those that are non-optional and only 
discretionary up to a point (when that point is never being crossed).

  JC Ewing

On 02/08/2012 03:15 AM, Martin Packer wrote:

So that told you some of your batch WASN'T (in business terms) truly
discretionary. Glad you (by the sound of it) pulled the stuff that
mattered if it never ran out of SYSOTHER.

Martin

Martin Packer,
Mainframe Performance Consultant, zChampion
Worldwide Banking Center of Excellence, IBM

+44-7802-245-584

email: martin_pac...@uk.ibm.com

Twitter / Facebook IDs: MartinPacker
Blog:
https://www.ibm.com/developerworks/mydeveloperworks/blogs/MartinPacker



From:
David Andrewsd...@lists.duda.com
To:
IBM-MAIN@bama.ua.edu,
Date:
08/02/2012 00:36
Subject:
Re: WLM Capping
Sent by:
IBM Mainframe Discussion ListIBM-MAIN@bama.ua.edu



On Tue, 2012-02-07 at 15:51 -0500, Gibney, Dave wrote:

I don't want to imagine what WLM stomping on the brakes looks like in
your shop.


Biggest hassle for me when I started softcapping was that most of my
batch had been discretionary - I always liked the MTTW algorithm.  But
when we softcapped all that discretionary workload went to the meat
locker, and we couldn't have that.  Had to do some triage and creative
stuff with velocity goals and performance periods to make things right
again.




--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: Physical record size query

2012-02-07 Thread Joel C. Ewing
Under z/OS, device type implies the architecture and usage protocols 
for the device.  You can't call a device a 3390 and not have it 
accurately emulate the characteristics of a 3390, nor can z/OS 
communicate with a device defined to it as a 3390 as if it were 
something else and expect success.  The blocks per track, tracks per 
cylinder, and CKD characteristics of a 3390 were fixed when real 3390's 
were created and are indeed embedded within the z/OS understanding of 
what is a 3390.  To change that definition after the fact would break 
all sorts of things which currently depend on knowing how to calculate 
cylinder/track/record locations of internal blocks in a dataset and 
allocation requirements of a dataset, and data capacity of a track.


The only way to change those rules without breaking existing data 
requires defining a new device type to z/OS, connecting devices of that 
type to z/OS, and placing datasets on those devices.  In the old days 
before emulated DASD, DASD upgrades not infrequently involved changes in 
device types and migration/copying of data to convert it to the new 
device-type rules.  Migration could involve considerable effort, and in 
some cases one would even find applications that wouldn't work on the 
new architecture because of some embedded assumptions about maximum 
records per track or cylinder that were violated on higher capacity 
devices.  When emulated DASD became the norm, freezing the device type 
at 3390 eliminated the need for non-productive DASD device-type 
migration activities, even if it did also preserve the peculiar 
block-per-track rules of the 3390.


Emulating the 3390 architecture does not necessarily require throwing 
away overhead bytes on physical DASD.  Some emulations (e.g. Iceberg 
RVA) not only compressed the logical data but didn't even store empty 
tracks.  This did however complicate things in other areas and create 
other performance issues, which is why cheaper physical DASD seems to 
have encouraged fixed mappings between logical tracks and physical DASD, 
even though that means track overhead bytes represent lost capacity.


Perhaps in light of the FBA architecture that typically underlies 
today's emulated 3390 DASD, it might make sense to consider a new 
emulated DASD device type that is somewhat similar to a 3390 but with no 
overhead bytes lost between emulated physical blocks.  This would 
perhaps be cleaner than a permanent 3390 solution, but it would have 
to be perceived to have enough benefit to justify the migration effort. 
 My guess is that we have stuck with the 3390 type for so long 
precisely because so far no one has been able to cost-justify such a 
change.


My personal long-term DASD ideal would be some kind of new DASD 
architecture that would require one to think only in terms of the 
logical data, not in terms of tracks, cylinders, or even FBA-device 
blocks; but this would be a massive change of many things in z/OS 
including the internal structure of a number of dataset types, also 
difficult to cost-justify.

  JC Ewing


On 02/07/2012 10:01 AM, Dana Mitchell wrote:

That leads to a question that I've been thinking about for some time.  Since 
the 3390 geometry is emulated by modern storage control units, why then are 
the inefficiencies of small blocks emulated also?  There are no SLEDs actually 
storing the data, so why are IBGs, sectors, and all the other CKD nastiness 
emulated that makes 80-byte blocks such a bad idea?  IOW, why can't the 
control unit simply store 708 * 80-byte blocks on a 56,664-byte 3390 track?   
Do z/OS's calculations take these inefficiencies into account and only write 
78 of these blocks per track?

Inquiring minds want to know

Dana


On Tue, 7 Feb 2012 16:28:33 +0100, R.S.r.skoru...@bremultibank.com.pl  wrote:


RAID has very little to do with half-track blocks.

Nowadays 3390's are emulated, usually on RAID protected disk arrays (*),
but the 3390 geometry remains unchanged from z/OS point of view.
So half-track blocks (**) are still the most effective in terms of
storage utilisation and performance.


...

--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: different tape media for ML2 copies in HSM

2012-01-27 Thread Joel C. Ewing

On 01/27/2012 10:42 AM, Judith Nelson wrote:

Hi Don,
I wonder if CopyCat works like that as well. I know it stacks many logical 
tapes to the 3590's, but I will have to check to see if it would update the 
MCDS as well.

Thank you,
Judith


...

Very seriously doubt it.  CopyCat has hooks into CA Tape Management 
systems, but I don't remember any hooks into dfhsm.  Trying to replicate 
dfhsm TAPEREPL functionality would take some serious dfhsm CDS record 
twiddling - not really sure I would want a non-IBM copy utility playing 
at that level.


--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: 3390s on SAN?

2012-01-26 Thread Joel C. Ewing

On 01/26/2012 09:52 AM, O'Brien, David W. (NIH/CIT) [C] wrote:

There is an internal proposal to carve several TB of dasd from one of our 
non-mainframe depts. And use it to replace our aging HDS DASD.

Question: How easy/difficult is this to accomplish?
We re-configured an array from 3390 mod-3s to mod 27/50s but the entire array 
needed to be cleared of data. I'm assuming the same will be true in this case.
I'm also assuming that the disks will need to be re-modeled (I am probably not 
using the correct terminology) to be mainframe compliant.
Are my assumptions correct?
Thank You,
Dave O'Brien
NIH Contractor



With nothing having been said about the existing type of non-mainframe 
SAN storage...
It's not just a question of raw DASD storage, but whether the existing 
SAN hardware has the smarts to drive mainframe ESCON or FICON channel 
interfaces and whether it is able to support the 3990 controller and 
3390 CKD disk device geometries and protocols, as these are all 
radically different from typical non-mainframe disk interfaces. 
Assuming that the hardware can support these requirements, if it was 
never originally configured with a mainframe in mind the odds are it 
will not have any mainframe channel interfaces installed, and at a 
minimum some hardware upgrade will be required for that.


I'm pretty sure IBM disk subsystems that support a mix of SAN and 
mainframe storage require each entire physical array to be allocated to 
only one of those functions.


--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: MCDS Dataset Help

2012-01-26 Thread Joel C. Ewing

On 01/26/2012 01:30 PM, George Rodriguez wrote:

I'm running the export/import process almost daily. It used to run once a
week.
George Rodriguez
Specialist II - IT Solutions
Application Support / Quality Assurance
PX - 47652
(561) 357-7652 (office)
(561) 707-3496 (mobile)
School District of Palm Beach County
3348 Forest Hill Blvd.
Room B-251
West Palm Beach, FL. 33406-5869
Florida's Only A-Rated Urban District For Seven Consecutive Years



On Thu, Jan 26, 2012 at 1:19 PM, Schwarz, Barry A
barry.a.schw...@boeing.com  wrote:


-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@bama.ua.edu] On
Behalf Of George Rodriguez
Sent: Thursday, January 26, 2012 8:18 AM
To: IBM-MAIN@bama.ua.edu
Subject: MCDS Dataset Help

Hi MVSListerv,

I'm confused about the following information, that's display from command
F
DFSMShsm,F CDS:

ARC0101I QUERY CONTROLDATASETS COMMAND STARTING ON
ARC0101I (CONT.) HOST=1
ARC0947I CDS SERIALIZATION TECHNIQUE IS RESERVE
ARC0148I MCDS TOTAL SPACE=648000 K-BYTES, CURRENTLY
ARC0148I (CONT.) ABOUT 81% FULL, WARNING THRESHOLD=80%, TOTAL
ARC0148I (CONT.) FREESPACE=49%, EA=NO, CANDIDATE VOLUMES=0
ARC0948I MCDS INDEX TOTAL SPACE=0010237 K-BYTES,
ARC0948I (CONT.) CURRENTLY ABOUT 025% FULL, WARNING THRESHOLD=080%,
ARC0948I (CONT.) CANDIDATE VOLUMES=0


snip


I guess what has me confused is the 81% full with the 49% freespace...
Makes no sense to me!


Read the description of the message.  The % full is based on the last used
RBA while the % free includes all the space after that point PLUS any
unused space before that point.  It is similar to the situation in a PDS
where new data will always be added at the end but there can be gas in the
interior.



Can someone tell me how to fix this problem?


What problem do you think exists?  Chapter 3 of the HSM Implementation and
Customization Guide has a section on monitoring the CDSs.  It tells you how
to reorganize them if you want to recover the unusable free space.


...






...
You don't necessarily have to reorganize as soon as the threshold 
warning occurs; it all depends on the growth rate of %full.  Check the %full 
immediately after a reorganize and then watch the growth pattern.  It will 
grow most rapidly the first day, then slow down as CA/CI splits build up 
in the most active parts of the MCDS.  If it slows down enough that you 
are still below 90% by the end of a week (there's nothing magic about 90% 
either if growth is slow enough), your choices are either to raise the 
threshold so it won't complain for a week, or to increase the size of the 
MCDS by a large enough ratio so that the inverse ratio applied to the 
%full after one week would put that value below your 80% threshold.
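
As an illustration (numbers invented, not taken from the display above): if a 
freshly reorganized MCDS reaches about 88% full after a week against an 80% 
warning threshold, increasing the total allocation by a factor of at least 
88/80 = 1.1, plus some fudge factor, should keep a typical week's growth 
below the threshold.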


As the size of the MCDS gets larger with time, odds are a smaller 
percentage of records will change in the course of a week and larger 
%full thresholds may be appropriate.  You can also track the MCDS %full 
after reorganize to get some idea of the actual long-term data growth.


I always had enough stuff to do without worrying about dfhsm CDS's, so 
my goal was to be able to ignore them except for once or twice a year: 
based on empirical data, size them so the initial unused space is 
adequate for the desired interval with a month or so of fudge factor, 
and then set the threshold limit to warn you when you reach your fudge 
factor.  And if you over-estimate size, you can just ignore them for a 
longer interval.


Just a nitpick about your VSAM cluster definitions in general (a minimal 
DEFINE sketch follows this list):
(1) explicitly specifying NAME for the DATA and INDEX components when 
you want the standard default .DATA .INDEX suffixes has been 
pointless for decades, just more stuff to mistype or forget to update;
(2) specifying SPACE for an INDEX component is redundant because 
IDCAMS should be able to calculate exactly what it needs based on the 
number of CA's in your data SPACE definition, and explicit values tend 
invariably to be gross overestimates (e.g., less than 5 cyls actually 
needed for your particular 900-CA file on a 3390); and
(3) changes to VSAM INDEX CISZ default calculation in the last decade 
made the rare occurrence where the default was too small into an 
extremely rare occurrence.  INDEX CISZ really shouldn't be specified any 
more unless you have a known case where the default has been proved too 
small and part of each CA is unusable, or some explicit requirement for 
a specific INDEX CISZ is built into an application (which seems 
unreasonable to me since no application code should be messing directly 
with KSDS INDEX CI's).
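
Put together, a stripped-down DEFINE along the lines of (1)-(3) might look 
something like this (names and numbers are illustrative only; take KEYS, 
RECORDSIZE, and the data CISIZE from the DFSMShsm implementation guide for 
your release):

   DEFINE CLUSTER (NAME(HSM.MCDS) -
                   INDEXED -
                   KEYS(44 0) -
                   RECORDSIZE(435 2040) -
                   CYLINDERS(900) -
                   VOLUMES(MIGVL1)) -
          DATA (CONTROLINTERVALSIZE(12288))

IDCAMS will generate the .DATA and .INDEX component names itself and will 
size the index component based on the data component definition.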


--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org

Re: CU Resources Exceeded

2012-01-25 Thread Joel C. Ewing

On 01/25/2012 12:26 PM, Mike Schwab wrote:

The error message appears on printed page 283 in
http://www.redbooks.ibm.com/redbooks/pdfs/sg246497.pdf
and some diagnostic options are displayed.

You might have something defined wrong in the FICON connectors, or
cables switched.

On Wed, Jan 25, 2012 at 11:44 AM, SUBSCRIBE IBM-MAIN Tom Trainor
thomas.j.trai...@exxonmobil.com  wrote:

After adding devices and two (2) LPARS to IOCDS on 2066, unable to IPL the last two(2) of ten (10) 
LPARS.  The last two (2) are NOT necessarily LPAR numbers 9 and A but are 
the last two (2) of the ten (10) that are activated.

Within Channel Problem Determination on the 2066 SE the message in the Analyze Serial Link 
Status is: CU Resources Exceeded - Init Failure - No Resources Available.  Has anyone seen 
this or know what the problem might be.





This almost sounds like you might be exceeding the maximum number of 
logical paths supported by your control unit.  Check the control unit 
documentation for its limit.  Each channel interface on the CU requires 
one logical path for each active LPAR sharing that physical path on a 
connected processor.  Only active LPARs play, so you would always see 
any problem shift to the last LPARs to be activated.


For example, If you have 16 physical paths from the processor to the 
control unit and share all those physical paths with each LPAR, then 
each LPAR you activate using those paths requires another 16 logical 
paths to the CU.  Once you reach the supported logical path max of the 
CU, no more paths can be configured on line to that LPAR or any other 
LPAR. 8x16 = 128, so if the limit happened to be 128, with 16 logical 
paths/LPAR none would be left after 8 LPARs were active.


If that's your problem, the only choice is to either configure fewer 
paths per LPAR so you can run more LPARs, or maybe upgrade to a CU that 
supports more logical paths, or change from ESCON to FICON so you don't 
need as many paths for the same data bandwidth.


I think you can release a logical path from an active LPAR by 
dynamically configuring the physical CHPID offline to that LPAR.  I 
doubt if varying the logical path offline is sufficient.


--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: Going from mod-3 to mod9

2012-01-24 Thread Joel C. Ewing

On 01/24/2012 07:28 AM, Dennis McCarthy wrote:

Hi Joel,

We do NOT have any PAV's. We are a pretty small shop. One production LPAR and 
one test (sandbox for me). The VSAM file in question is open to a single CICS 
region. Given that additional information, can I expect a negative impact on 
response time going to the MOD-9's?

Dennis


You have the potential for a negative impact, but it all depends on the 
related transaction rates and physical I/O rates in the CICS region. If 
you have RMF or some other measurement tool that will give average 
device busy on the current 3330-3's used by the file, you can get some 
idea in the extreme cases:  If average device busy is consistently below 
5%, then I would think odds are pretty good that the negative impact 
would be in acceptable ranges; if you frequently see some average device 
busy of 20% or higher on multiple drives I would consider the odds high 
for significant negative impact.  In between, things are less clear. RMF 
device measurement interval relative to arrival pattern of transactions 
may be an issue -- if transactions are typically clustered in a small 
part of the RMF measurement interval rather than uniformly distributed, 
you may need a smaller interval to see if device usage could be a problem.


If you have CICS measurement tools of some kind, you may be able to see 
other things of use, like number of logical I/Os against the file that 
don't even result in physical I/O because of in-memory buffers in CICS, 
and proportion of reads versus writes (which must eventually do a 
physical write).  With the major load coming from CICS, one of the 
possibilities if you have real memory to spare is that you may be able 
to compensate for any bottlenecks in physical I/O by throwing a large 
number of additional LSR buffers at the file to raise the odds of 
finding records in memory and reducing the number of logical reads that 
require a physical read.  Considering the size of the file, you 
obviously can't have a significant amount of the file in buffers, so the 
success of this strategy  would depend on there being some pattern of 
clustering in the way records are typically accessed.


So, if you're lucky, you may be in a zone where rule of thumb may 
suggest a Yea or Nay; otherwise, there may be no simple way to determine 
without taking more risk and trying it.


Assuming your device busy rates are in the grey zone, if real memory and 
adding LSR buffers is an option and I had what I considered really valid 
reasons for wanting to get to the 3390-9, I think I would try a 
temporary experiment to significantly increase LSR buffers and see if 
that significantly reduces I/O rate to the file.  (You wouldn't want to 
run in this mode too long on 3390-3's, because if it does help response, 
the end users could get used to it and expect it).  This way you could 
at least prove in advance whether an LSR buffer increase could be used 
to offset a negative impact if the migration to 3390-9 has more impact 
than acceptable -- and that might impact your decision on whether to 
attempt the migration.


That's the long answer.  The short answer is, as always, "it depends."


--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: DFHSM: Backup tape lost due to errors

2012-01-23 Thread Joel C. Ewing

On 01/23/2012 02:46 AM, R.S. wrote:

On 2012-01-22 at 05:21, Joel C. Ewing wrote:

I've said it before, but will say it again: modern tape media has such a
large capacity that a single dfhsm cartridge can contain an incredibly
large number of datasets. It is almost inevitable that loss of a single
dfhsm ML2 or Backup cartridge will impact something you care about or
can't afford to lose (or are legally required to retain), and it is also
inevitable that single cartridges will occasionally be physically
damaged. Over the long haul, you really can't afford NOT to duplex all
dfhsm carts (and for that matter, non-dfhsm carts that contain data you
can't afford to lose).


90% agreed, some remarks:
1. Cart capacity is not so relevant. The problem exists with any cart
capacity; the difference is in the likelihood, but the likelihood is quite
similar (same order of magnitude).
2. It is worth separating two cases:
a) Backup copy loss. It's more or less like the loss of a spare in a RAID
group: you did not lose your data, you lost redundancy. An additional event
must happen before the backup copy is needed for recovery. Of course there
could be some legal requirement to have such a copy, in which case the
requirement is broken.
b) Archive copy, ML2 (without backup). In that case the loss of a cartridge
simply means the loss of data.

I always vote for duplexing data on tapes, for the same reasons why we
use RAID protection on DASD. I mean real media; virtual
tapes, for example, are usually on RAID-protected disks.



At least based on our experience, I would disagree on point (1).  Over 
the long haul (after drives and cartridges begin to age) we saw about 
the same unreadable cartridge rate (inability to later read a cartridge 
that was written successfully) among 3490 and 3590 - more than one and 
under five losses per year (typically physical damage to cart or loss of 
data on cart from some drive failure).  The difference in capacity (from 
3490) was significantly more than an order of magnitude, so I believe 
our risk of significant data loss also went up.


The newer tape technology tends to be more reliable, with fewer physical 
volumes and fewer physical mounts overall, but this can also be offset by a much 
larger number of mounts of, and opportunities for physical risk to, each of 
the fewer media volumes that do contain active data.  Unquestionably, 
the potential data loss per cartridge loss (and the likelihood that a 
single incident is visible to management and end users) is directly 
proportional to the data capacity of a cartridge.


Tape media capacity over the last several decades has increased by over 
3 orders of magnitude.  Any installation that may have dismissed 
duplexing when on 3480 or earlier tape technology and hasn't since 
re-evaluated that decision should do so.  Today no one would (I hope) 
consider running z/OS production on DASD that wasn't covered for single 
media failure by RAID or mirroring.  Similar respect is needed today for 
data retained on real tape media.


--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: Going from mod-3 to mod9

2012-01-23 Thread Joel C. Ewing

On 01/23/2012 10:09 AM, Staller, Allan wrote:

Generally speaking...Yes

snip
IOW, the response time is more likely to increase than to decrease,
and increased response time is a Bad Thing
/snip


PAV, Cache, and RAID all have impacts that mitigate the 1:3 (or worse)
actuator to data ratio on mod 9 vs. 3 mod 3.
These impacts may not be part of the intended design.

As with most things, YMMV. Few large datasets are less impacted than
many small datasets.


...
With PAV, if you use PAVs to give the same total number of addresses to 
the mod 9 as were previously on the three mod-3's it replaces, you could 
conceivably even get better response from lower IOSQ time from the 
merging of available PAVs of three volumes into a single pool (unless 
you are already getting this benefit from HyperPAV support).


Without appropriately configured PAV, the result is an unknown. 
Depending on how heavy the concurrent loading of the original mod-3's, 
you could have anywhere from severe IOSQ delays and response problems to 
inconsequential IOSQ delays and minimal increase in response time.


While it may be easier to conceive of many concurrent users on a volume 
with many small datasets, large multi-volume sequential datasets can 
easily have high concurrent usage on different volumes from readers in 
multiple critical batch workloads; and large VSAM datasets, as in this 
case, could easily have a large amount of concurrent access on multiple 
volumes from concurrent on-line transactions and/or from multiple batch 
workloads.  In such cases, merging the independent loads from three 
drives to one without adequate PAV addresses could easily introduce 
significant to severe device busy usage and IOSQ delay times.  Without 
knowing something about the concurrent 3390-3 volume usage patterns, 
response time on 3390-9 without PAVs is a crapshoot even with only a 
single large VSAM dataset.

--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: Going from mod-3 to mod9

2012-01-23 Thread Joel C. Ewing

On 01/23/2012 01:21 PM, Bill Fairchild wrote:

In45e5f2f45d7878458ee5ca679697335502e25...@usdaexch01.kbm1.loc, on
01/23/2012
at 09:08 AM, Staller, Allanallan.stal...@kbmg.com  said:


From the viewpoint of the Operating System, you now
have 3 times as much data behind the actuator on Mod-9's as Mod-3's.
If the Operating system *thinks* the device is busy, the IO is queued
off the UCB and never even tried until it comes to head of queue.


You MIGHT have up to three times as much data behind the actuator.  That 
depends on how fully loaded the three mod-3s are which are to be merged onto 
the same single mod-9; i.e.,  it depends on which three mod-3s you choose to 
merge together.

If all data sets on all volumes are equally and randomly accessed, then you 
will have three times as much requirement to access the new mod-9 as any of the 
three mod-3s had which were merged.  However, most data centers have highly 
skewed access patterns.  80% of the actuators might have only 20% of the total 
I/O workload.  Which means your volumes are almost certainly NOT equally and 
randomly accessed.  You have some volumes that are almost never accessed and 
some others that are accessed all the time.

When z/OS starts an I/O on DASD device , z/OS turns on a flag bit in the 
UCB for that device that indicates that this particular z/OS image has started 
an I/O on that device.  But if the device is shared, then another z/OS image 
may have already started an I/O on the same device, turned that same device's 
UCB flag bit on in its copy of the UCB for the device (which might be device 
 on the other image), and not informed any of the other sharing z/OS images 
that it is now doing I/O on that shared device.  So when image A tests its 
private copy of the flag bit and finds it off, that does not necessarily mean 
that the device is unbusy.  Image A doesn't care, however.  It starts the I/O 
and turns the bit on.  If the shared control unit attached to this device is 
not an IBM 2105 SHARK (vintage ca. 2000), plug-compatible equivalent, or some 
successor technology, then image A's I/O will not really be started until image 
B's already started I/O ends.  This will show up on image A as a spike in 
device pending time, not in IOSQ time.  The 2105 and newer 
technology have the ability to let multiple I/O requests from multiple sharing 
systems run simultaneously against the same device as long as there is no 
conflict between any of the simultaneous I/Os involving both reads and writes 
for the same range of tracks.

The only way to know what will probably happen is to do I/O measurement on your current mod-3 
workload.  If you don't see much IOSQ time now, then you will see "not much" multiplied 
by three after merging.  How much is "not much" and/or "negligible" is up to you to decide.  You 
might also get an idea as to how to merge volumes together based on their individual IOSQ times; 
e.g., merge the one with the highest IOSQ time now with the two mod-3s that now have the lowest 
average IOSQ times.  After merging them, measure again for IOSQ time.  Only if you have 
excessive IOSQ time, where how much is "excessive" is up to you to decide, would you need 
to consider using PAV devices.

Currently z/OS's I/O Supervisor has no knowledge of the real RAID architecture 
backing the virtual SLED, so many of the classic performance- and space-related 
bottlenecks can theoretically still occur.

Bill Fairchild


Note the original question from Dennis McCarthy (Jan 20) was not an 
arbitrary 3390-3 to 3390-9 migration but specifically moving a VSAM file 
occupying 27 3390-3's to 10 3390-9's, so except for the last volume we ARE 
definitely talking about three times the data behind a logical volume, 
but the usage and activity rate of the dataset were not specified.


IOSQ time and related response time elongation is highly non-linear as 
device utilization approaches 100%.  You could see negligible IOSQ time 
on each of three 3390-3's running at 34% utilization become astronomical 
if you merge data from those to a single 3390-9 without PAV, trying to 
run a load that can't even be satisfied at 100% device busy.  Given PAVs 
and assuming there is enough cache and different physical drives and 
internal bandwidth on the EMC backing the logical volume, you can in 
effect exceed 100% logical volume busy (have average number of active 
I/Os to the volume exceed 1.00) and still get acceptable response.
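
To put a number on that non-linearity: a crude single-server (M/M/1) 
queueing sketch, which ignores cache, RAID striping, and PAV entirely and 
just assumes a fixed 5 ms average service time, already shows the blow-up 
(the figures are illustrative only, in Python):

# Minimal M/M/1 sketch: queue wait vs. device utilization.
# Assumes Poisson arrivals and a fixed 5 ms average service time;
# real DASD behavior (cache, PAV, RAID) will differ -- this only
# illustrates why response "blows up" as busy approaches 100%.
service_ms = 5.0

def avg_wait_ms(utilization):
    """Expected queue wait (IOSQ-like delay) for an M/M/1 server."""
    if utilization >= 1.0:
        return float("inf")   # demand exceeds capacity: queue grows without bound
    return service_ms * utilization / (1.0 - utilization)

# One mod-3 at 34% busy vs. three such loads merged onto one mod-9 address
for u in (0.34, 0.68, 0.90, 0.95, 0.99, 1.02):
    print(f"utilization {u:4.0%}: avg queue wait ~ {avg_wait_ms(u):6.1f} ms")

Three 34%-busy mod-3 loads merged onto one address come to roughly 102% 
busy -- a demand no amount of queueing can satisfy -- which is exactly the 
situation PAV aliases (multiple concurrently active I/Os per logical 
volume) are meant to relieve.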


--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: PDSE

2012-01-21 Thread Joel C. Ewing

On 01/21/2012 07:54 AM, Peter Relson wrote:

how does IBM suggest doing a compress on a Linklist lib that needs
compressing, inquiring minds would love to know


There is no suggestion. This is simply not an operation that is supported
or can be supported in general.

Peter Relson
z/OS Core Technology Design



So the only functionally-equivalent, officially-sanctioned way to 
accomplish this goal is still to

(1) create a new dataset with a different name and copy the data to it,
(2) modify PARMLIB LNKLST defs to replace the old library in linklist 
with the new at next IPL,

(3) IPL.
And if for some reason you really must have the original dataset name, 
repeat the process to get back to the old name.


All the other techniques that have been described here in the past to 
achieve this and bypass or defer the need for an IPL either don't 
guarantee the new library will be seen by all address spaces or carry 
some risk.  While those of us who have been around long enough are 
fairly certain of specific cases at our own installation where the risks 
of alternative methods are small enough and acceptable, it is 
understandable that IBM does not wish to endorse techniques whose 
success depends on SysProg competence and judgement and also in many 
cases upon the tacit cooperation of Murphy in keeping unrelated system 
failures from occurring in a narrow transition window during which 
libraries and PARMLIB might be in a state where successful IPL and 
recovery from system failure would not be possible (without an independent 
z/OS recovery system).


--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: PDSE

2012-01-21 Thread Joel C. Ewing

On 01/21/2012 10:25 AM, Paul Gilmartin wrote:

On Sat, 21 Jan 2012 10:02:25 -0600, Joel C. Ewing wrote:


So the only functionally-equivalent, officially-sanctioned way to
accomplish this goal is still to
(1) create a new dataset with a different name and copy the data to it,
(2) modify PARMLIB LNKLST defs to replace the old library in linklist
with the new at next IPL,
(3) IPL.
And if for some reason you really must have the original dataset name,
repeat the process to get back to the old name.


Can LINKLIST contain aliases?  If so:

(0) Place the alias name in PARMLIB LINKLIST defs;
 IDCAMS DEFINE ALIAS to the real data set name.
(1) create a new dataset with a different name and copy the data to it,
(2) IDCAMS DELETE ALIAS; DEFINE ALIAS to identify the new data set.
(3) LLA REFRESH to identify members in the new data set.

Why not?  (I know; I've been corrupted by UNIX symbolic links, and
imagine aliases are similar.)

-- gil


If an alias name is acceptable in linklist, that still wouldn't solve 
the problem.  Any system enqueues are always done on the real name at 
the time of initial allocation for linklist, and the physical location 
and size of the dataset becomes fixed to the linklist once the dataset 
is initially allocated and is not changed by a REFRESH.  If an active 
linklist dataset is  renamed, cataloged elsewhere, or even deleted, 
linklist still points to the same original DASD tracks.


The only way to physically change the location and/or size of a linklist 
library is to deallocate and reallocate it, which requires activating a 
new linklist definition and eliminating any usage of the prior active 
linklist.  The latter is the difficult part because it can only be done 
by forcing a linklist update on  all long running address spaces, some 
of which cannot be stopped/restarted without an IPL and others of which 
cannot be restarted without disruption to end users.  As has been 
discussed in the past on this list, forcing a linklist update on an 
arbitrary running address space is an unnatural act that involves 
risk, and could in the worst case cause z/OS system failure and force an 
IPL.


--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: DFHSM: Backup tape lost due to errors

2012-01-21 Thread Joel C. Ewing

On 01/21/2012 04:06 PM, retired mainframer wrote:

If the entire volume is unreadable, it would seem you need to use DELVOL to
tell SMS that everything that was supposed to be on that volume is lost.

If some of the files on the tape are usable, then FREEVOL may be a better
choice or possibly RECYCLE (either combined with the appropriate BDELETEs to
skip over the unreadable files).

AUDIT does not process the media.  It processes the various CDSs and
catalogs to ensure consistency but not necessarily accuracy.

As with all datasets, anything of which you have only one copy that is
unreadable means you have zero copies.

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@bama.ua.edu] On Behalf
Of af dc
Sent: Saturday, January 21, 2012 11:29 AM
To: IBM-MAIN@bama.ua.edu
Subject: DFHSM: Backup tape lost due to errors

Hello,
suppose you have an 3592-JB dfhsm cart (native, not a vts stacked
cart) that can't be read due to physical damage, no tape backup
duplication. Several backups are unique, no additional versions. How
to correct bcds ?? Run AUDIT DATASETCONTROLS (BACKUP) with NOFIX (to
check the amount corrections) and then with FIX ??? Z/Os V1.12.??


I've said it before, but will say it again:  modern tape media has such 
a large capacity that a single dfhsm cartridge can contain an incredibly 
large number of datasets.  It is almost inevitable that loss of a single 
dfhsm ML2 or Backup cartridge will impact something you care about or 
can't afford to lose (or are legally required to retain), and it is also 
inevitable that single cartridges will occasionally be physically 
damaged.  Over the long haul, you really can't afford NOT to duplex all 
dfhsm carts (and for that matter, non-dfhsm carts that contain data you 
can't afford to lose).


All it should take is the potential loss of one critical, irreplaceable 
dataset to justify to management the cost of the extra cartridges and 
tape drives required.  Tape duplexing in some form with off-site storage 
is also a requirement for any reasonable installation Disaster Recovery 
plan, so it ought to be cost-justified on those grounds alone, with a 
side benefit that DR duplex copies can also save your backside from 
single media disasters.  If management lacks the wisdom to support 
duplexing, file the recommendation away and bring it back out when the 
next inevitable media failure and massive data set loss occurs.


The duplex support in dfhsm makes duplexing ML2 and BACKUP carts 
trivially easy to do as long as you have enough drives.  Similar support 
should also be available in software/hardware solutions that stack 
multiple non-dfhsm virtual volumes on physical volumes (e.g., CA-VTape), 
where similar exposures exist and where duplexing in some form is also 
highly recommended.


--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: PDSE

2012-01-21 Thread Joel C. Ewing

On 01/21/2012 04:21 PM, retired mainframer wrote:

::-Original Message-
::From: IBM Mainframe Discussion List [mailto:IBM-MAIN@bama.ua.edu] On
::Behalf Of Joel C. Ewing
::Sent: Saturday, January 21, 2012 8:02 AM
::To: IBM-MAIN@bama.ua.edu
::Subject: Re: PDSE
::
::So the only functionally-equivalent, officially-sanctioned way to
::accomplish this goal is still to
::(1) create a new dataset with a different name and copy the data to it,
::(2) modify PARMLIB LNKLST defs to replace the old library in linklist
::with the new at next IPL,
::(3) IPL.
::And if for some reason you really must have the original dataset name,
::repeat the process to get back to the old name.

If you need the original DSN, after populating the new dataset, uncatalog
the original and rename the replacement (both actions not restricted by the
current enqueue).  This will eliminate the need for the second IPL.  And
after the IPL, you can rename the original dataset and delete it even though
the DSN is enqueued by LLA (assuming you have the correct RACF access).



This technique is only possible if it is acceptable for the replacement 
DSN to be on a different volume, AND the original DSN volume is not an 
SMS volume, but I too have used this in the past when system datasets 
were divided between two non-SMS volumes and volume residency didn't 
matter.


 One must also recognize that there is a slight risk here: that between 
the uncatalog step and the final rename there is a window during 
which the system is in a state where a successful IPL may not be 
possible should the system crash; so to not tempt fate, you don't want 
to do this during a storm when the UPS is down or elongate the window by 
allowing yourself to be interrupted in the middle of the process -- and 
having an alternative recovery system available for the unlikely 
worst-case scenario is goodness.  I was never hit by a system failure in 
the middle of one of these sequences, but I am convinced it is only 
because I was sufficiently paranoid and Murphy knew I had recovery 
alternatives at hand.:)


If the object is to find, if possible, a procedure which passes through 
no window of risk where a system outage could leave you unable to IPL, 
then to preserve the original DS name I think you are stuck with two 
IPL's --  an unpleasant enough prospect that most SysProgs quickly learn 
alternatives (like the one above) that accept some tolerable level of risk.

--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: Catching a Phantom

2012-01-20 Thread Joel C. Ewing

On 01/20/2012 05:50 AM, Jan MOEYERSONS wrote:

On Thu, 19 Jan 2012 10:21:01 -0600, Chase, John <jch...@ussco.com> wrote:



Occasionally, that MQ-draining transaction will be spawned nearly 1,000
times per second, and apparently do nothing:  No I'm starting
message, no I'm finished message, no ABEND, no nothing.  SMF records
for these phantom transactions, formatted by CA-Sysview, show that
each instance starts the program, which then issues two MQOPENs (not
back-to-back), two MQCLOSEs (again, not back-to-back), one MQINQ, one
MQGET and one SQL SELECT, and then it ends without comment.  Average
task lifetime for these phantoms is around 8 milliseconds.


I think this may be related to the fact that an MQ triggering message is made 
available to the initiation queue at the very moment the sending application 
puts an MQ message on the queue, before this application message is committed. 
Depending on how long it takes the batch program to commit its unit of work, 
this may well result in the draining transaction firing up, finding its input 
queue empty and ending again. Because there are still messages on the queue 
(not yet available for draining, but they are there...) MQ then immediately 
re-triggers the same. Over and over and over... until a message is actually 
committed and thus becoming available to the drainer.

Simple solution to this problem is to have the draining transaction do its 
first MQGET with a wait of say 250 to 1000 ms.

Cheers,

Jantje.




This sounds like a plausible explanation, but if true, isn't this a 
design flaw in MQ that needs fixing?  It makes zero sense for MQ to in 
effect create a busy waiting loop that uses resources by having a 
triggering mechanism repeatedly trigger a transaction for available 
message before the message can actually be obtained, for reasons that 
the observed transaction loop should make obvious.  The resources 
unnecessarily consumed by such ghost receiver transactions could even 
potentially delay the sender from reaching the point of message commit 
in a resource constrained environment.


If this design was a deliberate one to potentially allow faster response 
by allowing a long receiver-transaction start up time to overlap with 
a short sender time-to-commit, then at most MQ should trigger when the 
message first enters, and if the corresponding MQGET arrives prior to 
the commit and gets nothing (which obviously indicates a case where 
time-to-commit is not short, relatively speaking), trigger once more 
only after the commit has occurred and the message can actually be 
retrieved.


The suggested wait circumvention requires an application programmer to 
be aware there is a problem that needs fixing.  It would be better for 
MQ design to prevent the problem in the first place.
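
For reference, the get-with-wait circumvention Jantje suggests amounts to 
issuing the first MQGET with MQGMO_WAIT and a WaitInterval of, say, 1000 
ms instead of an immediate destructive get.  A rough sketch of the idea 
using the Python pymqi wrapper (the queue manager, channel, and queue 
names here are invented for illustration; a triggered CICS transaction 
would do the equivalent through the native MQI):

import pymqi

QMGR, QUEUE, CHANNEL, CONN = "QM1", "APP.INPUT.QUEUE", "SVR.CONN", "host(1414)"

qmgr = pymqi.connect(QMGR, CHANNEL, CONN)
queue = pymqi.Queue(qmgr, QUEUE)

gmo = pymqi.GMO()
gmo.Options = pymqi.CMQC.MQGMO_WAIT | pymqi.CMQC.MQGMO_FAIL_IF_QUIESCING
gmo.WaitInterval = 1000   # wait up to 1000 ms for a committed message

try:
    # Blocks briefly instead of returning "no message" while the sender's
    # unit of work is still uncommitted -- avoiding the re-trigger loop.
    msg = queue.get(None, pymqi.MD(), gmo)
    print("got:", msg)
except pymqi.MQMIError as e:
    if e.reason == pymqi.CMQC.MQRC_NO_MSG_AVAILABLE:
        print("queue drained; ending")
    else:
        raise
finally:
    queue.close()
    qmgr.disconnect()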


--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: PDSE

2012-01-20 Thread Joel C. Ewing

On 01/20/2012 08:54 AM, Paul Gilmartin wrote:

On Thu, 19 Jan 2012 17:37:12 -0600, Joel C. Ewing  wrote:


On 01/19/2012 02:55 PM, Paul Gilmartin wrote:

On Thu, 19 Jan 2012 11:09:27 -0800, Schwarz, Barry A wrote:

Someone might be failing to issue a DISConnect or RELEASE.

(It's not at all clear to me why this problem shouldn't occur similarly
with a classic PDS.)


...
How could this be an issue with the classic PDS?  Surely the connection
to members concept doesn't exist for a traditional PDS, as old member
data can only be eliminated by completely reorganizing, compressing the
PDS.  With no re-use of deleted-member space, there is no need for a
special mechanism to prevent over-write of a deleted member while some
other task is still reading the now-deleted version of the member.


The issue reported by the OP wasn't that a member was
overwritten; the issue was that the data set ran out of space.


Not really.  The context of this problem in "It's not clear to me why 
this problem..." was a discussion of member disconnect or release, and 
the original problem that started the thread was more along the lines of 
"why isn't deleted space for a LNKLST PDSE library being freed and 
available for reuse, even after an LLA REFRESH, as one would expect", 
rather than "why is the data set out of space" (which would have been 
obvious for a PDS that hadn't been compressed).

   JC Ewing


Conceptually, any user who has done a BLDL and saved the TTR
has a connection to the member.  PDSE merely formalized the
process.

-- gil


...

--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: PDSE

2012-01-19 Thread Joel C. Ewing
 tables have been closed.  ISPF users who were no longer in 
the application might still have had hanging references to the 
application PDSE library - but I would not have expected this to tie up 
connections to specific table members that were no longer in use, or to 
have the major impact on space reclaim that it had.  It's almost as if 
some PDSE member connection issues never get fully resolved until the 
applications are also forced to de-allocate the PDSE - which is 
certainly not what I would have expected from my reading of the quoted 
DFSMS manual.


--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: PDSE

2012-01-19 Thread Joel C. Ewing

On 01/19/2012 02:55 PM, Paul Gilmartin wrote:

On Thu, 19 Jan 2012 11:09:27 -0800, Schwarz, Barry A wrote:


IEBCOPY compress generates an error message for a PDSE.


Become familiar with:

Title:  z/OS V1R12 DFSMS Using Data Sets
Document Number: SC26-7410-10

 http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/dgt2d490/3.8.7.1

  3.8.7.1 Establishing Connections to Members

Someone might be failing to issue a DISConnect or RELEASE.

(It's not at all clear to me why this problem shouldn't occur similarly
with a classic PDS.)

-- gil


...
How could this be an issue with the classic PDS?  Surely the connection 
to members concept doesn't exist for a traditional PDS, as old member 
data can only be eliminated by completely reorganizing, compressing the 
PDS.  With no re-use of deleted-member space, there is no need for a 
special mechanism to prevent over-write of a deleted member while some 
other task is still reading the now-deleted version of the member.


The referenced 3.8.7.1 specifically applies to PDSE libraries; but it 
also implies that if LLA REFRESH is not sufficient to DISConnect or 
RELEASE all the old member connects on a PDSE in the LINKLIST and allow 
the deleted PDSE space to be re-used (don't know for sure that this is 
the case, but Juergen's experience that started this thread at least 
raises that as one possibility), then LLA REFRESH would seem to be 
failing to do something it really ought to be doing.


Someone surely is in a position to set up a simple test case, or may 
have already tested this:  A PDSE library in LINKLIST/LLA which no one 
is actually using, with only a few members and minimal free space; delete all 
members; check whether there is enough free space to re-create the 
members; if not, do an LLA REFRESH on the library; re-check whether 
there is now free space for the new members.  If the deleted space is 
never freed, I would say we have a problem that needs fixing.


I gather from past warnings on ibm-main about forcing dynamic LINKLIST 
updates on running address spaces that there are some cases where only a 
partial load of a PDSE program object member has been done and a 
potential of additional future loading activity still exists.  If z/OS 
is smart enough to recognize this situation, then one would want it to 
preserve a PDSE member connection in such a case and continue to make 
the old member version available to the old A/S even across an LLA 
REFRESH; but I would also suspect this technique is unlikely to be in use 
on a typical Installation library added to LINKLIST.


--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: ITIL Mainframe Terminology

2012-01-12 Thread Joel C. Ewing
I Googled ITIL and found http://www.itil-officialsite.com, looked at 
several of the introductory documents, and found them very heavy on 
MBA-like generalizations and abstractions.  If you didn't know anything 
about corporate Information Technology to start with, you could read one 
58-page intro and still not have the vaguest idea that the main object 
of IT is to solve corporate business problems by 
conceptualizing/designing/acquiring/maintaining computer application 
programs and selecting/acquiring/maintaining/operating the necessary 
computer hardware on which to run those applications.  The documentation 
I saw seemed oblivious to the fact that the basic management issues in 
corporate IT deal with computer hardware and computer applications, and 
instead just talked about managing "resources".  I wasn't terribly impressed.


There have been times when teacher training in this country spent 
entirely too much time with Education courses learning how to teach 
and not enough time mastering the subjects they were supposed to teach. 
 Corporations run into trouble when they have too many MBA managers who 
think they can manage the manufacture of widgets without understanding 
widgets.  What little I have seen of ITIL so far reminds me of those 
approaches applied to IT.  Just calling a mainframe a server or a 
resource encourages a management mindset that doesn't take into proper 
account its unique qualities that continue to distinguish it from lesser 
platforms and continue to justify its existence.


I doubt there are any "pure mainframe" organizations in today's 
corporate IT world; rather, there is a Data Processing, Information 
Technology, Information Services, or similarly named corporate 
division, which may maintain a mainframe as part of its much larger 
corporate computer hardware inventory.  A large corporate IT division 
will typically be subdivided into many functional subdivisions which 
include in some fashion Application Development and Maintenance, 
Workstation support, End-User IT Help Desk, Technical Support, 
Operations, Production Control, etc.  Many of these functional sub-areas must deal 
with or manage across multiple platforms, not just mainframe or 
non-mainframe.


IBM mainframes have also traditionally been called Processing Systems 
(the box(es) that house the central processing elements, processor 
memory, and I/O channels, and related control), to distinguish them from 
the separate box(es) that house DASD Disk Storage Subsystems, Tape 
Subsystems, etc., all of which may also be collectively thought of as 
the mainframe; but that term (mainframe) is less used by those closest 
to actual management of one.  Specific mainframe systems are much more 
frequently referred to by their model family or type, as in IBM z9, or 
IBM z10 rather than by some useless generic IBM Marketing name like 
Enterprise Server.  The model family at least tells you the general 
functional capability, but little about the specific processing capacity 
(and cost), which has a wide variation even within the same processor 
family.


IBM Mainframes are probably more frequently known to Operations and 
Applications Development personnel by the Operating System that runs on 
the box (e.g., z/OS) rather than the hardware name; because this, rather 
than the hardware, most affects what interfaces those people see and 
must use on a daily basis.


End users (and non-IT managers) typically only see interfaces provided 
by Application systems, so in many cases they may not even be directly 
aware of the underlying Operating System or hardware platform used by 
those applications.

Joel C Ewing


On 01/11/2012 03:16 PM, Henrichon, Ryan wrote:

A common phrase used in ITIL to refer to mainframes in my shop is
Enterprise Server. However the term Mainframe gets used more than
Enterprise Server does by techies.

What is true about any best practices or new process that a company
uses; it is only as good as how involved the employee's are that are
using it.

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@bama.ua.edu] On
Behalf Of Bill Fairchild
Sent: Wednesday, January 11, 2012 3:08 PM
To: IBM-MAIN@bama.ua.edu
Subject: Re: ITIL Mainframe Terminology

This reminds me of ISO 9000 about 20 years ago.

Bill Fairchild

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@bama.ua.edu] On
Behalf Of Jonathan Goossen
Sent: Wednesday, January 11, 2012 1:29 PM
To: IBM-MAIN@bama.ua.edu
Subject: Re: ITIL Mainframe Terminology

Peter,
You are correct in what ITIL stands for. The British started it. It
migrated to the US when companies wanted to cut costs. Several years ago
I was required to go through training and passed my certification for
the first level.

ITIL is a collection of best practices for running a company's IT. It
deals with processes and is equipment independent. ITIL doesn't have
terminology for mainframes.

Thank you and have a Terrific day!

Jonathan

Re: Clarification on Sysres Volumes(RMF)

2012-01-03 Thread Joel C. Ewing
I believe JES2 uses physical reserves on the entire volume when accessing 
HASPCKPT datasets and, depending on your JES PARMLIB member, may hold the 
reserve for a significant time -- not a good dataset type to put on a SYSRES 
shared with other systems.  Others may be able to suggest JES PARMLIB 
changes to shorten the hold time, but I would move all HASPCKPT datasets 
to volumes that have minimal shared access with another system, or at the 
very least off volumes like SYSRES that contain many 
performance-critical datasets needed by all systems sharing that SYSRES.


I suspect the usual z/OS manual recommendations are to put HASPCKPT on a 
volume by itself, but a sufficiently quiet volume is good enough.

   JC Ewing

On 01/03/2012 02:13 AM, jagadishan perumal wrote:

Hi Lizette,

Apology for not being precise.

Datasets found in SYSRES are :

are some BCP related datasets , assembler datasets,language environment
datasets,IBM book manager datasets First failure Support technology(FFST)
datasets are found.

During this situation when I do /D GRS,C it doesn't really produce any
information about Volume or Dataset Contention. As per Mark's advice I did
TSO RMFMON and Found some SYS1.HASCKPT datasets used by JES2, some User Job
using their Assigned volumes.

Jags

On Tue, Jan 3, 2012 at 1:21 PM, Lizette Koehler <stars...@mindspring.com> wrote:


Hi All,

One thing I have noticed from the report is that most of the user ID are

the reason for

volume contentions.

Name      Reason          Critical val.   Possible cause or action
DOV0053   DEV - Z18RS1    93.0 % delay    May be reserved by another system.
DOV0061   DEV - Z18RS1    92.0 % delay    May be reserved by another system.
DOV0065   DEV - Z18RS1    91.0 % delay    May be reserved by another system.
Is it something I need to look into user's Proc ? Could anyone please

throw some light

on this.

Jags


Jags,

Have you tried to see what datasets are open and enq'd during this time?
What research have you done?  What datasets are on your SYSRES volume?  You
have not answered these questions.

Have you issued a D GRS,C to see if anything is obvious?

Lizette




--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: IBM Manuals

2011-12-31 Thread Joel C. Ewing

On 12/30/2011 10:30 AM, Shmuel Metz (Seymour J.) wrote:

In <4efdc973.3050...@acm.org>, on 12/30/2011
at 08:23 AM, Joel C. Ewing <jcew...@acm.org> said:


First thing I do with new equipment/appliance manuals is either find
and  save an on-line pdf version


Why? I know that in some cases PDF is all that's available, but
certainly not in all.



I want a form that preserves for my personal use, and has the ability to 
recreate if necessary, the original multi-page documents for human 
viewing.  PDF was designed with precisely that in mind and does it very 
well.  I want a format with a proven track record of continued support 
over an extended time period on multiple hardware platforms and 
operating systems.  The PDF specs are openly available, free PDF readers 
are available for multiple environments from multiple independent 
sources.  Ditto for free print-to-PDF converters, and direct PDF 
creation support is now built into many applications as well.


There are other formats that are useful or even better in specific 
environments, but nothing at this point is as ubiquitous as PDF, and I 
have no idea what systems I may be running ten years from now.


If you can get a text-based PDF document from the original source, that 
would certainly be preferable, as that allows text searching capability. 
But, if all you have is a hard copy, none of the current 
freely-available OCR tools come close to preserving the original 
document as accurately as image-based PDF, unless you have the time for 
extensive manual editing.  Bitsavers.org uses a modified archive 
approach that uses higher resolution to allow possible future OCR; but 
compensates for higher resolution by using black/white threshold images 
that sacrifice quality of embedded document illustrations.  I prefer to 
go with lower resolution adequate for human reading and preserve gray 
scale, and even color, where its use is significant.


--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: IBM Manuals

2011-12-30 Thread Joel C. Ewing
First thing I do with new equipment/appliance manuals is either find and 
save an on-line pdf version or scan them in as a pdf image document.  By 
keeping all such in digital form, I can either find them later if needed 
as reference or make them large enough to read easily.


My wife's iPad2 booklet this December was so small I had to scan at 300 ppi 
instead of the usual 150 ppi to get decently formed characters, and I could 
barely read the original hard copy even with my reading glasses!


Guess we should be thankful IBM chose bookmanager and pdf to save trees 
rather than reducing z/OS manuals to 3x5 and microprint.

  JC Ewing

On 12/29/2011 11:38 PM, Ed Gould wrote:

Scott:

Last year I got an brand new IPAD. The installation instructions were on
a 3 X 5 in a 4 page booklet. The font size was 4 and I could not read
it to save my life. I had to get a friend to come over to read them so I
could do the install.
Bah humbug so much APPLE being user friendly.

Ed


...

--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: removing nulls from a file

2011-12-29 Thread Joel C. Ewing

On 12/29/2011 09:19 AM, Brad Wissink wrote:

We have a client that is trying to transfer us a file and it is loaded with 
nulls.  They say that is the way it comes from the purchased software they have 
on their workstation.  The file has a null character inserted after every 
character so it looks like this

1 12/29/2011   becomes  F1004000F100F2006100F200F9006100F200F000F100F100...

Has anyone seen anything like this before?  Is there a quick and easy way to 
remove all the nulls?



It's almost as if they are using software that somehow mistakenly thinks 
EBCDIC is a two-byte character encoding, or the process for conversion 
to EBCDIC was written by a neophyte programmer who treated each 
character as a null-terminated string and some how managed to include 
the null termination for each character when outputting the converted 
data.  Maybe they have their software package mis-configured in some 
way, or since they are talking about workstation software producing the 
data, the software package generating the data may be ignorant of EBCDIC 
and the fault may be in what technique they are using after-the-fact to 
either convert the data to EBCDIC or transmit the data with conversion.


If at all possible, try to get clarification of exactly what software 
and options they are using to produce the data and exactly what 
technique and options they are using to transmit the data.  If possible, 
get them to transmit via other methods (Email, FTP, etc.) samples of any 
intermediate files as binary data so you can see exactly what data 
format they are really dealing with on their workstation (as opposed to 
what the client may think he has).  If they are really transmitting 
twice the number of needed bytes, fixing the problem at their end would 
be the better solution.  Perhaps given enough background on what they 
are doing, a solution would become obvious, or you would be able to 
search on-line for a solution if the client lacks the technical 
expertise to do so on their own.


If everything is kosher on their end and the byte doubling is somehow 
occurring just on your receiving system, then hopefully you will have 
enough to be able to recreate and fix the problem on your end.  Just 
cleaning up bad data after the fact will work, but is not as desirable 
as eliminating the creation of the bad data in the first place.
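
For completeness, the after-the-fact cleanup itself is trivial once the 
data can be processed as raw bytes -- a minimal Python sketch (file names 
invented; it assumes, as the sample record suggests, that every X'00' in 
the file is one of the spurious interleaved bytes and not legitimate data):

# Strip the interleaved X'00' bytes from the transferred file.
# The sample record F1 00 40 00 F1 00 ... is EBCDIC "1 12/29/2011"
# with a null after every character, so dropping the nulls restores
# the intended single-byte EBCDIC data.
with open("client_file.bin", "rb") as f:        # hypothetical input name
    data = f.read()

cleaned = data.replace(b"\x00", b"")

with open("client_file_clean.bin", "wb") as f:  # hypothetical output name
    f.write(cleaned)

print(f"removed {len(data) - len(cleaned)} null bytes")

Fixing the export or transfer options at the client end, as suggested 
above, is still the better long-term answer; a snippet like this only 
patches the symptom.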


--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: Question on adding an SVC routine dynamically to a running system

2011-12-29 Thread Joel C. Ewing

On 12/29/2011 11:06 AM, Dave Day wrote:

Answering my own question, just for the record. I just tried it, and
there is no need to actually have the module in the LPA chain. SVCUPDTE
does indeed update the SVC table, and the SVC can be executed with no
problems.

--Dave Day
- Original Message - From: Dave Day david...@consolidated.net
Newsgroups: bit.listserv.ibm-main
To: IBM-MAIN@bama.ua.edu
Sent: Thursday, December 29, 2011 9:00 AM
Subject: Question on adding an SVC routine dynamically to a running system


The auth services guide in the chapter on user SVCs, states a type 3
needs to be in LPA. Yet the SVCUPDATE will accept, and update the svc
table, an address in ECSA. Do I need to add the routine to LPA using
CSVDYLPA prior to SVCUPDATE?

--Dave Day


If the intent was a permanent change, just be sure you DO get the 
appropriate parmlib and/or module location changes made before the next 
IPL, so you don't get surprised by unexpectedly losing your updated SVC at the next 
IPL.  I always tried to do both updates at about the same time under the 
assumption that Murphy might arrange for unscheduled IPLs at his 
convenience, not mine.


--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: cpu / machine identification

2011-12-27 Thread Joel C. Ewing
 the above.

Now, this also meant that there were folks carrying beepers and temp
keys, so they could do that after-hours support.

Are you prepared to deal with all this? Is it worth it?

As you can tell, I'm not a fan of such mechanisms. But it's not my
decision (doh), so I'm trying to help :-)
--
zMan -- I've got a mainframe and I'm not afraid to use it




--
zMan -- I've got a mainframe and I'm not afraid to use it

...


--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: cpu / machine identification

2011-12-27 Thread Joel C. Ewing

On 12/27/2011 07:55 AM, Mark Zelden wrote:

On Mon, 26 Dec 2011 16:11:02 -0500, zMan <zedgarhoo...@gmail.com> wrote:



OK, I gotta ask -- what's the problem you're trying to solve? You
don't trust your customers? In over a quarter century in the mainframe
software business, I've come across ONE customer running software on
an unlicensed box, and it was an oversight -- and a nice full-price
bluebird for the sales rep. I don't believe CPUIDs are worth the
hassle.



Obviously the point of view of someone who doesn't make a living by
selling their software.

I can tell you from personal experience (one of my clients) who I helped
write CPU protection for about 10 years ago that there were many instances
of unauthorized use and in at least one case I know about the abuse was
rampant.   I know a lot of the unauthorized use wasn't intentional, but a
lot of it was also or shops just didn't care since there was no checking.
Some of the companies that used this software outsourced their IT, and
ended up using it on different machines than those that were licensed.
Or the outsourcer copied it to other machines / environments / clients.
My client must have lost hundreds of thousands of dollars in
licensing / maintenance fees and fees from related litigation.

In my own personal experience as a sysprog, I know some of the
same things have happened unintentionally.   Consolidations, moving
LPARs around, creating / cloning new LPARs can lead to this and
when the software doesn't check it's easy for the techies to
make mistakes since they often (usually?) don't know the T's and
C's of all the software contracts.

So even though it can be a pain, I actually prefer that any vendor that
cares, checks the CPU id for authorization.   If the software has a site
license option, then have a method for a non-cpu specific key to generate
for the client.   Provide an easy way to change the key and a grace
period that won't put the shop's business in jeopardy because of a
missing / wrong key after a CPU upgrade or engine add.

Regards,

Mark
--
Mark Zelden - Zelden Consulting Services - z/OS, OS/390 and MVS
mailto:m...@mzelden.com
Mark's MVS Utilities: http://www.mzelden.com/mvsutil.html
Systems Programming expert at http://expertanswercenter.techtarget.com/



Although most of my career was spent with configurations with no more 
than two independent mainframe systems, I can appreciate the additional 
confusion of trying to track inconsistent vendor license terms in a much 
larger environment and how that would raise the probability of 
unintended violations.


Thankfully I've never had direct experience with a company that was in 
process of outsourcing their mainframe operations, so I hadn't 
considered that aspect.  Obviously a company with no reservations about 
shafting its own IT department for short-term profit would probably have 
even less compunction about doing the same to its former software vendors.


--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: Imagine having to deal with THIS in production

2011-12-21 Thread Joel C. Ewing

On 12/21/2011 02:02 AM, Shmuel Metz (Seymour J.) wrote:

In <capd5f5qckjjmp_gthv5isttfrkeovnnwkmjgqkadlfbpqpu...@mail.gmail.com>,
on 12/20/2011
at 09:46 PM, John Gilmore <johnwgilmore0...@gmail.com> said:


Others are observing this sky too, and no cleric wishes to be made
a figure of fun by denying what is obvious to many others.


Shirley biblical literalism is an exception to that. I can't speak to
the Christian scriptures, but the Tanakh is absolutely laden with
obvious metaphor.



And since the 2/3's of the Christian scripture that is the Old Testament 
is essentially the Tanakh, the same is true of it.  That doesn't seem to 
stop a majority of fundamentalist Christians from taking some of the 
obvious metaphors, like the multiple, conflicting creation stories in 
Genesis, literally.  In the Christian New Testament the parables of 
Jesus are also obvious metaphors and essentially the entire book of 
Revelations as well, which again doesn't keep many fundamentalist 
Christians from trying to apply the later literally to our times rather 
than the Roman context in which it was written -- with the result that 
several times a decade in this country some religious nut will use his 
inspired interpretation of Revelations to confidently predict the 
exact date of the end of the world, attract a small following, and then 
vanish into richly-deserved obscurity when the date passes without incident.


--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: Imagine dealing with THIS in production

2011-12-19 Thread Joel C. Ewing

On 12/19/2011 11:53 AM, Mike Schwab wrote:

How about the Julian Day as used by astronomers?

http://en.wikipedia.org/wiki/Julian_day
Julian day is used in the Julian date (JD) system of time measurement
for scientific use by the astronomy community, presenting the interval
of time in days and fractions of a day since January 1, 4713 BC
Greenwich noon. Julian date is recommended for astronomical use by the
International Astronomical Union.

Almost 2.5 million Julian days have elapsed since the initial epoch.
JDN 2,400,000 was November 16, 1858. JD 2,500,000.0 will occur on
August 31, 2132 at noon UT.  (Often the leading 2.4 million is assumed
and the low order 5 digits are used.)

Time is expressed as a fraction of a day.  0.1 day = 2.4 hours, 0.01 =
14.4 minutes.
0.001 = 1.44 minutes, 0.00001 = 0.864 seconds. x.000 is Noon UT 1200Z

Modified Julian Date subtracts 0.5 so x.000 is Midnight UT 0000Z.

On Mon, Dec 19, 2011 at 10:01 AM, Paul Gilmartin <paulgboul...@aim.com> wrote:

On Mon, 19 Dec 2011 08:44:00 -0600, Chase, John wrote:


Perhaps the world's eventual conversion to Star Date (or similar) will
be less confusing and disruptive  :-)


Ummm... NVFL.  See:

http://en.wikipedia.org/wiki/Stardate

-- gil


Usage of Julian Day will never catch on with non-astronomers for one 
simple reason not yet mentioned - its formal definition requires the 
date change to occur at noon, not midnight.  That makes much sense for 
astronomers that work through the night and sleep during the day, but is 
a terrible fit for people and businesses that have to deal with normal 
work hours and who would never tolerate the same period of daylight 
being called by two different dates.
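
That said, for anyone who does need JD for machine-to-machine purposes, 
the conversion is simple arithmetic; a minimal Python sketch, using the 
fixed reference point that JD 2440587.5 fell at 1970-01-01 00:00 UT:

from datetime import date

def julian_day(d: date, fraction_of_day: float = 0.5) -> float:
    """Julian Date for the given proleptic-Gregorian calendar date.

    fraction_of_day is measured from midnight UT, so the default 0.5
    gives the JD at noon, where the Julian day number officially rolls over.
    """
    # date.toordinal(): days since 0001-01-01 (= ordinal 1).
    # Offset chosen so that JD 2440587.5 corresponds to 1970-01-01 00:00 UT.
    jd_at_midnight = d.toordinal() + 1721424.5
    return jd_at_midnight + fraction_of_day

print(julian_day(date(1858, 11, 16)))   # 2400000.0 (noon, 16 Nov 1858, per the post)
print(julian_day(date(2132, 8, 31)))    # 2500000.0

(The Modified Julian Date mentioned above is just this value minus 
2400000.5.)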


--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: Imagine having to deal with THIS in production

2011-12-19 Thread Joel C. Ewing

On 12/19/2011 05:39 PM, John Gilmore wrote:

Joel C. Ewing writes (of Julian days):

| That makes much sense for astronomers that work through
|  the night and sleep during the day, but is a terrible fit for people
| and businesses that have to deal with normal work hours and
| who would never tolerate the same period of daylight being
| called by two different dates

What people find tolerable is a function of their experience.  When my
wife and I lived in Iran we rapidly came to terms with the convention
that the day ends at sundown and even, with only a little more
difficulty, with idea that a dinner invitation for Tuesday night was
an invitation to have dinner following sundown on Monday.

However that may be, this objection has another, much more important
defect.  It confounds internal representations for machines with
external representations for people, which need to be interconvertible
but should seldom--I had almost written never--be the same.



John,
If you had read all of the included previous thread context in my 
previous response, the context was the world's eventual conversion to 
some date format, not a discussion limited to internal date usage by 
machines.  I would say that makes the tolerance of people for the date 
format highly relevant and an asset to the objection, not a defect.


There are of course other strong arguments against universal usage of JD 
for dates any time in our lifetime.  As long as we remain an 
Earth-centric and not a space-centric culture, that makes it unlikely 
most people would favor an ordinal-based standard date format like Star 
Date or Julian Day which has no obvious relationship to Earth's 
annual seasons, when awareness of those seasons is so important to our 
physical comfort and survival.


--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: One Less Mainframe Shop

2011-12-17 Thread Joel C. Ewing

On 12/16/2011 04:06 PM, DKM wrote:

Just over seven years ago, I was hired as the Financial System Administrator at
my place of employment.  In my first interview, I was told how they were
getting ready to pick a new ERP and get off their “archaic” mainframe.  After I
was hired, the IT director at the time told me with glee how they would be
shutting down the mainframe in six months.  This shocked me a bit, since it was going
to take at least a year to go live with the new ERP solution.

It turned out maintenance on the 20-year-old software was going to end in six
months.  The mainframe was actually scheduled for shutdown six months after we
went live on the new software and platform.  Well we did go live on the new ERP
within a year, but the mainframe at one time had run the entire business of the
company and while the financial suite was the last large part to go off it,
there were still several “smaller” but just as important systems still running
on it.

Consequently, it took seven years, and two other IT directors, before access to
the now 11-year-old System/390 was finally cut this week.  At some point after
the New Year, a ceremony is being planned to let the Chairman flip the final
switch to turn off the system.  He has been a “Champion of Modernization” to get
us off the mainframe for almost 10 years.  I’m sure speeches will be made about
how far we have come.  Yet, as I look around at the countless servers, real and
virtual, and think about the major software platforms hosted by outside vendors,
all to replace the one S/390 that was divided in to four virtual systems I can’t
help but wonder if we are really better off.

DKM

Since this sounds like management by ideology and 1990's airplane mags, 
I don't suppose they were honest enough to compare their current total 
IT costs now with their prior mainframe IT costs, which could likely 
have been reduced by simply upgrading to modern z boxes and DASD and a 
more modest migration strategy.


--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: Is there an SPF setting to turn CAPS ON like keyboard key?

2011-12-15 Thread Joel C. Ewing
There is no code to send to the terminal to display lower case as caps 
in 3270 architecture - it was a manual switch on 3278's, to make the 
local display show both upper and lower case as upper case at the 
terminal even though dual case was transmitted both ways.  It was global 
for the entire display, not field specific.  If Hummingbird is an 
accurate 3270 emulator, it should have an option to emulate this 
feature, but I doubt this is what you are seeking, as it can get 
extremely confusing in a hurry with applications that actually support 
or require dual case in some contexts.


This display option made sense in the late 1970's when all applications 
still required upper case and all 3270 applications forced all input to 
upper case.  This feature was also for the benefit of users familiar 
with prior 3277 mono-case display limitations, who found 3278 dual-case 
behavior distracting when ENTER was pressed and the application echoed 
data back to the terminal in upper case and changed the visual appearance.


I used 3277's for a relatively short time before 3278's became available 
and quickly adapted to preferring dual-case mode even for mono-case 
applications, as it made it obvious whether you had actually sent the 
data to the application or not.  Some of the old timers of that day 
disliked the hardware change and continued to use the mono-case setting 
on 3278's for years.

  JC Ewing

On 12/15/2011 02:29 AM, Cris Hernandez #9 wrote:

thanks to all for the feedback.  it's a hummingbird terminal emulator but I 
have no idea how to program it by any other means than using the options panels 
provided.  the consensus here seems to be that if it can be done, the emulator 
would need to be sent code to turn caps on.
appreciate everyone's fond memories of dumb terminals as well, they may have 
been featureless, but I never had to shut down TSO because the system was 
hosed... a lot less distractions too.




--- On Wed, 12/14/11, Paul Gilmartin <paulgboul...@aim.com> wrote:


From: Paul Gilmartin <paulgboul...@aim.com>
Subject: Re: Is there an SPF setting to turn CAPS ON like keyboard key?
To: IBM-MAIN@bama.ua.edu
Date: Wednesday, December 14, 2011, 8:37 PM
On Wed, 14 Dec 2011 17:21:11 -0800,
Cris Hernandez #9  wrote:


yeah, TRANSLATE works to change it after the user hits

enter, but I want characters to show in caps as soon as the
character is typed, for visual purposes more so than
anything else, and do so in both panel displays and spf
editing.



Data entry at the terminal is completely invisible to
ISPF.  No setting can affect it.

Hmmm.  Of course you may have the source to
x3270.  You could modify
it.  If there were way to send an escape sequence to
the terminal,
sort of complementary to the pt3270 that oedit/obrowse use,
or akin
to the mode switching done by IND$FILE, a nonsensical
cursor addressing
sequence, an emulator could capture that and switch the
case.

Your motivation is good.  It seriously violates
WYSIWYG to type a page
of text; press ENTER; and stare in dismay as the whole page
changes.
The worst design ever was the old 3277 (IIRC) which
displayed text
in CAPS and sent it to the host in lower case.  And
some independent
terminal vendors actually provided a switch to enable
compatibility
with this horrendous misfeature.


also wondering if panels have the ability to enlarge

font size by row.



Does your terminal hardware (emulator) have this
capability?

-- gil

...


--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: JCL sheesh! for today

2011-12-10 Thread Joel C. Ewing

On 12/10/2011 12:19 AM, Ed Gould wrote:

Joel,

My reading of your reply is frankly confused.
Any DD  card that needs to be continued must end with a comma and start (in the 
continuation card with a // and a blank) in CC 4 or at least up to CC 16 (no 
continuation in CC 72 is needed).
A parm on the exec card in order to be continued had different rules and IIRC 
must start in CC 16 *AND* cannot be any longer than 100 characters (thats a 
double restriction) *AND* must have a continuation in 72 (thats three) which 
IIRC there is no other DD parameter has any restrictions (that I can remember 
of).

Ed


- Original Message -
From: Joel C. Ewing <jcew...@acm.org>
To: IBM-MAIN@bama.ua.edu
Cc:
Sent: Friday, December 9, 2011 10:19 PM
Subject: Re: JCL sheesh! for today

On 12/09/2011 11:57 AM, Ed Gould wrote:

JC,
My memory indicates that the parm on the exec card is really the only real column 
sensitive Nasty left in JCL.
Although there is some debate whether JCL is a language (or not) any language 
that I am familiar with does have column restrictions of some type.

Ed


All JCL statement continuation records are column peculiar, not just those 
involving PARM values or quoted strings.

A continuation which splits a quoted string is column sensitive on where the 
first record ends (column 71) and on where you must resume the string on the 
continuation (column 16).  But, all other statement continuation records are 
also column sensitive in that continued parameters must resume in columns 4 - 
16.  This complicates manual verification of JCL, because visually it is 
difficult to distinguish between any parameter continuation that resumes in 
column 16 versus one that resumes in column 17, but of course the latter fails.

It is true that there are languages (e.g., COBOL, FORTRAN, Assembler) with 
column restrictions that are an integral part of the language syntax;  but a 
number of other languages (e.g.,PL/I, C, REXX) are essentially free-form with a 
syntax that is column insensitive.

-- Joel C. Ewing,Bentonville, AR  jcew...@acm.org

...
Ed,
As you indicated in your response, all DD statements (or any other JCL 
statement for that matter) that need to be continued at a point not in a 
quoted string require that the continuation must resume on the following 
card in columns 4 - 16.


This is an *arbitrary* JCL column sensitivity that would be regarded by 
many as peculiar, especially when contrasted with the unrestricted 
location of the parameter field on the first card of the statement. 
This restriction is easily violated accidentally by JCL programmers 
who attempt to align subsequent parameter continuation cards with a 
parameter field that just happens to start on column 17 or later on the 
first card of statement.  I would call this a nasty column 
sensitivity in the JCL syntax as well, because I have seen so many cases 
over the years where this caused JCL syntax errors when the coding 
intent was obvious to a human.


That the continuation rules for a quoted string are even worse doesn't 
mean that normal JCL continuation rules are adequate or column insensitive.


--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: JCL sheesh! for today

2011-12-09 Thread Joel C. Ewing

On 12/09/2011 11:57 AM, Ed Gould wrote:

  JC,
My memory indicates that the parm on the exec card is really the only real column 
sensitive Nasty left in JCL.
Although there is some debate whether JCL is a language (or not) any language 
that I am familiar with does have column restrictions of some type.

Ed


All JCL statement continuation records are column peculiar, not just 
those involving PARM values or quoted strings.


A continuation which splits a quoted string is column sensitive on where 
the first record ends (column 71) and on where you must resume the 
string on the continuation (column 16).  But, all other statement 
continuation records are also column sensitive in that continued 
parameters must resume in columns 4 - 16.  This complicates manual 
verification of JCL, because visually it is difficult to distinguish 
between any parameter continuation that resumes in column 16 versus one 
that resumes in column 17, but of course the latter fails.


It is true that there are languages (e.g., COBOL, FORTRAN, Assembler) 
with column restrictions that are an integral part of the language 
syntax;  but a number of other languages (e.g.,PL/I, C, REXX) are 
essentially free-form with a syntax that is column insensitive.


--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: JCL sheesh! for today

2011-12-08 Thread Joel C. Ewing

On 12/08/2011 09:08 AM, Tom Marchant wrote:

On Thu, 8 Dec 2011 07:56:52 -0600, Paul Gilmartin wrote:


On Thu, 8 Dec 2011 07:35:36 -0600, Tom Marchant wrote:


One difficulty with trying to replace JCL with ReXX code is that
the dynamic allocations can lead to more deadlocks between jobs.
The initiator avoids them by ENQing all the data sets defined in
the JCL before the job starts.


That function could, and should, be provided to Rexx in a function
package.  I can even imagine an extension to BPXWDYN that would
allow allocating multiple data sets in a single call.



Yes, that would be easy.  More difficult would be to maintain the ENQ
across deallocation and reallocation to different DDNAMES for use in
different steps.

Would you want to allocate all of the data sets for a hundred steps at
the same time in the beginning of the job? Even for those steps that
are skipped?  What happens if you need to add or remove a step?



The enqueue and restart issues to me are major reasons why you really 
need a specialized language designed to support program flow control. 
You don't want to have to manually specify all the enqueue details, and 
you do want a language that makes it easy to monitor for success of 
complex program sequences and easy to perform partial job restart when 
failures occur.


I don't think Fred Brooks was really thinking things through when he 
said he would have preferred language-specific support within 
higher-level programming languages for scheduling program execution 
rather than OS/360 JCL ( http://lilliana.eu/downloads/jcltalk.txt ). 
Can you imagine the operational chaos of trying to manage production 
environments where each high-level language had its own unique and 
incompatible way of specifying program execution sequencing and 
specifying data flow for each execution step?  As bad as JCL is, it at 
least does isolate most of the Operating-System specific and 
installation-specific details of program execution from programs, so 
that a program in a higher-level language has a reasonable chance of 
running without modification in many different environments; and 
Operations personnel only have to have JCL expertise, not understand 
every programming language used in the shop.


--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: JCL sheesh! for today

2011-12-08 Thread Joel C. Ewing
Fred Brooks was the one who made side remarks in a 2004 talk about 
preferring program time scheduling to be embedded in the programming 
languages rather than having a separate JCL language, NOT me.


I'm guessing the idea was it would seem more natural to the programmers 
and they wouldn't be required to learn a separate JCL language; but 
Fred was only talking in generalities with no references to specific 
designs or examples.  I personally don't see how that would work, or 
even be an improvement, and time scheduling of a program addresses such 
a small part of the scheduling problem in the typical MVS production 
environment.  It's the ordering of program steps, other inter-step 
dependencies, specifying all the input and output data sets for the 
programs, and correctly handling all this when the system is maxed out 
and everything is running late, that introduces the complexity, not just 
some simple matter of specifying what time(s) to run a program.


I have never had time to sit down and seriously consider a redesign of 
JCL, other than getting far enough to recognize it is non-trivial and 
that no existing language (like REXX) is a good fit.  While it would be 
appealing to have a more functional JCL language, trying to make JCL 
into a full-fledged procedural language greatly complicates, perhaps in 
general makes impossible, calculating data set names early enough to 
handle job-level enqueues or providing simple mechanisms for partial 
reruns.


It would be a big improvement if one could just clean up all the 
syntactic garbage in the current JCL language, like dependencies on 
specific columns and a seemingly random and irrational intermixing of 
keyword, positional, and even positional-keyword parameter types; and 
bring more consistency to the language.  But, any changes that weren't 
somehow upward compatible and which required migration effort would meet 
heavy resistance from existing users.  Any major migration would be a 
hard sell, even if it allowed new functionality that was difficult to 
resist.

   JC Ewing


On 12/08/2011 03:17 PM, Lindy Mayfield wrote:

Would you mind, please, Joel  (or anyone else), giving me an example of some 
sort of JCL substitute language you and Fred speak of?  In the real/wild world?

I personally do not see any benefit from a JCL-like language being in the 
style of another language.  But I think I got this wrong.

This is something I think - and I said I think - that I've learned a bit over 
the past few years.  That when you take a general purpose type of language and 
make it specific for one purpose only...  then just go from there.  All the 
stuff you need to know you need to know from the assembly language.  But in my 
case, and perhaps many others, I wrote and modified over 10's of thousands of 
Cobol and CSP (not to mention JCL, Rexx, SQL, etc) lines of code during my 
lifetime up to a certain point.  And I never, ever knew any MVS instructions at 
all, other than, by accident IEFBR14.  I always knew the BR14 was somehow 
special.  Took me forever to find out, but I did.

What I would say is this:   If you made JCL more C like when running C 
programs, could I use it for running other programs like Cobol or Rexx or 
IDCAMS?  Then why not make it more C-like or Rexx-like or macro-like?  It all 
boils down to the same thing, doesn't it?  And if I hate C as much as some hate 
JCL, then what?

Didn't we already learn this lesson from shell's and shell scripting languages?

Today, for 45 more minutes is Sibelius' birthday.  Happy  birthday.  This is a 
nice Finlandia, if I didn't mistakenly post it already.  I've been having fun 
on both IBM-MAIN and ASSEMBLER-LIST for the past few days, so I get mixed up.
http://www.youtube.com/watch?v=ci3RPAOFok4

Regards
Lindy




-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@bama.ua.edu] On Behalf Of 
Joel C. Ewing
Sent: Thursday, December 08, 2011 10:45 PM
To: IBM-MAIN@bama.ua.edu
Subject: Re: JCL sheesh! for today

I don't think Fred Brooks was really thinking things through when he said he 
would have preferred language-specific support within higher-level programming 
languages for scheduling program execution rather than OS/360 JCL ( 
http://lilliana.eu/downloads/jcltalk.txt ).
Can you imagine the operational chaos of trying to manage production 
environments where each high-level language had its own unique and incompatible 
way of specifying program execution sequencing and specifying data flow for 
each execution step?  As bad as JCL is, it at least does isolate most of the 
Operating-System specific and installation-specific details of program 
execution from programs, so that a program in a higher-level language has a 
reasonable chance of running without modification in many different 
environments; and Operations personnel only have to have JCL expertise, not 
understand every programming language used in the shop.




--
Joel C. Ewing,Bentonville, AR

Re: ROOT file system is out space

2011-12-06 Thread Joel C. Ewing

On 12/06/2011 04:35 AM, John McKown wrote:

Total agreement. Updating the running system is dangerous. I did so once
by mistake (luckily it was my sandbox). I suffered an outage because an
update to a LINKLIB module required a corresponding update to an LPALIB
module. When somebody did an LLA REFRESH, the system died. I don't know
if this can happen with UNIX, but at the least, if the SMP/E job fails,
there could be incompatible files in the filesystem.

Your gun, your bullet, your foot

On Mon, 2011-12-05 at 20:16 -0500, Shmuel Metz (Seymour J.) wrote:

In4edd3dab.5070...@bremultibank.com.pl, on 12/05/2011
at 10:54 PM, R.S.r.skoru...@bremultibank.com.pl  said:


Why do you think so?


Because it's a ticking time bomb.

But it's not my dog.


Double Ditto.

Besides the issue of a PTF having pre- or co-requisites, there are many 
instances of a single PTF changing multiple elements at the same time. 
There is always the implicit requirement that all such interrelated 
changes be made concurrently; but if you install on live libraries or 
live file systems all changes will in fact be made serially, and the 
potential results of any task that might be using any of that code 
during the transition are unpredictable. That introduces the possibility 
for random Reliability/Availability/Security failures.  Doing things in 
a way that is known to be a RAS exposure may be acceptable in the PC 
world, but not in the z/OS world.


There are some rare instances where a PTF may only affect a single 
element, or you may know and be able to restrict all the potential users 
of affected code so that an apply of a few PTFs can be done to live 
libraries without significant exposure; but the chances of that being 
the case when applying all PTFs for a new RSU level are zero.


In the worst case scenario, a full library or some other failure during 
the APPLY to live libraries may leave some of the very tools you need to 
fix the failure in an unusable state, and even an IPL at that point may 
not help or may even break additional things.  The only safe 
installation technique in general is to install to alternate libraries 
and alternate file systems and use some technique (such as IPL with 
suitable parmlib changes) to install all changes concurrently, with some 
way to revert to old libraries if there are any significant issues with 
the new code level.


--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: PS file record Delete - Query

2011-11-28 Thread Joel C. Ewing
If by without disturbing any other lines you mean leaving data of 
remaining lines intact, then yes;  if you mean not affecting even the 
record ordinals or edit session line number of following lines, or not 
having to physically re-write all the undeleted records to DASD at the 
end of the edit session, then no.


If you have an edit macro that first identifies multiple line numbers to 
delete and you next want to delete those lines using original line 
numbers, one approach would be to delete from highest numbered lines to 
lowest.  Other approaches would be have your macro use ISREDIT commands 
to eXclude only the lines to be deleted as they are found and then have 
the macro delete all excluded lines, etc.
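
For illustration, a minimal sketch of the exclude-then-delete approach 
as a REXX edit macro (macro name and argument hypothetical):

/* REXX edit macro sketch: delete every line containing the string  */
/* passed as the macro argument.                                    */
ADDRESS ISREDIT
"MACRO (TARGET)"                 /* pick up the search string        */
"RESET EXCLUDED"                 /* start with nothing excluded      */
"EXCLUDE ALL '"TARGET"'"         /* mark the lines to be deleted     */
"DELETE ALL EXCLUDED"            /* remove them in one operation     */
EXIT 0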


1000 records is a very small file in the MVS world.  If you had a large 
file and needed to delete a single record without having to physically 
re-write the entire file, then you would not choose a PS file for 
storage, but probably either a VSAM file or a DB2 table that directly 
supports deletions of individual records or rows.  But then you would 
also need something more sophisticated than the ISPF Editor to do the 
delete.

  JC Ewing

On 11/28/2011 04:14 AM, jagadishan perumal wrote:

  ISREDIT macro to delete a desired Line.

On Mon, Nov 28, 2011 at 1:50 PM, Binyamin Dissenbdis...@dissensoftware.com

wrote:



On Mon, 28 Nov 2011 10:22:03 +0530 jagadishan perumal
jagadish...@gmail.com
wrote:

:We have a PS file record where we have more than 1000 records. Just out of
:curiosity, is it possible to delete a selected line without using the D tso
:command? The objective behind this deletion is just to make sure the deletion
:is done without disturbing any other lines. Tried Googling to find if
:there are any TSO tricks to do the same but was not able to find one.

Your request doesn't make any sense.

Obviously if you delete a line, the later lines will shift up.

If, instead, you merely want a non-ISPF way to delete a line, there are
many
options - though each, like ISPF, will temporarily make some lines
unavailable. You can use IEBGENER/SORT to copy the lines before and the
lines
after to a temporary and then copy it back. You can do the same with REXX.
Etc.

--
Binyamin Dissenbdis...@dissensoftware.com
http://www.dissensoftware.com

Director, Dissen Software, Bar & Grill - Israel



...
--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Licence to kill -9

2011-11-28 Thread Joel C. Ewing
 were frequently expected to be extremely 
creative and were continually involved in developing, not just 
following, standards,


--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: SETTING CONDITION CODE

2011-11-28 Thread Joel C. Ewing
And another fairly obvious alternative (but with possible political 
issues) is to provide on-line documentation for every production job 
which Operators are required to consult before reporting a problem, 
which could tell them that RC=0002 on that step is acceptable (something 
simple like text members in a PDS for each production job can be used 
here).  That rule would solve the problem even without a job scheduler. 
 And you could migrate to that mode of operation easily by defining 
default rules for Operations to follow if there is nothing explicit for 
a job and then providing explicit documentation for those jobs that 
require different rules for success as bogus calls by Operations point 
out which jobs need different rules documented.


If you have a production job scheduler (OPCS, CA-7, ZEKE, etc.), as 
Peter has already mentioned you can have Operations define for an 
individual job what maxcc constitutes successful execution, and 
Operators should be trained to only call if the scheduler says the job 
is unsuccessful.


It is a semi-standard of IBM utilities that conditions that are most 
likely fatal have cc=8, and that having cc=4 is required to indicate 
some condition that might be a problem.  It would not be unreasonable 
then to define an installation standard that cc<4 from a step is OK, and 
if there were any cases from applications programs that violated that 
convention, one could require a following conditional step that forced a 
more obviously-fatal error if one of those exceptional steps had a bad 
cc < 4.
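
A minimal sketch of such a guard step (step name, program, and the 
return-code value are all hypothetical): suppose APPSTEP runs a program 
for which RC=2 actually means failure; the IF/THEN group turns that 
quiet code into one Operations cannot miss.

//CHKBAD   IF (APPSTEP.RC = 2) THEN
//FORCERR EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  SET MAXCC = 16
/*
//         ENDIF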


If the job failing is not a production job but a test job, then 
reporting failures should not even be the responsibility of Operations. 
 Test jobs should be monitored by the submitter.

  JC Ewing

On 11/28/2011 04:47 PM, Farley, Peter x23353 wrote:

One alternative is to use your scheduling software to permit RC=0002 or less from that 
job to be marked successful.  OPCS can do this, we use that facility quite 
regularly.  Certain jobs are OK if any step returns RC=0004 or less, but 0008 or higher 
is flagged as an error in the OPS lists.  Only jobs flagged as errors in the scheduling 
system need to be investigated.

I'm reasonably certain that other commercial-grade scheduling software has 
similar capabilities.

HTH

Peter


-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@bama.ua.edu] On
Behalf Of John Dawes
Sent: Monday, November 28, 2011 1:16 PM
To: IBM-MAIN@bama.ua.edu
Subject: SETTING CONDITION CODE

G'Day,

Can I override a condition code so as to force it to a 0000?  For example
the job-step executes successfully and puts out a COND CODE=0002.  The job
continues on to the next step which is what we want.  The reason I need a
0000 instead of a 0002 is because I get paged because the operator sees a
0002.
I tried the following by using IDCAMS:
//STEPCOND EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN DD *
   IF MAXCC=0002 -
   THEN -
   DO -
   SET MAXCC=0  -
   END
//*

However this did not force a 0000 on the previous step.  Is there a way I
can add the IDCAMS IF-THEN-DO logic in the step which is giving me the
headache?  Or is there a way of forcing a 0000 on that step?  Here is what
the (headache) step is doing:

//PRINTDS   EXEC PGM=PRTDS,
// PARM=('DDNAME(INPDS)',
// 'SEA(.) REP(X''40'')',
//   'SYSOUT(X)')
//INPDS   DD   DSN=SYS2..INFO.CONFIG,DISP=SHR
//SYSPRINT DD SYSOUT=*
//*

--


...


--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Java based Web Emulator

2011-11-22 Thread Joel C. Ewing

On 11/22/2011 03:15 AM, jagadishan perumal wrote:

Hi,

Are there any opensource (Java based) mainframe emulators which can
run inside Internet Explorer?  I am just trying to enable the mainframe
connection for the remote users.

Regards,
Jags


...
If you are talking about remote access to a mainframe, what you more 
likely mean is a telnet3270-capable 3270 terminal emulator, not a 
mainframe emulator.  A java-based mainframe emulator would be totally 
gross in terms of performance - not to mention that you couldn't legally 
run any current IBM mainframe software under such an environment.


I have never used browser-based 3270 emulators because in the past they 
haven't gotten very good reviews.  Telnet 3270 terminal emulators for 
native MS Windows environments tend to not be free, but some are not 
that expensive (< $50).  If you have to have opensource and free, there 
is always X3270 under some flavor of Linux, or somewhat less spiffy, 
X3270 under Cygwin under Windows.  Works, but tends to be a little 
kludgy under Windows.


--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: ADRDSSU Compatibility

2011-11-18 Thread Joel C. Ewing

On 11/18/2011 07:11 AM, Norbert Friemel wrote:

On Fri, 18 Nov 2011 10:26:32 -0200, Carlos Bodra - Pessoal wrote:


If I have a volume backed up using ADRDSSU 1.11 (DUMP DATASET) is
possible to restore it using ADRDSSU 1.10?  (backward compatibility)
--


Yes, if the coexistence and fallback PTFs are installed on z/OS 1.10
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/E0Z2M171/2.1.2.1

Norbert Friemel


...
But, take note that this backward compatibility does not always exist. 
It seems that about every ten years or so as tape technology advances 
IBM decides to raise the block size written by dfdss, first to 64KiB, 
relatively recently to 256KiB.  Be sure to pay attention when this is 
mentioned in migration notes, because it invariably means that new dump 
tapes CANNOT be read by back-level versions of dfdss, and you might not 
have complete control over or knowledge of the update level of dfdss at 
a recovery site.


Although we never had to work around a dfdss failure, our DR tape set 
always included our current version of stand-alone dfdss, just in case - 
so we knew we could always restore our emergency restore system (at same 
software level as production) and didn't have to constantly remember to 
verify DR site dfdss compatibility that only rarely would be an issue.


--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: ADRDSSU Compatibility

2011-11-18 Thread Joel C. Ewing

On 11/18/2011 08:18 AM, Norbert Friemel wrote:

On Fri, 18 Nov 2011 07:42:34 -0600, Joel C. Ewing wrote:


...
But, take note that this backward compatibility does not always exist.
It seems that about every ten years or so as tape technology advances
IBM decides to raise the block size written by dfdss, first to 64KiB,
relatively recently to 256KiB.  Be sure to pay attention when this is
mentioned in migration notes, because it invariably means that new dump
tapes CANNOT be read by back-level versions of dfdss, and you might not
have complete control over or knowledge of the update level of dfdss at
a recovery site.



IBM supports n-2 releases. The 256K blocks were new in z/OS 1.12. There are 
compatibility PTFs for 1.10 and 1.11 (OA30822).

Norbert Friemel


I'm glad to know n-2 compatibility is now supported.  If that were 
true long ago when the previous dfdss 64KiB change was made, it 
certainly wasn't advertised then.


The fact that compatibility PTFs for back-level releases are available 
still doesn't mean you can assume without checking that they are 
actually installed at some DR site not under your direct control, or 
that you don't need to be extra careful at such junctures that you have 
updated all your stand-alone dfdss tapes and have resolved compatibility 
issues on any back-level systems on site that could conceivably become 
an issue for local recovery.  It's easy to get complacent about issues 
that are so rare they at most exist for a few months every decade.

--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: ADRDSSU Compatibility

2011-11-18 Thread Joel C. Ewing

On 11/18/2011 04:15 PM, Ted MacNEIL wrote:

I'm glad to know n-2 compatibility is now supported.  If that were true long 
ago when the previous dfdss 64KiB change was made, it

certainly wasn't advertised then.

Funny, I thought n-1 was supported for a long time!
-
Ted MacNEIL
eamacn...@yahoo.ca
Twitter: @TedMacNEIL


Obviously this depends on what is meant by compatibility PTFs.

New dfdss versions have always been compatible in reading dumps produced 
by previous version(s) - changes that invalidated archived dfdss tapes 
would not be tolerable.


I also can't recall a case other than the 64K block size change where 
the n-1 version wasn't able to read tapes produced by version n as long 
as you didn't explicitly use some new features introduced in the new 
version. The problem with the 64K block change was that you got the new 
feature by default.  I may have missed it, but I don't remember there 
being a PTF to the old version to allow it to read 64K blocks, at least 
not at the time we migrated.


In some cases the best you can hope for is toleration support for the 
n-1 version so that it will try to do something reasonable or at least 
give meaningful warnings or errors if there is some new construct in the 
dump file related to a new feature in version n that the old version 
can't fully handle.


--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: TSO SCREENSIZE

2011-11-11 Thread Joel C. Ewing

On 11/10/2011 02:58 PM, Rick Fochtman wrote:

---snip

Remember how old the 3270 architecture is. Wikipedia says about 1972.
Think 1 Mhz 8080 as top of the line micro processor. The original
3277 and its controllers were STUPID. Rather than put a more powerful
processor in the controller, IBM decided to offload the complicated
function of calculating the position of the data into the host. Made
of discrete transistors and resistors! Very primitive. So, the host
just sent a simple to understand buffer address (a single number)
to the 3274.



Not without a time machine. The 3274 came later. The original 3270
controller lineup was 3271, 3272 and 3275, the latter combining
controller and display.

---unsnip---

Wasn't there also a 3276, with a display and controller that would
handle the integerated display, plus 7 more display-only devices?

Rick



Yes indeed, for remote BSC or SDLC operation.  The other 7 devices could 
also include 3270 printers.  Deceptively the same external size as a 
3278, but with enough additional steel and components inside to be 
appreciably heavier than a 3278, which was already borderline for one 
person to carry.  Trying to carry a 3276 by yourself was not wise if you 
wanted to avoid back problems.


--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Segmenting an output spool file in z/OS - 2nd attempt

2011-10-20 Thread Joel C. Ewing

Leslie,
Off topic,
but did you do anything different on 2nd posting to get it to work? 
Like something we could advise for others with similar base64 posting 
issues?


I received some additional headers from ibm-main with your 2nd posting 
attempt, most notably a X-MIME-Autoconverted: from base64 to 8bit by 
bama.ua.edu id p9KEf61i001292 which implies Email was still received by 
bama in base64 format, but for some reason it was able to convert the 
2nd post to 8bit and handle it reasonably but failed to do so for the 1st.


From the Received headers, it looks like the 2nd posting was received 
by a different ua.edu mail server than the first post (mailapp-2.ua.edu 
versus mailapp-1.ua.edu), so perhaps the difference is an issue at 
ua.edu.  Maybe they're experimenting with a circumvention/resolution of 
the original problem and you just hit a fixed path on 2nd try.  (The 
routing of Email though mo.gov servers was different as well, but base64 
encoding apparently got to bama in both cases)

  JC Ewing

On 10/20/2011 09:41 AM, Turriff, Leslie wrote:

Hi,
 I know that in z/VM and z/VSE I can break a large output spool 
file into several segments.  I've Googled, and looked through the z/OS JCL 
Reference and User Guide and the JES2 Introduction and Commands manuals, but I 
can't find an equivalent mechanism for z/OS.  Perhaps a different term is used 
in z/OS, and I'm looking for the wrong thing?

Leslie Turriff
State of Missouri
Information Technology Services Division

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html




--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: DFSMS and HFS/ZFS Datasets.

2011-10-15 Thread Joel C. Ewing

On 10/15/2011 12:29 AM, Ravi Gaur wrote:

My 2 cents: they should have a separate storage group. It's going to make 
life easier when issues/problems come up, and it also makes it easier to 
define various options which are not needed with a basic file system...

Look in DFSMS : Implementing System-Managed storage 7.9 - Defining SMS 
Construct for HFS Data..


...

In an SMS shop those differences are easily handled via DATACLAS and 
MGMTCLAS assignments by ACS routines.  I would think whether you would 
want to isolate HFS/ZFS to a separate STORGRP or STORCLAS would depend a 
lot more on other factors unique to your shop.


If you have a significant number of developers using HFS/ZFS who think 
DASD is infinite, that could be an issue in favor of a separate DASD 
pool, but you can have exactly the same issues with standard MVS data 
sets used for testing under either batch or TSO, so HFS/ZFS is hardly 
unique in its potential for conflict with DASD space that might be 
needed by production work loads.


With modern highly-cached DASD subsystems with PAV support, performance 
conflicts really shouldn't be a motivation for a separate STORGRP either.


Different backup criteria for HFS/ZFS files can also be handled by 
MGMTCLAS, unless you choose to use a full-volume backup strategy as one 
user described, or some totally independent backup strategy that gets 
down to the level of changed files within an HFS/ZFS file system.


If most of your DASD is still non-SMS and you have a non-trivial system, 
you have other much more serious issues.


--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Sad News: [IP] Dennis Ritchie dies

2011-10-12 Thread Joel C. Ewing

On 10/12/2011 09:11 PM, David Boyes wrote:


��z{S���}�ĝ��xjǺ�*'���O*^��m��Z�w...


Attempt at reposting after manually decoding the garbaged base64 utf8 
block in David's post:


Begin forwarded message:
From: Tim Finin Date: October 12, 2011 9:32:06 PM EDT
Subject: Dennis Ritchie has died
Sad news.  Rob Pike reports on Google Plus that Dennis Ritchie died at 
his home this weekend after a long illness.  Ritchie created the C 
programming language and was a key contributor to Unix.  In 1983 he 
received the ACM Turing Award with his long time colleague Ken Thompson 
for the development of operating systems theory and the implementation 
of the UNIX operating system.  He was elected to the National Academy of Engineering in 1988.



--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: WLM Classification Rules question

2011-10-08 Thread Joel C. Ewing

On 10/07/2011 01:25 PM, Shmuel Metz (Seymour J.) wrote:

In4e8f1ad5.5020...@acm.org, on 10/07/2011
at 10:29 AM, Joel C. Ewingjcew...@acm.org  said:


If the list server does not support broadcasting multi-part MIME
messages, I don't see any easy way it could possibly handle an
incoming  message that is UTF-8 base-64 encoded without first
decoding the message  to UTF-8 to allow adding the list footing
without requiring a 2nd part  in the message body.


Why? The footer is ASCII. Any valid ASCII is valid UTF-8. Just BASE64
encode it and append.



I don't claim to be an expert on MIME Email syntax (only to know enough 
to be dangerous), but there is a definite end-line for a base64 encoded 
block (short line with trailing ='s).  Is it legit to follow one 
base64 encoded block immediately with another in the same message part 
without any additional headers?  Appending two base64 encodings in this 
way at the line level is not equivalent to a single base64 encoding as 
there is still a distinct division between the two blocks.  But if 
allowed, that would certainly be a simpler solution than merging the 
two blocks into one (which requires at least a partial decoding of the 
last line of the first block to fill out that line)


But, in the last case I received,  this wasn't what the list server 
sent.  It separated the second UTF-8 base64 footing from the previous 
base64 encoded block with an intervening multi-part separator line 
sequence.  I thought once you added a multi-part separator, that 
required a new set of Content headers, not to mention the other 
multi-part headers that were missing.


In any event, what I'm receiving is definitely not making my Thunderbird 
client happy and I've generally had good success with it; although, I 
can well believe other Email clients may behave differently on 
questionable Email formats.

--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: WLM Classification Rules question

2011-10-07 Thread Joel C. Ewing
This is another of several recent messages that displays as total 
garbage to me (Thunderbird 6.0.2 on Fedora 15).  Anyone have any idea 
what is going on here?  This looks to me like the email I am receiving 
is ill-formed.  Using view source I can see that the main message body 
is formatted with content


Content-Transfer-Encoding: base64
Content-Type: text/plain; charset=UTF-8

and what immediately follows those headers does appear to be legit 
base64 down to a point.  Feeding just those initial lines into web 
base64 translators gives valid results:
how about *SI*, Subsystem Instance? ... (etc.  including some 
characters that definitely require UTF-8)


But immediately on the next line following this valid base64 content is:
--bcaec547ca51d8417704aeadc3a4

DQotLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tDQpGb3IgSUJNLU1BSU4gc3Vic2NyaWJlIC8gc2lnbm9mZiAvIGFyY2hp
dmUgYWNjZXNzIGluc3RydWN0aW9ucywNCnNlbmQgZW1haWwgdG8gbGlzdHNlcnZAYmFtYS51YS5l
ZHUgd2l0aCB0aGUgbWVzc2FnZTogR0VUIElCTS1NQUlOIElORk8NClNlYXJjaCB0aGUgYXJjaGl2
ZXMgYXQgaHR0cDovL2JhbWEudWEuZWR1L2FyY2hpdmVzL2libS1tYWluLmh0bWwNCg==

which looks like a multi-part Email separator (there were no earlier 
multi-part headers to make this legit), followed by more base64 content 
minus the required Content headers.  Interestingly enough the final 
encoded piece when manually decoded appears to be the ibm-main footer:


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html;

Is this problem possibly being caused by some people sending multi-part 
MIME email to the list and totally confusing the list server which 
doesn't allow multi-part?  Or is there some bug in the list server's way 
of appending the list footer when the post is base64 encoded? Or, is 
this just another nasty side effect of sending email through down-level 
email clients or servers that still after all these years fail to 
properly support 8-bit email and force auto-conversion to base64 
encoding at some point during the transmission?

   JC Ewing

On 10/06/2011 11:20 PM, Cobe Xu wrote:


��z{S���}�ĝ��xjǺ�*'���O*^��m��Z�w!j��Ɔ�r�WB�4���7V'7�7FV���7F�6S�#��R
y-y=y�
yy�

...

--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: SPOOL move

2011-10-07 Thread Joel C. Ewing
And usually the only reason one would need to do this migration IS a new 
DASD subsystem and the removal of an old subsystem, which typically 
requires moving many volumes, not just SPOOL.  Having a product like 
TDMF or FDRPAS makes such moves non disruptive and so painless, that 
every one of our DASD subsystem acquisition contracts for over a decade 
has included temporary availability of such a product to do the 
migration!  Considering the capacity of current DASD subsystems, to do 
it any other way may require a month or more overlap between old and new 
DASD with at least some brief down time, unless you are willing to incur 
substantial down time for mass DASD moves.  With one of the above 
migration tools, the moves can be done non-disruptively over at most 
several days.

  Joel C. Ewing

On 10/07/2011 09:17 AM, Mark Jacobs wrote:

If you have a product such as IBM's TDMF or FDRPAS it will happily move
active spool volumes to another device.

Mark Jacobs

On 10/07/11 10:10, R.S. wrote:

I have to move JES2 SPOOL volumes from one control unit to another
(another dasd box).
IMHO the simplest method is to stop JES2, move the volumes using DSS
copyvol and voila. However this method requires an outage which can be
long, because SPOOL is large.

I just looked at the documentation - it seems such movement could be
done by addition/deletion of volumes to spool. However deletion of spool
volumes is not so funny - when some long running task keeps output on
the volume, I can only wait. Obviously closing tasks like CICS or DB2
means production outage, so closing JES2 is not big issue in such case.

Q: Is there any method to move spool volumes (or at least majority of
them) ?
Any clue?


Regards



--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: WLM Classification Rules question

2011-10-07 Thread Joel C. Ewing

On 10/07/2011 09:35 AM, Roberts, John J wrote:

Anyone have any idea what is going on here?  This looks to me like the email I 
am receiving is ill-formed.


Some installations have special handling for email sent to addresses outside 
the home organization.  The email is intercepted and stored in a database.  
What gets sent to the destination email address is a rich text page containing 
hyperlinks the recipient can use to access the original message, but it works 
only for a limited period of time (like 30 days).

Of course, when these rich text pages are sent, they appear as hexadecimal 
garbage to the list servers.

I am plagued with this at my installation.  I always have to watch that my 
posts are small since big messages trigger the special secure handling.

John


But in this particular case, the data definitely is just UTF-8 text not 
RTF, only it is base64 encoded (which shouldn't be necessary these days 
- see 8BITMIME, circa 1994).  The message body appears to be 
quasi-multi-part MIME with some missing multi-part headers (specifically 
the initial ones that are required to indicate the message body is 
multi-part and the multi-part separator string) and also with missing 
Content headers for a 2nd base-64-encoded part in the message body 
(apparently a base64-encoded ibm-main footer added by the list server), 
and finally there is no final multi-part header to indicate the end of 
the 2nd part.


If the list server does not support broadcasting multi-part MIME 
messages, I don't see any easy way it could possibly handle an incoming 
message that is UTF-8 base-64 encoded without first decoding the message 
to UTF-8 to allow adding the list footing without requiring a 2nd part 
in the message body.  From the data I am seeing, it looks like the list 
server attempted to add the footer message in a format consistent with 
the original message data (UTF-8, base64), but was unable to handle the 
generation of all the headers required to make this really work.  If 
that is the case, then either the arrival of UTF-8 base64 rather than 
UTF-8 Text must somehow be prevented, or the server must be smart enough 
to convert the UTF-8 base64 to UTF-8 Text before attempting to process 
it.  Or, perhaps this failure was triggered by an incoming message that 
was actually multi-part (someone with email configured to send both text 
and html formats all the time?)


--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: DFSMShsm DUPLEX tapes.

2011-10-05 Thread Joel C. Ewing
If your object is just to cease duplexing and get down to a single HSM 
volser from two, and it doesn't really matter whether the one left used 
to be the primary or the duplex volume, then one possibility is to do an 
HSM TAPEREPLACE to convert DUPLEX volumes into the primary volume, 
leaving no duplex volume.  TAPEREPL will still take some time as it must 
do updates to the HSM Control Datasets to replace all primary volser 
references with the duplex volser, but less time and no tape handling 
compared with a RECYCLE.


If you plan to then turn off duplexing and run in this mode for any 
length of time, I trust your tapes are not real cartridges but virtual 
cartridges that are duplexed in some other way.  A single physical 
cartridge is always at much greater risk from physical damage than two, 
especially with one at an alternate site, and running without HSM data 
duplexed in some fashion is ill-advised.


I'm also unaware of any command to just delete the duplex volume.  I 
would be inclined to suspect that the only references to a duplex volser 
are a field in the HSM OCDS record for the primary volume, a member in 
the RACF HSMHSM tapevol profile, and volume status information in your 
tape management system; but I would still be hesitant about trying to 
purge that data manually in the absence of a direct DFHSM command to do 
the delete.  It doesn't seem like an HSM duplex delete function should 
be that difficult to implement:  Perhaps it should be considered as a 
possible candidate for a SHARE requirement.

  JC Ewing

On 10/05/2011 11:02 AM, Barry Jones wrote:

Hi Listers,

Does anybody know if there is some way to do a DELVOL of just the duplex copy
of a tape?

IE: ML2 tape VOLSER 123456 has a duplex copy on VOLSER 654321.
I want to keep 123456, but delete the association with 654321 and return it
to scratch.

I know I can do it by SETSYS DUPLEX(MIGRATION(N)) and then RECYCLE EX
VOL(123456), but I have thousands of volumes and don't want to have to
recycle all of them.

I've scanned the fine manual, but can't see any way to do it.

B.

...
--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Suggestion for a Job running Under a Loop

2011-09-20 Thread Joel C. Ewing

On 09/20/2011 07:45 AM, Steve Comstock wrote:

On 9/20/2011 4:22 AM, Jake anderson wrote:

I am not doing a COMPILE,BIND,LINKEDIT. It's the application trainee
users who perform this task. Not the same user but different users
performing the same task as part of training. Just curious to know if
there are any limitations on TGNUM or CPU time to know if a Job is
taking more resources than it should.

Jake

On Tue, Sep 20, 2011 at 6:04 AM, Lizette
Koehlerstars...@mindspring.comwrote:


Jake,

I am a little confused. Are you doing a COMPILE,BIND,LINKEDIT every time
you run your program?



Lizette


Ah. For trainees running training jobs, you really
should put TIME= on JOB statements and OUTLIM= on
all SYSOUT DD statements.



And if this is a training exercise, the instructor in charge should 
already have a solution for the exercise and know what is a reasonable 
upper bound for OUTLIM on the SYSOUT datasets and TIME for the execution 
step of the exercise.  Forcing such limits in some way, at least on the 
test execution step, is infinitely simpler than trying to dynamically 
determine if a running job step is doing something reasonable or not and 
still achieves the desired end result of not allowing bad test jobs to 
waste huge amounts of system resources.
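
A sketch of what those limits might look like (job name, class, program 
name, and the limit values are all hypothetical):

//TRAINEE1 JOB (ACCT),'TRAINEE',CLASS=T,MSGCLASS=X,TIME=(,30)
//* TIME=(,30) caps the job at 30 CPU seconds; OUTLIM caps each
//* SYSOUT data set at 5000 records so a looping program cannot
//* flood the spool.
//RUN      EXEC PGM=TESTPGM,TIME=(,30)
//SYSPRINT DD SYSOUT=*,OUTLIM=5000
//SYSOUT   DD SYSOUT=*,OUTLIM=5000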


A default job step TIME value may be easily assigned by having a 
separate job class in JES for the test/trainee jobs and specifying the 
default there.  Another simple possibility would be to require the 
trainees to use a supplied PROC for their execution which includes the 
desired limits for the test execution job steps.  A more difficult 
solution would be to use JES exits to enforce the limits for specific 
test job classes or job names.  If those limits also catch a few trainee 
programs that might produce correct results, but in a terribly 
inefficient way, that is also a good training experience.


Considering the approximate MIPS ratings of typical processors today, 
any program that uses more than a few seconds of CPU time must of 
necessity contain program loops.  Distinguishing productive loops from 
inefficient loops and from non productive loops for an arbitrary program 
is a non trivial exercise.


--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Jabberwocky was:PTF question

2011-09-06 Thread Joel C. Ewing

Try
`Twas brillig, and the slithy toves
  Did gyre and gimble in the wabe:
  ...
http://www.jabberwocky.com/carroll/jabber/jabberwocky.html
Must be careful to preserve correct wording or you could distort the 
entire meaning of the poem :)

  JC Ewing

On 09/05/2011 10:14 PM, Ted MacNEIL wrote:

Did gyre & gymbol on the wabe.

PS: I don't think troth is the 'correct' spelling.
-
Ted MacNEIL
eamacn...@yahoo.ca
Twitter: @TedMacNEIL

-Original Message-
From: Ed Finnellefinnel...@aol.com
Sender: IBM Mainframe Discussion ListIBM-MAIN@bama.ua.edu
Date: Mon, 5 Sep 2011 22:16:35
To:IBM-MAIN@bama.ua.edu
Reply-To: IBM Mainframe Discussion ListIBM-MAIN@bama.ua.edu
Subject: Re: PTF question

'Twas brillig in the slimy troth...


In a message dated 9/5/2011 8:55:03 P.M. Central Daylight Time,
jo.skip.robin...@sce.com writes:

KFRoaXMgaXMgaG93IGl0IGxvb2tzIGluIG15IFNlbnQgZm9sZGVyLi4uKQ0KDQpBcyB0aGV5IHNh





--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: PTF question

2011-09-05 Thread Joel C. Ewing

On 09/05/2011 05:39 AM, גדי בן אבי wrote:

R200, R300 and R500 are the versions of TWS that each ptf applies to.

Since I don't have TWS, I don't know what the current versions are.

If you can, use SMP/E RECEIVE ORDER command. You can then specify the APAR 
(PK85334), and the correct PTF, and its prerequisites will be retrieved from 
IBM's servers.

Gadi

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@bama.ua.edu] On Behalf Of 
Alvaro Guirao Lopez
Sent: Monday, September 05, 2011 1:26 PM
To: IBM-MAIN@bama.ua.edu
Subject: PTF question

Hi listers,

I have a question related with PTFs releases download:

When I see the next table at IBM web:

Applicable component levels

- R200 PSY 
UK49618https://www14.software.ibm.com/webapp/set2/ordermedia/shopCart?ptfs=UK49618

   UP09/10/05 P F910
- R300 PSY 
UK49649https://www14.software.ibm.com/webapp/set2/ordermedia/shopCart?ptfs=UK49649

   UP09/10/06 P F910
- R500 PSY 
UK49655https://www14.software.ibm.com/webapp/set2/ordermedia/shopCart?ptfs=UK49655

   UP09/10/06 P F910

What is 'R200', 'R300' and 'R500'? How can I know what PTF I must download and 
apply to my systems?

Thanks for your help.

--
Un saludo.
Álvaro Guirao



If you can view any one of the PTFs, it will most likely have IF FMID 
and VER clauses in the control information which will clearly tie all 
the various versions of the PTF to the FMID for which they apply.  For 
most (all?) product areas there will be an obvious correspondence 
between the four-character indicators and the product version FMIDs, 
e.g., based on the last characters of the FMID.  Once you know that 
correspondence holds for the product area in question you won't even 
need to check the internals of some PTF.  I never have understood why 
after all these years IBM hasn't bothered to change their convention and 
use the full FMID in the applicable component level description and 
avoid all the unnecessary confusion for new users!


If you don't know for sure which FMIDs are on your system, that can 
generally be determined by displaying the appropriate global zone gzone 
data from the ISPF SMP/E dialog.  Or alternatively, you can use the 
SMP/E dialog to search the target zone for a specific product FUNCTION 
FMID sysmod and see which is in your target zone.

--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: FTP JESLRECL Limit

2011-08-30 Thread Joel C. Ewing

On 08/29/2011 05:22 PM, Paul Gilmartin wrote:

On Sun, 28 Aug 2011 11:38:29 +0530, amit wrote:


i believe your answer points to the Value and range in JES sysparms...can
you set that value to 999:)?


I'm mystified that a JES sysparm should appear in the publication cited,

 z/OS V1R12.0 Communications Server IP Configuration Reference
 SC31-8776-18

Which is a TCP/IP publication, not a JES publication.  I find no reference
to JESLRECL in:

 JES2 Initialization and Tuning Reference
 Document Number SA22-7533-10

which would seem a more appropriate publication.  Where should I
be looking?  What's the name of the PARM?


why not?


Can't find where to set it.


On Sun, Aug 28, 2011 at 11:09 AM, Paul Gilmartin wrote:


On Sat, 27 Aug 2011 16:05:12 -0500, Mike Schwab wrote:




http://publib.boulder.ibm.com/infocenter/zos/v1r12/index.jsp?topic=/com.ibm.zos.r12.halz001/cjeslre.htm



Why?


On Sat, Aug 27, 2011 at 9:40 AM, Paul Gilmartin wrote:
deleted

ftp  quote site jeslrecl=999
200-Jeslrecl parameter (999) must be between 1 and 254.  Jeslrecl

ignored.

200 SITE command was accepted

Why?


-- gil



The jeslrecl parameter is not described in the JES manuals because it 
is not a JES parameter.  It is a sub-parameter of the site command of 
the MVS FTP server, hence it is described in the manual that describes 
the FTP server commands, as one should expect.


My reading of the IP Configuration Reference manual indicates that when 
submitting jobs to JES via FTP, jeslrecl plays the same role as the 
LRECL in a DD statement that allocates an INTRDR SYSOUT to spool a job 
to JES.  One shouldn't expect to find jeslrecl in the JES manuals any 
more than one would expect to find the DD LRECL parameter described in 
the JES manuals.
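
In batch terms the analogy is roughly this hedged sketch (data set name 
hypothetical): the LRECL on the INTRDR SYSOUT allocation is the value 
that jeslrecl stands in for on the FTP side.

//SUBMIT   EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSIN    DD DUMMY
//SYSUT1   DD DSN=HYPO.JOB.JCL,DISP=SHR
//SYSUT2   DD SYSOUT=(A,INTRDR),RECFM=F,LRECL=80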


I think the restriction on JES INTRDR LRECL was raised at some point to 
32KiB.  If so, the FTP jeslrecl maximum value is a restriction imposed 
by FTP, probably for consistency with older versions of MVS.


--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: ISMF QUESTION

2011-08-30 Thread Joel C. Ewing

On 08/30/2011 10:58 AM, esmie moo wrote:

Good Morning Gentle Readers,

I am trying to compile a report of dsns which use a specific MANAGEMENT CLASS via 
ISMF.  The fields I choose are 3 & 26.
For some reason it does not show the MANAGEMENT CLASS in the report.  It has 
blanks under the MANAGEMENT CLASS NAME.
I tried selecting STORAGE CLASS (27) and the same thing happens.
Is there something special I have to do when selecting these fields?  If so, 
please advise me.

Thanks in advance.

Since SMS class data is in the catalog, I would suspect that means you 
need to generate your dataset list from catalog rather than from 
VTOC.  If any dsns are migrated, you also need to specify Y for 
Acquire data if DFSMShsm migrated.


Any displayed DATA or INDEX components of VSAM datasets will show null 
class values, as class values belong to the CLUSTER; so if your object 
is to list allocated space associated with VSAM files (which is only on 
DATA and INDEX) for a specific MGMTCLAS, you may not be able to do this 
easily with ISMF.


--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: alter dataset blocksize

2011-08-29 Thread Joel C. Ewing

On 08/29/2011 12:56 AM, MONTERO ROMERO, ENRIQUE ELOI wrote:

Hi,

Why preserve or change the date of the dataset?

A user had a library (PDS) with an incorrect BLOCKSIZE, so the HSM backup/migrate 
did not work for this dataset. We asked the customer to fix the dataset using the 
standard procedure (create a new one, copy, delete the old one & rename the new one).

But...

The user told us it was fixed, so I went to check it. I found I could perform an hsend 
backds ..., it was correct, and when I checked the library info I found it had the correct 
blocksize, and also found that the creation date was the same; it was not 
changed. It surprised me, because I have no explanation for why this happened.

Thanks,

Enrique Montero


By far the most likely cause of the original problem is that the user ran a 
batch job step to create a member in the PDS and either the program DCB 
or the DD statement for that file in that job step explicitly specified 
the inconsistent block size.


Perhaps when informed of the problem the user recognized what he had 
done and simply ran another job step to store a new PDS member 
specifying the correct block size to reset the PDS block size attribute 
and deleted any members that were written with incorrect block size. 
That doesn't alter the PDS create date and may be much more efficient 
than creating a new copy of the PDS if you know this is what caused the 
problem.  In fact, you likely would have to fix the original PDS in this 
fashion before you could read the old members to copy them to a new PDS, 
and once it is fixed the new PDS becomes redundant (this is not to say 
that there might be other ways to trash a PDS where a new PDS creation 
would be desirable).
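
A hedged sketch of that simpler fix (data set and member names 
hypothetical): rewriting a single member with an explicit, consistent 
DCB is enough to reset the block size recorded for the PDS.

//FIXBLK   EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSIN    DD DUMMY
//SYSUT1   DD DSN=HYPO.GOOD.TEXT,DISP=SHR
//SYSUT2   DD DSN=HYPO.BAD.PDS(FIXMEM),DISP=SHR,
//            DCB=(RECFM=FB,LRECL=80,BLKSIZE=27920)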


Or, a less sophisticated user might have found he couldn't read any old 
members in the PDS and just restored a back-level version of the PDS 
with DFHSM, which as Allan noted also preserves the creation date.


DFHSM auto backup uses the data set changed bit, not the creation 
date, to determine if the dataset has been modified since the last 
successful backup; but an explicit hsend backds will create a 
potentially redundant backup even if the data set was just restored by 
the user from a DFHSM backup and the changed bit is off.


Analyzing SMF records and DFHSM logs can give some clues about what was 
done, but if you are curious sometimes the simplest approach is to just 
ask the user what he did to resolve the problem.


--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Cryptography Processor

2011-08-23 Thread Joel C. Ewing
 player in the protection control 
design, but z/OS internal design and the hardware are what make it work. 
 In a normal z/OS installation, only a z/OS system programmer could 
introduce code into the system that would affect z/OS operation 
globally, and maintenance philosophies and the restricted number of 
system-level z/OS vendors and the fact that code distribution comes 
direct from those vendors and requires manual installation action to 
install make it very unlikely that malware would come via that route (it 
would probably destroy a vendor's business if it ever happened).


Application programmers can in general only introduce code that would 
cause failures within their own limited application area, not compromise 
z/OS itself, and maintenance philosophies, multi-party testing, and 
installation standards would make it highly unlikely that any 
non-approved code from outside (which could invite copyright or patent 
violation exposures) could be introduced, much less malware.


The largest body of z/OS users, end users of applications, can at best 
only alter data within their application area and would not have any 
interfaces or access to alter any code that manipulates the data; so 
none of their actions can introduce a virus or malware.  It's possible 
some subset of end users might also have access to TSO or the Unix 
Shell, where there are tools to create and execute code; but to use 
those tools to create or install malware would be a deliberate act with 
multiple steps, not an automated process a virus could easily exploit, 
and any potential damage would be restricted to those limited non-system 
directories/files/datasets to which that user had UPDATE access and to 
his own user session.


z/OS simply does not allow end users to exchange data in forms that 
potentially have hidden embedded executables or allow user data 
objects to be automatically executed unintentionally because of hidden 
type codes, much less allow such code to alter arbitrary files or system 
control information unrelated to the original data or user.  Those 
features in MS applications and MS Windows environments open many 
paths that can be and have been exploited by viruses.  Other virus 
exploits have attacked specific server flaws on specific platforms to 
introduce executable malware code, but for many reasons (different 
memory usage patterns, memory isolation/protection, incompatible 
hardware code, z/OS code and dataset isolation/protection, internet 
firewall protection, fewer TCP/IP server types to exploit), those 
approaches are very unlikely to succeed on z/OS.


The cryptographic processor is optionally used during the installation 
of z/OS maintenance, but merely as a tool to validate that no failure 
occurred during an electronic transmission of the software.  The 
reliability/security of the z/OS system software is established by 
validating the source of the code to a trusted vendor (IBM), not by 
encrypting it.  The primary significance of the cryptographic processor 
on z-architecture is in the offloading of the encryption of application 
data for securing application data and for maintaining application 
encryption keys in a physically secure manner.

--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 



Re: UCAT allocation to CATALOG task

2011-08-21 Thread Joel C. Ewing

On 08/21/2011 02:09 AM, Anthony Fletcher wrote:

Does anyone on the list remember advice that a change occurred with z/OS 1.11 
that might result in UCATs being allocated to the CATALOG task early on in IPL, 
when this almost certainly was not happening under z/OS 1.10?
We have been upgrading a 3-LPAR non-SYSPLEX system from z/OS 1.10 to 1.11, and 
didn't have any problems with the first two, but on the last one we ended up in a 
VARY OFFLINE PENDING situation when a volume was varied offline.
The system is not a sysplex but does share several volumes across the LPARs.
An F CATALOG,UNALLOCATE(catname) command did unallocate the catalog and remove the 
allocation against the volume that we wanted to take offline.


...
At least during normal system operation a UCAT is only allocated if 
there is a reference that requires it to be opened, and from that point it 
tends to stay open and allocated even when no longer needed.  If you are 
certain it is not needed, try closing the UCAT first 
(F CATALOG,CLOSE(catname)) and then see if you can UNALLOCATE it.  You 
can't unallocate it while it is still open.
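
As a sketch only (the catalog name UCAT.APPS and device number 0A3B are 
invented), the console sequence would look something like:

  F CATALOG,CLOSE(UCAT.APPS)          <== close the catalog if it is open
  F CATALOG,UNALLOCATE(UCAT.APPS)     <== then release the allocation
  V 0A3B,OFFLINE                      <== the pending offline should now complete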


--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 



Re: ABEND - 637-04

2011-08-19 Thread Joel C. Ewing
The requirement was that a physical block not be less than 18 bytes. 
Older tape technology could not distinguish blocks shorter than 18 bytes 
from tape noise.


Since an FB dataset with LRECL < 18 could easily end up with a final 
block containing only 1 record (possibly even in the middle of the 
dataset if extended with DISP=MOD), I would think the physical block 
restriction would also have to imply a minimum LRECL of 18. 
Similar considerations would apply to VB files after taking the BDW 
and RDW bytes into account; in that case there would have had to be 
a minimum actual record length of 14 (including the RDW) for the last 
record (LRECL for VB is only the maximum allowed) in order to guarantee success.
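
A made-up illustration of the FB case (the data set name, LRECL, and record 
count are hypothetical):

  //TAPEOUT  DD  DSN=HLQ.TINYREC.DATA,UNIT=TAPE,DISP=(NEW,CATLG),
  //             DCB=(RECFM=FB,LRECL=10,BLKSIZE=100)

A file of, say, 41 such records ends with a final block holding a single 
10-byte record, below the old 18-byte floor, even though every full block 
is 100 bytes.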


I'm pretty sure all those restrictions no longer apply, since all tape 
subsystems from the 3490 onward compact the logical blocks known to the 
operating system into physical "super blocks", and the physical block 
size on the tape is no longer under the control of operating system code.

  Joel C Ewing

On 08/19/2011 06:48 AM, Shmuel Metz (Seymour J.) wrote:

In <1240651948-1313637071-cardhu_decombobulator_blackberry.rim.net-95694995-@b12.c1.bise6.blackberry>,
on 08/18/2011 at 03:11 AM, Ted MacNEIL <eamacn...@yahoo.ca> said:


I thought there was a minimum of 18 bytes for LRECL.


No.

CIG




--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 



Re: Last card reader?

2011-08-18 Thread Joel C. Ewing

On 08/17/2011 06:05 PM, Ed Gould wrote:

  Rick,

My vague memory was that a 407 read cards and you had a board on which you could 
place wires for what you wanted to do (print, add, subtract) and take the results 
and print them out on the printer (132 positions?). The wires were collided. So 
you could manipulate the data if needed and move it to the print buffer.

I don't think you could divide, just add and subtract, and maybe multiply (not 
sure about multiply).

Ed


I used to have a xerox copy of a 407 board wiring configuration that was 
supposed to do multiplication.  I never tried it, as I no longer had 
access to a 407 by the time I first saw it, but I preserved it just 
because it was fascinating that it could be done at all.


The 407 had the ability to suppress card advance and re-read the same 
card multiple times.  Someone figured out how to use that feature to 
perform multiplication by repeated addition.  No doubt it would have been 
slow as heck, but amazing that it could be done at all.


--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 



Re: ABEND - 637-04

2011-08-17 Thread Joel C. Ewing
Knowing John's background, I would suspect he may have intended to state 
"For RECFM=U, 32760 MAY indeed BE optimum", or perhaps intended to 
restrict the remark just to load libraries.  I believe all the arguments 
in the archives for using block size 32760 with RECFM=U on 3390 were 
specifically directed at datasets used as load libraries. 
While that is probably the most common usage for RECFM=U, there is 
nothing to disallow other uses of RECFM=U; and those other 
applications, unlike program management, might not be smart enough to 
use shorter blocks to make effective use of a DASD track.


I agree that it has been established that BLKSIZE 32760 makes sense for 
a RECFM=U load library on a 3390.  But, a RECFM=U data set that is not a 
load library would not necessarily meet the additional requirement of 
containing a combination of long and short blocks, which as John states 
is a necessary condition for a block size greater than half-track to 
have any chance of decent track utilization on a 3390.

   Joel C. Ewing

On 08/17/2011 12:54 PM, John Eells wrote:

That really depends on the RECFM and, for variable blocked RECFMs, the
distribution of block sizes. For RECFM=U, 32760 is indeed optimum
(search the archives for why), and for some (not necessarily common)
distributions of variable blocks it might well use space better than
half track blocking. Anything that writes a combination of long and
short blocks can yield surprising results. (We found that z/OS fonts,
for example, use the least space at a counterintuitive block size.)

For plain old sequential FB data, though, it's quite right that 32K is a
poor choice of block size.

Bill Fairchild wrote:

3390 full track size is ca. 56K, and 3380 is ca. 47K. But you were
right - 32K is still not optimal for 3390.

snip



--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 



Re: Soft Capping an LPAR

2011-08-13 Thread Joel C. Ewing
If you have RMF, you can run RMFPM (a free download) on a workstation, 
have it track both the LPAR MSU and the LPAR 4-hour rolling MSU in near 
real time, and periodically check the values in RMFPM, perhaps even taking 
a few extra seconds to have RMFPM display a graph of the values to make 
trends more obvious.


If the object is just to get advance warning of capping, it isn't 
necessary to continually monitor the values, since you are dealing with 
a 4-hr average.  If you hit your cap, you usually have had a load 
problem (or application performance problem) for an extended period. Of 
course the potential warning time this gives depends on whether the LPAR 
is configured in a way that allows for a peak MSU that is higher than 
the soft cap MSU by a significant factor:  A peak LPAR MSU capability 
greater than 4 x cap MSU means capping could occur from less than 1 hour 
of peak load.  More modest peak/cap MSU ratios restrict how quickly the 
4-hour average can rise and also make it more likely that abnormally 
high loads will hit that peak and generate some response complaints from 
end users even before capping occurs, alerting that there is a load 
problem that needs to be resolved before capping makes it much worse.
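
To make that concrete with purely hypothetical numbers: with a soft cap of 
100 MSU and an LPAR able to sustain 500 MSU, a previously idle LPAR that 
suddenly runs flat out adds roughly 500 x t / 4 MSU to the 4-hour rolling 
average after t hours, so the average crosses the 100 MSU cap after about 
4 x 100 / 500 = 0.8 hours, well under an hour of warning.  If the same LPAR 
topped out at 150 MSU instead, the same arithmetic gives about 4 x 100 / 150, 
or roughly 2.7 hours, before capping could possibly begin.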


Using something like RMFPM to get a general idea of what constitutes a 
"normal" pattern of MSU usage for an LPAR at different hours of the 
day and on different days of the month can also teach you when you need to 
monitor things more closely, and it makes it more likely that changes 
which introduce application performance bugs will be spotted as an 
abnormal pattern and fixed sooner rather than later.

   JC Ewing

On 08/12/2011 09:05 PM, David Middlebrook wrote:

I have a z10 processor running multiple customer systems where we are using 
VWLC.  One of the systems was soft-capped during peak processing for the day, 
which caused multiple CICS regions to go max task, resulting in time-outs, 
retries, transaction back-outs, etc., which just added to the demand on CPU 
resources.

Is there a monitor that will raise an alert when a system is approaching being 
soft capped?

TIA


...


--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 



  1   2   3   4   5   >