Writing an EOF mark would prevent issues when opening a new dataset as
DISP=MOD that was allocated in a previous step but never opened. I don't
know if it is still possible, but that used to give interesting results
when the new dataset just happened to start at the same TTR as a
previously
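To make the hazard concrete, here is a minimal JCL sketch (the dataset, program, and DD names are hypothetical): the first step allocates a dataset but never opens it, so no EOF mark is written; the second step then opens it with DISP=MOD and positions to an undefined "end of data".

```jcl
//ALLOC   EXEC PGM=IEFBR14
//NEWDS   DD  DSN=MY.TEST.DATA,DISP=(NEW,CATLG),
//            UNIT=SYSDA,SPACE=(TRK,(5,5)),
//            DCB=(RECFM=FB,LRECL=80,BLKSIZE=27920)
//* IEFBR14 allocates NEWDS but never OPENs it, so no EOF mark is written
//APPEND  EXEC PGM=MYAPPPGM
//OUTDD   DD  DSN=MY.TEST.DATA,DISP=MOD
//* DISP=MOD positions after the "last record" -- with no EOF mark,
//* residual data left on the reused tracks may be treated as real records
```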
Hi Linda, how do you drill down to a volume's vtoc from QW s=vol* ?
All I get are the volumes stats/summary but I see no way of drilling
down into the detail. We are on version 6.9.
Regards,
Gil.
On Tue, 30 Sep 2008 19:54:49, Linda Mooney
[EMAIL PROTECTED] wrote:
I love that feature
If you are using QuickRef 6.6 and above (iirc)
Then you need to physically move the cursor to the volser and hit enter. It
will pop up.
You cannot tab to the volume, you have to place the cursor on the volume.
Lizette
Hi Linda, how do you drill down to a volume's vtoc from QW s=vol* ?
All
One of our recurring problems is with the management, i.e. proper use of
the HMC by the operators when they perform their job responsibilities;
1) IPL an lpar with a specific load address/load parm.
2) Change lpar settings, storage, cpu's, weights,...
We have had many instances of wrong lpars
What are some best practices that you use to prevent these and other operator
errors while performing HMC tasks?
Education
Threat of termination
Use of staff more sophisticated than systems operators
Locking of HMC
Change control/auditing
-
Too busy driving to stop for gas!
Umm, train them? Then use test LPAR's to keep their skills fresh? And
operators should NOT be changing weights, storage, CPU's etc. without a
sysprog there. Define the CP's and use config to vary them on/off.
Just my $.02
Doug
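Doug's suggestion to "define the CP's and use config to vary them on/off" can be sketched with standard MVS operator commands (the CPU number below is only an example), which avoids touching LPAR definitions on the HMC at all:

```text
D M=CPU              display the online/offline status of each processor
CF CPU(2),OFFLINE    take logical CP 2 offline
CF CPU(2),ONLINE     bring logical CP 2 back online when capacity is needed
```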
Mark Jacobs wrote:
One of our recurring problems is with the
I tend to agree with Doug. The only thing our operators ever do is IPL. They
do have to select the correct LPAR and put in the correct load address and
loadparm if that changes. The loadparm rarely changes... usually when we
upgrade the OS it temporarily points to LOADx9 for example... for z/OS
Hi Mark,
In respect of your request for information on:
What are some best practices that you use to prevent these and other
operator errors while performing HMC tasks?
Having had some experience with a company managing 100+ z/OS Systems, across
7+ sites I think the two staple requirements to
snip
proper use of the HMC by the operators when they perform their job
responsibilities
unsnip
I tend to agree that 'their responsibility' appears to be more than
operators tend to have. Another option is to support remote HMC access,
inside a VPN(ish) environment, and let the sysprog(s) tend
-Original Message-
From: IBM Mainframe Discussion List On Behalf Of Mark Jacobs
One of our recurring problems is with the management, i.e. proper use
of
the HMC by the operators when they perform their job responsibilities;
1) IPL an lpar with a specific load address/load parm.
Let the operator do the IPLs with only one default LOAD profile that has
'Use dynamically changed address' selected for the load address and
'Use dynamically changed parameter' selected for the load
parameters. The system programmers will then control everything via HCD.
Of course the operators still need to perform the
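For context, the 8-character load parameter that this approach takes out of the operator's hands breaks down roughly as follows (the device number and suffix here are made-up examples); with the 'dynamically changed' boxes checked, HCD supplies these values instead of the operator:

```text
Load parameter:  0A8200M1
                 ~~~~       IODF device address (0A82)
                     ~~     LOADxx suffix (selects e.g. SYS1.IPLPARM(LOAD00))
                       ~    IMSI character (controls message display/prompting)
                        ~   Nucleus ID (1 is the default)
```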
-Original Message-
From: IBM Mainframe Discussion List On Behalf Of John McKown
On Tue, 30 Sep 2008 14:53:12 -0500, Chase, John wrote:
Hi, All,
I can't remember what the //SYSABOUT DD statement was for (doing a
little cleanup here). Didn't it have something to do with OS/VS
Bingo, your memory serves you well, Liz. Thanks for the clarification.
Gil.
On Wed, 1 Oct 2008 08:41:49 -0400, Lizette Koehler
[EMAIL PROTECTED] wrote:
If you are using QuickRef 6.6 and above (iirc)
Then you need to physically move the cursor to the volser and hit
enter. It
will pop up.
Mark,
I've found that training and frequent use of the HMC reduce confusion.
However, some safeguards would be:
1. Change the PASSWORDS for all users other than OPERATOR.
2. Lock the IPL Profiles. Forces a DO YOU REALLY WANT TO IPL PROD
moment.
3. Prepare HMC Documents and procedures.
On Tue, 30 Sep 2008 08:10:16 -0500, Ron Wells [EMAIL PROTECTED]
wrote:
Any good cookbooks/ref's on install steps--hints--cautions..?
I see that some books were suggested in a later post, and those are the ones.
I'm big into SMB because we host a Windows file system of ZIP files of data
Jihad K Kawkabani wrote:
Let the operator do the IPLs with only one Default LOAD profile that uses
dynamically changed address
for Load address and Use dynamically changed parameter for the load
parameters. The system programmers will then control everything via HCD.
Of course the operators
Hi,
I'm dealing with one production job whose elapsed-time has increased
dramatically in the past month. Since I'm doing this remotely and unable to
collect relevant data by myself, I must rely on the customer to do that for
me. It's not so convenient, so I must do more 'theoretical' analysis.
Liz,
We are on QuickRef 6.6. I'm using the cursor arrows (not the tab key)
to place the cursor under the first character of the volser and then
pressing enter. Nothing pops up for me. I am the mainframe capacity
planner here, not the z/OS system programmer, so I don't install
QuickRef. Could
First, I would respectfully submit that this is not an operator problem.
Operators make mistakes and any dependence on operators accepts that as
a consequence.
To eliminate the mistakes, you must eliminate the operators.
That said, I have a few tricks that help minimize my screw ups:
1. You can
The function works on 6.8. You have to use the cursor control keys (arrow
keys) to move the cursor under the volume serial, then press the enter key.
Regards,
Herman
-Original Message-
From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On Behalf
Of Kelman, Tom
Sent: Wednesday,
Hi
Difficult to advise from here. In the past we have seen more or less the
same effect for some jobs, and found that for some volumes
VTOC indexing was disabled.
Johnny Luo wrote:
Hi,
I'm dealing with one production job whose elapsed-time has increased
dramatically in the past
It is a trade-off.
It either protects people from doing something stupid (reading a dataset
they did not ever write) and adds overhead (writing EOF to a file which
will be properly written later anyway) or allows them to be a little
smarter about what they are doing.
I can see it being unsafe
We use DASD-only for the system logger.
When/how do the offload datasets get deleted?
We have a CICS region with 17 offload datasets and 16 of them are
migrated. When I run the IXCMIAPU utility and list the logstream, it
indicates that only .A000 may be orphaned?? These offload datasets
Not a lot to go on. For example, we don't even know how many files are
involved. Assuming only one, then how is it being accessed? Sequential?
VSAM sequential? VSAM random? Reading? Writing? Some of both? Each of
those would suggest a different attack vector.
The numbers suggest a 'knee of the
On Wed, 1 Oct 2008 09:25:56 -0500, Hal Merritt wrote:
First, I would respectfully submit that this is not an operator problem.
Operators make mistakes and any dependence on operators accepts that as
a consequence.
People make mistakes. Operators certainly are in that category. Operators
can be
It either protects people from doing something stupid (reading a dataset they
did not ever write) and adds overhead (writing EOF to a file which
will be properly written later anyway)
The overhead is minuscule in the scheme of things.
I can see it being unsafe because reallocating a dataset
But the OP is complaining that that strategy is not working. And that
experience/observation is in line with my own.
That is, even trained, experienced operators are making an unacceptable
number of mistakes. We have to drill down to find root causes.
Here I think the most reasonable attack is
I can see it being unsafe because reallocating a dataset with the same
extents as one which was accidentally deleted is a technique sometimes
used to salvage data.
I have never attempted this myself, but I've also not seen very many
successful attempts, either.
FYI, we recovered an
On 1 Oct 2008 09:30:31 -0700, [EMAIL PROTECTED] (Ted MacNEIL) wrote:
It either protects people from doing something stupid (reading a dataset they
did not ever write) and adds overhead (writing EOF to a file which
will be properly written later anyway)
The overhead is minuscule in the scheme
Deletion of an offload dataset only occurs when a new offload dataset for that
logstream gets allocated. It will only delete the offload dataset if all
entries in the dataset are marked as logically deleted. That is controlled by
the RETPD and AUTODELETE parms.
Larry Gray
Large Systems
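A sketch of the IXCMIAPU step that lists a logstream and its offload datasets (the logstream name is hypothetical):

```jcl
//LISTLOG  EXEC PGM=IXCMIAPU
//SYSPRINT DD  SYSOUT=*
//SYSIN    DD  *
  DATA TYPE(LOGR) REPORT(NO)
  LIST LOGSTREAM NAME(MY.CICS.DFHLOG) DETAIL(YES)
/*
```

RETPD and AUTODELETE themselves are set on the DEFINE/UPDATE LOGSTREAM statement, e.g. `UPDATE LOGSTREAM NAME(MY.CICS.DFHLOG) RETPD(7) AUTODELETE(YES)`; only when every entry in an offload dataset is logically deleted under those rules can the dataset itself be physically deleted.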
FYI, we recovered an accidentally deleted SYS1.PARMLIB by that technique.
Always writing an EOF would prevent that.
So would writing to the 'new' file.
-
Too busy driving to stop for gas!
--
For IBM-MAIN subscribe / signoff /
I am not sure how you define your LOAD profile. What I do is use the
DEFAULTLOAD profile supplied by IBM and then check the two boxes:
'Use dynamically changed address', across from the Load address box.
'Use dynamically changed parameter', across from the load parameter box.
In HCD:
Make sure in
From your QW main panel, enter QINFO. That will bring up a panel with the
release info. QuickRef supports database-only installs for most upgrades, and
there is a zap to update the product level you see on the regular screen to
match the database level, so your product level may not be the same
If you are using a version that supports this feature, then the software
may need to be updated. For most QuickRef updates, it is sufficient to
reload the database,
Bill
On Wed, 1 Oct 2008 09:27:31 -0500, Kelman, Tom
[EMAIL PROTECTED] wrote:
Liz,
We are on QuickRef 6.6. I'm using the cursor
On Wed, 1 Oct 2008 11:47:18 -0500, Hal Merritt wrote:
But the OP is complaining that that strategy is not working. And that
experience/observation is in line with my own.
The OP listed two problems.
On Wed, 1 Oct 2008 09:11:17 -0400, Mark Jacobs wrote:
One of our recurring problems is with the
Tom Marchant wrote:
It should be no surprise that, with a philosophy of removing responsibility
from operators, they would act irresponsibly. My experience is that
operators who are treated with respect and given responsibility do good
work. Attempts to make a shop operator-proof don't work
We are going to upgrade from a z/890-370 to a z9 BC-X03 soon and of
course I'd like to add more memory to it if I can.
Does anyone know what an 8 Gig chunk would cost? Ballpark not exact
figures unless you know.
Thanks up front,
Claude
Claude Richbourg
Florida Department of Corrections
Systems
I think list price is $80K. As with most things, it may be negotiable.
Check with your BP.
--
Rich Smrcina
VM Assist, Inc.
Phone: 414-491-6001
Ans Service: 360-715-2467
rich.smrcina at vmassist.com
http://www.linkedin.com/in/richsmrcina
Catch the WAVV! http://www.wavv.org
WAVV 2009 - Orlando,
Just a quick question. I know that Java work is zAAP eligible. And I know
that a lot of DB2 work is zIIP eligible. From what I've read, to be zIIP
eligible, the code must run in an enclave SRB. What other products do
that? Unfortunately, our shop is very primitive. No DB2 or other RDMS. 99%
of our
I think certain types of IP workload are supposed to start taking
advantage of the zIIP also. I think it's geared more toward distributed
workloads, so it might not be worth it in your shop.
-Original Message-
From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On
Behalf Of
Unfortunately, our shop is very primitive. No DB2 or other RDMS. 99% of our
data is in either VSAM or sequential files. I'm thinking that a zIIP would be
useless in our environment. True?
Very!
-
Too busy driving to stop for gas!
I came up as an Operator and I have been a sysprog for a number of years now.
All of my fellow sysprogs here were Operators at one time. I suspect that is
true for most of us.
I remember, when I started, that we were lucky if at least one of our systems
didn't turn toes up and crash at
You could also check used. I'm looking at adding some Ficon to my z9-bc
and I got a price tag on used that was substantially less than new.
Rex
Subject: Re: 8 Gig memory in a z9.
I think list price is $80K. With most things it may be negotiable.
Check with your BP.
--
Rich Smrcina
VM
That's true, until you get an operator who's too lazy to crack open a
manual, whether to look up a message or to try and learn something new, and
who's also too gutless to accept any responsibilities. And a
management team that's too soft-hearted (or soft-headed) to do anything
about it.
I tend to agree with that assessment.
John McKown wrote:
Just a quick question. I know that Java work is zAAP eligible. And I know
that a lot of DB2 work is zIIP eligible. From what I've read, to be zIIP
eligible, the code must run in an enclave SRB. What other products do
that? Unfortunately,
I saw only the one problem:
We have had many instances of wrong lpars being deactivated and then
ipled incorrectly, changes to ipl environments not being applied
correctly...
I've also seen that even with trained, experienced operators to include
myself. Training and experience go a long way,
John,
You are essentially correct. The original use for zIIP was for
Distributed access to the mainframe (hence the enclaves) however more
work is moving to exploit the zIIPs, like encryption products (e.g.
CA-BrightStor Tape Encryption), sort products, and DB2 utilities. You'd
have to look at
Fwiw, check out
http://investor.ca.com/releasedetail.cfm?ReleaseID=317136 - CA has a
good number of solutions that support and exploit the zIIP.
Reg Harbeck
ca
Product Management Director for Mainframe Strategy
tel: +1-403-605-7986
-Original Message-
From: IBM Mainframe Discussion List
On Wed, 2008-10-01 at 14:22 -0500, John McKown wrote:
to be zIIP eligible, the code must run in an enclave SRB.
What other products do that?
IDMS R17, in beta, offloads some percentage of system mode time to zIIP.
(Unfortunately CA has refused to discuss any performance experiences
whatsoever,
I'm a small shop (1 CEC, 4 LPARs + sandbox), so life is a little easier.
For major changes a sysprog is on site to (at least) verify the changes
were correct.
Sysprog handles all updates to the HMC (we don't use dynamic IPL parm
changes) we just update the LOAD profile as needed (maybe once a year).
John McKown wrote:
Just a quick question. I know that Java work is zAAP eligible. And I know
that a lot of DB2 work is zIIP eligible. From what I've read, to be zIIP
eligible, the code must run in an enclave SRB. What other products do
that? Unfortunately, our shop is very primitive. No DB2 or
On Wed, 1 Oct 2008 14:22:58 -0500, John McKown [EMAIL PROTECTED] wrote:
Just a quick question. I know that Java work is zAAP eligible. And I know
that a lot of DB2 work is zIIP eligible. From what I've read, to be zIIP
eligible, the code must run in an enclave SRB. What other products do
that?
Does anyone know if there is a way to automate changing the weight
distribution in the HMC? I have never heard of such a thing, but thought I'd
ask anyway.
TIA
Does anyone know if there is a way to automate changing the weight
distribution in the HMC? I have never heard of such a thing, but thought I'd
ask anyway.
I'm not sure what you mean.
But, IRD/WLM can manage it based on CEC usage goals, within LPARs in the same
processor.
-
Too busy driving
Plenty of other solutions can use zIIPs:
IBM's Comm Server for IPSEC
IBM'S Comm Server for Hipersocket Large Messages
IBM's XML parser (some parts)
Syncsort (can also use MIDAWs)
CA's VTape
CA's Tape Encryption
CA's Datacom (release due out soon)
CA's IDMS (release due out soon)
CA's
Yes. It's called WLM :-)
Seriously, I thought I read somewhere that WLM can act globally.
-Original Message-
From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On
Behalf Of gsg
Sent: Wednesday, October 01, 2008 3:17 PM
To: IBM-MAIN@BAMA.UA.EDU
Subject: HMC and changing the
On Wed, 1 Oct 2008 21:05:11, Ted MacNEIL [EMAIL PROTECTED] wrote:
Does anyone know if there is a way to automate changing the weight
distribution in the HMC? I have never heard of such a thing, but thought I'd
ask anyway.
I'm not sure what you mean.
But, IRD/WLM can manage it based on CEC
On Wed, 1 Oct 2008 11:11:37 -0600, Howard Brazee wrote:
On 1 Oct 2008 09:30:31 -0700, [EMAIL PROTECTED] (Ted MacNEIL) wrote:
The overhead is minuscule in the scheme of things.
I can see it being unsafe because reallocating a dataset with the same
extents as one which was accidentally deleted
Norman Hollander on DesertWiz wrote:
Plenty of other solutions can use zIIPs:
IBM's Comm Server for IPSEC
IBM'S Comm Server for Hipersocket Large Messages
IBM's XML parser (some parts)
Syncsort (can also use MIDAWs)
CA's VTape
CA's Tape Encryption
CA's Datacom (release due out soon)
gsg wrote:
Does anyone know if there is a way to automate changing the weight
distribution in the HMC? I have never heard of such a thing, but thought I'd
ask anyway.
You can programmatically change the weights and just about every other
parameter using the System z API.
--
Edward E
David,
If you are only interested in IDMS performance experiences, all I can do is
forward your request on to the IDMS people. If you are interested in our
experience with the CA Vtape and CA Tape Encryption products (both of which
have supported zIIPs for well over a year now), I would be happy to help. We
John,
As others have stated, it really isn't just DB2 or RDMS's anymore.
Encryption and compression can both be offloaded nicely to zIIP's; so our
products like CA-Vtape (which has an option to use software compression to
compress the CACHE) and CA Tape Encryption (which obviously uses encryption
I will be out of the office starting 10/01/2008 and will not return until
10/05/2008.
I will be out of the office at a Disaster Recovery test on Thursday
Friday (10/2 10/3). I'll have limited access to messages, but will try
to answer whenever I can. I will return on Monday, October 6th.
Also, IBM Scalable Architecture for Financial Reporting (SAFR) benefits
from zIIP:
http://www.ibm.com/systems/safr
System Data Mover (SDM) benefits (z/OS Global Mirror).
More to come.
- - - - -
Timothy Sipples
IBM Consulting Enterprise Software Architect
Based in Tokyo, Serving IBM Japan /