Or use IEBDG...
-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@listserv.ua.edu] On
Behalf Of Joel C. Ewing
Sent: Thursday, June 21, 2012 6:00 PM
To: IBM-MAIN@listserv.ua.edu
Subject: Re: [IBM-MAIN] SMS processing setup for secondary extents
On 06/21/2012
Sad news indeed.
-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU]
On Behalf Of ITURIEL DO NASCIMENTO NETO
Sent: Monday, July 02, 2012 6:17 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: [IBM-MAIN] RES: Sad News About Rick Fochtman
Please, send
, Elardus Engelbrecht
elardus.engelbre...@sita.co.za wrote:
Ron Hawkins wrote:
I would urge you to consider vertical pooling rather than horizontal
pooling.
...
My key point is to go horizontal, and avoid vertical pooling.
Vertical against horizontal? Perhaps I missed something
:56 PM, Ron Hawkins
ronjhawk...@sbcglobal.net wrote:
Rob,
I have seen the small and large dataset concept discussed since the
early days of DFSMS. To tell the truth I have never seen the benefit
of doing this.
It has been suggested that it helps reduce allocation failures
Fair dinkum mate? I thought you were comin' the raw prawn with me. We could
blow the froth off a few and jaw wag about that for ages.
-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU]
On Behalf Of John Gilmore
Sent: Monday, July 30, 2012 8:11 AM
Radoslaw,
Infrequent but not illogical.
I've used it for datasets that are only opened and written to on a weekly or
monthly basis. The secondary extents are allocated and used when anything is
written.
You will also find empty datasets with space released to zero tracks. This
is one of the
With TN3270, reconnect works fine, and has for at least half a decade.
I rely on it to switch from my laptop to my desktop and back.
Yes there is the proviso of the same screen size.
-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU]
On
Kees,
I've just done some trial and error setup for a service unit based resource
group. This one is running about 300 jobs at once.
We've noticed that when submitting all these jobs after some inactivity, the
work in the resource group will run unconstrained for a
few minutes, and
VOLCOUNT -
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DGT2U2B1/2.3.13.6.52?SHELF=dgt2bkb1&DT=20120113165441&CASE=
I'm not aware of a restriction for ZFS files.
Ron
-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU]
On Behalf Of
John,
Mate, when you restore a volume with COPYVOLID, and there is a volume online
with the same volser, the restored volume will be taken offline as described
elsewhere in this thread.
Just like when you try to vary a duplicate volser online, the original
volume is unaffected by the restore,
John,
You can read the SMF files on one LPAR through FTP on another where you have
SAS and MXG installed. You can do this using the FTP keyword in the FILENAME
statement.
On Windows the syntax for a concatenation is:
FILENAME SMF FTP(
Ken,
I believe that DFW cannot be turned off on 2105 and 2107 emulation.
CFW can still be set inactive, but the recommendation for HDS is to leave it
turned on. CFW reduces cache usage significantly as there is only one copy
of the write, and the CFW queues are treated differently to DFW. There
Jean Louis,
Use a guaranteed space storage class, and specify a unit count in the JCL -
e.g. UNIT=(,5). Specify a VOLSER list if you want the chunks in a particular
volume order.
This will allocate the primary extent requested on each of five volumes.
When you write to it sequentially the five
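For what it's worth, a minimal JCL sketch of that allocation (the storage class, dataset name and volsers here are invented; the only real requirement is that the storage class is defined with GUARANTEED SPACE=YES):

```jcl
//* Illustrative only: STORCLAS GSPACE is assumed to name a storage
//* class defined with GUARANTEED SPACE=YES. Dataset name and
//* volsers are invented.
//CHUNKS   DD DSN=PROD.STRIPED.FILE,DISP=(NEW,CATLG),
//            STORCLAS=GSPACE,
//            UNIT=(,5),
//            VOL=(,,,5,SER=(VOL001,VOL002,VOL003,VOL004,VOL005)),
//            SPACE=(CYL,(500,100))
```

Each of the five volumes gets the 500-cylinder primary at allocation time; drop the SER list if you don't care about the volume order.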
-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU]
On Behalf Of DEBERT Jean-Louis
Sent: Wednesday, November 07, 2012 1:16 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: [IBM-MAIN] multi-volume SMS file allocation
Ron Hawkins wrote:
Use a guaranteed space storage class
Workload Manager is your friend.
Cram on this before attempting anything else. It is your primary tool to
accomplish your objectives, but you need to understand how the tool works.
If you don't know which end of the hammer to hold you'll never drive in the
nail.
If you don't have SAS and MXG,
Charles,
Yes you have implicitly requested one volume.
Assuming the DATACLAS has 1 or blank for the unit count field you have
effectively coded a unit count of 1 in this JCL.
You would have to code UNIT=(SYSDA,p) where p is 1, or multiple volsers
to increase the unit count.
Ron
-Original
Ed,
That's great news. Good for the economy of a very poor country.
Ron
-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU]
On Behalf Of Ed Gould
Sent: Sunday, October 06, 2013 9:00 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: [IBM-MAIN] IBM now
Gil,
They're not really virtualized. They are encapsulated.
Ron
Sent via the Samsung Galaxy Note® 8.0, an AT&T 4G LTE tablet
Original message
From: Paul Gilmartin paulgboul...@aim.com
Date: 12/07/2013 09:49 (GMT-08:00)
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re:
I regularly vary 1000s of volumes online/offline several times a day, and
sometimes several times in a few minutes.
I don't want no steenkin' changes unless the default remains as it is now.
I do recall doing this on the z9 was a major pain and slow as watching grass
grow, but on a z196 it's all
behavior as the default.
Ron
-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU]
On Behalf Of Elardus Engelbrecht
Sent: Thursday, December 19, 2013 2:50 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: [IBM-MAIN] VARY OFFLINE fat finger
Ron Hawkins
Ye Gods, I work for Hitachi. Should I leave the list?
-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU]
On Behalf Of Scott Ford
Sent: Sunday, December 08, 2013 11:41 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: [IBM-MAIN] hexadecimal?
Btw
David,
Perhaps with a different twist, what problems are you trying to prevent :-)
I wouldn't go to the level of Controller separation anymore - it'd be a bit
of overkill. However I'd still look for some level of hardware separation
within the controller to ensure paging runs as fast as possible
Art,
Channel paths, ports, Front End Directors and ports are easy to map so that
locals can be allocated across every possible path.
The missing part of most strategies is understanding the parallelism from the
cache to the disk and array group. There are many schema available for RAID and
All,
My history with z/OS is more about performance and tuning, rather than
hardcore sysprogging.
Tuning is almost always about doing it a new way, and I only wish there were
more newbies in this field with no preconceived ideas about how it has
always worked. Back when I was not Mr Congeniality
it the foremost attribute these days?
Rob Schramm
On Jan 3, 2014 4:15 PM, Ron Hawkins ronjhawk...@sbcglobal.net wrote:
Art,
Channel paths, ports, Front End Directors and ports are easy to map
so that locals can be allocated across every possible path.
The missing part of most strategies
Ed,
Your original question to the list, and I quote verbatim: "Is the owner
of IBM-MAIN alive or dead?"
Darren's first sentence in his response: "Yes, Ed, I'm still alive."
What part of your original question was not answered?
Ron
-Original Message-
From: IBM Mainframe Discussion
I wish they'd fix the ISMF panels to be whole screen length on a 60 line
screen first...
-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU]
On Behalf Of Joel C. Ewing
Sent: Friday, January 24, 2014 7:36 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject:
Radoslaw,
One of the problems I see with Host based overwrite is that you can only
overwrite the current location of the logical volume.
If you are using IBM's Eazytier, or Hitachi's HDT you really do not know the
past location of the chunks of the volume, only the last location. The same
Can you release extents on a legacy VSAM dataset?
-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU]
On Behalf Of Shmuel Metz (Seymour J.)
Sent: Sunday, February 16, 2014 6:39 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: [IBM-MAIN] Was: Implicit
Bill,
Think EAV volume...
I just had a lot of fun with this very scenario building DEFRAG experiments.
2x1 track dataset, and then delete every second dataset across 256 volumes.
Who would have thought that the UCAT size would be my biggest problem (5
million datasets).
Ron
Bob,
It's usually simpler to specify UNIT=(3390,7) and forget the VOL parm
altogether. An explicit unit count is not ignored for SMS managed datasets.
I used to use ACC/SRS to force all DSORG=PS datasets to be allocated with a
unit count of 5 - solved my X37 nightmares big time.
Blew TIOT in a
Shai,
I'd hazard a guess that shops using some type of thin provisioning would
simply make the volume the maximum size supported by the controller, even if
they planned to just put the JES2 Checkpoint on the volume.
In this case a 10 CYL dataset on a 262K CYL volume is not a problem, or a
waste.
I recall it was 1/3 of the way in, and the idea was nuked by cache (3880-13/23).
No point clustering your busiest datasets together 1/3 or 1/2 of the way into
the volume when they are usually cache resident. You create a nice quiet area
on the platter for seeking between those less busy
That sounds like GTFPARS.
It used to read a GTF trace and generate IEHLIST for the volumes it found. It
used to generate a kewl seek histogram, and a dataset seek activity report you
could use to tune the location of your busiest datasets.
I used to tune SYSRES dataset positions using this,
Miklos,
As it is a PDSE load library, LLA freeze can probably help you first.
VLF will not do anything for PDSE that are not load library or REXX. You'll
need to tune your PDSE buffering parms to get improvements.
Lizette has a lot of experience with this...
Ron
-Original Message-
On 06/04/2013 at 08:14 AM, Ron Hawkins ronjhawk...@sbcglobal.net said:
VLF will not do anything for PDSE that are not load library or REXX.
CLIST.
--
Shmuel (Seymour J.) Metz, SysProg and JOAT
Atid/2 http://patriot.net/~shmuel
We don't care. We
Norma,
In terms of the buffering algorithm Hiperbatch and BLSR are really chalk and
cheese.
Hiperbatch is designed for buffering sequential access from multiple,
concurrent jobs where the buffering is designed to have trailing jobs catch
up to the leading job, and then have the leading job prime
Manshadi,
I'm a bit confused. You mention the HDS 9980 and XRC in the caption, the HDS
USP and asynch in the content.
There are three asynchronous remote copy methods supported in the 9980 and
USP:
- XRC
- TrueCopy Asynch
- Universal Replicator (HUR)
It's hard
Ed,
Actually the TCW has a shorter path length to process on the storage side, at
least on HDS kit.
On the VSP we see an improvement in response time, tens of microseconds, and an
increase in max throughput of a VSD (Virtual Storage Director - the MP that
handles almost everything except data
Skip,
There was a method posted many years ago that used a lexicon of common
words, and passwords, to encrypt a UID and match it to the value stored in
RACF. Is this what you are referring to?
The OP of that post mentioned this as an auditing tool, but I recall a
lengthy and robust discussion
Paul,
A
B
Just omit the duplicates, but keep the 1st one...
Ron
-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU]
On Behalf Of Paul Gilmartin
Sent: Thursday, August 15, 2013 7:34 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: [IBM-MAIN]
On Behalf Of Paul Gilmartin
Sent: Sunday, August 18, 2013 4:45 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: [IBM-MAIN] comparing binary file
On Sun, 18 Aug 2013 15:48:17 -0700, Ron Hawkins wrote:
A
B
Just omit the duplicates, but keep the 1st one...
On Thu, 15 Aug 2013 17:24:43
If your company is interested in doing this sort of benchmarking on a regular
basis, or having access to analysis and benchmarks outside of IBM I suggest you
look into the PAI/O Driver from Performance Associates.
Ron
-Original Message-
From: IBM Mainframe Discussion List
I forget when it was, but quite some time ago, perhaps pre-SMS, allocation
started massaging the Eligible Device list so that unit addresses already
allocated to a DD in a step were moved to the bottom of the EDL.
The reason for this was to prevent the behavior I think you are describing
below -
I'm with Lizette in thinking you could do this a hundred different ways with
DCOLLECT and MXG, and pretty cheaply on a workstation if you have SAS for
Windows.
I also agree that going ES/EA from the get go would be a benefit. Did that 15
years ago and it worked a treat.
Ron
Sent via the
traditional British dd/mm/yy?
Surely you meant to say almost all of the rest of the world...
-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU]
On Behalf Of John Gilmore
Sent: Saturday, September 14, 2013 5:21 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Bobby,
From our default IEAIOSnn member.
MIH DASD=01:00
HYPERPAV=YES
ZHPF=YES
Individual addresses can be used to override these values.
Ron
-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU]
On Behalf Of Herring, Bobby
Sent:
Radoslaw,
I always thought that HDP or Eazytier would render this feature useless, but
who knows.
There was a Share presentation two years ago by the development team that
talked about this and some of the details of the mechanism.
My recollection is that a Storage Class with an MSR of 3ms
Bill,
The problem for you is that changing the DATACLAS to one with EA, or
changing the existing DATACLAS to support EA will not change the attributes
of the files already allocated.
Think of DATACLAS as a JCL DD statement. It describes the attributes of the
dataset to be allocated. Those
I did exactly this shop wide for VSAM KSDS in 1997. It was a seamless change.
-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU]
On Behalf Of John McKown
Sent: Wednesday, April 16, 2014 10:54 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re:
Because 1.14 would not sit well with the Cantonese...
-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU]
On Behalf Of Nims,Alva John (Al)
Sent: Friday, April 25, 2014 10:40 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: [IBM-MAIN] Beyond the EC12
Reminds me of a Hong Kong building I was living in.
The floors went 11, 12, 12a, 14, 15...
Which was strange seeing 14 is unlucky for the Cantonese (that's twice I've
used this - boring...)
-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU]
On
, 2014 11:51 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: [IBM-MAIN] Beyond the EC12
Ron Hawkins wrote:
Reminds me of a Hong Kong building I was living in.
The floors went 11, 12, 12a, 14, 15...
Weird. If you truly believe in God, you really don't need all those
superstitions.
Perhaps
All,
Shmuel's reply prompts me to suggest that you consider what lies outside the
nine dots.
Firstly, would you consider thin provisioning of the installation volumes and
SYSRES? This will get you onto 3390-A for these volumes using any size you
want. Why is this good? Because DVE is also
Yeah, what he said.
This goes along with my response to this thread: people are still piddling
around with mod 9's and even mod 3's? Once dynamic pooling became
available on our Hitachi VSP I reconfigured all our volumes to mod 54's.
I've
provisioned volumes with 120% of the array's
DS8870 does, but don't
know if it was an option.
Ken
On Thu, May 1, 2014 at 4:53 PM, Ron Hawkins
ronjhawk...@sbcglobal.net wrote:
Yeah, what he said.
This goes along with my response to this thread: people are still
piddling
around with mod 9's and even mod 3's? Once dynamic
Victor,
If I understand the problem at the root of your questions, you are trying to speed
up DFSMSdss logical dumps, especially for compressed PS-E data sets.
From your questions you are focusing on the tape output rate as the gating
factor for the elapsed time of the dump, but have you looked at
Juergen,
You need to allow for some double accounting of unchanged pages on AUX.
While it may not be at the root of what you are seeing, pages that are paged in
from AUX and are not changed will remain on AUX.
There is some growth in AUX over time because of this. I have no measurement,
but I
No it's not faster.
DFSMSdss calls repro under the covers for most VSAM data set copy operations,
but buffering is limited to whatever was used when the data set was defined.
I hope everyone uses a large BUFSP or BUFND value for REPRO (like 849920).
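A sketch of the sort of buffering override I mean, using the AMP parameter on the REPRO DDs (dataset names are invented, and the BUFND value is just an example - tune it to your CI size):

```jcl
//* Illustrative REPRO with enlarged VSAM buffering via AMP.
//* Dataset names are invented; BUFND=32 (or a BUFSP around 849920)
//* is an example value, not a recommendation for every CI size.
//COPYSTEP EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//INDD     DD DSN=PROD.KSDS.IN,DISP=SHR,AMP=('BUFND=32')
//OUTDD    DD DSN=PROD.KSDS.OUT,DISP=OLD,AMP=('BUFND=32')
//SYSIN    DD *
  REPRO INFILE(INDD) OUTFILE(OUTDD)
/*
```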
Ron
-Original Message-
From: IBM
Gary,
I may be having a senior moment but I think I have seen this when there are a
lot of logical dumps starting that all hit the same catalog. Even your
INCLUDE(**) is going to spend a big hunk of time searching the catalogs and
volumes for datasets before it starts to dump anything.
My
ADRDSSU ECB WAIT
On Thu, 19 Jun 2014 05:47:40 -0700, Ron Hawkins wrote:
Gary,
I may be having a senior moment but I think I have seen this when there
are a lot of logical dumps starting that all hit the same catalog. Even your
INCLUDE(**) is going to spend a big hunk of time searching
I love pork chitlins, and chicharon baboy is great with a beer.
Used to buy Haggis in Hong Kong that came in a vacuum sealed plastic pouch.
-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU]
On Behalf Of Gross, Randall [PRI-1PP]
Sent: Tuesday,
That works both ways Ed.
-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU]
On Behalf Of Ed Gould
Sent: Monday, July 7, 2014 9:15 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: [IBM-MAIN] Freebie software; was Feebie software
Shane:
That an
Agree with you Ted. (finally :-)
-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU]
On Behalf Of Ted MacNEIL
Sent: Monday, July 14, 2014 10:07 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: [IBM-MAIN] Cleanup Sort Volumes
Shouldn't the real
Jean Louis,
I'm sorry if my advice led you astray. I know I had a solution for this many
years ago, and it used to work fine for multi volume SAS datasets.
I'm trying to find some time to do some testing to remind myself of how this
worked.
Ron
-Original Message-
From: IBM Mainframe
Mike,
While this will erase the logical track presented to a CKD host, it does not
actually erase the data on the physical drive.
You need to use the vendor's secure erase facility if the controller has
one, or actually overwrite the whole volume with your favorite data
generation or copy
, and it can be
checked/confirmed.
[Ron Hawkins] If you are using any of the current day wide striping or thin
provisioning products from IBM, EMC and HDS then this sort of mapping will
be near impossible. The closest you'll get is the array groups that make up
the pool. If you are using any dynamic
Radosalw,
Yes, it depends. It can depend on things you described, but also on the
detail level. Usually you can map logical volume to the dasd array ;-) but
sometimes you can go down into details like raid group.
[Ron Hawkins] Mapping a HDP pool is a bit like mapping a log structured
each volume so much it did not result in any decreased elapsed time.
Erasing 1 volume on 20 8-packs at once over 8 ESCON cables did not result
in slow downs.
Speed for other DASD devices I erased had different timings and will vary
for you.
On Thu, Dec 13, 2012 at 9:44 AM, Ron Hawkins
written before the next record is sent.
On Thu, Dec 13, 2012 at 11:37 AM, Ron Hawkins
ronjhawk...@sbcglobal.net wrote:
Mike,
Have you confirmed with each DASD vendor that the contents of the
channel command generated by TRKFMT ERASDATA are actually passed to
the disk drive for drive types
: [IBM-MAIN] dsf to write over entire volume
W dniu 2012-12-13 18:22, Ron Hawkins pisze:
Radosalw, (Radoslaw, pronounced like Radoslav ;-) )
Encryption is a matter of time and cost. The question is WHEN I will
decrypt the data, not IF. And WHEN depends on my budget (the more money
David,
Why not just delete the unnecessary datasets? The backups will follow. G,
D, R
Ron
-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On
Behalf Of David G. Schlecht
Sent: Tuesday, February 05, 2013 2:25 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Raju,
If memory serves me correctly, with IBM DASD aren't you required to define
the first two ranks as R5 6D+1P+1S, where S is the floating Dynamic Spare?
Ron
-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On
Behalf Of Raju Reddy
Sent: Friday,
Leonardo,
Try the Type 64 subtype 64 record.
Be aware that some addresses do not write an interval record, so best to
include an IPL in your data collection.
This will allow you to split directory processing out from the total IO
count.
Type 14/15 records do not show the dataset name in a
Don,
I can't speak for the EMC and IBM iterations, but with HDS HDP setting up
mirrored configurations is just way too easy. I'm guessing it would be
similarly easy on the other vendors' wide stripe configurations.
It's not exactly what you are asking for, but in the lab we are setting up
and
Ed,
While it's not perfect, what if you did the FlashCopy to a completely
separate controller?
If you have HDS DASD, you could virtualize some brand-x midrange disk and
FlashCopy or Shadowimage the zFS files/volumes to the midrange storage. Using
FlashCopy Incremental means that you only copy what
Yes I did. I'll blame the Percocet tablets for that brain fart...
-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On
Behalf Of Ron Hawkins
Sent: Tuesday, February 19, 2013 1:04 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: [IBM-MAIN] Improve LLA/VLF
Manshadi,
1- How can I find consistency group time and the delay time?
Look at the XQUERY command. I think it would be something like XQUERY
VOLUME(ALL) STATUS but check the manual. This is issued against the SDM and
it is the consistency time of records applied to the secondary volumes, and
not
Gil,
Sounds like it is just being Windows compatible - File is in use by another
user - right down to how you identify the user...
Ron
-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU]
On Behalf Of Paul Gilmartin
Sent: Saturday, March 09,
Miklos,
Others may want to correct me, but from what I've observed OPT(4) is only used
for the DUMP command.
COPY will accept OPT(4), but all you seem to get is OPT(3).
Ron
-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU]
On Behalf Of Miklos
Ed,
While it does not directly answer your question, we have a bunch of Flash
Drives (SSD Drives) installed for about two years now that we use for
performance testing.
We don't use them every day, but when we do use them we beat the begeezus
out of them with both reads and writes. Our Open
Get an EMC or HDS engineer to do it... (G, D, R).
-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU]
On Behalf Of Scott Ford
Sent: Tuesday, March 05, 2013 3:28 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: [IBM-MAIN] DS6800 3390 quesation
David,
Yes, your understanding is the same as mine. The primary eligible device
list contains volumes that will not exceed the max threshold if the primary
space is allocated there. This is sorted on performance criteria. The
secondary eligible device list has volumes with enough free space to
Fred,
For Hitachi, if you select the PARTNERS tab at WWW.HDS.COM the first item on
the action bar is BUY THROUGH A PARTNER.
Selecting this launches a new window where you can search for a partner in your
country, state and city.
Or you can just click on this link after checking for text
Peter,
Make sure you are not converting reserves for the devices that will be
defined as shared in the IOCDS.
If the LPARs sharing the devices are not in a common sysplex or GRSplex then
the hardware reserves converted to an ENQ will not be seen by the other
system.
If you don't let RESERVE
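One way to keep those reserves as hardware reserves is an exclusion entry in the GRSRNLxx member. This is only a sketch - SYSVTOC is the classic VTOC reserve QNAME, and your resource names will differ:

```jcl
/* GRSRNLxx sketch (illustrative): leave VTOC reserves on the      */
/* volumes shared outside the plex as hardware reserves instead of */
/* converting them to global ENQs the other systems never see.     */
RNLDEF RNL(EXCL) TYPE(GENERIC) QNAME(SYSVTOC)
```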
Peter,
You cannot convert a 10017 Cyl 3390-9 to a 32760 Cyl 3390-9.
You can use DFSMSdss or similar to physically copy from a 10017 Cyl to a
32760 Cyl volume, and this must be followed by a REFVTOC so the VTOC is
aware of the new volume size. You cannot merge three volumes into one unless
you do
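Roughly like this - a sketch only, with invented volsers and a made-up device address; check the ADRDSSU and ICKDSF manuals before running anything like it:

```jcl
//* Sketch: copy the 10017 Cyl volume to the larger one, then
//* rebuild the VTOC free-space map so the extra space is usable.
//* SML009, BIG009 and unit 9000 are invented.
//COPYVOL  EXEC PGM=ADRDSSU
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  COPY FULL INDYNAM(SML009) OUTDYNAM(BIG009) COPYVOLID
/*
//REFVTOC  EXEC PGM=ICKDSF
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  REFORMAT UNITADDRESS(9000) VERIFY(SML009) REFVTOC
/*
```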
DFHSM work from mix TAPE/DASD to More DASD
On Tue, 26 Aug 2014 00:18:21 -0700, Ron Hawkins ronjhawk...@sbcglobal.net
wrote:
A three tier strategy using HDD or SSD for Tier 1, Nearline SAS for
ML2, and virtualized Brand-X midrange storage for Tier 3 presents a new
paradigm for archiving inactive
Rex,
It is based on an IO per hour rate for the page. Occasionally referencing a
page, or thousands of pages will not cause the page(s) to be promoted from
tier 3 unless the activity is prolonged or intense enough to change the
IO/hour. There are both short and long term IO/hour measures that are
Kieth,
Why aren't you looking at the IO time for the input and output datasets?
Surely the Type 42 subtype 6 record will tell you more about the difference
in IO behavior than cpu time.
I don't see a comparison of SSCH count mentioned anywhere in the thread.
Ron
-Original Message-
Glenn,
I work for one of the three vendors, and so I am very surprised to hear that
HDS thinks that Tape is clearly the best storage media for long term data
archiving. Would you happen to know who it is that represents HDS
hardware/software that agrees with this? I for one strongly disagree
] On Behalf
Of Shane Ginnane
Sent: Tuesday, August 26, 2014 3:41 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: [Bulk] Re: [IBM-MAIN] [Bulk] Re: [IBM-MAIN] General question on moving
DFHSM work from mix TAPE/DASD to More DASD
On Tue, 26 Aug 2014 00:18:21 -0700, Ron Hawkins wrote:
... but I drank
Same place as AIX I guess...
-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On
Behalf Of Charles Mills
Sent: Saturday, October 17, 2015 4:08 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: [IBM-MAIN] What happenned to z/OS?
A Mac person accused me of
http://www-03.ibm.com/systems/z/os/zos/
-Original Message-
From: Ron Hawkins [mailto:ronjhawk...@sbcglobal.net]
Sent: Saturday, October 17, 2015 4:24 PM
To: 'IBM Mainframe Discussion List' <IBM-MAIN@LISTSERV.UA.EDU>
Subject: RE: [IBM-MAIN] What happenned to z/OS?
Same place as
I'm thinking they just want to improve on the copies of Hercules that have been
freely available to them for years.
Doesn't the EC require IBM to give the same sort of access to source code?
-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On
Chuck,
I'm a performance guy, so when you say "balance" I immediately think of IO
activity. I'm sure you mean something else.
Do you mean balanced in terms of space, number of datasets, or do you simply
need them spread randomly across "up to" 70 volumes.
Ron
-Original Message-
From:
Chuck,
I'll start with three assumptions:
1) The 70 volumes are in their own Storage Group (let's call it
SGCHUCK0)
2) The datasets already exist
3) They have some qualifiers that will allow you to identify them
with a mask
If those assumptions are true, then my approach
Then run a CONVERTV and an Alter Storage group and remove the volumes from
SMS after the DFSMSDSS copy has completed.
Otherwise run a DCOLLECT, and use that as input to REXX or SAS to generate
the JCL you need to round robin 22 datasets per volume. Use your utility of
choice.
Neither of these
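The DCOLLECT step itself is trivial - a sketch, using the SGCHUCK0 storage group name from my assumptions above; the output dataset name is invented and the DCB attributes here are a guess, so check the IDCAMS manual:

```jcl
//* Illustrative DCOLLECT of one storage group. Output dataset name,
//* space and record length are invented - verify against the
//* IDCAMS documentation before use.
//DCOLL    EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//DCOUT    DD DSN=MY.DCOLLECT.DATA,DISP=(NEW,CATLG),
//            SPACE=(CYL,(10,10)),DCB=(RECFM=VB,LRECL=1020)
//SYSIN    DD *
  DCOLLECT OFILE(DCOUT) STORAGEGROUP(SGCHUCK0)
/*
```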
Peter,
Space constraint relief is part of the DATACLAS construct, not the Storage
Class. These are the collective functions that will cause SPACE requests to
be overridden, space reduction, and additional volumes to be added to your
dataset.
You will need to use a DATACLAS that has Space
Bob,
All VSAM IO is handled by Media Manager, not just SMS.
Your description below is attributing facilities to Media Manager that I
have not heard of or observed before.
Can you elaborate on "Media Manager consolidates the extents"?
Ron
-Original Message-
From: IBM Mainframe
LOL, you're trying to tell a Queenslander there are better beaches somewhere
else?
You may as well take coal to Newcastle...
-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf
Of Richards, Robert B.
Sent: Friday, October 2, 2015 5:43 AM