Why does a dataset that should never migrate occasionally do so? (hsm)

2014-02-21 Thread Michael Bieganski
Hi, we have a mainframe dataset that is used daily in our automated
scheduling processes.
Occasionally, our ops support gets paged because a process is delayed,
and it turns out it was because
said dataset was migrated and had to be recalled.
This dataset's management class has NOLIMIT on expires, and blanks for
'Primary Days'.
Partial Release is 'YES', so could hsm be migrating this dsn in primary
space management even though we have
Primary Days set to blanks???
And if so, short of setting Partial Release to NO, how can we keep an
sms-managed dataset from ever migrating at all?

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Size limits on hsm cds files?

2013-11-06 Thread Michael Bieganski
Hi,
We have these hsm cds files, each allocated CYL(4000,0):
an OCDS; a BCDS1 and a BCDS2; and an MCDS1 and an MCDS2.
About every 3 months or so, one or more of these starts creeping into the 90%+
full range and we do a reorg.
(We cannot take advantage of the CA Reclaim function, unfortunately.)

These cds files have been sized at CYL(4000,0) for a couple of years, and I
think increasing their sizes one at a time, as particular reorgs come up to
be performed, might
help reduce the number of reorgs I have to do during the year.

So, wouldn't CYL(4300,0) be about the largest absolute size I can go with the
BCDS2 (without doing anything like extended addressability, etc.)?
Is there any rule that requires the BCDS2 be the exact same file size as the
BCDS1?
Would there be any danger with, say, BACKVOL CDS, in having the BCDS1 at 4000
cyls but the BCDS2 bigger at 4300?
I've also heard that HSM doesn't like secondary allocations being coded when
creating the cds files... true??
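On the "largest absolute size" question: without extended addressability, a VSAM data component is capped at 4 GB, so a rough upper bound in 3390 cylinders can be sketched as below. Note this counts raw track capacity only; usable VSAM space is lower because of CI/CA formatting, so the practical ceiling sits well under the raw number.

```python
# Rough upper bound on a non-EA VSAM data set, expressed in 3390 cylinders.
# 3390 geometry: 15 tracks/cylinder, 56,664 bytes/track.
BYTES_PER_TRACK = 56_664
TRACKS_PER_CYL = 15
bytes_per_cyl = BYTES_PER_TRACK * TRACKS_PER_CYL   # 849,960 bytes/cylinder

limit = 4 * 1024**3                                # 4 GB non-EA VSAM cap
max_cyls_raw = limit // bytes_per_cyl              # raw-capacity ceiling, ~5053 cyl
```

So CYL(4300,0) sits comfortably under the raw 4 GB line; whether it is under the *formatted* line depends on CI size and freespace, which is why a conservative figure like 4300 is a reasonable working number.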

thanks



hsm cds question - can we update cds to add new abars version at DR?

2013-10-14 Thread Michael Bieganski
Help?... We are conducting an offsite disaster recovery drill tomorrow. The
tapes with all the VSM and HSM cds files were cut on Friday morning and have
already been sent to the DR site in New York.
An application needed to rerun an hsm ABARS aggregate backup AFTER those
files were on the truck.
We re-ran the ABARS job for them and have the tapes on their way to New
York, but all our HSM cds files are in the can from Friday and thus will not
reflect this last ABARS run... and we cannot send any more recent hsm cds
files without screwing up everyone else.

Question: is there a way/some command we can issue at DR tomorrow night to
update the recovered DR hsm cdses to tell hsm about this newest ABARS
abackup? Because right now the hsm cdses to be recovered will only reflect
Thursday's ABARS as the latest version. If we cannot get hsm to recognize
this newer backup with some command, then we'll have to FTP a ton of files
from our real world to the DR site tomorrow.

thanks (crossing fingers)



Increase in tape usage since z/OS 1.13??

2013-09-19 Thread Michael Bieganski
Hi,
We have a set of weekly full-volume dasd dumps just for our non-sms MVS
volumes
(sysres, pre-IPL vols, etc.), housing many of our system datasets that are
almost purely static, i.e. they do not grow and do not get written to.
On July 21st, the lpar that runs our weekly full-volume dumps
was upgraded to z/OS 1.13 (from 1.11).
Prior to z/OS 1.13, the accumulated size of these dumps, as per rmm,
averaged about 0.3 to 0.5 TB per week.
Since z/OS 1.13, these same jobs, dumping the exact same static
datasets, average almost double the size in TB per week.
It's almost as if compression is not working right anymore.
Here is an example of the JCL used; it has not
changed between z/OS 1.11 and 1.13:

//DUMP     EXEC PGM=ADRDSSU,REGION=6000K
//SYSPRINT DD  SYSOUT=*
//SYSUDUMP DD  SYSOUT=0
//DISK1    DD  UNIT=UNIT,DISP=SHR,VOL=SER=CEC000
//TAPE1    DD  UNIT=TAP9,DISP=(NEW,CATLG,DELETE),DCB=TRTCH=COMP,
//             DSN=DRP.BKP19AU.BWCEC000(+1),VOL=(,,,45)
//SYSIN    DD  *
 DUMP INDD(DISK1) OUTDD(TAPE1) CAN OPTIMIZE(4) COM
/*
The tape is actually VSM, but it mimics 3490s, which also
do some compression. The VSM has not changed... only
z/OS has since we've seen this jump in space used.
For example, according to rmm reports,
here are the TBs used for these dumps:
July 1:  22.58 tb   zOS 1.11
July 8:  22.57 tb
July 15: 22.62 tb
July 22: 23.25 tb   zOS 1.13
July 29: 24.02 tb
Aug  5:  24.81 tb
Aug 12:  25.42 tb
etc
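The week-over-week deltas in that table make the slope change visible; a quick sketch over the numbers above:

```python
# Weekly RMM totals quoted above (TB); compute week-over-week growth to see
# where the slope changes relative to the 1.11 -> 1.13 upgrade (July 21st).
totals = {
    "Jul 01": 22.58, "Jul 08": 22.57, "Jul 15": 22.62,
    "Jul 22": 23.25, "Jul 29": 24.02, "Aug 05": 24.81, "Aug 12": 25.42,
}
weeks = list(totals)
deltas = {later: round(totals[later] - totals[earlier], 2)
          for earlier, later in zip(weeks, weeks[1:])}
# Pre-upgrade weeks grow by roughly -0.01 to +0.05 TB; post-upgrade weeks
# jump to roughly +0.6 to +0.8 TB per week.
```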
Has anyone else experienced this jump in tape usage?
I see IBM mentions that there are some differences in blksizes
and that BSAM is used now, but I saw nothing about changes to compression.
thanks



DFHSM expire processing

2013-08-23 Thread Michael Bieganski
Hi, we have four lpars running z/OS 1.13 and use dfhsm. On 3 of the lpars,
expirebv is always held.
Every morning at 7:30, automation issues this command on
just one of our four lpars:
HSEND EXPIREBV NONSMSVERSIONS(DBU(5) CATALOGEDDATA(50) -
  UNCATALOGEDDATA(0)) EXECUTE RESUME
and at 17:00 this is issued to stop it: HSEND HOLD EXPIREBV. So we get
less than 10 hours of expirebv processing a day.

We've seen the size of hsm steadily growing and looked to see whether the 10
hours of expirebv is keeping up.
I issued an HSEND REPORT DAILY FUNCTION(BACKUP) for yesterday, Aug
22nd, and see this:
HSM FUNCTION
BACKUP
DAILY BACKUP   0035945
DELETE BACKUPS 0028811
So if that day is typical, it created approx 7,100 more backups than it
deleted... that's going to add poundage over time.

However, what I don't understand is this: going through HSM's baklog for
yesterday on the only lpar where expirebv is not held, a find on
ARC0734I ACTION=EXBACKV gets only 4,748 hits.
Since we also have ABARS, that seems like a very small number of
expirebv deletions.
Does expirebv processing have a lower priority in hsm, so that it creeps
along slowly?

The previous storage admin set up the 10-hour limit of expirebv processing
with those 07:30-17:00 hours.
All I can surmise is that perhaps he didn't want any expirebv processing
while automation was doing cds backups
(at 07:00 and at 17:30), and perhaps didn't want it using cycles when
primary and secondary space management kick in
around 18:00.
Do any of you hsm'ers also restrict the hours of your expirebv'ing so that
it doesn't run while cds backups or primary/secondary space management are
running?

If HSM is indeed growing hefty because it is creating more backup dsns than
it's deleting, then other than going through the management classes with
a machete, all I can think of to stop the expansion is to give expirebv
more hours... but if we only get around 4k EXBACKV deletions per day,
I don't think we'd ever catch up.
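Spelling out that arithmetic from the REPORT DAILY figures above, assuming Aug 22nd is a typical day:

```python
# Net daily growth of backup versions, per HSEND REPORT DAILY FUNCTION(BACKUP):
created = 35_945                      # DAILY BACKUP
deleted = 28_811                      # DELETE BACKUPS
net_per_day = created - deleted       # versions the BCDS gains on a typical day
net_per_year = net_per_day * 365      # growth if nothing changes
```

At roughly 7,100 net new versions a day against ~4,700 EXBACKV hits, widening the EXPIREBV window alone looks unlikely to close the gap, which matches the "never catch up" worry.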



CA Reclaim for hsm and rmm cdses (and ucats??)

2013-08-14 Thread Michael Bieganski
Hello, we just upgraded our last lpar to z/OS 1.13, so we now have the option
of utilizing CA Reclaim. As of now, it is not defined in our
parmlib, so it is dormant. I have both an rmm cds and an hsm bcds reorg
necessary in the upcoming months and was wondering if anyone has experience
yet with
CA Reclaim with rmm and/or hsm cdses... any gotchas???
Also, I know that once we turn it on via SETSMS or parmlib, all VSAM
files on the system could be affected (and I know we can expressly do an
ALTER NORECLAIMCA). Any suggestions as to where one would explicitly want to
set NORECLAIMCA to avoid more problems than reclaim fixes?
(For example, we unfortunately have a number of user catalogs that still
have IMBED, and there's no chance I'll be given any outages to
reorg the ucats to get rid of the IMBED feature. I was wondering if
perhaps ucats with IMBED don't play nicely with reclaim??)
thanks



dfhsm expirebv command idles even though expirebv not held

2013-08-05 Thread Michael Bieganski
Hello,
I've issued this TSO hsend cmd:
hsend expirebv abarsversions(agname(bkmcics) retainversions(0)) display
and have been waiting for over an hour for the response at the terminal.
When I issue a TSO hsend q req, I see only a tape being recycled, and my
request waiting with no MWEs ahead of it:
ARC0161I RECYCLING VOLUME C42845 FOR USER W#CRM, REQUEST 00251220
ARC0167I COMMAND MWE FOR COMMAND EXPIREBV ABARSVERSIONS(AGNAME(BKMCICS)
ARC0167I (CONT.) RETAINVERSIONS(0)) DISPLAY FOR USER x, REQUEST 00251555
ARC0167I (CONT.) WAITING TO BE PROCESSED, 0 MWE(S) AHEAD OF THIS ONE

hsend q act shows that expirebv is not held on this lpar:
EXPIREBV=NOT HELD AND ACTIVE
So why would this expirebv command seem to be queued up
behind something, never completing and returning the info?
Likely a simple explanation, but for now I'm scratchin' my head.
thanks



RMM common practices

2013-07-08 Thread Michael Bieganski
Hi,
I have 3 questions that I hope someone out there can share answers with me.
After running an rmm EDGHSKP,PARM='CATSYNCH,VERIFY'
I am left with about 3 dozen datasets showing cataloged in rmm but not in
the usercat
(eg EDG2233E DFSMSrmm CDS CATALOG STATUS YES FOR dsn VOLUME nn
FILE 1 CONFLICTS WITH CATALOG STATUS NO)
I believe these dsns are truly toast and should not be in the ucat, so I
don't wish to recatalog them or anything.
Question 1: What is the conventional wisdom for dealing with these 3 dozen
EDG2233Es? Should I always strive to have zero of these out-of-synch
EDG2233Es and run a CATSYNCH without VERIFY to correct them? Or is it okay to
leave these kinds of errors twisting until the number of them starts to get
unwieldy?
Question 2: Am I correct that I can run EDGHSKP,PARM='CATSYNCH' (no
VERIFY) while RMM is up and active? Or must I quiesce RMM on all lpars
before running CATSYNCH?
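For reference, a minimal CATSYNCH housekeeping step might look like the sketch below. The dataset name is a placeholder, and the exact DD requirements (MESSAGE in particular) should be verified against the DFSMSrmm documentation for your release before running it anywhere:

```
//RMMSYNC  EXEC PGM=EDGHSKP,PARM='CATSYNCH'
//* MESSAGE DD: RMM activity/message file -- dsn below is a placeholder
//MESSAGE  DD  DISP=SHR,DSN=YOUR.RMM.MESSAGE.FILE
//SYSPRINT DD  SYSOUT=*
```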
Question 3 (unrelated to CATSYNCH): I see my RMM cds is 68% full. It's
not growing obscenely fast, but what is the consensus as to what percentage
full I should be before thinking about doing an rmm cds reorg? We will be at
z/OS 1.13 in a few months, and I believe there is some nice feature (CA
Reclaim??) that will make having to reorg the HSM and RMM cdses obsolete, but
in the meantime I may still need to do the old IDCAMS reorg. Wait
till upper 80s percent? Low 90s?
Thanks!



Datasets not hsm migrating

2013-05-10 Thread Michael Bieganski
Hi,
I have a number of datasets whose management class specifies Primary Days of
2, and yet after 2 weeks of non-reference they are still on dasd. Cmd/Auto
Migrate is set to BOTH; Auto Backup is set to NO.
I can hmigrate one to ml2 in ispf 3.4 just fine, but I'm wondering why
auto-migration is not kicking in.
Is there a setting that prevents a dataset from auto-migrating if
in fact no backup exists
(but apparently has no qualms about it being manually hmig'ed)?



ISMF /SMS questions - Why aren't these datasets whacked?

2013-05-01 Thread Michael Bieganski
Hi,
We are trying to lasso the extreme growth that our hsm has undergone in
the last 2 years.
A large majority of our (virtual) tapes seem to be tied to hsm backups.
One big side issue is that the lion's share of our sms management classes
specify Expire-Non-Usage of NOLIMIT and also Expire-Date/Days of
NOLIMIT... so, basically, we
still have datasets created during the Reagan administration.
I was toying with trying to get buy-in from our management to at least
allow some management classes to be changed from
NOLIMIT to, say, Expire-Non-Usage of 730 or whatever,
so that 1) old moldy datasets go away and 2) their hsm backups then
also go away after the retain days run out,
and that way we can start shaving off some of our hsm tapes.

Nosing around in ISMF, I pulled up a management class named STANDTSO
that has Expire-Non-Usage of 550 and Expire-Date/Days of NOLIMIT. My
interpretation of that is that any dataset with the management class
STANDTSO would expire, and thus be deleted,
if not opened in the last 550 days.

My question arises because I see quite a number of STANDTSO datasets
with Last Referenced Dates in the 1990s!?
Some are not even hsm-migrated.
Shouldn't any and all datasets that have a Last Referenced Date before 2011
be gone due to this Expire-Non-Usage of 550?

And a couple of sidebar questions... In ISMF option 1 (Data Set), it seems
like if I enter *.** for the dataset name, it tacks my userid on as
the HLQ of the resulting list.

Say I want all datasets that are in management class STANDARD. I can
page down and type the criterion
Management Class Name EQ STANDARD... but how do I get ALL such datasets
without having to do something like single-quoted
dataset name 'A.**', then 'B.**', then 'C.**', etc.? If I type
single-quoted '*.**', it wants a catalog name.
Is the only way to specify all datasets, i.e. single-quoted '*.**', to do
so one catalog name at a time?



Rename of hsm migrated files due to userid change

2013-04-21 Thread Michael Bieganski
Hi... In the coming months, our company plans to completely change the
format of our mainframe userids.
For example, current mainframe userid a#xyz123 will change to mainframe
userid e123456.
We have literally tens of thousands of mainframe userids, and tens of
thousands of hsm-migrated datasets that belong to userids under the old
format.
I envision this entailing hrecalling the old-format-userid datasets,
renaming each of them to the appropriate new userid HLQ, and I imagine even
deleting all the hsm backups of the old-format-userid datasets at some
point (or letting the retain days take their course).
It seems like quite a daunting, labor-intensive manual process... Has
anyone out there any experience with a mass change of mainframe userids and
having to account for all their datasets vis-a-vis HSM?
Any experience on how to streamline this would be appreciated.



Re: DFHSM audit error 39 on non-existing datasets

2013-04-05 Thread Michael Bieganski
Dave, Allen... where are we coming up with RACF and ARC1139? The error is
from an hsm AUDIT, and I have ALTER to the dsn profile.
The hsm storage admin guide has this for ERR 39... nothing about RACF:

*ERR 39 userdsn IS MISSING version
The BCDS data set (B) record for userdsn refers to a backup version of that 
data set. However, the corresponding backup version (C) record no longer exists 
or exists but does not refer back to this (B) record.   The time 
and date stamp incorporated in the version name may indicate a version old 
enough that it is no longer useful, and the copy referred to by that version 
name may no longer exist. 
If a LIST BCDS LEVEL(userdsn) indicates there are other available versions of 
userdsn, you may not need the missing version.
If there is no other available version, you need to determine (for example, 
from a previous LIST output) what backup volume contains the version, so that 
you can use the FIXCDS C version ... command to regenerate the C record.
You should discover what caused the deletion of the C record and correct that 
problem to avoid future occurrences.

So I know there are certain types of hsm errors you prefix ARC11 to, but as
you mention, that is for recall/recovery, etc. ... I'm thinking that does not
apply here for audit errors.
I scanned the dfhsm started task output, the mvslogs, and the cmdlog for
the last 2 days... none show an ARC1139I msg.
And the HBDELETEs worked just fine... and like I said, I have ALTER to the
dsn profiles (and admin in hsm).
I'm thinking there's one more record in the bcds, other than the C record
for that dataset, that needs to be FIXCDS-deleted???



Re: Is there a way to get all dsnames of a given sms mgmtclas, regardless if some are hsm-migrated?

2013-03-17 Thread Michael Bieganski
Good point...thanks Scott



Re: Is there a way to get all dsnames of a given sms mgmtclas, regardless if some are hsm-migrated?

2013-03-17 Thread Michael Bieganski
Thanks for the tip, Lizette... your suggestion does indeed show all the data
I was hoping for.



Is there a way to get all dsnames of a given sms mgmtclas, regardless if some are hsm-migrated?

2013-03-16 Thread Michael Bieganski
Using dcollect, or some other method,
is it possible to get a list of all dsnames with mgmtclas XYZ even if
some of the dsns are hsm-migrated to ml1 or ml2?
I coded this dcollect control statement:
DCOLLECT -
  OUTFILE(DCOUT) -
  VOLUMES( -
  * -
   ) -
  MIGRATEDATA -
  CAPPLANDATA -
  SMSDATA(SCDSNAME(ACTIVE)) -
and included the mcds, mcds2, bcds, and bcds2 DD cards in the above JCL.
I then ran a report against the resulting data with
(203,30,CH,EQ,C'CIMSSMS') to find all dsnames
that belong to the acs mgmtclas CIMSSMS, and it looks like it returned
only the dsnames that were not hsm-migrated... but I'd like to be able to
get a complete list of ALL dsnames that have mgmtclas CIMSSMS, regardless of
whether they are on dasd, ML1, or ML2. I had hoped that MIGRATEDATA
would have provided this, but
unless I have balky JCL coding, it appears not.
(If there is a method, I'd expect the same would be good for storclas, etc.
as well.)
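One way to combine active and migrated datasets in one report is to post-process the DCOLLECT output off-host. The sketch below is hypothetical: the field offsets are placeholders (not looked-up values) and must be verified against the IDCAMS DCOLLECT output record layouts for your z/OS level; real DCOLLECT data is also EBCDIC and RDW-prefixed, which this sketch ignores.

```python
# Hypothetical post-processing pass over DCOLLECT output that keeps both
# active ('D') and migrated ('M') records for one management class.
# ALL OFFSETS BELOW ARE PLACEHOLDERS -- verify against the DCOLLECT record
# layouts before trusting any report built this way.
RECTYPE_OFF = 4        # placeholder: offset of the record-type field
RECTYPE_LEN = 2
D_MGTCLS_OFF = 202     # placeholder: 1-based col 203 from the DFSORT card above
M_MGTCLS_OFF = 202     # placeholder: the 'M' record layout almost certainly differs
MGTCLS_LEN = 30

def records_in_mgmtclass(records, wanted):
    """Yield (rectype, record) for D/M records whose mgmtclas matches."""
    for rec in records:
        rectype = rec[RECTYPE_OFF:RECTYPE_OFF + RECTYPE_LEN].strip()
        if rectype not in (b"D", b"M"):
            continue  # skip volume, cluster, capacity-planning records, etc.
        off = D_MGTCLS_OFF if rectype == b"D" else M_MGTCLS_OFF
        if rec[off:off + MGTCLS_LEN].strip() == wanted:
            yield rectype, rec
```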

thanks



Re: Deleting a nonexisting volser from hsm

2013-01-09 Thread Michael Bieganski
ARC0184I ERROR WHEN READING THE DFSMSHSM CONTROL DATA SET X RECORD FOR
300706
Volser 300706 is neither in any hsm TTOC listing (ergo no data set X
record for 300706) nor in rmm,
so unfortunately no previous-volume recycling can come into play here.

Regarding "try a FIXCDS CREATE, which should then enable you to do a
standard DELVOL":
I issued a TSO HSEND FIXCDS X 300706 CREATE
and hsm reported the record created:
ARC0197I TYPE X, KEY 300706, FIXCDS CREATE SUCCESSFUL
(I chose type X since X was what was reflected in the ARC0184I error
message for volser 300706.)

I then followed with a DELVOL 300706 BACKUP(PURGE),
but that apparently simply deleted the record just created with the FIXCDS
CREATE command:
ARC0260I BACKUP VOLUME 300706 ENTRY DELETED
leaving, still, some vestige of 300706 residing somewhere in HSM.
A subsequent LIST TTOC SELECT(FAILEDRECYCLE) still shows the same
ERROR WHEN READING THE DFSMSHSM CONTROL DATA SET X RECORD FOR 300706.

Short of reconstructive surgery on the cds... are there any other commands I
can try to tell
HSM to just fuhget-about this nonexistent volser 300706??

On Tue, Jan 8, 2013 at 10:32 AM, Michael Bieganski mbiegans...@gmail.com wrote:

 Looking at hsm logs, I see that a recycle failed on volume 300706, along
 with message ARC0184I ERROR WHEN READING THE DFSMSHSM CONTROL DATA SET X
 RECORD FOR 300706.
 Apparently, hsm is trying to recycle this 300706 volser twice daily and
 failing each time.
 I ran AUDIT MEDIACONTROLS VOLUMES(300706) and got back: ERR 110 300706- NO
 V OR X RECORD, so no backup or migration records exist.
 Looking in RMM, I do not find any volume 300706.
 HSM is trying to recycle this guy twice a day and just spinning its wheels.
 I tried DELVOL 300706 BACKUP/MIGRATION/PRIMARY, and all 3 responded with
 ARC0260I xx VOLUME 300706 ENTRY NOT DEFINED
 I also tried FIXCDS T 01-300706- DELETE and a FIXCDS T L2-300706-
 DELETE,
 resulting in ARC0195I TYPE T, KEY xx-300706-, FIXCDS DELETE,
 ERROR=RECORD NOT FOUND
 Is there any way I can delete this nonexistent volser from hsm so that it
 will stop trying to recycle it?

 thanks




Deleting a nonexisting volser from hsm

2013-01-08 Thread Michael Bieganski
Looking at hsm logs, I see that a recycle failed on volume 300706, along
with message ARC0184I ERROR WHEN READING THE DFSMSHSM CONTROL DATA SET X
RECORD FOR 300706.
Apparently, hsm is trying to recycle this 300706 volser twice daily and
failing each time.
I ran AUDIT MEDIACONTROLS VOLUMES(300706) and got back: ERR 110 300706- NO
V OR X RECORD, so no backup or migration records exist.
Looking in RMM, I do not find any volume 300706.
HSM is trying to recycle this guy twice a day and just spinning its wheels.
I tried DELVOL 300706 BACKUP/MIGRATION/PRIMARY, and all 3 responded with
ARC0260I xx VOLUME 300706 ENTRY NOT DEFINED
I also tried FIXCDS T 01-300706- DELETE and a FIXCDS T L2-300706-
DELETE,
resulting in ARC0195I TYPE T, KEY xx-300706-, FIXCDS DELETE,
ERROR=RECORD NOT FOUND
Is there any way I can delete this nonexistent volser from hsm so that it
will stop trying to recycle it?

thanks
