Re: Anyone exploiting ZEDC?

2024-04-17 Thread Glenn Wilcock
DFSMShsm is an excellent use case for zEDC, and zEDC is our number one best 
practice for HSM.  When enabled, DSS zEDC compresses all blocks of data passed 
to HSM during Migration and Backup.  Because HSM is processing fewer data 
blocks, both CPU and elapsed time are reduced.  When going to ML1, the amount 
of storage is also significantly reduced.

Glenn Wilcock, DFSMS Chief Product Owner

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: IBM VTS and cloud

2024-03-28 Thread Glenn Wilcock
Another option is IBM Cloud Tape Connector.  One of its capabilities is Virtual 
Tape Replication, which replicates tape data sets to cloud object storage.  If 
interested in finding out more, please email me at wilc...@us.ibm.com  and I 
can connect you with the SME.  Thanks.



Re: IBM VTS and cloud

2024-03-28 Thread Glenn Wilcock
I sent your question to Joe Swingler, TS7700 architect, and he replied as follows: 
"As of today, a single TS7700 can be either cloud attached or tape attached, 
but not both.  So, a third TS7700C would be needed or one of the two tape 
attach boxes would have to become only cloud attached."

Glenn Wilcock



Re: What happens in HSM when I change a Management Class

2024-03-15 Thread Glenn Wilcock
I should have added that EXPIREBV will run for a while if you haven't run it 
for some time.  So, do it over a weekend.  Glenn



Re: What happens in HSM when I change a Management Class

2024-03-14 Thread Glenn Wilcock
You'll need to run EXPIREBV.  

Glenn Wilcock
DFSMS Chief Product Owner



Re: DFSMSHSM reports.

2024-03-03 Thread Glenn Wilcock
Thanks all for the great comments.  A few additional comments:

1) One of the sample reports for the DFSMS Report Generator is a Migrate/Recall 
thrashing report.

2) By default, HSM does not create an FSR record for data sets that are not 
eligible for primary space management.  These also cause 'spin' because they 
are examined every cycle but never processed.  Create FSRs with the following 
patch from the DFSMShsm I guide:
"Enabling FSR records to be recorded for errors, reported by message ARC0734I, 
found during SMS data set eligibility checking for primary space management"
To specify that FSR records are to be recorded for errors, reported by message 
ARC0734I, found during SMS data set eligibility checking for primary space 
management, issue the following PATCH command:
PATCH .MGCB.+EF BITS(..1.)
To specify that the FSR records are not to be recorded, issue the following 
PATCH command:
PATCH .MGCB.+EF BITS(..0.)
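
A sketch of how a PATCH like the one above is typically issued, either from 
the operator console via MODIFY or from an authorized TSO user via HSENDCMD 
(the started task name DFSMSHSM is an assumption; yours may differ):

```
F DFSMSHSM,PATCH .MGCB.+EF BITS(..1.)

HSENDCMD PATCH .MGCB.+EF BITS(..1.)
```

To make the setting persist across HSM restarts, the same PATCH command would 
normally also be added to the ARCCMDxx parmlib member.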

3) Also documented in the DFSMShsm I guide is a patch that enables HSM to 
process empty and certain undefined data sets:
"Enabling backup and migration/expiration processing of empty data sets"
DFSMShsm can be tuned such that the storage administrator can request that 
certain data sets that are allocated but not opened be backed up, migrated, 
and/or expired based on typical DFSMShsm configurations.
The following two data set variations can be handled by this change:
1. Cataloged data sets that have been allocated such that DSORG=0 (undefined) 
and have not been opened for output (DS1DSCHA=OFF and DS1LSTAR=0). This support 
will allow these types of data sets to be backed up and migrated based on the 
local DFSMShsm configuration.
2. Data sets that have been allocated such that a nonzero DSORG is set, but the 
data sets have not been opened for output (DS1DSCHA=OFF and DS1LSTAR=0). This 
data set variation is typically supported by DFSMShsm but might not be backed 
up as the change indicator (DS1DSCHA) indicates that the data set is unchanged. 
When DS1DSCHA indicates that the data set is unchanged, it can prevent the data 
set from being expired if the environment requires a valid backup before 
expiration. Once enabled, this support can ensure that these types of data sets 
can be backed up before they are migrated, thus allowing them to be expired 
based on the local SMS and/or non-SMS configuration.
To support the data set variations that were just discussed, issue the 
following patch commands: 
PATCH .MCVT.+287 BITS(..1.)
   Request empty non-VSAM data sets (DS1DSCHA=OFF & DS1LSTAR=0) to be backed up.
PATCH .MCVT.+287 BITS(...1)
   Allow empty cataloged data sets (DS1DSCHA=OFF & DS1LSTAR=0) with 
   DSORG=UNDEFINED to be backed up and migrated.
Note: The following considerations must be made to ensure minimal impact to 
your environment as well
as desired results:
In environments that have many of these newly supported data sets, DFSMShsm 
functions can require additional time and CPU to complete. If these data sets 
have accumulated incrementally over time, this additional time and CPU 
requirement might be temporary. If these data sets are created frequently and 
in large quantities, the impact to DFSMShsm functions might be permanent.
In a SETSYS INCREMENTALBACKUP(CHANGEDONLY) environment, DFSMShsm relies upon 
the DS1DSCHA flag of a data set to determine when the data set requires backup 
and when a backup might have already been created. With this support, empty 
data sets can be backed up even when the DS1DSCHA flag is off. For this data 
set type, DFSMShsm will rely on the backup date of the backup version
to determine whether the data set with DS1DSCHA off has a valid backup. For data 
sets that are allocated, backed up, deleted, and reallocated on the same date, 
DFSMShsm will not be able to distinguish that the backup is from a previously 
created version of the data set. The backup made on the same day as the 
creation will qualify the data set for expiration if the MC requires a backup 
or if a MIGRATE DELETEIFBACKEDUP command targets the data set. Given this 
support only applies to empty data set variations, this issue would only impact 
data sets whose most recent version is empty.

Glenn Wilcock
DFSMS Chief Product Owner



Re: DSS dump and migrated datasets

2023-11-27 Thread Glenn Wilcock
Thanks Mike.

ABARS is great for this type of use case.  ABACKUP with the MOVE option will 
back up all data types (no recall for migrated data) and delete the original 
copies, including the migrated data sets, thus creating a single backup of all 
of the related data.  Essentially, an archive type of function.



Re: Cloud Storage

2023-10-27 Thread Glenn Wilcock
There is also DS8000 Transparent Cloud Tiering.  For TS7700 Clients, TS7700 
Cloud Storage Tier is an excellent way to maintain SLAs and also leverage cloud 
storage for cold data.
Overview of IBM z/OS object storage solutions: 
https://community.ibm.com/community/user/ibmz-and-linuxone/viewdocument/zos-cloud-object-storage-solutions?CommunityKey=3c6c69c1-abde-4ee6-b603-19ef14942e8f=librarydocuments



Re: Z skills training

2023-09-27 Thread Glenn Wilcock
Besides formal training, also leverage the free z/OS Academy Events: 
https://community.ibm.com/community/user/ibmz-and-linuxone/groups/community-home?communitykey=4f04f510-4165-4fa6-8a82-c00b4d150967,
join SHARE and gain access to its library of recordings: 
https://www.share.org/Education, and specific to DFSMS, there is a library of 
educational recordings: 
https://ibm.ent.box.com/s/w2jpq95ydynpztbjwdplw3l0hoipl0i8, 
https://ibm.ent.box.com/v/DFSMS-Academ-DFSMShsm2021 and the DFSMS Community: 
https://community.ibm.com/community/user/ibmz-and-linuxone/groups/topic-home/blog-entries?communitykey=3c6c69c1-abde-4ee6-b603-19ef14942e8f.

Glenn Wilcock, DFSMS Chief Product Owner



Re: Permanent Incremental FlashCopy Relationships? (Was: SETROPTS ERASE...)

2023-08-07 Thread Glenn Wilcock
Hey Ed, we haven't really been advertising this function because of the DS8K 
copy services restriction.  Besides FlashCopy, it means that only simplex 
volumes are eligible (no Metro or Global Mirror relationships).  When creating 
FlashCopy Full Volume copies, Incremental FlashCopy is still a best practice. 
Glenn



Re: DFSMS Advanced Customization Guide

2023-07-13 Thread Glenn Wilcock
Hi Bill, this is a licensed manual.  Please direct questions to its owner, 
Andrew Wilt.  anw...@us.ibm.com  Thx.



Re: z/OS Client Web Enablement Toolkit

2023-06-29 Thread Glenn Wilcock
For those interested in leveraging cloud object storage, please refer to the 
z/OS 3.1 preview announce that included a reference to a new DFSMS access 
method for cloud data - Cloud Data Access (CDA).  It is built on top of the Web 
Enablement Toolkit and simplifies authentication and put/get/etc operations to 
the various cloud object storage providers.  It will be provided at no cost.



Re: Are Banks Breaking Up With Mainframes? | Forbes

2023-05-23 Thread Glenn Wilcock
Redpaper: https://www.redbooks.ibm.com/redpapers/pdfs/redp5705.pdf

Abstract:
In the recent "Application modernization on the mainframe" IBM Institute for 
Business Value study, 71% of the executives surveyed say that mainframe-based 
applications are central to their business strategy. As stated earlier, four 
out of five respondents say that their organizations must rapidly transform to 
keep up with competition, which includes modernizing mainframe-based apps and 
adopting an open approach that includes cloud.
IBM can help you meet these challenges by providing the tools and methodologies 
to help you transform your business.
To increase this productivity and agility, and close the skills gaps, you 
should modernize your mainframe applications and embrace hybrid cloud. 
Specifically, consider the following best practices:
- Embracing a hybrid cloud approach to mainframe application modernization.
- Leveraging a continuous application modernization journey.
- Taking advantage of the recommended cloud reference architectures to 
  leverage IBM Cloud®, Amazon Web Services (AWS), and Microsoft Azure.
- Getting started fast with the IBM proven co-creation methodology.



Re: Are Banks Breaking Up With Mainframes? | Forbes

2023-05-23 Thread Glenn Wilcock
This post from Ross is a must read for this topic: 
https://www.linkedin.com/pulse/results-mainframe-application-modernization-migration-ross-mauri/



Re: Command/interface to find the total size of a MIGRAT2 disk file?

2023-04-21 Thread Glenn Wilcock
For future reference, HSM DCOLLECT reports are also available if you want 
reports for larger amounts of information.



Re: Tape compression and modern encryption facilities.

2023-04-17 Thread Glenn Wilcock
This discussion is all about use cases and data types.  As mentioned in this 
thread, not all data supports zEDC and it's highly unlikely that everything 
will be encrypted.  I would recommend analyzing the data written to your tape 
environment and determining if it would be beneficial to turn off tape 
compression for a specific workload (via data class).

DSS was one use case that was mentioned.  I would never use the COMPRESS option 
of DSS if you have the zEDC feature.  If you are targeting disk, then use 
ZCOMPRESS (zEDC).  If you are targeting tape, then it's a trade-off with using 
zEDC and TS7700 compression.  zEDC will use host CPU.  DSS has the smarts not 
to try to compress a data set that is already compressed and/or encrypted.  If 
CPU consumption is a concern, then just use TS7700 compression and no DSS 
compression.  This is especially true for full volume dump to tape.

HSM is a use case for which zEDC is a best practice.  It will improve 
throughput and slightly reduce CPU.  We recommend that you use ZCOMPRESS for 
both migration and backup.  (If you use HSM Dump, then it's the same 
recommendation as DSS.  Just use TS7700 compression to not experience a CPU 
increase).  For your HSM migration/backup data classes, disable TS7700 
compression.  For Recycle, assign a data class that utilizes TS7700 compression 
because you need it to continue to compress old data that wasn't 
migrated/backed up with zEDC.

For other tape data, determine if the application is already compressing and/or 
encrypting the data and enable/disable TS7700 compression as appropriate.
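
As a sketch of the DSS side of this, a zEDC dump to disk might look like the 
following JCL (the data set names and output DD are hypothetical; see the 
DFSMSdss reference for the full ZCOMPRESS syntax):

```
//DUMPZEDC EXEC PGM=ADRDSSU
//SYSPRINT DD SYSOUT=*
//OUTDD    DD DSN=BACKUP.PAYROLL.DUMP,DISP=(NEW,CATLG),
//            UNIT=SYSDA,SPACE=(CYL,(100,50),RLSE)
//SYSIN    DD *
  DUMP DATASET(INCLUDE(PROD.PAYROLL.**)) -
       OUTDDNAME(OUTDD)                  -
       ZCOMPRESS(PREFERRED)
/*
```

ZCOMPRESS(PREFERRED) falls back to uncompressed output if zEDC services are 
unavailable; ZCOMPRESS(REQUIRED) would instead fail the dump.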

Glenn Wilcock,
DFSMS Chief Product Owner



Re: Hsm for system dump volume

2023-02-13 Thread Glenn Wilcock
Yes, auto migration will suffice.  Use the HSM ADDVOL command to have HSM 
manage a nonSMS volume.  You'll define it as a PRIMARY volume.  You can specify 
the days to keep the data sets on the volume before they are migrated with the 
MIGRATE(days) keyword. You should also specify the THRESHOLD parameter to 
indicate high and low thresholds.  Backup only applies if you want to 
additionally create a backup copy of the data sets.  Backup doesn't delete the 
data sets after the backup, so you'll essentially have 2 copies.
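
A minimal sketch of that ADDVOL, assuming a hypothetical volser DMP001, 3390 
geometry, a 15-day residency, and 85/40 thresholds (place it in your ARCCMDxx 
member so it persists across HSM restarts):

```
ADDVOL DMP001 UNIT(3390) -
       PRIMARY(AUTOMIGRATION MIGRATE(15)) -
       THRESHOLD(85 40)
```

The high threshold (85) triggers space management; the low threshold (40) is 
the target occupancy that migration tries to reach.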

Glenn Wilcock
DFSMS Chief Product Owner



Re: Transmitting SMF records

2023-01-19 Thread Glenn Wilcock
Late to the party on this discussion... DFSMS recently shipped a new utility 
named GDKUTIL.  It converts z/OS data sets and UNIX files to cloud objects and 
vice versa.  This enables utilizing cloud object storage as a method for 
sharing z/OS data, such as SMF, as opposed to FTPing.  Use GDKUTIL to write the 
data to object storage for shared access across sysplexes / platforms, and 
vice versa to ingest off-platform data into z/OS.  DFSMS CDA (Cloud Data 
Access) is used to securely manage cloud access.

https://public.dhe.ibm.com/eserver/zseries/zos/DFSMS/CDA/OA62318/Cloud_Object_Utility_Pub_updates.pdf

Glenn Wilcock, DFSMS Chief Product Owner



Re: IBM Solutions for leveraging cloud object storage on z/OS

2022-12-15 Thread Glenn Wilcock
Hi Tom, you're not alone!  I've added an attachment of the pdf at the same 
link.  Viewing the pdf in full-screen mode gives you the same navigation, but 
removes all of the animation so that you can go at your own speed...



IBM Solutions for leveraging cloud object storage on z/OS

2022-12-15 Thread Glenn Wilcock
I often get asked what solutions are available to leverage cloud object storage 
solutions on z/OS.  I'm working on a slideshow introduction to the solutions 
that IBM offers.  I plan to add a voice-over and some navigational 
enhancements, but wanted to share what I have so far.  Please feel free to 
reach out for follow-up discussions.  (I'm taking the last 2 weeks of the year 
off, so there will be a delay in my response over the next 2 weeks).  Happy 
Holidays and New Year!  

(The slideshow is stored on the IBM Community.  When I tested the link, it 
didn't require a log on).

https://community.ibm.com/community/user/ibmz-and-linuxone/viewdocument/zos-cloud-object-storage-solutions?CommunityKey=3c6c69c1-abde-4ee6-b603-19ef14942e8f=librarydocuments



Re: MCDS CDS BACKVOL fails

2022-08-22 Thread Glenn Wilcock
Hey Jake,

Even if you didn't get an ARC0745, I recommend following the steps of that 
message.  You need to see what your current backup copies are cataloged as.  If 
you don't have any, allocate as many backup copies as you will be keeping. 
Notice that you have to allocate them for the MCDS, BCDS, OCDS and JRNL.  I 
believe that you need to start with V001.  (For grins, you may start with 
V000 and then allocate an extra on the end.)  Glenn



Re: File Transfer Alternatives - Different Platforms

2022-06-28 Thread Glenn Wilcock
For folks on this thread who would be interested in doing this via cloud object 
storage, DFSMS is looking for sponsor users.  You'll need an NDA to 
participate.  Please reply to me via wilc...@us.ibm.com if you're interested.  
Thx.



Re: HSM backups and CICS VSAM files,

2022-06-08 Thread Glenn Wilcock
If the data set is defined as BWO eligible, then DSS logic (data mover for HSM) 
can handle backing the data set up while open.  How frequently a backup copy 
is created is determined by management class (SMS) or HSM policy (nonSMS). If a 
CI/CA split occurs during the backup, then it will be failed.  It can be 
automatically retried with the HSM INUSE option.  (Search for 'BWO' in the HSM 
Storage Administration manual and it will take you to the INUSE section.)  
Search for 'BWO' in the DFSMSdss Storage Administration manual for all of the 
gory details :)



Re: How to keep the response from HSENDCMD batch

2022-05-26 Thread Glenn Wilcock
Glad that you were able to solve it!  

If you have other questions, please feel free to reach out directly... 
wilc...@us.ibm.com



Re: How to keep the response from HSENDCMD batch

2022-05-25 Thread Glenn Wilcock
Hi, I'm not familiar with QUERYSET, but there is a single thread that processes 
certain commands (QUERY, LIST, etc), so it's possible that the QUERY was queued 
up behind one or more other commands that took longer to run.



Re: How to keep the response from HSENDCMD batch

2022-05-19 Thread Glenn Wilcock
FYI - Here's a link to 8+ hours of video recordings of DFSMShsm Education: 

https://ibm.ent.box.com/v/DFSMS-Academ-DFSMShsm2021/folder/133296597964



Re: directory backup

2022-05-10 Thread Glenn Wilcock
Yes, the access and modification times are preserved.  But the change time is 
updated, because the metadata is changed due to updating the last backup 
timestamp.  And, as indicated, if the wildcarding selects only those files to 
be archived, the backup and delete can be done with a single command.



Re: directory backup

2022-05-09 Thread Glenn Wilcock
DFSMShsm supports wildcards and has exclude capabilities.  It will even do a 
DELETE if you want to delete all of the files after the backup.



Re: Resizing MCDS

2022-02-23 Thread Glenn Wilcock
Great discussion here.  For those who are interested, here is a link to 
charts/recordings for a DFSMShsm education series.  One of the sessions is 
specifically on the DFSMShsm control data sets that discusses RLS, CA Reclaim, 
Reorgs, etc.  Feel free to contact me with any questions.

https://ibm.ent.box.com/v/DFSMS-Academ-DFSMShsm2021

Glenn Wilcock
DFSMS Chief Product Owner



Re: zEDC Justification documents or links

2022-01-21 Thread Glenn Wilcock
Hi Robert, if your shop uses DFSMShsm, I have some charts that I can share with 
you that show the significant advantages of using zEDC for backup and 
migration. zEDC is the #1 Best Practice for DFSMShsm.



Re: automated SMS Storage pool management

2021-11-05 Thread Glenn Wilcock
Dave, HSM has 2 methods to dynamically address volumes going over threshold: 
Interval Migration and On Demand Migration.  Interval Migration runs hourly and 
processes volumes that went over threshold during the previous hour.  The 
preferred method is the On Demand Migration technique.  With ODM, as soon as a 
volume goes over threshold, SMS throws an ENF that HSM listens to and 
immediately space manages the volume.  Windows can be defined to quiesce this 
activity during specific times, such as when critical production workloads are 
running.

It's time to add more storage when the HSM space management runs to completion 
and a volume still doesn't get under threshold.
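
As a sketch, On Demand Migration is enabled through a SETSYS setting, 
typically placed in ARCCMDxx (the keywords for defining quiesce windows are in 
the DFSMShsm Storage Administration manual):

```
SETSYS ONDEMANDMIGRATION(Y)
```

With ODM active, volumes are space managed as the SMS ENF signals arrive 
rather than on the hourly interval migration cycle.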

If you're new to DFSMS, we have a series of recorded DFSMS Academy sessions: 
https://ibm.ent.box.com/s/w2jpq95ydynpztbjwdplw3l0hoipl0i8

If you're new to DFSMShsm, we have an entire series of recorded DFSMShsm 
Academy sessions:
https://ibm.ent.box.com/v/DFSMS-Academ-DFSMShsm2021



Re: ADRDSSU and encrypted files

2021-11-04 Thread Glenn Wilcock
Hi Cameron, to answer your base question... No, ADRDSSU does not support 
decrypting an encrypted file during the dump process.



Re: EAV and zHPF documentation, anyone?

2021-10-26 Thread Glenn Wilcock
There is a Redbook for EAV: 
https://www.redbooks.ibm.com/abstracts/redp5364.html?Open



Re: HSM dates?

2021-10-18 Thread Glenn Wilcock
HSM GA'd March 31, 1978.  I don't know when release 2 went out.



Re: Secondary sources for DFP and DFSMS

2021-08-14 Thread Glenn Wilcock
"If you don't quote at least a tiny bit of what you are saying no to it is 
difficult to tell what the question was, and as a result difficult to gain much 
insight from your answer.

Charles"

Charles, it was in reply to this thread...  What's the magic to getting the 
original message included in the reply?  I was assuming the "Reply" would 
include it.  Guess not.  Thx.

"Would you have articles,  life cycle dates or announcement letters for any of

5645-001
5647-A01
5655-068
5655-069..."



Re: Secondary sources for DFP and DFSMS

2021-08-13 Thread Glenn Wilcock
Unfortunately, no.  As the product owner of DFSMShsm, I was passed down the 
original announce for the product, but it's the internal IBM Confidential 
version.  You think that we would have all of this stuff Migrated off to a reel 
tape somewhere :)  

Friday fun fact... In an informal survey that I recently did of the clients 
that I regularly work with, the oldest migrated data set still managed by HSM 
was migrated 11 Dec 1980!!  Before many of today's other platforms even existed!



Re: Secondary sources for DFP and DFSMS

2021-08-12 Thread Glenn Wilcock
We've been vexed with this for some time.  We, the DFSMS org, have wanted to 
update this for a while, but we've run into these same issues, probably even 
more so since we are the product owners.  So, first, thanks so much for 
working on this!  SHARE presentations are a good idea, but I wonder whether, 
since those are created by the product owners, they will pass the litmus test. 
I've thought about Redbooks, since we sometimes get non-IBMers as authors, but 
didn't find any offhand that weren't written by IBMers.  For those familiar 
with Wikipedia: can a secondary source come from someone within IBM, but not 
directly related to the referenced products?  If so, then the 'ABCs of z/OS 
System Programming, Volume 3' is a great reference.  Outside of these sources, 
our Google searches haven't provided much in the way of secondary sources 
besides billable educational offerings.  Glenn



Re: Using HBACKDS or OMVS hbackup for USS files?

2021-06-18 Thread Glenn Wilcock
Hi, glad that you got past the problem.  Would you mind sharing what solved the 
issue?  I wouldn't expect a setup issue to cause an 0C4 abend, so I would 
like to see if there is something we can do on our end to detect the condition 
and have more user friendly messaging.  Thanks, Glenn



Re: Flashcopy for Non-SMS volume backup?

2021-05-13 Thread Glenn Wilcock
Ed, 

Glad to hear that it is working out for you! TCT will be a perfect addition.

1) HSM does provide data set level recovery from copy pools, just differently 
:)  If the data set has not been deleted or moved since the backup, it's 
supported w/o a catalog capture.  To support recovery for deleted/moved data 
sets, then it requires the data in the copy pool to be cataloged to catalogs 
also within the copy pool.  You then define the catalogs to the copy pool and 
indicate that HSM should capture the catalog information during the FRBACKUP.  
This will extend the elapsed time of the FRBACKUP.  LIST COPYPOOL has a 
datasets option to list the data sets that were included in the FRBACKUP.  You 
then use the DSNAME keyword on the FRRECOV command.  All flavors of data sets 
are supported.  Multivolume, etc.  We do a physical DSS copy (disk) or restore 
(tape) and then HSM recatalogs all of the pieces after the physical data 
recoveries complete.  Recovery from tape is slow because DSS has to search the 
dump volume to find the extents.  
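
A sketch of the command flow, using a hypothetical copy pool name CP1 and a 
hypothetical data set name (the exact LIST and FRRECOV keywords are in the 
DFSMShsm Storage Administration manual):

```
FRBACKUP COPYPOOL(CP1) EXECUTE
LIST COPYPOOL(CP1)
FRRECOV DSNAME(PROD.DB2A.USERDB.DATA)
```

The FRBACKUP creates the copy pool backup (capturing catalog information if 
the copy pool is so defined), the LIST shows what was included, and the 
FRRECOV recovers the single data set from the copy pool version.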

2) You know the drill :)  Open an RFE.  With our crazy world, we have an 
extensive DFSMS roadmap around cyber resiliency enhancements.

Stay cool :)



Re: Flashcopy for Non-SMS volume backup?

2021-05-03 Thread Glenn Wilcock
Hey Ed,

Chatted with some DFSMS folks about what you want to do.  First, you are 
correct.  The HSM FR support is only for SMS-managed volumes.  It is tightly 
integrated with several SMS constructs for managing the volumes and policy.  
(In the future, feel free to send me a direct email to save yourself a lot of 
reading :)

Next, for your scenario, we believe that it will work if you do the following:
1) Identify your source to target pairings
2) We HIGHLY recommend that you use FC Incremental because it creates a 
persistent relationship between the source and the target.  Because the HSM 
dump will be done in the standard dump path, it doesn't have the 'smarts' to do 
things like withdraw a NOCOPY relationship, etc.  Also, more later on why it 
will be very useful.
3) Create the FC with DSS and dump conditioning to the target, and with 
FCINCREMENTAL
4) I believe that you are all set with adding the FC Target volumes to HSM, 
setting up the appropriate dump class, dump cycle, etc and having them picked 
up as a part of autodump / dump command.
5) What makes the dump copies fundamentally different from a standard HSM Dump 
or an FRBACKUP DUMP - a) DSS is smart enough to know that a FC dump conditioned 
target volume is being dumped, so the dump copy that it creates will be written 
with the volser of the source volume and will be created as if it came directly 
from the source; b) HSM is not smart enough to know that it is a dump 
conditioned volume, so the HSM CDS dump record and LIST output will record it 
as a dump of the target, with the FC target volser.  Hence, the recommendation 
for a persistent relationship between your source and target volumes.  Then you 
can manage the inconsistency between the HSM dump records and which volume was 
actually dumped.
6) As you indicate, you can use HSM Recover with TOVOLUME to recover directly 
back to the source.  Since the dump volume is actually of the source, all 
should be well after the recovery ends.  The volser will stay the same during 
the recovery process since the dump tape has the source volser.  We believe 
that you shouldn't have to do a clip or anything after the Recovery.  All 
should be well.
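
Step 3 above, as an ADRDSSU sketch (the volsers SRC001 and TGT001 are 
hypothetical; check the DFSMSdss reference for the full DUMPCONDITIONING and 
FCINCREMENTAL syntax):

```
  COPY FULL INDYNAM(SRC001) OUTDYNAM(TGT001) -
       DUMPCONDITIONING FCINCREMENTAL        -
       ALLDATA(*) ALLEXCP
```

DUMPCONDITIONING keeps the target usable as a dump source that still carries 
the source volume's identity, and FCINCREMENTAL establishes the persistent 
relationship recommended in step 2.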

There's a lot going on here.  If you'd like to discuss, reach out and I can 
set up a call with all of the right SMEs to walk you through this.



Re: DFHSM Backups using DFDSS - Enqueues

2021-03-05 Thread Glenn Wilcock
Hi Terri,

You have the correct HSM setting - INUSE(...)  HSM traps on the ADR412 message 
to know that DSS failed serialization and that it should be retried.  Do you 
see a subsequent DSS Dump for the data sets that initially failed (a retry w/o 
serialization)?



New online z/OS DFSMS Community

2020-12-17 Thread Glenn Wilcock
We're happy to announce the creation of the online z/OS DFSMS Community, which 
is part of the larger IBM Community! Our community webpage contains blogs that 
advertise new function (z/OS Releases and Continuous Delivery), announcements 
of upcoming events (SHARE, IBM TechU, DFSMS Academies, etc), Discussions 
(currently we're asking for input on topics for the next DFSMS Academy) and a 
library of helpful documentation.  Join us today!  

https://community.ibm.com/community/user/ibmz-and-linuxone/groups/topic-home?CommunityKey=3c6c69c1-abde-4ee6-b603-19ef14942e8f



Re: Using HSM Transition to clear MOD-3 devices

2020-12-17 Thread Glenn Wilcock
Hi Chuck, We have clients who regularly use this function for what you plan on 
doing.  I'll reach out to the individual who owns the ticket and make sure that 
it makes progress.  Thanks.



Re: AWS down again.

2020-11-28 Thread Glenn Wilcock
DFSMShsm Migrate/Recall and DFSMSdss use the DS8000 to offload data directly to 
cloud object storage.  S3-compliant object stores, including AWS, are 
supported.  When the object store is unavailable for any reason, you'll get 
user friendly messages from HSM/DSS that indicate the specific error being 
returned by the object store provider.  For HSM automatic migration, any data 
sets that failed to migrate will be reattempted the next day.  The big concern 
with the unavailability is retrieving your data, which is not possible when the 
object store is unavailable.



Re: DFSMShsm SETSYS BACKUP(DASD)

2020-10-08 Thread Glenn Wilcock
Hi Theo,

HBACK is a data set backup command.  Data set backup settings are determined by 
the SETSYS DSBACKUP parameters, of which the default for TAPE tasks is 2.  
Please update your SETSYS DSBACKUP parameters such that all backups will go to 
disk.
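A minimal sketch of what that might look like (the task count is a placeholder; 
check the SETSYS DSBACKUP documentation for your release):

```
/* Drive all data set backup tasks to disk, none to tape: */
SETSYS DSBACKUP(DASD(TASKS(4)) TAPE(TASKS(0)))
```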



Re: DFDSS support for ZFS files query

2020-10-01 Thread Glenn Wilcock
For those following this thread who have a DFSMShsm license, HSM does support 
this.  It can backup all files, files based on wildcards and exclude specific 
files as directed by an EXCLUDE. A UNIX shell command is also available, to 
perform the HSM backups/recoveries directly from UNIX.
Glenn



Re: zSeries and using cloud for backups

2020-08-06 Thread Glenn Wilcock
The TS7700C provides a really nice option for what you are being asked to do.  
You can continue to take the backup copies that you do today, and have the 
TS7700 seamlessly migrate them out to the cloud for you.  Best of both worlds!

If you want to go directly to the cloud, the DFSMShsm/dss, DS8K transparent 
cloud tiering provides the value that 100% of the data movement is performed by 
the DS8K.  None of the disk data passes through Z!  DFSMShsm TCT migration has 
been available for some time.  This gives you the capability to create a fast 
PiT backup on disk with DSS (with FlashCopy, all of the data movement is within 
the disk controller), and then have HSM automatic migration move them to the 
cloud with TCT.  So, you get PiT backup copies in the cloud, with none of the 
data passing through Z.  Also, DSS just announced this capability for Full 
Volume Dump, and HSM has a statement of direction to provide the same for its 
dump capabilities.  DS8K TCT now supports compression when targeting a TS7700.  
Compression when targeting cloud object storage is on the short-term roadmap.  
The DS8K natively encrypts the data, so you have the peace of mind that the 
data was encrypted on your Z platform before it even lands on the IP 
connection to the cloud.

Cloud Tape Connector is another nice IBM option that allows you to spin off a 
tape copy to cloud.

Reach out to me if you would like any more information on any/all of the 
options and I can connect you with the folks to get you more information.


Re: PDS/e Encryption

2020-07-13 Thread Glenn Wilcock
DFSMSdss supports host-based encryption on its DUMP command.  The support was 
added before tape encryption became generally available.  It is still a 
possible solution for you since your tape environment doesn't support 
encryption.

Glenn



Re: Storage & tape question

2020-07-08 Thread Glenn Wilcock
Hi All,

I want to give another perspective on the need for backup copies.  The focus 
here is on physical loss of storage. With replication, and many clients having 
2, 3 and even 4 sites, the probability of needing a backup copy to recover from 
a physical loss of data really has decreased.  (Still there, nonetheless.)  
BUT, the probability for logical data corruption has INCREASED.  Accidental and 
malicious data corruption is instantly mirrored to all replication copies, 
making them useless.  Working in HSM, I regularly see calls requesting 
assistance in recovering large amounts of data from backup copies.  We're all 
human and we all make mistakes.  Some of those mistakes result in data loss.  
Also, all products have programming defects and some of those defects result in 
data loss.  This speaks nothing to the current environment where governments 
are mandating policies and procedures for protecting against malicious data 
destruction. Your only hope for recovery is a PiT backup prior to the data 
loss/corruption.  Not all loss/corruption will be found immediately.  So, your 
ability to recover is a factor of how long it takes you to determine that there 
was corruption/loss and how much you're willing to invest in keeping backup 
copies for at least that long.

Glenn Wilcock
DFSMS Chief Product Owner



Re: Storage & tape question

2020-07-07 Thread Glenn Wilcock
Hi KB, IBM-MAIN is great.  There are shops that are all primary disk.  Just 
make sure that you are aware of all of the considerations before going down 
that route. (The ones that I'm aware of are still HSM users to take advantage 
of ML1, class transitions, and backup). A few responses to your follow-on 
questions...

Hi, 

Thank you for the detailed response Glenn, IBM-MAIN is truly amazing.


> Migrate/Archive
> The three purposes of HSM migration are to 1) compress the data so that the 
> footprint is smaller, 2) move it to a lower cost media so that the TCO is 
> lower and 3) move the data to an offline media that doesn't consume online 
> UCBs. When considering bringing all of your data back online, you need to 
> consider the impact of all three. 1) Assuming 3:1 compaction, you'll need 3x 
> the online storage. With zEDC, that will vary on both what you can get on the 
> primary storage and the ML2 storage. 3) For larger shops, the number of 
> online UCBs is a factor. It's not a factor for smaller shops.

-
1) Compression - wouldn't it be enough to rely on z15 on-chip compression + the 
compression/dedupe done by the storage array itself? Sure, it may not be 3:1.. 
but worth evaluating?
If the array itself is doing C+D, then "rehydrating" the data isn't a problem I 
believe?
>> Glenn: z15 compression can be utilized for nonVSAM, not VSAM.  
>> Compression/Dedup are generally provided by offline storage systems or those 
>> that emulate them (like virtual tape).  Dedupe is generally too slow for 
>> primary storage.  I'm not aware of compression capabilities on primary 
>> storage. If you are currently deduping your backup copies, then they will 
>> take even more storage when they are moved to primary disk unless you can 
>> utilize a backup solution that does software based dedup and targets disk.

2) It's not just the storage cost though right.. (cost of a bunch of disk, S) 
vs (cost of tape emulation, physical carts, bandwidth, S, HSM's direct & 
indirect costs)
>>Glenn: agreed.  TCO has to be considered

3) Ok, the UCB thing can be problematic for big shops, agreed. There's only so 
much you can do with 3390-54 (are bigger volumes coming anytime soon?).
>>Glenn.  DFSMS currently supports EAV 1TB volumes.  


> Another thing to consider with an all disk environment is your 'relief 
> valve'. It's simple to migrate data to tape as a means of ensuring that you 
> always have enough space on your primary storage for growth. If you only have 
> primary storage, what is your exception handling case when you have 
> unexpected growth and no migration tiers to which to move your inactive data? 
> How quickly can you bring more primary storage online?
Sorry, I know it sounds silly when I keep saying 'assume x/y/z is already 
catered to', but ... assuming primary storage provisioning is no longer a 
problem (apart from the UCBs mentioned above).


> Another option is DS8000 transparent cloud tiering. This enables you to 
> migrate inactive data to cloud object storage, with minimal cpu since the 
> DS8K is doing the data movement. If not a primary means of migrating data, it 
> is a very good option for a 'relief valve'.
Hmm... the two whole approaches (all-primary vs standard procedure) need to be 
costed out and compared to be impartial to either case.


> Backup
> Regardless of the replication technique that you are using 
> (synchronous/asynchronous), you need point-in-time copies of your data for 
> logical corruption protection. If a data set is accidentally or maliciously 
> deleted, replication quickly deletes it from all copies. Also, if data 
> becomes logically corrupted, it is instantly corrupted in all copies. So, you 
> have to have a point-in-time backup technique for all of your data. You need 
> as many copies as you want recovery points. One copy doesn't give you much 
> security. Keeping n copies on disk can get pricey and consume a lot of 
> storage. Also, you need to replicate the n PiT copies to all of your sites so 
> that you can do a logical recovery after a physical fail over. This makes the 
> cost add up even more quickly. TCT is another good option for this. You can 
> keep 1 or 2 copies on disk and then have HSM migrate/expire the older backup 
> copies to cloud object storage which is then available at all of your 
> recovery sites.
If we consider that the storage array has *proper* support for multi-site, 
snapshots/PiTs, etc. ... again not problematic.


Fully understand I may be dreaming about such a primary storage, it's good to 
know the technical constraints against it.

- KB



Re: Storage & tape question

2020-07-06 Thread Glenn Wilcock
A few thoughts:

Migrate/Archive
The three purposes of HSM migration are to 1) compress the data so that the 
footprint is smaller, 2) move it to a lower cost media so that the TCO is lower 
and 3) move the data to an offline media that doesn't consume online UCBs.  
When considering bringing all of your data back online, you need to consider 
the impact of all three.  1) Assuming 3:1 compaction, you'll need 3x the online 
storage.  With zEDC, that will vary on both what you can get on the primary 
storage and the ML2 storage.  3) For larger shops, the number of online UCBs is 
a factor.  It's not a factor for smaller shops.

Some clients have selected to go to an all HSM ML1 environment to still get the 
advantage of zEDC compression on inactive data.  (You may be utilizing zEDC for 
primary storage, but that is only available for nonVSAM data).  These clients 
utilize the lowest cost disk and utilize the value of zEDC compression to 
minimize the footprint.

Another thing to consider with an all disk environment is your 'relief valve'.  
It's simple to migrate data to tape as a means of ensuring that you always have 
enough space on your primary storage for growth.  If you only have primary 
storage, what is your exception handling case when you have unexpected growth 
and no migration tiers to which to move your inactive data?  How quickly can 
you bring more primary storage online?

Another option is DS8000 transparent cloud tiering.  This enables you to 
migrate inactive data to cloud object storage, with minimal cpu since the DS8K 
is doing the data movement.  If not a primary means of migrating data, it is a 
very good option for a 'relief valve'.  

Backup
Regardless of the replication technique that you are using 
(synchronous/asynchronous), you need point-in-time copies of your data for 
logical corruption protection.  If a data set is accidentally or maliciously 
deleted, replication quickly deletes it from all copies.  Also, if data becomes 
logically corrupted, it is instantly corrupted in all copies.  So, you have to 
have a point-in-time backup technique for all of your data.  You need as many 
copies as you want recovery points.  One copy doesn't give you much security. 
Keeping n copies on disk can get pricey and consume a lot of storage.  Also, you 
need to replicate the n PiT copies to all of your sites so that you can do a 
logical recovery after a physical fail over.  This makes the cost add up even 
more quickly.  TCT is another good option for this.  You can keep 1 or 2 copies 
on disk and then have HSM migrate/expire the older backup copies to cloud 
object storage which is then available at all of your recovery sites.

Glenn Wilcock
DFSMS Chief Product Owner



Re: DFSMShsm questions

2020-05-28 Thread Glenn Wilcock
First, I'm impressed that your subject line has 'DFSMShsm' as opposed to DFHSM 
or any of the other names that it's gone by over the years.

Background on the design for selecting the data set backup target tier, first 
created when physical tape was still dominant  - for small data sets, you don't 
want the overhead of a tape mount for a one-off backup, and for large data sets 
you don't want them going to disk and eating up all of your space.  So, we 
created a SETSYS called DSBACKUP where you could fine tune just which data set 
backups go to disk and which go to tape and the tasking level for each.  With 
virtual tape, this is no longer quite an issue, but the parameters still 
persist.  If you always want your individual backup copies to go to disk, you 
can set the parameters to do so.  
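As a sketch of that fine-tuning, the DSBACKUP parameters let you split backups 
between disk and tape by size and set a tasking level for each target (the 
sizes and task counts below are illustrative only; verify the exact semantics 
of DASDSELECTIONSIZE in the SETSYS documentation):

```
/* Small backups favor disk, large ones go to tape; */
/* separate task counts per target (example values): */
SETSYS DSBACKUP(DASDSELECTIONSIZE(3000 250) -
       DASD(TASKS(4)) TAPE(TASKS(2)))
```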

In the DFSMShsm Storage Administration manual SETSYS chapter, there is a 
section named 'SETSYS fast-path for DFSMShsm Functions'.  In this section you 
will find a table with the parameters that affect tape processing. But, Chapter 
10 of the DFSMShsm Implementation & Customization Guide is an exhaustive guide 
for establishing the DFSMShsm tape environment.

Glenn Wilcock
DFSMShsm Architect



Re: DFSMS Move Dataset

2020-05-17 Thread Glenn Wilcock
If you have DFSMShsm, HSM has a simple MIGRATE DSNAME(dsname) MOVE command that 
will invoke SMS ACS for volume selection and move the data set (any type) to a 
new volume.  Under the covers, HSM invokes DFSMSdss COPY w/ DELETE, w/o you 
having to create all of the JCL based on data set type.

It can also be issued at the VOLUME and STORAGEGROUP level to drain a volume or 
storage group.  For example, it's a simple single command that can be used to 
migrate from smaller volumes to newly defined larger volumes. It will even 
close open Db2 objects, move them, and then automatically reopen them.

https://www.ibm.com/support/knowledgecenter/SSLTBW_2.4.0/com.ibm.zos.v2r4.arcf000/s4388.htm
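A few hedged examples of the command at each level (data set, volume, and 
storage group names are placeholders):

```
/* Move one data set; SMS ACS selects the new volume: */
HSEND MIGRATE DSNAME(PROD.PAYROLL.DATA) MOVE
/* Drain an entire volume or storage group:           */
HSEND MIGRATE VOLUME(VOL001) MOVE
HSEND MIGRATE STORAGEGROUP(SGOLD) MOVE
```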



Re: DFSMShsm high CPU consumption [EXTERNAL]

2020-04-28 Thread Glenn Wilcock
Hi Lizette,

It allows different WLM settings.  For example, a MAIN can do RECALLS and have 
a higher WLM goal than an AUX host that is assigned to run a migration window 
with a lower goal.  I've never actually worked with a client to do this...



Re: DFSMShsm high CPU consumption [EXTERNAL]

2020-04-28 Thread Glenn Wilcock
It is Newsletter 2018 No. 4



Re: DFSMShsm high CPU consumption [EXTERNAL]

2020-04-28 Thread Glenn Wilcock
DFSMShsm has made many advances over the years to become more efficient.  I 
teamed up with Frank Kyne of Watson & Walker to write an article on this topic 
in their newsletter.  I'm also more than happy to give a WebEx on this topic 
with this client, and any other who is interested.  It's a popular presentation 
from IBM TechU and SHARE.  One of the topics is reducing the MAX tasking level 
to lower peak MIPS during HSM windows.  Just send me an email.

Glenn Wilcock, DFSMS Architecture
wilc...@us.ibm.com



Re: DFHSM/DCOLLECT QUESTION [EXTERNAL]

2019-11-05 Thread Glenn Wilcock
HSM's usage of RLS Access for the control data sets is driven by the PROCLIB 
value for CDSSHR.  CDSSHR=RLS indicates that HSM should access the CDSes in RLS 
mode.  If CDSSHR is not specified, or has any other value, then you are not 
using RLS for HSM.
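For illustration, CDSSHR is passed on the HSM startup procedure PARM; a hedged 
sketch (the program name ARCCTL is standard, but the member and host values 
here are placeholders for whatever your procedure already uses):

```
//DFSMSHSM EXEC PGM=ARCCTL,REGION=0M,TIME=1440,
//  PARM=('CMD=00','HOST=1','CDSSHR=RLS')
```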

DCOLLECT behaves this way because it does not run under the HSM address 
space, so this is how we determine the proper way to open the control data 
sets concurrently with HSM.

BTW - Using RLS to access the HSM control data sets is my second rated best 
practice for HSM.  If you run concurrent HSM activity across multiple LPARs, it 
can significantly improve elapsed times.  My highest rated best practice for 
HSM is to use zEDC.  zEDC does wonders for HSM elapsed times and storage 
reduction.  If migrating to ML1, it also reduces cpu consumption.
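If you want to try zEDC, the switch is a SETSYS parameter; a hedged sketch 
(confirm the exact subparameters available in your release before using):

```
/* Use zEDC compression for both migration and backup, */
/* to both DASD and tape targets:                      */
SETSYS ZCOMPRESS(DASDMIGRATE(YES) TAPEMIGRATE(YES) -
       DASDBACKUP(YES) TAPEBACKUP(YES))
```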



The Next Evolution of Hierarchical Space Management

2019-10-23 Thread Glenn Wilcock
The next evolution of Hierarchical Storage Management is here with the 
announcement of TS7700 DS8000 Object Store! This evolutionary capability 
enables DFSMShsm to migrate data directly between IBM DS8000 Disk and TS7700 
Virtual Tape without the data passing through the Z host! Check out more at the 
IBM z Systems Development Blog:

https://www.ibm.com/developerworks/community/blogs/e0c474f8-3aad-4f01-8bca-f2c12b576ac9/entry/The_Next_Evolution_of_Hierarchical_Storage_Management?lang=en



DFSMS Continuous Delivery

2019-10-09 Thread Glenn Wilcock
Two recent z/OS DFSMS Continuous Delivery projects:  
  - OA57454 Added DELETE (Archive) capability to HSM File Level Backup Support 
(base support in OA52703)
  - OA57173 Added initial support for FlashCopy to Global Mirror Primaries
 
(https://www.ibm.com/developerworks/community/blogs/e0c474f8-3aad-4f01-8bca-f2c12b576ac9/entry/FlashCopy_onto_Global_Mirror_Primaries?lang=en)



Re: Silly DFSMShsm questions

2019-07-30 Thread Glenn Wilcock
If it happens again, open a PMR.  ISPF uses device type to determine if 
MIGRAT'x' corresponds to ML'1', ML'2' or ML'C'.  Since you are seeing a 'C', 
but have never targeted 'C'loud object storage, there could be a defect in 
which the device type is not being set appropriately.  Thx.



Re: Stay current with what's new in DFSMS

2019-07-29 Thread Glenn Wilcock
Certainly.  We are just trying to utilize new avenues for disseminating 
information.  One size never fits all.  In a recent LinkedIn post, I documented 
all of the DFSMShsm RFEs that are included in V2R4:
 
RFE 121915: (SHARE MVSS) Quiesce window for interval and on-demand migration
Request: HSM automatic migration via interval or on-demand migration sometimes 
runs during peak CPU utilization times. We are looking for the ability to 
provide windows in which we can turn off both types of automatic migration. It 
is already disabled during primary space management (PSM) processing; we look 
for additional times to disable it. We want to keep command migration available 
at all times. Disabling automatic migration should not prevent command 
migration. 
Response: New SETSYS EventDrivenMigrationQuiese and HOLD/RELEASE 
EventDrivenMigrationQuiese commands. 

RFE 114044: HSM should include which parameters are valid/invalid during startup
Request: When DFSMS starts up, it produces messages on parms in ARCCMDxx either 
being valid or invalid.  It would be good to know which parm was verified (or 
not).  So it could be ARC0100I SETSYS COMMAND COMPLETED for - X or ARC0104I 
INVALID INITIALIZATION COMMAND -  
Response: Provided

RFE 130361: Empty data set during backup flag
Request: In the Data Areas manual there is MCDFMPTY field that when set to 1, 
the data set was empty at the time of migration.  Would like the same for a 
backup. 
Response: MCCFMPTY added. 

The following messages are now issued to SYSLOG to enable automation:
RFE 116187: ARC0019I CELLPOOL cellpool ENCOUNTERED A SHORTAGE   
RFE 126300: ARC0263I DUMP VOLUME volser DELETED now issued to SYSLOG
RFE 132117: ARC1814I FAST REPLICATION BACKUP HAS COMPLETED SUCCESSFULLY AND   
DUMP IS NOW STARTING

RFE 119146: Improve FREEVOL messaging
Request: During FREEVOL of an ML1 volume, if an ARC0560E condition exists where 
migration to ML2 is limited due to lack of ML2 tape/DASD, datasets that should 
be migrated to ML2 stay on the ML1 volume. The corresponding ARC0734I message 
indicates that the dataset moved "TOVOL **", but the return code is zero 
and the reason code is zero. They should be RC5 REAS8. 
Response: Provided 
 
RFE 115386: ARC1001I messages routed to activity log
Request: BACKDS fails, we would like to see the failing ARC1001I message issued 
to the backup activity log. 
Response: Provided




Stay current with what's new in DFSMS

2019-07-26 Thread Glenn Wilcock
The DFSMS team recently created a LinkedIn Group for z/OS DFSMS: 
https://www.linkedin.com/groups/12238880/  Join the group to stay current on 
the latest enhancements, tips, news, etc.  For example, while z/OS release RFAs 
describe all of the major enhancements in a release, they don't include all of 
the minor RAS items and RFEs.  In my latest post to the group, I list all of 
the DFSMShsm RFEs that are included in V2R4... with more to follow later in the 
year.



Re: Thoughts on IBM z/OS Version 2 Release 3 enhancements

2019-04-25 Thread Glenn Wilcock
Documentation link is now live: 
http://publibz.boulder.ibm.com/zoslib/pdf/OA52703.pdf



Re: Thoughts on IBM z/OS Version 2 Release 3 enhancements

2019-04-24 Thread Glenn Wilcock
We just finished up an early test with a handful of clients and are in the 
process of closing all of the V2R3 APARs.  (Due to prereqs, they can't all 
close at the same time.)  The plan is for all of the PTFs to be available in 
May.  The support will be in the base of V2R4.  Thanks, Glenn



Re: HSM management

2019-01-09 Thread Glenn Wilcock
Have been wanting to add Query ODS for a while, along with making QUERY its 
own task.  RFE 65675 requests this.  The more votes, the better the odds...

Regarding the main thread, the REPORT command is limited, but the Report 
Generator support enables customized reports of the HSM SMF and CDS records.  
ISVs do offer very nice reporting/management capabilities.



Re: HSM and empty files

2019-01-04 Thread Glenn Wilcock
Hi,

Sorry to hear that you feel this way.  Our desire is to balance providing 
leading edge solutions while meeting our clients' needs for base functionality. 
 Recently, the z/OS team switched to using RFE so that we can better manage 
client requests.  It enables clients to vote on enhancements that are most 
meaningful to their environments. If there is a specific improvement that you'd 
like to see, please open an RFE and we can rank it with the other RFEs that we 
receive, based on overall client interest.  

In the last couple of years, the HSM team has closed 18 unique RFEs (a total of 
23).  The aforementioned APAR to process data sets with undefined data set orgs 
being one of them.

Glenn Wilcock



Re: HSM and empty files

2019-01-03 Thread Glenn Wilcock
Hi, 

I looked and didn't see any Patches for skipping empty data sets.

HSM treats these as normal since both migration and expiration can depend on 
the existence of a valid backup copy.  For this reason, HSM recently added 
support for data sets with an undefined data set org (OA52304), since some 
clients reported that a large number of these data sets were consuming a large 
amount of primary storage and HSM was not able to process them.

Glenn Wilcock



Re: HSM Migration Question

2019-01-02 Thread Glenn Wilcock
HSM uses the Base Data Component to determine data set migration eligibility.



Re: DFSMSHSM, RMM - Setup

2018-11-26 Thread Glenn Wilcock
Hi Jake,

Redbooks are always a great place to start to learn the basics.  Both DFSMShsm 
and RMM have primers.  Refer to the PROCLIB and PARMLIB members of each to 
understand how they are each currently configured.

Glenn Wilcock
DFSMShsm Architect



Re: DFHSM options for dataset back-up

2018-09-27 Thread Glenn Wilcock
DFSMShsm doesn't have any documented way to do this, and I haven't found any 
undocumented patches either.  

DFSMShsm also provides a NORESET option to not reset the DSCI when doing 
full-volume dumps.  I suspect that this is common across all backup products 
for full-volume processing, so that data set backups can continue as normal 
regardless of the dump cycle.  

Not sure of your specific scenario, but look at SMS GUARANTEED BACKUP 
FREQUENCY.  It drives periodic backups even if the DSCI is not set.  It has 
some caveats, but it's worth looking at if that is what you are trying to 
solve.

While we always welcome RFEs, just want to say up front that unless there are 
many additional votes from other clients also requesting this type of 
enhancement, it's unlikely that a unique request like this will ever make it 
high enough in our backlog that it will get done.

Glenn Wilcock
DFSMShsm architect



Re: Auto backup of zfs

2018-09-10 Thread Glenn Wilcock
Hi, we have not yet publicly announced a date.  If you have a non-disclosure 
agreement, we can discuss offline.  Thx, Glenn



Re: Auto backup of zfs

2018-09-06 Thread Glenn Wilcock
Please refer to this Statement of Direction regarding the intent to enhance 
DFSMShsm and DFSMSdss to backup and recover individual files within a zFS.

https://www-01.ibm.com/common/ssi/cgi-bin/ssialias?infotype=AN=CA=877/ENUSZP18-0290=lenovospain=es

Glenn Wilcock



Re: How to exclude files from HSM Backup or Migration

2018-02-05 Thread Glenn Wilcock
A recent enhancement enables HSM size thresholds to be specified within a 
management class.  For your specific case, you can specify within the 
management class that data sets > xx in size should not be migrated.  (For 
those who are interested, yes, you can also specify that data sets <= xx in 
size should not be migrated, or data sets > x in size should always go to 
ML2, etc.  This gives you some flexibility to do things based on size in the 
management class instead of having to write an MD exit.)

The APAR is OA52901.  Unfortunately, it was marked PE because of a MULTISYS 
issue.  The PE fix should be available in the near future.

Glenn Wilcock
DFSMShsm Architect



What do Space Invaders and DFSMShsm have in common?

2018-01-16 Thread Glenn Wilcock
What do Space Invaders and DFSMShsm have in common?  They share an original 
product release year of 1978!  Back when a gallon of gas was 63 cents, the DOW 
closed at 805, Illinois Bell introduced the first cellular mobile phone and 
Saturday Night Fever was all the rage. As with everything else, DFSMShsm has 
transformed immensely over the last 40 years.  At the upcoming SHARE in 
Sacramento, we'll be celebrating DFSMShsm's birthday at the session "DFSMShsm 
at 40: A Lean, Mean, Data Management Machine"!

At this celebration, I'd like to highlight some of the HSM record holders that 
we have out there.  Please reply to me directly with the values for your shop, 
as opposed to posting back on this board.  At SHARE, I'll 
announce the winners.  I will leave off company names / responder names unless 
I get an explicit permission to include them (excluding the first question).  
To get the information, use DCOLLECT for the HSM records and apply a sort.  
Better yet, the add-on HSM products should also return this information. 

- When did you first start working on DFSMShsm (even if you don't work on HSM 
anymore)? (I plan to have a prize for the person present at SHARE who has 
worked on HSM the longest)
- What is the creation date of your oldest migrated data set?  (05 Nov 1987 is 
the earliest date that I have so far)
- What is the creation date of your oldest backed up data set?
- How many tapes does your DFSMShsm manage? (Total tapes defined to HSM, 
regardless of logical or physical)
- What's the total number of migrated data sets that you have?
- What's the total number of backed up data sets that you have?

Thanks!   I look forward to seeing you at SHARE!
Glenn Wilcock
DFSMShsm Architect



Re: DFSMShsm Copy Pools

2017-11-01 Thread Glenn Wilcock
Hi,

There are many large clients who use DFSMShsm copy pools to backup their DB2 
systems.  Some also use it to create offline, full-volume copies of other 
volumes.  Only the FRBACKUP/FRRECOV/FRDELETE commands operate against copy 
pools.  FRRECOV does have a data set recovery option.  I recommend referring to 
the following Redbook: DFSMShsm Fast Replication Technical Guide.
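To make this concrete, a sketch of the typical command flow (the copy pool name and data set mask are placeholders I've made up, and the exact keywords should be verified against the Storage Administration manual and the Redbook above):

  FRBACKUP COPYPOOL(CP_DB2PROD) EXECUTE
  FRRECOV COPYPOOL(CP_DB2PROD) GENERATION(0)
  FRRECOV DSNAME(DB2P.DSNDBC.**) FROMCOPYPOOL(CP_DB2PROD)
  FRDELETE COPYPOOL(CP_DB2PROD) VERSIONS(1)

The first command drives the FlashCopy backup of every volume in the copy pool; the FRRECOV forms show full copy pool recovery from the most recent generation versus individual data set recovery from the copy pool.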

Glenn Wilcock
DFSMShsm Architect



Re: DFHSM QUESTION

2017-06-30 Thread Glenn Wilcock
Correct.  'MCDS' is the default.  You just need to specify one of the backup 
keywords.  Glenn



Re: HSM Backup to Disk

2017-05-31 Thread Glenn Wilcock
VTS does wonders for HSM.  Others have made good points about having all of the 
backup copies on disk.  One other thing that I would like to point out would be 
the increase in CPU consumption and elapsed time.  With tape, the compression 
is done within the tape controller.  If you are backing up to disk, HSM must do 
the compression.  That will significantly increase the CPU and will probably 
elongate the elapsed time.  If you do go with disk, I would highly recommend 
using zEDC for the compression.  That will actually decrease the CPU and 
improve the elapsed time because the compression ratio is so good and the data 
is compressed by DSS before being passed to HSM.
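For reference, zEDC for HSM-owned data is enabled through SETSYS; a sketch only (the subparameter names are from memory and should be confirmed against current documentation for your release):

  SETSYS ZCOMPRESS(DASDBACKUP(YES) DASDMIGRATE(YES))

This tells HSM to request zEDC compression for backup and migration copies written to DASD.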

Cloud Tape Connector is not an option for HSM data.  CTC works on SMS-managed 
tape data sets, which HSM and OAM data are not.  As noted earlier in this same thread, 
HSM stacks the data on tape as a single tape file and internally manages the 
location of each logical data set on tape by storing the File Block ID.  CTC 
doesn't work with this type of setup.

Regarding Cloud Storage, HSM and DS8K recently announced support for 
Transparent Cloud Tiering where the DS8K directly writes migration data to 
cloud storage.  The storage can either be on prem or off prem.  The value being 
that the data no longer flows through the z server, significantly reducing the 
CPU requirements and eliminating RECYCLE processing.  But, backup support is 
not yet available.  For those concerned about the data going to 'the cloud', we 
expect most z data to go to an on prem cloud, which is within the walls of your 
existing environments.  When going to an off prem cloud, z/OS will be providing 
data set level encryption, so that the data is encrypted on z before being sent 
out.  By policy, data will be able to be sent to ML2, on prem cloud or an off 
prem (public or private) cloud.

Additionally, IBM announced limited support for this same cloud support to 
target an IBM VTS.  This enables direct data movement between an IBM DS8K and 
IBM VTS for HSM migration and recall processing, also enabling significant CPU 
reduction, elimination of 16K block writes and elimination of RECYCLE 
processing.

Glenn Wilcock
DFSMShsm Architect



Re: Ransomware on Mainframe application ?

2017-05-19 Thread Glenn Wilcock

Pointers to some information from IBM on this topic...

https://securityintelligence.com/ is an IBM site that provides security related 
insight & analysis and provides valuable information regarding Ransomware, 
including WannaCry.

https://www-03.ibm.com/systems/z/solutions/security_integrity.html provides an 
overview of z Systems Security and is intended to help you stay current with 
security and system integrity fixes by providing current patch data and 
associated Common Vulnerability Scoring System (CVSS) ratings for new APARs. 
Security Notices are also provided to address highly publicized security 
concerns.

This link also includes the IBM z/OS System Integrity Statement, a portion of 
which states "IBM’s long-term commitment to System Integrity is unique in the 
industry, and forms the basis of z/OS’ industry leadership in system security. 
z/OS is designed to help you protect your system, data, transactions, and 
applications from accidental or malicious modification. This is one of the many 
reasons IBM z Systems remains the industry’s premier data server for 
mission-critical workloads." 

In addition to preventing Ransomware, enterprises need to protect data from 
being stolen.  IBM issued a Statement of Direction in the Announcement letter 
IBM United States Software Announcement 216-392, dated October 4, 2016, 
communicating "IBM plans to deliver application transparent, policy-controlled 
dataset encryption in IBM z/OS®. IBM DB2® for z/OS and IBM Information 
Management System (IMS™) intend to exploit z/OS dataset encryption."

When recovering from an accidental or malicious data destruction event, z/OS 
DB2 provides the RESTORE SYSTEM utility that recovers a DB2 instance to a 
specific point in time before the data was destroyed.  Additionally, IBM is 
gathering client feedback regarding extending this functionality to an entire 
enterprise. 

Glenn Wilcock
DFSMS Architect



Re: HSM followup question

2017-02-06 Thread Glenn Wilcock
HSM uses the current values of the assigned management class to manage a data 
set.  So, any changes made to the values of a management class will be used the 
next time that HSM selects for processing any data sets assigned to that 
management class.

A data set may be assigned to a different management class.  In fact, in z/OS 
V2R1 and higher, the HSM class transition support can be used to change the 
management class, storage class and/or storage group based on policy (note that 
data class is excluded).  In z/OS V2R2, the HSM MIGRATE TRANSITION command can 
also be used.
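As a sketch (the storage group and data set names here are made up for illustration), the V2R2 command-driven transitions look like:

  MIGRATE STORAGEGROUP(SGPROD) TRANSITION
  HMIGRATE DSNAME('USER.PROD.DATA') TRANSITION

The first form transitions eligible data sets across a whole storage group; the second transitions a single data set on demand.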

Glenn Wilcock
DFSMShsm Architect



Re: CREATED date for migrated data sets?

2016-05-27 Thread Glenn Wilcock
Specifically, MCDDLC (Creation date stored in MCD) is copied to UMCREDT in the 
'M' record for DCOLLECT.  Further, the DFSMS Report Generator function can be 
used to process DCOLLECT M records to pull just the records that you want.  
Refer to Chapter 14 of the DFSMShsm Storage Administration.  Glenn



Re: EAV bug or feature?

2016-05-11 Thread Glenn Wilcock
Besides the Redbooks, a good place to start is the 'z/OS DFSMS Using New 
Functions' manual.



Re: Example of ACS Environment of SPMGCLTR

2016-04-26 Thread Glenn Wilcock
Hi Theo,

Yes, SPMGCLTR was introduced in the ACS routines in 2.1.  It is used by HSM for 
transitions of structured data, as opposed to CTRANS, which is used by OAM for 
unstructured data.

I was incorrect in my earlier post.  After communicating with the developers, 
they indicated that management class, storage class and storage group should be 
passed into the SPMGCLTR routine as read only variables, but those are the only 
ones.  We need to add additional variables like UNIT, etc.

Glenn



Re: Example of ACS Environment of SPMGCLTR

2016-04-22 Thread Glenn Wilcock
Hi Theo,

Change:

 IF &ACSENVIR = 'SPMGCLTR' THEN
   SELECT (&MGMTCLAS)
     WHEN ('USRTIER1')
       SET &MGMTCLAS = 'USRTIER2'
     OTHERWISE SET &MGMTCLAS = &MGMTCLAS
   END

to:

 IF &ACSENVIR = 'SPMGCLTR' THEN
   SET &MGMTCLAS = 'USRTIER2'
 END

This obviously overly simplifies it, unless you only plan on making the change 
to one management class.  Unfortunately, with the current limitations, you may 
not have access to the RO variables that you need to add the correct logic.

I also forgot to point out that we shipped a Development APAR OA47700 that 
enables you to override ACS logic and just specify MC, SC and / or storage 
group on the various Migrate Transition commands.  For example, you could 
transition a data set to just a new storage group with the command HMIGRATE 
DSNAME(... ) TRANSITION(STORAGEGROUP(Nearline)).

If you'd like to set up a conference call to discuss your options in more 
detail, you can email me at wilc...@us.ibm.com.

Glenn



Re: Example of ACS Environment of SPMGCLTR

2016-04-21 Thread Glenn Wilcock
Hi, 

I believe that the problem is that my original presentations on this topic 
showed using the existing classes/group to determine the new class/group.  
Unfortunately, that was function that did not make shipped support.  (I have 
updated the examples in my most recent presentations).  For clients who need 
that support, I have been requesting that they open an RFE.  We definitely have 
on our roadmap the addition of that functionality.

Since the existing management class is not being passed in, your logic that 
sets the new management class based on the current management class is skipped, 
meaning that the management class is not changed.  Same for the SC and SG.  
Since none of the classes/groups change, that is why you are getting the 
message that you are.  A data set is only eligible to transition if there is at 
least one change.

I believe that if you remove the check for the existing class and just set the 
class/group to a specific value, that it will get around this problem.

Glenn Wilcock



Re: HSM QUESTION

2016-03-14 Thread Glenn Wilcock
You are correct.  'PREPARE' only creates an ABR record.  No verification or 
data movement occurs.

Glenn Wilcock



Re: DFHSM/SMS QUESTION - SPACE MANAGEMENT

2016-01-15 Thread Glenn Wilcock
Hi, a couple of other things to look at: (1) Verify that the volume is being 
selected for space management.  As indicated, HSM doesn't process data sets on 
a volume unless the volume exceeds its high threshold.  Once a volume is 
selected, HSM will process all eligible data sets on that volume until the low 
threshold is reached.  (2) Enable PATCH .MGCB.+26 X'FF', which indicates that 
HSM should issue additional ARC0734I messages to indicate why data sets weren't 
selected for processing.  (This patch is documented in the DFSMShsm Diagnosis 
manual).
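Putting the two checks together (DFHSM and PRIM01 are placeholders for your HSM started task name and the volser in question):

  F DFHSM,PATCH .MGCB.+26 X'FF'
  F DFHSM,MIGRATE VOLUME(PRIM01)

After re-driving the volume, review the ARC0734I messages in the HSM log to see why individual data sets were not selected.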

Glenn Wilcock
DFSMShsm Architect



Re: RLS implementation for CDS's in DFHSM

2015-12-10 Thread Glenn Wilcock
Our top recommendation for improving DFSMShsm throughput is to use RLS for the 
HSM CDSes.  In nonRLS mode, each HSM locks access to the CDSes while it 
performs its queued up I/O requests.  While that host has the CDSes locked, all 
other HSM hosts are queueing up their I/O.  When a host gets exclusive access, 
it has to first flush all of its buffers because they are no longer valid and 
then do a VSAM VERIFY.  All of the buffers are then repopulated for the I/Os 
that have been queued up.

All of that overhead goes away with RLS.  Also, Read I/Os are faster, so read 
intensive workloads like ExpireBV, Secondary Space Management and Audit perform 
faster, even when there isn't a lot of multiple host concurrent workload.  The 
higher the amount of concurrent HSM host activity that you have, the greater 
the improvement that you'll see with RLS.

In z/OS V2R1, DFSMShsm enhanced its error recovery for SMSVSAM server errors.  
Prior to V2R1, when there was an SMSVSAM server error, HSM lost access to the 
CDSes, so HSM would just take a fatal abend.  (Yes, very ugly, we know).  
Starting in V2R1, DFSMShsm will hold I/O when it identifies that there is an 
SMSVSAM error.  HSM waits for the SMSVSAM server to restart.  After the server 
restart, HSM will automatically retry all of the I/Os that were held.  The only 
I/Os that have to be failed are Update I/Os after a Read for Update, when the 
Read for Update was before the server error.

Glenn Wilcock
DFSMShsm Architect



Re: DFHSM Recovery question

2015-08-17 Thread Glenn Wilcock
Hi,

Did you happen to see any errors with reading the VTOC copy data set?

From the Storage Admin: DFSMShsm uses the latest VTOC copy data set to recover 
the volume unless an error occurs in allocating or opening the latest VTOC copy 
data set. If the error occurs, DFSMShsm attempts to recover the volume, using 
the next latest copy of the VTOC copy data set. If a similar error occurs 
again, the volume recovery fails.

You indicate that you only have one VTOC copy.  Does the date correspond to the 
Week 2 Dump date?  

I wasn't able to find any other logic that would cause HSM to recover data from 
the Week 1 VTOC copy.  If the above don't apply, then a PMR could be opened to 
analyze the situation.

Glenn Wilcock
DFSMShsm Architect



Re: HSM Recalls and MASKING

2015-06-01 Thread Glenn Wilcock
Hi Lizette,

We want the documentation to be as helpful as possible.  It would certainly 
be reasonable for you to open a Request for Publication Change asking that the 
HSM Storage Administration point to DFSMShsm Managing Your Own Data, which has 
additional information about using wild cards when recalling data.

Thanks, 
Glenn Wilcock



Re: DFHSM - Dump w/tolerate enqueue failure ?

2015-05-17 Thread Glenn Wilcock
Hi, 

If you have FlashCopy, you can create Point in Time backup copies and then dump 
them to tape.  This is supported via the DB2 BACKUP SYSTEM utility and DFSMShsm 
FRBACKUP.  You can also use DSS to create Image Copies.  I think that it would 
be best if we have a call with you, DB2, DSS and HSM (myself) to discuss the 
various options available to backup DB2 data.  I'll set up a call.

Glenn Wilcock
DFSMShsm Architect



Re: DFHSM - Dump w/tolerate enqueue failure ?

2015-05-14 Thread Glenn Wilcock
Hi,

DFSMShsm invokes DSS DUMP FULL to create dump copies.  HSM specifies the 
TOLERATE(IOERROR) keyword.  This varies from the ENQF option.  ENQFAILURE is 
mutually exclusive with FULL.  TOL(IOE) indicates that I/O errors on the volume 
being dumped should not fail the dump (unless there are 100 or more).  On 
restore, the tracks that couldn't be read are cleared.  (Refer to the DSS 
manual for DUMP FULL and TOLERATE(IOERROR)).  Full volume dump processes at the 
Track/Cylinder level, not the data set level.  Individual data set enqueues are 
not obtained.  The VTOC is locked to prevent any creates, extends, renames and 
deletes during the dump.  Active data would be a Fuzzy backup if it is not 
being quiesced during the dump, but it would still be dumped.

What are you seeing when you indicate that no data is backed up?

Glenn Wilcock
DFSMShsm Architect



Re: HSM full vol dump question

2015-05-06 Thread Glenn Wilcock
Hi,

You can also try LIST PRIMARYVOLUME ALLDUMPS BCDS.  (You may have to specify a 
specific volser).

Glenn Wilcock
DFSMShsm Architect



Re: Thoughts on DFSMShsm ABARS in today's environment

2015-04-29 Thread Glenn Wilcock
It uses IBM FlashCopy, which both Hitachi and EMC support.  (We have clients 
with a mixed vendor storage environment who have implemented this solution).

Glenn



Re: Thoughts on DFSMShsm ABARS in today's environment

2015-04-29 Thread Glenn Wilcock
Hi Again,

As you can imagine, over the years our L2 folks have been called out to help 
recover data after many an 'oops'.   In many cases, there isn't a backup for 
all of the data, which has resulted in data loss.  Hence, most will concur that 
any backup is better than no backup.

For the case of backing up Online Systems, investigate what the applications 
themselves offer.  In the case of DB2, DB2 and DFSMS have worked together to 
create a Continuous Data Protection solution (CDP) through the DB2 BACKUP 
SYSTEM / RESTORE SYSTEM utilities.  BACKUP SYSTEM creates a fuzzy backup that 
can be made consistent during recovery through Log Applies.  In order to create 
a consistent recovery, creates/renames/deletes/extends must be quiesced during 
the backup, but data updates are allowed to continue.  (Essentially VTOC 
updates are held to ensure that the VTOCs are data consistent).  DB2 invokes 
DFSMShsm to create FlashCopy PiT backups (FRBACKUP).  During Recovery, the 
client selects the recovery point, which can be 'current', or a prior point in 
time.  DFSMShsm recovers the PiT copy using FlashCopy (if still on disk) or 
from Tape (if the backup copy has rolled off to tape).  DB2 then performs a log 
apply up to the recovery point, creating a data consistent recovery.  Some 
clients also use this process to create Clones.  When this CDP solution is 
combined with Metro Mirror, there is a very high level of protection from both 
physical and logical loss.  This solution can also be used for SAP systems, 
when DB2 is the database.

Glenn Wilcock
DFSMShsm Architect



Re: Thoughts on DFSMShsm ABARS in today's environment

2015-04-28 Thread Glenn Wilcock
Hi Lizette,

There have been some great appends to this question.  I also wanted to add my 
two cents...

When determining your backup strategy, you need to determine the desired RTO 
(Recovery Time Objective), RPO (Recovery Point Objective) and BWO (Backup Window 
Objective) for both physical loss (storage failure) and logical loss (data 
corruption) at an acceptable price point.  This can be daunting.  You'll find 
that this may require that multiple backup strategies be used for the various 
types of data in your environment.

To protect against physical data loss (some type of storage failure, whether it 
is a result of a failure with the storage itself or an external cause such as 
power outage, natural disaster, etc), there are many types of mirroring 
solutions.  These solutions create two or more copies of the data at different 
locations such that if the primary storage fails, processing can switch to the 
alternate location.  

To protect against logical data loss (some type of data corruption, whether it 
is caused by a software defect or human error), there needs to be point-in-time 
copies.  Mirroring solutions do not provide any protection here because the 
logical corruption is just instantly mirrored to all locations of the data.  
There are many different types of PiT copy techniques.  One of the important 
factors is to ensure that the data is quiesced so that the backup is data 
consistent, or that the recovery technique is able to create a consistent 
recovery from a fuzzy, non-consistent backup.  

The Use Case for ABARS is when you need to create a data consistent backup of a 
set of data that can be used to recover that entire set of data to the same 
point in time.  As was mentioned in another append, an example is backing up an 
entire Payroll application that can be recovered in its entirety with a single 
recovery command.  After 9/11, I visited a client who was setting up ABARS just 
for Payroll because they wanted to guarantee that they could pay their 
employees after a disaster...  If you want your employees to show up after a 
disaster... you'd better be sure that you can pay them.  (In this case, ABARS was 
being used to protect against physical data loss).   ABARS also allows you to 
recover one or more individual files from the aggregate backup.

When mirroring alternatives are too expensive, ABARS can be used as a lower 
cost DR solution that clients use as follows:
For backups, full volume dumps are created for the system packs and ABARS is 
used to backup all of the various applications and their related data.
DR tests are performed by renting space at an offsite facility.  The first step 
is to recover all of the system packs and start the system.  They then recover 
the remainder of their data by prioritizing the ABARS recoveries.  They start 
with their most critical applications and bring them up as the recoveries 
complete.

So, when determining your backup strategy, first determine your recovery needs 
and then determine which backup technique will best fit those needs.

Glenn Wilcock
DFSMShsm Architect



Re: DFHSM BCDS Z/OS 1.13

2014-12-11 Thread Glenn Wilcock
Hi, I agree; it sounds like your CDSes have simply grown and need more 
space.  Please note that the CA Reclaim function became available in V1R12.  
When CA Reclaim is enabled for the CDSes, it minimizes the need to reorganize 
the CDSes, especially after running EXPIREBV.  When a VSAM Control Area becomes 
empty, CA Reclaim will automatically free up that space, which is a primary 
need to run a reorg.

Glenn Wilcock
HSM Architect



Re: HSM ML2 on disk

2014-11-19 Thread Glenn Wilcock
Hi,

The HSM ML2 Disk support is very basic, with no plans to enhance it.  Clients 
who don't want physical tape anymore (for whatever reason) generally select 
either a disk-only VTS for ML2 or implement all ML1.  If you go 
all ML1, I would recommend implementing ML1 Overflow volumes to help ensure 
that you have space for larger data sets.  For ML1 Overflow volumes, the 
selection algorithm selects the volume with the least amount of freespace that 
will contain the data set, in order to maximize the most amount of freespace on 
the volumes.  (Standard ML1 selection chooses the volume with the most 
freespace which results in volumes getting filled evenly but not leaving any 
volume with a significant amount of freespace for larger data sets).
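A sketch of defining an ML1 overflow volume (the volser and unit type are placeholders; confirm the syntax in the Storage Administration manual):

  ADDVOL MIG1OV UNIT(3390) MIGRATION(MIGRATIONLEVEL1 OVERFLOW)

Regular ML1 volumes are defined the same way without the OVERFLOW keyword; the selection algorithms described above then apply to each pool.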

Every client that I have talked to who has implemented a large disk-cache VTS 
is very pleased with the HSM improvements.  For large customers and those who 
need to minimize storage costs, the VTS also has backend physical tape.



Re: HSM Recall process

2014-10-20 Thread Glenn Wilcock
Back in the day, HSM would process requests from tape in File Block ID order.  
But, when tape technology went to a serpentine pattern, processing in FBID 
order no longer ensured optimal results for random recalls, so HSM changed to 
using 1) Priority 2) FIFO.  Clients driving a large number of concurrent 
recalls from tape found that this resulted in slower performance.  So, we 
introduced SETSYS TAPEDATASETORDER(FBID).  When you know that you will be 
concurrently recalling a large number of data sets from a single tape, we 
recommend that you enable this parameter.  For general processing, we recommend 
using the default of Priority.  (There are also subparameters to just enable 
this for Recall or Recover).
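In SETSYS terms, before a planned mass recall from a single tape:

  SETSYS TAPEDATASETORDER(FBID)

and back to the default afterwards:

  SETSYS TAPEDATASETORDER(PRIORITY)

(The per-function subparameter form mentioned above is not shown here; verify its exact syntax in the Storage Administration manual.)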

Glenn Wilcock
HSM Architect



Re: [Bulk] Re: [IBM-MAIN] [Bulk] Re: [IBM-MAIN] General question on moving DFHSM work from mix TAPE/DASD to More DASD

2014-08-28 Thread Glenn Wilcock
Hi Clark, the current max size is 1 TB.  We've been steadily increasing the 
size to keep up with our largest customers who are pushing the limits for the 
amount of online data that they manage.  Glenn



Re: [Bulk] Re: [IBM-MAIN] [Bulk] Re: [IBM-MAIN] General question on moving DFHSM work from mix TAPE/DASD to More DASD

2014-08-27 Thread Glenn Wilcock
Hi Ron,

I agree with everything that you say about CU tiering, except for the claim that 
a migration tier is no longer necessary.  CU tiering is an exciting new storage 
opportunity for ILM.  I'd like to discuss this topic with you the next time 
that you attend SHARE or the Technical University.  I've discussed this topic 
with your colleagues from HDS and also those from EMC.  Let me go into more 
detail on some of the questions that were raised.

- I don't recall (pun intended) specific individuals that I've discussed 
tape/disk with, but HDS presentations on Tiering at SHARE show data moving to 
ML2 for archive after it has gone through the L0 - Ln disk tiers.  These 
presentations also highlight the value of software / hardware tiering on a 
slide or two.  Industry charts that show cost comparisons of cost/GB of storage 
show the clear cost value of tape.
- I didn't mean to say that cu tiering is only for small environments.  I was 
communicating that I believe that only a small environment could eliminate a 
migration tier.  CU tiering is of value for all environments.  There is a 
finite amount of data that can be online to z/OS because of the UCB 
constraints.  To eliminate the migration tier, you have to uncompress and 
return all migration data to online UCBs.  That may be an option for smaller 
environments, but I don't see value in having uncompressed, archived data 
sitting on online disk, and large environments physically can't do that because 
of the limit to the amount of data that can be online.
- DFSMShsm Transitioning is not a 'kludge' but rather a strength.  Strengths of 
cu tiering 1) transparent 2) works on open data 3) no host MIPS.  Strengths of 
Transitions 1) Data set level 2) Business policy based 3) Works across CUs.   
Weakness of cu tiering 1) movement done on heatmap with no understanding of 
data's business value.  Weakness of transitions 1) data must be quiesced.  Cu 
transitining weaknesses: I reorg a DB2 object.  The data that had been 
fine-tuned to the correct tier is now scrambled across multiple tiers and until 
the cu relearns the correct tiers, I suffer subpar performance.  Also, in the 
presentations that I've seen, CU tiering is appropriate for data that can be 
learned, like database data.  It is not good for data like Batch data.  (Once 
again, look at the HDS presentations on tiering).
- The example that I have used for combining the two technologies that HDS has 
included in their Tiering presentation: 3 Tiers: T0 is SSD and Enterprise disk. 
 T1 is Enterprise and SAS.  T2 is Migration.  Newly allocated data goes to T0.  
Cu tiering moves the data between SSD and Enterprise based on heat map.  After 
a policy-based amount of time, the data has diminishing business value and HSM 
transitions it to T1.  Data remains on the lower cost T1 while it is still 
active and the cu moves the data between Enterprise and SAS based on heat map.  
After the data goes inactive and should be archived, HSM migrates it to the 
migration tier.  The migration tier can be all ML1, ML1/ML2, ML2 Virtual with 
all disk, ML2 Virtual with a combo of disk and tape, or all tape... whatever is 
best for each client's environment, as each has its strengths.
- If you reference my presentations on tiering, I have been a proponent of 
eliminating multiple migration tiers for years.  I have been recommending that 
customers use CU tiering for online tiering and that they don't migrate data 
until it really goes inactive, and then send it to a single migration tier.  
Until recently, ML2 was the best choice because you get the compression for 
free on the tape cu (virtual or real).  In this quarter, HSM is shipping 
support for the new z compression engine.  That provides very high compression 
ratios for data on ML1, without using MIPS for compression.  So, that now makes 
ML1 attractive also, for those customers who want a tapeless environment, like 
those on this thread.
- There is a clear cost value to tape.  If a client can afford to have all of 
their data uncompressed on online disk and don't have to worry about the UCB 
constraints, then more power to them.  But, I suspect that most clients are 
still looking to keep their storage costs to a minimum.

Our strategy is to provide all the options so that clients can select the ILM 
strategy that best meets the needs of their data.  Integrating the strengths of CU 
Tiering with Software Tiering provides the best of both worlds.

Glenn


