Re: [Bulk] Re: [IBM-MAIN] [Bulk] Re: [IBM-MAIN] General question on moving DFHSM work from mix TAPE/DASD to More DASD
Hi Clark, the current max size is 1 TB. We've been steadily increasing the size to keep up with our largest customers who are pushing the limits for the amount of online data that they manage. Glenn -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
Re: [Bulk] Re: [IBM-MAIN] [Bulk] Re: [IBM-MAIN] General question on moving DFHSM work from mix TAPE/DASD to More DASD
Hi Ron, I agree with everything that you say about cu tiering, except for the fact that a migration tier is no longer necessary. CU tiering is an exciting new storage opportunity for ILM. I'd like to discuss this topic with you the next time that you attend SHARE or the Technical University. I've discussed this topic with your colleagues from HDS and also those from EMC. Let me go into more detail on some of the questions that were raised.

- I don't recall (pun intended) the specific individuals that I've discussed tape/disk with, but HDS presentations on Tiering at SHARE show data moving to ML2 for archive after it has gone through the L0 - Ln disk tiers. These presentations also highlight the value of software / hardware tiering on a slide or two. Industry charts that compare cost/GB of storage show the clear cost value of tape.

- I didn't mean to say that cu tiering is only for small environments. I was communicating my belief that only small environments could eliminate a migration tier. CU tiering is of value for all environments. There is a finite amount of data that can be online to z/OS because of the UCB constraints. To eliminate the migration tier, you have to uncompress and return all migration data to online UCBs. That may be an option for smaller environments, but I don't see value in having uncompressed, archived data sitting on online disk, and large environments physically can't do that because of the limit to the amount of data that can be online.

- DFSMShsm Transitioning is not a 'kludge' but rather a strength. Strengths of cu tiering: 1) transparent, 2) works on open data, 3) no host MIPS. Strengths of Transitions: 1) data set level, 2) business policy based, 3) works across CUs. Weakness of cu tiering: 1) movement is done on a heat map with no understanding of the data's business value. Weakness of transitions: 1) data must be quiesced. Another cu tiering weakness: I reorg a DB2 object.
The data that had been fine-tuned to the correct tier is now scrambled across multiple tiers, and until the cu relearns the correct tiers, I suffer subpar performance. Also, in the presentations that I've seen, CU tiering is appropriate for data whose access patterns can be learned, like database data. It is not good for data like batch data. (Once again, look at the HDS presentations on tiering.)

- The example that I have used for combining the two technologies, which HDS has included in their Tiering presentation, has 3 tiers: T0 is SSD and Enterprise disk, T1 is Enterprise and SAS, T2 is Migration. Newly allocated data goes to T0. Cu tiering moves the data between SSD and Enterprise based on the heat map. After a policy-based amount of time, the data has diminishing business value and HSM transitions it to T1. Data remains on the lower-cost T1 while it is still active, and the cu moves the data between Enterprise and SAS based on the heat map. After the data goes inactive and should be archived, HSM migrates it to the migration tier. The migration tier can be all ML1, ML1/ML2, ML2 virtual with all disk, ML2 virtual with a combo of disk and tape, or all tape... whatever is best for each client's environment, as each has its strengths.

- If you reference my presentations on tiering, I have been a proponent of eliminating multiple migration tiers for years. I have been recommending that customers use CU tiering for online tiering, that they don't migrate data until it really goes inactive, and that they then send it to a single migration tier. Until recently, ML2 was the best choice because you get the compression for free on the tape cu (virtual or real). This quarter, HSM is shipping support for the new z compression engine. That provides very high compression ratios for data on ML1, without using MIPS for compression. So that now makes ML1 attractive as well, for those customers who want a tapeless environment, like those on this thread.

- There is a clear cost value to tape.
If a client can afford to have all of their data uncompressed on online disk and doesn't have to worry about the UCB constraints, then more power to them. But I suspect that most clients are still looking to keep their storage costs to a minimum. Our strategy is to provide all the options so that clients can select the ILM strategy that best meets the needs of their data. Integrating the strengths of CU Tiering with Software Tiering provides the best of both worlds. Glenn
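Glenn's combined hardware/software flow above can be sketched as a simple placement policy. This is a hypothetical illustration, not DFSMShsm code: the tier names follow his example, but the age thresholds and data-set attributes are invented for the sketch. Within T0 and T1, the control unit would still shuffle pages on its own heat map; that part is invisible at this level.

```python
from dataclasses import dataclass

# Hypothetical policy ages (days); real values are site-specific SMS policy.
TRANSITION_AGE = 90     # T0 -> T1 once business value has diminished
MIGRATION_AGE = 365     # T1 -> migration tier once truly inactive

@dataclass
class DataSet:
    name: str
    age_days: int          # days since allocation
    days_inactive: int     # days since last reference

def software_tier(ds: DataSet) -> str:
    """Data-set-level, policy-based placement (the DFSMShsm role).

    CU tiering moves pages within T0 (SSD/Enterprise) and within
    T1 (Enterprise/SAS) on its own heat map; that is not modeled here.
    """
    if ds.days_inactive >= MIGRATION_AGE:
        return "ML2"                # archive: single migration tier
    if ds.age_days >= TRANSITION_AGE:
        return "T1"                 # still active, lower-cost online tier
    return "T0"                     # new data: SSD + Enterprise pool

assert software_tier(DataSet("PROD.GL.Y2014", 30, 2)) == "T0"
assert software_tier(DataSet("PROD.GL.Y2013", 200, 10)) == "T1"
assert software_tier(DataSet("PROD.GL.Y2010", 1500, 700)) == "ML2"
```

The point of the split is that the age-based decisions carry business policy, while the heat map only carries I/O history.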
Re: [Bulk] Re: [IBM-MAIN] [Bulk] Re: [IBM-MAIN] General question on moving DFHSM work from mix TAPE/DASD to More DASD
On 27 Aug 2014 09:30:27 -0700, in bit.listserv.ibm-main you wrote: <snipped - Glenn's reply on CU tiering and the migration tier, quoted in full above> What is the largest size 3390 that can be defined? As someone who is in possession of multiple 1 and 2 terabyte portable disks for his PC, I would hope that in this day and age the answer is in terabytes. If not, the idiots who wouldn't spend 26 million to allow FBA on MVS have created a disaster. Clark Morris
Re: [Bulk] Re: [IBM-MAIN] General question on moving DFHSM work from mix TAPE/DASD to More DASD
On Tue, 26 Aug 2014 00:18:21 -0700, Ron Hawkins wrote: ... but I drank the cool-aid back in 1994 with Iceberg That explains so much :0) Not that I'm about to argue storage tiering with you. Shane ...
Re: [Bulk] Re: [IBM-MAIN] General question on moving DFHSM work from mix TAPE/DASD to More DASD
On Tue, 26 Aug 2014 00:18:21 -0700, Ron Hawkins ronjhawk...@sbcglobal.net wrote: A three tier strategy using HDD or SSD for Tier 1, Nearline SAS for ML2, and virtualized Brand-X midrange storage for Tier 3 presents a new paradigm for archiving inactive and dormant data sets, including back-ups, that I believe over time can displace DFSMShsm altogether. How does that present the location of the data to z/OS? You just have one giant pool of VOLSERs, and once a dataset is written to a DASD volser, its z/OS-perceived location never changes, but the hardware is busy moving virtual tracks around amongst T1, T2 and T3 as it sees fit? That's an intriguing idea; I'd be all for getting rid of HSM. What could you use for managing retention and retrieval of application backups and archives? Dana
Re: [Bulk] Re: [IBM-MAIN] General question on moving DFHSM work from mix TAPE/DASD to More DASD
Ron, One of your explanation paragraphs caught my attention, so I'm asking out of curiosity, for my own benefit. Your paragraph: What I find important is there is no data transformation or recall latency: it is all transparent to the application. You have to read 12 months of General Ledger files or SMF data sets? The application simply reads them directly from whatever tier disk the pages happen to be on. You're not waiting 24 hours for data sets scattered all over myriad ML2 tapes to be recalled, you don't have to find redundant Primary space to store the recalled data sets, and there won't be any TMM thrash when they are migrated again by the next space management cycle. That process is transparent to z/OS in the way we thought the STK Iceberg would go. Of course all this dormant data remains replicated with TC and/or HUR while it is being shuffled around the backstore tiers in both sites - you're not moving any DFSMShsm traffic across the replication links. /Your paragraph. The sentence where you said the application simply reads them directly from whatever tier disk the pages happen to be on intrigues me. Does the HDS scenario you are talking about here use some kind of algorithm to leave this kind of data on the level-3 spindles for a certain number of reads (or something else like that), or once the page is referenced, does it work in the background to elevate the pages to higher performance tiers? Thanks, Rex
Re: [Bulk] Re: [IBM-MAIN] [Bulk] Re: [IBM-MAIN] General question on moving DFHSM work from mix TAPE/DASD to More DASD
Dana, You're correct. This just looks like one giant pool of 3390-A to z/OS. z/OS is blissfully unaware of where a track is actually stored, similar to how disk arrays have worked for 20 years now, except the pieces of a data set may be on different disk tiers based on the activity of the page. I wouldn't go as far as to say the controller is busy moving pages between tiers, but I wouldn't say it is negligible either. I think what is more important is that occasional access to an inactive or dormant data set does not in itself trigger any page movement between tiers. Neither will any intense activity to a dataset that is predominantly cache hits. It is the back-end IO activity that defines a page's IO history. There are some happy value metrics designed to avoid tier thrashing every time a page crosses an IO/hour threshold. For archiving there is a lot of talk about moving ML1 and ML2 to tiered pools and radically changing the migration intervals. I'm suggesting a practical vision of doing away with ML1 and ML2 altogether. The data is RAID protected - usually RAID-6 - and so the whole DFSMShsm backup requirement and strategy can be reviewed. Ask yourself why you keep the very last DFSMShsm backup for three years, when you can just as easily keep the very last copy of the data set on a cheap RAID-6 disk array. DFSMShsm may still have a role to play as a dataset backup utility, but why aren't the backups going to a tiered pool where they can age out to commodity disk at the page level? Personally I have usually seen DFSMShsm as a backup vehicle for development and TSO data sets, while production applications tend to use FDR, DFSMSdss, etc. I also like the idea of using Flashcopy to put a date-time-suffixed copy of related datasets into a tiered Storage Group.
Ron -Original Message- From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf Of Dana Mitchell Sent: Tuesday, August 26, 2014 5:54 AM To: IBM-MAIN@LISTSERV.UA.EDU Subject: [Bulk] Re: [IBM-MAIN] [Bulk] Re: [IBM-MAIN] General question on moving DFHSM work from mix TAPE/DASD to More DASD <snipped - Dana's question, quoted in full above>
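Ron's distinction above - that cache-hit activity never moves pages, only back-end disk I/O feeds a page's tiering history - can be sketched in a few lines. This is a hypothetical illustration: the function names and the promotion threshold are invented; real controllers use their own proprietary bookkeeping.

```python
def backend_io_per_hour(total_ios: int, cache_hits: int, hours: float) -> float:
    """Cache hits are satisfied from the controller and never reach the
    back-end disks, so they are excluded from the page's I/O history."""
    return max(total_ios - cache_hits, 0) / hours

def should_promote(total_ios: int, cache_hits: int, hours: float,
                   threshold: float = 10.0) -> bool:
    """Promote a page only when its *back-end* I/O rate crosses an
    (invented) IO/hour threshold."""
    return backend_io_per_hour(total_ios, cache_hits, hours) >= threshold

# Intense activity that is predominantly cache hits triggers no movement...
assert not should_promote(total_ios=100_000, cache_hits=99_995, hours=1.0)
# ...while sustained back-end reads against a "dormant" data set can.
assert should_promote(total_ios=600, cache_hits=0, hours=1.0)
```

This is why, in Ron's scenario, an occasional read of an archived data set from the cheap tier costs nothing in tier churn.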
Re: [Bulk] Re: [IBM-MAIN] [Bulk] Re: [IBM-MAIN] General question on moving DFHSM work from mix TAPE/DASD to More DASD
Rex, It is based on an IO per hour rate for the page. Occasionally referencing a page, or thousands of pages, will not cause the page(s) to be promoted from tier 3 unless the activity is prolonged or intense enough to change the IO/hour. There are both short and long term IO/hour measures that are weighted to influence promotion/demotion. If the cycle time is set appropriately and the activity is extreme enough, the pages could start to be promoted within 30 minutes. Ron -Original Message- From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf Of Pommier, Rex Sent: Tuesday, August 26, 2014 7:07 AM To: IBM-MAIN@LISTSERV.UA.EDU Subject: [Bulk] Re: [IBM-MAIN] [Bulk] Re: [IBM-MAIN] General question on moving DFHSM work from mix TAPE/DASD to More DASD <snipped - Rex's question, quoted in full above>
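Ron's weighted short- and long-term IO/hour measures can be sketched as a blended rate with a hysteresis band, so a brief burst alone cannot promote a page. Everything here is a guess at the shape of the mechanism: the weights, thresholds, and three-tier numbering are invented for illustration, not HDS's actual algorithm.

```python
def blended_io_rate(short_term: float, long_term: float,
                    w_short: float = 0.7) -> float:
    """Blend a short-term and a long-term IO/hour measure; the long-term
    component keeps an isolated spike from dominating the decision."""
    return w_short * short_term + (1.0 - w_short) * long_term

def next_tier(tier: int, short_term: float, long_term: float,
              promote_at: float = 10.0, demote_at: float = 1.0) -> int:
    """Tier 0 is fastest, tier 2 slowest. The gap between promote_at and
    demote_at is a hysteresis band that avoids thrashing when a page
    hovers near a boundary."""
    rate = blended_io_rate(short_term, long_term)
    if rate >= promote_at and tier > 0:
        return tier - 1
    if rate <= demote_at and tier < 2:
        return tier + 1
    return tier

# An occasional reference spike: the page stays in tier 2.
assert next_tier(2, short_term=4.0, long_term=0.1) == 2
# Prolonged, intense activity: the page starts to be promoted.
assert next_tier(2, short_term=20.0, long_term=12.0) == 1
```

Promotion happens one tier per cycle here, which matches Ron's note that pages "start to be promoted" within a cycle time rather than jumping straight to tier 0.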
Re: [Bulk] Re: [IBM-MAIN] General question on moving DFHSM work from mix TAPE/DASD to More DASD
I suspected that you were with your brother. This must be quite disheartening to say the least. The man from CRA is back from holidays and asked yesterday how the MICS component was coming along. John T. Abell President International Software Products Tel: 800-295-7608 Ext: 224 International: 1-416-593-5578 Ext: 224 Fax: 800-295-7609 International: 1-416-593-5579 E-mail: john.ab...@intnlsoftwareproducts.com Web: www.ispinfo.com -Original Message- From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf Of Pommier, Rex Sent: Tuesday, August 26, 2014 10:07 AM To: IBM-MAIN@LISTSERV.UA.EDU Subject: Re: [Bulk] Re: [IBM-MAIN] General question on moving DFHSM work from mix TAPE/DASD to More DASD <snipped - Rex's question, quoted in full above>
Re: [Bulk] Re: [IBM-MAIN] General question on moving DFHSM work from mix TAPE/DASD to More DASD
While control unit storage tiering may be considered as a replacement for HSM processing in smaller environments, such a recommendation is an oversimplification of the need for a comprehensive ILM strategy to properly manage data in middle-to-large environments. At the various conferences that I attend each year, this concept was originally discussed when cu tiering was first introduced, but after discussions, all three vendors see the value of HSM ILM and cu tiering being used together to create a powerful solution, as opposed to trying to select one over the other. Each tiering technique, hardware and software, has strengths and weaknesses. Using each technique to its strengths provides tremendous opportunity as we move forward with managing the significant growth of data that we are seeing. In z/OS V2R1, DFSMS introduced its initial Storage Tiering solution. This offering lays the framework for z/OS's long-term strategy to provide various ILM solutions so that clients can implement the ILM solution that works best for them. An integral part of this strategy is to move away from ML1 and move toward an L0 - Ln, ML2 solution. Tape is still clearly the best storage media for long-term data archiving, and all three vendors will agree to that. I am currently working with clients to move to an L0 - Ln, ML2 environment, and it is exciting to see the opportunities that exist by integrating software and hardware tiering into a single, powerful ILM strategy. I'm more than happy to meet with clients to discuss the V2R1 DFSMS Storage Tiering solution and the opportunities that it provides to exploit the strengths of the two types of tiering. Glenn Wilcock DFSMShsm Architect
Re: [Bulk] Re: [IBM-MAIN] [Bulk] Re: [IBM-MAIN] General question on moving DFHSM work from mix TAPE/DASD to More DASD
Glenn, I work for one of the three vendors, and so I am very surprised to hear that HDS thinks that tape is clearly the best storage media for long-term data archiving. Would you happen to know who it is that represents HDS hardware/software that agrees with this? I for one strongly disagree with the statement. Why would you suggest that CU tiering is only for small environments? I'm thinking the large, multi-PiB environments will gain far more from storage tiering than small-to-medium shops. They have far more to gain from removing the overheads of Primary and Secondary space management and the costs of recalls, especially when they can take advantage of commodity midrange storage for inactive data. This is the same storage that makes virtual tape so attractive, but without the cost of redundant data movement and transformation. The two single greatest changes in DFSMShsm in recent times have been CRQ and Transitioning, where CRQ acknowledges the problems of managing large-scale HSM activity, and transitioning is a kludge that allows DFSMShsm to take advantage of CU based tiering. In fact I think I spoke to you in San Francisco about the possibility of replacing the transitioning command with a proprietary migration command if the vendor had a way to do command-level tiering of a data set's extents rather than FlashCopy. The DFSMShsm ML2 strategy decrees that data can no longer be directly accessed when it becomes inactive or dormant. Why is that a better strategy for Information Lifecycle Management than allowing the data to be accessed directly even as it moves to more cost-effective media as it ages? Why is a last-accessed date a better strategy for archiving data than the backend IO/hour? Why can't I archive 100s of TBs of data because someone keeps opening the file but not touching the data?
I agree that "each tiering technique, hardware and software, has strengths and weaknesses," but I earnestly feel that the archiving controls afforded by DFSMShsm do not lend themselves to good storage cost containment. I acknowledge again that ML1 and disk-based ML2 can operate very effectively in a tiered storage environment, especially where ML2 can transparently migrate to commodity midrange storage as it ages. I do not, however, agree that the costs of data translation and recall are necessary to maintain good ILM strategies. I still remember the Iceberg pundits from STK describing how there would be a Nearline library behind every Iceberg and HSM would disappear. I think we are getting closer to this every day, and many shops are in a position to do exactly that. Ron -Original Message- From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf Of Glenn Wilcock Sent: Tuesday, August 26, 2014 10:05 AM To: IBM-MAIN@LISTSERV.UA.EDU Subject: [Bulk] Re: [IBM-MAIN] [Bulk] Re: [IBM-MAIN] General question on moving DFHSM work from mix TAPE/DASD to More DASD <snipped - Glenn's reply, quoted in full above>
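Ron's objection - "someone keeps opening the file but not touching the data" - can be made concrete with a toy comparison of the two archiving policies. Both policy functions, the thresholds, and the dates are invented for illustration; neither is actual DFSMShsm or controller logic.

```python
from datetime import date, timedelta

TODAY = date(2014, 8, 27)

def archivable_by_date(last_accessed: date, days: int = 365) -> bool:
    """DFSMShsm-style rule: archive once the last-accessed date ages out.
    Any open - even one that reads nothing - resets the clock."""
    return (TODAY - last_accessed).days >= days

def archivable_by_backend_io(io_per_hour: float, threshold: float = 0.01) -> bool:
    """Back-end-IO rule: archive when the pages see essentially no disk
    I/O, regardless of how recently something opened the file."""
    return io_per_hour <= threshold

# A data set that a nightly job opens but whose pages it never reads:
last_accessed = TODAY - timedelta(days=1)   # "accessed" yesterday
backend_rate = 0.0                          # no pages touched on disk

assert not archivable_by_date(last_accessed)     # date policy keeps it online
assert archivable_by_backend_io(backend_rate)    # IO/hour policy archives it
```

Under the date policy the data set is pinned online forever by the opens; under the back-end-IO policy it quietly ages to the cheap tier.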
Re: [Bulk] Re: [IBM-MAIN] [Bulk] Re: [IBM-MAIN] General question on moving DFHSM work from mix TAPE/DASD to More DASD
The concept, not the fact that it was the slowest DASD storage in the market place when it arrived and stayed that way even when IBM rebadged it. The compression engine in the front end was a dog... -Original Message- From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf Of Shane Ginnane Sent: Tuesday, August 26, 2014 3:41 AM To: IBM-MAIN@LISTSERV.UA.EDU Subject: [Bulk] Re: [IBM-MAIN] [Bulk] Re: [IBM-MAIN] General question on moving DFHSM work from mix TAPE/DASD to More DASD <snipped - Shane's note, quoted in full above>