Hi Jim,

>It uses a Log Structured Array to map virtual tracks
>to the back end raid array hardware. This is analogous
>to paging in MVS. As in MVS, a track is not
>allocated on the hardware until it is written to.
This is kinda how EVMS builds a sparse LV. The logical volume is divided up into chunks (like a snapshot). If a write is made to a chunk that has no backing storage, an exception causes the chunk to be mapped before the I/O continues. This lets you create volumes much larger than the actual physical storage. The maps are saved for persistence. It was originally conceived as a means of testing very large volumes on very modest disks. On-demand storage :)

>The technology is based on several assumptions true
>for an MVS environment: 1) The VTOC extent maps may say
>the data set has been allocated with so much space,
>but in reality, only a small part of it is used, and
>2) free space really takes up space on a volume, and
>3) there are a lot of repeated characters (mostly
>blanks) on MVS volumes.
>Typically, MVS volumes may be run at 50% full because
>of the need to expand without abending a process.

Don't know anything about MVS. But a format-1 DSCB has room for describing multiple extents. Can't MVS automatically grow the dataset until these extent descriptors are used up before abending the process?

>In addition, they added space compression, to get the
>repeated characters compressed out.
>The net effect is that the REAL hardware space
>occupied for a volume can be zero (0) to a real full
>volume. In MVS, the volumes get up to 66% compression
>of the data and the packs are only 50% full.

Neat.

>Storage Tech took this one step further. Since they
>do not have to have all the space on a volume
>reserved on the hardware, they only back the virtual
>space with a fraction of the real hardware space. For
>example, on one of my RVA's I have 512 3390-3 volumes
>defined for a virtual capacity of 1.4 Terabytes.
>However, the real dasd on the RVA is only 219 GB! So
>the assumption is that there will be a 6.3x
>"overallocation" of the pack due to compressible data
>and unused free space.

Gotcha.

>Actually, quite brilliant when you think about it.
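The map-on-first-write behavior described above can be sketched roughly like this. This is a minimal illustration, not the EVMS implementation: the names (SparseVolume, CHUNK_SIZE) and the 64 KB chunk size are assumptions for the example.

```python
# Hypothetical sketch of a sparse volume that maps backing chunks on demand.
# Not the EVMS code; names and chunk size are illustrative assumptions.

CHUNK_SIZE = 64 * 1024  # bytes per chunk (assumed)

class SparseVolume:
    def __init__(self, virtual_size, backing_chunks):
        self.virtual_size = virtual_size
        self.free_chunks = list(range(backing_chunks))  # unallocated backing chunks
        self.chunk_map = {}  # virtual chunk index -> backing chunk index (persisted)
        self.data = {}       # backing chunk index -> chunk contents

    def write(self, offset, buf):
        vchunk = offset // CHUNK_SIZE
        if vchunk not in self.chunk_map:
            # The "exception" path: first write to an unbacked chunk
            # maps real storage before the I/O continues.
            if not self.free_chunks:
                raise IOError("out of backing storage")
            self.chunk_map[vchunk] = self.free_chunks.pop()
        chunk = self.data.setdefault(self.chunk_map[vchunk], bytearray(CHUNK_SIZE))
        start = offset % CHUNK_SIZE
        chunk[start:start + len(buf)] = buf

    def read(self, offset, length):
        vchunk = offset // CHUNK_SIZE
        if vchunk not in self.chunk_map:
            return bytes(length)  # unmapped space reads back as zeros
        chunk = self.data.get(self.chunk_map[vchunk], bytearray(CHUNK_SIZE))
        start = offset % CHUNK_SIZE
        return bytes(chunk[start:start + length])
```

The point is that the virtual size and the backing pool are independent, so the volume can be defined far larger than the real storage behind it, and real chunks are consumed only as writes land.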
>Remember, this is late 1980's technology when disk
>drives were still expensive and RAID technology was
>new. I don't think anyone uses this approach now because the
>disk drives are so cheap. Vendors are using RAID10 now
>- mirrored raid5, so there is actually twice as much
>disk hardware as is needed for recovery purposes.

Yep ... and linear concatenations of RAID10, or feeding RAID10 devices into an LVM volume ... to help when it comes time to resize the volume.

>If you look at an MVS VTOC, the extent map may show
>that the pack is full, and for conventional dasd and
>most vendors' implementations, the space is reserved,
>though it may not be actually used.

Sure ... that is exactly how I viewed a CDL partition ... 100% allocated/reserved ... but I think the amount actually used would have to be determined by asking the file system.

>This then brings up the question of garbage collection.
>There is an interface between MVS allocation and the
>RVA, called IXFP in the IBM RVA, that communicates
>allocations and freeing of space between MVS and the
>RVA. There is also dynamic dasd space reclamation
>(DDSR) that runs periodically, interrogates the VTOC,
>and frees any free tracks based on the VTOC. This is
>the exposure that we are talking about. Depending upon
>what IXFP interrogates in the VTOC, there is a data
>loss exposure. I have seen no reports that anyone has
>tested this, just recommendations that you keep your
>Linux DASD separate from your other OS.

Sounds like DDSR finds dataset extents and then determines which tracks in the extent are used or not. In the EVMS case, the sparse volume knows which chunks have had write operations, so it knows for certain which chunks are free. I guess IXFP must be doing something similar by talking to the MVS allocation/free routines.

-Don
