I believe both sequential and at least some VSAM datasets can use compression in their representation on DASD. This requires that the dataset be SMS-managed and be assigned a DATACLAS that defines it as "extended format" and "compressed". Compression is enabled at the individual-dataset level through DATACLAS attributes, not at the volume level. To determine whether it is in effect, you would have to check the DATACLAS associated with the dataset and then look up the attributes your installation has assigned to that Data Class to see whether it specifies compressed extended format.
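For example, an IDCAMS LISTCAT will show you the DATACLASS name associated with a dataset. A minimal job sketch (YOUR.DATA.SET is a placeholder; substitute the dataset in question):

```jcl
//LISTC    EXEC PGM=IDCAMS
//SYSPRINT DD  SYSOUT=*
//SYSIN    DD  *
  LISTCAT ENTRIES(YOUR.DATA.SET) ALL
/*
```

The attributes behind that Data Class name (extended format, compaction, and so on) are whatever your storage administrator defined for it, which is normally viewed through ISMF.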

The compression/decompression is done by the I/O access-method routines, using special hardware compression assists associated with a Central Processor, in a manner that is almost transparent to the application program, which still thinks it is reading and writing records and blocks in uncompressed format. The only part that is not transparent is that the maximum block size that can be declared and still fit a given number of blocks per track is reduced by 32 bytes, because 32 bytes of control information (not accessible to an application program) are appended to each extended-format block.
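To make the block-size effect concrete, here is a small arithmetic sketch in Python, assuming the common case of half-track blocking on a 3390, where the familiar non-extended-format maximum block size is 27998 bytes:

```python
# Half-track blocking on a 3390: two blocks per track.
NON_EF_MAX_BLOCK = 27998   # bytes, usual system-determined half-track size
EF_SUFFIX = 32             # control bytes appended to each extended-format block

# The 32-byte suffix must fit in the same physical space, so the largest
# usable logical block for an extended-format dataset shrinks accordingly.
ef_max_block = NON_EF_MAX_BLOCK - EF_SUFFIX
print(ef_max_block)   # 27966
```

This is why the system-determined blocksize for an extended-format sequential dataset comes out slightly smaller than for its non-extended-format equivalent.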

Whether it makes economic sense for a specific application or installation depends on many factors, and your mileage may vary. It does cost extra CPU time, so if you are in a CPU-constrained shop (or concerned about what extra CPU usage does to your software costs) and have plenty of DASD, it might be a hard sell to justify. We revisit it every few years to see whether the performance has significantly improved, but I haven't had a chance to re-test since we migrated to a z9.

IBM tape subsystems from the 3490 on (it was an optional feature on the 3480) routinely do hardware data compression within the tape subsystem itself and merge logical data blocks into super-blocks to optimize tape media usage, all at no cost in additional mainframe processor time. This approach has not been taken with DASD subsystems, I suspect because knowledge of track/cylinder capacity is used in significant ways in so many places, plus the potential that concurrent users of a dataset may depend in some way on a logical block corresponding to a physical block on DASD to properly control serialization and update-commit strategies.

OppThumb wrote:
To All,

From what I've seen, the majority of posters on this newsgroup are
serious systems people, and I'm just a simple applications programmer,
so please excuse me if I oversimplify this question...:-)

A discussion has come up at work concerning DASD usage. I was always
under the impression that, for example, an 80-byte fixed-length record
truly occupied 80 bytes of storage. Is this the case? Or can mainframe
DASD be compressed in a manner similar to a PC-based ZIP file? Last
but not least, if the answer is yes, is there any way to determine if
a compression algorithm is being used on a given (3390) DASD volume?

Thanks,

John



--
Joel C. Ewing, Fort Smith, AR        [EMAIL PROTECTED]

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html