Hi,
We have a set of weekly full-volume DASD dumps just for our non-SMS MVS
volumes (sysres, pre-IPL volumes, etc.), housing many of our system
data sets that are almost purely static, i.e., they do not grow and do
not get written to.
On July 21st, the LPAR that runs our weekly full-volume dumps
was upgraded to z/OS 1.13 (from 1.11).
Prior to z/OS 1.13, the accumulated size of these data set dumps, as
reported by RMM, averaged about 0.3 to 0.5 TB per week.
Since z/OS 1.13, these same jobs, dumping exactly the same static
data sets, average almost double that size per week.
It's almost as if compression is no longer working correctly.
Here is an example of the JCL used; it has not
changed between z/OS 1.11 and 1.13:

//DUMP   EXEC PGM=ADRDSSU,REGION=6000K
//SYSPRINT DD SYSOUT=*
//SYSUDUMP DD SYSOUT=0
//DISK1    DD UNIT=&UNIT,DISP=SHR,VOL=SER=CEC000
//TAPE1    DD UNIT=TAP9,DISP=(NEW,CATLG,DELETE),DCB=TRTCH=COMP,
//            DSN=DRP.BKP19AU.BWCEC000(+1),VOL=(,,,45)
//SYSIN    DD *
 DUMP INDD(DISK1) OUTDD(TAPE1) CAN OPTIMIZE(4) COM
/*
The tape is actually VSM, emulating 3490s, which also does some
compression, but nothing on the VSM side has changed; only z/OS has,
and that's when we saw this jump in space used.
For example, according to RMM reports,
here are the TBs used for these dumps:
July 1:  22.58 TB  <<< z/OS 1.11
July 8:  22.57 TB
July 15: 22.62 TB
July 22: 23.25 TB  <<< z/OS 1.13
July 29: 24.02 TB
Aug  5:  24.81 TB
Aug 12:  25.42 TB
etc
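To quantify the jump, here is a quick sketch that works out the
week-over-week growth from the RMM totals quoted above (the figures and
dates are taken straight from the list; nothing else is assumed):

```python
# Weekly RMM-reported totals (TB) quoted above; July 22 onward is z/OS 1.13.
totals = [
    ("July 1", 22.58),   # z/OS 1.11
    ("July 8", 22.57),
    ("July 15", 22.62),
    ("July 22", 23.25),  # z/OS 1.13
    ("July 29", 24.02),
    ("Aug 5", 24.81),
    ("Aug 12", 25.42),
]

# Week-over-week growth in TB (labelled with the later week's date)
deltas = [(d2, round(t2 - t1, 2))
          for (d1, t1), (d2, t2) in zip(totals, totals[1:])]

for date, delta in deltas:
    print(f"{date}: {delta:+.2f} TB")
```

Pre-upgrade the totals are essentially flat (-0.01 and +0.05 TB per
week, as old generations expire while new ones are written), while
post-upgrade they grow by roughly 0.6 to 0.8 TB per week, which is
consistent with the weekly dumps roughly doubling from the earlier
0.3 to 0.5 TB.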
Has anyone else experienced this jump in tape usage?
I see IBM mentions that there are some differences in block sizes
and that DFSMSdss now uses BSAM, but I saw nothing about changes to
compression.
Thanks
