Fred
... And sorry if my question confused ...
Unlike Hal Merritt, I wasn't *confused* by the question, only intrigued ...
... does VTAM compression apply only for SNA traffic or also IP traffic?
VTAM provides compression as a service to applications using the VTAM API.
Because
Thanks Chris, for the most helpful and informative reply. And sorry if my
question confused - must be an Australianism.
Regards, Fred
--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to
Chris (or anyone)... does VTAM compression apply only for SNA traffic or also
IP traffic? In case you haven't already guessed, I am not a comm's person, so
please excuse my ignorance.
Regards, Fred
--
For IBM-MAIN subscribe
Anyone know whether Netview FTP uses hardware or software compression?
Regards, Fred Schmidt
Netview and FTP are generally considered two separate things.
The FTP supplied with z/os uses software compression if certain conditions are
met. Hardware compression may occur downstream in the network appliances.
Fred,
I assume that your question is to be interpreted as: Would someone be able to
tell me whether NetView FTP uses hardware or software compression?
According to
http://www-01.ibm.com/common/ssi/cgi-bin/ssialias?infotype=ddsubtype=smappname=ShopzSerieshtmlfid=897/ENUS5685-108
NetView FTP
Thanks. These are small datasets. I don't know why they were compressed
with Data Accelerator. We greatly overused that product. Management at
the time said: Great! Compress everything and we don't need to get any
more DASD! Management today says: Use SMS compression and eliminate
the cost of Data
I realize you said you aren't testing for compression ratio or CPU usage, but
you might still want to take a quick look at those with both tailored and
generic/standard compression. I just recently found that switching my SMF
data to tailored compression saved about 40% of the space
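Scott's tailored-vs-generic distinction can be sketched with Python's zlib preset-dictionary support. This is only an analogy for the idea of deriving a dictionary from the data set's own contents, not the actual DFSMS tailored-compression implementation, and the SMF-like record layout below is made up:

```python
import zlib

# Hypothetical SMF-like records: lots of shared boilerplate per record.
records = [("SMF record type 030 system PROD jobname JOB%05d" % i).encode()
           for i in range(100)]

# "Generic" compression: each record compressed with no prior knowledge.
generic = sum(len(zlib.compress(r)) for r in records)

# "Tailored" compression: a preset dictionary built from a sample of the
# data itself, so the shared boilerplate is never emitted as literals.
zdict = records[0]
tailored = 0
for r in records:
    c = zlib.compressobj(zdict=zdict)
    out = c.compress(r) + c.flush()
    tailored += len(out)
    # Round trip: decompression needs the very same dictionary.
    d = zlib.decompressobj(zdict=zdict)
    assert d.decompress(out) == r

print("generic:", generic, "tailored:", tailored)
```

With short records the tailored total comes out far smaller, which is the same shape of saving Scott reports for his SMF data.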
Management today says: Use SMS compression and eliminate
the cost of Data Accelerator! We did no testing to see how this will
affect CPU usage or compression ratio. Just say save money! and eyes
glisten like a child in a candy shop.
--unsnip
In my SMS conversion to compress some VSAM dataset, I am getting a
message like:
IGD17162I RETURN CODE (12) REASON CODE (5F01083F) RECEIVED FROM
COMPRESSION SERVICES WHILE PROCESSING DATA SET
PRITV.PR.GCR26KSD , COMPRESSION REQUEST NOT
HONORED BECAUSE DATA SET CHARACTERISTICS DO NOT MEET
On Mon, 1 Aug 2011 04:48:05 -0500, John McKown wrote:
I don't seem to be able to find the 5F01083F code.
X'5F' = Compression Management Services
X'01' = CMPSVCAL (allocation)
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DGT2R171/5.1.2.2
X'083F' = 2111 (DEC) = RS_NO_BENEFIT
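Norbert's decode is just byte slicing of the reason code; a quick sketch of the arithmetic (the field meanings are taken from the posts above, not from an official mapping table):

```python
def decode_sms_reason(rc_hex):
    """Split an SMS compression-services reason code like 5F01083F
    into (service byte, module byte, reason halfword)."""
    rc = int(rc_hex, 16)
    service = (rc >> 24) & 0xFF   # X'5F' = Compression Management Services
    module = (rc >> 16) & 0xFF    # X'01' = CMPSVCAL (allocation)
    reason = rc & 0xFFFF          # X'083F' = 2111 decimal = RS_NO_BENEFIT
    return service, module, reason

svc, mod, rsn = decode_sms_reason("5F01083F")
print(hex(svc), hex(mod), rsn)  # 0x5f 0x1 2111
```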
On Mon, 1 Aug 2011 09:16:52 -0500, McKown, John wrote:
Thanks. Of course, I was really hoping as to WHY it is no benefit. Guess
I'll need to double check the allocation / max lrecl / cisize.
Primary space 5 or 8MB or *minimum* lrecl (w/o key) 40
Norbert Friemel
We're looking into using the CMPSC Compression Call instruction, and the
z/Arch POP says .. assumes knowledge of the introductory information and
information about dictionary formats in *Enterprise Systems Architecture/390
Data Compression, SA22-7208-01*.
I find lots of references to SA22-7208
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR602/CCONTENTS?DT=19961127103547
We are replacing BMC's Data Accelerator compression with SMS compression. I
have a DATACLAS (named DCEXTC) created which implements this. The DATACLAS
works. At present, Data Accelerator works by having a list of dataset names and
patterns which are used to determine
John,
I didn't think MAXSIZE took multivolume into account. Isn't it just primary
+ (15 * secondary)?
I've often thought that compression products should come with a sampling
utility to read one CYL of a dataset and provide a compression report. This
could be used to identify the best
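A minimal version of such a sampling utility is easy to sketch. Here zlib stands in for whatever compression method the product would actually use, and the one-cylinder sample is approximated as a byte count (3390: 15 tracks x 56,664 bytes per track); the file name and record layout are invented for the demo:

```python
import zlib

CYL_BYTES = 849_960  # approx. one 3390 cylinder: 15 tracks * 56,664 bytes

def estimate_ratio(path, sample_bytes=CYL_BYTES):
    """Compress a leading sample of the file and report the ratio."""
    with open(path, "rb") as f:
        sample = f.read(sample_bytes)
    if not sample:
        return None
    compressed = zlib.compress(sample, 6)
    return len(sample) / len(compressed)

# Throwaway file of repetitive fixed-format "records":
with open("/tmp/sample.dat", "wb") as f:
    f.write(b"ACCT 000123 BRANCH 0001 BALANCE 0000000000\n" * 20000)
print("estimated ratio: %.1f:1" % estimate_ratio("/tmp/sample.dat"))
```

A real utility would sample from several positions in the dataset rather than just the front, since the leading records may not be representative.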
If the F3 FILTLIST gets too long, I'll need to create F4 and update the WHEN. I just
don't like it. I would like something better. If, as you say, DVC does not
influence MAXSIZE, then I might end up not assigning DCEXTC when I really
should. I would really like to eliminate all compression. But we
Tell them to stop spending zCycles on compression. :) Simplify the
FILTLIST to just the one VSAM file.
Or, do as I did. Extended, stripped, compressed is the default. With a
DATACLAS=NOEXTEND for the few cases that can't handle it.
Dave Gibney
Information Technology Services
Washington State
Extended is the default for all non-ESDS VSAM files. I can't remember what it
was, but we had some problem with ESDS files with Extended Addressing (perhaps
in CICS). I may try to reduce the number of files in the FILTLIST. I was trying
to duplicate the current compression environment
Well, it might be as much work to manage, and so might not be what you want,
but you could make use of the RACF DFP segments that everyone seems to
ignore, which were designed originally to eliminate the need for programming
ACS exit routines and things like FILTLISTs.
The DFP segment for the
I'm looking for some solution for file exchange between z/OS and
Windows/Linux platform.
The only requirement is to encrypt the file (PS dataset) on z/OS side
and decrypt it on distributed side and vice versa.
Nice to have:
- hash calculation
- compression
- exploitation of CPACF or CryptoExpress or zIIP hardware (to reduce
cost of CPU)
Any clues and suggestions including both home-grown (DIY) solutions
and commercial products are welcome.
Why isn't FTP over SSL desirable?
-jc
The company I used to work for (Proginet - acquired last year by
Tibco) has a comprehensive
Certificate based TLS FTP is native to the z/os platform. While certificates
are very secure, they do carry a pretty good learning curve. Any z/os hardware
features installed on the box are exploited by default, I think.
Typically encryption defeats compression. It seems that you can have one
snip
Is z/OS Encryption Facility different from ICSF ? A link to the app prog
guide here :
http://publib.boulder.ibm.com/infocenter/zos/v1r10/topic/com.ibm.zos.r10.csfb400/toc.htm
/snip
YES!
Encrypted data is usually thought to be non-compressible. If you want
compression in addition to encryption you'd compress first and then
encrypt the compressed data file.
Mark Jacobs
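Mark's compress-then-encrypt ordering is easy to demonstrate: good ciphertext is indistinguishable from random data, and random data has no redundancy left for a compressor to exploit. A sketch using zlib, with a one-time-pad XOR standing in for a real cipher (the sample text is invented):

```python
import os
import zlib

# Highly redundant cleartext, like a typical PS dataset.
text = b"the quick brown fox jumps over the lazy dog\n" * 2000

def xor_encrypt(data, key):
    # One-time-pad XOR as a stand-in for a real cipher: output looks random.
    return bytes(a ^ b for a, b in zip(data, key))

key = os.urandom(len(text))

# Compress first, then encrypt: the small compressed stream gets encrypted.
compressed = zlib.compress(text)
ct_good = xor_encrypt(compressed, key[:len(compressed)])

# Encrypt first, then try to compress: the ciphertext doesn't shrink.
ct_first = xor_encrypt(text, key)
ct_then_compressed = zlib.compress(ct_first)

print(len(text), len(ct_good), len(ct_then_compressed))
```

The compress-then-encrypt result is a small fraction of the original, while compressing the ciphertext gives back essentially the full size.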
On 04/05/11 11:20, Hal Merritt wrote:
Typically encryption defeats compression. It seems that you can have one or
the other but not both. I haven't looked, but z/os FTP may compress before
encryption. (I think the compression occurs in the application layer and the
encryption occurs in the transport layer.)
IBM Ported Tools
data sets..
Best Regards,
Yifat
From: Andrew N Wilt
Sent: Friday, December 03, 2010 1:45 AM
Subject: Re: Hardware-assisted compression: not CPU-efficient?
Ron,
Pardon my bringing back an old thread, but -
I wanted to see how much better is the COMPRESS option over the HWCOMPRESS
in regards to CPU time and was pretty surprised when my results suggested
to tape. It works well but the cpu usage is a problem cause we have
many such backup jobs running simultaneously.
If hardware-assisted compression cannot reduce the cpu overhead, I will
consider using resource group to cap those jobs.
Best Regards,
Johnny Luo
Johnny,
The saving in hardware assisted compression is in decompression - when you read
it. Look at what should be a much lower CPU cost to decompress the files during
restore and decide if the speed of restoring the data concurrently is worth the
increase in CPU required to back it up
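Ron's asymmetry shows up in any LZ-family compressor; zlib exhibits the same shape, which makes it a convenient stand-in for a quick experiment (this only illustrates the principle — the CMPSC hardware discussed in the thread is a different implementation, and the data here is made up):

```python
import time
import zlib

# A few MB of compressible, backup-like data.
data = b"DUMP of volume PROD01, dataset payload record 00000000\n" * 80000

t0 = time.perf_counter()
compressed = zlib.compress(data, 9)     # compression: the expensive side
t_comp = time.perf_counter() - t0

t0 = time.perf_counter()
restored = zlib.decompress(compressed)  # decompression: the cheap side
t_dec = time.perf_counter() - t0

assert restored == data
print("ratio %.1f:1, compress %.3fs, decompress %.3fs"
      % (len(data) / len(compressed), t_comp, t_dec))
```

On a typical run decompression is several times cheaper than compression, which is exactly why the restore side is where the CPU saving lands.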
Ron, is it generally the case that CPU is saved on read? I'm seeing QSAM
with HDC jobsteps showing very high CPU. But then they seem to both write
and read. Enough CPU to potentially suffer from queuing.
(And, yes, I know you were talking about a different category of HDC
usage.)
Martin
Hi
A few years ago I tried hardware compression, as we use the zlib
library (http://www.ietf.org/rfc/rfc1950.txt) intensively to
compress/expand.
I never got a proper answer, and it is still not clear to me in which
cases hardware compression would bring some CPU reduction
Miklos,
What do you mean by 'zlib'? Is it free on z/OS?
Best Regards,
Johnny Luo
From: Johnny Luo
Sent: Thursday, 02 December 2010 12:13
To: IBM-MAIN@bama.ua.edu
Subject: Hardware-assisted compression: not CPU-efficient?
Hi,
DSS DUMP supports COMPRESS/HWCOMPRESS keyword and I found out in my test
that HWCOMPRESS costs more CPU than COMPRESS
Yifat Oren wrote:
Hi Johnny,
I was under the impression that for DFDSS DUMP, COMPRESS and
HWCOMPRESS are
synonymous;
Are you saying they are not?
If you are writing to tape why not use the drive
On Thu, 2 Dec 2010 16:29:56 +0200, Yifat Oren wrote:
I was under the impression that for DFDSS DUMP, COMPRESS and HWCOMPRESS are
synonymous;
Are you saying they are not?
Yes, they are not synonymous. HWCOMPRESS uses the CMPSC instruction
(dictionary-based compression). COMPRESS uses RLE (run-length encoding).
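The distinction is easy to make concrete: run-length encoding just collapses repeats (and, as discussed later in the thread, costs about the same in both directions), while a dictionary method like the one behind CMPSC matches input against learned strings. A toy RLE codec, illustrative only and not DFSMSdss's actual format:

```python
def rle_encode(data: bytes) -> bytes:
    """Encode as (count, byte) pairs; run counts capped at 255."""
    out = bytearray()
    i = 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i] and run < 255:
            run += 1
        out += bytes([run, data[i]])
        i += run
    return bytes(out)

def rle_decode(data: bytes) -> bytes:
    out = bytearray()
    for i in range(0, len(data), 2):
        out += bytes([data[i + 1]]) * data[i]
    return bytes(out)

record = b"A" + b" " * 70 + b"B"      # blank-padded fixed-length record
packed = rle_encode(record)
assert rle_decode(packed) == record
print(len(record), "->", len(packed))  # 72 -> 6
```

This is also why Andrew notes below that COMPRESS does so well on repeated bytes: blank-padded records are RLE's best case.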
Unfortunately, IBM, et al. *DO NOT* bill on elapsed time.
More CPU used for Dump is less CPU available for productive work, or
worse yet, a bigger software bill!
snip
Increased CPU time to do the dump does not necessarily mean that
the elapsed time is longer. In fact, by compressing the
Martin,
Except for when the compression assist instructions were in millicode on the
G4 and G5, the hardware compression from Compression Services has always had
an asymmetric cost for DFSMS compression. I remember some early
documentation from IBM when it was first introduced in DFSMS
Gil,
I was thinking that a faster restore would have some value as a reduction
in recovery time, as opposed to back-up duration, which is usually outside of
any business critical path.
This would have value in business continuance whether it was a small
Conversely, sometimes it is hard to get the backups
opposed to back-up duration which is usually outside of
any business critical path.
It shouldn't be, especially if back-ups have to complete before sub-systems can
come up.
If we ran out of window, we had senior IT management and business contacts
decide which was more critical: back-up;
Tony,
You are surprised, and then you explain your surprise by agreeing with me.
I'm confused.
I'm not sure if you realized that the Huffman encoding technique used by the
DFSMSdss COMPRESS keyword is not a dictionary based method, and has a
symmetrical CPU cost for compression and decompression
these products.
I did say that the increased cost and time for backup needs to be evaluated
against any improvement in restoration time with hardware compression. Thank
you to all those who reinforced the need for this evaluation in their
responses.
Ron,
Thank you for the good response. It is true that the DFSMSdss
COMPRESS keyword and HWCOMPRESS keyword do not perform the same types of
compression. Like Ron said, the COMPRESS keyword is using a Huffman
encoding technique, and works amazingly well for repeated bytes (just the
types of things
No, I didn't know anything about the compression methods triggered by
these two keywords until this thread. But I do know to some extent how
both Huffman
Tony,
Then the misunderstanding is that Compression Services as called by DFSMSdfp
and DFSMSdss with HWCOMPRESS uses an LZW compression scheme, while DFSMShsm
and DFSMSdss with the COMPRESS keyword use Huffman technique.
The Asymmetric cost of HWCOMPRESS I was referring to, and that apparently
Ron,
We compress our GDGs and don't send them to ML1. However at the
time we were trying to save DASD and used the 8 MB and 5 MB values. We
want to raise the bar for compression, because DASD issues have
subsided. Is there an ideal threshold for the DASD/compression size value
Tobias,
There's no magic number. It pretty much depends on the site, the
compression method and how datasets are accessed. It's like asking is there
an ideal wheel rim size for every car.
Why not raise the bar in some increment and measure the effect. You probably
have some idea of where you
Subject: Re: SMS compression cost size
From: R Hey
Sent: Wednesday, February 24, 2010 12:16 AM
Subject: Re: SMS compression cost size
Ron,
Your example is an 'exception', it was decided to do it for that DS to gain
the benefit. That's OK by me. It wasn't decided
they are not read many times. This
doesn't make sense to me, if one is short on CPU.
I should have said:
I don't see why anyone would compress ALL DS under 500 Cyl these days,
just to save space, when one is short on CPU.
there is more to compression than just the size of the dataset.
Amen
Rez,
Do you have an analysis of what compression cost/dsn?
Regards,
Tobias Cafiero
Data Resource Management
Tel: (212) 855-1117
Thanks for the replies.
Redbook on vsam says:
There are some types of data sets that are not suitable for ZL compression,
resulting in rejection, such as these:
- Small data sets.
- Data sets in which the pattern of the data does
Reza,
I can think of a KSDS that was being accessed by LSR, but every CI was
touched until it was loaded into the LSR pool. Over 150 programs touched
that 200 CYL baby in the same 60 minutes. Compression reduced the size by
55% and reduced the IO by 55% as well.
In terms of IO savings
On Wed, 2010-02-17 at 01:28 -0500, Ron Hawkins wrote:
What is the objective of compressing the dataset?
In my environment (cycles to burn) reads for certain long sequential
datasets are faster for compressed data. So my ACS routines look for
specific larger datasets that are written once, read
Ron,
What is the objective of compressing the dataset?
Nobody remembers.
It was done in a time far, far away ...
My client is short on CPU, so I (new sysFROG) started wondering why ...
regardless of size.
So, size is not everything after all ;-)
Rez
Hi,
Are there any figures for the cost of SMS compression out there,
or is it YMMV?
(I've checked the archive to find cost is higher for W than for R ...,
seen many who decided not to do it with a lot of YMMV ...)
Also, are there any ROT for the min size to compress for?
One client I had
Reza,
It's LZW compression which has an asymmetric cost by design - compressing
always costs more than decompressing.
Back when the compression assist instructions were announced IBM were saying
the difference was around 6:1 compression vs decompression.
The compression and compression
Ron,
By 'killer apps' you mean good ones to COMP for, right?
Would you COMP regardless of size, if short on CPU already, with lots of DASD?
(even for less than 50 cyls)
If size matters, what should the MIN size be?
Cheers,
Rez
Reza,
Yes, killer Apps are the good ones.
I wouldn't compress a 50 CYL dataset just for compression's sake. If it was
being read by all and sundry with no updates then I'd load it into
Hiperbatch.
I've never looked at compression to save space. I've always viewed it as an
IO reduction technique
Subject: Re: why compression costs additional I/O?
Peter,
Yes for your example I am recommending NCP=96, which means BUFNO=96. I
habitually put both NCP and BUFNO on BSAM files because I've never been sure
if BSAM calculates BUFNO using the NCP value from
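The arithmetic behind the NCP=96 suggestion is just 16 buffers per stripe times 6 stripes. Treat both numbers as this thread's rule of thumb rather than a documented formula; a trivial sketch:

```python
def ncp_for_stripes(stripes: int, buffers_per_stripe: int = 16) -> int:
    """Rule of thumb from this thread: 16 concurrent channel programs
    per stripe, so every stripe's I/O chain can stay busy."""
    return stripes * buffers_per_stripe

ncp = ncp_for_stripes(6)
print(ncp)       # 96; the poster codes BUFNO to the same value
# Per a later post, BSAM/QSAM double buffering halves what any one
# chain can use:
print(ncp // 2)  # 48
```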
From: Pawel Leszczynski
Sent: Wednesday, January 27, 2010 9:23 AM
Subject: Re: why compression costs additional I/O?
Hi Yifat,
Thanks for the answer - you are right! - I've checked
If a Dataclass with the following attributes is invoked:
Data Set Name Type . . . . . : EXTENDED
If Extended . . . . . . . . : REQUIRED
Extended Addressability . . : YES
Record Access Bias . . . . : USER
Space Constraint Relief . . . : YES
Reduce Space Up To (%) . . : 50
David,
SMS uses hardware compression. It has an asymmetric CPU cost, where
decompressing the data uses 80% less CPU than compressing it.
Ron
Thanks Ron
Dave O'Brien
NIH Contractor
From: Ron Hawkins
Sent: Thursday, January 28, 2010 1:51 PM
Subject: Re: SMS Compression - Software or Hardware
There are 2 kinds of compression.
The outboard kind that takes place in the tape unit is one example. There
is no difference in the z/OS CPU time for writing a compressed tape.
The operating system kind, which is always software. The software
compression can be either just software or hardware
Mike,
It is the hardware assisted compression that I was referring to. There
were/are products that do software compression without using the Hardware
assist. DFSMSdss, DFSMShsm and old versions of SAS and IAM spring to mind.
Then there was the IBM G4 and G5 that moved the hardware assist
When we compare such sorting with sorting on non-compressible output we
can see this:
                         EXCP   TCB   SRB  el.time
TESTXWP5 STEP110 00      757K   3.51   .70    9.01  -- w/o compression
TESTXWP5 STEP120 00     1462K   3.62  2.89   10.45  -- w. compression
We guess that big SRB in (2) goes
Radoslaw,
I can understand from your explanation that we would get the same number of
EXCPs, but not twice as many. If say we are backing up 1000 30K blocks, if
compression reduces the size of each block, to say 10K, then we
We guess that big SRB in (2) goes for compression (that we understand -
we will probably quit compression altogether), but we don't understand the
2 times bigger EXCP in the second case.
Any ideas
Hope that helps,
Yifat Oren.
From: Pawel Leszczynski
Sent: Wednesday, January 27, 2010 12:56 PM
To: IBM-MAIN@bama.ua.edu
Subject: why compression costs additional I/O?
Hello everybody,
Recently we are reviewing our EndOfDay jobs looking for potential
performance improvements (reducing CPU/elapsed time).
We have several jobs sorting big datasets where output is SMS-compressible
(type: EXTENDED) datasets.
When we compare
generally all of it probably mean that using DFSORT for compressed
datasets is not good idea.
I'm not sure I would agree with a general statement such as that.
First: there is a CPU overhead associated with compression and it affects
ALL applications, not just sort. The overhead is generally
Pawel Leszczynski wrote:
generally all of it probably mean that using DFSORT for compressed datasets is
not good idea.
The EXCP access method is not supported for extended sequential data
sets--whether compressed or not, striped or not. I/O for these data sets
is performed by Media
count. These are usually multi-CYL chains.
One of the few problems with Extended Format datasets is that the block
chaining defaults are lousy. This is probably why your job is taking longer
with compression. BSAM, and QSAM, always use double buffering, so whatever
you specify is halved for chaining
Pawel,
For a regular DSORG=PS dataset DFSORT and SYNCSORT use their own access
method to read and write the SORTIN and SORTOUT using very efficient long
chained Start Sub-Channels. The EXCP count reported for these datasets is
the Start SubChannel
Ron,
If a PS-E dataset has 6 stripes, are you recommending using NCP=96 (=16
* 6)? If so, what BUFNO should be used in that case?
A long time ago in a galaxy far, far away