Re: Netview FTP - Hardware or software compression?

2011-11-04 Thread Chris Mason
Fred ... "And sorry if my question confused" ... Unlike Hal Merritt, I wasn't *confused* by the question, only intrigued ... "... does VTAM compression apply only for SNA traffic or also IP traffic?" VTAM provides compression as a service to applications using the VTAM API. Because

Re: Netview FTP - Hardware or software compression?

2011-11-03 Thread Fred Schmidt
Thanks, Chris, for the most helpful and informative reply. And sorry if my question confused; it must be an Australianism. Regards, Fred

Re: Netview FTP - Hardware or software compression?

2011-11-03 Thread Fred Schmidt
Chris (or anyone)... does VTAM compression apply only for SNA traffic or also IP traffic? In case you haven't already guessed, I am not a comms person, so please excuse my ignorance. Regards, Fred

Netview FTP - Hardware or software compression?

2011-11-02 Thread Fred Schmidt
Anyone know whether Netview FTP uses hardware or software compression? Regards, Fred Schmidt

Re: Netview FTP - Hardware or software compression?

2011-11-02 Thread Hal Merritt
Netview and FTP are generally considered two separate things. The FTP supplied with z/OS uses software compression if certain conditions are met. Hardware compression may occur downstream in the network appliances. -Original Message- From: IBM Mainframe Discussion List [mailto:IBM

Re: Netview FTP - Hardware or software compression?

2011-11-02 Thread Chris Mason
Fred I assume that your question is to be interpreted as "Would someone be able to tell me whether NetView FTP uses hardware or software compression?" - According to http://www-01.ibm.com/common/ssi/cgi-bin/ssialias?infotype=dd&subtype=sm&appname=ShopzSeries&htmlfid=897/ENUS5685-108 NetView FTP

Re: where 2 find SMS compression code

2011-08-02 Thread John McKown
Thanks. These are small datasets. I don't know why they were compressed with Data Accelerator. We greatly overused that product. Management at the time said: "Great! Compress everything and we don't need to get any more DASD!" Management today says: "Use SMS compression and eliminate the cost of Data

Re: where 2 find SMS compression code

2011-08-02 Thread Scott Chapman
I realize you said you aren't testing for compression ratio or CPU usage, but you might still want to take a quick look at those with both tailored and generic/standard compression. I just recently found that switching my SMF data to tailored compression saved about 40% of the space

Re: where 2 find SMS compression code

2011-08-02 Thread McKown, John
Chapman Sent: Tuesday, August 02, 2011 6:32 AM To: IBM-MAIN@bama.ua.edu Subject: Re: where 2 find SMS compression code I realize you said you aren't testing for compression ratio or CPU usage, but you might still want to take a quick look at those with both tailored and generic/standard

Re: where 2 find SMS compression code

2011-08-02 Thread Norbert Friemel
On Tue, 2 Aug 2011 06:32:22 -0500, Scott Chapman wrote: I realize you said you aren't testing for compression ratio or CPU usage, but you might still want to take a quick look at those with both tailored and generic/standard compression. I just recently found that switching my SMF data

Re: where 2 find SMS compression code

2011-08-02 Thread Rick Fochtman
! Management today says: "Use SMS compression and eliminate the cost of Data Accelerator!" We did no testing to see how this will affect CPU usage or compression ratio. Just say "save money!" and eyes glisten like a child's in a candy shop. --unsnip

where 2 find SMS compression code

2011-08-01 Thread John McKown
In my SMS conversion to compress some VSAM datasets, I am getting a message like: IGD17162I RETURN CODE (12) REASON CODE (5F01083F) RECEIVED FROM COMPRESSION SERVICES WHILE PROCESSING DATA SET PRITV.PR.GCR26KSD, COMPRESSION REQUEST NOT HONORED BECAUSE DATA SET CHARACTERISTICS DO NOT MEET

Re: where 2 find SMS compression code

2011-08-01 Thread Norbert Friemel
On Mon, 1 Aug 2011 04:48:05 -0500, John McKown wrote: "I don't seem to be able to find the 5F01083F code." X'5F' = Compression Management Services; X'01' = CMPSVCAL (allocation), see http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DGT2R171/5.1.2.2; X'083F' = 2111 (DEC) = RS_NO_BENEFIT
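
For readers following Norbert's decode, a minimal sketch (plain Python, not IBM-supplied code) of how the IGD17162I reason code splits into the pieces his post names; the X'083F' portion converts to the decimal 2111 that the reason-code tables use:

def split_reason_code(hex_code: str) -> dict:
    # Byte 1 = component, byte 2 = calling function, bytes 3-4 = reason,
    # which the manuals list in decimal (X'083F' = 2111 = RS_NO_BENEFIT).
    value = int(hex_code, 16)
    return {
        "component": (value >> 24) & 0xFF,   # X'5F' = Compression Management Services
        "function":  (value >> 16) & 0xFF,   # X'01' = CMPSVCAL (allocation)
        "reason":    value & 0xFFFF,         # 2111 decimal
    }

print(split_reason_code("5F01083F"))
# {'component': 95, 'function': 1, 'reason': 2111}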

Re: where 2 find SMS compression code

2011-08-01 Thread McKown, John
- From: IBM Mainframe Discussion List [mailto:IBM-MAIN@bama.ua.edu] On Behalf Of Norbert Friemel Sent: Monday, August 01, 2011 5:17 AM To: IBM-MAIN@bama.ua.edu Subject: Re: where 2 find SMS compression code On Mon, 1 Aug 2011 04:48:05 -0500, John McKown wrote: I don't seem to be able to find

Re: where 2 find SMS compression code

2011-08-01 Thread Norbert Friemel
On Mon, 1 Aug 2011 09:16:52 -0500, McKown, John wrote: "Thanks. Of course, I was really hoping to learn WHY it is no benefit. Guess I'll need to double-check the allocation / max lrecl / cisize." Primary space < 5 or 8 MB, or *minimum* lrecl (w/o key) < 40? Norbert Friemel
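
A toy pre-check built only on the thresholds Norbert cites, assuming the common reading that the primary must be at least 5 MB (8 MB with no secondary) and the minimum record length at least 40 bytes; this is an illustration, not the authoritative DFSMS eligibility rule:

def likely_compressible(primary_mb: float, has_secondary: bool, min_lrecl: int) -> bool:
    # Assumed thresholds from the post: 5 MB primary (8 MB if no secondary)
    # and minimum LRECL (without key) of 40 bytes.
    size_floor = 5 if has_secondary else 8
    return primary_mb >= size_floor and min_lrecl >= 40

# A 2 MB KSDS with long records still fails on the size test:
print(likely_compressible(primary_mb=2, has_secondary=True, min_lrecl=200))   # False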

where is ESA/390 Data Compression manual SA22-7208

2011-05-12 Thread Tom Simons
We're looking into using the CMPSC Compression Call instruction, and the z/Arch POP says .. assumes knowledge of the introductory information and information about dictionary formats in *Enterprise Systems Architecture/390 Data Compression, SA22-7208-01*. I find lots of references to SA22-7208

Re: where is ESA/390 Data Compression manual SA22-7208

2011-05-12 Thread Steve Horein
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR602/CCONTENTS?DT=19961127103547 On Thu, May 12, 2011 at 8:02 PM, Tom Simons tom.sim...@gmail.com wrote: We're looking into using the CMPSC Compression Call instruction, and the z/Arch POP says .. assumes knowledge

RES: using SMS compression - how to manage

2011-04-28 Thread ITURIEL DO NASCIMENTO NETO
compression - how to manage We are replacing BMC's Data Accelerator compression with SMS compression. I have a DATACLAS (named DCEXTC) created which implements this. The DATACLAS works. At present, Data Accelerator works by having a list of dataset names and patterns which are used to determine

Re: RES: using SMS compression - how to manage

2011-04-28 Thread Ron Hawkins
John, I didn't think MAXSIZE took multivolume into account. Isn't it just primary + (15 * secondary)? I've often thought that compression products should come with a sampling utility to read one CYL of a dataset and provide a compression report. This could be used to isolate/find the best
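
To make the arithmetic in Ron's question concrete, a small illustration (numbers invented, 3390 geometry assumed at 56,664 bytes per track and 15 tracks per cylinder) of the primary + 15 * secondary single-volume estimate, expressed in MB the way an ACS &MAXSIZE comparison would see it:

BYTES_PER_CYL = 56_664 * 15          # assumed 3390 geometry

def single_volume_estimate_mb(primary_cyl: int, secondary_cyl: int) -> float:
    # primary plus 15 secondary extensions, the rule of thumb Ron quotes
    return (primary_cyl + 15 * secondary_cyl) * BYTES_PER_CYL / (1024 * 1024)

print(round(single_volume_estimate_mb(100, 50), 1))   # 850 cylinders -> ~689 MB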

Re: RES: using SMS compression - how to manage

2011-04-28 Thread McKown, John
-Original Message- From: IBM Mainframe Discussion List [mailto:IBM-MAIN@bama.ua.edu] On Behalf Of Ron Hawkins Sent: Thursday, April 28, 2011 10:03 AM To: IBM-MAIN@bama.ua.edu Subject: Re: RES: using SMS compression - how to manage John, I didn't think MAXSIZE took multivolume

using SMS compression - how to manage

2011-04-27 Thread McKown, John
We are replacing BMC's Data Accelerator compression with SMS compression. I have created a DATACLAS (named DCEXTC) which implements this. The DATACLAS works. At present, Data Accelerator works by having a list of dataset names and patterns which are used to determine if (and how) a dataset

Re: using SMS compression - how to manage

2011-04-27 Thread Gibney, Dave
compression - how to manage We are replacing BMC's Data Accelerator compression with SMS compression. I have a DATACLAS (named DCEXTC) created which implements this. The DATACLAS works. At present, Data Accelerator works by having a list of dataset names and patterns which are used to determine

Re: using SMS compression - how to manage

2011-04-27 Thread McKown, John
If the F3 FILTLIST gets too long, I'll need to create F4 and update the WHEN. I just don't like it. I would like something better. If, as you say, DVC does not influence MAXSIZE, then I might end up not assigning DCEXTC when I really should. I would really like to eliminate all compression. But we

Re: using SMS compression - how to manage

2011-04-27 Thread Gibney, Dave
Tell them to stop spending zCycles on compression. :) Simplify the FILTLIST to just the one VSAM file. Or, do as I did: extended, striped, compressed is the default, with a DATACLAS=NOEXTEND for the few cases that can't handle it. Dave Gibney Information Technology Services Washington State

Re: using SMS compression - how to manage

2011-04-27 Thread McKown, John
-Original Message- From: IBM Mainframe Discussion List [mailto:IBM-MAIN@bama.ua.edu] On Behalf Of Gibney, Dave Sent: Wednesday, April 27, 2011 1:59 PM To: IBM-MAIN@bama.ua.edu Subject: Re: using SMS compression - how to manage Tell them to stop spending zCycles on compression

Re: using SMS compression - how to manage

2011-04-27 Thread Greg Shirey
Extended is the default for all non-ESDS VSAM files. I can't remember what it was, but we had some problem with ESDS files with Extended Addressing (perhaps in CICS). I may try to reduce the number of files in the FILTLIST. I was trying to duplicate the current compression environment

Re: using SMS compression - how to manage

2011-04-27 Thread Walt Farrell
Well, it might be as much work to manage, and so might not be what you want, but you could make use of the RACF DFP segments that everyone seems to ignore, which were designed originally to eliminate the need for programming ACS exit routines and things like FILTLISTs. The DFP segment for the

Encryption, compression, etc.

2011-04-05 Thread R.S.
I'm looking for a solution for file exchange between z/OS and the Windows/Linux platform. The only requirement is to encrypt the file (a PS dataset) on the z/OS side and decrypt it on the distributed side, and vice versa. Nice to have: - hash calculation - compression - exploitation of CPACF

Re: Encryption, compression, etc.

2011-04-05 Thread Chase, John
versa. Nice to have: - hash calculation - compression - exploitation of CPACF or CryptoExpress or zIIP hardware (to reduce cost of CPU) Any clues and suggestions including both home-grown (DIY) solutions and commercial products are welcome. Why isn't FTP over SSL desirable? -jc

Re: Encryption, compression, etc.

2011-04-05 Thread Staller, Allan
for some solution for file exchange between z/OS and Windows/Linux platform. The only requirement is to encrypt the file (PS dataset) on z/OS side and decrypt it on distributed side and vice versa. Nice to have: - hash calculation - compression - exploitation of CPACF or CryptoExpress or zIIP

Re: Encryption, compression, etc.

2011-04-05 Thread Mark Jacobs
On 04/05/11 09:31, R.S. wrote: I'm looking for some solution for file exchange between z/OS and Windows/Linux platform. The only requirement is to encrypt the file (PS dataset) on z/OS side and decrypt it on distributed side and vice versa. Nice to have: - hash calculation - compression

Re: Encryption, compression, etc.

2011-04-05 Thread Jóhannes Magnússon
@bama.ua.edu] On Behalf Of R.S. Sent: 5. apríl 2011 13:31 To: IBM-MAIN@bama.ua.edu Subject: Encryption, compression, etc. I'm looking for some solution for file exchange between z/OS and Windows/Linux platform. The only requirement is to encrypt the file (PS dataset) on z/OS side and decrypt

Re: Encryption, compression, etc.

2011-04-05 Thread McKown, John
Of R.S. Sent: Tuesday, April 05, 2011 8:31 AM To: IBM-MAIN@bama.ua.edu Subject: Encryption, compression, etc. I'm looking for some solution for file exchange between z/OS and Windows/Linux platform. The only requirement is to encrypt the file (PS dataset) on z/OS side and decrypt

Re: Encryption, compression, etc.

2011-04-05 Thread Kirk Wolf
] On Behalf Of R.S. Sent: Tuesday, April 05, 2011 8:31 AM To: IBM-MAIN@bama.ua.edu Subject: Encryption, compression, etc. I'm looking for some solution for file exchange between z/OS and Windows/Linux platform. The only requirement is to encrypt the file (PS dataset) on z/OS side and decrypt

Re: Encryption, compression, etc.

2011-04-05 Thread Nagesh S
05, 2011 8:31 AM To: IBM-MAIN@bama.ua.edu Subject: Encryption, compression, etc. I'm looking for some solution for file exchange between z/OS and Windows/Linux platform. The only requirement is to encrypt the file (PS dataset) on z/OS side and decrypt it on distributed side and vice

Re: Encryption, compression, etc.

2011-04-05 Thread Tony Harminc
- compression - exploitation of CPACF or CryptoExpress or zIIP hardware (to reduce cost of CPU) Any clues and suggestions including both home-grown (DIY) solutions and commercial products are welcome. The company I used to work for (Proginet - acquired last year by Tibco) has a comprehensive

Re: Encryption, compression, etc.

2011-04-05 Thread Hal Merritt
Certificate-based TLS FTP is native to the z/OS platform. While certificates are very secure, they do carry a pretty good learning curve. Any z/OS hardware features installed on the box are exploited by default, I think. Typically encryption defeats compression. It seems that you can have one

Re: Encryption, compression, etc.

2011-04-05 Thread Staller, Allan
snip Is z/OS Encryption Facility different from ICSF? A link to the app prog guide here: http://publib.boulder.ibm.com/infocenter/zos/v1r10/topic/com.ibm.zos.r10.csfb400/toc.htm /snip YES!

Re: Encryption, compression, etc.

2011-04-05 Thread Mark Jacobs
Encrypted data is usually thought to be non-compressible. If you want compression in addition to encryption, you'd compress first and then encrypt the compressed data file. Mark Jacobs On 04/05/11 11:20, Hal Merritt wrote: Certificate-based TLS FTP is native to the z/OS platform. While
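
A minimal sketch of the ordering Mark describes (compress first, then encrypt), using zlib plus the third-party Python cryptography package purely for illustration; nothing here is the z/OS tooling discussed in the thread:

import zlib
from cryptography.fernet import Fernet

f = Fernet(Fernet.generate_key())
plaintext = b"AAAA" * 10_000                      # highly redundant sample data

sent = f.encrypt(zlib.compress(plaintext))        # compress, then encrypt
# Reversing the order leaves the compressor almost nothing to squeeze:
print(len(sent), "vs", len(zlib.compress(f.encrypt(plaintext))))

# The receiver undoes the steps in reverse: decrypt, then decompress.
assert zlib.decompress(f.decrypt(sent)) == plaintext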

Re: Encryption, compression, etc.

2011-04-05 Thread Kirk Wolf
think. Typically encryption defeats compression. It seems that you can have one or the other but not both. I haven't looked, but z/OS FTP may compress before encryption. (I think the compression occurs in the application layer and the encryption occurs in the transport layer.) IBM Ported Tools

Re: Hardware-assisted compression: not CPU-efficient?

2010-12-08 Thread Yifat Oren
data sets.. Best Regards, Yifat -Original Message- From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On Behalf Of Andrew N Wilt Sent: Friday, December 03, 2010 1:45 AM To: IBM-MAIN@bama.ua.edu Subject: Re: Hardware-assisted compression: not CPU-efficient? Ron

Re: Hardware-assisted compression: not CPU-efficient?

2010-12-08 Thread Hal Merritt
, 2010 12:22 PM To: IBM-MAIN@bama.ua.edu Subject: Re: Hardware-assisted compression: not CPU-efficient? Pardon my bringing back an old thread, but - I wanted to see how much better is the COMPRESS option over the HWCOMPRESS in regards to CPU time and was pretty surprised when my results suggested

Hardware-assisted compression: not CPU-efficient?

2010-12-02 Thread Johnny Luo
to tape. It works well but the CPU usage is a problem because we have many such backup jobs running simultaneously. If hardware-assisted compression cannot reduce the CPU overhead, I will consider using a resource group to cap those jobs. Best Regards, Johnny Luo

Re: Hardware-assisted compression: not CPU-efficient?

2010-12-02 Thread Ron Hawkins
Johnny, The saving in hardware assisted compression is in decompression - when you read it. Look at what should be a much lower CPU cost to decompress the files during restore and decide if the speed of restoring the data concurrently is worth the increase in CPU required to back it up

Re: Hardware-assisted compression: not CPU-efficient?

2010-12-02 Thread Martin Packer
Ron, is it generally the case that CPU is saved on read? I'm seeing QSAM with HDC jobsteps showing very high CPU. But then they seem to both write and read. Enough CPU to potentially suffer from queuing. (And, yes, I know you were talking about a different category of HDC usage.) Martin

Re: Hardware-assisted compression: not CPU-efficient?

2010-12-02 Thread Miklos Szigetvari
Hi. A few years ago I tried hardware compression, as we use the zlib library (http://www.ietf.org/rfc/rfc1950.txt) intensively to compress/expand. I never got a proper answer, and it is still not clear to me in which cases hardware compression would bring some CPU reduction

Re: Hardware-assisted compression: not CPU-efficient?

2010-12-02 Thread Johnny Luo
Miklos, What do you mean by 'zlib'? Is it free on z/OS? Best Regards, Johnny Luo On Thu, Dec 2, 2010 at 8:10 PM, Miklos Szigetvari miklos.szigetv...@isis-papyrus.com wrote: Hi A few years ago I have tried with hardware compression, as we are using intensively the zlib library (http

Re: Hardware-assisted compression: not CPU-efficient?

2010-12-02 Thread Miklos Szigetvari
with hardware compression, as we are using intensively the zlib library (http://www.ietf.org/rfc/rfc1950.txt) to compress/expand . Never got a proper answer, and till now not clear, in which case would bring the hardware compression some CPU reduction On 12/2/2010 12:36 PM, Martin Packer wrote: Ron

Re: Hardware-assisted compression: not CPU-efficient?

2010-12-02 Thread Paul Gilmartin
On Thu, 2 Dec 2010 02:53:17 -0800, Ron Hawkins wrote: The saving in hardware assisted compression is in decompression - when you read it. Look at what should be a much lower CPU cost to decompress the files during restore and decide if the speed of restoring the data concurrently is worth

Re: Hardware-assisted compression: not CPU-efficient?

2010-12-02 Thread Yifat Oren
List [mailto:ibm-m...@bama.ua.edu] On Behalf Of Johnny Luo Sent: Thursday, 02 December 2010 12:13 To: IBM-MAIN@bama.ua.edu Subject: Hardware-assisted compression: not CPU-efficient? Hi, DSS DUMP supports the COMPRESS/HWCOMPRESS keyword and I found out in my test that HWCOMPRESS costs more CPU than COMPRESS

Re: Hardware-assisted compression: not CPU-efficient?

2010-12-02 Thread Vernooij, CP - SPLXM
Yifat Oren yi...@tmachine.com wrote in message news:3d0c19e6913742b282eeb9a7c4ae3...@yifato... Hi Johnny, I was under the impression that for DFDSS DUMP, COMPRESS and HWCOMPRESS are synonymous; Are you saying they are not? If you are writing to tape why not use the drive

Re: Hardware-assisted compression: not CPU-efficient?

2010-12-02 Thread Norbert Friemel
On Thu, 2 Dec 2010 16:29:56 +0200, Yifat Oren wrote: I was under the impression that for DFDSS DUMP, COMPRESS and HWCOMPRESS are synonymous; are you saying they are not? Yes, they are not synonymous. HWCOMPRESS uses the CMPSC instruction (dictionary-based compression). COMPRESS uses RLE (run-length encoding)
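
For contrast with the dictionary-based CMPSC path, a toy run-length encoder in the spirit of what Norbert attributes to the COMPRESS keyword (illustration only, not the DFSMSdss algorithm); note that encoding and decoding do comparable work, which is why this style of compression has a roughly symmetric CPU cost:

def rle_encode(data: bytes) -> list:
    runs, i = [], 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i] and j - i < 255:
            j += 1
        runs.append((j - i, data[i]))     # (run length, byte value)
        i = j
    return runs

def rle_decode(runs: list) -> bytes:
    return b"".join(bytes([value]) * count for count, value in runs)

sample = b"\x00" * 500 + b"ABC" + b"\x40" * 200
assert rle_decode(rle_encode(sample)) == sample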

Re: Hardware-assisted compression: not CPU-efficient?

2010-12-02 Thread Tony Harminc
On 2 December 2010 05:53, Ron Hawkins ron.hawkins1...@sbcglobal.net wrote: Johnny, The saving in hardware assisted compression is in decompression - when you read it. Look at what should be a much lower CPU cost to decompress the files during restore and decide if the speed of restoring

Re: Hardware-assisted compression: not CPU-efficient?

2010-12-02 Thread Tom Marchant
On Thu, 2 Dec 2010 12:09:23 -0500, Tony Harminc wrote: On 2 December 2010 05:53, Ron Hawkins wrote: The saving in hardware assisted compression is in decompression - when you read it. Look at what should be a much lower CPU cost to decompress the files during restore and decide if the speed

Re: Hardware-assisted compression: not CPU-efficient?

2010-12-02 Thread Staller, Allan
Unfortunately, IBM, et al. *DO NOT* bill on elapsed time. More CPU used for Dump is less CPU available for productive work, or worse yet, a bigger software bill! snip Increased CPU time to do the dump does not necessarily mean that the elapsed time is longer. In fact, by compressing the

Re: Hardware-assisted compression: not CPU-efficient?

2010-12-02 Thread Ron Hawkins
Martin, Except for when the compression assist instructions were in millicode on the G4 and G5, the hardware compression from Compression Services has always had an asymmetric cost for DFSMS compression. I remember some early documentation from IBM when it was first introduced in DFSMS

Re: Hardware-assisted compression: not CPU-efficient?

2010-12-02 Thread Ron Hawkins
: [IBM-MAIN] Hardware-assisted compression: not CPU-efficient? On Thu, 2 Dec 2010 02:53:17 -0800, Ron Hawkins wrote: The saving in hardware assisted compression is in decompression - when you read it. Look at what should be a much lower CPU cost to decompress the files during restore

Re: Hardware-assisted compression: not CPU-efficient?

2010-12-02 Thread Hal Merritt
-assisted compression: not CPU-efficient? Gil, I was thinking that a faster restore would have some value as a reduction in recovery time, as opposed to back-up duration, which is usually outside of any business critical path. This would have value in business continuance whether it was a small

Re: Hardware-assisted compression: not CPU-efficient?

2010-12-02 Thread Stephen Mednick
Original Message- From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On Behalf Of Hal Merritt Sent: Friday, 3 December 2010 6:44 AM To: IBM-MAIN@bama.ua.edu Subject: Re: Hardware-assisted compression: not CPU-efficient? Conversely, sometimes it is hard to get the backups

Re: Hardware-assisted compression: not CPU-efficient?

2010-12-02 Thread Ted MacNEIL
opposed to back-up duration which is usually outside of any business critical path. It shouldn't be, especially if back-ups have to complete before sub-systems can come up. If we ran out of window, we had senior IT management and business contacts decide which was more critical: back-up;

Re: Hardware-assisted compression: not CPU-efficient?

2010-12-02 Thread Ron Hawkins
Tony, You are surprised, and then you explain your surprise by agreeing with me. I'm confused. I'm not sure if you realized that the Huffman encoding technique used by the DFSMSdss COMPRESS keyword is not a dictionary-based method, and has a symmetrical CPU cost for compression and decompression

Re: Hardware-assisted compression: not CPU-efficient?

2010-12-02 Thread Ron Hawkins
these products. I did say that the increased cost and time for backup need to be evaluated against any improvement in restoration time with hardware compression. Thank you to all those who reinforced the need for this evaluation in their responses. -Original Message- From: IBM Mainframe Discussion

Re: Hardware-assisted compression: not CPU-efficient?

2010-12-02 Thread Andrew N Wilt
Ron, Thank you for the good response. It is true that the DFSMSdss COMPRESS keyword and HWCOMPRESS keyword do not perform the same types of compression. Like Ron said, the COMPRESS keyword uses a Huffman encoding technique, and works amazingly well for repeated bytes (just the types of things

Re: Hardware-assisted compression: not CPU-efficient?

2010-12-02 Thread Tony Harminc
technique used by the DFSMSdss COMPRESS keyword is not a dictionary-based method, and has a symmetrical CPU cost for compression and decompression. No, I didn't know anything about the compression methods triggered by these two keywords until this thread. But I do know to some extent how both Huffman

Re: Hardware-assisted compression: not CPU-efficient?

2010-12-02 Thread Ron Hawkins
Tony, Then the misunderstanding is that Compression Services, as called by DFSMSdfp and DFSMSdss with HWCOMPRESS, uses an LZW compression scheme, while DFSMShsm and DFSMSdss with the COMPRESS keyword use a Huffman technique. The asymmetric cost of HWCOMPRESS I was referring to, and that apparently

Re: SMS compression cost size

2010-03-01 Thread Tobias Cafiero
Ron, We compress our GDGs and don't send them to ML1. However, at the time we were trying to save DASD and used the 8 MB and 5 MB values. We want to raise the bar for compression, because our DASD issues have subsided. Is there an ideal threshold for the DASD/compression size value

Re: SMS compression cost size

2010-03-01 Thread Ron Hawkins
Tobias, There's no magic number. It pretty much depends on the site, the compression method and how datasets are accessed. It's like asking whether there is an ideal wheel rim size for every car. Why not raise the bar in some increment and measure the effect. You probably have some idea of where you

Re: SMS compression cost size

2010-03-01 Thread Tobias Cafiero
@bama.ua.edu cc Subject Re: SMS compression cost size Tobias, There's no magic number. It's pretty much depends on the site, the compression method and how datasets are accessed. It's like asking is there an ideal wheel rim size for every car. Why not raise the bar in some increment

Re: SMS compression cost size

2010-03-01 Thread Tobias Cafiero
:35 AM Please respond to IBM Mainframe Discussion List IBM-MAIN@bama.ua.edu To IBM-MAIN@bama.ua.edu cc Subject Re: SMS compression cost size Tobias, There's no magic number. It's pretty much depends on the site, the compression method and how datasets are accessed. It's like asking

Re: SMS compression cost size

2010-02-27 Thread Ron Hawkins
...@bama.ua.edu] On Behalf Of R Hey Sent: Wednesday, February 24, 2010 12:16 AM To: IBM-MAIN@bama.ua.edu Subject: Re: [IBM-MAIN] SMS compression cost size Ron, Your example is an 'exception', it was decided to do it for that DS to gain the benefit. That's OK by me. It wasn't decided

Re: SMS compression cost size

2010-02-24 Thread R Hey
they are not read many times. This doesn't make sense to me, if one is short on CPU. I should have said: I don't see why anyone would compress ALL DS under 500 Cyl these days, just to save space, when one is short on CPU. there is more to compression than just the size of the dataset. Amen

Re: SMS compression cost size

2010-02-24 Thread Tobias Cafiero
Rez, Do you have an analysis of what compression costs per dsn? Regards, Tobias Cafiero Data Resource Management Tel: (212) 855-1117 R Hey sys...@yahoo.com Sent by: IBM Mainframe Discussion List IBM-MAIN@bama.ua.edu 02/24/2010 03:16 AM Please respond to IBM Mainframe Discussion

Re: SMS compression cost size

2010-02-22 Thread R Hey
Thanks for the replies. The Redbook on VSAM says: There are some types of data sets that are not suitable for ZL compression, resulting in rejection, such as these: - Small data sets. - Data sets in which the pattern of the data does

Re: SMS compression cost size

2010-02-22 Thread Ron Hawkins
Reza, I can think of a KSDS that was being accessed by LSR, but every CI was touched until it was loaded into the LSR pool. Over 150 programs touched that 200 CYL baby in the same 60 minutes. Compression reduced the size by 55% and reduced the IO by 55% as well. In terms of IO savings

Re: SMS compression cost size

2010-02-17 Thread David Andrews
On Wed, 2010-02-17 at 01:28 -0500, Ron Hawkins wrote: What is the objective of compressing the dataset? In my environment (cycles to burn) reads for certain long sequential datasets are faster for compressed data. So my ACS routines look for specific larger datasets that are written once, read

Re: SMS compression cost size

2010-02-17 Thread R Hey
Ron, What is the objective of compressing the dataset? Nobody remembers. It was done in a time far, far away ... My client is short on CPU, so I (new sysFROG) started wondering why ... regardless of size. So, size is not everything after all ;-) Rez

Re: SMS compression cost size

2010-02-17 Thread Rick Fochtman
snip--- By 'killer apps' you mean good ones to COMP for, right? Would you COMP regardless of size, if short on CPU already, with lots of DASD? (even for less than 50 cyls) If size matters, what should the MIN size be?

SMS compression cost size

2010-02-16 Thread R Hey
Hi, Are there any figures for the cost of SMS compression out there, or is it YMMV? (I've checked the archive to find cost is higher for W than for R ..., seen many who decided not to do it with a lot of YMMV ...) Also, are there any ROT for the min size to compress for? One client I had

Re: SMS compression cost size

2010-02-16 Thread Ron Hawkins
Reza, It's LZW compression, which has an asymmetric cost by design: compressing always costs more than decompressing. Back when the compression assist instructions were announced, IBM were saying the difference was around 6:1, compression vs decompression. The compression and decompression
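
A quick way to see the compress/expand asymmetry Ron describes, using zlib's DEFLATE on synthetic text (illustration only; this is software compression, not the CMPSC hardware assist, so the exact ratio will differ from IBM's 6:1 figure):

import random, time, zlib

random.seed(0)
words = [bytes(random.choices(b"abcdefgh ", k=8)) for _ in range(1000)]
data = b"".join(random.choice(words) for _ in range(200_000))     # ~1.6 MB of text-like data

t0 = time.process_time()
packed = zlib.compress(data)
t1 = time.process_time()
zlib.decompress(packed)
t2 = time.process_time()

print(f"compress {t1 - t0:.3f}s, decompress {t2 - t1:.3f}s, "
      f"about {(t1 - t0) / max(t2 - t1, 1e-9):.0f}:1")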

Re: SMS compression cost size

2010-02-16 Thread R Hey
Ron, By 'killer apps' you mean good ones to COMP for, right? Would you COMP regardless of size, if short on CPU already, with lots of DASD? (even for less than 50 cyls) If size matters, what should the MIN size be? Cheers, Rez

Re: SMS compression cost size

2010-02-16 Thread Ron Hawkins
Reza, Yes, killer apps are the good ones. I wouldn't compress a 50 CYL dataset just for compression's sake. If it was being read by all and sundry with no updates, then I'd load it into Hiperbatch. I've never looked at compression to save space. I've always viewed it as an IO reduction technique

Re: why compression costs additional I/O?

2010-01-28 Thread Yifat Oren
27, 2010 10:28 PM To: IBM-MAIN@bama.ua.edu Subject: Re: why compression costs additional I/O? Peter, Yes for your example I am recommending NCP=96, which means BUFNO=96. I habitually put both NCP and BUFNO on BSAM files because I've never been sure if BSAM calculates BUFNO using the NCP value from

Re: why compression costs additional I/O?

2010-01-28 Thread Bill Fairchild
Message- From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On Behalf Of Pawel Leszczynski Sent: Wednesday, January 27, 2010 9:23 AM To: IBM-MAIN@bama.ua.edu Subject: Re: why compression costs additional I/O? Hi Yifat, Thanks for the answer - you are right! - I've checked

SMS Compression - Software or Hardware

2010-01-28 Thread O'Brien, David W. (NIH/CIT) [C]
If a Dataclass with the following attributes is invoked:
Data Set Name Type . . . . . : EXTENDED
If Extended . . . . . . . . : REQUIRED
Extended Addressability . . : YES
Record Access Bias . . . . : USER
Space Constraint Relief . . . : YES
Reduce Space Up To (%) . . : 50

Re: SMS Compression - Software or Hardware

2010-01-28 Thread Ron Hawkins
David, SMS uses hardware compression. It has an asymmetric CPU cost, where decompressing the data uses 80% less CPU than compressing it. Ron -Original Message- From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On Behalf Of O'Brien, David W. (NIH/CIT) [C] Sent

Re: SMS Compression - Software or Hardware

2010-01-28 Thread O'Brien, David W. (NIH/CIT) [C]
Thanks Ron Dave O'Brien NIH Contractor From: Ron Hawkins [ron.hawkins1...@sbcglobal.net] Sent: Thursday, January 28, 2010 1:51 PM To: IBM-MAIN@bama.ua.edu Subject: Re: SMS Compression - Software or Hardware David, SMS uses hardware compression. It has

Re: SMS Compression - Software or Hardware

2010-01-28 Thread Mike Bell
There are 2 kinds of compression. The outboard kind that takes place in the tape unit is one example; there is no difference in the z/OS CPU time for writing a compressed tape. The operating system kind is always software. The software compression can be either just software or hardware

Re: SMS Compression - Software or Hardware

2010-01-28 Thread Ron Hawkins
Mike, It is the hardware-assisted compression that I was referring to. There were/are products that do software compression without using the hardware assist. DFSMSdss, DFSMShsm and old versions of SAS and IAM spring to mind. Then there was the IBM G4 and G5 that moved the hardware assist

why compression costs additional I/O?

2010-01-27 Thread Pawel Leszczynski
-compressible output we can see this:
                           EXCP    TCB    SRB   el.time
TESTXWP5 STEP110 00        757K   3.51    .70     9.01   -- w/o compression
TESTXWP5 STEP120 00       1462K   3.62   2.89    10.45   -- w. compression
We guess that big SRB in (2) goes

Re: why compression costs additional I/O?

2010-01-27 Thread R.S.
compare such sorting with sorting on non-compressible output we can see this: EXCP TCB SRB el.time TESTXWP5 STEP110 00 757K 3.51 .70 9.01 -- w/o compression TESTXWP5 STEP120 00 1462K 3.62 2.89 10.45 -- w

Re: why compression costs additional I/O?

2010-01-27 Thread NIGEL WOLFENDALE
Radoslaw, I can understand from your explanation that we would get the same number of EXCPs, but not twice as many. If say we are backing up 1000 30K blocks, and compression reduces the size of each block to, say, 10K, then we

Re: why compression costs additional I/O?

2010-01-27 Thread John Kington
-compressible output we can see this: EXCP TCB SRB el.time TESTXWP5 STEP110 00 757K 3.51 .70 9.01 -- w/o compression TESTXWP5 STEP120 00 1462K 3.62 2.89 10.45 -- w. compression We guess that big SRB in (2) goes

Re: why compression costs additional I/O?

2010-01-27 Thread Pawel Leszczynski
-- w/o compression TESTXWP5 STEP120 00 1462K 3.62 2.89 10.45 -- w. compression We guess that the big SRB in (2) goes for compression (that we understand; we will probably quit compression altogether), but we don't understand the 2 times bigger EXCP count in the second case. Any ideas

Re: why compression costs additional I/O?

2010-01-27 Thread Yifat Oren
. Hope that helps, Yifat Oren. -Original Message- From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On Behalf Of Pawel Leszczynski Sent: Wednesday, January 27, 2010 12:56 PM To: IBM-MAIN@bama.ua.edu Subject: why compression costs additional I/O? Hello everybody, Recently

Re: why compression costs additional I/O?

2010-01-27 Thread Pawel Leszczynski
: why compression costs additional I/O? Hello everybody, Recently we are reviewing our EndOfDay jobs looking for potential performance improvements (reducing CPU/elapsed time). We have several jobs sorting big datasets where output is SMS-compressible (type: EXTENDED) datasets. When we compare

Re: why compression costs additional I/O?

2010-01-27 Thread David Betten
generally all of it probably means that using DFSORT for compressed datasets is not a good idea. I'm not sure I would agree with a general statement such as that. First, there is a CPU overhead associated with compression and it affects ALL applications, not just sort. The overhead is generally

Re: why compression costs additional I/O?

2010-01-27 Thread Edward Jaffe
Pawel Leszczynski wrote: generally all of it probably mean that using DFSORT for compressed datasets is not good idea. The EXCP access method is not supported for extended sequential data sets--whether compressed or not, striped or not. I/O for these data sets is performed by Media

Re: why compression costs additional I/O?

2010-01-27 Thread Ron Hawkins
count. These are usually multi-CYL chains. One of the few problems with Extended Format datasets is that the block chaining defaults are lousy. This is probably why your job is taking longer with compression. BSAM and QSAM always use double buffering, so whatever you specify is halved for chaining

Re: why compression costs additional I/O?

2010-01-27 Thread Farley, Peter x23353
: Re: why compression costs additional I/O? Pawel, For a regular DSORG=PS dataset DFSORT and SYNCSORT use their own access method to read and write the SORTIN and SORTOUT using very efficient long chained Start Sub-Channels. The EXCP count reported for these datasets is the Start SubChannel

Re: why compression costs additional I/O?

2010-01-27 Thread Ron Hawkins
, January 27, 2010 11:51 AM To: IBM-MAIN@bama.ua.edu Subject: Re: [IBM-MAIN] why compression costs additional I/O? Ron, If a PS-E dataset has 6 stripes, are you recommending using NCP=96 (=16 * 6)? If so, what BUFNO should be used in that case? A long time ago in a galaxy far, far away
