Ted,
I cannot tell if you're being sarcastic or not, but we did use compression on our banking databases, VSAM, and flat files.
[Ron Hawkins]
Yes, I was being sarcastic - you caught me - but it is still a fact that I have successfully used these techniques for many countries, Banks and Telcos. I
Both hardware and software (SHRINK).
[Ron Hawkins]
For SHRINK to be hardware compression it must be using the instructions provided for compression services.
I meant we used hardware compression.
And we used software compression; the software compression product was SHRINK.
I meant them overhead.
Thanks,
Yifat
-Original Message-
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On Behalf Of Ron Hawkins
Sent: Wednesday, 07 July 2010 08:50
To: IBM-MAIN@bama.ua.edu
Subject: Re: VSAM Max Lrecl?
Ted,
The performance gain made sense then, and it makes sense now.
I'm missing your point, as I didn't mention a write intensive environment.
I'm not certain what you meant.
My first example is father-to-son updates, which is 50% write at worst, and the second example is read intensive.
Those are nice theoretical environments.
As the second example is not
...
Ron
-Original Message-
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On Behalf Of Ted MacNEIL
Sent: Wednesday, July 07, 2010 3:11 AM
To: IBM-MAIN@bama.ua.edu
Subject: Re: [IBM-MAIN] VSAM Max Lrecl?
I'm missing your point, as I didn't mention a write intensive environment.
I guess Banking Applications must be really unique in Canada, because this
method worked for Banking and Credit Card applications for over 20
countries.
I cannot tell if you're being sarcastic or not, but we did use compression on our banking databases, VSAM, and flat files.
Both hardware and software (SHRINK).
--snip--
I'm wondering if your primary rule of not compressing a file unless it
will exceed its architectural limit may have blocked the opportunity for
you to come across cases where compression is not a waste of time.
Synchronous
--snip-
Ted,
Well, I don't know what else to say Ted. These are real world examples that
provided significant IO and IO Time reduction for Banking and Credit Card
applications. I did not say I recommend these techniques, I
On Wed, 7 Jul 2010 13:48:13 +, Ted MacNEIL wrote:
I guess Banking Applications must be really unique in Canada,
because this method worked for Banking and Credit Card
applications for over 20 countries.
I cannot tell if you're being sarcastic or not, but we did use compression on our banking databases, VSAM, and flat files.
Tsk, tsk.
sarcastic?
On 7 Jul 2010 07:48:56 -0700, in bit.listserv.ibm-main you wrote:
--snip-
Ted,
Well, I don't know what else to say Ted. These are real world examples that
provided significant IO and IO Time reduction for Banking and Credit
On 2010-07-07 18:41, Clark Morris wrote:
On 7 Jul 2010 07:48:56 -0700, in bit.listserv.ibm-main you wrote:
--snip-
Ted,
Well, I don't know what else to say Ted. These are real world examples that
provided
R.S. wrote:
On 2010-07-07 18:41, Clark Morris wrote:
On 7 Jul 2010 07:48:56 -0700, in bit.listserv.ibm-main you wrote:
--snip-
Ted,
Well, I don't know what else to say Ted. These are real world
examples that
On 2010-07-07 22:47, Steve Comstock wrote:
R.S. wrote:
Yes and no.
You can store your photos in USS files. In the PC world - FROM THE OS POINT OF VIEW - all files are unstructured: just a sequence of bytes, no records, no access methods. Obviously the applications use their own formats; most of them are
Subject: Re: [IBM-MAIN] VSAM Max Lrecl?
Ron,
How does SMS striping measure up in regards to synchronous remote copy?
Locally, the same 40-50% I/O elapsed time savings can be gained by SMSingly striping the data sets (into 2 or more stripes).
True, there is some CPU overhead for striping
On 7/6/2010 12:14 AM, Paul Gilmartin wrote:
And that doesn't help.
Yes, it does. There will still be cases where any compression
produces larger output, but it is more likely that when one
method fails, another will show improvement.
Just consider PKZIP in its entirety as a complex
Sent: Tuesday, July 06, 2010 11:48 AM
To: IBM-MAIN@bama.ua.edu
Subject: Re: VSAM Max Lrecl?
On 7/6/2010 12:14 AM, Paul Gilmartin wrote:
And that doesn't help.
Yes, it does. There will still be cases where any compression produces larger output, but it is more likely that when one method fails, another will show improvement.
The formal proof applies to PKZIP regardless of its internal complexity.
While your statement sounds plausible, it's not compelling.
If you treat PKZIP as a black box,
i.e., data in, compressed out,
then the formal proof still applies.
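Taking the black-box view literally, the proof is just a counting (pigeonhole) argument: there are more n-bit inputs than there are strictly shorter outputs. A minimal sketch of that count (mine, not from the thread; Python used purely for illustration):

```python
# A lossless compressor must map distinct inputs to distinct outputs.
# There are 2**n bit strings of length exactly n, but only 2**n - 1
# bit strings of length 0 .. n-1, so at least one n-bit input cannot
# map to anything shorter: some input never shrinks.

def shorter_strings(n: int) -> int:
    """Count the bit strings strictly shorter than n bits."""
    return sum(2 ** k for k in range(n))  # equals 2**n - 1

for n in range(1, 25):
    assert shorter_strings(n) == 2 ** n - 1
    assert shorter_strings(n) < 2 ** n  # fewer pigeonholes than pigeons
```

This holds for any lossless scheme, PKZIP included, no matter how clever its internals are.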
If I remember my Computing Theory correctly, from the
snip---
The formal proof applies to PKZIP regardless of its internal complexity.
While your statement sounds plausible, it's not compelling.
If you treat PKZIP as a black box,
i.e., data in, compressed out,
then the formal proof still applies.
I'm a results oriented guy; formal proofs, elegant or otherwise, don't
float my boat anywhere near as well as concrete results that I can
quantify.
I tend to agree with you; without the time to do an empirical study on compression, what do you have?
I've never had a compressed file come out
On Tue, 6 Jul 2010 20:40:53 +, Ted MacNEIL wrote:
I'm a results oriented guy; formal proofs, elegant or otherwise, don't
float my boat anywhere near as well as concrete results that I can
quantify.
You may safely disregard a formal proof that a technique will
work, and operate as if it
On 7/6/2010 3:28 PM, Charles Mills wrote:
Can't the lengthening case be limited in theory to lengthening by one bit, because we can simply add a flag bit for compressed/not-compressed? (Practically speaking, it would probably be one byte, allowing for additional information such as compression method.)
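The flag-byte idea above is easy to sketch. This is my own illustration, not any product's format (PKZIP's "stored" method works on the same principle); zlib here is just a stand-in compressor:

```python
import zlib

RAW, DEFLATE = 0, 1  # hypothetical one-byte method flags

def pack(record: bytes) -> bytes:
    """Compress a record, but never expand it by more than the flag byte."""
    compressed = zlib.compress(record)
    if len(compressed) < len(record):
        return bytes([DEFLATE]) + compressed
    return bytes([RAW]) + record  # worst case: exactly one byte of overhead

def unpack(blob: bytes) -> bytes:
    method, body = blob[0], blob[1:]
    return zlib.decompress(body) if method == DEFLATE else body

text = b"abab" * 100       # highly compressible
noise = bytes(range(256))  # incompressible at this size
for rec in (text, noise):
    assert unpack(pack(rec)) == rec
    assert len(pack(rec)) <= len(rec) + 1  # expansion bounded by the flag
```

The pigeonhole proof still applies (the one-byte-larger case is the unavoidable expansion), but the bound becomes tight and practical.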
-Original Message-
From: IBM Mainframe Discussion List On Behalf Of Gerhard Postpischil
[ snip ]
I consider the 50% to be a practical lower limit; any less, and the method would not be considered. But it reminds me of an interesting article I read in the eighties - the author
On 7/6/2010 6:24 PM, Chase, John wrote:
Google IRTNOG for a short story about compression. :-)
Thanks, I had completely forgotten I ever read that. And on the
first hit's page, there is a very nice comment by Jorge Luis Borges.
Gerhard Postpischil
Bradford, VT
-Original Message-
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On Behalf Of Ted MacNEIL
Sent: Tuesday, July 06, 2010 1:41 PM
To: IBM-MAIN@bama.ua.edu
Subject: Re: [IBM-MAIN] VSAM Max Lrecl?
I'm a results oriented guy; formal proofs, elegant or otherwise, don't
float my boat
I'm wondering if your primary rule of not compressing a file unless it will
exceed its architectural limit may have blocked the opportunity for you to
come across cases where compression is not a waste of time.
It's actually the other way around.
We found it a waste of time and resources.
So,
The best I/O is the one you don't do. That's something compression can do for you.
Ron
-Original Message-
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On Behalf Of Ted MacNEIL
Sent: Tuesday, July 06, 2010 5:19 PM
To: IBM-MAIN@bama.ua.edu
Subject: Re: [IBM-MAIN] VSAM Max Lrecl
I've been using these techniques since 1996, and they still work more than
10 years later.
I've been using them for a lot longer than that.
The performance gain made sense then, and it makes sense now.
Does it with sub-5ms response?
After all, I'm sure you are one of the supporters of the
Ted,
The performance gain made sense then, and it makes sense now.
Does it with sub-5ms response?
[Ron Hawkins]
Yes. I usually figure out the saving with 0.35 to 1.5ms response time in
SIMPLEX and 0.75 to 3ms response time in DUPLEX with Synchronous remote
copy. Anything else is usually
I've unbuttoned the ARCHIVER (CBTTAPE file 147) for some upgrade
work, thanks to a problem I've discovered. This problem is due to the
limit on the LRECL/BLKSIZE of any non-VSAM dataset being processed. In
certain isolated cases, any record may GROW instead of shrink when going
through the ARCHIVER's compaction process (using the Huffman algorithm).
Date: Mon, 5 Jul 2010 14:11:13 -0500
From: rfocht...@ync.net
I need to know: what is the maximum RECORDSIZE I can define for the
ARCHIVE cluster, on 3380 or 3390 devices. Assume that I will not specify
a CISIZE but rather let IDCAMS choose a size. I'd like to expand the
ARCHIVE
On 2010-07-05 21:11, Rick Fochtman wrote:
I've unbuttoned the ARCHIVER (CBTTAPE file 147) for some upgrade
work, thanks to a problem I've discovered. This problem is due to the
limit on the LRECL/BLKSIZE of any non-VSAM dataset being processed. In
certain isolated cases, any record may GROW
At 2:11 PM -0500 on 7/5/10, Rick Fochtman wrote about VSAM Max Lrecl?:
In certain isolated cases, any record may GROW instead of shrink when going
through the ARCHIVER's compaction process (using the Huffman algorithm).
That is interesting. I thought that one of the attributes of the Huffman algorithm was that expansion due to the substitution was impossible ...
On Mon, 5 Jul 2010 16:56:20 -0400, Robert A. Rosenberg wrote:
That is interesting. I thought that one of the attributes of the
Huffman algorithm was that expansion due to the substitution was
impossible ...
Not impossible; rather, inevitable. Think pigeonhole principle.
-- gil
---snip---
In certain isolated cases, any record may GROW instead of shrink when going through the ARCHIVER's compaction process (using the Huffman algorithm).
That is interesting. I thought that one of the attributes of
---snip---
That is interesting. I thought that one of the attributes of the
Huffman algorithm was that expansion due to the substitution was
impossible ...
Not impossible; rather, inevitable. Think pigeonhole principle.
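The inevitability is easy to see empirically, too. The sketch below is mine, and it uses zlib's deflate rather than the ARCHIVER's Huffman coder, but the mechanism is the same: a record that is already high entropy gains nothing from the entropy coder, and the container overhead then guarantees growth.

```python
import os
import zlib

record = os.urandom(4096)  # stand-in for an already-incompressible record
out = zlib.compress(record)

# Deflate cannot shrink high-entropy data, and the zlib wrapper adds a
# few bytes of its own, so the "compressed" record comes out larger.
assert len(out) > len(record)
print(f"{len(record)} bytes -> {len(out)} bytes")
```

Compressing already-compressed or encrypted data sets triggers exactly this case, which is one reason a record can GROW through the compaction step.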
On 7/5/2010 11:12 PM, Rick Fochtman wrote:
Not inevitable. Source code usually achieves 60%-90% compression with the hard-coded tables in use so far. But the table is biased toward source code and doesn't work so well with load modules.
There exists a formal proof that for any compression scheme, some inputs must grow.
On Mon, 5 Jul 2010 23:33:45 -0400, Gerhard Postpischil wrote:
On 7/5/2010 11:12 PM, Rick Fochtman wrote:
Not inevitable. Source code usually achieves 60%-90% compression with the hard-coded tables in use so far. But the table is biased toward source code and doesn't work so well with load modules.