On 06/05/2010 21:07, Erik Trimble wrote:
VM images contain large quantities of executable files, most of which
compress poorly, if at all.
What data are you basing that generalisation on?
Look at these simple examples for libc on my OpenSolaris machine:
1.6M /usr/lib/libc.so.1*
636K
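A quick way to sanity-check the claim yourself: compress a binary with zlib (roughly what ZFS's gzip compression applies per block) and compare sizes. This sketch uses the running Python interpreter's binary as a stand-in for libc, since any executable on hand will do; the exact ratio will vary by platform.

```python
# Rough check of how well an executable compresses, using zlib as a
# stand-in for ZFS compression=gzip. The Python interpreter binary
# substitutes for libc.so.1 here; any binary file works.
import sys
import zlib

with open(sys.executable, "rb") as f:
    raw = f.read()

compressed = zlib.compress(raw, 6)
ratio = len(compressed) / len(raw)
print(f"{sys.executable}: {len(raw)} -> {len(compressed)} bytes "
      f"({ratio:.0%} of original)")
```

On typical systems an ELF binary lands well under its original size, in line with the libc numbers quoted above.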
On Fri, May 7, 2010 04:32, Darren J Moffat wrote:
Remember also that unless you are very CPU-bound, you might actually
improve performance by enabling compression. This isn't new to ZFS;
people (myself included) used to do this back in the MS-DOS days with
Stacker and DoubleSpace.
CPU has
On 06/05/2010 21:07, Erik Trimble wrote:
VM images contain large quantities of executable files, most of which
compress poorly, if at all.
What data are you basing that generalisation on?
note: I can't believe someone said that.
warning: I just detected a fast rise time on my pedantic
On Thu, May 6, 2010 at 2:06 AM, Richard Jahnel rich...@ellipseinc.com wrote:
I've googled this for a bit, but can't seem to find the answer.
What does compression bring to the party that dedupe doesn't cover already?
Compression will reduce the storage requirements for non-duplicate data.
As
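The point above (compression helps even where dedup cannot) can be sketched: ten distinct blocks with no duplicates among them give dedup nothing to share, yet each block still compresses. The block contents below are illustrative, not from the thread.

```python
# Ten distinct (non-duplicate) but individually compressible blocks:
# dedup finds nothing to merge, while compression still saves space.
import hashlib
import zlib

blocks = [(f"log entry {i}: status=OK " * 500).encode() for i in range(10)]

total = sum(len(b) for b in blocks)
unique_hashes = {hashlib.sha256(b).digest() for b in blocks}
compressed_total = sum(len(zlib.compress(b)) for b in blocks)

print(f"blocks: {len(blocks)}, unique after dedup: {len(unique_hashes)}")
print(f"raw: {total} bytes, compressed: {compressed_total} bytes")
```

Dedup leaves all ten blocks on disk; compression shrinks every one of them.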
This is interesting, but what about iSCSI volumes for virtual machines?
Compress or de-dupe? Assuming the virtual machine was made from a clone of the
original iSCSI or a master iSCSI volume.
Does anyone have any real world data on this? I would think the iSCSI volumes
would diverge quite a bit
On Fri, 2010-05-07 at 03:10 +0900, Michael Sullivan wrote:
This is interesting, but what about iSCSI volumes for virtual machines?
Compress or de-dupe? Assuming the virtual machine was made from a clone of
the original iSCSI or a master iSCSI volume.
Does anyone have any real world data
I've googled this for a bit, but can't seem to find the answer.
What does compression bring to the party that dedupe doesn't cover already?
Thank you for your patience and answers.
--
This message posted from opensolaris.org
Dedup came much later than compression. Also, compression saves space,
and therefore load time, even when there's only one copy. It is
especially good for e.g. HTML or man page documentation which tends to
compress very well (versus binary formats like images or MP3s that
don't).
It
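The text-versus-binary contrast above is easy to demonstrate: text-like data (HTML, man pages) compresses very well, while high-entropy data (random bytes, standing in for already-compressed images or MP3 payloads) barely compresses at all. The sample content is synthetic.

```python
# Text-like data compresses dramatically; high-entropy data does not.
# os.urandom stands in for already-compressed formats like JPEG or MP3.
import os
import zlib

text = b"<p>Example paragraph repeated across many man pages.</p>\n" * 2000
noise = os.urandom(len(text))

print("text   :", len(text), "->", len(zlib.compress(text)))
print("random :", len(noise), "->", len(zlib.compress(noise)))
```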
I've googled this for a bit, but can't seem to find
the answer.
What does compression bring to the party that dedupe
doesn't cover already?
Thank you for your patience and answers.
That almost sounds like a classroom question.
Pick a simple example: large text files, of which each is
Another thought is this: _unless_ the CPU is the bottleneck on
a particular system, compression (_when_ it actually helps) can
speed up overall operation, by reducing the amount of I/O needed.
But storing already-compressed files in a filesystem with compression
is likely to result in wasted
One of the big things to remember with dedup is that it is
block-oriented (as is compression) - it deals with things in discrete
chunks, (usually) not the entire file as a stream. So, let's do a
thought-experiment here:
File A is 100MB in size. From ZFS's standpoint, let's say it's made up
of 100
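The truncated thought-experiment above can be sketched in the same spirit: ZFS dedup keys on per-block checksums, so a file whose blocks repeat is stored as only its unique blocks. The 25-patterns-times-4 layout below is a made-up example, not the one the poster had in mind; 128 KB is the ZFS default recordsize.

```python
# Block-level dedup sketch: a "file" of 100 fixed-size blocks built
# from 25 distinct patterns, each repeated 4 times. Dedup (keyed on
# per-block checksums, as ZFS does) stores only the unique blocks.
import hashlib

BLOCK = 128 * 1024                   # ZFS default recordsize
patterns = [bytes([i]) * BLOCK for i in range(25)]
file_blocks = patterns * 4           # 100 blocks total

unique = {hashlib.sha256(b).digest() for b in file_blocks}
print(f"{len(file_blocks)} logical blocks stored as {len(unique)} after dedup")
```

Compression, by contrast, would operate on each of those 100 blocks individually, whether or not any of them repeat.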
Hmm...
To clarify.
Every discussion or benchmark that I have seen always shows both off,
compression only, or both on.
Why never compression off and dedup on?
After some further thought... perhaps it's because compression works at the
byte level and dedup is at the block level. Perhaps I
On May 5, 2010, at 8:35 PM, Richard Jahnel wrote:
Hmm...
To clarify.
Every discussion or benchmark that I have seen always shows both off,
compression only, or both on.
Why never compression off and dedup on?
I've seen this quite often. The decision to compress is based on the