that I needed. Thanks to all that took the time to reply.
-Matt Breitbach
-Original Message-
From: Donal Farrell [mailto:vmlinuz...@gmail.com]
Sent: Wednesday, November 23, 2011 10:42 AM
To: Matt Breitbach
Subject: Re: [zfs-discuss] Compression
Is this on ESX 3.5.x, or 4.x or greater?
Currently using NFS to access the datastore.
-Matt
-Original Message-
From: Richard Elling [mailto:richard.ell...@gmail.com]
Sent: Tuesday, November 22, 2011 11:10 PM
To: Matt Breitbach
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Compression
Hi Matt,
On Nov 22, 2011
So I'm looking at files on my ZFS volume that are compressed, and I'm
wondering to myself, self, are the values shown here the size on disk, or
are they the pre-compressed values. Google gives me no great results on
the first few pages, so I headed here.
This really relates to my VMware
2011-11-23 7:39, Matt Breitbach wrote:
So I'm looking at files on my ZFS volume that are compressed, and I'm
wondering to myself, self, are the values shown here the size on disk, or
are they the pre-compressed values. Google gives me no great results on
the first few pages, so I headed here.
On 11/23/11 04:58 PM, Jim Klimov wrote:
2011-11-23 7:39, Matt Breitbach wrote:
So I'm looking at files on my ZFS volume that are compressed, and I'm
wondering to myself, self, are the values shown here the size on disk, or
are they the pre-compressed values. Google gives me no great results on
2011-11-23 8:21, Ian Collins wrote:
If you use du on the ZFS filesystem, you'll see the logical
storage size, which takes into account compression and sparse
bytes. So the du size should not be greater than the ls size.
It can be significantly bigger:
ls -sh x
2 x
du -sh x
1K x
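The apparent-size vs allocated-size split in the example above can be reproduced without ZFS: on most filesystems `ls -l` reports the logical size while `du` reports allocated blocks, so a sparse (or compressed) file shows two different numbers. A minimal sketch using a sparse file (assumes GNU coreutils `stat`; the /tmp path is illustrative):

```shell
# Create a 1 MiB file with only one byte actually written (sparse).
dd if=/dev/zero of=/tmp/sparse.dat bs=1 count=1 seek=1048575 2>/dev/null

apparent=$(stat -c %s /tmp/sparse.dat)                # logical size, what ls -l shows
allocated=$(( $(stat -c %b /tmp/sparse.dat) * 512 ))  # allocated blocks, what du counts

echo "apparent=$apparent allocated=$allocated"
```

On a filesystem that supports sparse files, `allocated` comes out far smaller than `apparent`.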
Pun accepted ;)
Hi Matt,
On Nov 22, 2011, at 7:39 PM, Matt Breitbach wrote:
So I'm looking at files on my ZFS volume that are compressed, and I'm
wondering to myself, self, are the values shown here the size on disk, or
are they the pre-compressed values. Google gives me no great results on
the first few
On Wed, 15 Sep 2010, Brandon High wrote:
When using compression, are the on-disk record sizes determined before
or after compression is applied? In other words, if record size is set
to 128k, is that the amount of data fed into the compression engine,
or is the output size trimmed to fit? I
When using compression, are the on-disk record sizes determined before
or after compression is applied? In other words, if record size is set
to 128k, is that the amount of data fed into the compression engine,
or is the output size trimmed to fit? I think it's the former, but I'm
not certain.
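The input side of that question can be sketched outside ZFS: a full 128 KiB logical record is what gets handed to the compressor, and the (smaller) output is what gets allocated, rounded up to whole sectors. A rough illustration, with gzip standing in for LZJB and a 512-byte sector assumed:

```shell
# Build one 128 KiB "record" of compressible data.
yes "some repetitive database row" | head -c 131072 > /tmp/record.dat

in_size=$(wc -c < /tmp/record.dat)
out_size=$(gzip -c /tmp/record.dat | wc -c)
# ZFS would round the compressed output up to a whole number of sectors.
alloc=$(( (out_size + 511) / 512 * 512 ))

echo "input=$in_size compressed=$out_size allocated=$alloc"
```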
On Wed, Apr 7, 2010 at 10:47 AM, Daniel Bakken
dan...@economicmodeling.com wrote:
When I send a filesystem with compression=gzip to another server with
compression=on, compression=gzip is not set on the received filesystem. I am
using:
Is compression set on the dataset, or is it being
Hi Daniel,
D'oh...
I found a related bug when I looked at this yesterday but I didn't think
it was your problem because you didn't get a busy message.
See this RFE:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6700597
Cindy
On 04/07/10 17:59, Daniel Bakken wrote:
We have found
On 08 April, 2010 - Cindy Swearingen sent me these 2,6K bytes:
Hi Daniel,
D'oh...
I found a related bug when I looked at this yesterday but I didn't think
it was your problem because you didn't get a busy message.
See this RFE:
Daniel,
Which Solaris release is this?
I can't reproduce this on my lab system that runs the Solaris 10 10/09
release.
See the output below.
Thanks,
Cindy
# zfs destroy -r tank/test
# zfs create -o compression=gzip tank/test
# zfs snapshot tank/t...@now
# zfs send -R tank/t...@now | zfs
I worked around the problem by first creating a filesystem of the same name
with compression=gzip on the target server. Like this:
zfs create sas/archive
zfs set compression=gzip sas/archive
Then I used zfs receive with the -F option:
zfs send -vR promise1/arch...@daily.1 | zfs receive -F -d sas
The receive side is running build 111b (2009.06), so I'm not sure if your
advice actually applies to my situation.
Daniel Bakken
On Tue, Apr 6, 2010 at 10:57 PM, Tom Erickson thomas.erick...@oracle.com wrote:
After build 128, locally set properties override received properties, and
this
Here is the info from zstreamdump -v on the sending side:
BEGIN record
hdrtype = 2
features = 0
magic = 2f5bacbac
creation_time = 0
type = 0
flags = 0x0
toguid = 0
fromguid = 0
toname = promise1/arch...@daily.1
nvlist
We have found the problem. The mountpoint property on the sender was at one
time changed from the default, then later changed back to defaults using zfs
set instead of zfs inherit. Therefore, zfs send included these local
non-default properties in the stream, even though the local properties are
Daniel Bakken wrote:
When I send a filesystem with compression=gzip to another server with
compression=on, compression=gzip is not set on the received filesystem.
I am using:
zfs send -R promise1/arch...@daily.1 | zfs receive -vd sas
The zfs manpage says regarding the -R flag: When received,
Daniel Bakken wrote:
The receive side is running build 111b (2009.06), so I'm not sure if
your advice actually applies to my situation.
The advice regarding received vs local properties definitely does not
apply. You could still confirm the presence of the compression property
in the send
Daniel Bakken wrote:
Here is the info from zstreamdump -v on the sending side:
BEGIN record
hdrtype = 2
features = 0
magic = 2f5bacbac
creation_time = 0
type = 0
flags = 0x0
toguid = 0
fromguid = 0
toname =
Daniel Bakken wrote:
We have found the problem. The mountpoint property on the sender was at
one time changed from the default, then later changed back to defaults
using zfs set instead of zfs inherit. Therefore, zfs send included these
local non-default properties in the stream, even though
With the default compression scheme (LZJB), how does one calculate the ratio
or amount compressed ahead of time when allocating storage?
--
This message posted from opensolaris.org
zfs-discuss mailing list
zfs-discuss@opensolaris.org
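There is no exact answer ahead of time, since LZJB's ratio depends entirely on the data, but a common approach is to compress a representative sample with a fast general-purpose compressor and treat the result as a rough estimate (gzip typically compresses somewhat better than LZJB, so the number is optimistic). A sketch, with a synthetic sample file:

```shell
# Estimate a compression ratio from a sample (here, synthetic repetitive text).
yes "typical log line with some repeated structure" | head -c 1048576 > /tmp/sample.dat

orig=$(wc -c < /tmp/sample.dat)
comp=$(gzip -1 -c /tmp/sample.dat | wc -c)   # -1: fast, closer to LZJB's behavior

# Integer ratio x100, to avoid needing bc.
ratio=$(( orig * 100 / comp ))
printf 'estimated ratio: %d.%02dx\n' $((ratio / 100)) $((ratio % 100))
```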
On Wed, 2009-06-17 at 12:35 +0200, casper@sun.com wrote:
I still use disk swap because I have some bad experiences
with ZFS swap. (ZFS appears to cache and that is very wrong)
I'm experimenting with running zfs swap with the primarycache attribute
set to metadata instead of the default
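That experiment would look something like this on Solaris (a sketch only; the 4G size and the rpool name are placeholders, and whether it actually helps was the open question in this thread):

```shell
# Sketch: a swap zvol whose ARC caching is limited to metadata,
# so swapped-out pages are not cached twice (once in RAM, once in the ARC).
zfs create -V 4G -b $(pagesize) rpool/swap
zfs set primarycache=metadata rpool/swap
swap -a /dev/zvol/dsk/rpool/swap
```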
Bill Sommerfeld wrote:
On Wed, 2009-06-17 at 12:35 +0200, casper@sun.com wrote:
I still use disk swap because I have some bad experiences
with ZFS swap. (ZFS appears to cache and that is very wrong)
I'm experimenting with running zfs swap with the primarycache attribute
set to metadata
Bob Friesenhahn wrote:
On Wed, 17 Jun 2009, Haudy Kazemi wrote:
usable with very little CPU consumed.
If the system is dedicated to serving files rather than also being
used interactively, it should not matter much what the CPU usage is.
CPU cycles can't be stored for later use. Ultimately,
On Thu, 18 Jun 2009, Haudy Kazemi wrote:
for text data, LZJB compression had negligible performance benefits (task
times were unchanged or marginally better) and less storage space was
consumed (1.47:1).
for media data, LZJB compression had negligible performance benefits (task
times were
Hello Richard,
Monish Shah wrote:
What about when the compression is performed in dedicated hardware?
Shouldn't compression be on by default in that case? How do I put in an
RFE for that?
Is there a bugs.intel.com? :-)
I may have misled you. I'm not asking for Intel to add hardware
David Magda dma...@ee.ryerson.ca writes:
On Tue, June 16, 2009 15:32, Kyle McDonald wrote:
So the cache saves not only the time to access the disk but also
the CPU time to decompress. Given this, I think it could be a big
win.
Unless you're in GIMP working on JPEGs, or doing some kind of
On Wed, Jun 17, 2009 at 5:03 PM, Kjetil Torgrim Homme kjeti...@linpro.no wrote:
indeed. I think only programmers will see any substantial benefit
from compression, since both the code itself and the object files
generated are easily compressible.
Perhaps compressing /usr could be handy, but
On Wed, Jun 17, 2009 at 5:03 PM, Kjetil Torgrim Homme kjeti...@linpro.no wrote:
indeed. I think only programmers will see any substantial benefit
from compression, since both the code itself and the object files
generated are easily compressible.
Perhaps compressing /usr could be
Fajar A. Nugraha fa...@fajar.net writes:
Kjetil Torgrim Homme wrote:
indeed. I think only programmers will see any substantial benefit
from compression, since both the code itself and the object files
generated are easily compressible.
Perhaps compressing /usr could be handy, but why
Unless you're in GIMP working on JPEGs, or doing some kind of MPEG
video editing--or ripping audio (MP3 / AAC / FLAC) stuff. All of
which are probably some of the largest files in most people's
homedirs nowadays.
indeed. I think only programmers will see any substantial benefit
from
Monish Shah mon...@indranetworks.com writes:
I'd be interested to see benchmarks on MySQL/PostgreSQL performance
with compression enabled. my *guess* would be it isn't beneficial
since they usually do small reads and writes, and there is little
gain in reading 4 KiB instead of 8 KiB.
OK,
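The small-I/O intuition in the quoted guess can be made concrete: with an 8 KiB database page and 4 KiB device sectors, compression only pays off if the page shrinks below 4 KiB, so a moderate reduction buys nothing on disk. A sketch (gzip standing in for LZJB; the data is synthetic and only moderately compressible):

```shell
# An 8 KiB "database page" of moderately compressible data
# (base64 of random bytes compresses to roughly 75%).
head -c 8192 /dev/urandom | base64 | head -c 8192 > /tmp/page.dat

page=$(wc -c < /tmp/page.dat)
comp=$(gzip -c /tmp/page.dat | wc -c)
alloc=$(( (comp + 4095) / 4096 * 4096 ))   # round up to 4 KiB sectors

echo "page=$page compressed=$comp allocated=$alloc"
```

Here the page compresses, but not past the 4 KiB boundary, so the allocated size is unchanged.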
On Wed, June 17, 2009 06:15, Fajar A. Nugraha wrote:
Perhaps compressing /usr could be handy, but why bother enabling
compression if the majority (by volume) of user data won't do
anything but burn CPU?
How do you define substantial? My opensolaris snv_111b installation
has 1.47x
Bob Friesenhahn wrote:
On Mon, 15 Jun 2009, Bob Friesenhahn wrote:
On Mon, 15 Jun 2009, Rich Teer wrote:
You actually have that backwards. :-) In most cases, compression is
very
desirable. Performance studies have shown that today's CPUs can
compress
data faster
David Magda wrote:
On Tue, June 16, 2009 15:32, Kyle McDonald wrote:
So the cache saves not only the time to access the disk but also the CPU
time to decompress. Given this, I think it could be a big win.
Unless you're in GIMP working on JPEGs, or doing some kind of MPEG video
On Wed, 17 Jun 2009, Haudy Kazemi wrote:
usable with very little CPU consumed.
If the system is dedicated to serving files rather than also being used
interactively, it should not matter much what the CPU usage is. CPU cycles
can't be stored for later use. Ultimately, it (mostly*) does not
Hello,
I would like to add one more point to this.
Everyone seems to agree that compression is useful for reducing load on the
disks and the disagreement is about the impact on CPU utilization, right?
What about when the compression is performed in dedicated hardware?
Shouldn't compression
On Mon, 15 Jun 2009, Bob Friesenhahn wrote:
On Mon, 15 Jun 2009, Thommy M. wrote:
In most cases compression is not desirable. It consumes CPU and
results in uneven system performance.
IIRC there was a blog about I/O performance with ZFS stating that it was
faster with compression ON as
On Mon, 15 Jun 2009, Bob Friesenhahn wrote:
On Mon, 15 Jun 2009, Rich Teer wrote:
You actually have that backwards. :-) In most cases, compression is very
desirable. Performance studies have shown that today's CPUs can compress
data faster than it takes for the uncompressed data to be read
Bob Friesenhahn wrote:
On Mon, 15 Jun 2009, Thommy M. wrote:
In most cases compression is not desirable. It consumes CPU and
results in uneven system performance.
IIRC there was a blog about I/O performance with ZFS stating that it was
faster with compression ON as it didn't have to wait
Kyle McDonald wrote:
Bob Friesenhahn wrote:
On Mon, 15 Jun 2009, Thommy M. wrote:
In most cases compression is not desirable. It consumes CPU and
results in uneven system performance.
IIRC there was a blog about I/O performance with ZFS stating that it was
faster with compression ON as it
Monish Shah wrote:
Hello,
I would like to add one more point to this.
Everyone seems to agree that compression is useful for reducing load
on the disks and the disagreement is about the impact on CPU
utilization, right?
What about when the compression is performed in dedicated hardware?
Darren J Moffat wrote:
Kyle McDonald wrote:
Bob Friesenhahn wrote:
On Mon, 15 Jun 2009, Thommy M. wrote:
In most cases compression is not desirable. It consumes CPU and
results in uneven system performance.
IIRC there was a blog about I/O performance with ZFS stating that
it was
faster
On Tue, June 16, 2009 15:32, Kyle McDonald wrote:
So the cache saves not only the time to access the disk but also the CPU
time to decompress. Given this, I think it could be a big win.
Unless you're in GIMP working on JPEGs, or doing some kind of MPEG video
editing--or ripping audio (MP3 /
Hi,
I just installed 2009.06 and found that compression isn't enabled by default
when filesystems are created. Does it make sense to have an RFE open for this?
(I'll open one tonight if need be.) We keep telling people to turn on
compression. Are there any situations where turning on
Bob Friesenhahn wrote:
On Mon, 15 Jun 2009, Shannon Fiume wrote:
I just installed 2009.06 and found that compression isn't enabled by
default when filesystems are created. Does it make sense to have an
RFE open for this? (I'll open one tonight if need be.) We keep telling
people to turn on
* Shannon Fiume (shannon.fi...@sun.com) wrote:
Hi,
I just installed 2009.06 and found that compression isn't enabled by
default when filesystems are created. Does it make sense to have an
RFE open for this? (I'll open one tonight if need be.) We keep telling
people to turn on compression.
On Mon, 15 Jun 2009 22:51:12 +0200
Thommy M. thommy.m.malmst...@gmail.com wrote:
IIRC there was a blog about I/O performance with ZFS stating that it
was faster with compression ON as it didn't have to wait for so much
data from the disks and that the CPU was fast at unpacking data. But
sure,
On Mon, 15 Jun 2009, dick hoogendijk wrote:
IF at all, it certainly should not be the DEFAULT.
Compression is a choice, nothing more.
I respectfully disagree somewhat. Yes, compression should be a
choice, but I think the default should be for it to be enabled.
--
Rich Teer, SCSA, SCNA,
On Mon, 15 Jun 2009, Thommy M. wrote:
In most cases compression is not desirable. It consumes CPU and
results in uneven system performance.
IIRC there was a blog about I/O performance with ZFS stating that it was
faster with compression ON as it didn't have to wait for so much data
from the
On Mon, 15 Jun 2009, Bob Friesenhahn wrote:
In most cases compression is not desirable. It consumes CPU and results in
uneven system performance.
You actually have that backwards. :-) In most cases, compression is very
desirable. Performance studies have shown that today's CPUs can
On Mon, 15 Jun 2009, dick hoogendijk wrote:
IF at all, it certainly should not be the DEFAULT.
Compression is a choice, nothing more.
I respectfully disagree somewhat. Yes, compression should be a
choice, but I think the default should be for it to be enabled.
I agree that Compression
On Mon, 15 Jun 2009, Rich Teer wrote:
You actually have that backwards. :-) In most cases, compression is very
desirable. Performance studies have shown that today's CPUs can compress
data faster than it takes for the uncompressed data to be read or written.
Do you have a reference for
Richard Elling wrote:
Miles Nordin wrote:
AIUI the later BE's are clones of the first, and not all blocks
will be rewritten, so it's still an issue. no?
In practice, yes, they are clones. But whether it is an issue
depends on what the issue is. As I see it, the issue is that
someone wants
Carson Gaspar wrote:
Richard Elling wrote:
Miles Nordin wrote:
AIUI the later BE's are clones of the first, and not all blocks
will be rewritten, so it's still an issue. no?
In practice, yes, they are clones. But whether it is an issue
depends on what the issue is. As I see it, the
I'll call bull* on that. Microsoft has an admirably simple installation
and 88% of the market. Apple has another admirably simple installation
and 10% of the market. Solaris has less than 1% of the market and has
had a very complex installation process. You can't win that battle by
increasing
On Tue, May 5, 2009 at 6:09 PM, Ellis, Mike mike.el...@fmr.com wrote:
PS: At one point the old JumpStart code was encumbered, and the
community wasn't able to assist. I haven't looked at the next-gen
jumpstart framework that was delivered as part of the OpenSolaris SPARC
preview. Can anyone
JumpStart framework?
-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Torrey McMahon
Sent: Tuesday, May 05, 2009 6:38 PM
To: zfs-discuss@opensolaris.org
Subject: [zfs-discuss] Compression/copies on root pool RFE
Before I put
On Wed, 6 May 2009, Richard Elling wrote:
popular interactive installers much more simplified. I agree that
interactive installation needs to remain as simple as possible.
How about offering a choice at installation time: Custom or default?
Those that don't want/need the interactive
This sounds like a good idea to me, but it should be brought up
on the caiman-disc...@opensolaris.org mailing list, since this
is not just, or even primarily, a zfs issue.
Lori
Rich Teer wrote:
On Wed, 6 May 2009, Richard Elling wrote:
popular interactive installers much more simplified.
On Wed, May 6, 2009 at 11:14 AM, Rich Teer rich.t...@rite-group.com wrote:
On Wed, 6 May 2009, Richard Elling wrote:
popular interactive installers much more simplified. I agree that
interactive installation needs to remain as simple as possible.
How about offering a choice at installation
re == Richard Elling richard.ell...@gmail.com writes:
re Note: in the Caiman world, this is only an issue for the first
re BE. Later BEs can easily have other policies. -- richard
AIUI the later BE's are clones of the first, and not all blocks will
be rewritten, so it's still an
On Wed, May 6, 2009 at 2:54 AM, casper@sun.com wrote:
On Tue, May 5, 2009 at 6:09 PM, Ellis, Mike mike.el...@fmr.com wrote:
PS: At one point the old JumpStart code was encumbered, and the
community wasn't able to assist. I haven't looked at the next-gen
jumpstart framework that was
Miles Nordin wrote:
re == Richard Elling richard.ell...@gmail.com writes:
re Note: in the Caiman world, this is only an issue for the first
re BE. Later BEs can easily have other policies. -- richard
AIUI the later BE's are clones of the first, and not all blocks will
Before I put one in ... anyone else seen one? Seems we support
compression on the root pool but there is no way to enable it at install
time outside of a custom script you run before the installer. I'm
thinking it should be a real install time option, have a jumpstart
keyword, etc. Same with
?
-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Torrey McMahon
Sent: Tuesday, May 05, 2009 6:38 PM
To: zfs-discuss@opensolaris.org
Subject: [zfs-discuss] Compression/copies on root pool RFE
Before I put one in ... anyone else
On Tue, May 5, 2009 at 6:09 PM, Ellis, Mike mike.el...@fmr.com wrote:
PS: At one point the old JumpStart code was encumbered, and the
community wasn't able to assist. I haven't looked at the next-gen
jumpstart framework that was delivered as part of the OpenSolaris SPARC
preview. Can anyone
Hello Krzys,
Wednesday, November 5, 2008, 5:41:16 AM, you wrote:
K compression is not supported for rootpool?
K # zpool create rootpool c1t1d0s0
K # zfs set compression=gzip-9 rootpool
K # lucreate -c ufsBE -n zfsBE -p rootpool
K Analyzing system configuration.
K ERROR: ZFS pool rootpool does
compression is not supported for rootpool?
# zpool create rootpool c1t1d0s0
# zfs set compression=gzip-9 rootpool
# lucreate -c ufsBE -n zfsBE -p rootpool
Analyzing system configuration.
ERROR: ZFS pool rootpool does not support boot environments
#
why? are there any plans to have compression on
Krzys wrote:
compression is not supported for rootpool?
# zpool create rootpool c1t1d0s0
# zfs set compression=gzip-9 rootpool
I think gzip compression is not supported on zfs root. Try compression=on.
Regards,
Fajar
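Fajar's suggestion as a sketch, following Krzys' pool layout (whether lucreate then succeeds was not confirmed in this thread):

```shell
# gzip levels are not accepted on a root pool; the default lzjb ("on") is.
zpool create rootpool c1t1d0s0
zfs set compression=on rootpool
lucreate -c ufsBE -n zfsBE -p rootpool
```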
On 11/09/2007, Mike DeMarco [EMAIL PROTECTED] wrote:
I've got 12Gb or so of db+web in a zone on a ZFS filesystem on a mirrored zpool.
Noticed during some performance testing today that it's I/O bound but using hardly any CPU, so I thought turning on compression would be a
On 9/12/07, Mike DeMarco [EMAIL PROTECTED] wrote:
Striping several disks together with a stripe width that is tuned for your
data
model is how you could get your performance up. Striping has been left out
of the ZFS model for some reason. While it is true that RAIDZ will stripe
the data
Mike DeMarco wrote:
IO bottle necks are usually caused by a slow disk or one that has heavy
workloads reading many small files. Two factors that need to be considered
are Head seek latency and spin latency. Head seek latency is the amount
of time it takes for the head to move to the track
On 9/12/07, Mike DeMarco [EMAIL PROTECTED] wrote:
Striping several disks together with a stripe width that is tuned for your data
model is how you could get your performance up. Striping has been left out
of the ZFS model for some reason. While it is true that RAIDZ will stripe
the
I've got 12Gb or so of db+web in a zone on a ZFS filesystem on a mirrored zpool.
Noticed during some performance testing today that it's I/O bound but using hardly any CPU, so I thought turning on compression would be a quick win.
I know I'll have to copy files for existing data to be compressed,
On 9/11/07, Dick Davies [EMAIL PROTECTED] wrote:
I've got 12Gb or so of db+web in a zone on a ZFS filesystem on a mirrored
zpool.
Noticed during some performance testing today that it's I/O bound but using hardly any CPU, so I thought turning on compression would be a quick win.
I know I'll
I've got 12Gb or so of db+web in a zone on a ZFS filesystem on a mirrored zpool.
Noticed during some performance testing today that it's I/O bound but using hardly any CPU, so I thought turning on compression would be a quick win.
If it is I/O bound, won't compression make it worse?
I
On 11/09/2007, Mike DeMarco [EMAIL PROTECTED] wrote:
I've got 12Gb or so of db+web in a zone on a ZFS filesystem on a mirrored zpool.
Noticed during some performance testing today that it's I/O bound but using hardly any CPU, so I thought turning on compression would be a quick win.