On 13/12/2009 20:51, Steve Radich, BitShop, Inc. wrote:
I enabled compression on a zfs filesystem with compression=gzip-9 - i.e. fairly
slow compression - this stores backups of databases (which compress fairly
well).
The next question is: Is the CRC on the disk based on the uncompressed data
There was an announcement made in November about auto snapshots being made
obsolete in build 128, I assume major changes are afoot:
http://www.opensolaris.org/jive/thread.jspa?messageID=437516&tstart=0#437516
--
This message posted from opensolaris.org
On Mon, Dec 14, 2009 at 4:04 AM, Jens Elkner
jel+...@cs.uni-magdeburg.de wrote:
On Sat, Dec 12, 2009 at 04:23:21PM +, Andrey Kuzmin wrote:
As to whether it makes sense (as opposed to two distinct physical
devices), you would have read cache hits competing with log writes for
bandwidth. I
Thanks for the update. It's no help to you, of course, but I'm watching your
progress with interest; the updates are very much appreciated.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
On Wed, Dec 9, 2009 at 3:22 PM, Mike Johnston mijoh...@gmail.com wrote:
Thanks for the info Alexander... I will test this out. I'm just wondering
what it's going to see after I install PowerPath. Since each drive will
have 4 paths, plus the PowerPath... after doing a zfs import how will I
On Mon, 2009-12-07 at 23:31 +0100, Martijn de Munnik wrote:
On Dec 7, 2009, at 11:23 PM, Daniel Carosone wrote:
but if you attempt to add a disk to a redundant
config, you'll see an error message similar [..]
Doesn't the mismatched replication message help?
Not if you're trying to
We are also running into this bug.
Our system is a Solaris 10u4
SunOS sunsystem9 5.10 Generic_127112-10 i86pc i386 i86pc
ZFS version 4
We opened a Support Case (Case ID 71912304) which after some discussion came to
the conclusion that we should not use /etc/reboot for rebooting.
This leads me
There was an announcement made in November about auto
snapshots being made obsolete in build 128
That thread (which I know well) talks about the replacement of the
*implementation*, while retaining (the majority of) the behaviour and
configuration interface. The old implementation had
Hi,
Martin Uhl wrote:
We opened a Support Case (Case ID 71912304) which after some discussion came to the
conclusion that we should not use /etc/reboot for rebooting.
Yes. You are using /etc/reboot, which is the same as calling
/usr/sbin/halt:
% ls -l /etc/reboot
lrwxrwxrwx 1 root
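For anyone hitting the same thing, a minimal sketch of the check and the safer alternative, assuming the stock Solaris 10 layout (verify the paths on your own box):

```shell
# /etc/reboot is a symlink to halt, which does NOT run the rc shutdown
# scripts (assumption: stock Solaris 10 layout, as described above).
ls -l /etc/reboot

# Prefer a clean reboot that runs the shutdown scripts:
/usr/sbin/shutdown -y -i6 -g0    # or simply: init 6
```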
Hi, if someone running 129 could try this out: turn off compression in your
pool, mkfile 10g /pool/file123, see used space, and then remove the file and see
if it makes used space available again. I'm having trouble with this; it reminds
me of a similar bug that occurred in the 111 release.
Yours
Markus
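A sketch of the requested test sequence, assuming a pool named "tank" (placeholder name); note that freed blocks can take a few seconds to be reflected:

```shell
zfs set compression=off tank
mkfile 10g /tank/file123
zpool list tank              # note the ALLOC column
rm /tank/file123
sync; sleep 10               # freed blocks are reclaimed asynchronously
zpool list tank              # ALLOC should drop back to its earlier value
```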
Hello,
On 14 Dec 2009, at 14:16, Markus Kovero markus.kov...@nebula.fi wrote:
Hi, if someone running 129 could try this out, turn off compression
in your pool, mkfile 10g /pool/file123, see used space and then
remove the file and see if it makes used space available again. I’m
having
If you unmount a ZFS FS that has some other FSs underneath it, then the
mount points for the child FSs need to be created to have those
mounted; that way, if you don't export the pool, the dirs won't be deleted,
and next time you import the pool the FS will fail to mount because your
mount
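A sketch of the failure mode being described, with placeholder datasets tank/a and tank/a/b (the -O overlay flag is one way around it; use with care):

```shell
zfs umount tank/a/b
zfs umount tank/a
mkdir -p /tank/a/b           # child mount point recreated while tank/a is
                             # unmounted (e.g. by a mount in the wrong order)
zfs mount tank/a             # fails: mountpoint /tank/a is not empty
# Either remove the stray directory first...
rmdir /tank/a/b && zfs mount tank/a
# ...or overlay-mount on top of the non-empty directory:
zfs mount -O tank/a
```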
On Dec 13, 2009, at 11:28 PM, Yaverot wrote:
Been lurking for about a week and a half and this is my first post...
--- bfrie...@simple.dallas.tx.us wrote:
On Fri, 11 Dec 2009, Bob wrote:
Thanks. Any alternatives, other than using enterprise-level drives?
You can of course use normal
Hi,
Martin Uhl wrote:
obviously that will fail.
So AFAIK those directories will be created on mount but not removed on unmount
Good point. I was not aware of this. Will check with engineering.
The problem is not that exporting will not remove dirs (which I doubt it should) but
Hi James,
What are the commands that are used to reboot this server?
Also, you can use the fmdump -eV command to review any underlying
hardware problems. You might see some clues about what is going
on with c7t2d0.
Thanks,
Cindy
On 12/13/09 16:46, James Nelson wrote:
A majority of the time
FMA (not ZFS, directly) looks for a number of failures over a period of time.
By default that is 10 failures in 10 minutes. If you have an error that trips
on TLER, the best it can see is 2-3 failures in 10 minutes. The symptom
you will see is that when these long timeouts happen, they
How can you set up these values in FMA?
Yours
Markus Kovero
-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of R.G. Keen
Sent: 14 December 2009 20:14
To: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] hard
On Sun, Dec 13, 2009 at 11:51 PM, Steve Radich, BitShop, Inc.
ste...@bitshop.com wrote:
I enabled compression on a zfs filesystem with compression=gzip-9 - i.e.
fairly slow compression - this stores backups of databases (which compress
fairly well).
The next question is: Is the CRC on the
On Dec 14, 2009, at 10:18 AM, Markus Kovero wrote:
How you can setup these values to fma?
UTSL
http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/cmd/fm/modules/common/zfs-diagnosis/zfs_de.c#775
Standard caveats for adjusting timeouts apply.
-- richard
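For the curious, the trip logic is just an N-failures-within-a-time-window test. A toy stand-alone sketch in that spirit (not the zfs-diagnosis code itself; the names and the N=10 / T=600s defaults are illustrative):

```shell
# trips FILE [N] [T]: print "trip" if any N consecutive failure timestamps
# (one per line, in seconds, ascending) fall within a T-second window.
trips() {
  awk -v n="${2:-10}" -v t="${3:-600}" '
    { ts[NR] = $1 }
    END {
      for (i = n; i <= NR; i++)
        if (ts[i] - ts[i - n + 1] <= t) { print "trip"; exit }
      print "no-trip"
    }' "$1"
}
```

With drive timeouts capped by TLER you only accumulate 2-3 failures per 10 minutes, so a 10-in-600s threshold never trips, which matches the behaviour described above.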
On Mon, Dec 14, 2009 at 09:30:29PM +0300, Andrey Kuzmin wrote:
ZFS deduplication is block-level, so to deduplicate one needs data
broken into blocks to be written. With compression enabled, you don't
have these until data is compressed. Looks like cycles waste indeed,
but ...
ZFS compression
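To make the ordering concrete, a toy sketch (plain shell, not ZFS internals) where the dedup key is a hash of the block as it will be written, i.e. computed after compression:

```shell
# gzip -n keeps the output deterministic (no name/timestamp in the header),
# so identical payloads always yield the identical compressed block.
dedup_key() {
  gzip -n -9 -c | sha256sum | awk '{ print $1 }'
}

printf 'same payload' | dedup_key
printf 'same payload' | dedup_key   # same key both times => dedup hit
```

Two writes of the same block compress to identical bytes and hash to the same key, so the second write can be deduplicated against the first.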
On Mon, Dec 14, 2009 at 9:53 PM, casper@sun.com wrote:
On Mon, Dec 14, 2009 at 09:30:29PM +0300, Andrey Kuzmin wrote:
ZFS deduplication is block-level, so to deduplicate one needs data
broken into blocks to be written. With compression enabled, you don't
have these until data is
Hi Cesare,
According to our CR 6524163, this problem was fixed in PowerPath 5.0.2,
but then the problem reoccurred.
According to the EMC PowerPath Release notes, here:
www.emc.com/microsites/clariion-support/pdf/300-006-626.pdf
This problem is fixed in 5.2 SP1.
I would review the related
On Mon, Dec 14, 2009 at 9:32 PM, Andrey Kuzmin
andrey.v.kuz...@gmail.com wrote:
Right, but 'verify' seems to be 'extreme safety' and thus rather rare
use case.
Hmm, dunno. I wouldn't set anything but a scratch file system to
dedup=on. Anything of even slight significance is set to dedup=verify.
Sorry if you got this twice, but I never saw it appear on the alias.
OK, today I played with a J4400 connected to a Txxx server running S10
10/09.
First off: read the release notes. I spent about 4 hours pulling my hair
out as I could not get stmsboot to work until we read in the release
I am also accustomed to seeing diluted properties such as compressratio. IMHO
it could be useful (or perhaps just familiar) to see a diluted dedup ratio for
the pool, or maybe see the size / percentage of data used to arrive at
dedupratio.
As Jeff points out, there is enough data available to
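As a sketch of what such a "diluted" pool-wide number could look like: weight each dataset's dedupratio by its referenced size (the input format here, "<referenced-bytes> <dedupratio>" per line, is hypothetical):

```shell
# Size-weighted pool dedup ratio, analogous to compressratio dilution:
# allocated = referenced / ratio per dataset; pool ratio = sum(ref)/sum(alloc).
diluted_ratio() {
  awk '{ ref += $1; alloc += $1 / $2 }
       END { printf "%.2f\n", ref / alloc }'
}

# e.g. two datasets, 100 bytes each, with ratios 2.0 and 1.0:
printf '100 2.0\n100 1.0\n' | diluted_ratio     # -> 1.33
```

A dataset with a high ratio but little data barely moves the pool number, which is exactly the dilution effect described above.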
On 12/14/09, Cyril Plisko cyril.pli...@mountall.com wrote:
On Mon, Dec 14, 2009 at 9:32 PM, Andrey Kuzmin
andrey.v.kuz...@gmail.com wrote:
Right, but 'verify' seems to be 'extreme safety' and thus rather rare
use case.
Hmm, dunno. I wouldn't set anything but a scratch file system to
Is there a better solution to this problem? What if the machine crashes?
Crashes are abnormal conditions. If it crashes, you should fix the problem to
avoid future crashes, and you will probably need to clear the pool dir
hierarchy prior to importing the pool.
Are you serious? I really hope that
On Mon, Dec 14, 2009 at 3:54 PM, Craig S. Bell cb...@standard.com wrote:
I am also accustomed to seeing diluted properties such as compressratio.
IMHO it could be useful (or perhaps just familiar) to see a diluted dedup
ratio for the pool, or maybe see the size / percentage of data used to
Thanks.
I've decided now to only post when:
1) I have my zfs pool back
or
2) I give up
I should note that there are periods of time where I can ping my server
(rarely), but most of the time not. I have not been able to ssh into it, and
the console is hung (minus the little blinking cursor).
On Mon, Dec 14, 2009 at 01:29:50PM +0300, Andrey Kuzmin wrote:
On Mon, Dec 14, 2009 at 4:04 AM, Jens Elkner
jel+...@cs.uni-magdeburg.de wrote:
...
Problem is pool1 - user homes! So GNOME/firefox/eclipse/subversion/soffice
...
Flash-based read cache should help here by minimizing (metadata)
On Dec 4, 2009, at 9:33, James Risner r...@akira.stdio.com wrote:
It was created on AMD64 FreeBSD with 8.0RC2 (which was version 13 of
ZFS iirc.)
At some point I knocked it out (export) somehow, I don't remember
doing so intentionally. So I can't do commands like zpool replace
since
On Dec 15, 2009, at 5:50, Jack Kielsmeier jac...@netins.net wrote:
Thanks.
I've decided now to only post when:
1) I have my zfs pool back
or
2) I give up
I should note that there are periods of time where I can ping my
server (rarely), but most of the time not. I have not been able to
Hi,
Created a zpool with 64k recordsize and enabled dedupe on it.
zpool create -O recordsize=64k TestPool device1
zfs set dedup=on TestPool
I copied files onto this pool over nfs from a windows client.
Here is the output of zpool list
Prompt:~# zpool list
NAME SIZE ALLOC FREE