Does anyone know at what version you get a warning,
and at what version installgrub is run automatically, after upgrading
a root pool/filesystem?
--
Stuart Anderson ander...@ligo.caltech.edu
http://www.ligo.caltech.edu/~anderson
___
zfs-discuss mailing
After upgrading to zpool version 29/zfs version 5 on a S10 test system via
kernel patch 144501-19, it will now boot only as far as the grub menu.
What is a good Solaris rescue image that I can boot that will allow me to
import this rpool to look at it (given the newer version)?
How do you verify that a zfs send binary object is valid?
I tried running a truncated file through zstreamdump and it completed
with no error messages and an exit() status of 0. However, I noticed it
was missing a final print statement with a checksum value,
END checksum = ...
Is there any
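One crude check suggested by the observation above: a complete dump should end with that checksum line, so its absence can flag truncation. A minimal sketch (an assumption, not documented behavior: that zstreamdump emits "END checksum = ..." only when it reads the stream through to the end):

```shell
# Heuristic completeness check for a saved 'zfs send' stream.
# Assumption: zstreamdump prints an "END checksum = ..." line only
# for a stream it was able to read through to the end.

# Reads zstreamdump output on stdin and reports complete/truncated.
stream_status() {
    if grep -q 'END checksum'; then
        echo complete
    else
        echo truncated
    fi
}
```

Usage would be `zstreamdump < backup.zsend | stream_status`.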
On Jan 30, 2011, at 6:03 PM, Richard Elling wrote:
On Jan 30, 2011, at 5:01 PM, Stuart Anderson wrote:
On Jan 30, 2011, at 2:29 PM, Richard Elling wrote:
On Jan 30, 2011, at 12:21 PM, stuart anderson wrote:
Is it possible to partition the global setting for the maximum ARC size
On Jan 29, 2011, at 10:00 PM, Richard Elling wrote:
On Jan 29, 2011, at 5:48 PM, stuart anderson wrote:
Is there a simple way to query zfs send binary objects for basic information
such as:
1) What snapshot they represent?
2) When they were created?
3) Whether they are the result
Is it possible to partition the global setting for the maximum ARC size
with finer grained controls? Ideally, I would like to do this on a per
zvol basis but a setting per zpool would be interesting as well?
The use case is to prioritize which zvol devices should be fully cached
in DRAM on a
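For what it's worth, ZFS exposes no per-zvol or per-pool ARC quota; the closest existing knob is the per-dataset primarycache property, which limits what a dataset may cache rather than how much. A sketch (pool and zvol names are hypothetical):

```shell
# No per-zvol ARC size limit exists; primarycache controls *what*
# a dataset caches in ARC, not how much. Names below are hypothetical.

# Hot zvol: cache both data and metadata (the default):
zfs set primarycache=all tank/hotvol

# Bulk zvol: cache metadata only, leaving more ARC for the hot one:
zfs set primarycache=metadata tank/bulkvol

# Verify the settings:
zfs get primarycache tank/hotvol tank/bulkvol
```

This is prioritization by exclusion, not a hard per-device cap.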
On Jan 30, 2011, at 1:49 PM, Richard Elling wrote:
On Jan 30, 2011, at 11:19 AM, Stuart Anderson wrote:
On Jan 29, 2011, at 10:00 PM, Richard Elling wrote:
On Jan 29, 2011, at 5:48 PM, stuart anderson wrote:
Is there a simple way to query zfs send binary objects for basic
information
On Jan 30, 2011, at 2:29 PM, Richard Elling wrote:
On Jan 30, 2011, at 12:21 PM, stuart anderson wrote:
Is it possible to partition the global setting for the maximum ARC size
with finer grained controls? Ideally, I would like to do this on a per
zvol basis but a setting per zpool would
Is there a simple way to query zfs send binary objects for basic information
such as:
1) What snapshot they represent?
2) When they were created?
3) Whether they are the result of an incremental send?
4) What the baseline snapshot was, if applicable?
5) What ZFS version number they were
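Some of this is recoverable from the BEGIN record that zstreamdump prints for a stream. A sketch of pulling it out; the field names (toname, fromguid) are assumptions about the dump format, and a fromguid of 0 is taken to mean a full rather than incremental stream:

```shell
# Extract basic identity from zstreamdump output on stdin.
# Assumptions: the BEGIN record carries a "toname" line naming the
# snapshot and a "fromguid" line (0 for a full, non-incremental send).
stream_info() {
    awk '/toname/   { print "snapshot: " $NF }
         /fromguid/ { print "from-guid: " $NF }'
}
```

Usage would be `zstreamdump < backup.zsend | stream_info`.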
device for
the mirror, that's slightly smaller than the others, I have no reason to
care.
However, I believe there are some downsides to letting ZFS manage just
a slice rather than an entire drive, but perhaps those do not apply as
significantly to SSD devices?
Thanks
Edward Ned Harvey solaris2 at nedharvey.com writes:
Allow me to clarify a little further, why I care about this so much. I have
a solaris file server, with all the company jewels on it. I had a pair of
intel X.25 SSD mirrored log devices. One of them failed. The replacement
device came
to upgrade the firmware if you are going to
be running multiple X25-E drives from the same controller.
I hope that helps.
On Oct 2, 2009, at 11:54 AM, Robert Milkowski wrote:
Stuart Anderson wrote:
On Oct 2, 2009, at 5:05 AM, Robert Milkowski wrote:
Stuart Anderson wrote:
I am wondering if the following idea makes any sense as a way to get ZFS
to cache compressed data in DRAM?
In particular, given a 2
On Dec 17, 2009, at 9:21 PM, Richard Elling wrote:
On Dec 17, 2009, at 9:04 PM, stuart anderson wrote:
As a specific example of 2 devices with dramatically different performance
for sub-4k transfers, has anyone done any ZFS benchmarks between the X25-E
and the F20 that they can share?
I am
On Wed, Dec 16 at 7:35, Bill Sprouse wrote:
The question behind the question is: given the really bad things that can
happen performance-wise with writes that are not 4k aligned when using
flash devices, is there any way to ensure that any and all writes from
ZFS are 4k aligned?
only a small percentage.
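Short of controlling ZFS internals, alignment of sizes and offsets is easy to check arithmetically. A toy helper (pure shell, nothing ZFS-specific):

```shell
# Report whether a byte offset or I/O size is 4 KiB aligned.
aligned_4k() {
    if [ $(( $1 % 4096 )) -eq 0 ]; then
        echo aligned
    else
        echo unaligned
    fi
}
```

On the ZFS side, the closest create-time knob for zvols is volblocksize; keeping it at a multiple of 4k (e.g. `zfs create -V 10g -o volblocksize=4k tank/vol`, names hypothetical) keeps zvol writes in 4k units.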
Sparse-ness is not a factor here. Sparse just means we ignore the
reservation so you can create a zvol bigger than what we'd normally
allow.
Cindy
On 10/17/09 13:47, Stuart Anderson wrote:
What does it mean for the reported value of a zvol volsize to be
less than
) * compressratio (11.20) = 166907917926
which is 3.6% larger than volsize.
Is this a bug or a feature for sparse volumes? If a feature, how
much larger than volsize/compressratio can the actual used
storage space grow? e.g., fixed amount overhead and/or
fixed percentage?
Thanks.
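The overhead being asked about can be computed directly from the three numbers involved. A sketch (awk arithmetic; the inputs would come from `zfs get used,volsize,compressratio`):

```shell
# Percent by which logical data (used * compressratio) exceeds volsize.
# $1 = used bytes, $2 = compressratio, $3 = volsize bytes
overhead_pct() {
    awk -v u="$1" -v r="$2" -v v="$3" \
        'BEGIN { printf "%.1f\n", (u * r - v) / v * 100 }'
}
```

For example, `overhead_pct 100 2 190` prints 5.3.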
On Oct 2, 2009, at 5:05 AM, Robert Milkowski wrote:
Stuart Anderson wrote:
I am wondering if the following idea makes any sense as a way to
get ZFS to cache compressed data in DRAM?
In particular, given a 2-way zvol mirror of highly compressible
data on persistent storage devices, what
but
unavailable?
Note, this Gedanken experiment is for highly compressible (~9x)
metadata for a non-ZFS filesystem.
Thanks.
:31 PM, Stuart Anderson wrote:
This is S10U7 fully patched and not open solaris, but I would
appreciate any
advice on the following transient "Permanent error" message generated
while running a zpool scrub.
Question :
Is there a way to change the volume blocksize say
via 'zfs snapshot send/receive'?
As I see things, this isn't possible as the target
volume (including property values) gets overwritten
by 'zfs receive'.
By default, properties are not received. To pass
properties,
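To make the quoted point concrete: by default a stream carries no properties, and the -p flag to zfs send (where available) changes that. A sketch (dataset names hypothetical):

```shell
# Default: no properties travel with the stream; the received dataset
# picks up local/inherited defaults on the target (names hypothetical).
zfs send tank/vol@snap | zfs receive backup/vol

# With -p, the sent dataset's properties are included in the stream:
zfs send -p tank/vol@snap | zfs receive backup/vol
```

Note that volblocksize is fixed at zvol creation, so even with -p it cannot be rewritten on an existing volume; this is the limitation the question runs into.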
0
errors: No known data errors
Thanks.
0
c3t0d0 ONLINE 0 0 0
spares
c6t0d0    INUSE     currently in use
errors: No known data errors
On Jun 21, 2009, at 10:21 PM, Nicholas Lee wrote:
On Mon, Jun 22, 2009 at 4:24 PM, Stuart Anderson ander...@ligo.caltech.edu
wrote:
However, it is a bit disconcerting to have to run with reduced data
protection for an entire week. While I am certainly not going back to
UFS, it seems like
, e.g.,
adding a faster cache device for reading and/or writing?
I am also curious if anyone has a prediction on when the
snapshot-restarting-resilvering bug will be patched in Solaris 10?
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6343667
Thanks.
On Jun 21, 2009, at 8:57 PM, Richard Elling wrote:
Stuart Anderson wrote:
It is currently taking ~1 week to resilver an x4500 running S10U6,
recently patched, with ~170M small files on ~170 datasets after a
disk failure/replacement, i.e.,
wow, that is impressive. There is zero chance
aggregated filesystem metadata via /bin/df or zfs list and the
compressratio.
Thanks.
On Wed, Apr 16, 2008 at 10:09:00AM -0700, Richard Elling wrote:
Stuart Anderson wrote:
On Tue, Apr 15, 2008 at 03:51:17PM -0700, Richard Elling wrote:
UTSL. compressratio is the ratio of uncompressed bytes to compressed
bytes.
http://cvs.opensolaris.org/source/search?q
/compressratio in the context of compression={on,off}, possibly
also referring to both sparse and non-sparse files?
Thanks.
specifically for this possibility.
On Mon, Apr 14, 2008 at 09:59:48AM -0400, Luke Scharf wrote:
Stuart Anderson wrote:
As an artificial test, I created a filesystem with compression enabled
and ran mkfile 1g and the reported compressratio for that filesystem
is 1.00x even though this 1GB file uses only 1kB.
ZFS
On Mon, Apr 14, 2008 at 05:22:03PM -0400, Luke Scharf wrote:
Stuart Anderson wrote:
On Mon, Apr 14, 2008 at 09:59:48AM -0400, Luke Scharf wrote:
Stuart Anderson wrote:
As an artificial test, I created a filesystem with compression enabled
and ran mkfile 1g and the reported
in understanding what compressratio means.
Thanks.
if the scrub
completion event was also logged in the zpool history along with the
initiation event.
Thanks.
On Thu, Mar 06, 2008 at 11:51:00AM -0800, Stuart Anderson wrote:
I currently have an X4500 running S10U4 with the latest ZFS uber patch
(127729-07) for which zpool scrub is making very slow progress even
though the necessary resources are apparently available. Currently it has
It is also
, somehow accelerated your
Thumper to near the speed of light.
(:-)
If true, that would certainly help, since we actually are using these
thumpers to help detect gravitational waves! See, http://www.ligo.caltech.edu.
Thanks.
Anderson wrote:
Thanks for the information.
How does the temporary patch 127729-07 relate to the IDR127787 (x86) which
I believe also claims to fix this panic?
thumper1 genunix: [ID 655072 kern.notice] fe8000809c60
genunix:taskq_thread+bc ()
Feb 18 17:55:18 thumper1 genunix: [ID 655072 kern.notice] fe8000809c70
unix:thread_start+8 ()
Feb 18 17:55:18 thumper1 unix: [ID 10 kern.notice]
On Mon, Feb 18, 2008 at 06:28:31PM -0800, Stuart Anderson wrote:
Is this kernel panic a known ZFS bug, or should I open a new ticket?
Feb 18 17:55:18 thumper1 genunix: [ID 403854 kern.notice] assertion failed:
arc_buf_remove_ref(db->db_buf, db) == 0, file: ../../common/fs/zfs/dbuf.c,
line
for
this panic is in temporary state and will be released via SunSolve soon.
Please contact your support channel to get these patches.
--
Prabahar.
Stuart Anderson wrote:
On Mon, Feb 18, 2008 at 06:28:31PM -0800, Stuart Anderson wrote:
Is this kernel panic a known ZFS bug, or should I open
and has not displayed
any disconnected messages since then.
Can anyone confirm that 125205-07 has solved these NCQ problems?
Thanks.
On Wed, Oct 24, 2007 at 10:40:41AM -0700, David Bustos wrote:
Quoth Stuart Anderson on Sun, Oct 21, 2007 at 07:09:10PM -0700:
Running 102 parallel zfs destroy -r commands on an X4500 running S10U4 has
resulted in No more processes errors in existing login shells for several
minutes of time
590 4552K 1492K sleep   0:26 0.00% zfs
On Mon, Jul 16, 2007 at 09:36:06PM -0700, Stuart Anderson wrote:
Running Solaris 10 Update 3 on an X4500 I have found that it is possible
to reproducibly block all writes to a ZFS pool by running chgrp -R
on any large filesystem in that pool. As can be seen below in the zpool
iostat output
() = 1189279453
/13:time() = 1189279453
Is this a known bug with fmd and ZFS?
Thanks.
On Fri, Sep 07, 2007 at 08:55:52PM -0700, Stuart Anderson wrote:
I am curious why zpool status reports a pool to be in the DEGRADED
c8t1d0 INUSE currently in use
errors: No known data errors
/d3, offset 1645084672, content: kernel
Dump device: /dev/md/dsk/d2 (swap)
Savecore directory: /var/crash/x4500gc
Savecore enabled: yes
# ls -laR /var/crash/x4500gc/
/var/crash/x4500gc/:
total 2
drwx------   2 root root 512 Jul 12 16:26 .
drwxr-xr-x 3 root root 512 Jul 12 16:26 ..
Thanks.
x4500gc genunix: [ID 943907 kern.notice] Copyright 1983-2007
Sun Microsystems, Inc. All rights reserved.
On Tue, Jul 17, 2007 at 12:40:16PM -0700, Stuart Anderson wrote:
On Tue, Jul 17, 2007 at 03:08:44PM +1000, James C. McPherson wrote:
Log a new case with Sun, and make sure you supply
or should I open a new case with Sun?
Thanks.
On Tue, Jul 17, 2007 at 02:49:08PM +1000, James C. McPherson wrote:
Stuart Anderson wrote:
Running Solaris 10 Update 3 on an X4500 I have found that it is possible
to reproducibly block all writes to a ZFS pool by running chgrp -R
on any large filesystem in that pool. As can be seen below