On 9/13/06, Matthew Ahrens [EMAIL PROTECTED] wrote:
Sure, if you want *everything* in your pool to be mirrored, there is
no real need for this feature (you could argue that setting up the
pool would be easier if you didn't have to slice up the disk, though).
Not necessarily.
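(For reference, the all-mirrored setup Matthew describes is a one-liner
at pool creation time; the device names below are hypothetical:)

  # zpool create mypool mirror c0t0d0 c0t1d0

Whole disks can be handed to zpool directly; slicing only becomes
necessary when you want different redundancy levels out of the same
spindles.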
On Thu, Sep 14, 2006 at 05:08:18PM -0500, Nicolas Williams wrote:
On Thu, Sep 14, 2006 at 10:32:59PM +0200, Henk Langeveld wrote:
Bady, Brant RBCM:EX wrote:
Part of the archiving process is to generate checksums (I happen to
use MD5) and store them with other metadata about the digital
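(For what it's worth, Solaris can generate those MD5 hashes with
digest(1); the path below is made up:)

  # digest -v -a md5 /archive/objects/item0001.tif

The -v flag prints the file name next to the hash, which is convenient
when storing it alongside the rest of the metadata.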
Yup, it's almost certain that this is the bug you are hitting.
-Mark
Alan Hargreaves wrote:
I know, bad form replying to myself, but I am wondering if it might be
related to
6438702 error handling in zfs_getpage() can trigger page not locked
which is marked "fix in progress" with a
Hi forum,
I'm currently playing around a little with ZFS on my workstation.
I created a standard mirrored pool over 2 disk-slices.
# zpool status
  pool: mypool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        mypool      ONLINE
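(Since the output shows no scrub has ever run, it is worth kicking one
off to exercise both sides of the mirror; this is standard zpool usage:)

  # zpool scrub mypool
  # zpool status mypool

On a healthy mirror the scrub should finish with zero READ/WRITE/CKSUM
errors on both slices.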
Are the disks in that Blade 100 IDE disks?
The performance problem is probably bug 6421427:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6421427
A fix for the issue was integrated into the OpenSolaris 20060904 source
drop (actually the closed-binaries drop):
Luke Scharf wrote:
It sounded to me like he wanted to implement tripwire, but save some
time and CPU power by querying the checksumming work that was already
done by ZFS.
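(ZFS doesn't expose its per-block checksums through a supported
per-file interface, so the closest approximation today is a pool-wide
verification plus an ordinary file-level baseline; a rough sketch, with
an invented path:)

  # zpool scrub mypool
  # find /export/data -type f -exec digest -v -a md5 {} \; \
      > /var/tmp/baseline.md5

Note that ZFS checksums detect corruption, not tampering: a legitimate
write through the filesystem recomputes them, so a tripwire-style tool
still needs hashes of its own.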
Never mind. The e-mail client that I chose to use broke up the thread,
and I didn't see that the issue had already been
What's the brand and model of the cards?
On Fri, Sep 15, 2006 at 09:31:04AM +0100, Ceri Davies wrote:
On Thu, Sep 14, 2006 at 05:08:18PM -0500, Nicolas Williams wrote:
Yes, but the checksum is stored with the pointer.
So then, for each file/directory there's a dnode, and that dnode has
several block pointers to data blocks or
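(If you want to look at those dnodes and block pointers, checksums
included, on a live pool, zdb will dump them; the dataset name and
object number below are illustrative, and zdb output is not a stable
interface:)

  # zdb -dddd mypool/fs 4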
On Fri, Sep 15, 2006 at 01:23:31AM -0700, can you guess? wrote:
Implementing it at the directory and file levels would be even more
flexible: redundancy strategy would no longer be tightly tied to path
location, but directories and files could themselves still inherit
defaults from the
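(Assuming the proposal surfaces as a per-dataset "copies" property, as
discussed in this thread, that inheritance would fall out of the normal
zfs property model; dataset names invented:)

  # zfs set copies=2 mypool/home/important
  # zfs get copies mypool/home/important/paper

where a child dataset such as mypool/home/important/paper inherits the
setting unless it is overridden locally.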
It is highly likely you are seeing a duplicate of:
6413510 zfs: writing to ZFS filesystem slows down fsync() on
other files in the same FS
which was fixed recently in build 48 of Nevada.
The symptoms are very similar. That is, an fsync from vi would, prior
to the bug being fixed, have
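(To check which build you are running, the Nevada build number shows up
in /etc/release:)

  # cat /etc/release

Anything at snv_48 or later should carry the fix.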
s10u2, once zoned, always zoned? i see that the zoned property is not
cleared after removing the dataset from a zone cfg or even
uninstalling the entire zone... [right, i know how to clear it by
hand, but maybe i am missing a bit of magic in the otherwise anodyne
zonecfg et al.]
oz
--
ozan s. yigit |
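(For the record, clearing it by hand is the one-liner below; the
dataset name is invented:)

  # zfs set zoned=off mypool/zonedata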
the status showed 19.46% the first time I ran it, then 9.46% the second. The
question I have is I added the new disk, but it's showing the following:
Device: c5d0
Storage Pool: fserv
Type: Disk
Device State: Faulted (cannot open)
The disk is currently unpartitioned and unformatted. I was
On Fri, Sep 15, 2006 at 01:10:25PM -0700, Tim Cook wrote:
the status showed 19.46% the first time I ran it, then 9.46% the
second. The question I have is I added the new disk, but it's showing
the following:
Device: c5d0
Storage Pool: fserv
Type: Disk
Device State: Faulted (cannot open)
hrmm... cannot replace c5d0 with c5d0: cannot replace a replacing device
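(One possible way out, assuming a half-finished replace is stuck in the
vdev tree: look for a "replacing" vdev in the status output and detach
the stale half before retrying. If both halves report the same c5d0
name, the vdev GUID may be needed to disambiguate.)

  # zpool status fserv
  # zpool detach fserv c5d0
  # zpool replace fserv c5d0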
Quoth Darren J Moffat on Fri, Sep 08, 2006 at 01:59:16PM +0100:
Nicolas Dorfsman wrote:
Regarding system partitions (/var, /opt, all mirrored + alternate
disk), what would be YOUR recommendations ? ZFS or not ?
/var for now must be UFS since Solaris 10 doesn't have ZFS root
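(A common compromise until ZFS root arrives, with invented device
names: keep / and /var on mirrored UFS and hand everything else to a
pool.)

  # zpool create datapool mirror c0t1d0 c1t1d0
  # zfs create datapool/opt
  # zfs set mountpoint=/opt datapool/opt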
Yes sir:
[EMAIL PROTECTED]:/
# zpool status -v fserv
  pool: fserv
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress, 5.90%
(I looked at my email before checking here, so I'll just cut-and-paste the
email response in here rather than send it. By the way, is there a way to view
just the responses that have accumulated in this forum since I last visited -
or just those I've never looked at before?)
Bill Moore wrote: