Peter,
That's a great suggestion. And as fortune would have it, we have the
code to do it already. Scrubbing in ZFS is driven from the logical
layer, not the physical layer. When you scrub a pool, you're really
just scrubbing the pool-wide metadata, then scrubbing each filesystem.
At 50,000
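Since scrub is driven from the logical layer, kicking one off and watching its progress is done per pool today. A minimal sketch (the pool name "tank" is an example, not from the thread):

```shell
# Start a scrub of the whole pool; it walks the logical block tree.
zpool scrub tank
# Check progress; status shows "scrub in progress" and percent done.
zpool status tank
# A scrub can be stopped before completion with -s.
zpool scrub -s tank
```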
Bob Friesenhahn wrote:
On Fri, 28 Mar 2008, abs wrote:
Sorry for being vague, but I actually tried it with the CIFS-in-ZFS
option; I think I will try the Samba option now that you mention it.
Also, is there a way to improve the NFS performance specifically?
CIFS uses TCP.
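The point of the remark is that CIFS always runs over TCP, whereas older NFS clients may still negotiate UDP; forcing TCP and larger transfer sizes is a common first NFS tuning step. A sketch, with a made-up server name and paths:

```shell
# Sketch: mount NFS over TCP with larger read/write sizes
# (server name and paths are hypothetical, for illustration only).
mount -o proto=tcp,vers=3,rsize=32768,wsize=32768 \
    server:/export/files /mnt/files
# On Solaris, verify the options actually negotiated:
nfsstat -m /mnt/files
```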
Kyle McDonald wrote:
In this case you may need to use 'zfs unshare', since I don't know if
'unshare' can unshare a ZFS filesystem that was shared by 'zfs share'.
This makes no difference:
# zfs unshare files/custfs/cust12/2053699a
cannot unshare 'files/custfs/cust12/2053699a': not currently shared
# zfs
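When 'zfs unshare' says "not currently shared", the sharenfs property and the system's share table tell you which mechanism, if any, actually exported the filesystem. A sketch, using the dataset name from the message above:

```shell
# See whether ZFS itself thinks the filesystem is shared.
zfs get sharenfs,sharesmb files/custfs/cust12/2053699a
# List everything the OS currently has exported.
share
# If the path appears in share(1M) output but the ZFS property is "off",
# it was shared manually and must be removed with unshare(1M),
# not with 'zfs unshare'.
```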
Perhaps someone else can correct me if I'm wrong, but if you're using the
whole disk, ZFS shouldn't be displaying a slice when listing your disks,
should it? I've *NEVER* seen it do that on any of mine except when using
partitions/slices.
I would expect:
c1d1s8
To be:
c1d1
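The whole-disk vs. slice distinction comes from how the pool was created; whichever form was given to `zpool create` is what later listings show. A sketch (pool and device names are examples):

```shell
# Giving ZFS the whole disk: it writes an EFI label and the pool
# listing shows the bare device name, e.g. "c1d1".
zpool create tank c1d1
# Giving ZFS a slice: the listing shows the slice, e.g. "c1d1s0",
# and ZFS does not manage the disk label.
zpool create tank c1d1s0
# Either way, 'zpool status' echoes back the form that was used.
zpool status tank
```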
On Sun, Mar 30,
On Mon, 31 Mar 2008, Tim wrote:
Perhaps someone else can correct me if I'm wrong, but if you're using the
whole disk, ZFS shouldn't be displaying a slice when listing your disks,
should it? I've *NEVER* seen it do that on any of mine except when using
partitions/slices.
I would expect:
I would be very happy having a filesystem based zfs scrub
We have an 18 TB zpool, and it takes more than two days to scrub it.
Since we cannot take snapshots during the scrub, this is unacceptable.
Kristof
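For scale, scrubbing 18 TB in two days implies a sustained average read rate of roughly 100 MB/s, as a quick back-of-envelope calculation shows:

```shell
# Back-of-envelope: average throughput needed to scrub 18 TB in 2 days.
awk 'BEGIN {
  bytes = 18 * 1e12          # 18 TB
  secs  = 2 * 86400          # 2 days in seconds
  printf "%.0f MB/s\n", bytes / secs / 1e6
}'
# prints: 104 MB/s
```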
This message posted from opensolaris.org
I'm trying to figure out how to restore a filesystem using zfs recv.
Obviously there's some important concept I don't understand. I'm
using my zfsdump script to create the dumps that I'm going to restore.
Here's what I tried:
Save a level 0 dump in d.0:
datsun# zfsdump 0 home/tckuser d.0
zfs
We've got a couple of X4500's. I am able to get into ZFS Administration
on one of them, but on the newer one (latest Solaris 10 8/07 with
patches), which also has a larger number of ZFS filesystems, whenever I
go to the Java Web Console and click on ZFS Administration, after a
couple of minutes,
Bill Shannon wrote:
datsun# zfs recv -d test d.0
cannot open 'test/tckuser': dataset does not exist
Despite the error message, the recv does seem to work.
Is it a bug that it prints the error message, or is it a bug that it
restores the data?
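One thing that trips people up here: `zfs recv -d` recreates the dataset path embedded in the stream underneath the target pool, so the "dataset does not exist" message refers to a path that recv is about to create. A sketch of the round trip, with dataset names borrowed from the messages above:

```shell
# Snapshot the filesystem and save the stream to a file.
zfs snapshot home/tckuser@dump0
zfs send home/tckuser@dump0 > d.0
# -d: take the dataset path from the stream (tckuser) and recreate it
# under the target pool "test".
zfs recv -d test < d.0
# The restored filesystem should now appear as test/tckuser.
zfs list -r test
```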
Bill Shannon wrote:
If I do something like this:
zfs snapshot tank@today
zfs send tank@today > tank.backup
sleep 86400
zfs rename tank@today tank@yesterday
zfs snapshot tank@today
zfs send -I tank@yesterday tank@today > tank.incr
Am I going to be
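Assuming the two stream files above, restoring them means receiving the full stream first and the incremental second, in snapshot order. A sketch (the target pool name is hypothetical):

```shell
# Restore the full stream, then apply the incremental on top of it.
zfs recv -d backup < tank.backup
# -F rolls the target back to the base snapshot if it has diverged
# since the full stream was received.
zfs recv -d -F backup < tank.incr
```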
I did it and it worked. Some had problems with the patch, but eventually it worked.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
I intended to add a disk as a hot spare to a zpool but inadvertently added it as
a top-level data vdev of the pool, i.e.
zpool add ataarchive c1t1d0
instead of
zpool add ataarchive spare c1t1d0
This is a zpool on an X4500 with 4 raidz2 vdevs configured. From my reading of
previous threads, the
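For context: in the ZFS of that era, a top-level data vdev could not be removed from a pool once added; `zpool remove` only detached hot spares and cache devices, so the usual options were to live with the mistake or recreate the pool. A sketch of the intended spare addition and how to verify it (pool and device names from the post):

```shell
# What was intended: add the disk as a hot spare, not a data vdev.
zpool add ataarchive spare c1t1d0
# Verify: c1t1d0 should appear under the "spares" section.
zpool status ataarchive
# 'zpool remove' works for a device listed as a spare or cache device,
# but not for a mistakenly added data vdev.
zpool remove ataarchive c1t1d0
```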