Hello Jürgen,
Monday, June 4, 2007, 7:09:59 PM, you wrote:
Patching zfs_prefetch_disable = 1 has helped
It's my belief this mainly aids scanning metadata. My testing with rsync, and yours with find (also seen with du ; zpool iostat -v 1), bears this out.
Mainly tracked in bug 6437054.
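For reference, the tunable mentioned above can be set in the two usual ways on Solaris/OpenSolaris; this is a sketch, and the live change only lasts until reboot:

```shell
# Persistent: add this line to /etc/system and reboot
set zfs:zfs_prefetch_disable = 1

# Live (until reboot), writing the kernel variable with mdb:
echo "zfs_prefetch_disable/W0t1" | mdb -kw
```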
So does anyone know how to rescan the LUN(s) that are part of a pool and detect the new size of the LUN?
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
hi Doug,
On Mon, 2007-06-04 at 12:25 -0700, Douglas Atique wrote:
I have been trying to set up a boot ZFS filesystem since b63 and found out about bug 6553537, which has been preventing boot from ZFS filesystems starting from b63. First question is whether b65 has solved the problem as was planned on
From what I know, this operation goes via a zpool export, a re-label (with format), then a zpool import.
It is not an online operation.
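The sequence described above might look like the following. The pool and device names here are hypothetical, and since this is not an online operation the pool is unavailable while exported:

```shell
# Hypothetical pool "tank" on LUN c2t0d0 that was grown on the array
zpool export tank     # take the pool offline
format -e c2t0d0      # re-write the label so it reflects the new LUN size
zpool import tank     # re-import; the pool should pick up the new size
```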
On 6/5/07, Yan [EMAIL PROTECTED] wrote:
So does anyone know how to rescan the LUN(s) that are part of a pool and detect the new size of the LUN?
Hi Doug,
On Tue, 2007-06-05 at 06:45 -0700, Douglas Atique wrote:
Hi, Tim. Thanks for your hints.
No problem
Comments on each one follow (marked with Doug: and in blue).
html mail :-/
Tim Foster [EMAIL PROTECTED] wrote:
There's a number of things you could check:
I'm in a bit of a bind...
I did a replace and the resilver started properly. Unfortunately I now need to abort the replace. Is there a way to do this? Can I do something like take the new device offline?
Thanks.
hi,
According to the following link, there is no problem with b65:
http://www.opensolaris.org/os/community/zfs/boot/netinstall
Did you try issuing:
zpool detach your_pool_name new_device
That should detach the new device and stop the resilver. If you just
want to stop the resilver (and leave the device), you should be able to
do:
zpool scrub -s your_pool_name
Which will stop the scrub/resilver.
--Bill
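With a hypothetical pool and device name, the two options described above would be:

```shell
# Abort the replace entirely by detaching the new device:
zpool detach tank c3t0d0

# Or just stop the resilver while leaving the device attached:
zpool scrub -s tank
```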
eric kustarz wrote:
There's going to be some very good stuff for ZFS in s10u4, can you
please update the issues *and* features when it comes out?
Of course. That was my commitment when I decided to create the beware
section in the Wikipedia article.
Would be very nice if the improvements would be documented
anywhere :-)
Cindy has been doing a good job of putting the new features into the
admin guide:
http://www.opensolaris.org/os/community/zfs/docs/zfsadmin.pdf
Check out the What's New in ZFS? section.
eric
eric kustarz wrote:
Cindy has been doing a good job of putting the new features into the
admin guide:
http://www.opensolaris.org/os/community/zfs/docs/zfsadmin.pdf
Check out the What's New in ZFS? section.
I will update the Wikipedia entry when
Vic Engle wrote:
Hi All,
Just curious about how the incremental send works. Is it changed blocks or files, and how are the changed blocks or files identified?
It's done at the DMU layer, based on blocks of objects. We use the
block-pointer relationships (i.e., the on-disk structure of files)
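The idea can be sketched conceptually: each block pointer records the transaction group ("txg") in which the block was born, so an incremental send only needs to walk the dataset and emit blocks born after the base snapshot's txg. A toy model in Python — this is an illustration of the principle, not the actual ZFS code, and all names are made up:

```python
# Toy model of ZFS incremental send: pick the blocks whose birth txg
# is newer than the base snapshot's txg. Purely illustrative.

def incremental_send(blocks, base_txg):
    """Return blocks changed since the snapshot taken at base_txg.

    blocks:   dict mapping block id -> birth txg (as a block pointer
              would record it)
    base_txg: txg of the base (already-sent) snapshot
    """
    return {blk: txg for blk, txg in blocks.items() if txg > base_txg}

# A file with four blocks; blocks 1 and 3 were rewritten after txg 100,
# so only those two need to travel in the incremental stream.
blocks = {0: 50, 1: 120, 2: 80, 3: 150}
changed = incremental_send(blocks, base_txg=100)
print(sorted(changed))  # prints [1, 3]
```

Because the birth txg is right there in the block pointer, no separate dirty map or file-level comparison is needed; unchanged subtrees are skipped wholesale.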
Starfox wrote:
First time around, create a snapshot and send it to remote: zfs snapshot
master/[EMAIL PROTECTED] zfs send master/[EMAIL PROTECTED] | ssh mirror zfs recv
backup/mirrorfs
Once that's done, [EMAIL PROTECTED], correct?
More accurately, master/[EMAIL PROTECTED] == backup/[EMAIL
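The archive software has mangled the snapshot names in the exchange above (the "@" made them look like email addresses). With hypothetical dataset and snapshot names, the cycle being described would look like:

```shell
# First time around: full send of the initial snapshot
zfs snapshot master/fs@snap1
zfs send master/fs@snap1 | ssh mirror zfs recv backup/mirrorfs

# Later: incremental from snap1 to snap2, sending only what changed
zfs snapshot master/fs@snap2
zfs send -i master/fs@snap1 master/fs@snap2 | ssh mirror zfs recv backup/mirrorfs
```

After the first recv completes, the receive side holds a snapshot matching the one sent, which is what anchors the later incremental.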
Jesus Cea wrote:
I will update the
I have also been trying to figure out the best strategy regarding ZFS
boot... I currently have a single disk UFS boot and RAID-Z for data. I plan
on getting a mirror for boot, but I still don't understand what my options
are regarding:
- Should I set up one zfs slice for the entire drive and
On Thu, 2007-05-31 at 13:27 +0100, Darren J Moffat wrote:
What errors and error rates have you seen?
I have seen switches flip bits in NFS traffic such that the TCP checksum
still matched yet the data was corrupted. One of the ways we saw this was
when files were being checked out of