Tomas Ögren wrote:
On 24 January, 2008 - Steve Hillman sent me these 1,9K bytes:
I realize that this topic has been fairly well beaten to death on this
forum, but I've also read numerous comments from ZFS developers that they'd
like to hear about significantly different performance numbers
Hello Eric,
Wednesday, January 23, 2008, 7:21:42 PM, you wrote:
ES Sorry, no such feature exists. We do generate sysevents for when
ES resilvers are completed, but not scrubs. Adding those sysevents would
ES be an easy change, but doing anything more complicated (such as baking
ES that
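Pending such a sysevent, a crude workaround is to poll zpool status from cron and watch for the scrub completion line; a minimal sketch (the pool name "tank" and the exact "scrub: ... completed" wording are assumptions and vary between releases):
#!/bin/sh
# Poll for scrub completion and mail root (sketch, not a supported
# interface; adjust the grep pattern to your release's output).
POOL=tank
if zpool status "$POOL" | grep scrub | grep completed >/dev/null 2>&1; then
    echo "scrub of $POOL completed" | mailx -s "zpool scrub done" root
fi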
Hello Darren,
DJM BTW there isn't really any such thing as disk corruption, there is
DJM data corruption :-)
Well, if you scratch it hard enough :)
--
Best regards,
Robert Milkowski mailto:[EMAIL PROTECTED]
zpool status shows a few checksum errors against 1 device in a raidz1 3-disk
array and no read or write errors against that device. The pool is marked as
degraded. Is there a difference if you clear the errors for the pool before you
scrub versus scrubbing then clearing the errors? I'm not sure if
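For what it's worth, the order that keeps the counters meaningful is to scrub first and clear afterwards, so the checksum counts reflect what the scrub actually found; a sketch, assuming the pool is called "tank":
zpool scrub tank          # re-read and verify every block in the pool
zpool status -v tank      # wait for completion; note any errors found
zpool clear tank          # reset the counters once the errors are recorded
Clearing first only zeroes the counters; the scrub will re-report anything still wrong.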
Hi,
I have this setup:
2xSUN V440 servers with FC adapters, installed Solaris 10u4.
Both servers see one LUN on XP storage.
On that LUN a ZFS filesystem is created (on server1).
If I export that ZFS filesystem on server1, I can import it on server2, and
vice-versa.
If I have imported ZFS on
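(For reference, a clean handover between the two hosts, with a hypothetical pool name "xppool", looks like:
zpool export xppool       # on server1: releases the pool and marks it exported
zpool import              # on server2: lists pools visible on the shared LUN
zpool import xppool       # on server2: takes the pool over
The -f flag is only needed when the pool was never exported, e.g. after a crash.)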
Hello Kam,
Friday, January 25, 2008, 9:11:24 AM, you wrote:
K zpool status shows a few checksum errors against 1 device in a
K raidz1 3-disk array and no read or write errors against that
K device. The pool is marked as degraded. Is there a difference if you
K clear the errors for the pool before
Hi,
pool wasn't exported.
server1 was rebooted (with ZFS on it).
During the reboot the ZFS pool was released, and I could import it on server2 (which
I have done).
However, when server1 was booting up it imported the pool and mounted the ZFS
filesystems even though they were already imported and mounted on
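The boot-time import happens because the pool is still listed in server1's /etc/zfs/zpool.cache; at boot, ZFS imports every pool recorded there without checking whether another host holds it. A rough, unsupported workaround (a sketch only, and note it disables automatic import of every cached pool on that host, not just the shared one) is to move the cache file aside before server1 boots normally:
# on server1, e.g. from failsafe/single-user mode, before the next normal boot
mv /etc/zfs/zpool.cache /etc/zfs/zpool.cache.bak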
New, yes. Aware - probably not.
That, given cheap filesystems, users would create many
filesystems was an easy guess, but I somehow don't
think anybody envisioned that users would be creating
tens of thousands of filesystems.
ZFS - too good for its own good :-p
IMO (and given mails/posts
Hello Niksa,
Friday, January 25, 2008, 9:27:17 AM, you wrote:
NF Hi,
NF I have this setup:
NF 2xSUN V440 servers with FC adapters, installed Solaris 10u4.
NF Both servers see one LUN on XP storage.
NF On that LUN a ZFS filesystem is created (on server1).
NF If I export that ZFS filesystem on
Niksa Franceschi wrote:
Hi,
pool wasn't exported.
server1 was rebooted (with ZFS on it).
During the reboot the ZFS pool was released, and I could import it on server2
(which I have done).
However, when server1 was booting up it imported the pool and mounted ZFS
filesystems even though they were
Christopher Gorski wrote:
unsorted/photosbackup/laptopd600/[D]/cag2b/eujpg/103-0398_IMG.JPG is a
file that is always missing in the new tree.
Oops, I meant:
unsorted/drive-452a/[E]/drive/archives/seconddisk_20nov2002/eujpg/103-0398_IMG.JPG
is always missing in the new tree.
Robert Milkowski wrote:
Hello Christopher,
Friday, January 25, 2008, 5:37:58 AM, you wrote:
CG michael schuster wrote:
I assume you've made sure that there's enough space in /pond ...
can you try
(cd /pond/photos; tar cf - *) | (cd /pond/copytestsame; tar xf -)
CG I tried it, and it
Torrey McMahon [EMAIL PROTECTED] wrote:
http://www.philohome.com/hammerhead/broken-disk.jpg :-)
Be careful, things like this can result in device corruption!
Jörg
--
EMail:[EMAIL PROTECTED] (home) Jörg Schilling D-13353 Berlin
[EMAIL PROTECTED](uni)
[EMAIL
On Fri, 25 Jan 2008 15:18:36 -0500
Tiernan, Daniel [EMAIL PROTECTED]
wrote:
You may have hit a cp and/or shell bug due to the directory naming
topology. Rather than depend on cp -r I prefer the cpio method:
find * -print | cpio -pdumv dest_path
I'd try the find by itself to see if it
You may have hit a cp and/or shell bug due to the directory naming
topology. Rather than depend on cp -r I prefer the cpio method:
find * -print | cpio -pdumv dest_path
I'd try the find by itself to see if it yields the correct file list
before piping into cpio...
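i.e. something along these lines (paths are examples only):
cd /pond/photos
find * -print > /tmp/filelist                  # inspect this list first
wc -l /tmp/filelist                            # count should match the source tree
find * -print | cpio -pdumv /pond/copytestsame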
On Jan 25, 2008, at 6:06 AM, Niksa Franceschi wrote:
Yes, the link explains quite well the issue we have.
Only difference is that server1 can be manually rebooted, and while
it's still down I can import the ZFS pool on server2 even without the -f
option, and yet server1 when booted up still
On Fri, Jan 25, 2008 at 12:59:18AM -0500, Kyle McDonald wrote:
... With the 256MB doing write caching, is there any further benefit
to moving the ZIL to a flash or other fast NV storage?
Do some tests with/without ZIL enabled. You should see a big
difference. You should see something
Albert Chin wrote:
On Fri, Jan 25, 2008 at 12:59:18AM -0500, Kyle McDonald wrote:
... With the 256MB doing write caching, is there any further benefit
to moving the ZIL to a flash or other fast NV storage?
Do some tests with/without ZIL enabled. You should see a big
difference.
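(At the time, the usual way to run such a with/without test was the zil_disable tunable; a sketch, for benchmarking only, since it breaks synchronous write semantics. In /etc/system, taking effect after a reboot:
set zfs:zil_disable = 1
or live via mdb, affecting filesystems mounted after the change:
echo zil_disable/W0t1 | mdb -kw
Never leave this set on a production system.)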
Yes, the link explains quite well the issue we have.
Only difference is that server1 can be manually rebooted, and while it's still
down I can mount ZFS pool on server2 even without -f option, and yet server1
when booted up still mounts at same time.
Just one questiong though.
Is there any ETA
Robert Milkowski wrote:
Hello Darren,
DJM BTW there isn't really any such thing as disk corruption, there is
DJM data corruption :-)
Well, if you scratch it hard enough :)
http://www.philohome.com/hammerhead/broken-disk.jpg :-)
Christopher Gorski [EMAIL PROTECTED] wrote:
can you try
(cd /pond/photos; tar cf - *) | (cd /pond/copytestsame; tar xf -)
CG I tried it, and it worked. The new tree is an exact copy of the old
one.
could you run your cp as 'truss -t open -o /tmp/cp.truss cp * '
and
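(The point of the truss run is that the log records every open(2) the cp issues, so a file that vanishes from the copy shows up as a missing or failing open; e.g., hypothetically:
grep 103-0398_IMG.JPG /tmp/cp.truss   # was the missing file ever opened?
grep -c Err# /tmp/cp.truss            # how many opens failed, if any
)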