Re: [zfs-discuss] X4500 ILOM thinks disk 20 is faulted, ZFS thinks not.

2007-12-04 Thread Ralf Ramge
Jason J. W. Williams wrote: Have any of y'all seen a condition where the ILOM considers a disk faulted (status is 3 instead of 1), but ZFS keeps writing to the disk and doesn't report any errors? I'm going to do a scrub tomorrow and see what comes back. I'm curious what caused the ILOM to

[zfs-discuss] X4500 ILOM thinks disk 20 is faulted, ZFS thinks not.

2007-12-04 Thread Jason J. W. Williams
Hey Guys, Have any of y'all seen a condition where the ILOM considers a disk faulted (status is 3 instead of 1), but ZFS keeps writing to the disk and doesn't report any errors? I'm going to do a scrub tomorrow and see what comes back. I'm curious what caused the ILOM to fault the disk. Any
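A minimal sketch of the scrub-and-check sequence being planned, assuming the pool is named tank (the pool name isn't given in the thread):

  # zpool scrub tank        # re-read and verify checksums on every allocated block in the pool
  # zpool status -v tank    # check per-device READ/WRITE/CKSUM counters and the scrub result
  # fmdump -eV | tail       # see whether the fault manager has logged any disk error telemetry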

Re: [zfs-discuss] X4500 ILOM thinks disk 20 is faulted, ZFS thinks not.

2007-12-04 Thread Jason J. W. Williams
Hi Ralf, Thank you for the suggestion. About half of the disks are reporting 1968-1969 in the Soft Errors field. All disks are reporting 1968 in the Illegal Request field. There don't appear to be any other errors; all other counters are 0. The Illegal Request count seems a little fishy...like
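Those counters look like per-device error summaries from iostat; a sketch of how they are typically gathered, assuming that's the source (device name illustrative):

  # iostat -En c1t20d0      # per-device summary: Soft Errors, Hard Errors, Transport Errors, Illegal Request
  # iostat -En              # the same summary for every device, to compare the suspect disk with its peers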

Re: [zfs-discuss] Yager on ZFS

2007-12-04 Thread can you guess?
Your response here appears to refer to a different post in this thread. I never said I was a typical consumer. Then it's unclear how your comment related to the material which you quoted (and hence to which it was apparently responding). If you look around photo forums, you'll see an

Re: [zfs-discuss] ZFS write time performance question

2007-12-04 Thread can you guess?
And some results (for OLTP workload): http://przemol.blogspot.com/2007/08/zfs-vs-vxfs-vs-ufs-on-scsi-array.html While I was initially hardly surprised that ZFS offered only 11% - 15% of the throughput of UFS or VxFS, a quick glance at Filebench's OLTP workload seems to indicate that it's

[zfs-discuss] clones bound too tightly to its origin

2007-12-04 Thread Andreas Koppenhoefer
Hello all, while experimenting with zfs send and zfs receive mixed with cloning on the receiver side, I found the following... On server A there is a zpool with snapshots created on a regular basis via cron. Server B gets updated by a zfs-send-ssh-zfs-receive command pipe. Sometimes I want to do some
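A sketch of the kind of pipeline and receiver-side clone being described; pool, dataset, snapshot and host names are illustrative, not taken from the post:

  # on server A: push an incremental snapshot stream to server B
  # zfs snapshot tank/data@2007-12-04
  # zfs send -i tank/data@2007-12-03 tank/data@2007-12-04 | ssh serverB zfs receive tank/data

  # on server B: clone a received snapshot to experiment without touching the received dataset
  # zfs clone tank/data@2007-12-04 tank/scratch
  # zfs promote tank/scratch    # if needed, swap the parent/child roles so the clone no longer depends on its origin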

Re: [zfs-discuss] clones bound too tightly to its origin

2007-12-04 Thread Andreas Koppenhoefer
I forgot to mention: we are running Solaris 10 Update 4 (08/07)... - Andreas

Re: [zfs-discuss] ZFS performance with Oracle

2007-12-04 Thread Sean Parkinson
So, if your array is something big like an HP XP12000, you wouldn't just make a zpool of one big LUN (LUSE volume), you'd split it in two and make a mirror when creating the zpool? If the array has redundancy built in, you're suggesting adding another layer of redundancy with ZFS on top of
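A sketch of the layout under discussion, i.e. presenting two LUNs from the array and letting ZFS mirror them so it has a redundant copy to repair blocks that fail checksum verification (device names illustrative):

  # zpool create tank mirror c6t0d0 c7t0d0    # each device is a LUN carved from the array
  # zpool status tank                         # verify the mirror topology before loading data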

Re: [zfs-discuss] I screwed up my zpool

2007-12-04 Thread jonathan soons
Why didn't this command just fail?
# zpool add tank c4t0d0
invalid vdev specification
use '-f' to override the following errors:
mismatched replication level: pool uses raidz and new vdev is disk
I did not use '-f' and yet my configuration was changed. That was unexpected behaviour. Thanks for
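For what it's worth, the same change can be previewed without committing it, using the dry-run flag (pool and device names as in the post):

  # zpool add -n tank c4t0d0    # -n prints the configuration that would result, without modifying the pool
  # zpool status tank           # confirm whether the single disk really was added as a new top-level vdev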

Re: [zfs-discuss] clones bound too tightly to its origin

2007-12-04 Thread Andreas Koppenhoefer
It seems my script got lost during editing/posting of the message. I'll try again, attaching... - Andreas (attachment: test-zfs-clone.sh, a Bourne shell script)

[zfs-discuss] current status of zfs boot partition on Sparc

2007-12-04 Thread Jerry K
I haven't seen anything about this recently, or I have missed it. Can anyone share what the current status of ZFS boot partition on Sparc is? Thanks, Jerry K

Re: [zfs-discuss] current status of zfs boot partition on Sparc

2007-12-04 Thread Lori Alt
It's currently planned for integration into Nevada in the build 82 or 83 time frame. Lori Jerry K wrote: I haven't seen anything about this recently, or I have missed it. Can anyone share what the current status of ZFS boot partition on Sparc is? Thanks, Jerry K

[zfs-discuss] mounting a volume as zfs

2007-12-04 Thread Paul Haldane
I can't decide if this is a dumb question or not (so I'll try asking it). We have two Solaris machines (Solaris 08/07): one (x86) with a load of disk attached and one (SPARC) without. I've configured a volume on the disk server and made it available via iSCSI. Connected to that on the SPARC
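A sketch of the setup being described, assuming the in-kernel iSCSI target and initiator shipped with that release; sizes, addresses and device names are illustrative:

  # on the x86 disk server: create a zvol and export it over iSCSI
  # zfs create -V 100g bigpool/vol1
  # zfs set shareiscsi=on bigpool/vol1

  # on the SPARC client: discover the target, let the LUN appear, and build a pool on it
  # iscsiadm add discovery-address 192.168.1.10
  # iscsiadm modify discovery --sendtargets enable
  # devfsadm -i iscsi
  # zpool create remotepool c2t1d0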

Re: [zfs-discuss] Yager on ZFS

2007-12-04 Thread Stefano Spinucci
On 11/7/07, can you guess? [EMAIL PROTECTED] wrote: However, ZFS is not the *only* open-source approach which may allow that to happen, so the real question becomes just how it compares with equally inexpensive current and potential alternatives (and that would make for an interesting

Re: [zfs-discuss] why are these three ZFS caches using so much kmem?

2007-12-04 Thread James C. McPherson
James C. McPherson wrote: Got an issue which is rather annoying to me - three of my ZFS caches are regularly using nearly 1/2 of the 1.09 GB of allocated kmem in my system ...[snip] Following suggestions from Andre and Rich that this was probably the ARC, I've implemented a 256 MB limit for my
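A sketch of one common way to impose such a cap, via /etc/system (takes effect at the next reboot; 0x10000000 bytes is 256 MB):

  * cap the ARC so the ZFS caches stop consuming most of kmem
  set zfs:zfs_arc_max = 0x10000000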

Re: [zfs-discuss] Yager on ZFS

2007-12-04 Thread can you guess?
On 11/7/07, can you guess? [EMAIL PROTECTED] wrote: However, ZFS is not the *only* open-source approach which may allow that to happen, so the real question becomes just how it compares with equally inexpensive current and potential alternatives (and that would make for an

[zfs-discuss] ZFS with Memory Sticks

2007-12-04 Thread Paul Gress
OK, I've been putting off this question for a while now, but it's eating at me, so I can't hold off any more. I have a nice 8 GB memory stick I've formatted with the ZFS file system. It works great on all my Solaris PCs, but refuses to work on my SPARC system. So I've formatted it on my
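The post is truncated here; a sketch of the checks that can help narrow down why a pool created on x86 isn't seen on a SPARC machine (device names illustrative):

  # zpool import                   # list any exported pools ZFS can see on attached devices
  # prtvtoc /dev/rdsk/c2t0d0s2     # inspect the disk label as the SPARC machine reads it
  # fstyp /dev/rdsk/c2t0d0s0       # report whether a zfs signature is found on the slice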