[zfs-discuss] Odd zpool / zfs issue

2008-10-05 Thread homerun
Hi, I have one USB hard drive that shows up in zpool import as a pool that no longer exists on the disk:

# zpool import
  pool: usb1
    id: 8159001826765429865
 state: FAULTED
status: The pool metadata is corrupted.
action: The pool cannot be imported due to damaged devices or data.
The
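If the pool really is gone from that disk, one way to check for stale ZFS labels is zdb -l against the raw device. A minimal sketch, assuming the USB disk shows up as c5t0d0 (substitute the real path from format):

    # Dump the ZFS labels, if any, from the suspect device:
    zdb -l /dev/dsk/c5t0d0s0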

[zfs-discuss] Permanent errors on filesystem (opensolaris 2008.05)

2008-10-05 Thread Emmanuel
Hi, I am looking for guidance on the following ZFS setup and error:
- opensolaris 2008.05 running as a guest in VMware Server
- ubuntu host
- the system has run flawlessly as an NFS file server for some months now

Single zpool (called 'tank'), 2 vdevs each as raid-Z, about 10 filesystems (one of them
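For permanent errors on a pool like this, zpool status -v is the standard way to list exactly which files or objects are affected. A minimal example using the pool name from the post:

    # Show error counts plus the list of files with permanent errors:
    zpool status -v tank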

Re: [zfs-discuss] Permanent errors on filesystem (opensolaris 2008.05)

2008-10-05 Thread Emmanuel
Reading through the post, the error message didn't come through properly. It is tank/mail:<0x0> (with less-than and greater-than signs on either side of the 0x0). Also, the 4 disks (2 vdevs x 2 for raid-Z) are physical SATA disks dedicated to the VMware image. Thanks.
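An error of the form tank/mail:<0x0> points at a damaged object in that dataset rather than a named file. A common sequence, sketched here for the 'tank' pool, is to scrub, restore or remove whatever gets reported, then reset the counters:

    # Re-read and verify everything in the pool:
    zpool scrub tank
    # Check progress and the updated error list:
    zpool status -v tank
    # Once the affected data has been restored or deleted:
    zpool clear tank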

Re: [zfs-discuss] Odd zpool / zfs issue

2008-10-05 Thread Ron Halstead
zpool destroy -f usb1

--ron
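After the destroy, re-running the import scan should confirm the phantom pool is gone:

    zpool destroy -f usb1
    # usb1 should no longer be listed:
    zpool import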

[zfs-discuss] xterm - strange output

2008-10-05 Thread Ron Halstead
/usr/openwin/bin/xterm returns 'Could not set destroy callback to IM' but does open an xterm. I have never seen this one before. Any ideas?

--ron

[zfs-discuss] SATA/SAS (Re: Quantifying ZFS reliability)

2008-10-05 Thread Anton B. Rang
Erik: (2) a SAS drive has better throughput and IOPs than a SATA drive

Richard: Disagree. We proved that the transport layer protocol has no bearing on throughput or iops. Several vendors offer drives which are identical in all respects except for transport layer protocol: SAS or SATA.

Re: [zfs-discuss] An slog experiment (my NAS can beat up your NAS)

2008-10-05 Thread Brian Hechinger
On Sat, Oct 04, 2008 at 10:37:26PM -0700, Chris Greer wrote:
> So I tried this experiment this week... On each host (OpenSolaris 2008.05), I created an 8GB ramdisk with ramdiskadm. I shared this ramdisk on each host via the iscsi target and initiator over a 1GB crossconnect cable (jumbo
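A rough sketch of the setup Chris describes, assuming the legacy iscsitadm target daemon shipped with 2008.05; the ramdisk name, IP address, and device names are illustrative:

    # On the host donating its RAM: create an 8GB ramdisk...
    ramdiskadm -a rdisk0 8g
    # ...and export it as an iSCSI target (label is arbitrary):
    iscsitadm create target -b /dev/ramdisk/rdisk0 rdisk0
    # On the peer host: discover the target over the crossconnect:
    iscsiadm add discovery-address 192.168.10.2:3260
    iscsiadm modify discovery --sendtargets enable
    devfsadm -i iscsi
    # Add the resulting LUN as a separate intent log (slog):
    zpool add tank log c4t0d0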

Re: [zfs-discuss] Adding my own compression to zfs

2008-10-05 Thread MC
> It would be trivial to make the threshold a tunable, but we're trying to avoid this sort of thing. I don't want there to be a ZFS tuning guide, ever. That would mean we failed.
>
> Jeff

harumph... http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide :-)

Well now that that
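For what it's worth, that guide documents exactly the sort of knob Jeff objects to. For example, its cache-flush tunable goes in /etc/system (and, per the guide, is only safe when every device in the pool has nonvolatile cache):

    * /etc/system fragment from the Evil Tuning Guide; use with care:
    set zfs:zfs_nocacheflush = 1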

Re: [zfs-discuss] An slog experiment (my NAS can beat up your NAS)

2008-10-05 Thread Adam Leventhal
So what are the downsides to this? If both nodes were to crash and I used the same technique to recreate the ramdisk, I would lose any transactions in the slog at the time of the crash, but the physical disk image is still in a consistent state, right (just not from my app's point of
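The on-disk state should indeed be consistent; what goes missing with the slog are synchronous writes that had only been committed to the intent log. If the zdb in this build supports it, the pending ZIL records can be inspected (pool name assumed from the thread):

    # Dump intent-log (ZIL) entries per dataset, if -i is available:
    zdb -iv tank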

Re: [zfs-discuss] An slog experiment (my NAS can beat up your NAS)

2008-10-05 Thread Nicolas Williams
On Sun, Oct 05, 2008 at 09:07:31PM -0400, Brian Hechinger wrote:
> On Sat, Oct 04, 2008 at 10:37:26PM -0700, Chris Greer wrote:
> > I'm not sure I could survive a crash of both nodes, going to try and test some more.
>
> Ok, so taking my idea above, maybe a pair of 15K SAS disks in those boxes so
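If the ramdisks were swapped for real disks as suggested, mirroring the log devices would also guard against losing the slog to a single failure. A sketch with illustrative device names:

    # Add a mirrored pair of 15K SAS disks as the separate intent log:
    zpool add tank log mirror c3t0d0 c3t1d0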