Re: [zfs-discuss] Kernel Panic on zpool clean

2010-06-30 Thread George
I suggest you try running 'zdb -bcsv storage2' and show the result.
r...@crypt:/tmp# zdb -bcsv storage2
zdb: can't open storage2: No such device or address
Then I tried
r...@crypt:/tmp# zdb -ebcsv storage2
zdb: can't open storage2: File exists
George

Re: [zfs-discuss] Kernel Panic on zpool clean

2010-06-30 Thread Victor Latushkin
On Jun 30, 2010, at 10:48 AM, George wrote:
I suggest you try running 'zdb -bcsv storage2' and show the result.
r...@crypt:/tmp# zdb -bcsv storage2
zdb: can't open storage2: No such device or address
Then I tried
r...@crypt:/tmp# zdb -ebcsv storage2
zdb: can't open storage2: File

Re: [zfs-discuss] Kernel Panic on zpool clean

2010-06-30 Thread George
Please try zdb -U /dev/null -ebcsv storage2
r...@crypt:~# zdb -U /dev/null -ebcsv storage2
zdb: can't open storage2: No such device or address
If I try
r...@crypt:~# zdb -C storage2
Then it prints what appears to be a valid configuration but then the same error message about being unable
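
If the exported pool's devices aren't being found, it may also help to point zdb at the device directory explicitly. A rough sketch (the -p search path is an assumption, not something suggested in the thread):
  # zdb -e -p /dev/dsk -U /dev/null -bcsv storage2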

Re: [zfs-discuss] Permanent errors detected in metadata:0x13

2010-06-30 Thread W Brian Leonard
Hi Cindy, The scrub didn't help and yes, this is an external USB device. Thanks, Brian
Cindy Swearingen wrote: Hi Brian, You might try running a scrub on this pool. Is this an external USB device? Thanks, Cindy
On 06/29/10 09:16, Brian Leonard wrote: Hi, I have a zpool which is
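
For reference, a scrub of a pool like this is typically started and checked along these lines ('mypool' is a placeholder for the affected pool):
  # zpool scrub mypool
  # zpool status -v mypool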

[zfs-discuss] ZFS on Caviar Blue (Hard Drive Recommendations)

2010-06-30 Thread Patrick Donnelly
Hi list, I googled around but couldn't find anything on whether anyone has had good or bad experiences with the Caviar *Blue* drives. I saw in the archives that Caviar Blacks are *not* recommended for ZFS arrays (excluding apparently the RE3 and RE4?). Specifically I'm looking to buy Western Digital Caviar

[zfs-discuss] What are requirements for zpool split ?

2010-06-30 Thread Mitchell Petty
Hi, Is "zpool split" available? If not, when will it be? If it is, what are the prerequisites? Thanks in advance, Mitch

Re: [zfs-discuss] Permanent errors detected in metadata:0x13

2010-06-30 Thread W Brian Leonard
Interesting, this time it worked! Does specifying the device to clear cause the command to behave differently? I had assumed that without the device specification, the clear would just apply to all devices in the pool (which is just the one). Thanks, Brian
Cindy Swearingen wrote: Hi Brian,
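
For context, both forms of the command are valid; a rough sketch with placeholder pool and device names:
  # zpool clear mypool            (clears error counts on every device in the pool)
  # zpool clear mypool c5t0d0     (clears error counts on that one device only)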

Re: [zfs-discuss] Permanent errors detected in metadata:0x13

2010-06-30 Thread W Brian Leonard
Well, I was doing a ZFS send/receive to back up a large amount (60 GB) of data, which never completed. A zpool clear at that point just hung and I had to reboot the system, after which it appeared to come up clean. As soon as I tried the backup again I noticed the pool reported the error you see
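
A send/receive backup of that sort usually looks roughly like the following; the snapshot and dataset names here are placeholders, not taken from the thread:
  # zfs snapshot tank/data@backup1
  # zfs send tank/data@backup1 | zfs receive backuppool/data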

[zfs-discuss] zfs rpool corrupt?????

2010-06-30 Thread Tony MacDoodle
Hello, Has anyone encountered the following error message, running Solaris 10 u8 in an LDom?
bash-3.00# devfsadm
devfsadm: write failed for /dev/.devfsadm_dev.lock: Bad exchange descriptor
bash-3.00# zpool status -v rpool
  pool: rpool
 state: DEGRADED
status: One or more devices has experienced
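
When a pool reports DEGRADED with permanent errors, the usual first steps are something like the following sketch; whether they apply here depends on the rest of the status output:
  # zpool scrub rpool
  # zpool status -v rpool
  # zpool clear rpool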

Re: [zfs-discuss] What are requirements for zpool split ?

2010-06-30 Thread Cindy Swearingen
Hey Mitch, The zpool split feature is available in the OpenSolaris release if you upgrade to build 131. You can read about the requirements here: http://hub.opensolaris.org/bin/view/Community+Group+zfs/docs See the ZFS Admin Guide, pages 89-90. Thanks, Cindy
On 06/29/10 13:37, Mitchell Petty
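
In short, splitting a mirrored pool and importing the detached half looks roughly like this (pool names are placeholders, and the pool must be a mirror on a build with the feature):
  # zpool split tank tank2
  # zpool import tank2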

Re: [zfs-discuss] ZFS on Caviar Blue (Hard Drive Recommendations)

2010-06-30 Thread Freddie Cash
On Tue, Jun 29, 2010 at 11:25 AM, Patrick Donnelly batr...@batbytes.com wrote: I googled around but couldn't find anything on whether someone has good or bad experiences with the Caviar *Blue* drives? I saw in the archives Caviar Blacks are *not* recommended for ZFS arrays (excluding

Re: [zfs-discuss] zfs-discuss Digest, Vol 56, Issue 126

2010-06-30 Thread Eric Andersen
On Jun 28, 2010, at 10:03 AM, zfs-discuss-requ...@opensolaris.org wrote: Send zfs-discuss mailing list submissions to zfs-discuss@opensolaris.org To subscribe or unsubscribe via the World Wide Web, visit http://mail.opensolaris.org/mailman/listinfo/zfs-discuss or, via email,

Re: [zfs-discuss] zfs-discuss Digest, Vol 56, Issue 126

2010-06-30 Thread Bob Friesenhahn
I searched and searched but was not able to find your added text in this long quoted message. Please re-submit using the English language in simple ASCII text intended for humans. Thanks, Bob
On Wed, 30 Jun 2010, Eric Andersen wrote: On Jun 28, 2010, at 10:03 AM,

Re: [zfs-discuss] What happens when unmirrored ZIL log device is removed ungracefully

2010-06-30 Thread Edward Ned Harvey
From: Arne Jansen [mailto:sensi...@gmx.net] Edward Ned Harvey wrote: Due to recent experiences, and discussion on this list, my colleague and I performed some tests: Using Solaris 10, fully upgraded. (zpool version 15 is the latest, which does not have the log device removal that was introduced
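
For anyone following along, adding and later removing a dedicated log device looks roughly like this; the device name is a placeholder, and removal only works on pool versions that support log device removal (19 or later):
  # zpool add tank log c4t0d0
  # zpool remove tank c4t0d0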

Re: [zfs-discuss] What happens when unmirrored ZIL log device is removed ungracefully

2010-06-30 Thread Ray Van Dolson
On Wed, Jun 30, 2010 at 09:47:15AM -0700, Edward Ned Harvey wrote: From: Arne Jansen [mailto:sensi...@gmx.net] Edward Ned Harvey wrote: Due to recent experiences, and discussion on this list, my colleague and I performed some tests: Using Solaris 10, fully upgraded. (zpool version 15

Re: [zfs-discuss] Announce: zfsdump

2010-06-30 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Asif Iqbal
It would be nice if I could pipe the zfs send stream to split and then send those split streams over the network to a remote system. It would help sending it over to remote
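
One way to sketch this with standard tools (all paths, names and chunk sizes here are placeholders) is to split the stream into fixed-size pieces and reassemble them with cat on the receiving side:
  # zfs send tank/fs@snap | split -b 1000m - /backup/fs-snap.
  # cat /backup/fs-snap.* | zfs receive tank/fs-restored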

Re: [zfs-discuss] Forcing resilver?

2010-06-30 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Roy Sigurd Karlsbakk
There was some mess-up with switching of drives and an unexpected reboot, so I suddenly have a drive in my pool that is partly resilvered. zpool status shows the pool is

Re: [zfs-discuss] Announce: zfsdump

2010-06-30 Thread Asif Iqbal
On Wed, Jun 30, 2010 at 12:54 PM, Edward Ned Harvey solar...@nedharvey.com wrote: From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Asif Iqbal
It would be nice if I could pipe the zfs send stream to split and then send those split streams

Re: [zfs-discuss] ZFS on Ubuntu

2010-06-30 Thread Roy Sigurd Karlsbakk
- Original Message - I think ZFS on Ubuntu currently is a rather bad idea. See test below with Ubuntu Lucid 10.04 (amd64).
Quick update on this - it seems this is due to a bug in the Linux kernel where it can't deal with partition changes on a drive with mounted filesystems. I'm not

Re: [zfs-discuss] Forcing resilver?

2010-06-30 Thread Roy Sigurd Karlsbakk
This may not work for you, but it worked for me, and I was pleasantly surprised. Replace a drive with itself: zpool replace tank c0t2d0 c0t2d0
I tried that - it didn't work. I replaced the drive with a new one, which worked, and then I made a new zpool on the old drive with zfs-fuse in
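
On a mirror, another way to force a full resilver of one side is to detach the device and attach it again; a rough sketch with placeholder device names (whether it fits this pool's layout is an assumption):
  # zpool detach tank c0t2d0
  # zpool attach tank c0t1d0 c0t2d0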

Re: [zfs-discuss] What happens when unmirrored ZIL log device is removed ungracefully

2010-06-30 Thread Ragnar Sundblad
On 12 apr 2010, at 22.32, Carson Gaspar wrote: Carson Gaspar wrote: Miles Nordin wrote: re == Richard Elling richard.ell...@gmail.com writes: How do you handle the case when a hotplug SATA drive is powered off unexpectedly with data in its write cache? Do you replay the writes, or do

Re: [zfs-discuss] Dedup RAM requirements, vs. L2ARC?

2010-06-30 Thread valrh...@gmail.com
Thanks to everyone for such helpful and detailed answers. Contrary to some of the trolls in other threads, I've had a fantastic experience here, and am grateful to the community. Based on the feedback, I'll upgrade my machine to 8 GB of RAM. I only have two slots on the motherboard, and either
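
As a rough rule of thumb only (the per-entry size is an approximation, not a figure from this thread), each unique block needs a dedup table entry of a few hundred bytes in the ARC:
  1 TB of unique data / 128 KB recordsize ≈ 8 million blocks × ~320 bytes ≈ 2.5 GB of RAM for the DDT
Smaller average block sizes push that figure up sharply.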

Re: [zfs-discuss] What happens when unmirrored ZIL log device is removed ungracefully

2010-06-30 Thread Garrett D'Amore
On Wed, 2010-06-30 at 22:28 +0200, Ragnar Sundblad wrote: To be safe, the protocol needs to be able to discover that the devices (host or disk) have been disconnected and reconnected or have been reset, and that either part's assumptions about the state of the other have to be invalidated. I

Re: [zfs-discuss] Dedup RAM requirements, vs. L2ARC?

2010-06-30 Thread valrh...@gmail.com
Another question on SSDs in terms of performance vs. capacity. Between $150 and $200, there are at least three SSDs that would fit the rough specifications for the L2ARC on my system: 1. Crucial C300, 64 GB: $150: medium performance, medium capacity. 2. OCZ Vertex 2, 50 GB: $180: higher
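
Whichever drive it ends up being, attaching it as L2ARC is a one-liner; a sketch with placeholder pool and device names:
  # zpool add tank cache c3t0d0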

[zfs-discuss] Zpool mirror fail testing - odd resilver behaviour after reconnect

2010-06-30 Thread Matt Connolly
I have an OpenSolaris snv_134 machine with 2 x 1.5TB drives. One is a Samsung Silencer, the other is a dreaded Western Digital Green. I'm testing the mirror for failure by simply yanking out the SATA cable while the machine is running. The system never skips a beat, which is great. But the
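
After reconnecting the cable, the usual way to bring the device back and watch the resilver is something like this (pool and device names are placeholders):
  # zpool online tank c0t1d0
  # zpool status -v tank       (shows resilver progress while one is running)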

Re: [zfs-discuss] Dedup RAM requirements, vs. L2ARC?

2010-06-30 Thread Nicolas Williams
On Wed, Jun 30, 2010 at 01:35:31PM -0700, valrh...@gmail.com wrote: Finally, for my purposes, it doesn't seem like a ZIL is necessary? I'm the only user of the fileserver, so there probably won't be more than two or three computers, maximum, accessing stuff (and writing stuff) remotely. It

Re: [zfs-discuss] Dedup RAM requirements, vs. L2ARC?

2010-06-30 Thread Garrett D'Amore
On Wed, 2010-06-30 at 16:41 -0500, Nicolas Williams wrote: On Wed, Jun 30, 2010 at 01:35:31PM -0700, valrh...@gmail.com wrote: Finally, for my purposes, it doesn't seem like a ZIL is necessary? I'm the only user of the fileserver, so there probably won't be more than two or three computers,

Re: [zfs-discuss] What happens when unmirrored ZIL log device is removed ungracefully

2010-06-30 Thread Ragnar Sundblad
On 30 jun 2010, at 22.46, Garrett D'Amore wrote: On Wed, 2010-06-30 at 22:28 +0200, Ragnar Sundblad wrote: To be safe, the protocol needs to be able to discover that the devices (host or disk) have been disconnected and reconnected or have been reset, and that either part's assumptions about

Re: [zfs-discuss] b134 pool borked!

2010-06-30 Thread Michael Mattsson
Just in case any stray searches find their way here, this is what happened to my pool: http://phrenetic.to/zfs

Re: [zfs-discuss] Kernel Panic on zpool clean

2010-06-30 Thread George
Aha: http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6794136 I think I'll try booting from a b134 Live CD and see if that will let me fix things.
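
From a b134 live CD the recovery-mode import is the usual thing to try; a sketch (the -F/-n options exist in b134, but whether they can rewind this particular pool is an open question):
  # zpool import -f -F -n storage2     (dry run: reports whether a rewind would succeed)
  # zpool import -f -F storage2        (attempts the rewind to a usable txg)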

Re: [zfs-discuss] zfs rpool corrupt?????

2010-06-30 Thread Ian Collins
On 07/ 1/10 01:36 AM, Tony MacDoodle wrote: Hello, Has anyone encountered the following error message, running Solaris 10 u8 in an LDom?
bash-3.00# devfsadm
devfsadm: write failed for /dev/.devfsadm_dev.lock: Bad exchange descriptor
Not specifically. But it is clear from what follows

Re: [zfs-discuss] What happens when unmirrored ZIL log device is removed ungracefully

2010-06-30 Thread Carson Gaspar
Ragnar Sundblad wrote: I was referring to the case where ZFS has written data to the drive but still hasn't issued a cache flush, and before the cache flush the drive is reset. If ZFS finally issues a cache flush and then isn't informed that the drive has been reset, data is lost. I hope this

Re: [zfs-discuss] OCZ Vertex 2 Pro performance numbers

2010-06-30 Thread Fred Liu
Any duration limit on the supercap? How long can it sustain the data? Thanks. Fred
-Original Message- From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of David Magda Sent: Saturday, June 26, 2010 21:48 To: Arne Jansen Cc: 'OpenSolaris ZFS

Re: [zfs-discuss] OCZ Vertex 2 Pro performance numbers

2010-06-30 Thread Bob Friesenhahn
On Wed, 30 Jun 2010, Fred Liu wrote: Any duration limit on the supercap? How long can it sustain the data?
A supercap on an SSD drive only needs to sustain the data until it has been saved (perhaps 10 milliseconds). It is different from a RAID array battery. Bob -- Bob Friesenhahn

Re: [zfs-discuss] OCZ Vertex 2 Pro performance numbers

2010-06-30 Thread Fred Liu
I see. Thanks. Does it have the hardware functionality to detect a power outage and force a cache flush when the cache is enabled? Any more detailed info about the capacitance (farads) of this supercap and how long one discharge will last? Thanks. Fred
-Original Message- From: Bob

Re: [zfs-discuss] Use of blocksize (-b) during zfs zvol create, poor performance

2010-06-30 Thread Mike La Spina
Hi Eff, There are a significant number of variables to work through with dedup and compression enabled. So the first suggestion I have is to disable those features for now so you're not working with too many elements. With those features set aside, an NTFS cluster operation does not equal a 64k raw
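
For reference, the volume block size is fixed at creation time; a sketch with placeholder names and sizes (whether 64K is the right match for the NTFS cluster size here is exactly the question under discussion):
  # zfs create -b 64K -V 100G tank/ntfsvol
  # zfs set compression=off tank/ntfsvol
  # zfs set dedup=off tank/ntfsvol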

Re: [zfs-discuss] Dedup RAM requirements, vs. L2ARC?

2010-06-30 Thread Erik Trimble
On 6/30/2010 2:01 PM, valrh...@gmail.com wrote: Another question on SSDs in terms of performance vs. capacity. Between $150 and $200, there are at least three SSDs that would fit the rough specifications for the L2ARC on my system: 1. Crucial C300, 64 GB: $150: medium performance, medium

Re: [zfs-discuss] OCZ Vertex 2 Pro performance numbers

2010-06-30 Thread Erik Trimble
On 6/30/2010 7:17 PM, Fred Liu wrote: I see. Thanks. Does it have the hardware functionality to detect a power outage and force a cache flush when the cache is enabled? Any more detailed info about the capacitance (farads) of this supercap and how long one discharge will last? Thanks. Fred