Re: [zfs-discuss] trouble adding log and cache on SSD to a pool

2011-08-05 Thread Eugen Leitl
On Thu, Aug 04, 2011 at 11:58:47PM +0200, Eugen Leitl wrote: On Thu, Aug 04, 2011 at 02:43:30PM -0700, Larry Liu wrote: root@nexenta:/export/home/eugen# zpool add tank log /dev/dsk/c3d1p0 You should use c3d1s0 here. Th root@nexenta:/export/home/eugen# zpool add tank cache
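Larry's fix, sketched below: on Solaris, a name ending in p0 refers to the whole fdisk partition, while s0 is the Solaris slice, and log/cache vdevs should be added on the slice. The pool name tank and device c3d1 come from the thread; the second slice used for cache is a hypothetical layout, and the zpool lines are left as comments since they need a live Nexenta system.

```shell
# Derive the slice name (s0) from the fdisk partition name (p0).
disk="c3d1p0"
slice="${disk%p0}s0"
echo "$slice"    # prints: c3d1s0
# On the live Nexenta box (not runnable here):
# zpool add tank log   "$slice"    # separate intent log (slog) on the SSD slice
# zpool add tank cache c3d1s1      # L2ARC; this slice layout is an assumption
# zpool status tank                # confirm the new log and cache vdevs appear
```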

[zfs-discuss] Kernel panic on zpool import. 200G of data inaccessible! assertion failed: zvol_get_stats(os, nv) == 0

2011-08-05 Thread Stu Whitefish
System: snv_151a 64 bit on Intel. Error: panic[cpu0] assertion failed: zvol_get_stats(os, nv) == 0, file: ../../common/fs/zfs/zfs_ioctl.c, line: 1815 Failure first seen on Solaris 10, update 8 History: I recently received two 320G drives and realized from reading this list it would have been

Re: [zfs-discuss] trouble adding log and cache on SSD to a pool

2011-08-05 Thread Eugen Leitl
I think I've found the source of my problem: I need to reflash the N36L BIOS to a hacked russian version (sic) which allows AHCI in the 5th drive bay http://terabyt.es/2011/07/02/nas-build-guide-hp-n36l-microserver-with-nexenta-napp-it/ ... Update BIOS and install hacked Russian BIOS The HP

[zfs-discuss] Disable ZIL - persistent

2011-08-05 Thread Edward Ned Harvey
After a certain rev, I know you can set the sync property, and it takes effect immediately, and it's persistent across reboots. But that doesn't apply to Solaris 10. My question: Is there any way to make Disabled ZIL a normal mode of operations in solaris 10? Particularly: If I do

Re: [zfs-discuss] Disable ZIL - persistent

2011-08-05 Thread Darren J Moffat
On 08/05/11 13:11, Edward Ned Harvey wrote: After a certain rev, I know you can set the sync property, and it takes effect immediately, and it's persistent across reboots. But that doesn't apply to Solaris 10. My question: Is there any way to make Disabled ZIL a normal mode of operations in

Re: [zfs-discuss] Disable ZIL - persistent

2011-08-05 Thread Tomas Ögren
On 05 August, 2011 - Darren J Moffat sent me these 0,9K bytes: On 08/05/11 13:11, Edward Ned Harvey wrote: After a certain rev, I know you can set the sync property, and it takes effect immediately, and it's persistent across reboots. But that doesn't apply to Solaris 10. My question: Is

Re: [zfs-discuss] Disable ZIL - persistent

2011-08-05 Thread Michael Sullivan
On 5 Aug 11, at 08:14 , Darren J Moffat wrote: On 08/05/11 13:11, Edward Ned Harvey wrote: My question: Is there any way to make Disabled ZIL a normal mode of operations in solaris 10? Particularly: If I do this echo zil_disable/W0t1 | mdb -kw then I have to remount the filesystem. It's

Re: [zfs-discuss] Wrong rpool used after reinstall!

2011-08-05 Thread Brian Wilson
On 8/3/2011 5:47 PM, Ian Collins wrote: On 08/ 4/11 01:29 AM, Stuart James Whitefish wrote: I have Solaris on Sparc boxes available if it would help to do a net install or jumpstart. I have never done those and it looks complicated, although I think I may be able to get to the point in the

Re: [zfs-discuss] Expand ZFS storage.

2011-08-05 Thread Nix
Thanks Guys... :-) -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] Wrong rpool used after reinstall!

2011-08-05 Thread Stuart James Whitefish
Jim wrote: But I may be wrong, and anyway the single user shell in the u9 DVD also panics when I try to import tank so maybe that won't help. Ian wrote: Put your old drive in a USB enclosure and connect it to another system in order to read back the data. Given that update 9 can't import

[zfs-discuss] Kernel panic on zpool import. 200G of data inaccessible!

2011-08-05 Thread Stuart James Whitefish
I am opening a new thread since I found somebody else reported a similar failure in May and I didn't see a resolution hopefully this post will be easier to find for people with similar problems. Original thread was http://opensolaris.org/jive/thread.jspa?threadID=140861 System: snv_151a 64 bit

Re: [zfs-discuss] Wrong rpool used after reinstall!

2011-08-05 Thread Stuart James Whitefish
I'm opening a new thread since the original subject was not as helpful and I saw a similar problem mentioned in May of this year (2011) and others going back to 2009. New thread is found at http://opensolaris.org/jive/thread.jspa?threadID=140899

Re: [zfs-discuss] Disable ZIL - persistent

2011-08-05 Thread Richard Elling
On Aug 5, 2011, at 6:14 AM, Darren J Moffat darr...@opensolaris.org wrote: On 08/05/11 13:11, Edward Ned Harvey wrote: After a certain rev, I know you can set the sync property, and it takes effect immediately, and it's persistent across reboots. But that doesn't apply to Solaris 10. My

Re: [zfs-discuss] Kernel panic on zpool import. 200G of data inaccessible!

2011-08-05 Thread Mike Gerdts
On Thu, Aug 4, 2011 at 2:47 PM, Stuart James Whitefish swhitef...@yahoo.com wrote: # zpool import -f tank http://imageshack.us/photo/my-images/13/zfsimportfail.jpg/ I encourage you to open a support case and ask for an escalation on CR 7056738. -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] zpool import starves machine of memory

2011-08-05 Thread Paul Kraus
Another update: The configuration of the zpool is 45 x 1 TB drives in three vdev's, each of 15 drives. We should have a net capacity of between 30 and 36 TB (and that agrees with my memory of the pool). I ran zdb -e -d against the pool (not imported) and totaled the size of the datasets and
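Paul's approach reads an exported pool's metadata without importing it: zdb -e works from the on-disk labels, and -d lists the datasets. How the per-dataset sizes were totaled isn't shown in the post; a plausible sketch is to sum them with awk, demonstrated here on hypothetical GB figures rather than real zdb output.

```shell
# Inspect an exported pool without importing it (needs the actual pool):
#   zdb -e -d <poolname>     # -e: use on-disk labels, -d: list datasets
# Summing per-dataset sizes with awk; the input values are made up:
printf '1024\n2048\n512\n' | awk '{s += $1} END {print s " GB"}'
# prints: 3584 GB
```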

Re: [zfs-discuss] Wrong rpool used after reinstall!

2011-08-05 Thread Bill
On Thu, Aug 04, 2011 at 03:52:39AM -0700, Stuart James Whitefish wrote: Jim wrote: But I may be wrong, and anyway the single user shell in the u9 DVD also panics when I try to import tank so maybe that won't help. Ian wrote: Put your old drive in a USB enclosure and connect it to

Re: [zfs-discuss] Wrong rpool used after reinstall!

2011-08-05 Thread Bob Friesenhahn
On Fri, 5 Aug 2011, Bill wrote: True but I haven't found a way to get an ISO onto a USB that my system can boot from it. I was using DD to copy the iso to the usb drive. Is there some other way? Maybe give http://unetbootin.sourceforge.net/ a try. This package seems to list support for
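The dd invocation being discussed looks like the sketch below. The target device name is a placeholder (writing to the wrong device is destructive), so the runnable part demonstrates the same copy syntax on scratch files instead.

```shell
# Real usage (placeholder device name; verify the device before writing):
#   dd if=sol-10-u9.iso of=/dev/rdsk/cXtYd0p0 bs=1M
# Safe demonstration of the same copy on temporary files:
src=$(mktemp); dst=$(mktemp)
printf 'ISO9660' > "$src"
dd if="$src" of="$dst" bs=1M 2>/dev/null
cat "$dst"    # prints: ISO9660
rm -f "$src" "$dst"
```

Note that dd produces a raw image of the ISO, which only boots if the system's firmware can boot that layout from USB; that is why unetbootin, which rewrites the image into a bootable USB layout, was suggested.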

Re: [zfs-discuss] Wrong rpool used after reinstall!

2011-08-05 Thread Ian Collins
On 08/ 4/11 10:52 PM, Stuart James Whitefish wrote: Ian wrote: Put your old drive in a USB enclosure and connect it to another system in order to read back the data. Given that update 9 can't import the pool is this really worth trying? I would use a newer (express maybe) system. Most

Re: [zfs-discuss] Large scale performance query

2011-08-05 Thread Orvar Korvar
Are mirrors really a realistic alternative? I mean, if I have to resilver a raid with 3TB discs, it can take days I suspect. With 4TB disks it can take a week, maybe. So, if I use mirrors and one disk breaks, then I only have single redundancy while the mirror repairs. The repair will take long

[zfs-discuss] Problem booting after zfs upgrade

2011-08-05 Thread stuart anderson
After upgrading to zpool version 29/zfs version 5 on an S10 test system via the kernel patch 144501-19 it will now boot only as far as the grub menu. What is a good Solaris rescue image that I can boot that will allow me to import this rpool to look at it (given the newer version)?

Re: [zfs-discuss] Large scale performance query

2011-08-05 Thread Ian Collins
On 08/ 6/11 10:42 AM, Orvar Korvar wrote: Are mirrors really a realistic alternative? To what? Some context would be helpful. I mean, if I have to resilver a raid with 3TB discs, it can take days I suspect. With 4TB disks it can take a week, maybe. So, if I use mirrors and one disk breaks,

Re: [zfs-discuss] Problem booting after zfs upgrade

2011-08-05 Thread Ian Collins
On 08/ 6/11 11:48 AM, stuart anderson wrote: After upgrading to zpool version 29/zfs version 5 on an S10 test system via the kernel patch 144501-19 it will now boot only as far as the grub menu. What is a good Solaris rescue image that I can boot that will allow me to import this rpool

Re: [zfs-discuss] Large scale performance query

2011-08-05 Thread Rob Cohen
Generally, mirrors resilver MUCH faster than RAIDZ, and you only lose redundancy on that stripe, so combined, you're much closer to RAIDZ2 odds than you might think, especially with hot spare(s), which I'd recommend. When you're talking about IOPS, each stripe can support 1 simultaneous user.
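Rob's IOPS point can be put in rough numbers. Since a RAIDZ vdev serves roughly one random I/O at a time, random-read throughput scales with the count of top-level vdevs, not disks. The figures below (100 IOPS per vdev, a 45-disk pool split either into three 15-disk RAIDZ stripes or 22 mirror pairs) are illustrative assumptions, not from the post.

```shell
disks=45; iops_per_vdev=100        # ~100 random IOPS per vdev: an assumption
raidz_vdevs=$(( disks / 15 ))      # three 15-disk RAIDZ stripes
mirror_vdevs=$(( disks / 2 ))      # 22 mirror pairs (one disk left over)
echo $(( raidz_vdevs  * iops_per_vdev ))   # prints: 300
echo $(( mirror_vdevs * iops_per_vdev ))   # prints: 2200
```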

Re: [zfs-discuss] Disable ZIL - persistent

2011-08-05 Thread Edward Ned Harvey
From: Darren J Moffat [mailto:darr...@opensolaris.org] Sent: Friday, August 05, 2011 10:14 AM echo "set zfs:zil_disable = 1" >> /etc/system This is a great way to cure /etc/system viruses :-) LOL! :-) Thank you.
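The two Solaris 10 approaches mentioned in this thread, sketched side by side: the mdb line changes the running kernel only (and filesystems must be remounted to pick it up), while the /etc/system line survives reboots. The append is demonstrated on a scratch file rather than the real /etc/system.

```shell
# Temporary (running kernel only; remount ZFS filesystems afterwards):
#   echo zil_disable/W0t1 | mdb -kw
# Persistent across reboots (note the append, not overwrite):
#   echo "set zfs:zil_disable = 1" >> /etc/system
# Demonstrate the append on a scratch file instead of the real /etc/system:
tmp=$(mktemp)
echo "set zfs:zil_disable = 1" >> "$tmp"
grep -c 'zil_disable' "$tmp"    # prints: 1
rm -f "$tmp"
```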

Re: [zfs-discuss] Problem booting after zfs upgrade

2011-08-05 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Ian Collins On 08/ 6/11 11:48 AM, stuart anderson wrote: After upgrading to zpool version 29/zfs version 5 on a S10 test system via the kernel patch 144501-19 it will now boot only as far