On Thu, Aug 04, 2011 at 11:58:47PM +0200, Eugen Leitl wrote:
On Thu, Aug 04, 2011 at 02:43:30PM -0700, Larry Liu wrote:
root@nexenta:/export/home/eugen# zpool add tank log /dev/dsk/c3d1p0
You should use c3d1s0 here.
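For example (same disk assumed; c3d1p0 is the whole fdisk partition, while c3d1s0 is slice 0 of the Solaris partition), the corrected command would be roughly:

# zpool add tank log c3d1s0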
Th
root@nexenta:/export/home/eugen# zpool add tank cache
System: snv_151a 64 bit on Intel.
Error: panic[cpu0] assertion failed: zvol_get_stats(os, nv) == 0,
file: ../../common/fs/zfs/zfs_ioctl.c, line: 1815
Failure first seen on Solaris 10, update 8
History:
I recently received two 320G drives and realized from reading this list it
would have been
I think I've found the source of my problem: I need to reflash
the N36L BIOS to a hacked Russian version (sic) which allows
AHCI in the 5th drive bay:
http://terabyt.es/2011/07/02/nas-build-guide-hp-n36l-microserver-with-nexenta-napp-it/
...
Update BIOS and install hacked Russian BIOS
The HP
After a certain rev, I know you can set the sync property, and it takes
effect immediately, and it's persistent across reboots. But that doesn't
apply to Solaris 10.
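On builds that do have it, a minimal sketch (dataset name hypothetical):

# zfs set sync=disabled tank/myfs    (takes effect immediately, persists across reboots)
# zfs get sync tank/myfs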
My question: Is there any way to make Disabled ZIL a normal mode of
operations in Solaris 10? Particularly:
If I do
On 08/05/11 13:11, Edward Ned Harvey wrote:
After a certain rev, I know you can set the sync property, and it
takes effect immediately, and it's persistent across reboots. But that
doesn't apply to Solaris 10.
My question: Is there any way to make Disabled ZIL a normal mode of
operations in
On 05 August, 2011 - Darren J Moffat sent me these 0,9K bytes:
On 08/05/11 13:11, Edward Ned Harvey wrote:
After a certain rev, I know you can set the sync property, and it
takes effect immediately, and it's persistent across reboots. But that
doesn't apply to Solaris 10.
My question: Is
On 5 Aug 11, at 08:14 , Darren J Moffat wrote:
On 08/05/11 13:11, Edward Ned Harvey wrote:
My question: Is there any way to make Disabled ZIL a normal mode of
operations in Solaris 10? Particularly:
If I do this: echo zil_disable/W0t1 | mdb -kw, then I have to remount
the filesystem. It's
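End to end, the mdb route being described looks roughly like this (dataset name hypothetical; the tunable is read when a filesystem's ZIL is opened, hence the remount):

# echo zil_disable/W0t1 | mdb -kw    (write decimal 1 into the running kernel's zil_disable)
# zfs umount tank/myfs
# zfs mount tank/myfs                (remount so the setting takes effect)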
On 8/3/2011 5:47 PM, Ian Collins wrote:
On 08/ 4/11 01:29 AM, Stuart James Whitefish wrote:
I have Solaris on SPARC boxes available if it would help to do a net
install or JumpStart. I have never done those and it looks complicated,
although I think I may be able to get to the point in the
Thanks Guys... :-)
Jim wrote:
But I may be wrong, and anyway the single user shell in the u9 DVD also
panics when I try to import tank so maybe that won't help.
Ian wrote:
Put your old drive in a USB enclosure and connect it
to another system in order to read back the data.
Given that update 9 can't import
I am opening a new thread since I found somebody else reported a similar
failure in May and I didn't see a resolution; hopefully this post will be
easier to find for people with similar problems. The original thread was
http://opensolaris.org/jive/thread.jspa?threadID=140861
System: snv_151a 64 bit
I'm opening a new thread since the original subject was not as helpful, and I
saw a similar problem mentioned in May of this year (2011) and others going
back to 2009. The new thread is found at
http://opensolaris.org/jive/thread.jspa?threadID=140899
On Aug 5, 2011, at 6:14 AM, Darren J Moffat darr...@opensolaris.org wrote:
On 08/05/11 13:11, Edward Ned Harvey wrote:
After a certain rev, I know you can set the sync property, and it
takes effect immediately, and it's persistent across reboots. But that
doesn't apply to Solaris 10.
My
On Thu, Aug 4, 2011 at 2:47 PM, Stuart James Whitefish
swhitef...@yahoo.com wrote:
# zpool import -f tank
http://imageshack.us/photo/my-images/13/zfsimportfail.jpg/
I encourage you to open a support case and ask for an escalation on CR 7056738.
--
Mike Gerdts
http://mgerdts.blogspot.com/
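For what it's worth, newer builds (not necessarily the affected install) offer import options worth trying before anything destructive; a sketch, assuming those flags are available:

# zpool import -o readonly=on -f tank    (read-only import; avoids replaying anything)
# zpool import -F -n tank                (dry-run rewind recovery: report what would be lost)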
Another update:
The configuration of the zpool is 45 x 1 TB drives in three
vdevs, each of 15 drives. We should have a net capacity of between 30
and 36 TB (and that agrees with my memory of the pool). I ran zdb -e
-d against the pool (not imported) and totaled the size of the
datasets and
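For anyone following along, the invocation described is roughly (pool name from the thread; output format varies by build):

# zdb -e -d tank    (-e reads an exported/non-imported pool, -d lists its datasets)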
On Thu, Aug 04, 2011 at 03:52:39AM -0700, Stuart James Whitefish wrote:
Jim wrote:
But I may be wrong, and anyway the single user shell in the u9 DVD also
panics when I try to import tank so maybe that won't help.
Ian wrote:
Put your old drive in a USB enclosure and connect it
to
On Fri, 5 Aug 2011, Bill wrote:
True, but I haven't found a way to get an ISO onto a USB drive that my system
can boot from. I was using dd to copy the ISO to the USB drive. Is there some
other way?
Maybe give http://unetbootin.sourceforge.net/ a try.
This package seems to list support for
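For reference, the plain dd approach that failed would look something like this on a Linux helper box (image name and device hypothetical; many older ISOs are not hybrid images, so a raw copy may simply not be USB-bootable, which is what unetbootin works around):

# dd if=sol-10-u9-ga-x86-dvd.iso of=/dev/sdX bs=4M conv=fsync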
On 08/ 4/11 10:52 PM, Stuart James Whitefish wrote:
Ian wrote:
Put your old drive in a USB enclosure and connect it
to another system in order to read back the data.
Given that update 9 can't import the pool, is this really worth trying?
I would use a newer (Express, maybe) system.
Most
Are mirrors really a realistic alternative? I mean, if I have to resilver a RAID
with 3 TB disks, it can take days, I suspect. With 4 TB disks it can take a week,
maybe. So, if I use mirrors and one disk breaks, then I only have single
redundancy while the mirror repairs. Repair will take long
After upgrading to zpool version 29/zfs version 5 on an S10 test system via the
kernel patch 144501-19, it will now boot only as far as the GRUB menu.
What is a good Solaris rescue image that I can boot that will allow me to
import this rpool to look at it (given the newer version)?
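Once booted from media new enough to understand zpool version 29, the usual pattern is an import under an alternate root; a sketch (altroot path hypothetical):

# zpool import -f -R /a rpool    (-R sets an altroot and avoids touching the cache file)
# zfs list -r rpool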
On 08/ 6/11 10:42 AM, Orvar Korvar wrote:
Are mirrors really a realistic alternative?
To what? Some context would be helpful.
I mean, if I have to resilver a RAID with 3 TB disks, it can take days, I
suspect. With 4 TB disks it can take a week, maybe. So, if I use mirrors and one
disk breaks,
On 08/ 6/11 11:48 AM, stuart anderson wrote:
After upgrading to zpool version 29/zfs version 5 on an S10 test system via the
kernel patch 144501-19, it will now boot only as far as the GRUB menu.
What is a good Solaris rescue image that I can boot that will allow me to
import this rpool
Generally, mirrors resilver MUCH faster than RAIDZ, and you only lose
redundancy on that stripe, so combined, you're much closer to RAIDZ2 odds than
you might think, especially with hot spare(s), which I'd recommend.
When you're talking about IOPS, each stripe can support 1 simultaneous user.
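As a concrete sketch of that layout (device names hypothetical): a pool of two-way mirrors plus a hot spare, where a failure resilvers one disk's worth of data rather than rebuilding an entire RAIDZ stripe:

# zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0 mirror c0t4d0 c0t5d0 spare c0t6d0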
From: Darren J Moffat [mailto:darr...@opensolaris.org]
Sent: Friday, August 05, 2011 10:14 AM
echo set zfs:zil_disable = 1 > /etc/system
This is a great way to cure /etc/system viruses :-)
LOL!
:-)
Thank you.
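For anyone copying that line: the safe form appends with >> rather than clobbering the file with >, e.g.:

# echo 'set zfs:zil_disable = 1' >> /etc/system    (append; a lone > would wipe /etc/system, hence the joke)
# reboot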
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Ian Collins
On 08/ 6/11 11:48 AM, stuart anderson wrote:
After upgrading to zpool version 29/zfs version 5 on an S10 test system via
the kernel patch 144501-19, it will now boot only as far