On Thu, Aug 04, 2011 at 02:43:30PM -0700, Larry Liu wrote:
>
>> root@nexenta:/export/home/eugen# zpool add tank log /dev/dsk/c3d1p0
>
> You should use c3d1s0 here.
>
>> root@nexenta:/export/home/eugen# zpool add tank cache /dev/dsk/c3d1p1
>
> Use c3d1s1.
Thanks, that did the trick!
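For the archives, the commands that did the trick were presumably the same two with slices in place of the fdisk partitions:

root@nexenta:/export/home/eugen# zpool add tank log /dev/dsk/c3d1s0
root@nexenta:/export/home/eugen# zpool add tank cache /dev/dsk/c3d1s1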
I'm a bit solaristarded, can somebody please help?
I've got a disk:
c3d1  /pci@0,0/pci-ide@14,1/ide@0/cmdk@1,0
which looks like this:
partition> print
Current partition table (original):
Total disk sectors available: 39074830 + 16384 (reserved sectors)
Part      Tag    Flag     First Sector
Try mirrors. You will get much better multi-user performance, and you can
easily split the mirrors across enclosures.
If your priority is performance over capacity, you could experiment with n-way
mirrors, since more mirrors will load-balance reads better than more stripes.
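A quick sketch of the difference; the pool and device names below are made up, not taken from the original post:

# two striped 2-way mirrors: capacity of two disks, reads spread over four
zpool create tank mirror c1t0d0 c1t1d0 mirror c2t0d0 c2t1d0

# one 3-way mirror: capacity of a single disk, but any of the three sides can serve reads
zpool create tank mirror c1t0d0 c1t1d0 c2t0d0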
Updates to my problem:
1. The destroy operation appears to be restarting from the same point
after the system hangs and has to be rebooted. Oracle gave me the
following to track progress:
echo '::pgrep "zpool$" |::walk thread|::findstack -v' | mdb -k | grep dsl_dataset_destroy
then take the first argument.
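In case it helps anyone else watching a stuck destroy: ::findstack -v prints each frame as function+offset(arg0, arg1, ...), so my reading is that the value to record is the first number in parentheses on the dsl_dataset_destroy line. A rough way to log that over time, re-running the exact pipeline Oracle supplied (the interval and log path are my own choice; run as root):

while :; do
    date
    echo '::pgrep "zpool$" |::walk thread|::findstack -v' | mdb -k | grep dsl_dataset_destroy
    sleep 600
done >> /var/tmp/destroy-progress.log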