[zfs-discuss] zpool split failing

2012-04-16 Thread Matt Keenan

Hi

Attempting to split a mirrored rpool fails with the error:

  Unable to split rpool: pool already exists


I have a laptop whose main disk is mirrored to an external USB drive.
However, as the laptop is not too healthy, I'd like to split the pool into
two pools, attach the external drive to another laptop, and mirror it to
the new laptop's disk.


What I did:

- Booted the laptop from a live DVD.

- Imported the rpool:
  $ zpool import rpool

- Attempted the split:
  $ zpool split rpool rpool-ext

- The split fails with the error:
  Unable to split rpool: pool already exists

- So I tried exporting the pool and re-importing it with a different
  name, and I still got the same error. There are no other zpools on
  the system; both zpool list and zpool export report nothing other
  than the rpool I've just imported.

I'm somewhat stumped... any ideas?

cheers

Matt


[zfs-discuss] Solaris 11/ZFS historical reporting

2012-04-16 Thread Anh Quach
Are there any tools that ship w/ Solaris 11 for historical reporting on things 
like network activity, zpool iops/bandwidth, etc., or is it pretty much 
roll-your-own scripts and whatnot? 

Thanks in advance… 

-Anh




Re: [zfs-discuss] Solaris 11/ZFS historical reporting

2012-04-16 Thread Tomas Forsman
On 16 April, 2012 - Anh Quach sent me these 0,4K bytes:

 Are there any tools that ship w/ Solaris 11 for historical reporting on 
 things like network activity, zpool iops/bandwidth, etc., or is it pretty 
 much roll-your-own scripts and whatnot? 

"zpool iostat 5" is the closest built-in.
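
If you want history out of that, a quick-and-dirty sketch (untested; the
pool name and log path are just examples) is to timestamp each line and
append it to a file:

  zpool iostat rpool 5 | while read line; do
      echo "$(date '+%Y-%m-%d %H:%M:%S') $line"
  done >> /var/tmp/zpool-iostat.log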

/Tomas
-- 
Tomas Forsman, st...@acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se


Re: [zfs-discuss] Solaris 11/ZFS historical reporting

2012-04-16 Thread Bob Friesenhahn

On Mon, 16 Apr 2012, Tomas Forsman wrote:


"zpool iostat 5" is the closest built-in.


Otherwise, switch from Solaris 11 to SmartOS or Illumos.  Lots of good
stuff going on there for monitoring and reporting.  The dtrace.conf
conference seemed like it was pretty interesting.  See
http://smartos.org/blog/.  Lots more good stuff at
http://www.youtube.com/user/deirdres and elsewhere on YouTube.
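
Even on stock Solaris 11, DTrace will give you ad-hoc numbers; a trivial
example that counts block I/O by process until you interrupt it (not
historical, but easy to wrap in a script):

# dtrace -n 'io:::start { @[execname] = count(); }'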


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/


Re: [zfs-discuss] zpool split failing

2012-04-16 Thread Cindy Swearingen

Hi Matt,

I don't have a way to reproduce this issue and I don't know why
this is failing. Maybe someone else does. I know someone who
recently split a root pool running the S11 FCS release without
problems.

I'm not a fan of root pools on external USB devices.

I haven't tested them in a while, but you might try the steps below
instead. Make sure you have a recent snapshot of your rpool on the
unhealthy laptop.

1. Ensure that the existing root pool and disks are healthy.

# zpool status -x

2. Detach the USB disk.

# zpool detach rpool disk-name

3. Connect the USB disk to the new laptop.

4. Force import the pool on the USB disk.

# zpool import -f rpool rpool2

5. Device cleanup steps, something like:

Boot from media and import rpool2 as rpool.
Make sure the device info is visible.
Reset BIOS to boot from this disk.
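
On x86, step 5 might look something like this (untested sketch; the device
name is an example, and the exact boot-block command depends on your
release):

# zpool import -f rpool2 rpool
# zpool status rpool
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t0d0s0

On SPARC, installboot with the platform bootblk would replace the
installgrub step.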

On 04/16/12 04:12, Matt Keenan wrote:

 Attempting to split a mirrored rpool fails with the error:

   Unable to split rpool: pool already exists

 [...]



Re: [zfs-discuss] Solaris 11/ZFS historical reporting

2012-04-16 Thread Hans Duedal
On Mon, Apr 16, 2012 at 9:18 PM, Anh Quach a...@blackandcode.com wrote:

 Are there any tools that ship w/ Solaris 11 for historical reporting on 
 things like network activity, zpool iops/bandwidth, etc., or is it pretty 
 much roll-your-own scripts and whatnot?

I find Brendan's nicstat useful for a nice overview of NIC activity:
http://www.brendangregg.com/K9Toolkit/nicstat.c
gcc is available from the package repo; if you install it and the
system/header package for the dependencies, you can compile nicstat.
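
For example (from memory, so treat the package and library names as a
sketch):

$ pkg install gcc system/header
$ gcc -o nicstat nicstat.c -lkstat -lsocket -lnsl
$ ./nicstat 5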

For IOPS, zpool iostat as already suggested is fine, especially with
-v, but also take a look at iostat -xnc 2. Neither gives you historical
data, but you can always feed the output to rrdtool :)
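
Something like this rough sketch could work (untested; the device name and
RRD layout are just examples):

$ rrdtool create disk.rrd --step 10 \
      DS:rps:GAUGE:30:0:U DS:wps:GAUGE:30:0:U \
      RRA:AVERAGE:0.5:1:8640
$ iostat -xn 10 | nawk '$NF == "c0t0d0" {
      system("rrdtool update disk.rrd N:" $1 ":" $2) }'

With iostat -xn, r/s is the first column, w/s the second, and the device
name is the last.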
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Drive upgrades

2012-04-16 Thread Peter Jeremy
On 2012-Apr-14 02:30:54 +1000, Tim Cook t...@cook.ms wrote:
 You will however have an issue replacing them if one should fail.  You need
 to have the same block count to replace a device, which is why I asked for
 a right-sizing years ago.

The traditional approach to this is to slice the disk yourself, so you have
a slice of known size plus a dummy slice of a couple of GB in case a
replacement is slightly smaller.  Unfortunately, ZFS on Solaris disables
the drive cache if you don't give it a complete disk, so this approach
incurs a significant performance overhead there.  FreeBSD leaves the drive
cache enabled in either case.  I'm not sure how OI or Linux behave.
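
If you do go the slicing route on Solaris, one common trick (sketch only;
device names are examples) is to copy a known-good label onto each new
disk so every member ends up with identical slice sizes:

# prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t1d0s2

with the dummy slice left unused at the end of the disk.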

-- 
Peter Jeremy




Re: [zfs-discuss] Drive upgrades

2012-04-16 Thread Edward Ned Harvey
 From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
 boun...@opensolaris.org] On Behalf Of Peter Jeremy

 On 2012-Apr-14 02:30:54 +1000, Tim Cook t...@cook.ms wrote:
 You will however have an issue replacing them if one should fail.  You
 need to have the same block count to replace a device, which is why I
 asked for a right-sizing years ago.

 The traditional approach to this is to slice the disk yourself, so you
 have a slice of known size plus a dummy slice of a couple of GB in case a
 replacement is slightly smaller.  Unfortunately, ZFS on Solaris disables
 the drive cache if you don't give it a complete disk, so this approach
 incurs a significant performance overhead there.

It's not so much that ZFS disables it as that it doesn't enable it.  By
default the on-disk write-back cache is disabled for everything, but if
you're using the whole disk for ZFS, then ZFS enables it, because that's
known to be safe.  (Unless... nevermind.)

Whenever I've deployed ZFS on partitions, I just script the enabling of
the write-back cache.  So Peter's point stands, but it's solvable.
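
For example, something along these lines (a fragile sketch; format's menu
prompts vary by release, and the device name is an example):

# format -e -d c0t0d0 <<EOF
cache
write_cache
enable
y
quit
quit
EOF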



Re: [zfs-discuss] Drive upgrades

2012-04-16 Thread Richard Elling
For the archives...

On Apr 16, 2012, at 3:37 PM, Peter Jeremy wrote:

 The traditional approach to this is to slice the disk yourself, so you
 have a slice of known size plus a dummy slice of a couple of GB in case a
 replacement is slightly smaller.  Unfortunately, ZFS on Solaris disables
 the drive cache if you don't give it a complete disk, so this approach
 incurs a significant performance overhead there.  FreeBSD leaves the
 drive cache enabled in either case.  I'm not sure how OI or Linux behave.

Write-back cache enablement is toxic for file systems that do not issue
cache-flush commands, such as Solaris' UFS.  In the early days of ZFS, on
Solaris 10 or before ZFS was bootable on OpenSolaris, it was not uncommon
to have ZFS and UFS on the same system.

NB: there are a number of consumer-grade IDE/*ATA disks that ignore
requests to disable the write buffer.  Hence, it is not always a win to
enable a write buffer that cannot be disabled.
 -- richard

--
ZFS Performance and Training
richard.ell...@richardelling.com
+1-760-896-4422