Re: [zfs-discuss] zfs lists discrepancy after added a new vdev to pool

2010-08-30 Thread Darin Perusich

On Saturday, August 28, 2010 06:04:17 am Mattias Pantzare wrote:
 On Sat, Aug 28, 2010 at 02:54, Darin Perusich
 darin.perus...@cognigencorp.com wrote:
  Hello All,
  
  I'm sure this has been discussed previously but I haven't been able to
  find an answer to this. I've added another raidz1 vdev to an existing
  storage pool and the increased available storage isn't reflected in the
  'zfs list' output. Why is this?
  
  The system in question is running Solaris 10 5/09 s10s_u7wos_08, kernel
  Generic_139555-08. The system does not have the latest patches, which
  might be the cure.
  
  Thanks!
  
 
 I think you have to explain your problem more; isn't 392G more than 196G?

This is actually the wrong output; it was the end of a LONG day. Here's the
correct output.

zpool create datapool raidz1 c1t50060E800042AA70d0 c1t50060E800042AA70d1
zpool list
NAME       SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
datapool   398G   191K   398G     0%  ONLINE  -

zfs list
NAME       USED  AVAIL  REFER  MOUNTPOINT
datapool    91K   196G     1K  /datapool

zpool add datapool raidz c1t50060E800042AA70d2 c1t50060E800042AA70d3

zpool list
NAME       SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
datapool   796G   231K   796G     0%  ONLINE  -

zfs list
NAME       USED  AVAIL  REFER  MOUNTPOINT
datapool   111K   392G    18K  /datapool

-- 
Darin Perusich
Unix Systems Administrator
Cognigen Corporation
395 Youngs Rd.
Williamsville, NY 14221
Phone: 716-633-3463
Email: darin...@cognigencorp.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs lists discrepancy after added a new vdev to pool

2010-08-30 Thread Darin Perusich

On Saturday, August 28, 2010 12:27:36 am Edho P Arief wrote:
 On Sat, Aug 28, 2010 at 7:54 AM, Darin Perusich
 darin.perus...@cognigencorp.com wrote:
  Hello All,
  
  I'm sure this has been discussed previously but I haven't been able to
  find an answer to this. I've added another raidz1 vdev to an existing
  storage pool and the increased available storage isn't reflected in the
  'zfs list' output. Why is this?
 
 you must do zpool export followed by zpool import

I tried this but it didn't have any effect.

-- 
Darin Perusich
Unix Systems Administrator
Cognigen Corporation
395 Youngs Rd.
Williamsville, NY 14221
Phone: 716-633-3463
Email: darin...@cognigencorp.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs lists discrepancy after added a new vdev to pool

2010-08-30 Thread Darin Perusich

On Saturday, August 28, 2010 05:56:27 am Tomas Ögren wrote:
 On 27 August, 2010 - Darin Perusich sent me these 2,1K bytes:
  Hello All,
  
  I'm sure this has been discussed previously but I haven't been able to
  find an answer to this. I've added another raidz1 vdev to an existing
  storage pool and the increased available storage isn't reflected in the
  'zfs list' output. Why is this?
  
  The system in question is running Solaris 10 5/09 s10s_u7wos_08, kernel
  Generic_139555-08. The system does not have the latest patches, which
  might be the cure.
  
  Thanks!
  
  Here's what I'm seeing.
  zpool create datapool raidz1 c1t50060E800042AA70d0  c1t50060E800042AA70d1
 
 Just FYI, this is an inefficient variant of a mirror: it needs more CPU
 and gives lower performance.
 

This is a testing setup; the production pool is currently a single raidz1 vdev
across 6 disks. Thanks for the heads-up though.

-- 
Darin Perusich
Unix Systems Administrator
Cognigen Corporation
395 Youngs Rd.
Williamsville, NY 14221
Phone: 716-633-3463
Email: darin...@cognigencorp.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs lists discrepancy after added a new vdev to pool

2010-08-30 Thread Richard Elling
This is a FAQ: "Why doesn't the space that is reported by the zpool list
command and the zfs list command match?"
http://hub.opensolaris.org/bin/view/Community+Group+zfs/faq
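
Roughly, for your pool (a sketch, assuming each LUN presents about 199G of raw
space): zpool list reports raw capacity with parity included, while zfs list
reports the space datasets can actually use once raidz parity is deducted.

  zpool list:  2 vdevs x 2 disks x ~199G raw         = ~796G
  zfs list:    2 vdevs x 1 data disk x ~196G usable  = ~392G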

 -- richard

On Aug 30, 2010, at 5:47 AM, Darin Perusich wrote:

 
 On Saturday, August 28, 2010 06:04:17 am Mattias Pantzare wrote:
 On Sat, Aug 28, 2010 at 02:54, Darin Perusich
 darin.perus...@cognigencorp.com wrote:
 Hello All,
 
 I'm sure this has been discussed previously but I haven't been able to
 find an answer to this. I've added another raidz1 vdev to an existing
 storage pool and the increased available storage isn't reflected in the
 'zfs list' output. Why is this?
 
 The system in question is running Solaris 10 5/09 s10s_u7wos_08, kernel
 Generic_139555-08. The system does not have the latest patches, which
 might be the cure.
 
 Thanks!
 
 
 I think you have to explain your problem more; isn't 392G more than 196G?
 
 This is actually the wrong output; it was the end of a LONG day. Here's the
 correct output.
 
 zpool create datapool raidz1 c1t50060E800042AA70d0 c1t50060E800042AA70d1
 zpool list
 NAME       SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
 datapool   398G   191K   398G     0%  ONLINE  -
 
 zfs list
 NAME       USED  AVAIL  REFER  MOUNTPOINT
 datapool    91K   196G     1K  /datapool
 
 zpool add datapool raidz c1t50060E800042AA70d2 c1t50060E800042AA70d3
 
 zpool list
 NAME       SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
 datapool   796G   231K   796G     0%  ONLINE  -
 
 zfs list
 NAME       USED  AVAIL  REFER  MOUNTPOINT
 datapool   111K   392G    18K  /datapool
 
 -- 
 Darin Perusich
 Unix Systems Administrator
 Cognigen Corporation
 395 Youngs Rd.
 Williamsville, NY 14221
 Phone: 716-633-3463
 Email: darin...@cognigencorp.com
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

-- 
OpenStorage Summit, October 25-27, Palo Alto, CA
http://nexenta-summit2010.eventbrite.com
ZFS and performance consulting
http://www.RichardElling.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs lists discrepancy after added a new vdev to pool

2010-08-28 Thread Tomas Ögren
On 27 August, 2010 - Darin Perusich sent me these 2,1K bytes:

 Hello All,
 
 I'm sure this has been discussed previously but I haven't been able to find
 an answer to this. I've added another raidz1 vdev to an existing storage pool
 and the increased available storage isn't reflected in the 'zfs list' output.
 Why is this?
 
 The system in question is running Solaris 10 5/09 s10s_u7wos_08, kernel
 Generic_139555-08. The system does not have the latest patches, which might
 be the cure.
 
 Thanks!
 
 Here's what I'm seeing.
 zpool create datapool raidz1 c1t50060E800042AA70d0  c1t50060E800042AA70d1

Just FYI, this is an inefficient variant of a mirror: it needs more CPU
and gives lower performance.
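
If two-way redundancy is all you need, a plain mirror of the same devices is
the usual choice; a sketch, reusing the device names from your example:

  zpool create datapool mirror c1t50060E800042AA70d0 c1t50060E800042AA70d1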

/Tomas
-- 
Tomas Ögren, st...@acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs lists discrepancy after added a new vdev to pool

2010-08-28 Thread Mattias Pantzare
On Sat, Aug 28, 2010 at 02:54, Darin Perusich
darin.perus...@cognigencorp.com wrote:
 Hello All,

 I'm sure this has been discussed previously but I haven't been able to find an
 answer to this. I've added another raidz1 vdev to an existing storage pool and
 the increased available storage isn't reflected in the 'zfs list' output. Why
 is this?

 The system in question is running Solaris 10 5/09 s10s_u7wos_08, kernel
 Generic_139555-08. The system does not have the latest patches, which might be
 the cure.

 Thanks!

 Here's what I'm seeing.
 zpool create datapool raidz1 c1t50060E800042AA70d0  c1t50060E800042AA70d1

 zpool status
  pool: datapool
  state: ONLINE
  scrub: none requested
 config:

        NAME                       STATE     READ WRITE CKSUM
        datapool                   ONLINE       0     0     0
          raidz1                   ONLINE       0     0     0
            c1t50060E800042AA70d0  ONLINE       0     0     0
            c1t50060E800042AA70d1  ONLINE       0     0     0

 zfs list
 NAME       USED  AVAIL  REFER  MOUNTPOINT
 datapool   108K   196G    18K  /datapool

 zpool add datapool raidz1 c1t50060E800042AA70d2 c1t50060E800042AA70d3

 zpool status
  pool: datapool
  state: ONLINE
  scrub: none requested
 config:

        NAME                       STATE     READ WRITE CKSUM
        datapool                   ONLINE       0     0     0
          raidz1                   ONLINE       0     0     0
            c1t50060E800042AA70d0  ONLINE       0     0     0
            c1t50060E800042AA70d1  ONLINE       0     0     0
          raidz1                   ONLINE       0     0     0
            c1t50060E800042AA70d2  ONLINE       0     0     0
            c1t50060E800042AA70d3  ONLINE       0     0     0

 zfs list
 NAME       USED  AVAIL  REFER  MOUNTPOINT
 datapool   112K   392G    18K  /datapool

I think you have to explain your problem more; isn't 392G more than 196G?
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs lists discrepancy after added a new vdev to pool

2010-08-28 Thread eXeC001er
 On Sat, Aug 28, 2010 at 02:54, Darin Perusich
 darin.perus...@cognigencorp.com wrote:
  Hello All,
 
  I'm sure this has been discussed previously but I haven't been able to
  find an answer to this. I've added another raidz1 vdev to an existing
  storage pool and the increased available storage isn't reflected in the
  'zfs list' output. Why is this?
 
  The system in question is running Solaris 10 5/09 s10s_u7wos_08, kernel
  Generic_139555-08. The system does not have the latest patches, which
  might be the cure.
 
  Thanks!
 
  Here's what I'm seeing.
  zpool create datapool raidz1 c1t50060E800042AA70d0  c1t50060E800042AA70d1
 
  zpool status
   pool: datapool
   state: ONLINE
   scrub: none requested
  config:
 
         NAME                       STATE     READ WRITE CKSUM
         datapool                   ONLINE       0     0     0
           raidz1                   ONLINE       0     0     0
             c1t50060E800042AA70d0  ONLINE       0     0     0
             c1t50060E800042AA70d1  ONLINE       0     0     0
 
  zfs list
  NAME       USED  AVAIL  REFER  MOUNTPOINT
  datapool   108K   196G    18K  /datapool
 
  zpool add datapool raidz1 c1t50060E800042AA70d2 c1t50060E800042AA70d3
 
  zpool status
   pool: datapool
   state: ONLINE
   scrub: none requested
  config:
 
         NAME                       STATE     READ WRITE CKSUM
         datapool                   ONLINE       0     0     0
           raidz1                   ONLINE       0     0     0
             c1t50060E800042AA70d0  ONLINE       0     0     0
             c1t50060E800042AA70d1  ONLINE       0     0     0
           raidz1                   ONLINE       0     0     0
             c1t50060E800042AA70d2  ONLINE       0     0     0
             c1t50060E800042AA70d3  ONLINE       0     0     0
 
  zfs list
  NAME       USED  AVAIL  REFER  MOUNTPOINT
  datapool   112K   392G    18K  /datapool


Darin, you built the pool from two raidz1 vdevs, so the pool size is now
2 x the size of a single raidz1 vdev.
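
You can see the per-vdev breakdown with something like the following (a
sketch; the capacity columns vary a bit between releases):

  zpool iostat -v datapool

Each raidz1 vdev should account for roughly half of the pool's raw capacity.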




 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs lists discrepancy after added a new vdev to pool

2010-08-27 Thread Darin Perusich
Hello All,

I'm sure this has been discussed previously but I haven't been able to find an 
answer to this. I've added another raidz1 vdev to an existing storage pool and 
the increased available storage isn't reflected in the 'zfs list' output. Why 
is this?

The system in question is running Solaris 10 5/09 s10s_u7wos_08, kernel
Generic_139555-08. The system does not have the latest patches, which might be
the cure.

Thanks!

Here's what I'm seeing.
zpool create datapool raidz1 c1t50060E800042AA70d0  c1t50060E800042AA70d1

zpool status
  pool: datapool
 state: ONLINE
 scrub: none requested
config:

        NAME                       STATE     READ WRITE CKSUM
        datapool                   ONLINE       0     0     0
          raidz1                   ONLINE       0     0     0
            c1t50060E800042AA70d0  ONLINE       0     0     0
            c1t50060E800042AA70d1  ONLINE       0     0     0

zfs list
NAME       USED  AVAIL  REFER  MOUNTPOINT
datapool   108K   196G    18K  /datapool

zpool add datapool raidz1 c1t50060E800042AA70d2 c1t50060E800042AA70d3

zpool status
  pool: datapool
 state: ONLINE
 scrub: none requested
config:

        NAME                       STATE     READ WRITE CKSUM
        datapool                   ONLINE       0     0     0
          raidz1                   ONLINE       0     0     0
            c1t50060E800042AA70d0  ONLINE       0     0     0
            c1t50060E800042AA70d1  ONLINE       0     0     0
          raidz1                   ONLINE       0     0     0
            c1t50060E800042AA70d2  ONLINE       0     0     0
            c1t50060E800042AA70d3  ONLINE       0     0     0

zfs list
NAME       USED  AVAIL  REFER  MOUNTPOINT
datapool   112K   392G    18K  /datapool

zpool list
NAME       SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
datapool   796G   471K   796G     0%  ONLINE  -


-- 
Darin Perusich
Unix Systems Administrator
Cognigen Corporation
395 Youngs Rd.
Williamsville, NY 14221
Phone: 716-633-3463
Email: darin...@cognigencorp.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs lists discrepancy after added a new vdev to pool

2010-08-27 Thread Edho P Arief
On Sat, Aug 28, 2010 at 7:54 AM, Darin Perusich
darin.perus...@cognigencorp.com wrote:
 Hello All,

 I'm sure this has been discussed previously but I haven't been able to find an
 answer to this. I've added another raidz1 vdev to an existing storage pool and
 the increased available storage isn't reflected in the 'zfs list' output. Why
 is this?


you must do zpool export followed by zpool import
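
Something along these lines (a sketch; make sure nothing is using the pool's
datasets before exporting):

  zpool export datapool
  zpool import datapool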

-- 
O ascii ribbon campaign - stop html mail - www.asciiribbon.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss