Re: [zfs-discuss] liveupgrade ufs root - zfs ?

2008-08-28 Thread Robin Guo
Hi,

  I think a Live Upgrade from snv_94 to snv_96 should be fine. If there are no zones
on your system, simply do:

  # cd cdrom/Solaris_11/Tools/Installers
  # liveupgrade20 --nodisplay
  # lucreate -c BE94 -n BE96 -p newpool   (newpool must be on an SMI-labeled disk)
  # luupgrade -u -n BE96 -s cdrom
  # luactivate BE96
  # init 6
 
  Quite a lot of LU bugs were fixed between snv_90 and snv_96, so I think you should be
able to complete the process successfully, barring any special cases.
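
  After activation, a quick sanity check (just a sketch) is to confirm that the new BE
is marked to become active on reboot:

  # lustatus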


Paul Floyd wrote:
 Hi

 On my opensolaris machine I currently have SXCEs 95 and 94 in two BEs. The 
 same fdisk partition contains /export/home and swap. In a separate fdisk 
 partition on another disk I have a ZFS pool.

 Does anyone have a pointer to a howto for doing a liveupgrade such that I can 
 convert the SXCE 94 UFS BE to ZFS (and liveupgrade to SXCE 96 while I'm at 
 it) if this is possible? Searching with google shows a lot of blogs that 
 describe the early problems that existed when ZFS was first available (ON 90 
 or so).

 A+
 Paul
  
  

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs promote and ENOSPC

2008-06-11 Thread Robin Guo
Hi, Mike,

  This looks like CR 6452872: 'zfs promote' needs enough free space to succeed.
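
  As a rough check (only a sketch; the snapshot name below is a placeholder for the
origin snapshot in your output), compare the pool's free space with what the clone's
origin snapshot references:

  # zfs get -H -o value origin rpool/ROOT/2008.05
  # zfs get -H -o value referenced <origin-snapshot-from-above>
  # zfs get -H -o value available rpool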

  - Regards,

Mike Gerdts wrote:
 I needed to free up some space to be able to create and populate a new
 upgrade.  I was caught off guard by the amount of free space required
 by zfs promote.

 bash-3.2# uname -a
 SunOS indy2 5.11 snv_86 i86pc i386 i86pc

 bash-3.2# zfs list
 NAME                               USED  AVAIL  REFER  MOUNTPOINT
 rpool                             5.49G  1.83G    55K  /rpool
 [EMAIL PROTECTED]                  46.5K      -  49.5K  -
 rpool/ROOT                        5.39G  1.83G    18K  none
 rpool/ROOT/2008.05                2.68G  1.83G  3.38G  legacy
 rpool/ROOT/2008.05/opt             814M  1.83G  22.3M  legacy
 rpool/ROOT/2008.05/[EMAIL PROTECTED]  43K      -  22.3M  -
 rpool/ROOT/2008.05/opt/SUNWspro    739M  1.83G   739M  legacy
 rpool/ROOT/2008.05/opt/netbeans   52.9M  1.83G  52.9M  legacy
 rpool/ROOT/preview2               2.71G  1.83G  2.71G  /mnt
 rpool/ROOT/[EMAIL PROTECTED]       6.13M      -  2.71G  -
 rpool/ROOT/preview2/opt             27K  1.83G  22.3M  legacy
 rpool/export                      89.8M  1.83G    19K  /export
 rpool/export/home                 89.8M  1.83G  89.8M  /export/home

 bash-3.2# zfs promote rpool/ROOT/2008.05
 cannot promote 'rpool/ROOT/2008.05': out of space

 Notice that I have 1.83 GB of free space and the snapshot from which
 the clone was created (rpool/ROOT/[EMAIL PROTECTED]) is 2.71 GB.  It
 was not until I had more than 2.71 GB of free space that I could
 promote rpool/ROOT/2008.05.

 This behavior does not seem to be documented.  Is it a bug in the
 documentation or zfs?

   


-- 
Regards,

Robin Guo, Xue-Bin Guo
Solaris Kernel and Data Service QE,
Sun China Engineering and Research Institute
Phone: +86 10 82618200 +82296
Email: [EMAIL PROTECTED]
Blog: http://blogs.sun.com/robinguo

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] help with a BIG problem,

2008-05-23 Thread Robin Guo
Hi, Hernan,
 
  Don't use '-n' with mkfile when creating a swap file; it creates a sparse file,
and swap will complain about the holes.
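
  For example (a minimal sketch; adjust the size and path to your system):

  # mkfile 4g /export/swap
  # swap -a /export/swap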

Hernan Freschi wrote:
 I forgot to post arcstat.pl's output:

 Time      read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz  c
 22:32:37  556K  525K     94  515K   94    9K   98  515K   97     1G  1G
 22:32:38    63    63    100    63  100     0    0    63  100     1G  1G
 22:32:39    74    74    100    74  100     0    0    74  100     1G  1G
 22:32:40    76    76    100    76  100     0    0    76  100     1G  1G
 State Changed
 22:32:41    75    75    100    75  100     0    0    75  100     1G  1G
 22:32:42    77    77    100    77  100     0    0    77  100     1G  1G
 22:32:43    72    72    100    72  100     0    0    72  100     1G  1G
 22:32:44    80    80    100    80  100     0    0    80  100     1G  1G
 State Changed
 22:32:45    98    98    100    98  100     0    0    98  100     1G  1G

 sometimes c is 2G.

 I tried the mkfile and swap, but I get:
 [EMAIL PROTECTED]:/]# mkfile -n 4g /export/swap
 [EMAIL PROTECTED]:/]# swap -a /export/swap
 /export/swap may contain holes - can't swap on it.

 /export is the only place where I have enough free space. I could add another 
 drive if needed.
  
  

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS in S10U6 vs openSolaris 05/08

2008-05-16 Thread Robin Guo
Hi, Brian

  Do you mean a stripe across multiple disks, or raidz? I'm afraid root pools are still
limited to a single disk or mirrors. If OpenSolaris starts a project for that kind of
feature it will eventually be backported to an s10 update, but that will take some time;
I don't think there is any possibility of it landing in U6.

Brian Hechinger wrote:
 On Fri, May 16, 2008 at 09:30:27AM +0800, Robin Guo wrote:
   
 Hi, Paul

   At the least, s10u6 will contain the L2ARC cache, ZFS as a root filesystem, etc.
 

 As far as root zfs goes, are there any plans to support more than just single
 disks or mirrors in U6, or will that be for a later date?

 -brian
   

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] how to upgrade kernel by my own on root-zfs platform

2008-05-15 Thread Robin Guo




Hi, Aubrey

 Could you post the entry you added to menu.lst? The problem might be that its syntax
is not correct.

 Thanks.

Aubrey Li wrote:

  On Thu, May 15, 2008 at 3:38 PM, Aubrey Li [EMAIL PROTECTED] wrote:
  
  
Hi all,

I'm new to zfs.
Recently I compiled ON successfully on the OpenSolaris 2008.05 release,
so I want to upgrade the kernel with my own image.
Following the cap-eye-install procedure, I got a tarball and extracted it under "/".
I also added an entry in /rpool/boot/grub/menu.lst to boot my own kernel.
After I selected the new entry in the GRUB menu, I was told the kernel wanted to
mount UFS, not ZFS.
Am I on the right track? Is there a guide for this?


  
  
Here is the error on my side:

SunOS Release 5.11 Version cpupm-gate 64-bit
Copyright 1983-2008 Sun Microsystems, Inc.  All rights reserved.
Use is subject to license terms.
DEBUG enabled
NOTICE: mount: not a UFS magic number (0x0)

panic[cpu0]/thread=fbc250e0: cannot mount root path /ramdisk:a

fbc44f80 genunix:rootconf+113 ()
fbc44fd0 genunix:vfs_mountroot+65 ()
fbc45010 genunix:main+128 ()
fbc45020 unix:_locore_start+92 ()

skipping system dump - no dump device configured

I really appreciate any suggestions!

Thanks,
-Aubrey



-- 
Regards,

Robin Guo, Xue-Bin Guo
Solaris Kernel and Data Service QE,
Sun China Engineering and Research Institute
Phone: +86 10 82618200 +82296
Email: [EMAIL PROTECTED]
Blog: http://blogs.sun.com/robinguo



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] how to upgrade kernel by my own on root-zfs platform

2008-05-15 Thread Robin Guo

Hi, Aubrey,

 Did you ever run installgrub against the slice, and boot from the disk/slice where
the ZFS pool resides?

It should be:

 # mount -F zfs rpool/ROOT/opensolaris /mnt
 # installgrub /mnt/boot/grub/stage1 /mnt/boot/grub/stage2 /dev/rdsk/<slice>

 If installgrub was never run, GRUB is probably still booting from the UFS slice.
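
 To see which disk/slice the root pool actually lives on (a minimal check):

 # zpool status rpool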

Aubrey Li wrote:

Robin Guo wrote:
  

Hi, Aubrey

  Could you post the entry you added to menu.lst? The problem might be that its
syntax is not correct.




Here is my menu.lst:

[EMAIL PROTECTED]:~/work/cpupm-gate$ cat /rpool/boot/grub/menu.lst
splashimage /boot/grub/splash.xpm.gz
timeout 30
default 0
#-- ADDED BY BOOTADM - DO NOT EDIT --
title OpenSolaris 2008.05 snv_86_rc3 X86
bootfs rpool/ROOT/opensolaris
kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS
module$ /platform/i86pc/$ISADIR/boot_archive
#-END BOOTADM
#-- ADDED BY BOOTADM - DO NOT EDIT --
title OpenSolaris 2008.05 snv_86_rc3 X86
bootfs rpool/ROOT/opensolaris
kernel$ /platform/i86pc/kernel.mine/$ISADIR/unix -B $ZFS-BOOTFS
module$ /platform/i86pc/$ISADIR/boot_archive
#-END BOOTADM
# End of LIBBE entry =


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] how to upgrade kernel by my own on root-zfs platform

2008-05-15 Thread Robin Guo
Yes, you may also need to check that the following steps have been done:

  # mount -F zfs rpool/ROOT/opensolaris /mnt
  # bootadm update-archive -R /mnt
  # zpool set bootfs=rpool/ROOT/opensolaris rpool
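
  As a quick sanity check afterwards (just a sketch), confirm that the property took:

  # zpool get bootfs rpool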

Aubrey Li wrote:
 On Thu, May 15, 2008 at 9:34 PM, Darren J Moffat [EMAIL PROTECTED] wrote:
   
 Aubrey Li wrote:
 
 Robin Guo wrote:
   
 Hi, Aubrey

  Could you post the entry you added to menu.lst? The problem might be that its
  syntax is not correct.

 
 Here is my menu.lst:

 [EMAIL PROTECTED]:~/work/cpupm-gate$ cat /rpool/boot/grub/menu.lst
 splashimage /boot/grub/splash.xpm.gz
 timeout 30
 default 0
 #-- ADDED BY BOOTADM - DO NOT EDIT --
 title OpenSolaris 2008.05 snv_86_rc3 X86
 bootfs rpool/ROOT/opensolaris
 kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS
 module$ /platform/i86pc/$ISADIR/boot_archive
 #-END BOOTADM
 #-- ADDED BY BOOTADM - DO NOT EDIT --
 title OpenSolaris 2008.05 snv_86_rc3 X86
 bootfs rpool/ROOT/opensolaris
 kernel$ /platform/i86pc/kernel.mine/$ISADIR/unix -B $ZFS-BOOTFS
 module$ /platform/i86pc/$ISADIR/boot_archive
 #-END BOOTADM
 # End of LIBBE entry =
   
  You are loading your kernel but using the boot_archive from the already
  existing environment; you need to fix the module$ path to point at the correct boot_archive.

 Also I think you should probably choose a different title so you actually
 know which entry is booting each kernel.

 

  After building ON, I used cap-eye-install to install my own kernel image:
  # Install -G kernel.mine -k i86pc
  # tar xf /tmp/Install.aubrey/Install.i86pc.tar

  I have no idea how to create another boot_archive. Isn't the existing
  boot_archive suitable for a ZFS-root boot?
   


 Thanks,
 -Aubrey
   

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS in S10U6 vs openSolaris 05/08

2008-05-15 Thread Robin Guo
Hi, Paul

  Most of the ZFS features and bug fixes up through roughly Nevada build 87 (or 88?)
will be backported into s10u6.
From the outside it will look about the same as OpenSolaris 2008.05 (the internals
differ), but some other features, such as CIFS, currently have no plan to be backported
to s10u6, so ZFS will be fully ready in those areas but have nothing to work with there.
That depends on how the projects coordinate.

  At the least, s10u6 will contain the L2ARC cache, ZFS as a root filesystem, etc.
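
  For example, on a build with L2ARC support a cache device can be added to a pool
like this (a sketch; the pool and device names are illustrative):

  # zpool add tank cache c1t2d0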

Paul B. Henson wrote:
 We've been working on a prototype of a ZFS file server for a while now,
 based on Solaris 10. Now that official support is available for
 openSolaris, we are looking into that as a possible option as well.
 openSolaris definitely has a greater feature set, but is still a bit rough
 around the edges for production use.

 I've heard that a considerable amount of ZFS improvements are slated to
 show up in S10U6. I was wondering if anybody could give an unofficial list
 of what will probably be deployed in S10U6, and how that will compare
 feature wise to openSolaris 05/08. Some rough guess at an ETA would also be
 nice :).

 Thanks...


   

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS in S10U6 vs openSolaris 05/08

2008-05-15 Thread Robin Guo
Hi, Krzys,

  Definitely; the ZFS pool version in s10u6_01 is already 10, and I never expect it
to be downgraded :)

  U5 only included bug fixes, without the major ZFS features; that's a pity, but
s10u6 will come, sooner or later.

Krzys wrote:
 I was hoping that at least ZFS version 5 would be included in U5, but it was not.
 Do you think it will be in U6?

 On Fri, 16 May 2008, Robin Guo wrote:

 Hi, Paul

  Most of the ZFS features and bug fixes up through roughly Nevada build 87 (or 88?)
 will be backported into s10u6.
 From the outside it will look about the same as OpenSolaris 2008.05 (the internals
 differ), but some other features, such as CIFS, currently have no plan to be
 backported to s10u6, so ZFS will be fully ready in those areas but have nothing to
 work with there. That depends on how the projects coordinate.

  At the least, s10u6 will contain the L2ARC cache, ZFS as a root filesystem, etc.

 Paul B. Henson wrote:
 We've been working on a prototype of a ZFS file server for a while now,
 based on Solaris 10. Now that official support is available for
 openSolaris, we are looking into that as a possible option as well.
 openSolaris definitely has a greater feature set, but is still a bit 
 rough
 around the edges for production use.

 I've heard that a considerable amount of ZFS improvements are slated to
 show up in S10U6. I was wondering if anybody could give an 
 unofficial list
 of what will probably be deployed in S10U6, and how that will compare
 feature wise to openSolaris 05/08. Some rough guess at an ETA would 
 also be
 nice :).

 Thanks...







-- 
Regards,

Robin Guo, Xue-Bin Guo
Solaris Kernel and Data Service QE,
Sun China Engineering and Research Institute
Phone: +86 10 82618200 +82296
Email: [EMAIL PROTECTED]
Blog: http://blogs.sun.com/robinguo

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] question regarding gzip compression in S10

2008-05-14 Thread Robin Guo
Hi, Chris,

  Version 5 (actually it will be v10, the same as OpenSolaris) will be in s10u6.

  S10u5 did not include many ZFS features, so the SPA version stays at v4.
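
  Once a pool runs version 5 or later, gzip compression can be turned on per dataset
(a sketch; the pool name is illustrative):

  # zpool upgrade tank
  # zfs set compression=gzip tank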

Krzys wrote:
 I just upgraded to Solaris 10 U5 and was hoping that gzip compression would be
 there, but zpool upgrade only shows v4:

 [10:05:36] [EMAIL PROTECTED]: /export/home  zpool upgrade
 This system is currently running ZFS version 4.

 Do you know when version 5 will be included in Solaris 10? Are there any plans
 for it, or will it be in Solaris 11 only?

 Regards,

 Chris

 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
   


-- 
Regards,

Robin Guo, Xue-Bin Guo
Solaris Kernel and Data Service QE,
Sun China Engineering and Research Institute
Phone: +86 10 82618200 +82296
Email: [EMAIL PROTECTED]
Blog: http://blogs.sun.com/robinguo

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [storage-discuss] ZFS and fibre channel issues

2008-05-14 Thread Robin Guo




Hi, William,

  You didn't mention c6t21800080E512C872d0s0 on your command line; perhaps the error
really refers to c6t21800080E512C872d14s0?

 # zpool create bottlecap c6t21800080E512C872d14 c6t21800080E512C872d15

 The warning looks like it is caused by stale label information remaining on the disk.
Could you post the output of 'zdb -l /dev/dsk/c6t21800080E512C872d14s0'? It should show
something related to nalgene. In any case, if you have since re-used that disk to create
a new pool, I suspect the issue is gone.
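
 A quick way to check whether stale label data from the old pool is still there
(just a sketch):

 # zdb -l /dev/dsk/c6t21800080E512C872d14s0 | grep -i nalgene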

 - Regards,

Jeff Cheeney wrote:

  The ZFS crew might be better to answer this question. (CC'd here)

   --jc

William Yang wrote:
  
  
I am having issues creating a zpool using entire disks with a fibre 
channel array.  The array is a Dell PowerVault 660F.
When I run "zpool create bottlecap c6t21800080E512C872d14 
c6t21800080E512C872d15", I get the following error:
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c6t21800080E512C872d0s0 is part of active ZFS pool nalgene. 
Please see zpool(1M).
I was able to create pool nalgene using entire disks, but that was a while back.
I have a feeling one of the patches I applied after creating nalgene broke
something.  I am currently using Solaris 10 SPARC 8/07, kernel 127111-11.
 
Also, if I append s0 to the disk name (i.e. c6t21800080E512C872d14s0), 
then I can create the new zpool.  Any ideas?
 
William Yang



  
  



-- 
Regards,

Robin Guo, Xue-Bin Guo
Solaris Kernel and Data Service QE,
Sun China Engineering and Research Institute
Phone: +86 10 82618200 +82296
Email: [EMAIL PROTECTED]
Blog: http://blogs.sun.com/robinguo



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Weird performance issue with ZFS with lots of simultaneous reads

2008-05-09 Thread Robin Guo
Hi, Chris,

  Good topic; I'd like to see comments from the experts as well.

  First, I think part of it is the penalty from NFS itself: ZFS served over NFS loses
some performance, and the L2ARC cache feature is, so far, the way to address that
(it is in OpenSolaris, not yet in s10u4, and is targeted for the s10u6 release).

  I have also seen performance loss when using iSCSI from the local machine, but I
haven't gathered accurate data yet. That may be a problem worth evaluating.

  I'll follow this thread to see if there is any progress; thanks for bringing up
the topic.
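
  One quick way to watch the ARC while the parallel reads run (a sketch, assuming the
standard arcstats kstat):

  # kstat -p zfs:0:arcstats:size zfs:0:arcstats:c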

  - Regards,

Robin Guo

Chris Siebenmann wrote:
  I have a ZFS-based NFS server (Solaris 10 U4 on x86) where I am seeing
 a weird performance degradation as the number of simultaneous sequential
 reads increases.

  Setup:
   NFS client - Solaris NFS server - iSCSI target machine

  There are 12 physical disks on the iSCSI target machine. Each of them
 is sliced up into 11 parts and the parts exported as individual LUNs to
 the Solaris server. The Solaris server uses each LUN as a separate ZFS
 pool (giving 132 pools in total) and exports them all to the NFS client.

 (The NFS client and the iSCSI target machine are both running Linux.
 The Solaris NFS server has 4 GB of RAM.)

  When the NFS client starts a sequential read against one filesystem
 from each physical disk, the iSCSI target machine and the NFS client
 both use the full network bandwidth and each individual read gets
 1/12th of it (about 9.something MBytes/sec). Starting a second set of
 sequential reads against each disk (to a different pool) behaves the
 same, as does starting a third set.

  However, when I add a fourth set of reads thing change; while the
 NFS server continues to read from the iSCSI target at full speed, the
 data rate to the NFS client drops significantly. By the time I hit
 9 reads per physical disk, the NFS client is getting a *total* of 8
 MBytes/sec.  In other words, it seems that ZFS on the NFS server is
 somehow discarding most of what it reads from the iSCSI disks, although
 I can't see any sign of this in 'vmstat' output on Solaris.

  Also, this may not be just an NFS issue; in limited testing with local
 IO on the Solaris machine it seems that I may be seeing the same effect
 with the same rough magnitude.

 (It is limited testing because it is harder to accurately measure what
 aggregate data rate I'm getting and harder to run that many simultaneous
 reads, as if I run too many of them the Solaris machine locks up due to
 overload.)

  Does anyone have any ideas of what might be going on here, and how I
 might be able to tune things on the Solaris machine so that it performs
 better in this situation (ideally without harming performance under
 smaller loads)? Would partitioning the physical disks on Solaris instead
 of splitting them up on the iSCSI target make a significant difference?

  Thanks in advance.

   - cks
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
   

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Spare Won't Remove

2008-02-19 Thread Robin Guo




Hi, Christ,

 I have just verified that this issue is easily reproducible on onnv, and I have
filed CR 6664649 to track it. Thanks for the report.

# zpool create -f tank c3t5d0s1 spare c3t5d0s0
# mkfile 100m /var/tmp/file
# zpool add tank spare /var/tmp/file
# zpool export tank
# format c3t5d0 ( modify c3t5d0s0 to be unassigned)
# rm /var/tmp/file
# zpool import tank
# zpool status -v tank
 pool: tank
state: ONLINE
scrub: none requested
config:

  NAME             STATE     READ WRITE CKSUM
  tank             ONLINE       0     0     0
    c3t5d0s1       ONLINE       0     0     0
  spares
    c3t5d0s0       UNAVAIL   cannot open
    /var/tmp/file  UNAVAIL   cannot open

If a spare device's status is UNAVAIL, it cannot be removed with 'zpool remove';
I even tried 'zpool scrub', which did not help.

# zpool remove tank c3t5d0s0
# echo $?
0
# zpool remove tank /var/tmp/file
# echo $?
0
# zpool status -v tank
 pool: tank
state: ONLINE
scrub: none requested
config:

  NAME             STATE     READ WRITE CKSUM
  tank             ONLINE       0     0     0
    c3t5d0s1       ONLINE       0     0     0
  spares
    c3t5d0s0       UNAVAIL   cannot open
    /var/tmp/file  UNAVAIL   cannot open
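
One workaround that may be worth trying (untested here; only a sketch) is to recreate
a file at the missing spare's path so that ZFS can open it again, then remove it:

# mkfile 100m /var/tmp/file
# zpool remove tank /var/tmp/file
# zpool status -v tank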

Christopher Gibbs wrote:

  Oops, I forgot a step. I also upgraded the zpool in snv79b before I
tried the remove. It is now version 10.

On 2/15/08, Christopher Gibbs [EMAIL PROTECTED] wrote:
  
  
The pool was exported from snv_73 and the spare was disconnected from
 the system. The OS was upgraded to snv_79b (SXDE 1/08) and the pool
 was re-imported.

 I think this weekend I'll try connecting a different drive to that
 controller and see if it will remove then.

 Thanks for your help.


 On 2/15/08, Robin Guo [EMAIL PROTECTED] wrote:
  Hi, Christopher,
 
   I tried using a raw file as the spare, removed the file, and then ran 'zpool
   remove'; the raw file could be removed from the pool.
  
   But since you are using a physical device, I suspect there may be a bug here,
   because the status of the spare device has turned to 'UNAVAIL'.
  
   Could you tell me which OS build you are using? I might check with the latest
   onnv nightly to see whether this issue still exists.
 
 
   Christopher Gibbs wrote:
I have a hot spare that was part of my zpool but is no longer
connected to the system. I can run the zpool remove command and it
returns fine but doesn't seem to do anything.
   
I have tried adding and removing spares that are connected to the
system and works properly. Is zpool remove failing because the disk is
no longer connected to the system?
   
# zpool remove tank c1d0s4
# zpool status
  pool: tank
 state: ONLINE
 scrub: none requested
config:
   
NAME        STATE     READ WRITE CKSUM
tank        ONLINE       0     0     0
  raidz1    ONLINE       0     0     0
    c2d0    ONLINE       0     0     0
    c2d1    ONLINE       0     0     0
    c3d0    ONLINE       0     0     0
    c3d1    ONLINE       0     0     0
    c1t0d0  ONLINE       0     0     0
    c1t1d0  ONLINE       0     0     0
    c1t2d0  ONLINE       0     0     0
    c1t3d0  ONLINE       0     0     0
spares
  c1d0s4    UNAVAIL   cannot open
   
errors: No known data errors
   
   
   
   
 
 
   --
 
  Regards,
 
 
   Robin Guo, Xue-Bin Guo
   Solaris Kernel and Data Service QE,
   Sun China Engineering and Research Institute
   Phone: +86 10 82618200 +82296
   Email: [EMAIL PROTECTED]
   Blog: http://blogs.sun.com/robinguo
 
 



--
 Chris


  
  

  



-- 
Regards,

Robin Guo, Xue-Bin Guo
Solaris Kernel and Data Service QE,
Sun China Engineering and Research Institute
Phone: +86 10 82618200 +82296
Email: [EMAIL PROTECTED]
Blog: http://blogs.sun.com/robinguo



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] can't share a zfs

2008-02-19 Thread Robin Guo
Hi, Jason,

  Do these steps succeed for you?

# zpool create tank vdev
# zfs set sharenfs=on tank
# share
[EMAIL PROTECTED]  /tank   rw

  The NFS server is enabled automatically whenever a shareable dataset exists
(sharenfs or sharesmb = on).
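
  If the share still fails, a couple of basic checks (just a sketch):

# svcs -xv svc:/network/nfs/server
# zfs get sharenfs tank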

jason wrote:
 -bash-3.2$ zfs share tank
 cannot share 'tank': share(1M) failed
 -bash-3.2$ 

 How do I figure out what's wrong?
  
  


-- 
Regards,

Robin Guo, Xue-Bin Guo
Solaris Kernel and Data Service QE,
Sun China Engineering and Research Institute
Phone: +86 10 82618200 +82296
Email: [EMAIL PROTECTED]
Blog: http://blogs.sun.com/robinguo

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Spare Won't Remove

2008-02-14 Thread Robin Guo
Hi, Christopher,

  I tried using a raw file as the spare, removed the file, and then ran
'zpool remove'; the raw file could be removed from the pool.

  But since you are using a physical device, I suspect there may be a bug here,
because the status of the spare device has turned to 'UNAVAIL'.

  Could you tell me which OS build you are using? I might check with the latest
onnv nightly to see whether this issue still exists.

Christopher Gibbs wrote:
 I have a hot spare that was part of my zpool but is no longer
 connected to the system. I can run the zpool remove command and it
 returns fine but doesn't seem to do anything.

 I have tried adding and removing spares that are connected to the
 system and works properly. Is zpool remove failing because the disk is
 no longer connected to the system?

 # zpool remove tank c1d0s4
 # zpool status
   pool: tank
  state: ONLINE
  scrub: none requested
 config:

 NAME        STATE     READ WRITE CKSUM
 tank        ONLINE       0     0     0
   raidz1    ONLINE       0     0     0
     c2d0    ONLINE       0     0     0
     c2d1    ONLINE       0     0     0
     c3d0    ONLINE       0     0     0
     c3d1    ONLINE       0     0     0
     c1t0d0  ONLINE       0     0     0
     c1t1d0  ONLINE       0     0     0
     c1t2d0  ONLINE       0     0     0
     c1t3d0  ONLINE       0     0     0
 spares
   c1d0s4    UNAVAIL   cannot open

 errors: No known data errors



   


-- 
Regards,

Robin Guo, Xue-Bin Guo
Solaris Kernel and Data Service QE,
Sun China Engineering and Research Institute
Phone: +86 10 82618200 +82296
Email: [EMAIL PROTECTED]
Blog: http://blogs.sun.com/robinguo

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to get ZFS use the whole disk?

2008-02-04 Thread Robin Guo
Hi, Roman

  You can use 'zpool attach' to attach a mirror to it, but you cannot 'zpool add'
a new slice to it.

  A root pool can be a single disk, a device slice, or a mirrored configuration.
If you use a whole disk for a root pool, you must use slice notation
(e.g. c0d0s0) so that the disk carries an SMI label.
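
  For example, a second SMI-labeled slice could be attached as a mirror and GRUB
installed on it so either disk can boot (a sketch; c1d0s0 is an illustrative device):

  # zpool attach rpool c0d0s4 c1d0s0
  # installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1d0s0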

Roman Morokutti wrote:
 Just another thought. After setting up a ZFS root on 
 slice c0d0s4, it should be just possible after starting
 into it, to add the remaining slices into the created
 ZFS pool. Is this possible?

 Roman
  
  

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to get ZFS use the whole disk?

2008-02-04 Thread Robin Guo
Hi, Roman

  If you want the root pool to occupy a whole disk, you need at least two disks in
the system in that case. Use c[m]t[n]d[p]s0 as the second device, assuming you have
given it an SMI label and let s0 take up the entire space of the disk.

  Good luck!
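
  To confirm that s0 really spans the whole disk before using it (just a sketch;
the device name is illustrative):

  # prtvtoc /dev/rdsk/c1d0s2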

Roman Morokutti wrote:
 Hi,

 I am new to ZFS and recently managed to get a ZFS root to work.
 These were the steps I have done:

 1. Installed b81 (fresh install)
 2. Unmounted /second_root on c0d0s4
 3. Removed /etc/vfstab entry of /second_root
 4. Executed ./zfs-actual-root-install.sh c0d0s4
 5. Rebooted (init 6)

 After selecting ZFS boot entry in GRUB Solaris went up. Great.
 Next I looked at how the slices were configured, and I saw that the
 layout hasn't changed even though slice 4 is now the ZFS root. What would
 I have to do to get a layout where the zpool /tank occupies the whole
 disk, as presented by Lori Alt?

 Roman
  
  


-- 
Regards,

Robin Guo, Xue-Bin Guo
Solaris Kernel and Data Service QE,
Sun China Engineering and Research Institute
Phone: +86 10 82618200 +82296
Email: [EMAIL PROTECTED]
Blog: http://blogs.sun.com/robinguo

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Draft one-pager for zfs-auto-snapshots

2008-02-04 Thread Robin Guo




Excellent news, Tim.

 That utility will be handy and popular as an SMF service. Looking forward to it.

Tim Foster wrote:

  Hi all,

I put together the attached one-pager on the ZFS Automatic Snapshots
service which I've been maintaining on my blog to date.

I would like to see if this could be integrated into ON and believe that
a first step towards this is a project one-pager: so I've attached a
draft version.

I'm happy to defer judgement to the ZFS team as to whether this would be
a suitable addition to OpenSolaris - if the consensus is that it's
better for the service to remain in its current un-integrated state and
be discovered through BigAdmin or web searches, that's okay by me.
[ just thought I'd ask ]

	cheers,
			tim

  
  





-- 
Regards,

Robin Guo, Xue-Bin Guo
Solaris Kernel and Data Service QE,
Sun China Engineering and Research Institute
Phone: +86 10 82618200 +82296
Email: [EMAIL PROTECTED]
Blog: http://blogs.sun.com/robinguo



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to get ZFS use the whole disk?

2008-02-04 Thread Robin Guo





Hi, Roman,

 Disabling the disk write cache is a manual choice made in some particular
environments. The ZIL defaults to 'on' regardless of whether it is a regular pool
or a root pool.

 CR 6648965 may also be related to this; it discusses whether slog, l2cache, and
spare devices should be supported in a root pool.
Will Murnane wrote:

  On Feb 4, 2008 4:37 PM, Robin Guo [EMAIL PROTECTED] wrote:
  
  
If you use a whole disk for a rootpool, you must use a slice notation
(e.g. c0d0s0) so that it is labeled with an SMI label.

  
  Will ZFS recognize that it has the whole disk at this point (and thus
leave cache enabled on it) or not?  Many of the machines I administer
provide some service that uses the single (currently SVM mirrored)
root partition, so having cache enabled could be helpful.  ISTR some
talk of ZFS disabling disk cache if it doesn't have the whole disk to
itself.

Thanks!
Will
  



-- 
Regards,

Robin Guo, Xue-Bin Guo
Solaris Kernel and Data Service QE,
Sun China Engineering and Research Institute
Phone: +86 10 82618200 +82296
Email: [EMAIL PROTECTED]
Blog: http://blogs.sun.com/robinguo



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool history not found

2007-09-17 Thread Robin Guo
Hi, Sunnie,

  'zpool history' was only introduced in ZFS pool version 4.
You can check the version info below and pick bits from build 62 or later, which
correspond to version 4.

# zpool upgrade -v
This system is currently running ZFS pool version 8.

The following versions are supported:

VER  DESCRIPTION
---  
 1   Initial ZFS version
 2   Ditto blocks (replicated metadata)
 3   Hot spares and double parity RAID-Z
 4   zpool history
 5   Compression using the gzip algorithm
 6   bootfs pool property
 7   Separate intent log devices
 8   Delegated administration
For more information on a particular version, including supported 
releases, see:

http://www.opensolaris.org/os/community/zfs/version/N

Where 'N' is the version number.
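
Once you are on a build whose pool version is at least 4, upgrading the pool enables
the command (a sketch; the pool name is illustrative):

# zpool upgrade tank
# zpool history tank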

sunnie wrote:
 My system is currently running ZFS version 3,
 and I just can't find the zpool history command.
 Can anyone help me with this problem?
  
  


-- 
Regards,

Robin Guo, Xue-Bin Guo
Solaris Kernel and Data Service QE,
Sun China Engineering and Research Institute
Phone: +86 10 82618200 +82296
Email: [EMAIL PROTECTED]

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS panic in space_map.c line 125

2007-09-17 Thread Robin Guo
Hi Matty,

  From the stack you posted, that looks like CR 6454482.
That defect has been marked 'Not reproducible'. I have no idea how to recover from it,
but it looks like newer updates will not hit this issue.

Matty wrote:
 One of our Solaris 10 update 3 servers paniced today with the following error:

 Sep 18 00:34:53 m2000ef savecore: [ID 570001 auth.error] reboot after
 panic: assertion failed: ss != NULL, file:
 ../../common/fs/zfs/space_map.c, line: 125

 The server saved a core file, and the resulting backtrace is listed below:

 $ mdb unix.0 vmcore.0
   
 $c
 
 vpanic()
 0xfb9b49f3()
 space_map_remove+0x239()
 space_map_load+0x17d()
 metaslab_activate+0x6f()
 metaslab_group_alloc+0x187()
 metaslab_alloc_dva+0xab()
 metaslab_alloc+0x51()
 zio_dva_allocate+0x3f()
 zio_next_stage+0x72()
 zio_checksum_generate+0x5f()
 zio_next_stage+0x72()
 zio_write_compress+0x136()
 zio_next_stage+0x72()
 zio_wait_for_children+0x49()
 zio_wait_children_ready+0x15()
 zio_next_stage_async+0xae()
 zio_wait+0x2d()
 arc_write+0xcc()
 dmu_objset_sync+0x141()
 dsl_dataset_sync+0x23()
 dsl_pool_sync+0x7b()
 spa_sync+0x116()
 txg_sync_thread+0x115()
 thread_start+8()

 It appears ZFS is still able to read the labels from the drive:

 $ zdb -lv  /dev/rdsk/c3t50002AC00039040Bd0p0
 
 LABEL 0
 
 version=3
 name='fpool0'
 state=0
 txg=4
 pool_guid=10406529929620343615
 top_guid=3365726235666077346
 guid=3365726235666077346
 vdev_tree
 type='disk'
 id=0
 guid=3365726235666077346
 path='/dev/dsk/c3t50002AC00039040Bd0p0'
 devid='id1,[EMAIL PROTECTED]/q'
 whole_disk=0
 metaslab_array=13
 metaslab_shift=31
 ashift=9
 asize=322117566464
 
 LABEL 1
 
 version=3
 name='fpool0'
 state=0
 txg=4
 pool_guid=10406529929620343615
 top_guid=3365726235666077346
 guid=3365726235666077346
 vdev_tree
 type='disk'
 id=0
 guid=3365726235666077346
 path='/dev/dsk/c3t50002AC00039040Bd0p0'
 devid='id1,[EMAIL PROTECTED]/q'
 whole_disk=0
 metaslab_array=13
 metaslab_shift=31
 ashift=9
 asize=322117566464
 
 LABEL 2
 
 version=3
 name='fpool0'
 state=0
 txg=4
 pool_guid=10406529929620343615
 top_guid=3365726235666077346
 guid=3365726235666077346
 vdev_tree
 type='disk'
 id=0
 guid=3365726235666077346
 path='/dev/dsk/c3t50002AC00039040Bd0p0'
 devid='id1,[EMAIL PROTECTED]/q'
 whole_disk=0
 metaslab_array=13
 metaslab_shift=31
 ashift=9
 asize=322117566464
 
 LABEL 3
 
 version=3
 name='fpool0'
 state=0
 txg=4
 pool_guid=10406529929620343615
 top_guid=3365726235666077346
 guid=3365726235666077346
 vdev_tree
 type='disk'
 id=0
 guid=3365726235666077346
 path='/dev/dsk/c3t50002AC00039040Bd0p0'
 devid='id1,[EMAIL PROTECTED]/q'
 whole_disk=0
 metaslab_array=13
 metaslab_shift=31
 ashift=9
 asize=322117566464

 But for some reason it is unable to open the pool:

 $ zdb -c fpool0
 zdb: can't open fpool0: error 2

 I saw several bugs related to space_map.c, but the stack traces listed
 in the bug reports were different than the one listed above.  Has
 anyone seen this bug before? Is there anyway to recover from it?

 Thanks for any insight,
 - Ryan
   


-- 
Regards,

Robin Guo, Xue-Bin Guo
Solaris Kernel and Data Service QE,
Sun China Engineering and Research Institute
Phone: +86 10 82618200 +82296
Email: [EMAIL PROTECTED]

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss