Re: [zfs-discuss] snv_123 to S10U10

2012-02-03 Thread Karl Rossing
I zpool imported the pool from b123 to S10U10 without a problem. I just 
had to run:

  svcadm enable svc:/system/iscsitgt:default
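
In case it helps anyone else doing this migration, here is a minimal 
sketch of the whole sequence, assuming the zvols are shared with the 
pre-COMSTAR shareiscsi property (which is stored in the pool itself, so 
it travels with the import):

  # on the old snv_123 host: cleanly export the pool
  zpool export vdipool

  # on the new S10U10 host: import it and start the old iSCSI target daemon
  zpool import vdipool
  svcadm enable svc:/system/iscsitgt:default

  # verify the zvols are still flagged for iSCSI sharing
  zfs get -r shareiscsi vdipool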


I was seeing cksum errors on b123 last year 
http://www.mail-archive.com/zfs-discuss@opensolaris.org/msg45345.html 
and lived with them.


On S10U10 I'm still seeing the cksum errors, but I don't see the xxxK 
repaired anymore.


bash-3.2# zpool status vdipool
  pool: vdipool
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-9P
 scan: none requested
config:

        NAME                       STATE     READ WRITE CKSUM
        vdipool                    ONLINE       0     0     0
          raidz1-0                 ONLINE       0     0     0
            c0t5000C500103F2057d0  ONLINE       0     0     3
            c0t5000C5000440AA0Bd0  ONLINE       0     0     4
            c0t5000C500103E9FFBd0  ONLINE       0     0     2
            c0t5000C500103E370Fd0  ONLINE       0     0     0
            c0t5000C500103E120Fd0  ONLINE       0     0     4
        logs
          mirror-1                 ONLINE       0     0     0
            c0t500151795955D430d0  ONLINE       0     0     0
            c0t500151795955BDB6d0  ONLINE       0     0     0
        cache
          c0t5001517BB271845Dd0    ONLINE       0     0     0
        spares
          c0t5000C500103E368Fd0    AVAIL
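
Since the repaired-bytes note is gone from the status output, the 
underlying checksum ereports can still be inspected through FMA. A 
suggestion, not output I've captured from this box:

  fmdump -e     # one line per error event, with its class (e.g. ereport.fs.zfs.checksum)
  fmdump -eV    # full detail, including which pool and vdev logged the event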

I have been running a periodic 'zpool clear vdipool'.
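
For anyone automating that while investigating, a crontab entry along 
these lines would do it (the hourly schedule is purely illustrative):

  # clear the counters at the top of every hour
  0 * * * * /usr/sbin/zpool clear vdipool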

I'm going to take a snapshot over the weekend, back up the data, and 
then run a scrub for good measure. Here are the recent clears, from 
'zpool history vdipool':

2012-02-02.18:42:12 zpool clear vdipool
2012-02-02.19:33:33 zpool clear vdipool
2012-02-02.19:34:40 zpool clear vdipool
2012-02-02.19:45:30 zpool clear vdipool
2012-02-02.19:53:27 zpool clear vdipool
2012-02-02.22:59:57 zpool clear vdipool
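
For the record, the weekend plan in commands; the snapshot name and 
backup destination below are made up:

  # recursive snapshot of the pool and everything in it
  zfs snapshot -r vdipool@pre-scrub

  # serialize the whole tree to a backup file (destination is hypothetical)
  zfs send -R vdipool@pre-scrub > /backup/vdipool-pre-scrub.zsend

  # then scrub and watch the CKSUM counters
  zpool scrub vdipool
  zpool status vdipool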

So could zpool v29 be silently correcting the errors left over from the 
bugs ( 
http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#RAID-Z_Checksum_Errors_in_Nevada_Builds.2C_120-123 
) I hit in b123?


Karl

On 01/31/2012 11:20 AM, Karl Rossing wrote:
I'm going to be moving a non-root storage pool from snv_123 (I think 
it's pre-COMSTAR) to an S10U10 box.


I have some zfs iscsi volumes on the pool. I'm wondering if 'zpool 
export vdipool' on the old system and 'zpool import vdipool' on the new 
system will work. Do I need to run any other commands to save the 
iscsi configuration on the old system?


Thanks
Karl





