Re: [zfs-discuss] create mirror copy of existing zfs stack

2010-09-20 Thread Jerry Kemp

In your first sentence, you indicate you are using Update 9 (Sol 10 9/10).

Where you have cat'ed out your /etc/release file, this shows that you 
are using Update 8, which predates the zpool split feature.


I am not certain whether you can patch an Update 8 system to current levels 
and get the features included in Update 9.  It is certainly a possibility, 
as new ZFS features have been delivered in past patches.


Another indicator to me that you are not on Update 9 is your kernel 
revision.  Your uname output indicates kernel 141444-09; Update 9 ships 
with kernel rev 142909-17.
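
A quick way to check directly whether the installed ZFS bits include the 
feature (a sketch, assuming only stock Solaris 10 commands): running zpool 
with no arguments prints a usage summary listing the subcommands this build 
knows, and zpool upgrade -v lists the pool versions it can support.

# zpool 2>&1 | grep split
# zpool upgrade -v

If the usage text has no "split" line, the feature is not in the installed 
bits, regardless of how the pools themselves are versioned.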


If you need Update 9 to get this feature, you should be able to go to 
Oracle's download site.


Also, regardless of how you upgrade (Live Upgrade, etc.), be aware of the 
new auto registration ("auto reg") issue.


Hope this helps,

Jerry

--

Linux infrequently disappoints but when it does so, it disappoints so 
brutally that you momentarily forget how badly Windows burned you.



On 09/21/10 00:47, sridhar surampudi wrote:

I am using the Solaris 10 9/10 SPARC version.

Following are the outputs of /etc/release and uname -a, respectively.

bash-3.00# cat /etc/release
   Solaris 10 10/09 s10s_u8wos_08a SPARC
Copyright 2009 Sun Microsystems, Inc.  All Rights Reserved.
 Use is subject to license terms.
Assembled 16 September 2009
bash-3.00# uname -a
SunOS oigtsol12 5.10 Generic_141444-09 sun4u sparc SUNW,Sun-Fire-V440

I think zpool split is available from build 135. I am not sure which build 
corresponds to the version I am using.

I have also tried zpool upgrade -a but didn't find any difference.

Thanks & Regards,
sridhar.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] create mirror copy of existing zfs stack

2010-09-20 Thread sridhar surampudi
I am using the Solaris 10 9/10 SPARC version.

Following are the outputs of /etc/release and uname -a, respectively.

bash-3.00# cat /etc/release 
  Solaris 10 10/09 s10s_u8wos_08a SPARC
   Copyright 2009 Sun Microsystems, Inc.  All Rights Reserved.
Use is subject to license terms.
   Assembled 16 September 2009
bash-3.00# uname -a
SunOS oigtsol12 5.10 Generic_141444-09 sun4u sparc SUNW,Sun-Fire-V440

I think zpool split is available from build 135. I am not sure which build 
corresponds to the version I am using.

I have also tried zpool upgrade -a but didn't find any difference.

Thanks & Regards,
sridhar.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Possible to save custom properties on a zfs file system?

2010-09-20 Thread Peter Taps
Thank you all for your help.

Can properties be set on file systems as well as pools? When I try the 
"zpool set" command with a local property, I get an error: "invalid property."

Regards,
Peter
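
For context on the distinction behind the question (an illustration only; the 
dataset name and property below are made up, not from this thread): custom 
"user" properties are a dataset-level feature, their names must contain a 
colon, and they are set with zfs rather than zpool, which is why zpool set 
rejects them as an invalid property on the releases discussed here.

# zfs set com.example:note="scratch data" tank/home
# zfs get com.example:note tank/home
# zpool set com.example:note=x tank    (fails: not a valid pool property)
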
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Cannot access dataset

2010-09-20 Thread Valerio Piancastelli
Many thanks again. 
I'm leaving the datasets there for as long as you need them. 
Let me know if I can help with anything.
Bye

Valerio Piancastelli
piancaste...@iclos.com

- Original Message -
From: "Victor Latushkin" 
To: "Valerio Piancastelli" 
Sent: Tuesday, 21 September 2010 0:37:57
Subject: Re: [zfs-discuss] Cannot access dataset


On Sep 21, 2010, at 2:28 AM, Valerio Piancastelli wrote:

> Object  lvl   iblk   dblk  dsize  lsize   %full  type
>      3    1    16K    512     1K    512  100.00  ZFS directory
>                               264   bonus  ZFS znode
> dnode flags: USED_BYTES USERUSED_ACCOUNTED
> dnode maxblkid: 0
> path    /
> uid     777
> gid     0
> atime   Fri Jun 26 13:13:46 2009
> mtime   Fri Jun 26 13:14:17 2009
> ctime   Wed Sep  1 15:02:24 2010
> crtime  Fri Jun 26 13:13:46 2009
> gen     38459
> mode    777

Looks like the mode on the root directory here is wrong again. So far we have a 
pattern: in at least three cases the problem is related to the mode value of the 
filesystem root directory.

Victor
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] create mirror copy of existing zfs stack

2010-09-20 Thread Ian Collins

On 09/21/10 06:52 AM, sridhar surampudi wrote:

Thank you for your quick reply.

When I run the command below it shows:
bash-3.00# zpool upgrade
This system is currently running ZFS pool version 15.

All pools are formatted using this version.


How can I upgrade to newer zpool and zfs versions so that I can have zpool split 
capability?
I am a bit new to Solaris and ZFS.
Could you please help me with how to upgrade?


What operating system version are you running?

--
Ian.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Cannot access dataset

2010-09-20 Thread Valerio Piancastelli

With ls -li I get no inode number:
#ls -li
? ?- ? ?   ??? pgsql2

anyway :
# zdb -ddd store/nfs/APPS/prod/pgsql2
Dataset store/nfs/APPS/prod/pgsql2 [ZPL], ID 4595, cr_txg 96929, 18.8G, 6 
objects

Deadlist: 0 (0/0 comp)

Object  lvl   iblk   dblk  dsize  lsize   %full  type
     0    7    16K    16K  15.0K    16K   18.75  DMU dnode
    -1    1    16K    512     1K    512  100.00  ZFS user/group used
    -2    1    16K    512     1K    512  100.00  ZFS user/group used
     1    1    16K    512     1K    512  100.00  ZFS master node
     2    1    16K    512     1K    512  100.00  ZFS delete queue
     3    1    16K    512     1K    512  100.00  ZFS directory
     4    1    16K    512     1K    512  100.00  ZFS directory
     5    1    16K    512     1K    512  100.00  ZFS directory
     6    4    16K   128K  18.8G   128G   14.70  ZFS plain file

and more verbose:

# zdb - store/nfs/APPS/prod/pgsql2
Dataset store/nfs/APPS/prod/pgsql2 [ZPL], ID 4595, cr_txg 96929, 18.8G, 6 
objects, rootbp DVA[0]=<1:322f45fe00:200> DVA[1]=<0:5514dba600:200> [L0 DMU 
objset] fletcher4 lzjb LE contiguous unique double size=800L/200P 
birth=177581L/177581P fill=6 
cksum=170d0f10c3:78d4a37a631:1520d86b3b22d:29c8b0b6752f94

Deadlist: 0 (0/0 comp)


Object  lvl   iblk   dblk  dsize  lsize   %full  type
     0    7    16K    16K  15.0K    16K   18.75  DMU dnode
dnode flags: USED_BYTES
dnode maxblkid: 0

Object  lvl   iblk   dblk  dsize  lsize   %full  type
    -1    1    16K    512     1K    512  100.00  ZFS user/group used
dnode flags: USED_BYTES
dnode maxblkid: 0
microzap: 512 bytes, 2 entries

0 = 1536
309 = 20215323136

Object  lvl   iblk   dblk  dsize  lsize   %full  type
    -2    1    16K    512     1K    512  100.00  ZFS user/group used
dnode flags: USED_BYTES
dnode maxblkid: 0
microzap: 512 bytes, 2 entries

0 = 3072
ea61 = 20215321600

Object  lvl   iblk   dblk  dsize  lsize   %full  type
     1    1    16K    512     1K    512  100.00  ZFS master node
dnode flags: USED_BYTES USERUSED_ACCOUNTED
dnode maxblkid: 0
microzap: 512 bytes, 7 entries

SHARES = 4
normalization = 0
ROOT = 3
DELETE_QUEUE = 2
utf8only = 0
VERSION = 3
casesensitivity = 0

Object  lvl   iblk   dblk  dsize  lsize   %full  type
     2    1    16K    512     1K    512  100.00  ZFS delete queue
dnode flags: USED_BYTES USERUSED_ACCOUNTED
dnode maxblkid: 0
microzap: 512 bytes, 0 entries


Object  lvl   iblk   dblk  dsize  lsize   %full  type
     3    1    16K    512     1K    512  100.00  ZFS directory
                              264   bonus  ZFS znode
dnode flags: USED_BYTES USERUSED_ACCOUNTED
dnode maxblkid: 0
path    /
uid 777
gid 0
atime   Fri Jun 26 13:13:46 2009
mtime   Fri Jun 26 13:14:17 2009
ctime   Wed Sep  1 15:02:24 2010
crtime  Fri Jun 26 13:13:46 2009
gen 38459
mode    777
size    3
parent  3
links   3
pflags  4080144
xattr   0
rdev0x
microzap: 512 bytes, 1 entries

root = 5 (type: Directory)

Object  lvl   iblk   dblk  dsize  lsize   %full  type
     4    1    16K    512     1K    512  100.00  ZFS directory
                              264   bonus  ZFS znode
dnode flags: USED_BYTES USERUSED_ACCOUNTED
dnode maxblkid: 0
path    /
uid 0
gid 0
atime   Fri Jun 26 13:13:46 2009
mtime   Fri Jun 26 13:13:46 2009
ctime   Fri Jun 26 13:13:46 2009
crtime  Fri Jun 26 13:13:46 2009
gen 38459
mode    40555
size    2
parent  4
links   2
pflags  4080044
xattr   0
rdev0x
microzap: 512 bytes, 0 entries


Object  lvl   iblk   dblk  dsize  lsize   %full  type
     5    1    16K    512     1K    512  100.00  ZFS directory
                              264   bonus  ZFS znode
dnode flags: USED_BYTES USERUSED_ACCOUNTED
dnode maxblkid: 0
path    /root
uid 777
gid 60001
atime   Fri Jun 26 13:14:17 2009
mtime   Fri Jun 26 13:14:55 2009
ctime   Wed Sep  1 15:02:24 2010
crtime  Fri Jun 26 13:14:17 2009
gen 38462
mode    40777
size    3
parent  3
links   2
pflags  4080144
xattr   0
rdev0x
microzap: 512 bytes, 1 entries

vdisk.vmdk = 6 (type: Regular File)

Re: [zfs-discuss] Cannot access dataset

2010-09-20 Thread Valerio Piancastelli
Here it is:

r...@disk-01:~# zdb -d sas/mail-cts 3
Dataset sas/mail-cts [ZPL], ID 30, cr_txg 8, 149G, 6 objects, rootbp [L0 DMU 
objset] 400L/200P DVA[0]=<0:1e00076000:200> DVA[1]=<0:a00085400:200> fletcher4 
lzjb LE contiguous birth=1314 fill=6 
cksum=abcdfba24:473e07d1142:ef9bd7e3b1d9:220b0c2e1d1c63

Deadlist: 8 entries, 16K

Item   0: 0:a53183000:200 400L/200P F=0 B=954
Item   1: 0:a53182c00:400 4000L/400P F=0 B=954
Item   2: 0:a53182800:400 4000L/400P F=0 B=954
Item   3: 0:a53182400:400 4000L/400P F=0 B=954
Item   4: 0:a4767fc00:400 4000L/400P F=0 B=954
Item   5: 0:a4767f800:400 4000L/400P F=0 B=954
Item   6: 0:a4767f400:400 4000L/400P F=0 B=954
Item   7: 0:a53181e00:600 4000L/600P F=0 B=954

Object  lvl   iblk   dblk  lsize  asize  type
     3    1    16K    512    512     1K  ZFS directory
                              264  bonus  ZFS znode
path    /
uid 777
gid 0
atime   Mon Sep 20 21:17:34 2010
mtime   Mon Sep 20 21:17:17 2010
ctime   Mon Sep 20 21:17:17 2010
crtime  Sun Oct 18 00:11:29 2009
gen 444361
mode    40777
size    3
parent  3
links   2
xattr   0
rdev0x
microzap: 512 bytes, 1 entries

vdisk.raw = 7 (type: Regular File)
Indirect blocks:
   0 L0 0:d600:200 200L/200P F=1 B=9

segment [, 0200) size   512



Valerio Piancastelli
piancaste...@iclos.com

- Original Message -
From: "Victor Latushkin" 
To: "Valerio Piancastelli" 
Cc: "zfs-discuss" 
Sent: Monday, 20 September 2010 22:37:29
Subject: Re: [zfs-discuss] Cannot access dataset


On Sep 20, 2010, at 9:03 PM, Valerio Piancastelli wrote:

> In the original server, I also have a dataset that shows this:
> 
> #ls -l
> ?-  ? ?   ? ?? pgsql2

could you please show output of 'zdb -d  3' ?

victor


> 
> I'm sending you the IP and credentials to have ssh access.
> Please notify me when you don't need it anymore.
> 
> Valerio Piancastelli
> +39 348 8072760
> piancaste...@iclos.com
> 
> - Original Message -
> From: "Victor Latushkin" 
> To: "Valerio Piancastelli" 
> Cc: "mark musante" , "zfs-discuss" 
> 
> Sent: Monday, 20 September 2010 18:37:09
> Subject: Re: [zfs-discuss] Cannot access dataset
> 
> 
> On Sep 20, 2010, at 7:23 PM, Valerio Piancastelli wrote:
> 
>> Unfortunately not.
>> 
>> When I do 
>> 
>> # /usr/bin/ls -lv /sas/mail-cts
>> brwxrwxrwx 2 777 root 0, 0 Oct 18  2009 
>> /volumes/store/nfs/ICLOS/prod/mail-cts
>> 
>> it seems to be a block device:
> 
> Yes, it looks like we have a bad mode field value in the znode for the root 
> directory object of this filesystem. Recently there was a similar issue, but a 
> different set of bits was set in the mode field, so 'ls -l' was displaying the 
> directory as question marks.
> 
> Can you provide ssh or Shared Shell (http://sun.com/123) access to your 
> system?
> 
> Regards
> Victor
> 
> 
>> 
>> # stat /sas/mail-cts 
>> File: `/sas/mail-cts'
>> Size: 3   Blocks: 3  IO Block: 512block special file
>> Device: 2d90062h/47775842d  Inode: 3   Links: 2 Device type: 
>> 0,0
>> Access: (0777/brwxrwxrwx)  Uid: (  777/ UNKNOWN)   Gid: (0/root)
>> Access: 2009-10-18 00:11:29.526578221 +0200
>> Modify: 2009-10-18 00:49:05.501732926 +0200
>> Change: 2010-09-17 19:32:10.113622993 +0200
>> 
>> if i do
>> 
>> # /usr/bin/ls -lv /sas/mail-cts/
>> /usr/bin/ls: /volumes/store/nfs/ICLOS/prod/mail-cts/: Not a directory
>> 
>> # stat /sas/mail-cts/
>> stat: cannot stat `/sas/mail-cts/': Not a directory
>> 
>> it seems that "something" turned the directory into a block file
>> 
>> 
>> Valerio Piancastelli
>> +39 348 8072760
>> piancaste...@iclos.com
>> 
>> - Original Message -
>> From: "Mark J Musante" 
>> To: "Valerio Piancastelli" 
>> Cc: "zfs-discuss" 
>> Sent: Monday, 20 September 2010 17:18:01
>> Subject: Re: [zfs-discuss] Cannot access dataset
>> 
>> On Mon, 20 Sep 2010, Valerio Piancastelli wrote:
>> 
>>> Yes, it is mounted
>>> 
>>> r...@disk-00:/volumes/store# zfs get mounted sas/mail-cts
>>> NAME          PROPERTY  VALUE  SOURCE
>>> sas/mail-cts  mounted   yes    -
>> 
>> OK - so the next question would be where the data is.  I assume when you 
>> say you "cannot access" the dataset, it means when you type ls -l 
>> /sas/mail-cts it shows up as an empty directory.  Is that true?
>> 
>> With luck, the data will still be in a snapshot.  Given that the dataset 
>> has 149G referenced, it could be all there.  Does 'zfs list -rt snapshot 
>> sas/mail-cts' list any?  If so, you can try using the most recent snapshot 
>> by looking in /sas/mail-cts/.zfs/snapshot/ and seeing if 
>> all your data are there.  If it looks good, you can zfs rollback to that 
>> snapshot.
>> ___
>> zfs-discuss

Re: [zfs-discuss] Cannot access dataset

2010-09-20 Thread Victor Latushkin

On Sep 20, 2010, at 9:03 PM, Valerio Piancastelli wrote:

> In the original server, I also have a dataset that shows this:
> 
> #ls -l
> ?-  ? ?   ? ?? pgsql2

could you please show output of 'zdb -d  3' ?

victor


> 
> I'm sending you the IP and credentials to have ssh access.
> Please notify me when you don't need it anymore.
> 
> Valerio Piancastelli
> +39 348 8072760
> piancaste...@iclos.com
> 
> - Original Message -
> From: "Victor Latushkin" 
> To: "Valerio Piancastelli" 
> Cc: "mark musante" , "zfs-discuss" 
> 
> Sent: Monday, 20 September 2010 18:37:09
> Subject: Re: [zfs-discuss] Cannot access dataset
> 
> 
> On Sep 20, 2010, at 7:23 PM, Valerio Piancastelli wrote:
> 
>> Unfortunately not.
>> 
>> When I do 
>> 
>> # /usr/bin/ls -lv /sas/mail-cts
>> brwxrwxrwx 2 777 root 0, 0 Oct 18  2009 
>> /volumes/store/nfs/ICLOS/prod/mail-cts
>> 
>> it seems to be a block device:
> 
> Yes, it looks like we have a bad mode field value in the znode for the root 
> directory object of this filesystem. Recently there was a similar issue, but a 
> different set of bits was set in the mode field, so 'ls -l' was displaying the 
> directory as question marks.
> 
> Can you provide ssh or Shared Shell (http://sun.com/123) access to your 
> system?
> 
> Regards
> Victor
> 
> 
>> 
>> # stat /sas/mail-cts 
>> File: `/sas/mail-cts'
>> Size: 3   Blocks: 3  IO Block: 512block special file
>> Device: 2d90062h/47775842d  Inode: 3   Links: 2 Device type: 
>> 0,0
>> Access: (0777/brwxrwxrwx)  Uid: (  777/ UNKNOWN)   Gid: (0/root)
>> Access: 2009-10-18 00:11:29.526578221 +0200
>> Modify: 2009-10-18 00:49:05.501732926 +0200
>> Change: 2010-09-17 19:32:10.113622993 +0200
>> 
>> if i do
>> 
>> # /usr/bin/ls -lv /sas/mail-cts/
>> /usr/bin/ls: /volumes/store/nfs/ICLOS/prod/mail-cts/: Not a directory
>> 
>> # stat /sas/mail-cts/
>> stat: cannot stat `/sas/mail-cts/': Not a directory
>> 
>> it seems that "something" turned the directory into a block file
>> 
>> 
>> Valerio Piancastelli
>> +39 348 8072760
>> piancaste...@iclos.com
>> 
>> - Original Message -
>> From: "Mark J Musante" 
>> To: "Valerio Piancastelli" 
>> Cc: "zfs-discuss" 
>> Sent: Monday, 20 September 2010 17:18:01
>> Subject: Re: [zfs-discuss] Cannot access dataset
>> 
>> On Mon, 20 Sep 2010, Valerio Piancastelli wrote:
>> 
>>> Yes, it is mounted
>>> 
>>> r...@disk-00:/volumes/store# zfs get mounted sas/mail-cts
>>> NAME          PROPERTY  VALUE  SOURCE
>>> sas/mail-cts  mounted   yes    -
>> 
>> OK - so the next question would be where the data is.  I assume when you 
>> say you "cannot access" the dataset, it means when you type ls -l 
>> /sas/mail-cts it shows up as an empty directory.  Is that true?
>> 
>> With luck, the data will still be in a snapshot.  Given that the dataset 
>> has 149G referenced, it could be all there.  Does 'zfs list -rt snapshot 
>> sas/mail-cts' list any?  If so, you can try using the most recent snapshot 
>> by looking in /sas/mail-cts/.zfs/snapshot/ and seeing if 
>> all your data are there.  If it looks good, you can zfs rollback to that 
>> snapshot.
>> ___
>> zfs-discuss mailing list
>> zfs-discuss@opensolaris.org
>> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
> 
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] create mirror copy of existing zfs stack

2010-09-20 Thread sridhar surampudi
Thank you for your quick reply.

When I run the command below it shows:
bash-3.00# zpool upgrade
This system is currently running ZFS pool version 15.

All pools are formatted using this version.


How can I upgrade to newer zpool and zfs versions so that I can have zpool split 
capability?
I am a bit new to Solaris and ZFS.
Could you please help me with how to upgrade?
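
One point worth spelling out (a sketch, assuming stock Solaris 10 commands): 
zpool upgrade and zfs upgrade can only raise the on-disk versions as far as the 
ZFS software that is currently running supports, so on a pool-version-15 system 
they change nothing. The zpool split subcommand itself comes with the operating 
system upgrade; the version-upgrade commands are only for picking up newer 
on-disk features afterwards:

# zpool upgrade -v      (lists the pool versions the running software supports)
# zpool upgrade -a      (upgrades all pools to the highest supported version)
# zfs upgrade -a        (likewise for file system versions)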

Thanks & Regards,
sridhar.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] create mirror copy of existing zfs stack

2010-09-20 Thread Freddie Cash
On Mon, Sep 20, 2010 at 11:03 AM, sridhar surampudi
 wrote:
> I have a mirror pool tank having two devices underneath. Created in this way
>
> #zpool create tank mirror  c3t500507630E020CEAd1  c3t500507630E020CEAd0
>
> Created file system tank/home
> #zfs create tank/home
>
> Created another file system tank/home/sridhar
> #zfs create tank/home/sridhar
> After that I have created files and directories under tank/home and 
> tank/home/sridhar.
>
> Now I detached the 2nd device, i.e. c3t500507630E020CEAd0.
>
> Since the above device was part of the mirror pool, my guess is that it will
> have a copy of the data that was on the other device up to the point of detach,
> along with metadata for the same pool name and the file systems created.
>
> The question is: is there any way I can create a new, renamed stack by giving a
> new pool name to this detached device, and then access the same data on
> c3t500507630E020CEAd0 that was created while it was part of the mirrored pool
> under tank?

If your ZFS version is new enough, there is a "zpool split" command
you can use for just this purpose.  It splits each mirror vdev in half
and assigns a new pool name to the drive being removed.  You can then
import that new pool, giving you a duplicate of the original pool.
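
A sketch of what that looks like with the pool and device names from the
original post (it assumes a build that has the split subcommand; the new pool
name "tank2" is made up):

# zpool split tank tank2 c3t500507630E020CEAd0
# zpool import tank2
# zfs list -r tank2

The split writes a new pool label to the detached device, so the existing
datasets (tank2/home, tank2/home/sridhar) come along and only need to be
imported under the new name.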


-- 
Freddie Cash
fjwc...@gmail.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] create mirror copy of existing zfs stack

2010-09-20 Thread sridhar surampudi
Hi,

I have a mirror pool tank having two devices underneath. Created in this way

#zpool create tank mirror  c3t500507630E020CEAd1  c3t500507630E020CEAd0  

Created file system tank/home
#zfs create tank/home

Created another file system tank/home/sridhar
#zfs create tank/home/sridhar
After that I have created files and directories under tank/home and 
tank/home/sridhar.

Now I detached the 2nd device, i.e. c3t500507630E020CEAd0.

Since the above device was part of the mirror pool, my guess is that it will have 
a copy of the data that was on the other device up to the point of detach, along 
with metadata for the same pool name and the file systems created.

The question is: is there any way I can create a new, renamed stack by giving a 
new pool name to this detached device, and then access the same data on 
c3t500507630E020CEAd0 that was created while it was part of the mirrored pool 
under tank?

Thanks & Regards,
sridhar.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Cannot access dataset

2010-09-20 Thread Valerio Piancastelli
In the original server, I also have a dataset that shows this:

#ls -l
?-  ? ?   ? ?? pgsql2

I'm sending you the IP and credentials to have ssh access.
Please notify me when you don't need it anymore.

Valerio Piancastelli
+39 348 8072760
piancaste...@iclos.com

- Original Message -
From: "Victor Latushkin" 
To: "Valerio Piancastelli" 
Cc: "mark musante" , "zfs-discuss" 

Sent: Monday, 20 September 2010 18:37:09
Subject: Re: [zfs-discuss] Cannot access dataset


On Sep 20, 2010, at 7:23 PM, Valerio Piancastelli wrote:

> Unfortunately not.
> 
> When I do 
> 
> # /usr/bin/ls -lv /sas/mail-cts
> brwxrwxrwx 2 777 root 0, 0 Oct 18  2009 /volumes/store/nfs/ICLOS/prod/mail-cts
> 
> it seems to be a block device:

Yes, it looks like we have a bad mode field value in the znode for the root 
directory object of this filesystem. Recently there was a similar issue, but a 
different set of bits was set in the mode field, so 'ls -l' was displaying the 
directory as question marks.

Can you provide ssh or Shared Shell (http://sun.com/123) access to your system?

Regards
Victor


> 
> # stat /sas/mail-cts 
>  File: `/sas/mail-cts'
>  Size: 3   Blocks: 3  IO Block: 512block special file
> Device: 2d90062h/47775842d  Inode: 3   Links: 2 Device type: 
> 0,0
> Access: (0777/brwxrwxrwx)  Uid: (  777/ UNKNOWN)   Gid: (0/root)
> Access: 2009-10-18 00:11:29.526578221 +0200
> Modify: 2009-10-18 00:49:05.501732926 +0200
> Change: 2010-09-17 19:32:10.113622993 +0200
> 
> if i do
> 
> # /usr/bin/ls -lv /sas/mail-cts/
> /usr/bin/ls: /volumes/store/nfs/ICLOS/prod/mail-cts/: Not a directory
> 
> # stat /sas/mail-cts/
> stat: cannot stat `/sas/mail-cts/': Not a directory
> 
> it seems that "something" turned the directory into a block file
> 
> 
> Valerio Piancastelli
> +39 348 8072760
> piancaste...@iclos.com
> 
> - Original Message -
> From: "Mark J Musante" 
> To: "Valerio Piancastelli" 
> Cc: "zfs-discuss" 
> Sent: Monday, 20 September 2010 17:18:01
> Subject: Re: [zfs-discuss] Cannot access dataset
> 
> On Mon, 20 Sep 2010, Valerio Piancastelli wrote:
> 
>> Yes, it is mounted
>> 
>> r...@disk-00:/volumes/store# zfs get mounted sas/mail-cts
>> NAME          PROPERTY  VALUE  SOURCE
>> sas/mail-cts  mounted   yes    -
> 
> OK - so the next question would be where the data is.  I assume when you 
> say you "cannot access" the dataset, it means when you type ls -l 
> /sas/mail-cts it shows up as an empty directory.  Is that true?
> 
> With luck, the data will still be in a snapshot.  Given that the dataset 
> has 149G referenced, it could be all there.  Does 'zfs list -rt snapshot 
> sas/mail-cts' list any?  If so, you can try using the most recent snapshot 
> by looking in /sas/mail-cts/.zfs/snapshot/ and seeing if 
> all your data are there.  If it looks good, you can zfs rollback to that 
> snapshot.
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Has anyone seen zpool corruption with VirtualBox shared folders?

2010-09-20 Thread Warren Strange
Just following up...

I reran memtest diagnostics and let it run overnight again.   This time I did 
see some memory errors - which would be the most likely explanation for the 
errors I am seeing.

 Faulty hardware strikes again. 


Thanks to all for the advice.

Warren


> Comments below...
> 
> On Sep 12, 2010, at 2:56 PM, Warren Strange wrote:
> >> So we are clear, you are running VirtualBox on ZFS,
> >> rather than ZFS on VirtualBox?
> >> 
> > 
> > Correct
> > 
> >> 
> >> Bad power supply, HBA, cables, or other common cause.
> >> To help you determine the sort of corruption, for mirrored pools FMA will
> >> record the nature of the discrepancies.
> >>    fmdump -eV
> >> will show a checksum error and the associated bitmap comparisons.
> > 
> > Below are the errors reported from the two disks. Not sure if anything
> > looks suspicious (other than the obvious checksum error)
> > 
> > Sep 10 2010 12:49:42.315641690 ereport.fs.zfs.checksum
> > nvlist version: 0
> > class = ereport.fs.zfs.checksum
> > ena = 0x95816e82e2900401
> > detector = (embedded nvlist)
> > nvlist version: 0
> > version = 0x0
> > scheme = zfs
> > pool = 0xf3cb5e110f2c88ec
> > vdev = 0x961d9b28c1440020
> > (end detector)
> > 
> > pool = tank
> > pool_guid = 0xf3cb5e110f2c88ec
> > pool_context = 0
> > pool_failmode = wait
> > vdev_guid = 0x961d9b28c1440020
> > vdev_type = disk
> > vdev_path = /dev/dsk/c8t5d0s0
> > vdev_devid = id1,s...@sata_wdc_wd15eads-00p_wd-wcavu0351361/a
> > parent_guid = 0xdae51838a62627b9
> > parent_type = mirror
> > zio_err = 50
> > zio_offset = 0x1ef6813a00
> > zio_size = 0x2
> > zio_objset = 0x10
> > zio_object = 0x1402f
> > zio_level = 0
> > zio_blkid = 0x76f
> > cksum_expected = 0x405288851d24 0x100655c808fa2072 0xa89d11a403482052 0xf1041fd6f838c6eb
> > cksum_actual = 0x40528884fd24 0x100655c803286072 0xa89d111c8af30052 0xf0fbe93b4f02c6eb
> > cksum_algorithm = fletcher4
> > __ttl = 0x1
> > __tod = 0x4c8a7dc6 0x12d04f5a
> > 
> > Sep 10 2010 12:49:42.315641636 ereport.fs.zfs.checksum
> > nvlist version: 0
> > class = ereport.fs.zfs.checksum
> > ena = 0x95816e82e2900401
> > detector = (embedded nvlist)
> > nvlist version: 0
> > version = 0x0
> > scheme = zfs
> > pool = 0xf3cb5e110f2c88ec
> > vdev = 0x969570b704d5bff1
> > (end detector)
> > 
> > pool = tank
> > pool_guid = 0xf3cb5e110f2c88ec
> > pool_context = 0
> > pool_failmode = wait
> > vdev_guid = 0x969570b704d5bff1
> > vdev_type = disk
> > vdev_path = /dev/dsk/c8t4d0s0
> > vdev_devid = id1,s...@sata_st31500341as9vs3b4cp/a
> > parent_guid = 0xdae51838a62627b9
> > parent_type = mirror
> > zio_err = 50
> > zio_offset = 0x1ef6813a00
> > zio_size = 0x2
> > zio_objset = 0x10
> > zio_object = 0x1402f
> > zio_level = 0
> > zio_blkid = 0x76f
> > cksum_expected = 0x405288851d24 0x100655c808fa2072 0xa89d11a403482052 0xf1041fd6f838c6eb
> > cksum_actual = 0x40528884fd24 0x100655c803286072 0xa89d111c8af30052 0xf0fbe93b4f02c6eb
> > cksum_algorithm = fletcher4
> > __ttl = 0x1
> > __tod = 0x4c8a7dc6 0x12d04f24
> 
> In the case where one side of the mirror is corrupted and the other is correct,
> then you will be shown the difference between the two, in the form of an
> abbreviated bitmap.
> 
> In this case, the data on each side of the mirror is the same, with a large
> degree of confidence. So the source of the corruption is likely to be the same
> -- some common component: CPU, RAM, HBA, I/O path, etc. You can rule out the
> disks as suspects. With some additional experiments you can determine if the
> corruption occurred during the write or the read.
>  -- richard
> -- 
> OpenStorage Summit, October 25-27, Palo Alto, CA
> http://nexenta-summit2010.eventbrite.com
> 
> Richard Elling
> rich...@nexenta.com   +1-760-896-4422
> Enterprise class storage for everyone
> www.nexenta.com
> 
> 
> 
> 
> 
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Cannot access dataset

2010-09-20 Thread Victor Latushkin

On Sep 20, 2010, at 7:23 PM, Valerio Piancastelli wrote:

> Unfortunately not.
> 
> When I do 
> 
> # /usr/bin/ls -lv /sas/mail-cts
> brwxrwxrwx 2 777 root 0, 0 Oct 18  2009 /volumes/store/nfs/ICLOS/prod/mail-cts
> 
> it seems to be a block device:

Yes, it looks like we have a bad mode field value in the znode for the root 
directory object of this filesystem. Recently there was a similar issue, but a 
different set of bits was set in the mode field, so 'ls -l' was displaying the 
directory as question marks.
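
For anyone following along, the value in question can be seen with a znode dump 
of the root directory object (a sketch; object 3 is the root directory of this 
dataset per the zdb output elsewhere in the thread, and the extra -d flags just 
add verbosity):

# zdb -dddd sas/mail-cts 3

zdb prints the raw mode field there; the file-type bits at the top of that value 
are what ls and stat interpret, so a corrupted value can make a directory show 
up as a block device or as a row of question marks.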

Can you provide ssh or Shared Shell (http://sun.com/123) access to your system?

Regards
Victor


> 
> # stat /sas/mail-cts 
>  File: `/sas/mail-cts'
>  Size: 3   Blocks: 3  IO Block: 512block special file
> Device: 2d90062h/47775842d  Inode: 3   Links: 2 Device type: 
> 0,0
> Access: (0777/brwxrwxrwx)  Uid: (  777/ UNKNOWN)   Gid: (0/root)
> Access: 2009-10-18 00:11:29.526578221 +0200
> Modify: 2009-10-18 00:49:05.501732926 +0200
> Change: 2010-09-17 19:32:10.113622993 +0200
> 
> if i do
> 
> # /usr/bin/ls -lv /sas/mail-cts/
> /usr/bin/ls: /volumes/store/nfs/ICLOS/prod/mail-cts/: Not a directory
> 
> # stat /sas/mail-cts/
> stat: cannot stat `/sas/mail-cts/': Not a directory
> 
> it seems that "something" turned the directory into a block file
> 
> 
> Valerio Piancastelli
> +39 348 8072760
> piancaste...@iclos.com
> 
> - Original Message -
> From: "Mark J Musante" 
> To: "Valerio Piancastelli" 
> Cc: "zfs-discuss" 
> Sent: Monday, 20 September 2010 17:18:01
> Subject: Re: [zfs-discuss] Cannot access dataset
> 
> On Mon, 20 Sep 2010, Valerio Piancastelli wrote:
> 
>> Yes, it is mounted
>> 
>> r...@disk-00:/volumes/store# zfs get mounted sas/mail-cts
>> NAME          PROPERTY  VALUE  SOURCE
>> sas/mail-cts  mounted   yes    -
> 
> OK - so the next question would be where the data is.  I assume when you 
> say you "cannot access" the dataset, it means when you type ls -l 
> /sas/mail-cts it shows up as an empty directory.  Is that true?
> 
> With luck, the data will still be in a snapshot.  Given that the dataset 
> has 149G referenced, it could be all there.  Does 'zfs list -rt snapshot 
> sas/mail-cts' list any?  If so, you can try using the most recent snapshot 
> by looking in /sas/mail-cts/.zfs/snapshot/ and seeing if 
> all your data are there.  If it looks good, you can zfs rollback to that 
> snapshot.
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Cannot access dataset

2010-09-20 Thread Valerio Piancastelli
Unfortunately not.

When I do 

# /usr/bin/ls -lv /sas/mail-cts
brwxrwxrwx 2 777 root 0, 0 Oct 18  2009 /volumes/store/nfs/ICLOS/prod/mail-cts

it seems to be a block device:

# stat /sas/mail-cts 
  File: `/sas/mail-cts'
  Size: 3   Blocks: 3  IO Block: 512    block special file
Device: 2d90062h/47775842d  Inode: 3   Links: 2 Device type: 0,0
Access: (0777/brwxrwxrwx)  Uid: (  777/ UNKNOWN)   Gid: (0/root)
Access: 2009-10-18 00:11:29.526578221 +0200
Modify: 2009-10-18 00:49:05.501732926 +0200
Change: 2010-09-17 19:32:10.113622993 +0200

if i do

# /usr/bin/ls -lv /sas/mail-cts/
/usr/bin/ls: /volumes/store/nfs/ICLOS/prod/mail-cts/: Not a directory

# stat /sas/mail-cts/
stat: cannot stat `/sas/mail-cts/': Not a directory

it seems that "something" turned the directory into a block file


Valerio Piancastelli
+39 348 8072760
piancaste...@iclos.com

- Original Message -
From: "Mark J Musante" 
To: "Valerio Piancastelli" 
Cc: "zfs-discuss" 
Sent: Monday, 20 September 2010 17:18:01
Subject: Re: [zfs-discuss] Cannot access dataset

On Mon, 20 Sep 2010, Valerio Piancastelli wrote:

> Yes, it is mounted
>
> r...@disk-00:/volumes/store# zfs get mounted sas/mail-cts
> NAME          PROPERTY  VALUE  SOURCE
> sas/mail-cts  mounted   yes    -

OK - so the next question would be where the data is.  I assume when you 
say you "cannot access" the dataset, it means when you type ls -l 
/sas/mail-cts it shows up as an empty directory.  Is that true?

With luck, the data will still be in a snapshot.  Given that the dataset 
has 149G referenced, it could be all there.  Does 'zfs list -rt snapshot 
sas/mail-cts' list any?  If so, you can try using the most recent snapshot 
by looking in /sas/mail-cts/.zfs/snapshot/ and seeing if 
all your data are there.  If it looks good, you can zfs rollback to that 
snapshot.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Cannot access dataset

2010-09-20 Thread Mark J Musante

On Mon, 20 Sep 2010, Valerio Piancastelli wrote:


Yes, it is mounted

r...@disk-00:/volumes/store# zfs get mounted sas/mail-cts
NAME          PROPERTY  VALUE  SOURCE
sas/mail-cts  mounted   yes    -


OK - so the next question would be where the data is.  I assume when you 
say you "cannot access" the dataset, it means when you type ls -l 
/sas/mail-cts it shows up as an empty directory.  Is that true?


With luck, the data will still be in a snapshot.  Given that the dataset 
has 149G referenced, it could be all there.  Does 'zfs list -rt snapshot 
sas/mail-cts' list any?  If so, you can try using the most recent snapshot 
by looking in /sas/mail-cts/.zfs/snapshot/ and seeing if 
all your data are there.  If it looks good, you can zfs rollback to that 
snapshot.
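
Spelled out with the dataset name from this thread (a sketch; <snapname> stands 
for whichever snapshot the list shows as most recent):

# zfs list -rt snapshot sas/mail-cts
# ls /sas/mail-cts/.zfs/snapshot/<snapname>
# zfs rollback sas/mail-cts@<snapname>

Note that a rollback discards everything written after that snapshot, so it is 
worth checking the snapshot contents first.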

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Cannot access dataset

2010-09-20 Thread Valerio Piancastelli
Yes, it is mounted

r...@disk-00:/volumes/store# zfs get mounted sas/mail-cts
NAME          PROPERTY  VALUE  SOURCE
sas/mail-cts  mounted   yes    -




Valerio Piancastelli
+39 348 8072760
piancaste...@iclos.com

- Original Message -
From: "Mark J Musante" 
To: "Valerio Piancastelli" 
Cc: "zfs-discuss" 
Sent: Monday, 20 September 2010 17:00:32
Subject: Re: [zfs-discuss] Cannot access dataset

On Mon, 20 Sep 2010, Valerio Piancastelli wrote:

> After a crash I cannot access one of my datasets anymore.
>
> ls -v cts
> brwxrwxrwx+  2 root root   0,  0 ott 18  2009 cts
>
> zfs list sas/mail-cts
> NAME   USED  AVAIL  REFER  MOUNTPOINT
> sas/mail-cts   149G   250G   149G  /sas/mail-cts
>
> As you can see, the space is referenced by this dataset, but I cannot access 
> the directory /sas/mail-cts.

Is the dataset mounted?  i.e. what does 'zfs get mounted sas/mail-cts' 
show?
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Cannot access dataset

2010-09-20 Thread Mark J Musante

On Mon, 20 Sep 2010, Valerio Piancastelli wrote:


After a crash I cannot access one of my datasets anymore.

ls -v cts
brwxrwxrwx+  2 root root   0,  0 ott 18  2009 cts

zfs list sas/mail-cts
NAME   USED  AVAIL  REFER  MOUNTPOINT
sas/mail-cts   149G   250G   149G  /sas/mail-cts

As you can see, the space is referenced by this dataset, but I cannot access 
the directory /sas/mail-cts.


Is the dataset mounted?  i.e. what does 'zfs get mounted sas/mail-cts' 
show?

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Replacing a disk never completes

2010-09-20 Thread Giovanni Tirloni
On Thu, Sep 16, 2010 at 9:36 AM, Ben Miller wrote:

> I have an X4540 running b134 where I'm replacing 500GB disks with 2TB disks
> (Seagate Constellation) and the pool seems sick now.  The pool has four
> raidz2 vdevs (8+2) where the first set of 10 disks were replaced a few
> months ago.  I replaced two disks in the second set (c2t0d0, c3t0d0) a
> couple of weeks ago, but have been unable to get the third disk to finish
> replacing (c4t0d0).
>
> I have tried the resilver for c4t0d0 four times now and the pool also comes
> up with checksum errors and a permanent error (:<0x0>).  The first
> resilver was from 'zpool replace', which came up with checksum errors.  I
> cleared the errors which triggered the second resilver (same result).  I
> then did a 'zpool scrub' which started the third resilver and also
> identified three permanent errors (the two additional were in files in
> snapshots which I then destroyed).  I then did a 'zpool clear' and then
> another scrub which started the fourth resilver attempt.  This last attempt
> identified another file with errors in a snapshot that I have now destroyed.
>
> Any ideas how to get this disk finished being replaced without rebuilding
> the pool and restoring from backup?  The pool is working, but is reporting
> as degraded and with checksum errors.
>
>
[...]

Try to run a `zpool clear pool2` and see if it clears the errors. If not, you
may have to detach `c4t0d0s0/o`.

I believe it's a bug that was fixed in recent builds.
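
As a concrete sketch of that suggestion (pool and device names are the ones used 
in this thread; the /o suffix is how ZFS labels the old half of an unfinished 
replacement):

# zpool clear pool2
# zpool status -v pool2
# zpool detach pool2 c4t0d0s0/o

zpool status -v shows whether the checksum errors and the replacing vdev are 
gone after the clear; detach the /o device only if it is still listed.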

-- 
Giovanni Tirloni
gtirl...@sysdroid.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Cannot access dataset

2010-09-20 Thread Valerio Piancastelli
After a crash I cannot access one of my datasets anymore.

ls -v cts
brwxrwxrwx+  2 root root   0,  0 ott 18  2009 cts

zfs list sas/mail-cts
NAME   USED  AVAIL  REFER  MOUNTPOINT
sas/mail-cts   149G   250G   149G  /sas/mail-cts

As you can see, the space is referenced by this dataset, but I cannot access 
the directory /sas/mail-cts.

The system is:
SunOS disk-01 5.11 snv_111b i86pc i386 i86pc Solaris

With zdb I can see the blocks referenced by this dataset.

Any ideas??

Valerio Piancastelli
piancaste...@iclos.com

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss