Re: [zfs-discuss] no pool_props for OpenSolaris 2009.06 with old SPARC hardware

2009-06-12 Thread Frank Middleton

On 06/03/09 09:10 PM, Aurélien Larcher wrote:


PS: for the record I roughly followed the steps of this blog entry =  
http://blogs.sun.com/edp/entry/moving_from_nevada_and_live


Thanks for posting this link! Building pkg with gcc 4.3.2 was an
interesting exercise, but it worked, with the additional step of
making the packages and pkgadding them. Curious as to why pkg
isn't available as a pkgadd package. Is there any reason why
someone shouldn't make them available for download? It would
make it much less painful for those of us who are OBP version
deprived - but maybe that's the point :-)

During the install cycle, I ran into this annoyance (doubtless this
is documented somewhere):

# zpool create rpool c2t2d0
creates a good rpool that can be exported and imported. But it
seems to create an EFI label, and, as documented, attempting to boot
results in a bad magic number error. Why does zpool silently create
an apparently useless disk configuration for a root pool? Anyway,
it was a good opportunity to test zfs send/recv of a root pool (it
worked like a charm).
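A sketch of such a root-pool replication via zfs send/recv; the pool and snapshot names here are illustrative, not the exact ones used:

```shell
# Replicate a root pool to a pool on another disk.
# "rpool", "newpool", and "@migrate" are illustrative names.
zfs snapshot -r rpool@migrate        # recursive snapshot of every dataset
zfs send -R rpool@migrate | \
    zfs recv -Fd newpool             # -R: replication stream (descendants
                                     #     and properties); -F: roll back
                                     #     the target; -d: name datasets
                                     #     after the source layout
```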

Using format -e to relabel the disk so that slice 0 and slice 2
both cover the whole disk resulted in this odd problem:

# zpool create -f  rpool c2t2d0s0
# zpool list
NAME    SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
rpool  18.6G  73.5K  18.6G   0%  ONLINE  -
space  1.36T   294G  1.07T  21%  ONLINE  -
# zpool export rpool
# zpool import rpool
cannot import 'rpool': no such pool available
# zpool list
NAME    SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
space  1.36T   294G  1.07T  21%  ONLINE  -

# zdb -l /dev/dsk/c2t2d0s0
lists 3 perfectly good-looking labels.
Format says:
...
selecting c2t2d0
[disk formatted]
/dev/dsk/c2t2d0s0 is part of active ZFS pool rpool. Please see zpool(1M).
/dev/dsk/c2t2d0s2 is part of active ZFS pool rpool. Please see zpool(1M).

However this disk boots ZFS OpenSolaris just fine and this inability to
import an exported pool isn't a problem. Just wondering if any ZFS guru
had a comment about it. (This is with snv103 on SPARC). FWIW this is
an old ide drive connected to a sas controller via a sata/pata adapter...

Cheers -- Frank

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] no pool_props for OpenSolaris 2009.06 with old SPARC hardware

2009-06-12 Thread Cindy Swearingen

Hi Frank,

The reason that ZFS let you create rpool with an EFI label is that at
this point, it doesn't know that this is a root pool. It's just a pool
named rpool. The best solution is for us to provide a bootable EFI label.

I see an old bug that says if you already have a pool with the same name
imported, you see the zpool import error message that you provided. I'm
running build 114, not 103, and I can't reproduce this. In this scenario,
I see the correct error message, which is this:


# zpool create rpool c1t4d0s0
# zpool export rpool
# zpool import rpool
cannot import 'rpool': more than one matching pool
import by numeric ID instead

If this pool becomes your active root pool, obviously, you would not be
able to export it. If your root pool was active, any export attempt
should fail like this:

# zpool export rpool
cannot unmount '/': Device busy
#
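The numeric-ID workaround would look roughly like this; the pool ID shown is made up (running zpool import with no arguments prints the real one):

```shell
# List exported/importable pools; each entry shows a numeric "id:" field.
zpool import
# Import the pool by its numeric ID, giving it a new name to avoid
# colliding with the active pool of the same name (ID is illustrative).
zpool import 6930351987136370289 rpool2
```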

Cindy



Re: [zfs-discuss] no pool_props for OpenSolaris 2009.06 with old SPARC hardware

2009-06-12 Thread Lori Alt



Frank Middleton wrote:


On 06/03/09 09:10 PM, Aurélien Larcher wrote:

PS: for the record I roughly followed the steps of this blog entry 
=  http://blogs.sun.com/edp/entry/moving_from_nevada_and_live



Thanks for posting this link! Building pkg with gdb was an
interesting exercise, but it worked, with the additional step of
making the packages and pkgadding them. Curious as to why pkg
isn't available as a pkgadd package. Is there any reason why
someone shouldn't make them available for download? It would
make it much less painful for those of us who are OBP version
deprived - but maybe that's the point :-)

During the install cycle, ran into this annoyance (doubtless this
is documented somewhere):

# zpool create rpool c2t2d0
creates a good rpool that can be exported and imported. But it
seems to create an EFI label, and, as documented, attempting to boot
results in a bad magic number error. Why does zpool silently create
an apparently useless disk configuration for a root pool? 


I can't comment on the overall procedure documented in the above blog entry,
but I can address this one issue.

The answer is that there is nothing special about the name rpool.
What makes a pool a root pool is the presence of the bootfs
pool property, NOT the name of the pool.

The command above uses a whole disk specifier (c2t2d0) instead of
a slice specifier (e.g. c2t2d0s0) to specify the device for the pool.  The
zpool command does what it always does when given a whole disk
specifier:  it puts an EFI label on the disk before creating a pool.
The instructions in the blog entry show this:

zpool create -f rpool c1t1d0s0

which indicate that the c1t1d0 disk was already formatted (presumably
with an SMI label) and had a s0 slice that would then be used for
the pool.

In the blog entry, I don't see the bootfs property ever being set on the
pool, so I'm not sure how it's booting.  Perhaps the presence of the
bootfs command in the grub menu entry is supplying the boot dataset and
thus the system boots anyway.  If so, I think that this is a bug:  the
boot loader should insist on the pool having the bootfs property set to
something, because otherwise it's not necessarily a valid root pool.
The reason for being a stickler about this is that zfs won't allow the
bootfs property to be set on a pool that isn't a valid root pool
(because it has an EFI label, or is a RAIDZ device).  That's a valuable
hurdle when creating a root pool because it prevents the user from
getting into a situation where the process of creating the root pool
seemed to go fine, but then the pool wasn't bootable.  I will look into
this and file a bug if I confirm that it's appropriate.
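For reference, setting the property described above would look something like this; the boot-environment dataset name is a guess, not taken from the thread:

```shell
# Mark the pool bootable by pointing bootfs at the root dataset.
# "rpool/ROOT/opensolaris" is an illustrative dataset name.
zpool set bootfs=rpool/ROOT/opensolaris rpool
zpool get bootfs rpool   # verify; the set fails on EFI-labeled or raidz pools
```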


Lori





Re: [zfs-discuss] no pool_props for OpenSolaris 2009.06 with old SPARC hardware

2009-06-12 Thread Richard Elling

I'll comment on the parts Cindy and Lori didn't cover :-)

Frank Middleton wrote:

[...]

Using format -e to relabel the disk so that slice 0 and slice 2
both cover the whole disk resulted in this odd problem:

# zpool create -f  rpool c2t2d0s0
# zpool list
NAME    SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
rpool  18.6G  73.5K  18.6G   0%  ONLINE  -
space  1.36T   294G  1.07T  21%  ONLINE  -
# zpool export rpool
# zpool import rpool
cannot import 'rpool': no such pool available
# zpool list
NAME    SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
space  1.36T   294G  1.07T  21%  ONLINE  -

# zdb -l /dev/dsk/c2t2d0s0
lists 3 perfectly good looking labels.
Format says:
...
selecting c2t2d0
[disk formatted]
/dev/dsk/c2t2d0s0 is part of active ZFS pool rpool. Please see zpool(1M).
/dev/dsk/c2t2d0s2 is part of active ZFS pool rpool. Please see zpool(1M).


This is what I would wish for...



However this disk boots ZFS OpenSolaris just fine and this inability to
import an exported pool isn't a problem. Just wondering if any ZFS guru
had a comment about it. (This is with snv103 on SPARC). FWIW this is
an old ide drive connected to a sas controller via a sata/pata adapter...


The libdiskmgt library has the routines which check whether a disk is in
use; it is ultimately the source of the "is part of active ..." messages.
To perform these checks, it looks at what is on each slice or partition and
tries to determine whether it is in use.  By convention, for about 30 years
now, the 3rd slice (slice c, or s2) has been used to represent the entire
disk.  So s2 overlaps with s0, which is why it appears active.  If you were
to add more overlapping slices, you would get the appropriate messages.
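That check boils down to interval overlap over the slice table. A self-contained sketch; the slice table below is made up, in the shape of the slice/first-sector/sector-count fields that prtvtoc(1M) reports for a real disk:

```shell
#!/bin/sh
# Hypothetical slice table: slice number, first sector, sector count.
# On a real disk these fields would come from prtvtoc /dev/rdsk/c2t2d0s2.
cat > /tmp/slices.txt <<'EOF'
0 0 39102336
2 0 39102336
EOF

# Two slices overlap iff each one starts before the other ends.
awk '{ n++; s[n]=$1; lo[n]=$2; hi[n]=$2+$3 }
END {
  for (i = 1; i <= n; i++)
    for (j = i + 1; j <= n; j++)
      if (lo[i] < hi[j] && lo[j] < hi[i])
        printf "slice %s overlaps slice %s\n", s[i], s[j]
}' /tmp/slices.txt
# -> slice 0 overlaps slice 2
```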
-- richard



Re: [zfs-discuss] no pool_props for OpenSolaris 2009.06 with old SPARC hardware

2009-06-03 Thread Aurélien Larcher
Hi,
thanks Cindy for your kind answer ;)

You're right ;) After digging into the documentation I found exactly what you 
say in the boot manpage 
(http://docs.sun.com/app/docs/doc/819-2240/boot-1m?a=view).

So I've set the bootfs property on the zpool and everything is fine now!
My good ol' Ultra 60 is now running 2009.06.
Regards,

Aurelien

PS: for the record I roughly followed the steps of this blog entry = 
http://blogs.sun.com/edp/entry/moving_from_nevada_and_live
-- 
This message posted from opensolaris.org