Re: Legacy ZFS mounts

2019-09-16 Thread Chavdar Ivanov
Thanks, it's fine. I didn't add 'zfs=YES' to /etc/rc.conf, as running
'/etc/rc.d/zfs start' without it did not produce the usual warning about
an undefined variable.
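
(For reference, a quick sketch of the usual rc.d usage; this is only
illustrative and not something from the thread:

    # with zfs=YES in /etc/rc.conf, the normal start works:
    /etc/rc.d/zfs start
    # without the rc.conf variable, rc.d scripts usually need the
    # one-shot form instead:
    /etc/rc.d/zfs onestart
)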

On Mon, 16 Sep 2019 at 12:21, Brad Spencer  wrote:
>
> Chavdar Ivanov  writes:
>
> > Since yesterday I don't get my ZFS file systems automatically mounted,
> > I have to issue 'zfs mount -a'.
> >
> > Should I add some entry in /etc/fstab or place the above command in
> > one of the scripts?
> >
> > The zvols are accessible as usual.
>
>
> Add zfs=YES to your rc.conf.  It works like LVM and raidframe now...
> This gets the module started early and allows you to disable the support
> deliberately (as opposed to by accident, which is how it mostly worked before).
>
> It was in the commit message, but should have been mentioned more
> broadly.  Sorry...
>
>
>
>
>
> --
> Brad Spencer - b...@anduin.eldar.org - KC8VKS - http://anduin.eldar.org






Re: Legacy ZFS mounts

2019-09-16 Thread Brad Spencer
Chavdar Ivanov  writes:

> Since yesterday I don't get my ZFS file systems automatically mounted,
> I have to issue 'zfs mount -a'.
>
> Should I add some entry in /etc/fstab or place the above command in
> one of the scripts?
>
> The zvols are accessible as usual.


Add zfs=YES to your rc.conf.  It works like LVM and raidframe now...
This gets the module started early and allows you to disable the support
deliberately (as opposed to by accident, which is how it mostly worked before).
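
A minimal sketch of what that looks like (illustrative only):

    # /etc/rc.conf
    zfs=YES

    # on a running system, without rebooting:
    /etc/rc.d/zfs start
    zfs mount -a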

It was in the commit message, but should have been mentioned more
broadly.  Sorry...





-- 
Brad Spencer - b...@anduin.eldar.org - KC8VKS - http://anduin.eldar.org


Re: Legacy ZFS mounts

2019-09-16 Thread Chavdar Ivanov
Since yesterday I don't get my ZFS file systems automatically mounted,
I have to issue 'zfs mount -a'.

Should I add some entry in /etc/fstab or place the above command in
one of the scripts?

The zvols are accessible as usual.

On Sat, 7 Sep 2019 at 14:54, Brad Spencer  wrote:
>
> m...@netbsd.org writes:
>
> [snip]
>
> >> The module has to load before zvols will show up and if that is ALL you
> >> were doing, I don't think anything else will prompt the loading of the
> >> module.  That is, your /dev/zvol/* tree would not be there unless you
> >> execute the zfs (and probably the zpool) command prior to trying to use
> >> your devices (I think that it is the opening and use of /dev/zfs that
> >> does prompt the module load, but that isn't needed for pure zvol
> >> access).
> >
> > Would it make sense to make mount_zfs do that?
>
> The prompting of the module load would happen with mount_zfs for the ZFS
> filesystem/dataset case, so in that case you are very correct that an
> additional poke is not needed, but mount_zfs is not used when just
> accessing zvol devices.  In fact, nothing is needed for raw zvol use,
> except the loading of the module.
>
> For example, let's suppose you are using LVM devices as backing store for
> DOMU guests.  You still have to do a /sbin/vgscan to get the devices
> created, even if you are not going to use the logical volumes for
> anything other than backing store.  This might (I have not actually
> looked) prompt the load of the device-mapper module, hence our
> /etc/rc.d/lvm script (among its other uses, of course).  In the ZFS zvol
> case it would be the same thing, except that all that is needed is for
> the module to load.  The /etc/rc.d/zfs script I propose just does a "zfs
> list" and checks the return code, since it is possible that /dev/zfs is
> missing or that there were other errors, and it reports this.  It is also
> possible to build a system without ZFS at all (we have a make variable
> for that), and if you happen to set zfs=YES anyway we should not be dumb
> about it.  That "zfs list" was already present in /etc/rc.d/mountall; I
> simply moved it earlier in the boot process and made it more explicit and
> intentional with the variable, as I wanted to use zvol devices before
> mountall ran.  I would also need to do something to get the module loaded
> if I were going to present zvols to a DOMU (as I do intend to do some
> day) but did not use a ZFS filesystem/dataset for anything in the DOM0.
>
> To cover the various cases, I don't see how one gets all the bits and
> pieces in place in any other manner, really.  As I said, this is all done
> in more or less the same way for raidframe and LVM.
>
>
>
>
>
> --
> Brad Spencer - b...@anduin.eldar.org - KC8VKS - http://anduin.eldar.org
>





Re: Legacy ZFS mounts

2019-09-07 Thread Brad Spencer
m...@netbsd.org writes:

[snip]

>> The module has to load before zvols will show up and if that is ALL you
>> were doing, I don't think anything else will prompt the loading of the
>> module.  That is, your /dev/zvol/* tree would not be there unless you
>> execute the zfs (and probably the zpool) command prior to trying to use
>> your devices (I think that it is the opening and use of /dev/zfs that
>> does prompt the module load, but that isn't needed for pure zvol
>> access).
>
> Would it make sense to make mount_zfs do that?

The prompting of the module load would happen with mount_zfs for the ZFS
filesystem/dataset case, so in that case you are very correct that an
additional poke is not needed, but mount_zfs is not used when just
accessing zvol devices.  In fact, nothing is needed for raw zvol use,
except the loading of the module.
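
(Illustrative sketch, not from the thread: for raw zvol use on a system
where the module is not loaded yet, something along these lines should be
all that is needed; the module name "zfs" is assumed here:

    # check whether the zfs module is already loaded
    modstat | grep -i zfs
    # if not, load it by hand; the proposed rc.d script gets the same
    # effect early in boot by running "zfs list"
    modload zfs
)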

For example, let's suppose you are using LVM devices as backing store for
DOMU guests.  You still have to do a /sbin/vgscan to get the devices
created, even if you are not going to use the logical volumes for
anything other than backing store.  This might (I have not actually
looked) prompt the load of the device-mapper module, hence our
/etc/rc.d/lvm script (among its other uses, of course).  In the ZFS zvol
case it would be the same thing, except that all that is needed is for
the module to load.  The /etc/rc.d/zfs script I propose just does a "zfs
list" and checks the return code, since it is possible that /dev/zfs is
missing or that there were other errors, and it reports this.  It is also
possible to build a system without ZFS at all (we have a make variable
for that), and if you happen to set zfs=YES anyway we should not be dumb
about it.  That "zfs list" was already present in /etc/rc.d/mountall; I
simply moved it earlier in the boot process and made it more explicit and
intentional with the variable, as I wanted to use zvol devices before
mountall ran.  I would also need to do something to get the module loaded
if I were going to present zvols to a DOMU (as I do intend to do some
day) but did not use a ZFS filesystem/dataset for anything in the DOM0.
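
A rough sketch of the shape such a script could take (illustrative only;
the rcorder keywords and details are guesses, and the actual /etc/rc.d/zfs
may well differ):

    #!/bin/sh
    #
    # PROVIDE: zfs
    # BEFORE:  mountall

    . /etc/rc.subr

    name="zfs"
    rcvar=$name
    start_cmd="zfs_start"
    stop_cmd=":"

    zfs_start()
    {
            # "zfs list" prompts the zfs module to load; a non-zero exit
            # probably means /dev/zfs is missing, there was some other
            # error, or ZFS was not built into the system at all.
            if ! /sbin/zfs list > /dev/null 2>&1; then
                    echo "zfs: zfs list failed; ZFS does not appear usable" 1>&2
                    return 1
            fi
    }

    load_rc_config $name
    run_rc_command "$1"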

To cover the various cases, I don't see how one gets all the bits and
pieces in place in any other manner, really.  As I said, this is all done
in more or less the same way for raidframe and LVM.





-- 
Brad Spencer - b...@anduin.eldar.org - KC8VKS - http://anduin.eldar.org



Re: Legacy ZFS mounts

2019-09-07 Thread maya
On Sat, Sep 07, 2019 at 06:37:31AM -0400, Brad Spencer wrote:
> m...@netbsd.org writes:
> 
> > when asking for reviews on diffs, please consider having them in an easy
> > to view format, rather than tar files.
> >
> > Having a 'zfs' script to load a module sounds wrong. We have module
> > autoloading for this purpose.
> 
> Thanks for the comments...
> 
> Ya, there are two purposes to the variable: one is to make sure that the
> module loads early, and the second is to signal the desire to mount ZFS
> datasets in the mostly normal manner.  If you have a corrupt ZFS cache,
> for example, you may not want to do the second one.
> 
> The module has to load before zvols will show up and if that is ALL you
> were doing, I don't think anything else will prompt the loading of the
> module.  That is, your /dev/zvol/* tree would not be there unless you
> execute the zfs (and probably the zpool) command prior to trying to use
> your devices (I think that it is the opening and use of /dev/zfs that
> does prompt the module load, but that isn't needed for pure zvol
> access).

Would it make sense to make mount_zfs do that?


Re: Legacy ZFS mounts

2019-09-07 Thread Brad Spencer
m...@netbsd.org writes:

> when asking for reviews on diffs, please consider having them in an easy
> to view format, rather than tar files.
>
> Having a 'zfs' script to load a module sounds wrong. We have module
> autoloading for this purpose.

Thanks for the comments...

Ya, there are two purposes to the variable: one is to make sure that the
module loads early, and the second is to signal the desire to mount ZFS
datasets in the mostly normal manner.  If you have a corrupt ZFS cache,
for example, you may not want to do the second one.

The module has to load before zvols will show up and if that is ALL you
were doing, I don't think anything else will prompt the loading of the
module.  That is, your /dev/zvol/* tree would not be there unless you
execute the zfs (and probably the zpool) command prior to trying to use
your devices (I think that it is the opening and use of /dev/zfs that
does prompt the module load, but that isn't needed for pure zvol
access).  In addition, it is possible to put an FFS (or anything) in a
zvol and use it in /etc/fstab, and the device would have to exist in
that case too (technically the device nodes would still exist if you had
used them once, but the module supporting them would not be present).  I
know that just trying to access /dev/zvol/* does not load the module.
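
(A hypothetical illustration of that case; the pool name "tank", the
volume name "vol0", and the mount point are made up, and the /dev/zvol
path layout should be checked against the actual system:

    # /etc/fstab: an FFS filesystem created inside a zvol
    /dev/zvol/dsk/tank/vol0   /export/data   ffs   rw   1 2

Unless the zfs module is loaded before mountall runs, a mount like this
would fail because the zvol device is not usable yet.)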

Hence the variable to prime the pump and get the module loaded early in
the boot sequence.  Off hand I do not know of any other way to get the
desired behavior currently.  Of course, in the root-filesystem-on-ZFS
case this would not be needed (the module would have to be there
already).  In most of the ways that seem important, the proposed behavior
is no different from what is done with LVM or raidframe currently.

The zfs rc.d script may end up doing more in the future, like zpool
import and the like.
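
(For instance, something along the lines of:

    # import any pools found on attached devices at boot time
    zpool import -a

though exactly what ends up in the script is to be determined.)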


Make sense??



-- 
Brad Spencer - b...@anduin.eldar.org - KC8VKS - http://anduin.eldar.org



Re: Legacy ZFS mounts

2019-09-07 Thread maya
when asking for reviews on diffs, please consider having them in an easy
to view format, rather than tar files.

Having a 'zfs' script to load a module sounds wrong. We have module
autoloading for this purpose.