Re: delegating ZFS of jail's root directory

2019-01-22 Thread Willem Jan Withagen

On 21-1-2019 17:42, Michael W. Lucas wrote:

Hi,

Two more book research questions, sorry. If the answer is "it doesn't
work that way," cool, I'll document and move on. It looks like ZFS
delegation isn't widely used.

1) It seems I can successfully delegate managing ZFS datasets to a jail,
sort of. A restart removes my ability to destroy and rename datasets I
created, though.

2) I can't delegate the jail's root to the jail. Obvious question: CAN
you delegate a jail's root dataset, or am I chasing an impossibility
here?

Details:

Real hardware, running yesterday's -current:

FreeBSD storm 13.0-CURRENT FreeBSD 13.0-CURRENT r343219 GENERIC  amd64


Here's my jail.conf.

exec.start="sh /etc/rc";
exec.stop="sh /etc/rc.shutdown";

filedump {
   host.hostname="filedump.mwl.io";
   ip4.addr="203.0.113.224";
   path="/jail/filedump/zroot";
   persist=true;
   mount.devfs=true;
   allow.mount=true;
   allow.mount.zfs=true;
   enforce_statfs=1;
   exec.poststart="/sbin/zfs jail filedump jail/filedump/zroot";
   exec.poststop="/sbin/zfs unjail filedump jail/filedump/zroot";
}

/jail/filedump/zroot contains an extract of the FreeBSD 12.0 base.tgz.

# ls /jail/filedump/zroot/
.cshrc      dev      media   root    var
.profile    etc      mnt     sbin
COPYRIGHT   jail     net     sys
bin         lib      proc    tmp
boot        libexec  rescue  usr

Initial ZFS "jailed" parameter:

# zfs get -r jailed jail/filedump
NAME                          PROPERTY  VALUE  SOURCE
jail/filedump                 jailed    off    default
jail/filedump/zroot           jailed    off    default
jail/filedump/zroot/cdr       jailed    on     local
jail/filedump/zroot/home      jailed    on     local
jail/filedump/zroot/home/mwl  jailed    on     inherited from jail/filedump/zroot/home


Running "service jail start filedump" gives me a working jail. I can
create and destroy datasets.

root@filedump:~ # zfs create jail/filedump/zroot/home/abc
root@filedump:~ # zfs destroy jail/filedump/zroot/home/abc

Gonna recreate that dataset for testing purposes:

root@filedump:~ # zfs create jail/filedump/zroot/home/abc

Now back to the host, restart the jail, and:

root@filedump:~ # zfs destroy jail/filedump/zroot/home/abc
cannot unmount '/jail/filedump/zroot/home/abc': Operation not permitted

I created this dataset within the jail, and can manage it only so long
as it's the same jail instance. A restart wrecks my ability to manage
the dataset.
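For what it's worth, a hedged guess at a workaround (an assumption on my part, not verified; dataset names are taken from the example above): the stale mount was created by the previous jail instance, so clearing it from the host first might let the destroy succeed:

```sh
# Run on the host, not in the jail -- hypothetical workaround sketch.
# Unmount the child filesystem left over from the old jail instance...
zfs unmount jail/filedump/zroot/home/abc
# ...then destroy it from the host, or retry the destroy inside the jail.
zfs destroy jail/filedump/zroot/home/abc
```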



Second problem:

I would also like to delegate management of the jail's root fileset,
so on the host I run:

# zfs set jailed=on jail/filedump/zroot
# service jail start filedump
Starting jails: cannot start jail "filedump":
jail: filedump: mount.devfs: /jail/filedump/zroot/dev: No such file or directory
.

Which fails, of course: the root directory isn't mounted, so /dev can't be mounted beneath it.


I'm vaguely confident I've heard of people delegating management of
the root dataset to the jail, though I can't find it. Am I
misremembering?


Hi Michael,

I think I asked that question some time ago, when I wanted to run a
ceph-setup script in a jail.


The basic answer was that the jail needs access to /dev/zfs to be able
to control ZFS effectively. But that means you delegate the whole set of
ZFS capabilities to the jail.


Which in my case was not a problem. But if you want to use a jail for
separation of control, this will be far too liberal.


There is a set of devfs configuration files in /etc; see `man -k devfs`.
But I did not use them in the end.
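The devfs configuration Willem refers to can be sketched as a devfs.rules ruleset; the ruleset name and number below are assumptions on my part (only the `add include` lines are stock FreeBSD rulesets):

```
# /etc/devfs.rules -- hypothetical ruleset exposing /dev/zfs inside a jail
[devfsrules_jail_zfs=25]
add include $devfsrules_hide_all
add include $devfsrules_unhide_basic
add include $devfsrules_unhide_login
add path zfs unhide
```

Something like `devfs_ruleset = 25;` in the jail's jail.conf entry would then select it.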

--WjW



___
freebsd-jail@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-jail
To unsubscribe, send any mail to "freebsd-jail-unsubscr...@freebsd.org"


Re: Passing a limited amount of disk devices to jails

2018-03-07 Thread Willem Jan Withagen
On 27-2-2018 05:11, cstanley wrote:
> Sorry for the extremely late reply! 
> 
> I am interested in any progress you have made on this front.
> 
> I have been playing around with BHYVE - I am able to get guests up and
> running but I am having trouble mapping the raw block devices (/dev/ada5
> etc) to the vm.
> 
> This prompted me to mess around with jails as an alternative, and I came
> across this thread :)

I got side-tracked by different problems, so nothing came out of this.

Other than that, configuring this is not easy. I tried several things and
got nowhere, other than just disabling it all and putting everything in a jail.

--WjW




Re: Passing a limited amount of disk devices to jails

2017-06-12 Thread Willem Jan Withagen
On 12-6-2017 11:48, Willem Jan Withagen wrote:
> On 11-6-2017 02:41, Allan Jude wrote:
>> On 06/10/2017 20:13, Willem Jan Withagen wrote:
>>> On 9-6-2017 16:20, Miroslav Lachman wrote:
>>>> Willem Jan Withagen wrote on 2017/06/09 15:48:
>>>>> On 9-6-2017 11:23, Steven Hartland wrote:
>>>>>> You could do effectively this by using dedicated zfs filesystems per
>>>>>> jail
>>>>>
>>>>> Hi Steven,
>>>>>
>>>>> That is how I'm going to do it, when nothing else works.
>>>>> But then I don't get to test the part of building the ceph-cluster from
>>>>> raw disk...
>>>>>
>>>>> I was more thinking along the lines of tinkering with the devd.conf or
>>>>> something. And would appreciate opinions on how to (not) do it.
>>>>
>>>> I totally skipped devd.conf in my mind in previous reply. So maybe you
>>>> can really use devd.conf to allow access to /dev/adaX devices or you can
>>>> use ZFS zvol if you have big pool and need some smaller devices to test
>>>> with.
>>>
>>> I want the jail to look as much as a normal system would, and then run
>>> ceph-tools on them. And they would like to see /dev/{disk}
>>>
>>> Now I have found /sbin/devfs which allows to add/remove devices to an
>>> already existing devfs-mount.
>>>
>>> So I can 'rule add type disk unhide' and see the disks.
>>> Gpart can then list partitions.
>>> But any of the other commands is met with an unwilling system:
>>>
>>> root@ceph-1:/ # gpart delete -i 1 ada0
>>> gpart: No such file or directory
>>>
>>> So there is still some protection in place in the jail
>>>
>>> However dd-ing to the device does overwrite some stuff.
>>> Since after the 'dd if=/dev/zero of=/dev/ada0' gpart reports a corrupt
>>> gpartition.
>>>
>>> But I don't see any sysctl options to toggle that on or off
> 
>> To use GEOM tools like gpart, I think you'll need to unhide
>> /dev/geom.ctl in the jail
>>
>>
> 
> Right, thanx, could very well be the case.
> I'll try and post back here.
> 
> But I'll take a different approach and just enable all devices in /dev
> Since I'm not really needing security, but only need separate compute
> spaces. And jails have the advantage over bhyve that it is easy to
> modify files in the subdomains.
> Restricting afterwards might be an easier job.
> 
> I'm also having trouble expanding /etc/{,defaults/}devfs.rules and have
>   'mount -t devfs -oruleset'
> pick up the changes.
> Even adding any extra ruleset to the /etc/defaults/devfs.rules does not
> get picked up, hence my toying with /sbin/devfs.

Right, that will help.

The next challenge is to allow ZFS to create a pool on a partition.

root@ceph-1:/ # gpart destroy -F ada8
ada8 destroyed
root@ceph-1:/ # gpart create -s GPT ada8
ada8 created
root@ceph-1:/ # gpart add -t freebsd-zfs -a 1M -l osd-disk-1 /dev/ada8
ada8p1 added
root@ceph-1:/ # zpool create -f osd.1 /dev/ada8p1
cannot create 'osd.1': permission denied
root@ceph-1:/ #
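A hedged sketch of the devfs route discussed in this thread (the ruleset number, jail name, and device globs are assumptions; as the gpart and zpool failures above show, unhiding devices alone may still not grant GEOM or ZFS write access inside the jail):

```
# /etc/devfs.rules -- hypothetical per-jail ruleset for a ceph test jail
[devfsrules_ceph_jail=100]
add include $devfsrules_hide_all
add include $devfsrules_unhide_basic
add include $devfsrules_unhide_login
add path 'ada[0-9]*' unhide   # raw disks and their partitions
add path zfs unhide           # /dev/zfs, used by the zpool/zfs commands
add path geom.ctl unhide      # GEOM control device, used by gpart
```

With `devfs_ruleset = 100;` in the jail's jail.conf entry to select it.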

--WjW




Re: Passing a limited amount of disk devices to jails

2017-06-12 Thread Willem Jan Withagen
On 11-6-2017 02:41, Allan Jude wrote:
> On 06/10/2017 20:13, Willem Jan Withagen wrote:
>> On 9-6-2017 16:20, Miroslav Lachman wrote:
>>> Willem Jan Withagen wrote on 2017/06/09 15:48:
>>>> On 9-6-2017 11:23, Steven Hartland wrote:
>>>>> You could do effectively this by using dedicated zfs filesystems per
>>>>> jail
>>>>
>>>> Hi Steven,
>>>>
>>>> That is how I'm going to do it, when nothing else works.
>>>> But then I don't get to test the part of building the ceph-cluster from
>>>> raw disk...
>>>>
>>>> I was more thinking along the lines of tinkering with the devd.conf or
>>>> something. And would appreciate opinions on how to (not) do it.
>>>
>>> I totally skipped devd.conf in my mind in previous reply. So maybe you
>>> can really use devd.conf to allow access to /dev/adaX devices or you can
>>> use ZFS zvol if you have big pool and need some smaller devices to test
>>> with.
>>
>> I want the jail to look as much as a normal system would, and then run
>> ceph-tools on them. And they would like to see /dev/{disk}
>>
>> Now I have found /sbin/devfs which allows to add/remove devices to an
>> already existing devfs-mount.
>>
>> So I can 'rule add type disk unhide' and see the disks.
>> Gpart can then list partitions.
>> But any of the other commands is met with an unwilling system:
>>
>> root@ceph-1:/ # gpart delete -i 1 ada0
>> gpart: No such file or directory
>>
>> So there is still some protection in place in the jail
>>
>> However dd-ing to the device does overwrite some stuff.
>> Since after the 'dd if=/dev/zero of=/dev/ada0' gpart reports a corrupt
>> gpartition.
>>
>> But I don't see any sysctl options to toggle that on or off

> To use GEOM tools like gpart, I think you'll need to unhide
> /dev/geom.ctl in the jail
> 
> 

Right, thanx, could very well be the case.
I'll try and post back here.

But I'll take a different approach and just enable all devices in /dev,
since I don't really need security, only separate compute spaces. And
jails have the advantage over bhyve that it is easy to modify files in
the subdomains.
Restricting access afterwards might be an easier job.

I'm also having trouble expanding /etc/{,defaults/}devfs.rules and having
'mount -t devfs -oruleset'
pick up the changes.
Even adding an extra ruleset to /etc/defaults/devfs.rules does not get
picked up, hence my toying with /sbin/devfs.

--WjW


Re: Passing a limited amount of disk devices to jails

2017-06-10 Thread Willem Jan Withagen
On 9-6-2017 16:20, Miroslav Lachman wrote:
> Willem Jan Withagen wrote on 2017/06/09 15:48:
>> On 9-6-2017 11:23, Steven Hartland wrote:
>>> You could do effectively this by using dedicated zfs filesystems per
>>> jail
>>
>> Hi Steven,
>>
>> That is how I'm going to do it, when nothing else works.
>> But then I don't get to test the part of building the ceph-cluster from
>> raw disk...
>>
>> I was more thinking along the lines of tinkering with the devd.conf or
>> something. And would appreciate opinions on how to (not) do it.
> 
> I totally skipped devd.conf in my mind in previous reply. So maybe you
> can really use devd.conf to allow access to /dev/adaX devices or you can
> use ZFS zvol if you have big pool and need some smaller devices to test
> with.

I want the jail to look as much like a normal system as possible, and
then run the ceph-tools on it. And they would like to see /dev/{disk}.

Now I have found /sbin/devfs, which allows adding/removing devices on an
already existing devfs mount.

So I can 'rule add type disk unhide' and see the disks.
gpart can then list partitions.
But all of the other commands are met with an unwilling system:

root@ceph-1:/ # gpart delete -i 1 ada0
gpart: No such file or directory

So there is still some protection in place in the jail.

However, dd-ing to the device does overwrite data: after
'dd if=/dev/zero of=/dev/ada0', gpart reports a corrupt GPT.

But I don't see any sysctl option to toggle that on or off.
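The /sbin/devfs step above can be spelled out roughly as follows (a sketch only; the jail's devfs mount point is an assumption, and this just mirrors the 'rule ... unhide' command described above):

```sh
# Hypothetical host-side sketch: apply an unhide rule directly to a
# running jail's devfs mount, without touching /etc/devfs.rules.
devfs -m /jail/ceph-1/dev rule apply type disk unhide
```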

--WjW



Re: Passing a limited amount of disk devices to jails

2017-06-09 Thread Willem Jan Withagen
On 9-6-2017 11:23, Steven Hartland wrote:
> You could do effectively this by using dedicated zfs filesystems per jail

Hi Steven,

That is how I'm going to do it if nothing else works.
But then I don't get to test the part of building the ceph cluster from
raw disk...

I was thinking more along the lines of tinkering with devd.conf or
something, and would appreciate opinions on how (not) to do it.

--WjW


> On 09/06/2017 09:45, Willem Jan Withagen wrote:
>> Hi,
>>
>> I'm writting/building a test environment for my ceph cluster, and I'm
>> using jails for that
>>
>> Now one of the things I'd be interested in, is to pass a few raw disks
>> to each of the jails.
>> So jail ceph-1 gets /dev/ada1 and /dev/ada2 (and partitions), ceph-2
>> gets /dev/ada2 and /dev/ada3.
>>
>> AND I would need gpart to be able to work on them!
>>
>> Would this be possible to do with the current jail implementation on
>> 12-CURRENT?



Passing a limited amount of disk devices to jails

2017-06-09 Thread Willem Jan Withagen
Hi,

I'm writing/building a test environment for my ceph cluster, and I'm
using jails for that.

Now one of the things I'd be interested in, is to pass a few raw disks
to each of the jails.
So jail ceph-1 gets /dev/ada1 and /dev/ada2 (and partitions), ceph-2
gets /dev/ada2 and /dev/ada3.

AND I would need gpart to be able to work on them!

Would this be possible to do with the current jail implementation on
12-CURRENT?

Thanx,
--WjW
