Re: Passing a limited amount of disk devices to jails
On 27-2-2018 05:11, cstanley wrote:
> Sorry for the extremely late reply!
>
> I am interested in any progress you have made on this front.
>
> I have been playing around with BHYVE - I am able to get guests up and
> running but I am having trouble mapping the raw block devices (/dev/ada5
> etc) to the vm.
>
> This prompted me to mess around with jails as an alternative, and I came
> across this thread :)

I got side-tracked by different problems, so nothing came out of this,
other than the observation that configuring this is not easy. I tried
several things and got nowhere, and in the end just disabled all the
hiding and exposed everything in the jail.

--WjW
___
freebsd-jail@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-jail
To unsubscribe, send any mail to "freebsd-jail-unsubscr...@freebsd.org"
Re: Passing a limited amount of disk devices to jails
Sorry for the extremely late reply!

I am interested in any progress you have made on this front.

I have been playing around with BHYVE - I am able to get guests up and
running but I am having trouble mapping the raw block devices (/dev/ada5
etc) to the vm.

This prompted me to mess around with jails as an alternative, and I came
across this thread :)
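[Editorial sketch] For the bhyve side of this, a raw device can normally be handed to a guest as a virtio-blk (or ahci-hd) backend by giving the device node as the backing path. A minimal hedged example; the CPU/memory sizes, `tap0`, and the VM name `vmname` are placeholders for your own setup:

```shell
# Pass /dev/ada5 straight to a bhyve guest as a virtio-blk device.
# Run as root on the host; assumes the guest image is already bootable
# and a tap0 interface exists.
bhyve -c 2 -m 2G -H \
  -s 0,hostbridge \
  -s 1,lpc \
  -s 2,virtio-net,tap0 \
  -s 3,virtio-blk,/dev/ada5 \
  -l com1,stdio \
  vmname
```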
Re: Passing a limited amount of disk devices to jails
On 12-6-2017 11:48, Willem Jan Withagen wrote:
> On 11-6-2017 02:41, Allan Jude wrote:
>> On 06/10/2017 20:13, Willem Jan Withagen wrote:
>>> On 9-6-2017 16:20, Miroslav Lachman wrote:
>>>> Willem Jan Withagen wrote on 2017/06/09 15:48:
>>>>> On 9-6-2017 11:23, Steven Hartland wrote:
>>>>>> You could do effectively this by using dedicated zfs filesystems per
>>>>>> jail
>>>>>
>>>>> Hi Steven,
>>>>>
>>>>> That is how I'm going to do it, when nothing else works.
>>>>> But then I don't get to test the part of building the ceph-cluster from
>>>>> raw disk...
>>>>>
>>>>> I was more thinking along the lines of tinkering with the devd.conf or
>>>>> something. And would appreciate opinions on how to (not) do it.
>>>>
>>>> I totally skipped devd.conf in my mind in previous reply. So maybe you
>>>> can really use devd.conf to allow access to /dev/adaX devices or you can
>>>> use ZFS zvol if you have big pool and need some smaller devices to test
>>>> with.
>>>
>>> I want the jail to look as much as a normal system would, and then run
>>> ceph-tools on them. And they would like to see /dev/{disk}
>>>
>>> Now I have found /sbin/devfs which allows to add/remove devices to an
>>> already existing devfs-mount.
>>>
>>> So I can 'rule add type disk unhide' and see the disks.
>>> Gpart can then list partitions.
>>> But any of the other commands is met with an unwilling system:
>>>
>>> root@ceph-1:/ # gpart delete -i 1 ada0
>>> gpart: No such file or directory
>>>
>>> So there is still some protection in place in the jail
>>>
>>> However dd-ing to the device does overwrite some stuff.
>>> Since after the 'dd if=/dev/zero of=/dev/ada0' gpart reports a corrupt
>>> gpartition.
>>>
>>> But I don't see any sysctl options to toggle that on or off
>>
>> To use GEOM tools like gpart, I think you'll need to unhide
>> /dev/geom.ctl in the jail
>
> Right, thanx, could very well be the case.
> I'll try and post back here.
>
> But I'll take a different approach and just enable all devices in /dev,
> since I'm not really needing security, but only need separate compute
> spaces. And jails have the advantage over bhyve that it is easy to
> modify files in the subdomains.
> Restricting afterwards might be an easier job.
>
> I'm also having trouble expanding /etc/{,defaults/}devfs.rules and having
> 'mount -t devfs -o ruleset' pick up the changes.
> Even adding an extra ruleset to /etc/defaults/devfs.rules does not
> get picked up, hence my toying with /sbin/devfs.

Right, that will help.

Next challenge is to allow zfs to create a filesystem on a partition.

root@ceph-1:/ # gpart destroy -F ada8
ada8 destroyed
root@ceph-1:/ # gpart create -s GPT ada8
ada8 created
root@ceph-1:/ # gpart add -t freebsd-zfs -a 1M -l osd-disk-1 /dev/ada8
ada8p1 added
root@ceph-1:/ # zpool create -f osd.1 /dev/ada8p1
cannot create 'osd.1': permission denied
root@ceph-1:/ #

--WjW
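[Editorial sketch] The "permission denied" from zpool create is expected inside a stock jail: even with the device nodes visible, ZFS operations require extra jail privileges, and pool creation itself may still be refused by design. A hedged /etc/jail.conf fragment showing the parameters that are at minimum involved (the jail name and path are placeholders, and this parameter set is an assumption, not a verified recipe):

```
ceph-1 {
    path = "/jails/ceph-1";
    mount.devfs;
    # allow mounting filesystems, specifically ZFS, inside the jail
    allow.mount;
    allow.mount.zfs;
    enforce_statfs = 1;
}
```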
Re: Passing a limited amount of disk devices to jails
On 11-6-2017 02:41, Allan Jude wrote:
> On 06/10/2017 20:13, Willem Jan Withagen wrote:
>> On 9-6-2017 16:20, Miroslav Lachman wrote:
>>> Willem Jan Withagen wrote on 2017/06/09 15:48:
>>>> On 9-6-2017 11:23, Steven Hartland wrote:
>>>>> You could do effectively this by using dedicated zfs filesystems per
>>>>> jail
>>>>
>>>> Hi Steven,
>>>>
>>>> That is how I'm going to do it, when nothing else works.
>>>> But then I don't get to test the part of building the ceph-cluster from
>>>> raw disk...
>>>>
>>>> I was more thinking along the lines of tinkering with the devd.conf or
>>>> something. And would appreciate opinions on how to (not) do it.
>>>
>>> I totally skipped devd.conf in my mind in previous reply. So maybe you
>>> can really use devd.conf to allow access to /dev/adaX devices or you can
>>> use ZFS zvol if you have big pool and need some smaller devices to test
>>> with.
>>
>> I want the jail to look as much as a normal system would, and then run
>> ceph-tools on them. And they would like to see /dev/{disk}
>>
>> Now I have found /sbin/devfs which allows to add/remove devices to an
>> already existing devfs-mount.
>>
>> So I can 'rule add type disk unhide' and see the disks.
>> Gpart can then list partitions.
>> But any of the other commands is met with an unwilling system:
>>
>> root@ceph-1:/ # gpart delete -i 1 ada0
>> gpart: No such file or directory
>>
>> So there is still some protection in place in the jail
>>
>> However dd-ing to the device does overwrite some stuff.
>> Since after the 'dd if=/dev/zero of=/dev/ada0' gpart reports a corrupt
>> gpartition.
>>
>> But I don't see any sysctl options to toggle that on or off
>
> To use GEOM tools like gpart, I think you'll need to unhide
> /dev/geom.ctl in the jail

Right, thanx, could very well be the case.
I'll try and post back here.

But I'll take a different approach and just enable all devices in /dev,
since I'm not really needing security, but only need separate compute
spaces. And jails have the advantage over bhyve that it is easy to
modify files in the subdomains.
Restricting afterwards might be an easier job.

I'm also having trouble expanding /etc/{,defaults/}devfs.rules and having
'mount -t devfs -o ruleset' pick up the changes.
Even adding an extra ruleset to /etc/defaults/devfs.rules does not
get picked up, hence my toying with /sbin/devfs.

--WjW
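[Editorial sketch] On the devfs.rules trouble: locally defined rulesets conventionally go in /etc/devfs.rules rather than the defaults file, and the devfs service has to be restarted before a newly added ruleset is available to mounts. A hedged example; the ruleset number 100, the disk glob, and the jail path are placeholders:

```shell
# Hypothetical ruleset in /etc/devfs.rules:
#   [devfsrules_jail_disks=100]
#   add include $devfsrules_hide_all
#   add include $devfsrules_unhide_basic
#   add include $devfsrules_unhide_login
#   add path 'ada[12]*' unhide

# Reload the rule files, then point an existing jail devfs mount at the
# new ruleset and apply it:
service devfs restart
devfs -m /jails/ceph-1/dev ruleset 100
devfs -m /jails/ceph-1/dev rule applyset
```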
Re: Passing a limited amount of disk devices to jails
On 06/10/2017 20:13, Willem Jan Withagen wrote:
> On 9-6-2017 16:20, Miroslav Lachman wrote:
>> Willem Jan Withagen wrote on 2017/06/09 15:48:
>>> On 9-6-2017 11:23, Steven Hartland wrote:
>>>> You could do effectively this by using dedicated zfs filesystems per
>>>> jail
>>>
>>> Hi Steven,
>>>
>>> That is how I'm going to do it, when nothing else works.
>>> But then I don't get to test the part of building the ceph-cluster from
>>> raw disk...
>>>
>>> I was more thinking along the lines of tinkering with the devd.conf or
>>> something. And would appreciate opinions on how to (not) do it.
>>
>> I totally skipped devd.conf in my mind in previous reply. So maybe you
>> can really use devd.conf to allow access to /dev/adaX devices or you can
>> use ZFS zvol if you have big pool and need some smaller devices to test
>> with.
>
> I want the jail to look as much as a normal system would, and then run
> ceph-tools on them. And they would like to see /dev/{disk}
>
> Now I have found /sbin/devfs which allows to add/remove devices to an
> already existing devfs-mount.
>
> So I can 'rule add type disk unhide' and see the disks.
> Gpart can then list partitions.
> But any of the other commands is met with an unwilling system:
>
> root@ceph-1:/ # gpart delete -i 1 ada0
> gpart: No such file or directory
>
> So there is still some protection in place in the jail
>
> However dd-ing to the device does overwrite some stuff.
> Since after the 'dd if=/dev/zero of=/dev/ada0' gpart reports a corrupt
> gpartition.
>
> But I don't see any sysctl options to toggle that on or off

To use GEOM tools like gpart, I think you'll need to unhide
/dev/geom.ctl in the jail

-- 
Allan Jude
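[Editorial sketch] The suggestion above can be tried live against a running jail's devfs mount with a one-off rule; the jail path is a placeholder:

```shell
# Expose the GEOM control device inside the jail so gpart's write
# operations can reach the GEOM layer (run as root on the host):
devfs -m /jails/ceph-1/dev rule apply path geom.ctl unhide
```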
Re: Passing a limited amount of disk devices to jails
On 9-6-2017 16:20, Miroslav Lachman wrote:
> Willem Jan Withagen wrote on 2017/06/09 15:48:
>> On 9-6-2017 11:23, Steven Hartland wrote:
>>> You could do effectively this by using dedicated zfs filesystems per
>>> jail
>>
>> Hi Steven,
>>
>> That is how I'm going to do it, when nothing else works.
>> But then I don't get to test the part of building the ceph-cluster from
>> raw disk...
>>
>> I was more thinking along the lines of tinkering with the devd.conf or
>> something. And would appreciate opinions on how to (not) do it.
>
> I totally skipped devd.conf in my mind in previous reply. So maybe you
> can really use devd.conf to allow access to /dev/adaX devices or you can
> use ZFS zvol if you have big pool and need some smaller devices to test
> with.

I want the jail to look as much as a normal system would, and then run
ceph-tools on them. And they would like to see /dev/{disk}

Now I have found /sbin/devfs which allows to add/remove devices to an
already existing devfs-mount.

So I can 'rule add type disk unhide' and see the disks.
Gpart can then list partitions.
But any of the other commands is met with an unwilling system:

root@ceph-1:/ # gpart delete -i 1 ada0
gpart: No such file or directory

So there is still some protection in place in the jail

However dd-ing to the device does overwrite some stuff.
Since after the 'dd if=/dev/zero of=/dev/ada0' gpart reports a corrupt
gpartition.

But I don't see any sysctl options to toggle that on or off

--WjW
Re: Passing a limited amount of disk devices to jails
Willem Jan Withagen wrote on 2017/06/09 15:48:
> On 9-6-2017 11:23, Steven Hartland wrote:
>> You could do effectively this by using dedicated zfs filesystems per
>> jail
>
> Hi Steven,
>
> That is how I'm going to do it, when nothing else works.
> But then I don't get to test the part of building the ceph-cluster from
> raw disk...
>
> I was more thinking along the lines of tinkering with the devd.conf or
> something. And would appreciate opinions on how to (not) do it.

I totally skipped devd.conf in my mind in the previous reply. So maybe you
can really use devd.conf to allow access to /dev/adaX devices, or you can
use a ZFS zvol if you have a big pool and need some smaller devices to
test with.

Miroslav Lachman
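[Editorial sketch] The zvol idea from the reply above would look something like this; the pool name `tank` and sizes are placeholders:

```shell
# Carve small test "disks" out of an existing pool (run as root on the
# host). Each zvol shows up as a block device under /dev/zvol/.
zfs create -V 8G tank/ceph/disk1
zfs create -V 8G tank/ceph/disk2
# The resulting /dev/zvol/tank/ceph/disk1 etc. can then be unhidden in
# a jail's devfs like any other device node.
```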
Re: Passing a limited amount of disk devices to jails
On 9-6-2017 11:23, Steven Hartland wrote:
> You could do effectively this by using dedicated zfs filesystems per jail

Hi Steven,

That is how I'm going to do it, when nothing else works.
But then I don't get to test the part of building the ceph-cluster from
raw disk...

I was more thinking along the lines of tinkering with the devd.conf or
something. And would appreciate opinions on how to (not) do it.

--WjW

> On 09/06/2017 09:45, Willem Jan Withagen wrote:
>> Hi,
>>
>> I'm writing/building a test environment for my ceph cluster, and I'm
>> using jails for that
>>
>> Now one of the things I'd be interested in, is to pass a few raw disks
>> to each of the jails.
>> So jail ceph-1 gets /dev/ada1 and /dev/ada2 (and partitions), ceph-2
>> gets /dev/ada2 and /dev/ada3.
>>
>> AND I would need gpart to be able to work on them!
>>
>> Would this be possible to do with the current jail implementation on
>> 12-CURRENT?
Re: Passing a limited amount of disk devices to jails
On Fri, Jun 09, 2017 at 10:45:32AM +0200, Willem Jan Withagen wrote:
> Hi,
>
> I'm writing/building a test environment for my ceph cluster, and I'm
> using jails for that
>
> Now one of the things I'd be interested in, is to pass a few raw disks
> to each of the jails.
> So jail ceph-1 gets /dev/ada1 and /dev/ada2 (and partitions), ceph-2
> gets /dev/ada2 and /dev/ada3.
>
> AND I would need gpart to be able to work on them!
>
> Would this be possible to do with the current jail implementation on
> 12-CURRENT?

Read about devfs(8) and devfs.conf(5), and follow further references from
there. In short, devfs allows you to specify rules for node visibility,
and the rules are applied per-mount. Since jails use a per-jail devfs
mount, you get a dedicated namespace for the devfs nodes.
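[Editorial sketch] The per-mount rule mechanism described above means each jail's /dev can be restricted independently at mount time; a hedged one-liner, where ruleset 100 is a placeholder assumed to be defined in /etc/devfs.rules and the jail path is an example:

```shell
# Mount a devfs instance for the jail with a specific ruleset so only
# the nodes that ruleset unhides are visible inside the jail:
mount -t devfs -o ruleset=100 devfs /jails/ceph-1/dev
```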
Re: Passing a limited amount of disk devices to jails
You could do effectively this by using dedicated zfs filesystems per jail

On 09/06/2017 09:45, Willem Jan Withagen wrote:
> Hi,
>
> I'm writing/building a test environment for my ceph cluster, and I'm
> using jails for that
>
> Now one of the things I'd be interested in, is to pass a few raw disks
> to each of the jails.
> So jail ceph-1 gets /dev/ada1 and /dev/ada2 (and partitions), ceph-2
> gets /dev/ada2 and /dev/ada3.
>
> AND I would need gpart to be able to work on them!
>
> Would this be possible to do with the current jail implementation on
> 12-CURRENT?
>
> Thanx,
> --WjW
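[Editorial sketch] The "dedicated zfs filesystem per jail" approach suggested above is usually done by delegating a dataset to the jail; pool, dataset, and jail names here are placeholders:

```shell
# On the host: create a dataset per jail and mark it as delegatable.
zfs create tank/jails/ceph-1
zfs create -o jailed=on tank/jails/ceph-1/data

# Once the jail is running, hand the dataset over to it; the jail can
# then manage (mount, snapshot, create children of) that dataset itself.
zfs jail ceph-1 tank/jails/ceph-1/data
```

This sidesteps raw-device access entirely, at the cost of not exercising the disk-provisioning part of the ceph tooling.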
Re: Passing a limited amount of disk devices to jails
Willem Jan Withagen wrote on 2017/06/09 10:45:
> Hi,
>
> I'm writing/building a test environment for my ceph cluster, and I'm
> using jails for that
>
> Now one of the things I'd be interested in, is to pass a few raw disks
> to each of the jails.
> So jail ceph-1 gets /dev/ada1 and /dev/ada2 (and partitions), ceph-2
> gets /dev/ada2 and /dev/ada3.
>
> AND I would need gpart to be able to work on them!
>
> Would this be possible to do with the current jail implementation on
> 12-CURRENT?

I don't think a jail will ever have access to raw / block devices. It is
disallowed by security design. Wouldn't it be better to use bhyve guests
for this environment?

Miroslav Lachman