peter.b...@bsd4all.org wrote:
Well, in my case it panicked on 11-STABLE. I’m only using pf on the
host, not in the jail. I’m using Devin Teske’s jng to create a netgraph
bridge. It is my intention to use the netgraph bridge with bhyve as well.
The panic (one-time) happened in pfioctl when I refreshed the rules. I
suspect the problem is related to the following message when I stop the
jail:
kernel: Freed UMA keg (pf table entries) was not empty (32 items). Lost
-57 pages of memory.
CURRENT does not display the UMA message. I’m still narrowing down what
happens with the pf table entries. My suspicion is that the netgraph
bridge, which creates an ng_ether device that is handed over to the
jail’s vnet, is causing this.
The panic happened on the LIST_REMOVE in keg_fetch_slab:

static uma_slab_t
keg_fetch_slab(uma_keg_t keg, uma_zone_t zone, int flags)
{
        uma_slab_t slab;
        int reserve;

        mtx_assert(&keg->uk_lock, MA_OWNED);
        slab = NULL;
        reserve = 0;
        if ((flags & M_USE_RESERVE) == 0)
                reserve = keg->uk_reserve;

        for (;;) {
                /*
                 * Find a slab with some space.  Prefer slabs that are
                 * partially used over those that are totally full.  This
                 * helps to reduce fragmentation.
                 */
                if (keg->uk_free > reserve) {
                        if (!LIST_EMPTY(&keg->uk_part_slab)) {
                                slab = LIST_FIRST(&keg->uk_part_slab);
                        } else {
                                slab = LIST_FIRST(&keg->uk_free_slab);
                                LIST_REMOVE(slab, us_link);    /* <-- panic here */
                                LIST_INSERT_HEAD(&keg->uk_part_slab, slab,
                                    us_link);
                        }
                        MPASS(slab->us_keg == keg);
                        return (slab);
                }
KDB: stack backtrace:
#0 0xffffffff805df0e7 at kdb_backtrace+0x67
#1 0xffffffff8059d176 at vpanic+0x186
#2 0xffffffff8059cfe3 at panic+0x43
#3 0xffffffff808ebaa2 at trap_fatal+0x322
#4 0xffffffff808ebaf9 at trap_pfault+0x49
#5 0xffffffff808eb336 at trap+0x286
#6 0xffffffff808d1441 at calltrap+0x8
#7 0xffffffff808a871e at zone_fetch_slab+0x6e
#8 0xffffffff808a87cd at zone_import+0x4d
#9 0xffffffff808a4fc9 at uma_zalloc_arg+0x529
#10 0xffffffff80803214 at pfr_ina_define+0x584
#11 0xffffffff807f0734 at pfioctl+0x3364
#12 0xffffffff80469288 at devfs_ioctl_f+0x128
#13 0xffffffff805fa925 at kern_ioctl+0x255
#14 0xffffffff805fa65f at sys_ioctl+0x16f
#15 0xffffffff808ec604 at amd64_syscall+0x6c4
#16 0xffffffff808d172b at Xfast_syscall+0xfb
So far the panic has not been reproducible.
On 10 Apr 2017, at 15:50, Ernie Luzar <luzar...@gmail.com
<mailto:luzar...@gmail.com>> wrote:
peter.b...@bsd4all.org <mailto:peter.b...@bsd4all.org> wrote:
There have been issues with pf, if I recall correctly. I currently
have issues with stable, pf and vnet. There is a problem with pf table
entries when an interface is moved to a different vnet.
Does anyone know if there is a specific fix for this that hasn’t been
ported to stable? I haven’t had the time to test this on CURRENT.
Peter
PF was fixed in 11.0 to not panic when run on a host that has VIMAGE
compiled into the kernel. On 11.0 you can configure pf to run in a
vnet jail, but it does not actually enforce any firewall rules, because
pf needs access to kernel state that jail(8) blocks by design. As far
as I know this is a show stopper that cannot be fixed without a pf
rewrite changing the way it works internally.
This PR gives all the details:
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=212013
I also tested using Devin Teske’s jng to create a netgraph bridge on
10.0-RELEASE and it worked, but when I tried the same configuration on
11.0-RELEASE it no longer worked.
I strongly suggest you verify pf is functional in your vnet jail before
you go chasing a dump which I suspect is totally misleading.
Set up a simple pf rule set in the vnet jail that allows everything out
except port 43, which the whois(1) command uses, and then issue the
whois command from the vnet jail's console. If the pf rule is really
blocking port 43, the whois command will not return any results. If the
whois command does return results, this indicates that even though you
have all the correct settings to run pf in your vnet jail and have
received no error messages about it, pf is not actually enforcing any
rules (i.e., not working). I am assuming that the host has no firewall
at all, or is at least allowing outbound port 43 packets.
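A minimal version of that test could look like the following (a sketch only; the exact ruleset and the assumption that you enter the jail with jexec are mine, so adjust to your setup):

```sh
# Inside the vnet jail (e.g. via: jexec myjail sh), load a ruleset that
# passes everything out except TCP port 43, the whois port.
cat > /tmp/pf-test.conf <<'EOF'
pass out all
block out quick proto tcp from any to any port 43
EOF
pfctl -f /tmp/pf-test.conf
pfctl -e        # enable pf if it is not already running

# If pf is really enforcing rules, this should hang or time out:
whois freebsd.org

# If it prints normal whois output instead, pf inside the jail is not
# actually filtering anything, whatever pfctl reported.
```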
_______________________________________________
freebsd-current@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-current
To unsubscribe, send any mail to "freebsd-current-unsubscr...@freebsd.org"