Re: pfctl explanation
I'm having a similar issue to what's described here. In my situation I have a table with about 200 entries. I'm attempting to update that table and add about 200 more entries; I've included network blocks this time, the biggest being a /18. I update my /etc/blackhole.abuse file, then I run pfctl -t abuse -T flush as described in this thread, and then I reload the pf.conf file with pfctl -f /etc/pf.conf. When I do this, anything in the state table seems to flow as usual, but any new sessions time out. I'm not sure what's going on. I tried bumping up the table-entries limit with no luck. Any help would be appreciated. I've included the relevant lines from my pf.conf file:

table <abuse> persist file "/etc/blackhole.abuse"
set limit { states 100, tables 1000, table-entries 30 }
block in log quick on { $ext_if } proto { tcp udp } from <abuse> to any label abuse

On 6/21/07, Francesco Toscan [EMAIL PROTECTED] wrote:
> 2007/6/21, Peter N. M. Hansteen [EMAIL PROTECTED]:
> > You may be hitting one or more of the several relevant limits, but
> > have you tried something like 'pfctl -T flush -t tablename' before
> > reloading the table data?
>
> Yes, if I first flush the table it works flawlessly. The 'problem'
> occurs only when reloading the ruleset directly with pfctl -f, without
> flushing / cleaning anything, when pfctl has two full copies of this
> large_table.
>
> f.
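As an aside for a setup like the one above, pfctl can also swap a table's contents in one step without reloading the whole ruleset; a minimal sketch, assuming the table and file names from the post (requires root and a loaded pf ruleset, so this is not mentioned in the thread itself):

```shell
# Atomically replace the contents of the <abuse> table from the file,
# without touching the rest of pf.conf. 'pfctl -T replace' adds entries
# that are new and deletes entries no longer in the file.
pfctl -t abuse -T replace -f /etc/blackhole.abuse

# Count the entries now in the table to verify the update took effect.
pfctl -t abuse -T show | wc -l
```

This avoids the flush-then-reload window during which the table is empty and blocked hosts briefly match no rule.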
Re: pfctl explanation
2007/6/20, Ted Unangst [EMAIL PROTECTED]:
> yes, reloading the rules makes another copy then switches over. if
> you have a really large table, this means having two copies of the
> table during the transition.

Thank you for your answer. I've just tried setting table-entries to 550K, more than double the content of large_table (210144 entries), but the reload always gives:

/etc/pf.conf.queue:17: cannot define table large_table: Cannot allocate memory

I guess pfctl needs even more entries, or I'm hitting another kind of limit; I'll look into it.

f.
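The doubling Ted describes can be made concrete with a rough headroom calculation (a sketch; the 210144 figure is from the post, and since 550K already exceeds this minimum yet still failed, another limit or per-entry kernel memory overhead is evidently also in play):

```shell
# During an atomic 'pfctl -f' reload, the old and new copies of a table
# briefly coexist, so table-entries must cover both at once.
entries=210144                # current size of large_table (from the post)
needed=$((entries * 2))       # both copies exist during the transition
echo "table-entries must be at least $needed"
```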
Re: pfctl explanation
Francesco Toscan [EMAIL PROTECTED] writes:
> I've just tried to set table-entries to 550K, more than double the
> content of large_table (210144 entries) but reload always gives:
>
> /etc/pf.conf.queue:17: cannot define table large_table: Cannot
> allocate memory

You may be hitting one or more of the several relevant limits, but have you tried something like 'pfctl -T flush -t tablename' before reloading the table data?

--
Peter N. M. Hansteen, member of the first RFC 1149 implementation team
http://www.blug.linux.no/rfc1149/ http://www.datadok.no/ http://www.nuug.no/
"First, we kill all the spammers" The Usenet Bard, "Twice-forwarded tales"
delilah spamd[29949]: 85.152.224.147: disconnected after 42673 seconds.
Re: pfctl explanation
2007/6/21, Peter N. M. Hansteen [EMAIL PROTECTED]:
> You may be hitting one or more of the several relevant limits, but
> have you tried something like 'pfctl -T flush -t tablename' before
> reloading the table data?

Yes, if I first flush the table it works flawlessly. The 'problem' occurs only when reloading the ruleset directly with pfctl -f, without flushing / cleaning anything, when pfctl has two full copies of this large_table.

f.
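The working sequence described here, spelled out (table name taken from the thread; requires root, and note the table is briefly empty between the two commands):

```shell
# Flush the table first so only one copy ever has to fit within
# table-entries, then reload the ruleset, which repopulates the table
# from its 'persist file' source.
pfctl -t large_table -T flush
pfctl -f /etc/pf.conf
```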
Re: pfctl explanation
On 6/20/07, Francesco Toscan [EMAIL PROTECTED] wrote:
> when I first load the rules everything works fine; when I reload the
> rules with pfctl -f pf.conf, pfctl segfaults or exits returning
> "Cannot allocate memory" as if the table-entries limit were not high
> enough. If I first flush the large table and then reload the rules,
> everything works fine again.
>
> I once read on misc@ Henning Brauer saying pfctl -f performs
> operations atomically: should I assume pfctl creates another copy of
> large_table in this process? How does it work? It's really just a
> curiosity about pfctl internals.

yes, reloading the rules makes another copy then switches over. if you have a really large table, this means having two copies of the table during the transition.
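A pf.conf fragment reflecting this behaviour (the value is illustrative, not from the thread; kernel memory available to pf is capped separately, so raising table-entries alone may still not be enough):

```
# Size table-entries for two simultaneous copies of the largest table
# during an atomic reload, plus headroom for the other tables.
set limit table-entries 1000000
```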