Re: [OpenWrt-Devel] Slow DNSMasq with > 100,000 entries in additional addresses file

2016-12-31 Thread Dave Taht
On Sat, Dec 31, 2016 at 12:15 AM, TheWerthFam wrote:
> Quick report -
> So I didn't test pihole per se, but used that method of storing the
> blacklist in the hosts file for dnsmasq to use.  Dnsmasq must use a
> different storage method for its hosts file. I loaded 850439 entries in the
> hosts file and restarted dnsmasq. It uses half as much memory as when loaded
> as a conf-file like adblock does.  And it's super fast, with virtually
> nonexistent CPU usage.  DNS lookups perform just as they should.  Though the
> hosts file is now returning an IP address I specified for the blocked hosts
> - it would have been nice to return NXDOMAIN.  I think this will work for my
> needs; I can put a second IP address on the router and run pixelserv on it
> or something like that.

Good to know. I'm still interested in finding more
"read-only-thus-discardable data" methods for protecting home networks
and routers; this, for example:

https://plus.google.com/u/0/107942175615993706558/posts/635rm12isPq?sfc=true

> Cheers
> Derek

Re: [OpenWrt-Devel] Slow DNSMasq with > 100,000 entries in additional addresses file

2016-12-30 Thread TheWerthFam

Quick report -
So I didn't test pihole per se, but used that method of storing the
blacklist in the hosts file for dnsmasq to use.  Dnsmasq must use a
different storage method for its hosts file. I loaded 850439 entries in
the hosts file and restarted dnsmasq. It uses half as much memory as
when loaded as a conf-file like adblock does.  And it's super fast, with
virtually nonexistent CPU usage.  DNS lookups perform just as they
should.  Though the hosts file is now returning an IP address I
specified for the blocked hosts - it would have been nice to return
NXDOMAIN.  I think this will work for my needs; I can put a second IP
address on the router and run pixelserv on it or something like that.
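
Roughly, the conversion looks like this (the list path and the pixelserv
alias IP are illustrative, not my exact setup):

  # rewrite adblock's "local=/domain/" lines into hosts-file entries
  # pointing at a pixelserv alias on the router
  sed -n 's|^local=/\(.*\)/$|192.168.1.2 \1|p' \
      /tmp/dnsmasq.d/adblock.conf > /etc/blocked.hosts

  # then tell dnsmasq to read it in addition to /etc/hosts
  # (in /etc/dnsmasq.conf)
  addn-hosts=/etc/blocked.hosts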

Cheers
Derek


On 12/29/2016 11:11 AM, Dave Taht wrote:
> Well, I've had a bit of fun feeding large blocklists into cmph. Using
> the "chd" algorithm, it creates an index file from a 24MB blocklist
> into an 800K one. (But you still need the original data and a
> secondary index.) I also fiddled a bit with bloom filters, which
> strike me as apropos. It seems feasible to establish a large dataset
> of read-only data with a fast index (one that can be discarded in
> low-memory situations, rather than swapped out).
>
> I'll take a look at pi-hole...


Re: [OpenWrt-Devel] Slow DNSMasq with > 100,000 entries in additional addresses file

2016-12-29 Thread Juliusz Chroboczek
> I also fiddled a bit with bloom filters, which strike me as apropos.

Bloom filters trade accuracy for space -- they're arbitrarily smaller than
hash tables, but at the cost of causing more false positives.  Since your
tests indicate that perfect hash tables are small enough, a Bloom filter
would probably not be useful here.
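
For scale: the usual sizing rule is m = -n ln p / (ln 2)^2 bits with
k = (m/n) ln 2 hash functions, so 850,000 domains at a 1% false-positive
rate would cost about 8.1 Mbit (roughly 1 MB) and 7 hashes.  A minimal
sketch of such a filter (illustrative C, not anything in dnsmasq), using
the standard double-hashing trick to derive the k hash values:

  #include <math.h>
  #include <stdint.h>
  #include <stdlib.h>

  struct bloom { uint8_t *bits; uint64_t m; int k; };

  /* FNV-1a, seeded two ways to get two independent hash values */
  static uint64_t fnv1a(const char *s, uint64_t seed) {
      uint64_t h = 1469598103934665603ULL ^ seed;
      while (*s) { h ^= (uint8_t)*s++; h *= 1099511628211ULL; }
      return h;
  }

  static struct bloom *bloom_new(uint64_t n, double p) {
      struct bloom *b = malloc(sizeof *b);
      b->m = (uint64_t)ceil(-(double)n * log(p) / (M_LN2 * M_LN2));
      b->k = (int)round((double)b->m / n * M_LN2);
      b->bits = calloc((b->m + 7) / 8, 1);
      return b;
  }

  /* Kirsch-Mitzenmacher: h1 + i*h2 simulates k independent hashes */
  static void bloom_add(struct bloom *b, const char *key) {
      uint64_t h1 = fnv1a(key, 0), h2 = fnv1a(key, 0x9e3779b9);
      for (int i = 0; i < b->k; i++) {
          uint64_t bit = (h1 + i * h2) % b->m;
          b->bits[bit / 8] |= 1 << (bit % 8);
      }
  }

  /* 1 = "probably in the blocklist", 0 = definitely not in it */
  static int bloom_test(const struct bloom *b, const char *key) {
      uint64_t h1 = fnv1a(key, 0), h2 = fnv1a(key, 0x9e3779b9);
      for (int i = 0; i < b->k; i++) {
          uint64_t bit = (h1 + i * h2) % b->m;
          if (!(b->bits[bit / 8] & (1 << (bit % 8)))) return 0;
      }
      return 1;
  }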

If I had a few days to spare on the issue, I'd rework the data structures
in dnsmasq to deal with that case.  While I haven't looked at the dnsmasq
code, 100,000 entries is not a lot; if dnsmasq cannot deal with that, it's
probably using very naive data structures, and it should be easy enough to
use something better.

(I'd use a B-tree, by the way, which is a pain to implement but should
give much better performance than open hashing.  If you're too lazy to
implement B-trees, then use pre-randomized binary search trees; they
should be just as good as AVL or RB-trees and trivial to implement.)
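
Sketched (illustrative): shuffle the sorted key list first, then insert
into a completely plain BST; random insertion order gives expected
O(log n) depth with no balancing code at all.

  #include <stdlib.h>
  #include <string.h>

  struct node { const char *key; struct node *l, *r; };

  /* Fisher-Yates shuffle: destroys the sorted order that would
   * otherwise degenerate the tree into an 850,000-deep linked list */
  static void shuffle(const char **keys, size_t n) {
      for (size_t i = n - 1; i > 0; i--) {
          size_t j = (size_t)rand() % (i + 1);
          const char *t = keys[i]; keys[i] = keys[j]; keys[j] = t;
      }
  }

  static struct node *insert(struct node *t, const char *key) {
      if (!t) {
          t = calloc(1, sizeof *t);
          t->key = key;
      } else {
          int c = strcmp(key, t->key);
          if (c < 0) t->l = insert(t->l, key);
          else if (c > 0) t->r = insert(t->r, key);
      }
      return t;
  }

  static int member(const struct node *t, const char *key) {
      while (t) {
          int c = strcmp(key, t->key);
          if (!c) return 1;
          t = c < 0 ? t->l : t->r;
      }
      return 0;
  }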

-- Juliusz


Re: [OpenWrt-Devel] Slow DNSMasq with > 100,000 entries in additional addresses file

2016-12-29 Thread Dave Taht
On Thu, Dec 29, 2016 at 8:09 AM, TheWerthFam wrote:
> Right now I'd rather not customize the code.  There are two directions I'm
> going to try first.
> Give unbound a try to serve DNS, keeping Dnsmasq for DHCP.  If that doesn't
> work, try converting the list to a hosts file pointing to a local pixelserv
> address.  There are some other blog posts that indicate that the hosts file
> can handle a lot more entries, like https://github.com/pi-hole/pi-hole.
> Maybe just run pi-hole on OpenWrt.

Well, I've had a bit of fun feeding large blocklists into cmph. Using
the "chd" algorithm, it creates an index file from a 24MB blocklist
into an 800K one. (But you still need the original data and a secondary
index.) I also fiddled a bit with bloom filters, which strike me as
apropos. It seems feasible to establish a large dataset of read-only
data with a fast index (one that can be discarded in low-memory
situations, rather than swapped out).
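
The build side looks something like this with cmph's documented API
(a sketch: error handling omitted, and loading "domains" from the
blocklist is assumed):

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <cmph.h>

  /* Build a CHD minimal perfect hash over the blocklist and write out
   * a flat "packed" index that can later be mmap()ed read-only.
   * `domains` is assumed to be already loaded, one key per entry. */
  int build_index(char **domains, unsigned nkeys, const char *path)
  {
      cmph_io_adapter_t *source = cmph_io_vector_adapter(domains, nkeys);
      cmph_config_t *config = cmph_config_new(source);
      cmph_config_set_algo(config, CMPH_CHD);
      cmph_t *mphf = cmph_new(config);
      cmph_config_destroy(config);

      /* serialize to a position-independent buffer and save it */
      cmph_uint32 size = cmph_packed_size(mphf);
      char *buf = malloc(size);
      cmph_pack(mphf, buf);
      FILE *f = fopen(path, "wb");
      fwrite(buf, size, 1, f);
      fclose(f);

      /* a lookup maps a key to a slot in [0, nkeys); an MPH maps *any*
       * key somewhere, so the slot must be checked against the original
       * data -- hence the secondary index mentioned above */
      unsigned slot = cmph_search(mphf, domains[0],
                                  (cmph_uint32)strlen(domains[0]));
      cmph_destroy(mphf);
      cmph_io_vector_adapter_destroy(source);
      free(buf);
      return (int)slot;
  }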

I'll take a look at pi-hole...




-- 
Dave Täht
Let's go make home routers and wifi faster! With better software!
http://blog.cerowrt.org


Re: [OpenWrt-Devel] Slow DNSMasq with > 100,000 entries in additional addresses file

2016-12-29 Thread TheWerthFam
Right now I'd rather not customize the code.  There are two directions
I'm going to try first.
Give unbound a try to serve DNS, keeping Dnsmasq for DHCP.  If that
doesn't work, try converting the list to a hosts file pointing to a
local pixelserv address.  There are some other blog posts that indicate
that the hosts file can handle a lot more entries, like
https://github.com/pi-hole/pi-hole.  Maybe just run pi-hole on OpenWrt.
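
If unbound works out, it also has a directive that gives exactly the
NXDOMAIN behaviour I wanted (a sketch; one local-zone line per blocked
domain):

  # unbound.conf sketch: return NXDOMAIN for each blocked domain
  server:
      local-zone: "domainnottogoto.com." always_nxdomain

  # and in /etc/dnsmasq.conf, keep DHCP but disable its DNS entirely:
  port=0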

Cheers
   Derek

On 12/28/2016 02:21 PM, Dave Taht wrote:
> I've been thinking about this, and given the large amount of active
> data in a very small memory space, another approach seems more
> fruitful: convert the giant table into a "minimal perfect hash" and
> mmap it into memory read-only, so it can be discarded under memory
> pressure, unlike ipset-, squid-, or dnsmasq-based approaches.


Re: [OpenWrt-Devel] Slow DNSMasq with > 100,000 entries in additional addresses file

2016-12-28 Thread Dave Taht
On Tue, Dec 27, 2016 at 11:03 PM, TheWerthFam wrote:
> Thanks for the feedback, I'll look into NFQUEUE.  I'm forcing the use of
> my DNS by iptables.  I'm also using a transparent squid and e2guardian to
> filter content.  I like the idea of the DNS-based blacklist to add some
> filtering capabilities, since I don't want to try and filter HTTPS
> sites.  I know no solution is perfect.

I've been thinking about this, and given the large amount of active
data in a very small memory space, another approach seems more
fruitful: convert the giant table into a "minimal perfect hash" and
mmap it into memory read-only, so it can be discarded under memory
pressure, unlike ipset-, squid-, or dnsmasq-based approaches.
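
The lookup side of that idea, sketched (error handling omitted, and a
packed cmph index file as in the example further up is assumed):
file-backed pages mapped PROT_READ stay clean, so the kernel can simply
drop and re-read them under memory pressure instead of swapping.

  #include <fcntl.h>
  #include <string.h>
  #include <sys/mman.h>
  #include <sys/stat.h>
  #include <unistd.h>
  #include <cmph.h>

  /* mmap the packed index read-only: clean pages are evicted and
   * re-read from disk on demand -- they never touch swap */
  int lookup(const char *index_path, const char *domain)
  {
      int fd = open(index_path, O_RDONLY);
      struct stat st;
      fstat(fd, &st);
      void *packed = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
      close(fd);  /* the mapping keeps the file referenced */

      unsigned slot = cmph_search_packed(packed, domain,
                                         (cmph_uint32)strlen(domain));
      /* as above: an MPH hashes unknown keys to *some* slot, so confirm
       * the hit by comparing `domain` against the original blocklist
       * entry stored for `slot` */
      munmap(packed, st.st_size);
      return (int)slot;
  }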





-- 
Dave Täht
Let's go make home routers and wifi faster! With better software!
http://blog.cerowrt.org


Re: [OpenWrt-Devel] Slow DNSMasq with > 100,000 entries in additional addresses file

2016-12-27 Thread TheWerthFam
Thanks for the feedback, I'll look into NFQUEUE.  I'm forcing the use of
my DNS by iptables.  I'm also using a transparent squid and e2guardian
to filter content.  I like the idea of the DNS-based blacklist to add
some filtering capabilities, since I don't want to try and filter HTTPS
sites.  I know no solution is perfect.
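
(For the record, the forcing is just the usual NAT redirect; interface
name illustrative:)

  # send all LAN DNS traffic to the router's own resolver
  iptables -t nat -I PREROUTING -i br-lan -p udp --dport 53 -j REDIRECT --to-ports 53
  iptables -t nat -I PREROUTING -i br-lan -p tcp --dport 53 -j REDIRECT --to-ports 53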

Cheers
 Derek


On 12/27/2016 01:53 PM, philipp_s...@redfish-solutions.com wrote:
> Not to rain on your parade, but the obvious defeat of this solution
> would be to point to an external website which does DNS lookups for
> you, and then edit the URL to have an IP address in place of the host
> name.
>
> I would use netfilter’s NFQUEUE and make a user-space decision based
> on packet-destination (since it seems you’re filtering outbound
> traffic requests).
>
> After all, it’s not the NAME you don’t want to talk to… it’s the HOST
> that bears that NAME.
>
> -Philip


Re: [OpenWrt-Devel] Slow DNSMasq with > 100,000 entries in additional addresses file

2016-12-27 Thread philipp_subx

> On Dec 26, 2016, at 10:32 AM, TheWerthFam wrote:
> 
> Using the adblock set of scripts to block malware and porn sites. The porn
> sites list is 800,000 entries, about 10x the number of sites adblock
> normally uses.  With the full list of malware and porn domains loaded,
> dnsmasq takes 115M of memory and normally sits around 50% CPU usage with
> moderate browsing usage.  CPU and RAM usage isn't really a problem, other
> than that lookups are slow now. Platform is cc 15.05.1 r49389 on a Banana
> Pi R1.
> 
> The adblock script takes the different lists and creates files in
> /tmp/dnsmasq.d/ with entries looking like
> local=/domainnottogoto.com/   (one entry per line).  The goal is to return
> NXDOMAIN for entries in the lists. The lists are sorted, with unique
> entries.
> 
> I've tried increasing the cachesize to 10,000, but that made no change.
> Tried neg-ttl=3600 with default negative caching enabled, with no change.
> 
> Are there dnsmasq settings that will improve the performance, or should
> it be configured differently to achieve this goal?
> Perhaps unbound would be better suited?
> 
> Cheers
> Derek


Not to rain on your parade, but the obvious defeat of this solution would be to 
point to an external website which does DNS lookups for you, and then edit the 
URL to have an IP address in place of the host name.

I would use netfilter’s NFQUEUE and make a user-space decision based on 
packet-destination (since it seems you’re filtering outbound traffic requests).

After all, it’s not the NAME you don’t want to talk to… it’s the HOST that 
bears that NAME.
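
A skeleton of that approach, assuming libnetfilter_queue and a rule such
as "iptables -I FORWARD -p tcp --syn -j NFQUEUE --queue-num 0";
blocked_addr() is a hypothetical lookup against whatever destination-IP
set you maintain:

  #include <stdint.h>
  #include <sys/socket.h>
  #include <netinet/in.h>
  #include <netinet/ip.h>
  #include <linux/netfilter.h>   /* NF_ACCEPT, NF_DROP */
  #include <libnetfilter_queue/libnetfilter_queue.h>

  /* hypothetical: consult your blacklist of destination addresses */
  static int blocked_addr(in_addr_t daddr) { (void)daddr; return 0; }

  static int cb(struct nfq_q_handle *qh, struct nfgenmsg *nfmsg,
                struct nfq_data *nfa, void *data)
  {
      struct nfqnl_msg_packet_hdr *ph = nfq_get_msg_packet_hdr(nfa);
      uint32_t id = ph ? ntohl(ph->packet_id) : 0;
      unsigned char *payload;
      (void)nfmsg; (void)data;

      /* inspect the IPv4 destination and issue a verdict */
      if (nfq_get_payload(nfa, &payload) >= (int)sizeof(struct iphdr)) {
          struct iphdr *ip = (struct iphdr *)payload;
          if (blocked_addr(ip->daddr))
              return nfq_set_verdict(qh, id, NF_DROP, 0, NULL);
      }
      return nfq_set_verdict(qh, id, NF_ACCEPT, 0, NULL);
  }

  int main(void)
  {
      struct nfq_handle *h = nfq_open();
      struct nfq_q_handle *qh = nfq_create_queue(h, 0, &cb, NULL);
      char buf[4096];
      int n;

      nfq_set_mode(qh, NFQNL_COPY_PACKET, 0xffff);
      while ((n = recv(nfq_fd(h), buf, sizeof buf, 0)) > 0)
          nfq_handle_packet(h, buf, n);   /* dispatches to cb() */
      nfq_destroy_queue(qh);
      nfq_close(h);
      return 0;
  }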

-Philip


Re: [OpenWrt-Devel] Slow DNSMasq with > 100,000 entries in additional addresses file

2016-12-27 Thread Lucian Cristian

On 27.12.2016 04:54, TheWerthFam wrote:
> Problem with this method is that it misses lots of HTTPS-based sites.
> I do already run squid, though.  Am I wrong that it will not proxy
> HTTPS sites unless you use a MITM-type setup?
>
> Thanks


I'm guessing that if you implement those restrictions, every client
will have the proxy enforced in the browser, so HTTPS would be
processed by squidGuard too; for a transparent HTTPS proxy you would
need to do ssl-bump.
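
A sketch of what that looks like on squid 3.5+ (cert path illustrative;
generating the local CA is a separate step):

  # squid.conf: intercept HTTPS, peek at the TLS SNI, then splice the
  # connection unmodified -- enough to see the server name without
  # decrypting anything
  https_port 3129 intercept ssl-bump cert=/etc/squid/ca.pem
  acl step1 at_step SslBump1
  ssl_bump peek step1
  ssl_bump splice all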


regards


Re: [OpenWrt-Devel] Slow DNSMasq with > 100,000 entries in additional addresses file

2016-12-26 Thread TheWerthFam
Problem with this method is that it misses lots of HTTPS-based sites.  I
do already run squid, though.  Am I wrong that it will not proxy HTTPS
sites unless you use a MITM-type setup?

Thanks


On 12/26/2016 08:47 PM, Lucian Cristian wrote:
> use squid and squidguard
>
> regards



Re: [OpenWrt-Devel] Slow DNSMasq with > 100,000 entries in additional addresses file

2016-12-26 Thread Lucian Cristian

On 26.12.2016 19:32, TheWerthFam wrote:
> Using the adblock set of scripts to block malware and porn sites. The
> porn sites list is 800,000 entries, about 10x the number of sites
> adblock normally uses.  With the full list of malware and porn domains
> loaded, dnsmasq takes 115M of memory and normally sits around 50% CPU
> usage with moderate browsing usage.  CPU and RAM usage isn't really a
> problem, other than that lookups are slow now.  Platform is cc 15.05.1
> r49389 on a Banana Pi R1.
>
> The adblock script takes the different lists and creates files in
> /tmp/dnsmasq.d/ with entries looking like
> local=/domainnottogoto.com/   (one entry per line).  The goal is to
> return NXDOMAIN for entries in the lists. The lists are sorted, with
> unique entries.
>
> I've tried increasing the cachesize to 10,000, but that made no
> change.  Tried neg-ttl=3600 with default negative caching enabled,
> with no change.
>
> Are there dnsmasq settings that will improve the performance, or
> should it be configured differently to achieve this goal?
>
> Perhaps unbound would be better suited?
>
> Cheers
> Derek


use squid and squidguard

regards


[OpenWrt-Devel] Slow DNSMasq with > 100,000 entries in additional addresses file

2016-12-26 Thread TheWerthFam
Using the adblock set of scripts to block malware and porn sites. The
porn sites list is 800,000 entries, about 10x the number of sites
adblock normally uses.  With the full list of malware and porn domains
loaded, dnsmasq takes 115M of memory and normally sits around 50% CPU
usage with moderate browsing usage.  CPU and RAM usage isn't really a
problem, other than that lookups are slow now.  Platform is cc 15.05.1
r49389 on a Banana Pi R1.


The adblock script takes the different lists and creates files in
/tmp/dnsmasq.d/ with entries looking like
local=/domainnottogoto.com/   (one entry per line).  The goal is to return
NXDOMAIN for entries in the lists. The lists are sorted, with unique entries.


I've tried increasing the cachesize to 10,000, but that made no change.
Tried neg-ttl=3600 with default negative caching enabled, with no change.
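
Concretely, the pieces involved (values as above):

  # one entry per line in /tmp/dnsmasq.d/*.conf:
  local=/domainnottogoto.com/

  # settings tried in /etc/dnsmasq.conf, with no effect:
  cache-size=10000
  neg-ttl=3600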


Are there dnsmasq settings that will improve the performance, or should
it be configured differently to achieve this goal?

Perhaps unbound would be better suited?

Cheers
Derek