I googled around and got a Yubikey to lock down my Debian Stretch based server.
The thought being, I could deploy squid with an IP blacklist, use OpenDNS, and
protect the whole setup by requiring a Yubikey to log in. Originally, I tried
using e2guardian as well.

The e2guardian filter is inaccurate, has no override feature, and is not space
friendly. My server has a 500 GB hard disk that kept filling up when I deployed
e2guardian. In theory, it is more efficient to look at meta tags, when they
exist, to judge content type than to rely on a simple blacklist file for squid.
In practice, a combination of a squid blacklist and OpenDNS seems to work
better than squid with e2guardian in front.
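
For reference, the squid side of the blacklist only takes a couple of lines. I'm
matching on destination domains here rather than raw IPs, and the file path is
just where I keep mine; one entry per line, with a leading dot to catch
subdomains:

  # /etc/squid/squid.conf
  acl blocked_sites dstdomain "/etc/squid/blacklist.txt"
  http_access deny blocked_sites

  # /etc/squid/blacklist.txt
  .example-tracker.com
  .another-bad-site.net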

Since my Debian server runs on a 500 GB Western Digital Black drive, there
isn't much space to hold excessively large log files or an excessive cache.
The e2guardian filter needs to log a lot less, allow override via a password at
reasonable hours, and store less in general. The alternative of a squid
blacklist file is cumbersome at best; think hundreds of site URLs in your
blacklist rather quickly. In concept, maintenance of the blacklist needs to be
automated somehow. Using something like e2guardian when a site's nature is
unknown makes sense, but then record the site in a blacklist if appropriate and
don't go through the overhead of using e2guardian again.
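
As for automating blacklist maintenance, a rough sketch of what I have in mind,
assuming the list lives at /etc/squid/blacklist.txt and squid only needs a
reconfigure to pick up changes:

  #!/usr/bin/env python3
  # Rough sketch only; paths and the squid binary location may differ on
  # your system. Appends a domain once, then tells squid to reload.
  import subprocess
  import sys

  BLACKLIST = "/etc/squid/blacklist.txt"   # file named by the dstdomain acl

  def add_domain(domain):
      entry = "." + domain.lstrip(".")     # leading dot also matches subdomains
      with open(BLACKLIST, "r+") as f:
          existing = {line.strip() for line in f}
          if entry in existing:
              return False
          f.write(entry + "\n")
      # ask the running squid to re-read squid.conf and its acl files
      subprocess.run(["squid", "-k", "reconfigure"], check=True)
      return True

  if __name__ == "__main__":
      if add_domain(sys.argv[1]):
          print("added", sys.argv[1])
      else:
          print(sys.argv[1], "was already blacklisted")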

Another problem that has come up is that a majority of sites use HTTPS now,
which squid should not be in the middle of; otherwise it is a man-in-the-middle
attack. You also run the risk, if you run HTTPS through squid, of breaking
legitimate sites. Ideally, the first time you go to an HTTPS URL you would go
through e2guardian, and if the site checks out the firewall would change and
you'd be allowed to access that site without going through the proxy. You want
something lighter than a proxy, where your firewall dynamically checks which
site you are going to and allows direct access for sites on a whitelist.
Running HTTPS through squid is dicey at best; I've googled how to do it and
there are no clear instructions, let alone guarantees that it will work right.
Trying to filter all Internet traffic means you want to block access to remote
proxies... how best to do that, though?
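
The "firewall dynamically allows direct access to whitelisted sites" piece
could probably be done with an ipset instead of rewriting rules every time a
site is approved. A sketch, with a made-up set name and eth1 as my LAN
interface:

  # a set of destination addresses that have been approved
  ipset create https_ok hash:ip

  # LAN machines may go straight out to approved hosts on 443
  iptables -A FORWARD -i eth1 -p tcp --dport 443 \
      -m set --match-set https_ok dst -j ACCEPT
  # anything not yet approved gets rejected (or redirected to a block page)
  iptables -A FORWARD -i eth1 -p tcp --dport 443 -j REJECT

  # approving a site later is just adding its address to the set
  ipset add https_ok 203.0.113.10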

As far as the Yubikey goes, setup was tricky, and enough time has passed that
I'm concerned I won't remember the changes that had to be made. Another
problem: ssh access allows you to bypass 2FA altogether. Allowing commands such
as sudo and su is also a problem for a 2FA environment. You should not be able
to gain administrative access without the key, and the key should possibly be
hidden and retrieved by someone responsible as needed.
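
In case it helps anyone else (and so I have it written down somewhere), the
pieces I had to touch were roughly these. The exact pam_yubico options depend
on whether you validate against YubiCloud or use challenge-response, and the
mapping file path is just where I chose to put mine:

  # /etc/pam.d/sshd  (same line in /etc/pam.d/sudo and /etc/pam.d/su if
  # those should also demand the key)
  auth required pam_yubico.so id=<api client id> authfile=/etc/yubikey_mappings

  # /etc/yubikey_mappings -- user : allowed key public id(s)
  michael:<12-character public id of the key>

  # /etc/ssh/sshd_config -- public key auth alone skips PAM's auth stack,
  # which is exactly the "ssh bypasses 2FA" problem; requiring both closes it
  UsePAM yes
  ChallengeResponseAuthentication yes
  AuthenticationMethods publickey,keyboard-interactive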

I keep saying I'm going to uninstall ssh, but I'm having to tweak the firewall,
and sometimes I need to add to the squid blacklist file. An iptables firewall
is very low level, and implementing a robust one is very challenging. If you
decide to go with a default policy of DROP, you have to be certain that all the
necessary ACCEPT rules are in place. I have yet to enforce transparent proxying
properly.
I wonder if, instead of explicit block rules for certain HTTPS sites, iptables
could call a helper program that looks up the site name in a whitelist to
decide whether or not to allow the connection. If the site isn't in the
whitelist, you get sent to a local web page that says so and lets you whitelist
it on the fly given the correct password. Give three options: whitelist for
this time only, blacklist, or whitelist indefinitely, with any meta tag
information provided to aid in your decision. The administrative action of
allowing exceptions or changing whitelists and blacklists needs to be a
protected action. There needs to be an audit trail too; the person responsible
may ask why you overrode e2guardian's recommendation to block, for example,
bigboobs.net... (the name is fiction, I hope). Another example of where an
override may be needed is youtube.com or webmd.com. There are cases where you
are looking at pictures that could be considered sexual for legitimate medical
reasons. Or maybe eskimo.com comes up as a sex site when in fact it is not.

I'm thinking something higher level than an iptables firewall is needed to
determine at surf time which site accesses are acceptable and which aren't,
with a reasonable way to override a wrong automated decision. The only time you
need to go through a proxy is to get meta tag info or to save bandwidth. The
problems with proxying HTTPS are dicey ones; I'm not sure what to do instead to
determine content nature before allowing access via the HTTPS protocol.
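
On the "iptables calls a helper program" idea, the mechanism that actually
exists for this is the NFQUEUE target, where a userspace program issues the
accept/drop verdict on each queued packet. A very rough sketch of the shape of
it using the python netfilterqueue bindings; the whitelist here is just
destination IPs resolved ahead of time, because deciding by site name would
mean parsing the SNI out of the TLS ClientHello, which is the part I haven't
solved:

  #!/usr/bin/env python3
  # Rough sketch only. Pair it with something like:
  #   iptables -A FORWARD -p tcp --dport 443 -m state --state NEW \
  #       -j NFQUEUE --queue-num 1
  # so new HTTPS connections get handed to this program for a verdict.
  import socket
  from netfilterqueue import NetfilterQueue   # python3-netfilterqueue

  # Destination addresses already approved (normally loaded from a file).
  WHITELIST = {"203.0.113.10"}
  AUDIT_LOG = "/var/log/https-verdicts.log"   # audit trail for blocks/overrides

  def verdict(pkt):
      raw = pkt.get_payload()
      # IPv4 destination address lives at bytes 16..19 of the IP header.
      dst = socket.inet_ntoa(raw[16:20])
      allowed = dst in WHITELIST
      with open(AUDIT_LOG, "a") as log:
          log.write("%s %s\n" % (dst, "ACCEPT" if allowed else "DROP"))
      if allowed:
          pkt.accept()
      else:
          pkt.drop()

  nfq = NetfilterQueue()
  nfq.bind(1, verdict)        # queue number must match the iptables rule
  try:
      nfq.run()
  finally:
      nfq.unbind()

The block page, the password-protected override, and the three-way
whitelist/blacklist choice would have to sit on top of this, but the verdict
hook and the audit log are the foundation.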

 -- Michael C. Robinson