Y'all, I've come up with an interesting project. As most of you know, it is possible to filter for viruses on the mail server, using amavisd-new and clamav with postfix, for example. I would like to filter web traffic in a similar manner, using available open source tools whenever possible.
The best setup would be to find a web proxy engine that allows plugins, so that I would only have to write the scanning plugin. I don't yet know whether squid (my preferred proxy) supports this.

The next best thing, in my mind, is to do something like amavis does for sendmail. In that scenario, you run two copies of sendmail with amavis sitting between them, so that all email passes through amavis and whatever scanners amavis is configured to use. Bringing this configuration over to my project, I envision running two instances of squid on different ports, telling the user-facing proxy (the one the browser is configured to use) to fetch everything from an upstream proxy, and running my malware scanner between the two. Remember, #3 is profit, but #2 is the hard part ;-)

Here are a few random thoughts...

Clamav can be run as a standalone daemon, accepting discrete files. It will also scan a file or directory if you tell it to. It gives you a thumbs-up or thumbs-down, and that output can be used to feed bad URLs into a blocklist that the internal copy of squid can make use of.

I'm worried about speed problems, but the cost of piping everything through a scanner might be offset by the fact that we're running a cache, after all.

Web malware can be defined as viruses, spyware, cross-site scripting, bad javascript, etc. I wouldn't know how to scan for much except viruses, but others can write plugins if we can come up with a working framework for scanning web traffic.

Thanks for reading. All feedback is welcome. Help is even more welcome :-)

-- 
Joey Kelly
< Minister of the Gospel | Linux Consultant >
http://joeykelly.net

"I may have invented it, but Bill made it famous."
--- David Bradley, the IBM employee who invented CTRL-ALT-DEL
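For what it's worth, here is a rough squid.conf sketch of the two-instance chain and the blocklist idea from the post above. The ports (3128/3129) and the blocklist file name are placeholders of my own, not anything squid ships with:

```
# User-facing squid instance, the one browsers point at.
http_port 3128

# Forward everything to the second squid instance (or to the
# scanner sitting in front of it) running on port 3129.
cache_peer 127.0.0.1 parent 3129 0 no-query default
never_direct allow all

# Blocklist fed by the scanner: one URL regex per line in the file.
acl malware_urls url_regex -i "/etc/squid/malware-urls.txt"
http_access deny malware_urls
```

Each instance would be started with its own config file, e.g. `squid -f /etc/squid/frontend.conf` and `squid -f /etc/squid/internal.conf`. The deny ACL could just as well live on the internal instance instead.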
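And here is a minimal sketch of what the scanner-in-the-middle might do per response body, talking to a standalone clamd over its INSTREAM protocol. It assumes clamd is listening on TCP port 3310 (the `TCPSocket` option in clamd.conf); the function names are mine, and a real version would chunk large bodies rather than send them in one piece:

```python
import socket
import struct


def parse_clamd_reply(reply: bytes):
    """Turn a raw clamd reply like b'stream: OK\\0' or
    b'stream: Eicar-Test-Signature FOUND\\0' into (infected, text)."""
    text = reply.rstrip(b"\0").decode("utf-8", "replace")
    return text.endswith("FOUND"), text


def scan_bytes(data: bytes, host: str = "127.0.0.1", port: int = 3310):
    """Stream one fetched object to clamd and return (infected, text)."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(b"zINSTREAM\0")
        # Each chunk is a 4-byte big-endian length followed by the data;
        # a zero-length chunk terminates the stream.
        sock.sendall(struct.pack("!I", len(data)) + data)
        sock.sendall(struct.pack("!I", 0))
        reply = sock.recv(4096)
    return parse_clamd_reply(reply)
```

A thumbs-down result (`infected == True`) is where the URL would get appended to the blocklist file the internal squid reads.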
