"Per Gustav Ousdal" <[EMAIL PROTECTED]> writes:

> > very few do content and protocol parsing, and even those
> > are limited based on the designers' knowledge of attacks in the
> > protocols that are being analysed and proxied.
>
>Actually, I am shocked! :/ When you say very few, does that include the proxies?

Especially the proxies.

>What's the point of a proxy then? Has it become so that people write proxies simply 
>as a means for certain traffic to travel across a dual-homed host with IP forwarding 
>disabled (with no thought to security; no effort at blocking buffer overflows, known 
>bugs, etc. at all)?

As far as I can tell, most proxies are not much better than just
having an IP access list in a router. The first generation of
proxies actually did a fair bit of analysis about the various
types of traffic flowing through them. For example, the first
FTP proxy did rate limiting and command parsing/control on the
command stream. But even it wasn't smart enough to understand
FTP bounce attacks. It didn't "understand" buffer overruns
except by accident - it used static buffers (with bounds checking)
on commands, and would error out if it got something unexpected.
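
To make that concrete, here is a minimal sketch of the kind of
bounds-checked command handling described above. It is not the code
from any real proxy; the buffer size and the list of accepted verbs
are made up for illustration.

#include <stdio.h>
#include <string.h>

#define MAXCMD 512   /* static command buffer, as in the old proxies */

/* made-up list of command verbs the proxy is willing to pass */
static const char *known[] = { "USER", "PASS", "RETR", "STOR", "QUIT", NULL };

/* Return 0 if the command line is acceptable, -1 to drop the session. */
static int check_command(FILE *in)
{
    char buf[MAXCMD];
    int i;

    if (fgets(buf, sizeof(buf), in) == NULL)
        return -1;                     /* EOF or read error */
    if (strchr(buf, '\n') == NULL)
        return -1;                     /* line too long: would have overrun */

    for (i = 0; known[i] != NULL; i++)
        if (strncmp(buf, known[i], strlen(known[i])) == 0)
            return 0;                  /* recognized command verb */

    return -1;                         /* unexpected input: error out */
}

int main(void)
{
    return check_command(stdin) == 0 ? 0 : 1;
}

Anything unexpected -- an overlong line, an unknown verb -- simply kills
the session instead of being forwarded, which is the sense in which
those proxies "understood" buffer overruns by accident.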

Most of the current generation of proxies are written to "just
get the data back and forth" and never mind doing security
processing. For example, a "smart" web proxy would have to collect
the whole document/data stream, look at it, and then decide
whether or not to send it in/out. That breaks web streaming. The
customers scream so the checks are removed. The firewall toolkit
(and by extension early versions of Gauntlet) looked for about
4 well-known attacks against sendmail in the mail proxy, and
FTP bouncing in FTP command streams. That was _it_.
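
As a rough sketch of why the checks break streaming: a content-checking
proxy has to buffer the entire document before it can decide, roughly
like the toy program below. The content_ok() policy test is made up;
the point is only that nothing reaches the client until the last byte
has arrived from the server.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* made-up policy test: refuse anything containing the marker "EVIL" */
static int content_ok(const char *buf, size_t len)
{
    static const char marker[] = "EVIL";
    size_t mlen = sizeof(marker) - 1;
    size_t i;

    for (i = 0; i + mlen <= len; i++)
        if (memcmp(buf + i, marker, mlen) == 0)
            return 0;
    return 1;
}

int main(void)
{
    char chunk[4096];
    char *doc = NULL;
    size_t len = 0, cap = 0, n;

    /* collect the whole document from the server side (stdin here) */
    while ((n = fread(chunk, 1, sizeof(chunk), stdin)) > 0) {
        if (len + n > cap) {
            cap = cap ? cap * 2 : 8192;
            if (cap < len + n)
                cap = len + n;
            doc = realloc(doc, cap);
            if (doc == NULL)
                return 1;
        }
        memcpy(doc + len, chunk, n);
        len += n;
    }

    /* only now can the proxy decide whether to pass it along */
    if (!content_ok(doc, len))
        return 1;                      /* drop the document */

    if (len > 0)
        fwrite(doc, 1, len, stdout);   /* forward to the client */
    free(doc);
    return 0;
}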

Nowadays firewall makers are rewarded for hauling data back
and forth at peak bandwidth, not for performing security checks.
As a consequence, few of them do. I don't think it would make much
difference even if they did, because nowadays the application protocols are not
public information. Making checks in an FTP proxy was possible
because FTP was a well-known protocol. What about netmeeting?
Or ICQ? Or some other half-assed new application protocol thrown
together last night by the startup down the street? The proxies
just pass the data because nobody understands it anyhow and the
vendors are free to change it from release to release.


>Also, in your debate on firewalls (obsolete or not), you state: "Some firewalls perform 
>application-specific security on data streams. Others do not. Sometimes you can't." 
>What do you mean by the last one ("... you can't")? Why not? :)

SSL, for example. Even if you had a proxy that "understood"
buffer overruns in HTTP, what about buffer overruns triggered
over SSL? Mostly, inside the web server, the accesses wind up
going down the same code-path once they've gotten pulled off
the HTTP or SSL transports.

>It really seems like many computer security professionals don't understand the 
>incoming traffic problem either :/

Nope. :( So far we've been spared the next one, which is the
"outgoing traffic problem" -- in which the bad guys realize
that 99.7% of the firewalls out there are transparently
permeable from the inside going toward the outside. Which means
that a "firewall buster" trojan horse that knows how to tunnel
out through a firewall (usually by just making a connection on
port 80) will be able to easily make the firewall a moot issue.
Imagine if someone wired a firewall buster into a virus like
Melissa. How would network admins react? I know of no palatable
solutions to this problem.

mjr.
--
Marcus J. Ranum, CEO, Network Flight Recorder, Inc.
work - http://www.nfr.net
home - http://www.clark.net/pub/mjr
