> > (Sorry Paul, I like your posting a lot (and I am really more "on your side"),
> > so no offense, but) "mht" *might* have covered these cases by saying
> > "if the network architecture was done correctly ..". Probably far-fetched,
> > but this term *could* include ActiveX & Java disabled on all
> > hosts, and a "mail filter*" that would put ANY attachment in /dev/null.
>
> Which still doesn't cover HTTP downloads, especially of self-updating
> HTTP-based things like say a new release of IE. There are simply
> becoming too many vectors to get data "in."
Yeah, I agree with you; if I were "mht" I "wouldn't be sleeping as well at night" as
he/she seems to ;) HTTP is a problem, it is just too flexible. And I agree: self-updating
software is not a good thing.
But what if the security policy states that only the SysAdmin (or at least someone who
knows about MD5, etc.) is allowed to download & install new software? Or that new
software should ONLY be downloaded from the intranet server? Is there a way to enforce
such a policy? (i.e. something that looks at what is being sent via HTTP and filters out
the undesired MIME types, unless the person is authorised.) It might be possible
depending on the size and requirements of the organisation, but does it exist?
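To make the idea concrete, here is a minimal sketch of such a download filter in
Python. This is purely illustrative, not an existing product: the blocked MIME types,
the authorised-user list, and the function name are all assumptions; in practice the
logic would live in the proxy or firewall, with the user taken from proxy
authentication.

```python
# Hypothetical HTTP download filter: block "software-like" MIME types
# unless the requester is on an authorised list. The type list and the
# user list below are illustrative assumptions, not a real policy.
BLOCKED_TYPES = {
    "application/octet-stream",
    "application/x-msdownload",
    "application/zip",
}
AUTHORISED_USERS = {"sysadmin"}

def allow_download(user, content_type):
    """Decide whether this HTTP response may pass through to the user."""
    # Strip any "; charset=..." parameters from the Content-Type header.
    mime = content_type.split(";")[0].strip().lower()
    if mime in BLOCKED_TYPES:
        return user in AUTHORISED_USERS
    return True
```

A real deployment would also have to deal with things like renamed files and
mislabelled Content-Type headers, which is exactly why this is hard to enforce.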
> > Call me crazy, but: If security is absolutely vital to an organisation
> > their policy should be like this (and they should use alternative ways
> > to distribute files). (See my post to mht for what I think *Marcus*
> > really meant)
>
> My "problem" with this is that you can't have 401K plans over the Web,
> business-to-business ordering, and the like without HTTP access, so even
> if you do all of the above, you're still talking about limiting HTTP and
> HTTPS traffic significantly. I no longer think that's a realistic goal
> in most commercial environments. Believe me, I'm probably one of the
> most vocal and as far as anyone in a company the size of mine, practicing
> opponents of limiting such things.
Right, HTTP is a difficult one :/ I think part of the problem here is that to many
people the WWW == the Internet; I mean, they do everything through the web interface.
Virus scanning everything should help some; of course it won't protect you against new
stuff (but maybe your security policy allows you to settle for this).
One solution to minimising the risk *could* be to have a net of sacrificial lambs that
allows web browsing, firewalled from the production network and with a different
security policy (maybe the modem pool to which the users connect from home ;)). And
then limit as much as possible, or totally block, WWW on the production network.
> > * This gave me an idea (maybe it exists already, but): How about a mail
> > filter that would require all attachments to be (PGP?? (probably not,
> > but maybe a custom thing that would use a key with the same level of
> > security as the PGP secret key)) encrypted with a "public" key that was
> > given only to those with a valid need to send attachments to the
> > organisation (through a secure channel, of course)? Any other attachments
> > would go to /dev/null. Sure, a lot of hassle, but if that level of
> > security is needed, this might be a solution? Anyone?
>
> Well, in my case, there are large numbers of people within the
> organization who must be able to receive such data from members of the
> general public, so it wouldn't work. Perhaps for your business it would.
Ah, well in that case ... :/ Yup, I think for my business (and probably many others)
that would be a viable solution. Especially if the filter notified me when an
attachment was deleted. Or maybe it wouldn't have to delete it; it could simply store
attachments in a safe place (i.e. where the user could not get at them). One could
then get confirmation from the person sending the attachment before deciding whether
it should be forwarded or trashed.
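A rough sketch of that quarantining filter, using Python's standard `email` module.
Everything here is an assumption for illustration: it treats any attachment whose
decoded payload does not begin with a PGP armour header as unauthorised, and the
quarantine list stands in for the "safe place" and admin notification.

```python
# Hypothetical mail filter: attachments that are not PGP-encrypted are
# stripped from the delivered copy and put in a quarantine area, where
# an admin can later decide to forward or trash them. Names are made up.
from email import message_from_string

PGP_HEADER = b"-----BEGIN PGP MESSAGE-----"

def filter_message(raw_mail, quarantine):
    """Return the parsed message with unauthorised attachments stripped;
    each stripped (filename, payload) pair is appended to quarantine."""
    msg = message_from_string(raw_mail)
    for part in msg.walk():
        if part.get_filename() is None:
            continue  # not an attachment, leave it alone
        payload = part.get_payload(decode=True) or b""
        if not payload.startswith(PGP_HEADER):
            quarantine.append((part.get_filename(), payload))
            part.set_payload("")  # strip it from the delivered copy
    return msg
```

A production filter would of course verify the encryption properly (a spoofed armour
header defeats this sketch) and notify the admin, but the flow is the same.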
I'd really like to see an (at least partial) solution here, since there seems to be
no end to this type of virus these days.
> > > The point that the firewall's protection mechanism is based on what's
> > > blocked, not what's passed is still valid. Incoming traffic doesn't
> > > have to be externally initiated, it can be DNS, HTTP, SMTP...
> >
> > Good points.
> >
> > Any thoughts on which attacks/threats are possible in this situation (i.e.
> > traffic internally initiated)? I can think of threats like spoofing, and
> > redirection (to sites that claim to be what they are not). But are there
> > any attacks that can be accomplished this way? To compromise either the
> > FW or hosts on the inside?
>
> Sure, spoof or redirect *.microsoft.com when Exploder 6.0 comes out and
> replace it with a trojan. That's about the simplest.
Yes, SCARY! But I'd be really interested to know if you (or anyone else) can think of
any others?
> > BTW: Hey, how come everyone else in this thread is @clark.net?? :)
>
> We all chose the best regional ISP before the borg ship that is Verio
> moved in. Or it's all a conspiracy ;)
Wow, quick, block everything from clark.net :D OK, so you all know how to choose the
best ISP; I still find it amazing that you are all from the same region :) BTW: what
region is that?
Regards,
Per
-
[To unsubscribe, send mail to [EMAIL PROTECTED] with
"unsubscribe firewalls" in the body of the message.]