[sniffer] Re: FTP server / firewall issues - Resolved.

2007-01-05 Thread Darin Cox
Hi Pete,

Why the change?  FTP is more efficient for transferring files than HTTP.

Can we request longer support for FTP to allow adequate time for everyone to
schedule, test, and make the change?

I remember trying HTTP initially when this was set up, but it wasn't
working reliably, plus FTP is more efficient, so we went that way.  wget may
work better when we have time to try it.

Also, what's this about gzip?  Is the rulebase being changed to a .gz file?
Compression is a good move to reduce bandwidth, but can we put in a plug for
a standard zipfile?

Do you have scripts already written to handle downloads the way you want
them now?  If so, how about a link?

Darin.


- Original Message - 
From: Pete McNeil [EMAIL PROTECTED]
To: Message Sniffer Community sniffer@sortmonster.com
Sent: Friday, January 05, 2007 4:39 PM
Subject: [sniffer] FTP server / firewall issues - Resolved.


Hello Sniffer Folks,

The firewall issues we were having with our new delivery server appear
to have been resolved. I am showing good traffic via FTP at this time.

Normal ftp access for log uploads and SNF rulebase downloads via
www.sortmonster.net / ftp.sortmonster.net should work correctly now.

Note that FTP downloads of SNF rulebases are deprecated. If you are
using FTP to download your rulebase files, you should switch to using
HTTP with gzip as soon as practical.
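
For illustration, a rough Python sketch of what the "HTTP with gzip" fetch can
look like (the URL and file name below are placeholders, not the real rulebase
paths -- the official update scripts are linked later in this thread):

    # Fetch a rulebase over HTTP, advertising gzip so the server can
    # compress the transfer on the fly; decompress only if it actually did.
    import gzip
    import urllib.request

    URL = "http://www.sortmonster.net/path/to/yourlicense.snf"  # placeholder

    req = urllib.request.Request(URL, headers={"Accept-Encoding": "gzip"})
    with urllib.request.urlopen(req) as resp:
        data = resp.read()
        if resp.headers.get("Content-Encoding") == "gzip":
            data = gzip.decompress(data)

    with open("yourlicense.snf", "wb") as f:
        f.write(data)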

FTP access to SNF rulebase files will continue for a time, but support
may be removed without notice in the future. However, it's a safe bet
that FTP access to SNF rulebase files will remain functional through
the end of this month.

Thanks!

_M

-- 
Pete McNeil
Chief Scientist,
Arm Research Labs, LLC.


#
This message is sent to you because you are subscribed to
  the mailing list sniffer@sortmonster.com.
To unsubscribe, E-mail to: [EMAIL PROTECTED]
To switch to the DIGEST mode, E-mail to [EMAIL PROTECTED]
To switch to the INDEX mode, E-mail to [EMAIL PROTECTED]
Send administrative queries to  [EMAIL PROTECTED]




[sniffer] Re: FTP server / firewall issues - Resolved.

2007-01-05 Thread Heimir Eidskrem

Now when I run snf2check.exe, the rulebase fails the check.
I've tried downloading it several times now.

I'm using wget, and this has been working for years.

Suggestions?



Darin Cox wrote:

Hi Pete,

Why the change?  FTP is more efficient for transferring files than HTTP.

Can we request longer support for FTP to allow adequate time for everyone to
schedule, test, and make the change?

I remember trying dHTTP initially when this was set up, but it wasn't
working reliably, plus FTP is more efficient, so we went that way.  wget may
work better when we have time to try it.

Also, what's this about gzip?  Is the rulebase being changed to a .gz file?
Compression is a good move to reduce bandwidth, but can we put in a plug for
a standard zipfile?

Do you have scripts already written to handle downloads the way you want
them now?  If so, how about a link?

Darin.


- Original Message - 
From: Pete McNeil [EMAIL PROTECTED]

To: Message Sniffer Community sniffer@sortmonster.com
Sent: Friday, January 05, 2007 4:39 PM
Subject: [sniffer] FTP server / firewall issues - Resolved.


Hello Sniffer Folks,

The firewall issues we were having with our new delivery server appear
to have been resolved. I am showing good traffic via FTP at this time.

Normal ftp access for log uploads and SNF rulebase downloads via
www.sortmonster.net / ftp.sortmonster.net should work correctly now.

Note that FTP downloads of SNF rulebases is deprecated. If you are
using FTP to download your rulebase files you should switch to using
http w/ gzip as soon as practical.

FTP access to SNF rulebase files will continue for a time but support
may be removed without notice in the future. It's a safe bet that FTP
access for SNF rulebase files will remain functional through the end
of this month however.

Thanks!

_M




[sniffer] Re: FTP server / firewall issues - Resolved.

2007-01-05 Thread Pete McNeil
Hello Darin,

Friday, January 5, 2007, 6:23:22 PM, you wrote:

> Hi Pete,
>
> Why the change?

Many reasons. HTTP is simpler to deploy and debug, simpler to scale,
less of a security problem, etc...

Also, the vast majority of folks get their rulebase files from us with
HTTP - probably for many of the reasons I mentioned above.

> FTP is more efficient for transferring files than HTTP.

Not necessarily ;-)

> Can we request longer support for FTP to allow adequate time for everyone to
> schedule, test, and make the change?

I'm not in a hurry to turn it off at this point, but I do want to put
it out there that it will be turned off.

> I remember trying HTTP initially when this was set up, but it wasn't
> working reliably, plus FTP is more efficient, so we went that way.  wget may
> work better when we have time to try it.
>
> Also, what's this about gzip?  Is the rulebase being changed to a .gz file?
> Compression is a good move to reduce bandwidth, but can we put in a plug for
> a standard zipfile?

Gzip is widely deployed and an open standard on all of the platforms
we support. We're not moving to a compressed file -- the plan is to
change the scanning engine and the rulebase binary format to allow for
incremental updates before too long - so for now we will keep the file
format as it is.

Apache easily compresses files on the fly when the connecting client
can support a compressed format. The combination of wget and gzip
handles this task nicely. As a result, most folks achieve the benefits
of compression during transit almost automatically.
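
For illustration, a rough Python equivalent of that wget+gzip pattern which
also reports how much the transfer shrank in transit (the URL is a
placeholder, not the real rulebase path):

    # Request the rulebase with gzip transfer compression and compare the
    # bytes that crossed the wire to the decompressed payload size.
    import gzip
    import urllib.request

    URL = "http://www.sortmonster.net/path/to/yourlicense.snf"  # placeholder

    req = urllib.request.Request(URL, headers={"Accept-Encoding": "gzip"})
    with urllib.request.urlopen(req) as resp:
        wire = resp.read()                      # what actually crossed the network
        if resp.headers.get("Content-Encoding") == "gzip":
            data = gzip.decompress(wire)
        else:
            data = wire

    print("on the wire:   ", len(wire), "bytes")
    print("decompressed:  ", len(data), "bytes")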

> Do you have scripts already written to handle downloads the way you want
> them now?  If so, how about a link?

We have many scripts on our web site:

http://kb.armresearch.com/index.php?title=Message_Sniffer.TechnicalDetails.AutoUpdates

My personal favorite is:

http://www.sortmonster.com/MessageSniffer/Help/UserScripts/ImailSnifferUpdateTools.zip

I like it because it's complete as it is, deploys in minutes with
little effort, generally folks have no trouble achieving the same
results, and an analog of the same script is usable on *nix systems
where wget and gzip are generally already installed.

There are others of course.

Hope this helps,

_M


-- 
Pete McNeil
Chief Scientist,
Arm Research Labs, LLC.


[sniffer] Re: FTP server / firewall issues - Resolved.

2007-01-05 Thread Darin Cox
Hi Matt,

Hmmm you're right.  I have heard of FTP configuration issues through some 
firewalls, though I haven't seen the problem myself.  Good point.  Thanks for 
commenting.  And yes, the compression (though it's not being used now) would 
obviously be of significant benefit.  

Darin.


- Original Message - 
From: Matt 
To: Message Sniffer Community 
Sent: Friday, January 05, 2007 11:48 PM
Subject: [sniffer] Re: FTP server / firewall issues - Resolved.


Darin,

There are many people with firewall or client configuration issues that cause 
problems with FTP; HTTP, however, rarely experiences issues and is definitely 
easier to support.  As far as efficiency goes, since the rulebases will all be 
zipped, there is little to be gained from on-the-fly improvements to FTP (and 
there are some for HTTP as well).  In such a case, I would consider it to be 
effectively a wash, nothing gained, nothing lost (measurably).

Matt



Darin Cox wrote: 
Thanks, Pete.  Appreciate you taking the time to explain what's happening in
more detail.

I'm curious as to why FTP is more difficult than HTTP to debug, deploy,
secure, and scale, though. I tend to think of them on equal footing, with
the exception of FTP being faster and more efficient to transfer files in my
experience.

Thanks for the link to save some time.  Much appreciated.

Darin.


- Original Message - 
From: Pete McNeil [EMAIL PROTECTED]
To: Message Sniffer Community sniffer@sortmonster.com
Sent: Friday, January 05, 2007 9:47 PM
Subject: [sniffer] Re: FTP server / firewall issues - Resolved.


Hello Darin,

Friday, January 5, 2007, 6:23:22 PM, you wrote:

  Hi Pete,

  Why the change?

Many reasons. HTTP is simpler to deploy and debug, simpler to scale,
less of a security problem, etc...

Also, the vast majority of folks get their rulebase files from us with
HTTP - probably for many of the reasons I mentioned above.

  FTP is more efficient for transferring files than HTTP.

Not necessarily ;-)

  Can we request longer support for FTP to allow adequate time for everyone
to
  schedule, test, and make the change?

I'm not in a hurry to turn it off at this point, but I do want to put
it out there that it will be turned off.

  I remember trying dHTTP initially when this was set up, but it wasn't
working reliably, plus FTP is more efficient, so we went that way.  wget
may
  work better when we have time to try it.

  Also, what's this about gzip?  Is the rulebase being changed to a .gz
file?
  Compression is a good move to reduce bandwidth, but can we put in a plug
for
  a standard zipfile?

Gzip is widely deployed and an open standard on all of the platforms
we support. We're not moving to a compressed file -- the plan is to
change the scanning engine and the rulebase binary format to allow for
incremental updates before too long - so for now we will keep the file
format as it is.

Apache easily compresses files on the fly when the connecting client
can support a compressed format. The combination of wget and gzip
handle this task nicely. As a result, most achieve the benefits of
compression during transit almost automatically.

  Do you have scripts already written to handle downloads the way you want
them now?  If so, how about a link?

We have many scripts on our web site:

http://kb.armresearch.com/index.php?title=Message_Sniffer.TechnicalDetails.AutoUpdates

My personal favorite is:

http://www.sortmonster.com/MessageSniffer/Help/UserScripts/ImailSnifferUpdateTools.zip

I like it because it's complete as it is, deploys in minutes with with
little effort, generally folks have no trouble achieving the same
results, and an analog of the same script is usable on *nix systems
where wget and gzip are generally already installed.

There are others of course.

Hope this helps,

_M



[sniffer] Re: FTP server / firewall issues - Resolved.

2007-01-05 Thread Pete McNeil
Hello Darin,

Friday, January 5, 2007, 11:22:54 PM, you wrote:

> Thanks, Pete.  Appreciate you taking the time to explain what's happening in
> more detail.
>
> I'm curious as to why FTP is more difficult than HTTP to debug, deploy,
> secure, and scale, though. I tend to think of them on equal footing, with
> the exception of FTP being faster and more efficient to transfer files in my
> experience.

Technically, ftp is a challenge because it requires two pipes instead
of one. In the case of active ftp (old school I know, but still out
there), the server has to actually create a connection back to the
client -- if there is a client firewall in place that often won't (and
probably shouldn't) work.

The "shouldn't" part has to do with security -- there's no good reason
to allow incoming connections to anything other than a server (most of
the time).

If the inbound connection is to a server it is a good rule of thumb
that the inbound connection should ONLY be allowed for some service
that server is itself providing. Other ports should be strictly
off-limits.

Some of this can be simplified for the client side of things with
passive FTP... but what about on our end? With FTP of any kind we have
to have a lot more holes in the firewall because that second pipe has
to come through somewhere -- and unless we're going to serve only one
client at a time that means lots of inbound ports left open. (I know
I'm oversimplifying).
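
As an illustration of the client-side difference only (host, credentials, and
file name below are placeholders): Python's ftplib, for example, defaults to
passive mode, so the client opens both connections outbound -- which is what a
typical client-side firewall expects -- whereas in active mode the server would
have to connect back in.

    # Passive FTP: the client opens both the control and data connections
    # outbound, so a firewall that blocks unsolicited inbound traffic is fine.
    from ftplib import FTP

    ftp = FTP("ftp.sortmonster.net")        # control connection, port 21
    ftp.login("licenseid", "password")      # placeholder credentials
    ftp.set_pasv(True)                      # the default; False = active mode
    with open("yourlicense.snf", "wb") as f:
        ftp.retrbinary("RETR yourlicense.snf", f.write)
    ftp.quit()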

Anyway - the advantages to HTTP in the way we are using it are:

* HTTP is stateless and transaction oriented - that matches exactly
what we want in this case --- The request is simple (give me the file
if it's newer) and the response is just as simple (here's the file,
you don't want it, or I don't have it.) Stateless translates directly
into reliability and scalability -- If a server goes down in the
middle of a transaction - (or more likely between transactions) - the
next exchange of bytes simply goes to a different server. There is no
session to keep track of in this case.
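
A small sketch of that stateless exchange in Python -- one request, one
self-contained response, nothing to resume if a server drops out between runs.
The URL and file name are placeholders, and a conditional GET is just one way
to express "give me the file if it's newer"; the actual update scripts may do
it differently.

    # Conditional GET against the local copy's timestamp; a 304 response
    # means our copy is already current.
    import os
    import email.utils
    import urllib.request
    import urllib.error

    URL = "http://www.sortmonster.net/path/to/yourlicense.snf"  # placeholder
    LOCAL = "yourlicense.snf"

    req = urllib.request.Request(URL)
    if os.path.exists(LOCAL):
        stamp = email.utils.formatdate(os.path.getmtime(LOCAL), usegmt=True)
        req.add_header("If-Modified-Since", stamp)

    try:
        with urllib.request.urlopen(req) as resp:
            with open(LOCAL, "wb") as f:
                f.write(resp.read())        # "here's the file"
    except urllib.error.HTTPError as e:
        if e.code != 304:                   # 304: "you don't want it"
            raise                           # anything else, e.g. "I don't have it"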

Load-balancing is a snap to understand and deploy because there is
always a single, simple TCP connection and a short exchange - once
it's over it's over. Since we're only serving files with this (not
applications) we can strip off anything that might execute a command
on the HTTP server. No commands ever go to the OS - only to the HTTP
software which is only capable (in this case) of reading a file and
sending it to the client.

Although FTP can be used this way - under the covers it is much more
complex because it is designed as a session-based protocol. You log in,
use a wide range of commands to browse and otherwise do what you want,
and then you log out... and if something happens during that session
you have a problem to resolve. Did the server go away? Did the client
go away? Did some error occur and if so how do you want to handle
that? Lots of options for every case, as long as the session is still
active, the client can do the unpredictable. If you restrict the
client's options then folks have trouble because there's no single
correct way to use an FTP session.

Since not all FTP clients are created equal, and not all FTP scripts
are likely to be equal - the possibility for problems or security
hassles to creep in is much bigger. Even now we have a constant, low
level of problems with log file uploads due to the security measures
we have in place. To a lesser extent the same thing is true of
rulebase downloads via FTP...

For security reasons we strictly limit the commands that are accepted
on our FTP server. It never fails that someone will try to use a
command we don't allow and as a result the system is broken from their
perspective. A little coaching and debugging is generally required in
order to figure out what they or the script or FTP client is trying to
do that isn't allowed, or whether the firewall is the problem
(blocking the data link is a common recurring problem that is often
reported incorrectly or simply causes an ftp client to hang)...

In contrast, with HTTP - if you have a connection then you have the
connection you need. There is no session to break --- you make your
request and you get your response. Even there - the options are pretty
strictly limited and there is a single correct way -- GET. There's no
need to POST anything, so POST isn't the correct request here. You
don't need to navigate anywhere - your URL _IS_ where you are going.

I'm rambling on again... but I think I've made my point. FTP = more
complex than HTTP all the way around.

* Just about every firewall setup will allow stateful outbound
connections from the client side to us (that's pretty secure). If some
stone-age router (there's always one out there) won't make a stateful
connection - then at least there is only a single port to open and it
is so ubiquitous that almost nobody will be confused about it.

* Everybody (with very few exceptions) has a handy debugging tool for
http on their computer -- their ordinary web browser. In