Hi Matt,

Hmmm.... you're right.  I have heard of FTP configuration issues with some 
firewalls, though I haven't seen the problem myself.  Good point.  Thanks for 
commenting.  And yes, compression (though it's not being used now) would 
obviously be of significant benefit.  

Darin.


----- Original Message ----- 
From: Matt 
To: Message Sniffer Community 
Sent: Friday, January 05, 2007 11:48 PM
Subject: [sniffer] Re: FTP server / firewall issues - Resolved.


Darin,

There are many people with firewall or client configuration issues that cause 
problems with FTP, whereas HTTP rarely has issues and is definitely easier to 
support.  As far as efficiency goes, since the rulebases will all be zipped, 
there is little to be gained from FTP's on-the-fly efficiency advantages (and 
HTTP has some of its own).  In that case I would consider it effectively a 
wash: nothing gained, nothing lost (measurably).

Matt



Darin Cox wrote: 
Thanks, Pete.  Appreciate you taking the time to explain what's happening in
more detail.

I'm curious as to why FTP is more difficult than HTTP to debug, deploy,
secure, and scale, though.  I tend to think of them as being on equal footing,
except that in my experience FTP is faster and more efficient for transferring
files.

Thanks for the link to save some time.  Much appreciated.

Darin.


----- Original Message ----- 
From: "Pete McNeil" <[EMAIL PROTECTED]>
To: "Message Sniffer Community" <sniffer@sortmonster.com>
Sent: Friday, January 05, 2007 9:47 PM
Subject: [sniffer] Re: FTP server / firewall issues - Resolved.


Hello Darin,

Friday, January 5, 2007, 6:23:22 PM, you wrote:

  Hi Pete,
    
  Why the change?
    
Many reasons. HTTP is simpler to deploy and debug, simpler to scale,
less of a security problem, etc...

Also, the vast majority of folks get their rulebase files from us with
HTTP - probably for many of the reasons I mentioned above.

  FTP is more efficient for transferring files than HTTP.
    
Not necessarily ;-)

  Can we request longer support for FTP to allow adequate time for everyone
  to schedule, test, and make the change?
    
I'm not in a hurry to turn it off at this point, but I do want to put
it out there that it will be turned off.

  I remember trying HTTP initially when this was set up, but it wasn't
  working reliably, plus FTP is more efficient, so we went that way.  wget
  may work better when we have time to try it.
    
  Also, what's this about gzip?  Is the rulebase being changed to a .gz
  file?  Compression is a good move to reduce bandwidth, but can we put in
  a plug for a standard zipfile?
    
Gzip is widely deployed and an open standard on all of the platforms
we support. We're not moving to a compressed file -- the plan is to
change the scanning engine and the rulebase binary format to allow for
incremental updates before too long -- so for now we will keep the file
format as it is.

Apache easily compresses files on the fly when the connecting client
can support a compressed format. The combination of wget and gzip
handles this task nicely. As a result, most folks get the benefits of
compression in transit almost automatically.
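
For example (the URL and filename below are placeholders, not the actual
rulebase location), a compressed fetch might look something like this:

  # Ask the server for a gzip-compressed response, then unpack it locally.
  # URL and license code are hypothetical.
  wget --header="Accept-Encoding: gzip" \
       -O yourcode.snf.gz http://www.example.com/rulebase/yourcode.snf
  gunzip -f yourcode.snf.gz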

  Do you have scripts already written to handle downloads the way you want
them now?  If so, how about a link?
    
We have many scripts on our web site:

http://kb.armresearch.com/index.php?title=Message_Sniffer.TechnicalDetails.AutoUpdates

My personal favorite is:

http://www.sortmonster.com/MessageSniffer/Help/UserScripts/ImailSnifferUpdateTools.zip

I like it because it's complete as it is, deploys in minutes with
little effort, folks generally have no trouble achieving the same
results, and an analog of the same script is usable on *nix systems
where wget and gzip are generally already installed.
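
As a very rough sketch (paths, URL, and filenames here are hypothetical,
not the actual Message Sniffer locations), an analogous *nix update
script might boil down to something like:

  #!/bin/sh
  # Hypothetical example only -- adjust the URL, license code, and paths.
  RULEBASE_DIR=/opt/sniffer
  URL=http://www.example.com/rulebase/yourcode.snf

  # Fetch a gzip-compressed copy and unpack it in a temporary location.
  wget --header="Accept-Encoding: gzip" -O /tmp/yourcode.snf.gz "$URL" || exit 1
  gunzip -f /tmp/yourcode.snf.gz || exit 1

  # Move the new rulebase into place in a single step so the scanner
  # never reads a partially written file.
  mv /tmp/yourcode.snf "$RULEBASE_DIR/yourcode.snf"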

There are others of course.

Hope this helps,

_M


  
