Re: Sybase contact

2002-07-05 Thread Ryan Russell

It used to be me.  The alias security at sybase.com may still reach people
in the information security group.  (Note: these are the people in charge
of Sybase's own security, not their products... at least that was the case
when I left, but there was no contact for the engineering group to take
security reports.)  They can help get the report to the right people.

If all else fails, please contact me, and I'll put you in touch with
people I know there.

Ryan

On Fri, 5 Jul 2002, Aaron C. Newman wrote:

 Does anyone know of a contact at Sybase to which Advisories can be
 forwarded? I have been unable to find any contact information on their
 web site.

 Thanks,
 Aaron






Re: Mitigating some of the effects of the Code Red worm

2001-07-20 Thread Ryan Russell

On Thu, 19 Jul 2001, LARD BENJAMIN LEE wrote:

 I'm not sure of the ethical or legal aspects of this, but I don't see why
 we can't take advantage of three facts:

 1) There is something of an ongoing log of affected machines that can be
 obtained from boxes earlier in the IP list.

The victim boxes won't tend to have a lot of logs lying around, but there
are such lists.

 2) Machines which have been compromised can STILL be compromised.

Yes.

 3) The worm has a lysine deficiency which can be remotely introduced.

Yes... I can also change what it is with a hex editor in about 20
seconds...


 What I'm getting at, is for someone to create another exploit that creates
 the C:\notworm file in infected machines

Uh oh.


 and does something to
 notify whoever is in charge of a particular box (even something as simple
 as placing you_are_hacked.txt and a link to the patch on the desktop could
 be beneficial).

If a "you've been hacked by the Chinese" page doesn't do it, why should a
file on the desktop?

 Even better, an exploit to patch a machine (through
 removing the .ida and .idq extensions) would prevent the inevitable wave
 of post-attacks (both from this worm and future attacks).

You'd never get 100% success rate.


 Of course, I'm guessing this is illegal, although I highly doubt you'd be
 prosecuted.

You're kidding, right?  We just threw a Russian citizen in jail for
cracking ROT13.  Anyone who tries such a stunt had better make sure they
launch it anonymously.

 If someone has the expertise to create a white hack such as
 this, I'm sure there are daring admins out there who would happily attempt
 to stem the flow. If we don't do something, you know it's just a (very
 short) matter of time before script kiddies, armed with a modified worm
 and a log of infected machines, do something more sinister.

Let's be very specific:
The only people who would thank anyone for such a stunt would be the
clueless admins who can't install the patch on their own.  Now, obviously,
there are lots of those.

OK, cut to the chase, here's my list of reasons why this is bad, to be
trotted out whenever someone suggests a nice worm:

-What about the traffic it takes up?
-What about the boxes that don't patch properly, don't make it back after
reboot, or take down etrade in the middle of a trading day?
-How does your worm know when it's done?
-Maybe I don't want my box patched, the patch broke my app
-How do I tell your good worm apart from the original bad worm, or the
other worm which looks like the good worm, but is really a bad worm?
-How about people like us who track attack data, and you just skewed the
heck out of it?  When does www1.whitehouse.gov get to come back?  If
there's still *A* worm around on the 1st, which one is it?
-Do we really want an Internet-sized game of corewars?
-Why stop at patching?  Don't clueless NT admins deserve to have the hard
drives reformatted until they learn how to apply patches? (and if you're
no good at spotting sarcasm, please be sure to send me flames.)

Having done my usual lecturing, I will say that this is the first time
I've even been willing to entertain the idea of a good worm... I just
don't know what else can fix a problem of this scale.  You will never,
ever come to agreement on how it should be done.  Either some government
will decide for you, or some hacker who is willing to take one for the
team.  I'm not real comfortable with either of those two setting policy
for the Internet.

Ryan




Re: Code Red worm - there MUST be at least two versions.

2001-07-20 Thread Ryan Russell

On Fri, 20 Jul 2001, Don Papp wrote:

   I wonder if I have seen this variant - a person I emailed has a
 server whose served web pages look a lot like the Code Red worm's output
 (1 line of simple html) displaying

   FUCK CHINA GOVERNENT
   and p0isonb0x (or something like that)

   On a black background.  The web files themselves are untouched.

   Actually I have the source of what it spits out:

 <html><body bgcolor=black><br><br><br><br><br><br><table width=100%><td><p
 align=center><font size=7 color=red>fuck CHINA
 Government</font><tr><td><p align=center><font size=7 color=red>fuck
 PoizonBOx<tr><td><p align=center><font size=4
 color=red>contact:[EMAIL PROTECTED]</html>


I would tend to assume that isn't a variant of the worm.  It's certainly
not CRv1 or CRv2.  The HTML is about 40 bytes longer than that in Code
Red, so it would be a bit more than simply changing the HTML code in the
worm and relaunching; you'd have to adjust the content length reference,
and a number of other items.  I would think it's non-trivial.
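To see why the extra length matters, note that a hand-built HTTP response carries a Content-Length header that must match its body exactly, so swapping in longer HTML means patching the count (and any offsets derived from it) as well as the text.  A small sketch, with placeholder bodies rather than the worm's actual strings:

```python
# Illustration (hypothetical strings): the declared Content-Length must
# exactly match the body, so longer HTML means more than a text swap.
def http_response(body: bytes) -> bytes:
    return (b"HTTP/1.0 200 OK\r\n"
            b"Content-Type: text/html\r\n"
            b"Content-Length: %d\r\n\r\n" % len(body)) + body

original = b"<html>original defacement</html>"
longer = original + b"<!-- ~40 extra bytes of new markup -->"
```

Forgetting to adjust the header after lengthening the body would leave the response internally inconsistent, which is the "non-trivial" part.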

I would think this was a hand-done response to Code Red.

Ryan




Re: 'Code Red' does not seem to be scanning for IIS

2001-07-19 Thread Ryan Russell

On Thu, 19 Jul 2001, Mike Brockman wrote:

 From what I read about the 'Code Red' worm, it was supposed to be scanning
 for IIS servers.  It obviously isn't; I believe it tries to infect
 everything it finds on port 80, or something as simple as that.


Run nc -l -p 80 > worm, and you'll get a copy.  It's not scanning
in any sense; it just tries a connect and sends the string.

Ryan




Re: Full analysis of the .ida Code Red worm.

2001-07-19 Thread Ryan Russell

On Thu, 19 Jul 2001, Laurence Hand wrote:


 I know MS watches this list, so I hope they will be checking their
 servers before this starts the DDOS tomorrow.


I believe the DDoS started an hour and a half ago, at 5:00 PDT (0:00 UTC,
the next day).  I was getting 5-10 attempts an hour, and I've had 0
since 4:43:29 PDT.

Folks will notice that www.whitehouse.gov is still accessible.  The worm
authors only put in one IP address, the one for www1.whitehouse.gov.  BBN
(who appears to be the provider for whitehouse.gov, according to my
tracert) has blocked that single IP address at their peering points.  So
www2.whitehouse.gov is still running just fine.

Presumably, www.whitehouse.gov used to be round-robin DNS between the two.  Now,
www.whitehouse.gov resolves to just 198.137.240.92, and it has a TTL of
only 872.

For a relatively clever worm, the author sure screwed up his target list.
Whoops.

Ryan




Re: SurfControl Bypass Vulnerability

2001-03-26 Thread Ryan Russell

On Fri, 23 Mar 2001, Dan Harkless wrote:

 A URL containing an IP address is not canonical for HTTP.  HTTP 1.1 does
 virtual hosting via the "Host:" header, so multiple distinct servers can be
 on a single IP.  If you restrict based on IP, you'll block access to both
 http://www.juicysex.com/ and http://www.bible-history.org/, should they both
 be on the same box.

Quite true.  However, one (or none) of the sites has to be the default for
requests where the site isn't specified.  So, if the default is juicysex,
then the IP address can be blocked.  If it's bible history, then you don't
need to.  The bypass only "works" if the restricted site is the default.
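A toy model of the name-based virtual hosting logic under discussion (hostnames are the examples from the thread; the default-site choice is an assumption):

```python
from typing import Optional

# Toy model of HTTP/1.1 virtual hosting: two sites share one IP, and
# requests by bare IP (no usable Host header) get the default site.
VHOSTS = {
    "www.juicysex.com": "restricted site",
    "www.bible-history.org": "allowed site",
}
DEFAULT_HOST = "www.juicysex.com"  # assumption: the restricted site is the default

def serve(host_header: Optional[str]) -> str:
    # A request by IP address carries no meaningful Host header, so the
    # server falls back to whatever site is configured as the default.
    host = host_header if host_header in VHOSTS else DEFAULT_HOST
    return VHOSTS[host]
```

With this default, a request by IP returns the restricted site (the bypass); flip DEFAULT_HOST to the other site and the IP-based request no longer reaches restricted content.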

Ryan



Re: Security information for dollars?

2001-02-02 Thread Ryan Russell

On Fri, 2 Feb 2001, Shalon Wood wrote:

 So, my question to Paul and company is: Why *should* anyone other than
 critical infrastructure get that notice? I'm willing to be convinced;
 I just haven't seen an answer to this question yet. And note that
 'They bitched and screamed because we didn't notify them this time'
 isn't a good enough reason.


It's awfully convenient to upgrade BIND via an RPM, PKG file, etc.
I'm a big fan of the up2date service with Red Hat and the
windowsupdate.microsoft.com website that lets people who don't know what
they are doing patch themselves.

Of course, lists like Bugtraq have never been about keeping the masses
safe, but rather keeping those who are willing to pay attention and who
can fend for themselves, safe.

I also feel that I should point out that this has been tried before.  A
couple of years ago, Microsoft had identified a bug on their own, and
released an advisory stating that they were only going to release the info
to those who "needed" it.  In that case, it was a professional
organization of remote vulnerability scanner vendors.  I believe Elias
forwarded the exploit to Bugtraq the next day.

Ryan



Re: BugTraq: EFS Win 2000 flaw

2001-01-24 Thread Ryan Russell

I've got a couple of questions on this issue..

The concern is that a temp file of the original plaintext may be left
around for an attacker to "undelete".  It's understandable why this might
be necessary for a rollback in case of machine failure at just the wrong
time.  (Though one could argue that that's what backups are for.. since
you could lose the file to a blackout either before or after the encrypt
operation, I'm not so sure why it's such a big deal if you lose it
during.)

Where does the encrypted file go when it's done?  Does the OS make sure it
goes back on the same sectors where the original unencrypted file was?  If
not, then the original plaintext file is lying around with the delete
flag set, and the temp file is a bit of a moot point.

If the crypted bits are carefully written over the plaintext bits, then
yes, the temp file being left in plaintext on disk seems like a silly
mistake.

How about this set of steps to ensure careful overwriting of plaintext:

-Leave plaintext file where it is for the moment
-Create a temp file which consists of the crypted version of the plaintext
-Swap file names (file.txt and file.txt.tmp).  The file with the .tmp
extension is now the plaintext version.
-Copy crypted bits over the plaintext bits (i.e. copy file.txt
file.txt.tmp)
-Mark file.txt.tmp as deleted
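The dance above can be sketched with the disk modelled as a name-to-bytes mapping (purely illustrative; real EFS works below the filesystem API, and a "deleted" entry here would still leave its old bits on the platter):

```python
# Model the disk as {filename: bytes} and walk the steps above.
def careful_encrypt(disk, name, encrypt):
    tmp = name + ".tmp"
    disk[tmp] = encrypt(disk[name])                 # temp file holds the crypted version
    disk[name], disk[tmp] = disk[tmp], disk[name]   # swap names: .tmp is now the plaintext
    disk[tmp] = disk[name]                          # copy crypted bits over the plaintext bits
    del disk[tmp]                                   # mark the temp file as deleted

disk = {"file.txt": b"plaintext secret"}
careful_encrypt(disk, "file.txt", lambda b: b[::-1])  # stand-in "encryption"
```

Halting between any two of these steps still leaves a plaintext copy somewhere on disk.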

There are points during this process where you can halt the machine and
there will be a plaintext version left on disk.  Since you started with
plaintext on disk to begin with, it's not possible to devise a set of
steps that couldn't be interrupted at the wrong time and leave plaintext
on the disk.  At least, not if you want to maintain the rollback feature.

None of this takes into account the possibility that one can get previous
generations of writes via some mechanism.  Heck, it doesn't even take into
account that the drive might decide all of a sudden that it doesn't like
that sector anymore, and remap it under your OS' nose, without it knowing
a thing about it... leaving the original sector physically on the disk, at
the top layer of writes, but totally unavailable to normal software.

Which boils down to me agreeing with Dan's statement... the only way to
make sure your plaintext doesn't come back to haunt you is to never write
it to disk.  For EFS, I believe this would require taking a virgin
drive, creating nothing but EFS partitions that cover the entire drive,
and THEN doing your work.

Ryan



Re: BugTraq: EFS Win 2000 flaw

2001-01-23 Thread Ryan Russell

On Fri, 19 Jan 2001, Russ wrote:

 To the best of my knowledge, Peter Gutmann has demonstrated for years
 now that there is no form of over-writing which makes any substantial
 difference to the ability to recover previously written data from a computer
 hard disk.

 My understanding of current "high security" standards wrt the re-use of
 disks which previously contained classified materials is that they only be
 re-used in similarly classified systems, or, are destroyed beyond any form
 of molecular reconstruction (e.g. melted).

I see a big difference in being able to recover some files by simply
booting to a different OS vs. having to break out the electron microscope
and manually piece bits together.  I could boot DOS or Linux to read a
deleted file... I don't think I'd be able to find someone who could read
the bits from 3 writes ago off of a physical disk surface for me... unless
you gave me a huge amount of time and money.

If the problem does exist as described... the possibility that a
government forensics lab might recover some data is no excuse for not
handling temp files properly for EFS.

Ryan



Reply to EFS note on Bugtraq

2001-01-23 Thread Ryan Russell

Due to some mail trouble, I'm manually forwarding this note.  The
signature should check out.

Ryan

From:   Microsoft Security Response Center
Sent:   Monday, January 22, 2001 2:17 PM
To: '[EMAIL PROTECTED]'
Cc: Microsoft Security Response Center
Subject:Re: BugTraq: EFS Win 2000 flaw

-BEGIN PGP SIGNED MESSAGE-

Hi All -

While EFS does indeed work as Rickard discusses, this is not new
information.  For instance, "Encrypting File System for Windows 2000"
(http://www.microsoft.com/WINDOWS2000/library/howitworks/security/encrypt.asp,
p 22) notes the following:
 "EFS also incorporates a crash recovery scheme whereby no data is
lost in the event of a fatal error such as system crash, disk full,
or hardware failure. This is accomplished by creating a plaintext
backup of the original file being encrypted or decrypted. Once the
original is successfully encrypted or decrypted, the backup is
deleted.  NOTE: Creating a plaintext copy has the side-effect that
the plaintext version of the file may exist on the disk, until those
disk blocks are used by NTFS for some other file."

The plaintext backup file is *only* created if an existing plaintext
document is converted to encrypted form.  If a file is created within
an encrypted folder, it will be encrypted right from the start, and
no plaintext backup file will be created.  Microsoft recommends this
as the preferred procedure for using EFS to protect sensitive
information.   "Encrypting File System for Windows 2000", page 22,
makes this recommendation:
 "... it is recommended that it is always better to start by creating
an empty encrypted folder and creating files directly in that folder.
Doing so, ensures that plaintext bits of that file never get saved
anywhere on the disk. It also has a better performance as EFS does
not need to create a backup and then delete the backup, etc."

Even if the plaintext backup file were not created, it would still be
a bad idea to create a sensitive file in plaintext and then encrypt
it later.  Many common operations, such as adding data to or removing
data from a file, compressing and decompressing a file, defragmenting
a drive, or opening a file using an application that creates
temporary files, can result in plaintext being left on the drive.  It
is simply not feasible for any software package to be able to track
down and erase all the plaintext "shreds" that may have been created
during the file's plaintext existence.  The only way to ensure that
there is no plaintext on the drive is to encrypt the file right from
the start.

Nevertheless, we are investigating this issue to see whether there
are improvements we can make.  No matter what the solution, it will
still be better for customers to create sensitive files encrypted
from the start; however, we believe it may be possible to prevent the
plaintext backup file from being retained on the drive.  Regards,

Scott Culp
Security Program Manager
Microsoft Security Response Center



- - -Original Message-
From: Rickard Berglind [EMAIL PROTECTED] 
Sent: Fri, 19 Jan 2001 12:29:50 +0100
To: [EMAIL PROTECTED]
Subject: BugTraq: EFS Win 2000 flaw

I have found a major problem with the encrypted filesystem
( EFS ) in Windows 2000 which shows that encrypted files
are still very available for a thief or attacker.

The problem comes from how EFS works when the encryption
is done.  When a user marks a file for encryption, a
backup file, called efs0.tmp, will be created.  When
the copy is in place, the original file will be deleted
and then recreated, now encrypted, from the efs0.tmp
file.
And finally, when the new encrypted file is successfully
created, the temporary file (which will never be shown
in the user interface) will be deleted as well.
So far, so good.  The only file remaining is the one
which is encrypted.

But the flaw is this: the temporary file is deleted
in the same way any other file is "deleted" - i.e.
the entry in the $mft is marked as empty and the clusters
where the file was stored will be marked in the $Bitmap
as available, but the physical file and the information it
contains will NOT be deleted.  The information in the
file which the user has encrypted will be left in the backup
file efs0.tmp in total plaintext on the surface of the disk.
When new files are added to the partition, they will
gradually overwrite the secret information, but if
the encrypted file was large, the information could
be left for months.

So how can this be exploited?  If someone steals
a laptop or has physical access to the disk, it will
be easy to use any low-level disk editor to search
for the information.  For example, the Microsoft
Support Tool "dskprobe.exe" works fine for locating
old efs0.tmp files and reading information, in plaintext,
that the user thought was safe.

In my opinion there should be a function in the EFS
which physically overwrites the efs0.tmp at least once
to make it a lot harder for an

Re: New DDoS?

2001-01-10 Thread Ryan Russell

On Wed, 10 Jan 2001, Darren Reed wrote:

 What about placement (or addition) of an ActiveX control (which downloads
 into IE on the quiet) that's not quite so benign ?


The important criterion IMHO is stealth, if the exploit has any hope of
staying hidden long enough to nail enough clients.  I believe lots of
people have IE configured to warn about even signed ActiveX controls.  It
comes default that way for the majority of controls.  Some folks will shut
off the warnings, because they are given the option every time they have
to answer the question.

There are a number of trusted ActiveX controls that Microsoft has put out,
which load silently.  Georgi has been able to leverage at least one
for exploit purposes:

http://www.securityfocus.com/bid/1754

This particular problem has been patched of course, but it illustrates the
concept.

So, ActiveX holes could be exploited, along with any browser hole.

To be extra clean, most web servers provide an easy way to serve up
different pages, depending on the user agent info the browser supplies
(i.e. the info that the browser sends that identifies the type and
version).  Using that, the defaced web site could be configured to serve
up the appropriate exploit for Netscape or IE, or no exploit at all if the
client appears to be a non-vulnerable version.  To hide even further, it
could only exploit 1 in 100 clients, making it even harder to identify.
(No, that site couldn't have hacked you... I just combed through the code
by hand, and it's clean...)  Obviously, it's a little less effective at
that point.  I have no idea what the ideal exploit/hide ratio would be.
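The selective-serving idea can be sketched as a small dispatch function (all filenames here are hypothetical, and the 1-in-100 figure is just the example from the text):

```python
import random

# Sketch: pick a payload page from the User-Agent string, and only
# for a small fraction of visitors so the tampering is hard to spot.
def choose_page(user_agent: str, exploit_rate: float = 0.01) -> str:
    if random.random() >= exploit_rate:      # most visitors get the clean page
        return "clean.html"
    if "MSIE" in user_agent:
        return "ie_exploit.html"
    if "Mozilla" in user_agent:              # Netscape of the era also says Mozilla
        return "netscape_exploit.html"
    return "clean.html"                      # unknown browser: serve nothing risky
```

Note the MSIE check must come first, since IE's User-Agent string also begins with "Mozilla" for compatibility reasons.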

Even .jpgs aren't safe, as there is an exploit for Netscape that is
delivered via .jpg files:

http://www.securityfocus.com/bid/1503

In short, if you've got a malicious web server, or a web server that has
been compromised in a non-obvious way, the problem is much more serious
than a DoS or DDoS.

Ryan



Re: major security bug in reiserfs (may affect SuSE Linux)

2001-01-10 Thread Ryan Russell

On Wed, 10 Jan 2001, Christian Zuckschwerdt wrote:

 I've read the directory with a bunch of other tools (perl, find) and
 that makes me believe it's not an ls bug.


What do "echo *" and "strings ." produce?

Ryan



Re: New DDoS?

2001-01-09 Thread Ryan Russell

On Tue, 9 Jan 2001, nealk wrote:

 Alternate (New) DDoS model:
   - Server 'A' directly prevents all clients from accessing server 'B'.

I don't see how this is particularly "distributed".

 Let's say that someone placed a corrupt Flash (SWF) file on a web server.
 All clients that access the web server and that view the Flash file
 (about 90% of all browsers can, so this is a good assumption) will
 have their browsers crash or hang.


I.e. if you can hack the server, then the clients will be susceptible to
client holes.  Yes, absolutely.  I've been waiting for this one for some
time... rather than making an obvious defacement when one breaks into a web
site, leave the site up as-is (at a superficial level), but with a browser
hole embedded in the HTML.

The problems with this being terribly effective are that it will be found
relatively quickly (at least, if it's a popular site) and that there is a
central place to fix it quickly.  Even if the defacement sticks around for
a few days, even non-technical users will pretty quickly learn that when
they visit example.com, their browser crashes.

The attack would have to be subtle (i.e. not crash the browser) and the
site would have to be popular, but not very carefully watched by the
administrators.  In fact, given a powerful enough hole, this is a good way
to build an army of traditional zombies.  Or steal loads of personal info
off of clients.

Ryan



Apache 1.3.12

2000-02-25 Thread Ryan Russell

From:
http://www.apache.org/dist/Announcement.html

Apache 1.3.12 Released

The Apache Software Foundation and The Apache Server Project are pleased
to announce the release of version 1.3.12 of the Apache HTTP server.

The primary changes in this version of Apache are those related to the
``cross site scripting'' security alerts described at
http://www.cert.org/advisories/CA-2000-02.html
http://www.apache.org/info/css-security/index.html

Specifically, charset
handling has been improved and reinforced (including a new directive:
AddDefaultCharset) and server generated pages properly escape ``userland''
input.
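For reference, the new directive goes in the server configuration; a minimal sketch (the charset value here is only an example):

```apache
# httpd.conf fragment: label responses with an explicit charset so
# clients don't guess one from attacker-influenced page content.
AddDefaultCharset ISO-8859-1
```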

A complete listing with detailed descriptions is provided in the
CHANGES file.

NOTE: This official release incorporates a slightly
different version of the original patch for the 'css' issue. In
particular, the AddDefaultCharsetName directive was removed and this
function is now completely handled by the AddDefaultCharset directive. If
you were using this patch, you will need to adjust your configuration file
to reflect this change.

We consider Apache 1.3.12 to be the best version
of Apache available and we strongly recommend that users of older
versions, especially of the 1.1.x and 1.2.x family, upgrade as soon as
possible. No further releases will be made in the 1.2.x family.

Apache
1.3.12 is available for download from

http://www.apache.org/dist/

Please
see the CHANGES_1.3 file in the same directory for a full list of changes
in the 1.3 version.

Binary distributions are available from

http://www.apache.org/dist/binaries/

As of Apache 1.3.12 binary
distributions contain all standard Apache modules as shared objects (if
supported by the platform) and include full source code. Installation is
easily done by executing the included install script. See the
README.bindist and INSTALL.bindist files for a complete explanation.
Please note that the binary distributions are only provided for your
convenience and current distributions for specific platforms are not
always available.

The source and binary distributions are also available via any of the
mirrors listed at

http://www.apache.org/mirrors/

For an overview of new features in 1.3 please see

http://www.apache.org/docs/new_features_1_3

In general, Apache 1.3 offers
several substantial improvements over version 1.2, including better
performance, reliability and a wider range of supported platforms,
including Windows 95/98 and NT (which fall under the "Win32" label).

Apache is the most popular web server in the known universe; over half of
the servers on the Internet are running Apache or one of its variants.

IMPORTANT NOTE FOR WIN32 USERS: Over the years, many users have come to
trust Apache as a secure and stable server. It must be realized that the
current Win32 code has not yet reached the levels of the Unix version, but
is of acceptable quality. Any Win32 stability or security problems do not
impact, in any way, Apache on other platforms. With the continued donation
of time and resources by individuals and companies, we hope that the Win32
version of Apache will grow stronger through the 1.3.x release cycle.

Thank you for using Apache. ---
See you at ApacheCon 2000 in Orlando, Florida, March 8-10, 2000.



Re: DDOS Attack Mitigation

2000-02-17 Thread Ryan Russell

On Tue, 15 Feb 2000, Alan Brown wrote:

 On Sun, 13 Feb 2000, Darren Reed wrote:

  You know if anyone was of a mind to find someone at fault over this,
  I'd start pointing the finger at ISP's who haven't been doing this
  due to "performance reasons".

 To be fair, if you do this on most terminal servers (eg, Cisco 5300, Max
 4000), they will collapse under the load.


How exactly are you configuring these things?  You're not trying to do
filtering on a per-dialup or per-user basis, are you?  You put one
outbound filter on the Ethernet or WAN interface that covers the dialup
address pool.  Or on the next router out.  All the ISPs I've seen (and
granted, it's only a few) have another router in front of the dialup
router.  Sure, dialup users will still be able to spoof at each other, but
I assume that's a much smaller concern.
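As a sketch, that single outbound filter might look like this in IOS terms (the pool address, list number, and interface name are all made up for illustration):

```
! Hypothetical anti-spoofing egress filter: only the dialup pool may
! appear as a source address leaving this box (10.1.0.0/16 is a
! placeholder for the real dialup address pool).
access-list 110 permit ip 10.1.0.0 0.0.255.255 any
access-list 110 deny   ip any any
!
interface Ethernet0
 ip access-group 110 out
```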

Ryan



Re: snmp problems still alive...

2000-02-16 Thread Ryan Russell

Nice summary.

 - Windows 98 (not 95) - public

You have to install the agent; it's not stock.  And it's not so much that
the world-writable string is "public" as it is that there isn't one.
You'll get write access no matter what community name you use.  MS made
improvements under NT ('cause it had the same problem), but it's still
broken in 9x AFAIK.  Check:
http://www.nai.com/nai_labs/asp_set/advisory/30_nai_snmp.asp

 - Sun/SPARC Ultra 10 (Ultra-5_10) - private

I'm sure I won't be the only one to point out that the SNMP problem is
part of the OS (Solaris 2.6 and later) not the hardware. I suspect Sparc
OpenBSD will be OK. :)

Solaris 2.6 was the first version (I believe) to install an SNMP agent as
part of a standard Solaris install.  There were hard-coded SNMP community
names that gave write access.  There was also a patch. Check out:
http://www.securityfocus.com/vdb/bottom.html?vid=177

At a previous job, Lucent installed a remote access server and left the
SNMP write community as public.

I don't think SNMP issues have gotten as much attention as they should.
There are some really bad things one can do.  Depending on platform, you
can start and stop programs, kill processes, download all passwords, shut
down the boxes, change hardware settings, all without any logging in most
cases.

You really want to not have this problem.

Ryan



Re: Some discussion in http-wg ... FW: webmail vulnerabilities: a new pragma token?

2000-01-21 Thread Ryan Russell

A couple of comments in a couple different directions...

Eric states that there will be implementation issues.

To be nastier about it, if the browser vendors can't shut off
Javascript when I hit the checkbox, why think they could
do it by following an HTML directive?

And to pre-hack the idea.. chances are that I'm going to be able
to do something to escape the headers... i.e. I'll find a way to start
a new set of headers, perhaps opening a new frame.

 It would be nice if there were on an HTTP header that, if sent to the
 client, would cause the client to disable javascript, vbscript, etc. for
 that document only. Sites who wished to display untrusted pages (webmail
 sites, web discussion forums, etc.) could then use a multi-frame layout.
 Any frame that contained untrusted code would have this header included in
 the delivery of its content to ensure that the scripts would not be
 evaluated, regardless of the normal client settings; other frames, whose
 "trusted" documents would be sent without this header, would still be able
 to use scripting (if enabled on the client).

I don't want to discourage the idea necessarily, just pick on the
browser vendors.  Perhaps they'd have a better chance of
getting it right the first time that way.
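The proposal amounts to the server tagging each frame's response according to trust; a sketch (the header name is invented for illustration, since no browser honored such a header at the time):

```python
# Sketch of the proposed mechanism: the webmail frontend adds a
# (hypothetical) header on the untrusted frame's response, telling
# the client to disable scripting for that document only.
def build_response(body: str, trusted: bool) -> str:
    headers = ["HTTP/1.1 200 OK", "Content-Type: text/html"]
    if not trusted:
        headers.append("Content-Scripting: off")  # hypothetical header name
    return "\r\n".join(headers) + "\r\n\r\n" + body
```

The trusted UI frames would be sent without the header and could keep using scripting, per the quoted proposal.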

On a different tangent:

Several folks suggested that all tags be stripped unless they are
"known safe".

Doing so will kill your ability to mail around C code, unless you
HTMLize it first.  If you don't, all your #includes will disappear,
and perhaps the rest of the note if it's waiting for a #/include :)
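The failure mode is easy to demonstrate with a naive strip-everything-that-looks-like-a-tag filter (a deliberately crude sketch, not any real mail gateway's logic):

```python
import re

# Naive filter: anything between < and > is treated as a tag and
# removed -- which also eats the header name in a C #include line.
def strip_tags(text: str) -> str:
    return re.sub(r"<[^>]*>", "", text)

mangled = strip_tags("#include <stdio.h>\nint main(void) { return 0; }")
```

An unmatched "<" is even worse: a greedy filter may sit waiting for the closing ">" and swallow the rest of the message.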

 Ryan



Re: Default configuration in WatchGuard Firewall

1999-09-08 Thread Ryan Russell

It's always a good idea to disable pings from the outside to your internal
network.  I don't mean to discourage anyone from doing so, but...

# route add -net 192.168.0.0 netmask 255.255.255.0 gw 100.100.100.100

This only works if you are on the 100.100.100 network, i.e. one hop away.  Won't
work all the way across the Internet.  Have you tried it with source-routing?

 Solution is easy ... do not let pings to internal network.

Please do.  Does Watchguard give you some flexibility about what ICMP to let
in?  I.e. can you shut off the pings in, but still leave on ICMP unreachables,
in order to not break path MTU discovery?  Does it do the stateful thing and
let ICMP echo replies in only if a request was sent, etc?

ICMP is also one of the many interesting things that Firewall-1 leaves on by
default.  Newbie FW-1 admins usually don't know to go through the properties
screen and disable all the things on by default.

  Ryan