[Fwd: Re: Cross-Site Request Forgeries (Re: The Dangers of Allowing Users to Post Images)]

2001-06-19 Thread Peter W

Regarding IMG tags in HTML email, here is a good point I received off-list.
The sender did not wish to post directly, but approved forwarding this note.

-Peter

- Forwarded message (anonymous, forwarded with permission) -

Date: Sat, 16 Jun 2001 22:55:41 +0200
To: Peter W [EMAIL PROTECTED]
From: anonymous
Subject: Re: Cross-Site Request Forgeries (Re: The Dangers of Allowing Users to Post 
Images)

On 2001-06-15 at 09:26 -0400, Peter W gifted us with:
   I wonder how it would work in HTML mail clients, though? You
 could restrict to the sender's domain, but email is the easiest thing in
 the world to spoof. (And an effective attack vector, especially against
 things like messageboards that expose email addresses.) HTML email is just
 evil evil evil. I can't see much need for HTML email to reference *any*
 external documents, or to allow, as Jay Beale suggested, the use of things
 like META refresh tags to effect client-pull CSRF attacks.

Personally, I loathe HTML email.  However, surely a better approach for
this is simply to restrict IMG links by insisting that the source be
inline to the email.

RFC 2112, multipart/related.  Require that email IMG URLs start with cid:
in order to be automatically rendered.  Whether or not to _allow_ the
rendering of other IMGs is a more contentious issue.
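
A rough sketch of what such a message might look like (the boundary and
Content-ID values are invented):

  Content-Type: multipart/related; boundary="xyzzy"; type="text/html"
  MIME-Version: 1.0

  --xyzzy
  Content-Type: text/html; charset="us-ascii"

  <html><body>
  <p>Hello.</p>
  <img src="cid:logo42@example.invalid">
  </body></html>

  --xyzzy
  Content-Type: image/gif
  Content-ID: <logo42@example.invalid>
  Content-Transfer-Encoding: base64

  R0lGODdh...
  --xyzzy--

The mailer renders the cid: image because its data travels inside the
message itself; any IMG pointing at an external http: URL would be left
unfetched (or fetched only on explicit request).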

(Just one of those ideas which I've had floating around for a while,
 mainly to stop tracking of who's reading spam HTML email)

- End forwarded message -



Re: never-ending Referer arguments (The Dangers of Allowing Users to Post Images)

2001-06-19 Thread Peter W

On Tue, Jun 19, 2001 at 03:44:10PM +0200, Henrik Nordstrom wrote:
 [EMAIL PROTECTED] wrote:
 
  Folks are missing the point on the Referer check that I suggested.
 
 I intentionally selected to not go down that path in my message as there
 are quite a bit of pitfalls with Referer, and it can easily be
 misunderstood allowing the application designer falsely think they have
 done a secure design using Referer.

Henrik,

You also revealed your lack of understanding of the Referer check logic when
you wrote "It is well known that Referer can be forged, and to further add
to this some browsers preserve Referer when following redirects, allowing
this kind of attacks to bypass any Referer check if your users follows URL's
(direct or indirect via images) posted by other users or even your own staff
when linking to external sites." Neither forging Referers nor preserving
Referers across redirects threatens the model I suggested.

 Also, as shown earlier in the thread, using Referer may render the
 service less useful for some people. There are people who filter out
 Referer from their HTTP traffic because there are too many bugs in
 user-agents showing Referer to things it should not expose externally.

I mentioned that myself, as you may recall.

As for recommending one-time tickets, we agree there.

All this chatter about Referer checks amounts to two things:
 - some folks not understanding the model
 - folks legitimately disagreeing on the number of users who might be
   locked out by a Referer check.

-Peter
Web applications designer and Squid user :-)



Re: The Dangers of Allowing Users to Post Images

2001-06-16 Thread Peter W

On Thu, Jun 14, 2001 at 09:12:05PM -0400, Chris Lambert wrote:

 would it be safe to check
 that if a referer is present, it contains the sites' domain name,

Yes.

 but if it
 isn't, it most likely wouldn't have been referenced in an img tag or
 submitted via JavaScript?

You mean it's safe/legitimate? No. Client-pull META tags generate requests
without Referers, as I've said a couple times in this thread, and in
previous Bugtraq discussions, too. :-)

If you don't see the Referer, you can't trust the request. Your best bet is 
to lock out users who won't pass Referers.

Or at least, when you initialize a user session, note if they seem to be
passing Referer values. If they are, then you should certainly reject any
later request that seems to be theirs, but lacks a Referer header.
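
Roughly, in Perl CGI terms (the %session store, deny() handler, and site
name below are hypothetical stand-ins for whatever the real application
uses):

  #!/usr/bin/perl -w
  # Sketch only: %session will not actually persist between CGI requests
  # as written; a real application would use its own session layer.
  our %session;
  sub session_is_new { !exists $session{'seen'} and $session{'seen'} = 1 }
  sub deny { print "Status: 403 Forbidden\nContent-Type: text/plain\n\n$_[0]\n"; exit }

  my $site_host = 'www.example.com';            # hypothetical site name
  my $referer   = $ENV{'HTTP_REFERER'} || '';

  if (session_is_new()) {
      # Note at session start whether this client appears to pass Referers.
      $session{'sends_referer'} = $referer ? 1 : 0;
  }

  if ($referer) {
      # Any Referer that is present must point back at our own site.
      deny('bad Referer') unless $referer =~ m{^https?://\Q$site_host\E(?:[/:]|$)}i;
  }
  elsif ($session{'sends_referer'}) {
      # This client normally sends Referers, so a missing one is suspicious
      # (e.g. a request launched by a client-pull META refresh elsewhere).
      deny('missing Referer');
  }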

Note that in some cases, MSIE won't send a Referer if the TARGET of a link 
is a different window, or that used to be the case. 

This is messy.

-Peter



Re: Cross-Site Request Forgeries (Re: The Dangers of Allowing Users to Post Images)

2001-06-15 Thread Peter W

On Fri, Jun 15, 2001 at 02:09:57AM -0400, Chris Lambert wrote:

 Yes, you're correct that its the target of the exploit which needs to be
 protected. However, the reason we originally related it to message boards
 was because the source and the target were tightly related.

Yes, of course. It's a ripe target. Low-hanging fruit. Worth special concern.

 |  <img src="https://trading.example.com/xfer?from=MSFT&to=RHAT&confirm=Y">
 |  <img src="https://books.example.com/clickbuy?book=ISBNhere&quantity=100">
 
 Eek! While it's a problem, the user exposed WOULD have to be registered on
 the target site, and unless the target site is the source site, it's not as
 much of an issue as an inter-system attack would be. Inter-system being, for
 example, an image referenced on an EBay product page which bids on the item
 featured on the page.

Yes, the tightly coupled attack is definitely more effective. I can
imagine a fair number of shenanigans on Slashdot et al. if they're
vulnerable, e.g., Anonymous Cowards posting links to what purports to be
useful information... but I'm more concerned about things like commerce
sites and webmail sites. As one of the long messages referenced in my long
bugtraq post points out, I expect on many webmail systems an attacker
could use CSRF to make a victim send abusive email without their
knowledge. With more governments like that of the United Kingdom bringing
transactional capabilities to the Web, I hope developers will pay serious
attention to the problems of verifying origin and intent.

 Opera lets you disable automatic 302 redirection. Probably a good idea if
 the redirect takes you off site.

Yes, that does sound like generally a good idea, but it would frustrate
some applications that expect the usual behavior (I've written some such
apps myself). I wonder how it would work in HTML mail clients, though? You
could restrict to the sender's domain, but email is the easiest thing in
the world to spoof. (And an effective attack vector, especially against
things like messageboards that expose email addresses.) HTML email is just
evil evil evil. I can't see much need for HTML email to reference *any*
external documents, or to allow, as Jay Beale suggested, the use of things
like META refresh tags to effect client-pull CSRF attacks.

 | ** The 90% solution: Referer tests
 
 This would work, but not all clients pass referers. For referer checking,
 I'd suggest validating the form if the referer matches correctly, or if no
 referer exists. Although not the perfect solution, I think that it'd be a
 safe bet. Let's say 1% of all clients don't pass referers. The 1% would be
 validated, therefore preventing them from getting an error when simply
 interacting as usual.

I haven't thought this through very much. But check the securityfocus URL
in my post; in my testing, 100% of clients will not send Referer
information in client-pull (META refresh) scenarios. So be careful when
working with your 1% rule: with client-pull, an attacker can make anyone
suppress Referer information.
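
For reference, the hostile page needs nothing fancier than a client-pull
tag pointing at the target (reusing the hypothetical trading URL from
above):

  <html>
  <head>
  <!-- the browser fetches the target URL with no Referer header at all -->
  <meta http-equiv="refresh"
        content="0; url=https://trading.example.com/xfer?from=MSFT&amp;to=RHAT&amp;confirm=Y">
  </head>
  <body>Loading useful information...</body>
  </html>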

 | snipped Passing tokens or tokenized field names /snipped
 
 Good idea, looks to be the best solution yet.

The problem with tokenized argument names is the /logo.gif 302 attack. If
an attacker embeds an IMG reference to /logo.gif on their server, then
their server is passed a Referer (in most cases, probably 99%+) that
exposes the argument names used to read the CSRF message. I suspect you
would not be (socially) able to rename the arguments with each page
delivered. Doing things like that really frustrates users. I've tried it,
and had to back out such protection on applications used by regular folks.

The one-time-use authorization tokens seem to be solid, though. And easy 
to use, once you've got the basic Create/Verify/Invalidate methods written.
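
In rough Perl terms, the three methods can be as small as this (the
in-memory %tokens hash is only a stand-in for whatever per-session store
the real application keeps):

  use Digest::MD5 qw(md5_hex);

  our %tokens;    # token => 1 while the token is still valid

  # Create: generate an unguessable token, remember it, embed it in the form.
  sub create_token {
      my $token = md5_hex(time() . $$ . rand());   # crude entropy; sketch only
      $tokens{$token} = 1;
      return $token;
  }

  # Verify: the submitted token must be one we issued and not yet used.
  sub verify_token {
      my ($token) = @_;
      return defined $token && $tokens{$token};
  }

  # Invalidate: burn the token the moment the request is accepted.
  sub invalidate_token {
      my ($token) = @_;
      delete $tokens{$token};
  }

Emit create_token() as a hidden form field, reject any submission unless
verify_token() succeeds, and call invalidate_token() before acting on it.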

 | I'm afraid CSRF is going to be a mess to deal with in many cases.
 | Like trying to tame the seas.
 
 I'm in the process of launching a company that will secure web enabled
 applications from these, and other, types of attacks. CSRF seems to be one
 of the more generic types of holes which can be exploited, but there are
 certainly plenty more when dealing with user input, file handling, and
 database access.

A pattern I've seen in Web development over the years -- and I suspect it's
not just the relatively young Web area that exhibits this -- is that
developers tend to be more paranoid when building applications for public /
unauthenticated access. They tend to let their guards down when writing
applications for users who are logged in. The thinking often goes
something like this: "Well, if the site's editor is logged in and asks to
do something stupid, that's *her fault* and I'm not going to make this
management application idiot-proof." Hopefully this discussion of CSRF
helps explain the importance of making sure the HTTP request your
application handles really is the site's editor asking to do something
stupid, and not simply reading an HTML email with a CSRF IMG tag.

Re: Webtrends HTTP Server %20 bug (UTF-8)

2001-06-10 Thread Peter W

On Fri, Jun 08, 2001 at 04:51:57AM +0100, Glynn Clements wrote:
 
 Eric Hacker wrote:

  Conveniently, UTF8 uses the same
  values as ASCII for ASCII representation. Above the standard ASCII 127
  character representation, UTF8 uses multi-byte strings beginning with 0xC1.
 
 No; the sequences for codes 128 to 255 begin with 0xC2 and 0xC3

And encodings for values 256 through 2^31 - 1 use other values in the first octet.

Two points here:

 1) Eric wrote "As a URL cannot contain spaces or other special characters,
URL encoding is used to transport them. Thus all UTF8 characters above ASCII
are supposed to be URL encoded in order to be sent."

It's not at all clear to me a) that UTF-8 sequences are allowed in *any*
HTTP headers (request or response) or b) how a server or client would decide
whether a possible UTF-8 sequence like %C3%B3 is UTF-8 for the single value
0xF3 or the two-character phrase 0xC3 + 0xB3. All indications in the RFCs
(2068, 1738, 1808) suggest that only characters 0x00 - 0xFF are expected in
the various headers, and that no UTF-8, double-byte, or other
representations are allowed.

 2) The UTF-8 rules are kinda funny. 0xFE and 0xFF are illegal everywhere,
and other characters may be illegal depending on their placement, e.g. a
starting octet with 2^7 on and 2^6 off, or a subsequent octet that
doesn't have 2^7 on and 2^6 off. I wouldn't be surprised if some UTF-8
parsing routines don't handle illegal characters gracefully, or if
applications don't gracefully trap errors reported by the UTF-8 parsing
routines, etc. This might be worth some testing.
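
For what it's worth, a byte-level sanity check along these lines would
catch the placement violations described above (a simplified Perl sketch:
it follows the old RFC 2279 sequence lengths and does not reject overlong
forms):

  # Returns 1 if the byte string looks like well-formed UTF-8, 0 otherwise.
  sub looks_like_valid_utf8 {
      my @bytes = unpack('C*', $_[0]);
      while (@bytes) {
          my $b = shift @bytes;
          return 0 if $b == 0xFE || $b == 0xFF;    # illegal everywhere
          next     if $b < 0x80;                   # plain ASCII
          return 0 if ($b & 0xC0) == 0x80;         # bare continuation byte
          # Count the leading 1 bits to see how many continuation bytes follow.
          my ($extra, $mask) = (0, 0x40);
          while ($b & $mask) { $extra++; $mask >>= 1; }
          for (1 .. $extra) {
              my $c = shift @bytes;
              return 0 unless defined $c && ($c & 0xC0) == 0x80;
          }
      }
      return 1;
  }

(The ambiguous %C3%B3 example above decodes to the bytes 0xC3 0xB3, which
this check happily accepts as a single two-byte sequence.)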

-Peter



Re: Network Solutions Crypt-PW Authentication-Scheme vulnerability

2001-06-10 Thread Peter W

On Fri, Jun 08, 2001 at 12:37:34AM -0700, Peter Ajamian wrote:

 While crypt password authentication is not in and of itself very secure,
 Network Solutions have made it even less so by including the first two
 characters of the password as the salt of the encrypted form.  While the
 password is transmitted via a secure session, the encrypted form is
 returned almost immediately in a non-encrypted www session.  Also, this
 password is typically emailed back and forth to the user no less than two
 times (and often times more).  This allows several opportunities for
 someone to observe the encrypted password, this in and of itself is not
 good.

Plus when you submit a change request template, your email contains the 
plaintext password. :-(

And that's the problem: not the crypt routine, but the cleartext data xfer.

 Possible Workarounds:
 
 Do not use the Crypt-PW authentication-scheme.  Instead use the MAIL_FROM
 or PGP scheme instead.

If someone attempts to make changes to a domain with a Network Solutions
old-style[0] admin or billing handle, Network Solutions will email the
responsible handle's address. With MAIL_FROM, the email address is available
via a whois query. Easily obtained, easily spoofed, and if you get cracked,
you have to get NetSol involved to clean up. *Do NOT use mail_from!!!*

You're in just as much trouble if someone gets your encrypted NetSol 
CRYPT-PW password. But, unlike the email address, the encrypted password is 
not readily available. An attacker without the encrypted password can only
attempt to guess the password. And the attacker must send a change request 
to test their guess. And you get emailed each time they try. The only 
effective way to crack a CRYPT-PW handle is to sniff the email channel [so 
the Echelon folks probably know all our NetSol CRYPT-PW passwords ;-)].
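
Note, too, that because crypt() prepends its salt to its output, an
encrypted password built the way Peter A. describes gives away its own
first two characters. A quick Perl illustration, with a made-up password:

  my $password  = 'hunter2';                          # made-up example
  my $encrypted = crypt($password, substr($password, 0, 2));
  print "$encrypted\n";   # output begins "hu" -- the salt, i.e. the first
                          # two characters of the password, in the clear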

Which gets us to footnote [0]: for many months, Network Solutions has been 
using a fully Web-based system for domain/handle maintenance.

So to the extent you're concerned about CRYPT-PW, I'd suggest two viable
alternatives: change the authentication method to PGP (very easy), or create 
new NIC handles for the Web-based management system and transfer your 
domains' contact handles to the Web-based handles. Those with many domains 
will likely find the Web-based interface annoying, especially for batch 
updates.

But for goodness' sake, do *not* use MAIL_FROM !!!

-Peter

 If you must use CRYPT-PW then the following suggestions are recommended:

Changing your password means sending the cleartext value to NetSol via 
email. So changing your password involves risk. :-(




Re: SSH / X11 auth: needless complexity - security problems?

2001-06-05 Thread Peter W

On Mon, Jun 04, 2001 at 03:17:04PM -0700, [EMAIL PROTECTED] wrote:
 On Mon, Jun 04, 2001 at 11:19:37AM -0400, David F. Skoll wrote:
  I could not duplicate this with OpenSSH 2.9p1-1 on Red Hat 6.2

 The problem code is invoked in the X forwarding of ssh. If you try
 again, this time passing -X as a command line argument to the ssh
 client, you may find different results. Depending upon the user's
 combination of ssh_config and the server's sshd_config, this may or
 may not be (quickly) exploitable on your system. [1] Running ssh -X
 will create the /tmp/ssh- directory that is needed for the
 exploit.

The sshd documentation says that sshd will not invoke 'xauth' if it finds and
can execute either $HOME/.ssh/rc or /etc/ssh/sshrc. And you can get sshd to 
more safely write an xauthority cookie file using /etc/ssh/sshrc. But it 
still creates /tmp/ssh-/cookies -- in this case, making an empty 
file. Not only that, but sshd resets the XAUTHORITY value to point to this 
empty cookie, crippling the work done by /etc/ssh/sshrc. And then when the
user logs out, sshd wipes out the empty, useless /tmp/ssh-/cookies.

As for the patches that are more careful when creating 
/tmp/ssh-/cookies -- isn't there still an assumption that 
/tmp/ssh-/cookies won't be removed before the ssh session ends? Many 
systems use tools like 'tmpwatch' to prune unused files while the system is
running (instead of depending on things like tmpfs cleaning /tmp at reboot 
time). On those systems, if someone logs in with X forwarding enabled, but 
never runs any apps that need to read $XAUTHORITY, and stays on long enough
that 'tmpwatch' removes the whole /tmp/ssh- directory, then don't 
you have another attack vector -- regardless of how careful you were when 
creating the cookies file and its parent directory?

It seems to me this whole xauthority business may be adding complexity for
no good reason. Since the DISPLAY name changes, and an Xauthority file can
hold multiple X cookie credentials, is there any good reason why OpenSSH
needs to make, and then wipe out, a special xauthority file? Why can't it
just add credentials to the default xauthority file? Wouldn't that be
simpler and, almost by definition, more secure? If you really want to be
polite/clean, you can use the 'xauth remove' command to purge the cookie
from ~/.Xauthority.

-Peter

--
Cheap X run as hack available at http://www.tux.org/~peterw/



Re: SECURITY.NNOV: Outlook Express address book spoofing

2001-06-05 Thread Peter W

On Tue, Jun 05, 2001 at 12:59:03PM -0700, Dan Kaminsky wrote:

 An immediate design fix would be to use a different coloring and fontfacing
 scheme to refer to full names, rather than quoted email addresses from the
 address book.  This should self-document decently, since over the course of
 sending a number of mails users should learn to associate one character type
 with one form of name and the other with the other.  Then, when the attack
 hits, people see things backwards and some method of investigation can be
 made available.

Nice idea.

Novell Groupwise has similar problems with displaying the address book
name instead of the address (though Groupwise is *not* vulnerable to the
same attack that forces the spoofed entry into the address book). It would
be nice if these email systems would always display both the name and the
address. Perhaps use both different colors and the familiar name <address> construct,
e.g. [EMAIL PROTECTED] [EMAIL PROTECTED] the way
other packages like Netscape Messenger, Mozilla Mail, Pine, and Mutt do.

-Peter



Re: Mail delivery privileges

2001-05-19 Thread Peter W

On Fri, May 18, 2001 at 04:35:08PM -0400, Greg A. Woods wrote:

 [ On Friday, May 18, 2001 at 11:18:51 (-0400), Wietse Venema wrote: ]

  3 - User-specified shell commands. Traditionally, a user can specify
  any shell command in ~user/.forward, and that command will execute
  with the privileges of that user.

 Personally I'm loath to allow ordinary users to specify delivery to
 programs in the first place, and forcing them at minimum to arrange for
 their mail filters to run unprivileged seems like a very small price

 That's certainly the way it works on Plan 9:

If  the file /mail/box/username/pipeto exists and is read-
able and executable by everyone, it will be run  for  each
incoming  message for the user.  The message will be piped
to it rather than appended to his/her mail box.  The  file
is run as user `none'.

So users with pipeto scripts are vulnerable to other users' pipeto
scripts, since they all run as the same user. Mutual Assured Corruption,
you might say. I think that sounds like a *large* price to pay!

 Note that there are solutions to the filtering issue which do not
 require the final destination of filtered messages to be an inbox that's
 writable by the unprivileged user (eg. just pass them back to the mail
 system for re-delivery to a new mailbox).

Your earlier post assumed that users didn't want to use ~/.forward to
specify custom actions. Now you're assuming all the user wants to do
is filter the mail, i.e., decide which mailbox to put it in. But
users want to do more with their mail than simply filter it.

To protect users from each others' ~/.forward instructions, it is necessary,
as Wietse said, for the delivery agent to start with superuser privileges.
There are ways to make things a little bit safer, e.g. have the delivery
agent drop privileges to nobody:bobpipe (where only bob is a member of 
bobpipe) instead of bob:users when running the ~/.forward command, but that
only protects bob from his own mistakes in ~/.forward and still leaves
the delivery agent starting out with superuser privs...
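
In rough Perl terms, the drop might look like this (the user and group
names are the hypothetical ones above, and a real delivery agent would
also clear supplementary groups):

  use POSIX qw(setgid setuid);

  my $forward_command = '/usr/bin/vacation';   # hypothetical ~/.forward command
  my $uid = getpwnam('nobody')  or die "no such user\n";
  my $gid = getgrnam('bobpipe') or die "no such group\n";

  setgid($gid) or die "setgid: $!\n";   # give up root's group first
  setuid($uid) or die "setuid: $!\n";   # then give up root itself
  die "still privileged\n" if $< == 0 || $> == 0;

  exec('/bin/sh', '-c', $forward_command) or die "exec: $!\n";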

-Peter



Re: CORRECTION to CODE: FormMail.pl can be used to send anonymous email

2001-03-12 Thread Peter W

On Sun, Mar 11, 2001 at 10:36:32PM +0100, Palmans Pepijn wrote:

 The problem is in the sub check_url:
 It sets $check_referer = 1 if there is no $ENV{'HTTP_REFERER'}
 Under normal conditions your server will always be able to get the HTTP_REFERER.

Not true. Many firewalls block Referer headers, so requiring this
information will frustrate legitimate users, while not stopping abusers.

 simple solution is: change the 1 into a 0 after the else {

That's hardly a solution. The "Referer" information is client-supplied
data; any intelligent spammer will cobble together code that connects
to the httpd and feeds it whatever data it wants.
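
For example, a few lines of LWP are enough to claim any Referer at all
(the URLs and form fields below are only illustrative):

  use LWP::UserAgent;
  use HTTP::Request::Common qw(POST);

  # Build an ordinary form submission, then forge the Referer header.
  my $req = POST('http://www.example.com/cgi-bin/FormMail.pl',
                 [ recipient => 'victim@example.org', subject => 'spam' ]);
  $req->header(Referer => 'http://www.example.com/feedback.html');   # forged

  my $res = LWP::UserAgent->new->request($req);
  print $res->status_line, "\n";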

Basing security decisions on client-provided data like the Referer
HTTP header is Just Plain Bad Design. But the Referer check isn't
the real problem here: trusting the rest of the user-supplied data is.

If you want something like this to be "secure", you need a way to
verify the client-supplied data (in this case, things like the email
recipient that should be embedded in the page with the form).
A few common techniques are:

 - Embed a checksum as a hidden field and ensure that all "important"
   fields check out. One problem with this is that the Web page authors
   need to be able to calculate checksums (preferably without knowing
   the algorithm), and have to update the checksum if any hidden form
   element changes: a real pain.

 - Put the settings in a separate file or repository (not hidden form
   fields) where the backend sees the data but the client does not,
   and cannot override the proper settings at all. I've used this
   approach for systems that are configured by more "technical" staff.
   This approach also saves bandwidth.

 - Have the back-end request the URL the client claims the form
   is on (assuming it looks like something the back-end should honor),
   parse the hidden fields, and override anything the client may
   have submitted. This is my favorite approach, as it's easy on the
   Web page authors, and involves no special tricks. This can even
   work for authenticated forms iff using cookie-based auth and the
   back-end has a URL that will receive the auth cookie(s).

Any of these approaches would at least prevent the spammer from
reaching anything other than the officially sanctioned address, though
they can email that as often as they like...
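
As a sketch of the first technique above, the checksum could be little
more than a digest over a server-side secret plus the important fields
(the secret and field names here are made up):

  use Digest::MD5 qw(md5_hex);

  my $secret = 'server-side secret, never sent to the client';   # hypothetical

  # Run by whoever builds the page, to get the value for the hidden field.
  sub checksum {
      my (%fields) = @_;
      return md5_hex(join("\0", $secret,
                          map { "$_=$fields{$_}" } sort keys %fields));
  }

  # In the form handler: recompute over the submitted values and compare.
  sub fields_check_out {
      my ($submitted, %fields) = @_;
      return defined $submitted && $submitted eq checksum(%fields);
  }

  # e.g. checksum(recipient => 'sales@example.com', subject => 'Inquiry')
  # goes into <input type="hidden" name="checksum" value="...">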

...as for the observation that the resulting email will only show
the IP address of the Web server; yes, true. That's why all my Web - mail
apps add "X-" mail headers with debugging information (scrubbed of
any unexpected data!) to facilitate debugging. See RFC 822. E.G.,

X-Sender-Network-Address: 10.2.3.4
X-Mail-Origin: http://www.example.com/ webmail system
X-Disclaimer: This is not an official example.com mail message.
X-Apparent-Source-Page: http://www.example.com/mailform.html

This is just Secure Programming 101 + Web Programming 101. It's a
shame, but it certainly seems that a lot of these freebie Web scripts
are really quite awful when it comes to security.

Bugtraq could be flooded with noise if it started to accept posts
on stupid Web programming mistakes in freebie software; please, let's
not go down that road!

-Peter

 ---snip---
 sub check_url {

 if ($ENV{'HTTP_REFERER'}) {
 foreach $referer (@referers) {
 if ($ENV{'HTTP_REFERER'} =~ m|https?://([^/]*)$referer|i) {
 $check_referer = 1;
 last;
 }
 }
 }
 else {
 $check_referer = 1;   === YEAH, THIS ONE ! :)
 }


 # If the HTTP_REFERER was invalid, send back an error.   #
 if ($check_referer != 1) { error('bad_referer') }
 }
 ---snip---

 On the other hand, there must be a reason why they've put that else in it so if it 
fails to work for you 

 On Sat, 10 Mar 2001, Michael Rawls wrote:

  Hi All,
 I did a little playing with FormMail.pl after a run in with a spammer
  abusing our webserver. Apparently ALL FormMail.pl cgi-bin scripts can be
  used to spam anonymously.  I found another server with FormMail.pl and
  tried the same exploit to send myself an email and it worked.
 
  The email will not show the spammer's real IP.  Only the web servers IP
  will show.  The web server logs will however show the true IP address of
  the spammer.



Re: HeliSec: StarOffice symlink exploit

2001-02-20 Thread Peter W

On Sat, Feb 17, 2001 at 04:57:23PM +0100, JeT Li wrote:

   One way to fix the problem is to create a directory inside your
 home directory which is inaccessible to anyone but yourself (permissions 700),
 called tmp. Then insert an entry in your login start-up file to set the $TMP
 environment variable to $HOME/tmp, so it will direct StarOffice to use your
 temporary directory, rather than the system /tmp. Something like this (in
 bash):

   [wushu@JeT-Li]$ TMP=$HOME/tmp ; export TMP
   (not permanent)
   or modify the .bash_profile adding TMP=$HOME/tmp and including this
 variable in the export.

BTW, I have some fairly sophisticated TMPDIR/TMP scripts in the CVS
repository for Bastille (http://sourceforge.net/projects/bastille-linux)
that folks might find useful. The scripts allow you to put TMPDIR
somewhere other than $HOME (say, local /tmp if $HOME is on NFS), to keep
track of TMPDIRs on a host-by-host basis, to hide the number of files
and last access time of $TMPDIR, etc. There's also an ancillary script
designed to keep pruning tools like 'tmpwatch' (frequently found on
Linux systems) from removing $TMPDIR while you're logged in, and to
warn you via multiple channels if something is amiss with your temp dir.

Look for bastille-tmpdir.sh, bastille-tmpdir.csh, and
bastille-tmpdir-defense.sh (the anti-'tmpwatch' tool).

bastille-tmpdir.* go in /etc/profile.d, where many systems will run them
at login time (via /etc/bashrc or /etc/csh.login scanning /etc/profile.d).
bastille-tmpdir-defense.sh goes in /etc. All three should be mode 0755.

These apps will be optional in the soon-to-be-released Bastille 1.2.0
hardening tool for Red Hat and Mandrake Linux distributions. I've only
tested the scripts under Linux, but they should be fairly portable. Your
feedback would be most appreciated.

It's nice that apps let you pick your own preferred temp space ($HOME
in some cases is a poor choice), but it's a shame that some apps *need*
you to do so to behave safely. :-(

-Peter



Re: vixie cron possible local root compromise

2001-02-15 Thread Peter W

I can't believe how much has been written about an issue
that's apparently fixed with a few lines of code.

More patches, less pedantic finger pointing. Bottom line
is that the app does not, and cannot, enforce length constraints on
usernames, so it needs to do proper bounds checking.

-Peter



Re: Palm Pilot - How to view hidden files

2001-02-12 Thread Peter W

On Sun, Feb 11, 2001 at 05:15:53PM -0300, Paulo Cesar Breim wrote:

 The software Tiny Sheet, present in all versions of Palm Pilot,

http://www.iambic.com/pilot/tinysheet3/

To clarify: it's not included with PalmOS; it's 3rd-party software.

 has a function called IMPORT file.
 Well, when this function is used ALL FILES, including the hidden files
 protected with a password, can be imported to a Sheet.

The "private" flag in PalmOS is advisory only. As has been noted in previous
discussions (most notably L0pht/@stake's PalmOS password recovery discovery),
the Palm platform is not designed to be secure. Physical access means access
to all its data.[0] So there's not much new about Tiny Sheet apparently not
following the guidelines. It's just another example of the limitations in PalmOS.

If you want to protect data stored on a PalmOS device, encrypt it. Hmm, I'd
be interested to see some work on PalmOS memory attacks, e.g. after you've
run a crypto app, can you run another app that scours the device's memory
for information left behind, e.g., passphrases or decrypted keys?

-Peter

[0] Unless the device is "locked" and has 3rd-party security extensions
loaded that prevent non-destructive device resets.



iPlanet FastTrack/Enterprise 4.1 DoS clarifications

2001-01-24 Thread Peter W

Regarding Peter Guendl's discovery of DoS attacks against iWS 4.1:

1) Peter G. reports that disabling the cache with cache-init is not
   an effective workaround for the FastTrack problem.

2) I wrote that iWS 4.1 has "at least one huge hole (remote code execution
   via SSL/TLS implementation bug)". Another reader has pointed out that
   the SSL/TLS problem was announced as a Denial of Service vulnerability.

3) The note about Service Pack levels for iPlanet Enterprise 4.1 in
   Peter Gruendl's "Netscape Enterprise Server Dot-Dot DoS" was somewhat
   confusing. The iPlanet URL he refers to correctly states that the
   latest supported iPlanet Web servers[0] are 4.0sp6 and 4.1sp5. 4.1sp6
   has not been released or officially announced by iPlanet.

Thanks,

-Peter

[0] All Netscape-branded Web server products, including Netscape Enterprise 3.6,
have officially passed their end-of-life dates and are no longer supported.



Re: def-2001-05: Netscape Fasttrack Server Caching DoS

2001-01-23 Thread Peter W

On Mon, Jan 22, 2001 at 01:30:33PM +0100, Peter Gründl wrote:

Defcom Labs Advisory def-2001-05

Oooh, how fancy! ;-)

 --=[Detailed Description]=
 The Fasttrack 4.1 server caches requests for non-existing URLs with
 valid extensions (eg. .html). The cached ressources are not freed
 again (at least not after half an hour), so a malicious user could
 cause the web server to perform very sluggishly, simply by requesting
 a lot of non-existing html-documents on the web server.

 ---=[Workaround]=-
 None known.

I can't test these because iPlanet's download system is too broken,
stupid, and annoying for me to grab iWS ft 4.1 to verify, but:

Almost certainly effective workaround #1:
 Disable caching per http://help.netscape.com/kb/corporate/2313-1.html

Probable workaround #2:
 Most  of the NES/iWS built-in functions are cache-safe. That is, using
them does not prevent the server from using its cache accelerator. Some
functions are conditionally cache-safe, e.g. the "flex-log" function is
cache safe with the default configuration, but if certain attributes of
requests are logged, then the cache cannot function.
 3rd-party functions are assumed to be cache-unsafe unless they
explicitly set the rq->directive_is_cacheable flag (see
http://developer.netscape.com:80/docs/manuals/enterprise/nsapi/svrplug.htm)
so you should be able to write a quick NSAPI module like this:
  NSAPI_PUBLIC int PW_null(pblock *pb, Session *sn, Request *rq)
  {
/* note we do not touch rq->directive_is_cacheable */
return REQ_NOACTION;
  }
and then use that in obj.conf, e.g.
 # near the top of obj.conf
 Init fn="load-modules" shlib="/path/to/PW_null.so" funcs="PW-null"
 #
 # inside your Object config
 Error reason="Not Found" fn="PW-null"
 # then your regular 404 handler, e.g.
 Error reason="Not Found" fn="send-error" path="/path/to/errorpage.html"
This should make iWS realize that the file not found URLs are not
cacheable, without affecting other documents.

I also expect that sites using query-handler instead of send-error for
their 404 errors won't have the problem Herr Gruendl describes.

 -=[Vendor Response]=--
 This issue was brought to the vendor's attention on the 7th of
 December, 2000. Vendor replied that the Fasttrack server is not meant
 for production environments and as that, the issue will not be fixed.

Also worth noting is that there do not seem to be *any* service packs
for iWS FastTrack 4.1. Since iWS Enterprise has had at least one huge
hole (remote code execution via SSL/TLS implementation bug), I expect
that iWS FastTrack is an awfully dangerous app to make available to
others. Probably a good idea to limit iWS ft to local access with some
sort of on-host firewall or packet filter.

I assume you have not found iWS Enterprise Edition to be vulnerable?

-Peter
http://www.tux.org/~peterw/



win32/memory locking (Re: Reply to EFS note on Bugtraq)

2001-01-23 Thread Peter W

On Mon, Jan 22, 2001 at 05:28:50PM -0800, Ryan Russell wrote:

 Due to some mail trouble, I'm manually forwarding this note.

 From:   Microsoft Security Response Center

 Subject:Re: BugTraq: EFS Win 2000 flaw

  "... it is recommended that it is always better to start by creating
 an empty encrypted folder and creating files directly in that folder.
 Doing so, ensures that plaintext bits of that file never get saved
 anywhere on the disk. It also has a better performance as EFS does
 not need to create a backup and then delete the backup, etc."

Bits _never_ get written to the disk? Guaranteed never to use swap space?

The GnuPG FAQ (http://www.gnupg.org/faq.html#q6.1) suggests that it is
not possible to make a Windows program insist on physical RAM the way a
program can in Open Systems. Does EFS really use only physical RAM? If
so, is there some win32 API that can be used by other application designers
who want to guarantee that certain blocks of allocated memory are *never*
swapped out to disk? The most likely candidate I've come across is
VirtualLock() which, unfortunately, "does not mean that the page will not be
paged to disk" (http://msdn.microsoft.com/library/techart/msdn_virtmm.htm).

Thanks,

-Peter



Re: [SAFER 000317.EXP.1.5] Netscape Enterprise Server and '?wp'tags

2000-03-23 Thread Peter W

At 5:48pm Mar 22, 2000, Vanja Hrustic wrote:

 amonotod wrote:

  Netscape ENT 3.6 SP3 -or maybe it's SP2- on NT4.0 SP4, vulnerable, even though
  WebPublishing has never (not even just to try it out) been enabled.

Same here. If directory browsing is enabled, wp-cs-dump gives a listing.

 - ACLs can not stop this problem; looks like NES parses '?wp' tags even
 before it is checked against ACLs (tried under Solaris)

More likely the ACL's don't match on query string information. (ACL's
usually trigger on ppath, which does not include the query string.)

 The only way to disable this 'feature' was to edit file ns-httpd.so
 (under Solaris), and modify strings inside; for example, to change
 '?wp-cs-dump' into '?ab-cd-efg' - or whatever.

Editing DLL's. Eek.

The attached NSAPI code was tested on NES 3.63 on Solaris and seems to
stop the problem on the server we can't disable directory browsing on. I'd
love to talk off-list with others working on this to see if there are other
things this doesn't catch, you know, weird URI-encoding, etc. If anyone
has more info on how to exploit the tags, that would be nice, too.

Netscape, if you're listening: this is a workaround; I'd like a fix. ;-)

-Peter

http://www.bastille-linux.org/ : working towards more secure Linux systems


#include "base/pblock.h"/* pblock_findval */
#include "frame/http.h" /* PROTOCOL_NOT_FOUND */

/*
PW-no-wpleak.so

   Usage:
   At the beginning of obj.conf:
  Init fn=load-modules shlib=PW_no_wpleak.so funcs="PW-no-wpleak"
   Inside an object in obj.conf (preferably at the top of the default object):
  PathCheck fn=PW-no-wpleak
   
   The PathCheck gives a 404 for any request containing known WebPublisher tags.
(i.e. with a QUERY_STRING beginning with a known tag)
 */
 
NSAPI_PUBLIC int PW_no_wpleak(pblock *pb, Session *sn, Request *rq)
{
/* working variables */
char *requestQuery = pblock_findval("query", rq->reqpb);
char *webPubTags[] = { 
"wp-cs-dump",
"wp-ver-info",
"wp-html-rend",
"wp-usr-prop",
"wp-ver-diff",
"wp-verify-link",
"wp-start-ver",
"wp-stop-ver",
"wp-uncheckout",
NULL
};
int i = 0;

/* bail out if we've got nothing to work with */ 
if (!requestQuery) return REQ_NOACTION;

/* check the query string against known tags */
while ( webPubTags[i] != NULL ) {
if (strstr(requestQuery,webPubTags[i++]) == requestQuery ) {
/* found a match, throw a 404 error */
protocol_status(sn, rq, PROTOCOL_NOT_FOUND, NULL);
return REQ_ABORTED;
}
}

/* looks OK */
return REQ_NOACTION;
}



Re: Process hiding in linux

2000-03-20 Thread Peter W

At 11:44pm Mar 15, 2000, Pavel Machek wrote:

 /proc/pid allows strange tricks (2.3.49):

 pavel@bug:~/misc$ ps aux | grep grep
 Warning: /boot/System.map has an incorrect kernel version.
 Warning: /usr/src/linux/System.map has an incorrect kernel version.

... interesting bits about /proc/$PID/status interface and how having
an open filehandle to a defunct proc's status can hide info from ps ...

1) The 2.3.x series [like all N.M.x kernels where ((M % 2) == 1)] are
   development kernels, not for production use.

2) The 2.3.x development tree is up to 2.3.99-pre1, according to
   http://www.kernel.org/ (Granted, 2.3.49 was only superseded nine
   days ago, and 2.3.99-pre1 appears to really be 2.3.52, but that just
   goes to illustrate that this is a developers' alpha release.)

In other words, check it on the current code (and what's up with having
the wrong System.map installed?) and post to the linux kernel-dev mailing
list if the dev kernel seems to have a bug. If they ignore you and seem
happy to release what you believe to be a product with a security flaw,
let the world know.

-Peter

http://www.bastille-linux.org/ : working towards more secure Linux systems



Re: DoS for the iPlanet Web Server, Enterprise Edition 4.1

2000-02-24 Thread Peter W

At 10:31am Feb 23, 2000, -Eiji Ohki- wrote:

 I could find out the denial of service effected to iPlanet
 Web Server, Enterprise Edition 4.1 on Linux 2.2.5(Redhat6.1J;
 Kernel 2.2.12).

http://www.iplanet.com/downloads/download/detail_161_284.html
"Version Description: Please note this is a pre-release version"

 to the Enterprise Server International Edition 3.6SP2 on
 Solaris 2.6J (Sparc), the Enterprise Server 3.6SP3 on Solaris
 2.6J (Sparc) , the iPlanet Web Server, Enterprise Edition 4.0SP3
 on Solaris 2.6J (Sparc)

All officially released, supported versions. Note iWS 4.0 is now at SP4.

I'll agree that Netscape's bug feedback leaves something to be desired,
but I wouldn't panic about this *yet*. ;-)

-Peter

http://www.bastille-linux.org/ : working towards more secure Linux systems



Re: VMware 1.1.2 Symlink Vulnerability (not)

2000-01-25 Thread Peter W

Aleph, please nuke my previous post on this thread.

Oops. Vmware does try to create $TMPDIR/vmware-log or /tmp/vmware-log,
even if given a config file argument (though if given a .cfg argument, it
quickly unlinks the temporary log file).

New recommendations:
 - set $TMPDIR to something sane like $HOME/tmpfiles

The exploit is not as silly as it first looked to me, but neither is it as
serious as the advisory seems to suggest.

Apologies to Harikiri for not checking a wee bit more thoroughly before
posting a response. Doh.

To Vmware: it would be nice if the first choice was $HOME/vmware,
then $TMPDIR (maybe), then a fatal complaint.

-Peter

At 11:50pm Jan 24, 2000, Peter W wrote:

 At 8:48am Jan 24, 2000, harikiri wrote:

  w00w00 Security Advisory - http://www.w00w00.org/
 
  Title:  VMware 1.1.2 Symlink Vulnerability
  Platforms:  Linux Distributions with VMware 1.1.2 (build 364)

  Due to the low-level requirements of VMware, it is necessary to run the
  program at a high privilege level, typically root.

 Vmware installs kernel modules, but the app itself may be run by
 unprivileged users.

  VMware creates the file "/tmp/vmware-log" on startup. The existance and
  owner of the file is not checked prior to writing startup information to
  the file.
 
  NOTE: VMware uses other files in the /tmp directory. The one cited above
  is only a single example.

 Vmware normally writes a log file and other files to the same directory as
 your guest OS's vmware configName.cfg file, but I don't expect many
 folks would save their configurations in /tmp, _especially_ since (1)
 that's where the precious virtual disk file is located and (2) vmware
 defaults to $HOME/vmware/configName. It looks like all of these files
 persist after vmware shuts down, and if I rename a file, vmware honors my
 umask when it re-creates it. If I link to a root:root mode 644 file,
 vmware complains about not being able to write to it.

 If vmware cannot create configName.log in the same dir as
 configName.cfg, _then_ it will try to make a generic "vmware-log" file.
 Note it will write this in $TMPDIR rather than /tmp if $TMPDIR is set.
 Again, I can't imagine the vmware user not being able to write in the
 virtual config directory. If I link $TMPDIR/vmware-log to a file I don't
 have write perms to, vmware refuses to run.

  Local users may create a symlink from an arbitrary file to /tmp/vmware-log.
  When VMware is executed, the file pointed to by the symlink will be overwritten.
 
  This may be used as a local denial of service attack. There may also be a
  method to gain elevated privileges via the symlink attack, though none is
  known at this time.

 The limit of this exploit is the following: any attacker who can replace
 any special vmware config files (or set up sym links before they're
 created) can trick a local vmware user into clobbering a file they already
 have write privileges to.

http://www.bastille-linux.org/ : working towards more secure Linux systems



Re: Multiple WebMail Vendor Vulnerabilities

2000-01-13 Thread Peter W

Please note that such wrappers should produce normal HTML pages with
hyperlinks and HTTP-EQUIV "client pull" tags. If the wrapper simply uses a
Location: redirect, many clients will send the URL of the original page,
not the URL of the intermediate wrapper (verified in Netscape 4.7 and MSIE
4.0). For things like this click-through wrapper, this behavior[0] is
important to understand.

E.G.

Example 1:
http://mail.example.com/foo
contains link to http://mail.example.com/redir?http://example.org/

http://mail.example.com/redir?http://example.org/
uses Location: to redirect client to http://example.org/

http://example.org/
sees HTTP_REFERER as "http://mail.example.com/foo"

Example 2:
http://mail.example.com/foo
contains link to http://mail.example.com/redir?http://example.org/

http://mail.example.com/redir?http://example.org/
creates HTML page with
<META HTTP-EQUIV="refresh" CONTENT="1; url=http://example.org/">

http://example.org/
HTTP_REFERER is either empty[1] or contains
"http://mail.example.com/redir?http://example.org/"

Which also means you probably want to be careful what your wrapper
puts in the CONTENT attribute of the client-pull tag. Of course all
this depends on the behavior of the browser. ;-) Happy coding,

-Peter
http://www.bastille-linux.org/ : working towards more secure Linux systems

[0] This allows helpful/good things like browsers telling what the last
page really was when the user follows a server side image map; having a
referer like http://bignewssite.example.com/headlines.map?1,2 is not as
helpful as http://bignewssite.example.com/daily/12jan/sportsnews.html

[1] For Netscape 4.7 and MSIE 4.0, if the user's browser follows the
client-pull META tag, the browser will not send *any* Referer header to
http://example.org/; but if the wrapper creates a normal <A HREF="...">
hyperlink, the browser will send the URL of the wrapper to the server
handling http://example.org/. So a client-pull with a short delay in the
CONTENT attribute is most likely to anonymize the hyperlink.

At 8:48am Jan 12, 2000, CDI wrote:

 [2] A wrapper implementation looks at each incoming email. Any link found in
 the email which leads offsite will be "wrapped".  An example;

 original: http://www.example.com/
 wrapped : http://www.cp.net/cgi-bin/wrapper?http://www.example.com/

 The wrapper CGI in this instance foils the Referer bug by changing the
 Referer to itself. In most cases, the resultant referer is identical to
 the 'wrapped' URL shown above.  This method of preventing the bug is
 effective, but certainly not perfect.



Re: BIND bugs of the month (spoofing secure Web sites?)

1999-11-14 Thread Peter W

At 1:14am Nov 13, 1999, D. J. Bernstein wrote:

 A sniffing attacker can easily forge responses to your DNS requests. He
 can steal your outgoing mail, for example, and intercept your ``secure''
 web transactions. This is obviously a problem.

If by ``secure'' web transactions you mean https (SSL-protected), then no,
they can't. SSL-enabled HTTP uses public keys on the server side to verify
server identity. These keys are typically signed by a Certificate
Authority (Verisign, Thawte, etc.) and clients will not trust server keys
unless they have a valid, non-expired certificate from a known, trusted
CA. Even if the attackers monitored all your network communications, they
still would not have your web server's private key and its passphrase.

While DNS spoofs may be practical, impersonating an SSL-enabled Web server
requires considerably more than lying about IP addresses.

-Peter

 We know how to solve this problem with cryptographic techniques. DNSSEC
 has InterNIC digitally sign all DNS records, usually through a chain of
 intermediate authorities. Attackers can't forge the signatures.

 Of course, this system still allows InterNIC to steal your outgoing
 mail, and intercept your ``secure'' web transactions. We know how to
 solve this problem too. The solution is simpler and faster than DNSSEC,
 though it only works for long domain names: use cryptographic signature
 key hashes as domain names.



Re: Linux kernel source problem

1999-10-27 Thread Peter W

Unfortunately, many documents suggest doing this work as root. See
  http://www.redhat.com/mirrors/LDP/HOWTO/Kernel-HOWTO-3.html#ss3.2

Some re-education may be in order. :-(

-Peter

cc: Brian Ward, the Kernel-HOWTO maintainer

At 10:06pm Oct 25, 1999, Alessandro Rubini wrote:

  There is a (mostly useful) feature in "tar" [...]

  So you do this as root, needing write access to /usr/src.

 Sorry, it's a non-issue.  Nobody sane should ever untar anything using
 root permissions. A tar file can include almost anything, including
 device nodes or an open /etc/passwd.

 In the specific Linux case, you don't need to extract sources in
 /usr/src (I have them all over the place, and they compile fine). Even
 if you want to do that in /usr/src, you'd better chown the directory
 to your personal account and avoid working as root.



Re: IE and cached passwords

1999-08-29 Thread Peter W

On Fri, 27 Aug 1999, Paul Leach (Exchange) wrote:

 The server gets to say, in the WWW-Authenticate challenge header field, for
 which "realm" it wants credentials (name+password). If both www.company.com
 and www.company.com:81 send the same realm, then the same password will
 continue to work.

 This behavior is as spec'd for HTTP Authentication, RFC 2617.

Not the way I read the RFC's. The "realm" is supposed to apply to the
absoluteURI minus the URI path component, which means "example.com" and
"example.com:81" are different. Details follow.

Section 1.2 of RFC 2617:

   The realm directive (case-insensitive) is required for all
   authentication schemes that issue a challenge. The realm value
   (case-sensitive), in combination with the canonical root URL (the
   absoluteURI for the server whose abs_path is empty; see section 5.1.2
   of [RFC 2616]) of the server being accessed, defines the protection
   space.

Section 5.1.2 of RFC 2616 gives an example absoluteURI:

   The absoluteURI form is REQUIRED when the request is being made to a
   proxy.

...snip...

   An example Request-Line would be:

   GET http://www.w3.org/pub/WWW/TheProject.html HTTP/1.1

...snip... [and here the RFC clearly indicates what an abs_path is]

   The most common form of Request-URI is that used to identify a
   resource on an origin server or gateway. In this case the absolute
   path of the URI MUST be transmitted (see section 3.2.1, abs_path) as
   the Request-URI, and the network location of the URI (authority) MUST
   be transmitted in a Host header field. For example, a client wishing
   to retrieve the resource above directly from the origin server would
   create a TCP connection to port 80 of the host "www.w3.org" and send
   the lines:

   GET /pub/WWW/TheProject.html HTTP/1.1
   Host: www.w3.org

So in the RFC 2616 example, "absoluteURI" is
"http://www.w3.org/pub/WWW/TheProject.html" and abs_path is
"/pub/WWW/TheProject.html". Applying the definition of "canonical root
URL" from the appropriate section of the RFC you cite, we get a canonical
root URL of "http://www.w3.org".

Naturally, if the w3.org Web server were running on a different port, our
"math" would look like:
   http://www.example.org:81/pub/WWW/TheProject.html
 - /pub/WWW/TheProject.html
 --------------------------------------------------
   http://www.example.org:81
and the :81 would be part of the "authority" segment of the URI (as
described in section 3.2 of RFC 2396 and section 14.23 of RFC 2616;
discussion in section 3.2.2 of RFC 1945 [HTTP 1.0 spec] is similar).

-Peter

The Intel Pentium III chip: designed to deny your privacy
Boycott Intel. http://www.privacy.org/bigbrotherinside/