Re: [Fwd: Re: Cross-Site Request Forgeries (Re: The Dangers of Allowing Users to Post Images)]

2001-06-19 Thread Lincoln Yeoh

Re: images in html email. 

It's not just images. There are other tags - embed, etc.

And if Microsoft Word becomes very intertwined with IE (Word uses IE to
fetch stuff), then Word documents with image/object links will also be an
issue. Mix well and add a few macros to taste ;).

Cheerio,
Link.





Re: The Dangers of Allowing Users to Post Images (fwd)

2001-06-16 Thread Lincoln Yeoh

At 10:29 AM 6/15/01 -0400, Shafik Yaghmour wrote:
   Yeah, this is kind of old if you have been developing sites for a
while. You also need to consider that someone can do this from off the
site as well: if they have the ability to link to a site from your
site, they can get people to go to that site and then do the post from
there, which defeats this protection. Therefore, although everyone
disparages HTTP_REFERER checking, in this case it will protect the
innocent user.

I agree, it's an old problem (well, as old as the web :) ) and there are
various ways to try to solve it depending on the circumstances. I did post
to vuln-dev regarding this last year, asking for other people's ideas on
how they solve it (Subject: How to prevent malicious linking/posting to
webapps?).
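For what it's worth, the referer check Shafik describes could be sketched
roughly like this (a hedged sketch, not any particular app's code; the
environ dict is CGI/WSGI-style, and the ALLOWED_HOSTS value is a made-up
placeholder for your own site's host):

```python
# Rough sketch of an HTTP_REFERER check: accept a POST only when the
# Referer header points back at our own site. The host below is a
# hypothetical placeholder, not from the original post.
from urllib.parse import urlparse

ALLOWED_HOSTS = {"www.example.com"}  # hypothetical: your own site's host(s)

def referer_ok(environ):
    """Treat an empty or foreign Referer as suspect; only a referer
    from our own host passes."""
    referer = environ.get("HTTP_REFERER", "")
    return urlparse(referer).netloc in ALLOWED_HOSTS
```

Note the usual caveat: some browsers and proxies strip the Referer header,
so an empty value rejects legitimate users too, which is part of why people
disparage the check.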

   For critical areas you could also force the user to enter their
password or something similar which will also prevent this attack from
working. 

This does work reasonably well, but I'd reserve it for very critical
areas only ("Are you sure you want to buy 10 million shares of Company
X?"). At least one local bank uses this. If you use it too often the
users get annoyed, but so far they like it when it's used judiciously.

For less critical but still important things, I've been using confirmation
pages and checksums.

Basically, the CGI parameters are passed to the app. If there's no
confirmation value, one is generated using a cryptographic hash of the
active session's random string, the relevant CGI parameters, and a secret;
then a confirmation form is displayed with hidden fields containing those
parameters. If the user clicks yes, the values are resubmitted along with
the new valid confirmation value. If the user clicks no, the user is
returned to the calling page using the decoded stack CGI parameter (if
present).

If an invalid confirmation value is provided, the app logs the attempt and
the HTTP-Referer, and displays the confirmation form with a warning as well.

Only if the confirmation value is correct will the action take place.
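The scheme could be sketched something like this (a hedged sketch only:
the hash choice, constant-time compare, and every name here are my
assumptions, not taken from the original post):

```python
# Sketch of the confirmation-value scheme: hash of the session's random
# string, the relevant CGI parameters (in a fixed order), and a
# server-side secret. SHA-256 and all names are assumed, not original.
import hashlib
import hmac

SERVER_SECRET = b"replace-with-a-real-secret"  # hypothetical secret

def confirmation_value(session_random, params):
    """Derive the confirmation value from session + params + secret."""
    h = hashlib.sha256()
    h.update(session_random.encode())
    for name in sorted(params):          # fixed order so both sides agree
        h.update(("%s=%s;" % (name, params[name])).encode())
    h.update(SERVER_SECRET)
    return h.hexdigest()

def action_allowed(session_random, params, submitted):
    """Only if the resubmitted confirmation value is correct does the
    action take place; tying it to the session means the value dies
    with the session, killing replay."""
    return hmac.compare_digest(confirmation_value(session_random, params),
                               submitted)
```

Because the session's random string goes into the hash, a captured
confirmation value is useless once that session is invalid, even for the
same action and objects, which matches the replay property described below.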

One problem with linking confirmation values to the session is that if
the user times out, the forms have to be reloaded and regenerated, and
some info might be lost.

For example, if a user is typing out a message in a form containing the
confirmation value and the session times out, then when the user clicks
submit, the user has to log in again. If the confirmation value weren't
tied to the session, the user's data could be submitted automatically
after a successful login. I still prefer tying it to the session though,
because that further reduces the chance of replay attacks: once the
session is invalid, you can't use that confirmation value anymore, even
if it's for the same action and the same objects.

Regards,
Link.





Re: Raptor 6.5 http vulnerability (fwd)

2001-03-27 Thread Lincoln Yeoh

At 10:16 PM 27-03-2001 +1000, Peter Robinson wrote:
Most http proxy solutions (including squid and fw1) do this unless you
specify otherwise.
If you don't know what you're doing... you don't know what you're doing!!

Don't blame the software.

This is NOT a bug, just a feature. Often you want people to use their
proxy to access web sites on other ports.

Actually it looks like bad design to me. It's common, but bad. I blame
the software and the designers; I don't know why they're doing what
they're doing.

They seem to be making a single proxy do the job of two or more proxies.
Just because it's an http proxy doesn't mean it should do everything to
do with http.

I think the different functions should be split to different software with
different goals.

e.g.
http proxy to protect internal clients from the big bad webservers outside.
With hooks for antivirus scanning etc.

http proxy for performance: client caching, which can be chained to the
"save the users" proxy.

http proxy to protect internal webservers from the naughty script kiddies
outside.

HTTP accelerator to speed things up for servers - load balancing, output
buffering, etc. (Probably not on the firewall).

You could combine some http client proxies, but I think it's a bad idea
to combine http client and server proxies into one big do-everything
proxy. Why do that? It seems like asking for trouble to me.

That said, I have not seen any mainstream vendor coming up with a
specialised http proxy to protect webservers. It's not easy to do right due
to the loads involved, but it should actually be simpler if the software is
specialised.

Cheerio,
Link.



Re: Loopback and multi-homed routing flaw in TCP/IP stack.

2001-03-07 Thread Lincoln Yeoh

At 08:18 PM 06-03-2001 -, David Litchfield wrote:

This affects Windows NT as well. I spoke of the exact same problem back in
the December of 1998 (http://www.securityfocus.com/vdb/bottom.html?vid=1692
for the BID and http://oliver.efri.hr/~crv/security/bugs/NT/msproxy3.html
for the details) whereby we could get to the "clean" interface via the
"dirty" interface on MS Proxy II and from there to the rest of the

Does it really affect Windows NT?

I find if IP forwarding is on, then yes you can ping its 127.0.0.1
interface (this seems expected to me). But if it's off 127.0.0.1 is not
accessible (just like in Windows 9x).

I tested this sometime last year with Linux 2.0.

Recently I found that Linux 2.2 seems to behave strangely: I couldn't
bring down the lo0 interface and ping a remote 127.0.0.1.

Freebsd 4.2 and Linux 2.0 are indeed vulnerable to this multihome thingy.
In fact I did use this feature for a Linux 2.0 firewall - I used the IPs as
DMZ IPs.

However it appears to me that it would be hard to exploit this from a host
more than one network away.

Cheerio,
Link.



Re: Security information for dollars?

2001-02-03 Thread Lincoln Yeoh

At 07:06 AM 2/2/01 -0600, Shalon Wood wrote:
Cooper [EMAIL PROTECTED] writes:

 Now, could someone explain to me why a select list of individuals should
 get an earlier warning?

I think this is the crux of the matter. Before you can say that this
is a good idea, you first have to show that some people should get
early notice. Quite frankly, I can see a *very* strong argument in
favor of the root servers, ccTLD, &c. operators getting advance

Sure, but how will they actually get early notice?

Unless ISC _pays_ people who announce security issues to the closed list
exclusively, I don't see how it's really going to work significantly
better. Why announce to the closed list, vs Bugtraq?

So how about:
The listeners pay.
The bug announcers get paid.
ISC gets what's left.

The more bugs the less ISC gets.

One way to cut costs would be to pay using fancy cheques (stating which
exploit it's for), which would be more likely to be framed than cashed ;).

Cheerio,
Link.