I'm afraid the proposed solution to this issue will solve nearly nothing,
because nothing prevents the code inside the page from producing the SOAP
message itself and sending it over plain HTTP, without ever invoking the
browser's SOAP stack implementation and thus bypassing the insertion of
the SOAP header carrying the origin of the page. The server is then left
with no basis for deciding whether it should accept the request or not.
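
To illustrate: here is a minimal sketch in Java (the endpoint URL and
operation name are hypothetical) of how trivially any code that can open
an HTTP connection can produce such an envelope by hand, simply leaving
the header out:

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class ForgedSoapCall {
        public static void main(String[] args) throws Exception {
            // A hand-built SOAP 1.1 envelope: note that no
            // untrustedSource header is present at all.
            String envelope =
                "<SOAP-ENV:Envelope xmlns:SOAP-ENV="
                + "\"http://schemas.xmlsoap.org/soap/envelope/\">"
                + "<SOAP-ENV:Body>"
                + "<m:doSomething xmlns:m=\"urn:example\"/>"
                + "</SOAP-ENV:Body></SOAP-ENV:Envelope>";

            // Hypothetical internal endpoint, for illustration only.
            URL url = new URL("http://internal.example.com/soap");
            HttpURLConnection conn =
                (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            conn.setRequestProperty("Content-Type",
                "text/xml; charset=utf-8");
            conn.setRequestProperty("SOAPAction", "\"\"");
            try (OutputStream out = conn.getOutputStream()) {
                out.write(envelope.getBytes("UTF-8"));
            }
            System.out.println("HTTP status: " + conn.getResponseCode());
        }
    }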

I think the only workable solution is for the server to decide which
clients it will accept calls from and which not. The identity of the
client should be established using some reasonable authentication
mechanism (Kerberos, for example). Because the script inside the page has
no access to the client's credentials, it is forced to go through the
browser's implementation of the communication protocol, whatever it is
(e.g. its SOAP stack); otherwise the call will not be authenticated and
therefore not trusted by the server.

Relying on the client to insert something into the message, and then
trusting that information without even knowing where it came from, will
never work: the server has no way to verify the authenticity of the
information inside the header.

Sincerely,

Jan Alexander
[EMAIL PROTECTED]


----- Original Message -----
From: "Ray Whitmer" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Friday, 08 February, 2002 16:51
Subject: Security: SOAP server security from untrusted scripts, a proposal


> This is a long message.  At the end I propose a standard SOAP header which
> would eventually be supported by SOAP servers.  I need some opinions on
> this so that I can advance the SOAP support in the Mozilla project to
> be part of the default build.
>
> First the problem:
>
> Browser scripts can endanger services that are supposed to be protected
> behind a firewall, because an unsuspecting user loads a page which now
> has access to services behind a firewall and can send data back home.
>
> The common solution is to allow the user to designate domains which are
> trusted and should generally be permitted to make such calls.  If the
> script is already trusted enough, it may be permitted to ask the user to
> grant it the privileges necessary to make such a call.  But this approach
> seems deeply flawed:
>
> 1.  These security schemes rely on a configured browser that knows the
> difference between trusted and untrusted sites.  Security needs to work
> well in the absence of an intelligent user to make such configurations,
> much like a Java sandbox does.
>
> 2.  Pages which should be permitted to make calls to other sites outside
> of a firewall should not thereby be able to make calls to sites inside
> the firewall, but current mechanisms do not seem to make that distinction.
>
> 3.  Placing the responsibility on the user of the browser is wrong,
> because the user likely does not even know that he might be
> endangering important services which do not directly belong to him.
> He is likely not to really understand the purpose of the firewall,
> know which domains are behind the firewall, or know what types of calls
> might place these things at risk.
>
> 4.  Even for the providers of services not behind a firewall, this makes
> the services less available to browser users, because browser suppliers
> are forced to place very draconian checks with dire warnings on the
> SOAP mechanisms; a user who disables them for a particular
> domain whose pages he needs risks compromising services behind the
> firewall, even though all the suppliers and services involved may be
> quite legitimate.
>
> I have coded basic SOAP support for Mozilla, available from Javascript
> inside web pages, which will ship in an upcoming Netscape release,
> but I find myself in the position of having to make its use very
> inconvenient, yet at the same time still quite insecure once
> someone does give consent to make SOAP calls from a particular web page.
>
> Now the proposal:
>
> I propose that some of the responsibility be delegated to servers by
> introducing a new header, "foo:untrustedSource", which contains
> the URI of the page making the request (and also the company name in
> the case of a script with a verifiable signature).
>
> The reason I am bringing this up in this forum is that I feel I need
> some kind of unofficial agreement from server folks before proceeding.
> My time frame for releasing the client is pretty short, since I'd
> like to see it in Mozilla 1.0 (no guarantees).  I expect the time frame
> for mulling over details and going from this abstract discussion to a
> more concrete plan on the server side to be much longer, but that does
> not seem to be a problem, since the ability to ignore or object to
> mandatory headers already seems to be present in Apache SOAP.  I believe
> I could ultimately help contribute whatever parts might be needed for
> the Axis package or others to properly support this, which we expect to
> become common after the release of Mozilla, Netscape, etc.
>
> Here is how it works:
>
> For legacy purposes, we keep in place in Mozilla the existing ability to
> express complete faith in a domain and permit scripts within that domain
> to make all the SOAP calls they want.  But as always, we discourage its
> use, especially for general internet domains.
>
> When a script wants to make calls but does not fall under this clumsy,
> insecure exemption, the implementation adds a header to the message
> called "untrustedSource", with mustUnderstand="true", so that the
> security is opt-out instead of opt-in.  This header also contains the
> URI (and the company name if the script is verifiably signed) of the
> source of the script.
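>
> For concreteness, such a message might look something like the
> following sketch (the "foo" namespace URI is only a placeholder, and
> note that SOAP 1.1 actually spells the mustUnderstand value as "1"):
>
>   <SOAP-ENV:Envelope
>       xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/">
>     <SOAP-ENV:Header>
>       <foo:untrustedSource xmlns:foo="urn:placeholder-namespace"
>           SOAP-ENV:mustUnderstand="1">
>         http://some.untrusted.example.org/page.html
>       </foo:untrustedSource>
>     </SOAP-ENV:Header>
>     <SOAP-ENV:Body>
>       ...
>     </SOAP-ENV:Body>
>   </SOAP-ENV:Envelope>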
>
> How this affects Servers:
>
> As long as the server checks mustUnderstand, existing services reject the
> call, preserving their own security.  If a service wants to accept all
> calls regardless of source (i.e. it is already available outside of the
> firewall), it can be modified to ignore the header.  If a server wants to
> discriminate among untrusted sources, it has the information in the
> packet to do so, permitting the server to refuse calls from pages loaded
> from any domain except those it trusts.  This is
> obviously useful for protecting services within a firewall, and I could
> also imagine it being useful for dealing with potential DOS attacks
> set up by getting unsuspecting hordes of users to load a particular
> page.
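>
> As a rough sketch of such a server-side check (plain JAXP/DOM in Java,
> with a hypothetical allow-list; a real deployment would hook this into
> the SOAP engine's own header processing instead):
>
>   import java.io.StringReader;
>   import javax.xml.parsers.DocumentBuilderFactory;
>   import org.w3c.dom.Document;
>   import org.w3c.dom.NodeList;
>   import org.xml.sax.InputSource;
>
>   public class UntrustedSourceFilter {
>       // Placeholder namespace for the proposed header.
>       private static final String NS = "urn:placeholder-namespace";
>       // Hypothetical allow-list of page origins this service trusts.
>       private static final String[] TRUSTED =
>           { "http://pages.example.com/" };
>
>       /** Returns true if the request should be accepted. */
>       public static boolean accept(String soapMessage)
>               throws Exception {
>           DocumentBuilderFactory f =
>               DocumentBuilderFactory.newInstance();
>           f.setNamespaceAware(true);
>           Document doc = f.newDocumentBuilder().parse(
>               new InputSource(new StringReader(soapMessage)));
>           NodeList headers =
>               doc.getElementsByTagNameNS(NS, "untrustedSource");
>           if (headers.getLength() == 0) {
>               return true;  // no header: not from an untrusted page
>           }
>           String source = headers.item(0).getTextContent().trim();
>           for (String prefix : TRUSTED) {
>               if (source.startsWith(prefix)) {
>                   return true;  // page origin explicitly trusted
>               }
>           }
>           return false;  // untrusted page: reject the call
>       }
>   }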
>
> Please note that this does not in any way divulge the identity of the
> user, but only the URI of the page that invoked the service, which is
> likely to be in another domain (which is the whole point, because if the
> user only created his own pages, they would all be trusted enough not to
> need this).  It is no defense against a malicious user who compromises
> services within his own firewall, but only against user ignorance causing
> users to compromise security or to be unable to legitimately use web
> pages which access SOAP services (even if they have no firewall).
>
> What I get out of this is that, by widely documenting this header and
> eventually enabling services to accept it and filter messages based
> upon it, I make it possible for the masses to use pages which access
> these services, where appropriate, with little risk or inconvenience.
>
> FWIW, I participate in the W3C XML Protocols WG, but I don't think it is
> the type of issue they are dealing with there at present.
>
> Thanks,
>
> Ray Whitmer
> [EMAIL PROTECTED]
> [EMAIL PROTECTED]
>
