Although JGDMS is a different type of system, this is one of the functions of SecurityManager in our system also.

We have methods equivalent to Subject.doAs which, instead of injecting the user's privileges into all ProtectionDomains, prepend a ProtectionDomain representing the user's Subject onto the call stack.  Used in this way, it actually simplifies policy file permission grants: you can grant AllPermission to trusted code, if you want to, and then limit user permissions, and the restriction will apply only to the user.  However, this doesn't fully explain why we do this.

https://pfirmstone.github.io/JGDMS/jgdms-platform/apidocs/net/jini/security/Security.html#doAs-javax.security.auth.Subject-java.security.PrivilegedAction-
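For contrast, here is a minimal sketch of the standard JDK Subject.doAs, which combines the Subject's principals into every ProtectionDomain on the access control context (the principal name is illustrative only; JGDMS's Security.doAs differs in that it prepends a single ProtectionDomain representing the Subject):

```java
import java.security.PrivilegedAction;
import javax.security.auth.Subject;
import javax.security.auth.x500.X500Principal;

public class DoAsSketch {

    // Run an action as the given Subject. With the plain JDK, the Subject's
    // principals are combined into every ProtectionDomain on the context;
    // JGDMS's Security.doAs instead prepends one ProtectionDomain that
    // represents the Subject, leaving other domains' grants untouched.
    static String runAs(Subject user) {
        return Subject.doAs(user, (PrivilegedAction<String>) () ->
                "running as " + user.getPrincipals(X500Principal.class)
                                    .iterator().next().getName());
    }

    public static void main(String[] args) {
        // Build a Subject representing an authenticated user.
        Subject user = new Subject();
        user.getPrincipals().add(new X500Principal("CN=Alice"));
        user.setReadOnly();
        System.out.println(runAs(user));
    }
}
```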

Explaining it requires briefly revisiting the eight fallacies of distributed computing, specifically fallacy 6: there is only one administrator.

So for your single JVM server, or many JVM instances operating independently, yes, there is only one administrator, but this is not the case for a distributed system.

The reason we prepend a ProtectionDomain representing the user is that there are ProtectionDomains on the stack that represent Services other administrators are responsible for; it just happens to be proxy "code" that implements an interface, or interfaces, used for method calls between the two systems.  These proxies are assigned a ClassLoader based on the Server's security constraints, the server Endpoint address (by the proxy's InvocationHandler implementation) and the codebase annotation.

So it's not really about the code, it's about the Service's identity: keeping the identities of different services separate from each other (even services using the same proxy code but different server identities, e.g. lookup services from different entities) and from users.  This avoids granting those Services all of the user's permissions, while still allowing the user to authenticate with the service and run as a logged-in Subject in threads on the remote node as well.

As previously explained, the permissions required for Service proxies are negotiated dynamically at runtime between the two parties.  We don't grant the remote service provider all the permissions the user has, only those negotiated when a secure connection was first established, so the user allows the service provider to utilise some of the user's permissions. The service provider cannot currently obtain more permissions than the user has.
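As a rough static illustration of the "trust the code, restrict the user" split (the codebase URL and principal name below are invented, and real deployments rely on dynamic grants negotiated at runtime rather than a file like this), a Java policy file might look something like:

```
// Trusted local code gets everything.
grant codeBase "file:${app.home}/lib/*" {
    permission java.security.AllPermission;
};

// A logged-in user is limited to what is granted to their principal;
// with a ProtectionDomain representing the Subject on the stack,
// only these permissions apply to the user's actions.
grant principal javax.security.auth.x500.X500Principal "CN=Alice" {
    permission java.net.SocketPermission "*.example.com:1024-", "connect";
};
```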

In versions of Java without a security manager, the third-party service provider will have AllPermission, while the user will have restricted permissions (if we still have some form of user Permission-based access control).   So basically we might as well remove all access control completely and say that all users and all code are completely trusted; the system will be much more user friendly and easier to use, but legally that can present problems.

It does appear that a side effect of JEP 411, perhaps even an unintended consequence, will be to limit Java to trusted networks with one administrator.  It most certainly appears to be a single-JVM-focused change, or one aimed at systems controlled by one administrator.

Newer versions of Java will of course be less secure without access controls and unsuitable for use in a distributed system that involves more than one administrator.

In a way, it's ironic: IPv4 limited us to local networks, but now it looks like later versions of Java will be the limiting factor.

I realize less utilized platform features like access controls aren't the concern of OpenJDK developers, but it doesn't do any harm to talk about them, at least so the consequences of the decision can be better understood.   I realize this is probably a business and marketing based decision.  I guess Java has more of an enterprise history, and it's giving that up to become leaner and more developer friendly (fewer things to learn or understand).

Cheers,

Peter.

On 17/05/2021 12:11 pm, David Black wrote:
Hi Ron

On Thu, 13 May 2021 at 20:22, Ron Pressler <ron.press...@oracle.com> wrote:


On 13 May 2021, at 03:11, David Black <dbl...@atlassian.com> wrote:


This seems somewhat more useful than 1 & 2 but imho it would be better to be 
able to perform checks/grant access on a call stack basis.
This is an important point we’re trying to get across. The very reason the Security Manager was made this way is because it does *seem* better; certainly it is much more flexible. However, twenty-five years of experience have shown us that *in practice* this is not the case, certainly not when you look at the ecosystem as a whole. It is hard to get right, which results in people not using the mechanism (which significantly reduces its utility and the value in maintaining it), or worse, using it and thinking it gets the job done, but actually using it incorrectly, providing the illusion of security without actual security.
Agreed, but if you don't have this level of introspection/detail, how do you propose to, at least partially, mitigate bug classes such as SSRF?

Atlassian currently makes use of a security manager to prevent access to cloud metadata services when there is no Amazon SDK related class on the call stack. This helps to mitigate the impact of SSRF in applications running in a cloud environment (https://github.com/asecurityteam/ssrf-protection-example-manas-security-manager).

We’re talking about a situation where *all* the classes running in your application are trusted, i.e. assumed not to be malicious, and an accidental vulnerability might exist in any one of them. Can you explain why you believe this mechanism, which treats different classes differently, is the best way to improve security?
Because it allows restrictions to be placed upon "trusted"[0] classes so as to offer some mitigation against classes of bugs such as SSRF. You can also use a security manager to monitor for potential policy implementation issues and make adjustments. Specifically for SSRF: if you want to mitigate the issue, you need to ensure that network connections respect proxy settings, while also allowing certain code paths to bypass proxy settings to access potentially sensitive network locations (e.g. cloud metadata resources). This can result in configuration mistakes, and/or in finding libraries/classes that ignore proxy configuration. You may be thinking "oh, but surely no library/class would have proxy problems?" Well, the answer is "yes, they can and do". For example, https://bugs.openjdk.java.net/browse/JDK-8161016[1] was fixed in Java 9 but has not yet been fixed in Java 8[2]. In a similar fashion, OkHttp before version 3.5.0 could also fall back to a direct connection[3]. So having a "belt and braces" approach, prox(y|ies) and a security manager, is valuable.


[0] "Trusted" classes are not immune to security issues
[1] https://hg.openjdk.java.net/jdk10/jdk10/jdk/rev/3dc9d5deab5d
[2] https://hg.openjdk.java.net/jdk8u/jdk8u-dev/jdk/file/0056610eefad/src/share/classes/sun/net/www/protocol/http/HttpURLConnection.java#l1180 & https://github.com/AdoptOpenJDK/openjdk-jdk8u/blob/master/jdk/src/share/classes/sun/net/www/protocol/http/HttpURLConnection.java#L1180
[3] https://square.github.io/okhttp/changelog_3x/#version-350
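The call-stack-based guard described above can be sketched as a SecurityManager subclass. This is a minimal illustration, not Atlassian's actual implementation: the metadata address and the "allowed" package prefix are examples, and a real guard would also override the other checkConnect overload and consider proxies.

```java
import java.util.Arrays;

// Sketch of call-stack-based SSRF mitigation: connections to the cloud
// metadata address are refused unless a class under an illustrative
// "com.amazonaws." prefix appears somewhere on the call stack.
public class MetadataGuard extends SecurityManager {
    private static final String METADATA_HOST = "169.254.169.254";
    private static final String ALLOWED_PREFIX = "com.amazonaws.";

    @Override
    public void checkConnect(String host, int port) {
        if (METADATA_HOST.equals(host) && !sdkOnStack()) {
            throw new SecurityException(
                "connection to metadata service blocked: no SDK class on stack");
        }
        // Deliberately no super call: this sketch enforces only the
        // metadata rule and otherwise permits the connection.
    }

    // getClassContext() returns the classes on the current call stack.
    private boolean sdkOnStack() {
        return Arrays.stream(getClassContext())
                     .anyMatch(c -> c.getName().startsWith(ALLOWED_PREFIX));
    }
}
```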
