On 5/05/2021 10:55 pm, Sean Mullan wrote:
-bcc jdk-dev
-cc security-dev

On 5/5/21 12:04 AM, Peter Firmstone wrote:


I think we are talking past each other here. You keep talking about untrusted code, which sounds like applets to me. I've read, and still have a copy of, Li Gong's book; applets were only one of its considerations. I am talking about authorization and access control. We use and develop distributed, p2p systems; we never allow untrusted code to run, so let's stop talking about untrusted code — we don't use it. We do utilize dynamically downloaded code from others and use dynamic class loading, but we verify that code prior to loading and check that it is authorized to run before running it. I repeat: we do not run untrusted code. That would allow an attacker to cause denial of service and worse; the JVM has no control over badly behaving code.

But you use self-signed certificates to sign the code that will be run. There is no trust in self-signed certificates unless you have previously used some out-of-band mechanism to trust the public key. I still don't understand why this is not the same as running untrusted code, even if the code is sandboxed. And trusting the TLS server is not an equivalent basis of trust.


Yes, we have a dynamic out-of-band mechanism. We authenticate the other party using TLS, and depending on who they are (whether Permission has been granted to their Principal), we dynamically grant them permission to load their code on our system. They can sign their code with a self-signed certificate, and we dynamically grant permission for code signed by that certificate to be loaded; otherwise they can use a cryptographic checksum, so we can be sure their code hasn't been modified by a third party. This just simplifies the process, so we don't have to introduce another CA; we are in effect trusting the remote end to audit their own code, because we know who they are (a configuration concern). We also give that code permission to connect back to the remote end of the TLS connection; because threads only run with one Subject, we need the code to represent the remote Principal. We are not concerned with the names of classes or packages in their code, as that is their implementation concern.
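To make the idea concrete, here is a minimal sketch of a registry that records permissions granted to a TLS-authenticated Principal. All names here (DynamicGrants, the permission strings) are illustrative, not the actual policy API we use:

```java
import java.security.Principal;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: after the TLS handshake authenticates the peer,
// a fixed set of permissions is granted to code loaded on its behalf.
final class DynamicGrants {
    private final Map<Principal, Set<String>> grants = new ConcurrentHashMap<>();

    // Called once the peer's Principal has been authenticated.
    void grant(Principal peer, Set<String> permissions) {
        grants.merge(peer, Set.copyOf(permissions), (a, b) -> {
            Set<String> union = new HashSet<>(a);
            union.addAll(b);
            return Set.copyOf(union);
        });
    }

    boolean isGranted(Principal peer, String permission) {
        return grants.getOrDefault(peer, Set.of()).contains(permission);
    }
}

public class DynamicGrantDemo {
    public static void main(String[] args) {
        Principal peer = () -> "CN=supplier.example.com"; // illustrative peer
        DynamicGrants registry = new DynamicGrants();
        registry.grant(peer, Set.of("loadClasses", "connectBack"));
        System.out.println(registry.isGranted(peer, "loadClasses"));  // true
        System.out.println(registry.isGranted(peer, "deleteFiles"));  // false
    }
}
```

The real implementation sits behind a Policy provider, but the principle is the same: grants keyed by authenticated Principal, made at runtime rather than in a static policy file.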

Provided there is no data theft, if something is taken down by badly behaving software, there is a level of fault tolerance: services are restarted automatically, and if they fall over, they are re-activated. We don't use RMI's Activation implementation, but we do depend on some Activation API classes.

The problem with removing access control is that we would be permitting unrestricted access to a trusted third party who doesn't require unrestricted access; nor is it in our interests to allow it.

It's a matter of trusting the TLS endpoint; both clients and servers are authenticated. But these may just be servers talking to each other, not necessarily a client-server relationship. For example, a client of a service may require an event notification, so it passes a Remote object as a parameter to the service; now the server of that service is also a client when it sends the event notification to listeners. It's distributed, p2p. This is used by some as a cluster back end for JEE, although I must admit I don't know many of the details there.

One of the reasons SecurityManager didn't control many Java Serialization vulnerabilities is that ObjectInputStream was granted AllPermission, because it was a platform class. Clearly ObjectInputStream belonged in an unprivileged domain. It's also not good that Java serialization circumvents the invariant checks that constructors perform.
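The constructor-bypass point is easy to demonstrate: deserializing a Serializable class does not run its constructor (only the no-arg constructor of the first non-serializable superclass), so any invariant check in the constructor is skipped. A small self-contained example:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// A class whose constructor enforces an invariant and counts its runs.
class Range implements Serializable {
    private static final long serialVersionUID = 1L;
    static int constructorRuns = 0;
    final int min, max;

    Range(int min, int max) {
        if (min > max) throw new IllegalArgumentException("min > max");
        constructorRuns++;
        this.min = min;
        this.max = max;
    }
}

public class NoConstructorDemo {
    public static void main(String[] args) throws Exception {
        Range r = new Range(1, 10);

        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bos)) {
            out.writeObject(r);
        }

        Range copy;
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()))) {
            copy = (Range) in.readObject();
        }

        // A second Range now exists, but the constructor ran only once —
        // deserialization reconstructed the fields without invoking it.
        System.out.println(Range.constructorRuns); // prints 1
    }
}
```

An attacker who can tamper with the byte stream can therefore produce a Range with min > max, a state the constructor would have rejected.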

Our software also allows the company we are dealing with, for example, to provide their own GUI window within a GUI, so to speak, allowing a supplier to be integrated into the system. That is, if it's a system with user interaction; otherwise it could be an automated process between two systems, based on an agreed interface.

The only things known in advance are the Java interfaces used for intra-system communication, the platform software (which we try to keep backward compatible), and the principal of the other party to whom we have granted permission.

I have also been developing a public API for serialization (we've discussed its deserialization component previously), which is suitable for other serialization protocols as well as Java's. We don't support circular links in serialized object graphs; the only class we found that required a circular link was Throwable, and we have found it's possible to program around circular links. Developers implementing it use an annotation and implement a public constructor for deserialization, a public static method for serialization, and another public static method that defines the parameter arguments for serialization, which are wrapped by a common serialization parameter type. Permission must be granted to allow serialization to be implemented (this is required by the serialization protocol implementation), and a different permission is required to serialize; the parameters passed to these methods require a permission for their creation.
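The shape of that developer contract can be sketched as follows. The annotation, wrapper type, and method names below are all hypothetical placeholders for illustration — they are not the actual public API — but they show the pattern: an annotated class, a public constructor taking the wrapped parameters, and a static method that produces them:

```java
import java.util.Map;

// Hypothetical marker annotation for classes that opt in to the protocol.
@interface AtomicForm {}

// Hypothetical common wrapper for serialization parameters.
record SerialArgs(Map<String, Object> values) {
    Object get(String name) { return values.get(name); }
}

@AtomicForm
final class Point {
    final int x, y;

    // Public constructor used by the deserializer: invariants can be
    // checked here, unlike with standard Java deserialization.
    public Point(SerialArgs args) {
        this((Integer) args.get("x"), (Integer) args.get("y"));
    }

    Point(int x, int y) { this.x = x; this.y = y; }

    // Static method defining the parameter arguments for serialization.
    public static SerialArgs serialArgs(Point p) {
        return new SerialArgs(Map.of("x", p.x, "y", p.y));
    }
}

public class SerialSketch {
    public static void main(String[] args) {
        Point p = new Point(3, 4);
        // Round trip through the explicit constructor path.
        Point copy = new Point(Point.serialArgs(p));
        System.out.println(copy.x + "," + copy.y); // 3,4
    }
}
```

Because deserialization always goes through a real public constructor, constructor invariant checks run on every reconstruction — the property standard Java serialization lacks.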

The first serialization protocol implemented is compatible with Java's serialization protocol, and for now we can use the same serial form to ease the transition.

Using this system doesn't require a physics degree, and you don't need a billion-dollar particle accelerator, because we don't need to turn lead into gold; this is all just Java, basically POJO, domain-driven-design-style programming. We've simplified the permission system to Principals, but dynamically grant to code so that administrators don't have to. Jar files declare a list of permissions required, and we have a tool to determine those permissions at testing time. The actual permissions granted are the intersection of the permissions allowed and the permissions requested; the set of permissions allowed typically has a wider scope, or is more permissive, than those requested.
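The intersection step can be sketched with the standard java.security types (the method name here is illustrative): a requested permission is granted only if the locally allowed collection implies it, which is how a broader allowance (e.g. a directory tree) can satisfy a narrower request (a single file).

```java
import java.io.FilePermission;
import java.security.Permission;
import java.security.PermissionCollection;
import java.security.Permissions;
import java.util.ArrayList;
import java.util.List;

public class IntersectDemo {
    // Grant exactly those requested permissions the local policy allows.
    static List<Permission> intersect(List<Permission> requested,
                                      PermissionCollection allowed) {
        List<Permission> granted = new ArrayList<>();
        for (Permission p : requested) {
            if (allowed.implies(p)) {  // allowed may be broader in scope
                granted.add(p);
            }
        }
        return granted;
    }

    public static void main(String[] args) {
        // Allowed: read anything under /data (wider than what is requested).
        Permissions allowed = new Permissions();
        allowed.add(new FilePermission("/data/-", "read"));

        // Requested by the jar: one permission inside /data, one outside.
        List<Permission> requested = List.of(
            new FilePermission("/data/config.xml", "read"),
            new FilePermission("/etc/passwd", "read"));

        List<Permission> granted = intersect(requested, allowed);
        System.out.println(granted.size()); // prints 1: only the /data read
    }
}
```

Note that recent JDKs mark these classes deprecated for removal along with the Security Manager, which is exactly the dependency at issue in this thread.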

We reduced complexity with dynamic permission grants and improved performance by writing our own policy implementation. Developers do need to preserve the Subject across threads in their code.
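"Preserve the Subject across threads" means capturing the Subject at task-submission time and carrying it into the pooled thread explicitly, since thread pools do not propagate it for you. A minimal sketch (the Subject and principal are constructed inline here purely for illustration):

```java
import javax.security.auth.Subject;
import javax.security.auth.x500.X500Principal;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SubjectCarryDemo {
    public static void main(String[] args) throws Exception {
        // Subject representing the authenticated remote peer.
        Subject subject = new Subject();
        subject.getPrincipals().add(new X500Principal("CN=peer.example.com"));

        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            // Capture the Subject in the task's closure at submission time,
            // so the pooled thread still acts for the same Principal.
            Future<String> name = pool.submit(() ->
                subject.getPrincipals().iterator().next().getName());
            System.out.println(name.get()); // prints CN=peer.example.com
        } finally {
            pool.shutdown();
        }
    }
}
```

Forgetting this step is a common bug: the pooled thread would otherwise run with no Subject at all, and any access decision keyed on the Principal fails.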

We use a remote invocation system and protocol that's similar in principle to RMI, but unlike RMI it preserves the Subject across secure network connections and can be configured to use other serialization protocols, so serialization protocols are becoming an administration concern. Also unlike RMI, class resolution is determined by the ClassLoaders at each endpoint, not by a duplicate class resolution mechanism, so it also works on systems that use graph class resolution rather than a hierarchical class loading system, e.g. OSGi.

These systems are also capable of dynamically discovering each other, even on global networks using IPv6 dynamic discovery, provided the Principals are known in advance.

Remember Jini and JavaSpaces? We've had twenty years to simplify them and address some fundamental issues; this is modern, well-maintained code. We do depend on some classes that will be removed from Java, which originate from Java RMI's API, like the Remote interface and RemoteException, but that's life I guess. We will have to break backward compatibility to replace them.

That's probably why it sounds like you need a physics degree and a particle accelerator to do these things: we have made things that would otherwise be very difficult possible, and simple enough for practical application.

One thing that isn't clear, given that functionality provided by Java's module system will be used to replace SecurityManager, is how this will work with OSGi module systems. To date, I've considered the Java module system a platform concern and systems like OSGi an application concern. We produce OSGi bundles in our software releases, but we don't depend on OSGi. OSGi has been around a lot longer; we may consider producing Java modules as well, but we are waiting to learn best practices, because the Java module system is still relatively young.

It does appear that we might not be able to support Java past version 17, assuming this JEP goes ahead, which seems likely. But that might not matter: if enough software has the same difficulty, then perhaps support for Java 17 will be extended. If we had many years we would probably find a solution, but I'm not sure eight years is long enough. You only need to look at how long it took for Project Jigsaw to be implemented (over 10 years, wasn't it?) to realise some things take a long time, especially low-level pervasive systems.

Oracle's a pretty big ship, and I don't think these decisions were made in haste, but judging by the conversations so far, the decision is a done deal, not a proposal; it seems unlikely this ship will be turned around. No doubt there will be other use cases that come to light.

Thanks for your time and thanks for asking.

--
Regards,
Peter Firmstone
Zeus Project Services Pty Ltd.
