Thanks Peter, you're right, 'simpler' is not always less complex.
Serialized circular object graphs are not required for RMI either: in
our version of RMI, called JERI (Jini Extensible Remote Invocation),
we found only Throwable used them, and we were able to work around it.
Another unnecessary complexity is the annotation of serialization
streams with codebase URL strings (again, it appears a simple solution
at first). Effectively, every class has its ClassLoader located from a
map keyed by a URL string and the thread context ClassLoader, and any
class can be loaded given an annotation string. This duplication of
class resolution systems, between RMIClassLoader and ClassLoader
visibility, leads to incredible complexity when debugging
ClassNotFoundException and ClassCastException.
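The duplicated resolution path shows up in the standard
java.rmi.server.RMIClassLoader API. A minimal sketch: with a null
codebase annotation the lookup degenerates to the supplied loader,
but a real annotation string would route through RMIClassLoader's own
codebase-to-loader map, parallel to normal ClassLoader delegation.

```java
import java.rmi.server.RMIClassLoader;

public class AnnotatedResolution {
    // Resolve a class named in a serialization stream using its codebase
    // annotation, falling back to the thread context ClassLoader. This is
    // the second, parallel resolution system described above.
    static Class<?> resolve(String annotation, String className)
            throws Exception {
        ClassLoader context = Thread.currentThread().getContextClassLoader();
        // RMIClassLoader maintains its own (codebase, parent) -> loader map,
        // separate from ordinary ClassLoader visibility.
        return RMIClassLoader.loadClass(annotation, className, context);
    }

    public static void main(String[] args) throws Exception {
        // With a null codebase the lookup just uses the context loader.
        Class<?> c = resolve(null, "java.lang.String");
        System.out.println(c.getName());
    }
}
```

When the annotation and the context loader disagree about which loader
should define a class, the result is the ClassCastException puzzle the
text describes: two types with the same name from different loaders.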
I've eliminated circular object graphs and duplicate class resolution.
Instead, ClassLoaders are assigned to each Endpoint, and those
ClassLoaders are then responsible for class visibility. This allows us
to support OSGi, let it perform class resolution, and make some very
difficult problems disappear. I've also limited dynamic class download
to what we call smart proxies (used for remote services), where
previously any class was allowed to be dynamically loaded. Smart
proxies are serialized independently of the serialization stream from
which they're obtained: the service the proxy represents is first
authenticated; then, if necessary, a bundle is provisioned and used for
deserialization of the proxy classes. The proxy relies on the bundle to
determine class resolution, and the bundles at each endpoint (the
proxy's and the service's) will have identical versions as well.
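A rough, hypothetical sketch of the per-endpoint idea; the names here
are invented for illustration, not JERI's actual API, and endpoints are
simplified to "host:port" strings.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class EndpointLoaders {
    // Hypothetical registry: one ClassLoader per remote endpoint, replacing
    // per-class codebase annotations.
    private static final Map<String, ClassLoader> LOADERS =
            new ConcurrentHashMap<>();

    static ClassLoader loaderFor(String endpoint) {
        // Visibility is decided once per endpoint, not once per class.
        // In an OSGi container this loader would come from a provisioned
        // bundle rather than the application loader used here.
        return LOADERS.computeIfAbsent(endpoint,
                e -> EndpointLoaders.class.getClassLoader());
    }

    public static void main(String[] args) {
        // Every class arriving from the same endpoint resolves through
        // the same loader, so there is a single answer to "which type
        // is this?" per connection.
        System.out.println(
            loaderFor("example.org:4160") == loaderFor("example.org:4160"));
    }
}
```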
Regards,
Peter.
On 14/04/2018 10:38 PM, Peter Kriens wrote:
Very nice example of something I see too often: choosing a bad
solution that just appears to be ’simpler’ to use. I think the Java
world suffers from this terribly.
We also had a big discussion about circularity in OSGi for the
DTOs. Initially they were circular, but in the end we agreed not to
give them inner references. It is slightly more work for the receiver,
but it makes life so much simpler overall …
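A hypothetical sketch of the two DTO styles (class and field names
invented for illustration):

```java
public class DtoStyles {
    // Circular style: parent and child reference each other directly,
    // so a serializer must track object identity to avoid infinite
    // recursion when walking the graph.
    static class CircularNodeDTO {
        String id;
        CircularNodeDTO parent;   // back-reference closes a cycle
    }

    // Acyclic style (the approach the OSGi DTOs settled on): refer to
    // the other object by identifier, leaving the receiver to resolve
    // the link against its own table.
    static class FlatNodeDTO {
        String id;
        String parentId;          // plain value, no object cycle
    }

    public static void main(String[] args) {
        FlatNodeDTO child = new FlatNodeDTO();
        child.id = "b1";
        child.parentId = "root";
        // Slightly more work for the receiver, but the graph is a tree
        // and trivially serializable with any format.
        System.out.println(child.parentId);
    }
}
```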
Kind regards,
Peter Kriens
On 14 Apr 2018, at 05:35, Peter via osgi-dev <osgi-dev@mail.osgi.org
<mailto:osgi-dev@mail.osgi.org>> wrote:
On 13/04/2018 6:32 PM, Neil Bartlett via osgi-dev wrote:
On Thu, Apr 12, 2018 at 10:12 PM, Mark Raynsford via osgi-dev
<osgi-dev@mail.osgi.org <mailto:osgi-dev@mail.osgi.org>> wrote:
On 2018-04-12T20:32:13 +0200
Peter Kriens <peter.kri...@aqute.biz <mailto:peter.kri...@aqute.biz>> wrote:
> Caught between a rock and a hard place with only one way forward …
I should make the point that I don't hate the JPMS. I do think that
it's just barely the minimum viable product, though.
The JVM really did need a module system, both for the maintenance of
the JDK itself and the future features that the system enables.
> Oracle’s strategy is a mystery to me.
I think their strategy is fairly explicable, but I think they did
make
some mistakes with some of the specifics (filename-based
automodules!).
There's a pattern that Oracle tend to follow: They solicit
opinions from
everyone vigorously, and then they implement the smallest possible
subset such that the fewest possible people are pissed off by it. If
there's a possibility of doing something wrong, nothing is done
instead.
While I've seen that principle operate at other times (remember how
controversial erasure was in Java 5?), I'm not sure it's worked that
way in the JPMS case. In fact JPMS does far more than it needed to.
The key feature of JPMS that could not be achieved before, even with
ClassLoaders, was strictly enforced isolation via the accessibility
mechanism, as opposed to the visibility mechanism that is employed
by OSGi. That strict isolation was needed primarily to allow Oracle
to close off JVM internals from application code and thereby prevent
a whole class of security vulnerabilities. Remember that Oracle was
being absolutely slaughtered in the press around 2011-12 over the
insecurity of Java, and most corporates uninstalled it from user
desktops.
Java deserialization vulnerabilities.
Ironically, Java serialization was an exception: rather than taking a
minimalist approach, it was given advanced, if not excessive,
functionality, including the ability to serialize circular object graphs.
Circular relationships generally tend to be difficult to manage.
Due to the support for circular object graphs, it wasn't possible to
use a serialization constructor, so all invariant checks had to be
made after construction, when it was too late. Making matters worse,
an attacker can create any serializable object they want, and because
of the way deserialized objects are created, child class domains
aren't on the call stack during superclass deserialization. An
attacker can take advantage of circular object graph support and
caching to obtain a reference to any object in a deserialized graph.
In essence, they needed to have an alternative locked down
implementation of serialization.
There's nothing wrong with the Java serialization protocol. I wrote
a hardened implementation of Java serialization, refactored from
Apache Harmony's implementation. Implementing classes use a
serialization constructor, which ensures an object cannot be created
unless its invariants are satisfied; this includes the ability to
check inter-class invariants as well. It doesn't support circular
object graphs, and it has limits on how much data can be cached,
limits on array sizes, etc.
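A minimal sketch of the serialization-constructor idea. The
DeserializedFields holder below is hypothetical, standing in for
whatever a hardened stream would actually hand the constructor; the
point is that invariant checks run before any state is committed.

```java
import java.util.Map;

public class SerializationCtor {
    // Hypothetical holder for fields a hardened stream has read but not
    // yet turned into an object (not the actual API described above).
    static final class DeserializedFields {
        private final Map<String, Object> fields;
        DeserializedFields(Map<String, Object> fields) { this.fields = fields; }
        long getLong(String name) { return (Long) fields.get(name); }
    }

    static final class Period {
        final long start, end;

        Period(long start, long end) {
            // Invariant check happens before construction completes.
            if (end < start) throw new IllegalArgumentException("end < start");
            this.start = start;
            this.end = end;
        }

        // Serialization constructor: the stream hands over the read fields
        // and this constructor either establishes the invariant or throws,
        // so no partially constructed object ever escapes to an attacker.
        Period(DeserializedFields f) {
            this(f.getLong("start"), f.getLong("end"));
        }
    }

    public static void main(String[] args) {
        Period ok = new Period(new DeserializedFields(
                Map.of("start", 1L, "end", 5L)));
        System.out.println(ok.end - ok.start); // prints 4

        try {
            // A stream carrying invalid state is rejected at construction.
            new Period(new DeserializedFields(
                    Map.of("start", 5L, "end", 1L)));
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

Contrast this with default Java deserialization, which populates fields
reflectively and can only validate after the object already exists.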
I submitted the API to the OpenJDK development mailing list. There was
interest there, but they decided they needed to support circular
object graphs.
In the end Oracle decided to use whitelisting.
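The whitelisting mechanism Oracle eventually shipped is the JEP 290
serialization filter (java.io.ObjectInputFilter, Java 9+). Roughly:

```java
import java.io.*;

public class WhitelistDemo {
    // Round-trip a value through Java serialization with a JEP 290
    // whitelist filter: only Integer and its superclass Number are
    // allowed; every other class is rejected with InvalidClassException.
    static Object roundTrip(Object value) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(value);
        }
        try (ObjectInputStream ois = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()))) {
            // Pattern syntax: ';'-separated entries, '!' rejects, '*' wildcard.
            ois.setObjectInputFilter(ObjectInputFilter.Config.createFilter(
                    "java.lang.Integer;java.lang.Number;!*"));
            return ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip(42));       // allowed by the whitelist
        try {
            roundTrip("not on the list");        // String is rejected
        } catch (InvalidClassException expected) {
            System.out.println("filtered");
        }
    }
}
```

The filter rejects dangerous classes before their code runs, but unlike
a serialization constructor it does nothing to enforce the invariants
of the classes it does allow.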
Cheers,
Peter.
But they could have achieved this with a thin API, comparable to
ProtectionDomain. If they had done that then OSGi (and other module
systems like JBoss) could have chosen to leverage the API to enforce
strict separation between OSGi bundles.
But they didn't do that. Instead they implemented a whole new,
incompatible module system with its own metadata format, including
changes to the Java language spec. Then they restricted the ability
to apply strict isolation to artifacts that are JPMS modules. With
the thin API they could have still built their own module system on
top, following their own ideas of how modules should work, and
competed with OSGi on a fairer playing field.
Being incomplete and "too strict" is considered preferable to
any kind of maintenance burden or making a mistake that people then
come to depend upon. Then, after a version of something is released,
the dust settles and the people that are still complaining after a
year
or so of implementation are asked for comments again. The process
repeats! You can see this going on with quite a few of their
projects.
A good example of this with the JPMS is that there was a vigorous
insistence that the module graph be a DAG. Now, some way into the
first
year, it's going to be allowed to be cyclic but only at run-time. I
think the process does eventually produce results, but it takes a
long
time to get there and demands a lot of patience from the people
involved. Most VM features seem to start off utterly spartan and then
grow to support the use-cases that people wish they'd supported right
from the start.
Java in particular has awful press, and a userbase that seems to
become
incomprehensibly enraged over all good news and all bad news
indiscriminately, so that doesn't help the perception of the process
(or the results).
I think the key will be to continue complaining for as long as it
takes
to get better interop between OSGi and the JPMS. :)
Interop already works just fine in one direction: OSGi bundles
depending on JPMS modules, with a combination of the changes in R7
to export java.* packages from the system bundle and some creative
use of Provide/Require-Capability. But bidirectional interop will
likely always be impossible or very hard, because JPMS modules are
only allowed to depend on JPMS modules. This was clearly a
deliberate strategy to tilt the table towards JPMS, but it may be
backfiring since -- as you've pointed out -- applications can only
migrate to modules when all of their dependencies are modules,
including third party libraries, and the migration of libraries has
been exceedingly slow.
Neil
-- Mark Raynsford | http://www.io7m.com
_______________________________________________
OSGi Developer Mail List
osgi-dev@mail.osgi.org
<mailto:osgi-dev@mail.osgi.org>
https://mail.osgi.org/mailman/listinfo/osgi-dev
<https://mail.osgi.org/mailman/listinfo/osgi-dev>