On 9/15/2011 6:36 AM, Peter wrote:
----- Original message -----
First, I'll admit to not having looked at your attached code, so I can't
comment on it.
The idea of starting a new jvm process for every proxy downloaded still
troubles me though. Surely you can still DOS attack something by getting it
to start new JVMs for every download. JVMs are not renowned for being
lightweight. Plus, it's going to have an impact on legit services if they're all
stuck in their own VMs. Sorry, I'm not articulating myself very well (need
more coffee); I'm just uneasy about this approach.
That's understandable, on both counts. :) I figured one jvm per registrar
might not be as bad, which means the registrar has to share the jvm with all
the other services it returns from lookup.
What I'm wondering is whether the problem you're trying to solve is the real
issue. "Freely" executing code is problematic. We know that "process based"
isolation, as well as "filesystem permission based" isolation, can create
a reasonable sandbox against "physical" damage to the computing environment. In
the end, both types of isolation use a "check and protect" mechanism
through the APIs that are part of those domains.
When we look at the "shared state" issue in Java, the complexity of "check and
protect" goes up by orders of magnitude, because ALL loaded classes have to have
APIs that take the right precautions.
This is one of the reasons why I hate JVM permissions as a "protection"
mechanism overall, because they are not part of the "machine", but part of
"select code". Granted, the ones that are "filesystem permission based" seem to
do well, but I worry about System.load("arbitrary path") and other things that
user code can do once particular permissions are granted (such as AllPermission
opening all the doors).
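To make the "AllPermission opens all the doors" point concrete, the behaviour is visible directly in the java.security API, without even installing a SecurityManager (a minimal, self-contained sketch):

```java
import java.io.FilePermission;
import java.security.AllPermission;

public class AllDoors {
    // AllPermission.implies() returns true for every permission a
    // security check could ever construct, including reading or writing
    // any file and loading arbitrary native libraries.
    public static boolean opensEverything() {
        AllPermission all = new AllPermission();
        return all.implies(new FilePermission("<<ALL FILES>>", "read,write,execute,delete"))
            && all.implies(new RuntimePermission("loadLibrary.*"));
    }
}
```

So once AllPermission is granted, every "check and protect" call in every loaded class answers yes.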
In my work on Jini services and clients, I have not focused on controlling
specifics of client activities with permissions.
Peter's work here is a very complex issue because of how many more doors there
are that can swing open to various parts of the system running a downloaded
service API.
If you want to do isolation through separate processes, there is a hard problem.
For deserialized classes that are not pure proxies on the outside (smart proxies
of some nature), it will be difficult to have the right "type" on a "proxy with
invocation handler" solution that calls across processes. Direct field access
that is part of a public API will be difficult to support with an invocation
handler as well.
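To illustrate the "type" and field-access problem: a JDK dynamic proxy can only intercept *interface method* calls through its InvocationHandler. A public field on a concrete class is read directly by the JVM, so the handler never sees it and nothing can be forwarded across a process boundary. A minimal sketch (the Echo interface is hypothetical):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public class ProxyLimits {

    // Hypothetical service interface: only interface *methods* can be
    // intercepted. Proxy cannot emulate a class with public fields, and
    // the proxy's type is the interface, never the smart proxy's class.
    public interface Echo {
        String echo(String s);
    }

    public static Echo makeProxy() {
        // The handler could forward each call to another process; here
        // it just answers locally to show where interception happens.
        InvocationHandler h = (proxy, method, args) -> "proxied:" + args[0];
        return (Echo) Proxy.newProxyInstance(
                Echo.class.getClassLoader(), new Class<?>[] { Echo.class }, h);
    }
}
```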
ServiceUI presents a whole load of issues about how the UI component
hierarchy works when running in another process. If, from the outside, there is
a ServiceUI container that would use JComponentFactory to integrate the
service's UI into a more complex display, getting that to work in a separate
process would be problematic, because you'd have to abstract Graphics
activities, mouse events etc., through the mechanism too.
I'm kinda with Tom: there just seems to be so much complexity here that I'm
not sure yet how to attack this issue. We'd be creating so much code that we
really would have built a completely new platform, it seems. That might be
what's needed. The bigger picture is not really gelling into a tangible set of
concrete characteristics and actionable work items.
If we thought about it from the perspective of the "app store" model as Tom
mentioned, where we'd expect "no" inter-working of services' codebases and
classpaths, then maybe completely separate processes with a "standard" serviceUI
container would be a good solution. It would then start to feel more like the
"Applet" environment. But we'd still have the "Windows can create confusing and
abusive UI components" issue that the Applet model tries to handle through
disallowing borderless windows.
Humm...
Gregg Wonderly
> I'm still more in favor of creating implicitly trusted lookup services,
> where we can assume that if I get a proxy from that service then the proxy
> can be trusted. I'm fuzzy on the details about how you differentiate a
> trusted lookup service from an untrusted one.

Identity - I'm thinking about public pgp keys and referee services. Perhaps
something along the lines of the Convergence plugin for Firefox, but using
services.

> Also, how would you verify a service trying to register with it as a
> trusted service?

Client authentication and a method constraint on register.

> Sort of like the app-store model, where security is provided by manual
> checks and legal contracts rather than a "well, I downloaded the proxy and
> it didn't bite me" approach, which is what this seems to be.
> Sorry for hijacking your thread.
> Tom
That's cool, I don't mind.
Cheers,
Peter.
Grammar and spelling have been sacrificed on the altar of messaging via
mobile device.
On 15 Sep 2011 03:17, "Peter Firmstone"<peter.firmst...@zeus.net.au> wrote:
Although I plan to set up a sub process jvm for isolation, I originally
wrote the following IsolatedExecutor in an attempt to contain the damage
remote code could do during discovery V1 or unmarshalling.
Discovery or unmarshalling can be executed in a Runnable or Callable and
isolated to a single thread with no privileges; it handles
StackOverflowError and OutOfMemoryError gracefully, without blowing up
the jvm.
Once an Error occurs, the executor thread is interrupted and the
Executor shut down.
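Not having seen the attached code, the shape described above might be sketched roughly like this (an assumed minimal form: one worker thread, Error caught inside the task wrapper, executor retired on the spot):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class IsolatedRunner {
    private final ExecutorService exec = Executors.newSingleThreadExecutor();

    // Runs one untrusted task; if it throws an Error (StackOverflowError,
    // OutOfMemoryError...), the worker thread is interrupted via
    // shutdownNow() and the executor is never reused.
    public <T> T run(Callable<T> task) throws Exception {
        return exec.submit(() -> {
            try {
                return task.call();
            } catch (Error e) {
                exec.shutdownNow();           // interrupts this worker too
                throw new Exception("task failed with " + e, e);
            }
        }).get();
    }

    public boolean isShutDown() {
        return exec.isShutdown();
    }
}
```

The caller sees the failure as an ExecutionException from get, and no further tasks can reach the damaged thread.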
Attacks that remote code could perform have not been eliminated, such as
changing public static fields, or calling insufficiently guarded static
methods.
To further isolate the remote code, its ClassLoader could load its own
jsk-platform, so platform classes aren't shared, reducing the shared
state to the java platform's static fields and methods.
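One way to get that non-shared copy of the platform classes, assuming a separate jsk-platform.jar on disk (the path below is hypothetical), is a URLClassLoader with a null parent, so only the bootstrap loader's core java.* classes remain shared:

```java
import java.io.File;
import java.net.MalformedURLException;
import java.net.URL;
import java.net.URLClassLoader;

public class IsolatedLoader {
    // With a null parent, only the bootstrap loader is consulted, so
    // everything except core java.* classes resolves from these jars --
    // including a private copy of jsk-platform.jar.
    public static URLClassLoader forJars(String... jarPaths) {
        try {
            URL[] urls = new URL[jarPaths.length];
            for (int i = 0; i < jarPaths.length; i++) {
                urls[i] = new File(jarPaths[i]).toURI().toURL();
            }
            return new URLClassLoader(urls, null);
        } catch (MalformedURLException e) {
            throw new IllegalArgumentException(e);
        }
    }
}
```

Static state in the re-loaded platform classes is then per-loader, not shared with the host application.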
To eliminate all shared state, the remote code can be placed in a sub
process jvm.
The IsolatedExecutor could be made multi-threaded; the first thread that
threw an Error would cause the executor to shut down. Other threads might
then cause the jvm to throw further errors, but these could be handled too.
I don't think it's advisable to continue execution after an Error is
encountered, instead, the cause should be logged and the jvm gracefully
shutdown and restarted.
In a multithreaded application, any thread outside of the
IsolatedExecutor could throw an OutOfMemoryError. To handle this, an
UncaughtExceptionHandler could be set in the parent
ThreadGroup, to log the error and restart the jvm. Since the jvm
ignores any exceptions thrown from an UncaughtExceptionHandler, you
can't really recover from that situation.
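The ThreadGroup handler could look roughly like this (the restart itself is omitted; logging followed by System.exit, so a wrapper script or parent process relaunches the jvm, is the assumed policy):

```java
public class LoggingGroup extends ThreadGroup {

    volatile Throwable lastError;   // kept for inspection in this sketch

    public LoggingGroup(String name) {
        super(name);
    }

    // Called by the jvm when a thread in this group dies with an
    // uncaught throwable; Errors are logged so the cause survives the
    // subsequent jvm restart.
    @Override
    public void uncaughtException(Thread t, Throwable e) {
        if (e instanceof Error) {
            lastError = e;
            System.err.println("fatal " + e + " in thread " + t.getName());
            // a real handler would now System.exit() so a supervisor
            // process can restart the jvm
        } else {
            super.uncaughtException(t, e);
        }
    }
}
```

Threads created in this group (or in child groups) fall back to it automatically when no per-thread handler is set.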
If the IsolatedExecutor catches the Error first, the caller has the
opportunity to identify the task that caused the issue and perform some
detailed reporting prior to restarting.
This functionality would be useful to restart a sub process jvm.
In existing ExecutorService implementations, RunnableFuture swallows
Throwable, so the ExecutorService is unable to shut down the thread, even
when the client later calls get. The difference with IsolatedExecutor is
that the thread that ran the future, once the caller calls get after
completion, has its interrupt status set, and the executor is shut down.
I've got some junit tests that deliberately cause OutOfMemoryError and
StackOverflowError. On one occasion with a standard ExecutorService, it
caused one OS (Windows 7) to go into some kind of hard shutdown; no such
problem with IsolatedExecutor. In fact, junit goes on and finishes all
the other tests after IsolatedExecutor passes its junit tests, catching
both types of error.
I am of course wondering if I can use reflective proxies with
InvocationHandlers to encapsulate all calls in Callables, to a proxy
that runs in a sub process jvm within its own ClassLoader containing
jsk-platform.jar.
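A rough sketch of that idea, with a local ExecutorService standing in for the sub process jvm (a real version would serialize the Callable over IPC; the Adder interface is purely for illustration):

```java
import java.lang.reflect.Proxy;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;

public class CallableProxy {

    // Hypothetical service interface used only to demonstrate the wrapping.
    public interface Adder {
        int add(int a, int b);
    }

    // Wraps every interface method call in a Callable and submits it to
    // the executor; a sub-process version would ship the Callable over
    // IPC to the isolated jvm and deserialize the result instead.
    @SuppressWarnings("unchecked")
    public static <T> T isolate(Class<T> iface, T target, ExecutorService exec) {
        return (T) Proxy.newProxyInstance(
                iface.getClassLoader(), new Class<?>[] { iface },
                (proxy, method, args) -> {
                    Callable<Object> call = () -> method.invoke(target, args);
                    return exec.submit(call).get();
                });
    }
}
```

As noted above, this only covers interface methods; public fields and concrete types on a smart proxy would still bypass the handler.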
An alternative to this sort of isolation would be to have some kind of
trust advisory service that downloads codebases and runs FindBugs
bytecode analysis?
I mean, we don't have to run the code to find out if it has a nasty bug.
Just thought I'd post this in case someone else finds it interesting.
Cheers,
Peter.