Peter Firmstone wrote:
Sim IJskes - QCG wrote:
On 10/06/2010 02:58 AM, Peter Firmstone wrote:
I want this trust system to be dynamic. I want to be able to change my
mind.

Once you've loaded the class files, that's it, the ClassLoader will only
become garbage collected when no strong references to objects and class
files remain. You've only got one shot at that decision.

That's too much black and white for me. I was talking about multiple invocations of a VM, so that means remembering trust decisions.

That makes sense. Perhaps you could share the trust decision and the result of that decision by submitting it to a feedback service; your VM could send this information before shutting down.


If we have misbehaving code, we cannot stop it easily. So that means changing the trust level and restarting the VM (in no specific order).

Have you got any thoughts on trust levels?


Or are you willing to use Thread.stop()?


No; first implement an UncaughtExceptionHandler and use it with a single-threaded ExecutorService, along with a ThreadFactory that creates Threads utilising the UncaughtExceptionHandler.

This catches a StackOverflowError; we still need strategies for other Errors. Instead of Runnable, Callable is used so that method calls from the reflective proxy return any Exceptions, which can be wrapped in an IOException for the client to handle.
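To make the idea concrete, here's a rough sketch of what I have in mind (the names are mine, not the code on Pepe): a ThreadFactory that installs the UncaughtExceptionHandler on each thread it creates, a single-threaded ExecutorService built from that factory, and work submitted as a Callable so anything thrown inside the task comes back through the Future and can be wrapped in an IOException.

import java.io.IOException;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

class UnmarshallingExecutor {
    private final ExecutorService exec = Executors.newSingleThreadExecutor(
        new ThreadFactory() {
            public Thread newThread(Runnable r) {
                Thread t = new Thread(r, "proxy-unmarshaller");
                t.setDaemon(true);
                t.setUncaughtExceptionHandler(
                    new Thread.UncaughtExceptionHandler() {
                        public void uncaughtException(Thread th, Throwable e) {
                            // clear the smart proxy reference here, see below
                        }
                    });
                return t;
            }
        });

    // Runs untrusted work on the guarded thread; any failure or timeout
    // surfaces to the client as an IOException.
    <T> T call(Callable<T> task, long timeoutSeconds) throws IOException {
        Future<T> f = exec.submit(task);
        try {
            return f.get(timeoutSeconds, TimeUnit.SECONDS);
        } catch (ExecutionException e) {
            throw wrap("remote call failed", e.getCause());
        } catch (TimeoutException e) {
            f.cancel(true);                    // sets the interrupt status
            throw wrap("remote call timed out", e);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw wrap("interrupted", e);
        }
    }

    private static IOException wrap(String msg, Throwable cause) {
        IOException io = new IOException(msg);
        io.initCause(cause);
        return io;
    }
}

The timeout figure and the handling of InterruptedException are my assumptions; the real state change for the smart proxy reference belongs in the handler.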

Since the thread has a low priority, the impact of it running will be low, and we can set the interrupt status of the Thread. The Thread.sleep and Object.wait methods respond to the interrupted state, so eventually the thread will be interrupted and the smart proxy reference set to null, which causes all method calls to throw an IOException. The thread will eventually terminate without taking out the JVM.

Furthermore, objects created during unmarshalling are not shared, so no blocking methods that ignore interruption will prevent the thread from being interrupted. If it's caught in an endless loop, the StackOverflowError will eventually be thrown and caught by the UncaughtExceptionHandler, which deletes the smart proxy reference and sets the correct state on the reflection proxy's InvocationHandler before the thread shuts down, causing all client calls to receive an IOException.
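The "set the reference to null" part might look something like this. Again a hedged sketch only; the class and method names are illustrative, not what's on Pepe:

import java.io.IOException;
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;

class GuardedInvocationHandler implements InvocationHandler {
    private volatile Object smartProxy;   // null => proxy disabled

    GuardedInvocationHandler(Object smartProxy) {
        this.smartProxy = smartProxy;
    }

    // Called by the UncaughtExceptionHandler or the interrupting thread.
    void disable() {
        smartProxy = null;
    }

    public Object invoke(Object proxy, Method method, Object[] args)
            throws Throwable {
        Object target = smartProxy;
        if (target == null) {
            throw new IOException("proxy disabled after unmarshalling failure");
        }
        try {
            return method.invoke(target, args);
        } catch (InvocationTargetException e) {
            throw e.getCause();           // unwrap the real exception
        }
    }
}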

MarshalledInstance already contains the byte array for the objects, so the thread won't block waiting on a Socket and ignore the interrupt.

Authentication doesn't utilise the proxy code, only local code, so if we can't authenticate, shut it down.

I've got some sample code up on Pepe; have a look. Time permitting, I'll expand it further and we can get some test cases going. If there's a show-stopping issue we can't solve, I'll drop the idea. So far I haven't discovered that show stopper.

If you don't like the eventual solution, you can still vote against it before it gets merged; it may never get that far, as you or I might discover that show stopper.

Peter.

Sim,

Firstly, let me say that I like your idea: a feature requiring a jar file to be signed by a known Certificate before allowing class loading. I'd like to implement it together, if you're interested. This would provide good security for parties already known to each other.
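As a starting point for that feature, something along these lines could sit in front of the ClassLoader. A rough sketch only; the trusted certificate set and the treatment of unsigned entries are my assumptions:

import java.io.InputStream;
import java.security.cert.Certificate;
import java.util.Enumeration;
import java.util.Set;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;

class SignedJarCheck {
    /** Returns true if every class entry is signed by a trusted certificate. */
    static boolean trusted(JarFile jar, Set<Certificate> trustedCerts)
            throws Exception {
        byte[] buf = new byte[8192];
        for (Enumeration<JarEntry> e = jar.entries(); e.hasMoreElements();) {
            JarEntry entry = e.nextElement();
            if (entry.isDirectory() || !entry.getName().endsWith(".class")) {
                continue;
            }
            // The entry must be read fully before getCertificates() is valid.
            InputStream in = jar.getInputStream(entry);
            while (in.read(buf) != -1) { /* drain, verifying the signature */ }
            in.close();
            Certificate[] certs = entry.getCertificates();
            if (certs == null) {
                return false;             // unsigned class file
            }
            boolean known = false;
            for (Certificate c : certs) {
                if (trustedCerts.contains(c)) {
                    known = true;
                    break;
                }
            }
            if (!known) {
                return false;
            }
        }
        return true;
    }
}

The JarFile has to be opened with verification on (the default) for getCertificates() to mean anything.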

Back to the topic of the DoS attack: the ExecutorService Thread protects the client's thread stack from corruption / overflow.

It's worth noting that a clever attack implementation thread may run for a very long time. The attacker may use the static Thread.interrupted method to reset the interrupted status of the thread, so we'd be reliant on a StackOverflowError to eventually stop it. There are measures which can be taken to reduce the impact, such as setting the thread priority very low and using the Java 5 Thread stack size constructor with a minimal stack size. This would not be consistent or guaranteed across all platforms and would have to be tested for each; the stack size would be equal to or less than that of the other running threads, depending on the minimum stack size the platform allows. The stack size would also place a limit on the size of the unmarshalled proxy.
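Extending the factory sketch above, something like this is what I mean. Illustrative only; the stack size figure would have to come from per-platform testing, and the VM is free to treat the argument as nothing more than a hint:

import java.util.concurrent.ThreadFactory;

class SmallStackThreadFactory implements ThreadFactory {
    // Common group so the threads can be counted later (see the canary idea below).
    private final ThreadGroup group = new ThreadGroup("untrusted-unmarshalling");
    private final long stackSize;                 // e.g. 128 * 1024, platform dependent

    SmallStackThreadFactory(long stackSize) {
        this.stackSize = stackSize;
    }

    public Thread newThread(Runnable r) {
        Thread t = new Thread(group, r, "unmarshaller", stackSize);
        t.setDaemon(true);
        t.setPriority(Thread.MIN_PRIORITY);      // keep the impact on the client low
        return t;
    }
}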

An ideal attack implementation would make CPU-intensive calculations but not recurse the same methods too deeply, since the depth of recursion before a StackOverflowError is thrown is limited by the stack size. The attacker would then need many service registrars supplying proxies to unsuspecting clients in order to produce enough phantom threads to cause memory problems. For the attack to succeed it needs many attack registrars; the goal is to cause an OutOfMemoryError by creating too many threads. We could be talking thousands of threads for this to work.

The important thing to remember is that application performance would not degrade until memory is impacted.

The number of bad registrars would be the biggest issue.

We can discover registrars using DNS-SD, where each internet domain specifies its registrars (addresses and port numbers are discovered using DNS-SD, so one firewall IP address can serve multiple registrars spread over an arbitrary range of ports).
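For example, JNDI's DNS provider can pull the SRV records out of a domain. The service name "_jini._tcp" here is just my guess at a convention, nothing settled:

import java.util.Hashtable;
import javax.naming.directory.Attribute;
import javax.naming.directory.Attributes;
import javax.naming.directory.InitialDirContext;

class RegistrarDnsLookup {
    static void printSrvRecords(String domain) throws Exception {
        Hashtable<String, String> env = new Hashtable<String, String>();
        env.put("java.naming.factory.initial",
                "com.sun.jndi.dns.DnsContextFactory");
        InitialDirContext ctx = new InitialDirContext(env);
        Attributes attrs = ctx.getAttributes(
                "_jini._tcp." + domain, new String[] { "SRV" });
        Attribute srv = attrs.get("SRV");
        if (srv == null) {
            return;                       // no registrars advertised
        }
        for (int i = 0; i < srv.size(); i++) {
            // Each value looks like: "priority weight port target."
            System.out.println(srv.get(i));
        }
    }
}

The host and port from each SRV record would then feed straight into Unicast Discovery.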

(DNS-SD + Unicast Discovery = discovery of internet Registrars.) N.B. I'm curious about RemoteEvent registrars, Michal?

The attacker would need to take advantage of a security vulnerability in NAT routers with UPnP, opening firewall ports for computers on private networks, to create enough drone computers running bad registrars.

Are you familiar with the use of canaries in mining? Canaries were taken underground by coal miners in the 19th century; when gases reached dangerous levels, the more sensitive canaries would be affected first, alerting the miners and providing time to escape.

So what do we need for reliable services in the face of DoS attacks? A Canary, constantly searching all web domains for bad registrars; when it discovers one, it notifies a server providing a Canary Service. When the Canary is overrun by bad proxy threads (which all belong to a common ThreadGroup and can be counted), it restarts its JVM.
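The overrun check could be as simple as the following, assuming the unmarshalling threads all come from a common ThreadGroup like the one in the factory sketch above. activeCount() is only an estimate, which is fine for a tripwire, and the actual JVM restart is left to a wrapper script:

class CanaryWatchdog implements Runnable {
    private final ThreadGroup unmarshallers;
    private final int limit;                      // e.g. a few hundred threads

    CanaryWatchdog(ThreadGroup unmarshallers, int limit) {
        this.unmarshallers = unmarshallers;
        this.limit = limit;
    }

    public void run() {
        while (true) {
            if (unmarshallers.activeCount() > limit) {
                // report the offending registrar(s) to the Canary Service here
                System.exit(2);                   // wrapper script restarts the JVM
            }
            try {
                Thread.sleep(5000);
            } catch (InterruptedException e) {
                return;
            }
        }
    }
}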

Each domain would provide its own Canary Service along with its own Registrars. Clients in that domain would first discover a local registrar, then look up a Canary Service for a list of domains or addresses of public registrars to avoid. The known bad addresses would then simply not be used in Unicast Discovery.
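Purely a strawman for discussion, here's what a Canary Service's remote interface might look like; none of this exists in River today, and the names are mine:

import java.io.IOException;
import java.rmi.Remote;
import java.util.List;

interface CanaryService extends Remote {
    /** Hosts or domains whose registrars should be skipped in unicast discovery. */
    List<String> badRegistrarHosts() throws IOException;

    /** Called by clients or canaries that have just been bitten. */
    void reportBadRegistrar(String host, int port) throws IOException;
}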

In effect, you're using known trusted Registrars to notify you of untrusted ones you don't know.

The domain administrator might run a local domain canary looking for bad proxies from any services in their own domain; these bad services would be removed from the registrar and their location tracked down.

Registrar Administrators in a domain might perform a search of canary services in other domains to see if they have any bad local service registrars operating from their network and shut them down too.

Most of this could be automated, so clients needn't be too concerned. InternetServiceDiscoveryManager?

Registrars never unmarshall proxies, so they are immune in this regard to the unmarshalling DoS attack.

A company may of course misuse the canary service by discouraging its own domain clients from using particular domains.

This would work very similarly to signing code and asking clients: "Do you trust this code signer?" When you click OK and something bad happens, you report it. With canary services, it could happen automagically.

Clients discovering bad services in other domains could report these to their local canary service.

The trick is to make the DoS attack ineffective enough that it can't take down a client as easily as it can now.

Right now, a client could be taken out simply with a never-ending loop from a single bad registrar. If instead we require two thousand bad registrars to take out the client, we've raised the bar somewhat.

Is it worth investigating further?

What say ye?

Cheers,

Peter.
