Alejandro Abdelnur wrote:
> Yes you would have to do it with classloaders (not 'hello world' but not
> 'rocket science' either).

That's where we differ.

I do actually think that classloaders are incredibly hard to get right, and I say that as someone who has single-stepped through the Axis2 code in terror, and helped soak-test Ant so that it doesn't leak even over extended builds. Even there we draw the line at saying "this lets you run forever". In fact, I think Apache should only allow people to write classloader code if they have passed some special malicious-classloader competence test devised by all the teams.

> You'll be limited on using native libraries, even if you use classloaders
> properly, as a native lib can be loaded into only one classloader at a time.

Plus there's that mess called "endorsed libraries", and you have to worry about native library leakage.

> You will have to ensure you get rid of the task classloader once the task is
> over (thus removing all singleton stuff that may be in it).

Which you can only do by getting rid of every single instance of every single class it loaded ... and that is very, very hard to do.
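To make the lifecycle concrete, here is a minimal sketch of the create-use-close-drop pattern for a per-task classloader. The `runTask` method and the empty classpath are illustrative only; the point is that even after `close()` and dropping the reference, any surviving instance or static reference to a loaded class keeps the whole loader (and its singletons) pinned in memory.

```java
import java.lang.ref.WeakReference;
import java.net.URL;
import java.net.URLClassLoader;

public class TaskLoaderDemo {

    // Hypothetical per-task execution: create an isolated loader, run the
    // task through it, then close it and drop every reference so the GC
    // can reclaim the classes.
    static WeakReference<URLClassLoader> runTask(URL[] taskClasspath) throws Exception {
        URLClassLoader taskLoader = new URLClassLoader(taskClasspath,
                TaskLoaderDemo.class.getClassLoader());
        try {
            // ... resolve and invoke the task's entry point via taskLoader here ...
        } finally {
            taskLoader.close(); // releases open jar file handles (Java 7+)
        }
        WeakReference<URLClassLoader> ref = new WeakReference<>(taskLoader);
        taskLoader = null; // any leaked instance or static field defeats this
        return ref;
    }

    public static void main(String[] args) throws Exception {
        WeakReference<URLClassLoader> ref = runTask(new URL[0]);
        System.gc(); // a hint only; collection is not guaranteed
        System.out.println("loader collected: " + (ref.get() == null));
    }
}
```

Note that the `WeakReference` is only a diagnostic: if it never clears, something in the task's code (a thread, a static cache, a JDK-level singleton) is still holding a class from that loader alive.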


> You will have to put in place a security manager for the code running out
> of the task classloader.

> You'll end up doing something similar to servlet containers' webapp
> classloading model, with the extra burden of hot-loading for each task run.
> Which in the end may have an overhead similar to bootstrapping a JVM for the
> task; this should be measured to see what the time delta is and whether it
> is worth the effort.


It really comes down to start time and total memory footprint. If you can afford the startup delay and the memory, then separate processes give you the best isolation and robustness.

FWIW, in SmartFrog, we let you deploy components (as of last week, Hadoop server components) in their own processes, by declaring the process name to use. These are pooled; you can deploy lots of things into a single process; once they have all terminated, that process halts itself and all is well...there's a single root process on every box that can host the others. Bringing up a nearly-empty child process in advance is good for faster deployment of other stuff later. One issue here is always JVM options: should child processes have different parameters (like max heap size) from the root process?

For long-lived apps, deploying things into child processes is the most robust; it keeps the root process lighter. Deploying into the root process is better for debugging (you just start that one process) and for simplifying liveness: when the root process dies, everything inside it is guaranteed 100% dead. But child processes have to wait to discover that they have lost their links to the parent (if they care about such things) and then start timing themselves out.
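That child-side timeout is usually a small watchdog. This sketch is illustrative (the class name and polling design are mine, not SmartFrog's actual mechanism): the parent heartbeats the child, and if heartbeats stop arriving within the timeout, the child declares the parent dead and can shut itself down.

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical child-side watchdog: if the parent stops heartbeating,
// the child times itself out rather than lingering as an orphan.
public class ParentWatchdog {
    private final AtomicLong lastHeartbeat = new AtomicLong(System.currentTimeMillis());
    private final long timeoutMillis;
    private volatile boolean expired;

    public ParentWatchdog(long timeoutMillis) {
        this.timeoutMillis = timeoutMillis;
    }

    /** Called whenever a ping from the parent arrives. */
    public void heartbeat() {
        lastHeartbeat.set(System.currentTimeMillis());
    }

    /** Poll from a scheduled task; true once the parent is presumed dead. */
    public boolean check() {
        if (System.currentTimeMillis() - lastHeartbeat.get() > timeoutMillis) {
            expired = true; // latch: once presumed dead, stay dead
        }
        return expired;
    }
}
```

The trade-off Steve describes falls out of the timeout value: a short timeout means orphans die quickly but a paused parent (say, in a long GC) falsely kills its children; a long one means orphans linger.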

-steve
