Hi Peter,

Yep, I hadn't gotten to improving that part of the implementation yet.
The threading needs to be revised; system threads with a small stack are
currently used to monitor processes (on Linux) but are not suitable for
evaluating CompletionStages.
Switching to some form of Executor as the source of threads is needed.
I was also trying to preserve the option of a lightweight monitor
of process liveness that does not require a dedicated thread, to improve scalability.
That would allow the thread needed for the CompletionStage to be created only if and
when it is needed.

On 2/10/2015 1:17 PM, Peter Levart wrote:
On 02/10/2015 02:02 PM, Peter Levart wrote:
On 02/10/2015 12:35 PM, Peter Levart wrote:
ProcessHandle.completableFuture().cancel(true) forcibly destroys (destroyForcibly()) the process *and* vice versa: destroy[Forcibly]() cancels the CompletableFuture. I don't know if this is the best way - can't decide yet. In particular, in the implementation it would be hard to achieve the atomicity of both destroying the process and cancelling the future; races are inevitable. So it would be better to think of a process (and the ProcessHandle representing it) as the 1st stage in the processing pipeline, where ProcessHandle.completableFuture() is its dependent stage which tracks real changes of the process. That means the behaviour would be something like the following:

- ProcessHandle.destroy[Forcibly]() triggers destruction of the process which in turn (when successful) triggers completion of CompletableFuture, exceptionally with CompletionException, wrapping the exception indicating the destruction of the process (ProcessDestroyedException?).

- ProcessHandle.completableFuture().cancel(true/false) just cancels the CompletableFuture and does not do anything to the process itself.

In that variant, perhaps it would be more appropriate for ProcessHandle.completableFuture() to be a "factory" for CompletableFuture(s), so that each call returns a new independent instance.

What do you think?
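To illustrate the factory variant, here's a minimal sketch (class and field names are mine, not the actual ProcessHandle implementation): each call derives a fresh dependent future from one internal per-process future, so cancelling one instance leaves the others - and the process - untouched:

```java
import java.util.concurrent.CompletableFuture;

public class FactoryVariant {
    // Hypothetical stand-in for the internal per-process exit future.
    private static final CompletableFuture<Integer> exit = new CompletableFuture<>();

    // Factory: each call returns a new, independent dependent instance.
    static CompletableFuture<Integer> completableFuture() {
        return exit.thenApply(ev -> ev);
    }

    public static void main(String[] args) {
        CompletableFuture<Integer> a = completableFuture();
        CompletableFuture<Integer> b = completableFuture();

        a.cancel(true);   // cancels only this instance, not the process

        exit.complete(0); // simulate the process exiting
        System.out.println("a cancelled: " + a.isCancelled());
        System.out.println("b done normally: " + (b.isDone() && !b.isCancelled()));
    }
}
```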

Contemplating this a little more, perhaps the singleton-per-pid CompletionStage could be OK if it were a "mirror" of the real process state. For that purpose, instead of .completableFuture(), the method would be:

public CompletionStage<ProcessHandle> completionStage()

Returns a CompletionStage<ProcessHandle> for the process. The CompletionStage provides supporting dependent functions and actions that are run upon process completion.

Returns:
a CompletionStage<ProcessHandle> for the ProcessHandle; the same instance is returned for each unique pid.


This would provide the cleanest API, I think, as CompletionStage does not have any cancel(), complete(), obtrudeXXX() or get() methods. One could still obtain a CompletableFuture by calling .toCompletableFuture() on the CompletionStage, but that future would be a 2nd-stage future (like calling .thenApply(x -> x)) which would not propagate cancel(true) to process destruction.

The implementation could still use CompletableFuture under the hood, but exposed wrapped in a delegating CompletionStage proxy.
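A small sketch of that isolation (the names are mine, not from any webrev): the exposed stage is a dependent of the internal future, so even obtaining its CompletableFuture and cancelling it cannot cancel the source:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;

public class StageIsolation {
    public static void main(String[] args) {
        // Internal per-pid future, kept under the hood.
        CompletableFuture<String> internal = new CompletableFuture<>();

        // What completionStage() would expose: a dependent 2nd stage.
        CompletionStage<String> exposed = internal.thenApply(x -> x);

        // A user converts it back to a CompletableFuture and cancels it...
        exposed.toCompletableFuture().cancel(true);

        // ...but cancellation does not propagate upstream to the source.
        System.out.println("internal cancelled: " + internal.isCancelled());
        internal.complete("exited");
        System.out.println("internal done: " + internal.isDone());
    }
}
```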

So the javadoc might be written as:


public abstract void destroy()

Kills the process. Whether the process represented by this Process object is forcibly terminated or not is implementation dependent. If the process is not alive, no action is taken.

If/when the process dies as the result of calling destroy(), the completionStage() completes exceptionally with CompletionException, wrapping ProcessDestroyedException.


public abstract ProcessHandle destroyForcibly()

Kills the process. The process represented by this ProcessHandle object is forcibly terminated. If the process is not alive, no action is taken.

If/when the process dies as the result of calling destroyForcibly(), the completionStage() completes exceptionally with CompletionException, wrapping ProcessDestroyedException.


But I'm still unsure whether it would be better for the completionStage() to complete normally in any case - unless the fact that the process died as a result of being killed could be reliably communicated regardless of who was responsible for the killing (either via ProcessHandle.destroy() or by a KILL/TERMINATE signal originating from outside the VM).

Peter



Hi Roger,

I checked out your branch in jdk9-sandbox and studied the current implementation.

One problem with this approach (returning a singleton-per-pid CompletableFuture or CompletionStage) is that the current processReaperExecutor uses threads with a very small stack size (32 K), and the returned CompletableFuture could be instructed to append a continuation that executes synchronously:

    CompletionStage.thenApply(), CompletionStage.handle(), etc...

... so user code would execute on such a thread and probably get a StackOverflowError...
Yes, a normal stack size is needed.
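That hazard is easy to reproduce outside the process code; a dependent function registered with the non-async thenApply runs on whichever thread completes the future - here a stand-in for the small-stack reaper thread:

```java
import java.util.concurrent.CompletableFuture;

public class SyncContinuation {
    public static void main(String[] args) throws Exception {
        CompletableFuture<Integer> exitValue = new CompletableFuture<>();

        // A synchronous continuation registered before completion...
        CompletableFuture<String> dependent = exitValue.thenApply(
                ev -> "ran on: " + Thread.currentThread().getName());

        // ...executes on the completing thread, i.e. the "reaper".
        Thread reaper = new Thread(() -> exitValue.complete(0), "reaper");
        reaper.start();
        reaper.join();

        System.out.println(dependent.get());
    }
}
```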

Also, multiple ProcessHandle objects, together with a Process object for the same pid, each return a separate CompletableFuture instance (not what the spec says). Each of them also spawns its own thread to wait for process termination.

Here's a better approach (a diff to your branch):

http://cr.openjdk.java.net/~plevart/jdk9-sandbox/JDK-8046092-branch/webrev.01/

...it contains a global registry of internal CompletionStage(s) - one per pid. They are not exposed to users, just used internally (in the Unix ProcessImpl) to execute cleanup, and in .completionStage() to append an asynchronous stage which is returned by the method on each call. So users appending synchronous continuations to the returned CompletionStage would execute them in the common FJ pool.
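The registry idea could be sketched roughly like this (simplified, with made-up names; the real webrev differs, e.g. in how the reaper completes the future):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;
import java.util.concurrent.ConcurrentHashMap;

public class ExitStageRegistry {
    // One internal future per pid; never handed out to user code.
    private static final ConcurrentHashMap<Long, CompletableFuture<Integer>>
            REGISTRY = new ConcurrentHashMap<>();

    // Used internally (by the reaper / cleanup code) to complete the future.
    static CompletableFuture<Integer> internalStage(long pid) {
        return REGISTRY.computeIfAbsent(pid, p -> new CompletableFuture<>());
    }

    // completionStage(): a fresh *asynchronous* dependent stage per call, so
    // synchronous user continuations run in the common FJ pool, not the reaper.
    static CompletionStage<Integer> completionStage(long pid) {
        return internalStage(pid).thenApplyAsync(ev -> ev);
    }

    public static void main(String[] args) throws Exception {
        CompletionStage<Integer> stage = completionStage(42L);
        internalStage(42L).complete(0); // simulate process 42 exiting
        System.out.println("exit value: " + stage.toCompletableFuture().get());
    }
}
```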

Yep, that's the idea, and it should be common across Windows/Unix/Mac OS X and
usable with Process and ProcessHandles.  I'll take another look.

Thanks, Roger


I haven't tested this yet. It's just an idea to share.

Regards, Peter


