Hey,

I'm designing an RPC server for simulators that can run several different 
long-running commands, and I have a few design questions.

My current design is as follows: upon loading the files to simulate, the
simulator returns a list of unions of interfaces, representing all the
simulation commands it supports. Calling one of those commands returns a
reader interface that allows streaming the simulation results. This
currently happens on the event-loop thread, so reading big chunks blocks
the entire server.
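To make that concrete, here's roughly what the schema looks like (a sketch with invented names, not my actual schema):

```capnp
@0xbf5147cbbecf40c1;

# Loading files yields the set of supported commands.
interface Simulator {
  load @0 (files :List(Text)) -> (commands :List(Command));
}

# Each command is a union of capability types; the client sees which
# field is set and thus which kind of command it got.
struct Command {
  union {
    transient @0 :TransientSim;
    sweep @1 :SweepSim;
  }
}

interface TransientSim {
  run @0 () -> (results :ResultReader);
}

interface SweepSim {
  run @0 () -> (results :ResultReader);
}

# Streaming reader for simulation output.
interface ResultReader {
  read @0 (maxChunk :UInt32) -> (data :Data, done :Bool);
}
```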

So my first question is: is there a nicer way to represent an object that
implements only a subset of functions? I could define one mega-interface
and leave some methods unimplemented, but then the issue is how the client
discovers which methods the server actually implements. Or is the correct
approach to use multiple inheritance, deriving the concrete
implementations from several smaller interfaces?
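To illustrate the multiple-inheritance option I mean (invented names again): the schema language lets an interface extend several parents, so a concrete command can combine small single-purpose interfaces, and on the C++ side the server class implements the combined `::Server`:

```capnp
@0x9eb32e19f86ee174;

# Small single-purpose interfaces...
interface Steppable {
  step @0 () -> (state :Data);
}

interface Resettable {
  reset @0 () -> ();
}

# ...composed via multiple inheritance where a command supports both.
interface FullCommand extends(Steppable, Resettable) {}
```

The discovery question remains, though: the client only learns which parent interfaces a capability supports if I also advertise that somewhere, e.g. via the union approach above.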

The next question is how to offload the simulation to a worker thread. I
assumed this would be a very common task, but I can't find much in the
docs. I found kj::Thread, but it's not clear to me how to tie it into the
event-loop promise API.
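The pattern I imagine, from reading kj/async.h, is something like the following untested sketch (`Results` and `runSimulation` are my placeholders; please correct me if this is the wrong tool):

```cpp
#include <kj/async.h>
#include <kj/thread.h>

// Untested sketch: run a blocking simulation on a kj::Thread and
// resolve a promise back on the event-loop thread when it finishes.
kj::Promise<Results> runInThread() {
  // A fulfiller that is safe to fulfill from another thread.
  auto paf = kj::newPromiseAndCrossThreadFulfiller<Results>();

  auto thread = kj::heap<kj::Thread>(
      [fulfiller = kj::mv(paf.fulfiller)]() mutable {
        // Blocking work happens off the event loop.
        fulfiller->fulfill(runSimulation());
      });

  // Keep the thread object alive until the promise resolves;
  // kj::Thread joins in its destructor.
  return paf.promise.attach(kj::mv(thread));
}
```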

Final issue I'm thinking about: for *very* long-running simulations, a
client disconnecting in the middle of a simulation that takes days or
weeks becomes a real concern. This is basically level 2 of
https://capnproto.org/rpc.html#protocol-features, but as far as I
understand, the C++ implementation only supports level 1. What would be a
good way to handle this? If level 2 is just around the corner, I can
ignore the issue for a while, but maybe I need to manually store simulator
references outside the connection and hand out tokens referring to them?
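The manual approach I'm picturing would be a server-side registry, owned outside any connection, mapping opaque tokens to running simulations so a reconnecting client can re-attach. A std-only sketch of the idea (all names invented, RPC layer omitted):

```cpp
#include <map>
#include <memory>
#include <string>

// Sketch: the registry owns long-running simulations independently of
// any RPC connection; clients hold only the token.
struct Simulation {
  std::string name;
  bool finished = false;
};

class SimulationRegistry {
public:
  // Register a simulation and hand back an opaque token.
  std::string add(std::shared_ptr<Simulation> sim) {
    std::string token = "sim-" + std::to_string(nextId++);
    sims[token] = std::move(sim);
    return token;
  }

  // Re-attach: look up a running simulation by token, or nullptr.
  std::shared_ptr<Simulation> find(const std::string& token) const {
    auto it = sims.find(token);
    return it == sims.end() ? nullptr : it->second;
  }

  // Drop a finished simulation.
  void remove(const std::string& token) { sims.erase(token); }

private:
  std::map<std::string, std::shared_ptr<Simulation>> sims;
  unsigned long nextId = 0;
};
```

Then a `restore(token)`-style RPC method on the bootstrap interface could return a fresh capability wrapping the registered simulation.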

Regards,
Pepijn

-- 
You received this message because you are subscribed to the Google Groups 
"Cap'n Proto" group.