Antti Kantee writes:

> On 18/01/16 20:19, Lluís Vilanova wrote:
>> Hello list!

> Hey Lluis, long time no see!

Hi! Sorry I did not respond earlier, but I've been swamped.


>> Do you think it would require a lot of effort to write the necessary wrappers
>> to run a rump kernel as a process separate from the one that actually uses
>> it? And what about having one user app communicating with multiple rump
>> kernels (each providing different services)?

> Well, yes and no, depending on where you split it and what you want to use for
> transport.

> Out-of-the-box you get remote system calls over sockets.  This tutorial
> explains a bit more on how you can use it:
> http://wiki.rumpkernel.org/Tutorial%3A-Getting-Started

> The transport is not fundamentally limited to sockets, just the
> implementation, so you can imagine any transport you'd like.  For example,
> the lowRISC people have been interested in using some lowRISC-specific
> transport for their I/O cores and the Xen folks have been interested in
> using vchan.  But of course those won't work without someone implementing
> support.

Exactly what I wanted. If transports are more or less easily interchangeable,
then I can build my comparison testbed.
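For reference, the out-of-the-box socket transport from the linked tutorial looks roughly like this; the component flags, socket path, and librumphijack location are illustrative and vary per installation:

```shell
# Start a rump kernel server offering a TCP/IP stack, listening on a
# local control socket:
rump_server -lrumpnet -lrumpnet_net -lrumpnet_netinet \
    unix:///tmp/rumpserver

# Point a client at it and hijack its system calls so they are
# forwarded to the rump kernel over the socket:
export RUMP_SERVER=unix:///tmp/rumpserver
LD_PRELOAD=/usr/lib/librumphijack.so some_networked_program
```

Swapping the transport would mean replacing what sits behind that `unix://` (or `tcp://`) URL while keeping the same request/response protocol.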


> You can also split the "kernel" at multiple places, but then you're venturing
> into the domain of manual implementation.  For example, file systems can be
> split between VFS and the file system driver (as offered out-of-the-box on
> NetBSD with p2k), or you can split the networking stack between sockets and
> protocols (like net/lib/libsockin does).

That's what I was aiming for, but I'm not sure how much data is shared across
layers; heavy sharing would make the whole thing much more complex.


> There's no limitation on a single client communicating with multiple rump
> kernel servers.  For example, for the file system server case on NetBSD that
> just works (one host kernel, many file servers).  For the syscall case, you
> do need some special code for the client side.  The server doesn't know who
> else the client is talking to, so there's no difference there, but the
> client requests obviously somehow need to reach the correct server.  In the
> p2k case that selection works automatically because the pathname determines
> the server.

In my case I wanted something as simple as using separate servers for completely
independent subsystems, so that'd be pretty easy as long as they are not
layered.


>> That is, something resembling the prototypical multi-server system on top of
>> a microkernel.
>> 
>> For example, a user app communicating with a process implementing the
>> filesystem using rump, and with another implementing the TCP/IP stack. I
>> understand I won't be able to run subsystems on separate processes, though
>> (e.g., isolate the network driver and TCP/IP stack on different processes).

> Yea, you can't split between tcp/ip and nic out-of-the-box, but if you hack in
> the necessary indirection to the ifnet interface, it should be possible to get
> some sort of results with a day or two of hacking.  In other words, on the
> TCP/IP side you'd write a driver which forwards ifnet requests and on the NIC
> side write a driver which pretends to be the TCP/IP stack.  And then you'd run
> two rump kernels (or more, depending on how many NIC servers you want).  See:
> https://github.com/rumpkernel/src-netbsd/blob/a81387e8432a08cad89b90856efed9801e99858d/sys/net/if.h#L255

> Some things would be "weird", e.g. network tapping working on the NIC server
> but not the TCP/IP server ... unless you build stubs for everything, but
> that might be quite a lot of work.

> Plus, I'm not sure that an architecture split into too many pieces would make
> sense, but at least I've never let those kinds of minor details get in the way
> of work... ;)

Ah, it's just for some quick'n'dirty performance comparison of different
transport methods, including one I wrote that should be nearly as fast as no
separation at all.

This is just to quantify a small case I'm making, and I think I've found a much
simpler setup. In any case, I think I've got an accurate general idea (oxymoron
intended) of how it would work in rump.


Thanks a lot!
  Lluis
