Antti Kantee writes:

> On 04/02/16 19:50, Lluís Vilanova wrote:
>> Hi! Sorry I did not respond earlier, but I've been swamped.

> I know you have been; that result was published two days ago:
> http://phdcomics.com/comics/archive.php?comicid=1854
> ;)

And you know what? I immediately thought about that strip when I wrote the
phrase. I was even tempted to change the phrase just to not follow the strip
XDDD


>>> The transport is not fundamentally limited to sockets, just the
>>> implementation, so you can imagine any transport you'd like.  For
>>> example, the lowRISC people have been interested in using some
>>> lowRISC-specific transport for their I/O cores and the Xen folks have
>>> been interested in using vchan.  But of course those won't work
>>> without someone implementing support.
>> 
>> Exactly what I wanted. If transports are more or less easily
>> interchangeable, then I can build my comparison testbed.

> The problem is that the current code is not designed for easy
> interchanging to non-sockets abstractions.  If you don't care about
> finesse, it's somewhat easy to hack in: you just need to cram
> connect+read+write+poll in there (or emulate a sockets-like interface
> with whatever transport you want to use).  Re-abstracting the code to
> allow non-fd-like transports is an undertaking of a completely
> different magnitude, but it should be done eventually.
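
To make sure I understand the seam: I picture the fd-shaped interface you
describe as roughly the following hook table (all names here are invented,
just to fix ideas; I know the current code is not structured like this):

    /* Hypothetical transport hook table mirroring the fd-like
     * operations (connect+read+write+poll) that the current
     * sockets-based code assumes. */
    struct rumpclient_transport {
        int     (*connect)(const char *url);          /* attach to server */
        ssize_t (*xfer_read)(int handle, void *buf, size_t len);
        ssize_t (*xfer_write)(int handle, const void *buf, size_t len);
        int     (*wait)(int handle, int timeout_ms);  /* poll-like */
    };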

Well, I'd like to add a transport using shmem + futex, and another one
using some form of auto-generated RPC system like rpcgen or Thrift.
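
For the shmem + futex one I'm imagining something like the sketch below
(Linux-only, and purely hypothetical scaffolding on my side -- none of
this is rump API; the wait/wake would sit behind the hook table above):

    /* Minimal one-directional shared-memory ring with futex-based
     * blocking.  Assumes messages fit without wrapping; a real
     * transport would handle wraparound, sizes, and both directions. */
    #include <linux/futex.h>
    #include <sys/syscall.h>
    #include <stdatomic.h>
    #include <stdint.h>
    #include <string.h>
    #include <unistd.h>

    #define RINGSZ 4096

    struct shm_ring {
        atomic_uint head;           /* producer bumps after writing  */
        uint8_t     buf[RINGSZ];    /* shared request/response bytes */
    };

    static int
    futex_call(atomic_uint *uaddr, int op, unsigned val)
    {
        return syscall(SYS_futex, uaddr, op, val, NULL, NULL, 0);
    }

    /* Consumer side: block until the producer publishes new bytes. */
    static void
    ring_wait(struct shm_ring *r, unsigned seen_head)
    {
        while (atomic_load(&r->head) == seen_head)
            futex_call(&r->head, FUTEX_WAIT, seen_head);
    }

    /* Producer side: copy a message in, wake one sleeping consumer. */
    static void
    ring_put(struct shm_ring *r, const void *msg, unsigned len)
    {
        unsigned h = atomic_load(&r->head);
        memcpy(&r->buf[h % RINGSZ], msg, len);
        atomic_store(&r->head, h + len);
        futex_call(&r->head, FUTEX_WAKE, 1);
    }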


>>> You can also split the "kernel" at multiple places, but then you're
>>> venturing into the domain of manual implementation.  For example,
>>> file systems can be split between VFS and the file system driver (as
>>> offered out-of-the-box on NetBSD with p2k), or you can split the
>>> networking stack between sockets and protocols (like
>>> net/lib/libsockin does).
>> 
>> That's what I was aiming for, but I'm not sure about the amount of
>> data that is shared across layers, which would make the whole thing
>> much more complex.

> I doubt there's much sharing for the actual, hmm, "work path".  Things
> like statistics, quotas, limits etc. are different, but if you don't
> care about being able to retrieve those from one place, you should be
> more or less fine.  That's my *guess* (based on quite a few years of
> doing this stuff), but I'm sure there are things I can't think of in
> an email.

Well, you know the research space doesn't care much about such niceties
(although we all know the devil is in the details for a real system).


>>> There's no limitation on a single client communicating with multiple
>>> rump kernel servers.  For example, for the file system server case on
>>> NetBSD that just works (one host kernel, many file servers).  For the
>>> syscall case, you do need some special code for the client side.  The
>>> server doesn't know who else the client is talking to, so there's no
>>> difference there, but the client requests obviously somehow need to
>>> reach the correct server.  In the p2k case that selection works
>>> automatically because the pathname determines the server.
>> 
>> In my case I wanted something as simple as using separate servers for
>> completely independent subsystems, so that'd be pretty easy as long
>> as they are not layered.

> If you're happy with "choosing" a subsystem stack at the syscall layer
> (e.g. you're doing NFS, and you accept that both fs and networking are
> handled by the same rump kernel), it should more or less "just work".
> Of course, you might get different results for something like
> getpid(), but you just need to be aware of that -- I doubt you're
> building a getpid server anyway.

> (Actually, for NFS you can split it with sockin, though the hypercall
> interface required by sockin more or less assumes you have sockets
> underneath -- hence the name.)

Yes, this kind of inconsistency would not undermine the performance
results, and I was thinking of embedding a syscall demultiplexer in the
app to select the target server (e.g., read/write on files goes to one
server, and read/write on sockets to another one). Further splitting the
network stack would be the frosting on the cake.
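
Concretely, I'm picturing something like this on the client side (only
rump_sys_read() is real rump API here; the per-server connection
handling is hypothetical, since stock rumpclient talks to one server):

    #include <rump/rump_syscalls.h>  /* rump_sys_read() */
    #include <errno.h>
    #include <stddef.h>
    #include <sys/types.h>

    struct server;                     /* opaque per-server connection */
    extern struct server *fs_server;   /* rump kernel serving file I/O */
    extern struct server *net_server;  /* rump kernel serving sockets  */

    /* Hypothetical: point the transport at the given server before
     * issuing the next remote syscall. */
    extern void use_server(struct server *);

    static struct server *fd_owner[1024]; /* which server issued the fd */

    ssize_t
    demux_read(int fd, void *buf, size_t len)
    {
        struct server *srv;

        if (fd < 0 || fd >= 1024 || (srv = fd_owner[fd]) == NULL) {
            errno = EBADF;
            return -1;
        }
        use_server(srv);             /* route to the owning server */
        return rump_sys_read(fd, buf, len);
    }

The open/socket wrappers would fill in fd_owner[] when an fd is created,
and the same dispatch would apply to write(), close(), and friends.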


>> This is just to quantify a small case I'm making, and I think I've
>> found a much simpler setup. In any case, I think I've got an accurate
>> general idea (oxymoron intended) of how it would work in rump.

> Ok, please report your findings when you're ready to do so.  We also
> value negative results (as opposed to just saying that we do), so that
> we can evaluate whether things should be fixed or declared a feature.

> btw:
> http://wiki.rumpkernel.org/Info%3A-FAQ#user-content-What_is_RUMP
> (The nomenclature slightly changed since the days when we used to talk about
> this stuff more)

Ok. But as I said, after looking at it and with your info, it seems very
doable. My backing away comes down to having found a simpler non-rump
evaluation example that is also more in line with what I want to
demonstrate (we'd also like to avoid being dubbed as only applicable to
prototypical multi-server microkernel systems).

But I haven't started on any of them, so who knows! :)


Cheers!
  Lluis
