Hi Nick,

> What's been run on rump kernels? Maybe test suites, lang.interpreters, 
> popular servers?
> 
> e.g.  test suites (fsx), lang.interpreters (luajit), popular servers 
> (thttpd)? 

I'd like to chime in regarding success stories of using Rump kernels.

There are actually two stories:


Rump kernels as donors for OS protocol stacks
---------------------------------------------

I am working on a component-based OS project called Genode [1]. In line
with Unix philosophy, it is made out of small components that can be
combined in flexible ways. But in contrast to Unix, Genode's components
are not merely applications but also cover OS functionalities such as
file systems, device drivers, runtime environments, and even kernels.

Naturally, when creating an OS from scratch, a lot of functionality is
needed before it eventually becomes useful. For example, we have been
working for more than 7 years to get our basic needs with respect to
device-driver support covered. Some drivers (like AHCI) were written
from scratch whereas others were ported from Linux (like the USB stack)
or iPXE (network drivers). Even after years of experience, the porting
of complex driver stacks remains very challenging and labor intensive.
For example, the port of the Intel wireless stack from the Linux kernel
to Genode took us more than 6 months [2].

When we came to the point where we needed file systems, we had to decide
whether to develop a file system from scratch or port an existing file
system. We discarded the former approach because we did not want to
waste our time reinventing the wheel. Also, as we are not very
proficient in the domain of file systems, our solution would certainly
be lacking compared to the state of the art. The second approach - to
port a file system - turned out to be a long-winded story. First, we
took a look at the Linux kernel but found the file-system code to be
very much intertwined with other kernel subsystems, so it appeared
difficult to isolate. Next, we looked at FUSE-based file systems. After
adding FUSE support to Genode, we found that the file systems we longed
for most (like ext4) are poorly supported. In hindsight, this is
natural because FUSE is mostly used on Linux, where ext4 is already given.

At this point, we discovered Rump kernels. Antti's book was quite
encouraging. So we (mostly Josef Söntgen and Sebastian Sumpf) started
to experiment. To our positive surprise, Rump kernels enabled us to
obtain functional file systems in a matter of a few weeks. The Rump
hypercall interface, which we needed to implement as the glue between
Rump kernels and Genode, comprises less than 3,000 lines of code. The
most difficult part was actually dealing with the build system.
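To give a rough idea of what such glue looks like: the hypercall
interface consists of C functions that the host platform provides to
the Rump kernel. As an illustrative sketch only (the prototypes follow
the rumpuser(3) interface, but the bodies below are my own assumption
of a trivial POSIX-hosted implementation, not Genode's actual code),
the two memory-allocation hypercalls could be backed by the host's
allocator like this:

```c
#include <stdlib.h>
#include <stdint.h>

/* Sketch of two rumpuser hypercalls as a POSIX-like host might
 * implement them. In a real platform port, the prototypes come from
 * <rump/rumpuser.h> and the implementations map onto the host OS. */
int rumpuser_malloc(size_t len, int alignment, void **memp)
{
	/* posix_memalign() demands an alignment that is at least
	 * sizeof(void *) and a power of two */
	if (alignment < (int)sizeof(void *))
		alignment = sizeof(void *);

	/* returns 0 on success, an errno value otherwise */
	return posix_memalign(memp, (size_t)alignment, len);
}

void rumpuser_free(void *mem, size_t len)
{
	(void)len; /* the rump kernel passes the size; host free() ignores it */
	free(mem);
}
```

The bulk of a platform's glue code follows this pattern: threads,
synchronization, clocks, and I/O hypercalls each get mapped onto the
corresponding host primitives.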

Thanks to Rump kernels, we got our long-standing issue of file-system
support covered. So we can now enjoy having the time-tested file
systems of the NetBSD kernel available on our system. The effort needed
on our side was tremendously lower than in our prior porting
experiences.

There are still some concerns that call for optimizations. First, for
our use case of reusing merely a specific kernel subsystem (like a
file-system driver), the supporting infrastructure required around the
subsystem is quite large, so our Rump-based file-system component
contains a lot of "support code" in addition to the "feature code".
E.g., it has to carry substantial parts of the NetBSD kernel such as
the VFS or the block cache (we would really love to handle the block
cache in a separate component instead). Second, since the Rump kernel
spawns one host thread for each NetBSD kernel thread, the component's
footprint with respect to thread usage is much higher than it could
be. But those things do not at all diminish the value that Rump kernels
provide to us.

When it comes to ext4, we still need to look elsewhere because ext4 is
not supported by the NetBSD kernel. Hence, for now, we are mostly using
ext2.


Rump tools for populating disk images
-------------------------------------

The second use case of Rump kernels for our project is the population of
disk images. Most of our development work happens on Linux. Normally,
when assembling a disk image for a Genode system, one has to create an
empty file, loop-mount it, add a partition table, mount the loop device,
copy the files to the image, unmount the file system, and remove the
loop device. Even though this procedure does create and manipulate an
ordinary file owned by the user (the disk image), one needs root
privileges for creating the loop device or mounting the file system.
This is not just annoying when the regular development workflow
includes such a disk-image creation step; it also complicates automated
tests. Thanks to Rump kernels, we can perform the entire disk-image
creation in the Linux user land. So we effectively use them as a
user-level file-system infrastructure that is otherwise not available on
Linux.
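For reference, the root-requiring procedure described above looks
roughly like the following sketch (device names, sizes, and paths are
illustrative assumptions, not our actual build scripts):

```shell
# Create an empty 64 MiB image file - no privileges needed for this step
dd if=/dev/zero of=disk.img bs=1M count=64

# Everything below requires root privileges, even though disk.img is an
# ordinary file owned by the user
LOOP=$(sudo losetup -fP --show disk.img)          # attach as loop device, e.g. /dev/loop0
sudo parted -s "$LOOP" mklabel msdos mkpart primary ext2 1MiB 100%
sudo mkfs.ext2 "${LOOP}p1"                        # create the file system
sudo mount "${LOOP}p1" /mnt                       # mount the partition
sudo cp -r build/* /mnt/                          # populate the image
sudo umount /mnt                                  # unmount the file system ...
sudo losetup -d "$LOOP"                           # ... and detach the loop device
```

With rump-based tools, the same image can be partitioned, formatted,
and populated entirely as an unprivileged user process.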


I hope that this longish email convinces you to investigate Rump
kernels further!

Cheers
Norman

[1] http://genode.org
[2] http://genode.org/documentation/release-notes/14.11#Intel_wireless_stack


-- 
Dr.-Ing. Norman Feske
Genode Labs

http://www.genode-labs.com · http://genode.org

Genode Labs GmbH · Amtsgericht Dresden · HRB 28424 · Sitz Dresden
Geschäftsführer: Dr.-Ing. Norman Feske, Christian Helmuth

_______________________________________________
rumpkernel-users mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/rumpkernel-users
