On 30/09/14 19:19, Martin Lucina wrote:
> Hi Antti,
>
> [email protected] said:
>> Anyway, if I'm allowed to expand on the subject a bit, some food for
>> thought. Did you port UDFtoolkit by replacing the system calls with
>> rump_sys_foo()? Or do you actually directly call the kernel UDF driver?
>
> I used the rump_sys_foo() system calls directly, that seemed like the most
> straightforward approach. I could call the kernel UDF driver directly, but
> for that I'd (presumably) need to understand the NetBSD VFS layer APIs and
> provide upcalls for the kernel UDF driver to be able to access the image
> data/block device -- is that what you meant by the latter approach?
Yes, calling VFS requires you to know VFS. libp2k is a decent example
of how to do it, though it does have its faults. One problem with the
VFS API is that it's not stable. It's not hideously unstable either,
just not bedrock-unchanging.
Block device support is quite baked into the rump kernel, so you don't
have to provide it yourself. Actually, when I say "quite baked in", I
mean "too much baked in", i.e. it's difficult to provide alternative
block device implementations (e.g. one that would use aio). It's kind
of on the "should fix" list, but fixing is hard without breaking
existing users too much. I have some notes scribbled down somewhere, in
case someone is interested.
> The latter would definitely be smaller -- currently using the "full kernel"
> approach gives me a ~1.4MB increase in size for a stripped amd64 debug
> build of the application -- but would increase the amount of work I need to
> do and space is not that much of a concern for this project.
Yes. Rump kernels started from file system kernel drivers being used in
userspace for microkernel-type servers. In fact, there was no
rump_sys_foo() support back then. Cementing the syscall layer so
heavily into librump{vfs,net,dev} is maybe a mistake, since it adds
some unnecessary bloat, but so far there has been no real motivation
to fix it either -- just don't use the syscall layer if you don't
want to.
> My application needs a GUI, so if I were to split off the core into a
> "rumprun" module I'd then need a communication channel to the GUI. This
> could be done with something like nanomsg or plain IPC, debatable
> whether it is worth the effort.
Anything in that area constitutes "research" in my eyes -- very
interesting research, too.
>> However, if rumprun suits your application, it's a pretty
>> powerful paradigm. I'm hoping some day we can offer Docker-like icing
>> for rump kernels, making all of what I described easy to accomplish for
>> generic applications.
>
> That would be neat and looks like it wouldn't be too much work. It'd
> certainly please the HN hipsters :-)
Anyone who wants to look cool for HN hipsters, don't hesitate to get
started today ;)
> As it stands, I've found the rump kernel concept and its implementation
> easy to work with and well documented -- better than many FOSS projects,
> *especially* those involving complex build systems and components.
Thanks, I guess. Personally, I think the build system (buildrump.sh)
needs a complete rototill, but it has become so big and scary that I'm
afraid to touch it any longer. I still want to do it some day, though.
That's why the "tools" repository exists.
A large part of those thanks should also be directed to the NetBSD
project. After all, we rely on NetBSD's build infra quite heavily.
_______________________________________________
rumpkernel-users mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/rumpkernel-users