Looking at options for other virtual random drivers, here is a shorter,
simpler version.

1. The rump kernel only needs one hypercall interface, which reads
random numbers and may block, i.e. what is used now with
RUMPUSER_RANDOM_HARD|RUMPUSER_RANDOM_NOWAIT. This should be the only
requirement for hypercalls, and the flags should be removed in the
next revision.
2. Configuring what this is attached to is useful for at least the
POSIX-y implementations; other hypercall implementations are less
likely to need configuration, as they probably only have one available
implementation.
3. In addition, looking at other implementations, configuring the
maximum read size and maximum read frequency is useful to stop
starvation when using the host's /dev/random. It is fine that the
existing driver tries to read a lot of data (after all, you might have
a hardware RNG), but if you don't, throttling to prevent starvation
elsewhere on the host is useful.
4. It is most useful to make these runtime configuration options, so
that no recompile is needed to change them, e.g. set
RUMP_RANDOM_DEV=/dev/random RUMP_RANDOM_MAXREAD=8. NetBSD, for
example, could also support a non-file "device" such as
RUMP_RANDOM_DEV=arc4random, and similarly for OSs that have
non-file-based random syscalls.
5. It is still unclear what the defaults should be. I am still
slightly inclined to make it non-hard random by default, as people may
want to run very large numbers of rump kernels, many of which might
not actually need randomness, but the kernel will try to read some
regardless; if it is hard by default, at least default to small,
infrequent reads.

_______________________________________________
rumpkernel-users mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/rumpkernel-users
