Happy New Year everybody!

I just wanted to share some exciting news about my experience getting OSv to 
boot on AWS Firecracker. I will be sending a follow-up RFC patch.

For those unfamiliar with Firecracker: it is a new, very light and simple 
VMM (Virtual Machine Monitor) implemented in Rust by an AWS team. It uses 
KVM as an accelerator and is targeted exclusively at running Linux-based 
micro VMs; in essence, it replaces QEMU in the QEMU/KVM combination. The 
nicest thing about it, besides a super fast startup time of around 5 ms, is 
the REST API that can be used to create VMs, configure virtual devices and 
start a VM. You can read more about it here - 
https://firecracker-microvm.github.io/.
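For context, starting a microVM through that REST API boils down to a few 
PUT requests on Firecracker's Unix socket. Roughly, the requests look like 
this (the file paths and drive id are made-up examples; the endpoints and 
field names follow the Firecracker API docs, and the last request is the 
one visible in the Firecracker log further down):

```
PUT /boot-source
{ "kernel_image_path": "loader.elf", "boot_args": "--bootchart /hello" }

PUT /drives/rootfs
{ "drive_id": "rootfs", "path_on_host": "osv.raw",
  "is_root_device": false, "is_read_only": false }

PUT /actions
{ "action_type": "InstanceStart" }
```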

Here are some things about Firecracker which are not documented very 
clearly and which I learned the hard way:

   - Firecracker sets the vCPU to long mode, sets up page tables the Linux 
   way and expects the kernel to be in vmlinux format (an uncompressed 
   64-bit ELF); OSv's loader.elf is almost that, except that it expects to 
   be entered in protected mode
   - Firecracker implements virtio, but based on the virtio-mmio device 
   model, so *there is no PCI*; in other words, instead of PCI devices we 
   have mmio devices (read here 
   - https://www.kraxel.org/virtio/virtio-v1.0-cs03-virtio-gpu.html#x1-1080002)
   - there is no ACPI, which means there is no MADT table to parse vCPU 
   information from (Firecracker provides this information through the MP 
   table)
   - OSv implements the pre-finalized virtio spec while Firecracker 
   implements the finalized one, which means there are some subtle 
   differences around the "legacy interface" of devices (see here 
   - https://www.kraxel.org/virtio/virtio-v1.0-cs03-virtio-gpu.html#x1-60001); 
   this has some small implications for how both virtio-net and virtio-blk 
   should be implemented.
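To make the mmio point concrete: with no PCI bus to enumerate, the guest 
learns about each virtio-mmio device from a kernel command-line entry of 
the form virtio_mmio.device=<size>@<base>:<irq> (an example appears in the 
boot log further down). A minimal parsing sketch could look like this - 
not OSv's actual code; the function name and error handling are mine:

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Parse a virtio_mmio.device command-line value such as "4K@0xd0000000:5"
 * into the MMIO region size, its guest-physical base address and the IRQ
 * line assigned to the device. Returns 0 on success, -1 on a malformed
 * string. Only K/M/G size suffixes are handled in this sketch. */
static int parse_virtio_mmio_device(const char *spec, uint64_t *size,
                                    uint64_t *base, int *irq)
{
    unsigned long long sz = 0, ba = 0;
    char suffix = 0;

    /* %llx accepts an optional "0x" prefix per scanf's conversion rules. */
    if (sscanf(spec, "%llu%c@%llx:%d", &sz, &suffix, &ba, irq) != 4)
        return -1;

    switch (suffix) {
    case 'K': sz <<= 10; break;   /* kilobytes */
    case 'M': sz <<= 20; break;   /* megabytes */
    case 'G': sz <<= 30; break;   /* gigabytes */
    default:  return -1;          /* require an explicit size suffix */
    }

    *size = sz;
    *base = ba;
    return 0;
}
```

A driver probe routine can then map the returned region and register the 
IRQ instead of walking PCI configuration space.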

So over the last 2 weeks or so I have hacked together a patch that enhances 
OSv enough to run on Firecracker. I have managed to run simple tests 
(a native hello world) with ramfs, rofs and ZFS images. I have also run 
some basic tests to validate that networking works.

Here are some log snippets with timings from running a native example rofs 
image:
OSv v0.52.0-24-g3628f560
Cmdline: --bootchart /hello virtio_mmio.device=4K@0xd0000000:5
virtio-blk::probe() -> found virtio-mmio device ...
Solaris: NOTICE: Cannot find the pool label for '/dev/vblk0.1'
disk read (real mode): 0.00ms, (+0.00ms)
uncompress lzloader.elf: 0.00ms, (+0.00ms)
TLS initialization: 1.43ms, (+1.43ms)
.init functions: 2.57ms, (+1.14ms)
SMP launched: 3.96ms, (+1.39ms)
VFS initialized: 4.69ms, (+0.74ms)
Network initialized: 5.02ms, (+0.33ms)
pvpanic done: 5.66ms, (+0.63ms)
pci enumerated: 5.66ms, (+0.00ms)
drivers probe: 5.69ms, (+0.03ms)
drivers loaded: 6.62ms, (+0.93ms)
ZFS mounted: 7.27ms, (+0.65ms)
Total time: 7.27ms, (+0.00ms)
Hello from C code

Firecracker log:
2019-01-05T00:46:05.925475359 
[anonymous-instance:INFO:api_server/src/http_service.rs:599] The API server 
received a synchronous Put request on "/actions" with body "{\n 
"action_type": "InstanceStart"\n }".
2019-01-05T00:46:05.925506960 [anonymous-instance:INFO:vmm/src/lib.rs:1104] 
VMM received instance start command
2019-01-05T00:46:05.925561657 
[anonymous-instance:INFO:vmm/src/vstate.rs:97] Guest memory starts at 
7ff5ba400000
*2019-01-05T00:46:05.929402940* 
[anonymous-instance:INFO:api_server/src/http_service.rs:565] The 
synchronous Put request on "/actions" with body "{\n "action_type": 
"InstanceStart"\n }" was executed successfully. Status code: 204 No Content.
*2019-01-05T00:46:05.943126559* 
[anonymous-instance:INFO:vmm/src/lib.rs:1163] Vmm is stopping.

As you can see from the two highlighted timestamps, the total OSv execution 
time is around 13-14 ms.

The equivalent ZFS image takes under 60 ms to execute.

Regards,
Waldek
