Re: [RFC] Support firecracker

2019-01-06 Thread Dor Laor
On Sun, Jan 6, 2019 at 5:49 PM Asias He  wrote:

>
>
> On Sun, Jan 6, 2019 at 11:32 AM Waldek Kozaczuk 
> wrote:
>
>>
>>
>> On Saturday, January 5, 2019 at 4:09:23 AM UTC-5, Dor Laor wrote:
>>>
>>> Great stuff Waldek! Is unix socket the normal way to pass parameters for
>>> Firecracker?
>>>
>> I think so. There is also an experimental vsock interface, but I am not
>> familiar with it and not sure what its purpose would be (
>> https://github.com/firecracker-microvm/firecracker/issues/650).
>>
>
> Vsock was added for easier guest-host communication with zero
> configuration (I happen to be the author of the vsock kernel module).  In
> theory, they can use vsock for anything that a socket is useful for between
> host and guest.
> Btw, there is some work to implement NFS over vsock.
>

Good point; this way they can pre-boot OSv and run the app under vsock
control (although any guest can be controlled this way too).
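The guest-host vsock channel Asias describes can be sketched in a few lines. This is a hypothetical illustration, not code from the thread: it assumes a Linux guest with `AF_VSOCK` support (Python 3.7+), and the port number is made up.

```python
import socket

# Well-known CID of the host in the vsock addressing scheme
# (from linux/vm_sockets.h).
VMADDR_CID_HOST = 2

def vsock_address(cid, port):
    """AF_VSOCK addresses are simply (cid, port) pairs."""
    return (cid, port)

def connect_to_host(port):
    """Open a stream socket from the guest to a host-side vsock listener.
    Requires a Linux kernel with vsock support loaded."""
    sock = socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM)
    sock.connect(vsock_address(VMADDR_CID_HOST, port))
    return sock
```

Because vsock needs no IP configuration, the guest only has to know a port number agreed with the host, which is what makes it attractive for control channels.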


>
>
>>
>>> How's the boot speed vs Qemu?
>>>
>> I have posted some numbers in my other email to the group but here are
>> some more details.
>>
>> First of all, my numbers come from running tests on my 5-year-old MacBook
>> Pro that I have been using for all my OSv development over the last three
>> years. It is a 4-core 2.3 GHz i7 machine. Not sure how it compares to newer
>> models, but given that Moore's law no longer holds it might still be pretty fast.
>>
>> Architecture:x86_64
>> CPU op-mode(s):  32-bit, 64-bit
>> Byte Order:  Little Endian
>> CPU(s):  8
>> On-line CPU(s) list: 0-7
>> Thread(s) per core:  2
>> Core(s) per socket:  4
>> Socket(s):   1
>> NUMA node(s):1
>> Vendor ID:   GenuineIntel
>> CPU family:  6
>> Model:   70
>> Model name:  Intel(R) Core(TM) i7-4850HQ CPU @ 2.30GHz
>> Stepping:1
>> CPU MHz: 898.037
>> CPU max MHz: 3500.
>> CPU min MHz: 800.
>> BogoMIPS:4589.68
>> Virtualization:  VT-x
>> L1d cache:   32K
>> L1i cache:   32K
>> L2 cache:256K
>> L3 cache:6144K
>> L4 cache:131072K
>>
>> Additionally, firecracker startup/configuration is more fine-grained than
>> QEMU, which does everything in one shot (start the VMM process, configure
>> resources, start the guest, etc). With firecracker it breaks down as
>> follows:
>>
>>    1. Start the VMM process, which listens on a socket for API calls (I
>>    have not measured it, but it seems very fast).
>>    2. Make API calls to:
>>       - create an instance, specifying the number of vCPUs, memory and
>>       kernel loader file path
>>       - configure the block device
>>       - configure the networking device
>>       - all these calls seem to take less than 1 ms and can apparently be
>>       executed in any order
>>    3. Make an API call to start the instance, which eventually starts the guest
>>
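The sequence above can be sketched from the host side. This is a hedged sketch, not Waldek's actual tooling: the socket path is made up, the resource names and bodies mirror the requests shown in the logs below, and `UnixHTTPConnection` is a small helper assumed here because Python's stdlib has no built-in HTTP-over-unix-socket client.

```python
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection variant that talks over a Unix domain socket."""
    def __init__(self, socket_path):
        super().__init__("localhost")  # host is ignored; we supply the socket
        self.socket_path = socket_path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self.socket_path)
        self.sock = sock

def put(socket_path, resource, body):
    """Issue a synchronous PUT (firecracker's API style); return HTTP status."""
    conn = UnixHTTPConnection(socket_path)
    try:
        conn.request("PUT", resource, json.dumps(body),
                     {"Content-Type": "application/json"})
        return conn.getresponse().status
    finally:
        conn.close()

# Hypothetical usage (paths are illustrative):
# put("/tmp/firecracker.sock", "/boot-source",
#     {"kernel_image_path": "loader-stripped.elf",
#      "boot_args": "--bootchart /hello"})
# put("/tmp/firecracker.sock", "/drives/rootfs",
#     {"drive_id": "rootfs", "path_on_host": "usr.rofs",
#      "is_root_device": False, "is_read_only": False})
# put("/tmp/firecracker.sock", "/actions", {"action_type": "InstanceStart"})
```

Each PUT is synchronous, which is why the configuration calls can be issued in any order before the final `InstanceStart` action.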
>> So here are some log snippets (2,3) and OSv bootchart from running
>> native-example rofs image (no networking):
>>
>> Add block device:
>> 2019-01-05T20:21:44.*096*617431 [anonymous-instance:INFO:api_server/src/
>> http_service.rs:599] The API server received a synchronous Put request
>> on "/drives/rootfs" with body "{\n "drive_id": "rootfs",\n "path_on_host":
>> "/home/wkozaczuk/projects/osv/build/release/usr.rofs",\n "is_root_device":
>> false,\n "is_read_only": false\n }".
>> 2019-01-05T20:21:44.*096*659174 [anonymous-instance:INFO:api_server/src/
>> http_service.rs:565] The synchronous Put request on "/drives/rootfs"
>> with body "{\n "drive_id": "rootfs",\n "path_on_host":
>> "/home/wkozaczuk/projects/osv/build/release/usr.rofs",\n "is_root_device":
>> false,\n "is_read_only": false\n }" was executed successfully. Status code:
>> 204 No Content.
>>
>> Create instance:
>> 2019-01-05T20:21:44.*1085*43339 [anonymous-instance:INFO:api_server/src/
>> http_service.rs:599] The API server received a synchronous Put request
>> on "/boot-source" with body "{\n "kernel_image_path":
>> "/home/wkozaczuk/projects/waldek-osv/build/release/loader-stripped.elf",\n
>> "boot_args": "--bootchart /hello"\n }".
>> 2019-01-05T20:21:44.*108*584295 [anonymous-instance:INFO:api_server/src/
>> http_service.rs:565] The synchronous Put request on "/boot-source" with
>> body "{\n "kernel_image_path":
>> "/home/wkozaczuk/projects/waldek-osv/build/release/loader-stripped.elf",\n
>> "boot_args": "--bootchart /hello"\n }" was executed successfully. Status
>> code: 204 No Content.
>>
>> Start instance that starts guest and terminates the process eventually:
>> 2019-01-05T20:21:44.119820357 [anonymous-instance:INFO:api_server/src/
>> http_service.rs:599] The API server received a synchronous Put request
>> on "/actions" with body "{\n "action_type": "InstanceStart"\n }".
>> 2019-01-05T20:21:44.119837722 [anonymous-instance:INFO:vmm/src/
>> lib.rs:1104] VMM received instance start command
>> 2019-01-05T20:21:44.119903817 [anonymous-instance:INFO:vmm/src/
>> vstate.rs:97] Guest memory starts at 7faddec0
>>
>> 

Re: [RFC] Support firecracker

2019-01-06 Thread Asias He
On Sun, Jan 6, 2019 at 11:32 AM Waldek Kozaczuk 
wrote:

>
>
> On Saturday, January 5, 2019 at 4:09:23 AM UTC-5, Dor Laor wrote:
>>
>> Great stuff Waldek! Is unix socket the normal way to pass parameters for
>> Firecracker?
>>
> I think so. There is also an experimental vsock interface, but I am not
> familiar with it and not sure what its purpose would be (
> https://github.com/firecracker-microvm/firecracker/issues/650).
>

Vsock was added for easier guest-host communication with zero configuration
(I happen to be the author of the vsock kernel module).  In theory, they can
use vsock for anything that a socket is useful for between host and guest.
Btw, there is some work to implement NFS over vsock.


>
>> How's the boot speed vs Qemu?
>>
> I have posted some numbers in my other email to the group but here are
> some more details.
>
> First of all, my numbers come from running tests on my 5-year-old MacBook
> Pro that I have been using for all my OSv development over the last three
> years. It is a 4-core 2.3 GHz i7 machine. Not sure how it compares to newer
> models, but given that Moore's law no longer holds it might still be pretty fast.
>
> Architecture:x86_64
> CPU op-mode(s):  32-bit, 64-bit
> Byte Order:  Little Endian
> CPU(s):  8
> On-line CPU(s) list: 0-7
> Thread(s) per core:  2
> Core(s) per socket:  4
> Socket(s):   1
> NUMA node(s):1
> Vendor ID:   GenuineIntel
> CPU family:  6
> Model:   70
> Model name:  Intel(R) Core(TM) i7-4850HQ CPU @ 2.30GHz
> Stepping:1
> CPU MHz: 898.037
> CPU max MHz: 3500.
> CPU min MHz: 800.
> BogoMIPS:4589.68
> Virtualization:  VT-x
> L1d cache:   32K
> L1i cache:   32K
> L2 cache:256K
> L3 cache:6144K
> L4 cache:131072K
>
> Additionally, firecracker startup/configuration is more fine-grained than
> QEMU, which does everything in one shot (start the VMM process, configure
> resources, start the guest, etc). With firecracker it breaks down as
> follows:
>
>    1. Start the VMM process, which listens on a socket for API calls (I
>    have not measured it, but it seems very fast).
>    2. Make API calls to:
>       - create an instance, specifying the number of vCPUs, memory and
>       kernel loader file path
>       - configure the block device
>       - configure the networking device
>       - all these calls seem to take less than 1 ms and can apparently be
>       executed in any order
>    3. Make an API call to start the instance, which eventually starts the guest
>
> So here are some log snippets (2,3) and OSv bootchart from running
> native-example rofs image (no networking):
>
> Add block device:
> 2019-01-05T20:21:44.*096*617431 [anonymous-instance:INFO:api_server/src/
> http_service.rs:599] The API server received a synchronous Put request on
> "/drives/rootfs" with body "{\n "drive_id": "rootfs",\n "path_on_host":
> "/home/wkozaczuk/projects/osv/build/release/usr.rofs",\n "is_root_device":
> false,\n "is_read_only": false\n }".
> 2019-01-05T20:21:44.*096*659174 [anonymous-instance:INFO:api_server/src/
> http_service.rs:565] The synchronous Put request on "/drives/rootfs" with
> body "{\n "drive_id": "rootfs",\n "path_on_host":
> "/home/wkozaczuk/projects/osv/build/release/usr.rofs",\n "is_root_device":
> false,\n "is_read_only": false\n }" was executed successfully. Status code:
> 204 No Content.
>
> Create instance:
> 2019-01-05T20:21:44.*1085*43339 [anonymous-instance:INFO:api_server/src/
> http_service.rs:599] The API server received a synchronous Put request on
> "/boot-source" with body "{\n "kernel_image_path":
> "/home/wkozaczuk/projects/waldek-osv/build/release/loader-stripped.elf",\n
> "boot_args": "--bootchart /hello"\n }".
> 2019-01-05T20:21:44.*108*584295 [anonymous-instance:INFO:api_server/src/
> http_service.rs:565] The synchronous Put request on "/boot-source" with
> body "{\n "kernel_image_path":
> "/home/wkozaczuk/projects/waldek-osv/build/release/loader-stripped.elf",\n
> "boot_args": "--bootchart /hello"\n }" was executed successfully. Status
> code: 204 No Content.
>
> Start instance that starts guest and terminates the process eventually:
> 2019-01-05T20:21:44.119820357 [anonymous-instance:INFO:api_server/src/
> http_service.rs:599] The API server received a synchronous Put request on
> "/actions" with body "{\n "action_type": "InstanceStart"\n }".
> 2019-01-05T20:21:44.119837722 [anonymous-instance:INFO:vmm/src/lib.rs:1104]
> VMM received instance start command
> 2019-01-05T20:21:44.119903817 [anonymous-instance:INFO:vmm/src/
> vstate.rs:97] Guest memory starts at 7faddec0
>
> 2019-01-05T20:21:44.1*24*761979 [anonymous-instance:INFO:api_server/src/
> http_service.rs:565] The synchronous Put request on "/actions" with body
> "{\n "action_type": "InstanceStart"\n }" was executed successfully. Status
> code: 204 No Content.
> 2019-01-05T20:21:44.*141*417423 [anonymous-instance:INFO:vmm/src/
> lib.rs:1163] Vmm 

Build failed in Jenkins: osv-build-nightly #1709

2019-01-06 Thread jenkins
See 


--
[...truncated 139.08 KB...]
Adding /usr/lib/jvm/java/jre/lib/zi/America/Lima...
[... remainder of truncated build log omitted: hundreds of similar "Adding ..." lines for JRE files and OSv test modules ...]

Re: [RFC] Support firecracker

2019-01-06 Thread Nadav Har'El
On Sun, Jan 6, 2019 at 5:46 AM Waldek Kozaczuk  wrote:

> One more thing - our serial console is dreadfully slow (
> https://github.com/cloudius-systems/osv/issues/921). So I think even
> printing bootchart info slows things by 3-5 ms. Without bootchart the 17 ms
> number drops to 12-11 ms.
>

This is Firecracker's serial port driver:

https://github.com/firecracker-microvm/firecracker/blob/master/devices/src/legacy/serial.rs

I can't say I understand all of this code, but from a cursory look I
think that:
1. It doesn't use any sophisticated FIFO. There is just one byte of THR
(transmit hold register).
2. It *does* support output interrupts: It checks bit 2 of the IER
(interrupt enable register) - IER_THR - and triggers an interrupt. Since I'm
guessing they didn't implement this feature by accident, it is likely that
Linux uses it, i.e., instead of a busy-wait loop waiting for the empty
bit (as in our putchar()) it goes to sleep, and allows the hypervisor to
work on the output - assuming the exit (which we already had when we wrote
the byte) wasn't enough. I don't know how it works on firecracker.

If you can run Linux on firecracker too, you can compare the slowness of
the serial console - output 1000 characters (or whatever) in the guest, and
measure how slow it is. If Linux isn't faster than OSv at this, there's
probably not much we can do. If Linux is much faster than OSv, then maybe
these THR interrupts are indeed the reason.
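The suggested comparison could be scripted roughly like this. A guest-side sketch only, and it assumes a Python runtime is available in the guest (for OSv one would more likely write the same loop in C):

```python
import sys
import time

def time_console_output(n=1000):
    """Write n characters to the console and return the elapsed seconds.
    Running the same loop on Linux and on OSv under firecracker would
    show whether the serial console slowness is OSv-specific."""
    start = time.monotonic()
    for _ in range(n):
        sys.stdout.write("x")
    sys.stdout.flush()  # make sure the bytes actually reach the device
    return time.monotonic() - start
```

Per-character cost is then just the elapsed time divided by `n`; a busy-wait driver should show up as a dramatically higher per-character cost.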

-- 
You received this message because you are subscribed to the Google Groups "OSv 
Development" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to osv-dev+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: OSv boots on AWS firecracker in 7ms (seven)

2019-01-06 Thread Nadav Har'El
On Sat, Jan 5, 2019 at 8:36 AM Waldek Kozaczuk  wrote:

> Happy New Year everybody!
>
> I just wanted to share some exciting news about my experience getting OSv
> to boot on AWS firecracker. I will be sending a follow-up RFC patch.
>
> For those unfamiliar with firecracker: firecracker is a new, very light and
> simple VMM (Virtual Machine Monitor) implemented in Rust by an AWS team. It
> uses KVM as an accelerator and is targeted exclusively at running Linux-based
> micro VMs. In essence it replaces QEMU in the QEMU/KVM combination. The
> nicest thing about it, besides the super fast startup time of around 5 ms, is
> the REST API that can be used to create VMs, configure virtual devices and
> start the VM. You can read more about it here - https://firecracker-microvm.github.io/.
>

Very nice! I'm not familiar with Firecracker (I heard about it, but didn't
know any details), but this does sound like good news, and impressive work.


> Here are the things about firecracker which are not documented very
> clearly and which I have learned the hard way:
>
>    - firecracker sets the vCPU to long mode, sets up page tables the Linux
>    way and expects the kernel to be in vmlinux format (64-bit uncompressed
>    ELF); OSv's loader.elf is almost that, except it expects to be called in
>    protected mode
>    - firecracker implements virtio based on the virtio-mmio device model,
>    so *there is no PCI*; in other words, instead of PCI devices we have
>    mmio devices (read here -
>    https://www.kraxel.org/virtio/virtio-v1.0-cs03-virtio-gpu.html#x1-1080002
>    )
>    - there is no ACPI, which means there is no MADT table to parse vCPU
>    information from (firecracker provides this information through the MP
>    table)
>    - OSv implements the pre-finalized virtio spec while firecracker
>    implements the finalized one, which means that there are some subtle
>    differences around "legacy interface, devices" (see here -
>    https://www.kraxel.org/virtio/virtio-v1.0-cs03-virtio-gpu.html#x1-60001);
>    this has some small implications on how both virtio-net and virtio-blk
>    should be implemented.
>
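On the virtio-mmio point: such devices are announced to the guest via Linux-style `virtio_mmio.device=<size>@<base>:<irq>` command-line parameters (one appears in the boot log later in this thread). A small sketch of parsing that format; the full base address used in the example is made up, since the one in the log is truncated:

```python
import re

# Size-suffix multipliers used by the kernel parameter syntax.
_SUFFIX = {"": 1, "K": 1024, "M": 1024 ** 2, "G": 1024 ** 3}

def parse_virtio_mmio_device(value):
    """Parse a '<size>@<base>:<irq>' value such as '4K@0xd0000000:5'
    into a (size_bytes, base_address, irq) tuple."""
    m = re.fullmatch(r"(\d+)([KMG]?)@(0x[0-9a-fA-F]+):(\d+)", value)
    if m is None:
        raise ValueError("bad virtio_mmio.device value: %r" % value)
    size = int(m.group(1)) * _SUFFIX[m.group(2)]
    return size, int(m.group(3), 16), int(m.group(4))
```

The guest driver probes the 4 KiB MMIO window at the given base address and registers the given IRQ, instead of enumerating a PCI bus.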
>
Looks interesting. I agree with Pekka's comments about the structure of the
code and the patches. I'll take a look at the actual patch now.


> So in the last 2 weeks or so I have hacked together a patch that enhances
> OSv enough to run on firecracker. I have managed to run simple tests
> (native hello world) with ramfs, rofs and zfs images. I have also run some
> basic tests to validate that networking is working.
>
> Here are some log snippets with timings from running a native-example rofs
> image:
> OSv v0.52.0-24-g3628f560
> Cmdline: --bootchart /hello virtio_mmio.device=4K@0xd000:5
> virtio-blk::probe() -> found virtio-mmio device ...
> Solaris: NOTICE: Cannot find the pool label for '/dev/vblk0.1'
> disk read (real mode): 0.00ms, (+0.00ms)
> uncompress lzloader.elf: 0.00ms, (+0.00ms)
> TLS initialization: 1.43ms, (+1.43ms)
> .init functions: 2.57ms, (+1.14ms)
> SMP launched: 3.96ms, (+1.39ms)
> VFS initialized: 4.69ms, (+0.74ms)
> Network initialized: 5.02ms, (+0.33ms)
> pvpanic done: 5.66ms, (+0.63ms)
> pci enumerated: 5.66ms, (+0.00ms)
> drivers probe: 5.69ms, (+0.03ms)
> drivers loaded: 6.62ms, (+0.93ms)
> ZFS mounted: 7.27ms, (+0.65ms)
> Total time: 7.27ms, (+0.00ms)
> Hello from C code
>
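Bootchart lines like the ones above are easy to post-process. A small sketch; the regex simply matches the `name: Xms, (+Yms)` shape shown in the output:

```python
import re

_LINE = re.compile(
    r"^(?P<name>.+?):\s*(?P<total>[\d.]+)ms,\s*\(\+(?P<delta>[\d.]+)ms\)$")

def parse_bootchart(lines):
    """Return (event, total_ms, delta_ms) tuples from OSv --bootchart output.
    Non-matching lines (banners, app output) are skipped."""
    events = []
    for line in lines:
        m = _LINE.match(line.strip())
        if m:
            events.append((m.group("name"),
                           float(m.group("total")),
                           float(m.group("delta"))))
    return events
```

This makes it straightforward to diff boot phases between runs, e.g. to see where the rofs and zfs images diverge.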
> firecracker log:
> 2019-01-05T00:46:05.925475359 [anonymous-instance:INFO:api_server/src/
> http_service.rs:599] The API server received a synchronous Put request on
> "/actions" with body "{\n "action_type": "InstanceStart"\n }".
> 2019-01-05T00:46:05.925506960 [anonymous-instance:INFO:vmm/src/lib.rs:1104]
> VMM received instance start command
> 2019-01-05T00:46:05.925561657 [anonymous-instance:INFO:vmm/src/
> vstate.rs:97] Guest memory starts at 7ff5ba40
> *2019-01-05T00:46:05.929402940* [anonymous-instance:INFO:api_server/src/
> http_service.rs:565] The synchronous Put request on "/actions" with body
> "{\n "action_type": "InstanceStart"\n }" was executed successfully. Status
> code: 204 No Content.
> *2019-01-05T00:46:05.943126559* [anonymous-instance:INFO:vmm/src/
> lib.rs:1163] Vmm is stopping.
>
> As you can see the total OSv execution time is around 13-14 ms.
>
> The equivalent ZFS image takes under 60 ms to execute.
>
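The 13-14 ms figure can be reproduced directly from the firecracker log timestamps quoted above. A quick sketch; Python's datetime keeps only microseconds, so the nanosecond tail of the log timestamps is trimmed:

```python
from datetime import datetime

def log_delta_ms(t1, t2):
    """Milliseconds between two firecracker log timestamps such as
    '2019-01-05T00:46:05.929402940' (truncated to microsecond precision)."""
    fmt = "%Y-%m-%dT%H:%M:%S.%f"
    a = datetime.strptime(t1[:26], fmt)  # keep date/time plus 6 fractional digits
    b = datetime.strptime(t2[:26], fmt)
    return (b - a).total_seconds() * 1000.0
```

Applied to the InstanceStart-success and "Vmm is stopping" timestamps above, this gives roughly 13.7 ms, matching the figure quoted in the email.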
> Regards,
> Waldek
>
>
