Hi Waldek,

On Thu, Mar 28, 2019 at 12:49 AM Waldek Kozaczuk <[email protected]>
wrote:

> Some questions about the evaluation setup and measurements:
>>>
>>> - Did you establish a baseline with bare metal configuration?
>>>
>> How would I create a baseline with a bare metal configuration for 1, 2,
> 4 CPUs? With docker or qemu I can specify the number of cpus.
>

You can use the "taskset" command to restrict a process to run on specific
CPUs. But I think a 4-CPU bare metal baseline is sufficient, because then
you know the maximum expected throughput.
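
For example, to get a 2-CPU baseline you could run something like this (the
server binary name here is just a placeholder):

    taskset -c 0,1 ./rest-server

and likewise with "-c 0" and "-c 0-3" for the 1 and 4 CPU cases.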


>
> - Did you measure CPU utilization during the throughput tests? This is
>>> important because you could be hitting CPU limits with QEMU and Firecracker
>>> because of software processing needed by virtualized networking.
>>>
>> Nothing rigorous. I had mpstat running and I could see that during the 1
> and 2 cpu tests they were pretty highly utilized (80-90%) but only 40-50%
> for the 4 cpu tests. But nothing I recorded.
>

I would encourage you to run something like "vmstat" or "sar" in the
background to record average CPU utilization for each run, so that you can
compare the results.
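
For example, something like this started in the background would capture
per-second CPU utilization for the whole run (the output file name is just
a suggestion):

    sar -u 1 > cpu-util.log &

or "vmstat 1 > cpu-util.log &", and then average the samples afterwards.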

The drop in CPU utilization suggests that you're bound by the network. How
big are the HTTP requests and responses your test is generating? You could
be hitting the ~110 MB/s bandwidth limit of a 1 GbE NIC. Also, note that
40-50% CPU utilization is quite low for a throughput test, so you're mostly
seeing the impact of latency here. This is where network bridge
configuration becomes relevant too: the more layers you have, the higher
the latency is going to be.
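
As a back-of-the-envelope check (the response size and request rate below
are purely illustrative, not your measurements):

    1 Gbit/s = 125 MB/s raw, which leaves roughly 110-118 MB/s of TCP
    payload after Ethernet/IP/TCP framing overhead. So e.g. 10 KB responses
    at ~11,000 requests/s would already saturate a 1 GbE link.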


>
>> - Are the QEMU and Firecracker tests using virtio or vhost?
>>>
>> I thought OSv only supports virtio. Sorry to be ignorant. I have heard
> the terms but what actually is the difference between vhost and virtio?
>

Sorry for not being explicit. Virtio is the guest/hypervisor I/O interface,
which OSv also supports. However, there are two host-side implementations
of the I/O model: virtio (in host userspace, i.e. in QEMU) and vhost (in
the host kernel). You can think of vhost as a host kernel accelerator for
virtio:

http://blog.vmsplice.net/2011/09/qemu-internals-vhost-architecture.html

The main difference is that vhost is supposed to be faster than virtio
because it reduces VM exits.
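
For example, with a tap backend you would enable it with something roughly
like this (the tap interface name is just an example):

    qemu-system-x86_64 ... \
        -netdev tap,id=net0,ifname=tap0,script=no,downscript=no,vhost=on \
        -device virtio-net-pci,netdev=net0

The guest side is unchanged; vhost only replaces the host-side backend.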


>
>
>
>> - Is Docker also configured to use the bridge device? If not, QEMU and
>>> Firecracker also have some additional overheads from the bridging.
>>>
>> I need to check. Per this -
> https://raw.githubusercontent.com/wkozaczuk/unikernels-v-containers/master/run-rest-in-docker.sh
> - I am sure I would expose the container port to the host. So I think I
> was bypassing the bridge.
>
> BTW is there a way to run OSv on QEMU without a bridge to make it visible
> on LAN?
>

AFAICT, this would require either device assignment or SR-IOV, but neither
is supported by OSv due to lack of (real) hardware device drivers.


>
> - Is multiqueue enabled for QEMU and Firecracker? If not, this would
>>> limit the ability to leverage multiple vCPUs.
>>>
>> No idea what you are talking about ;-)
>

IIRC, there's a "queues" option you pass to "-netdev" with QEMU. No idea
about Firecracker.
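
If you want to experiment with it, the host side would look roughly like
this (4 queues is just an example; the usual guidance is vectors =
2 * queues + 2):

    -netdev tap,id=net0,ifname=tap0,vhost=on,queues=4 \
    -device virtio-net-pci,netdev=net0,mq=on,vectors=10

The guest's virtio-net driver also has to enable the extra queues for this
to have any effect.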

That said, we have the following comment in the virtio-net driver:

    //
    // We currently have only a single TX queue. Select a proper TXq here
    // when we implement a multi-queue.
    //

So perhaps we don't even support multiqueue in OSv at the moment...

- Pekka
