On 2018/11/6 11:32, Jason Wang wrote:
> 
> On 2018/11/6 11:17 AM, jiangyiwen wrote:
>> On 2018/11/6 10:41, Jason Wang wrote:
>>> On 2018/11/6 10:17 AM, jiangyiwen wrote:
>>>> On 2018/11/5 17:21, Jason Wang wrote:
>>>>> On 2018/11/5 3:43 PM, jiangyiwen wrote:
>>>>>> Currently vsock only supports sending/receiving small packets, so it
>>>>>> can't achieve high performance. As previously discussed with Jason Wang,
>>>>>> I revisited the mergeable rx buffer idea from vhost-net and implemented
>>>>>> it in vhost-vsock. It allows a big packet to be scattered into different
>>>>>> buffers and improves performance significantly.
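>>>>>>
>>>>>> The scheme mirrors virtio-net's mergeable rx buffers: the receive header
>>>>>> tells the guest how many descriptors one packet was spread across, so the
>>>>>> host can fill several normal-sized rx buffers instead of needing one huge
>>>>>> buffer per packet. A minimal sketch of such a header (the struct name here
>>>>>> just mirrors virtio-net's virtio_net_hdr_mrg_rxbuf for illustration; the
>>>>>> actual layout is defined in the patches):
>>>>>>
>>>>>>     struct virtio_vsock_mrg_rxbuf_hdr {
>>>>>>             __le16 num_buffers; /* number of rx buffers used by this packet */
>>>>>>     };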
>>>>>>
>>>>>> I wrote a tool to test vhost-vsock performance, mainly sending big
>>>>>> packets (64K) in both the Guest->Host and Host->Guest directions. The
>>>>>> results are as follows:
>>>>>>
>>>>>> Performance before:
>>>>>>                  Single socket            Multiple sockets (max bandwidth)
>>>>>> Guest->Host   ~400MB/s                 ~480MB/s
>>>>>> Host->Guest   ~1450MB/s                ~1600MB/s
>>>>>>
>>>>>> Performance after:
>>>>>>                  Single socket            Multiple sockets (max bandwidth)
>>>>>> Guest->Host   ~1700MB/s                ~2900MB/s
>>>>>> Host->Guest   ~1700MB/s                ~2900MB/s
>>>>>>
>>>>>> From the test results, performance is improved significantly, and guest
>>>>>> memory is not wasted.
>>>>> Hi:
>>>>>
>>>>> Thanks for the patches and the numbers are really impressive.
>>>>>
But instead of duplicating code between sock and net, I was considering 
using virtio-net as a transport for vsock. Then we would get all the existing 
features like batching, mergeable rx buffers and multiqueue. Want to 
consider this idea? Thoughts?
>>>>>
>>>>>
>>>> Hi Jason,
>>>>
>>>> I am not very familiar with virtio-net, so I am afraid I can't give much
>>>> useful advice. I have several questions:
>>>>
>>>> 1. If virtio-net is used as the transport, the guest would see a
>>>> virtio-net device instead of a virtio-vsock device, right? Would vsock
>>>> then only act as a transport between the socket layer and the net_device?
>>>> Users would still create sockets with the AF_VSOCK type, right?
>>>
>>> Well, there are many choices. What you need is just to keep the socket API 
>>> and hide the implementation. For example, you can keep the vsock device in 
>>> the guest and switch to vhost-net on the host. We probably need a new feature 
>>> bit or header to let vhost know we are passing vsock packets, and vhost-net 
>>> could forward the packets to the vsock core on the host.
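>>>
>>> Just a sketch to make the idea concrete (the feature bit name and the
>>> tunnel header below are made up for illustration, not part of the virtio
>>> spec):
>>>
>>>     /* Hypothetical feature bit: guest and vhost-net negotiate that packets
>>>      * on this queue carry a vsock header after the usual virtio-net
>>>      * header. */
>>>     #define VIRTIO_NET_F_VSOCK_TUNNEL  56   /* illustrative bit number */
>>>
>>>     struct vsock_tunnel_hdr {
>>>             struct virtio_net_hdr_mrg_rxbuf net_hdr; /* existing virtio-net header */
>>>             struct virtio_vsock_hdr vsock_hdr;       /* vsock addressing/credit info */
>>>     };
>>>
>>>     /* On the host, vhost-net would then hand the inner payload to the vsock
>>>      * core (the virtio_transport receive path) instead of forwarding it to
>>>      * the tap/netdev backend. */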
>>>
>>>
>>>> 2. I want to know whether work on this idea has already started, and
>>>> what the current progress is.
>>>
>>> Not yet started. I just want to hear from the community. If this sounds 
>>> good, are you interested in implementing it?
>>>
>>>
>>>> 3. And what is Stefan's opinion?
>>>
>>> I talked with Stefan a little about this during KVM Forum. I think he 
>>> tends to agree with the idea. Anyway, let's wait for his reply.
>>>
>>>
>>> Thanks
>>>
>>>
>> Hi Jason,
>>
>> Thanks for your reply. What you want is to avoid duplicating code and to
>> keep using the existing virtio-net features.
> 
> 
> Yes, technically we can use the virtio-net driver in the guest as well, but 
> we can do it step by step.
> 
> 
>> Yes, if this sounds good and most people agree with the idea, I am very
>> happy to implement it.
> 
> 
> Cool, thanks.
> 
> 
>>
>> In addition, I hope you can review these patches before the new idea is
>> implemented; after all, the performance improvement is already available. :-)
> 
> 
> Ok.
> 
> 
> So the patch actually does three things:
> 
> - mergeable buffer implementation
> 
> - increase the default rx buffer size
> 
> - add used and signal guest in a batch
> 
> It would be helpful if you could measure the performance improvement of 
> each part independently. That would give reviewers a better understanding 
> of how much each part helps.
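> 
> For the batching part, the usual pattern in vhost backends is to collect 
> used descriptors and signal the guest once per batch rather than once per 
> packet. A rough sketch using the existing vhost helpers (VHOST_VSOCK_BATCH 
> is a made-up threshold name here):
> 
>     struct vring_used_elem heads[VHOST_VSOCK_BATCH];
>     int done = 0;
> 
>     for (;;) {
>             head = vhost_get_vq_desc(vq, vq->iov, ARRAY_SIZE(vq->iov),
>                                      &out, &in, NULL, NULL);
>             if (head < 0 || head == vq->num)
>                     break;
> 
>             /* ... copy one packet into the guest rx buffer(s) ... */
> 
>             heads[done].id = cpu_to_vhost32(vq, head);
>             heads[done].len = cpu_to_vhost32(vq, len);
>             if (++done == VHOST_VSOCK_BATCH) {
>                     /* add the used descriptors and kick the guest once */
>                     vhost_add_used_and_signal_n(&vsock->dev, vq, heads, done);
>                     done = 0;
>             }
>     }
>     if (done)
>             vhost_add_used_and_signal_n(&vsock->dev, vq, heads, done);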
> 
> Thanks
> 
> 

Great, I will measure the performance of each part independently in the next version.

Thanks,
Yiwen.

>>
>> Thanks,
>> Yiwen.
>>
>>>> Thanks,
>>>> Yiwen.
>>>>
>>> .
>>>
>>
> 
> .
> 

