Re: Extending virtio_console to support multiple ports

2009-08-31 Thread Anthony Liguori
Amit Shah wrote:
>> No flags, assume it's a streaming protocol and don't assume anything  
>> about message sizes.  IOW, when you send clipboard data, send size and  
>> then the data.  QEMU consumes bytes until it reaches size.
>> 
>
> Same intent but a different method: I'll have to specify that a
> particular piece of data is the "size" and that the data following it
> is the actual data stream.
>   

Sounds like every stream protocol in existence :-)
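
A minimal sketch of the length-prefixed framing described above; the
function name and the 32-bit host-order size header are illustrative
assumptions, not part of any posted patch:

    /* Sketch: length-prefixed framing over a byte stream. The sender
     * emits a 32-bit size header, then the payload; the receiver
     * consumes bytes until 'size' bytes have arrived. */
    #include <stdint.h>
    #include <unistd.h>

    static int send_frame(int fd, const void *data, uint32_t len)
    {
        const char *p = data;
        uint32_t hdr = len;     /* host byte order, for the sketch only */

        if (write(fd, &hdr, sizeof(hdr)) != sizeof(hdr))
            return -1;
        while (len > 0) {
            ssize_t n = write(fd, p, len);  /* short writes are legal */
            if (n <= 0)
                return -1;
            p += n;
            len -= n;
        }
        return 0;
    }

The receive side loops the same way, accumulating bytes until it has
'hdr' bytes, at which point the frame is complete.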

>>> - A lock has to be introduced to fetch one unused buffer from the list
>>>   and pass it on to the host. And this lock has to be a spinlock, just
>>>   because writes can be called from irq context.
>>>   
>> I don't see a problem here.
>> 
>
> You mean you don't see a problem in using a spinlock vs not using one?
>   

Right.  This isn't a fast path.

> Userspace will typically send the entire buffer to be transmitted in one
> system call. If it's large, the system call will have to be broken into
> several. This results in multiple guest system calls, each one to be
> handled with a spinlock held.
>
> Compare this with the entire write handled in one system call in the
> current method.
>   

Does it matter?  This isn't a fast path.
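
For reference, the pattern under discussion is roughly the following
sketch; the structures and names are hypothetical, and
spin_lock_irqsave() is used because the write path can be entered from
irq context:

    #include <linux/list.h>
    #include <linux/spinlock.h>

    struct port_buffer {                  /* hypothetical */
        struct list_head list;
        /* data pointer, length, ... */
    };

    struct port {                         /* hypothetical */
        spinlock_t free_lock;
        struct list_head free_list;
    };

    /* Pop one unused buffer off the free list; safe from irq context. */
    static struct port_buffer *get_free_buf(struct port *port)
    {
        struct port_buffer *buf = NULL;
        unsigned long flags;

        spin_lock_irqsave(&port->free_lock, flags);
        if (!list_empty(&port->free_list)) {
            buf = list_first_entry(&port->free_list,
                                   struct port_buffer, list);
            list_del(&buf->list);
        }
        spin_unlock_irqrestore(&port->free_lock, flags);
        return buf;
    }

The critical section is a couple of pointer updates, which is the basis
of the "not a fast path" argument above.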

Regards,

Anthony Liguori


Re: Extending virtio_console to support multiple ports

2009-08-31 Thread Anthony Liguori
Amit Shah wrote:
> Can you please explain your rationale for being so rigid about merging
> the two drivers?
>   

Because they do the same thing.  I'm not going to constantly rehash 
this.  It's been explained multiple times.

If there are implementation issues within the Linux drivers because of 
peculiarities of hvc then hvc needs to be fixed.  It has nothing to do 
with the driver ABI which is what qemu cares about.

Regards,

Anthony Liguori

>   Amit
>   



Re: Extending virtio_console to support multiple ports

2009-08-31 Thread Amit Shah
On (Mon) Aug 31 2009 [09:21:13], Anthony Liguori wrote:
> Amit Shah wrote:
>> Can you please explain your rationale for being so rigid about merging
>> the two drivers?
>>   
>
> Because they do the same thing.  I'm not going to constantly rehash  
> this.  It's been explained multiple times.

It looks less and less like the same thing with each passing day.

I've also mentioned that each minimal virtio device would start out
looking the same.

We're ending up having to compromise on the performance, functionality,
or simplicity of the devices just because of this restriction.

> If there are implementation issues within the Linux drivers because of  
> peculiarities of hvc then hvc needs to be fixed.  It has nothing to do  
> with the driver ABI which is what qemu cares about.

I'd welcome that effort as well. But we all know that's not going to
happen anytime soon.

Also, there's no driver ABI for virtio-serial yet. 

Amit


Re: Extending virtio_console to support multiple ports

2009-08-31 Thread Amit Shah
On (Mon) Aug 31 2009 [08:17:21], Anthony Liguori wrote:
>>>> - A lock has to be introduced to fetch one unused buffer from the list
>>>>   and pass it on to the host. And this lock has to be a spinlock, just
>>>>   because writes can be called from irq context.
>>>>
>>> I don't see a problem here.
>>> 
>>
>> You mean you don't see a problem in using a spinlock vs not using one?
>>   
>
> Right.  This isn't a fast path.
>
>> Userspace will typically send the entire buffer to be transmitted in one
>> system call. If it's large, the system call will have to be broken into
>> several. This results in multiple guest system calls, each one to be
>> handled with a spinlock held.
>>
>> Compare this with the entire write handled in one system call in the
>> current method.
>>   
>
> Does it matter?  This isn't a fast path.

The question isn't just about how much work happens inside the spinlock.
It's also a question about introducing spinlocks where they shouldn't
be.

I don't see why such changes have to creep into the kernel.

Can you please explain your rationale for being so rigid about merging
the two drivers?

Amit


Re: [PATCHv5 3/3] vhost_net: a kernel-level virtio server

2009-08-31 Thread Arnd Bergmann
On Monday 31 August 2009, Xin, Xiaohui wrote:
> 
> Hi, Michael
> That's a great job. We are now working on supporting VMDq on KVM, and
> since the VMDq hardware presents L2 sorting based on MAC addresses and
> VLAN tags, our target is to implement a zero-copy solution using VMDq.

I'm also interested in helping there, please include me in the discussions.

> We started from the virtio-net architecture. What we want to propose is
> to use AIO combined with direct I/O:
> 1) Modify the virtio-net backend service in Qemu to submit aio requests
> composed from the virtqueue.

right, that sounds useful.
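
A hypothetical sketch of what the submission side of (1) might look
like from userspace, assuming the tap fd gained native AIO support
(which it has not at this point); the names are illustrative:

    #include <libaio.h>

    /* Submit one guest buffer (taken from the virtqueue) as an async
     * write on the tap fd. 'cb' must stay valid until the completion
     * is reaped with io_getevents(). */
    static int submit_tx_buf(io_context_t ctx, int tap_fd,
                             struct iocb *cb, void *buf, size_t len)
    {
        struct iocb *cbs[1] = { cb };

        io_prep_pwrite(cb, tap_fd, buf, len, 0);
        return io_submit(ctx, 1, cbs);    /* returns 1 on success */
    }

Completions would then be reaped with io_getevents() and the buffers
recycled back into the virtqueue.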

> 2) Modify the TUN/TAP device to support aio operations and map the user
> space buffers directly into the host kernel.
> 3) Let a TUN/TAP device bind to a single rx/tx queue of the NIC.

I don't think we should do that with the tun/tap driver. By design,
tun/tap is a way to interact with the networking stack as if coming from
a device. The only way this connects to an external adapter is through a
bridge or through IP routing, which means that it does not correspond to
a specific NIC.
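
The design described here is visible in how a tap device is attached
from userspace; a minimal sketch (error handling abbreviated, interface
name up to the caller):

    #include <fcntl.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <linux/if.h>
    #include <linux/if_tun.h>

    /* Attach to a tap device; read()/write() on the returned fd then
     * carry ethernet frames to/from the host networking stack. */
    static int tap_open(const char *ifname)
    {
        struct ifreq ifr;
        int fd = open("/dev/net/tun", O_RDWR);

        if (fd < 0)
            return -1;
        memset(&ifr, 0, sizeof(ifr));
        ifr.ifr_flags = IFF_TAP | IFF_NO_PI;  /* raw frames, no header */
        strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
        if (ioctl(fd, TUNSETIFF, &ifr) < 0) {
            close(fd);
            return -1;
        }
        return fd;
    }

Frames written to this fd enter the stack as if received on a NIC,
which is why reaching a physical adapter still requires a bridge or
routing in between.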

I have worked on a driver I called 'macvtap', for lack of a better name,
to add a new tap frontend to the 'macvlan' driver. Since macvlan lets
you add slaves to a single NIC device, this gives you a direct
connection between one or more tap devices and an external NIC, which
works a lot better than having a bridge in between. There is also work
underway to add a bridging capability to macvlan, so you can communicate
directly between guests like you can with a bridge.

Michael's vhost_net can plug into the same macvlan infrastructure, so
the work is complementary.

> 4) Modify the net_dev and skb structures to permit an allocated skb to
> use a directly mapped user space payload buffer address rather than a
> kernel-allocated one.

yes.

> As zero copy is also your goal, we are interested in what's on your
> mind, and would like to collaborate with you if possible.
> BTW, we will send our VMDq write-up very soon.

Ok, cool.

Arnd <><


Re: Extending virtio_console to support multiple ports

2009-08-31 Thread Anthony Liguori
Amit Shah wrote:
> On (Mon) Aug 31 2009 [09:21:13], Anthony Liguori wrote:
>   
>> Amit Shah wrote:
>> 
>>> Can you please explain your rationale for being so rigid about merging
>>> the two drivers?
>>>   
>>>   
>> Because they do the same thing.  I'm not going to constantly rehash  
>> this.  It's been explained multiple times.
>> 
>
> It hardly looks like the same thing each passing day.
>   

That's BS.  The very first time you posted, you received the same 
feedback from both Paul and me.  See 
http://article.gmane.org/gmane.comp.emulators.qemu/44778.  That was back 
in June.  You've consistently received the same feedback both on the ML 
and in private.

> We're ending up having to compromise on the performance, functionality,
> or simplicity of the devices just because of this restriction.
>   

This is _not_ a high performance device, and so far there has been no 
functional impact.  I don't understand why you keep dragging your 
feet about this.  It's very simple: if you post a functional set of 
patches for a converged virtio-console driver, we'll merge it.  If you 
keep arguing for a separate virtio-serial driver, it's not 
going to get merged.  I don't know how to be more clear than this.

>> If there are implementation issues within the Linux drivers because of  
>> peculiarities of hvc then hvc needs to be fixed.  It has nothing to do  
>> with the driver ABI which is what qemu cares about.
>> 
>
> I'd welcome that effort as well. But we all know that's not going to
> happen anytime soon.
>   

That is not a justification to add a new device in QEMU.  If we add a 
new device every time we encounter a less-than-ideal interface within a 
guest, we're going to end up having hundreds of devices.

Regards,

Anthony Liguori


Re: Extending virtio_console to support multiple ports

2009-08-31 Thread Amit Shah
On (Mon) Aug 31 2009 [10:56:27], Anthony Liguori wrote:
> Amit Shah wrote:
>> On (Mon) Aug 31 2009 [09:21:13], Anthony Liguori wrote:
>>   
>>> Amit Shah wrote:
>>> 
>>>> Can you please explain your rationale for being so rigid about merging
>>>> the two drivers?
>>>>
>>> Because they do the same thing.  I'm not going to constantly rehash   
>>> this.  It's been explained multiple times.
>>> 
>>
>> It hardly looks like the same thing each passing day.
>>   
>
> That's BS.  The very first time you posted, you received the same  
> feedback from both Paul and me.  See  
> http://article.gmane.org/gmane.comp.emulators.qemu/44778.  That was back  
> in June.  You've consistently received the same feedback both on the ML  
> and in private.

I'm just saying they all start looking the same.

>> We're ending up having to compromise on the performance, functionality,
>> or simplicity of the devices just because of this restriction.
>>   
>
> This is _not_ a high performance device, and so far there has been no  
> functional impact.  I don't understand why you keep dragging your  
> feet about this.  It's very simple: if you post a functional set of  
> patches for a converged virtio-console driver, we'll merge it.  If you  

I have already posted them and have received no feedback about the
patches since. Let me add another request here for you to review them.

> keep arguing for a separate virtio-serial driver, it's not  
> going to get merged.  I don't know how to be more clear than this.

I'm not at all arguing for a separate virtio-serial driver. Please note
the difference in what I'm asking for: I'm just asking for a good
justification for merging the two, since merging makes both drivers
less simple and also introduces dependencies on code outside our
control.

>>> If there are implementation issues within the Linux drivers because of
>>> peculiarities of hvc then hvc needs to be fixed.  It has nothing to do
>>> with the driver ABI which is what qemu cares about.
>>> 
>>
>> I'd welcome that effort as well. But we all know that's not going to
>> happen anytime soon.
>>   
>
> That is not a justification to add a new device in QEMU.  If we add a  
> new device every time we encounter a less-than-ideal interface within a  
> guest, we're going to end up having hundreds of devices.

I just find this argument funny.

Amit


Re: Extending virtio_console to support multiple ports

2009-08-31 Thread Anthony Liguori
Amit Shah wrote:
>>> We're ending up having to compromise on the performance, functionality,
>>> or simplicity of the devices just because of this restriction.
>>>   
>>>   
>> This is _not_ a high performance device, and so far there has been no  
>> functional impact.  I don't understand why you keep dragging your  
>> feet about this.  It's very simple: if you post a functional set of  
>> patches for a converged virtio-console driver, we'll merge it.  If you  
>> 
>
> I have already posted them and have received no feedback about the
> patches since. Let me add another request here for you to review them.
>   

But the guest drivers do not have proper locking.  Have you posted a new 
series with that fixed?

>> keep arguing for a separate virtio-serial driver, it's not  
>> going to get merged.  I don't know how to be more clear than this.
>> 
>
> I'm not at all arguing for a separate virtio-serial driver. Please note
> the difference in what I'm asking for: I'm just asking for a good
> justification for the merging of the two since it just makes both the
> drivers not simple and also introduces dependencies on code outside our
> control.
>   

Functionally speaking, both virtio-console and virtio-serial do the same 
thing.  In fact, virtio-console is just a subset of virtio-serial.

If there are problems converging the two drivers in Linux, then I 
suggest you have two separate driver modules in Linux.  That would 
obviously be rejected for Linux though because you cannot have two 
drivers for the same device.  Why should qemu have a different policy?
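
The "one driver per device" point is concrete at the virtio level: a
driver binds to a device ID, and the bus matches each device to a
single driver. A sketch against the virtio driver API, with placeholder
probe logic:

    #include <linux/module.h>
    #include <linux/virtio.h>
    #include <linux/virtio_console.h>   /* VIRTIO_ID_CONSOLE */

    static struct virtio_device_id id_table[] = {
        { VIRTIO_ID_CONSOLE, VIRTIO_DEV_ANY_ID },
        { 0 },
    };

    static int virtcons_probe(struct virtio_device *vdev)
    {
        /* set up virtqueues, register console/serial ports, ... */
        return 0;
    }

    static struct virtio_driver virtio_console_driver = {
        .driver.name  = "virtio_console",
        .driver.owner = THIS_MODULE,
        .id_table     = id_table,
        .probe        = virtcons_probe,
    };

    static int __init virtcons_init(void)
    {
        return register_virtio_driver(&virtio_console_driver);
    }
    module_init(virtcons_init);

A second module claiming VIRTIO_ID_CONSOLE would simply never be bound
to the device; the argument above is that the device model on the qemu
side should follow the same policy.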

  

>> That is not a justification to add a new device in QEMU.  If we add a  
>> new device every time we encounter a less-than-ideal interface within  
>> a guest, we're going to end up having hundreds of devices.
>> 
>
> I just find this argument funny.
>   

I'm finding this discussion not so productive.

Regards,

Anthony Liguori


Re: [PATCHv5 3/3] vhost_net: a kernel-level virtio server

2009-08-31 Thread Avi Kivity
On 08/31/2009 02:42 PM, Xin, Xiaohui wrote:
> Hi, Michael
> That's a great job. We are now working on supporting VMDq on KVM, and
> since the VMDq hardware presents L2 sorting based on MAC addresses and
> VLAN tags, our target is to implement a zero-copy solution using VMDq.
> We started from the virtio-net architecture. What we want to propose is
> to use AIO combined with direct I/O:
> 1) Modify the virtio-net backend service in Qemu to submit aio requests
> composed from the virtqueue.
> 2) Modify the TUN/TAP device to support aio operations and map the user
> space buffers directly into the host kernel.
> 3) Let a TUN/TAP device bind to a single rx/tx queue of the NIC.
> 4) Modify the net_dev and skb structures to permit an allocated skb to
> use a directly mapped user space payload buffer address rather than a
> kernel-allocated one.
>
> As zero copy is also your goal, we are interested in what's on your
> mind, and would like to collaborate with you if possible.
>

One way to share the effort is to make vmdq queues available as normal 
kernel interfaces.  It would take quite a bit of work, but the end 
result is that no other components need to be changed, and it makes vmdq 
useful outside kvm.  It also greatly reduces the amount of integration 
work needed throughout the stack (kvm/qemu/libvirt).

-- 
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.



Re: [PATCHv5 3/3] vhost_net: a kernel-level virtio server

2009-08-31 Thread Anthony Liguori
Avi Kivity wrote:
> On 08/31/2009 02:42 PM, Xin, Xiaohui wrote:
>> Hi, Michael
>> That's a great job. We are now working on supporting VMDq on KVM, and
>> since the VMDq hardware presents L2 sorting based on MAC addresses
>> and VLAN tags, our target is to implement a zero-copy solution using
>> VMDq. We started from the virtio-net architecture. What we want to
>> propose is to use AIO combined with direct I/O:
>> 1) Modify the virtio-net backend service in Qemu to submit aio
>> requests composed from the virtqueue.
>> 2) Modify the TUN/TAP device to support aio operations and map the
>> user space buffers directly into the host kernel.
>> 3) Let a TUN/TAP device bind to a single rx/tx queue of the NIC.
>> 4) Modify the net_dev and skb structures to permit an allocated skb
>> to use a directly mapped user space payload buffer address rather
>> than a kernel-allocated one.
>>
>> As zero copy is also your goal, we are interested in what's on your
>> mind, and would like to collaborate with you if possible.
>>
>
> One way to share the effort is to make vmdq queues available as normal 
> kernel interfaces.

It may be possible to make vmdq appear like an sr-iov capable device 
from userspace.  sr-iov provides the userspace interfaces to allocate 
interfaces and assign mac addresses.  To make it useful, you would have 
to handle tx multiplexing in the driver, but that would be much easier 
for kvm to consume.

Regards,

Anthony Liguori