Anthony Liguori wrote:
> Avi Kivity wrote:
>   
>> Anthony Liguori wrote:
>>     
>>> Avi Kivity wrote:
>>>       
>>>> Anthony Liguori wrote:
>>>>  
>>>>         
>>>>> +    case VIRTIO_PCI_QUEUE_NOTIFY:
>>>>> +        if (val < VIRTIO_PCI_QUEUE_MAX)
>>>>> +            virtio_ring_kick(vdev, &vdev->vq[val]);
>>>>> +        break;
>>>>>       
>>>>>           
>>>> I see you're not using hypercalls for this, presumably for 
>>>> compatibility
>>>> with -no-kvm.
>>>>         
>>> More than just that.  By sticking to PIO, we are compatible with just 
>>> about any VMM.  For instance, we get Xen support for free.  If we 
>>> used hypercalls, even if we agreed on a way to determine which number 
>>> to use and how to make those calls, it would still be difficult to 
>>> implement in something like Xen.
>>>
>>>       
>> But PIO through the config space basically means you're committed to 
>> handling it in qemu.  We want a more flexible mechanism.
>>     
>
> There's no reason that the PIO operations couldn't be handled in the 
> kernel.  You'll already need some level of cooperation in userspace 
> unless you plan on implementing the PCI bus in kernel space too.  It's 
> easy enough in the pci_map function in QEMU to just notify the kernel 
> that it should listen on a particular PIO range.
>
>   

This is a config space write, right?  If so, the range is the regular 
0xcf8-0xcff, and it has to be handled specially.
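
To illustrate why (a rough sketch of the legacy config mechanism #1, not 
actual qemu code; only the port numbers are architectural): every config 
access from every driver goes through the same shared address/data port 
pair, so you can't hand a single device its own config-space PIO range 
the way you can with a BAR:

#include <stdint.h>

#define PCI_CONFIG_ADDRESS 0xcf8
#define PCI_CONFIG_DATA    0xcfc

static inline void outl(uint32_t val, uint16_t port)
{
    asm volatile("outl %0, %1" : : "a"(val), "Nd"(port));
}

static inline uint32_t inl(uint16_t port)
{
    uint32_t val;
    asm volatile("inl %1, %0" : "=a"(val) : "Nd"(port));
    return val;
}

static uint32_t pci_config_read(uint8_t bus, uint8_t dev, uint8_t fn,
                                uint8_t reg)
{
    /* Bit 31 enables the cycle; bus/dev/fn/reg select the target.
     * The same two ports are shared by every device, which is why the
     * VMM has to decode accesses to this range centrally. */
    outl((1u << 31) | (bus << 16) | (dev << 11) | (fn << 8) | (reg & 0xfc),
         PCI_CONFIG_ADDRESS);
    return inl(PCI_CONFIG_DATA);
}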

>> Detecting how to make hypercalls can be left to paravirt_ops.
>>
>> (for Xen you'd use an event channel; and for kvm the virtio kick 
>> hypercall).
>>
>>     
>>>>   Well I think I have a solution: advertise vmcall,
>>>> vmmcall, pio to some port, and int $some_vector as hypercall feature
>>>> bits in cpuid (for kvm, kvm, qemu, and kvm-lite respectively).  Early
>>>> setup code could patch the instruction as appropriate (I hear code
>>>> patching is now taught in second grade).
>>>>   
>>>>         
>>> That ties our device to our particular hypercall implementation.  If 
>>> we were going to do this, I'd prefer to advertise it in the device, I 
>>> think.  I'd really need to look at the performance of vmcall and an 
>>> edge-triggered interrupt, though.  It would have to be pretty 
>>> compelling to warrant the additional complexity. 
>>>       
>> vmcall costs will go down, and we don't want to use different 
>> mechanisms for high-bandwidth and low-bandwidth devices.
>>     
>
> vmcalls will certainly get faster, but I doubt that the cost difference 
> between vmcall and PIO will ever be greater than a few hundred cycles.  
> The only performance-sensitive operation here is the kick, and I don't 
> think a few hundred cycles in the kick path will ever be significant 
> for overall performance.
>
>   

Why do you think the difference will be a few hundred cycles?  And if you 
have a large number of devices, searching the list becomes expensive too.
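
Concretely, the in-kernel dispatch would have to look something like this 
(a hypothetical sketch; kvm_pio_range and the other names are invented 
for illustration, nothing like this is in kvm today).  On every trapped 
PIO exit you have to find the device that registered the range, and a 
naive list walk scales with the number of devices:

#include <stdbool.h>
#include <stdint.h>

struct kvm_pio_range {
    uint16_t base;
    uint16_t len;
    void (*handler)(void *opaque, uint16_t port, uint32_t val);
    void *opaque;
    struct kvm_pio_range *next;
};

static struct kvm_pio_range *pio_ranges;   /* registered by userspace */

/* Called on every PIO vmexit: O(n) in the number of registered
 * ranges, which is the search cost I'm worried about above. */
static bool kvm_pio_dispatch(uint16_t port, uint32_t val)
{
    struct kvm_pio_range *r;

    for (r = pio_ranges; r; r = r->next) {
        if (port >= r->base && port < r->base + r->len) {
            r->handler(r->opaque, port, val);
            return true;
        }
    }
    return false;                          /* fall back to userspace */
}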

> So why introduce the extra complexity?
>   

Overall I think it reduces complexity if we have in-kernel devices.  
In any case, we can add additional signalling methods later.
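
For example (a sketch only; the feature bits, the hypercall ABI, and all 
the names here are invented), the guest could pick its kick method once 
at setup from whatever the hypervisor advertises, so a new signalling 
method slots in later without touching the device:

#include <stdint.h>

enum kick_method { KICK_PIO, KICK_VMCALL, KICK_VMMCALL };

static enum kick_method kick_method = KICK_PIO;   /* works everywhere */

static void kick_setup(uint32_t hypervisor_features)
{
    if (hypervisor_features & (1u << 0))          /* vmcall (Intel) */
        kick_method = KICK_VMCALL;
    else if (hypervisor_features & (1u << 1))     /* vmmcall (AMD) */
        kick_method = KICK_VMMCALL;
    /* else keep the PIO path */
}

static void virtio_kick(uint16_t notify_port, uint32_t queue)
{
    switch (kick_method) {
    case KICK_VMCALL:
        asm volatile("vmcall" : : "a"(queue));    /* made-up ABI */
        break;
    case KICK_VMMCALL:
        asm volatile("vmmcall" : : "a"(queue));   /* made-up ABI */
        break;
    default:
        asm volatile("outl %0, %1" : : "a"(queue), "Nd"(notify_port));
        break;
    }
}

(Early setup code could equally well patch the instruction in place, as 
suggested above, instead of dispatching through a variable.)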

-- 
error compiling committee.c: too many arguments to function

