Jan Kiszka wrote:
> Dong, Eddie wrote:
>   
>> Avi Kivity wrote:
>>     
>>> Dong, Eddie wrote:
>>>       
>>>>> There's a two-liner required to make it work.  I'll add it soon.
>>>>>
>>>>>
>>>>>           
>>>> But you still need to issue WBINVD on all pCPUs, which just moves
>>>> the non-responsive time from one place to another, no?
>>>>
>>>>         
>>> You don't actually need to emulate wbinvd, you can just ignore it.
>>>
>>> The only reason I can think of to use wbinvd is if you're taking a cpu
>>> down (for a deep sleep state, or if you're shutting it off).  A guest
>>> need not do that. 
>>>
>>> Any other reason? dma?  all dma today is cache-coherent, no?
>>>
>>>       
>> For legacy PCI devices, yes, DMA is cache-coherent, but for PCIe
>> devices it is no longer a must. A PCIe device may not generate snoop
>> cycles and thus requires the OS to flush the cache.
>>
>> For example, a guest with a directly assigned device, say audio, can 
>> copy data to the DMA buffer, issue wbinvd to flush the cache, and 
>> then ask the HW DMA to output it.
>>     
>
> So if you want the higher performance of PCIe, you need the
> performance-killing wbinvd (not to speak of the latency)? That sounds a
> bit contradictory to me. So this is also true for native PCIe usage?
>
>   

Yes, good point.  wbinvd is not only expensive in that it takes a long 
time to execute; it also blows away your caches, so anything that 
executes afterwards takes a huge hit.


-- 
error compiling committee.c: too many arguments to function


