I use SR-IOV, giving each VM two VFs.
After applying the patch, I found the performance is the same.

The reason is in msix_mmio_write(): most of the time addr is not in the MMIO range, so the write is rejected with -EOPNOTSUPP instead of taking the in-kernel fast path.

static int msix_mmio_write(struct kvm_io_device *this, gpa_t addr, int len,
                           const void *val)
{
        struct kvm_assigned_dev_kernel *adev =
                        container_of(this, struct kvm_assigned_dev_kernel,
                                     msix_mmio_dev);
        int idx, r = 0;
        unsigned long new_val = *(unsigned long *)val;

        mutex_lock(&adev->kvm->lock);
        if (!msix_mmio_in_range(adev, addr, len)) {
                /* It returns here: the address fails the range check,
                 * so the write is not handled in the kernel. */
                r = -EOPNOTSUPP;
                goto out;
        }

I printk'ed the values:

addr         start        end          len
0xf004c00c   0xf0044000   0xf0044030   4

Note that the checked range [0xf0044000, 0xf0044030) is the vector table of 00:06.0 (3 entries * 16 bytes), while addr 0xf004c00c falls in BAR 3 of the other VF, 00:07.0 (see the lspci output below), so the check fails.
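
The printk I used for this was roughly the following (my own debug line, not part of the patch), placed just before the range check:

        printk(KERN_DEBUG "msix_mmio_write: addr=%llx start=%llx end=%llx len=%d\n",
               (unsigned long long)addr,
               (unsigned long long)adev->msix_mmio_base,
               (unsigned long long)(adev->msix_mmio_base +
                                    PCI_MSIX_ENTRY_SIZE * adev->msix_max_entries_nr),
               len);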

00:06.0 Ethernet controller: Intel Corporation Unknown device 10ed (rev 01)
        Subsystem: Intel Corporation Unknown device 000c
        Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B-
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR-
        Latency: 0
        Region 0: Memory at f0040000 (32-bit, non-prefetchable) [size=16K]
        Region 3: Memory at f0044000 (32-bit, non-prefetchable) [size=16K]
        Capabilities: [40] MSI-X: Enable+ Mask- TabSize=3
                Vector table: BAR=3 offset=00000000
                PBA: BAR=3 offset=00002000

00:07.0 Ethernet controller: Intel Corporation Unknown device 10ed (rev 01)
        Subsystem: Intel Corporation Unknown device 000c
        Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B-
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR-
        Latency: 0
        Region 0: Memory at f0048000 (32-bit, non-prefetchable) [size=16K]
        Region 3: Memory at f004c000 (32-bit, non-prefetchable) [size=16K]
        Capabilities: [40] MSI-X: Enable+ Mask- TabSize=3
                Vector table: BAR=3 offset=00000000
                PBA: BAR=3 offset=00002000



The range check added by the patch, quoted for reference:

+static bool msix_mmio_in_range(struct kvm_assigned_dev_kernel *adev,
+                             gpa_t addr, int len)
+{
+       gpa_t start, end;
+
+       BUG_ON(adev->msix_mmio_base == 0);
+       start = adev->msix_mmio_base;
+       end = adev->msix_mmio_base + PCI_MSIX_ENTRY_SIZE *
+               adev->msix_max_entries_nr;
+       if (addr >= start && addr + len <= end)
+               return true;
+
+       return false;
+}
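
As a sanity check (my arithmetic, not from the patch): with msix_mmio_base = 0xf0044000 and msix_max_entries_nr = 3, end = 0xf0044000 + 16 * 3 = 0xf0044030, so the observed addr 0xf004c00c fails the check, while it would pass against the second VF's table at 0xf004c000. A stand-alone recheck of the same test:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PCI_MSIX_ENTRY_SIZE 16

static bool in_range(uint64_t base, int nr_entries, uint64_t addr, int len)
{
        uint64_t start = base;
        uint64_t end = base + PCI_MSIX_ENTRY_SIZE * nr_entries;

        return addr >= start && addr + len <= end;
}

int main(void)
{
        /* addr from the printk vs. the registered range of 00:06.0: misses */
        printf("%d\n", in_range(0xf0044000, 3, 0xf004c00c, 4));  /* 0 */
        /* same addr vs. the vector table of 00:07.0 at 0xf004c000: hits */
        printf("%d\n", in_range(0xf004c000, 3, 0xf004c00c, 4));  /* 1 */
        return 0;
}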

2010/11/30 Yang, Sheng <[email protected]>:
> On Tuesday 30 November 2010 17:10:11 lidong chen wrote:
>> SR-IOV also hits this problem: MSI-X masking wastes a lot of CPU.
>>
>> I tested KVM with SR-IOV, where the VF driver could not disable MSI-X,
>> so the host OS wastes a lot of CPU. The CPU usage of the host OS is 90%.
>>
>> Then I tested Xen with SR-IOV; there are also a lot of VM exits caused
>> by MSI-X masking, but the combined CPU usage of Xen and domain0 is
>> lower than KVM's, at 60%.
>>
>> Without SR-IOV, the CPU usage of Xen and domain0 is higher than KVM's.
>>
>> So I think the problem is that KVM wastes more CPU handling MSI-X
>> masking, and we can look at how Xen handles it.
>>
>> If this problem is solved, performance with MSI-X enabled may be better.
>
> Please refer to my posted patches for this issue.
>
> http://www.spinics.net/lists/kvm/msg44992.html
>
> --
> regards
> Yang, Sheng
>
>>
>> 2010/11/23 Avi Kivity <[email protected]>:
>> > On 11/23/2010 09:27 AM, lidong chen wrote:
>> >> Can you tell me something about this problem?
>> >> Thanks.
>> >
>> > Which problem?
>> >
>> > --
>> > I have a truly marvellous patch that fixes the bug which this
>> > signature is too narrow to contain.
>
