On Tue, Nov 07, 2017 at 08:02:48PM -0500, Matthew Rosato wrote:
> On 11/04/2017 07:35 PM, Wei Xu wrote:
>
>> This case should be quite similar to pktgen: if you got an improvement
>> with pktgen, it was usually the same for UDP. Could you please try to
>> disable tso, gso, gro and ufo on all host tap devices and guest
>> virtio-net devices? Currently the most significant tests would be like this
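A minimal sketch of disabling those offloads with ethtool (tap0 and eth0 are placeholder device names; substitute the actual host tap and guest virtio-net interfaces):

```shell
# Host side: repeat for every tap device backing a guest NIC
# (tap0 is a placeholder name).
ethtool -K tap0 tso off gso off gro off ufo off

# Guest side: the virtio-net device (eth0 is a placeholder name).
ethtool -K eth0 tso off gso off gro off ufo off

# Confirm the offloads are really off (lower-case -k only prints).
ethtool -k tap0
```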
On 10/31/2017 15:07, Wei Xu wrote:
BTW, did you see any improvement when running pktgen from the host when no
regression was found? Since this can be reproduced with only 1 vcpu in the
guest, could you try this binding? It might help simplify the problem.
vcpu0 -> cpu2
vhost -> cpu3
pktgen
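One way to apply that binding is taskset on the thread ids; a sketch assuming the vcpu and vhost thread ids have already been looked up (the variable names and the pgrep lookup are illustrative, not exact):

```shell
# Thread id of the guest's vcpu0, e.g. from QEMU's "info cpus"
# or /proc/<qemu-pid>/task/ (placeholder variable), pinned to cpu 2.
taskset -pc 2 "$VCPU0_TID"

# The vhost worker is a kernel thread named vhost-<qemu-pid>; this
# lookup is illustrative, verify the tid on your system. Pin to cpu 3.
VHOST_TID=$(pgrep vhost | head -n 1)
taskset -pc 3 "$VHOST_TID"
```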
On Thu, Oct 26, 2017 at 01:53:12PM -0400, Matthew Rosato wrote:
> Are you using the same binding as mentioned in the previous mail you sent?
> It might be caused by cpu contention between pktgen and vhost; could you
> please try to run pktgen from another idle cpu by adjusting the binding?
I don't think that's the case -- I can cause pktgen to hang in the
On 10/19/2017 04:17, Matthew Rosato wrote:
2. It might be useful to shorten the traffic path as a reference. What I am
running is briefly like:
pktgen(host kernel) -> tap(x) -> guest(DPDK testpmd)
The bridge driver (br_forward(), etc.) might impact performance due to my personal
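The pktgen(host kernel) leg of that path is driven through /proc/net/pktgen; a minimal single-thread sketch (device name, packet count, destination IP and MAC are placeholders):

```shell
modprobe pktgen

# Attach the tap device to the first pktgen kernel thread.
echo "rem_device_all" > /proc/net/pktgen/kpktgend_0
echo "add_device tap0" > /proc/net/pktgen/kpktgend_0

# Basic stream parameters (values are illustrative).
echo "count 1000000"             > /proc/net/pktgen/tap0
echo "pkt_size 60"               > /proc/net/pktgen/tap0
echo "dst 192.168.1.2"           > /proc/net/pktgen/tap0
echo "dst_mac 52:54:00:12:34:56" > /proc/net/pktgen/tap0

# Start transmitting; results accumulate in /proc/net/pktgen/tap0.
echo "start" > /proc/net/pktgen/pgctrl
```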
On Thu, Oct 05, 2017 at 04:07:45PM -0400, Matthew Rosato wrote:
> Ping... Jason, any other ideas or suggestions?
Hi Matthew,
Recently I have been doing a similar test on x86 for this patch; here are
some differences between our testbeds.
1. It is nice you have got an improvement with 50+ instances (or
> Seems to make some progress on wakeup mitigation. The previous patch tries
> to reduce unnecessary traversal of the waitqueue during rx. The attached
> patch goes even further and disables rx polling while processing tx.
> Please try it to see if it makes any difference.
Unfortunately, this patch
> It looks like vhost is slowed down for some reason, which leads to more
> idle time on 4.13+VHOST_RX_BATCH=1. It would be appreciated if you could
> collect the perf.diff on the host, one for rx and one for tx.
>
perf data below for the associated vhost threads, baseline=4.12,
delta1=4.13,
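A rough recipe for the requested per-thread perf.diff (the thread-id variable, file names, and run length are placeholders):

```shell
# On the 4.12 host: sample the vhost thread serving rx during the run.
perf record -o perf.data.rx-4.12 -p "$VHOST_RX_TID" -- sleep 30

# Repeat on the 4.13 host with the same workload.
perf record -o perf.data.rx-4.13 -p "$VHOST_RX_TID" -- sleep 30

# Compare the two profiles (baseline given first); do the same for tx.
perf diff perf.data.rx-4.12 perf.data.rx-4.13
```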
On 09/15/2017 11:36, Matthew Rosato wrote:
> Is the issue gone if you reduce VHOST_RX_BATCH to 1? It would also be
> helpful to collect a perf diff to see if there is anything interesting.
> (Since 4.4 shows a more obvious regression, please use 4.4.)
The issue still exists when I force VHOST_RX_BATCH = 1.
Collected perf data, with 4.12 as the
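Forcing the batch size to 1 is a one-line edit to the macro the patch introduced in drivers/vhost/net.c; the exact definition text below is an assumption, so check it against the actual tree before rebuilding:

```shell
# In the kernel source tree, before rebuilding the vhost_net module.
# The "64" default is assumed from the batch-dequeue patch; verify it.
sed -i 's/#define VHOST_RX_BATCH 64/#define VHOST_RX_BATCH 1/' \
    drivers/vhost/net.c
```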
On 09/13/2017 01:56, Matthew Rosato wrote:
We are seeing a regression for a subset of workloads across KVM guests
over a virtual bridge between host kernels 4.12 and 4.13. Bisecting
points to c67df11f ("vhost_net: try batch dequing from skb array").
In the regressed environment, we are running
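A bisect like the one mentioned can be automated with git, assuming a reproducer script (the script name is hypothetical) that exits non-zero when the regression is present:

```shell
# Mark the endpoints: v4.13 regressed, v4.12 was good.
git bisect start v4.13 v4.12

# Let git drive the search; the script is a placeholder and must
# build, boot, and benchmark each candidate kernel itself.
git bisect run ./check-vhost-regression.sh

# git then reports the first bad commit, here c67df11f.
```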