On 1/8/20 9:18 AM, Aaron Conole wrote:
> David Ahern writes:
>
>> On 12/16/19 2:42 PM, Aaron Conole wrote:
>>> Can you try the following and see if your scalability issue is
>>> addressed? I think it could be better integrated, but this is a
>>> different quick 'n dirty.
>>
>> your patch reduces the number of threads awakened, but it is still
>> really high - 43 out of 71
On 12/16/19 2:42 PM, Aaron Conole wrote:
> Can you try the following and see if your scalability issue is
> addressed? I think it could be better integrated, but this is a
> different quick 'n dirty.
I'll try to get to it before the end of the week.
___
David Ahern writes:
> On 12/13/19 1:52 PM, Aaron Conole wrote:
>> Jason Baron via dev writes:
>>
>>> On 12/10/19 5:20 PM, David Ahern wrote:
>>>> On 12/10/19 3:09 PM, Jason Baron wrote:
>>>>> Hi David,
>>>>>
>>>>> The idea is that we try and queue new work to 'idle' threads in an
>>>>> attempt to distribute a workload. Thus, once we find an 'idle' thread we
>>>>> stop waking up other threads. While we are searching the wakeup list for
>>>>> idle threads, we do
On 12/10/19 2:20 PM, Matteo Croce wrote:
>
> Before this patch (which unfortunately is needed to avoid -EMFILE
> errors with many ports), how many sockets are awakened when an ARP is
> received?
>
on systems using 2.7.3 I see only a single handler thread awakened on
upcalls.
Yes, I saw the
[ adding Jason as author of the patch that added the epoll exclusive flag ]

On 12/10/19 12:37 PM, Matteo Croce wrote:
> On Tue, Dec 10, 2019 at 8:13 PM David Ahern wrote:
>>
>> Hi Matteo:
>>
>> On a hypervisor running a 4.14.91 kernel and OVS 2.11 I am seeing a
>> thundering herd wake up problem. Every packet punted to userspace wakes
>> up every one of the handler threads.
Hi Matteo:
On a hypervisor running a 4.14.91 kernel and OVS 2.11 I am seeing a
thundering herd wake up problem. Every packet punted to userspace wakes
up every one of the handler threads. On a box with 96 cpus, there are 71
handler threads which means 71 process wakeups for every packet punted.