On Wed, Sep 26, 2018 at 2:58 AM Stefano Brivio wrote:
>
> Hi Pravin,
>
> On Wed, 15 Aug 2018 00:19:39 -0700
> Pravin Shelar wrote:
>
> > I understand fairness has a cost, but we need to find the right
> > balance between performance and fairness. The current fairness
> > scheme is a lockless algorithm

Hi Pravin,

On Wed, 15 Aug 2018 00:19:39 -0700
Pravin Shelar wrote:

> I understand fairness has a cost, but we need to find the right
> balance between performance and fairness. The current fairness scheme
> is a lockless algorithm without much computational overhead; did you
> try to improve the current

Pravin,

On Wed, 15 Aug 2018 00:19:39 -0700
Pravin Shelar wrote:

> My argument is not about the proposed fairness algorithm. It is about
> the cost of the fairness, and I do not see it addressed in any of the
> follow-ups.

We are still working on it (especially on the points that you mentioned
Hi Stefano,

On Tue, Aug 7, 2018 at 6:31 AM, Stefano Brivio wrote:

> Hi Pravin,
>
> On Tue, 31 Jul 2018 16:12:03 -0700
> Pravin Shelar wrote:
>
>> Rather than reducing the number of threads down to 1, we could find a
>> better number of FDs per port.
>> How about this simple solution:
>> 1. Allocate (N

Hi William,

On Fri, 10 Aug 2018 07:11:01 -0700
William Tu wrote:

> > int rr_select_srcport(struct dp_upcall_info *upcall)
> > {
> >         /* look up source port from upcall->skb... */
> > }
> >
> > And we could then easily extend this to use BPF with maps one day.
>
> Hi Stefano,
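The rr_select_srcport() stub quoted above could be fleshed out along these lines. This is only a sketch of the idea discussed in the thread, not the datapath's actual code: `struct dp_upcall_info` is reduced to a stand-in, the `n_sockets` parameter is my addition, and the modulo policy is an assumed placeholder for whatever selection (or, one day, BPF program with maps) would really be used.

```c
#include <stdint.h>

/* Illustrative stand-in; the kernel's dp_upcall_info carries far more
 * state than this. */
struct dp_upcall_info {
    uint16_t src_port;   /* source vport, extracted from the skb */
};

/* Map the upcall's source port onto one of n_sockets upcall sockets,
 * so that a single busy port cannot occupy every queue.  The fixed
 * modulo policy here is an assumption for illustration only. */
int rr_select_srcport(const struct dp_upcall_info *upcall, int n_sockets)
{
    return upcall->src_port % n_sockets;
}
```

A BPF program keyed on a map, as suggested in the quoted message, could later replace this fixed policy without changing the callers.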
On Fri, Aug 3, 2018 at 5:43 PM, Stefano Brivio wrote:

> On Fri, 3 Aug 2018 16:01:08 -0700
> Ben Pfaff wrote:
>
> > I think that a simple mechanism for fairness is fine. The direction
> > of extensibility that makes me anxious is how to decide what matters
> > for fairness. So far, we've

On Tue, 7 Aug 2018 15:31:11 +0200
Stefano Brivio wrote:

> I would instead try to address the concerns that you had about the
> original patch adding fairness in the kernel, rather than trying to
> make the issue appear less severe in ovs-vswitchd.

And, by the way, if we introduce a way to
Hi Pravin,

On Tue, 31 Jul 2018 16:12:03 -0700
Pravin Shelar wrote:

> Rather than reducing the number of threads down to 1, we could find a
> better number of FDs per port.
> How about this simple solution:
> 1. Allocate (N * P) FDs as long as it is under the FD limit.
> 2. If the FD limit (-EMFILE) is hit
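A rough sketch of the (N * P) allocation rule in that proposal. Since the quoted message is cut off before step 2 is spelled out, the fallback policy below (halving the per-port count until the total fits) is my assumption, and the function name is illustrative:

```c
/* Sketch of the proposal above: give each of n_ports ports up to
 * want_per_port upcall FDs, but stay under the process FD limit
 * (the condition the kernel reports as -EMFILE).  The halving
 * fallback is an assumed interpretation of the truncated step 2. */
int fds_per_port(int n_ports, int want_per_port, int fd_limit)
{
    int per_port = want_per_port;

    while (per_port > 1 && n_ports * per_port > fd_limit) {
        per_port /= 2;
    }
    return per_port;
}
```

With, say, 100 ports, 8 desired FDs each, and a limit of 200 FDs, this would settle on 2 FDs per port rather than failing outright.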
On Sat, Aug 04, 2018 at 02:43:24AM +0200, Stefano Brivio wrote:

> On Fri, 3 Aug 2018 16:01:08 -0700
> Ben Pfaff wrote:
>
> > I would be very pleased if we could integrate a simple mechanism for
> > fairness, based for now on some simple criteria like the source port,
> > but thinking ahead to how

On Fri, 3 Aug 2018 16:01:08 -0700
Ben Pfaff wrote:

> I think that a simple mechanism for fairness is fine. The direction
> of extensibility that makes me anxious is how to decide what matters
> for fairness. So far, we've talked about per-vport fairness. That
> works pretty well for packets
On Fri, Aug 03, 2018 at 06:52:41PM +0200, Stefano Brivio wrote:

> On Tue, 31 Jul 2018 15:06:57 -0700
> Ben Pfaff wrote:
>
> > My current thought is that any fairness scheme we implement directly in
> > the kernel is going to need to evolve over time. Maybe we could do
> > something flexible with BPF

Hi Ben,

On Tue, 31 Jul 2018 15:06:57 -0700
Ben Pfaff wrote:

> This is an awkward problem to try to solve with sockets because of the
> nature of sockets, which are strictly first-in first-out. What you
> really want is something closer to the algorithm that we use in
> ovs-vswitchd to send
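The contrast Ben draws, a strictly FIFO socket versus per-port round-robin service, can be sketched as one bounded queue per port visited in turn, so a single busy port cannot starve the others. The fixed sizes and int "packets" below are illustrative stand-ins, not ovs-vswitchd's actual data structures:

```c
#define N_PORTS   4
#define RING_SIZE 16

/* One bounded packet queue per port; head and tail only ever grow,
 * and indices are taken modulo RING_SIZE. */
struct port_queue {
    int pkts[RING_SIZE];
    unsigned head, tail;
};

/* Take one packet from the next non-empty queue at or after *cursor,
 * visiting ports round-robin.  Returns -1 when every queue is empty. */
int dequeue_round_robin(struct port_queue q[N_PORTS], unsigned *cursor)
{
    for (unsigned i = 0; i < N_PORTS; i++) {
        struct port_queue *pq = &q[(*cursor + i) % N_PORTS];

        if (pq->head != pq->tail) {
            int pkt = pq->pkts[pq->head % RING_SIZE];

            pq->head++;
            *cursor = (*cursor + i + 1) % N_PORTS; /* resume at next port */
            return pkt;
        }
    }
    return -1;
}
```

With a single FIFO, a port queueing 1000 packets delays everyone behind it; here each non-empty port gets one packet serviced per pass, which is the fairness property the thread is after.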
On Tue, Jul 31, 2018 at 12:43 PM, Matteo Croce wrote:
> On Mon, Jul 16, 2018 at 4:54 PM Matteo Croce wrote:
>>
>> On Tue, Jul 10, 2018 at 6:31 PM Pravin Shelar wrote:
>> >
>> > On Wed, Jul 4, 2018 at 7:23 AM, Matteo Croce wrote:
>> > > From: Stefano Brivio
>> > >
>> > > Open vSwitch sends to
From: Stefano Brivio

Open vSwitch sends to userspace all received packets that have
no associated flow (thus doing an "upcall"). Then the userspace
program creates a new flow and determines the actions to apply
based on its configuration.

When a single port generates a high rate of upcalls, it
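The miss/upcall/install cycle the commit message describes can be condensed into a toy flow table. Everything here is a stand-in for illustration: a real datapath keys on the full extracted flow, not a single int, and the function names are mine, not the kernel's:

```c
#define TABLE_SIZE 8

/* Toy flow table entry; real entries hold a flow key plus actions. */
struct flow_entry {
    int key;
    int valid;
};

static struct flow_entry flow_table[TABLE_SIZE];

/* Datapath side: returns 1 on a flow hit, 0 when the packet must be
 * sent to userspace as an upcall. */
int flow_lookup(int key)
{
    struct flow_entry *f = &flow_table[key % TABLE_SIZE];
    return f->valid && f->key == key;
}

/* Userspace side: after choosing the actions, install the flow so
 * later packets of the same flow no longer generate upcalls. */
void flow_install(int key)
{
    struct flow_entry *f = &flow_table[key % TABLE_SIZE];
    f->key = key;
    f->valid = 1;
}
```

The fairness question in this thread arises exactly in the window before flow_install() runs: every packet of a still-unknown flow becomes an upcall, so one port with many new flows can flood the upcall path.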