Re: [PATCH net-next 0/2] Add mqprio hardware offload support in hns3 driver

2017-10-16 Thread Yunsheng Lin
Hi, Yuval

On 2017/10/16 14:25, Yuval Mintz wrote:
>> Hi, Yuval
>>
>> On 2017/10/15 13:14, Yuval Mintz wrote:
 Hi, Yuval

 On 2017/10/13 4:21, Yuval Mintz wrote:
>> This patchset adds a new hardware offload type in mqprio before
>> adding
>> mqprio hardware offload support in hns3 driver.
>
> I think one of the biggest issues in tying this to DCB configuration is
> the non-immediate [and possibly non-persistent] configuration.
>
> Scenario #1:
> User is configuring mqprio offloaded with 3 TCs while device is in
> willing mode.
> Would you expect the driver to immediately respond with a success or
> instead delay the return until the DCBx negotiation is complete and the
> operational num of TCs is actually 3?

 Well, when the user requests the mqprio offload on hardware shared with
 DCB, I expect the user is not using the dcb tool.
 If the user is still using the dcb tool, the result is undefined.

 The scenario you mention can perhaps be enforced by setting the willing
 bit to zero when the user requests the mqprio offload, and restoring it
 when the mqprio offload is removed.
>>>
>>> Sounds a bit harsh but would probably work.
>>>
 But I think the real issue is that dcb and mqprio share the tc system in
 the stack, so the problem may be better fixed in the stack rather than in
 the driver, as you suggested in the DCB patchset. What do you think?
>>>
>>> What did you have in mind?
>>
>> I was thinking maybe the tc system could provide notifications to mqprio
>> and dcb: mqprio and dcb register callbacks with the tc system, and when
>> the tc configuration changes, the tc system invokes those callbacks.
>>
>>>

>
> Scenario #2:
> Assume user explicitly offloaded mqprio with 3 TCs, but now DCB
> configuration has changed on the peer side and 4 TCs is the new
> negotiated operational value.
> Your current driver logic would change the number of TCs underneath the
> user configuration [and it would actually probably work due to mqprio
> being a crappy qdisc]. But was that the user's actual intention?
> [I think the likely answer in this scenario is 'yes' since the
> alternative is no better.
> But I still thought it was worth mentioning]

 You are right; the problem also has something to do with mqprio and dcb
 sharing the tc in the stack.

 During testing, when the user explicitly offloaded mqprio with 3 TCs,
 every queue had a default pfifo qdisc attached; after DCB changed the tc
 num to 4, tc qdisc showed that some queues no longer had a default pfifo
 qdisc attached.
>>>
>>> Really? Then what did it show?
>>> [I assume it has some pfifo attached, and it's an mqprio dump kind of an
>> issue]
>>
>> When the queue size of the ndev is 16 and the tc num is 3, we set the
>> real queue size to 15 (5 * 3 = 15), and mqprio only attaches a pfifo to
>> the first 15 queues. When the tc num changes to 4 via DCB, we set the
>> real queue size to 16 (4 * 4 = 16).
>> So tc qdisc shows the last queue has no qdisc attached.
> 
> So there is a qdisc attached - mqprio_attach() attaches to all transmission
> queues [num_tx_queues] and not only the active ones.
> But the flow for mqprio might be lacking the additional qdisc_hash_add()
> for the additional queue's qdisc.

Yes, I think you may be right.

static void mqprio_attach(struct Qdisc *sch)
{
	struct net_device *dev = qdisc_dev(sch);
	struct mqprio_sched *priv = qdisc_priv(sch);
	struct Qdisc *qdisc, *old;
	unsigned int ntx;

	/* Attach underlying qdisc */
	for (ntx = 0; ntx < dev->num_tx_queues; ntx++) {
		qdisc = priv->qdiscs[ntx];
		old = dev_graft_qdisc(qdisc->dev_queue, qdisc);
		if (old)
			qdisc_destroy(old);

		/* Only call qdisc_hash_add() when
		 * ntx < dev->real_num_tx_queues
		 */
		if (ntx < dev->real_num_tx_queues)
			qdisc_hash_add(qdisc, false);
	}
	kfree(priv->qdiscs);
	priv->qdiscs = NULL;
}


> 
>>
>>>

 Maybe we can add a callback to notify mqprio the configuration has
>> changed.

>>>
>>> Which would do what?
>>> You already have the notifications available for monitoring using dcbnl
>>> logic if the configuration changes [for user]; so the user can
>>> re-configure whatever it wants.
>>
>> Yes, if user is only using dcb tool.
>>
>>> But other than dropping all the qdisc configurations and going back to
>>> the default qdiscs, what default action would mqprio be able to take
>>> when the configuration changes that actually makes sense?
>>
>> As explained above, after dcb changes the configuration, some queues may
>> have no qdisc attached, so I was thinking maybe we can add a pfifo to any
>> queue that has no qdisc attached to it.

RE: [PATCH net-next 0/2] Add mqprio hardware offload support in hns3 driver

2017-10-16 Thread Yuval Mintz
> Hi, Yuval
> 
> On 2017/10/15 13:14, Yuval Mintz wrote:
> >> Hi, Yuval
> >>
> >> On 2017/10/13 4:21, Yuval Mintz wrote:
>  This patchset adds a new hardware offload type in mqprio before
> adding
>  mqprio hardware offload support in hns3 driver.
> >>>
> >>> I think one of the biggest issues in tying this to DCB configuration is 
> >>> the
> >>> non-immediate [and possibly non persistent] configuration.
> >>>
> >>> Scenario #1:
> >>> User is configuring mqprio offloaded with 3 TCs while device is in willing
> >> mode.
> >>> Would you expect the driver to immediately respond with a success or
> >> instead
> >>> delay the return until the DCBx negotiation is complete and the
> operational
> >>> num of TCs is actually 3?
> >>
> >> Well, when the user requests the mqprio offload on hardware shared
> >> with DCB, I expect the user is not using the dcb tool.
> >> If the user is still using the dcb tool, the result is undefined.
> >>
> >> The scenario you mention can perhaps be enforced by setting the willing
> >> bit to zero when the user requests the mqprio offload, and restoring it
> >> when the mqprio offload is removed.
> >
> > Sounds a bit harsh but would probably work.
> >
> >> But I think the real issue is that dcb and mqprio share the tc system
> >> in the stack, so the problem may be better fixed in the stack rather
> >> than in the driver, as you suggested in the DCB patchset. What do you
> >> think?
> >
> > What did you have in mind?
> 
> I was thinking maybe the tc system could provide notifications to mqprio
> and dcb: mqprio and dcb register callbacks with the tc system, and when
> the tc configuration changes, the tc system invokes those callbacks.
> 
> >
> >>
> >>>
> >>> Scenario #2:
> >>> Assume user explicitly offloaded mqprio with 3 TCs, but now DCB
> >> configuration
> >>> has changed on the peer side and 4 TCs is the new negotiated
> operational
> >> value.
> >>> Your current driver logic would change the number of TCs underneath
> the
> >> user
> >>> configuration [and it would actually probably work due to mqprio being a
> >> crappy
> >>> qdisc]. But was that the user actual intention?
> >>> [I think the likely answer in this scenario is 'yes' since the 
> >>> alternative is no
> >> better.
> >>> But I still thought it was worth mentioning]
> >>
> >> You are right; the problem also has something to do with mqprio and
> >> dcb sharing the tc in the stack.
> >>
> >> During testing, when the user explicitly offloaded mqprio with 3 TCs,
> >> every queue had a default pfifo qdisc attached; after DCB changed the
> >> tc num to 4, tc qdisc showed that some queues no longer had a default
> >> pfifo qdisc attached.
> >
> > Really? Then what did it show?
> > [I assume it has some pfifo attached, and it's an mqprio dump kind of an
> issue]
> 
> When the queue size of the ndev is 16 and the tc num is 3, we set the
> real queue size to 15 (5 * 3 = 15), and mqprio only attaches a pfifo to
> the first 15 queues. When the tc num changes to 4 via DCB, we set the
> real queue size to 16 (4 * 4 = 16).
> So tc qdisc shows the last queue has no qdisc attached.

So there is a qdisc attached - mqprio_attach() attaches to all transmission
queues [num_tx_queues] and not only the active ones.
But the flow for mqprio might be lacking the additional qdisc_hash_add()
for the additional queue's qdisc.

> 
> >
> >>
> >> Maybe we can add a callback to notify mqprio the configuration has
> changed.
> >>
> >
> > Which would do what?
> > You already have the notifications available for monitoring using dcbnl
> > logic if the configuration changes [for user]; so the user can
> > re-configure whatever it wants.
> 
> Yes, if user is only using dcb tool.
> 
> > But other than dropping all the qdisc configurations and going back to
> > the default qdiscs, what default action would mqprio be able to take
> > when the configuration changes that actually makes sense?
> 
> As explained above, after dcb changes the configuration, some queues may
> have no qdisc attached, so I was thinking maybe we can add a pfifo to any
> queue that has no qdisc attached to it.
> 
> Thanks,
> Yunsheng Lin
> 
> >
> >> Thanks
> >> Yunsheng Lin
> >>
> >>>
> >>> Cheers,
> >>> Yuval
> >>>
> 
>  Yunsheng Lin (2):
>    mqprio: Add a new hardware offload type in mqprio
>    net: hns3: Add mqprio hardware offload support in hns3 driver
> 
>   drivers/net/ethernet/hisilicon/hns3/hnae3.h|  1 +
>   .../net/ethernet/hisilicon/hns3/hns3pf/hclge_dcb.c | 23
> +++
>   .../net/ethernet/hisilicon/hns3/hns3pf/hns3_enet.c | 46
> >> ++-
>  ---
>   include/uapi/linux/pkt_sched.h |  1 +
>   4 files changed, 55 insertions(+), 16 deletions(-)
> 
>  --
>  1.9.1
> >>>
> >>>
> >>>
> >



Re: [PATCH net-next 0/2] Add mqprio hardware offload support in hns3 driver

2017-10-15 Thread Yunsheng Lin
Hi, Yuval

On 2017/10/15 16:51, Yuval Mintz wrote:
> This patchset adds a new hardware offload type in mqprio before
>> adding
> mqprio hardware offload support in hns3 driver.
> 
> Apparently Dave has already accepted Amirtha's changes to mqprio:
> https://marc.info/?l=linux-netdev&m=150803219824053&w=2
> so I guess you need to revise your patches to align to the new conventions.

Ok.

"If offloads are supported by setting the 'hw' option to 1, the default
offload mode is 'dcb' where only the TC values are offloaded to the
device. "

According to the description of the above patchset, the default mode is
already dcb, so I will drop the dcb mode patch.

I think the scenario you mentioned still exists, and I am willing to
implement a fix if we come to a solution that suits most of the community.

Thanks,
Yunsheng Lin

> 

 I think one of the biggest issues in tying this to DCB configuration is the
 non-immediate [and possibly non persistent] configuration.

 Scenario #1:
 User is configuring mqprio offloaded with 3 TCs while device is in willing
>>> mode.
 Would you expect the driver to immediately respond with a success or
>>> instead
 delay the return until the DCBx negotiation is complete and the
>> operational
 num of TCs is actually 3?
>>>
>>> Well, when the user requests the mqprio offload on hardware shared with
>>> DCB, I expect the user is not using the dcb tool.
>>> If the user is still using the dcb tool, the result is undefined.
>>>
>>> The scenario you mention can perhaps be enforced by setting the willing
>>> bit to zero when the user requests the mqprio offload, and restoring it
>>> when the mqprio offload is removed.
>>
>> Sounds a bit harsh but would probably work.
>>
>>> But I think the real issue is that dcb and mqprio share the tc system
>>> in the stack, so the problem may be better fixed in the stack rather
>>> than in the driver, as you suggested in the DCB patchset. What do you
>>> think?
>>
>> What did you have in mind?
>>
>>>

 Scenario #2:
 Assume user explicitly offloaded mqprio with 3 TCs, but now DCB
>>> configuration
 has changed on the peer side and 4 TCs is the new negotiated operational
>>> value.
 Your current driver logic would change the number of TCs underneath
>> the
>>> user
 configuration [and it would actually probably work due to mqprio being a
>>> crappy
 qdisc]. But was that the user actual intention?
 [I think the likely answer in this scenario is 'yes' since the alternative 
 is no
>>> better.
 But I still thought it was worth mentioning]
>>>
>>> You are right; the problem also has something to do with mqprio and dcb
>>> sharing the tc in the stack.
>>>
>>> During testing, when the user explicitly offloaded mqprio with 3 TCs,
>>> every queue had a default pfifo qdisc attached; after DCB changed the
>>> tc num to 4, tc qdisc showed that some queues no longer had a default
>>> pfifo qdisc attached.
>>
>> Really? Then what did it show?
>> [I assume it has some pfifo attached, and it's an mqprio dump kind of an
>> issue]
>>
>>>
>>> Maybe we can add a callback to notify mqprio the configuration has
>> changed.
>>>
>>
>> Which would do what?
>> You already have the notifications available for monitoring using dcbnl
>> logic if the configuration changes [for user]; so the user can
>> re-configure whatever it wants.
>> But other than dropping all the qdisc configurations and going back to
>> the default qdiscs, what default action would mqprio be able to take
>> when the configuration changes that actually makes sense?
>>
>>> Thanks
>>> Yunsheng Lin
>>>

 Cheers,
 Yuval

>
> Yunsheng Lin (2):
>   mqprio: Add a new hardware offload type in mqprio
>   net: hns3: Add mqprio hardware offload support in hns3 driver
>
>  drivers/net/ethernet/hisilicon/hns3/hnae3.h|  1 +
>  .../net/ethernet/hisilicon/hns3/hns3pf/hclge_dcb.c | 23 +++
>  .../net/ethernet/hisilicon/hns3/hns3pf/hns3_enet.c | 46
>>> ++-
> ---
>  include/uapi/linux/pkt_sched.h |  1 +
>  4 files changed, 55 insertions(+), 16 deletions(-)
>
> --
> 1.9.1



> 



Re: [PATCH net-next 0/2] Add mqprio hardware offload support in hns3 driver

2017-10-15 Thread Yunsheng Lin
Hi, Yuval

On 2017/10/15 13:14, Yuval Mintz wrote:
>> Hi, Yuval
>>
>> On 2017/10/13 4:21, Yuval Mintz wrote:
 This patchset adds a new hardware offload type in mqprio before adding
 mqprio hardware offload support in hns3 driver.
>>>
>>> I think one of the biggest issues in tying this to DCB configuration is the
>>> non-immediate [and possibly non persistent] configuration.
>>>
>>> Scenario #1:
>>> User is configuring mqprio offloaded with 3 TCs while device is in willing
>> mode.
>>> Would you expect the driver to immediately respond with a success or
>> instead
>>> delay the return until the DCBx negotiation is complete and the operational
>>> num of TCs is actually 3?
>>
>> Well, when the user requests the mqprio offload on hardware shared with
>> DCB, I expect the user is not using the dcb tool.
>> If the user is still using the dcb tool, the result is undefined.
>>
>> The scenario you mention can perhaps be enforced by setting the willing
>> bit to zero when the user requests the mqprio offload, and restoring it
>> when the mqprio offload is removed.
> 
> Sounds a bit harsh but would probably work.
> 
>> But I think the real issue is that dcb and mqprio share the tc system in
>> the stack, so the problem may be better fixed in the stack rather than
>> in the driver, as you suggested in the DCB patchset. What do you think?
> 
> What did you have in mind?

I was thinking maybe the tc system could provide notifications to mqprio
and dcb: mqprio and dcb register callbacks with the tc system, and when
the tc configuration changes, the tc system invokes those callbacks.

> 
>>
>>>
>>> Scenario #2:
>>> Assume user explicitly offloaded mqprio with 3 TCs, but now DCB
>> configuration
>>> has changed on the peer side and 4 TCs is the new negotiated operational
>> value.
>>> Your current driver logic would change the number of TCs underneath the
>> user
>>> configuration [and it would actually probably work due to mqprio being a
>> crappy
>>> qdisc]. But was that the user actual intention?
>>> [I think the likely answer in this scenario is 'yes' since the alternative 
>>> is no
>> better.
>>> But I still thought it was worth mentioning]
>>
>> You are right; the problem also has something to do with mqprio and dcb
>> sharing the tc in the stack.
>>
>> During testing, when the user explicitly offloaded mqprio with 3 TCs,
>> every queue had a default pfifo qdisc attached; after DCB changed the tc
>> num to 4, tc qdisc showed that some queues no longer had a default pfifo
>> qdisc attached.
> 
> Really? Then what did it show? 
> [I assume it has some pfifo attached, and it's an mqprio dump kind of an 
> issue]

When the queue size of the ndev is 16 and the tc num is 3, we set the real
queue size to 15 (5 * 3 = 15), and mqprio only attaches a pfifo to the
first 15 queues. When the tc num changes to 4 via DCB, we set the real
queue size to 16 (4 * 4 = 16).
So tc qdisc shows the last queue has no qdisc attached.

> 
>>
>> Maybe we can add a callback to notify mqprio the configuration has changed.
>>
> 
> Which would do what?
> You already have the notifications available for monitoring using dcbnl
> logic if the configuration changes [for user]; so the user can
> re-configure whatever it wants.

Yes, if user is only using dcb tool.

> But other than dropping all the qdisc configurations and going back to
> the default qdiscs, what default action would mqprio be able to take
> when the configuration changes that actually makes sense?

As explained above, after dcb changes the configuration, some queues may
have no qdisc attached, so I was thinking maybe we can add a pfifo to any
queue that has no qdisc attached to it.

Thanks,
Yunsheng Lin

> 
>> Thanks
>> Yunsheng Lin
>>
>>>
>>> Cheers,
>>> Yuval
>>>

 Yunsheng Lin (2):
   mqprio: Add a new hardware offload type in mqprio
   net: hns3: Add mqprio hardware offload support in hns3 driver

  drivers/net/ethernet/hisilicon/hns3/hnae3.h|  1 +
  .../net/ethernet/hisilicon/hns3/hns3pf/hclge_dcb.c | 23 +++
  .../net/ethernet/hisilicon/hns3/hns3pf/hns3_enet.c | 46
>> ++-
 ---
  include/uapi/linux/pkt_sched.h |  1 +
  4 files changed, 55 insertions(+), 16 deletions(-)

 --
 1.9.1
>>>
>>>
>>>
> 



RE: [PATCH net-next 0/2] Add mqprio hardware offload support in hns3 driver

2017-10-15 Thread Yuval Mintz
> > >> This patchset adds a new hardware offload type in mqprio before
> adding
> > >> mqprio hardware offload support in hns3 driver.

Apparently Dave has already accepted Amirtha's changes to mqprio:
https://marc.info/?l=linux-netdev&m=150803219824053&w=2
so I guess you need to revise your patches to align to the new conventions.

> > >
> > > I think one of the biggest issues in tying this to DCB configuration is 
> > > the
> > > non-immediate [and possibly non persistent] configuration.
> > >
> > > Scenario #1:
> > > User is configuring mqprio offloaded with 3 TCs while device is in willing
> > mode.
> > > Would you expect the driver to immediately respond with a success or
> > instead
> > > delay the return until the DCBx negotiation is complete and the
> operational
> > > num of TCs is actually 3?
> >
> > Well, when the user requests the mqprio offload on hardware shared
> > with DCB, I expect the user is not using the dcb tool.
> > If the user is still using the dcb tool, the result is undefined.
> >
> > The scenario you mention can perhaps be enforced by setting the willing
> > bit to zero when the user requests the mqprio offload, and restoring it
> > when the mqprio offload is removed.
> 
> Sounds a bit harsh but would probably work.
> 
> > But I think the real issue is that dcb and mqprio share the tc system
> > in the stack, so the problem may be better fixed in the stack rather
> > than in the driver, as you suggested in the DCB patchset. What do you
> > think?
> 
> What did you have in mind?
> 
> >
> > >
> > > Scenario #2:
> > > Assume user explicitly offloaded mqprio with 3 TCs, but now DCB
> > configuration
> > > has changed on the peer side and 4 TCs is the new negotiated operational
> > value.
> > > Your current driver logic would change the number of TCs underneath
> the
> > user
> > > configuration [and it would actually probably work due to mqprio being a
> > crappy
> > > qdisc]. But was that the user actual intention?
> > > [I think the likely answer in this scenario is 'yes' since the 
> > > alternative is no
> > better.
> > > But I still thought it was worth mentioning]
> >
> > You are right; the problem also has something to do with mqprio and
> > dcb sharing the tc in the stack.
> >
> > During testing, when the user explicitly offloaded mqprio with 3 TCs,
> > every queue had a default pfifo qdisc attached; after DCB changed the
> > tc num to 4, tc qdisc showed that some queues no longer had a default
> > pfifo qdisc attached.
> 
> Really? Then what did it show?
> [I assume it has some pfifo attached, and it's an mqprio dump kind of an
> issue]
> 
> >
> > Maybe we can add a callback to notify mqprio the configuration has
> changed.
> >
> 
> Which would do what?
> You already have the notifications available for monitoring using dcbnl
> logic if the configuration changes [for user]; so the user can
> re-configure whatever it wants.
> But other than dropping all the qdisc configurations and going back to
> the default qdiscs, what default action would mqprio be able to take
> when the configuration changes that actually makes sense?
> 
> > Thanks
> > Yunsheng Lin
> >
> > >
> > > Cheers,
> > > Yuval
> > >
> > >>
> > >> Yunsheng Lin (2):
> > >>   mqprio: Add a new hardware offload type in mqprio
> > >>   net: hns3: Add mqprio hardware offload support in hns3 driver
> > >>
> > >>  drivers/net/ethernet/hisilicon/hns3/hnae3.h|  1 +
> > >>  .../net/ethernet/hisilicon/hns3/hns3pf/hclge_dcb.c | 23 +++
> > >>  .../net/ethernet/hisilicon/hns3/hns3pf/hns3_enet.c | 46
> > ++-
> > >> ---
> > >>  include/uapi/linux/pkt_sched.h |  1 +
> > >>  4 files changed, 55 insertions(+), 16 deletions(-)
> > >>
> > >> --
> > >> 1.9.1
> > >
> > >
> > >



RE: [PATCH net-next 0/2] Add mqprio hardware offload support in hns3 driver

2017-10-14 Thread Yuval Mintz
> Hi, Yuval
> 
> On 2017/10/13 4:21, Yuval Mintz wrote:
> >> This patchset adds a new hardware offload type in mqprio before adding
> >> mqprio hardware offload support in hns3 driver.
> >
> > I think one of the biggest issues in tying this to DCB configuration is the
> > non-immediate [and possibly non persistent] configuration.
> >
> > Scenario #1:
> > User is configuring mqprio offloaded with 3 TCs while device is in willing
> mode.
> > Would you expect the driver to immediately respond with a success or
> instead
> > delay the return until the DCBx negotiation is complete and the operational
> > num of TCs is actually 3?
> 
> Well, when the user requests the mqprio offload on hardware shared with
> DCB, I expect the user is not using the dcb tool.
> If the user is still using the dcb tool, the result is undefined.
>
> The scenario you mention can perhaps be enforced by setting the willing
> bit to zero when the user requests the mqprio offload, and restoring it
> when the mqprio offload is removed.

Sounds a bit harsh but would probably work.

> But I think the real issue is that dcb and mqprio share the tc system in
> the stack, so the problem may be better fixed in the stack rather than
> in the driver, as you suggested in the DCB patchset. What do you think?

What did you have in mind?

> 
> >
> > Scenario #2:
> > Assume user explicitly offloaded mqprio with 3 TCs, but now DCB
> configuration
> > has changed on the peer side and 4 TCs is the new negotiated operational
> value.
> > Your current driver logic would change the number of TCs underneath the
> user
> > configuration [and it would actually probably work due to mqprio being a
> crappy
> > qdisc]. But was that the user actual intention?
> > [I think the likely answer in this scenario is 'yes' since the alternative 
> > is no
> better.
> > But I still thought it was worth mentioning]
> 
> You are right; the problem also has something to do with mqprio and dcb
> sharing the tc in the stack.
>
> During testing, when the user explicitly offloaded mqprio with 3 TCs,
> every queue had a default pfifo qdisc attached; after DCB changed the tc
> num to 4, tc qdisc showed that some queues no longer had a default pfifo
> qdisc attached.

Really? Then what did it show? 
[I assume it has some pfifo attached, and it's an mqprio dump kind of an issue]

> 
> Maybe we can add a callback to notify mqprio the configuration has changed.
> 

Which would do what?
You already have the notifications available for monitoring using dcbnl
logic if the configuration changes [for user]; so the user can re-configure
whatever it wants.
But other than dropping all the qdisc configurations and going back to the
default qdiscs, what default action would mqprio be able to take when the
configuration changes that actually makes sense?

> Thanks
> Yunsheng Lin
> 
> >
> > Cheers,
> > Yuval
> >
> >>
> >> Yunsheng Lin (2):
> >>   mqprio: Add a new hardware offload type in mqprio
> >>   net: hns3: Add mqprio hardware offload support in hns3 driver
> >>
> >>  drivers/net/ethernet/hisilicon/hns3/hnae3.h|  1 +
> >>  .../net/ethernet/hisilicon/hns3/hns3pf/hclge_dcb.c | 23 +++
> >>  .../net/ethernet/hisilicon/hns3/hns3pf/hns3_enet.c | 46
> ++-
> >> ---
> >>  include/uapi/linux/pkt_sched.h |  1 +
> >>  4 files changed, 55 insertions(+), 16 deletions(-)
> >>
> >> --
> >> 1.9.1
> >
> >
> >



Re: [PATCH net-next 0/2] Add mqprio hardware offload support in hns3 driver

2017-10-12 Thread Yunsheng Lin
Hi, Yuval

On 2017/10/13 4:21, Yuval Mintz wrote:
>> This patchset adds a new hardware offload type in mqprio before adding
>> mqprio hardware offload support in hns3 driver.
> 
> I think one of the biggest issues in tying this to DCB configuration is the
> non-immediate [and possibly non persistent] configuration.
> 
> Scenario #1:
> User is configuring mqprio offloaded with 3 TCs while device is in willing 
> mode.
> Would you expect the driver to immediately respond with a success or instead
> delay the return until the DCBx negotiation is complete and the operational
> num of TCs is actually 3?

Well, when the user requests the mqprio offload on hardware shared with
DCB, I expect the user is not using the dcb tool.
If the user is still using the dcb tool, the result is undefined.

The scenario you mention can perhaps be enforced by setting the willing
bit to zero when the user requests the mqprio offload, and restoring it
when the mqprio offload is removed.
But I think the real issue is that dcb and mqprio share the tc system in
the stack, so the problem may be better fixed in the stack rather than in
the driver, as you suggested in the DCB patchset. What do you think?

> 
> Scenario #2:
> Assume user explicitly offloaded mqprio with 3 TCs, but now DCB configuration
> has changed on the peer side and 4 TCs is the new negotiated operational 
> value.
> Your current driver logic would change the number of TCs underneath the user
> configuration [and it would actually probably work due to mqprio being a 
> crappy
> qdisc]. But was that the user actual intention?
> [I think the likely answer in this scenario is 'yes' since the alternative is 
> no better.
> But I still thought it was worth mentioning]

You are right; the problem also has something to do with mqprio and dcb
sharing the tc in the stack.

During testing, when the user explicitly offloaded mqprio with 3 TCs,
every queue had a default pfifo qdisc attached; after DCB changed the tc
num to 4, tc qdisc showed that some queues no longer had a default pfifo
qdisc attached.

Maybe we can add a callback to notify mqprio the configuration has changed.

Thanks
Yunsheng Lin

> 
> Cheers,
> Yuval
> 
>>
>> Yunsheng Lin (2):
>>   mqprio: Add a new hardware offload type in mqprio
>>   net: hns3: Add mqprio hardware offload support in hns3 driver
>>
>>  drivers/net/ethernet/hisilicon/hns3/hnae3.h|  1 +
>>  .../net/ethernet/hisilicon/hns3/hns3pf/hclge_dcb.c | 23 +++
>>  .../net/ethernet/hisilicon/hns3/hns3pf/hns3_enet.c | 46 ++-
>> ---
>>  include/uapi/linux/pkt_sched.h |  1 +
>>  4 files changed, 55 insertions(+), 16 deletions(-)
>>
>> --
>> 1.9.1
> 
> 
> 



RE: [PATCH net-next 0/2] Add mqprio hardware offload support in hns3 driver

2017-10-12 Thread Yuval Mintz
> This patchset adds a new hardware offload type in mqprio before adding
> mqprio hardware offload support in hns3 driver.

I think one of the biggest issues in tying this to DCB configuration is the
non-immediate [and possibly non-persistent] configuration.

Scenario #1:
User is configuring mqprio offloaded with 3 TCs while device is in willing mode.
Would you expect the driver to immediately respond with a success or instead
delay the return until the DCBx negotiation is complete and the operational
num of TCs is actually 3?

Scenario #2:
Assume user explicitly offloaded mqprio with 3 TCs, but now DCB configuration
has changed on the peer side and 4 TCs is the new negotiated operational value.
Your current driver logic would change the number of TCs underneath the user
configuration [and it would actually probably work due to mqprio being a crappy
qdisc]. But was that the user's actual intention?
[I think the likely answer in this scenario is 'yes' since the alternative is 
no better.
But I still thought it was worth mentioning]

Cheers,
Yuval

> 
> Yunsheng Lin (2):
>   mqprio: Add a new hardware offload type in mqprio
>   net: hns3: Add mqprio hardware offload support in hns3 driver
> 
>  drivers/net/ethernet/hisilicon/hns3/hnae3.h|  1 +
>  .../net/ethernet/hisilicon/hns3/hns3pf/hclge_dcb.c | 23 +++
>  .../net/ethernet/hisilicon/hns3/hns3pf/hns3_enet.c | 46 ++----
>  include/uapi/linux/pkt_sched.h |  1 +
>  4 files changed, 55 insertions(+), 16 deletions(-)
> 
> --
> 1.9.1