On 4/1/2024 11:29 PM, Damodharam Ammepalli wrote:
> On Mon, Apr 1, 2024 at 1:07 PM Thomas Monjalon <tho...@monjalon.net> wrote:
>>
>> 30/03/2024 12:38, huangdengdui:
>>> But, there are different solutions for the device to report the setting
>>> lane capability, as following:
>>> 1. Like the current patch, reporting device capabilities in speed and
>>>    lane coupling mode. However, if we use this solution, we will have
>>>    to couple the lanes setting with the speed setting.
>>>
>>> 2. Like the Damodharam's RFC patch [1], the device reports the maximum
>>>    number of supported lanes. Users can configure any lane count,
>>>    which is completely decoupled from the speed.
>>>
>>> 3. Similar to the FEC capability reported by a device, the device reports the
>>>    relationship table of the number of lanes supported by the speed,
>>>    for example:
>>>       speed    lanes_capa
>>>       50G      1,2
>>>       100G     1,2,4
>>>       200G     2,4
>>>
>>> Options 1 and 2 have been discussed a lot above.
>>>
>>> For solution 1, the speed and lanes are over-coupled, and the
>>> implementation is too complex. But I think it is easier to understand
>>> and easier for the device to report capabilities. In addition, ethtool
>>> also reports capability in this mode.
>>>
>>> For solution 2, as huisong said, the user doesn't know which lanes
>>> should or can be set for a specified speed on a given NIC.
>>>
>>> I think that when the device reports the capability, the lanes should
>>> be associated with the speed. In this way, users can know which lanes
>>> are supported at the current speed and can verify the validity of the
>>> configuration.
>>>
>>> So I think solution 3 is better. What do you think?
>>
>> I don't understand your proposals.
>> Please could you show the function signature for each option?
>>
>>
>>
> testpmd can query the driver, and the driver can export the latest
> bitmap, say via rte_eth_speed_lanes_get()->supported_bmap:
> 
> 0  1Gb   link speed
> 1  10Gb  (NRZ: 10G per lane, 1 lane) link speed
> 2  25Gb  (NRZ: 25G per lane, 1 lane) link speed
> 3  40Gb  (NRZ: 10G per lane, 4 lanes) link speed
> 4  50Gb  (NRZ: 25G per lane, 2 lanes) link speed
> 5  100Gb (NRZ: 25G per lane, 4 lanes) link speed
> 6  50Gb  (PAM4-56: 50G per lane, 1 lane) link speed
> 7  100Gb (PAM4-56: 50G per lane, 2 lanes) link speed
> 8  200Gb (PAM4-56: 50G per lane, 4 lanes) link speed
> 9  400Gb (PAM4-56: 50G per lane, 8 lanes) link speed
> 10 100Gb (PAM4-112: 100G per lane, 1 lane) link speed
> 11 200Gb (PAM4-112: 100G per lane, 2 lanes) link speed
> 12 400Gb (PAM4-112: 100G per lane, 4 lanes) link speed
> 13 800Gb (PAM4-112: 100G per lane, 8 lanes) link speed
> 14 For future
>
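For reference, a flat bitmap like the one above can be decoded into the per-speed view with a small index table. This is only an illustrative sketch, not the RFC API; `lane_modes` and `lanes_for_speed()` are hypothetical names:

```c
#include <stdint.h>

/* One entry per bit of the hypothetical supported_bmap quoted above,
 * mapping a bit index to an aggregate speed and a lane count. */
struct lane_mode {
	uint32_t speed_gbps; /* aggregate link speed */
	uint32_t lanes;      /* number of lanes */
};

static const struct lane_mode lane_modes[] = {
	{   1, 1 }, /* bit 0:  1G            */
	{  10, 1 }, /* bit 1:  10G NRZ       */
	{  25, 1 }, /* bit 2:  25G NRZ       */
	{  40, 4 }, /* bit 3:  40G NRZ       */
	{  50, 2 }, /* bit 4:  50G NRZ       */
	{ 100, 4 }, /* bit 5:  100G NRZ      */
	{  50, 1 }, /* bit 6:  50G PAM4-56   */
	{ 100, 2 }, /* bit 7:  100G PAM4-56  */
	{ 200, 4 }, /* bit 8:  200G PAM4-56  */
	{ 400, 8 }, /* bit 9:  400G PAM4-56  */
	{ 100, 1 }, /* bit 10: 100G PAM4-112 */
	{ 200, 2 }, /* bit 11: 200G PAM4-112 */
	{ 400, 4 }, /* bit 12: 400G PAM4-112 */
	{ 800, 8 }, /* bit 13: 800G PAM4-112 */
};

/* Collect, as a bitmask, every lane count the device's supported_bmap
 * allows at a given aggregate speed (bit n set => n lanes valid). */
static uint32_t
lanes_for_speed(uint32_t supported_bmap, uint32_t speed_gbps)
{
	uint32_t lane_mask = 0;
	unsigned int bit;

	for (bit = 0; bit < 14; bit++)
		if ((supported_bmap & (1u << bit)) &&
		    lane_modes[bit].speed_gbps == speed_gbps)
			lane_mask |= 1u << lane_modes[bit].lanes;
	return lane_mask;
}
```

For example, a device advertising bits 5 and 7 (100G NRZ x4 and 100G PAM4-56 x2) yields a 100G lane mask with bits 4 and 2 set, which is exactly the per-speed view that solution 3 proposes to report directly.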

Dengdui & Huisong mentioned that in HW speed and lanes are coupled; if
we expose two independent APIs for speed and lanes, the user may end up
providing an invalid configuration.

If the lane capability report is tied to speed, the user can figure out
the correct lane count for a given speed.

We already have a similar implementation for FEC, driven by a similar concern:
```
int rte_eth_fec_get_capability(uint16_t port_id,
		struct rte_eth_fec_capa *speed_fec_capa,
		unsigned int num);

struct rte_eth_fec_capa {
	uint32_t speed; /**< Link speed (RTE_ETH_SPEED_NUM_*) */
	uint32_t capa;  /**< FEC capabilities bitmask */
};
```
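A per-speed lanes capability could take the same shape. The sketch below only shows what that might look like; the struct, table, and function names are all hypothetical, not an agreed API:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical analogue of rte_eth_fec_capa for lanes: one row per
 * supported speed, with a bitmask of valid lane counts
 * (bit n set => n lanes supported at that speed). */
struct eth_speed_lanes_capa {
	uint32_t speed;      /* link speed in Mbps */
	uint32_t lanes_capa; /* bitmask of supported lane counts */
};

/* Example table for the 50G/100G/200G rows discussed earlier. */
static const struct eth_speed_lanes_capa speed_lanes_tbl[] = {
	{  50000, (1u << 1) | (1u << 2) },             /* 50G:  1, 2    */
	{ 100000, (1u << 1) | (1u << 2) | (1u << 4) }, /* 100G: 1, 2, 4 */
	{ 200000, (1u << 2) | (1u << 4) },             /* 200G: 2, 4    */
};

/* Application-side validity check: is 'lanes' allowed at 'speed'?
 * The driver would still re-validate before touching the hardware. */
static int
speed_lanes_valid(uint32_t speed, uint32_t lanes)
{
	size_t i;
	size_t n = sizeof(speed_lanes_tbl) / sizeof(speed_lanes_tbl[0]);

	for (i = 0; i < n; i++)
		if (speed_lanes_tbl[i].speed == speed)
			return (speed_lanes_tbl[i].lanes_capa &
				(1u << lanes)) != 0;
	return 0; /* speed not reported by the device: reject */
}
```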


@Damodharam, can you please check 'rte_eth_fec_get_capability()'?

If we can have similar per-speed lane capability reporting, the user can
choose a lane value based on it, and the rest of the validation can be
done by the driver.

I will comment on your RFC.


> In cmd_config_speed_specific_parsed():
>
>    if (parse_and_check_speed_duplex(res->value1, res->value2,
>                                     &link_speed) < 0)
>        return;
> +  /* validate speed x lanes combo */
> +  if (!cmd_validate_lanes(res->id, link_speed))
> +      return;
>
> The driver can validate the rest of the internal link parameters in
> rte_eth_dev_start() before applying the config to the hardware.
> 
