Re: [PATCH v5 10/13] vdpa: Support reporting max device virtqueues

2021-12-23 Thread Si-Wei Liu




On 12/22/2021 9:39 PM, Eli Cohen wrote:

On Wed, Dec 22, 2021 at 06:43:38PM -0800, Si-Wei Liu wrote:


On 12/22/2021 6:27 PM, Jason Wang wrote:

On Thu, Dec 23, 2021 at 3:25 AM Si-Wei Liu  wrote:


On 12/21/2021 11:54 PM, Eli Cohen wrote:

On Tue, Dec 21, 2021 at 11:29:36PM -0800, Si-Wei Liu wrote:

On 12/21/2021 11:10 PM, Eli Cohen wrote:

On Wed, Dec 22, 2021 at 09:03:37AM +0200, Parav Pandit wrote:

From: Eli Cohen 
Sent: Wednesday, December 22, 2021 12:17 PM


--- a/drivers/vdpa/vdpa.c
+++ b/drivers/vdpa/vdpa.c
@@ -507,6 +507,9 @@ static int vdpa_mgmtdev_fill(const struct

vdpa_mgmt_dev *mdev, struct sk_buff *m

 err = -EMSGSIZE;
 goto msg_err;
 }
+  if (nla_put_u16(msg, VDPA_ATTR_DEV_MGMTDEV_MAX_VQS,
+  mdev->max_supported_vqs))

It still needs a default value when the field is not explicitly
filled in by the driver.


Unlikely. This can be an optional field to help the user decide the device max limit.
When max_supported_vqs is set to zero, vdpa should omit exposing it to user
space.
This is not about what you expose to userspace. It's about the number of VQs
you want to create for a specific instance of vdpa.

This value on mgmtdev indicates that a given mgmt device supports creating a
vdpa device that can have a maximum of N VQs.
The user will choose to create a device with VQs <= N depending on its vCPU
count and other factors.

You're right.
So each vendor needs to put their value there.

If I understand Parav correctly, he was suggesting not to expose
VDPA_ATTR_DEV_MGMTDEV_MAX_VQS to userspace when seeing max_supported_vqs == 0
from the driver.

I can see the reasoning, but maybe we should leave it as zero, which
means it was not reported. The user will then need to guess. I believe
other vendors will follow with an update setting this to a real value.

Unless you place a check in the vdpa core to enforce it on vdpa
creation, it's very likely to get ignored by other vendors.
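(For illustration, a minimal sketch of such a core-side check, assuming it
would live in vdpa_mgmtdev_register(); this is hypothetical, not proposed
code:)

    /* hypothetical: refuse a mgmt device that left the limit unset */
    int vdpa_mgmtdev_register(struct vdpa_mgmt_dev *mdev)
    {
            if (!mdev->max_supported_vqs)
                    return -EINVAL;
            /* ...existing registration logic... */
    }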


But meanwhile, I do wonder how users tell apart a multiqueue-supporting
parent from a single-queue mgmtdev without the aid of this field. I hope
the answer won't be to create a vdpa instance to try.


Do you see a scenario where an admin decides not to instantiate vdpa just
because it does not support MQ?

Yes, there is. If the hardware doesn't support MQ, the provisioning tool
in the mgmt software will need to fall back to the software vhost backend
with mq=on. At the time the tool performs this check, it doesn't run with
root privilege.
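(Illustrative sketch only, assuming QEMU as the hypervisor: the fallback
would mean a vhost-kernel tap netdev with the virtio-net mq property
instead of vhost-vdpa, e.g.:

    qemu-system-x86_64 ... \
        -netdev tap,id=net0,vhost=on,queues=4 \
        -device virtio-net-pci,netdev=net0,mq=on,vectors=10

where vectors = 2 * queues + 2.)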


And if the management device reports that it does support MQ, there's still
no guarantee you'll end up with an MQ net device.

I'm not sure I follow. Do you mean it may be up to the guest feature
negotiation? But the device itself is still MQ capable, isn't it?

I think we need to clarify the "device" here.

For compatibility reasons, there could be a case where mgmt doesn't
expect an MQ-capable vdpa device. So in this case, even if the parent
is MQ capable, the vdpa isn't.

Right. The mgmt software is not necessarily libvirt. Perhaps I should be
explicit and say the mgmt software we're building would definitely create an
MQ vdpa device in case of an MQ-capable parent.

OK, to recap:

1. I think what you're asking for is to see what the parent device (e.g.
mlx5_vdpa) is going to report to the virtio driver in the features, at
the management device.
So how about I add a features field to struct vdpa_mgmt_dev and return
it via netlink. Userspace will present it as shown in patch 0008 in v6.

Yes, that's exactly what I want.
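(A minimal sketch of what that could look like; the field and attribute
names below are assumptions, not necessarily what v6 uses:)

    struct vdpa_mgmt_dev {
            const struct vdpa_mgmtdev_ops *ops;
            const struct virtio_device_id *id_table;
            u64 config_attr_mask;
            u64 supported_features;  /* assumed name */
            struct list_head list;
    };

    /* in vdpa_mgmtdev_fill(), assumed attribute name: */
    if (nla_put_u64_64bit(msg, VDPA_ATTR_DEV_SUPPORTED_FEATURES,
                          mdev->supported_features, VDPA_ATTR_PAD))
            goto msg_err;

With such an attribute exposed, a tool could test the MQ bit
(VIRTIO_NET_F_MQ, bit 22) without ever creating a vdpa device.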



2. Si-Wei, you mentioned you want this information to be available to a
non-privileged user. This is the case today.

Yep. As long as it doesn't need to involve creating a vdpa to check it out.

Thanks!
-Siwei




-Siwei


Thanks


Thanks,
-Siwei


-Siwei


This is what is exposed to the user to decide the upper bound.

There has been some talk/patches of rdma virtio device.
I anticipate such a device to support more than 64K queues by the nature of rdma.
It is better to keep max_supported_vqs as u32.

Why not add it when we have it?

Sure, with that approach we will end up adding two fields (the current u16
and later another u32) due to the smaller bit width of the current one.
Either way is fine. Michael was suggesting a similar higher bit width in
other patches, so I'm bringing it up here for this field to see how he views it.

I can use u32 then.
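(A sketch of the wider variant, for illustration:)

    u32 max_supported_vqs;  /* in struct vdpa_mgmt_dev */

    if (nla_put_u32(msg, VDPA_ATTR_DEV_MGMTDEV_MAX_VQS,
                    mdev->max_supported_vqs))
            goto msg_err;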




Re: [PATCH v5 10/13] vdpa: Support reporting max device virtqueues

2021-12-22 Thread Si-Wei Liu




On 12/22/2021 6:27 PM, Jason Wang wrote:

On Thu, Dec 23, 2021 at 3:25 AM Si-Wei Liu  wrote:



On 12/21/2021 11:54 PM, Eli Cohen wrote:

On Tue, Dec 21, 2021 at 11:29:36PM -0800, Si-Wei Liu wrote:

On 12/21/2021 11:10 PM, Eli Cohen wrote:

On Wed, Dec 22, 2021 at 09:03:37AM +0200, Parav Pandit wrote:

From: Eli Cohen 
Sent: Wednesday, December 22, 2021 12:17 PM


--- a/drivers/vdpa/vdpa.c
+++ b/drivers/vdpa/vdpa.c
@@ -507,6 +507,9 @@ static int vdpa_mgmtdev_fill(const struct

vdpa_mgmt_dev *mdev, struct sk_buff *m

err = -EMSGSIZE;
goto msg_err;
}
+  if (nla_put_u16(msg, VDPA_ATTR_DEV_MGMTDEV_MAX_VQS,
+  mdev->max_supported_vqs))

It still needs a default value when the field is not explicitly
filled in by the driver.


Unlikely. This can be an optional field to help the user decide the device max limit.
When max_supported_vqs is set to zero, vdpa should omit exposing it to user
space.
This is not about what you expose to userspace. It's about the number of VQs
you want to create for a specific instance of vdpa.

This value on mgmtdev indicates that a given mgmt device supports creating a
vdpa device that can have a maximum of N VQs.
The user will choose to create a device with VQs <= N depending on its vCPU
count and other factors.

You're right.
So each vendor needs to put their value there.

If I understand Parav correctly, he was suggesting not to expose
VDPA_ATTR_DEV_MGMTDEV_MAX_VQS to userspace when seeing max_supported_vqs == 0
from the driver.

I can see the reasoning, but maybe we should leave it as zero, which
means it was not reported. The user will then need to guess. I believe
other vendors will follow with an update setting this to a real value.

Unless you place a check in the vdpa core to enforce it on vdpa
creation, it's very likely to get ignored by other vendors.


But meanwhile, I do wonder how users tell apart a multiqueue-supporting
parent from a single-queue mgmtdev without the aid of this field. I hope
the answer won't be to create a vdpa instance to try.


Do you see a scenario where an admin decides not to instantiate vdpa just
because it does not support MQ?

Yes, there is. If the hardware doesn't support MQ, the provisioning tool
in the mgmt software will need to fall back to the software vhost backend
with mq=on. At the time the tool performs this check, it doesn't run with
root privilege.


And if the management device reports that it does support MQ, there's still
no guarantee you'll end up with an MQ net device.

I'm not sure I follow. Do you mean it may be up to the guest feature
negotiation? But the device itself is still MQ capable, isn't it?

I think we need to clarify the "device" here.

For compatibility reasons, there could be a case where mgmt doesn't
expect an MQ-capable vdpa device. So in this case, even if the parent
is MQ capable, the vdpa isn't.
Right. The mgmt software is not necessarily libvirt. Perhaps I should be
explicit and say the mgmt software we're building would definitely create an
MQ vdpa device in case of an MQ-capable parent.


-Siwei



Thanks


Thanks,
-Siwei




-Siwei


This is what is exposed to the user to decide the upper bound.

There has been some talk/patches of rdma virtio device.
I anticipate such a device to support more than 64K queues by the nature of rdma.
It is better to keep max_supported_vqs as u32.

Why not add it when we have it?

Sure, with that approach we will end up adding two fields (the current u16
and later another u32) due to the smaller bit width of the current one.
Either way is fine. Michael was suggesting a similar higher bit width in
other patches, so I'm bringing it up here for this field to see how he views it.

I can use u32 then.




Re: [PATCH v5 10/13] vdpa: Support reporting max device virtqueues

2021-12-22 Thread Jason Wang
On Thu, Dec 23, 2021 at 3:25 AM Si-Wei Liu  wrote:
>
>
>
> On 12/21/2021 11:54 PM, Eli Cohen wrote:
> > On Tue, Dec 21, 2021 at 11:29:36PM -0800, Si-Wei Liu wrote:
> >>
> >> On 12/21/2021 11:10 PM, Eli Cohen wrote:
> >>> On Wed, Dec 22, 2021 at 09:03:37AM +0200, Parav Pandit wrote:
> > From: Eli Cohen 
> > Sent: Wednesday, December 22, 2021 12:17 PM
> >
>  --- a/drivers/vdpa/vdpa.c
>  +++ b/drivers/vdpa/vdpa.c
>  @@ -507,6 +507,9 @@ static int vdpa_mgmtdev_fill(const struct
> >>> vdpa_mgmt_dev *mdev, struct sk_buff *m
> err = -EMSGSIZE;
> goto msg_err;
> }
>  +  if (nla_put_u16(msg, VDPA_ATTR_DEV_MGMTDEV_MAX_VQS,
>  +  mdev->max_supported_vqs))
> >>> It still needs a default value when the field is not explicitly
> >>> filled in by the driver.
> >>>
> >> Unlikely. This can be an optional field to help the user decide the
> >> device max limit. When max_supported_vqs is set to zero, vdpa should
> >> omit exposing it to user space.
> > This is not about what you expose to userspace. It's about the number 
> > of VQs
> > you want to create for a specific instance of vdpa.
>  This value on mgmtdev indicates that a given mgmt device supports
>  creating a vdpa device that can have a maximum of N VQs.
>  The user will choose to create a device with VQs <= N depending on its
>  vCPU count and other factors.
> >>> You're right.
> >>> So each vendor needs to put their value there.
> >> If I understand Parav correctly, he was suggesting not to expose
> >> VDPA_ATTR_DEV_MGMTDEV_MAX_VQS to userspace when seeing
> >> max_supported_vqs == 0 from the driver.
> > I can see the reasoning, but maybe we should leave it as zero, which
> > means it was not reported. The user will then need to guess. I believe
> > other vendors will follow with an update setting this to a real value.
> Unless you place a check in the vdpa core to enforce it on vdpa
> creation, it's very likely to get ignored by other vendors.
>
> >
> >> But meanwhile, I do wonder how users tell apart a multiqueue-supporting
> >> parent from a single-queue mgmtdev without the aid of this field. I
> >> hope the answer won't be to create a vdpa instance to try.
> >>
> > Do you see a scenario where an admin decides not to instantiate vdpa just
> > because it does not support MQ?
> Yes, there is. If the hardware doesn't support MQ, the provisioning tool
> in the mgmt software will need to fall back to the software vhost backend
> with mq=on. At the time the tool performs this check, it doesn't run with
> root privilege.
>
> >
> > And if the management device reports that it does support MQ, there's
> > still no guarantee you'll end up with an MQ net device.
> I'm not sure I follow. Do you mean it may be up to the guest feature
> negotiation? But the device itself is still MQ capable, isn't it?

I think we need to clarify the "device" here.

For compatibility reasons, there could be a case where mgmt doesn't
expect an MQ-capable vdpa device. So in this case, even if the parent
is MQ capable, the vdpa isn't.

Thanks

>
> Thanks,
> -Siwei
>
> >
> >
> >> -Siwei
> >>
>  This is what is exposed to the user to decide the upper bound.
> >> There has been some talk/patches of rdma virtio device.
> >> I anticipate such a device to support more than 64K queues by the
> >> nature of rdma.
> >> It is better to keep max_supported_vqs as u32.
> > Why not add it when we have it?
>  Sure, with that approach we will end up adding two fields (the current
>  u16 and later another u32) due to the smaller bit width of the current one.
>  Either way is fine. Michael was suggesting a similar higher bit width in
>  other patches, so I'm bringing it up here for this field to see how he
>  views it.
> >>> I can use u32 then.
>



Re: [PATCH v5 10/13] vdpa: Support reporting max device virtqueues

2021-12-22 Thread Si-Wei Liu




On 12/21/2021 11:54 PM, Eli Cohen wrote:

On Tue, Dec 21, 2021 at 11:29:36PM -0800, Si-Wei Liu wrote:


On 12/21/2021 11:10 PM, Eli Cohen wrote:

On Wed, Dec 22, 2021 at 09:03:37AM +0200, Parav Pandit wrote:

From: Eli Cohen 
Sent: Wednesday, December 22, 2021 12:17 PM


--- a/drivers/vdpa/vdpa.c
+++ b/drivers/vdpa/vdpa.c
@@ -507,6 +507,9 @@ static int vdpa_mgmtdev_fill(const struct

vdpa_mgmt_dev *mdev, struct sk_buff *m

err = -EMSGSIZE;
goto msg_err;
}
+   if (nla_put_u16(msg, VDPA_ATTR_DEV_MGMTDEV_MAX_VQS,
+   mdev->max_supported_vqs))

It still needs a default value when the field is not explicitly
filled in by the driver.


Unlikely. This can be an optional field to help the user decide the device max limit.
When max_supported_vqs is set to zero, vdpa should omit exposing it to user
space.
This is not about what you expose to userspace. It's about the number of VQs
you want to create for a specific instance of vdpa.

This value on mgmtdev indicates that a given mgmt device supports creating a
vdpa device that can have a maximum of N VQs.
The user will choose to create a device with VQs <= N depending on its vCPU
count and other factors.

You're right.
So each vendor needs to put their value there.

If I understand Parav correctly, he was suggesting not to expose
VDPA_ATTR_DEV_MGMTDEV_MAX_VQS to userspace when seeing max_supported_vqs == 0
from the driver.

I can see the reasoning, but maybe we should leave it as zero, which
means it was not reported. The user will then need to guess. I believe
other vendors will follow with an update setting this to a real value.
Unless you place a check in the vdpa core to enforce it on vdpa
creation, it's very likely to get ignored by other vendors.





But meanwhile, I do wonder how users tell apart a multiqueue-supporting
parent from a single-queue mgmtdev without the aid of this field. I hope
the answer won't be to create a vdpa instance to try.


Do you see a scenario where an admin decides not to instantiate vdpa just
because it does not support MQ?
Yes, there is. If the hardware doesn't support MQ, the provisioning tool
in the mgmt software will need to fall back to the software vhost backend
with mq=on. At the time the tool performs this check, it doesn't run with
root privilege.




And if the management device reports that it does support MQ, there's still
no guarantee you'll end up with an MQ net device.
I'm not sure I follow. Do you mean it may be up to the guest feature 
negotiation? But the device itself is still MQ capable, isn't it?


Thanks,
-Siwei





-Siwei


This is what is exposed to the user to decide the upper bound.

There has been some talk/patches of rdma virtio device.
I anticipate such a device to support more than 64K queues by the nature of rdma.
It is better to keep max_supported_vqs as u32.

Why not add it when we have it?

Sure, with that approach we will end up adding two fields (the current u16
and later another u32) due to the smaller bit width of the current one.
Either way is fine. Michael was suggesting a similar higher bit width in
other patches, so I'm bringing it up here for this field to see how he views it.

I can use u32 then.




Re: [PATCH v5 10/13] vdpa: Support reporting max device virtqueues

2021-12-21 Thread Si-Wei Liu




On 12/21/2021 11:10 PM, Eli Cohen wrote:

On Wed, Dec 22, 2021 at 09:03:37AM +0200, Parav Pandit wrote:



From: Eli Cohen 
Sent: Wednesday, December 22, 2021 12:17 PM


--- a/drivers/vdpa/vdpa.c
+++ b/drivers/vdpa/vdpa.c
@@ -507,6 +507,9 @@ static int vdpa_mgmtdev_fill(const struct

vdpa_mgmt_dev *mdev, struct sk_buff *m

err = -EMSGSIZE;
goto msg_err;
}
+   if (nla_put_u16(msg, VDPA_ATTR_DEV_MGMTDEV_MAX_VQS,
+   mdev->max_supported_vqs))

It still needs a default value when the field is not explicitly
filled in by the driver.


Unlikely. This can be an optional field to help the user decide the device max limit.
When max_supported_vqs is set to zero, vdpa should omit exposing it to user
space.
This is not about what you expose to userspace. It's about the number of VQs
you want to create for a specific instance of vdpa.

This value on mgmtdev indicates that a given mgmt device supports creating a
vdpa device that can have a maximum of N VQs.
The user will choose to create a device with VQs <= N depending on its vCPU
count and other factors.

You're right.
So each vendor needs to put their value there.
If I understand Parav correctly, he was suggesting not to expose
VDPA_ATTR_DEV_MGMTDEV_MAX_VQS to userspace when seeing max_supported_vqs == 0
from the driver.


But meanwhile, I do wonder how users tell apart a multiqueue-supporting
parent from a single-queue mgmtdev without the aid of this field. I hope
the answer won't be to create a vdpa instance to try.


-Siwei




This is what is exposed to the user to decide the upper bound.

There has been some talk/patches of rdma virtio device.
I anticipate such a device to support more than 64K queues by the nature of rdma.
It is better to keep max_supported_vqs as u32.

Why not add it when we have it?

Sure, with that approach we will end up adding two fields (the current u16
and later another u32) due to the smaller bit width of the current one.
Either way is fine. Michael was suggesting a similar higher bit width in
other patches, so I'm bringing it up here for this field to see how he views it.

I can use u32 then.




RE: [PATCH v5 10/13] vdpa: Support reporting max device virtqueues

2021-12-21 Thread Parav Pandit via Virtualization



> From: Eli Cohen 
> Sent: Wednesday, December 22, 2021 12:17 PM
> 
> > > > --- a/drivers/vdpa/vdpa.c
> > > > +++ b/drivers/vdpa/vdpa.c
> > > > @@ -507,6 +507,9 @@ static int vdpa_mgmtdev_fill(const struct
> > > vdpa_mgmt_dev *mdev, struct sk_buff *m
> > > > err = -EMSGSIZE;
> > > > goto msg_err;
> > > > }
> > > > +   if (nla_put_u16(msg, VDPA_ATTR_DEV_MGMTDEV_MAX_VQS,
> > > > +   mdev->max_supported_vqs))
> > > It still needs a default value when the field is not explicitly
> > > filled in by the driver.
> > >
> > Unlikely. This can be an optional field to help the user decide the
> > device max limit. When max_supported_vqs is set to zero, vdpa should
> > omit exposing it to user space.
> >
> 
> This is not about what you expose to userspace. It's about the number of VQs
> you want to create for a specific instance of vdpa.
This value on mgmtdev indicates that a given mgmt device supports creating a
vdpa device that can have a maximum of N VQs.
The user will choose to create a device with VQs <= N depending on its vCPU
count and other factors.

This is what is exposed to the user to decide the upper bound.
> > There has been some talk/patches of rdma virtio device.
> > I anticipate such a device to support more than 64K queues by the nature of rdma.
> > It is better to keep max_supported_vqs as u32.
> 
> Why not add it when we have it?
Sure, with that approach we will end up adding two fields (the current u16
and later another u32) due to the smaller bit width of the current one.
Either way is fine. Michael was suggesting a similar higher bit width in
other patches, so I'm bringing it up here for this field to see how he views it.


Re: [PATCH v5 10/13] vdpa: Support reporting max device virtqueues

2021-12-21 Thread Si-Wei Liu




On 12/21/2021 9:06 PM, Parav Pandit wrote:



From: Si-Wei Liu 
Sent: Wednesday, December 22, 2021 7:31 AM

On 12/21/2021 9:20 AM, Eli Cohen wrote:

Add max_supported_vqs field to struct vdpa_mgmt_dev. Upstream drivers
need to fill this value according to the device capabilities.

This value is reported back in a netlink message when showing
management devices.

Example:

$ vdpa dev show

s/dev/mgmtdev/

and,

vdpa mgmtdev show

remove this line.

auxiliary/mlx5_core.sf.1:
supported_classes net
max_supported_vqs 256

It should be on the same line.
Also please show the JSON output.


Not consistent with the example in patch #11 in the series.

Signed-off-by: Eli Cohen 
---
   drivers/vdpa/vdpa.c   | 3 +++
   include/linux/vdpa.h  | 1 +
   include/uapi/linux/vdpa.h | 1 +
   3 files changed, 5 insertions(+)

diff --git a/drivers/vdpa/vdpa.c b/drivers/vdpa/vdpa.c index
eb223bec5209..4b649125a038 100644
--- a/drivers/vdpa/vdpa.c
+++ b/drivers/vdpa/vdpa.c
@@ -507,6 +507,9 @@ static int vdpa_mgmtdev_fill(const struct

vdpa_mgmt_dev *mdev, struct sk_buff *m

err = -EMSGSIZE;
goto msg_err;
}
+   if (nla_put_u16(msg, VDPA_ATTR_DEV_MGMTDEV_MAX_VQS,
+   mdev->max_supported_vqs))

It still needs a default value when the field is not explicitly filled in by the
driver.


Unlikely. This can be an optional field to help the user decide the device max limit.
When max_supported_vqs is set to zero, vdpa should omit exposing it to user
space.
That'd be okay. I thought this field might also be useful for the user to
tell if the parent can support MQ.


-Siwei




--- a/include/linux/vdpa.h
+++ b/include/linux/vdpa.h
@@ -447,6 +447,7 @@ struct vdpa_mgmt_dev {
const struct vdpa_mgmtdev_ops *ops;
const struct virtio_device_id *id_table;
u64 config_attr_mask;
+   u16 max_supported_vqs;

This breaks the natural alignment and creates holes in the struct.
Please move it to be the last entry in the struct, after list.

There has been some talk/patches of rdma virtio device.
I anticipate such a device to support more than 64K queues by the nature of rdma.
It is better to keep max_supported_vqs as u32.


struct list_head list;
   };

diff --git a/include/uapi/linux/vdpa.h b/include/uapi/linux/vdpa.h
index db3738ef3beb..995257c6bf2a 100644
--- a/include/uapi/linux/vdpa.h
+++ b/include/uapi/linux/vdpa.h
@@ -44,6 +44,7 @@ enum vdpa_attr {
VDPA_ATTR_DEV_NET_CFG_MTU,  /* u16 */

VDPA_ATTR_DEV_NEGOTIATED_FEATURES,  /* u64 */
+   VDPA_ATTR_DEV_MGMTDEV_MAX_VQS,  /* u16 */
/* new attributes must be added above here */
VDPA_ATTR_MAX,
   };




RE: [PATCH v5 10/13] vdpa: Support reporting max device virtqueues

2021-12-21 Thread Parav Pandit via Virtualization



> From: Si-Wei Liu 
> Sent: Wednesday, December 22, 2021 7:31 AM
> 
> On 12/21/2021 9:20 AM, Eli Cohen wrote:
> > Add max_supported_vqs field to struct vdpa_mgmt_dev. Upstream drivers
> > need to fill this value according to the device capabilities.
> >
> > This value is reported back in a netlink message when showing
> > management devices.
> >
> > Example:
> >
> > $ vdpa dev show
> s/dev/mgmtdev/
> 
> and,
> > vdpa mgmtdev show
> remove this line.
> > auxiliary/mlx5_core.sf.1:
> >supported_classes net
> >max_supported_vqs 256
It should be on the same line.
Also please show the JSON output.
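(For illustration only, roughly what the JSON form could look like with the
iproute2 vdpa tool's -jp flags; the exact shape is up to the tool:

    $ vdpa mgmtdev show auxiliary/mlx5_core.sf.1 -jp
    {
        "mgmtdev": {
            "auxiliary/mlx5_core.sf.1": {
                "supported_classes": [ "net" ],
                "max_supported_vqs": 256
            }
        }
    })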

> Not consistent with the example in patch #11 in the series.
> >
> > Signed-off-by: Eli Cohen 
> > ---
> >   drivers/vdpa/vdpa.c   | 3 +++
> >   include/linux/vdpa.h  | 1 +
> >   include/uapi/linux/vdpa.h | 1 +
> >   3 files changed, 5 insertions(+)
> >
> > diff --git a/drivers/vdpa/vdpa.c b/drivers/vdpa/vdpa.c index
> > eb223bec5209..4b649125a038 100644
> > --- a/drivers/vdpa/vdpa.c
> > +++ b/drivers/vdpa/vdpa.c
> > @@ -507,6 +507,9 @@ static int vdpa_mgmtdev_fill(const struct
> vdpa_mgmt_dev *mdev, struct sk_buff *m
> > err = -EMSGSIZE;
> > goto msg_err;
> > }
> > +   if (nla_put_u16(msg, VDPA_ATTR_DEV_MGMTDEV_MAX_VQS,
> > +   mdev->max_supported_vqs))
> It still needs a default value when the field is not explicitly filled
> in by the driver.
> 
Unlikely. This can be an optional field to help the user decide the device max limit.
When max_supported_vqs is set to zero, vdpa should omit exposing it to user
space.
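(A minimal sketch of that suggestion, not the committed code:)

    if (mdev->max_supported_vqs &&
        nla_put_u16(msg, VDPA_ATTR_DEV_MGMTDEV_MAX_VQS,
                    mdev->max_supported_vqs))
            goto msg_err;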

> > --- a/include/linux/vdpa.h
> > +++ b/include/linux/vdpa.h
> > @@ -447,6 +447,7 @@ struct vdpa_mgmt_dev {
> > const struct vdpa_mgmtdev_ops *ops;
> > const struct virtio_device_id *id_table;
> > u64 config_attr_mask;
> > +   u16 max_supported_vqs;
This breaks the natural alignment and creates holes in the struct.
Please move it to be the last entry in the struct, after list.
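(Concretely, a sketch of the layout concern, assuming a 64-bit build;
the offsets are pahole-style annotations:

    struct vdpa_mgmt_dev {
            const struct vdpa_mgmtdev_ops *ops;       /* offset  0 */
            const struct virtio_device_id *id_table;  /* offset  8 */
            u64 config_attr_mask;                     /* offset 16 */
            u16 max_supported_vqs;                    /* offset 24 */
            /* 6-byte hole: list needs pointer alignment */
            struct list_head list;                    /* offset 32 */
    };

Moving the u16 after list keeps the existing fields packed and leaves the
padding at the tail.)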

There has been some talk/patches of rdma virtio device.
I anticipate such a device to support more than 64K queues by the nature of rdma.
It is better to keep max_supported_vqs as u32.

> > struct list_head list;
> >   };
> >
> > diff --git a/include/uapi/linux/vdpa.h b/include/uapi/linux/vdpa.h
> > index db3738ef3beb..995257c6bf2a 100644
> > --- a/include/uapi/linux/vdpa.h
> > +++ b/include/uapi/linux/vdpa.h
> > @@ -44,6 +44,7 @@ enum vdpa_attr {
> > VDPA_ATTR_DEV_NET_CFG_MTU,  /* u16 */
> >
> > VDPA_ATTR_DEV_NEGOTIATED_FEATURES,  /* u64 */
> > +   VDPA_ATTR_DEV_MGMTDEV_MAX_VQS,  /* u16 */
> > /* new attributes must be added above here */
> > VDPA_ATTR_MAX,
> >   };



Re: [PATCH v5 10/13] vdpa: Support reporting max device virtqueues

2021-12-21 Thread Si-Wei Liu




On 12/21/2021 9:20 AM, Eli Cohen wrote:

Add max_supported_vqs field to struct vdpa_mgmt_dev. Upstream drivers
need to fill this value according to the device capabilities.

This value is reported back in a netlink message when showing management
devices.

Example:

$ vdpa dev show

s/dev/mgmtdev/

and,

vdpa mgmtdev show

remove this line.

auxiliary/mlx5_core.sf.1:
   supported_classes net
   max_supported_vqs 256

Not consistent with the example in patch #11 in the series.


Signed-off-by: Eli Cohen 
---
  drivers/vdpa/vdpa.c   | 3 +++
  include/linux/vdpa.h  | 1 +
  include/uapi/linux/vdpa.h | 1 +
  3 files changed, 5 insertions(+)

diff --git a/drivers/vdpa/vdpa.c b/drivers/vdpa/vdpa.c
index eb223bec5209..4b649125a038 100644
--- a/drivers/vdpa/vdpa.c
+++ b/drivers/vdpa/vdpa.c
@@ -507,6 +507,9 @@ static int vdpa_mgmtdev_fill(const struct vdpa_mgmt_dev 
*mdev, struct sk_buff *m
err = -EMSGSIZE;
goto msg_err;
}
+   if (nla_put_u16(msg, VDPA_ATTR_DEV_MGMTDEV_MAX_VQS,
+   mdev->max_supported_vqs))
It still needs a default value when the field is not explicitly filled 
in by the driver.


-Siwei


+   goto msg_err;
  
  	genlmsg_end(msg, hdr);

return 0;
diff --git a/include/linux/vdpa.h b/include/linux/vdpa.h
index 47e2b780e4bc..b575f71fa5e7 100644
--- a/include/linux/vdpa.h
+++ b/include/linux/vdpa.h
@@ -447,6 +447,7 @@ struct vdpa_mgmt_dev {
const struct vdpa_mgmtdev_ops *ops;
const struct virtio_device_id *id_table;
u64 config_attr_mask;
+   u16 max_supported_vqs;
struct list_head list;
  };
  
diff --git a/include/uapi/linux/vdpa.h b/include/uapi/linux/vdpa.h

index db3738ef3beb..995257c6bf2a 100644
--- a/include/uapi/linux/vdpa.h
+++ b/include/uapi/linux/vdpa.h
@@ -44,6 +44,7 @@ enum vdpa_attr {
VDPA_ATTR_DEV_NET_CFG_MTU,  /* u16 */
  
  	VDPA_ATTR_DEV_NEGOTIATED_FEATURES,	/* u64 */

+   VDPA_ATTR_DEV_MGMTDEV_MAX_VQS,  /* u16 */
/* new attributes must be added above here */
VDPA_ATTR_MAX,
  };

