[dpdk-dev] [PATCH v5 resend 07/12] virtio: resolve for control queue

2015-10-13 Thread Yuanhan Liu
On Mon, Oct 12, 2015 at 10:58:17PM +0200, Steffen Bauch wrote:
> On 10/12/2015 10:39 AM, Yuanhan Liu wrote:
> >Hi,
> >
> >I just recognized that this dead loop is the same one that I have
> >experienced (see
> >http://dpdk.org/ml/archives/dev/2015-October/024737.html for
> >reference). Just applying the changes in this patch (only 07/12)
> >will not fix the dead loop at least in my setup.
> >Try to enable CONFIG_RTE_LIBRTE_VIRTIO_DEBUG_INIT, and dump more log?
> I enabled the additional debug output. The first try was without any
> additional changes on master, but it still blocked. The second try was
> with
> 
> [dpdk-dev] [PATCH v6 06/13] virtio: read virtio_net_config correctly
> 
> applied, but same result.
> 
> If you want to recreate my setup, just follow the instructions in
> 
> http://dpdk.org/ml/archives/dev/2015-October/024737.html
> 
> 
> vagrant at vagrant-ubuntu-vivid-64:~/dpdk$ git status
> On branch master
> Your branch is up-to-date with 'origin/master'.
> Changes not staged for commit:
>   (use "git add ..." to update what will be committed)
>   (use "git checkout -- ..." to discard changes in working directory)
> 
> modified:   config/defconfig_x86_64-native-linuxapp-gcc
> 
> ..

I don't have a clear clue there, but you could try Huawei's solution first;
it's likely that it will fix your problem.

If not, would you please try to reproduce it with qemu (you were using
VirtualBox, right)? Then dump the whole command line here so that I
can try to reproduce and debug it on my side. Sorry, I don't use
VirtualBox or Vagrant.

--yliu

> 
> vagrant at vagrant-ubuntu-vivid-64:~/dpdk/x86_64-native-linuxapp-gcc/app$
> sudo ./testpmd -b 0000:00:03.0 -c 3 -n 1 -- -i
> EAL: Detected lcore 0 as core 0 on socket 0
> EAL: Detected lcore 1 as core 1 on socket 0
> EAL: Support maximum 128 logical core(s) by configuration.
> EAL: Detected 2 lcore(s)
> EAL: VFIO modules not all loaded, skip VFIO support...
> EAL: Setting up physically contiguous memory...
> EAL: Ask a virtual area of 0x400000 bytes
> EAL: Virtual area found at 0x7f2a3a800000 (size = 0x400000)
> EAL: Ask a virtual area of 0xe000000 bytes
> EAL: Virtual area found at 0x7f2a2c600000 (size = 0xe000000)
> EAL: Ask a virtual area of 0x30c00000 bytes
> EAL: Virtual area found at 0x7f29fb800000 (size = 0x30c00000)
> EAL: Ask a virtual area of 0x400000 bytes
> EAL: Virtual area found at 0x7f29fb200000 (size = 0x400000)
> EAL: Ask a virtual area of 0xa00000 bytes
> EAL: Virtual area found at 0x7f29fa600000 (size = 0xa00000)
> EAL: Ask a virtual area of 0x200000 bytes
> EAL: Virtual area found at 0x7f29fa200000 (size = 0x200000)
> EAL: Requesting 512 pages of size 2MB from socket 0
> EAL: TSC frequency is ~2198491 KHz
> EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using
> unreliable clock cycles !
> EAL: Master lcore 0 is ready (tid=3c9938c0;cpuset=[0])
> EAL: lcore 1 is ready (tid=fa1ff700;cpuset=[1])
> EAL: PCI device 0000:00:03.0 on NUMA socket -1
> EAL:   probe driver: 1af4:1000 rte_virtio_pmd
> EAL:   Device is blacklisted, not initializing
> EAL: PCI device 0000:00:08.0 on NUMA socket -1
> EAL:   probe driver: 1af4:1000 rte_virtio_pmd
> PMD: parse_sysfs_value(): parse_sysfs_value(): cannot open sysfs
> value /sys/bus/pci/devices/0000:00:08.0/uio/uio0/portio/port0/size
> PMD: virtio_resource_init_by_uio(): virtio_resource_init_by_uio():
> cannot parse size
> PMD: virtio_resource_init_by_ioports(): PCI Port IO found
> start=0xd040 with size=0x20
> PMD: virtio_negotiate_features(): guest_features before negotiate = cf8020
> PMD: virtio_negotiate_features(): host_features before negotiate = 410fdda3
> PMD: virtio_negotiate_features(): features after negotiate = f8020
> PMD: eth_virtio_dev_init(): PORT MAC: 08:00:27:CC:DE:CD
> PMD: eth_virtio_dev_init(): VIRTIO_NET_F_MQ is not supported
> PMD: virtio_dev_cq_queue_setup():  >>
> PMD: virtio_dev_queue_setup(): selecting queue: 2
> PMD: virtio_dev_queue_setup(): vq_size: 16 nb_desc:0
> PMD: virtio_dev_queue_setup(): vring_size: 4228, rounded_vring_size: 8192
> PMD: virtio_dev_queue_setup(): vq->vq_ring_mem:  0x67b54000
> PMD: virtio_dev_queue_setup(): vq->vq_ring_virt_mem: 0x7f29fb354000
> PMD: eth_virtio_dev_init(): config->max_virtqueue_pairs=1
> PMD: eth_virtio_dev_init(): config->status=1
> PMD: eth_virtio_dev_init(): PORT MAC: 08:00:27:CC:DE:CD
> PMD: eth_virtio_dev_init(): hw->max_rx_queues=1 hw->max_tx_queues=1
> PMD: eth_virtio_dev_init(): port 0 vendorID=0x1af4 deviceID=0x1000
> PMD: virtio_dev_vring_start():  >>
> EAL: PCI device 0000:00:09.0 on NUMA socket -1
> EAL:   probe driver: 1af4:1000 rte_virtio_pmd
> PMD: parse_sysfs_value(): parse_sysfs_value(): cannot open sysfs
> value /sys/bus/pci/devices/0000:00:09.0/uio/uio1/portio/port0/size
> PMD: virtio_resource_init_by_uio(): virtio_resource_init_by_uio():
> cannot parse size
> PMD: virtio_resource_init_by_ioports(): PCI Port IO found
> start=0xd060 with size=0x20
> PMD: virtio_negotiate_features(): 

[dpdk-dev] [PATCH v5 resend 07/12] virtio: resolve for control queue

2015-10-12 Thread Steffen Bauch
On 10/12/2015 10:39 AM, Yuanhan Liu wrote:
> Hi,
>
> I just recognized that this dead loop is the same one that I have
> experienced (see
> http://dpdk.org/ml/archives/dev/2015-October/024737.html for
> reference). Just applying the changes in this patch (only 07/12)
> will not fix the dead loop at least in my setup.
> Try to enable CONFIG_RTE_LIBRTE_VIRTIO_DEBUG_INIT, and dump more log?
I enabled the additional debug output. The first try was without any
additional changes on master, but it still blocked. The second try was with

[dpdk-dev] [PATCH v6 06/13] virtio: read virtio_net_config correctly

applied, but same result.

If you want to recreate my setup, just follow the instructions in

http://dpdk.org/ml/archives/dev/2015-October/024737.html


vagrant at vagrant-ubuntu-vivid-64:~/dpdk$ git status
On branch master
Your branch is up-to-date with 'origin/master'.
Changes not staged for commit:
   (use "git add ..." to update what will be committed)
   (use "git checkout -- ..." to discard changes in working directory)

 modified:   config/defconfig_x86_64-native-linuxapp-gcc

..

vagrant at vagrant-ubuntu-vivid-64:~/dpdk/x86_64-native-linuxapp-gcc/app$ 
sudo ./testpmd -b 0000:00:03.0 -c 3 -n 1 -- -i
EAL: Detected lcore 0 as core 0 on socket 0
EAL: Detected lcore 1 as core 1 on socket 0
EAL: Support maximum 128 logical core(s) by configuration.
EAL: Detected 2 lcore(s)
EAL: VFIO modules not all loaded, skip VFIO support...
EAL: Setting up physically contiguous memory...
EAL: Ask a virtual area of 0x400000 bytes
EAL: Virtual area found at 0x7f2a3a800000 (size = 0x400000)
EAL: Ask a virtual area of 0xe000000 bytes
EAL: Virtual area found at 0x7f2a2c600000 (size = 0xe000000)
EAL: Ask a virtual area of 0x30c00000 bytes
EAL: Virtual area found at 0x7f29fb800000 (size = 0x30c00000)
EAL: Ask a virtual area of 0x400000 bytes
EAL: Virtual area found at 0x7f29fb200000 (size = 0x400000)
EAL: Ask a virtual area of 0xa00000 bytes
EAL: Virtual area found at 0x7f29fa600000 (size = 0xa00000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f29fa200000 (size = 0x200000)
EAL: Requesting 512 pages of size 2MB from socket 0
EAL: TSC frequency is ~2198491 KHz
EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using 
unreliable clock cycles !
EAL: Master lcore 0 is ready (tid=3c9938c0;cpuset=[0])
EAL: lcore 1 is ready (tid=fa1ff700;cpuset=[1])
EAL: PCI device 0000:00:03.0 on NUMA socket -1
EAL:   probe driver: 1af4:1000 rte_virtio_pmd
EAL:   Device is blacklisted, not initializing
EAL: PCI device 0000:00:08.0 on NUMA socket -1
EAL:   probe driver: 1af4:1000 rte_virtio_pmd
PMD: parse_sysfs_value(): parse_sysfs_value(): cannot open sysfs value 
/sys/bus/pci/devices/0000:00:08.0/uio/uio0/portio/port0/size
PMD: virtio_resource_init_by_uio(): virtio_resource_init_by_uio(): 
cannot parse size
PMD: virtio_resource_init_by_ioports(): PCI Port IO found start=0xd040 
with size=0x20
PMD: virtio_negotiate_features(): guest_features before negotiate = cf8020
PMD: virtio_negotiate_features(): host_features before negotiate = 410fdda3
PMD: virtio_negotiate_features(): features after negotiate = f8020
PMD: eth_virtio_dev_init(): PORT MAC: 08:00:27:CC:DE:CD
PMD: eth_virtio_dev_init(): VIRTIO_NET_F_MQ is not supported
PMD: virtio_dev_cq_queue_setup():  >>
PMD: virtio_dev_queue_setup(): selecting queue: 2
PMD: virtio_dev_queue_setup(): vq_size: 16 nb_desc:0
PMD: virtio_dev_queue_setup(): vring_size: 4228, rounded_vring_size: 8192
PMD: virtio_dev_queue_setup(): vq->vq_ring_mem:  0x67b54000
PMD: virtio_dev_queue_setup(): vq->vq_ring_virt_mem: 0x7f29fb354000
PMD: eth_virtio_dev_init(): config->max_virtqueue_pairs=1
PMD: eth_virtio_dev_init(): config->status=1
PMD: eth_virtio_dev_init(): PORT MAC: 08:00:27:CC:DE:CD
PMD: eth_virtio_dev_init(): hw->max_rx_queues=1 hw->max_tx_queues=1
PMD: eth_virtio_dev_init(): port 0 vendorID=0x1af4 deviceID=0x1000
PMD: virtio_dev_vring_start():  >>
EAL: PCI device 0000:00:09.0 on NUMA socket -1
EAL:   probe driver: 1af4:1000 rte_virtio_pmd
PMD: parse_sysfs_value(): parse_sysfs_value(): cannot open sysfs value 
/sys/bus/pci/devices/0000:00:09.0/uio/uio1/portio/port0/size
PMD: virtio_resource_init_by_uio(): virtio_resource_init_by_uio(): 
cannot parse size
PMD: virtio_resource_init_by_ioports(): PCI Port IO found start=0xd060 
with size=0x20
PMD: virtio_negotiate_features(): guest_features before negotiate = cf8020
PMD: virtio_negotiate_features(): host_features before negotiate = 410fdda3
PMD: virtio_negotiate_features(): features after negotiate = f8020
PMD: eth_virtio_dev_init(): PORT MAC: 08:00:27:07:D3:F5
PMD: eth_virtio_dev_init(): VIRTIO_NET_F_MQ is not supported
PMD: virtio_dev_cq_queue_setup():  >>
PMD: virtio_dev_queue_setup(): selecting queue: 2
PMD: virtio_dev_queue_setup(): vq_size: 16 nb_desc:0
PMD: virtio_dev_queue_setup(): vring_size: 4228, rounded_vring_size: 8192
PMD: virtio_dev_queue_setup(): vq->vq_ring_mem:  0x67b5
PMD: virtio_dev_queue_setup(): 
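
As an aside, the feature values in the log above are internally consistent: in this PMD the negotiated feature set is effectively the bitwise AND of the guest and host feature masks, so "features after negotiate = f8020" follows from the two lines printed before it. A minimal check of the arithmetic (not DPDK code, just the values from the log):

#include <assert.h>
#include <stdint.h>

int main(void)
{
    uint32_t guest_features = 0xcf8020;    /* "guest_features before negotiate" */
    uint32_t host_features  = 0x410fdda3;  /* "host_features before negotiate"  */

    /* The negotiated set is the intersection of what the driver requests
     * and what the device offers. */
    assert((guest_features & host_features) == 0xf8020);
    return 0;
}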

[dpdk-dev] [PATCH v5 resend 07/12] virtio: resolve for control queue

2015-10-12 Thread Yuanhan Liu
On Thu, Oct 08, 2015 at 10:51:02PM +0200, Steffen Bauch wrote:
> 
> 
> On 10/08/2015 05:32 PM, Nikita Kalyazin wrote:
> >Hi Yuanhan,
> >
> >
> >As I understand, the dead loop happened here (virtio_send_command):
> >while (vq->vq_used_cons_idx == vq->vq_ring.used->idx) {
> >   rte_rmb();
> >   usleep(100);
> >}
> >
> >Could you explain why wrong config reading caused that and how correct
> >reading helps to avoid it?

Wrong config reading results in a wrong config->max_virtqueue_pairs, which
ends up with a wrong ctrl vq index being set:

PMD: virtio_send_command(): vq->vq_queue_index = 37120

Note that you need to enable CONFIG_RTE_LIBRTE_VIRTIO_DEBUG_INIT to see the
above debug log.

That is to say, we are waiting for the backend to consume a non-existent
queue, and that is how the dead loop arises.
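
To make the index arithmetic concrete: per the virtio spec, an MQ device places its receive/transmit queues at indexes 0..2N-1 and the control queue at index 2N, where N is max_virtqueue_pairs. A minimal sketch (illustrative only, not the PMD code; 18560 is simply the value implied by the 37120 in the log line above):

#include <stdio.h>

int main(void)
{
    /* Per the virtio spec, the control queue index is 2 * max_virtqueue_pairs. */
    unsigned int sane_pairs    = 1;      /* what the device actually provides    */
    unsigned int garbage_pairs = 18560;  /* bogus value from the bad config read */

    printf("ctrl vq index (sane):  %u\n", 2 * sane_pairs);     /* 2     */
    printf("ctrl vq index (bogus): %u\n", 2 * garbage_pairs);  /* 37120 */

    /* Selecting queue 37120 addresses a queue the backend never created, so
     * its used->idx never advances and the polling loop in
     * virtio_send_command() spins forever. */
    return 0;
}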


> >
> Hi,
> 
> I just recognized that this dead loop is the same one that I have
> experienced (see
> http://dpdk.org/ml/archives/dev/2015-October/024737.html for
> reference). Just applying the changes in this patch (only 07/12)
> will not fix the dead loop at least in my setup.

Try to enable CONFIG_RTE_LIBRTE_VIRTIO_DEBUG_INIT, and dump more log?

--yliu


[dpdk-dev] [PATCH v5 resend 07/12] virtio: resolve for control queue

2015-10-12 Thread Xie, Huawei
On 10/12/2015 10:33 AM, Xie, Huawei wrote:
> On 10/12/2015 9:39 AM, Yuanhan Liu wrote:
>> On Thu, Oct 08, 2015 at 10:51:02PM +0200, Steffen Bauch wrote:
>>> On 10/08/2015 05:32 PM, Nikita Kalyazin wrote:
 Hi Yuanhan,


 As I understand, the dead loop happened here (virtio_send_command):
 while (vq->vq_used_cons_idx == vq->vq_ring.used->idx) {
> Nikita:
>
> I didn't review the whole patch, but happened to find a serious problem in
> the code snippet here: as volatile isn't used, the compiler will assume the
> memory will not be changed externally and do only one comparison.
>
> Try adding the volatile qualifier, and it might fix your problem.
I read the other mails in this thread; the stuck queue is due to a wrong
queue index.
Still, fix the volatile issue in the code; otherwise, if there is no match
the first time, the optimized code will go into the dead loop directly,
with no chance to compare again (see the minimal sketch at the end of this
message).
   rte_rmb();
   usleep(100);
 }

 Could you explain why wrong config reading caused that and how correct
 reading helps to avoid it?
>> Wrong config reading results in a wrong config->max_virtqueue_pairs, which
>> ends up with a wrong ctrl vq index being set:
>>
>> PMD: virtio_send_command(): vq->vq_queue_index = 37120
>>
>> Note that you need to enable CONFIG_RTE_LIBRTE_VIRTIO_DEBUG_INIT to see the
>> above debug log.
>>
>> That is to say, we are waiting for the backend to consume a non-existent
>> queue, and that is how the dead loop arises.
>>
>>
>>> Hi,
>>>
>>> I just recognized that this dead loop is the same one that I have
>>> experienced (see
>>> http://dpdk.org/ml/archives/dev/2015-October/024737.html for
>>> reference). Just applying the changes in this patch (only 07/12)
>>> will not fix the dead loop at least in my setup.
>> Try to enable CONFIG_RTE_LIBRTE_VIRTIO_DEBUG_INIT, and dump more log?
>>
>>  --yliu
>>
>
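
For reference, here is a minimal sketch of the volatile point raised above. It is simplified and hypothetical (it does not use the real DPDK structures, and in the actual loop rte_rmb() and usleep() may already act as compiler barriers on a given target); it only illustrates the concern that, without a volatile access, the compiler may load the index once and spin on a stale value.

#include <stdint.h>
#include <unistd.h>

struct fake_used {
    uint16_t flags;
    uint16_t idx;   /* written by the other side (the host) */
};

/* Risky pattern: nothing tells the compiler this memory can change, so the
 * load of used->idx may legally be hoisted out of the loop. */
void poll_plain(uint16_t cons_idx, const struct fake_used *used)
{
    while (cons_idx == used->idx)
        ;   /* pure spin on a possibly stale value */
}

/* Safer pattern: the volatile-qualified access forces a fresh load from
 * memory on every iteration. */
void poll_volatile(uint16_t cons_idx, const struct fake_used *used)
{
    while (cons_idx == *(volatile const uint16_t *)&used->idx)
        usleep(100);
}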



[dpdk-dev] [PATCH v5 resend 07/12] virtio: resolve for control queue

2015-10-09 Thread Nikita Kalyazin
Hi,

> I just recognized that this dead loop is the same one that I have 
> experienced (see 
> http://dpdk.org/ml/archives/dev/2015-October/024737.html for reference). 
> Just applying the changes in this patch (only 07/12) will not fix the 
> dead loop at least in my setup.
Yes, exactly. I observe it the same way even after applying the patch.

-- 

Best regards,

Nikita Kalyazin,
n.kalyazin at samsung.com

Software Engineer
Virtualization Group
Samsung R&D Institute Russia
Tel: +7 (495) 797-25-00 #3816
Tel: +7 (495) 797-25-03
Office #1501, 12-1, Dvintsev str.,
Moscow, 127018, Russia

On Thu, Oct 08, 2015 at 10:51:02PM +0200, Steffen Bauch wrote:
> 
> 
> On 10/08/2015 05:32 PM, Nikita Kalyazin wrote:
> > Hi Yuanhan,
> >
> >
> > As I understand, the dead loop happened here (virtio_send_command):
> > while (vq->vq_used_cons_idx == vq->vq_ring.used->idx) {
> >rte_rmb();
> >usleep(100);
> > }
> >
> > Could you explain why wrong config reading caused that and how correct
> > reading helps to avoid it?
> >
> Hi,
> 
> I just recognized that this dead loop is the same one that I have 
> experienced (see 
> http://dpdk.org/ml/archives/dev/2015-October/024737.html for reference). 
> Just applying the changes in this patch (only 07/12) will not fix the 
> dead loop at least in my setup.
> 
> Best regards,
> 
> Steffen


[dpdk-dev] [PATCH v5 resend 07/12] virtio: resolve for control queue

2015-10-08 Thread Steffen Bauch


On 10/08/2015 05:32 PM, Nikita Kalyazin wrote:
> Hi Yuanhan,
>
>
> As I understand, the dead loop happened here (virtio_send_command):
> while (vq->vq_used_cons_idx == vq->vq_ring.used->idx) {
>rte_rmb();
>usleep(100);
> }
>
> Could you explain why wrong config reading caused that and how correct
> reading helps to avoid it?
>
Hi,

I just recognized that this dead loop is the same one that I have 
experienced (see 
http://dpdk.org/ml/archives/dev/2015-October/024737.html for reference). 
Just applying the changes in this patch (only 07/12) will not fix the 
dead loop at least in my setup.

Best regards,

Steffen


[dpdk-dev] [PATCH v5 resend 07/12] virtio: resolve for control queue

2015-10-08 Thread Nikita Kalyazin
Hi Yuanhan,


As I understand, the dead loop happened here (virtio_send_command):
while (vq->vq_used_cons_idx == vq->vq_ring.used->idx) {
  rte_rmb();
  usleep(100);
}

Could you explain why wrong config reading caused that and how correct reading
helps to avoid it?

-- 

Best regards,

Nikita Kalyazin,
n.kalyazin at samsung.com

Software Engineer
Virtualization Group
Samsung R&D Institute Russia
Tel: +7 (495) 797-25-00 #3816
Tel: +7 (495) 797-25-03
Office #1501, 12-1, Dvintsev str.,
Moscow, 127018, Russia

On Mon, Sep 21, 2015 at 02:36:47PM +0800, Yuanhan Liu wrote:
> On Sun, Sep 20, 2015 at 12:21:14PM +0300, Michael S. Tsirkin wrote:
> > On Fri, Sep 18, 2015 at 11:10:56PM +0800, Yuanhan Liu wrote:
> > > From: Changchun Ouyang 
> > > 
> > > Fix the max virtio queue pair read issue.
> > > 
> > > Control queue can't work for vhost-user multiple queue mode,
> > > so introduce a counter to avoid the dead loop when polling
> > > the control queue.
> > > 
> > > Signed-off-by: Changchun Ouyang 
> > > Signed-off-by: Yuanhan Liu 
> > 
> > Per virtio spec, the multiqueue feature depends on control queue -
> > what do you mean when you say it can't work?
> > 
> > > ---
> > >  drivers/net/virtio/virtio_ethdev.c | 12 +++-
> > >  1 file changed, 7 insertions(+), 5 deletions(-)
> > > 
> > > diff --git a/drivers/net/virtio/virtio_ethdev.c 
> > > b/drivers/net/virtio/virtio_ethdev.c
> > > index 465d3cd..b2f4120 100644
> > > --- a/drivers/net/virtio/virtio_ethdev.c
> > > +++ b/drivers/net/virtio/virtio_ethdev.c
> > > @@ -1162,7 +1162,6 @@ eth_virtio_dev_init(struct rte_eth_dev *eth_dev)
> > >   struct virtio_hw *hw = eth_dev->data->dev_private;
> > >   struct virtio_net_config *config;
> > >   struct virtio_net_config local_config;
> > > - uint32_t offset_conf = sizeof(config->mac);
> > >   struct rte_pci_device *pci_dev;
> > >  
> > >   RTE_BUILD_BUG_ON(RTE_PKTMBUF_HEADROOM < sizeof(struct virtio_net_hdr));
> > > @@ -1222,7 +1221,9 @@ eth_virtio_dev_init(struct rte_eth_dev *eth_dev)
> > >   config = &local_config;
> > >  
> > >   if (vtpci_with_feature(hw, VIRTIO_NET_F_STATUS)) {
> > > - offset_conf += sizeof(config->status);
> > > + vtpci_read_dev_config(hw,
> > > + offsetof(struct virtio_net_config, status),
> > > + &config->status, sizeof(config->status));
> > >   } else {
> > >   PMD_INIT_LOG(DEBUG,
> > >"VIRTIO_NET_F_STATUS is not supported");
> > > @@ -1230,15 +1231,16 @@ eth_virtio_dev_init(struct rte_eth_dev *eth_dev)
> > >   }
> > >  
> > >   if (vtpci_with_feature(hw, VIRTIO_NET_F_MQ)) {
> > > - offset_conf += sizeof(config->max_virtqueue_pairs);
> > > + vtpci_read_dev_config(hw,
> > > + offsetof(struct virtio_net_config, 
> > > max_virtqueue_pairs),
> > > + &config->max_virtqueue_pairs,
> > > + sizeof(config->max_virtqueue_pairs));
> > >   } else {
> > >   PMD_INIT_LOG(DEBUG,
> > >"VIRTIO_NET_F_MQ is not supported");
> > >   config->max_virtqueue_pairs = 1;
> > >   }
> > >  
> > > - vtpci_read_dev_config(hw, 0, (uint8_t *)config, offset_conf);
> > > -
> > >   hw->max_rx_queues =
> > >   (VIRTIO_MAX_RX_QUEUES < config->max_virtqueue_pairs) ?
> > >   VIRTIO_MAX_RX_QUEUES : config->max_virtqueue_pairs;
> > 
> > 
> > Does the patch actually do what the commit log says?
> 
> Sorry, the commit log is wrong as you said.
> 
> It was actually a bug in our code, which happens to be revealed when
> MQ is enabled. The old code adjusts how many config bytes we want to read
> depending on what features we have, but we later cast the entire
> buffer we read as "struct virtio_net_config", which is obviously
> wrong.
> 
> The right way to go is to read the related config bytes only when the
> corresponding feature is set, which is exactly what this patch does.
> 
> > It seems to be about reading the device config,
> > not breaking out of a loop ...
> 
> It's just a (bad) side effect of getting the virtio_net_config wrong:
> the wrong config causes a dead loop in our code.
> 
> And sorry for the buggy commit log; I will fix it in the next version.
> 
> Thanks.
> 
>   --yliu


[dpdk-dev] [PATCH v5 resend 07/12] virtio: resolve for control queue

2015-09-21 Thread Yuanhan Liu
On Sun, Sep 20, 2015 at 12:21:14PM +0300, Michael S. Tsirkin wrote:
> On Fri, Sep 18, 2015 at 11:10:56PM +0800, Yuanhan Liu wrote:
> > From: Changchun Ouyang 
> > 
> > Fix the max virtio queue pair read issue.
> > 
> > Control queue can't work for vhost-user multiple queue mode,
> > so introduce a counter to avoid the dead loop when polling
> > the control queue.
> > 
> > Signed-off-by: Changchun Ouyang 
> > Signed-off-by: Yuanhan Liu 
> 
> Per virtio spec, the multiqueue feature depends on control queue -
> what do you mean when you say it can't work?
> 
> > ---
> >  drivers/net/virtio/virtio_ethdev.c | 12 +++-
> >  1 file changed, 7 insertions(+), 5 deletions(-)
> > 
> > diff --git a/drivers/net/virtio/virtio_ethdev.c 
> > b/drivers/net/virtio/virtio_ethdev.c
> > index 465d3cd..b2f4120 100644
> > --- a/drivers/net/virtio/virtio_ethdev.c
> > +++ b/drivers/net/virtio/virtio_ethdev.c
> > @@ -1162,7 +1162,6 @@ eth_virtio_dev_init(struct rte_eth_dev *eth_dev)
> > struct virtio_hw *hw = eth_dev->data->dev_private;
> > struct virtio_net_config *config;
> > struct virtio_net_config local_config;
> > -   uint32_t offset_conf = sizeof(config->mac);
> > struct rte_pci_device *pci_dev;
> >  
> > RTE_BUILD_BUG_ON(RTE_PKTMBUF_HEADROOM < sizeof(struct virtio_net_hdr));
> > @@ -1222,7 +1221,9 @@ eth_virtio_dev_init(struct rte_eth_dev *eth_dev)
> > config = &local_config;
> >  
> > if (vtpci_with_feature(hw, VIRTIO_NET_F_STATUS)) {
> > -   offset_conf += sizeof(config->status);
> > +   vtpci_read_dev_config(hw,
> > +   offsetof(struct virtio_net_config, status),
> > +   &config->status, sizeof(config->status));
> > } else {
> > PMD_INIT_LOG(DEBUG,
> >  "VIRTIO_NET_F_STATUS is not supported");
> > @@ -1230,15 +1231,16 @@ eth_virtio_dev_init(struct rte_eth_dev *eth_dev)
> > }
> >  
> > if (vtpci_with_feature(hw, VIRTIO_NET_F_MQ)) {
> > -   offset_conf += sizeof(config->max_virtqueue_pairs);
> > +   vtpci_read_dev_config(hw,
> > +   offsetof(struct virtio_net_config, 
> > max_virtqueue_pairs),
> > +   &config->max_virtqueue_pairs,
> > +   sizeof(config->max_virtqueue_pairs));
> > } else {
> > PMD_INIT_LOG(DEBUG,
> >  "VIRTIO_NET_F_MQ is not supported");
> > config->max_virtqueue_pairs = 1;
> > }
> >  
> > -   vtpci_read_dev_config(hw, 0, (uint8_t *)config, offset_conf);
> > -
> > hw->max_rx_queues =
> > (VIRTIO_MAX_RX_QUEUES < config->max_virtqueue_pairs) ?
> > VIRTIO_MAX_RX_QUEUES : config->max_virtqueue_pairs;
> 
> 
> Does the patch actually do what the commit log says?

Sorry, the commit log is wrong as you said.

It was actually a bug in our code, which happens to be revealed when
MQ is enabled. The old code adjusts how many config bytes we want to read
depending on what features we have, but we later cast the entire
buffer we read as "struct virtio_net_config", which is obviously
wrong.

The right way to go is to read the related config bytes only when the
corresponding feature is set, which is exactly what this patch does.

> It seems to be about reading the device config,
> not breaking out of a loop ...

It's just a (bad) side effect of getting the virtio_net_config wrong:
the wrong config causes a dead loop in our code (a small illustrative
sketch follows at the end of this message).

And sorry for the buggy commit log; I will fix it in the next version.

Thanks.

--yliu
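
For readers who want the failure mode spelled out, here is a small illustrative program. It uses a simplified config layout and a hypothetical read_dev_config() helper standing in for vtpci_read_dev_config(), and shows the combination where the accumulated read length no longer matches the struct layout: VIRTIO_NET_F_STATUS not negotiated but VIRTIO_NET_F_MQ negotiated, so the old single read covers 8 bytes while max_virtqueue_pairs lives at offset 8 and is never written.

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Simplified layout of struct virtio_net_config. */
struct net_config {
    uint8_t  mac[6];              /* offset 0 */
    uint16_t status;              /* offset 6 */
    uint16_t max_virtqueue_pairs; /* offset 8 */
} __attribute__((packed));

/* Hypothetical stand-in for vtpci_read_dev_config(): copy 'len' bytes of
 * device config space starting at 'offset' into 'dst'. */
static void read_dev_config(const uint8_t *devcfg, size_t offset,
                            void *dst, size_t len)
{
    memcpy(dst, devcfg + offset, len);
}

int main(void)
{
    /* Pretend device config space: MAC, status = 1, max_virtqueue_pairs = 2. */
    const uint8_t devcfg[10] = { 0x08, 0x00, 0x27, 0xcc, 0xde, 0xcd,
                                 0x01, 0x00, 0x02, 0x00 };
    struct net_config cfg;

    memset(&cfg, 0x48, sizeof(cfg));  /* simulate whatever was on the stack */

    /* Old pattern (F_STATUS off, F_MQ on): offset_conf = 6 + 2 = 8 bytes,
     * read as one block from offset 0; bytes 8..9 of cfg stay untouched. */
    size_t offset_conf = sizeof(cfg.mac) + sizeof(cfg.max_virtqueue_pairs);
    read_dev_config(devcfg, 0, &cfg, offset_conf);
    printf("old pattern: max_virtqueue_pairs = %u (stack garbage)\n",
           (unsigned)cfg.max_virtqueue_pairs);

    /* New pattern (what the patch does): read each optional field from its
     * real offset, only when the corresponding feature is negotiated. */
    read_dev_config(devcfg, offsetof(struct net_config, max_virtqueue_pairs),
                    &cfg.max_virtqueue_pairs, sizeof(cfg.max_virtqueue_pairs));
    printf("new pattern: max_virtqueue_pairs = %u\n",
           (unsigned)cfg.max_virtqueue_pairs);
    return 0;
}

With the old pattern the field keeps whatever happened to be in the stack buffer, which is exactly the kind of bogus max_virtqueue_pairs (and hence bogus control-queue index) reported earlier in this thread.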


[dpdk-dev] [PATCH v5 resend 07/12] virtio: resolve for control queue

2015-09-20 Thread Michael S. Tsirkin
On Fri, Sep 18, 2015 at 11:10:56PM +0800, Yuanhan Liu wrote:
> From: Changchun Ouyang 
> 
> Fix the max virtio queue pair read issue.
> 
> Control queue can't work for vhost-user multiple queue mode,
> so introduce a counter to avoid the dead loop when polling
> the control queue.
> 
> Signed-off-by: Changchun Ouyang 
> Signed-off-by: Yuanhan Liu 

Per virtio spec, the multiqueue feature depends on control queue -
what do you mean when you say it can't work?

> ---
>  drivers/net/virtio/virtio_ethdev.c | 12 +++-
>  1 file changed, 7 insertions(+), 5 deletions(-)
> 
> diff --git a/drivers/net/virtio/virtio_ethdev.c 
> b/drivers/net/virtio/virtio_ethdev.c
> index 465d3cd..b2f4120 100644
> --- a/drivers/net/virtio/virtio_ethdev.c
> +++ b/drivers/net/virtio/virtio_ethdev.c
> @@ -1162,7 +1162,6 @@ eth_virtio_dev_init(struct rte_eth_dev *eth_dev)
>   struct virtio_hw *hw = eth_dev->data->dev_private;
>   struct virtio_net_config *config;
>   struct virtio_net_config local_config;
> - uint32_t offset_conf = sizeof(config->mac);
>   struct rte_pci_device *pci_dev;
>  
>   RTE_BUILD_BUG_ON(RTE_PKTMBUF_HEADROOM < sizeof(struct virtio_net_hdr));
> @@ -1222,7 +1221,9 @@ eth_virtio_dev_init(struct rte_eth_dev *eth_dev)
>   config = &local_config;
>  
>   if (vtpci_with_feature(hw, VIRTIO_NET_F_STATUS)) {
> - offset_conf += sizeof(config->status);
> + vtpci_read_dev_config(hw,
> + offsetof(struct virtio_net_config, status),
> + &config->status, sizeof(config->status));
>   } else {
>   PMD_INIT_LOG(DEBUG,
>"VIRTIO_NET_F_STATUS is not supported");
> @@ -1230,15 +1231,16 @@ eth_virtio_dev_init(struct rte_eth_dev *eth_dev)
>   }
>  
>   if (vtpci_with_feature(hw, VIRTIO_NET_F_MQ)) {
> - offset_conf += sizeof(config->max_virtqueue_pairs);
> + vtpci_read_dev_config(hw,
> + offsetof(struct virtio_net_config, 
> max_virtqueue_pairs),
> + &config->max_virtqueue_pairs,
> + sizeof(config->max_virtqueue_pairs));
>   } else {
>   PMD_INIT_LOG(DEBUG,
>"VIRTIO_NET_F_MQ is not supported");
>   config->max_virtqueue_pairs = 1;
>   }
>  
> - vtpci_read_dev_config(hw, 0, (uint8_t *)config, offset_conf);
> -
>   hw->max_rx_queues =
>   (VIRTIO_MAX_RX_QUEUES < config->max_virtqueue_pairs) ?
>   VIRTIO_MAX_RX_QUEUES : config->max_virtqueue_pairs;


Does the patch actually do what the commit log says?
It seems to be about reading the device config,
not breaking out of a loop ...

> -- 
> 1.9.0


[dpdk-dev] [PATCH v5 resend 07/12] virtio: resolve for control queue

2015-09-19 Thread Yuanhan Liu
From: Changchun Ouyang 

Fix the max virtio queue pair read issue.

Control queue can't work for vhost-user multiple queue mode,
so introduce a counter to avoid the dead loop when polling
the control queue.

Signed-off-by: Changchun Ouyang 
Signed-off-by: Yuanhan Liu 
---
 drivers/net/virtio/virtio_ethdev.c | 12 +++-
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/drivers/net/virtio/virtio_ethdev.c 
b/drivers/net/virtio/virtio_ethdev.c
index 465d3cd..b2f4120 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -1162,7 +1162,6 @@ eth_virtio_dev_init(struct rte_eth_dev *eth_dev)
struct virtio_hw *hw = eth_dev->data->dev_private;
struct virtio_net_config *config;
struct virtio_net_config local_config;
-   uint32_t offset_conf = sizeof(config->mac);
struct rte_pci_device *pci_dev;

RTE_BUILD_BUG_ON(RTE_PKTMBUF_HEADROOM < sizeof(struct virtio_net_hdr));
@@ -1222,7 +1221,9 @@ eth_virtio_dev_init(struct rte_eth_dev *eth_dev)
config = &local_config;

if (vtpci_with_feature(hw, VIRTIO_NET_F_STATUS)) {
-   offset_conf += sizeof(config->status);
+   vtpci_read_dev_config(hw,
+   offsetof(struct virtio_net_config, status),
+   &config->status, sizeof(config->status));
} else {
PMD_INIT_LOG(DEBUG,
 "VIRTIO_NET_F_STATUS is not supported");
@@ -1230,15 +1231,16 @@ eth_virtio_dev_init(struct rte_eth_dev *eth_dev)
}

if (vtpci_with_feature(hw, VIRTIO_NET_F_MQ)) {
-   offset_conf += sizeof(config->max_virtqueue_pairs);
+   vtpci_read_dev_config(hw,
+   offsetof(struct virtio_net_config, 
max_virtqueue_pairs),
+   &config->max_virtqueue_pairs,
+   sizeof(config->max_virtqueue_pairs));
} else {
PMD_INIT_LOG(DEBUG,
 "VIRTIO_NET_F_MQ is not supported");
config->max_virtqueue_pairs = 1;
}

-   vtpci_read_dev_config(hw, 0, (uint8_t *)config, offset_conf);
-
hw->max_rx_queues =
(VIRTIO_MAX_RX_QUEUES < config->max_virtqueue_pairs) ?
VIRTIO_MAX_RX_QUEUES : config->max_virtqueue_pairs;
-- 
1.9.0