Hello Kevin Traynor,

       Thanks for your answer. I have configured 'other_config:dpdk-socket-mem
=8192,8192' in the Open_vSwitch table, so I am fairly sure there is enough
memory pre-allocated for OVS.
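
       For reference, this is roughly how I set it and how I check it
afterwards (nothing unusual, just the standard ovs-vsctl commands;
dpdk-socket-mem is only read when DPDK is initialised, so it needs a
restart of ovs-vswitchd to take effect):

           # allocate 8 GB of hugepage memory per NUMA socket for DPDK
           ovs-vsctl set Open_vSwitch . other_config:dpdk-socket-mem="8192,8192"
           # verify the value currently stored in the database
           ovs-vsctl get Open_vSwitch . other_config:dpdk-socket-mem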
       
       Could this be related to the rx queue of vhua55a43ae-04 being
rescheduled across different PMD cores? Would it be useful to pin it with
"ovs-vsctl set Interface vhua55a43ae-04 other_config:pmd-rxq-affinity="0:50""?
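
       Concretely, the commands I have in mind are along these lines (the
core number 50 is just the value from my question above, not a
recommendation):

           # pin rx queue 0 of this vhost-user port to PMD core 50
           ovs-vsctl set Interface vhua55a43ae-04 other_config:pmd-rxq-affinity="0:50"
           # check which PMD core each rx queue ended up on
           ovs-appctl dpif-netdev/pmd-rxq-show

       As far as I understand, a core that has a pinned rxq becomes
isolated, so non-pinned rxqs will no longer be scheduled onto it; please
correct me if that is wrong.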
       
       Some detailed log excerpts follow:

       dpdk(pmd-c26/id:9)|ERR|VHOST_DATA : 
###dev_name:/var/run/openvswitch/vhua55a43ae-04####mempool 
<ovs856dfe7b01021580020512>@0x242f762f40
  flags=10
  pool=0x242f722d00
  iova=0x242f762f40
  nb_mem_chunks=1
  size=20512
  populated_size=20512
  header_size=64
  elt_size=2880
  trailer_size=64
  total_obj_size=3008
  private_data_size=64
  avg bytes/object=3008.146646
  internal cache infos:
    cache_size=512
    cache_count[0]=0
    cache_count[1]=0
    cache_count[2]=513
    cache_count[3]=0
    cache_count[4]=0
    cache_count[5]=0
    cache_count[6]=0
    cache_count[7]=0
    cache_count[8]=0
    cache_count[9]=0
    cache_count[10]=0
    cache_count[11]=0
    cache_count[12]=0
    cache_count[13]=0
    cache_count[14]=0
    cache_count[15]=0
    cache_count[16]=0
    cache_count[17]=0
    cache_count[18]=0
    cache_count[19]=0
    cache_count[20]=0
    cache_count[21]=0
    cache_count[22]=0
    cache_count[23]=0
    cache_count[24]=0
    cache_count[25]=0
    cache_count[26]=0
    cache_count[27]=0
    cache_count[28]=0
    cache_count[29]=0
    cache_count[30]=0
    cache_count[31]=0
    cache_count[32]=0
    cache_count[33]=0
    cache_count[34]=0
    cache_count[35]=0
    cache_count[36]=0
    cache_count[37]=0
    cache_count[38]=0
    cache_count[39]=0
    cache_count[40]=0
    cache_count[41]=0
    cache_count[42]=0
    cache_count[43]=0
    cache_count[44]=0
    cache_count[45]=0
    cache_count[46]=0
    cache_count[47]=0
    cache_count[48]=0
    cache_count[49]=0
    cache_count[50]=513
    cache_count[51]=0
    cache_count[52]=0
    cache_count[53]=0
    cache_count[54]=0
    cache_count[55]=0
    cache_count[56]=0
    cache_count[57]=0
    cache_count[58]=0
    cache_count[59]=0
    cache_count[60]=0
    cache_count[61]=0
    cache_count[62]=0
    cache_count[63]=0
    cache_count[64]=0
    cache_count[65]=0
    cache_count[66]=0
    cache_count[67]=0
    cache_count[68]=0
    cache_count[69]=0
    cache_count[70]=0
    cache_count[71]=0
    cache_count[72]=0
    cache_count[73]=0
    cache_count[74]=760
    cache_count[75]=0
    cache_count[76]=0
    cache_count[77]=0
    cache_count[78]=0
    cache_count[79]=0
    cache_count[80]=0
    cache_count[81]=0
    cache_count[82]=0
    cache_count[83]=0
    cache_count[84]=0
    cache_count[85]=0
    cache_count[86]=0
    cache_count[87]=0
    cache_count[88]=0
    cache_count[89]=0
    cache_count[90]=0
    cache_count[91]=0
    cache_count[92]=0
    cache_count[93]=0
    cache_count[94]=0
    cache_count[95]=0
    cache_count[96]=0
    cache_count[97]=0
    cache_count[98]=0
    cache_count[99]=0
    cache_count[100]=0
    cache_count[101]=0
    cache_count[102]=0
    cache_count[103]=0
    cache_count[104]=0
    cache_count[105]=0
    cache_count[106]=0
    cache_count[107]=0
    cache_count[108]=0
    cache_count[109]=0
    cache_count[110]=0
    cache_count[111]=0
    cache_count[112]=0
    cache_count[113]=0
    cache_count[114]=0
    cache_count[115]=0
    cache_count[116]=0
    cache_count[117]=0
    cache_count[118]=0
    cache_count[119]=0
    cache_count[120]=0
    cache_count[121]=0
    cache_count[122]=0
    cache_count[123]=0
    cache_count[124]=0
    cache_count[125]=0
    cache_count[126]=0
    cache_count[127]=0
    total_cache_count=1786
  common_pool_count=0
  stats:
    put_bulk=125723156
    put_objs=129890960
    get_success_bulk=129909686
    get_success_objs=129909686
    get_fail_bulk=1
    get_fail_objs=1
    get_success_blks=0
    get_fail_blks=0
dpdk(pmd-c26/id:9)|ERR|VHOST_DATA : Failed to copy desc to mbuf on 
/var/run/openvswitch/vhua55a43ae-04.
2022-05-14T08:58:21.587Z|11083|bond|INFO|bond bond_br-dpdk: shift 98758kB of 
load (with hash 24) from pci_0000:5f:00.0 to pci_0000:3d:00.0 (now carrying 
199156kB and 183853kB load, respectively)
2022-05-14T08:58:21.587Z|11084|bond|INFO|bond bond_br-dpdk: shift 1712kB of 
load (with hash 2) from pci_0000:5f:00.0 to pci_0000:3d:00.0 (now carrying 
197443kB and 185566kB load, respectively)
2022-05-14T08:58:21.587Z|11085|bond|INFO|bond bond_br-dpdk: shift 1226kB of 
load (with hash 6) from pci_0000:5f:00.0 to pci_0000:3d:00.0 (now carrying 
196216kB and 186792kB load, respectively)
2022-05-14T08:58:21.587Z|11086|bond|INFO|bond bond_br-dpdk: shift 591kB of load 
(with hash 13) from pci_0000:5f:00.0 to pci_0000:3d:00.0 (now carrying 195625kB 
and 187383kB load, respectively)
2022-05-14T08:58:21.587Z|11087|bond|INFO|bond bond_br-dpdk: shift 449kB of load 
(with hash 1) from pci_0000:5f:00.0 to pci_0000:3d:00.0 (now carrying 195176kB 
and 187833kB load, respectively)
2022-05-14T08:58:21.587Z|11088|bond|INFO|bond bond_br-dpdk: shift 370kB of load 
(with hash 16) from pci_0000:5f:00.0 to pci_0000:3d:00.0 (now carrying 194805kB 
and 188204kB load, respectively)
2022-05-14T08:58:21.587Z|11089|bond|INFO|bond bond_br-dpdk: shift 365kB of load 
(with hash 10) from pci_0000:5f:00.0 to pci_0000:3d:00.0 (now carrying 194440kB 
and 188569kB load, respectively)
2022-05-14T08:58:21.590Z|00004|dpdk(pmd-c26/id:9)|ERR|VHOST_DATA : Failed to 
allocate memory for mbuf.
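
       (My reading of the mempool dump above, in case it helps: the pool
holds size=20512 mbufs in total, total_cache_count=1786 of them are sitting
in per-lcore caches and common_pool_count=0, so roughly 20512 - 1786 = 18726
mbufs appear to be held elsewhere, e.g. in rx/tx rings, at the moment the
allocation fails.)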

Line 160235: 2022-05-14T09:36:28.964Z|00298|dpif_netdev|INFO|Core 74 on numa 
node 1 assigned port 'vhua55a43ae-04' rx queue 0 (measured processing cycles 0).
Line 160254: 2022-05-14T09:36:28.967Z|00317|dpif_netdev|INFO|Core 74 on numa 
node 1 assigned port 'vhua55a43ae-04' rx queue 0 (measured processing cycles 0).
Line 161425: 2022-05-14T10:48:14.686Z|00189|dpif_netdev|INFO|Core 50 on numa 
node 0 assigned port 'vhua55a43ae-04' rx queue 0 (measured processing cycles 0).
Line 161436: 2022-05-14T10:48:14.697Z|00200|dpif_netdev|INFO|Core 50 on numa 
node 0 assigned port 'vhua55a43ae-04' rx queue 0 (measured processing cycles 0).
Line 161534: 2022-05-14T10:48:14.715Z|00215|dpif_netdev|INFO|Core 50 on numa 
node 0 assigned port 'vhua55a43ae-04' rx queue 0 (measured processing cycles 0).
Line 161584: 2022-05-14T10:48:14.731Z|00231|dpif_netdev|INFO|Core 2 on numa 
node 0 assigned port 'vhua55a43ae-04' rx queue 0 (measured processing cycles 0).
Line 161707: 2022-05-14T10:48:14.791Z|00248|dpif_netdev|INFO|Core 2 on numa 
node 0 assigned port 'vhua55a43ae-04' rx queue 0 (measured processing cycles 0).
Line 161722: 2022-05-14T10:48:14.800Z|00263|dpif_netdev|INFO|Core 74 on numa 
node 1 assigned port 'vhua55a43ae-04' rx queue 0 (measured processing cycles 0).
Line 161786: 2022-05-14T10:48:14.856Z|00280|dpif_netdev|INFO|Core 74 on numa 
node 1 assigned port 'vhua55a43ae-04' rx queue 0 (measured processing cycles 0).
Line 161804: 2022-05-14T10:48:14.902Z|00298|dpif_netdev|INFO|Core 74 on numa 
node 1 assigned port 'vhua55a43ae-04' rx queue 0 (measured processing cycles 0).
Line 161823: 2022-05-14T10:48:14.905Z|00317|dpif_netdev|INFO|Core 74 on numa 
node 1 assigned port 'vhua55a43ae-04' rx queue 0 (measured processing cycles 0).
Line 162330: 2022-05-14T10:49:10.938Z|00189|dpif_netdev|INFO|Core 2 on numa 
node 0 assigned port 'vhua55a43ae-04' rx queue 0 (measured processing cycles 0).
Line 162341: 2022-05-14T10:49:10.938Z|00200|dpif_netdev|INFO|Core 2 on numa 
node 0 assigned port 'vhua55a43ae-04' rx queue 0 (measured processing cycles 0).
Line 162356: 2022-05-14T10:49:10.944Z|00215|dpif_netdev|INFO|Core 2 on numa 
node 0 assigned port 'vhua55a43ae-04' rx queue 0 (measured processing cycles 0).
Line 162532: 2022-05-14T10:49:10.951Z|00231|dpif_netdev|INFO|Core 2 on numa 
node 0 assigned port 'vhua55a43ae-04' rx queue 0 (measured processing cycles 0).
Line 162587: 2022-05-14T10:49:11.028Z|00248|dpif_netdev|INFO|Core 2 on numa 
node 0 assigned port 'vhua55a43ae-04' rx queue 0 (measured processing cycles 0).
Line 162602: 2022-05-14T10:49:11.028Z|00263|dpif_netdev|INFO|Core 2 on numa 
node 0 assigned port 'vhua55a43ae-04' rx queue 0 (measured processing cycles 0).
Line 162956: 2022-05-14T10:49:11.094Z|00280|dpif_netdev|INFO|Core 2 on numa 
node 0 assigned port 'vhua55a43ae-04' rx queue 0 (measured processing cycles 0).
Line 162975: 2022-05-14T10:49:11.248Z|00298|dpif_netdev|INFO|Core 74 on numa 
node 1 assigned port 'vhua55a43ae-04' rx queue 0 (measured processing cycles 0).
Line 162994: 2022-05-14T10:49:11.253Z|00317|dpif_netdev|INFO|Core 74 on numa 
node 1 assigned port 'vhua55a43ae-04' rx queue 0 (measured processing cycles 0).
Line 163265: 2022-05-14T12:10:44.416Z|00576|dpif_netdev|INFO|Core 26 on numa 
node 1 assigned port 'vhua55a43ae-04' rx queue 0 (measured processing cycles 
11635980).
Line 163366: 2022-05-14T13:31:29.651Z|00677|dpif_netdev|INFO|Core 26 on numa 
node 1 assigned port 'vhua55a43ae-04' rx queue 0 (measured processing cycles 
11340993).
Line 163384: 2022-05-14T13:31:29.708Z|00695|dpif_netdev|INFO|Core 26 on numa 
node 1 assigned port 'vhua55a43ae-04' rx queue 0 (measured processing cycles 
11340993).

Output of 'ovs-appctl dpif-netdev/pmd-rxq-show':

pmd thread numa_id 0 core_id 2:
  isolated : false
  port: pci_0000:3d:00.0  queue-id:  1 (enabled)   pmd usage:  0 %
  port: pci_0000:5f:00.0  queue-id:  0 (enabled)   pmd usage:  0 %
  port: vhu089329c1-20    queue-id:  0 (enabled)   pmd usage:  0 %
  port: vhuc9295e0b-53    queue-id:  0 (enabled)   pmd usage:  0 %
  port: vhuf52a2681-5e    queue-id:  0 (enabled)   pmd usage:  0 %
pmd thread numa_id 1 core_id 26:
  isolated : false
  port: vhu1bb59873-2a    queue-id:  0 (enabled)   pmd usage:  0 %
  port: vhua55a43ae-04    queue-id:  0 (enabled)   pmd usage:  0 %
  port: vhud0319f0d-52    queue-id:  0 (enabled)   pmd usage:  0 %
pmd thread numa_id 0 core_id 50:
  isolated : false
  port: pci_0000:3d:00.0  queue-id:  0 (enabled)   pmd usage:  0 %
  port: pci_0000:5f:00.0  queue-id:  1 (enabled)   pmd usage:  0 %
  port: vhu1dc6c838-2b    queue-id:  0 (enabled)   pmd usage:  0 %
  port: vhu2fe6e707-bb    queue-id:  0 (enabled)   pmd usage:  0 %
  port: vhu79df090c-0b    queue-id:  0 (enabled)   pmd usage:  0 %
pmd thread numa_id 1 core_id 74:
  isolated : false
  port: vhu79791a40-57    queue-id:  0 (enabled)   pmd usage:  0 %
  port: vhuad2f21ae-38    queue-id:  0 (enabled)   pmd usage:  0 %
  port: vhub1ed4607-2e    queue-id:  0 (enabled)   pmd usage:  0 %
  port: vhud6cbc805-d2    queue-id:  0 (enabled)   pmd usage:  0 %
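
If per-PMD statistics would help, I can collect them as well; what I would
run (just the standard appctl commands) is:

    # per-PMD packet and cycle statistics, to see how busy cores 26 and 74 really are
    ovs-appctl dpif-netdev/pmd-stats-show
    # clear the counters first to measure over a known interval
    ovs-appctl dpif-netdev/pmd-stats-clear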

> thanks!
> Best regards, kevin Traynor.
 
From: Kevin Traynor
Date: 2022-05-16 17:03
To: [email protected]; ovs-dev
Subject: Re: [ovs-dev] Is there a patch to fix this problem ?
On 15/05/2022 06:12, [email protected] wrote:
> Dear all,
>            I am using OVS 2.14.1 and DPDK 20.08 on CentOS Linux release
> 8.3.2011 with kernel 5.10.44. I have observed some memory-related error
> messages in ovs-vswitchd.log:
> 
> /var/log/openvswitch/ovs-vswitchd.log:113981:2022-05-12T04:41:37.399Z|00001|dpdk(pmd-c02/id:12)|ERR|VHOST_DATA
>  : Failed to allocate memory for mbuf.
> /var/log/openvswitch/ovs-vswitchd.log:113983:2022-05-12T04:41:37.407Z|00003|dpdk(pmd-c02/id:12)|ERR|VHOST_DATA
>  : Failed to allocate memory for mbuf.
> /var/log/openvswitch/ovs-vswitchd.log:113984:2022-05-12T04:41:37.415Z|00004|dpdk(pmd-c02/id:12)|ERR|VHOST_DATA
>  : Failed to allocate memory for mbuf.
> /var/log/openvswitch/ovs-vswitchd.log:113985:2022-05-12T04:41:37.422Z|00005|dpdk(pmd-c02/id:12)|ERR|VHOST_DATA
>  : Failed to allocate memory for mbuf.
> /var/log/openvswitch/ovs-vswitchd.log:113986:2022-05-12T04:41:37.430Z|00006|dpdk(pmd-c02/id:12)|ERR|VHOST_DATA
>  : Failed to allocate memory for mbuf.
> /var/log/openvswitch/ovs-vswitchd.log:113987:2022-05-12T04:41:37.437Z|00007|dpdk(pmd-c02/id:12)|ERR|VHOST_DATA
>  : Failed to allocate memory for mbuf.
> /var/log/openvswitch/ovs-vswitchd.log:113988:2022-05-12T04:41:37.444Z|00008|dpdk(pmd-c02/id:12)|ERR|VHOST_DATA
>  : Failed to allocate memory for mbuf.
> /var/log/openvswitch/ovs-vswitchd.log:113989:2022-05-12T04:41:37.451Z|00009|dpdk(pmd-c02/id:12)|ERR|VHOST_DATA
>  : Failed to allocate memory for mbuf.
> /var/log/openvswitch/ovs-vswitchd.log:113990:2022-05-12T04:41:37.458Z|00010|dpdk(pmd-c02/id:12)|ERR|VHOST_DATA
>  : Failed to allocate memory for mbuf.
>      
>     Is there a patch to fix this problem ?
>   
 
The short answer is that there is probably not enough memory being
pre-allocated for OVS-DPDK, and at some point you are running out of mbufs.
The first thing you should do is increase the amount of memory and re-test.
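
For example (a rough sketch only, the values below are placeholders to
adapt to your system; dpdk-socket-mem is read at DPDK initialisation, so
restart ovs-vswitchd afterwards):

    # check how many hugepages are reserved and free on the host
    grep -i huge /proc/meminfo
    # give DPDK more memory per NUMA socket, e.g. double the current value
    ovs-vsctl set Open_vSwitch . other_config:dpdk-socket-mem="4096,4096"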
 
These blogs are a few years old, but should help:
https://developers.redhat.com/blog/2018/06/14/debugging-ovs-dpdk-memory-issues
https://developers.redhat.com/blog/2018/03/16/ovs-dpdk-hugepage-memory
 
> thanks!
> Best regards, Peter.
> 
> 
> _______________________________________________
> dev mailing list
> [email protected]
> https://mail.openvswitch.org/mailman/listinfo/ovs-dev
> 
 
 
