Re: [vpp-dev] VLIB headroom buffer size modification

2020-02-16 Thread Mohamed feroz Abdul majeeth
Hi Damjan,

Thank you for your valuable input.

Regards,
Feroz


On Fri, 14 Feb, 2020, 5:27 PM Damjan Marion,  wrote:

>
> you need to set it on both sides:
>
> For VPP:
>
> $ ccmake build-root/build-vpp-native/vpp
> and change PRE_DATA_SIZE to 256
>
> or modify the following line:
>
> src/vlib/CMakeLists.txt:
>
> set(PRE_DATA_SIZE 128 CACHE STRING "Buffer headroom size.")
>
> For DPDK you should be able to build a custom ext-deps package:
>
> $ sudo dpkg -r vpp-ext-deps
> $ make install-ext-deps DPDK_PKTMBUF_HEADROOM=256
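>
> A minimal end-to-end sketch of the rebuild, assuming the standard Makefile
> targets (install-ext-deps, build-release) and the stock tree layout; adjust
> to your setup:
>
> $ sudo dpkg -r vpp-ext-deps
> $ make install-ext-deps DPDK_PKTMBUF_HEADROOM=256
> $ sed -i 's/set(PRE_DATA_SIZE 128/set(PRE_DATA_SIZE 256/' src/vlib/CMakeLists.txt
> $ make build-release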
>
>
>
>
> > On 14 Feb 2020, at 11:44, Mohamed feroz Abdul majeeth <
> feroz...@gmail.com> wrote:
> >
> > Hi folks,
> >
> > In FD.io VPP the default headroom size, VLIB_BUFFER_PRE_DATA_SIZE, is
> > defined as 128, and in DPDK it is likewise defined as 128. Because we
> > have an encapsulation that goes beyond 128 bytes, the packet descriptor
> > block in the vlib_buffer_t structure (defined in vlib_buffer.h) is
> > getting corrupted.
> >
> > How can we increase the headroom buffer size to 256 bytes for both DPDK
> > and VPP?
> >
> > Thanks,
> > Feroz
> > 
>
>


Re: [vpp-dev] Regarding buffers-per-numa parameter

2020-02-16 Thread chetan bhasin
Bottom line: stable/vpp 19.08 does not work with a higher number of buffers,
but stable/vpp 20.01 does. Could you please advise which area we should look
at, as it would be difficult for us to move to vpp 20.01 at this time.
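
For reference, a minimal sketch of the startup.conf stanza under discussion
(the count matches the "show buffers" output quoted below; this is the only
knob shown, other buffer options left at defaults):

buffers {
  buffers-per-numa 537600
}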

On Mon, Feb 17, 2020 at 11:01 AM chetan bhasin via Lists.Fd.Io
 wrote:

> Thanks Damjan for the reply!
>
> Following are my observations on Intel X710/XL710 PCI NICs:
> 1) I took the latest code base from stable/vpp 19.08: seeing the error
> "ethernet-input: l3 mac mismatch"
>
> With Buffers 537600:
>
> vpp# show buffers
> Pool Name       Index NUMA  Size  Data Size  Total   Avail   Cached  Used
> default-numa-0    0    0    2496    2048    537600  510464    1319   25817
> default-numa-1    1    1    2496    2048    537600  528896     390    8314
>
>
> vpp# show hardware-interfaces
>               Name                Idx   Link  Hardware
> BondEthernet0  3 up   BondEthernet0
>   Link speed: unknown
>   Ethernet address 3c:fd:fe:b5:5e:40
> FortyGigabitEthernet12/0/0 1 up   FortyGigabitEthernet12/0/0
>   Link speed: 40 Gbps
>   Ethernet address 3c:fd:fe:b5:5e:40
>   Intel X710/XL710 Family
> carrier up full duplex mtu 9206
> flags: admin-up pmd rx-ip4-cksum
> rx: queues 16 (max 320), desc 1024 (min 64 max 4096 align 32)
> tx: queues 16 (max 320), desc 4096 (min 64 max 4096 align 32)
> pci: device 8086:1583 subsystem 8086:0001 address :12:00.00 numa 0
> max rx packet len: 9728
> promiscuous: unicast off all-multicast on
> vlan offload: strip off filter off qinq off
> rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum qinq-strip
>outer-ipv4-cksum vlan-filter vlan-extend jumbo-frame
>scatter keep-crc
> rx offload active: ipv4-cksum
> tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum
> sctp-cksum
>tcp-tso outer-ipv4-cksum qinq-insert vxlan-tnl-tso
>gre-tnl-tso ipip-tnl-tso geneve-tnl-tso multi-segs
>mbuf-fast-free
> tx offload active: none
> rss avail: ipv4-frag ipv4-tcp ipv4-udp ipv4-sctp ipv4-other
> ipv6-frag
>ipv6-tcp ipv6-udp ipv6-sctp ipv6-other l2-payload
> rss active:ipv4-frag ipv4-tcp ipv4-udp ipv4-other ipv6-frag
> ipv6-tcp
>ipv6-udp ipv6-other
> tx burst function: i40e_xmit_pkts_vec_avx2
> rx burst function: i40e_recv_pkts_vec_avx2
> tx errors 17
> rx frames ok    4585
> rx bytes ok   391078
> extended stats:
>   rx good packets   4585
>   rx good bytes   391078
>   tx errors   17
>   rx multicast packets  4345
>   rx broadcast packets   243
>   rx unknown protocol packets   4588
>   rx size 65 to 127 packets 4529
>   rx size 128 to 255 packets  32
>   rx size 256 to 511 packets  26
>   rx size 1024 to 1522 packets 1
>   tx size 65 to 127 packets   33
> FortyGigabitEthernet12/0/1 2 up   FortyGigabitEthernet12/0/1
>   Link speed: 40 Gbps
>   Ethernet address 3c:fd:fe:b5:5e:40
>   Intel X710/XL710 Family
> carrier up full duplex mtu 9206
> flags: admin-up pmd rx-ip4-cksum
> rx: queues 16 (max 320), desc 1024 (min 64 max 4096 align 32)
> tx: queues 16 (max 320), desc 4096 (min 64 max 4096 align 32)
> pci: device 8086:1583 subsystem 8086: address :12:00.01 numa 0
> max rx packet len: 9728
> promiscuous: unicast off all-multicast on
> vlan offload: strip off filter off qinq off
> rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum qinq-strip
>outer-ipv4-cksum vlan-filter vlan-extend jumbo-frame
>scatter keep-crc
> rx offload active: ipv4-cksum
> tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum
> sctp-cksum
>tcp-tso outer-ipv4-cksum qinq-insert vxlan-tnl-tso
>gre-tnl-tso ipip-tnl-tso geneve-tnl-tso multi-segs
>mbuf-fast-free
> tx offload active: none
> rss avail: ipv4-frag ipv4-tcp ipv4-udp ipv4-sctp ipv4-other
> ipv6-frag
>ipv6-tcp ipv6-udp ipv6-sctp ipv6-other l2-payload
> rss active:ipv4-frag ipv4-tcp ipv4-udp ipv4-other ipv6-frag
> ipv6-tcp
>ipv6-udp ipv6-other
> tx burst function: i40e_xmit_pkts_vec_avx2
> rx burst function: i40e_recv_pkts_vec_avx2
> rx frames ok

Re: [vpp-dev] Regarding buffers-per-numa parameter

2020-02-16 Thread chetan bhasin
Thanks Damjan for the reply!

Following are my observations on Intel X710/XL710 PCI NICs:
1) I took the latest code base from stable/vpp 19.08: seeing the error
"ethernet-input: l3 mac mismatch"

With Buffers 537600:

vpp# show buffers
Pool Name       Index NUMA  Size  Data Size  Total   Avail   Cached  Used
default-numa-0    0    0    2496    2048    537600  510464    1319   25817
default-numa-1    1    1    2496    2048    537600  528896     390    8314
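
To see which destination MACs the mismatching frames actually carry, a
packet-trace sketch (standard VPP CLI; the capture node dpdk-input and the
packet count of 50 are illustrative):

vpp# trace add dpdk-input 50
vpp# show trace
vpp# clear trace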


vpp# show hardware-interfaces
              Name                Idx   Link  Hardware
BondEthernet0  3 up   BondEthernet0
  Link speed: unknown
  Ethernet address 3c:fd:fe:b5:5e:40
FortyGigabitEthernet12/0/0 1 up   FortyGigabitEthernet12/0/0
  Link speed: 40 Gbps
  Ethernet address 3c:fd:fe:b5:5e:40
  Intel X710/XL710 Family
carrier up full duplex mtu 9206
flags: admin-up pmd rx-ip4-cksum
rx: queues 16 (max 320), desc 1024 (min 64 max 4096 align 32)
tx: queues 16 (max 320), desc 4096 (min 64 max 4096 align 32)
pci: device 8086:1583 subsystem 8086:0001 address :12:00.00 numa 0
max rx packet len: 9728
promiscuous: unicast off all-multicast on
vlan offload: strip off filter off qinq off
rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum qinq-strip
   outer-ipv4-cksum vlan-filter vlan-extend jumbo-frame
   scatter keep-crc
rx offload active: ipv4-cksum
tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum sctp-cksum
   tcp-tso outer-ipv4-cksum qinq-insert vxlan-tnl-tso
   gre-tnl-tso ipip-tnl-tso geneve-tnl-tso multi-segs
   mbuf-fast-free
tx offload active: none
rss avail: ipv4-frag ipv4-tcp ipv4-udp ipv4-sctp ipv4-other
ipv6-frag
   ipv6-tcp ipv6-udp ipv6-sctp ipv6-other l2-payload
rss active:ipv4-frag ipv4-tcp ipv4-udp ipv4-other ipv6-frag
ipv6-tcp
   ipv6-udp ipv6-other
tx burst function: i40e_xmit_pkts_vec_avx2
rx burst function: i40e_recv_pkts_vec_avx2
tx errors 17
rx frames ok    4585
rx bytes ok   391078
extended stats:
  rx good packets   4585
  rx good bytes   391078
  tx errors   17
  rx multicast packets  4345
  rx broadcast packets   243
  rx unknown protocol packets   4588
  rx size 65 to 127 packets 4529
  rx size 128 to 255 packets  32
  rx size 256 to 511 packets  26
  rx size 1024 to 1522 packets 1
  tx size 65 to 127 packets   33
FortyGigabitEthernet12/0/1 2 up   FortyGigabitEthernet12/0/1
  Link speed: 40 Gbps
  Ethernet address 3c:fd:fe:b5:5e:40
  Intel X710/XL710 Family
carrier up full duplex mtu 9206
flags: admin-up pmd rx-ip4-cksum
rx: queues 16 (max 320), desc 1024 (min 64 max 4096 align 32)
tx: queues 16 (max 320), desc 4096 (min 64 max 4096 align 32)
pci: device 8086:1583 subsystem 8086: address :12:00.01 numa 0
max rx packet len: 9728
promiscuous: unicast off all-multicast on
vlan offload: strip off filter off qinq off
rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum qinq-strip
   outer-ipv4-cksum vlan-filter vlan-extend jumbo-frame
   scatter keep-crc
rx offload active: ipv4-cksum
tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum sctp-cksum
   tcp-tso outer-ipv4-cksum qinq-insert vxlan-tnl-tso
   gre-tnl-tso ipip-tnl-tso geneve-tnl-tso multi-segs
   mbuf-fast-free
tx offload active: none
rss avail: ipv4-frag ipv4-tcp ipv4-udp ipv4-sctp ipv4-other
ipv6-frag
   ipv6-tcp ipv6-udp ipv6-sctp ipv6-other l2-payload
rss active:ipv4-frag ipv4-tcp ipv4-udp ipv4-other ipv6-frag
ipv6-tcp
   ipv6-udp ipv6-other
tx burst function: i40e_xmit_pkts_vec_avx2
rx burst function: i40e_recv_pkts_vec_avx2
rx frames ok    4585
rx bytes ok   391078
extended stats:
  rx good packets   4585
  rx good bytes   391078
  rx multicast packets  4344
  rx broadcast packets   243
  rx unknown protocol packets   4587
  rx size 65 to 127 packets

[vpp-dev] sub interface after virtual interfaces doesn't work

2020-02-16 Thread abbas ali chezgi via Lists.Fd.Io

1- create gre tunnel between n1--n2
2- add ip and route: it works
3- delete ip and route from gre and delete the gre tunnel
4- create sub interface
5- add ip and route

it doesn't work
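
A sketch of the repro in VPP CLI terms (interface names and addresses are
placeholders, not taken from the original report):

vpp# create gre tunnel src 192.168.1.1 dst 192.168.1.2
vpp# set interface state gre0 up
vpp# set interface ip address gre0 10.0.0.1/30
vpp# ip route add 10.1.0.0/16 via 10.0.0.2 gre0
vpp# set interface ip address del gre0 10.0.0.1/30
vpp# create gre tunnel src 192.168.1.1 dst 192.168.1.2 del
vpp# create sub-interfaces GigabitEthernet0/8/0 100
vpp# set interface state GigabitEthernet0/8/0.100 up
vpp# set interface ip address GigabitEthernet0/8/0.100 10.2.0.1/24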



[VPP-1841] sub interface after gre doesn't work - FD.io Jira

Where can I look for it? Thanks.


Re: [vpp-dev] sh hardware-interfaces extended stats are not showing up

2020-02-16 Thread carlito nueno
Hi Damjan,

Sorry for the late reply. I tested it on v20.01 and this is now working.

Thanks!

On Fri, Sep 20, 2019 at 2:07 PM Damjan Marion  wrote:

>
> AFAIK it is fixed, please try latest master and report back if it doesn't
> work.
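>
> A quick sketch of pulling and building latest master (standard FD.io repo
> URL and Makefile targets; adjust to your environment):
>
> $ git clone https://gerrit.fd.io/r/vpp && cd vpp
> $ make install-dep && make install-ext-deps
> $ make build-release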
>
> On 20 Sep 2019, at 19:53, Devis Reagan  wrote:
>
> Hi David,
>
> Is there any fix or workaround for this extended stats issue?
>
> Thanks
>
> On Thu, Aug 29, 2019 at 6:58 AM Carlito Nueno 
> wrote:
>
>> Hi David,
>>
>> I tried "vppctl interface collect detailed-stats enable" but it doesn't
>> work.
>>
>> I will git bisect as Damjan mentioned and try to see what changed.
>>
>> Thanks
>>
>> On Wed, Aug 28, 2019 at 8:00 AM Damjan Marion via Lists.Fd.Io
>>   wrote:
>>
>>>
>>> It is not intentional, so somebody needs to debug it… "git bisect" might
>>> be a good choice here.
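>>>
>>> A minimal bisect sketch, assuming the v19.04/v19.08 release tags mark the
>>> known-good and known-bad builds:
>>>
>>> $ git bisect start
>>> $ git bisect bad v19.08
>>> $ git bisect good v19.04
>>> # at each step: rebuild, check "show hardware-interfaces", then mark
>>> $ git bisect good   # or: git bisect bad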
>>>
>>> On 28 Aug 2019, at 13:50, Devis Reagan  wrote:
>>>
>>> Can anyone help with this? Extended stats are not shown in vpp 19.08 via
>>> the 'show hardware-interfaces' command.
>>>
>>> Thanks
>>>
>>> On Tue, Aug 27, 2019 at 12:49 PM Devis Reagan via Lists.Fd.Io
>>>   wrote:
>>>
 I am also using vpp 19.08 and don't see the extended stats that I used to
 see in other vpp releases. There was no change in the configuration, but
 with vpp 19.08 they are not showing up.

 When I use the dpdk application called testpmd, the extended stats show up
 fine. It's only vpp that is not showing them.

 Do we need to configure anything to get them?

 Note: in the 19.08 release notes I saw some changes that went in for
 extended stats.

 Thanks


 On Tue, Aug 27, 2019 at 7:12 AM David Cornejo  wrote:

> did you make sure that you have detailed stats collection enabled for
> the interface?
>
> (see vl_api_collect_detailed_interface_stats_t)
>
> On Mon, Aug 26, 2019 at 2:24 PM carlito nueno 
> wrote:
> >
> > Hi all,
> >
> > I am using: vpp v19.08-release built by root on 365637461ad3 at Wed
> Aug 21 18:20:49 UTC 2019
> >
> > When I do sh hardware-interfaces or sh hardware-interfaces detail or
> verbose, extended stats are not showing.
> >
> > On 19.08 I only see stats like below:
> >
> > rss active:none
> > tx burst function: eth_igb_xmit_pkts
> > rx burst function: eth_igb_recv_scattered_pkts
> >
> > tx frames ok   26115
> > tx bytes ok 34203511
> > rx frames ok   12853
> > rx bytes ok  1337944
> >
> > On 19.04 I am able to see:
> >
> > rss active:none
> > tx burst function: eth_igb_xmit_pkts
> > rx burst function: eth_igb_recv_scattered_pkts
> >
> > tx frames ok21535933
> > tx bytes ok  21806938127
> > rx frames ok13773533
> > rx bytes ok   3642009224
> > extended stats:
> >   rx good packets   13773533
> >   tx good packets   21535933
> >   rx good bytes   3642009224
> >   tx good bytes  21806938127
> >   rx size 64 packets 1171276
> >   rx size 65 to 127 packets  8462547
> >   rx size 128 to 255 packets 1506266
> >   rx size 256 to 511 packets  606052
> >   rx size 512 to 1023 packets 560122
> >   rx size 1024 to max packets1467270
> >   rx broadcast packets383890
> >   rx multicast packets291769
> >   rx total packets  13773533
> >   tx total packets  21535933
> >   rx total bytes  3642009224
> >   tx total bytes 21806938127
> >   tx size 64 packets  397270
> >   tx size 65 to 127 packets  3649953
> >   tx size 128 to 255 packets 1817099
> >   tx size 256 to 511 packets  976902
> >   tx size 512 to 1023 packets 773963
> >   tx size 1023 to max packets   13920746
> >   tx multicast packets   893
> >   tx broadcast packets356966
> >   rx sent to host packets 59
> >   tx sent by host packets

[vpp-dev] Coverity run FAILED as of 2020-02-16 14:00:23 UTC

2020-02-16 Thread Noreply Jenkins
Coverity run failed today.

The current number of outstanding issues is 1.
Newly detected: 0
Eliminated: 1
More details can be found at https://scan.coverity.com/projects/fd-io-vpp/view_defects