Re: [dpdk-users] [dpdk-dev] Suggestions on how to customize the metadata fields of each packet

2018-02-23 Thread Ananyev, Konstantin
Hi Victor,

> 
> Thanks for your quick answer,
> 
> I have read so many documents and web pages on this issue that I probably
> confused the purpose of the headroom. It is good to know that this 128-byte
> space is available for my own use. The fact that it is lost once the
> NIC transmits the frame is not a problem at all for my application.
> However, in case this space is not enough, I have seen in the rte_mbuf
> struct a (void *) pointer called userdata which is in theory meant for extra
> user-defined metadata. If I wanted to attach an additional metadata struct,
> I guess I would just have to assign a pointer to that struct to the
> userdata field. However, what happens if I want the content of this
> struct to travel with the packet through a software ring in order to be
> processed by another thread? Should I reserve more space in the ring to
> hold such extra metadata?
> 
> Thanks again,


In theory the headroom inside the mbuf should be left for the packet's data.
To do things properly you'll need to create your mbuf mempools with
priv_size >= your_extra_metadata_size.

Konstantin
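
To make that concrete, here is a minimal sketch of the priv_size approach
(struct my_meta, the pool name and the sizes are made-up placeholders, not
anything from the original mails). Because the private area sits inside the
mbuf itself, it travels with the mbuf pointer through an rte_ring to another
lcore, with no extra space reserved in the ring:

#include <rte_common.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>

/* Hypothetical per-packet metadata layout -- purely for illustration. */
struct my_meta {
	uint64_t flow_id;
	uint32_t user_flags;
};

static struct rte_mempool *
create_pool_with_meta(void)
{
	/* priv_size must be a multiple of RTE_MBUF_PRIV_ALIGN (8 bytes). */
	uint16_t priv = (uint16_t)RTE_ALIGN_CEIL(sizeof(struct my_meta),
						 RTE_MBUF_PRIV_ALIGN);

	return rte_pktmbuf_pool_create("pkt_pool", 8192, 256, priv,
				       RTE_MBUF_DEFAULT_BUF_SIZE,
				       rte_socket_id());
}

/* The private area is located right after struct rte_mbuf. */
static inline struct my_meta *
mbuf_meta(struct rte_mbuf *m)
{
	return (struct my_meta *)((char *)m + sizeof(struct rte_mbuf));
}

Newer DPDK releases also provide rte_mbuf_to_priv() as a helper that returns
the same pointer.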

> 
> PD: I have copied the message to users mailing list
> 
> 2018-02-23 4:13 GMT+01:00 :
> 
> > Hi,
> >
> > First, I think your question should be sent to the user mailing list, not
> > the dev mailing list.
> >
> > > I have seen that each packet has a headroom memory space (128 bytes
> > > long) where RSS hashing and other metadata provided by the NIC is stored.
> >
> > If I’m not mistaken, the headroom is not where metadata provided by the
> > NIC are stored. Those metadata are stored in the rte_mbuf struct, which
> > is also 128 bytes long.
> >
> > The headroom area is located AFTER the end of rte_mbuf (at offset 128).
> > By default the headroom area is also 128 bytes long, so the actual packet
> > data is stored at offset 256.
> >
> > You can store whatever you want in this headroom area. However, that
> > information is lost as soon as the packet leaves DPDK (the NIC will start
> > sending at offset 256).
> >
> > -BL.
> >
> 
> 
> 
> --
> Victor
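
As a rough illustration of the layout described above (a sketch only; the
exact offsets depend on the build-time configuration and assume priv_size == 0,
i.e. the default 128/128/256 values mentioned in the thread):

#include <rte_mbuf.h>

/*
 * Default mbuf layout, relative to the start of the mbuf:
 *
 *   offset   0 .. 127 : struct rte_mbuf (RSS hash, ol_flags, other metadata)
 *   offset 128 .. 255 : headroom (RTE_PKTMBUF_HEADROOM, free for the app)
 *   offset 256 .. end : packet data (this is what the NIC actually sends)
 */
static inline void
inspect_layout(struct rte_mbuf *m)
{
	char *pkt_data       = rte_pktmbuf_mtod(m, char *); /* start of frame */
	char *headroom_start = pkt_data - rte_pktmbuf_headroom(m);

	/* Anything written between headroom_start and pkt_data stays private
	 * to the application and is not transmitted. */
	(void)headroom_start;
}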


Re: [dpdk-users] [dpdk-dev] IPSEC: No IPv6 SP Inbound rule specified

2020-09-22 Thread Ananyev, Konstantin

> Hi Anoob and DPDK Dev Team,
> > -->While we are running the ipsec-secgw application, we are getting the
> > following error:
> >
> > IPSEC: No IPv6 SP Inbound rule specified
> > IPSEC: No IPv6 SP Outbound rule specified

Wonder what makes you think it is an error?
It is just a message saying that you don't have any SP rules for IPv6.
If you don't plan to process IPv6 traffic, that is probably OK.
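
For reference, SP rules come from the configuration file passed to ipsec-secgw
with the -f option. If IPv6 traffic does need to be protected, an
outbound/inbound pair of IPv6 SP entries looks roughly like the sketch below
(the SA indices, addresses and port ranges here are made-up placeholders; see
the ipsec-secgw sample application guide for the exact syntax and the matching
sa/rt entries):

sp ipv6 out esp protect 25 pri 1 dst 0000:0000:0000:0000:aaaa:aaaa:0000:0000/96 sport 0:65535 dport 0:65535
sp ipv6 in esp protect 125 pri 1 dst ffff:0000:0000:0000:aaaa:aaaa:0000:0000/96 sport 0:65535 dport 0:65535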

> > Creating IPv4 Routing Table (RT) context with 1024 max routes
> > LPM: Adding route 192.168.122.0/24 (1)
> >
> > -->Below are the steps we've performed:
> >
> > Since we don't have the IXGBE driver in the virtual machines, we used SRIOV to
> > import the IXGBE NICs (virtual functions) from the host (physical function).
> > We referred to the below link for SRIOV:
> >
> > https://software.intel.com/content/www/us/en/develop/articles/configure-sr-iov-network-virtual-functions-in-linux-kvm.html
> >
> > This is the info after importing the IXGBE NICs into the virtual machine:
> > ---> ip link show
> > 1: lo:  mtu 65536 qdisc noqueue state UNKNOWN mode
> > DEFAULT group default qlen 1000
> > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> > 2: ens8:  mtu 1500 qdisc pfifo_fast state
> > UP mode DEFAULT group default qlen 1000
> > link/ether 52:54:00:bd:f7:ce brd ff:ff:ff:ff:ff:ff
> > 3: ens9:  mtu 1500 qdisc pfifo_fast state
> > UP mode DEFAULT group default qlen 1000
> > link/ether 52:54:00:e5:35:b5 brd ff:ff:ff:ff:ff:ff
> > 4: ens12:  mtu 1500 qdisc pfifo_fast state
> > UP mode DEFAULT group default qlen 1000
> > link/ether 52:54:00:14:ce:7c brd ff:ff:ff:ff:ff:ff
> > 5: ens13:  mtu 1500 qdisc pfifo_fast state
> > UP mode DEFAULT group default qlen 1000
> > link/ether 52:54:00:d0:73:4c brd ff:ff:ff:ff:ff:ff
> >
> > 6: ens3:  mtu 1500 qdisc mq state DOWN
> > mode DEFAULT group default qlen 1000
> > link/ether 96:42:49:cd:9c:61 brd ff:ff:ff:ff:ff:ff
> >  <<
> > 7: ens11:  mtu 1500 qdisc mq state DOWN
> > mode DEFAULT group default qlen 1000
> > link/ether 3e:f5:92:ef:9e:16 brd ff:ff:ff:ff:ff:ff
> >  <<
> >
> >
> > 8: ens10:  mtu 1500 qdisc pfifo_fast state
> > UP mode DEFAULT group default qlen 1000
> > link/ether 52:54:00:51:ee:74 brd ff:ff:ff:ff:ff:ff
> > 9: ens14:  mtu 1500 qdisc pfifo_fast state
> > UP mode DEFAULT group default qlen 1000
> > link/ether 52:54:00:da:08:cb brd ff:ff:ff:ff:ff:ff
> > 10: ens15:  mtu 1500 qdisc pfifo_fast
> > state UP mode DEFAULT group default qlen 1000
> > link/ether 52:54:00:31:7c:6f brd ff:ff:ff:ff:ff:ff
> >
> > -->While we are running the ipsec-secgw application, we are getting the
> > following error:
> >
> > IPSEC: No IPv6 SP Inbound rule specified
> > IPSEC: No IPv6 SP Outbound rule specified
> > Creating IPv4 Routing Table (RT) context with 1024 max routes
> > LPM: Adding route 192.168.122.0/24 (1)
> >
> > Checking link status..done
> > Port 0 Link Down  <<<
> > Port 1 Link Down  <<<
> > IPSEC: entering main loop on lcore 0
> > IPSEC:  -- lcoreid=0 portid=0 rxqueueid=0
> > IPSEC:  -- lcoreid=0 portid=1 rxqueueid=0
> >
> > Please find the attachment which contains the console logs.
> >
> > Thanks and Regards
> >


RE: Does ACL support field size of 8 bytes?

2022-04-28 Thread Ananyev, Konstantin

Hi Ido,

> I've had lots of good experience with ACL but can't make it work with u64 values.
> I know a u64 can be split into 2xu32 fields, but that makes it more complex to use
> and wastes double the number of fields (we hit the
> RTE_ACL_MAX_FIELDS 64 limit).

Wow, that's a lot of fields...

> According to the documentation and rte_acl.h, the field size can be 8 bytes (u64),
> e.g.
>   'The size parameter defines the length of the field in bytes. Allowable 
> values are 1, 2, 4, or 8 bytes.'
>   (from 
> https://doc.dpdk.org/guides-21.11/prog_guide/packet_classif_access_ctrl.html#rule-definition)
> 
> Though there's a hint it's less recommended
>   'Also, it is best to define fields of 8 or more bytes as 4 byte fields so 
> that the build processes can eliminate fields that are all wild.'
>
> It's also not clear how it fits in a group (i.e. what the input_index stride
> is), since a group is only 4 bytes:
> 'All subsequent fields has to be grouped into sets of 4 consecutive 
> bytes.'
> 
> I couldn't find any example or test app that uses 8 bytes;
> e.g. for IPv6 addresses, 4xu32 fields are always used and not 2xu64.
> 
> Should it work?
> Did anyone try it successfully and/or can share an example?

You are right: though it is formally supported, we do not test it, and AFAIK
no one has used it till now.
As we group fields into 4B chunks anyway, an 8B field is sort of awkward and
confusing.
To be honest, I don't even remember what the rationale was behind introducing
it in the first place.
Anyway, I just submitted patches that should fix 8B field support (at least it
works for me now):
https://patches.dpdk.org/project/dpdk/list/?series=22676
Please give it a try.
In the long term it would probably be good to hear from you and other users
whether we should keep 8B support at all, or whether it would be easier just
to abandon it.
Thanks
Konstantin
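
For anyone who wants to experiment with the patched 8B support, a field
definition using a single u64 field might look roughly like the sketch below.
This is only an illustration, not a tested configuration from the thread: the
struct, field names, indices and offsets are made up, and the exact
input_index handling for 8-byte fields is precisely the open question
discussed above.

#include <stddef.h>
#include <rte_common.h>
#include <rte_acl.h>

/* Hypothetical lookup key: the ACL library requires the first field to be
 * one byte long; the u64 value of interest follows it. */
struct my_key {
	uint8_t  proto;
	uint64_t id;
} __rte_packed;

enum { KEY_FIELD_PROTO, KEY_FIELD_ID, KEY_NUM_FIELDS };

static struct rte_acl_field_def key_defs[KEY_NUM_FIELDS] = {
	{
		.type = RTE_ACL_FIELD_TYPE_BITMASK,
		.size = sizeof(uint8_t),
		.field_index = KEY_FIELD_PROTO,
		.input_index = 0,
		.offset = offsetof(struct my_key, proto),
	},
	{
		/* Single 8-byte field. The conventional workaround is two
		 * 4-byte RTE_ACL_FIELD_TYPE_MASK fields with consecutive
		 * input_index values covering the same u64. */
		.type = RTE_ACL_FIELD_TYPE_MASK,
		.size = sizeof(uint64_t),
		.field_index = KEY_FIELD_ID,
		.input_index = 1,
		.offset = offsetof(struct my_key, id),
	},
};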