Re: [lng-odp] ODP traffic manager and comparison with DPDK

2017-07-21 Thread Puneet Gupta
Thanks Bill for your replies. ☺

I have subscribed and got the following message:

“Your subscription request has been received, and will soon be acted upon. 
Depending on the configuration of this mailing list, your subscription request 
may have to be first confirmed by you via email, or approved by the list 
moderator. If confirmation is required, you will soon get a confirmation email 
which contains further instructions.”

Thanks,
Puneet

Re: [lng-odp] ODP traffic manager and comparison with DPDK

2017-07-20 Thread Bill Fischofer
On Thu, Jul 20, 2017 at 8:10 AM, Puneet Gupta <puneet.gu...@xilinx.com>
wrote:

> Hi,
>
> Thanks Bill for your answers. It’s very helpful!
>
> As I am a beginner, here are some more basic queries! ☺
>
> -  What is meant by “ODP applications write to TM input queues
> and TM output queues”?
>
> So if we have only one TM stage, do we then have only input queues,
> connected to a TM node, from which packets are sent to the Pktout
> interface based on the shaper/WRED profile?
>

Please see the Traffic Manager section of the ODP User Guide:
https://docs.opendataplane.org/snapshots/odp-publish/generic/usr_html/master/latest/linux-generic/output/users-guide.html#_traffic_manager_tm

The TM is organized into a hierarchy of TM Nodes that are fed by TM Queues
and that output to either other TM Nodes or else to Pktios.
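
For concreteness, here is a minimal sketch of the smallest useful hierarchy
in C: one TM queue feeding one TM node, which drains to a pktio. The calls
follow the ODP TM API, but the requirement and parameter structs have
changed across releases, so treat this as illustrative and verify it
against the headers you build with. egress_pktio is an assumed, already
opened interface.

    #include <odp_api.h>

    static odp_tm_queue_t make_tm_path(odp_pktio_t egress_pktio)
    {
        odp_tm_requirements_t req;
        odp_tm_egress_t egress;
        odp_tm_node_params_t np;
        odp_tm_queue_params_t qp;

        odp_tm_requirements_init(&req);
        req.max_tm_queues = 16;
        req.num_levels = 1;          /* a single TM stage */
        /* real code also fills in the req.per_level[] limits */

        odp_tm_egress_init(&egress);
        egress.egress_kind = ODP_TM_EGRESS_PKT_IO;
        egress.pktio = egress_pktio; /* TM output drains to this port */

        odp_tm_t tm = odp_tm_create("tm0", &req, &egress);

        odp_tm_node_params_init(&np);
        np.level = 0;                /* level numbering: see the TM docs */
        odp_tm_node_t node = odp_tm_node_create(tm, "node0", &np);

        odp_tm_queue_params_init(&qp);
        odp_tm_queue_t tmq = odp_tm_queue_create(tm, &qp);

        odp_tm_queue_connect(tmq, node);        /* TM queue -> TM node */
        odp_tm_node_connect(node, ODP_TM_ROOT); /* node -> TM egress   */
        return tmq;
    }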

>
> My requirement is that I need to offload the classification and packet
> queueing functionality to the hardware.
>
> -  For offloading packet queuing, I can see that ODP gives a
> mechanism by which we can create a new file in the directory
> platform/linux-generic/pktio//.c and then assign the function
> pointers for the following ops: input_queues_config and
> output_queues_config. These operations are called by
> odp_pktin_queue_config and odp_pktout_queue_config, which are in turn
> called by our ODP application.
>
> Let me know if my above statement is correct or not. ☺
>

From these questions I assume you're looking at creating your own ODP
implementation. That's a larger question and may be better handled in a
conversation.
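
For reference, the application-facing side of the hook described above is
odp_pktin_queue_config(). A rough sketch, reusing the odp_api.h include
from the earlier snippet (field names follow the ODP pktin API; verify
them against the release you target):

    static int config_pktin(odp_pktio_t pktio)
    {
        odp_pktin_queue_param_t param;

        odp_pktin_queue_param_init(&param);
        param.num_queues = 4;                 /* ask for 4 input queues  */
        param.hash_enable = 1;                /* hash flows across them  */
        param.hash_proto.proto.ipv4_udp = 1;  /* key on IPv4/UDP headers */

        /* This call is what lands in an implementation's
         * input_queues_config op. Returns 0 on success. */
        return odp_pktin_queue_config(pktio, &param);
    }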

The classifier is a separate module from the traffic manager, and queues
are also separate from both. We're actually in the process of developing
and testing a modular framework that will make it easier to do "pluggable"
replacements of these various components in a more flexible and dynamic
manner. The cloud-dev branch of the ODP git repo is where this work is
being done. We hope to have the first stage of these structures in place
there in the coming weeks, but an official ODP release with these
capabilities is probably sometime next year.

In the meantime, you'd use platform/myplatform to contain replacement files
for those that are currently in platform/linux-generic. If you look at how
the odp-dpdk repository is organized, you can see how that works as there
is a platform/linux-dpdk that contains modules that "override" the base
modules in platform/linux-generic.

You can see the current list of available ODP implementations at
https://www.opendataplane.org/downloads/


>
> -  For offloading packet classification, can I modify the
> following ODP functions: odp_cls_pmr_create and odp_pmr_create_term
> (implemented in platform/linux-generic/odp_classifier.c) to suit my own
> needs? Or is there any other way to offload classification?
>

Yes, that's what you'd do--replace the SW implementations of various ODP
APIs with those that interface directly with your HW.


>
>
> -  For setting a classification rule, we have to give src_cos
> and dst_cos. A queue and buffer pool are associated with each CoS. I
> understand that the dst_cos associated with the PMR is the CoS whose
> queue receives the packets matching that rule.
>
Correct. It's a cascade where packets get filtered through PMRs and
eventually land in a CoS that defines the pool and queue that the packets
should be sent to.


> A packet comes from the Pktin interface, then goes to the classifier,
> and then goes to the queues (which we have associated with the dst_cos),
> not from any CoS. So what does src_cos mean? In many ODP examples, I can
> see they use default_cos as the src_cos argument for classification
> setup. What does that default_cos mean?
>

The default CoS is the output from a PktIO in the absence of any further
filtering rules. The src_cos is how rules are chained together. The
odp_cls_pmr_create() API says to take packets assigned to the src_cos and
further filter them to try to put them in a more specific CoS (the
dst_cos). So the progression is from coarse filtering to finer-grained
filtering. When a CoS is reached that isn't the src_cos to any other
matching PMR, then that CoS is the one that's used to determine which
pool/queue the packet goes to.
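
In code, that chain looks roughly like this; a sketch reusing the same
odp_api.h include as above, where pkt_pool, default_queue, and dns_queue
are assumed, already-created handles, and DNS is just an example of a
finer-grained class (check the byte-order rules for PMR match values in
your ODP release):

    static void chain_cls(odp_pktio_t pktio, odp_pool_t pkt_pool,
                          odp_queue_t default_queue, odp_queue_t dns_queue)
    {
        odp_cls_cos_param_t cp;
        odp_cls_cos_param_init(&cp);
        cp.pool  = pkt_pool;
        cp.queue = default_queue;
        odp_cos_t default_cos = odp_cls_cos_create("default", &cp);

        cp.queue = dns_queue;
        odp_cos_t dns_cos = odp_cls_cos_create("dns", &cp);

        /* Packets matching no PMR stay in the default CoS. */
        odp_pktio_default_cos_set(pktio, default_cos);

        /* Chain: src_cos = default_cos, dst_cos = dns_cos. Matching
         * UDP-dst-port-53 packets are narrowed into dns_cos. */
        uint16_t port = 53, mask = 0xffff;
        odp_pmr_param_t pmr;
        odp_cls_pmr_param_init(&pmr);
        pmr.term        = ODP_PMR_UDP_DPORT;
        pmr.match.value = &port;
        pmr.match.mask  = &mask;
        pmr.val_sz      = sizeof(port);

        (void)odp_cls_pmr_create(&pmr, 1, default_cos, dns_cos);
    }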


>
> I tried subscribing to the list, but it's giving the following message:
> “The form is too old. Please GET it again.” That's why I have written
> directly to the list.
>

You should be able to subscribe at
https://lists.linaro.org/mailman/listinfo/lng-odp
If this isn't working for you, please let me know directly.


>
> Thanks
>
> Puneet

Re: [lng-odp] ODP traffic manager and comparison with DPDK

2017-07-20 Thread Bill Fischofer
Hi Puneet, and thanks for your post. Please subscribe to the ODP mailing
list if you wish to post; otherwise your posts will be delayed, since they
require manual approval to make it to the list.



On Thu, Jul 20, 2017 at 3:44 AM, Puneet Gupta 
wrote:

> Hi all,
>
> I have been studying the ODP framework for the last few days and have
> some queries related to it.
>
>
> 1.   There is a traffic manager block in the egress direction. In the
> diagram (section 5.3 in
> https://docs.opendataplane.org/snapshots/odp-publish/generic/usr_html/master/latest/linux-generic/output/users-guide.html
> ), it is shown that it has TM input queues and TM output queues. What
> does that mean? How are these input queues different from the ones in
> the ingress direction (after the classifier)?
>

ODP applications can send output directly to pktio via odp_pktout_queue_t
objects, which are the output ports associated with a logical interface, or
they can have output shaped via the traffic manager by sending them to
odp_tm_queue_t objects instead. This is similar to the receive side where
applications can choose to read odp_pktin_queue_t objects directly or else
receive input via the classifier / scheduler. It's really a question of how
the application wishes to work.

TM input and output queues are named from the perspective of the traffic
manager itself. ODP applications write to TM input queues, and TM output
queues then connect either to other TM input stages (in more complex
configurations) or else directly to Pktio output ports.
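
Side by side, the two egress paths look like this; a sketch, where pktout
and tmq are assumed, already-created handles for the same interface:

    #include <odp_api.h>

    static void send_direct(odp_pktout_queue_t pktout, odp_packet_t pkt)
    {
        odp_pktout_send(pktout, &pkt, 1);  /* straight to the port,
                                              unshaped */
    }

    static void send_shaped(odp_tm_queue_t tmq, odp_packet_t pkt)
    {
        odp_tm_enq(tmq, pkt);  /* TM applies shaper/WRED before the
                                  pktio */
    }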


>
> 2.   The queues created through the odp_tm_queue_create API (with which
> we associate a shaper/WRED profile): are they input queues, output
> queues, or both?
>

Both. A TM queue is fed by the application (or by a higher TM stage),
performs its shaping/weighting functions as configured, and outputs to
either the next stage or to the target pktio object.
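
A sketch of how the shaper profile enters the picture: the profile is
created once and referenced from the TM queue's creation parameters. Here
tm is the odp_tm_t from a setup like the earlier sketch; field names such
as commit_bps follow the TM API of this era and were later renamed, and
the rates are made-up examples.

    static odp_tm_queue_t shaped_queue(odp_tm_t tm)
    {
        odp_tm_shaper_params_t sp;
        odp_tm_shaper_params_init(&sp);
        sp.commit_bps   = 100000000ULL; /* 100 Mbit/s committed rate  */
        sp.peak_bps     = 200000000ULL; /* 200 Mbit/s peak, dual-rate */
        sp.commit_burst = 65536;        /* burst sizes are in bits    */
        sp.peak_burst   = 131072;
        sp.dual_rate    = 1;
        odp_tm_shaper_t shaper = odp_tm_shaper_create("shaper_100M", &sp);

        odp_tm_queue_params_t qp;
        odp_tm_queue_params_init(&qp);
        qp.shaper_profile = shaper;     /* queue is shaped on egress */
        return odp_tm_queue_create(tm, &qp);
    }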


>
> 3.   How is it different from DPDK? I heard that ODP supports hardware
> offload features such as classification, buffering, queueing, etc.,
> whereas DPDK doesn't. But is there any other reason why we should go
> with ODP and not DPDK?
>

ODP makes a distinction between the API definitions, which are abstract,
and the implementation of those APIs, which can be in hardware, software,
or any combination of these as defined by an ODP implementer. As part of
the project, ODP offers reference implementations of the ODP API set that
can either be used directly or as a starting point by SoC vendors for
creating an implementation tailored to their platform.

In addition to offering an API specification, ODP also includes a
validation test suite that enables ODP implementations (and users of those
implementations) to verify that they have correctly implemented the various
ODP APIs in a manner conforming to their specification.

In addition to defining new APIs for features such as IPsec offload
support, ODP is also developing a production-grade implementation of ODP
targeting NFV/cloud environments. So ODP offers a highly flexible and
tailorable set of compatible APIs that can be used in a variety of
settings, from datacenters, to smart NICs, to on-chip firmware.


>
> 4.   What is OFP? I couldn't understand why it has been created. I can
> see that the main benefit of OFP is portability, but that portability
> feature is already achieved by ODP.
>

OpenFastPath (OFP) is a TCP/IP stack that is written to run on top of ODP.
ODP itself handles basic packet I/O and manipulation, but doesn't provide
any upper-layer protocol features. While this by itself is sufficient for
"classic" data plane applications like switches and routers, many
applications want to use TCP. OFP offers this in a manner that fully
leverages the offload and acceleration features offered by ODP.



Re: [lng-odp] ODP traffic manager and comparison with DPDK

2017-07-20 Thread Bogdan Pricope
OFP (OpenFastPath) is a user space IP stack (IPv4/IPv6, UDP/TCP,
routes/arp, sockets, select(), epoll(), etc.) on top of ODP. See
http://www.openfastpath.org/
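
For a flavor of how it is used, a minimal sketch of a TCP listener with
OFP's BSD-like calls (ofp_-prefixed mirrors of the socket API; the names
below are from memory, so double-check them against the OFP headers):

    #include <string.h>
    #include "ofp.h"

    static int open_listener(uint16_t port)
    {
        int fd = ofp_socket(OFP_AF_INET, OFP_SOCK_STREAM,
                            OFP_IPPROTO_TCP);
        struct ofp_sockaddr_in addr;

        memset(&addr, 0, sizeof(addr));
        addr.sin_len         = sizeof(addr);
        addr.sin_family      = OFP_AF_INET;
        addr.sin_port        = odp_cpu_to_be_16(port);
        addr.sin_addr.s_addr = OFP_INADDR_ANY;

        ofp_bind(fd, (struct ofp_sockaddr *)&addr, sizeof(addr));
        ofp_listen(fd, 10);  /* backlog of 10 pending connections */
        return fd;
    }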



[lng-odp] ODP traffic manager and comparison with DPDK

2017-07-20 Thread Puneet Gupta
Hi all,

I have been studying the ODP framework for the last few days and have some
queries related to it.



1.   There is a traffic manager block in the egress direction. In the
diagram (section 5.3 in
https://docs.opendataplane.org/snapshots/odp-publish/generic/usr_html/master/latest/linux-generic/output/users-guide.html
), it is shown that it has TM input queues and TM output queues. What does
that mean? How are these input queues different from the ones in the
ingress direction (after the classifier)?

2.   The queues created through the odp_tm_queue_create API (with which we
associate a shaper/WRED profile): are they input queues, output queues, or
both?

3.   How is it different from DPDK? I heard that ODP supports hardware
offload features such as classification, buffering, queueing, etc., whereas
DPDK doesn't. But is there any other reason why we should go with ODP and
not DPDK?

4.   What is OFP? I couldn't understand why it has been created. I can see
that the main benefit of OFP is portability, but that portability feature
is already achieved by ODP.

Thanks in advance

-Puneet


