On 24/11/15 09:24, Savolainen, Petri (Nokia - FI/Espoo) wrote:
      diff --git a/include/odp/api/packet.h b/include/odp/api/packet.h
      index 3a454b5..f5d2142 100644
      --- a/include/odp/api/packet.h
      +++ b/include/odp/api/packet.h
      @@ -73,6 +73,9 @@ extern "C" {
        * @note The default headroom and tailroom used for packets is specified by
        * the ODP_CONFIG_PACKET_HEADROOM and ODP_CONFIG_PACKET_TAILROOM defines in
        * odp_config.h.
      + * @note Data changed in user area might be preserved by the platform from
      + * previous usage of the buffer, so values preset in uarea_init() are not
      + * guaranteed.


Terms "buffer" and uarea_init() are ambiguous in the API spec.

We can swap those two instances of "buffer" to "packet" in a subsequent
patch.

This is the packet API, and uarea_init() is not an API function.

uarea_init() is a defined function pointer in the API; I think it's
quite clear what it is. Maybe a '*' character before it would be good to
emphasize that it's a function pointer, not a function?


The exact term is e.g. "packet pool parameter uarea_init" or "odp_packet_uarea_init_t 
callback". Those are easier to find than uarea_init(), which seems to be an 
implementation-specific function leaked into the API spec.

OK, I'll send a patch with this; it also has some additional clarifications:

+++ b/include/odp/api/packet.h
@@ -94,8 +94,9 @@ extern "C" {
  * the ODP_CONFIG_PACKET_HEADROOM and ODP_CONFIG_PACKET_TAILROOM defines in
  * odp_config.h.
  * @note Data changed in user area might be preserved by the platform from
- * previous usage of the buffer, so values preset in uarea_init() are not
- * guaranteed.
+ * previous usage of the packet (if the underlying memory was allocated at pool
+ * creation time), so values preset by packet pool parameter *uarea_init() are
+ * only guaranteed to be known if the application never changes them.
  */
 odp_packet_t odp_packet_alloc(odp_pool_t pool, uint32_t len);






Also, it should be well defined whether the user area content is preserved,

"Any subsequent change to the user area might be preserved after
odp_packet_free() is called, so applications should take care of
(re)initialization if they change data preset by this function."


== data is not preserved (might or might not be preserved)
Yes, and the Evil Purple Pony (1) might or might not eat it. It's not the purpose of this callback to do anything about the preservation of data in the user area. These are just guidelines about possible platform strategies. If the implementation chooses to allocate the memory for the user area in odp_packet_alloc(), obviously the previous changes are not preserved, because we start from a fresh piece of memory. If the implementation allocates the memory at pool allocation time, the changes after odp_packet_free() are preserved. Or maybe not, if the platform decides to zap the memory before/after each use, regardless of memory allocation. That scenario is indeed not addressed; we need to say explicitly that if the platform does that, it should run the init function again. Here is what I made up:

+++ b/include/odp/api/pool.h
@@ -47,13 +47,13 @@ extern "C" {
  * @param pkt                   Handle of the packet
  * @param uarea_init_arg        Opaque pointer defined in odp_pool_param_t
  *
- * @note If the application specifies this pointer, it expects that every buffer
- * is initialized exactly once with it when the underlying memory is allocated.
+ * @note If the application specifies this pointer, it expects that every user
+ * area is initialized exactly once with it when the underlying memory is
+ * allocated (and optionally zeroed).
  * It is not called from odp_packet_alloc(), unless the platform chooses to
- * allocate the memory at that point. Applications can only assume that this
+ * allocate (or zero) the memory at that point. Applications can only assume that this
  * callback is called once before the packet is first used. Any subsequent
  * change to the user area might be preserved after odp_packet_free() is called,
- * so applications should take care of (re)initialization if they change data
+ * so applications should not change data
  * preset by this function.
  */
 typedef void (odp_packet_uarea_init_t)(odp_packet_t pkt, void *uarea_init_arg);
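To make the proposed semantics concrete, here is a minimal sketch of an
application-side init function matching this typedef. The struct layout and
names are invented for the example; only the two odp_packet_* calls are real API:

#include <odp.h>

/* Illustrative only: a made-up per-packet struct kept in the user area. */
struct app_uarea {
        uint32_t magic;    /* preset here, never changed by the application */
        void    *app_ctx;  /* application context pointer, also preset here */
};

static void my_uarea_init(odp_packet_t pkt, void *uarea_init_arg)
{
        struct app_uarea *ua = odp_packet_user_area(pkt);

        /* Skip if the pool was created without enough user area. */
        if (ua == NULL || odp_packet_user_area_size(pkt) < sizeof(*ua))
                return;

        ua->magic   = 0xC0FFEE;
        ua->app_ctx = uarea_init_arg;
}

It only touches the user area itself, which fits the restriction discussed
further down about which API calls are allowed on the handle at this point.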


The application should not rely on whether changes are preserved. I don't see a use case for that anyway. But this patch is not about that at all. The point of this whole thing is to init certain parts of the user area when the memory is allocated (or, if it's zeroed out, after that) and leave it as it is. The commit message discusses the use case as well. Of course the application can shoot itself in the foot by modifying data set by this function, because it could be preserved, and after that the application can't rely on the fact that the values preset by this function are intact. That's what the last sentence wants to clarify. Maybe we should explicitly forbid that, as it doesn't make any sense?

(1) I just made this up, but it seems it actually exists :)

== data is not re-initialized on every alloc

No. The function is called when the memory for the user area is allocated (or after it is zeroed, if that happens). When that happens is the implementation's choice.




always initialized (in alloc),

"It is not called from odp_packet_alloc(), unless the platform chooses
to allocate the memory at that point."


== data may be reinitialized on every alloc

Yes.

== data is not preserved (may be initialized)

It depends, see above.




initialized only once (and not preserved)
"initialized exactly once with it when the underlying memory is allocated"

or not initialized.
"If the application specifies this pointer, it expects that every buffer
is initialized exactly once"


== application demands preserved memory

Not at all. Read the rest of the sentence. "when the underlying memory is allocated" doesn't say anything about preservation.

== data initialized only once and not modified by implementation ever since, so 
application modification should be preserved

It depends; it's explained in the rest of the comment.


In summary the current spec says that:
- the area may or may not be re-initialized
- data may or may not be preserved
Yes.


It's trying to say "init once and preserve"

Not exactly.

, but speculates on implementation (at which point the one init happens) and ends up 
confusing the spec with too many "mights".
These additional explanations about possible implementation behaviors were added because during the arch meetings several questions came up about exactly how this whole thing works, so we wanted to avoid some easy misunderstandings.





The spec is too loose now.
Your questions could be answered from the odp_packet_uarea_init_t
definition; let me know if there is a need for improvement in the wording.

The API should give an exact spec that data is either:
* not init and not preserved
* init once and preserved
* init on every alloc and not preserved

None of these are true. They are absolutely not requirements.


The first two modes are needed now. The third one could be useful only if the platform 
cannot offer preserved memory or it's very costly.

I think you are talking about something different. The first mode is the current status; the next two are things which probably won't fly with some implementation strategies. E.g. the second doesn't make sense when the platform allocates memory during odp_packet_alloc(), as there is nothing to preserve.
Do you have a use case for these?







        */
       odp_packet_t odp_packet_alloc(odp_pool_t pool, uint32_t len);

      diff --git a/include/odp/api/pool.h b/include/odp/api/pool.h
      index 2e79a55..01f770f 100644
      --- a/include/odp/api/pool.h
      +++ b/include/odp/api/pool.h
      @@ -21,6 +21,7 @@ extern "C" {


       #include <odp/std_types.h>
      +#include <odp/packet.h>

       /** @defgroup odp_pool ODP POOL
        *  Operations on a pool.
      @@ -41,6 +42,23 @@ extern "C" {
       #define ODP_POOL_NAME_LEN  32

       /**
      + * Packet user area initializer callback function for pools.
      + *
      + * @param pkt                   Handle of the packet
      + * @param uarea_init_arg        Opaque pointer defined in odp_pool_param_t
      + *
      + * @note If the application specifies this pointer, it expects that every buffer

Packet, not buffer


      + * is initialized exactly once with it when the underlying memory is allocated.
      + * It is not called from odp_packet_alloc(), unless the platform chooses to
      + * allocate the memory at that point. Applications can only assume that this
      + * callback is called once before the packet is first used. Any subsequent
      + * change to the user area might be preserved after odp_packet_free() is called,
      + * so applications should take care of (re)initialization if they change data
      + * preset by this function.


I think this should be two modes:
* init once and preserve
* init on every alloc

OVS only needs the first option, but we can extend this later on.




      + */
      +typedef void (odp_packet_uarea_init_t)(odp_packet_t pkt, void *uarea_init_arg);


Which packet fields have been correctly initialized when this callback
is called?

Do you mean which ODP packet metadata were initialized BEFORE this
callback was called? That's not defined, and couldn't be, as a lot of
this metadata could be undefined when the allocation of the packet
buffers happens, as the buffer is unused by the packet at that point. So
the packet length is not known, for example.
Looking through the API, the following functions could be available during
execution of the callback:
odp_packet_pool
odp_packet_to_event

Otherwise we should mention that the callback must not call other API
functions on the handle.

That's the point; it's better to deny all packet API calls at this point. It's 
not yet a packet that was received from an interface, but a piece of memory that 
is being initialized.

I think we should allow at least odp_packet_pool(); the init function might need it. Otherwise I agree, we should deny other functions:

+ * @note The packet is not in use when this function is called, so only the
+ * following functions could be called by the init function with this handle:
+ * odp_packet_pool()
+ * odp_packet_user_area()
+ * odp_packet_user_area_size()
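
As a sketch of what an init function restricted to those three calls could
still do, here is the kind of handle-caching use mentioned for OVS further
down. The glue struct is invented for the example:

#include <odp.h>

/* Sketch only: a legacy-app glue struct invented for this example; the init
 * function touches the handle only via the three calls listed above. */
struct glue_uarea {
        odp_packet_t pkt;   /* the handle, cached once at init time */
        odp_pool_t   pool;  /* pool this packet was carved from */
};

static void glue_uarea_init(odp_packet_t pkt, void *uarea_init_arg)
{
        struct glue_uarea *g = odp_packet_user_area(pkt);

        (void)uarea_init_arg;

        if (g == NULL || odp_packet_user_area_size(pkt) < sizeof(*g))
                return;

        g->pkt  = pkt;
        g->pool = odp_packet_pool(pkt);
}

Whether a handle cached this way stays valid is exactly the
generation-counting concern raised later in the thread.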





What metadata fields can the callback change?
As per above, nothing.

Yes, better to deny everything from the app.



If the intention is to init only the uarea content, maybe it's better to
just pass a pointer and length (uarea size), instead of the entire packet.

OVS particularly needs to save the packet handle to the user area, hence
the handle is passed.

Why at this point? Since the memory is preserved
As explained above, that might not be the case.


, the init function can init the handle to invalid at this phase, and the 
application can fill in the handle when the packet is first received.
The application doesn't have a way to tell when a particular handle is received for the first time. If the platform allocates memory on the spot, it's never going to be true anyway. I think adding an odp_packet_first_received() function would be much less practical.


Also, what if the packet handles are generation-counted (for error detection 
etc. reasons), meaning that each handle is unique? The one you save in this 
phase may be invalid when the same packet structure stores an incoming packet.

Good point; we should say "every packet handle's user area" in the first line.

In general, it's wrong to cache handles.
An application owns a handle only during create->destroy, deq->enq, alloc->free, etc - 
not between destroy->create, free->alloc, enq->deq, etc
The application doesn't need to own the handle. We just want to avoid storing the handle for each new packet into a legacy application's gluing structures.




-Petri




typedef void (odp_packet_uarea_init_t)(void *user_area, uint32_t user_area_size,
				       void *uarea_init_arg);
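
For comparison, an init function written against that alternative signature
never sees a packet handle, so it can only fill the raw memory. A rough
sketch (names invented):

#include <string.h>
#include <stdint.h>

/* Sketch only: with this signature there is no packet handle, so the init
 * function can do nothing but fill the raw user area memory. */
static void uarea_init_plain(void *user_area, uint32_t user_area_size,
                             void *uarea_init_arg)
{
        (void)uarea_init_arg;

        if (user_area != NULL)
                memset(user_area, 0, user_area_size);
}

That is also why it would not cover the case mentioned above of saving the
packet handle into the user area.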



      +
      +/**
        * Pool parameters
        * Used to communicate pool creation options.
        */
      @@ -82,6 +100,14 @@ typedef struct odp_pool_param_t {
                              /** User area size in bytes. Specify as 0 if no user
                                  area is needed. */
                              uint32_t uarea_size;
      +
      +                       /** Initialize every packet's user area at allocation
      +                           time. Use NULL if no initialization needed. */


"Allocation time" hints that it's called in every alloc.

We can change that to "memory allocation time", but I think explaining
the whole purpose should be in the above description.





-Petri



      +                       odp_packet_uarea_init_t *uarea_init;
      +
      +                       /** Opaque pointer passed to packet user area
      +                           constructor. */
      +                       void *uarea_init_arg;
                      } pkt;
                      struct {
                              /** Number of timeouts in the pool */
      --
      1.9.1
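
For completeness, a rough usage sketch of the pool-creation side, assuming the
uarea_init/uarea_init_arg fields proposed in this patch, the two-argument
odp_pool_create() of the current API, and arbitrary sizes and names:

#include <string.h>
#include <odp.h>

/* The init callback sketched earlier in the thread. */
void my_uarea_init(odp_packet_t pkt, void *uarea_init_arg);

static odp_pool_t create_pkt_pool(void *app_state)
{
        odp_pool_param_t params;

        memset(&params, 0, sizeof(params));
        params.type           = ODP_POOL_PACKET;
        params.pkt.num        = 1024;   /* arbitrary number of packets */
        params.pkt.len        = 1536;   /* arbitrary packet length */
        params.pkt.uarea_size = 64;     /* room for the app struct */
        /* The two fields below exist only in this proposed patch. */
        params.pkt.uarea_init     = my_uarea_init;
        params.pkt.uarea_init_arg = app_state;

        return odp_pool_create("pkt_pool", &params);
}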
