See comments inline.

On Tue, Oct 21, 2014 at 12:37 PM, Maxim Uvarov <[email protected]> wrote:
> On 10/21/2014 08:25 PM, Gilad Ben Yossef wrote:
>>
>> Gilad Ben-Yossef
>> Software Architect
>> EZchip Technologies Ltd.
>> 37 Israel Pollak Ave, Kiryat Gat 82025, Israel
>> Tel: +972-4-959-6666 ext. 576, Fax: +972-8-681-1483
>> Mobile: +972-52-826-0388, US Mobile: +1-973-826-0388
>> Email: [email protected], Web: http://www.ezchip.com
>>
>> "Ethernet always wins."
>> — Andy Bechtolsheim
>>
>>> -----Original Message-----
>>> From: [email protected] [mailto:[email protected]]
>>> On Behalf Of Maxim Uvarov
>>> Sent: Tuesday, October 21, 2014 6:28 PM
>>> To: [email protected]
>>> Subject: Re: [lng-odp] Questions about odp_buffer_t
>>>
>>> On 10/21/2014 06:52 PM, Gilad Ben Yossef wrote:
>>>>
>>>> Quick question --
>>>>
>>>> What are the assumptions about an odp_buffer_t?
>>>>
>>>> For example:
>>>>
>>>> Can an application assume the odp_buffer_t is always the same for the
>>>> same buffer? I would assume no.
>>>
>>> For the current linux-generic implementation it is the same. For others
>>> I think it is as well: odp_buffer_t is an offset into the pool, which
>>> always points to the same memory.
>>
>> [gby] My understanding was that odp_buffer_t is an opaque handle to a
>> buffer, and "offset into a pool" is the specific implementation chosen
>> by linux-generic.
>>
>> Just as an example: Armv8 has a notion of "tagged pointers". If I want
>> an Armv8-based SoC to use tagged pointers as odp_buffer_t, and the tag
>> part of the pointer records, say, which queue the buffer came from (so
>> I can track it to provide ORDERED and ATOMIC queue support), the result
>> is that two different odp_buffer_t values may point to the same buffer.
>>
>> My platform doesn't happen to be Armv8; I just used it as an example of
>> what a specific SoC can do if you allow odp_buffer_t to be opaque.
>
> They can point to the same buffer, but at different times, right? I.e.
> you will call odp_buffer_free() on the first buffer, and a later alloc
> or dequeue will return a new buffer with a different odp_buffer_t value
> that points to the location of the first buffer. I think the application
> itself will never know about that: the application calls getters and
> setters to obtain pointers to the buffer data, and how that is
> implemented inside the platform does not matter.

The ODP abstract types are opaque, and that opacity is expected to be used
by implementations to provide efficient implementations of the ODP APIs
that refer to these types. Freeing a buffer destroys the odp_buffer_t that
refers to it. If the buffer is returned to a free list where it is
recycled by subsequent alloc() calls, then it is up to the implementation
to decide whether the same internal representation of the odp_buffer_t is
reused, or whether it incorporates a "generation counter" or similar. That
sort of thing is not visible to the ODP application: the application
should not remember the odp_buffer_t of a freed or released buffer, but
should only work with the handles associated with the current unit of
work.

>>>> Can an application assume the odp_buffer_t is globally unique? I
>>>> would assume no.
>>>
>>> Yes -- why not? Again, in linux-generic odp_buffer_t is coded as the
>>> bits <pool_id, index offset>, so it points to an exact memory address.
>>> I think SoCs use the same scheme.
>>
>> [gby] Again, I understand this is the *implementation* of
>> linux-generic, and a different SoC platform may have many reasons to do
>> something else.
>>
>> For example, if I have a hardware queue mechanism, my odp_buffer_t may
>> mean on my platform "position X in the incoming work scheduling queue
>> for this thread", with a data structure in a ring in memory that tells
>> me where to find the buffer. It means that two different threads will
>> see two different buffers for the same numeric value of odp_buffer_t.
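The "generation counter" idea mentioned above can be sketched in a few
lines. This is an illustrative stand-alone sketch, not ODP or linux-generic
code: the pool, the <generation, index> handle layout, and the function
names are all invented for the example. A freed slot bumps its generation,
so a stale handle held by the application fails validation even after the
slot is recycled.

```c
#include <assert.h>
#include <stdint.h>

#define POOL_SLOTS 16

/* One generation counter per slot; bumped on every free so that stale
 * handles to a recycled slot no longer validate. */
static uint8_t slot_gen[POOL_SLOTS];
static uint8_t slot_used[POOL_SLOTS];

/* Handle layout: <generation:8 | index:8> (pool id omitted for brevity). */
static uint16_t buf_alloc(void)
{
    for (int i = 0; i < POOL_SLOTS; i++) {
        if (!slot_used[i]) {
            slot_used[i] = 1;
            return (uint16_t)((slot_gen[i] << 8) | i);
        }
    }
    return 0xFFFF; /* pool exhausted */
}

static int buf_valid(uint16_t h)
{
    uint8_t idx = (uint8_t)(h & 0xFF);

    return idx < POOL_SLOTS && slot_used[idx] &&
           slot_gen[idx] == (uint8_t)(h >> 8);
}

static void buf_free(uint16_t h)
{
    uint8_t idx = (uint8_t)(h & 0xFF);

    slot_used[idx] = 0;
    slot_gen[idx]++; /* invalidate all outstanding handles to this slot */
}
```

After a free/alloc cycle the same slot index comes back under a new
generation, so the old handle and the new handle differ numerically even
though they name the same memory -- exactly the recycling behavior the
application is not supposed to observe or depend on.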
>> But again, it's a completely valid way for a platform to handle buffers
>> if it wants ORDERED queues.
>
> That comes from a requirement. Even in your case you can add some extra
> data to the odp_buffer_t value, like a thread id or whatever. I think
> everything depends on how you do initialization. If you call
> odp_pool_create()/pktio_open() before thread creation, then odp_buffer_t
> will be unique. If you call odp_pool_create()/pktio_open() inside each
> thread, then most likely you will have the same odp_buffer_t values
> pointing to different pools, packet I/O and buffer memory. So my point
> is that we probably don't need a special requirement here; everything
> depends on how the ODP API is used.

As long as the references are not ambiguous to the implementation, it is
free to use whatever representation it chooses. For example, an
implementation might represent an odp_buffer_t as an index into a
thread-local list of active buffers that it maintains, or use any other
mechanism of its choosing. The ODP application receives the handle via ODP
APIs and disposes of it via other ODP APIs; that is all that really
matters from its perspective.

>>>> Can an application assume the odp_buffer_t may be passed via shared
>>>> memory and be expected to keep its meaning? I would assume no.
>>>
>>> That depends. If it's transferred within the same process, then it
>>> will be the same odp_buffer_t. If it's a separate process and shared
>>> memory between them, then to have the same odp_buffer_t you need to
>>> have the same table of pool entries. I.e. if you do fork() after
>>> odp_init_global() it will be the same. If it's two completely separate
>>> processes with shared memory between them, then... it's possible to do
>>> that, but it's definitely not implemented now.
>>
>> [gby] Again, if my odp_buffer_t is a tagged Armv8 pointer with the tag
>> bits pointing to an index in a thread-local array where I store status
>> bits (e.g. which queue it came from), then if I put an odp_buffer_t in
>> shared memory and expect to grab it from some other thread and have it
>> still be valid, it won't work.
>>
>> Which brings me back to my original question: what can an application
>> assume about an odp_buffer_t, and what can it not assume? Surely we
>> don't expect all SoCs and platforms to implement odp_buffer_t as bits
>> encoding a pool and an index. What are the rules a platform must obey?
>>
>> Personally, I'd be very happy if the rules said odp_buffer_t is an
>> opaque handle and you can't assume anything about it (think Linux file
>> descriptor), but obviously this needs some discussion.
>>
>> Gilad
>
> Answered with my opinion above -- it depends on the ODP initialization
> sequence.

These are exactly the sort of implementation-specific choices that the
abstract types are designed to enable. The handles are fully opaque, and
the only way applications should manipulate them is via the defined ODP
APIs.

> Maxim.
>
>>> Maxim.
>>>
>>>> Anyone have any thoughts about this?
>>>>
>>>> Thanks,
>>>>
>>>> Gilad
>>>>
>>>> _______________________________________________
>>>> lng-odp mailing list
>>>> [email protected]
>>>> http://lists.linaro.org/mailman/listinfo/lng-odp
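Gilad's tagged-pointer example can be made concrete with a small sketch.
This is hypothetical code, not ODP and not real Armv8 Top-Byte-Ignore
support: it simply packs a queue tag into the top byte of a 64-bit handle
(assuming, as on typical 64-bit systems, that real addresses fit in the low
56 bits). It shows how two numerically different handles can refer to the
same buffer, which is exactly why applications must treat handles as opaque
tokens rather than compare or interpret their bits.

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in for an opaque buffer handle; the name is invented. */
typedef uint64_t buf_handle_t;

#define TAG_SHIFT 56 /* Armv8 TBI ignores the top byte of a pointer */
#define ADDR_MASK ((UINT64_C(1) << TAG_SHIFT) - 1)

static uint8_t buf_storage[64]; /* the one underlying buffer */

/* Build a handle carrying both the buffer address and a queue tag. */
static buf_handle_t make_handle(void *buf, uint8_t queue_tag)
{
    return ((uint64_t)queue_tag << TAG_SHIFT) |
           ((uint64_t)(uintptr_t)buf & ADDR_MASK);
}

/* Accessors: the application goes through these, never through the bits. */
static void *handle_addr(buf_handle_t h)
{
    return (void *)(uintptr_t)(h & ADDR_MASK);
}

static uint8_t handle_tag(buf_handle_t h)
{
    return (uint8_t)(h >> TAG_SHIFT);
}
```

With this scheme, a handle tagged "came from queue 1" and a handle tagged
"came from queue 2" are different 64-bit values yet resolve to the same
buffer -- so equality of handles says nothing about equality of buffers,
which matches the "think Linux file descriptor" rule proposed in the
thread.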
