On 28 April 2015 at 06:40, Benoît Ganne <benoit.ga...@kalray.eu> wrote:

> We are also interested in IPC, and I also think that pktio is the way to go.

pktio, because it is an existing concept in ODP that we can potentially
piggyback on. But IPC has some differences compared to vanilla packet I/O.
As I wrote before, ODP should not concern itself with the structure or
content of the messages.

IPC must be reliable (what are the guarantees of pktio as long as the
packet is still on-chip?). Give me reliability or give me death.
This opens the issue of flow control. If message delivery cannot be
guaranteed (e.g. due to full queues), should the 'send' call return an
error? We don't want blocking semantics in ODP. If send returns success,
the message must (eventually) be delivered to the recipient.
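
To make the semantics concrete, something like the sketch below is what I
have in mind. This assumes the linux-generic odp_pktio_send() call and uses
it for an IPC pktio; the retry/give-up policy is just one possible
interpretation, not something ODP should mandate:

#include <odp.h>

/* Non-blocking send over a hypothetical IPC pktio: odp_pktio_send() may
 * accept fewer messages than requested when the peer applies back-pressure
 * (full queues). Whatever it accepts must eventually be delivered; whatever
 * it rejects stays with the caller, which can retry, drop or report an
 * error - but never block. */
static int ipc_send_msgs(odp_pktio_t ipc, odp_packet_t msg[], int num)
{
	int sent = 0;

	while (sent < num) {
		int n = odp_pktio_send(ipc, &msg[sent], num - sent);

		if (n < 0)
			return sent;	/* hard error, caller still owns msg[sent..] */
		if (n == 0)
			break;		/* back-pressure: bail out, don't block */
		sent += n;
	}
	return sent;			/* caller handles the remainder */
}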



> What is transferred is opaque; it is application-dependent. We think it
> will be useful for control-plane/data-plane communication, but it might also
> be used to implement pipelines between ODP data-plane processes.
>
>
Would those "pipelines" transfer only packets? Or any type of events (i.e.
also timeouts and buffers)?
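
Whatever the answer, the receiving side will probably end up dispatching on
the event type. A minimal sketch using the ODP event API (odp_event_type()
and the ODP_EVENT_* values); the handle_*() functions are hypothetical,
application-defined handlers:

#include <odp.h>

/* Hypothetical, application-defined handlers */
void handle_packet(odp_packet_t pkt);
void handle_message(odp_buffer_t buf);
void handle_timeout(odp_timeout_t tmo);

/* Dispatch one scheduled event according to its type. */
static void pipeline_dispatch(void)
{
	odp_event_t ev = odp_schedule(NULL, ODP_SCHED_WAIT);

	switch (odp_event_type(ev)) {
	case ODP_EVENT_PACKET:
		handle_packet(odp_packet_from_event(ev));
		break;
	case ODP_EVENT_BUFFER:
		handle_message(odp_buffer_from_event(ev));
		break;
	case ODP_EVENT_TIMEOUT:
		handle_timeout(odp_timeout_from_event(ev));
		break;
	default:
		odp_event_free(ev);	/* unexpected event type */
		break;
	}
}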


> I think that "IPC pktio" needs some implementation freedom. In our case, it
> will be mapped onto our NoC, so we would like to:
>  - be able to define the device name so as to identify NoC resources: rather
> than 'ipc_*' I'd like something that allows us to publish a hierarchy, like
> a path '/ipc/path/to/some/resource'. We could decide to leave that to the
> implementation anyway.
>  - be able to define unidirectional or bidirectional pktio (RX, TX,
> RX+TX). This is to save HW resources: we need one HW resource per direction.
> For example, if we do not pass a pool when opening the IPC pktio, it could
> mean it is open only for TX. A special IPC path (e.g. '/ipc/local') might
> refer to an RX-only IPC pktio, and using a non-local path + pool might refer
> to an RX+TX IPC pktio.
>
> I am not sure how much we want to push into the API and how much will be
> defined by the implementation. My personal opinion would be to leave it to
> the implementation, but maybe to standardize the pktio 'path':
>  - '/dev/<platform dependent>' for real devices (e.g. '/dev/eth0')
>  - '/ipc/<platform dependent>' for IPC pktio
> This would make it easy to discriminate between the two.
>
Is this path also used to identify the endpoint with which you intend to
communicate?
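
If so, a rough sketch of how this could look, assuming the two-argument
odp_pktio_open(name, pool) signature. The '/ipc/...' paths and the "no pool
means TX-only" rule are the proposal under discussion, not existing ODP
semantics, and the remote path name is made up:

#include <odp.h>

void open_ipc_endpoints(odp_pool_t pool)
{
	/* RX(+TX) endpoint: a pool is needed to receive into. */
	odp_pktio_t rx = odp_pktio_open("/ipc/local", pool);

	/* TX-only endpoint towards a remote NoC resource: no pool is given,
	 * so the implementation would not reserve RX resources for it. */
	odp_pktio_t tx = odp_pktio_open("/ipc/cluster0/ctrl", ODP_POOL_INVALID);

	(void)rx;
	(void)tx;
}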


>
> ben
>
> On 04/27/2015 12:23 PM, Bill Fischofer wrote:
>
>> IPCs should be short messages used either as "shoulder taps" or for
>> brief communication. If a larger data structure is needed, a pointer to
>> it can be passed as the IPC message, so again the interpretation of the
>> IPC message contents would generally be up to the application, with
>> perhaps some basic message type decoding provided by the APIs.
>>
>> On Mon, Apr 27, 2015 at 9:18 AM, Ola Liljedahl <ola.liljed...@linaro.org> wrote:
>>
>>     I am also interested in IPC (between control plane and data plane),
>>     not for packets but for messages (so more like buffers). I don't see
>>     that these IPC messages should be parsed and classified like packets,
>>     and I don't need the packet_flags.h API either.
>>
>>     However pktio seems like the only abstraction that allows for an ODP
>>     application to communicate (send/receive data) with the outside
>>     (where is bufio?) so is perhaps the mechanism to use anyway. The
>>     application would have to know that "packets" received from a
>>     certain pktio interface (through queues/scheduler) are not really
>>     packets but messages and need to be treated differently.
>>
>>     On 23 April 2015 at 19:46, Maxim Uvarov <maxim.uva...@linaro.org> wrote:
>>
>>         Since nobody replied to the RFC patch, I cleaned it up and split
>>         it into several patches.
>>
>>         Please consider this version for review.
>>
>>         Thanks,
>>         Maxim.
>>
>>         Maxim Uvarov (7):
>>            linux-generic: zero params for pool create
>>            api: ipc: shared memory add no create flag
>>            api ipc: update ring with shm proc argument
>>            linux-generic: reflect shm flags and add mode debug prints
>>            linux-generic: ipc init odp ring
>>            linux-generic: add ipc pktio support
>>            ipc: example app
>>
>>           configure.ac                                       |   1 +
>>           example/Makefile.am                                |   2 +-
>>           example/ipc/.gitignore                             |   1 +
>>           example/ipc/Makefile.am                            |   7 +
>>           example/ipc/odp_ipc.c                              | 441 +++++++++++++++
>>           helper/include/odp/helper/ring.h                   |   2 +
>>           helper/ring.c                                      |   9 +-
>>           include/odp/api/pool.h                             |   1 +
>>           include/odp/api/shared_memory.h                    |   1 +
>>           platform/linux-generic/Makefile.am                 |   2 +
>>           .../linux-generic/include/odp_buffer_internal.h    |   3 +
>>           .../linux-generic/include/odp_packet_io_internal.h |  15 +
>>           .../include/odp_packet_io_ipc_internal.h           |  47 ++
>>           platform/linux-generic/odp_init.c                  |   6 +
>>           platform/linux-generic/odp_packet_io.c             |  30 +-
>>           platform/linux-generic/odp_packet_io_ipc.c         | 590 +++++++++++++++++++++
>>           platform/linux-generic/odp_pool.c                  |  16 +-
>>           platform/linux-generic/odp_schedule.c              |   1 +
>>           platform/linux-generic/odp_shared_memory.c         |   9 +-
>>           test/validation/odp_queue.c                        |   1 +
>>           20 files changed, 1177 insertions(+), 8 deletions(-)
>>           create mode 100644 example/ipc/.gitignore
>>           create mode 100644 example/ipc/Makefile.am
>>           create mode 100644 example/ipc/odp_ipc.c
>>           create mode 100644 platform/linux-generic/include/odp_packet_io_ipc_internal.h
>>           create mode 100644 platform/linux-generic/odp_packet_io_ipc.c
>>
>>         --
>>         1.9.1
>>
>
> --
> Benoît GANNE
> Field Application Engineer, Kalray
> +33 (0)648 125 843
>
_______________________________________________
lng-odp mailing list
lng-odp@lists.linaro.org
https://lists.linaro.org/mailman/listinfo/lng-odp