On 03/17/15 14:30, Ciprian Barbu wrote:
On Fri, Mar 13, 2015 at 5:30 PM, Ola Liljedahl <[email protected]> wrote:
On 13 March 2015 at 13:35, Maxim Uvarov <[email protected]> wrote:
On 03/13/15 13:43, Ciprian Barbu wrote:
On Fri, Mar 13, 2015 at 11:34 AM, Ola Liljedahl
<[email protected]> wrote:
When I think about IPC, it is message passing between different programs (different processes, different address spaces, possibly even different machines). IPC provides services like address resolution (some form of name lookup), being able to respond to messages, and a model for message typing (so you can parse and understand messages from different senders). One of the main use cases for IPC would be between the control plane (which does not use ODP) and the data plane (which would be implemented using ODP). Synchronization between threads using the same address space can be done in many ways (e.g. by posting ODP buffers to queues). What kind of generic helpers are needed here?
I agree with Ola here. The problem you have seems to only apply to linux-generic; on some real HW platforms it should be possible to "transfer" packets from one process to another through globally visible queue ids, regardless of whether the two entities are different ODP applications or different threads of the same ODP application.
I think we first need to better define the visibility scope of pools and queues for ODP in general and see how it applies to linux-generic in particular. If needed, linux-generic should make queue ids and buffer ids globally visible transparently, if not with the best performance; the ODP design should first and foremost be clear, and linux-generic is a reference software implementation.
As for IPC, as Ola said, it should be a service that provides a form of communication framework between different ODP "workers", with the ability to synchronize on different message types. ENEA has had an IPC framework for a long time that works this way and provides transparent, reliable, scalable IPC, not only between processes on the same system, but between different nodes in a network. The protocol is even open sourced and available for Linux distributions. I think at the very least ODP should incorporate some of the design aspects, for example defining new event types to be used for IPC and defining an IPC API for exchanging messages between the different ODP instances.
/Ciprian
Actually I do not understand the reason for these "messages", or why they have to be in ODP. If, for example, you have an odp packet and you want to send it to another node, just do it in software. Hardware cannot help you here.
There has to be some form of support in ODP because some of the endpoints
(consumers, perhaps producers of the messages) will be ODP (dataplane)
threads. Possibly IPC has to be integrated with queues and scheduling
because that is how the dataplane threads are assigned work (events).
I understand my use case. It is to have the best performance when exchanging packets between different processes which can share memory on the same machine.

Exchanging packets in shared memory does not sound like IPC to me.

Taras said some time ago that if that sort of IPC is done at the pktio level, then TI KS2 can exchange packets in hardware, regardless of whether it's the same pool or not.

Keystone can automatically copy buffers between different pools as the buffer is enqueued on specific queues. That's a nice feature. The primary use case for this was probably to do managed prefetch between the different layers of the memory hierarchy of the KS DSPs (which have per-core SRAM, shared on-chip SRAM and SDRAM).

One of the initial versions of the patch was based on the queue abstraction, but later we decided to go with the pktio abstraction.
I don't remember why this was changed. Queues seem like a better abstraction for exchanging messages (which are just another form of events).
But pktio can be accessed through queues?
If I understand correctly, Ola and Ciprian are speaking more about a connection protocol between the data plane and the control plane. That is not IPC as I see it; it's a connection protocol. IPC is for communication between different processes on the same node.
The concept of IPC in general is not limited to the widely known System V IPC; IPC is any form of sharing data between multiple processes, possibly across network boundaries, using communication protocols. Socket-based communication is also IPC in that sense, and so is message passing:
http://en.wikipedia.org/wiki/Inter-process_communication#Approaches
At the very least I think it's wrong to call your mechanism THE IPC.
Ciprian and Ola, if you have ideas for the requirements, can you come up with an API proposal? I think it will save time and you can discuss your solution.
My approach is based on shared memory because it is the only way to get good performance on systems which cannot exchange packets in hardware. It does not matter what it's named; it might be "shared memory pktio". But I need it for a specific reason.
In your case it looks like you want to wrap packets in an ip2ip tunnel or set VLAN or MPLS tags. That is horrible for performance and definitely will not help to scale Snort. Maybe you can solve the Snort problem too, but it's not clear to me how.
Maxim.
You would not be able to make a phone call without this kind of IPC between different control plane processes (lots of them) and also the data plane. That's how telecom systems are designed. Telecom systems are typically distributed, so the IPC needs to support this. Location transparency is an important feature of IPC implementations. Independence of the underlying transport mechanism (e.g. shared memory, DMA, RapidIO, Ethernet, ATM/AAL5, IP-based protocols etc.) is also important.
btw, did you review the proposal patches? There is also a test application which sends odp packets back and forth.
http://lists.linaro.org/pipermail/lng-odp/2014-November/004525.html
http://lists.linaro.org/pipermail/lng-odp/2014-November/004526.html
Maxim.
-- Ola
On 13 March 2015 at 09:49, Maxim Uvarov <[email protected]> wrote:
What was the decision on supporting IPC for ODP? Should I update the IPC patches to the latest ODP now?
In the latest IPC patch, as I remember, IPC was done at the pktio level:
1) An odp pool is shared between two processes, with shared consumer/producer paths for odp_buffer_t descriptors.
2) The producer process adds an odp_buffer_t to the producer path.
3) The consumer process reads the odp_buffer_t from the producer path and translates it to its own odp_buffer_t.
4) After some work, the consumer process puts the odp_buffer_t back on the producer path, so that control of the packet returns to the first process, which can then free the packet from the pool.
I.e., the exchange between processes is done with odp_buffer_t handles and no copying of packet bodies. That is mostly a description of the software path, i.e. linux-generic.
I need IPC for things like Snort (a single-threaded app), and I think the same approach can be used for virtualization.
Best regards,
Maxim.
_______________________________________________
lng-odp mailing list
[email protected]
http://lists.linaro.org/mailman/listinfo/lng-odp