On Tue, Mar 17, 2015 at 1:44 PM, Maxim Uvarov <[email protected]> wrote:
> On 03/17/15 14:30, Ciprian Barbu wrote:
>>
>> On Fri, Mar 13, 2015 at 5:30 PM, Ola Liljedahl <[email protected]> wrote:
>>>
>>> On 13 March 2015 at 13:35, Maxim Uvarov <[email protected]> wrote:
>>>>
>>>> On 03/13/15 13:43, Ciprian Barbu wrote:
>>>>>
>>>>> On Fri, Mar 13, 2015 at 11:34 AM, Ola Liljedahl <[email protected]> wrote:
>>>>>>
>>>>>> When I think about IPC, it is message passing between different programs (different processes, different address spaces, possibly even different machines). IPC provides services like address resolution (some form of name lookup), being able to respond to messages, and a model for message typing (so you can parse and understand messages from different senders). One of the main use cases for IPC would be between the control plane (which does not use ODP) and the data plane (which would be implemented using ODP).
>>>>>>
>>>>>> Synchronization between threads using the same address space can be done in many ways (e.g. by posting ODP buffers to queues). What kind of generic helpers are needed here?
>>>>>
>>>>> I agree with Ola here. The problem you have seems to only apply to linux-generic; on some real HW platforms it should be possible to "transfer" packets from one process to another through globally visible queue ids, regardless of whether the two entities are different ODP applications or different threads of the same ODP application.
>>>>>
>>>>> I think we first need to better define the visibility scope of pools and queues for ODP in general and see how it applies to linux-generic in particular. If needed, linux-generic should make the queue ids and buffer ids globally visible transparently, not necessarily with the best performance; the ODP design should first and foremost be clear, and linux-generic is a reference software implementation.
>>>>>
>>>>> As for IPC, as Ola said, it should be a service that provides a form of communication framework between different ODP "workers", with the possibility of synchronizing on different message types. ENEA has had an IPC framework for a long time that works this way and provides transparent, reliable, scalable IPC, not only between processes on the same system, but between different nodes in a network. The protocol is even open sourced and available for Linux distributions. I think at the very least ODP should incorporate some of the design aspects, for example defining new event types to be used for IPC and defining an IPC API for exchanging messages between the different ODP instances.
>>>>>
>>>>> /Ciprian
>>>>
>>>> Actually I do not understand the reason for those "messages", and why it has to be in ODP. Say, for example, you have some odp packet and you want to send it to another node: just do it in software. Hardware here can not help you.
>>>
>>> There has to be some form of support in ODP because some of the endpoints (consumers, perhaps producers of the messages) will be ODP (dataplane) threads. Possibly IPC has to be integrated with queues and scheduling because that is how the dataplane threads are assigned work (events).
>>>
>>>> I understand my use case. It is to have the best performance when exchanging packets between different
>>>
>>> Exchanging packets in shared memory does not sound like IPC to me.
>>>
>>>> processes which can share memory on the same machine. Taras said some time ago that if that sort of IPC is done at the pktio level then TI KS2 can do the packet exchange in hardware, regardless of whether
>>>
>>> Keystone can automatically copy buffers between different pools as the buffer is enqueued on specific queues. That's a nice feature. The primary use case for this was probably to do managed prefetch between the different layers of the memory hierarchy of the KS DSPs (which have per-core SRAM, shared on-chip SRAM and SDRAM).
>>>
>>>> it's the same pool or not. One of the initial versions of the patch was based on the queue abstraction, but later we decided to go with the pktio abstraction.
>>>
>>> I don't remember why this was changed. Queues seem like a better abstraction for exchanging messages (which are just another form of events). But pktio can be accessed through queues?
>>>
>>>> If I understand right, Ola and Ciprian speak more about a connection protocol between the data plane and the control plane. That is not IPC, as I see it; it's a connection protocol. IPC is for communication on the same node between different processes.
>>
>> The concept of IPC in general is not limited to the widely known System V IPC; IPC is any form of sharing data between multiple processes, possibly across network boundaries, using communication protocols. Socket based communication is also IPC in that sense, and so is message passing:
>> http://en.wikipedia.org/wiki/Inter-process_communication#Approaches
>>
>> At the very least I think it's wrong to call your mechanism THE IPC.
>
> Ciprian and Ola, if you have ideas for the requirements, can you come up with an API proposal?
> I think it will save time and you can discuss your solution.
I think we have more important things to work on than IPC, and we might need input from more people. I've been thinking about this a little, but I don't have a clear idea just yet.

>
> My approach is based on shared memory because it is the only way to have good performance on systems which can only do packet exchanging in software. It does not matter how it's named. It might be 'shared memory pktio'.
> But I need it for a specific reason.
>
> In your case it looks like you want to wrap the packet in an ip2ip tunnel or set vlan or mpls tags. That is horrible for performance and definitely will not help to scale Snort. Maybe you can solve the Snort problem too, but it's not clear to me how.

Let's not mix things up here; the IPC based on message passing, as I see it, might not have anything to do with what you're trying to achieve. What you need is still a form of IPC, but it's rather limited to the scope of passing packets from one process to another. The proposal resembles exchanging packets through a loopback interface, except that you have a more specialized type of pktio. A powerful IPC framework would hide the fact that processes reside on the same node or are distributed over the network. The programming paradigm based on messages provides transparent and reliable exchange of messages; it enables processes to synchronize simply by waiting until a specified message (identified by a message id) is received, and it offers name resolution and possibly other functionality.
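To make that a bit more concrete, here is a minimal sketch of the kind of interface such a framework implies. Every name and signature below is hypothetical, made up purely for illustration; this is not an existing ODP API and not a concrete proposal.

/*
 * Hypothetical message-based IPC API sketch -- illustration only.
 * None of these names exist in ODP; they only show the kind of
 * interface discussed above (named endpoints, typed messages,
 * blocking receive on a specific message id).
 */
#include <stdint.h>
#include <stddef.h>

typedef struct ipc_endpoint ipc_endpoint_t;   /* opaque local endpoint  */
typedef uint64_t            ipc_addr_t;       /* resolved peer address  */

typedef struct {
	uint32_t   msg_id;   /* application-defined message type        */
	ipc_addr_t sender;   /* filled in by the framework on receive   */
	size_t     len;      /* payload length in bytes                 */
	void      *payload;  /* payload, copied or zero-copy internally */
} ipc_msg_t;

/* Create a named endpoint; the name is visible to local and remote peers. */
ipc_endpoint_t *ipc_endpoint_create(const char *name);

/* Resolve a peer by name, whether it lives in another process on this
 * node or on another node in the network (location transparency). */
int ipc_lookup(const char *peer_name, ipc_addr_t *addr);

/* Send a message; the transport (shared memory, Ethernet, ...) is hidden. */
int ipc_send(ipc_endpoint_t *ep, ipc_addr_t dst, const ipc_msg_t *msg);

/* Block until a message with the given msg_id arrives (simple sync). */
int ipc_recv_match(ipc_endpoint_t *ep, uint32_t msg_id, ipc_msg_t *msg);

With something along those lines, a data plane worker could, for example, block in ipc_recv_match() until a control plane process sends a message with a particular id, without knowing whether that peer is local or remote.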
So for Snort in particular, if you want to use the IPC framework I described, you would not care whether the processes are on the same node or spread over the network; this aspect is hidden by the IPC API, and the implementation would make sure that for local processes the packets are sent with zero-copy if possible, or by copying data between pools where needed.

But to move forward with your proposal, I think we need some input from the HW guys too; it would be helpful to know whether this mechanism can be implemented on other platforms and whether it's worth going forward with it until we shape up the requirements of a more complete IPC framework.

/Ciprian

>
> Maxim.
>
>>> You would not be able to make a phone call without this kind of IPC between different control plane processes (lots of them) and also the data plane. That's how telecom systems are designed. Telecom systems are typically distributed, so the IPC needs to support this. Location transparency is an important feature of IPC implementations. Independence of the underlying transport mechanism (e.g. shared memory, DMA, RapidIO, Ethernet, ATM/AAL5, IP-based protocols, etc.) is also important.
>>>
>>>> btw, did you review the proposal patches? There is also a test application which sends odp packets back and forth.
>>>> http://lists.linaro.org/pipermail/lng-odp/2014-November/004525.html
>>>> http://lists.linaro.org/pipermail/lng-odp/2014-November/004526.html
>>>>
>>>> Maxim.
>>>>
>>>>>> -- Ola
>>>>>>
>>>>>> On 13 March 2015 at 09:49, Maxim Uvarov <[email protected]> wrote:
>>>>>>>
>>>>>>> What was the decision on supporting IPC for ODP? Should I update the IPC patches to the latest ODP now?
>>>>>>>
>>>>>>> In the latest IPC patch, as I remember, IPC was done on the pktio level:
>>>>>>> 1) The odp pool is shared between 2 processes, with shared consumer/producer paths for odp_buffer_t descriptors.
>>>>>>> 2) The producer process adds an odp_buffer_t to the producer path.
>>>>>>> 3) The consumer process reads the odp_buffer_t from the producer path and translates it to its own odp_buffer_t.
>>>>>>> 4) After some work the consumer process puts the odp_buffer_t back on the producer path, so that control of the packet is back with the first process and the first process can free this packet from the pool.
>>>>>>>
>>>>>>> I.e. in fact the exchange between processes is done with odp_buffer_t handles and no packet body copying. That is mostly a description of the software path, i.e. linux-generic.
>>>>>>>
>>>>>>> I need IPC for things like Snort (a single threaded app) and I think the same approach can be used for virtualization.
>>>>>>>
>>>>>>> Best regards,
>>>>>>> Maxim.
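For readers not familiar with that mechanism, here is a rough sketch of the handle-exchange idea in steps 1)-4) of the quoted message: two processes map the same shared memory, and only buffer handles travel through a pair of single-producer/single-consumer rings, so the packet data itself is never copied. All names are illustrative; this is a sketch of the concept, not the actual linux-generic IPC pktio patch.

/* Sketch of the handle-exchange mechanism described in steps 1)-4) above. */
#include <stdatomic.h>
#include <stdint.h>
#include <stdbool.h>

#define RING_SIZE 1024                    /* must be a power of two */

typedef uint64_t buf_handle_t;            /* stand-in for odp_buffer_t */

typedef struct {
	_Atomic uint32_t head;            /* written by the producer  */
	_Atomic uint32_t tail;            /* written by the consumer  */
	buf_handle_t     slot[RING_SIZE];
} spsc_ring_t;

/* One ring per direction, both placed in memory shared by the two
 * processes (e.g. an odp_shm_reserve()'d region in linux-generic). */
typedef struct {
	spsc_ring_t tx;                   /* process A -> process B   */
	spsc_ring_t rx;                   /* process B -> process A   */
} ipc_pktio_shm_t;

/* Producer side, step 2): publish a buffer handle to the peer. */
static bool ring_push(spsc_ring_t *r, buf_handle_t h)
{
	uint32_t head = atomic_load_explicit(&r->head, memory_order_relaxed);
	uint32_t tail = atomic_load_explicit(&r->tail, memory_order_acquire);

	if (head - tail == RING_SIZE)
		return false;                       /* ring full */
	r->slot[head & (RING_SIZE - 1)] = h;
	atomic_store_explicit(&r->head, head + 1, memory_order_release);
	return true;
}

/* Consumer side, step 3): take a handle; the real implementation would
 * translate it to the consumer's own odp_buffer_t here. */
static bool ring_pop(spsc_ring_t *r, buf_handle_t *h)
{
	uint32_t tail = atomic_load_explicit(&r->tail, memory_order_relaxed);
	uint32_t head = atomic_load_explicit(&r->head, memory_order_acquire);

	if (head == tail)
		return false;                       /* ring empty */
	*h = r->slot[tail & (RING_SIZE - 1)];
	atomic_store_explicit(&r->tail, tail + 1, memory_order_release);
	return true;
}

Step 4) is then just a push on the reverse ring; once the handle comes back, the originating process can free the buffer to the shared pool. The point, as the quoted message notes, is that only handles cross the process boundary and the packet body is never copied.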
