Hi everyone, I'm looking at building an application that allows lockless interprocess data passing using a ring-buffer-style (sorta) shared memory zone.
My initial plan is to create a shared memory zone (probably using C functions through ctypes), have a writer process write data into it, and let the reader process come and read that data in its own time. The zone is read and written using ctypes or memoryview objects. The zone (say 1 GB) is split into many small buckets, roughly 10 kB each. Each bucket has a flag in its header that is either 0 (free) or 1 (needs reading). The writer sets the flag once the full bucket has been written to memory and is ready to be read; the reader resets it when it is done.

I know I have to be careful that the writes become visible in the proper order: the data written by the writer needs to appear before the flag set by the writer, from the point of view of the reader process, which is absolutely not a given on modern CPUs.

So, questions:

* Does Python take care of that automagically, and is that why I haven't found anything on the subject while googling? (I imagine that within a single multithreaded process the GIL does that neatly, but what about multi-process applications?)
* Or is there a native mechanism or module that lets me play with memory barriers the way C does (https://www.kernel.org/doc/Documentation/memory-barriers.txt)?
* Or do I have to write my own shims to C functions wrapping C/asm memory-barrier instructions? (That sounds lighter weight than pulling in the various Python asm modules I have around.) A rough sketch of what I mean is at the end of this mail.

Any answers much appreciated.

Please don't:

* tell me to use message queues (that works with some trickery, but it's not the purpose of this question);
* tell me to use Array from the multiprocessing module, or the data proxy classes (or do explain whether the locking scheme in that context can be lockless :) ).
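To make that third option concrete, here is a rough, untested sketch of the bucket protocol I have in mind. It assumes Python 3.8+ for multiprocessing.shared_memory, plus a hypothetical little C helper compiled to libbarrier.so that would expose release_fence()/acquire_fence() as thin wrappers around C11 atomic_thread_fence(); the library and function names are made up for the example, and that shim is exactly the part I'm asking about.

# Rough sketch only, not tested.  Assumes ./libbarrier.so is a tiny
# hypothetical C shim exposing:
#     void release_fence(void) { atomic_thread_fence(memory_order_release); }
#     void acquire_fence(void) { atomic_thread_fence(memory_order_acquire); }
import ctypes
from multiprocessing import shared_memory

BUCKET_SIZE = 10 * 1024      # ~10k per bucket
HEADER_SIZE = 64             # flag lives in byte 0, the rest is padding/metadata
PAYLOAD_SIZE = BUCKET_SIZE - HEADER_SIZE
N_BUCKETS = 1024             # a 1 GB zone would be ~100k buckets; kept small here

_fences = ctypes.CDLL("./libbarrier.so")   # the hypothetical shim library

def writer_put(buf, bucket, payload):
    """Copy payload into a free bucket, then publish it by setting the flag."""
    assert len(payload) <= PAYLOAD_SIZE
    base = bucket * BUCKET_SIZE
    if buf[base] != 0:
        raise RuntimeError("bucket not yet consumed by the reader")
    buf[base + HEADER_SIZE : base + HEADER_SIZE + len(payload)] = payload
    _fences.release_fence()  # payload must be visible before the flag
    buf[base] = 1            # flag: needs reading

def reader_get(buf, bucket):
    """Return the bucket's payload if it is ready, else None, then free it."""
    base = bucket * BUCKET_SIZE
    if buf[base] != 1:
        return None
    _fences.acquire_fence()  # don't read the payload before seeing the flag
    data = bytes(buf[base + HEADER_SIZE : base + BUCKET_SIZE])
    buf[base] = 0            # flag: free again
    return data

if __name__ == "__main__":
    # Single-process smoke test of the layout; in real life the writer and
    # reader would each attach to the same SharedMemory block by name.
    shm = shared_memory.SharedMemory(create=True, size=N_BUCKETS * BUCKET_SIZE)
    try:
        writer_put(shm.buf, 0, b"hello")
        print(reader_get(shm.buf, 0)[:5])
    finally:
        shm.close()
        shm.unlink()

The intent is simply that the release fence keeps the payload stores from becoming visible after the flag store, and the acquire fence keeps the payload loads from happening before the flag load, so the reader never sees flag == 1 with a half-written payload. Whether plain memoryview stores from Python even need (or honour) such fences across processes is precisely my question.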