"Dennis Lee Bieber" <[EMAIL PROTECTED]> Wrote: | On Thu, 27 Jul 2006 09:17:56 -0700, "Carl J. Van Arsdall" | <[EMAIL PROTECTED]> declaimed the following in comp.lang.python: | | > Ah, alright, I think I understand, so threading works well for sharing | > python objects. Would a scenario for this be something like a a job | > queue (say Queue.Queue) for example. This is a situation in which each | > process/thread needs access to the Queue to get the next task it must | > work on. Does that sound right? Would the same apply to multiple | > threads needed access to a dictionary? list? | > | Python's Queue module is only (to my knowledge) an internal | (thread-shared) communication channel; you'd need something else to work | IPC -- VMS mailboxes, for example (more general than UNIX pipes with | their single reader/writer concept) | | > "shared memory" mean something more low-level like some bits that don't | > necessarily mean anything to python but might mean something to your | > application? | > | Most OSs support creation and allocation of memory blocks with an | attached name; this allows multiple processes to map that block of | memory into their address space. The contents of said memory block is | totally up to application agreements (won't work well with Python native | objects). | | mmap() | | is one such system. By rough description, it maps a disk file into a | block of memory, so the OS handles loading the data (instead of, say, | file.seek(somewhere_long) followed by file.read(some_data_type) you | treat the mapped memory as an array and use x = mapped[somewhere_long]; | if somewhere_long is not yet in memory, the OS will page swap that part | of the file into place). The "file" can be shared, so different | processes can map the same file, and thereby, the same memory contents. | | This can be useful, for example, with multiple identical processes | feeding status telemetry. Each process is started with some ID, and the | ID determines which section of mapped memory it is to store its status | into. The controller program can just do a loop over all the mapped | memory, updating a display with whatever is current -- doesn't matter if | process_N manages to update a field twice while the monitor is | scanning... The display always shows the data that was current at the | time of the scan. | | Carried further -- special memory cards can (at least they were | where I work) be obtained. These cards have fiber-optic connections. In | a closely distributed system, each computer has one of these cards, and | the fiber-optics link them in a cycle. Each process (on each computer) | maps the memory of the card -- the cards then have logic to relay all | memory changes, via fiber, to the next card in the link... Thus, all the | closely linked computers "share" this block of memory.
This is a nice way to share inputs from the real world - but there are some
hairy issues if it is to be used for general-purpose consumption. Unless there
are hardware restrictions to stop machines stomping on each other's memory,
the machines have to be *polite* and *well behaved* - or you can easily have a
major smash... A structure has to be agreed on, and respected...

- Hendrik
--
http://mail.python.org/mailman/listinfo/python-list