On Dec 9, 11:58 am, Valery <khame...@gmail.com> wrote:
> Hi all,
>
> Q: how to organize parallel accesses to a huge common read-only Python
> data structure?
>
> Details:
>
> I have a huge data structure that takes >50% of RAM.
> My goal is to have many computational threads (or processes) that can
> have an efficient read-access to the huge and complex data structure.
>
> <snip>
>
> 1. multi-processing
>  => a. child-processes get their own *copies* of huge data structure
> -- bad and not possible at all in my case;
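
One way to avoid per-child copies of flat, read-only data is the stdlib `multiprocessing.shared_memory` module (Python 3.8+): the payload lives in one named block that workers attach to by name instead of inheriting a copy. This is only a minimal sketch under the assumption that the structure can be serialized into a contiguous buffer; the `payload` here is a stand-in, not your real data:

```python
from multiprocessing import shared_memory

# Stand-in for the real serialized structure.
payload = b"x" * 1024 * 1024

# Parent: place the read-only payload into a named shared-memory block.
shm = shared_memory.SharedMemory(create=True, size=len(payload))
shm.buf[:len(payload)] = payload

# A worker process would attach by name and read without copying the block:
view = shared_memory.SharedMemory(name=shm.name)
head = bytes(view.buf[:16])   # reads straight out of the shared block

view.close()
shm.close()
shm.unlink()   # parent releases the block once all workers are done
```

Note this sidesteps refcounting entirely because the bytes are not Python objects; a forked child reading ordinary Python objects still dirties their pages just by touching their refcounts.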

How's the layout of your data, in terms of the number of objects vs. bytes used?
Just to have an idea of the overhead involved in refcount
externalization (you know, what I mentioned here:
http://groups.google.com/group/unladen-swallow/browse_thread/thread/9d2af1ac3628dc24
)
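
To get a rough breakdown along those lines, something like the following works in-process. Caveats: `gc.get_objects()` only sees objects tracked by the garbage collector (containers, mostly), and `sys.getsizeof()` reports shallow sizes only, so treat both numbers as estimates. The `data` dict is just an illustrative stand-in for the real structure:

```python
import gc
import sys

# Stand-in for the "huge" structure: many small container objects.
data = {i: [i, str(i)] for i in range(10_000)}

gc.collect()
objs = gc.get_objects()
n_objects = len(objs)                             # containers tracked by the GC
n_bytes = sum(sys.getsizeof(o, 0) for o in objs)  # shallow sizes, references excluded

print(f"{n_objects} tracked objects, ~{n_bytes} bytes (shallow)")
```

A structure dominated by a few big buffers has low refcount-externalization overhead; one made of millions of tiny objects is the expensive case.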
-- 
http://mail.python.org/mailman/listinfo/python-list