On Dec 31 2009, 6:36 pm, garyrob wrote:
> One thing I'm not clear on regarding Klauss' patch. He says it's
> applicable where the data is primarily non-numeric. In trying to
> understand why that would be the case, I'm thinking that the increased
> per-object memory overhead for reference-counting
One thing I'm not clear on regarding Klauss' patch. He says it's
applicable where the data is primarily non-numeric. In trying to
understand why that would be the case, I'm thinking that the increased
per-object memory overhead for reference-counting would outweigh the
space gains from the shared memory…
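To make garyrob's trade-off concrete, here is a minimal, illustrative sketch using `sys.getsizeof`. The exact numbers are CPython-version- and platform-specific; the point is only that every object carries a fixed header (refcount plus type pointer), paid once per object.

```python
import sys

# Illustrative only: CPython object sizes vary by version and platform.
# Every CPython object header holds a refcount and a type pointer, so
# small containers pay a large fixed overhead per object.
small_list = [1, 2, 3]
small_dict = {"a": 1}

print(sys.getsizeof(small_list))  # header + pointer array
print(sys.getsizeof(small_dict))  # header + hash table

# For N objects, the header overhead is paid N times -- this is the cost
# being weighed against the savings from keeping pages shared.
n_objects = 100_000
approx_overhead = n_objects * sys.getsizeof([])
print(f"~{approx_overhead / 1e6:.1f} MB of container overhead alone")
```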
Hi Antoine
On Dec 11, 3:00 pm, Antoine Pitrou wrote:
> I was going to suggest memcached but it probably serializes non-atomic
> types. It doesn't mean it will be slow, though. Serialization implemented
> in C may well be faster than any "smart" non-serializing scheme
> implemented in Python.
No
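Antoine's point about C-implemented serialization can be sketched as follows. This is illustrative only (the `data` structure is a stand-in): `pickle` with a binary protocol runs in C (`cPickle` in Python 2, `_pickle` in Python 3) and round-trips nested structures quickly, which is why it can beat a "smart" non-serializing scheme written in pure Python.

```python
import pickle

# Round-trip a nested structure through pickle's C implementation.
# With a binary protocol this is often faster than a pure-Python
# "zero-copy" layer, which is Antoine's point.
data = {"key%d" % i: list(range(5)) for i in range(1000)}
blob = pickle.dumps(data, protocol=pickle.HIGHEST_PROTOCOL)
restored = pickle.loads(blob)
assert restored == data
print(len(blob), "bytes serialized")
```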
On Dec 11, 11:00 am, Antoine Pitrou wrote:
> I was going to suggest memcached but it probably serializes non-atomic
> types.
Atomic as well.
memcached communicates through sockets[3] (albeit possibly unix
sockets, which are faster than TCP ones).
multiprocessing has shared memory schemes, but doe…
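For contrast with memcached's socket round-trips, here is a minimal sketch of OS-level sharing via `mmap`: processes that map the same file share its pages in the kernel page cache. Note the limitation the thread keeps running into: this works directly only for flat bytes, not for arbitrary Python object graphs. The file path here is a throwaway created for the example.

```python
import mmap
import os
import tempfile

# Several processes mapping the same file share one copy of its pages
# in RAM -- no socket hop, no per-request copy.  Only flat bytes,
# though: a Python dict-of-lists cannot be mapped this way directly.
path = os.path.join(tempfile.mkdtemp(), "shared.bin")
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)

with open(path, "r+b") as f:
    mm = mmap.mmap(f.fileno(), 4096)  # shared mapping by default
    mm[:5] = b"hello"                 # visible to any other mapper
    print(bytes(mm[:5]))              # b'hello'
    mm.close()
```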
On Wed, 09 Dec 2009 06:58:11 -0800, Valery wrote:
>
> I have a huge data structure that takes >50% of RAM. My goal is to have
> many computational threads (or processes) that can have an efficient
> read-access to the huge and complex data structure.
>
> "Efficient" in particular means "without…
Hi Klauss,
> How's the layout of your data, in terms # of objects vs. bytes used?
dict (or list) of 10K-100K objects. The objects are lists or dicts.
The whole structure takes up 2+ GB of RAM.
> Just to have an idea of the overhead involved in refcount
> externalization (you know, what I mentioned…
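To answer Klauss' layout question quantitatively, a rough recursive size estimate can be made with `sys.getsizeof`. The helper `deep_sizeof` below is a hypothetical name, a common recipe rather than anything from the thread; it is approximate (it counts shared references once and ignores interning), but it gives a feel for where the 2+ GB goes.

```python
import sys

def deep_sizeof(obj, seen=None):
    """Rough recursive size estimate (illustrative, not exact):
    counts each object once via `seen`, descends into containers."""
    if seen is None:
        seen = set()
    if id(obj) in seen:
        return 0
    seen.add(id(obj))
    size = sys.getsizeof(obj)
    if isinstance(obj, dict):
        size += sum(deep_sizeof(k, seen) + deep_sizeof(v, seen)
                    for k, v in obj.items())
    elif isinstance(obj, (list, tuple, set, frozenset)):
        size += sum(deep_sizeof(x, seen) for x in obj)
    return size

# Stand-in for Valery's structure: a dict of small lists.
structure = {i: [i, i + 1, i + 2] for i in range(1000)}
print(f"~{deep_sizeof(structure) / 1024:.0f} KiB")
```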
On Dec 9, 11:58 am, Valery wrote:
> Hi all,
>
> Q: how to organize parallel accesses to a huge common read-only Python
> data structure?
>
> Details:
>
> I have a huge data structure that takes >50% of RAM.
> My goal is to have many computational threads (or processes) that can
> have an efficient read-access to the huge and complex data structure.
On Dec 9, 9:58 am, Valery wrote:
> Hi all,
>
> Q: how to organize parallel accesses to a huge common read-only Python
> data structure?
Use a BTree on disk in a file. A good file system will keep most of the
pages you need in RAM whenever the data is "warm". This works
for Python or any other p…
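One concrete way to try the disk-backed suggestion from the standard library: SQLite stores its tables as B-trees, and the OS page cache keeps the hot pages in RAM, so many reader processes can share one on-disk structure without each holding a private copy. This is a sketch, not the poster's setup; the `kv` table and the paths are made up for the example.

```python
import os
import sqlite3
import tempfile

# Build a disk-backed key/value store once.  SQLite keeps the table in
# a B-tree; repeated lookups hit pages the OS already has cached.
path = os.path.join(tempfile.mkdtemp(), "data.db")
con = sqlite3.connect(path)
con.execute("CREATE TABLE kv (k TEXT PRIMARY KEY, v TEXT)")
con.executemany("INSERT INTO kv VALUES (?, ?)",
                [(f"key{i}", f"value{i}") for i in range(1000)])
con.commit()
con.close()

# Each worker process opens its own connection and only reads.
reader = sqlite3.connect(path)
row = reader.execute("SELECT v FROM kv WHERE k = ?", ("key42",)).fetchone()
print(row[0])  # value42
reader.close()
```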
On 12/9/2009 6:58 AM Valery said...
Hi all,
Q: how to organize parallel accesses to a huge common read-only Python
data structure?
I have such a structure which I buried in a zope process which keeps it
in memory and is accessed through http requests. This was done about 8
years ago, and I…
Hi all,
Q: how to organize parallel accesses to a huge common read-only Python
data structure?
Details:
I have a huge data structure that takes >50% of RAM.
My goal is to have many computational threads (or processes) that can
have an efficient read-access to the huge and complex data structure.
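One common answer to Valery's question, sketched here under the assumption of a Unix host: load the structure once in the parent, then fork worker processes that only read it. `fork()` shares the parent's pages copy-on-write, so nothing is pickled; the caveat, which is exactly what Klauss' patch targets, is that CPython's refcount updates write into object headers and gradually unshare those pages. `BIG` and `lookup` are stand-in names for the example.

```python
import multiprocessing as mp

# Stand-in for the huge read-only structure: load it BEFORE forking.
BIG = {i: list(range(i, i + 3)) for i in range(10_000)}

def lookup(key):
    # Reads the inherited global; BIG itself is never pickled or sent.
    return sum(BIG[key])

if __name__ == "__main__":
    # On Unix the workers are forked and share BIG's pages
    # copy-on-write (until refcount writes dirty them).
    with mp.Pool(2) as pool:
        print(pool.map(lookup, [0, 1, 2]))  # [3, 6, 9]
```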