Andrew MacIntyre wrote:
I guess the freebsd limits must be different to the original
development environment.
The number of semaphores is certainly tunable - the SYSV IPC KERNEL
PARAMETERS section in the file /usr/src/sys/conf/NOTES lists the SYSV
semaphore parameters that can be
Robin Becker wrote:
Andrew MacIntyre wrote:
Robin Becker wrote:
I think it uses sysv semaphores and although freeBSD 6 has them
perhaps there's something I need to do to allow them to work.
IIRC, you need to explicitly configure loading the kernel module, or
compile the kernel with the
Robin Becker wrote:
Robin Becker wrote:
Andrew MacIntyre wrote:
Robin Becker wrote:
I think it uses sysv semaphores and although freeBSD 6 has them
perhaps there's something I need to do to allow them to work.
IIRC, you need to explicitly configure loading the kernel module, or
compile
robert wrote:
Shane Hathaway wrote:
of multiple cores. I think Python only needs a nice way to share a
relatively small set of objects using shared memory. POSH goes in that
direction, but I don't think it's simple enough yet.
http://poshmodule.sourceforge.net/
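The kind of sharing POSH aims at can be sketched with nothing but the stdlib: an anonymous mmap inherited across fork (POSIX-only; a toy stand-in for POSH, not its API):

```python
import mmap
import os
import struct

# Anonymous shared mapping, inherited across fork: both processes see
# the same 8 bytes of memory (a toy stand-in for what POSH does with
# whole objects; POSIX-only because of os.fork).
shm = mmap.mmap(-1, 8)
shm[:8] = struct.pack("q", 0)

pid = os.fork()
if pid == 0:
    shm[:8] = struct.pack("q", 42)  # child writes into shared memory
    os._exit(0)

os.waitpid(pid, 0)                   # wait, then read the child's value
(value,) = struct.unpack("q", shm[:8])
print(value)
```

POSH layers refcounting and a custom allocator on top of exactly this kind of mapping, which is where the maintenance burden discussed above comes from.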
interesting, a
Paul Boddie wrote:
My impression is that POSH isn't maintained any more and that work was
needed to make it portable, as you have observed. Some discussions did
occur on one of the Python development mailing lists about the
possibility of using shared memory together with serialisation
Paul Rubin wrote:
robert [EMAIL PROTECTED] writes:
what about speed: is it true that IronPython is by now almost as fast as
CPython?
If this is all really true, it's probably proof that putting LOCK INC locks
(on dicts, lists, mutables ...) into CPython to remove
the GIL in
sturlamolden wrote:
robert wrote:
I'd like to use multiple CPU cores for selected time consuming Python
computations (incl. numpy/scipy) in a frictionless manner.
Threading is not the best way to exploit multiprocessors in this
context. Threads are not the preferred way of exploiting
robert schrieb:
what about speed: is it true that IronPython is by now almost as fast as
CPython?
thus there would be a crash if 2 threads use the global variables
(module.__dict__) of a module?
IronPython uses the .NET virtual machine. That, in itself, gives
consistency guarantees. Read
Paul Boddie wrote:
robert wrote:
Shane Hathaway wrote:
of multiple cores. I think Python only needs a nice way to share a
relatively small set of objects using shared memory. POSH goes in that
direction, but I don't think it's simple enough yet.
http://poshmodule.sourceforge.net/
Robin Becker wrote:
I think it uses sysv semaphores and although freeBSD 6 has them perhaps
there's
something I need to do to allow them to work.
IIRC, you need to explicitly configure loading the kernel module, or
compile the kernel with the necessary option in the config file.
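For reference, the usual FreeBSD knobs look roughly like this (option and module names as commonly documented; verify against your release's NOTES):

```
# kernel config (compile-time):
options SYSVSEM    # SysV semaphores
options SYSVSHM    # SysV shared memory
options SYSVMSG    # SysV message queues

# or load at boot via /boot/loader.conf:
sysvsem_load="YES"
sysvshm_load="YES"
sysvmsg_load="YES"
```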
--
Andrew MacIntyre wrote:
Robin Becker wrote:
I think it uses sysv semaphores and although freeBSD 6 has them
perhaps there's something I need to do to allow them to work.
IIRC, you need to explicitly configure loading the kernel module, or
compile the kernel with the necessary option in
robert wrote:
--
0040101F mov  eax,3B9ACA00h
  13:     for (i = 0; i < count; ++i) {
  14:         __asm lock inc x;
00401024 lock inc  dword ptr [_x (00408a00)]
  15:         sum += x;
0040102B mov  edx,dword ptr [_x (00408a00)]
00401031 add  esi,edx
Ross Ridge schrieb:
So give an example where reference counting is unsafe.
Martin v. Löwis wrote:
Nobody claimed that, in that thread. Instead, the claim was
Atomic increment and decrement instructions are not by themselves
sufficient to make reference counting safe.
So give an example of
Ross Ridge wrote:
Ross Ridge schrieb:
So give an example where reference counting is unsafe.
Martin v. Löwis wrote:
Nobody claimed that, in that thread. Instead, the claim was
Atomic increment and decrement instructions are not by themselves
sufficient to make reference counting safe.
Ross Ridge wrote:
Ross Ridge schrieb:
So give an example where reference counting is unsafe.
Martin v. Löwis wrote:
Nobody claimed that, in that thread. Instead, the claim was
Atomic increment and decrement instructions are not by themselves
sufficient to make reference counting safe.
robert wrote:
Martin v. Löwis wrote:
[..]
Thanks for that info. That is interesting.
Thus even on x86 this LOCK is currently not used (just
((op)->ob_refcnt++) )
Reading this I got pinched: in win32ui there are in fact Py_INCREF/DECREFs
outside of the GIL!
And I have a severe crash
Sandra-24 wrote:
On Nov 2, 1:32 pm, robert [EMAIL PROTECTED] wrote:
I'd like to use multiple CPU cores for selected time consuming Python
computations (incl. numpy/scipy) in a frictionless manner.
Interprocess communication is tedious and out of the question, so I thought
about simply using a
Shane Hathaway wrote:
of multiple cores. I think Python only needs a nice way to share a
relatively small set of objects using shared memory. POSH goes in that
direction, but I don't think it's simple enough yet.
http://poshmodule.sourceforge.net/
interesting, a solution possibly a little
robert wrote:
I'd like to use multiple CPU cores for selected time consuming Python
computations (incl. numpy/scipy) in a frictionless manner.
Threading is not the best way to exploit multiprocessors in this
context. Threads are not the preferred way of exploiting multiple
processors in
robert [EMAIL PROTECTED] writes:
what about speed: is it true that IronPython is by now almost as fast as
CPython?
If this is all really true, it's probably proof that putting LOCK INC locks
(on dicts, lists, mutables ...) into CPython to remove
the GIL in future should not be
sturlamolden wrote:
3. One often uses cluster architectures (e.g. Beowulf) instead of SMPs
for scientific computing. MPI works on SMP and clusters. Threads only
work on SMPs.
Following up on my previous post, there is a simple Python MPI wrapper
that can be used to exploit multiple
sturlamolden wrote:
http://www-unix.mcs.anl.gov/mpi/mpich1/mpich-nt/
One should probably use this instead:
http://www-unix.mcs.anl.gov/mpi/mpich2/index.htm
--
http://mail.python.org/mailman/listinfo/python-list
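The share-nothing pattern those MPI wrappers build on can be sketched with the stdlib alone (POSIX-only; `send`/`recv` here are made-up helpers, not an MPI API):

```python
import os
import pickle
import struct

# Share-nothing message passing, the pattern MPI encourages, sketched
# with a plain pipe and fork: no objects are shared, only serialised
# messages cross the process boundary.
def send(fd, obj):
    data = pickle.dumps(obj)
    os.write(fd, struct.pack("I", len(data)) + data)

def recv(fd):
    (n,) = struct.unpack("I", os.read(fd, 4))
    return pickle.loads(os.read(fd, n))

r, w = os.pipe()
pid = os.fork()
if pid == 0:
    send(w, sum(range(500)))   # worker: compute and send a partial sum
    os._exit(0)

os.waitpid(pid, 0)
result = recv(r)
print(result)
```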
Ross Ridge schrieb:
The problem you're describing isn't that reference counting hasn't been
made safe. What you and Joe seem to be trying to say is that atomic
increment and decrement instructions alone don't make accessing shared
structure members safe.
All I can do is to repeat Joe's words
sturlamolden wrote:
Following up on my previous post, there is a simple Python MPI wrapper
that can be used to exploit multiple processors for scientific
computing. It only works for Numeric, but an adaptation to NumPy should
be easy (there is only one small C file in the source):
Martin v. Löwis [EMAIL PROTECTED] writes:
Ah, but in the case where the lock# signal is used, it's known that
the data is not in the cache of the CPU performing the lock operation;
I believe it is also known that the data is not in the cache of any
other CPU. So the CPU performing the LOCK INC
Martin v. Löwis [EMAIL PROTECTED] writes:
Ah, but in the case where the lock# signal is used, it's known that
the data is not in the cache of the CPU performing the lock operation;
I believe it is also known that the data is not in the cache of any
other CPU. So the CPU performing the LOCK INC
Paul Rubin wrote:
robert [EMAIL PROTECTED] writes:
I don't want to discourage you but what about reference
counting/memory
management for shared objects? Doesn't seem fun for me.
in combination with some simple locking (necessary anyway) I don't
see a problem in ref-counting.
If at least any
Paul Rubin schrieb:
Martin v. Löwis [EMAIL PROTECTED] writes:
Ah, but in the case where the lock# signal is used, it's known that
the data is not in the cache of the CPU performing the lock operation;
I believe it is also known that the data is not in the cache of any
other CPU. So the CPU
Joe Seigh wrote:
Basically there's a race condition where an object containing the
refcount can be deleted between the time you load a pointer to
the object and the time you increment what used to be a refcount
and is possibly something else but definitely undefined.
That doesn't really make
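A toy model of the window Joe describes, with an explicit lock making "load the pointer + bump the count" one unit (illustrative names, not CPython internals):

```python
import threading

# "slot" holds the only owning reference to an object. A reader must
# treat loading the pointer and incrementing the count as one unit;
# the lock closes the window in which the owner could free the object
# between those two steps. Obj/slot are illustrative only.
class Obj(object):
    def __init__(self):
        self.refcount = 1

lock = threading.Lock()
slot = [Obj()]

def acquire_ref():
    with lock:                 # load + increment protected together
        obj = slot[0]
        if obj is not None:
            obj.refcount += 1
        return obj

def drop_owner_ref():
    with lock:
        obj, slot[0] = slot[0], None
        obj.refcount -= 1      # owner gives up its reference

r = acquire_ref()
drop_owner_ref()
print(r.refcount)              # reader still holds a valid reference
```

An atomic increment alone cannot express this, because the object may already be gone by the time the increment executes; that is exactly the claim being argued over above.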
Ross Ridge wrote:
Joe Seigh wrote:
Basically there's a race condition where an object containing the
refcount can be deleted between the time you load a pointer to
the object and the time you increment what used to be a refcount
and is possibly something else but definitely undefined.
That
Ross Ridge wrote:
That doesn't really make sense. The object can't be deleted because
the thread should already have a reference (directly or indirectly) to
the object, otherwise any access to it can cause the race condition you
describe.
Joe Seigh wrote:
True but if the thread didn't
Ross Ridge schrieb:
The thread that shares it increments the reference count before passing
its address directly to another thread or indirectly through a shared
container.
To make a specific example, consider this fragment from
Objects/fileobject.c:
static PyObject *
file_repr(PyFileObject
Martin v. Löwis wrote:
How would you propose to fix file_repr to prevent such
a race condition?
The race condition you describe is different from the one Joe Seigh
described. It's caused because, without the GIL, access to the file object
is no longer thread-safe, not because reference counting
Ross Ridge schrieb:
Martin v. Löwis wrote:
How would you propose to fix file_repr to prevent such
a race condition?
The race condition you describe is different from the one Joe Seigh
described. It's caused because, without the GIL, access to the file object
is no longer thread-safe, not
Martin v. Löwis wrote:
You still didn't say what you would suggest to make it thread-safe
again; most likely, your proposal would be to add locking. If I
understand Joe's approach correctly, he has a solution that does
not involve locking (although I don't understand how it works).
Sun had
Joe Seigh wrote:
Martin v. Löwis wrote:
You still didn't say what you would suggest to make it thread-safe
again; most likely, your proposal would be to add locking. If I
understand Joe's approach correctly, he has a solution that does
not involve locking (although I don't understand how it
robert wrote:
I'd like to use multiple CPU cores for selected time consuming Python
computations (incl. numpy/scipy) in a frictionless manner.
Interprocess communication is tedious and out of the question, so I
thought about simply using multiple Python interpreter instances
(Py_NewInterpreter)
Martin v. Löwis wrote:
How would you propose to fix file_repr to prevent such
a race condition?
Ross Ridge schrieb:
The race condition you describe is different from the one Joe Seigh
described. It's caused because, without the GIL, access to the file object
is no longer thread-safe, not because
Shane Hathaway wrote:
robert wrote:
I'd like to use multiple CPU cores for selected time consuming Python
computations (incl. numpy/scipy) in a frictionless manner.
Interprocess communication is tedious and out of the question, so I
thought about simply using multiple Python interpreter
On Nov 2, 1:32 pm, robert [EMAIL PROTECTED] wrote:
I'd like to use multiple CPU cores for selected time consuming Python
computations (incl. numpy/scipy) in a frictionless manner.
Interprocess communication is tedious and out of the question, so I thought about
simply using multiple Python
Ross Ridge schrieb:
So give an example where reference counting is unsafe.
Nobody claimed that, in that thread. Instead, the claim was
Atomic increment and decrement instructions are not by themselves
sufficient to make reference counting safe.
I did give an example, in [EMAIL PROTECTED].
Even
Martin v. Löwis [EMAIL PROTECTED] writes:
I think that has to be on a single processor, or at most a dual core
processor with shared cache on die. With multiple cpu chips I don't
think you can get the signals around that fast.
Can you explain what you mean? The lock# signal takes *immediate*
Paul Rubin wrote:
I dunno about x86 hardware signals but these instructions do
read-modify-write operations. That means there has to be enough
interlocking to prevent two CPUs from updating the same memory
location simultaneously, which means the CPUs have to communicate.
See
Paul Rubin schrieb:
I dunno about x86 hardware signals but these instructions do
read-modify-write operations. That means there has to be enough
interlocking to prevent two CPUs from updating the same memory
location simultaneously, which means the CPUs have to communicate.
See
Martin v. Löwis wrote:
robert schrieb:
in combination with some simple locking (necessary anyway) I don't see a
problem in ref-counting.
In the current implementation, simple locking isn't necessary.
The refcounter can be modified freely since the code modifying
it will always hold the
robert,
Interprocess communication is tedious and out of questio
[...]
I expect to be able to directly push around Python Object-Trees between the 2
(or more) interpreters by doing some careful locking.
Please do yourself a favour and have a look at pyro. pyro makes
InterComputer and
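Short of Pyro itself, the stdlib can sketch the same remote-object idea; this is plain XML-RPC, not Pyro's API:

```python
import threading
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

# Publish an object's methods in one process and call them from
# another over a socket, instead of sharing interpreter state --
# the same idea Pyro packages up more conveniently.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(lambda a, b: a + b, "add")
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

proxy = ServerProxy("http://127.0.0.1:%d" % port)
result = proxy.add(2, 3)   # remote call, marshalled over the wire
print(result)
server.shutdown()
```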
Paul Rubin wrote:
robert [EMAIL PROTECTED] writes:
I don't want to discourage you but what about reference
counting/memory
management for shared objects? Doesn't seem fun for me.
in combination with some simple locking (necessary anyway) I don't
see a problem in ref-counting.
If at least
GHUM wrote:
robert,
Interprocess communication is tedious and out of questio
[...]
I expect to be able to directly push around Python Object-Trees between the
2 (or more) interpreters by doing some careful locking.
Please do yourself a favour and have a look at pyro. pyro makes
robert schrieb:
PS: Besides, what are the speed costs of LOCK INC mem?
That very much depends on the implementation. In
http://gcc.gnu.org/ml/java/2001-03/msg00132.html
Hans Boehm claims it's 15 cycles. The LOCK prefix
itself asserts the lock# bus signal for the entire
operation, meaning that the
Martin v. Löwis [EMAIL PROTECTED] writes:
PS: Besides, what are the speed costs of LOCK INC mem?
That very much depends on the implementation. In
http://gcc.gnu.org/ml/java/2001-03/msg00132.html
Hans Boehm claims it's 15 cycles.
I think that has to be on a single processor, or at most a dual
Paul Rubin schrieb:
Martin v. Löwis [EMAIL PROTECTED] writes:
PS: Besides, what are the speed costs of LOCK INC mem?
That very much depends on the implementation. In
http://gcc.gnu.org/ml/java/2001-03/msg00132.html
Hans Boehm claims it's 15 cycles.
I think that has to be on a single
robert wrote:
I'd like to use multiple CPU cores for selected time consuming Python
computations (incl. numpy/scipy) in a frictionless manner.
Interprocess communication is tedious and out of the question, so I thought
about simply using multiple Python interpreter instances
Jean-Paul Calderone wrote:
On Thu, 2 Nov 2006 14:15:58 -0500, Jean-Paul Calderone
[EMAIL PROTECTED] wrote:
On Thu, 02 Nov 2006 19:32:54 +0100, robert
[EMAIL PROTECTED] wrote:
I'd like to use multiple CPU cores for selected time consuming Python
computations (incl. numpy/scipy) in a
robert wrote:
I'd like to use multiple CPU cores for selected time consuming Python
computations (incl. numpy/scipy) in a frictionless manner.
Interprocess communication is tedious and out of the question, so I thought about
simply using multiple Python interpreter instances (Py_NewInterpreter)
Filip Wasilewski wrote:
robert wrote:
I'd like to use multiple CPU cores for selected time consuming Python
computations (incl. numpy/scipy) in a frictionless manner.
Interprocess communication is tedious and out of the question, so I thought
about simply using multiple Python interpreter
robert wrote:
A question besides: do concurrent INC/DEC machine op-codes
execute atomically on multi-cores as they do in single-core threads?
Not on the level at which Python reference counting is implemented.
CPUs often have special assembler ops for these operations. I think that
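Daniel's point is visible from pure Python as well: a Python-level increment is several interpreter steps, so concurrent `+= 1` on a shared name can lose updates even under the GIL. A minimal sketch (how many updates are lost varies by interpreter and timing):

```python
import sys
import threading

# "counter += 1" is a read, an add and a store; a thread switch can
# land between them. A tiny switch interval makes lost updates likely.
sys.setswitchinterval(1e-6)

counter = 0

def bump(n):
    global counter
    for _ in range(n):
        counter += 1

threads = [threading.Thread(target=bump, args=(100000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# At most 400000; often strictly less because increments were lost.
print(counter <= 400000)
```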
Daniel Dittmar wrote:
robert wrote:
A question besides: do concurrent INC/DEC machine op-codes
execute atomically on multi-cores as they do in single-core threads?
Not on the level at which Python reference counting is implemented.
CPUs often have special assembler ops for
robert wrote:
(IPython is only a special python network terminal as already said.)
Sorry, I thought of IronPython, the .NET variant.
Does Jython really eliminate the GIL? What happens when different
Yes.
threads alter/read a dict concurrently - the basic operation in python,
which is
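What concurrent dict access looks like in CPython today can be checked directly: under the GIL each individual store is atomic, so distinct keys written from many threads all land intact (a sketch; compound read-modify-write operations still need a lock):

```python
import threading

# Many threads hammering one dict: with the GIL, each single
# d[key] = value store is atomic, so the dict never corrupts and,
# with distinct keys, no store is lost.
d = {}

def worker(tid):
    for i in range(1000):
        d[(tid, i)] = i

threads = [threading.Thread(target=worker, args=(t,)) for t in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(d))
```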
Daniel Dittmar wrote:
robert wrote:
(IPython is only a special python network terminal as already said.)
Sorry, I thought of IronPython, the .NET variant.
Does Jython really eliminate the GIL? What happens when different
Yes.
threads alter/read a dict concurrently - the basic
robert schrieb:
in combination with some simple locking (necessary anyway) I don't see a
problem in ref-counting.
In the current implementation, simple locking isn't necessary.
The refcounter can be modified freely since the code modifying
it will always hold the GIL.
A question besides:
robert [EMAIL PROTECTED] writes:
I don't want to discourage you but what about reference
counting/memory
management for shared objects? Doesn't seem fun for me.
in combination with some simple locking (necessary anyway) I don't
see a problem in ref-counting.
If at least any interpreter
robert wrote:
Daniel Dittmar wrote:
robert wrote:
[...]
garbage is collected at the earliest when the refcount goes to 0. Once it
has gone to 0, no one will ever use such an object again. Thus GC should
not be different at all.
Since Python 2.?, there's a mark-and-sweep garbage collection in
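The collector being referred to is exposed as the stdlib `gc` module; a small sketch of why refcounting alone is not enough:

```python
import gc

# Refcounting alone cannot reclaim reference cycles; the cyclic
# collector behind the gc module finds and frees them.
class Node(object):
    pass

a, b = Node(), Node()
a.other, b.other = b, a    # reference cycle
del a, b                   # refcounts never reach zero on their own

collected = gc.collect()   # cycle detector reports unreachable objects
print(collected >= 2)
```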
I'd like to use multiple CPU cores for selected time consuming Python
computations (incl. numpy/scipy) in a frictionless manner.
Interprocess communication is tedious and out of the question, so I thought about
simply using multiple Python interpreter instances (Py_NewInterpreter) with extra
GIL in
On Thu, 02 Nov 2006 19:32:54 +0100, robert [EMAIL PROTECTED] wrote:
I'd like to use multiple CPU cores for selected time consuming Python
computations (incl. numpy/scipy) in a frictionless manner.
NumPy releases the GIL in quite a few places. I haven't used scipy much,
but I would expect it
On Thu, 2 Nov 2006 14:15:58 -0500, Jean-Paul Calderone [EMAIL PROTECTED]
wrote:
On Thu, 02 Nov 2006 19:32:54 +0100, robert [EMAIL PROTECTED] wrote:
I'd like to use multiple CPU cores for selected time consuming Python
computations (incl. numpy/scipy) in a frictionless manner.
NumPy releases the