[issue29158] Possible glitch in the interaction of a thread and a multiprocessing manager

2017-01-06 Thread luke_16

luke_16 added the comment:

Regarding Davin's last paragraph: "Without pulling apart your code...", I would 
like to point out that what I'm doing is what the Documentation instructs:

https://docs.python.org/2/library/multiprocessing.html#using-a-remote-manager

So, I want to access a process running on another machine (the server.py) from 
another machine running client.py. But if I want to run both separately on the 
same machine, I should also be able to.
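
For reference, the pattern that documentation page describes can be sketched as 
follows. The registered name get_queue, the authkey, and the loopback address 
are illustrative, and the server is run in a background thread only to keep the 
sketch self-contained; in the real setup server.py and client.py are separate 
processes, possibly on separate machines.

```python
import queue
import threading
from multiprocessing.managers import BaseManager

# Server side: expose a local queue under a registered name.
shared = queue.Queue()

class ServerManager(BaseManager):
    pass

ServerManager.register('get_queue', callable=lambda: shared)
server_mgr = ServerManager(address=('127.0.0.1', 0), authkey=b'abracadabra')
server = server_mgr.get_server()
addr = server.address  # the real (host, port) after binding to port 0
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: a manager with the same registration connects to the server.
class ClientManager(BaseManager):
    pass

ClientManager.register('get_queue')
client = ClientManager(address=addr, authkey=b'abracadabra')
client.connect()
remote_queue = client.get_queue()   # proxy for the server's queue
remote_queue.put('hello')

result = shared.get(timeout=5)      # the item arrived on the server side
print(result)
```

With a fixed port and a public address instead of the loopback, the client half 
runs unchanged on a different machine, which is the scenario the report is about.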

--

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue29158] Possible glitch in the interaction of a thread and a multiprocessing manager

2017-01-05 Thread luke_16

luke_16 added the comment:

Regarding the idea that it is not recommended to spawn a process while 
already-spawned threads are running, which would be the case on the server side 
of my example, I have to disagree. If the new process is supposed to be 
completely independent of the current one (no shared memory), then this should 
not be a problem. Anyway, I changed the server.py module just to rule out this 
possible error source.

Trying to simplify the example, I moved everything into the client.py module and 
eliminated the "thread spawns process" part in order to see what happens. 
Actually, I don't think that was the case anyway, because I could not see any 
process being spawned when trying to connect to the server; I believe the 
connection happens in the current MainThread.

In the new client, I'm doing exactly what I was trying to avoid, which is to 
wait for the connect() method to return. Surprisingly, the problem is still 
there, in a way. Say you start the server and then the client. The first 
attempt to send something fails, and the client tries to connect to the server. 
All goes well with the connection, and the client starts sending stuff to the 
server. If you stop the server, wait a while (or not), and restart it, the 
client can no longer send anything, and tries to reconnect. The connection is 
then successful, but immediately after that it cannot send anything because of 
a BrokenPipeError. Only after the exception is raised and the next iteration of 
the "while" loop begins can the client send something to the server again. If 
you insert a time.sleep(x) of any number of seconds inside the scope of the 
first "else" (between lines 26 and 38), right after a 
"remote_buffer.put('anything')", it will raise a BrokenPipeError. Only when a 
new iteration of the "while" loop begins is it possible to send something to 
the server again.
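
A minimal sketch of that reconnect logic, under the assumption (one plausible 
reading of the symptom) that the BrokenPipeError comes from reusing the old 
queue proxy after the manager reconnects, so the fix is to re-fetch the proxy 
from the freshly connected manager. The in-process stand-in server and the 
names items_buffer and send are illustrative, not the code from case2.zip.

```python
import queue
import threading
from multiprocessing.managers import BaseManager

# Stand-in server so the sketch is self-contained; in the report this is
# the separate server.py process.
shared = queue.Queue()

class ServerManager(BaseManager):
    pass

ServerManager.register('items_buffer', callable=lambda: shared)
server = ServerManager(address=('127.0.0.1', 0), authkey=b'key').get_server()
addr = server.address
threading.Thread(target=server.serve_forever, daemon=True).start()

class ClientManager(BaseManager):
    pass

ClientManager.register('items_buffer')

def send(state, item):
    """Send one item; on a dead connection, reconnect and, crucially,
    re-fetch the proxy from the new manager instead of reusing the old
    one, whose cached connection points at the dead server."""
    try:
        state['buffer'].put(item)
    except (KeyError, BrokenPipeError, ConnectionError, EOFError):
        manager = ClientManager(address=addr, authkey=b'key')
        manager.connect()
        state['buffer'] = manager.items_buffer()  # fresh proxy
        state['buffer'].put(item)

state = {}
send(state, 'anything')        # first call: no proxy yet, so it connects
result = shared.get(timeout=5)
print(result)
```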

This makes no sense to me. There must be something wrong with it.

--
Added file: http://bugs.python.org/file46159/case2.zip


[issue29158] Possible glitch in the interaction of a thread and a multiprocessing manager

2017-01-04 Thread Davin Potts

Davin Potts added the comment:

There are too many things going on in this example -- it would be far easier to 
digest if the example could be simplified.

The general programming rule of thumb (completely unrelated to Python, but 
still just as relevant to it) that I think David might have been invoking is: 
create processes first, then create threads inside of them. Otherwise, if you 
fork a process that has multiple threads running inside it, you should expect 
problems. Assuming you're on a unix platform, it looks like you're creating 
threads and then forking a process, as well as doing it the other way around in 
another part of your code.
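
The rule of thumb can be illustrated with a short sketch; the fork context and 
the squaring workload are illustrative (fork is used here because it mirrors 
the unix setup discussed, and is the method for which the ordering matters).

```python
import threading
import multiprocessing as mp

# Pin the fork start method explicitly, mirroring the unix setup in the report.
ctx = mp.get_context('fork')

def thread_task(results, i):
    results.append(i * i)

def worker(q):
    # Threads are created *inside* the child process, after the fork,
    # so the fork itself never happens in a multi-threaded process.
    results = []
    threads = [threading.Thread(target=thread_task, args=(results, i))
               for i in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    q.put(sorted(results))

q = ctx.Queue()
p = ctx.Process(target=worker, args=(q,))  # process first ...
p.start()                                  # ... threads only inside it
out = q.get()
p.join()
print(out)
```

Doing it the other way around (forking from a thread-heavy parent) can leave 
the child holding copies of locks whose owning threads no longer exist, which 
is the class of problem being warned about.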

Different topic:  you mention killing the main process for server.py... which 
would likely kill the manager process referred to by shared_objects_manager... 
but you're creating a different manager process in bridge.py that is told to 
listen on the same port...


Without pulling apart your code further, I suspect confusion over how to use a 
Manager to share objects / data / information across processes. If it helps: 
generally, one process creates a manager instance (which itself results in the 
creation of a new process), and then the other processes / threads are created 
by that first process and given a handle on the manager instance or on the 
objects it manages. I am a bit confused by your example, but I hope that 
explanation helps provide some clarity.
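
That arrangement might look like the following sketch, with illustrative names: 
one process creates the manager, and the children only ever receive the managed 
object, never a manager of their own.

```python
import multiprocessing as mp

ctx = mp.get_context('fork')   # unix start method, as in the setup discussed

def child(shared_list, value):
    # The child only receives a proxy; it never creates its own manager.
    shared_list.append(value)

with ctx.Manager() as manager:      # the manager runs in its own process
    shared = manager.list()
    procs = [ctx.Process(target=child, args=(shared, i)) for i in range(3)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    result = sorted(shared[:])      # copy out before the manager shuts down
print(result)
```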

--
nosy: +davin


[issue29158] Possible glitch in the interaction of a thread and a multiprocessing manager

2017-01-04 Thread R. David Murray

R. David Murray added the comment:

My understanding is that the basic rule of thumb is: don't mix threads and 
multiprocessing.  You may find that if you use spawn, it won't ever work.  But 
I haven't used multiprocessing myself.
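
For what it's worth, the start method can be selected explicitly through a 
context, which is how one would try spawn as suggested; this short sketch only 
inspects the available methods and pins 'spawn' for objects created through 
that context, without starting any process.

```python
import multiprocessing as mp

# Which start methods does this platform offer? 'spawn' is always among
# them; it starts children from a fresh interpreter instead of forking a
# possibly multi-threaded parent.
methods = mp.get_all_start_methods()
print(methods)

# A context pins the choice for the processes and queues created through
# it, without changing the global default.
ctx = mp.get_context('spawn')
q = ctx.Queue()   # this queue is intended for spawn-started processes
```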

--
nosy: +r.david.murray
type: crash -> behavior


[issue29158] Possible glitch in the interaction of a thread and a multiprocessing manager

2017-01-04 Thread Luciano Dionisio

New submission from Luciano Dionisio:

After spending a lot of time trying to understand why my code would not execute 
as expected, and after not getting any help on StackOverflow:

http://stackoverflow.com/questions/41444081/python-multiprocessing-manager-delegate-client-connection-and-dont-wait

I assume it may be a glitch in the interaction of a thread and a 
multiprocessing manager.

I have tried this with Python 2.7.13 and Python 3.6.0 and assume the problem 
also exists in the versions in between, and beyond.

The problem appears when I try to delegate the connect() method of a 
multiprocessing manager client to a thread. The first time the procedure takes 
place, everything works fine and there is no problem with shared memory or 
anything else. The problem arises on the second and subsequent connection 
attempts, when there seems to be a memory-sharing problem. To reproduce the 
problem, run the server.py and client.py modules. You can see that the client 
is capable of populating the server's queue. If you terminate the server.py 
process and start it again, the client can no longer reassign the remote queue. 
Actually, reconnecting to the server always takes place, as does the linkage to 
the remote queue on line 53 of bridge.py:

self.items_buffer = self.remote_manager.items_buffer()

but this procedure no longer works after the first time. Even though the 
connection is re-established, and at the moment of reconnection it is possible 
to send info to the server, the pipe somehow breaks whenever the thread dies.

--
components: Interpreter Core
files: case.zip
messages: 284661
nosy: Luciano Dionisio
priority: normal
severity: normal
status: open
title: Possible glitch in the interaction of a thread and a multiprocessing manager
type: crash
versions: Python 2.7, Python 3.6
Added file: http://bugs.python.org/file46145/case.zip
