Hello everyone,

I've been experimenting with distributed computing using multiprocessing on
CPython, and thought I'd see if I could spin up some workers running PyPy
and have them connect to a server running CPython.

Well, it didn't go so well. The client always gets an IOError: bad message
length in the answer_challenge() function of connection.py.

Traceback (most recent call last):
  File "app_main.py", line 51, in run_toplevel
  File "client1.py", line 6, in <module>
    m.connect()
  File "[...]/pypy-1.7/lib-python/modified-2.7/multiprocessing/managers.py", line 474, in connect
    conn = Client(self._address, authkey=self._authkey)
  File "[...]/pypy-1.7/lib-python/modified-2.7/multiprocessing/connection.py", line 149, in Client
    answer_challenge(c, authkey)
  File "[...]/pypy-1.7/lib-python/modified-2.7/multiprocessing/connection.py", line 383, in answer_challenge
    message = connection.recv_bytes(256)         # reject large message
IOError: bad message length
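For context, the failing call sits in multiprocessing's connection-authentication
handshake: the server sends a random challenge and the client must answer with an
HMAC of it keyed on authkey. On a single interpreter you can exercise that
handshake directly. This sketch uses the deliver_challenge()/answer_challenge()
helpers, which are internal and undocumented, so treat it as illustrative only:

```python
# Exercising multiprocessing's authentication handshake on one interpreter.
# NOTE: deliver_challenge() and answer_challenge() are internal helpers of
# multiprocessing.connection; this only shows where the failing
# recv_bytes(256) sits in the protocol.
import threading
import multiprocessing.connection as mpc

authkey = b'abracadabra'
server_end, client_end = mpc.Pipe()

# Server side: send a random challenge, check the HMAC digest, send WELCOME.
t = threading.Thread(target=mpc.deliver_challenge, args=(server_end, authkey))
t.start()

# Client side: recv_bytes(256) reads the challenge (anything longer is
# rejected with "bad message length"), then answers with an HMAC of it.
mpc.answer_challenge(client_end, authkey)
t.join()
ok = True
print('handshake succeeded')
```

When both ends run the same interpreter this completes quietly; the traceback
above shows it bailing out at the very first recv on the client side.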


You can trigger this with the simple example from the documentation:
http://docs.python.org/library/multiprocessing.html#using-a-remote-manager
Obviously a CPython server with a CPython client works, and I've found that
a PyPy server with a PyPy client works as well. Only a mixed pair triggers
the error, in either direction: PyPy server + CPython client, or CPython
server + PyPy client.

Is this a bug? Or is multiprocessing not supposed to be compatible across
implementations? I would think it is, since it's just sockets with pickled
data, right?

Thanks,
Andrew Brown
_______________________________________________
pypy-dev mailing list
pypy-dev@python.org
http://mail.python.org/mailman/listinfo/pypy-dev
