[issue21553] Behaviour of modules depends on how they were imported

2014-05-22 Thread mythsmith

New submission from mythsmith:

I found a condition where different behaviour can be observed depending on 
how a module is imported. 

It makes a difference whether you write:

import module
# versus:
from package import module

In the attachment you find a minimal package (imptest) with this organization:

imptest
  |- __init__.py (empty)
  |- m.py (module which initializes a variable foo=0)
  |- sub (package)
  |    |- __init__.py (empty)
  |    |- subm.py (module which, upon import, prints m.foo)

And two scripts which can be directly executed:
  |- run0.py (using import m)
  |- run1.py (using from imptest import m)

Contents of the module m:
#
foo = 0
def do():
    global foo
    foo = 1
    print('doing foo=', foo)
print('imported m; foo=', foo)

Contents of module subm:
###
from imptest import m
print('imported sub, foo=', m.foo)

Both run0.py and run1.py import module m and call the do() function, thus 
theoretically changing foo to 1.
Both then import the subm module, which in turn imports m again. 
What I would expect is that, 
since m is already in memory, it is not really imported again, so foo remains 
equal to 1 after subm is imported.

I found that this actually depends on how I imported m in the script.

Contents of run0.py:

import m
m.do()
print('importing subm')
from imptest.sub import subm

Result:
imported m; foo= 0
doing foo= 1
importing subm
imported m; foo= 0
imported sub, foo= 0

As you can see from the printout while importing subm, the m module is imported 
again and thus foo is reset to 0. In run1.py, 
I changed the line "import m" to "from imptest import m" and got the expected 
behaviour:

Contents of run1.py:

from imptest import m
m.do()
print('importing subm')
from imptest.sub import subm


Result:
imported m; foo= 0
doing foo= 1
importing subm
imported sub, foo= 1

I know that directly running a script inside the first level of a package may 
seem strange or incorrect, but could someone explain why this is happening? 
I would expect a module to be loaded into memory on the first import and then 
referenced from then on, however I later or elsewhere in the program choose to 
import it.
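A minimal, self-contained sketch of what is going on (it rebuilds a stand-in 
for the attached imptest layout in a temporary directory, so the paths here are 
illustrative): when the script's own directory inside the package is on 
sys.path, the same m.py file is importable under two different names, and 
Python caches it under two separate sys.modules keys, each with its own module 
object and its own foo:

```python
import os
import sys
import tempfile

# Hypothetical reconstruction of the attached imptest package;
# only m.py's "foo = 0" matters for the demonstration.
root = tempfile.mkdtemp()
pkg = os.path.join(root, 'imptest')
os.makedirs(pkg)
open(os.path.join(pkg, '__init__.py'), 'w').close()
with open(os.path.join(pkg, 'm.py'), 'w') as f:
    f.write('foo = 0\n')

# Running run0.py from inside imptest/ puts the package directory itself
# on sys.path, so m.py is reachable both as top-level 'm' and as 'imptest.m'.
sys.path.insert(0, root)  # makes 'imptest' (and 'imptest.m') importable
sys.path.insert(0, pkg)   # simulates executing a script inside imptest/

import m                        # cached as sys.modules['m']
from imptest import m as pkg_m  # cached as sys.modules['imptest.m']

print(m is pkg_m)   # False: two distinct module objects
m.foo = 1           # what m.do() effectively does
print(pkg_m.foo)    # 0: the other copy is untouched
```

This also matches the difference between run0.py and run1.py: run1.py imports 
m under the same name that subm later uses ('imptest.m'), so the second import 
is a sys.modules cache hit and foo stays 1.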

--
components: Interpreter Core
files: imptest.zip
messages: 218901
nosy: mythsmith
priority: normal
severity: normal
status: open
title: Behaviour of modules depends on how they were imported
type: behavior
versions: Python 2.7, Python 3.2
Added file: http://bugs.python.org/file35314/imptest.zip

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue21553
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue20660] Starting a second multiprocessing.Manager causes INCREF on all objects created by the first one.

2014-02-19 Thread mythsmith

mythsmith added the comment:

> That would probably mean that proxy objects could not be inherited by *any* 
> sub-process. 

If I only avoid the after-fork incref, I must be very careful not to delete 
them in any subprocess, as this would cause a decref that is not compensated 
by an after-fork incref at the beginning of the subprocess. 

But as long as I never delete such objects in any subprocess, this should make 
no difference... Or will they get deleted whenever any of the subprocesses ends?

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue20660



[issue20660] Starting a second multiprocessing.Manager causes INCREF on all objects created by the first one.

2014-02-17 Thread mythsmith

New submission from mythsmith:

It seems that upon the start of a second manager, all objects referenced by the 
first one get an INCREF. On the third start, all objects created by the first 
and the second manager get another INCREF, and so on. I cannot understand why 
the start of a totally new manager, in a new process, causes all these 
INCREFs on objects hosted by other managers, which makes the program take 
forever to start/stop.

This small script fully reproduces the behaviour, tested in Python 2.7 and 3.2:

from __future__ import print_function
import multiprocessing, logging

# Activate multiprocessing logging
mplog = multiprocessing.get_logger()
mplog.setLevel(multiprocessing.util.DEBUG)
mplog.addHandler(logging.StreamHandler())

objs = []
def newman(n=50):
    global objs
    m = multiprocessing.Manager()
    print('created')
    for i in range(n):
        objs.append(m.Value('i', i))
    return m

print(' first man')
m1 = newman()

print(' second man')
m2 = newman()

print(' third man')
m3 = newman(0)

(Output is attached)

After the start of the first manager, the logger prints the messages 
relating to the creation of the first 50 objects.

But when the second manager is starting - before any object has been created by 
it - the logger prints exactly 50 INCREF messages. Then follow the messages 
relating to the creation of the 50 new objects on manager 2.

When the third manager starts - again before any object has been created by it 
- 100 more INCREF messages are printed.

No object-creation message is seen after m3 is created, as I passed 0 to the 
newman() function.

When the program ends, a similar number of DECREF messages is printed.

It seems that, when I start a new manager, it acquires a reference to all 
objects referenced by previous managers. In a big application this translates 
into extremely slow startup/shutdown.
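The mechanism behind those INCREF bursts can be sketched with 
multiprocessing.util.register_after_fork, the same hook that real manager 
proxies register: every registered object alive in the parent runs its hook in 
each newly forked child, and starting a Manager forks a server process. The 
Proxyish class below is a hypothetical stand-in for a real proxy, and the 
sketch assumes a POSIX platform where the 'fork' start method is available:

```python
import multiprocessing
from multiprocessing import util

class Proxyish(object):
    """Hypothetical stand-in for a manager proxy: it registers an
    after-fork hook, as multiprocessing proxies do internally."""
    def __init__(self, name, queue):
        self.name = name
        self.queue = queue
        util.register_after_fork(self, Proxyish._after_fork)

    def _after_fork(self):
        # A real proxy would send an incref RPC to its manager here;
        # we just record that the hook fired in the child process.
        self.queue.put(self.name)

ctx = multiprocessing.get_context('fork')  # hooks fire only on fork
q = ctx.Queue()
objs = [Proxyish('obj%d' % i, q) for i in range(3)]

# Forking any new process (e.g. a second Manager's server) runs the
# after-fork hook of every object registered in the parent.
p = ctx.Process(target=lambda: None)
p.start()
p.join()

increfs = sorted(q.get(timeout=5) for _ in range(3))
print(increfs)  # one "incref" per pre-existing object
```

So the cost observed above is not the new manager referencing old objects 
directly: it is every pre-existing proxy announcing itself from inside the 
freshly forked server process.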

--
components: Library (Lib)
files: output.txt
messages: 211421
nosy: mythsmith
priority: normal
severity: normal
status: open
title: Starting a second multiprocessing.Manager causes INCREF on all objects 
created by the first one.
type: behavior
versions: Python 2.7, Python 3.2
Added file: http://bugs.python.org/file34121/output.txt

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue20660