Re: [Python-Dev] PEP 3148 ready for pronouncement

2010-05-27 Thread Brian Quinlan


On May 27, 2010, at 1:21 PM, Greg Ewing wrote:


On 27/05/10 12:04, Jesse Noller wrote:


Namespaces are
only a honking great idea if you actually let them do the job
they're designed for.


concurrent.* is the namespace, futures is the package within the
namespace - concurrent.futures is highly descriptive of the items
contained therein.


I was referring to the issue of ThreadPool vs. ThreadPoolExecutor
etc. By your own argument above, concurrent.futures.ThreadPool is
quite descriptive enough of what it provides. It's not a problem
if some other module also provides something called a ThreadPool.



I think that the Executor suffix is a good indicator of the
interface being provided. Pool is not, because you could have
Executor implementations that don't involve pools.


Cheers,
Brian


Re: [Python-Dev] Sumo

2010-05-27 Thread Lennart Regebro
One  worry with an official sumo distribution is that it could become
an excuse for *not* putting something in the stdlib.
Otherwise it's an interesting idea.

-- 
Lennart Regebro: Python, Zope, Plone, Grok
http://regebro.wordpress.com/
+33 661 58 14 64


Re: [Python-Dev] Sumo

2010-05-27 Thread Paul Moore
On 27 May 2010 00:11, geremy condra debat...@gmail.com wrote:
 I'm not clear, you seem to be arguing that there's a market for many
 augmented python distributions but not one. Why not just have one
 that includes the best from each domain?

Because that's bloat. You later argue that a web designer wouldn't
care if his distribution included numpy. OK, maybe, but if my needs
are simply futures, cx_Oracle and pywin32, I *would* object to
downloading many megabytes of other stuff just to get those three.
It's a matter of degree.

 I'm genuinely struggling to see how a Sumo distribution ever comes
 into being under your proposal. There's no evidence that anyone wants
 it (otherwise it would have been created by now!!)

 Everything worth making has already been made?

Not what I'm saying. But if/when it happens, something will trigger
it. I see no sign of such a trigger. That's all I'm saying.

 and until it exists, it's not a plausible place to put modules that don't
 make it into the stdlib.

 Of course its implausible to put something somewhere that
 doesn't exist... until it does.

Hence my point - people are saying futures don't belong in the stdlib
but they could go in a sumo distribution. The second half of that
statement is (currently) content free if not self-contradictory.

 I'd say rather that there are a large number of specialized tools which
 aren't individually popular enough to be included in Python, but which
 when taken together greatly increase its utility, and that sumo offers a
 way to provide that additional utility to python's users without forcing
 python core devs to shoulder the maintenance burden.

I don't believe that there's evidence that aggregation (except in the
context of specialist areas) does provide additional utility. (In the
context of the discussion that sparked this debate, that contrasts
with inclusion in the stdlib, which *does* offer additional utility -
batteries included, guaranteed and tested cross-platform
functioning, a statement of best practice, etc etc).

Paul.

PS One thing I haven't seen made clear - in my view, the hypothetical
sumo is a single aggregated distribution of Python
modules/packages/extensions. It would NOT include core Python and the
stdlib (in contrast to Enthought or ActivePython). I get the
impression that other people may be thinking in terms of a full Python
distribution, like those 2 cases. We probably ought to be clear which
we're talking about.


Re: [Python-Dev] PEP 3148 ready for pronouncement

2010-05-27 Thread Floris Bruynooghe
On Thu, May 27, 2010 at 01:46:07PM +1200, Greg Ewing wrote:
 On 27/05/10 00:31, Brian Quinlan wrote:
 
 You have two semantic choices here:
 1. let the interpreter exit with the future still running
 2. wait until the future finishes and then exit
 
 I'd go for (1). I don't think it's unreasonable to
 expect a program that wants all its tasks to finish
 to explicitly wait for that to happen.

I'd go for (1) as well; it's no more than reasonable that if you want
a result you wait for it.  And I dislike libraries doing magic you
can't see, I'd prefer if I explicitly had to shut a pool down.  And
yes, if you shut the interpreter down while threads are running they
sometimes wake up at the wrong time to find the world around them
destroyed.  But that's part of programming with threads so it's not
like the futures lib suddenly makes things behave differently.

I'm glad I'm not alone in preferring (1) though.

Regards
Floris

-- 
Debian GNU/Linux -- The Power of Freedom
www.debian.org | www.gnu.org | www.kernel.org


Re: [Python-Dev] PEP 3148 ready for pronouncement

2010-05-27 Thread Brian Quinlan


On 27 May 2010, at 17:53, Floris Bruynooghe wrote:


On Thu, May 27, 2010 at 01:46:07PM +1200, Greg Ewing wrote:

On 27/05/10 00:31, Brian Quinlan wrote:


You have two semantic choices here:
1. let the interpreter exit with the future still running
2. wait until the future finishes and then exit


I'd go for (1). I don't think it's unreasonable to
expect a program that wants all its tasks to finish
to explicitly wait for that to happen.


I'd go for (1) as well; it's no more than reasonable that if you want
a result you wait for it.  And I dislike libraries doing magic you
can't see, I'd prefer if I explicitly had to shut a pool down.  And
yes, if you shut the interpreter down while threads are running they
sometimes wake up at the wrong time to find the world around them
destroyed.  But that's part of programming with threads so it's not
like the futures lib suddenly makes things behave differently.

I'm glad I'm not alone in preferring (1) though.


Keep in mind that this library magic is consistent with the library  
magic that the threading module does - unless the user sets  
Thread.daemon to True, the interpreter does *not* exit until the  
thread does.
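
For anyone who hasn't bumped into that behaviour, here is a minimal sketch
(plain threading, nothing futures-specific):

    import threading, time

    def work():
        time.sleep(2)
        print("worker finished")

    t = threading.Thread(target=work)
    # t.daemon = True  # uncomment this and the interpreter exits immediately,
    #                  # without waiting for work() to finish
    t.start()
    print("main thread done")  # with a non-daemon thread, exit still waits ~2s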


Cheers,
Brian


Re: [Python-Dev] Sumo

2010-05-27 Thread Lennart Regebro
OK, I had an idea here:

How about that the people affected by difficulties in getting software
approved got together to put together not a sumo-python, but a
python-extras package? That package could include all the popular
stuff, like SciPy, Numpy, twisted, distribute, buildout, virtualenv,
pip, pytz, PIL, openid, docutils, simplejson, nose, genshi, and tons
of others.

That would be a big download. But here's the trick: You don't *have*
to install them! Just bundle all of it.

If licensing is a problem I guess you'd need to have permission to
relicense them all to the Python license, which would be problematic.
But otherwise having a team of people overseeing and bundling all this
might not be that much work, and you'd avoid the bloat by not
installing all of it. :-)

Or would this not fool the company trolls?

-- 
Lennart Regebro: Python, Zope, Plone, Grok
http://regebro.wordpress.com/
+33 661 58 14 64


Re: [Python-Dev] PEP 3148 ready for pronouncement

2010-05-27 Thread Antoine Pitrou
On Thu, 27 May 2010 14:29:28 +1200
Greg Ewing greg.ew...@canterbury.ac.nz wrote:
 On 27/05/10 01:48, Nick Coghlan wrote:
 
  I would say it is precisely that extra configurability which separates
  the executor pools in the PEP implementation from more flexible general
  purpose pools.
 
 Wouldn't this be better addressed by adding the relevant
 options to the futures pools, rather than adding another
 module that provides almost exactly the same thing with
 different options?

+1.




Re: [Python-Dev] variable name resolution in exec is incorrect

2010-05-27 Thread Colin H
I needed to make a small modification to the workaround - I wasn't
able to delete keys from 'stuff' itself, because functions defined in the
exec()'d code rely on those entries being present at runtime. In practice
the overhead of doing this is quite noticeable if you run your code
like this a lot, and build up a decent-sized context (which I do). It
will obviously depend on the usage scenario, though.

def define_stuff(user_code):
  context = {...}
  stuff = {}
  stuff.update(context)

  exec(user_code, stuff)

  return_stuff = {}
  return_stuff.update(stuff)

  del return_stuff['__builtins__']
  for key in context:
    if key in return_stuff and return_stuff[key] == context[key]:
      del return_stuff[key]

  return return_stuff

On Thu, May 27, 2010 at 2:13 AM, Colin H hawk...@gmail.com wrote:
 Of course :) - I need to pay more attention. Your workaround should do
 the trick. It would make sense if locals could be used for this
 purpose, but the workaround doesn't add so much overhead in most
 situations.  Thanks for the help, much appreciated,

 Colin

 On Thu, May 27, 2010 at 2:05 AM, Guido van Rossum gu...@python.org wrote:
 On Wed, May 26, 2010 at 5:53 PM, Colin H hawk...@gmail.com wrote:
   Thanks for the possible workaround - unfortunately 'stuff' will
 contain a whole stack of things that are not in 'context', and were
 not defined in 'user_code' - things that python embeds - a (very
 small) selection -

 {..., 'NameError': <type 'exceptions.NameError'>, 'BytesWarning':
 <type 'exceptions.BytesWarning'>, 'dict': <type 'dict'>, 'input':
 <function input at 0x10047a9b0>, 'oct': <built-in function oct>,
 'bin': <built-in function bin>, ...}

 It makes sense why this happens of course, but upon return, the
 globals dict is very large, and finding the stuff you defined in your
 user_code amongst it is a very difficult task.  Avoiding this problem
 is the 'locals' use-case for me.  Cheers,

 No, if taken literally that doesn't make sense. Those are builtins. I
 think you are mistaken that each of those (e.g. NameError) is in stuff
 -- they are in stuff['__builtins__'] which represents the built-in
 namespace. You should remove that key from stuff as well.

 --Guido

 Colin

 On Thu, May 27, 2010 at 1:38 AM, Guido van Rossum gu...@python.org wrote:
 This is not easy to fix. The best short-term work-around is probably a
 hack like this:

 def define_stuff(user_code):
  context = {...}
  stuff = {}
  stuff.update(context)
  exec(user_code, stuff)
  for key in context:
    if key in stuff and stuff[key] == context[key]:
      del stuff[key]
  return stuff

 --
 --Guido van Rossum (python.org/~guido)





 --
 --Guido van Rossum (python.org/~guido)




Re: [Python-Dev] variable name resolution in exec is incorrect

2010-05-27 Thread Scott Dial
On 5/27/2010 7:14 AM, Colin H wrote:
 def define_stuff(user_code):
   context = {...}
   stuff = {}
   stuff.update(context)
 
   exec(user_code, stuff)
 
   return_stuff = {}
   return_stuff.update(stuff)
 
   del return_stuff['__builtins__']
   for key in context:
     if key in return_stuff and return_stuff[key] == context[key]:
       del return_stuff[key]
 
   return return_stuff

I'm not sure your application, but I suspect you would benefit from
using an identity check instead of an __eq__ check. The equality check
may be expensive (e.g., a large dictionary), and I don't think it
actually is checking what you want -- if the user_code generates an
__eq__-similar dictionary, wouldn't you still want that? The only reason
I can see to use __eq__ is if you are trying to detect user_code
modifying an object passed in, which is something that wouldn't be
addressed by your original complaint about exec (as in, modifying a
global data structure).

Instead of:
 if key in return_stuff and return_stuff[key] == context[key]:

Use:
 if key in return_stuff and return_stuff[key] is context[key]:

-- 
Scott Dial
sc...@scottdial.com
scod...@cs.indiana.edu


Re: [Python-Dev] variable name resolution in exec is incorrect

2010-05-27 Thread Colin H
Yep fair call - was primarily modifying Guido's example to make the
point about not being able to delete from the globals returned from
exec - cheers,

Colin

On Thu, May 27, 2010 at 2:09 PM, Scott Dial
scott+python-...@scottdial.com wrote:
 On 5/27/2010 7:14 AM, Colin H wrote:
 def define_stuff(user_code):
   context = {...}
   stuff = {}
   stuff.update(context)

   exec(user_code, stuff)

   return_stuff = {}
   return_stuff.update(stuff)

   del return_stuff['__builtins__']
   for key in context:
     if key in return_stuff and return_stuff[key] == context[key]:
       del return_stuff[key]

   return return_stuff

 I'm not sure your application, but I suspect you would benefit from
 using an identity check instead of an __eq__ check. The equality check
 may be expensive (e.g., a large dictionary), and I don't think it
 actually is checking what you want -- if the user_code generates an
 __eq__-similar dictionary, wouldn't you still want that? The only reason
 I can see to use __eq__ is if you are trying to detect user_code
 modifying an object passed in, which is something that wouldn't be
 addressed by your original complaint about exec (as in, modifying a
 global data structure).

 Instead of:
     if key in return_stuff and return_stuff[key] == context[key]:

 Use:
     if key in return_stuff and return_stuff[key] is context[key]:

 --
 Scott Dial
 sc...@scottdial.com
 scod...@cs.indiana.edu



Re: [Python-Dev] variable name resolution in exec is incorrect

2010-05-27 Thread Colin H
Just to put a couple of alternatives on the table that don't break
existing code - not necessarily promoting them, or suggesting they
would be easy to do -

1. modify exec() to take an optional third argument - 'scope_type' -
if it is not supplied (but locals is), then it runs as class namespace
- i.e. identical to existing behaviour. If it is supplied then it will
run as whichever is specified, with function namespace being an
option.  The API already operates along these lines, with the second
argument being optional and implying module namespace if it is not
present.

2. add a new API exec2() which uses function namespace, and deprecate
the old exec() - assuming there is agreement that function namespace
makes more sense than the class namespace, because there are real use
cases, and developers would generally expect this behaviour when
approaching the API for the first time.
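
For concreteness, the two options might look something like this (purely
hypothetical signatures - neither a scope_type argument to exec() nor an
exec2() builtin exists today; this only illustrates the proposals above):

    # Option 1: an extra argument selecting the namespace semantics
    exec(user_code, globals_dict, locals_dict, 'function')

    # Option 2: a new API with function-namespace semantics, leaving the
    # existing exec() untouched
    exec2(user_code, globals_dict, locals_dict)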


Re: [Python-Dev] variable name resolution in exec is incorrect

2010-05-27 Thread Nick Coghlan

On 27/05/10 10:38, Guido van Rossum wrote:

On Wed, May 26, 2010 at 5:12 PM, Nick Coghlan <ncogh...@gmail.com> wrote:

Lexical scoping only works for code that is compiled as part of a single
operation - the separation between the compilation of the individual string
and the code defining that string means that the symbol table analysis
needed for lexical scoping can't cross the boundary.


Hi Nick,

I don't think Colin was asking for such things.


Yes, I realised some time after sending that message that I'd gone off 
on a tangent unrelated to the original question (as a result of earlier 
parts of the discussion I'd been pondering the scoping differences 
between exec with two namespaces and a class definition and ended up 
writing about that instead of the topic Colin originally brought up).


I suspect Thomas is right that the current two namespace exec behaviour 
is mostly a legacy of the standard scoping before nested scopes were added.


To state the problem as succinctly as I can, the basic issue is that a 
code object which includes a function definition that refers to top 
level variables will execute correctly when the same namespace is used 
for both locals and globals (i.e. like module level code) but will fail 
when these namespaces are different (i.e. like code in class definition).


So long as the code being executed doesn't define any functions that 
refer to top level variables in the executed code the two argument form 
is currently perfectly usable, so deprecating it would be an overreaction.


However, attaining the (sensible) behaviour Colin is requesting when 
such top level variable references exist would actually be somewhat 
tricky. Considering Guido's suggestion to treat two argument exec like a 
function rather than a class and generate a closure with full lexical 
scoping a little further, I don't believe this could be done in exec 
itself without breaking code that expects the current behaviour. 
However, something along these lines could probably be managed as a new 
compilation mode for compile() (e.g. compile(code_str, name, 
closure)), which would then allow these code objects to be passed to 
exec to get the desired behaviour.


Compare and contrast:

>>> def f():
...   x = 1
...   def g():
...     print x
...   g()
...
>>> exec f.func_code in globals(), {}
1

>>> source = """\
... x = 1
... def g():
...   print x
... g()
... """
>>> exec source in globals(), {}
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<string>", line 4, in <module>
  File "<string>", line 3, in g
NameError: global name 'x' is not defined

Breaking out dis.dis on these examples is fairly enlightening, as they 
generate *very* different bytecode for the definition of g().


Cheers,
Nick.

--
Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia
---


Re: [Python-Dev] PEP 3148 ready for pronouncement

2010-05-27 Thread Nick Coghlan

On 27/05/10 12:48, Scott Dial wrote:

On 5/26/2010 8:03 PM, Nick Coghlan wrote:

On 27/05/10 02:27, Terry Reedy wrote:

I am suggesting that if we add a package, we do it right, from the
beginning.


This is a reasonable point of view, but I wouldn't want to hold up PEP
3148 over it (call it a +0 for the idea in general, but a -1 for linking
it to the acceptance of PEP 3148).


That sounds backward. How can you justify accepting PEP 3148 into a
concurrent namespace without also accepting the demand for such a
namespace? What is the contingency if this TBD migration PEP is not
accepted, what happens to PEP 3148? After all, there was some complaints
about just calling it futures, without putting it in a concurrent
namespace.


We can accept PEP 3148 by saying that we're happy to add the extra 
namespace level purely for disambiguation purposes, even if we never 
follow through on adding anything else to the package (although I 
consider such an outcome to be highly unlikely).


Any future additions or renames to move things into the concurrent 
package would then be handled as their own PEPs.


Cheers,
Nick.

--
Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia
---


Re: [Python-Dev] PEP 3148 ready for pronouncement

2010-05-27 Thread Nick Coghlan

On 27/05/10 12:29, Greg Ewing wrote:

On 27/05/10 01:48, Nick Coghlan wrote:


I would say it is precisely that extra configurability which separates
the executor pools in the PEP implementation from more flexible general
purpose pools.


Wouldn't this be better addressed by adding the relevant
options to the futures pools, rather than adding another
module that provides almost exactly the same thing with
different options?


It would depend on the details, but my instinct says no (instead, the 
futures pools would be refactored to be task specific tailorings of the 
general purpose pools).


However, this is all very hypothetical at this point and not really 
relevant to the PEP review. We may never even bother creating these more 
general purpose threading pools - it was just an example of the kind of 
thing that may go alongside the futures module.


Cheers,
Nick.

--
Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia
---


Re: [Python-Dev] Sumo

2010-05-27 Thread Stephen J. Turnbull
Paul Moore writes:
  On 27 May 2010 00:11, geremy condra debat...@gmail.com wrote:
   I'm not clear, you seem to be arguing that there's a market for many
   augmented python distributions but not one. Why not just have one
   that includes the best from each domain?
  
  Because that's bloat. You later argue that a web designer wouldn't
  care if his distribution included numpy. OK, maybe, but if my needs
  are simply futures, cx_Oracle and pywin32, I *would* object to
  downloading many megabytes of other stuff just to get those three.
  It's a matter of degree.

So don't do that.  Go to PyPI and get just what you need.

The point of the sumo is that there are people and organizations with
more bandwidth/diskspace than brains (or to be more accurate, they
have enough bandwidth that optimizing bandwidth is a poor use of their
brains).

XEmacs has used a separate distribution for packages for over a
decade, and it's been a very popular feature.  Since originally all
packages were part of Emacs (and still are in GNU Emacs), the package
distribution is a single source hierarchy (like the Python stdlib).
So there are three ways of acquiring packages: check out the sources
and build and install them, download individual pre-built packages,
and download the sumo of all pre-built packages.  The sumos are very
popular.

The reason is simple.  A distribution of all Emacs packages ever made
would still probably be under 1GB.  This just isn't a lot of bandwidth
or disk space when people are schlepping around DVD images, even BD
images.  A Python sumo would probably be much bigger (multiple GB)
than XEmacs's (about 120MB, IIRC), but it would still be a negligible
amount of resources *for some people/organizations*.

And I have to support the organizational constraints argument here.
Several people have told me that (strangely enough, given its rather
random nature, both in what is provided and the quality) getting the
sumo certified by their organization was less trouble than getting
XEmacs itself certified, and neither was all that much more effort
than getting a single package certified.

Maintaining a sumo would be a significant effort.  The XEmacs rule is
that we allow someone to add a package to the distro if they promise
to maintain it for a couple years, or if we think it matters enough
that we'll accept the burden.  We're permissive enough that there are
at least 4 different MUAs in the distribution, several IRC clients,
two TeX modes, etc, etc.  Still, just maintaining contact with
external maintainers (who do go AWOL regularly), and dealing with
issues where somebody wants to upgrade (eg) vcard which is provided
by gnus but doesn't want to deal with gnus, etc takes time,
thought, and sometimes improvement in the distribution
infrastructure.

It's not clear to me that Python users would benefit that much over
and above the current stdlib, which provides a huge amount of
functionality, of somewhat uneven but generally high quality.  But I
certainly think significant additional benefit would be gained; the
question is: is it worth the effort?  It's worth discussing.

  I don't believe that there's evidence that aggregation (except in the
  context of specialist areas) does provide additional utility.

We'll just have to agree to disagree, then.  Plenty of evidence has
been provided; it just doesn't happen to apply to you.  Fine, but I
wish you'd make the "to me" part explicit, because I know that it does
apply to others, many of them, from their personal testimony, both
related to XEmacs and to Python.

  PS One thing I haven't seen made clear - in my view, the hypothetical
  sumo is a single aggregated distribution of Python
  modules/packages/extensions. It would NOT include core Python and the
  stdlib (in contrast to Enthought or ActivePython). I get the
  impression that other people may be thinking in terms of a full Python
  distribution, like those 2 cases. We probably ought to be clear which
  we're talking about.

On the XEmacs model, it would not include core Python, but it would
include much of the stdlib.  The reason is that the stdlib makes
commitments to compatibility that the sumo would not need to.  So the
sumo might include (a) recent, relatively experimental versions of
stdlib packages (yes, this kind of duplication is a pain, but (some)
users do want it) and (b) packages which are formally separate but
duplicate functionality in the stdlib (eg, ipaddr and netaddr) -- in
some cases the sumo distro would want to make adjustments so they can
co-exist.

I wouldn't recommend building a production system on top of a sumo in
any case.  But (given resources to maintain multiple Python development
installations) it is a good environment for experimentation, because
not only batteries but screwdrivers and duct tape are supplied.


Re: [Python-Dev] PEP 3148 ready for pronouncement

2010-05-27 Thread Nick Coghlan

On 27/05/10 18:13, Brian Quinlan wrote:


On 27 May 2010, at 17:53, Floris Bruynooghe wrote:

I'm glad I'm not alone in preferring (1) though.


Keep in mind that this library magic is consistent with the library
magic that the threading module does - unless the user sets
Thread.daemon to True, the interpreter does *not* exit until the thread
does.


Along those lines, an Executor.daemon option may be a good idea. That 
way the default behaviour is to wait until things are done (just like 
threading itself), but it is easy for someone to turn that behaviour off 
for a specific executor.
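
Purely as an illustration of that suggestion (the daemon attribute below is
hypothetical - it exists neither in the PEP nor in the reference
implementation):

    executor = ThreadPoolExecutor(max_workers=4)
    executor.daemon = True   # hypothetical: don't block interpreter exit
    executor.submit(background_task)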


Cheers,
Nick.

--
Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia
---


Re: [Python-Dev] PEP 3148 ready for pronouncement

2010-05-27 Thread Nick Coghlan

On 27/05/10 10:29, Antoine Pitrou wrote:

On Thu, 27 May 2010 10:19:50 +1000
Nick Coghlan <ncogh...@gmail.com> wrote:


futures.ThreadPoolExecutor would likely be refactored to inherit from
the mooted pool.ThreadPool.


There still doesn't seem to be reason to have two different thread pool
APIs, though. Shouldn't there be one obvious way to do it?


Executors and thread pools are not the same thing.

I might create a thread pool for *anything*. An executor will always 
have a specific execution model associated with it (whether it be called 
futures, as in this case, or runnables or something else).


This confusion is making me think that dropping the Pool from the 
names might even be beneficial (since, to my mind, it currently 
emphasises a largely irrelevant implementation detail).


Cheers,
Nick.

--
Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia
---


Re: [Python-Dev] variable name resolution in exec is incorrect

2010-05-27 Thread Nick Coghlan

On 27/05/10 13:13, Greg Ewing wrote:

The way that functions get access to names in enclosing
local scopes is by having them passed in as cells, but that
mechanism is only available for optimised local namespaces,
not ones implemented as dicts.


I believe exec already includes the tapdancing needed to make that work.

As Guido pointed out, it's the ability to generate closures directly 
from a source string that is currently missing.


Cheers,
Nick.

--
Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia
---


Re: [Python-Dev] Sumo

2010-05-27 Thread Stephen J. Turnbull
Lennart Regebro writes:

  One  worry with an official sumo distribution is that it could become
  an excuse for *not* putting something in the stdlib.
  Otherwise it's an interesting idea.

On the contrary, that is the meat of why it's an interesting idea.

I really don't think the proponents of ipaddr and futures (to take two
recent PEPs) would have been willing to stop with a hypothetical sumo.
Both of those packages were designed with general use in mind.
Substantial effort was put into making them TOOWTDI-able.  Partly
that's pride (my stuff is good enough for the stdlib), and partly
there's a genuine need for it to be there (for your customers or just
to pay back the community).  Of course there was a lot of criticism of
both that they don't really come up to that standard, but even
opponents would credit the proponents for good intentions and making
the necessary effort, I think.  And it's the stdlib that (in a certain
sense) puts the OO in TOOWTDI.

On the other hand, some ideas deserve widespread exposure, but they
need real experience because the appropriate requirements and specs
are unclear.  It would be premature to put in the effort to make them
TOOWTDI.  However, to get the momentum to become BCP, and thus an
obvious candidate for stdlib inclusion, it's helpful to be *already*
available on *typical* installations.  PyPI is great, but it's not
quite there; it's not as discoverable and accessible as simply putting
"import stuff" based on some snippet you found on the web.  And the
stdlib itself can't be the means, it's the end.

At present, such ideas face the alternative "stdlib or die".  The sumo
would give them a place to be.



Re: [Python-Dev] Sumo

2010-05-27 Thread Michael Foord

On 27/05/2010 16:56, Stephen J. Turnbull wrote:

Paul Moore writes:
On 27 May 2010 00:11, geremy condra <debat...@gmail.com> wrote:
  I'm not clear, you seem to be arguing that there's a market for many
  augmented python distributions but not one. Why not just have one
  that includes the best from each domain?
  
Because that's bloat. You later argue that a web designer wouldn't
care if his distribution included numpy. OK, maybe, but if my needs
are simply futures, cx_Oracle and pywin32, I *would* object to
downloading many megabytes of other stuff just to get those three.
It's a matter of degree.

So don't do that.  Go to PyPI and get just what you need.

The point of the sumo is that there are people and organizations with
more bandwidth/diskspace than brains (or to be more accurate, they
have enough bandwidth that optimizing bandwidth is a poor use of their
brains).
   
To my mind one of the most important benefits of a sumo style
distribution is not just that it easily provides a whole bunch of useful
modules - but that it *highlights* which modules are the community-blessed
best of breed. At the moment, if a new user wants to work out how to
achieve a particular task (working with images, for example), they have to
google around and try to work out which module is the right one to use.


For some problem domains there are a host of modules on PyPI, many of
which are unmaintained, immature or simply rubbish. A standardised
solution makes choosing between them for common problems *dramatically*
easier, and may save people much heartache and frustration. For that to
work, though, it needs to be well curated and genuinely have the
substantial backing of the Python development community.


All the best,

Michael

--
http://www.ironpythoninaction.com/
http://www.voidspace.org.uk/blog





Re: [Python-Dev] Sumo

2010-05-27 Thread Stephen J. Turnbull
Lennart Regebro writes:

  If licensing is a problem I guess you'd need to have permission to
  relicense them all to the Python license,

Licensing compatibility is only a problem for copyleft, but most
copyleft licenses have "mere aggregation is not derivation" clauses.

Corporate concern about *knowing* what the license is, is a problem.
The XEmacs experience I've referred to elsewhere doesn't apply because
all our stuff is GPL, and therefore all our stuff has to be GPL. :-(
It's not obvious to me what the resolution is, although lots of
distributions now have some way to find out what licenses are.  GCC
(and soon GNU Emacs) even have a way to check GPL-compatibility at
runtime (inspired by the Linux kernel feature, maybe?)

Perhaps the sumo infrastructure could provide a license-ok-or-fatal
feature.  Eg, the application would do something like

sumo_ok_licenses = ['gplv2','bsd','microsoft_eula']

and the sumo version of the package's __init__.py would do

sumo_check_my_license('artistic')

and raise LicenseError if it's not in sumo_ok_licenses.  In theory it
might be able to do more complex stuff like keep track of declared
licenses and barf if they're incompatible.
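
A minimal sketch of how that check might look (sumo_ok_licenses,
sumo_check_my_license and LicenseError are all hypothetical names from the
suggestion above, not an existing API):

    class LicenseError(Exception):
        pass

    # declared once by the application
    sumo_ok_licenses = ['gplv2', 'bsd', 'microsoft_eula']

    def sumo_check_my_license(license_name):
        # called by each sumo-packaged __init__.py at import time
        if license_name not in sumo_ok_licenses:
            raise LicenseError("license %r not accepted by this application"
                               % license_name)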

This scheme probably doesn't save lawyer time.  The lawyers would
still have to go over the actual packages to make sure the licenses
are what they say etc. before going into distribution.  Its selling
point is that the developers would be warned of problems that need
corporate legal's attention early in 90% of the cases, thus not
wasting developer time on using packages that were non-starters
because of license issues.

  which would be problematic.  But otherwise having a team of people
  overseeing and bundling all this might not be that much work, and
  you'd avoid the bloat by not installing all of it. :-)

As I've argued elsewhere, bloat is good, for some purposes.

  Or would this not fool the company trolls?

It will satisfy some, and not others, in my experience, described
elsewhere.



Re: [Python-Dev] PEP 3148 ready for pronouncement

2010-05-27 Thread Nick Coghlan

On 28/05/10 02:16, Antoine Pitrou wrote:

On Fri, 28 May 2010 02:05:14 +1000
Nick Coghlan <ncogh...@gmail.com> wrote:


Executors and thread pools are not the same thing.

I might create a thread pool for *anything*. An executor will always
have a specific execution model associated with it (whether it be called
futures, as in this case, or runnables or something else).


I'm sorry, but what is the specific execution model you are talking
about, and how is it different from what you usually do with a thread
pool?  Why would you do something other with a thread pool than running
tasks and (optionally) collecting their results?


Both the execution and communications models may be different. The 
components may be long-lived state machines, they may be active objects, 
they may communicate by message passing or by manipulating shared state, 
who knows. Executors are designed around a model of "go do this and let
me know when you're done". A general purpose pool needs to support other
execution models, and hence will look different (and is harder to design 
and write, since it needs to be more flexible).
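
For contrast, the futures execution model looks roughly like this (a minimal
sketch using the names as they ended up in today's concurrent.futures, and
Python 3 syntax):

    from concurrent.futures import ThreadPoolExecutor

    def task(n):
        return n * n

    # "go do this and let me know when you're done"
    with ThreadPoolExecutor(max_workers=2) as executor:
        future = executor.submit(task, 7)
        future.add_done_callback(lambda f: print("done:", f.result()))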


Cheers,
Nick.

--
Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia
---


Re: [Python-Dev] Sumo

2010-05-27 Thread Stephen J. Turnbull
Michael Foord writes:

  To my mind one of the most important benefits of a sumo style 
  distribution is not just that it easily provides a whole bunch of useful 
  modules - but that it *highlights* which modules are the community 
  blessed best of breed.

That has several problems.

(1) There is a lot of overlap with the mission of the stdlib, and I
think confusion over roles would be quite costly.

(2) As the stdlib demonstrates, picking winners is expensive.  I
greatly doubt that running *two* such processes is worthwhile.

(3) Very often there is no best of breed.


Re: [Python-Dev] Sumo

2010-05-27 Thread Paul Moore
On 27 May 2010 16:56, Stephen J. Turnbull step...@xemacs.org wrote:

 We'll just have to agree to disagree, then.  Plenty of evidence has
 been provided; it just doesn't happen to apply to you.  Fine, but I
 wish you'd make the "to me" part explicit, because I know that it does
 apply to others, many of them, from their personal testimony, both
 related to XEmacs and to Python.

Sorry, you're right. There's a very strong "to me" in all of this, but
I more or less assumed it was obvious, as I was originally responding
to comments implying that a sumo distribution was a solution to a
problem I stated that I have. In trying to trim things, and keep
things concise, I completely lost the context. My apologies.

 I wouldn't recommend building a production system on top of a sumo in
 any case.  But (given resources to maintain multiple Python development
 installations) it is a good environment for experimentation, because
 not only batteries but screwdrivers and duct tape are supplied.

That's an interesting perspective that I hadn't seen mentioned before.
For experimentation, I'd *love* a sumo distribution as you describe.
But I thought this whole discussion focussed around building
production systems. For that, the stdlib's quality guarantees are a
major benefit, and the costs of locating and validating appropriately
high-quality external packages are (sometimes prohibitively) high.

But I think I'm getting to the point where I'm adding more confusion
than information, so I'll bow out of this discussion at this point.

Paul.


Re: [Python-Dev] variable name resolution in exec is incorrect

2010-05-27 Thread Reid Kleckner
On Thu, May 27, 2010 at 11:42 AM, Nick Coghlan ncogh...@gmail.com wrote:
 However, attaining the (sensible) behaviour Colin is requesting when such
 top level variable references exist would actually be somewhat tricky.
 Considering Guido's suggestion to treat two argument exec like a function
 rather than a class and generate a closure with full lexical scoping a
 little further, I don't believe this could be done in exec itself without
 breaking code that expects the current behaviour.

Just to give a concrete example, here is code that would break if exec
were to execute code in a function scope instead of a class scope:

exec """
def len(xs):
    return -1
def foo():
    return len([])
print foo()
""" in globals(), {}

Currently, the call to 'len' inside 'foo' skips the outer scope
(because it's a class scope) and goes straight to globals and
builtins.  If it were switched to a local scope, a cell would be
created for the broken definition of 'len', and the call would resolve
to it.

Honestly, to me, the fact that the above code ever worked (ie prints
0, not -1) seems like a bug, so I wouldn't worry about backwards
compatibility.

Reid


Re: [Python-Dev] PEP 3148 ready for pronouncement

2010-05-27 Thread Greg Ewing

Brian Quinlan wrote:

I think that the Executor suffix is a good indicator of the  interface 
being provided.


It's not usually considered necessary for the name of a
type to indicate its interface. We don't have 'listsequence'
and 'dictmapping' for example.

I think what bothers me most about these names is their
longwindedness. Two parts to a name is okay, but three or
more starts to sound pedantic. And for me, Pool is a
more important piece of information than Executor.
The fact that it manages a pool is the main reason I'd
use such a module rather than just spawning a thread myself
for each task.

--
Greg


Re: [Python-Dev] variable name resolution in exec is incorrect

2010-05-27 Thread Colin H
This option sounds very promising - seems right to do it at the
compile stage - i.e. compile(code_str, name, closure) as you have
suggested.  If there were any argument against, it would be that the
most obvious behaviour (function namespace) is the hardest to induce,
but the value in knowing you're not breaking anything is pretty high.

Cheers,
Colin

On Thu, May 27, 2010 at 4:42 PM, Nick Coghlan ncogh...@gmail.com wrote:
 On 27/05/10 10:38, Guido van Rossum wrote:

 On Wed, May 26, 2010 at 5:12 PM, Nick Coghlan <ncogh...@gmail.com> wrote:

 Lexical scoping only works for code that is compiled as part of a single
 operation - the separation between the compilation of the individual
 string
 and the code defining that string means that the symbol table analysis
 needed for lexical scoping can't cross the boundary.

 Hi Nick,

 I don't think Colin was asking for such things.

 Yes, I realised some time after sending that message that I'd gone off on a
 tangent unrelated to the original question (as a result of earlier parts of
 the discussion I'd been pondering the scoping differences between exec with
 two namespaces and a class definition and ended up writing about that
 instead of the topic Colin originally brought up).

 I suspect Thomas is right that the current two namespace exec behaviour is
 mostly a legacy of the standard scoping before nested scopes were added.

 To state the problem as succinctly as I can, the basic issue is that a code
 object which includes a function definition that refers to top level
 variables will execute correctly when the same namespace is used for both
 locals and globals (i.e. like module level code) but will fail when these
 namespaces are different (i.e. like code in class definition).

 So long as the code being executed doesn't define any functions that refer
 to top level variables in the executed code the two argument form is
 currently perfectly usable, so deprecating it would be an overreaction.

 However, attaining the (sensible) behaviour Colin is requesting when such
 top level variable references exist would actually be somewhat tricky.
 Considering Guido's suggestion to treat two argument exec like a function
 rather than a class and generate a closure with full lexical scoping a
 little further, I don't believe this could be done in exec itself without
 breaking code that expects the current behaviour. However, something along
 these lines could probably be managed as a new compilation mode for
 compile() (e.g. compile(code_str, name, closure)), which would then allow
 these code objects to be passed to exec to get the desired behaviour.

 Compare and contrast:

 >>> def f():
 ...   x = 1
 ...   def g():
 ...     print x
 ...   g()
 ...
 >>> exec f.func_code in globals(), {}
 1

 >>> source = """\
 ... x = 1
 ... def g():
 ...   print x
 ... g()
 ... """
 >>> exec source in globals(), {}
 Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<string>", line 4, in <module>
  File "<string>", line 3, in g
 NameError: global name 'x' is not defined

 Breaking out dis.dis on these examples is fairly enlightening, as they
 generate *very* different bytecode for the definition of g().

 Cheers,
 Nick.

 --
 Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia
 ---



Re: [Python-Dev] variable name resolution in exec is incorrect

2010-05-27 Thread Colin H
By "hardest to induce" I mean that the default compile + exec(code_str, {}, {})
would still be class namespace, but it's pretty insignificant.

On Fri, May 28, 2010 at 12:32 AM, Colin H hawk...@gmail.com wrote:
 This option sounds very promising - seems right to do it at the
 compile stage - i.e. compile(code_str, name, closure) as you have
 suggested.  If there were any argument against, it would be that the
 most obvious behaviour (function namespace) is the hardest to induce,
 but the value in knowing you're not breaking anything is pretty high.

 Cheers,
 Colin

 On Thu, May 27, 2010 at 4:42 PM, Nick Coghlan ncogh...@gmail.com wrote:
 On 27/05/10 10:38, Guido van Rossum wrote:

 On Wed, May 26, 2010 at 5:12 PM, Nick Coghlan <ncogh...@gmail.com> wrote:

 Lexical scoping only works for code that is compiled as part of a single
 operation - the separation between the compilation of the individual
 string
 and the code defining that string means that the symbol table analysis
 needed for lexical scoping can't cross the boundary.

 Hi Nick,

 I don't think Colin was asking for such things.

 Yes, I realised some time after sending that message that I'd gone off on a
 tangent unrelated to the original question (as a result of earlier parts of
 the discussion I'd been pondering the scoping differences between exec with
 two namespaces and a class definition and ended up writing about that
 instead of the topic Colin originally brought up).

 I suspect Thomas is right that the current two namespace exec behaviour is
 mostly a legacy of the standard scoping before nested scopes were added.

 To state the problem as succinctly as I can, the basic issue is that a code
 object which includes a function definition that refers to top level
 variables will execute correctly when the same namespace is used for both
 locals and globals (i.e. like module level code) but will fail when these
 namespaces are different (i.e. like code in class definition).

 So long as the code being executed doesn't define any functions that refer
 to top level variables in the executed code the two argument form is
 currently perfectly usable, so deprecating it would be an overreaction.

 However, attaining the (sensible) behaviour Colin is requesting when such
 top level variable references exist would actually be somewhat tricky.
 Considering Guido's suggestion to treat two argument exec like a function
 rather than a class and generate a closure with full lexical scoping a
 little further, I don't believe this could be done in exec itself without
 breaking code that expects the current behaviour. However, something along
 these lines could probably be managed as a new compilation mode for
 compile() (e.g. compile(code_str, name, closure)), which would then allow
 these code objects to be passed to exec to get the desired behaviour.

 Compare and contrast:

 >>> def f():
 ...   x = 1
 ...   def g():
 ...     print x
 ...   g()
 ...
 >>> exec f.func_code in globals(), {}
 1

 >>> source = """\
 ... x = 1
 ... def g():
 ...   print x
 ... g()
 ... """
 >>> exec source in globals(), {}
 Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<string>", line 4, in <module>
  File "<string>", line 3, in g
 NameError: global name 'x' is not defined

 Breaking out dis.dis on these examples is fairly enlightening, as they
 generate *very* different bytecode for the definition of g().

 Cheers,
 Nick.

 --
 Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia
 ---




Re: [Python-Dev] PEP 3148 ready for pronouncement

2010-05-27 Thread Brian Quinlan


On 28 May 2010, at 09:18, Greg Ewing wrote:

Brian Quinlan wrote:

I think that the Executor suffix is a good indicator of the   
interface being provided.


It's not usually considered necessary for the name of a
type to indicate its interface. We don't have 'listsequence'
and 'dictmapping' for example.

I think what bothers me most about these names is their
longwindedness. Two parts to a name is okay, but three or
more starts to sound pedantic. And for me, Pool is a
more important piece of information than Executor.
The fact that it manages a pool is the main reason I'd
use such a module rather than just spawning a thread myself
for each task.



Actually, an executor implementation that created a new thread per  
task would still be useful - it would save you the hassle of  
developing a mechanism to wait for the thread to finish and to collect  
the results. We actually have such an implementation at Google and it  
is quite popular.
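
A rough sketch of what such a thread-per-task executor could look like
(written against the Future/Executor API as it ended up in today's
concurrent.futures module, not the Google-internal implementation Brian
mentions):

    import threading
    from concurrent.futures import Executor, Future

    class ThreadPerTaskExecutor(Executor):
        """Spawns a new thread for every submitted call (no pool)."""

        def submit(self, fn, *args, **kwargs):
            future = Future()

            def run():
                if not future.set_running_or_notify_cancel():
                    return  # cancelled before it started
                try:
                    future.set_result(fn(*args, **kwargs))
                except BaseException as exc:
                    future.set_exception(exc)

            threading.Thread(target=run).start()
            return future

    # Usage is identical to the pool-based executors:
    #   f = ThreadPerTaskExecutor().submit(some_callable, arg)
    #   result = f.result()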


Cheers,
Brian


Re: [Python-Dev] PEP 3148 ready for pronouncement

2010-05-27 Thread Reid Kleckner
On Thu, May 27, 2010 at 4:13 AM, Brian Quinlan br...@sweetapp.com wrote:
 Keep in mind that this library magic is consistent with the library magic
 that the threading module does - unless the user sets Thread.daemon to True,
 the interpreter does *not* exit until the thread does.

Is there a compelling reason to make the threads daemon threads?  If not,
perhaps they can just be normal threads, and you can rely on the
threading module to wait for them to finish.

Unrelatedly, I feel like this behavior of waiting for the thread to
terminate usually manifests as deadlocks when the main thread throws
an uncaught exception.  The application then no longer responds
properly to interrupts, since it's stuck waiting on a semaphore.  I
guess it's better than the alternative of random crashes when daemon
threads wake up during interpreter shutdown, though.

Reid


Re: [Python-Dev] PEP 3148 ready for pronouncement

2010-05-27 Thread Brian Quinlan


On May 28, 2010, at 11:57 AM, Reid Kleckner wrote:

On Thu, May 27, 2010 at 4:13 AM, Brian Quinlan br...@sweetapp.com wrote:
Keep in mind that this library magic is consistent with the library magic
that the threading module does - unless the user sets Thread.daemon to True,
the interpreter does *not* exit until the thread does.


Is there a compelling reason to make the threads daemon threads?  If not,
perhaps they can just be normal threads, and you can rely on the
threading module to wait for them to finish.


Did you read my explanation of the reasoning behind my approach?

Cheers,
Brian


Unrelatedly, I feel like this behavior of waiting for the thread to
terminate usually manifests as deadlocks when the main thread throws
an uncaught exception.  The application then no longer responds
properly to interrupts, since it's stuck waiting on a semaphore.  I
guess it's better than the alternative of random crashes when daemon
threads wake up during interpreter shutdown, though.

Reid




Re: [Python-Dev] PEP 3148 ready for pronouncement

2010-05-27 Thread Jeffrey Yasskin
On Thu, May 27, 2010 at 8:06 PM, Brian Quinlan br...@sweetapp.com wrote:

 On May 28, 2010, at 11:57 AM, Reid Kleckner wrote:

 On Thu, May 27, 2010 at 4:13 AM, Brian Quinlan br...@sweetapp.com wrote:

 Keep in mind that this library magic is consistent with the library magic
 that the threading module does - unless the user sets Thread.daemon to
 True,
 the interpreter does *not* exit until the thread does.

  Is there a compelling reason to make the threads daemon threads?  If not,
 perhaps they can just be normal threads, and you can rely on the
 threading module to wait for them to finish.

 Did you read my explanation of the reasoning behind my approach?

You should try to link to explanations. These have been long threads,
and it's often hard to find the previous message where a subject was
addressed.


Re: [Python-Dev] PEP 3148 ready for pronouncement

2010-05-27 Thread Scott Dial
On 5/27/2010 4:13 AM, Brian Quinlan wrote:
 On 27 May 2010, at 17:53, Floris Bruynooghe wrote:
 On Thu, May 27, 2010 at 01:46:07PM +1200, Greg Ewing wrote:
 On 27/05/10 00:31, Brian Quinlan wrote:
 You have two semantic choices here:
 1. let the interpreter exit with the future still running
 2. wait until the future finishes and then exit

 I'd go for (1). I don't think it's unreasonable to
 expect a program that wants all its tasks to finish
 to explicitly wait for that to happen.

 I'd go for (1) as well; it's no more than reasonable that if you want
 a result you wait for it.  And I dislike libraries doing magic you
 can't see, I'd prefer if I explicitly had to shut a pool down.

 I'm glad I'm not alone in preferring (1) though.
 
 Keep in mind that this library magic is consistent with the library
 magic that the threading module does - unless the user sets
 Thread.daemon to True, the interpreter does *not* exit until the thread
 does.

Given your rationale, I don't understand from the PEP:

 shutdown(wait=True)
 
 Signal the executor that it should free any resources that it is
 using when the currently pending futures are done executing. Calls to
 Executor.submit and Executor.map and made after shutdown will raise
 RuntimeError.
 
 If wait is True then the executor will not return until all the
 pending futures are done executing and the resources associated with
 the executor have been freed.

Can you tell me what is the expected execution time of the following:

 executor = ThreadPoolExecutor(max_workers=1)
 executor.submit(lambda: time.sleep(1000))
 executor.shutdown(wait=False)
 sys.exit(0)

I believe it's 1000 seconds, which seems to defy my request of
shutdown(wait=False) because secretly the Python exit is going to wait
anyways. ISTM, it is much easier to get behavior #2 if you have behavior
#1, and it would also seem rather trivial to make ThreadPoolExecutor
take an optional argument specifying which behavior you want.

Your reference implementation does not actually implement the
specification given in the PEP, so it's quite impossible to check this
myself. There is no wait=True option for shutdown() in the reference
implementation, so I can only guess what that implementation might look
like.

-- 
Scott Dial
sc...@scottdial.com
scod...@cs.indiana.edu


Re: [Python-Dev] PEP 3148 ready for pronouncement

2010-05-27 Thread Nick Coghlan

On 28/05/10 09:52, Greg Ewing wrote:

Nick Coghlan wrote:


We can accept PEP 3148 by saying that we're happy to add the extra
namespace level purely for disambiguation purposes,


If that's the only rationale for the namespace, it makes it
sound like a kludge to work around a poor choice of name.


It's the right name though (it really is a futures implementation - I 
don't know what else you would even consider calling it). The problem is 
that the same word is used to mean different things in other programming 
domains (most obviously finance).


Resolving that kind of ambiguity is an *excellent* use of a package 
namespace - you remove the ambiguity without imposing any significant 
long term cognitive overhead.


Cheers,
Nick.

--
Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia
---